SaifPunjwani committed
Commit 5ae8d68 (verified)
Parent(s): 0ba7a29

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50):
  1. transcript/allocentric_-ghLPVYtlYE.txt +543 -0
  2. transcript/allocentric_-h-cz7yY-G8.txt +796 -0
  3. transcript/allocentric_10kNbp1PObo.txt +85 -0
  4. transcript/allocentric_1K3qsFYm0iM.txt +232 -0
  5. transcript/allocentric_261LDpwV_TY.txt +8 -0
  6. transcript/allocentric_3c2MJ71DEWg.txt +416 -0
  7. transcript/allocentric_5TjEcK0f5jY.txt +36 -0
  8. transcript/allocentric_5mIGIS_OblE.txt +78 -0
  9. transcript/allocentric_94HekWSIqLM.txt +137 -0
  10. transcript/allocentric_BUObbn7i_qo.txt +48 -0
  11. transcript/allocentric_D8FMbC7RoIg.txt +353 -0
  12. transcript/allocentric_EAikQVqvqnY.txt +361 -0
  13. transcript/allocentric_EmFQUDV67xQ.txt +456 -0
  14. transcript/allocentric_EoEVXS8K5w4.txt +133 -0
  15. transcript/allocentric_FTZHpKQbqbQ.txt +567 -0
  16. transcript/allocentric_FqBzVmlXQMA.txt +0 -0
  17. transcript/allocentric_GddQd53mgEk.txt +497 -0
  18. transcript/allocentric_Ikg0gmekByE.txt +987 -0
  19. transcript/allocentric_JFj8kWm_N-Y.txt +115 -0
  20. transcript/allocentric_KfHUtWHQ8vM.txt +487 -0
  21. transcript/allocentric_LtGY85JXTUM.txt +536 -0
  22. transcript/allocentric_OOXcH9dJsWA.txt +465 -0
  23. transcript/allocentric_QOkrS1v7Ywk.txt +97 -0
  24. transcript/allocentric_SKhsavlvuao.txt +28 -0
  25. transcript/allocentric_TGwnvyUlc18.txt +95 -0
  26. transcript/allocentric_Y89Cd_0wXik.txt +26 -0
  27. transcript/allocentric_ZkZjfqo6h3I.txt +4 -0
  28. transcript/allocentric_Zq0a__Ltr3Q.txt +413 -0
  29. transcript/allocentric_aiWpeqABPw8.txt +376 -0
  30. transcript/allocentric_bvMm8gfFbZ8.txt +81 -0
  31. transcript/allocentric_d3bfdfuruRg.txt +12 -0
  32. transcript/allocentric_dhD_mNoStPs.txt +0 -0
  33. transcript/allocentric_gmc4wEL2aPQ.txt +474 -0
  34. transcript/allocentric_hDy_SaQng68.txt +19 -0
  35. transcript/allocentric_iTB6WoxmJ7A.txt +1042 -0
  36. transcript/allocentric_ihKXQbYeV5k.txt +65 -0
  37. transcript/allocentric_ixW35N_AXSA.txt +587 -0
  38. transcript/allocentric_l-KxSSf4gyM.txt +955 -0
  39. transcript/allocentric_mFJK-t4s-sE.txt +9 -0
  40. transcript/allocentric_mQc1sNumTp8.txt +575 -0
  41. transcript/allocentric_u6v-LAy6Whk.txt +502 -0
  42. transcript/allocentric_uMpEHwPgHzk.txt +386 -0
  43. transcript/allocentric_vRWVgcPVaK4.txt +696 -0
  44. transcript/allocentric_vkqjB6ofThA.txt +21 -0
  45. transcript/allocentric_wOhLMEKLTKE.txt +88 -0
  46. transcript/allocentric_yVT7dO_Tf4E.txt +1467 -0
  47. transcript/challenge_0qj-w4nYvdk.txt +34 -0
  48. transcript/challenge_2RWZ-lPgMoM.txt +2 -0
  49. transcript/challenge_3_dAkDsBQyk.txt +31 -0
  50. transcript/challenge_8yGhNwDMT-g.txt +42 -0
transcript/allocentric_-ghLPVYtlYE.txt ADDED
@@ -0,0 +1,543 @@
1
+ [0.000 --> 15.000] Hi, welcome. I think we're ready to begin. Sorry for the technical delay. Welcome. It's nice to see you, even if slightly distantly.
2
+ [15.000 --> 29.000] Hello. I'm excited to be here. We have four talks today on four dimensions of diversity and spatial cognition, culture, context, age and ability.
3
+ [29.000 --> 44.000] And I won't spend too much time introducing. I'm just going to hand the floor over to our first speaker, Tyler Marghetis.
4
+ [44.000 --> 56.000] Hi, welcome. I'm Tyler Marghetis, UC Merced. And this is actually a representation of my talk created by DALL-E on the basis of the title; not the worst job.
5
+ [56.000 --> 68.000] So I'm going to start with one of the founders of cognitive science. Not Noam Chomsky: Immanuel Kant, sort of the OG nativist.
6
+ [68.000 --> 79.000] And he had some really strong opinions about the nature of spatial cognition. So he writes: we know what is outside us only so far as it stands in relation to ourselves.
7
+ [79.000 --> 84.000] So he's sort of arguing for a kind of primacy of the egocentric.
8
+ [84.000 --> 91.000] We find in the relationship to our body the first ground from which to derive the concept of regions in space.
9
+ [91.000 --> 98.000] The vertical plane that divides the body in two outwardly similar parts supplies the ground for distinction between right and left.
10
+ [98.000 --> 106.000] So you have this argument. He doesn't cash it out in terms of cognition, but you can sort of imagine a modern-day Kantian
11
+ [106.000 --> 117.000] supposing that our egocentric notion of space, left, right, front, back, is going to be the foundation of all further elaborations of spatial cognition. For a long time,
12
+ [117.000 --> 121.000] this was assumed to be true.
13
+ [121.000 --> 132.000] And in the last few decades, there's been a lot of evidence that this isn't the case. So I'm just going to start off by quickly reviewing some evidence that this isn't the case, drawing from my own work, out of selfishness
14
+ [132.000 --> 134.000] and self-interest.
15
+ [134.000 --> 139.000] So here's a project that we did down in Oaxaca, Mexico.
16
+ [139.000 --> 145.000] We were in Juchitán, where folks are bilingual in Spanish and Isthmus Zapotec.
17
+ [145.000 --> 161.000] And we were interested in the frame of reference that they would use spontaneously to reason about small-scale space, sometimes referred to as tabletop space. Here's a task we gave them.
18
+ [161.000 --> 163.000] So they watched these
19
+ [163.000 --> 168.000] little motion events where things toppled over.
20
+ [168.000 --> 173.000] There's sort of various configurations, different shapes, different sizes.
21
+ [173.000 --> 183.000] And then we had them move to the other side of the screen, rotate 90 degrees, and we asked them: what did you see? Tell us what you saw.
22
+ [183.000 --> 203.000] And this is what people said. So here's an example of a participant who saw something that was running from left to right from his perspective. So notice he's now turned 90 degrees.
23
+ [203.000 --> 218.000] Speaking in Spanish, audio is not playing, but notice this really clear rightward gesture. So he's reproducing the vector of motion, right, this axis of motion relative to his own body. He's maintained the motion, even though he's rotated.
24
+ [218.000 --> 220.000] So he's doing it egocentrically.
25
+ [220.000 --> 228.000] He seems to have encoded, and is now reproducing, and is reasoning about that previous motion event in an egocentric frame of reference.
26
+ [228.000 --> 237.000] Here's someone else in the same community who saw the exact same motion event in the exact same direction.
27
+ [237.000 --> 249.000] Very, very different motion, right. So the gesture stroke is away from the body, but sort of think about this. If the thing was moving left to right from her perspective, it was actually moving towards the screen behind her.
28
+ [249.000 --> 269.000] So she's reproduced the exact right cardinal motion: even though she's rotated her body, she hasn't maintained the orientation relative to herself; she's maintained the orientation relative to the world. So this is an allocentric, you know, other-than-self, frame of reference that she's using to keep track of that spatial motion.
29
+ [270.000 --> 285.000] So a quick little summary here. They saw this thing; they had to remember it and reproduce it. And we saw that people were using two very different strategies, two very different frames of reference, to encode and reproduce those motion events, those spatial relations.
30
+ [285.000 --> 291.000] Yeah, so I'm going to refer to these as egocentric and allocentric.
31
+ [291.000 --> 303.000] This variation is, I think, really important from the perspective of cognitive science, because space is also used as a foundation for other conceptual domains, like time and number.
32
+ [303.000 --> 327.000] So one really nice example, coming from Lera Boroditsky and Alice Gaby, involved US Americans but also Australian Aboriginal folks in Pormpuraaw, which is an Aboriginal town, who were asked to order a series of discs that showed a temporal progression. So here it's a picture of a man getting older; I think this is actually Lera Boroditsky's grandfather.
33
+ [327.000 --> 339.000] And the question then was what spatial arrangement would people use to sort of naturally depict the flow of time using space.
34
+ [339.000 --> 344.000] US Americans almost invariably did it left to right; I think perhaps invariably.
35
+ [344.000 --> 354.000] The Pormpuraawans looked less consistent at first, but if you looked at the direction relative to north, east, south, and west, it turns out they were reliably arranging these discs going westward.
36
+ [354.000 --> 364.000] So for them, time, sort of this sequential process, was going from east to west. So this is an allocentric spatial construal of time.
37
+ [364.000 --> 370.000] And this aligns with the prevalent frame of reference in that community, which is allocentric.
38
+ [370.000 --> 374.000] You get something similar for number.
39
+ [374.000 --> 382.000] Here's some of my work with colleagues Kensy Cooperrider and Rafael Núñez: a similar disc arrangement task, but this time with number dots, and we asked them to arrange them.
40
+ [382.000 --> 398.000] So here are the data, where each line is showing one particular vector of arrangement. The Yupno participants in Papua New Guinea, who often rely on an allocentric frame of reference for talking about space, seem to be going every which way relative to the body.
41
+ [398.000 --> 404.000] So they're arranging these discs in the right order, but with no preferred direction, maybe a slight bias to the right.
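To make "one particular vector of arrangement" concrete: a standard way to quantify whether a set of arrangement directions shows a preferred direction is circular statistics. A minimal sketch in Python, using made-up angles rather than the study's data:

```python
import numpy as np

def mean_resultant(angles_deg):
    """Circular mean direction (degrees) and resultant length for a set of
    arrangement vectors: length near 0 means no preferred direction,
    length near 1 means the vectors are tightly aligned."""
    a = np.deg2rad(np.asarray(angles_deg))
    z = np.mean(np.exp(1j * a))
    return np.rad2deg(np.angle(z)) % 360, np.abs(z)

# Hypothetical arrangement directions (degrees; 0 = rightward from the body).
dispersed = [10, 95, 200, 355, 170, 280, 30, 120]  # going every which way
rightward = [2, 5, 358, 10, 3, 7, 1, 355]          # tight rightward cluster

print(mean_resultant(dispersed))  # low resultant length: no preferred direction
print(mean_resultant(rightward))  # resultant length near 1: strong rightward bias
```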
42
+ [404.000 --> 417.000] The US Americans, no surprise, are all sort of orienting things to the right, and this sort of aligns with the prevalence of an egocentric frame of reference in California and most WEIRD cultures.
43
+ [417.000 --> 425.000] So this preference for an allocentric or egocentric frame of reference for space ends up being really important for how we think about other things too.
44
+ [425.000 --> 430.000] So these are spatial frames of reference, right: coordinate systems for spatial relations.
45
+ [430.000 --> 440.000] They supply a foundation for other conceptual domains. For small-scale space, WEIRD humans typically default to an egocentric frame of reference: left, right, front, back.
46
+ [440.000 --> 459.000] But there's considerable cross-cultural variability, with some groups preferring an allocentric frame of reference, some communities even speaking languages that don't have words for left and right, so lacking sort of the standard lexical items that we might expect, on a Kantian story, to be available everywhere.
47
+ [459.000 --> 477.000] So the kinds of questions that these observations raise for me are things like: why do we see this cross-cultural variability, and how is the cognitive organization of other conceptual domains built on that spatial frame of reference? How can we explain this patchwork pattern of cross-cultural variability?
48
+ [478.000 --> 498.000] So people have done a lot of work on this and the standard approach that they've used is a comparative method where you take two groups that differ along some really salient dimension like for instance the frame of reference that's preferentially used in language and then you see whether they also differ in how they think about space.
49
+ [498.000 --> 506.000] So, you know, people might compare the Yupno in Papua New Guinea to undergrads at UC San Diego.
50
+ [506.000 --> 527.000] Those differ in, for instance, the availability of cultural artifacts for number, and also in the frame of reference that's preferentially used in language. And you might zoom in on one of those dimensions, find some difference along that dimension that you really care about, and then try to correlate it with some difference in cognition, and hope that that is
51
+ [528.000 --> 535.000] actually a causal relationship; right, it's sort of correlational. This is a standard approach that people have used.
52
+ [535.000 --> 551.000] The problem here is that with any kind of small-scale comparison like this, there are going to be many dimensions of difference. So we looked at the difference in the availability of material artifacts for number, but I think at a glance you can see that there are a number of other salient differences between these field sites.
53
+ [551.000 --> 569.000] Countless ones. And so this sort of raises the question about whether the particular dimension of variability that we really care about as the explanatory factor was really the thing that mattered, and not all these other things that are also changing.
54
+ [569.000 --> 580.000] And with a comparative method with small-scale groups, there's also sometimes a difficulty of actually having cumulative insight that builds on the observations of past researchers.
55
+ [580.000 --> 590.000] So for instance, here's a picture of a Chimane town; I think we're going to hear about this from Ben Pitt next, who's done some fantastic work with this group.
56
+ [590.000 --> 603.000] So Ben and a bunch of folks have a wonderful paper that came out last year looking at spatial construals of number, size, and time in this group, showing that they don't necessarily align. Wonderful observation.
57
+ [603.000 --> 622.000] There were actually a number of past studies, in other groups, studying each of these domains independently and showing this misalignment, and it was just really difficult to know that that past work had already been done, because it was sort of published as these one-off studies. And that's why it was actually important for Ben to go out and do the one single study to bring it all together.
58
+ [622.000 --> 633.000] It's really really difficult to get this cumulative insight when these papers are being published as individual elements.
59
+ [633.000 --> 648.000] The other sort of limitation of this comparative approach is that it really encourages an outside-in, unidirectional, simple causal story, where you focus on language as the explanatory influence, or maybe cultural artifacts, or some sort of embodied practice like finger counting.
60
+ [648.000 --> 657.000] And you vary just that one thing and you're like, ah, yes, it has an influence or it's correlated with cognition.
61
+ [657.000 --> 669.000] An alternative to that perspective, though, is a much more ecological perspective, where we recognize that there might be causal influences going from each of these elements onto spatial cognition.
62
+ [669.000 --> 683.000] They might also be influencing each other, the influences might be bidirectional. The language that people speak might reflect their preferences for thinking in particular ways, which then in turn shape the way that they gesture which shapes the kinds of artifacts that they have.
63
+ [683.000 --> 694.000] And these kinds of simple comparative methods make it really, really difficult to adopt this kind of rich, multi-causal perspective, and often encourage a much more uni-causal approach.
64
+ [694.000 --> 700.000] So this is sort of a more ecological perspective where you look at relationships between elements.
65
+ [700.000 --> 705.000] Okay, so how can we advance our understanding of cross-cultural diversity in frames of reference?
66
+ [705.000 --> 714.000] Well, this comparative method has been really powerful, but there are many dimensions of difference that might be inconsequential, but might not be.
67
+ [714.000 --> 723.000] This approach makes it really difficult to get cumulative understanding, and it encourages this uni-causal account at the expense of a richer ecological perspective.
68
+ [723.000 --> 726.000] So what's the alternative?
69
+ [726.000 --> 731.000] I propose ATLAS: Abstract Thought and Language Across Space.
70
+ [731.000 --> 744.000] So this is a large-scale data bank that I've been developing with my colleague Kevin Holmes at Reed, working with a wonderful team of undergrads. It's going to be a living data bank of existing studies of cross-cultural diversity in frame of reference.
71
+ [744.000 --> 748.000] So this isn't a systematic review or a meta-analysis.
72
+ [748.000 --> 753.000] The idea is that this is going to be continuously added to and available as a resource to the community.
73
+ [753.000 --> 757.000] So that instead of reinventing the wheel or sort of doing these one off analyses.
74
+ [757.000 --> 766.000] we have the power of the collective to sort of shape and decide between causal accounts of cross-cultural diversity in frames of reference.
75
+ [766.000 --> 771.000] How did we do it? We did a systematic search through Google Scholar for targeted search terms.
76
+ [771.000 --> 778.000] We include studies that involve an empirical study of spatial frames of reference or spatial representations of number and time,
77
+ [778.000 --> 786.000] and at least one non-English-speaking sample, so trying to get beyond just studies of folks in California or Toronto.
78
+ [786.000 --> 800.000] And we first sort of did an unrestricted search and then went year by year from 1970 all the way to 2020, which is when this first version of the data bank stopped being populated, although we're going to continue to populate it going forward.
79
+ [800.000 --> 804.000] And then for each study and group, we coded things like the preferred frame of reference:
80
+ [804.000 --> 808.000] What did people use in their reasoning: egocentric, allocentric, or both?
81
+ [808.000 --> 815.000] What frames of reference were actually available in the task: were they forced to use one, and then just tested to see how well they did, or could they choose between approaches?
82
+ [815.000 --> 818.000] What were the dependent measures?
83
+ [818.000 --> 823.000] What was the scale of the task: tabletop, or larger?
84
+ [823.000 --> 831.000] Where were they, and what language did they speak? And we coded this in ways that allow for interoperability with other data sets.
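As a rough illustration of the coding scheme just described, here is a hypothetical sketch of what one coded entry could look like; the field names are illustrative, not the actual ATLAS schema:

```python
from dataclasses import dataclass, field

@dataclass
class StudyRecord:
    """One coded study entry (hypothetical fields, not the real ATLAS schema)."""
    publication: str          # source paper
    group: str                # community or sample studied
    language: str             # language spoken by the sample
    location: str             # field site, usable for linking to other databases
    domain: str               # "space", "time", or "number"
    preferred_for: str        # "egocentric", "allocentric", or "both"
    available_fors: list = field(default_factory=list)  # frames the task allowed
    forced_choice: bool = False                          # was one frame imposed?
    dependent_measures: list = field(default_factory=list)
    task_scale: str = "tabletop"                         # "tabletop" or "larger"

record = StudyRecord(
    publication="Boroditsky & Gaby (2010)",
    group="Pormpuraaw",
    language="Kuuk Thaayorre",
    location="Pormpuraaw, Australia",
    domain="time",
    preferred_for="allocentric",
    available_fors=["egocentric", "allocentric"],
    dependent_measures=["disc arrangement direction"],
)
```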
85
+ [831.000 --> 837.000] So far we have 347 studies from almost 140 publications.
86
+ [837.000 --> 844.000] This is almost a quarter of a hundred thousand participants from a variety of places around the world.
87
+ [844.000 --> 855.000] You can see the variety of languages that have been studied for spatial frames of reference, but you might also notice that there are patches of the world that have been really understudied, really undersampled.
88
+ [855.000 --> 859.000] So most of South America, much of Africa.
89
+ [859.000 --> 867.000] It was a similar story for time, although there's a really nice large-scale study of spatial construals of time in Brazil.
90
+ [867.000 --> 873.000] And then a much sparser representation of spatial frames of reference for number.
91
+ [873.000 --> 879.000] So we could start looking, at scale, at what frames of reference people use for number and time.
92
+ [879.000 --> 883.000] So this is looking at an egocentric frame of reference along the lateral axis.
93
+ [883.000 --> 896.000] And we see this prevalence of a rightward perspective, right, where you sort of assume that time goes from left to right, but also the existence of leftward cases, mixed cases, and also some people who didn't have a preference at all.
94
+ [896.000 --> 911.000] And if we look at the use of the sagittal axis for time, you actually have lots of evidence for mixed uses, although also some preference for the future in front, but also lots of new evidence coming out that some people think of the future as behind them.
95
+ [911.000 --> 919.000] And what I find especially interesting is that in a lot of cases, we have multiple domains that have been studied in the same group.
96
+ [919.000 --> 930.000] Dutch, English, German, Italian, Japanese, Mandarin: those are all language communities where we have evidence for how they think about space, time, and number, and since 2020 now also the Chimane.
97
+ [930.000 --> 939.000] So what's next? This data is going to be released publicly. We want to share it with the community. So this is hopefully going to be in good shape by winter 2023.
98
+ [939.000 --> 953.000] And when we release it, we'll also have a web interface, so that if folks have a study that they think belongs, they can code their own data and submit an update; we'll sort of do a sanity check, a quality check, and update it, so it's a living document that's used as a tool for us.
99
+ [953.000 --> 962.000] And I would love to hear from folks who are working on spatial frames of reference about what you would want to have coded about your own data to properly represent it.
100
+ [962.000 --> 972.000] And the goal is that this is a collective resource for people working on cross-cultural variability in spatial frames of reference, in a larger, global, ecological perspective.
101
+ [972.000 --> 977.000] And this makes some really cool things possible. So: new approaches to old questions.
102
+ [977.000 --> 983.000] So we could ask, you know, this classic chestnut of what predicts the particular frame of reference and orientation of a domain.
103
+ [983.000 --> 997.000] And we could look, at scale, at whether urban versus rural living actually is predictive, and with enough field sites we could sort of rule out alternative explanations, in terms of, say, visibility of the skyline, for instance.
104
+ [997.000 --> 1000.000] But we can also ask new questions.
105
+ [1000.000 --> 1009.000] Like: when are space, time, and number aligned in their conceptualization? Why are they sometimes not? So this is a question that was picked up by Ben Pitt in 2021.
106
+ [1009.000 --> 1018.000] And we could sort of reflect on why there are no attested allocentric construals of number. There's no group where that's ever been observed; is that true at scale?
107
+ [1018.000 --> 1035.000] And are there systematic contingencies between domains? So for instance, if you conceptualize space egocentrically, does that mean that, as a community, you're almost never, or perhaps never, going to conceptualize time allocentrically? Right, these questions are now possible if you have large-scale data available.
108
+ [1035.000 --> 1044.000] So we're going to have an ecological perspective on frames of reference. Right, we've made fantastic progress over the last few decades on documenting cross-cultural variability; that's been great.
109
+ [1044.000 --> 1050.000] But most of this has been done in small scale studies that haven't been brought into conversation with each other.
110
+ [1050.000 --> 1052.000] There's limits to that.
111
+ [1052.000 --> 1066.000] And the hope is that ATLAS, Abstract Thought and Language Across Space, this public data bank, is going to leverage all the collective labor that many people have been doing in this field to understand the cognitive ecosystem of space, time, and number.
112
+ [1066.000 --> 1076.000] Thank you so much.
113
+ [1076.000 --> 1087.000] So we're a little over time. So maybe we have time for one question as we transition. If anybody wants to jump up to a mic, you're welcome to.
114
+ [1087.000 --> 1091.000] Hi, that's fantastic.
115
+ [1091.000 --> 1107.000] It's, I believe, called a mapping review. It's something that Zoe Ngo and I recently started for episodic memory development; it's also an open-source database. And I think, you know, this is fantastic.
116
+ [1107.000 --> 1125.000] It's really exciting. One request, I don't know if it's possible: as a spatial cognition researcher, more than a spatial language researcher, I think it's super important to know something about the physical terrain in which the groups live.
117
+ [1125.000 --> 1131.000] You know, is it sloped, is there a major body of water, etc. Is that at all possible?
118
+ [1131.000 --> 1146.000] So one thing that won't exactly get at what you want, but it's going to be an approximation is mapping our data to existing databases that have some of that ecological environmental information.
119
+ [1146.000 --> 1160.000] So an example is D-PLACE, which is a big database of cultural, environmental, and sort of ecological terrain variables. It won't tell us exactly what was happening in that village, but it gives us a sense of what's happening in the area.
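A minimal sketch of that kind of interoperability, assuming hypothetical file and column names: D-PLACE identifies societies by language (glottocodes), so a shared language identifier could serve as the join key:

```python
import pandas as pd

# Hypothetical filenames and columns; the point is only the join pattern.
atlas = pd.read_csv("atlas_studies.csv")      # assumed to carry a "glottocode" column
dplace = pd.read_csv("dplace_societies.csv")  # assumed environmental variables per society

# Each study row now carries regional environmental context (terrain, climate,
# distance to coast, etc.) at whatever resolution D-PLACE provides.
merged = atlas.merge(dplace, on="glottocode", how="left")
```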
120
+ [1160.000 --> 1171.000] And so that's one way that we're trying to get at that. Yeah, thanks so much. Okay, thank you, everyone.
121
+ [1171.000 --> 1175.000] Great. Okay, and just to remind people.
122
+ [1175.000 --> 1188.000] We are going to have some time at the end of all the talks for a more general discussion. So if you do have other questions for Tyler, I know there was one in the chat. We're not ignoring you. We'll just save it for later.
123
+ [1188.000 --> 1200.000] Great. Okay. I'm up next. I'm Benjamin Pitt. And today I want to present some ongoing work in collaboration with Alex Carstensen,
124
+ [1200.000 --> 1210.000] Isabelle Boni, Ted Gibson, and Steve Piantadosi. And this is work that tries to identify some of the mechanisms of cognitive diversity,
125
+ [1210.000 --> 1218.000] in this case, diversity in people's basic cognitive frameworks for representing space, the same ones that Tyler just introduced.
126
+ [1218.000 --> 1230.000] So the aim here is to try to clarify why we see this sort of variation in spatial memory and in spatial language within and across groups. So similar goals here.
127
+ [1230.000 --> 1240.000] So just to review briefly: if I showed you this scene and asked you to describe where the ball is, there's a lot of things you might say. You might say the ball is in front of the chair,
128
+ [1240.000 --> 1246.000] or to the right of the chair; you might say it's north of the chair; and in some cultures, you might say it's downriver of the chair.
129
+ [1246.000 --> 1262.000] So all of these are examples of different spatial reference frames. And there are a lot of flavors and a lot of different classifications. I'm not going to get into the details of them. I'm going to instead focus on this primary distinction that I'm interested in, the same one that Tyler just alluded to:
130
+ [1262.000 --> 1272.000] egocentric versus allocentric. So just to remind you, egocentric spatial reference frames are coordinate systems that are defined by the sides of the body, like my left and right.
131
+ [1272.000 --> 1278.000] And allocentric frames are those defined by features of the spatial environment like uphill or north.
132
+ [1278.000 --> 1289.000] Okay, so different language groups vary dramatically, as we just saw, in the kind of spatial language they use, and they've been studied and classified accordingly.
133
+ [1289.000 --> 1302.000] But they also differ in the spatial reference frame they use when reasoning about space, even when no language is required. So Tyler just showed us a very cool example of a non linguistic test of spatial reference frame using gesture.
134
+ [1302.000 --> 1320.000] Another way to test this, and there are many, but one that's going to be relevant here, is what I'll call the reconstruction task. The way this task typically works is that the participant faces a study table and studies some novel set of objects, like three animal figures in a row,
135
+ [1320.000 --> 1338.000] practices reconstructing that array, rotates 180 degrees to a test table, and then is asked to reconstruct the array. And the trick here is that there are two right answers: one that corresponds to an egocentric frame and the other that corresponds to an allocentric frame.
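To make the two right answers concrete, here is a toy sketch for a lateral (left-to-right) row of objects; the animal names are hypothetical, and this is not the authors' code:

```python
def predicted_reconstructions(study_array):
    """The two correct reconstructions after a 180-degree turn, for an array
    listed from the participant's left at the study table."""
    egocentric = list(study_array)             # same order relative to the body
    allocentric = list(reversed(study_array))  # same layout relative to the room
    return egocentric, allocentric

ego, allo = predicted_reconstructions(["cow", "pig", "hen"])
print(ego)   # ['cow', 'pig', 'hen']: the cow stays on the participant's left
print(allo)  # ['hen', 'pig', 'cow']: the cow stays at the same end of the room
```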
136
+ [1338.000 --> 1355.000] And what's been shown in lots of beautiful work, in lots of distinct cultures, is what appears to be clear preferences for one spatial reference frame or another, and real differences across groups.
137
+ [1355.000 --> 1365.000] So some groups appear to be predominantly egocentric and others appear to be predominantly allocentric in the way they solve these nonverbal spatial tasks.
138
+ [1365.000 --> 1372.000] And I want to ask a question that has been asked many times before, which is: why? Why do we see these sorts of variation?
139
+ [1372.000 --> 1385.000] Some researchers, on the basis of the correlational evidence that we've seen, have suggested that the differences in spatial memory are actually caused by differences in spatial language.
140
+ [1385.000 --> 1402.000] So that's an interesting possibility, and sort of an ongoing debate, but it's not the question I want to focus on today. The question that I want to ask is why either of these things varies: right, why do people talk or think differently about space within and across cultures?
141
+ [1402.000 --> 1410.000] So I want to suggest here it has to do with differences in visual spatial perception and memory.
142
+ [1410.000 --> 1414.000] So specifically, I'm going to talk about what's called mirror invariance.
143
+ [1414.000 --> 1423.000] This is a phenomenon that's been studied in visual cognition for a long time, which can basically be glossed as spatial confusion on the lateral axis.
144
+ [1423.000 --> 1429.000] And this is sort of most obvious to many of us, probably observing children trying to learn to read and write.
145
+ [1429.000 --> 1438.000] And one thing that becomes very clear is that distinguishing letters like b and d is incredibly hard and is like years in the making.
146
+ [1438.000 --> 1447.000] And it is actually particularly, or most, strong on the lateral axis. So b versus d is harder than, say, b versus p.
147
+ [1447.000 --> 1455.000] This is something that's not just a feature of kids learning to read and write, but is actually observed across species.
148
+ [1455.000 --> 1476.000] So in octopuses and in monkeys and in other species as well, you find the same mirror invariance: basically, the images on the right, which are lateral reflections, are much harder to learn to distinguish than the images on the left, which are equally different physically, but
149
+ [1476.000 --> 1480.000] are easier to distinguish psychologically.
150
+ [1480.000 --> 1497.000] And you find the same mirror invariance in some adult groups, especially in low literacy populations where people are asked to distinguish mirror images, either in two dimensional or three dimensional shapes, and they insist that they're the same even when viewing them simultaneously.
151
+ [1497.000 --> 1508.000] So the lesson from that literature, I would summarize, is: the lateral axis is weird, right; people vary dramatically in their ability to make left-right discriminations.
152
+ [1508.000 --> 1526.000] And yet, despite the sort of peculiarity of this axis, the lateral axis, most studies of frame-of-reference use depend on it: either they study it only, or they sort of rely on it primarily, to classify groups and to study how they think and speak
153
+ [1526.000 --> 1531.000] about space.
154
+ [1531.000 --> 1544.000] So I want to suggest that actually these differences in frame of reference that we're seeing across cultures, and perhaps within cultures, may be driven, at least in part, by differences in people's perceptions of left-right space.
155
+ [1544.000 --> 1555.000] And this proposal builds on previous work by Peggy Li and Linda Abarbanell, and our own Tyler Marghetis, and Stephen Levinson before them, among others.
156
+ [1555.000 --> 1563.000] Here what I want to do is propose and test a sort of general form of this proposal, which I'm calling the spatial discrimination hypothesis,
157
+ [1563.000 --> 1573.000] which says, simply: when reasoning or speaking about spatial relations, people tend to use the relevant spatial continuum that they can better discriminate, whether that continuum is defined
158
+ [1573.000 --> 1578.600] by the sides of their body or by the features of their spatial environment.
159
+ [1578.600 --> 1584.200] So if that's true, then people who are unaccustomed to making left-right discriminations should
160
+ [1584.200 --> 1590.000] sometimes abandon that axis, that egocentric axis, in favor of other more reliable
161
+ [1590.000 --> 1596.600] allocentric cues, like where the river is or where the hill is.
162
+ [1596.600 --> 1602.080] That is, we should expect any allocentric preferences to be strongest on the lateral
163
+ [1602.080 --> 1608.400] axis, and this is actually a prediction that already has some support in the literature.
164
+ [1608.400 --> 1613.040] And the strongest prediction of this account, though, is that people's frame of reference preferences
165
+ [1613.040 --> 1619.120] might actually reverse across axes where people prefer allocentric space on the lateral
166
+ [1619.120 --> 1625.800] axis but prefer egocentric space on the sagittal axis, the front-back axis.
167
+ [1625.800 --> 1630.400] Alternatively of course, people may have no choice but to fixate predominantly on just
168
+ [1630.400 --> 1635.360] one frame of reference as has been suggested in the literature.
169
+ [1635.360 --> 1639.880] And so to test this, it's hard to test this in American adults who tend to be overwhelmingly
170
+ [1639.880 --> 1643.080] egocentric even on the lateral axis.
171
+ [1643.080 --> 1648.360] But a better test bed can be found in the Chimane, a group of farmer foragers, indigenous
172
+ [1648.360 --> 1652.280] to the Bolivian Amazon who I've had the pleasure of working with.
173
+ [1652.280 --> 1656.160] There's a lot that's interesting about Chimane culture, but one thing that's relevant
174
+ [1656.160 --> 1661.720] here is that they have relatively few of the artifacts and practices that emphasize left
175
+ [1661.720 --> 1667.560] right spatial distinctions in the experience of, say, US Americans or Canadians, experiences
176
+ [1667.560 --> 1672.640] like reading or writing or driving cars or using sinks.
177
+ [1672.640 --> 1678.880] So instead, Chimane people are known to navigate large parts of the Amazon on foot,
178
+ [1678.880 --> 1684.000] starting when they're children and they cover large areas and when we ask them to point
179
+ [1684.000 --> 1688.480] upriver or east, they're impressively good at it even when in an enclosed
180
+ [1688.480 --> 1689.960] space.
181
+ [1689.960 --> 1695.080] So these features of Chimane culture suggest that they may have good allocentric spatial
182
+ [1695.080 --> 1696.080] abilities.
183
+ [1696.080 --> 1700.880] And the question we want to ask here is whether they use allocentric spatial reference
184
+ [1700.880 --> 1707.560] frames generally or whether they use them selectively for making distinctions on the lateral
185
+ [1707.560 --> 1709.960] axis.
186
+ [1709.960 --> 1716.480] So to test this, first we compared frame-of-reference use across axes using
187
+ [1716.480 --> 1717.480] the reconstruction task.
188
+ [1717.480 --> 1719.520] This is the same task I talked about earlier.
189
+ [1719.520 --> 1728.040] You memorize an array of objects, rotate 180 degrees and reconstruct it like this.
190
+ [1728.040 --> 1731.680] But importantly, we had them do this not just on the lateral axis, but also on the
191
+ [1731.680 --> 1734.640] sagittal axis, same participants.
192
+ [1734.640 --> 1739.720] And then we also had those participants do an even simpler test of spatial
193
+ [1739.720 --> 1744.600] reference frames, which we call the selection task.
194
+ [1744.600 --> 1750.800] The way this works is we lay out five identical cups, one set on each table.
195
+ [1750.800 --> 1755.600] Participants are asked to touch a target cup on the target table, sorry, on the study table.
196
+ [1755.600 --> 1761.920] They turn around 180 degrees and touch, or are asked to touch, the corresponding cup at
197
+ [1761.920 --> 1762.920] the test table.
198
+ [1762.920 --> 1768.200] And of course, the idea here is that one response corresponds to an egocentric
199
+ [1768.200 --> 1771.480] frame and the other to an allocentric frame.
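The same logic, sketched for the selection task, assuming the five cups form a lateral row indexed from the participant's left. One thing the sketch makes visible: under this scheme the middle cup maps to itself, so it cannot tell the two frames apart:

```python
N_CUPS = 5

def predicted_choices(target):
    """Predicted test-table choices after the 180-degree turn, for cups
    indexed 1..N_CUPS from the participant's left."""
    egocentric = target                # same position relative to the body
    allocentric = N_CUPS + 1 - target  # same position in the room
    return egocentric, allocentric

print(predicted_choices(2))  # (2, 4): the two frames pick different cups
print(predicted_choices(3))  # (3, 3): the middle cup is uninformative
```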
200
+ [1771.480 --> 1775.280] Okay, so those are non-linguistic tests of frame of reference.
201
+ [1775.280 --> 1776.800] Let's see what they did.
202
+ [1776.800 --> 1779.520] So we'll start with the lateral axis.
203
+ [1779.520 --> 1783.840] This is in the reconstruction task and what you can see is there's a preference here for
204
+ [1783.840 --> 1785.640] allocentric spatial frames of reference.
205
+ [1785.640 --> 1790.760] This is common among non-industrialized groups.
206
+ [1790.760 --> 1795.880] But if we look at the sagittal axis, we see this clear reversal where the same participants
207
+ [1795.880 --> 1803.040] in the same room, using the same materials, prefer using egocentric space to solve
208
+ [1803.040 --> 1807.400] the same task when it's on the sagittal axis.
209
+ [1807.400 --> 1810.320] And we find the same reversal in the selection task.
210
+ [1810.320 --> 1814.800] So what's going on in both of these tasks is that these Chimane adults are using
211
+ [1814.800 --> 1819.360] different non-linguistic frames of reference on different axes.
212
+ [1819.360 --> 1820.920] Cool.
213
+ [1820.920 --> 1826.880] So here's sort of a schematic of that result: allocentric on one axis and egocentric
214
+ [1826.880 --> 1829.680] on the other.
215
+ [1829.680 --> 1835.760] So that's consistent with one of the predictions of our spatial discrimination hypothesis.
216
+ [1835.760 --> 1838.280] But then the question is what about spatial language, right?
217
+ [1838.280 --> 1843.040] I just showed you tasks that don't involve any language at all.
218
+ [1843.040 --> 1848.200] Now we want to ask do people talk differently about spatial relations on different axes,
219
+ [1848.200 --> 1851.760] the way that they think differently about them on different axes?
220
+ [1851.760 --> 1856.200] Or alternatively, perhaps people simply align with the dominant coding system in their
221
+ [1856.200 --> 1857.280] speech community, right?
222
+ [1857.280 --> 1862.680] You could imagine that for the purposes of communication, it might be beneficial to converge on a single
223
+ [1862.680 --> 1863.680] system.
224
+ [1863.680 --> 1865.280] Okay.
225
+ [1865.280 --> 1871.160] So in experiment two, we tested a new group of Chimane adults on their spatial language
226
+ [1871.160 --> 1873.760] on the same two axes.
227
+ [1873.760 --> 1876.640] We did this using the director-matcher task.
228
+ [1876.640 --> 1880.680] So the way that this task works, it's designed to elicit spatial language.
229
+ [1880.680 --> 1887.280] We give the director, who in this case is the man on the left, a simple spatial array.
230
+ [1887.280 --> 1890.440] In this case, it's a toy chicken and pig.
231
+ [1890.440 --> 1895.760] And he's asked to describe it to the matcher, the woman on the right, whose job it is
232
+ [1895.760 --> 1898.640] to try to reconstruct an identical array.
233
+ [1898.640 --> 1903.200] And of course, the trick here, I guess, is that they can't see each other.
234
+ [1903.200 --> 1906.440] You can sort of see that there's an opaque barrier that separates them.
235
+ [1906.440 --> 1910.360] They can't see each other or each other's gestures or the figures.
236
+ [1910.360 --> 1915.360] And the idea is that that encourages the director to encode all of the relevant spatial information
237
+ [1915.360 --> 1917.360] into his speech.
238
+ [1917.360 --> 1918.360] And they did.
239
+ [1918.360 --> 1919.920] They said a lot.
240
+ [1919.920 --> 1924.640] And they said things like the pig is on my side and the chicken is more over there, facing
241
+ [1924.640 --> 1930.840] east, put the pig to the west and the chicken to the east facing me, so on and so forth.
242
+ [1930.840 --> 1935.080] We had 18 directors and we went through and coded all of their spatial language as either
243
+ [1935.080 --> 1937.120] egocentric or allocentric.
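The coding itself was presumably done by hand from translations; purely to illustrate the distinction being coded, a crude keyword matcher might look like this (the cue lists are hypothetical):

```python
EGOCENTRIC_CUES = ("my left", "my right", "my side", "facing me")
ALLOCENTRIC_CUES = ("east", "west", "north", "south", "upriver", "downriver")

def code_utterance(utterance):
    """Label an utterance with the frame(s) of reference it invokes."""
    text = utterance.lower()
    labels = set()
    if any(cue in text for cue in EGOCENTRIC_CUES):
        labels.add("egocentric")
    if any(cue in text for cue in ALLOCENTRIC_CUES):
        labels.add("allocentric")
    return sorted(labels) or ["unclassified"]

print(code_utterance("The pig is on my side and the chicken is over there, facing east"))
# ['allocentric', 'egocentric']: a single utterance can mix frames
print(code_utterance("Put the pig to the west and the chicken to the east"))
# ['allocentric']
```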
244
+ [1937.120 --> 1940.280] Okay, so here's what we found.
245
+ [1940.280 --> 1941.800] Let's start with the lateral axis again.
246
+ [1941.800 --> 1946.120] We see the same preference in the lateral axis for allocentric space.
247
+ [1946.120 --> 1952.600] And on the sagittal axis, we see the same reversal, where on the sagittal axis,
248
+ [1952.600 --> 1957.680] they're talking, they're speaking about space using egocentric frames.
249
+ [1957.680 --> 1964.640] So here the summary is that, just like in the nonverbal tasks,
250
+ [1964.640 --> 1967.960] these Chimane participants are using different frames of reference on different axes, in
251
+ [1967.960 --> 1971.880] this case in their spatial language.
252
+ [1971.880 --> 1974.520] So okay, what does this tell us?
253
+ [1974.520 --> 1977.160] I'm running out of time.
254
+ [1977.160 --> 1979.320] I'll just summarize briefly.
255
+ [1979.320 --> 1983.880] I think the first thing this tells us is that it makes it pretty clear that people do not
256
+ [1983.880 --> 1987.280] fixate predominantly on just one spatial frame of reference, right?
257
+ [1987.280 --> 1994.760] That actually, instead, FOR use varies within an individual across these spatial axes.
258
+ [1994.760 --> 2000.080] And so whenever we're testing only one of these axes, like the lateral axis, we're going
259
+ [2000.080 --> 2005.760] to get only sort of a one dimensional picture of that person's FOR use.
260
+ [2005.760 --> 2011.400] And second, it shows that spatial memory patterns with spatial language not only across cultures,
261
+ [2011.400 --> 2014.280] but within the same language group.
262
+ [2014.280 --> 2019.400] To be clear, this is neat, I think, but it doesn't actually clarify whether there's a causal
263
+ [2019.400 --> 2023.640] relationship between them and what that causal relationship might be.
264
+ [2023.640 --> 2028.960] And finally, the thing that I find sort of most motivating about these initial findings
265
+ [2028.960 --> 2034.360] is that they suggest that spatial discriminability could potentially explain variation at other
266
+ [2034.360 --> 2035.760] levels as well.
267
+ [2035.760 --> 2042.760] That is, not just variation across axes, but across cultures, between individuals, and over development.
268
+ [2042.760 --> 2047.920] And we have some ongoing work now to start to look at variation in the relationship between
269
+ [2047.920 --> 2053.400] spatial discriminability and FOR use at these other levels as well.
270
+ [2053.400 --> 2059.000] And by doing so, by studying conceptual diversity at all of these levels,
271
+ [2059.000 --> 2063.560] I'll quote Li and Gleitman here: the quest is for a unified explanation of when and
272
+ [2063.560 --> 2068.720] why individuals or populations, be they speakers of one language or another, pre-linguistic
273
+ [2068.720 --> 2074.440] humans, or members of other species, solve spatial problems in varying ways.
274
+ [2074.440 --> 2081.120] Okay, that I just want to thank the Chimane, first of all, and our translators, my funders
275
+ [2081.120 --> 2088.480] and institutions that have supported me and of course, my collaborators and advisors.
276
+ [2088.480 --> 2089.280] Thanks very much.
277
+ [2094.560 --> 2101.560] Okay, do we have, let's see, time do we have, I think we have three minutes for questions.
278
+ [2101.560 --> 2106.560] Sure, we'll have a question real quick.
279
+ [2106.560 --> 2109.560] Yeah, I've got one.
280
+ [2109.560 --> 2115.560] And that's about sort of the distinction between the egocentric and allocentric and what that conceptually is.
281
+ [2115.560 --> 2121.560] I noticed that the two different kinds of reference that were both called allocentric
282
+ [2121.560 --> 2124.560] there in the coding scheme were referring to sort of different entities.
283
+ [2124.560 --> 2128.560] There was both east and west, but there were also references to parts of the table.
284
+ [2128.560 --> 2133.560] And that sort of makes me think, you know, if you're on a ship, you might refer to port and starboard
285
+ [2133.560 --> 2138.560] or you might refer to north and south, depending on whether you're talking about where you are in the ship
286
+ [2138.560 --> 2141.560] or where the ship is in the world.
287
+ [2141.560 --> 2147.560] And I was wondering if, oh no, I've actually forgotten the question about that.
288
+ [2147.560 --> 2161.560] But I was wondering, oh right, I was wondering if you noticed any particular patterns between a sort of absolute, global-scale frame of allocentric reference and other sort of meso-level allocentric frames of reference, and whether you noticed any patterns in the qualitative data.
289
+ [2161.560 --> 2163.560] Yeah, yeah, it's a great question.
290
+ [2163.560 --> 2166.560] And there's certainly a lot of ways.
291
+ [2166.560 --> 2172.560] There's a lot of flavors and sub-flavors of allocentric in particular.
292
+ [2172.560 --> 2175.560] We haven't really dug into it deeply.
293
+ [2175.560 --> 2185.560] You might have noticed that in the language data, although my columns showed egocentric and allocentric, we do break it down some.
294
+ [2185.560 --> 2191.560] There are more analyses that need to be done there. As for the nonverbal data,
295
+ [2191.560 --> 2203.560] these tasks don't allow us to distinguish between things like: are you using the sides of the table, or are you using, you know, the flow of the river?
296
+ [2203.560 --> 2211.560] But you can imagine other tasks that are able to make that distinction. And I think that's a worthwhile endeavor.
297
+ [2211.560 --> 2212.560] Yeah.
298
+ [2212.560 --> 2224.560] I'm going to pass the floor on now.
299
+ [2224.560 --> 2230.560] Okay, let me get this ready.
300
+ [2230.560 --> 2237.560] Okay, you're using your laptop. Okay, next we have Holly Huey, who's here
301
+ [2237.560 --> 2241.560] to talk to us about diversity in
302
+ [2241.560 --> 2245.560] spatial cognition across age groups.
303
+ [2245.560 --> 2248.560] Okay.
304
+ [2248.560 --> 2254.560] Yes, you are. Do you want this.
305
+ [2254.560 --> 2257.560] Okay.
306
+ [2257.560 --> 2266.560] All righty. Hi everyone.
307
+ [2266.560 --> 2280.560] My name is Holly Huey. I'm a PhD student at UCSD.
308
+ [2280.560 --> 2288.560] I think I have some slides issues. One second.
309
+ [2288.560 --> 2291.560] Let's try one more time.
310
+ [2291.560 --> 2292.560] All right.
311
+ [2292.560 --> 2309.560] Switch.
312
+ [2309.560 --> 2319.560] I'm sorry for the tech issues.
313
+ [2319.560 --> 2320.560] Cool.
314
+ [2320.560 --> 2326.560] All right, so thank you, Sarah. My name is Holly. I'm from the University of California, San Diego,
315
+ [2326.560 --> 2333.560] but I'm excited to present work that I've done previously at NYU with Dr. Moira Dillon, along with Matthew Jordan and Yuval Hart.
316
+ [2333.560 --> 2338.380] And I'm going to talk about work from that
317
+ [2338.380 --> 2343.560] laboratory, the Lab for the Developing Mind, which is really interested in exploring spatial cognition through the lens of geometry.
318
+ [2343.560 --> 2349.560] Across generations and across time, humans have developed really rich formal systems of geometry.
319
+ [2349.560 --> 2359.560] So formal geometry underlies much of human achievement, from science and technology to art and architecture.
320
+ [2359.560 --> 2364.560] And it's built on very abstract concepts.
321
+ [2364.560 --> 2373.560] So for example, concepts like a point being infinitely small or a line being infinitely long.
322
+ [2373.560 --> 2380.560] But despite the abstractness of these concepts, humans appear to intuitively grasp these foundational definitions of geometry.
323
+ [2380.560 --> 2387.560] Perhaps in part, based on how we physically interact with these spaces and objects within our environments.
324
+ [2387.560 --> 2395.560] So for example, when navigating from point A to point B, a straight line is the shortest, most efficient path to take.
325
+ [2395.560 --> 2402.560] And so a major thrust of this work is uncovering the contexts that shape our natural geometric intuitions,
326
+ [2402.560 --> 2410.560] and moreover, what their developmental origins are that eventually lead to such sophisticated formal geometric systems.
327
+ [2410.560 --> 2419.560] And so to investigate these really broad questions, a really big aim of the study that I'll be presenting is to benchmark what those intuitions are in the first place,
328
+ [2419.560 --> 2424.560] and whether they hold across different contexts.
329
+ [2424.560 --> 2439.560] Now, previous studies exploring humans' intuitive, natural geometry have frequently converged on the conclusion that, regardless of formal schooling, humans are spontaneously attuned to foundational principles of planar Euclidean geometry:
330
+ [2439.560 --> 2447.560] principles related to concepts such as lines, parallelism, perpendicularity, and symmetry.
331
+ [2447.560 --> 2456.560] And a prominent theoretical perspective suggests that there are two separate cognitive systems for geometry that have emerged through human evolution.
332
+ [2456.560 --> 2468.560] So one system prioritizes distance and direction information to support navigation and is often investigated by asking children to navigate spaces of varying enclosures.
333
+ [2468.560 --> 2476.560] So for example, on the left, studies show that four-year-olds could reorient in rooms that were enclosed by short or tall walls,
334
+ [2476.560 --> 2484.560] but failed to reorient in spaces that were merely defined by lines on the ground or by pillars at the corners of the spaces.
335
+ [2484.560 --> 2497.560] Additionally, on the right, other work has shown that children can more easily orient in spaces with more extreme dimensions, such as the rooms on the left, versus the rooms on the right that converge towards more square proportions.
336
+ [2497.560 --> 2506.560] And a second system is theorized to prioritize length and angle information in order to support visual form recognition.
337
+ [2506.560 --> 2517.560] And this is often probed by asking participants to make judgments about shapes of varying lengths and angles, and to identify shapes that deviate in such information relative to others.
338
+ [2517.560 --> 2528.560] So here we have a deviant-object detection task: for example, all the shapes that I'm highlighting here are different from the others in their corresponding array.
339
+ [2528.560 --> 2536.560] And a key aspect to note is that this literature has frequently used stimuli within planar Euclidean contexts.
340
+ [2536.560 --> 2549.560] Without investigating humans' intuitions about non-planar contexts, the conclusions that we have so far fall short, at the moment, in comprehensively describing what composes our intuitive geometry.
341
+ [2549.560 --> 2559.560] Limited but really important cross-cultural research, inspired by the prior work that we've seen, has begun to probe both humans' planar and non-planar intuitions,
342
+ [2559.560 --> 2568.560] but in limited ways. Intriguingly, this work also suggests that humans' intuitive geometry reflects planar Euclidean principles.
343
+ [2568.560 --> 2575.560] So for example, in this one study, adults from the Mundurukú tribe of the Amazon were asked questions about shapes on both planar and spherical surfaces.
344
+ [2575.560 --> 2584.560] Here they were trying to identify the location of the apex of what would be the completed triangle, as well as what the magnitude of its angle would be.
345
+ [2584.560 --> 2590.560] And while this work in particular offers really rich insights on the nature of human intuitions about non-planar contexts,
346
+ [2590.560 --> 2600.560] it was limited to relatively few questions presented in the spherical context, and not really any specific questions about principles of geometry, just broad questions about shapes.
347
+ [2600.560 --> 2612.560] And so, building on this particular study, what we wanted to do was to more specifically investigate children's and adults' intuitions about principles of geometry in spherical, or non-planar, contexts.
348
+ [2612.560 --> 2627.560] While the non-planar contexts that we navigate in real life look something more like the surface that I'm displaying now, we decided to probe children's and adults' intuitions about a simpler surface, such as a sphere.
349
+ [2627.560 --> 2642.560] In our study, we evaluated the judgments of 48 six-to-eight-year-old children and 48 adults from the US about a single foundational concept: linearity.
350
+ [2642.560 --> 2658.560] So on a plane, straight lines are the shortest distance between two points. However, on the surface of a sphere lines can look curved or straight, but importantly lines that appear straight are not always the shortest distance between two points.
351
+ [2658.560 --> 2668.560] And so what we wanted to see was whether or not these children and adults were sensitive to how lines and distances interact with these non-planar surfaces.
352
+ [2668.560 --> 2677.560] And so we investigated our participants' intuitions about this ambiguity of spheres. We presented them with a series of 2D images of 3D spheres.
353
+ [2677.560 --> 2690.560] In one kind of image, we presented them with geodesics. A geodesic is the shortest path between two points illustrated here as the solid black line between the little purple and orange dots.
354
+ [2690.560 --> 2701.560] And if this path were continued around the whole sphere, it would be defined as a great circle with the largest diameter and it would cut the sphere in half.
355
+ [2701.560 --> 2717.560] In another kind of image, we presented paths on spheres that would not be the shortest paths between these points; they're called arcs. So in this case, if a path were continued around the full sphere, it would not be the circle with the greatest diameter, and it would therefore not cut the sphere in half.
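A small numerical check of why arcs lose to geodesics: any circle on the sphere through the same two points is smaller than the great circle, and its minor arc between those points is always longer. A sketch using a unit sphere and two points 90 degrees apart:

```python
import numpy as np

def arc_length(chord, circle_radius):
    """Length of the minor arc between two points on a circle, given the
    straight-line (chord) distance between those points."""
    return 2 * circle_radius * np.arcsin(chord / (2 * circle_radius))

chord = np.sqrt(2.0)  # chord between points 90 degrees apart on a unit sphere

print(arc_length(chord, 1.00))  # great circle (the geodesic): ~1.571 (= pi/2)
print(arc_length(chord, 0.80))  # smaller circle on the same sphere: ~1.735
print(arc_length(chord, 0.75))  # smaller still: ~1.848; arcs only get longer
```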
356
+ [2717.560 --> 2729.560] And additionally, beyond varying whether these paths were geodesics or arcs, we could also vary whether these paths looked like straight or curved lines by rotating the angle of these spheres.
357
+ [2729.560 --> 2736.560] So for example, if this top sphere were rotated upwards, the geodesic could instead look like a curved path.
358
+ [2736.560 --> 2745.560] Similarly, if the bottom sphere were rotated downwards, the arc could instead look like this curved path here.
359
+ [2745.560 --> 2755.560] And although I've used examples of spheres with decreased opacity participants were shown opaque spheres and only the discrete paths between the two points.
360
+ [2755.560 --> 2767.560] One more thing that we actually adapted about our stimuli is that the arcs that looked like curved lines were matched in distance between the points to their corresponding geodesic curves,
361
+ [2767.560 --> 2777.560] and were made to be near opposites of their corresponding geodesic lines in their apparent curvature. And I can talk a little bit more about that in a bit.
362
+ [2777.560 --> 2796.560] So for each test trial, participants saw a pair of spheres. This was a 2 FC and we'll tell that the purple point in each image was a very lazy snail who they were told always took the easiest most efficient path to a orange mushroom represented as an orange dot on the images.
363
+ [2796.560 --> 2803.560] Participants were told: here are two paths; which path is the easiest path that the snail can take to get to the mushroom?
364
+ [2803.560 --> 2810.560] And participants pointed to which path was the easiest one on the screen. And the script used was the exact same between children and adults.
365
+ [2810.560 --> 2816.560] To note, this work builds upon the prior research using navigation paradigms.
366
+ [2816.560 --> 2839.560] And here's an example of two kinds of trials that we presented participants with. So critically, there were two kinds of paired spheres. On any trial, participants could either see a pair of spheres comparing arcs that were rotated to appear as straight lines and geodesics that appeared as curved lines, or arcs that appeared as curved lines and the same corresponding geodesic curved line.
367
+ [2839.560 --> 2845.560] "Same" meaning here a geodesic of the same length, shown in a different orientation though.
368
+ [2846.560 --> 2854.560] Now, in both of these kinds of paired spheres, the geodesic curve would be the most efficient path between these two points and so would be the correct answer here.
369
+ [2854.560 --> 2867.560] And we generated our stimuli such that these spheres were presented at randomly varied orientations at odd degree values, avoiding perfect horizontals or verticals, although these rotations were matched across pairs in each trial.
370
+ [2867.560 --> 2875.560] And additionally, the apparent but not the absolute distance between the points was matched between and across pairs.
371
+ [2875.560 --> 2883.560] And we varied these distances between points at five possible distances and six different heights.
372
+ [2883.560 --> 2892.560] So before getting to the results, something I want to highlight is that we actually wanted children to avoid selecting a geodesic that looked like a straight line simply because it would be in the middle of the sphere.
373
+ [2892.560 --> 2904.560] And we thought that children might attribute a unique, special status to them. And so we wanted to avoid that. And so this meant that we actually only included geodesics that looked like curved lines.
374
+ [2904.560 --> 2918.560] Getting to the results: looking to our right, our paired conditions are along the x-axis, which I'll walk through shortly. And because our paradigm was a comparison between two options, chance performance was at 50%.
375
+ [2918.560 --> 2928.560] So when presented with both a geodesic and an arc that looked like curves, six- to eight-year-olds were surprisingly accurate at choosing the geodesic path.
376
+ [2928.560 --> 2932.560] Geodesic curve responses are shown in blue here.
377
+ [2932.560 --> 2945.560] And while we had no strong predictions about how children would perform in this condition, we were nonetheless surprised by how well they did given the subtle difference between the paths and that we merely flipped them across their x axis.
378
+ [2945.560 --> 2957.560] And this suggests that children were sensitive to how paths interact with spherical surfaces, despite the obvious fact that six- to eight-year-olds don't have experience with formal spherical geometry.
379
+ [2957.560 --> 2973.560] However, when these children were presented with a geodesic curve, which would be the correct response, and an arc that appeared as a straight line, children showed a robust bias for selecting the straight, but in this case incorrect, line as the most efficient path between two points.
380
+ [2973.560 --> 2986.560] In other words, children could flexibly choose the correct geodesic curve when there was no competing straight line, but nonetheless demonstrated a planar bias when given the option to choose a straight line.
381
+ [2986.560 --> 2995.560] By comparison, adults excelled at selecting the correct geodesic curve, regardless of the kinds of pairings of spheres that they saw.
382
+ [2995.560 --> 3006.560] But something that is intriguing to note is that while adult participants' performance was overall higher than children's, their pattern of results is actually quite similar to children's.
383
+ [3006.560 --> 3021.560] So here, referring to how the blue bars here are actually higher when geodesics are compared to curved arcs, but performance actually drops for both children and adults when geodesics are compared to arcs that look like straight lines.
384
+ [3021.560 --> 3032.560] And so what's so intriguing is that we still see a planar bias in adults. In fact, there actually isn't a significant interaction between these two age groups.
385
+ [3032.560 --> 3037.560] So I'm quite surprised to see this kind of consistency across ages.
386
+ [3037.560 --> 3045.560] I don't have a plot for this unfortunately, but I do want to mention that we conducted some exploratory analyses on age effects among these six- to eight-year-olds.
387
+ [3045.560 --> 3057.560] And what we found was that the older children performed better than younger children on curved arc conditions, indicating that children get better at understanding how curved lines interact with spherical surfaces.
388
+ [3057.560 --> 3069.560] And the youngest children were still performing above chance in this condition, suggesting that even the youngest kids, the six-year-olds, had surprisingly accurate judgments at picking out efficient curved paths on spherical surfaces.
389
+ [3069.560 --> 3076.560] However, we didn't find any significant age effects when children were given the option of an incorrect arc that looked like a straight line.
390
+ [3076.560 --> 3082.560] And this indicates a pretty consistent planar bias across younger and older children.
391
+ [3082.560 --> 3091.560] But in conclusion, we began with a question of the extent to which humans are spontaneously attuned to foundational principles of planar Euclidean geometry.
392
+ [3091.560 --> 3102.560] And while our work doesn't adjudicate whether this claim is correct or incorrect, our work provides a broader perspective on how far children's and adults' planar biases extend.
393
+ [3102.560 --> 3113.560] And that both age groups are able to make surprisingly accurate judgments about paths on spherical surfaces, depending on whether they're being compared to curved lines or straight lines.
394
+ [3113.560 --> 3128.560] And this suggests that our explicit reasoning about simple geometric figures is not comprehensively explained by Euclidean principles alone, especially given the fact that even adults are rarely taught principles of spherical geometry in their formal education.
395
+ [3128.560 --> 3138.560] Again, suggesting that such geometric intuitions are likely grounded in our everyday activities, perhaps in our navigation of various surfaces.
396
+ [3138.560 --> 3148.560] Now, a critical aspect of our design to note is that we couched participants' judgments in the context of navigation, and that may have in effect enhanced their performance.
397
+ [3148.560 --> 3156.560] So in particular, questions about spherical linearity were posed in the context of an agent's navigation and efficient action.
398
+ [3156.560 --> 3162.560] Prior research has shown that even infants have strong intuitions about the fact that intentional agents take efficient actions.
399
+ [3162.560 --> 3172.560] And so it's possible that this paradigm was particularly well suited for drawing upon the flexible intuitions that children might have about non planar geometry.
400
+ [3172.560 --> 3183.560] However, we also saw a remarkable consistency in both children and adults planar biases in their consistent selection of straight lines as being the most efficient paths between points on spheres.
401
+ [3183.560 --> 3190.560] And perhaps this could be the case because we use 2D pictures instead of 3D objects or even animations.
402
+ [3190.560 --> 3196.560] We showed participants 2D pictures of 3D surfaces because they might see them as such in formal geometry textbooks.
403
+ [3196.560 --> 3210.560] But using 2D pictures might have made their intuitions about 3D geometry harder to access, especially given the fact that arcs that appear straight only appear so from one viewpoint of the sphere.
404
+ [3210.560 --> 3218.560] Instead, we might also want to show animations such as this, in which an actor's movement along paths unfolds across time.
405
+ [3218.560 --> 3226.560] And this might better facilitate participants performance. So here I've shown an example of what an inefficient arc might look like.
406
+ [3226.560 --> 3235.560] And here's another example of a possible geodesic curve that would be an efficient path for this agent.
407
+ [3235.560 --> 3240.560] And honestly, there are a number of other geometric principles that might have hindered participants' performance.
408
+ [3240.560 --> 3250.560] For example, while we matched the distance between points on spheres, this actually meant that the apparent paths of the geodesic curves were in fact longer than those of the arcs that appeared as straight lines.
409
+ [3250.560 --> 3256.560] And these perceptual features may have in fact impacted our participants' judgments about efficient paths.
410
+ [3256.560 --> 3264.560] And so future work might fiddle around with these properties of our stimuli, but also we would love to explore other ways in which properties besides linearity are at play
411
+ [3264.560 --> 3268.560] in people's intuitions about geometry.
412
+ [3268.560 --> 3274.560] So in summary, our work demonstrates that both children and adults succeed in their judgments of spherical linearity.
413
+ [3274.560 --> 3286.560] However, children demonstrated a planar bias to judge the most efficient path to be straight lines, which is largely consistent with prior work, but which surprisingly remained consistent even among adults.
414
+ [3287.560 --> 3304.560] But lastly, given their successes with making judgments about lines on spheres, children may develop a natural geometry that is not merely limited to the Euclidean plane, but that draws upon intuitions gained from our everyday activities, for example, in judgments about agents' efficient navigation.
415
+ [3305.560 --> 3314.560] Ultimately, this benchmarking of US children's and adults' performance sets the groundwork for further developmental work probing the origins of our geometric intuitions.
416
+ [3314.560 --> 3321.560] And whether our early reasoning about simple geometric figures also shows flexible but planar biases.
417
+ [3321.560 --> 3333.560] And of course, it also further calls for cross-cultural examinations of how different environmental contexts and education may shape our intuitions about different geometric properties of the objects and spaces that we interact with.
418
+ [3333.560 --> 3343.560] So with that, I'd like to thank all of you for lending me your time and attention and many, many thanks to my collaborators and co authors, I'm happy Jordan, you've all heart and more Dylan, thank you.
419
+ [3346.560 --> 3351.560] And I'm happy to take any questions.
420
+ [3352.560 --> 3371.560] Yeah, I'm just wondering if there was any sort of secondary effect of which orientation the sphere was in, given that our ordinary navigation is in a context with hills and gravity and not in a context... well, I guess there are bowls, but there's still gravity.
421
+ [3371.560 --> 3376.560] That's very interesting. I don't know off the top of my head if we saw orientation effects.
422
+ [3376.560 --> 3388.560] We did ask children just to describe what they were thinking about as they were making these judgments. I don't remember anyone saying that gravity had an effect on the snail's navigation across the spheres.
423
+ [3388.560 --> 3403.560] There's some work in our manuscript that I would encourage you to look at: there was an effect of curvature in terms of whether or not children showed more or less planar bias with, like, a less curved arc. I think you'd be interested in looking at that. So thank you.
424
+ [3403.560 --> 3405.560] Thank you.
425
+ [3405.560 --> 3416.560] Yeah, great talk. I wanted to follow up on what you were talking about: how much the bias that you get might be dependent upon the fact that you used two-dimensional representations rather than the 3D forms.
426
+ [3416.560 --> 3445.560] And when I play that out in my head, I feel like people, even the kids, would do much better on a 3D globe than they would do on the two-dimensional drawing. And if you agree with that, then I wanted you to comment on how much that has to do, then, not with the kids' overall understanding of the 3D form and the geometric relations, but with the translation process to this specific two-dimensional drawing.
427
+ [3445.560 --> 3449.560] It's a very interesting question.
428
+ [3449.560 --> 3459.560] Just as an anecdote, I actually had a very difficult time coming up with animations myself of how to convey what a geodesic was using 2D images.
429
+ [3459.560 --> 3470.560] When I've done this presentation in person as a poster presentation, I actually bring along a little ping pong ball. That's a 3D object, and I show people by orienting the ping pong ball with different arcs and stuff.
430
+ [3470.560 --> 3476.560] So I very much agree that using 2D pictures is very difficult in that mapping.
431
+ [3476.560 --> 3486.560] There is something very interesting in that we couched their judgments in navigation. So as we navigate various surfaces, we can't pick up the globe and move it around.
432
+ [3486.560 --> 3495.560] It is very possible though that children and adults are actually drawing upon their intuitions of how they interact with little objects, so some manipulable objects.
433
+ [3495.560 --> 3508.560] And we have no strong prediction about whether it is navigation or manipulation of small objects, but you're right, it is very much easier to manipulate and orient and see how the rotation affects
434
+ [3508.560 --> 3511.560] people's judgments about what is the efficient path.
435
+ [3511.560 --> 3523.560] An example I like to use in the past is that a lot of us flew here to Toronto, and when you looked at your display on the airplane, you might have seen a trajectory of your plane going across the globe.
436
+ [3523.560 --> 3531.560] It generally looks curved, but if you play around with the screen, it's kind of fun to pick out the one viewpoint in which you can make the path look straight.
437
+ [3531.560 --> 3536.560] So I also agree that being able to manipulate animations would be a very intriguing direction to explore.
438
+ [3536.560 --> 3539.560] Thank you.
439
+ [3539.560 --> 3543.560] Your works.
440
+ [3543.560 --> 3549.560] Thanks.
441
+ [3549.560 --> 3552.560] Okay, let's see.
442
+ [3552.560 --> 3561.560] I'm going to start if you don't mind. I'm going to start with one from online just because I've done a bad job of attending to them.
443
+ [3561.560 --> 3568.560] There was a question earlier for Tyler, I believe.
444
+ [3568.560 --> 3571.560] That says great idea.
445
+ [3571.560 --> 3574.560] Question. This is from Barbara Landau.
446
+ [3574.560 --> 3589.560] Question: How will you define and categorize the differences between so-called linguistic versus non-linguistic tasks? What qualifies as purely non-linguistic?
447
+ [3589.560 --> 3593.560] First thanks for the words of encouragement much appreciated.
448
+ [3593.560 --> 3602.560] So we're actually not that invested in drawing a hard line between linguistic and non-linguistic.
449
+ [3602.560 --> 3609.560] In part, that's sort of the general approach of this project: to be rather agnostic. So we just get all the data together.
450
+ [3609.560 --> 3619.560] And then if you have a particular theoretical orientation that leads you to treat, say, a particular memory task as non-linguistic, then run with that.
451
+ [3619.560 --> 3624.560] But we want the data to be useful for folks of all stripes.
452
+ [3624.560 --> 3629.560] And so we actually do have some studies that involve spoken responses.
453
+ [3629.560 --> 3641.560] So we didn't include studies that were purely linguistic elicitations, that were just trying to document the preferred frame of reference used in language.
454
+ [3641.560 --> 3647.560] But we did include studies where people were reasoning or remembering and responding through speech.
455
+ [3647.560 --> 3653.560] Is that considered... do we want to consider that non-linguistic, because it's sort of just an expression of an underlying reasoning?
456
+ [3653.560 --> 3667.560] I wouldn't say so, but the idea is to sort of build up this data bank in a way that isn't making strong commitments either way.
457
+ [3667.560 --> 3677.560] Yes, thank you all for the very interesting talks. We've only gotten very recently into the topic of spatial navigation and representation.
458
+ [3677.560 --> 3692.560] And I've been wondering whether the differences that you've all described in your various researches can be understood from a more normative perspective.
459
+ [3692.560 --> 3701.560] Like what is the organism trying to achieve and how does the representation chosen follow from these goals?
460
+ [3708.560 --> 3712.560] Sure, I can answer.
461
+ [3712.560 --> 3717.560] Yeah, no, I think that's a great way of framing it.
462
+ [3717.560 --> 3723.560] And you know, looking cross-culturally and cross-linguistically at differences in spatial frames of reference.
463
+ [3723.560 --> 3731.560] You do see, so like one argument, for instance, is that you get a reliance on these allocentric frames of reference in cases where you don't have literacy.
464
+ [3731.560 --> 3741.560] You don't have practices that impose any asymmetry along this lateral axis, but you do have really salient landmarks that are available that then become really useful to use.
465
+ [3741.560 --> 3750.560] So there's a river that runs through, so in those communities you often get, like, uphill/downhill reckoning, or there's a mountain.
466
+ [3750.560 --> 3767.560] And that makes sense totally from this perspective of people just trying to communicate and reason in reliable ways that don't break down during the communicative encounter and also sort of just work for themselves as individual cognitive agents.
467
+ [3767.560 --> 3775.560] So I think that's that's a great perspective totally. Yeah, and that's sort of in the background for a lot of the way that I think about it.
468
+ [3775.560 --> 3779.560] And I think I can speak a little bit from the navigation standpoint.
469
+ [3779.560 --> 3789.560] I think first and foremost, by couching this in navigation and also efficient agents, the underlying goal is that we're trying to take the most efficient paths across surfaces.
470
+ [3789.560 --> 3793.560] I think there's also something to be said that we were providing paths to our participants.
471
+ [3793.560 --> 3802.560] And I've also dabbled around a little bit in game design, and I've wondered what it would be like if participants were able to navigate themselves across these spheres or planes, et cetera.
472
+ [3802.560 --> 3814.560] And my gut sense is that people would be overestimating the amount of curvature, and a study like that would provide us a more fine-grained understanding of the trajectories that people choose and also how they overcorrect later on down the path.
473
+ [3814.560 --> 3825.560] But I think a study like that down the line would be particularly useful as we do a lot of remote navigation of autonomous vehicles across different surfaces and such.
474
+ [3825.560 --> 3838.560] Thanks. One thing I wondered about was how far you would have to travel for this to become relevant, like relative to the curvature of the sphere.
475
+ [3838.560 --> 3853.560] So I can actually repeat what you said: how relevant this is, to what extent... yeah, like this distinction between the arc and the geodesic. I mean, this doesn't become relevant when you're only trying to go from here to the bathroom.
476
+ [3853.560 --> 3861.560] But this is important when you try to travel substantial distances around the globe. Can you comment on this.
477
+ [3861.560 --> 3882.560] For sure. So yes, I also think this is very relevant to flight paths that we've had to take to objective targets on our own Earth, but of course this would be highly relevant for space navigation to other planets, for instance, where you would need to make a prediction about what the most efficient trajectory would be with, again, remote navigation.
478
+ [3882.560 --> 3897.560] I also think there's something to be said about the fact that these were third-party agents; there could also be differences if you were to be navigating a car yourself, and from your own perspective what might look curved or straight.
479
+ [3897.560 --> 3903.560] Thanks for the question. I'm going to alternate now and take another question from the interwebs.
480
+ [3903.560 --> 3910.560] Neil Cohn has a question for Tyler: interesting talk, Tyler, and cool project.
481
+ [3910.560 --> 3917.560] Are your analyses also focusing on how good tasks are at capturing the intended spatial controls?
482
+ [3917.560 --> 3931.560] For example, the card sorting tasks you mentioned at the start: those are confounded by the fact that visual sequence comprehension requires a fluency that is modulated by exposure and practice with visual narratives.
483
+ [3931.560 --> 3938.560] See, for example, my book Who Understands Comics? and my poster tomorrow.
484
+ [3938.560 --> 3944.560] I'll definitely go to the poster, Neil, and I love your book.
485
+ [3944.560 --> 3957.560] Yeah, so again, I don't think Kevin and I want to be making judgments about task quality; we really just want to be documenting what tasks people use.
486
+ [3957.560 --> 3973.560] But it's a really interesting question whether certain dependent measures or certain tasks produce more reliable results, whether you get a really clear demarcation between allocentric- versus egocentric-preferring communities with, say, a non-card-sorting task.
487
+ [3973.560 --> 3986.560] And then do you see sort of like a messy morass of confounds with card sorting? So that's the type of result that could fall out from the data bank.
488
+ [3986.560 --> 4000.560] And it would be the type of thing that, you know, Neil himself could totally do with the data set, but we're not going in and doing actual coding of task quality, on purpose.
489
+ [4000.560 --> 4029.560] Great thoughts. Yeah, the question is for Benjamin, I guess, but for everyone maybe. So I was wondering whether you have any implicit tasks in your Chimane population, because the feeling I have from your research work, but maybe it's my take, is that you often use, and in the literature it's often the case, explicit tasks such as, for example, order the elements in a series. But I don't know, and that's my take, if it's entirely fair to use this task to challenge
490
+ [4029.560 --> 4045.560] mental representations such as the number line, for example, that have been traditionally proved to exist through implicit tasks. When we proved that the mental number line exists, and people from different languages, different cultures, different ages organize things from left to right,
491
+ [4045.560 --> 4064.560] we never asked them to organize the elements; we asked them to perform parity judgments, magnitude comparisons, very implicit tasks. And the people that believe in the mental number line never say that this is something to do with the explicit way we organize the world.
492
+ [4064.560 --> 4093.560] So I would think that your point would be much stronger if you provided, like, SNARC-like effect tasks; when it's only providing, like, explicit tasks, I'm not entirely sure it really destroys the mental number line thought. I'm not saying you say that, but I know that a lot of people taking your work use this as an argument to say: the mental number line, well, it's all bullshit, it doesn't exist. But the tasks are different, right? Yeah, yeah, interesting. Yeah, well, to be clear, I
493
+ [4094.560 --> 4123.560] would never argue that the mental number line is bullshit; I think it's very real. I agree with you that there are some important differences between implicit and explicit tasks. If I had thought that it was sort of practical to do a SNARC-like task with the Chimane, then I very well might try that. I haven't figured out a way; they're very unfamiliar with computers, and we have evidence to suggest that even with very simple tasks that you give them,
494
+ [4124.560 --> 4143.560] if you have a problem on a computer interface, the results are different because of the interface. Setting that aside, these sorts of practicalities aside, I guess my line of thinking on this is that
495
+ [4143.560 --> 4172.560] the implicit tasks are great for showing that things like the mental number line are automatic. But I think it's true that if you asked people who show strong left-to-right SNARC effects, right, if you ask those same people to organize numbers, I mean, this has been done, if you ask them to organize numbers or organize events in time, as Tyler was talking about, if you make it an explicit task, then you find
496
+ [4172.560 --> 4190.560] the same result, right? I'm not sure that the inverse is true, right? So point taken that it could be that people have some kind of different implicit mapping than explicit mapping, but I guess my thinking is, if
497
+ [4190.560 --> 4219.560] if you have some kind of mapping that has a particular direction, then you should at least find it in explicit tasks, right? That if you're like, no, no, I want you to really stop and think about where you're putting these on the basis of number, as opposed to, like, parity, right, or, like, color, or some of these sorts of magnitude-indifferent tasks,
498
+ [4220.680 --> 4231.640] then you should find it, right? The fact that we don't, in the work I think you're talking about, that you don't find strong directional biases in my work or Tyler's work
499
+ [4232.960 --> 4250.240] in these more explicit tasks, I think is actually maybe good evidence that those mappings wouldn't also be found by implicit tasks. And maybe you think differently about that, and I'm curious to hear. No, I think I agree, but
500
+ [4250.560 --> 4280.120] one more reason to, like, overcome the practical, it is probably practical, problems of testing implicit tasks. I've been testing people in the field, for example, and it doesn't take much to learn, like, an iPad if the task is relatively easy. So maybe the Atlas framework would help. But yeah, I think it's a great point, but let's test it, like, let's test implicit tasks in this population so that we're sure. I would expect very similar results, but it's an empirical question. Absolutely, yeah.
501
+ [4281.560 --> 4282.560] Okay.
502
+ [4285.760 --> 4305.240] Okay, let's switch maybe to the back. I'm not sure who was first, but I'm just going to alternate. Oh sure. So, Ben, my question's for you also. It's a really nice talk and project, and I guess my question is, like, the lateral/sagittal difference: it's such a whopping difference, like it's a big, you know, experiential difference in ease of access. Absolutely.
503
+ [4305.880 --> 4320.640] And the differences you're finding in description are like 60/40, maybe 70/30, and I guess I'm curious if you have intuitions about why there's still so much variability in the descriptions that people are using when there's this very big difference in ease of access.
504
+ [4321.640 --> 4342.320] It's a good question. I mean, I guess my intuition is, you know, I think one of the lessons, one of the takeaways from the data, is that people at any moment seem to have access to egocentric and allocentric frames, both sort of in memory and in language, and so
505
+ [4343.320 --> 4353.400] personally, I don't find it that surprising that we find a mix.
506
+ [4355.160 --> 4368.240] You know, we know even from... I mean, I guess I would sort of have the same question about why there are individual differences even in cultures where it's overwhelmingly egocentric, where, right, like
507
+ [4368.880 --> 4380.160] among Canadians, for example, right, like sometimes in these tasks people actually do give allocentric responses, and I think that's interesting. And your question, I mean, one intuition I have about that is
508
+ [4381.240 --> 4394.760] I mean, the answer that my hypothesis would offer is, like, there are real individual differences in people's ability to discriminate left/right space, right, and that could account for some of that variability.
509
+ [4395.120 --> 4412.600] My sister, for example, is a 35-year-old who struggles terribly with left and right and is highly educated. So I was wondering, yeah, if I can follow up: are the differences that you're finding from averaging across people stronger within person? Like, do you get strong within-person preferences, or like within a moment, within a person?
510
+ [4413.560 --> 4417.160] Whether there's consistency within people? Yeah, I didn't show that.
511
+ [4417.480 --> 4436.480] My memory of this is that we find the same reversal in the majority of people. So it's not just that some people are, you know, speaking one way and others are speaking the other way.
512
+ [4437.440 --> 4445.720] But I don't remember offhand exactly how strong the consistency is. It's a good question.
513
+ [4446.480 --> 4446.760] Yeah.
514
+ [4447.680 --> 4448.280] Right please.
515
+ [4448.800 --> 4459.000] Yeah, so unfortunately this question is also mostly directed towards you, although I think there are ties to what Tyler and Holly were talking about, at least, too.
516
+ [4459.000 --> 4488.800] But so your argument, as I see it, is that the egocentric frame is more salient than the allocentric frame for the sagittal task, and that makes perfect sense, because just intuitively you feel like, oh yeah, whether this is close to me or far from me is a super salient dimension. But I guess I'm worried a little bit about circularity until you tell me, like, what are the determinants of
517
+ [4489.000 --> 4505.440] salience. And I guess that's potentially a tie into Tyler's project, to try to figure out what are these dimensions of salience. And I guess more generally, I'm a little bit worried about, or rather interested in, your
518
+ [4507.040 --> 4515.520] thoughts on whether salience is just a single thing or whether it's also going to depend upon task and priming, etc.
519
+ [4516.920 --> 4517.320] Yeah.
520
+ [4519.720 --> 4532.120] So I think part of your question is about sort of what determines... I mean, I try to avoid the word salience and use the word discriminability instead.
521
+ [4533.360 --> 4538.680] It's a fair question: like, what does discriminability mean?
522
+ [4539.120 --> 4554.000] The way that I and others have operationalized it is basically the ability to sort of reliably distinguish the position or orientation of objects on a particular axis, right, so like distinguishing these from these.
523
+ [4557.280 --> 4564.120] It's an interesting question sort of what causes differences in that. I'm not sure if this is what you're asking, but
524
+ [4565.080 --> 4582.520] I think there are people who have posited various hypotheses. I don't think we really know the answer, but it seems that reading and writing experience is a great way to figure out how to distinguish things like these, indeed, but it extends beyond letters.
525
+ [4582.920 --> 4597.480] But it's not the only thing that can do that; there's evidence that other sorts of lateralized practices, like weaving, can seem to, at least, correlate with the ability to do this sort of left/right discrimination.
526
+ [4597.960 --> 4606.760] I'm not sure I see the circularity that you're pointing to. The claim is that
527
+ [4607.320 --> 4616.920] basically these sorts of differences in cultural experience, like reading and writing and maybe weaving and other things,
528
+ [4617.560 --> 4620.360] sort of allow some people
529
+ [4621.800 --> 4631.160] to overcome this default mirror invariance that animals have, which applies to the left/right axis, and
530
+ [4631.160 --> 4647.160] that as a consequence, that axis becomes more reliable for encoding spatial information. But if you can't keep track of whether it's a b or a d, or whether it's on this side or that side of
531
+ [4647.160 --> 4655.160] you, then it's just not a useful continuum to use for spatial memory or spatial language.
532
+ [4655.960 --> 4657.160] I want to pass it to you.
533
+ [4657.160 --> 4671.160] Yeah, so I think that, yeah, I think Tyler is going to continue this, just because the circularity involves fleshing out exactly the dependencies across cultures in terms of, like, literacy and the salience, I think, yeah.
534
+ [4672.120 --> 4682.760] Yeah, so literacy is definitely in the short-term plans for Atlas. Weaving, that's great, I hadn't thought about that or read that, and I think that information is in
535
+ [4682.760 --> 4697.960] existing cross-cultural databases like D-PLACE, and so we'll be able to pull that in, so that's great. But I was wondering if you thought of actually using either a natural experiment or a real experiment to manipulate discriminability along the lateral axis, so putting a glove on someone
536
+ [4697.960 --> 4715.960] on one hand, or finding people who have some unusual morphology, either due to disability, congenital or accident, where I imagine that your prediction then is that that would increase discriminability and would lead them to sort of preferentially encode things egocentrically along that axis. Is that right?
537
+ [4715.960 --> 4725.960] Yeah, yeah, exactly, you're describing exactly our plans. Yeah, that's exactly the idea.
538
+ [4725.960 --> 4739.960] So yeah, I agree that no matter how strong the correlational data is between, you know, people's spatial discrimination, people's mirror invariance, and their use of spatial reference frames,
539
+ [4739.960 --> 4754.960] we can't make strong causal claims from that at all. What would be required is something like what Tyler is describing, which is, okay, let's try to manipulate mirror invariance, which I think should be possible in the short term,
540
+ [4754.960 --> 4759.960] and see, you know, if that has an effect, and what effect that has.
541
+ [4759.960 --> 4772.960] I'm realizing that we are over time. I'm happy to stay, and people are welcome to stay, but no one will be offended if people need to go to lunch or whatever. But I think you need to go to lunch.
542
+ [4772.960 --> 4782.960] Okay, so then in that case, why don't we... unless... yeah, why don't we break, and those that can hang out will just hang out over here, and I'm happy to keep discussing.
543
+ [4782.960 --> 4789.960] Thanks so much for coming.
transcript/allocentric_-h-cz7yY-G8.txt ADDED
@@ -0,0 +1,796 @@
1
+ [0.000 --> 11.040] Okay, I've got a lot of content.
2
+ [11.040 --> 13.760] If you want to talk to me, I'm @rhyolight on Twitter.
3
+ [13.760 --> 15.240] I'll answer questions there.
4
+ [15.240 --> 18.240] If you're on the Slack channel, I created a room called HTM.
5
+ [18.240 --> 23.280] If you ask any questions there, I'll come back around afterwards and answer them.
6
+ [23.280 --> 26.240] I'm the community manager at Numenta.
7
+ [26.240 --> 30.160] All of our code is open source, so I manage the open source community.
8
+ [30.160 --> 31.600] Okay, my agenda.
9
+ [31.600 --> 33.280] Why WCA is not intelligent?
10
+ [33.280 --> 38.000] I'm going to talk a lot about cortical anatomy, the power of the pyramidal neuron, which
11
+ [38.000 --> 42.000] is crucial to our theory, and we're going to talk about layers and columns as those are
12
+ [42.000 --> 47.480] structures within the cortex, and then I'm going to do a very deep and quick dive on
13
+ [47.480 --> 53.160] the entire HTM theory, our theory of how intelligence works in the cortex, even sensory
14
+ [53.160 --> 54.160] motor stuff.
15
+ [54.880 --> 57.200] All right, I already talked to you about this.
16
+ [57.200 --> 61.160] I don't think WCA will produce intelligence, for the reasons that I've told you, but
17
+ [61.160 --> 64.560] I'm going to tell you explicitly why and what's missing.
18
+ [64.560 --> 67.440] So there's two things.
19
+ [67.440 --> 68.640] One is realistic neurons.
20
+ [68.640 --> 74.000] I mentioned that the point neuron that's used in machine learning systems today, it's
21
+ [74.000 --> 75.000] too simple.
22
+ [75.000 --> 80.040] So I'm going to tell you what we need to model in this neuron.
23
+ [80.040 --> 86.120] And the other thing, which may not be immediately evident to you, is that for an intelligent system
24
+ [86.120 --> 88.840] to exist, it has to be able to move.
25
+ [88.840 --> 94.880] Someone name anything that is intelligent that does not move.
26
+ [94.880 --> 95.880] What?
27
+ [95.880 --> 99.360] I don't want to repeat, I don't even know what it was.
28
+ [99.360 --> 100.360] Okay.
29
+ [100.360 --> 101.840] Okay, you can't.
30
+ [101.840 --> 106.160] There is nothing that is intelligent that does not move.
31
+ [106.160 --> 110.480] And the reason is, we have to be able to explore our environment.
32
+ [110.480 --> 115.760] We have to, the way we understand reality, the way we understand the world is by testing
33
+ [115.760 --> 120.660] it, is by interacting with it, is by doing something, is by taking an action that changes
34
+ [120.660 --> 122.040] something in the world.
35
+ [122.040 --> 124.760] That's how we differentiate ourselves from our environment.
36
+ [124.760 --> 128.600] We have to know who we are versus what our environment is.
37
+ [128.600 --> 130.520] When we move this thing, what happens?
38
+ [130.520 --> 134.960] When we get a different stimulus, when I move over here, my whole perspective of the
39
+ [134.960 --> 136.400] world changes.
40
+ [136.400 --> 138.080] You don't really think about that.
41
+ [138.080 --> 141.880] But I have a completely different picture when I move here than when I move here.
42
+ [141.880 --> 145.000] And my brain is just seamlessly integrating it all together.
43
+ [145.000 --> 148.760] So movement is really important and I'm going to talk about that.
44
+ [148.760 --> 149.760] So let's talk about brains.
45
+ [149.760 --> 152.800] We're going to do a little deep dive into the cortex.
46
+ [152.800 --> 155.320] I'm going to talk about the neocortex, which is the wrinkly stuff.
47
+ [155.320 --> 157.160] The old brain will just drop out.
48
+ [157.160 --> 160.480] If you spread this out, it looks like a sheet.
49
+ [160.480 --> 164.000] It looks like it's about the size and proportion of a dinner napkin.
50
+ [164.000 --> 166.040] You'll see these structures.
51
+ [166.040 --> 168.880] The neocortex is a homogeneous structure.
52
+ [168.880 --> 170.680] It's the same throughout.
53
+ [170.680 --> 176.760] And at its core is this little cortical processing unit that we call a cortical column.
54
+ [176.760 --> 179.440] And so I'm going to talk about this.
55
+ [179.440 --> 185.560] Now we've known that in the cortex, there are layers for over 100 years.
56
+ [185.560 --> 186.560] This is Cajal.
57
+ [186.560 --> 191.480] He's a famous neuroscientist who did all these drawings of the cortex back in the late
58
+ [191.480 --> 195.360] 1800s, and they're still used in textbooks today.
59
+ [195.360 --> 198.040] So we found these layers a long time ago.
60
+ [198.040 --> 199.800] We knew they existed.
61
+ [199.800 --> 209.000] But this idea of a column in addition to the layer, sort of creating this kind of structure,
62
+ [209.000 --> 211.240] a logical structure is sort of new.
63
+ [211.240 --> 214.720] Because you can't look at the cortex and see the columns.
64
+ [214.720 --> 218.120] But we have the technology now to see that they're there.
65
+ [218.120 --> 224.040] You can see because of the cellular structure that these structures do exist.
66
+ [224.040 --> 225.840] So I'm going to talk about these layers in columns.
67
+ [225.840 --> 232.160] So columns contain layers and layers contain neurons.
68
+ [232.160 --> 235.280] So this is just a drawing of a layer I did.
69
+ [235.280 --> 237.040] These layers are roughly cylindrical.
70
+ [237.040 --> 240.160] I mean, this is all sort of abstract.
71
+ [240.160 --> 244.200] But it's definitely true that layers contain neurons, columns contain layers.
72
+ [244.200 --> 248.360] The pyramidal neuron is an amazing computation engine.
73
+ [248.360 --> 252.800] This is really the atomic computing unit in our model, the pyramidal neuron.
74
+ [252.800 --> 258.480] And I think today, even in ANN systems, it's the primary compute model.
75
+ [258.480 --> 262.640] But there's something that we need in this model that we don't have today.
76
+ [262.640 --> 265.600] So we do have an inactive state and an active state.
77
+ [265.600 --> 267.120] I mean, this is important.
78
+ [267.120 --> 269.080] What the neuron does is it activates.
79
+ [269.080 --> 270.080] Like it turns on.
80
+ [270.080 --> 271.760] It spikes.
81
+ [271.760 --> 272.680] But we need another one.
82
+ [272.680 --> 274.000] We need a predictive state.
83
+ [274.000 --> 275.640] And this turns out to be really important.
84
+ [275.640 --> 278.560] A neuron needs to know, yes, I'm active or no, I'm not.
85
+ [278.560 --> 279.920] But I think I might be.
86
+ [279.920 --> 281.440] I think I might be active soon.
87
+ [281.440 --> 285.240] That's an important prospect because that's what your brain is constantly doing.
88
+ [285.240 --> 292.240] It's constantly making predictions about what it's going to see next.
89
+ [292.240 --> 298.880] Now, in addition to these different states, the neuron also has different integration zones.
90
+ [298.880 --> 301.600] So a little neuroscience lesson.
91
+ [301.600 --> 306.840] There's three different types of dendritic segments that a neuron can have.
92
+ [306.840 --> 308.720] Proximal, which is feed-forward input.
93
+ [308.720 --> 312.840] That's like direct input, usually coming from the direction of the sensory organ.
94
+ [312.840 --> 316.840] So some senses are coming up and we're processing that primary input.
95
+ [316.840 --> 322.120] And then we have distal, which is sort of a lateral input from basal dendrites.
96
+ [322.120 --> 328.480] And this is contextual information that is used to modulate that proximal signal.
97
+ [328.480 --> 335.440] Apical feedback is generally coming from layers that are higher up in the column or
98
+ [335.440 --> 337.440] other parts of the cortex.
99
+ [337.440 --> 340.520] Like the higher up in the entire hierarchy of intelligence.
100
+ [340.520 --> 343.520] I'm not even going to talk about hierarchy today.
101
+ [343.520 --> 345.080] So these three things are really important.
102
+ [345.080 --> 347.720] It's not just one signal.
103
+ [347.720 --> 352.880] It's like the neuron is looking at all three of these different zones and deciding am I
104
+ [352.880 --> 353.880] active or not?
105
+ [353.880 --> 354.880] Am I predictive or not?
106
+ [354.880 --> 355.880] That's its job.
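
As a rough illustration of that decision logic, here is a toy Python sketch of a single neuron with two of its integration zones; the set-based synapses and the thresholds are my assumptions for illustration, not Numenta's actual implementation:

    # Toy neuron with proximal (feed-forward) and distal (context) input.
    ACTIVE_THRESHOLD = 10   # assumed: proximal overlap needed to fire
    PREDICT_THRESHOLD = 8   # assumed: distal overlap needed to depolarize

    class ToyNeuron:
        def __init__(self, proximal_synapses, distal_synapses):
            self.proximal = set(proximal_synapses)  # feed-forward bit indices
            self.distal = set(distal_synapses)      # lateral cell indices
            self.active = False
            self.predictive = False

        def compute(self, feedforward_on, lateral_on):
            # overlap = how many of my synapses see an active bit right now
            proximal_overlap = len(self.proximal & set(feedforward_on))
            distal_overlap = len(self.distal & set(lateral_on))
            self.active = proximal_overlap >= ACTIVE_THRESHOLD
            # "I think I might be active soon": depolarized by context
            self.predictive = distal_overlap >= PREDICT_THRESHOLD

    n = ToyNeuron(range(20), range(100, 120))
    n.compute(feedforward_on=range(15), lateral_on=range(100, 109))
    print(n.active, n.predictive)  # True True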
107
+ [359.040 --> 362.040] So I told you that layers contain neurons, right?
108
+ [362.040 --> 367.160] So it follows that if our neurons have these integration zones and they're all oriented
109
+ [367.160 --> 373.160] in the same fashion, layers themselves will also have these integration zones.
110
+ [373.160 --> 377.840] This lets us treat this as sort of its own little compute module.
111
+ [377.840 --> 380.440] So these layers also compute.
112
+ [380.440 --> 384.280] A layer can have 100,000 neurons in it.
113
+ [384.280 --> 389.160] But typically all of the distal input to that layer to those neurons will come from a
114
+ [389.160 --> 390.480] common place.
115
+ [390.480 --> 393.200] All of the proximal input will come from a common place.
116
+ [393.200 --> 395.560] Sometimes they're split, it depends.
117
+ [395.560 --> 400.120] But the point is the layer doesn't know where it's getting its input from.
118
+ [400.120 --> 401.840] It does the same thing.
119
+ [401.840 --> 404.640] So that's across all your brain.
120
+ [404.640 --> 411.160] If you're processing visual input, somatic input, which is touch, auditory input, the
121
+ [411.160 --> 416.560] cortical columns that are processing that sensory data are doing the same exact thing,
122
+ [416.560 --> 418.200] the same process.
123
+ [418.200 --> 421.320] And it's all about these layers in the columns.
124
+ [421.320 --> 429.160] So like I said, proximal input is usually a driver signal and these are modulatory signals.
125
+ [429.160 --> 433.760] I'm going to show you some, these aren't simulations, they're just visualizations of
126
+ [433.760 --> 436.360] these systems running.
127
+ [436.360 --> 438.280] So and it sort of looks like this.
128
+ [438.280 --> 439.560] It's like a cube.
129
+ [439.560 --> 442.480] This is the equivalent of a layer in software.
130
+ [442.480 --> 446.480] So when you see this think of a layer, it's a bunch of neurons.
131
+ [446.480 --> 450.480] I'm not very good at animating, so they're cubes.
132
+ [450.480 --> 454.320] So you're going to see this later, I just want you to know that is a layer.
133
+ [454.320 --> 458.080] And each one of those cubes represents a pyramidal neuron and their color represents what
134
+ [458.080 --> 460.560] state it is in.
135
+ [460.560 --> 462.040] Okay.
136
+ [462.040 --> 465.560] I have to talk to you about sparse distributed representations.
137
+ [465.560 --> 471.920] And it's really hard to change to this, but imagine a neuron, just one neuron.
138
+ [471.920 --> 477.720] It might have thousands and thousands of dendrites, like potential connections to other neurons,
139
+ [477.720 --> 478.720] right?
140
+ [478.720 --> 482.560] And it's always looking at those connections and it's always seeing what's active, what's
141
+ [482.560 --> 484.560] not, and deciding am I next?
142
+ [484.560 --> 485.560] Am I active?
143
+ [485.560 --> 486.560] Am I next?
144
+ [486.560 --> 487.560] Am I active all the time?
145
+ [487.560 --> 492.720] If you were to take all those dendrites and kind of wrap them up into a fiber, you know,
146
+ [492.720 --> 494.040] like a fiber optics cable.
147
+ [494.040 --> 497.000] I like to think of it as a fiber optics cable.
148
+ [497.000 --> 504.040] And then you look at the end of it, you know, that's an SDR, a sparse distributed representation.
149
+ [504.040 --> 508.720] In your brain, only 2% of your neurons are active at any point in time.
150
+ [508.720 --> 513.720] Each one of those neurons that's active represents something semantically or more than one thing,
151
+ [513.720 --> 515.840] could represent many things.
152
+ [515.840 --> 520.720] It turns out this format, the sparsity and the distribution of it are really, really important
153
+ [520.720 --> 523.960] to how your brain computes.
154
+ [523.960 --> 528.160] And each one of those bits has to have some semantic meaning.
155
+ [528.160 --> 531.920] So if you're a neuron and you're deciding whether to fire or not, you're constantly looking
156
+ [531.920 --> 534.240] at this long bit array, right?
157
+ [534.240 --> 536.280] An SDR is an array of bits.
158
+ [536.280 --> 538.120] It's really simple.
159
+ [538.120 --> 541.600] But only 2% of them are going to be on at any time.
160
+ [541.600 --> 546.120] Based upon which ones are on, it's going to help me decide as a neuron whether I fire
161
+ [546.120 --> 547.120] or not.
162
+ [547.120 --> 550.600] Now I'm going to be looking at my proximal SDR, it's coming from below, and I'm going
163
+ [550.600 --> 556.040] to be looking at distal SDRs and potentially apical SDRs all to decide whether I'm going
164
+ [556.040 --> 557.840] to fire or not.
165
+ [557.840 --> 566.000] I'm going to show you some of the properties of SDRs really quickly because I can.
166
+ [566.000 --> 571.000] Just to give you an impression of what an SDR looks like.
167
+ [571.000 --> 574.160] That's not a very good one.
168
+ [574.160 --> 579.160] Well, you can't see the whole thing, but this is like a 256-bit SDR.
169
+ [579.160 --> 583.960] It's a 2% sparsity with 5 bits on.
170
+ [583.960 --> 591.760] And the capacity of this particular SDR, 256 bits with 5 bits on, there are 8.8 billion
171
+ [591.760 --> 596.520] ways to arrange these 5 bits in the space, which is pretty large.
172
+ [596.520 --> 601.880] But we typically use SDRs that are like this big.
173
+ [601.880 --> 607.040] And we turn like 40 bits on.
174
+ [607.040 --> 612.360] So the capacity is much, much larger.
175
+ [612.360 --> 615.720] It's more than there are atoms in the observable universe.
176
+ [615.720 --> 618.320] The point is you'll never run out of space here.
177
+ [618.320 --> 622.760] In this, if you think about this as a fiber optics cable and you've got a signal coming across
178
+ [622.760 --> 626.920] that cable, you can represent anything in it, like forever.
179
+ [626.920 --> 630.920] So that, I mean, if that's not amazing.
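
Those capacity claims are easy to check: the number of distinct SDRs with n bits, w of them on, is the binomial coefficient "n choose w". A quick Python check (the 2048/40 sizing is the "typical" example mentioned above):

    import math  # math.comb needs Python 3.8+

    # 256 bits with 5 on: every way to place the 5 active bits
    print(math.comb(256, 5))    # 8,809,549,056 (~8.8 billion)

    # a typical SDR: 2048 bits with 40 on (~2% sparsity)
    print(math.comb(2048, 40))  # ~2.4e84, versus roughly 1e80 atoms
                                # in the observable universe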
180
+ [630.920 --> 633.880] Okay.
181
+ [633.880 --> 637.000] One other thing I'm going to show quickly, because this is a really important
182
+ [637.000 --> 639.280] Property of SDRs, too.
183
+ [639.280 --> 641.120] On the left, I've got one random SDR.
184
+ [641.120 --> 643.520] In the middle here, I have another random SDR.
185
+ [643.520 --> 648.440] On the right here, I have their overlap, which is all the bits that they both share.
186
+ [648.440 --> 651.200] Seems really simple, really, really important.
187
+ [651.200 --> 652.880] Because this is a similarity score.
188
+ [652.880 --> 656.000] This is how close these SDRs are.
189
+ [656.000 --> 658.520] And another important one is the union.
190
+ [658.520 --> 663.200] So if this contains some semantics, some description about some states somewhere, and this also
191
+ [663.200 --> 667.400] does, too, then this contains both of those semantics.
192
+ [667.400 --> 672.280] And the previous one contains the semantics that they both share.
193
+ [672.280 --> 678.280] And this is really important because as a neuron, if you're constantly scanning all of
194
+ [678.280 --> 684.120] your distal dendrites looking at all the neurons that are on or not, you want to know if
195
+ [684.120 --> 686.120] you've seen that pattern before.
196
+ [686.120 --> 690.720] So it's really nice to have this property to compare, oh, if I've seen this SDR, if
197
+ [690.720 --> 692.480] I've seen this SDR before.
198
+ [692.480 --> 693.480] It's easy to do that.
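
In code, if you represent an SDR as the set of its on-bit indices, overlap and union really are one-liners. A tiny sketch with made-up indices:

    # Two SDRs as sets of active-bit indices (indices made up for illustration)
    a = {2, 17, 33, 80, 121}
    b = {5, 17, 42, 80, 200}

    overlap = a & b        # bits they share: a cheap similarity score
    union = a | b          # carries the semantics of both patterns
    print(overlap)         # {17, 80}
    print(a & union == a)  # True: the union still "contains" pattern a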
199
+ [693.480 --> 694.480] I'm just writing this one.
200
+ [694.480 --> 699.480] The predictive state of a neuron, would that be biochemical?
201
+ [699.480 --> 700.480] Yes.
202
+ [700.480 --> 701.480] Okay.
203
+ [701.480 --> 711.760] So the question was, what does a predictive state mean biochemically?
204
+ [711.760 --> 717.600] In the neuroscience, the neuroscience terminology is a depolarized neuron.
205
+ [717.600 --> 719.000] It has something to do with ion channels.
206
+ [719.000 --> 720.000] And I don't know.
207
+ [720.000 --> 721.720] I'm not a neuroscientist.
208
+ [721.720 --> 724.920] But look up depolarized pyramidal neurons.
209
+ [724.920 --> 728.080] That is basically what we say as a predictive state.
210
+ [728.080 --> 729.080] Okay.
211
+ [729.080 --> 733.760] Back to the slide for a moment.
212
+ [733.760 --> 734.760] Talk about SDRs.
213
+ [734.760 --> 735.760] Okay.
214
+ [735.760 --> 737.280] Let's talk about encoders real quick.
215
+ [737.280 --> 741.240] Encoders in biology are your senses.
216
+ [741.240 --> 744.080] So think about your retina or your ear or whatever.
217
+ [744.080 --> 748.280] Your optic nerve, for example, looks just like that fiber optics cable that I showed you.
218
+ [748.280 --> 754.160] And your retina is doing a ton of work to produce a semantic representation of what you're
219
+ [754.160 --> 757.440] seeing and pipe it into your brain.
220
+ [757.440 --> 759.200] We don't study senses.
221
+ [759.200 --> 760.200] My company doesn't.
222
+ [760.200 --> 763.680] So all of our encoders are really stupid, really simple.
223
+ [763.680 --> 764.680] Not so stupid.
224
+ [764.680 --> 767.160] They're simple.
225
+ [767.160 --> 770.680] But replicating the retina and the cochlea is extremely hard.
226
+ [770.680 --> 774.840] And we're working on the cortex, not the other things.
227
+ [774.840 --> 779.680] So our encoders, but the fact is in order to test these theories, we have to have semantic
228
+ [779.680 --> 784.400] representations to push into the system and try and get it to understand the patterns
229
+ [784.400 --> 785.400] in those.
230
+ [785.400 --> 790.760] So we've done that somewhat artificially, not artificially, non-biologically.
231
+ [790.760 --> 795.060] So these examples of encoders, I'm just going to show you one example, like this date
232
+ [795.060 --> 796.060] encoder.
233
+ [796.060 --> 797.300] This doesn't even exist in your brain.
234
+ [797.300 --> 798.300] We just made this up.
235
+ [798.300 --> 803.800] Imagine you had a watch in your brain that constantly told you exactly what time it was.
236
+ [803.800 --> 806.320] Not what part of the season it was or what time of day it was.
237
+ [806.320 --> 809.880] You just always knew exactly what time it was.
238
+ [809.880 --> 812.640] That's sort of what this is like.
239
+ [812.640 --> 817.520] So an example of this, I am taking a date.
240
+ [817.520 --> 818.520] Here's today.
241
+ [818.520 --> 820.520] Here's tomorrow.
242
+ [820.520 --> 823.480] And I'm encoding four dimensions of semantics.
243
+ [823.480 --> 827.680] I'm encoding what day of week it is, the weekend, the time of day and the season.
244
+ [827.680 --> 830.720] And you'll note that I'm not labeling anything.
245
+ [830.720 --> 837.240] I'm just setting a section of bits in the representation to represent that semantically.
246
+ [837.240 --> 842.960] So as I go forward in time, the day of week cycles periodically through its space, the
247
+ [842.960 --> 846.440] weekend cycles, the time of day I haven't touched yet.
248
+ [846.440 --> 852.000] But you can see the season is also slowly periodically moving as I move forward in time.
249
+ [852.000 --> 856.920] And if I do touch the time, you can see the time of day also moving.
250
+ [856.920 --> 860.880] And here is the whole encoding for this timestamp.
251
+ [860.880 --> 866.280] We can simply take all of those subencodings and just concatenate them together and we're
252
+ [866.280 --> 867.280] done.
253
+ [867.280 --> 869.760] So this represents the date.
254
+ [869.760 --> 873.560] This is a way to encode semantically a date.
255
+ [873.560 --> 879.600] And we also have, well I'll show you this input space.
256
+ [879.600 --> 885.600] This sort of introduces the idea of, because we can encode a bunch of different disparate
257
+ [885.600 --> 891.160] data and put them all in one SDR and then pass it into the system, we typically call
258
+ [891.160 --> 894.160] that the input space for the system.
259
+ [894.160 --> 897.240] So for example, here's some graph data.
260
+ [897.240 --> 900.320] This is just power consumption at a building or something.
261
+ [900.320 --> 903.360] So you can see there's obvious temporal patterns in it.
262
+ [903.360 --> 907.200] And we make this smaller.
263
+ [907.200 --> 915.480] As I go through the days, you'll see the power value, which is this bucket right here,
264
+ [915.480 --> 920.600] is cycling a lot because that's the main value that I want to encode here.
265
+ [920.600 --> 923.760] The rest of it is just time of weekend weekend.
266
+ [923.760 --> 928.040] Those other two values down at the bottom and you'll see the weekend one go from one to
267
+ [928.040 --> 929.040] the other.
268
+ [929.040 --> 933.840] So we're encoding not only a scalar value, but at what time it was recorded.
269
+ [933.840 --> 935.960] So there's automatically an association there.
270
+ [935.960 --> 937.960] Yes, you had a question.
271
+ [937.960 --> 945.480] Yeah, you could do that.
272
+ [945.480 --> 949.760] The thing is, if you make it bigger, it's weighted differently.
273
+ [949.760 --> 950.760] So it depends.
274
+ [950.760 --> 953.600] Like, I wanted to weight all those generally the same.
275
+ [953.600 --> 958.640] But how big you make the range for your encoding if you're going to concatenate it all together
276
+ [958.640 --> 963.240] with a bunch of other encodings, it affects the weighting is important.
277
+ [963.240 --> 966.840] Yeah, yeah, it's all about how big should we make these?
278
+ [966.840 --> 970.240] Yeah, it's about importance of the future.
279
+ [970.240 --> 973.360] So you can sort of see that this is what an input space.
280
+ [973.360 --> 976.240] Again, imagine the fiber optics cable that you're looking at.
281
+ [976.240 --> 978.840] This is sort of how it may be lighting up over time.
282
+ [978.840 --> 980.880] This could be an input to your brain.
283
+ [980.880 --> 984.680] But this is totally, you know, we made this up, right?
284
+ [984.680 --> 988.760] Like I could take a completely different encoding mechanism.
285
+ [988.760 --> 992.440] So I changed the scalar encoding mechanism to one that kind of randomly distributes that
286
+ [992.440 --> 996.160] bucket instead of keeping it a continuous bucket.
287
+ [996.160 --> 998.160] It does the same exact thing.
288
+ [998.160 --> 1003.400] So you can imagine there may be thousands of ways you might be able to semantically encode
289
+ [1003.400 --> 1007.760] this specific data into the system so they can understand it.
290
+ [1007.760 --> 1010.360] This is just one way.
291
+ [1010.360 --> 1012.360] Okay.
292
+ [1012.360 --> 1014.360] So that's encoders.
293
+ [1014.360 --> 1020.320] All right, so here's where it gets deep.
294
+ [1020.320 --> 1026.720] So in the brain, in one of those layers, in some of those layers, in the brain, we have
295
+ [1026.720 --> 1029.520] these little structures called mini columns.
296
+ [1029.520 --> 1031.840] And what a mini column is is a grouping of neurons.
297
+ [1031.840 --> 1034.400] You actually see them here.
298
+ [1034.400 --> 1039.640] It's when neurons group together and share proximal input.
299
+ [1039.640 --> 1043.840] And this happens everywhere in your brain and every one of these cortical columns, some
300
+ [1043.840 --> 1046.480] of the layers are doing this type of operation.
301
+ [1046.480 --> 1052.360] And what it's actually doing, the point of it, is to take that spatial input that we just
302
+ [1052.360 --> 1058.280] saw, like that input space, and redistribute it so that we have control over that representation.
303
+ [1058.280 --> 1060.760] And I think you'll see why in a minute.
304
+ [1060.760 --> 1065.520] But we do that by creating these mini column structures and saying every cell in this column
305
+ [1065.520 --> 1069.600] is going to share its proximal input.
306
+ [1069.600 --> 1073.320] And this is all about feed-forward proximal input.
307
+ [1073.320 --> 1077.720] And it also has to maintain the semantic similarity of the input.
308
+ [1077.720 --> 1080.920] I have to show you this for you, to make any sense.
309
+ [1080.920 --> 1085.760] I've got a lot of demos.
310
+ [1085.760 --> 1087.640] Overlapped connected synapses.
311
+ [1087.640 --> 1092.480] OK, so you remember when I showed you that layer in software, I called it, and it was
312
+ [1092.480 --> 1094.800] like this big three-dimensional grid.
313
+ [1094.800 --> 1100.840] Imagine that this is that grid seen from the top.
314
+ [1100.840 --> 1106.240] So each one of these boxes is actually a column of neurons, not just one neuron.
315
+ [1106.240 --> 1109.920] So we would call these mini columns.
316
+ [1109.920 --> 1113.160] And they all, and here's an example input space.
317
+ [1113.160 --> 1115.600] We'll pay attention to that.
318
+ [1115.600 --> 1119.280] So this would be just some random input that would come in this input space.
319
+ [1119.280 --> 1124.880] Each one of these columns has a specific relationship to that input space.
320
+ [1124.880 --> 1130.640] So what I'm showing here is that columns potential pool of connections that it might make
321
+ [1130.640 --> 1134.200] to the input space, proximal connections.
322
+ [1134.200 --> 1137.240] Every one of these columns has a different potential pool.
323
+ [1137.240 --> 1138.760] They're just randomly initialized.
324
+ [1138.760 --> 1143.920] We're trying to set it up like the brain.
325
+ [1143.920 --> 1148.080] Because every neuron is not able to connect to every other neuron, or else we just make
326
+ [1148.080 --> 1150.000] it, all of them connect to all of them.
327
+ [1150.000 --> 1151.000] And there's a reason for that.
328
+ [1151.000 --> 1152.400] I'm not going to get into it.
329
+ [1152.400 --> 1156.560] But every one of these has a different relationship with the space.
330
+ [1156.560 --> 1161.880] When we create this state, we call it a spatial pool, but it's really just sort of a spatial
331
+ [1161.880 --> 1167.080] pooling operation that we're running, we also set up each one of these columns to have
332
+ [1167.080 --> 1170.840] random initials connections with the input.
333
+ [1170.840 --> 1174.040] So it can be immediately stimulated.
334
+ [1174.040 --> 1181.680] Each one of these connections can only exist in its potential pool.
335
+ [1181.680 --> 1184.600] There will never be a connection in the white areas.
336
+ [1184.600 --> 1190.760] And also, each one of these connections has a permanence value associated with it.
337
+ [1190.760 --> 1196.320] So some of your deep learning guys might find this familiar because it's basically heavy
338
+ [1196.320 --> 1197.320] in learning.
339
+ [1197.320 --> 1200.720] We're taking, is that the right term?
340
+ [1200.720 --> 1206.880] We're taking, so for that one that I highlighted, here's its permanence.
341
+ [1206.880 --> 1208.320] It's like 0.6 something.
342
+ [1208.320 --> 1211.880] Its connection threshold is 0.1, so it's connected.
343
+ [1211.880 --> 1214.120] All of them initially are different.
344
+ [1214.120 --> 1215.440] This one's a little different.
345
+ [1215.440 --> 1216.440] That one's a little different.
346
+ [1216.440 --> 1218.160] You can see the chart on the side.
347
+ [1218.160 --> 1219.440] The red ones are not connected.
348
+ [1219.440 --> 1220.600] They're too low.
349
+ [1220.600 --> 1222.840] So those started off as not being connected.
350
+ [1222.840 --> 1225.000] This is sort of the initial state of the pooler.
351
+ [1225.000 --> 1227.280] Let me show you how it learns.
352
+ [1227.280 --> 1234.280] Okay, same setup for this visualization with a little bit of a difference.
353
+ [1234.280 --> 1237.480] All right, so we have a real input coming in here.
354
+ [1237.480 --> 1240.560] It looks familiar, right, from the one I showed you earlier.
355
+ [1240.560 --> 1243.800] So all these buckets, they actually mean something.
356
+ [1243.800 --> 1248.000] And here's an example of one of those columns.
357
+ [1248.000 --> 1249.720] It didn't activate, it didn't become active.
358
+ [1249.720 --> 1254.680] The whole point of the spatial pooler is to turn these activations into these activations
359
+ [1254.680 --> 1258.400] and retain the semantic meaning of the input space.
360
+ [1258.400 --> 1262.320] But allow us to normalize how many bits are on.
361
+ [1262.320 --> 1265.400] So that bit that I checked is not active.
362
+ [1265.400 --> 1271.680] These green boxes are places where it had connections that overlap with the input space.
363
+ [1271.680 --> 1273.840] So we're doing an SDR comparison here, right?
364
+ [1273.840 --> 1277.800] We're saying how many of your connections overlap with this input space, it's the green
365
+ [1277.800 --> 1278.800] ones do.
366
+ [1278.800 --> 1281.800] That was apparently not enough for it to become active.
367
+ [1281.800 --> 1287.720] This one, however, had enough of these connections overlapping with its input space to become
368
+ [1287.720 --> 1288.720] active.
369
+ [1288.720 --> 1294.960] So what we'll do is we'll do this calculation for every column in the structure.
370
+ [1294.960 --> 1300.440] How much do you overlap with your connections overlap with this input space?
371
+ [1300.440 --> 1305.280] And if they overlap enough, we'll typically stack rank all those columns and then cut them
372
+ [1305.280 --> 1306.280] off somewhere.
373
+ [1306.280 --> 1307.920] That's sort of what I should try to show over there.
374
+ [1307.920 --> 1313.480] All of those above, I don't know, 44 somewhere in there.
375
+ [1313.480 --> 1315.040] We're going to turn those columns on.
376
+ [1315.040 --> 1319.800] By turning those columns on, they inhibit their neighbors from turning on.
377
+ [1319.800 --> 1324.160] By doing this sort of stack ranking, this competition.
378
+ [1324.160 --> 1327.280] And this now represents the semantics of that.
379
+ [1327.280 --> 1335.560] And the last step is if these columns win, their proximal connections to these bits in
380
+ [1335.560 --> 1338.120] the input space that were correct, those get increased.
381
+ [1338.120 --> 1340.760] Those permanent values get increased.
382
+ [1340.760 --> 1344.560] So it reinforces, I recognize that spatial pattern.
383
+ [1344.560 --> 1347.280] That specific spatial pattern, I'm going to see it again and I'm going to recognize it
384
+ [1347.280 --> 1348.560] again.
385
+ [1348.560 --> 1354.840] For all of the ones where the connections did not overlap with that space in the act of
386
+ [1354.840 --> 1356.560] columns, then we'll decrement them.
387
+ [1356.560 --> 1361.160] So if we played this for a long time and it was learning real patterns and we inspected
388
+ [1361.160 --> 1367.560] that column again, we might see that there's many less connections in the parts of the space
389
+ [1367.560 --> 1371.120] where it hasn't gained an affinity to.
390
+ [1371.120 --> 1374.320] It hasn't started connecting to.
391
+ [1374.320 --> 1377.760] So that's actually spatial pooling.
392
+ [1377.760 --> 1386.440] So what we end up with, I'm going to do more animations here.
393
+ [1386.440 --> 1395.000] So you sort of understand, hopefully, that these minicoloms all represent some proximal
394
+ [1395.000 --> 1396.640] connection to an input space.
395
+ [1396.640 --> 1401.960] They get activated if the connections are connected to are active.
396
+ [1401.960 --> 1407.000] And then once those columns are activated, then we're going to choose what cells within
397
+ [1407.000 --> 1411.880] the columns become activated and that's the next step.
398
+ [1411.880 --> 1416.360] So I just showed you this is all proximal input that we've talked about with spatial pooler.
399
+ [1416.360 --> 1418.520] But there's other stuff going on here.
400
+ [1418.520 --> 1424.020] There's distal connections happening between the neurons in the layer, just in this one
401
+ [1424.020 --> 1425.120] layer.
402
+ [1425.120 --> 1430.440] So we're getting our distal context from the same layer.
403
+ [1430.440 --> 1434.240] So the neurons in this layer are sort of looping back to themselves and giving themselves
404
+ [1434.240 --> 1435.960] context.
405
+ [1435.960 --> 1442.640] And what do you do when the only context you have is yourself.
406
+ [1442.640 --> 1450.000] Your context is essentially your past, the states that you have been in in the past.
407
+ [1450.000 --> 1453.840] So that's what we call this temporal memory algorithm.
408
+ [1453.840 --> 1461.840] So it identifies the context of each input that we get based upon the state of the distal
409
+ [1461.840 --> 1467.160] rights or the distal connections that the primal neuron has.
410
+ [1467.160 --> 1472.080] And it works entirely within the structure of many columns that we've just activated.
411
+ [1472.080 --> 1475.160] And it will put the cells into a predictive state if necessary.
412
+ [1475.160 --> 1480.400] So I have to show you an animation of this.
413
+ [1480.400 --> 1483.240] This is by far my coolest animation.
414
+ [1483.240 --> 1484.240] Okay.
415
+ [1484.240 --> 1486.240] Bear with me.
416
+ [1486.240 --> 1489.120] I have to set this up.
417
+ [1489.120 --> 1491.040] So this is going to be a sequencer.
418
+ [1491.040 --> 1493.640] Just a little stupid note sequencer.
419
+ [1493.640 --> 1494.640] You see this?
420
+ [1494.640 --> 1495.640] Okay.
421
+ [1495.640 --> 1496.640] Okay.
422
+ [1496.640 --> 1500.320] You hear the notes.
423
+ [1500.320 --> 1502.360] So this is the input space.
424
+ [1502.360 --> 1505.440] Each note is being encoded in a different block of cells.
425
+ [1505.440 --> 1508.560] This one is a rest and it's not used.
426
+ [1508.560 --> 1512.720] The spatial pooler is activating columns over here.
427
+ [1512.720 --> 1514.760] And I can show this a little better if I spread them out.
428
+ [1514.760 --> 1516.840] So you see the columns, right?
429
+ [1516.840 --> 1520.360] So this is spatial pooling happening right now.
430
+ [1520.360 --> 1525.800] I'm going to show you how cells within those active columns become active and become
431
+ [1525.800 --> 1528.320] predictive.
432
+ [1528.320 --> 1532.360] So let's stop this right here.
433
+ [1532.360 --> 1534.080] So let's show the active cells.
434
+ [1534.080 --> 1536.000] Can you guys see this okay in the back?
435
+ [1536.000 --> 1537.520] I know it's going to.
436
+ [1537.520 --> 1539.160] Okay.
437
+ [1539.160 --> 1543.240] So now I'm showing you active cells.
438
+ [1543.240 --> 1545.520] I can dive right in here.
439
+ [1545.520 --> 1546.520] This is E.
440
+ [1546.520 --> 1549.120] So these are the active cells for an E.
441
+ [1549.120 --> 1554.040] And I haven't told you this before, but these cells represent E. It's learned it enough
442
+ [1554.040 --> 1559.560] times that they do at this point represent E. These active cells represent A. But when
443
+ [1559.560 --> 1563.040] we loop around to the first one, look what just happened.
444
+ [1563.040 --> 1564.840] What's up with that?
445
+ [1564.840 --> 1565.840] Okay.
446
+ [1565.840 --> 1571.840] The thing is I'm sending the sequence in and I'm not looping it.
447
+ [1571.840 --> 1574.520] I'm not doing like over and over and over and over.
448
+ [1574.520 --> 1576.840] I'm sending four notes in and then I'm resetting.
449
+ [1576.840 --> 1578.760] I'm sending four notes in and then I'm resetting.
450
+ [1578.760 --> 1583.220] I'm trying to train the sequence, but I don't want to train it on some endless infinite
451
+ [1583.220 --> 1587.240] loop of things or else it'll just think how long does this ever go on, you know?
452
+ [1587.240 --> 1593.440] If you imagine it's hard to temporally cut off a sequence that contains loops.
453
+ [1593.440 --> 1596.840] So we're just going to cut it off manually.
454
+ [1596.840 --> 1598.520] Run it, cut it off, run it, cut it off.
455
+ [1598.520 --> 1602.480] So every time it sees an A, it sees it out of context.
456
+ [1602.480 --> 1603.480] Every time.
457
+ [1603.480 --> 1605.480] Is it never follows anything?
458
+ [1605.480 --> 1611.000] If we see a C sharp, it knows, oh, C sharp follows A so I know these exact bits are going
459
+ [1611.000 --> 1612.040] to turn on.
460
+ [1612.040 --> 1614.600] So that leads to my point.
461
+ [1614.600 --> 1620.360] There are two ways that a neuron within a minicolumn will become active.
462
+ [1620.360 --> 1622.200] There's two ways.
463
+ [1622.200 --> 1632.520] The first one is if there are any neurons that are already in a predictive state.
464
+ [1632.520 --> 1636.520] If we have it a column activate and we look through it and we say, oh, there's a neuron
465
+ [1636.520 --> 1638.760] that's in a predictive state, you win.
466
+ [1638.760 --> 1639.760] You were right.
467
+ [1639.760 --> 1643.720] I mean, that neuron was correct because in the last time step, it thought, I think I'm
468
+ [1643.720 --> 1647.720] going to be pretty, I think I've seen this before, I'm going to go into a predictive state.
469
+ [1647.720 --> 1650.520] So when we get to the next one, we'll activate it.
470
+ [1650.520 --> 1653.760] But that's not happening here because we've never seen A come after anything.
471
+ [1653.760 --> 1655.400] We have no context.
472
+ [1655.400 --> 1660.080] So if we have no context, if we get an unrecognized input and there are no predictive
473
+ [1660.080 --> 1662.480] cells here, we are going to act.
474
+ [1662.480 --> 1663.480] We're going to act.
475
+ [1663.480 --> 1666.800] Activate every cell in the column because we don't know.
476
+ [1666.800 --> 1667.800] We're confused.
477
+ [1667.800 --> 1669.880] It could be anything.
478
+ [1669.880 --> 1676.200] And what will happen is over time, we'll pick a cell to represent that new sequence and
479
+ [1676.200 --> 1679.880] it will represent it going forward.
480
+ [1679.880 --> 1686.680] So now you're probably wondering great, but how do we put a cell into a predictive state?
481
+ [1686.680 --> 1689.440] So I told you the two ways it could be active.
482
+ [1689.440 --> 1693.960] If there is a predictive cell in the column that activates, if there's no predictive cells
483
+ [1693.960 --> 1697.760] in the column, they all activate, we call this bursting, by the way.
484
+ [1697.760 --> 1702.800] I think that's a neuroscience term.
485
+ [1702.800 --> 1706.560] But how do they become predictive?
486
+ [1706.560 --> 1714.600] So how we decide whether something is going to become predictive or not is, we'll go through,
487
+ [1714.600 --> 1718.640] based on some input, we'll have to go through every single cell in the column.
488
+ [1718.640 --> 1721.960] I'm just going to show you which ones are currently predicted.
489
+ [1721.960 --> 1727.080] At C-sharp, for example, we're predicting that these cells are going to come next.
490
+ [1727.080 --> 1731.840] This is the cells for E because we've seen this 10, 20 times so far.
491
+ [1731.840 --> 1739.840] The reason that those become predictive is because they have these distal connections
492
+ [1739.840 --> 1745.680] that have already grown because I've been playing this over and over and over.
493
+ [1745.680 --> 1751.240] This becomes predictive because we've already done the transition A to C-sharp over and over
494
+ [1751.240 --> 1752.240] and over.
495
+ [1752.240 --> 1758.920] So every time we've done that, we've grown segments and reinforced learning from this C-sharp
496
+ [1758.920 --> 1765.640] or this prediction of E to the previous state which was C-sharp.
497
+ [1765.640 --> 1770.080] If I look at some of these others that aren't predictive, that's because they have no segments.
498
+ [1770.080 --> 1773.520] They have no connections.
499
+ [1773.520 --> 1785.440] So if I were to move one time step forward, you should see all of these blue cells turn orange.
500
+ [1785.440 --> 1788.640] So we correctly predicted E.
501
+ [1788.640 --> 1789.640] So let me show you something interesting.
502
+ [1789.640 --> 1795.720] I'm going to turn this off and I'm going to let this play a little.
503
+ [1795.720 --> 1802.480] I'm going to show you how bursting really works, how we really learn a new sequence.
504
+ [1802.480 --> 1803.480] So I've learned this pretty well.
505
+ [1803.480 --> 1805.680] A, C-sharp, E, A.
506
+ [1805.680 --> 1807.960] What if I stop it?
507
+ [1807.960 --> 1810.400] Let's stop it right here at C.
508
+ [1810.400 --> 1813.400] These predictive cells, what are they predicting?
509
+ [1813.400 --> 1814.400] E.
510
+ [1814.400 --> 1817.080] I'm going to change that.
511
+ [1817.080 --> 1819.760] Let's make it F. We haven't seen it F before.
512
+ [1819.760 --> 1824.960] So when I scoot this forward, let's turn on active cells.
513
+ [1824.960 --> 1828.160] So we're at the state right now, we're at C-sharp.
514
+ [1828.160 --> 1831.240] All these active cells represent C-sharp.
515
+ [1831.240 --> 1837.680] We're predicting these cells, which are the cells for E.
516
+ [1837.680 --> 1840.800] When we move forward, we're not going to get those cells.
517
+ [1840.800 --> 1845.680] Does anyone know what's going to happen?
518
+ [1845.680 --> 1848.360] What was that?
519
+ [1848.360 --> 1852.640] All the new columns, because we're going to have a new spatial input.
520
+ [1852.640 --> 1854.120] Something that we've never seen before.
521
+ [1854.120 --> 1856.880] We haven't even shown F to this system yet.
522
+ [1856.880 --> 1858.920] We're going to have a new spatial input.
523
+ [1858.920 --> 1860.000] Here it is.
524
+ [1860.000 --> 1862.520] And almost all of them burst.
525
+ [1862.520 --> 1867.720] What this is doing is, so we've seen A, C-sharp, E before.
526
+ [1867.720 --> 1870.800] And now we're seeing A, C-sharp, F-sharp.
527
+ [1870.800 --> 1871.560] Never seen that.
528
+ [1871.560 --> 1874.360] So the system's like, whoa, everything bursts.
529
+ [1874.360 --> 1877.880] So we get this spatial input that's new, except for this one,
530
+ [1877.880 --> 1883.040] because it must apparently share some semantics with E.
531
+ [1883.040 --> 1885.280] But for the most part, everything bursts.
532
+ [1885.280 --> 1887.520] And we're like, this is a new sequence.
533
+ [1887.520 --> 1890.360] So at this point, I'm just going to play this forward.
534
+ [1893.360 --> 1896.200] So we're going to see these columns burst for a while,
535
+ [1896.200 --> 1899.640] until it gets the rhythm of the new sequence.
536
+ [1899.640 --> 1902.720] And then it will eventually stop bursting and realize,
537
+ [1902.720 --> 1904.600] OK, well, this is normal.
538
+ [1904.600 --> 1905.080] This is normal.
539
+ [1905.080 --> 1906.080] I've seen this sequence before.
540
+ [1906.080 --> 1909.760] I've seen A, C-sharp, E, and A, C-sharp, F-sharp.
541
+ [1909.760 --> 1913.400] So it has stopped bursting, except for that very first A.
542
+ [1913.400 --> 1915.080] I can back at C-sharp.
543
+ [1915.080 --> 1917.680] I'm going to show you all the predictive cells now.
544
+ [1917.680 --> 1920.840] Does it look like there's about twice as many predictive cells?
545
+ [1920.840 --> 1923.040] Why is that?
546
+ [1923.040 --> 1924.120] It's predicting both.
547
+ [1924.120 --> 1926.600] It's predicting E, and it's predicting F-sharp,
548
+ [1926.600 --> 1928.320] because it's seen both of them.
549
+ [1928.320 --> 1930.480] Now, if I were to play this on and on and on and over,
550
+ [1930.480 --> 1931.960] and over eventually, it'll forget that it
551
+ [1931.960 --> 1934.560] had ever had a C-sharp to E transition.
552
+ [1934.560 --> 1938.680] And it will just have the pattern C-sharp to F-sharp.
553
+ [1938.680 --> 1941.480] All of these things are tunable.
554
+ [1941.480 --> 1943.600] So you can make a system that forgets really fast
555
+ [1943.600 --> 1946.160] or remembers forever or whatever.
556
+ [1946.160 --> 1948.000] So hopefully you get the gist.
557
+ [1948.000 --> 1949.200] This is sequence memory.
558
+ [1949.200 --> 1952.120] This is how it's a really simple example, OK?
559
+ [1952.120 --> 1956.360] This size of the structures is small or then we usually use.
560
+ [1956.360 --> 1958.400] I just can't visualize the size of structures
561
+ [1958.400 --> 1960.520] that we typically build these systems with.
562
+ [1963.600 --> 1968.320] OK, I'm running out of time.
563
+ [1968.320 --> 1969.240] OK.
564
+ [1969.240 --> 1972.160] So you might be wondering, because I said,
565
+ [1972.160 --> 1972.680] what am I supposed to do?
566
+ [1972.680 --> 1974.440] I have seven minutes left.
567
+ [1974.440 --> 1976.240] OK.
568
+ [1976.240 --> 1978.920] You might be wondering, hey, you said that the distal signal came
569
+ [1978.920 --> 1979.680] from somewhere else.
570
+ [1979.680 --> 1985.400] But in this example, we had all the input coming proximally.
571
+ [1985.400 --> 1988.200] But we were feeding the distal input back into ourselves.
572
+ [1988.200 --> 1989.560] We weren't getting it from somewhere else,
573
+ [1989.560 --> 1991.920] from some other layer or some other part of the cortex.
574
+ [1991.920 --> 1995.120] Like I said, that's what provides the temporal context
575
+ [1995.120 --> 1997.400] for the layer, because you're using yourself
576
+ [1997.400 --> 1999.440] as your context, as your reference.
577
+ [1999.440 --> 2001.160] That's how that works.
578
+ [2001.160 --> 2003.080] Now, what if, and I don't have much time to go over this,
579
+ [2003.080 --> 2005.960] but what if we change this a bunch?
580
+ [2005.960 --> 2008.240] And we said, OK, we're going to have that proximal input
581
+ [2008.240 --> 2010.520] be a sensory feature that some sensor has
582
+ [2010.520 --> 2013.080] felt on some object.
583
+ [2013.080 --> 2015.160] And we're going to have the distal input
584
+ [2015.160 --> 2018.280] be the allocentric location of that object.
585
+ [2018.280 --> 2020.440] So if you think about an object, you can think about a coffee
586
+ [2020.440 --> 2022.680] cup, for example, anywhere in the world.
587
+ [2022.680 --> 2024.120] Anywhere, I could think about it there.
588
+ [2024.120 --> 2025.120] I could think about it there.
589
+ [2025.120 --> 2026.520] I could place it wherever I want.
590
+ [2026.520 --> 2029.320] You have an allocentric representation of that object.
591
+ [2029.320 --> 2030.920] I mean, it's self-contained.
592
+ [2030.920 --> 2033.640] It doesn't require any other coordinate framework.
593
+ [2033.640 --> 2036.440] It just exists a coffee cup, right?
594
+ [2036.440 --> 2040.720] So that's what I mean by allocentric.
595
+ [2040.720 --> 2045.120] And I'm going to give you another demo of this, which
596
+ [2045.120 --> 2047.000] is in PowerPoint.
597
+ [2047.000 --> 2047.800] I didn't make this one.
598
+ [2047.800 --> 2048.560] Somebody else did it.
599
+ [2048.560 --> 2049.800] It was so good.
600
+ [2049.800 --> 2051.520] I just wanted to do it.
601
+ [2051.520 --> 2055.800] So here we're talking about a single column
602
+ [2055.800 --> 2059.320] of a single cortical column with two layers.
603
+ [2059.320 --> 2061.640] So this is sort of our newer research stuff
604
+ [2061.640 --> 2064.000] coming out of our company.
605
+ [2064.000 --> 2066.160] So we've identified this cortical circuit
606
+ [2066.160 --> 2067.760] that we've seen exist.
607
+ [2067.760 --> 2071.080] It exists in layer four and layer two, three of the cortex,
608
+ [2071.080 --> 2076.840] where layer four has many columns, and it receives input.
609
+ [2076.840 --> 2080.200] And we're going to send the distal input in as the allocentric
610
+ [2080.200 --> 2083.320] location on an object that we're about to touch.
611
+ [2083.320 --> 2089.480] When we touch that object, we get sensory information
612
+ [2089.480 --> 2092.560] that comes in as proximal input.
613
+ [2092.560 --> 2095.600] We make a prediction, which is what the location is,
614
+ [2095.600 --> 2098.520] based on all the things we've ever touched on that part
615
+ [2098.520 --> 2100.680] of an object before.
616
+ [2100.680 --> 2102.200] And if we're correct, then that's what
617
+ [2102.200 --> 2106.800] represents this feature at a location on an object.
618
+ [2106.800 --> 2109.960] That gets passed up to another layer.
619
+ [2109.960 --> 2111.720] And I can't go into the details.
620
+ [2111.720 --> 2112.840] I just don't have time.
621
+ [2112.840 --> 2115.760] But this other layer now, it's getting proximal input
622
+ [2115.760 --> 2120.040] from that layer below it that's sending in sensory location
623
+ [2120.040 --> 2121.280] features.
624
+ [2121.280 --> 2124.480] And it's going to decide, based on that sensory feature
625
+ [2124.480 --> 2128.200] at that location, these are the neurons that have represented
626
+ [2128.200 --> 2131.000] that before, and these actually represent objects.
627
+ [2131.000 --> 2132.920] So in this case, it's ambiguous.
628
+ [2132.920 --> 2135.080] If we touch this point on this object,
629
+ [2135.080 --> 2138.520] well, that feels like a cup and a can, and a bowl.
630
+ [2138.520 --> 2142.720] But if we touch it again, we do the same thing.
631
+ [2142.720 --> 2144.480] This is a different feature.
632
+ [2144.480 --> 2145.240] It's not smooth.
633
+ [2145.240 --> 2147.560] It's kind of the rim.
634
+ [2147.560 --> 2149.160] And then we pass it up.
635
+ [2149.160 --> 2152.200] Then we can take the ball right out.
636
+ [2152.200 --> 2154.000] And this is all those union properties.
637
+ [2154.000 --> 2155.000] I'm talking about SDRs.
638
+ [2155.000 --> 2156.800] These are all SDRs.
639
+ [2156.800 --> 2158.800] So we can now identify, OK, it's not a ball.
640
+ [2158.800 --> 2161.200] After the second touch, let's touch it again.
641
+ [2161.200 --> 2163.840] Here's another unique feature at a location.
642
+ [2163.840 --> 2167.040] And I can now rule out the can.
643
+ [2167.040 --> 2169.080] So I know now this is a coffee cup.
644
+ [2169.080 --> 2172.120] These bits, these on bits, in that output layer
645
+ [2172.120 --> 2177.800] represent an object in your brain, or in the brain of the software.
646
+ [2177.800 --> 2180.160] We can go even further with this, with multiple columns.
647
+ [2180.160 --> 2181.960] And this is the cool thing.
648
+ [2181.960 --> 2185.600] If we have, imagine that each column represents a finger.
649
+ [2185.600 --> 2186.800] It doesn't really work like this in the brain.
650
+ [2186.800 --> 2190.960] You've got lots of columns working with just one finger pad.
651
+ [2190.960 --> 2192.920] But imagine that you had three columns,
652
+ [2192.920 --> 2195.000] and each was represented a finger.
653
+ [2195.000 --> 2196.640] It's really useful if this finger can
654
+ [2196.640 --> 2197.760] inform this finger, right?
655
+ [2197.760 --> 2199.760] If you're touching something and they're both touching,
656
+ [2199.760 --> 2201.040] they are doing that.
657
+ [2201.040 --> 2204.640] Your brain, because these columns are sharing information,
658
+ [2204.640 --> 2209.480] when you touch something, you get feedback from this finger
659
+ [2209.480 --> 2212.360] that helps this finger understand what you're touching.
660
+ [2212.360 --> 2216.840] So what happens here is, as we simultaneously touch this object,
661
+ [2216.840 --> 2218.800] and they all get different locations
662
+ [2218.800 --> 2221.280] with different features from these fingers,
663
+ [2221.280 --> 2223.440] the same thing happens.
664
+ [2223.440 --> 2226.960] And then in the output columns, they're all ambiguous,
665
+ [2226.960 --> 2228.960] but one's more ambiguous than the other.
666
+ [2228.960 --> 2231.280] Because it's like, well, this feels like this or this,
667
+ [2231.280 --> 2233.280] and the other is like, it's definitely not a ball.
668
+ [2233.280 --> 2237.640] Well, in your brain, these columns are informing each other.
669
+ [2237.640 --> 2241.000] And if two of the three columns are like, that's not a ball,
670
+ [2241.000 --> 2242.920] why not share that information with your other finger
671
+ [2242.920 --> 2244.760] that thinks it might be a ball?
672
+ [2244.760 --> 2248.760] And it can actually feed back and inform
673
+ [2248.760 --> 2252.440] that the column that's associated with that sensory input,
674
+ [2252.440 --> 2255.800] that that's not a ball, and updated representation.
675
+ [2255.800 --> 2259.320] The same thing, if we do yet another grasp,
676
+ [2259.320 --> 2261.440] with different features, all feeding up
677
+ [2261.440 --> 2264.800] into the object recognition column, this output layer
678
+ [2264.800 --> 2268.320] is really doing a temporal pooling operation.
679
+ [2268.320 --> 2271.120] It has a library of all the objects it's ever learned,
680
+ [2271.120 --> 2273.840] and it's just narrowing down, narrowing down.
681
+ [2273.840 --> 2275.600] Every time you touch something, it's like, well, it's not that,
682
+ [2275.600 --> 2277.320] it's not that, that's what it is.
683
+ [2277.320 --> 2279.240] And when we've got lots of columns working together,
684
+ [2279.240 --> 2282.040] all sharing, what they think that object is,
685
+ [2282.040 --> 2285.120] that you're touching, then it can be much faster.
686
+ [2285.120 --> 2287.040] It can identify the objects much faster.
687
+ [2290.280 --> 2291.160] Do we do one more touch?
688
+ [2291.160 --> 2291.960] No, that's it.
689
+ [2291.960 --> 2293.160] OK.
690
+ [2293.160 --> 2295.200] And I have two minutes.
691
+ [2295.200 --> 2297.480] So let me get through these last bit of slides.
692
+ [2297.480 --> 2299.560] Like I said, on the Slack channel, there's
693
+ [2299.560 --> 2300.920] HTML, there's one of the rooms.
694
+ [2300.920 --> 2304.880] You can ask me questions there, and I'll be around today.
695
+ [2304.880 --> 2308.160] All of our code is open source.
696
+ [2308.160 --> 2311.440] We've got a research code, or core code, et cetera.
697
+ [2311.440 --> 2312.800] I'm the open source community manager,
698
+ [2312.800 --> 2315.520] so if you deal with any of the stuff you're dealing with me,
699
+ [2315.520 --> 2317.240] I'm happy to help.
700
+ [2317.240 --> 2322.120] And other thing is, all of our research papers are accessible.
701
+ [2322.120 --> 2324.680] We try to make everything as transparent as accessible
702
+ [2324.680 --> 2325.960] as possible.
703
+ [2325.960 --> 2327.800] And I have a YouTube channel.
704
+ [2327.800 --> 2331.800] So I do, everything I've shown you is on this YouTube channel.
705
+ [2331.800 --> 2334.760] I have this whole lesson called HTML school that goes through
706
+ [2334.760 --> 2338.960] everything from bit arrays to temporal memory sequences
707
+ [2338.960 --> 2340.160] from the ground up.
708
+ [2340.160 --> 2342.080] And I'm working on some of the sensory motor stuff
709
+ [2342.080 --> 2343.040] that I just talked about.
710
+ [2343.040 --> 2346.000] There will be more episodes about that coming soon.
711
+ [2346.000 --> 2349.440] So if you're interested in this, please check us out.
712
+ [2349.440 --> 2351.120] nementa.org.
713
+ [2351.120 --> 2353.400] I am Rialite at Twitter and GitHub.
714
+ [2353.400 --> 2356.120] And you can follow my company nementa also.
715
+ [2356.120 --> 2357.480] I think that's it.
716
+ [2357.480 --> 2357.760] Yeah.
717
+ [2357.760 --> 2359.920] So I'm happy to take questions in the very limited time
718
+ [2359.920 --> 2361.680] that I have.
719
+ [2361.680 --> 2362.480] Yes?
720
+ [2363.000 --> 2369.000] I think that's the best thing about using this.
721
+ [2369.000 --> 2371.280] Yeah, that's where, well, OK.
722
+ [2371.280 --> 2374.280] Our goal is to try and understand how intelligence works.
723
+ [2374.280 --> 2376.480] This just happens to be something that we can see,
724
+ [2376.480 --> 2378.720] and we can posturalize.
725
+ [2378.720 --> 2381.400] We think that this is how object recognition works.
726
+ [2381.400 --> 2383.520] And it's not just your fingers, right?
727
+ [2383.520 --> 2384.680] It's your eyes.
728
+ [2384.680 --> 2385.800] It's your ears.
729
+ [2385.800 --> 2387.640] When you think of a coffee cup, you're not
730
+ [2387.640 --> 2389.560] just thinking about how it feels.
731
+ [2389.560 --> 2391.560] You're thinking about what it looks like.
732
+ [2391.560 --> 2394.280] So all of those, all that sensory input
733
+ [2394.280 --> 2396.320] contributes to your representation of objects
734
+ [2396.320 --> 2397.680] that you know in the world.
735
+ [2397.680 --> 2399.920] So the existence of a coffee cup
736
+ [2399.920 --> 2402.840] exists everywhere in your brain all at once.
737
+ [2402.840 --> 2404.720] And because, well, I wouldn't say everywhere,
738
+ [2404.720 --> 2407.560] but lots of places in your brain, the places that process
739
+ [2407.560 --> 2409.840] your somatic senses, your visual, your audits,
740
+ [2409.840 --> 2414.360] well, you can't really hear coffee cups, but yeah.
741
+ [2414.360 --> 2417.000] What else does it mean to you that you're not
742
+ [2417.960 --> 2418.960] doing that?
743
+ [2418.960 --> 2419.960] Yeah.
744
+ [2419.960 --> 2421.960] That's a thing that you're not doing that you're not
745
+ [2421.960 --> 2423.040] doing whatever you're doing.
746
+ [2423.040 --> 2424.320] No.
747
+ [2424.320 --> 2424.840] No.
748
+ [2424.840 --> 2426.480] So the question was, if you get impaired,
749
+ [2426.480 --> 2427.840] if you have a brain injury or something,
750
+ [2427.840 --> 2431.360] and oh, if your sensory input is impaired,
751
+ [2431.360 --> 2433.320] so what generally happens if you're blind
752
+ [2433.320 --> 2436.000] or something happens to your senses is,
753
+ [2436.000 --> 2438.480] your brain is very plastic, so it'll reroute.
754
+ [2438.480 --> 2441.160] It'll take your auditory input and pipe it into the parts
755
+ [2441.160 --> 2443.080] of your brain that's not getting any input.
756
+ [2443.080 --> 2444.400] And you'll see that.
757
+ [2444.400 --> 2446.160] I mean, there's a reason why blind people
758
+ [2446.160 --> 2448.560] have enhanced senses in other areas.
759
+ [2448.560 --> 2451.720] It's because their brain's still working at 100%.
760
+ [2451.720 --> 2452.560] Yes?
761
+ [2456.560 --> 2457.480] They're great.
762
+ [2457.480 --> 2459.360] Neomorphic computing chips.
763
+ [2459.360 --> 2460.920] OK, just a note about hardware.
764
+ [2460.920 --> 2462.280] There's nothing in hardware right now
765
+ [2462.280 --> 2466.400] that can run this stuff really naturally and fast.
766
+ [2466.400 --> 2468.240] Like, we run it obviously on CPUs.
767
+ [2468.240 --> 2469.680] We can run it on GPUs.
768
+ [2469.680 --> 2472.240] But we're not going to take advantage of this architecture
769
+ [2472.240 --> 2474.800] until we have hardware that is plastic,
770
+ [2475.800 --> 2478.760] until we can represent neurons that can grow
771
+ [2478.760 --> 2482.280] and degrade connections to each other.
772
+ [2482.280 --> 2484.160] We're not going to be able to do this in hardware.
773
+ [2484.160 --> 2486.240] There's a lot of stuff we can do with software,
774
+ [2486.240 --> 2489.080] but there's definitely more than one organization working
775
+ [2489.080 --> 2490.720] on that type of plastic hardware.
776
+ [2493.480 --> 2494.080] Anything else?
777
+ [2494.080 --> 2494.760] Any other questions?
778
+ [2494.760 --> 2495.640] Yes, in the middle.
779
+ [2495.640 --> 2501.480] I'm going to take the first part of the learning model.
780
+ [2501.480 --> 2503.320] The biggest limitation of the current deep learning
781
+ [2503.320 --> 2504.160] models.
782
+ [2507.520 --> 2511.680] Well, you know the thing about integrating all of those things
783
+ [2511.680 --> 2514.920] to go after a common goal, I think that.
784
+ [2516.920 --> 2519.680] I mean, I think deep learning can do some really great
785
+ [2519.680 --> 2522.880] narrow things like the facial recognition and voice
786
+ [2522.880 --> 2524.600] recognition and all that stuff.
787
+ [2524.600 --> 2526.360] But putting it together into a package
788
+ [2526.360 --> 2529.120] that understands human humans.
789
+ [2529.120 --> 2531.160] That understands intelligence.
790
+ [2531.160 --> 2535.720] I mean, that you can communicate with is really hard.
791
+ [2535.720 --> 2538.440] By the way, who thinks Alexa is intelligent?
792
+ [2538.440 --> 2540.280] All right.
793
+ [2540.280 --> 2542.480] All right.
794
+ [2542.480 --> 2544.280] OK, thanks everybody for coming to my talk.
795
+ [2544.280 --> 2545.280] Really appreciate it.
796
+ [2545.280 --> 2547.080] Thank you.
transcript/allocentric_10kNbp1PObo.txt ADDED
@@ -0,0 +1,85 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 10.180] For the next speaker, Parissa Abe di Kozani, she is doing her postdoc at your university
2
+ [10.180 --> 17.120] in Canada and her talk is titled Building a Commercial Network to Study the Combination
3
+ [17.120 --> 20.080] of the Public Health Centre and Ego Centre Information.
4
+ [20.080 --> 21.080] Thank you.
5
+ [21.080 --> 22.080] Okay.
6
+ [22.080 --> 23.080] Okay.
7
+ [23.080 --> 32.080] Screen.
8
+ [32.080 --> 36.080] Good morning, everyone.
9
+ [36.080 --> 40.080] Thanks for attending this session.
10
+ [40.080 --> 47.080] In this part, I am going to quickly explain our project, which is at very early stages in understanding
11
+ [47.080 --> 52.080] how a loss and we can egos and information are combined in the brain.
12
+ [52.080 --> 59.080] So when planning to reach to a visual target, one can code the special position of the object
13
+ [59.080 --> 75.080] with regard to body parts, for example, here with regard to the hand, or can code information with regard to the other objects in the surrounding area, which the second one is called a loss and recording and the first one is called egos and recording.
14
+ [75.080 --> 89.080] Many studies, both behavioral and neurophysiology studies show that actually this allocentric and egosentric information are combined for reaching purposes.
15
+ [89.080 --> 104.080] For example, one of the recent studies by my colleague in Dr. Crawford's lab, when they recorded the activity in the frontal eye field of monkeys, they showed that the moral codes.
16
+ [104.080 --> 120.080] In the moral code, there is an embedded if allocentric information. So more specifically, what they did, they trained monkeys to reach to remembered position of targets in the presence of allocentric landmarks.
17
+ [120.080 --> 123.080] So let's go through the task very quickly together.
18
+ [123.080 --> 133.080] So here in the first image, the monkeys are trained to fixate on the orange dot and the landmarks are presented as two crossing lines.
19
+ [133.080 --> 144.080] And after a fixation period, the target appears as the white dot here, close to the landmarks and then disappears after 100 milliseconds.
20
+ [144.080 --> 156.080] And after a delay period, a mask appears of both landmark and target everything disappears. And after random delay, now the landmark again appears on the screen.
21
+ [156.080 --> 164.080] And it could be in two different conditions either in the same position as previous image or as a shifted position in different direction.
22
+ [164.080 --> 173.080] And what they found and then the monkey, of course, was trained to reach to the remembered position of the target when the fixation was disappeared.
23
+ [173.080 --> 192.080] And what they found as that similar to human behavioral studies, also the gaze position, the final gaze position of the monkey was shifted toward the shifted landmark position, which is showing the influence of the landmarks and this shift was around 30%.
24
+ [192.080 --> 204.080] And they look at the single neuron activity, they found that early visual responses were coding most information in egocentric frame here, isent centered coordinate.
25
+ [204.080 --> 214.080] But when they look at at the find our later responses or more responses, what they found is that those codes showing again mainly coding in egocentric.
26
+ [214.080 --> 226.080] But within that coding was embedded about 30% shift toward the aloecentric coding and this was an indicator of the presence of aloecentric coding also in the brain for reaching purposes.
27
+ [226.080 --> 239.080] However, while all these studies showing the effect of aloecentric information is not clear how and where in the network this combination might happen.
28
+ [239.080 --> 242.080] So this is the goal of this study.
29
+ [242.080 --> 253.080] And previously for egocentric framework, neural networks were a very useful tool to provide insights of like how these processes can happen.
30
+ [253.080 --> 258.080] And our goal in this project is to use the same methodology.
31
+ [258.080 --> 263.080] So there are several challenges that we first need to tackle the first thing.
32
+ [263.080 --> 275.080] So as I said, we're using our own network, but before I can go into the architecture, I'm going to quickly explain what type of signal we are feeding to the network and expecting it from the network to generate.
33
+ [275.080 --> 286.080] So since our goal is to replicate the result by my colleague, similarly, we will provide an visual visual input here images with the landmark and target position.
34
+ [286.080 --> 294.080] And then if you remember, we had two different images, one was landmark and target and one only with the landmark.
35
+ [294.080 --> 304.080] And in the first item of our project, we are only interested in the special, special coding. And so we remove the temporal aspect of this coding.
36
+ [304.080 --> 315.080] And for that reason, currently we are just stacking up the two images together. We hope to add recurrency or recurrent connection later on to our network.
37
+ [315.080 --> 332.080] But so this is what we are visual input and in order to dissociate between egocentric and aloe century, then we varied the initial case position and accordingly, we varied our visual input to replicate the routine of projection.
38
+ [332.080 --> 340.080] As an input for our network. So these are the two input and finally to make it more neurophysiologically visible.
39
+ [340.080 --> 349.080] We expect our network to generate the motor output, similar to the FVF motor responses that my colleague recorded.
40
+ [349.080 --> 354.080] So this would be a population code of the final case position.
41
+ [354.080 --> 368.080] To very quickly go through the architecture that we are proposing here. So since we have images here, we are using convolutional neural networks to extract information from images.
42
+ [368.080 --> 393.080] And what we are doing here, we are just trying to replicate every visual responses. So we are aiming to use physiologically feasible. So here we use only two two convolution layers, we use gab or filters different in different orientation to kind of model the simple cells.
43
+ [393.080 --> 411.080] And then we have rectification, normalization and special pulling and again gab or filters, but before the before flattening the image defeated to the final part of our network, then we included a feature pulling and this for is the only part that we train in our convolution on network.
44
+ [411.080 --> 422.080] And this this part is the part that we allow our network to be selective and choose which feature map amongst all the feature map that was generated using our gab or filters.
45
+ [422.080 --> 429.080] It wants to combine and generate abstract feature maps of the visual input.
46
+ [429.080 --> 440.080] And then we use this initial gate gate position with the extracted information from the images fit that to a two layer fully connected network.
47
+ [440.080 --> 454.080] And in order to generate the final more output and this part is the part that we are planning to analyze to compare with the processes that is happening in the brain.
48
+ [454.080 --> 483.080] So here what I did and so I just finished designing the network here it's this is a very simple or very early result of the network performance and what I did is that I got the population code and used a linear decoder to transform that into two D gaze positions and in the y axis I'm plotting the predicted gaze position from my network and the x axis now I'm plotting the desired gaze position based on the simulation.
49
+ [483.080 --> 493.080] Data generated again based on the behaviors and you can see that the network is trainable and it can predicts.
50
+ [493.080 --> 504.080] And the next step for our network is now to analyze and go inside the individual units and compare them with the behavior of the network.
51
+ [504.080 --> 517.080] And then my time is up I would like to thank my co-authors and my lab member and funding agencies and now I'm open to questions for anyone who is interested to go deeper into the architecture of the network.
52
+ [517.080 --> 521.080] Thank you.
53
+ [521.080 --> 524.080] Thank you, Parisa.
54
+ [524.080 --> 532.080] Currently we don't have any questions in the question and answer panel and please post your questions in the panel and so that we can go through it.
55
+ [532.080 --> 536.080] We have actually time for plenty of questions.
56
+ [536.080 --> 545.080] And could you please give us I mean could you please explore your network a little bit more because it seems like you just go through it very shortly.
57
+ [545.080 --> 556.080] Yeah, so so actually the network them kind of maybe the main part of the network. So did the goal of this is to create a model which is like replicating the brain as much as it's possible.
58
+ [556.080 --> 562.080] So it's not like about like training just a convolution network to do a classification.
59
+ [562.080 --> 572.080] So what we did is here I'm just giving a quick overview of the convolution part again just a big picture.
60
+ [572.080 --> 589.080] And so if this is my if this is the hit map of my input, this would be the final output of my network. It's kind of resized and and creates a Gaussian like kind of activity of all of these pixels.
61
+ [589.080 --> 598.080] And you can see that around the the the the the the allocentric part and the target part. Now I have higher activation.
62
+ [598.080 --> 610.080] And so the one of the main part of or one of the main part or challenges of this project is that was that to train it or to create the architecture that is more
63
+ [610.080 --> 622.080] physiologically understandable. So if I go through it. So here if I'm what I'm showing is that how I'm performing the convolution rectification normalization and pulling.
64
+ [622.080 --> 636.080] And as I said, I'm just simply using some gabber filters. These are all predefined. And so gabber filters are like they can just here in this example extract the orientation of the line for us.
65
+ [636.080 --> 653.080] So if I have the example of like only having two lines here. So the first one would like let's say extract the vertical and horizontal and then we distributed filtering that we have here and has been shown that this can be a process that is happening in the early visual areas.
66
+ [653.080 --> 664.080] So this creates like a feature maps and of it with the combination of different line orientation. And this is the part that we allow our network now to choose.
67
+ [664.080 --> 675.080] For example, if we have only like horizontal and vertical crosses or any other direction. So this this is the part that our network will be trained to create the abstract features.
68
+ [675.080 --> 686.080] And why this why we are selecting this and why this is important is that if you think about coordinator transformation is that we have many many many options.
69
+ [686.080 --> 694.080] And the issue in the brain is that we don't know what to look for and we don't know where to look for it. And by creating such a thing.
70
+ [694.080 --> 708.080] So what we are planning to see is that at what stage this information is coded is that like this allocentric information present all over the network and at some point today use it or not using it.
71
+ [708.080 --> 732.080] So so I didn't have the time to go through some of the result. But when we looked at fully connected layers, what we did if we just extract here, if I go through like the the weights and what I can see is that from the early ones on my fully connected layer, I can see the presence of the
72
+ [732.080 --> 748.080] process into my into the base. So I can see that the network is in a distributed fashion trying to extract this process position with regard to the target position and pass it to the other layers.
73
+ [748.080 --> 761.080] So the next step for us would be for sure to just try not to simulate a situation that no allocentric information is coded and see if the again this would be persistent in the network or not.
74
+ [761.080 --> 772.080] I mean one of our panelists audiences asking a kale lavash is asking was the images introduced to the network as RGB or binary.
75
+ [772.080 --> 777.080] No, they are binary right now they are binary so they are not RGB.
76
+ [777.080 --> 780.080] So we need to make a big difference.
77
+ [780.080 --> 785.080] If I introduce them as RGB, no, I don't think so.
78
+ [785.080 --> 794.080] The plan is so this is the so interestingly no one started that as a frame of like allocentric coding. So this is the first step.
79
+ [794.080 --> 803.080] But I know that in naturalistic images, the first image I showed you that when it was like actual object and we have data of human data.
80
+ [803.080 --> 811.080] So what is for our project is actually to then train them on naturalistic objects.
81
+ [811.080 --> 832.080] And so that would be the that would be the next step for our project. But as I said, the main or the main goal for us first now is that when we train the monkeys, we want to see what signal we should look for in the brain for type of coordinate phrase we should expect to find in the brain and which areas are the most probable to code those information for us.
82
+ [832.080 --> 835.080] That sounds very interesting.
83
+ [835.080 --> 843.080] And thank you very much for your talk again, if you have any more questions for Paris, please just type them in the question and answer panel.
84
+ [843.080 --> 845.080] Paris will be around to answer them.
85
+ [845.080 --> 850.080] And now we will move on to our speaker.
transcript/allocentric_1K3qsFYm0iM.txt ADDED
@@ -0,0 +1,232 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 11.400] I used to be rather childish, but then again, who wasn't?
2
+ [11.400 --> 15.920] We were old children once, but have you noticed that some adults never seem to really grow
3
+ [15.920 --> 16.920] up?
4
+ [16.920 --> 21.280] They remain childish, and I don't mean that in an innocent, carefree way, rather, they
5
+ [21.280 --> 26.000] can be petulant when they don't get what they want, and they're not the center of attention.
6
+ [26.400 --> 30.040] And we call them immature, because that's what they are.
7
+ [30.040 --> 32.720] They're big babies.
8
+ [32.720 --> 36.920] But we can all be big babies, especially when we're thinking about our own problems.
9
+ [36.920 --> 42.840] And I want to explain why that's the case, and what we can do about it.
10
+ [42.840 --> 45.240] We all start as egocentric children.
11
+ [45.240 --> 50.440] This was a term used by the great Swiss child psychologist Jean Piaget, to describe
12
+ [50.440 --> 56.600] the way the child sees the world with his self or his ego at the center looking outwards.
13
+ [56.600 --> 62.080] Not only does the egocentric child see the world from only this perspective, but they
14
+ [62.080 --> 66.000] assume that others share that same view as well.
15
+ [66.000 --> 73.120] Moreover, they also believe that others have the same ideas, the same beliefs, and thoughts.
16
+ [73.120 --> 77.200] So in theory of mind tasks, for example, when asked to consider what another person is
17
+ [77.200 --> 82.120] thinking or what's on their mind, the egocentric child will just extrapolate from what they
18
+ [82.120 --> 86.720] know and assume that others have exactly the same thoughts.
19
+ [86.720 --> 93.520] If I show an egocentric three-year-old, this tube of smarties, and ask what's inside
20
+ [93.520 --> 100.680] here, then in all likelihood they'll say Smarties, or M&Ms if they're American.
21
+ [100.680 --> 105.720] If I then open it up to reveal that, in fact, it contains not smarties, but pencils.
22
+ [106.400 --> 109.760] Well, first of all, children find this absolutely hilarious.
23
+ [109.760 --> 113.600] It really is that easy to entertain a three-year-old.
24
+ [113.600 --> 117.440] But if I ask them, what did you think was in here before I showed you?
25
+ [117.440 --> 123.200] The three-year-old will say, pencils, simply having forgotten just a moment ago that they
26
+ [123.200 --> 128.120] had a mistaken belief, or what we say in psychology, a false belief.
27
+ [128.120 --> 132.760] Now that's interesting, but what's more surprising is that if you ask them, what will another
28
+ [132.760 --> 137.760] child say, Mary, what will she say if I ask her what's inside here, having not seen
29
+ [137.760 --> 138.760] it?
30
+ [138.760 --> 143.080] Then the egocentric child says that Mary thinks there'll be pencils inside there, as if
31
+ [143.080 --> 147.880] somehow Mary knows the true state of the world and can read the child's own mind.
32
+ [147.880 --> 152.560] And we do know other egocentric adults who assume that everyone can know exactly what
33
+ [152.560 --> 154.960] they're thinking.
34
+ [154.960 --> 159.840] Or consider the classic Piagetian three-mountains task.
35
+ [159.840 --> 163.920] In that situation, a child sits at a table and on the opposite side is another child or
36
+ [163.920 --> 165.240] an adult.
37
+ [165.240 --> 170.440] And on the table, you have a model of three papier-mâché mountains of different sizes and different
38
+ [170.440 --> 175.400] shapes and different colors, each with a distinguishing landmark.
39
+ [175.400 --> 179.440] You can take photographs of the model from around the table at different angles, and then
40
+ [179.440 --> 183.240] lay these out as an array of photographs.
41
+ [183.240 --> 186.960] If you ask the child to pick the photograph that corresponds to the view that they can
42
+ [186.960 --> 189.920] see, they find this trivially easy.
43
+ [189.920 --> 193.640] But if you ask them to choose a photograph that corresponds to the view from the other
44
+ [193.640 --> 198.920] side of the table, what the other person can see, then the egocentric child persists in
45
+ [198.920 --> 203.880] selecting their own perspective again, ignoring what the other person must be seeing a mirror
46
+ [203.880 --> 205.880] image.
47
+ [205.880 --> 211.480] And finally, and most amusingly, don't be surprised if you're playing hide and seek
48
+ [211.480 --> 212.720] with an egocentric child.
49
+ [212.720 --> 217.240] And if they run over to hide by standing in the corner and picking up a waste paper basket
50
+ [217.240 --> 221.240] and then pulling it over their head, or taking a blanket and throwing it over their head, or simply
51
+ [221.240 --> 225.640] standing in the middle of the room and covering their eyes with their hands.
52
+ [225.640 --> 226.640] Why?
53
+ [226.640 --> 235.480] Well, they think if they can't see you, then, so they reason, you can't see them.
54
+ [235.480 --> 241.040] Now if you are an individual who thinks that everyone sees the world the same ways you
55
+ [241.040 --> 246.000] do and has the same thoughts and ideas and beliefs, then that's going to present considerable
56
+ [246.000 --> 251.680] obstacles to being accepted and getting along with others who don't share the same ideas, beliefs,
57
+ [251.680 --> 253.680] or views of the world.
58
+ [253.680 --> 260.440] In order to be accepted and socialized, a child has to put aside their egocentric bias and
59
+ [260.440 --> 265.200] adopt the more allocentric perspective, capable of seeing things from a different angle as
60
+ [265.200 --> 267.360] it were.
61
+ [267.360 --> 271.960] Typically developing children do get accepted, they do become socialized, they do cooperate
62
+ [271.960 --> 274.240] and communicate with others.
63
+ [274.240 --> 280.320] But it's not entirely clear that this egocentric bias ever entirely goes away.
64
+ [280.320 --> 284.560] Like many thoughts and behaviors I've studied over the years, I think that these become
65
+ [284.560 --> 288.160] dormant rather than eliminated, and they're always with us.
66
+ [288.160 --> 293.440] They're a little bit like the infantile reflexes that we're all born with.
67
+ [293.440 --> 299.360] These are motor reflexes such as the grasp reflex or other reflexes which serve a purpose.
68
+ [299.360 --> 304.200] But as we develop and age, they disappear or at least they appear to disappear.
69
+ [304.200 --> 305.960] But in fact they don't.
70
+ [305.960 --> 311.480] Rather, they become suppressed or inhibited by the development of the cortical mechanisms
71
+ [311.480 --> 316.480] of the brain, which mature much later in development, in particular the prefrontal
72
+ [316.480 --> 318.480] cortical areas.
73
+ [318.480 --> 323.680] These areas exert executive function or control and they operate to self-regulate our thoughts
74
+ [323.680 --> 325.920] and behaviors.
75
+ [325.920 --> 330.840] So if you're an adult, unfortunately if you have some cortical damage, such as being in
76
+ [330.840 --> 335.440] a coma, you can see the reemergence of these early ways of thinking and behaving.
77
+ [335.440 --> 341.240] You can see the reemergence of the infantile grasp reflex for example.
78
+ [341.240 --> 347.400] Now I think this explains why in very young children who don't have mature cortical mechanisms,
79
+ [347.400 --> 353.240] why they become slaves to their impulses and urges, which is why they have temper tantrums
80
+ [353.240 --> 356.080] or have behavioral meltdown.
81
+ [356.080 --> 358.360] But it's not just children who can have a behavioral meltdown.
82
+ [358.360 --> 364.080] We've heard about CEOs in the boardroom throwing their toys out of the pram, and we've recently
83
+ [364.080 --> 368.880] seen some very famous celebrities behaving in very childish ways in public arenas.
84
+ [368.880 --> 371.960] In fact, we can all behave like that.
85
+ [371.960 --> 374.920] You simply have to put someone under stress.
86
+ [374.920 --> 380.240] So for example, if you put adults in a situation where they don't think they have control, then
87
+ [380.240 --> 385.400] they perform much worse on those theory-of-mind tasks I mentioned and even versions of the three
88
+ [385.400 --> 386.400] mountains task.
89
+ [386.400 --> 391.920] In short, under stress we regress.
90
+ [391.920 --> 397.720] Now I think this explains why we have such long childhoods, the longest proportional childhoods
91
+ [397.720 --> 399.920] of any animal on the planet.
92
+ [399.920 --> 405.400] And of course our childhoods were shorter centuries ago getting married at 14 and working
93
+ [405.400 --> 406.400] at 8.
94
+ [406.400 --> 409.440] But that's because life expectancy was much shorter.
95
+ [409.440 --> 414.480] Today with our much longer life spans of 80 years or so, nevertheless we still spend about
96
+ [414.480 --> 421.080] a fifth of that life span learning to become independent adults.
97
+ [421.080 --> 426.360] Learning to become less dependent on our parents and connected with others.
98
+ [426.360 --> 430.080] And also interdependent.
99
+ [430.080 --> 434.480] Because that's how our species developed, a highly social animal that learns from others
100
+ [434.480 --> 435.880] around us.
101
+ [435.880 --> 441.280] And this is why we are so compelled to be part of a group and why the prospect of being
102
+ [441.280 --> 447.360] isolated or excluded is so emotionally damaging to most people.
103
+ [447.360 --> 449.680] But it's not just emotionally damaging.
104
+ [449.680 --> 452.000] It's also physically very dangerous for us.
105
+ [452.000 --> 457.520] For example, loneliness is well recognized as a contributing factor to the earlier death
106
+ [457.520 --> 459.520] in many elderly people.
107
+ [459.520 --> 466.240] The mortality risk associated with loneliness is higher than that of moderate smoking and even obesity.
108
+ [466.240 --> 468.520] So it's imperative that we're accepted.
109
+ [468.520 --> 472.440] It's imperative that we form these social relationships.
110
+ [472.440 --> 477.120] Now you might imagine that with the development of the internet and the popularity of social
111
+ [477.120 --> 482.240] media, the opportunities for forming social relationships are enhanced.
112
+ [482.240 --> 487.840] But in many ways social media has become anti-social media.
113
+ [487.840 --> 493.000] And the reason is because it's making us more egocentric again.
114
+ [493.000 --> 495.080] So hands up and be honest now.
115
+ [495.080 --> 497.680] Hands up if you've ever Googled yourself.
116
+ [497.680 --> 502.520] Okay, hands up if you've ever taken a selfie.
117
+ [502.520 --> 507.400] And the reason we do this, of course, is because of the need to be recognized, the need
118
+ [507.400 --> 509.480] to be validated.
119
+ [509.480 --> 512.360] We do this because we don't want to be excluded.
120
+ [512.360 --> 516.240] We become hypersensitive to the possibility that maybe we're missing out on things, which
121
+ [516.240 --> 521.600] leads to this well-known phenomenon of FOMO, the fear of missing out.
122
+ [521.600 --> 528.000] And when that prospect emerges, we become insecure and we become vulnerable to the curse
123
+ [528.000 --> 530.040] of the self.
124
+ [530.040 --> 535.640] Because when we're focused on ourselves, we have a very egocentric view and we amplify
125
+ [535.640 --> 537.480] and blow everything out of proportion.
126
+ [537.480 --> 541.680] Our problems just seem immense in comparison to everything else.
127
+ [541.680 --> 546.800] And so in order to become a happier person, we've got to learn to adopt a different perspective
128
+ [546.800 --> 551.320] and put a distance between ourselves and our problems.
129
+ [551.320 --> 555.720] And I want to demonstrate that now with a little bit of audience participation.
130
+ [555.720 --> 558.480] I want you all to think of a problem.
131
+ [558.480 --> 563.040] Not a global problem, but a problem which is specific to you, a personal problem.
132
+ [563.040 --> 565.080] Maybe it's financial.
133
+ [565.080 --> 567.920] Maybe somebody said something horrible to you.
134
+ [567.920 --> 572.760] Maybe it was a relationship which isn't quite working out.
135
+ [572.760 --> 575.600] Whatever it is, I want you to talk about that problem in a moment.
136
+ [575.600 --> 579.400] But I know this is a public auditorium, so I don't want you to speak out loud.
137
+ [579.400 --> 582.120] Rather, I want you to use your inner voice.
138
+ [582.120 --> 585.320] Or if you're watching this as a recorded video, you can speak out loud.
139
+ [585.320 --> 589.640] But today, just use your inner voice to talk about the problem in the following way,
140
+ [589.640 --> 591.640] with the following statements.
141
+ [591.640 --> 597.040] Because I want you to refer to your problem using the first person terms of I and me.
142
+ [597.040 --> 600.440] So I'll give myself as an example.
143
+ [600.440 --> 609.400] I am worried about my TEDx talk because I don't think the audience is enjoying it and that
144
+ [609.400 --> 612.520] upsets me. Your turn.
145
+ [612.520 --> 616.720] I am thinking about whatever my problem is because of the consequences and this upsets
146
+ [616.720 --> 617.720] me.
147
+ [617.720 --> 620.560] Do that now.
148
+ [620.560 --> 626.000] Okay, so how does that make you feel?
149
+ [626.000 --> 629.600] Probably not too great because first of all, I've just reminded you of a problem that
150
+ [629.600 --> 631.000] you've probably forgotten about.
151
+ [631.000 --> 636.000] I've made you focus on it and I've reminded you and made you recognize and acknowledge
152
+ [636.000 --> 638.760] how unhappy it's going to make you.
153
+ [638.760 --> 640.280] What a jerk I am.
154
+ [640.280 --> 642.920] But don't worry, I have a quick fix.
155
+ [642.920 --> 647.240] I want you to do the same thing again, but this time don't use any first person terms.
156
+ [647.240 --> 653.240] I want you to use non-first person terms of he and she or him and her.
157
+ [653.240 --> 655.800] And most importantly, I want you to use your own name.
158
+ [655.800 --> 664.160] So going back to my example, Bruce is worried about his TEDx talk because he thinks that
159
+ [664.160 --> 668.200] the audience doesn't like it and this upsets him.
160
+ [668.200 --> 669.760] Your turn with your problem.
161
+ [669.760 --> 672.760] Do that now.
162
+ [672.760 --> 678.040] Okay, so how does that make you feel in comparison?
163
+ [678.040 --> 684.080] Compared to talking about it in the first person condition, which did you find less stressful?
164
+ [684.080 --> 687.920] Put your hands up if you thought talking about your problem in the first person was less
165
+ [687.920 --> 688.920] stressful.
166
+ [688.920 --> 689.920] Okay.
167
+ [689.920 --> 695.400] And put your hands up if you thought talking about your problem in the third person was
168
+ [695.400 --> 696.880] less stressful.
169
+ [696.880 --> 697.880] Great.
170
+ [697.880 --> 701.840] And this is typically what we find: around about eight or nine people in the audiences
171
+ [701.840 --> 706.200] I've tried this exercise with find that talking about and reflecting upon your problem in
172
+ [706.200 --> 711.800] the third person is somehow a lot less distressing than talking about it in the first person.
173
+ [711.800 --> 713.640] This is called psychological distancing.
174
+ [713.640 --> 718.840] It's a technique that's being explored by the psychologist Ethan Kross as a way of
175
+ [718.840 --> 723.520] modulating your emotional reaction, regulating it.
176
+ [723.520 --> 726.400] In fact, it can be used to prepare for stressful situations.
177
+ [726.400 --> 732.160] In one of his studies, he sprang on students an unexpected task of speaking in
178
+ [732.160 --> 736.000] public and they were told this was going to be a really important presentation and they
179
+ [736.000 --> 737.880] were going to be judged on it.
180
+ [737.880 --> 742.600] And for one half of the students, he told them to prepare for it using the first person
181
+ [742.600 --> 744.200] terms.
182
+ [744.200 --> 748.800] And in the other group, he asked them to reflect upon the upcoming talk using third person
183
+ [748.800 --> 749.800] terms.
184
+ [750.400 --> 754.640] And what he found was that on self-reports, the students who had prepared using the third
185
+ [754.640 --> 757.920] person found the experience actually a lot less stressful.
186
+ [757.920 --> 763.320] But what was more interesting was that independent judges who didn't know what condition each
187
+ [763.320 --> 769.120] student had been entered into also rated the students who had prepared using the third person
188
+ [769.120 --> 773.800] as coming over as much more relaxed, more confident and convincing.
189
+ [773.800 --> 776.040] So this is not just good for the individual.
190
+ [776.040 --> 780.280] It provides you with the skills to present yourself.
191
+ [780.280 --> 782.000] So what's going on here?
192
+ [782.000 --> 786.280] Well, we normally never speak about ourselves in the third person.
193
+ [786.280 --> 791.080] The only people that do that are royalty when they say we are not amused.
194
+ [791.080 --> 795.680] Rather, we use the first person terms of I because that's how we experience the stream
195
+ [795.680 --> 799.120] of consciousness from the first person perspective.
196
+ [799.120 --> 803.680] When we're referring to another person, then of course we use the third person.
197
+ [803.680 --> 808.680] So when you speak about yourself in the third person, this automatically transposes you
198
+ [808.680 --> 812.760] from an egocentric perspective into one which is allocentric.
199
+ [812.760 --> 816.960] It puts a distance between yourself and your problem.
200
+ [816.960 --> 821.200] It's a little bit like talking to a friend or consoling a colleague.
201
+ [821.200 --> 825.120] Now you might feel bad for their problem, but you don't feel anywhere near as bad as
202
+ [825.120 --> 826.840] if you were the person experiencing it.
203
+ [826.840 --> 831.400] So this psychological distancing helps you with this.
204
+ [831.400 --> 837.720] Now I've been teaching a course called The Science of Happiness here at the University
205
+ [837.720 --> 838.960] of Bristol.
206
+ [838.960 --> 844.840] And in this course we cover the theory behind what makes us happy, some of the psychology,
207
+ [844.840 --> 851.600] some of the physiology, but we also get the students to practice positive psychology interventions.
208
+ [851.600 --> 856.120] And I've come to realize that all of these interventions to a greater or lesser extent
209
+ [856.120 --> 860.960] work, probably because they introduced this distancing effect.
210
+ [860.960 --> 865.120] In other words, they shift us from a very egocentric to a more allocentric perspective,
211
+ [865.120 --> 867.840] either directly or indirectly.
212
+ [867.840 --> 873.120] Directly in the case of, for example, doing an act of kindness or some form of altruism
213
+ [873.120 --> 876.480] where your attention is directed towards someone else.
214
+ [876.480 --> 883.280] But even more contemplative acts, such as meditation or going for a walk in nature or
215
+ [883.280 --> 890.000] experiencing awe, I think these work because they shift this egocentric view on our problems
216
+ [890.000 --> 893.240] and directed out towards the world around us.
217
+ [893.240 --> 896.720] And this puts a distance between us.
218
+ [896.720 --> 901.360] You know, one of the most awesome things you can do apparently is to go into space and
219
+ [901.360 --> 903.280] look back at our planet.
220
+ [903.280 --> 908.760] It's said to produce the most profound emotional experience called the overview effect.
221
+ [908.760 --> 913.640] And I think this is the ultimate in psychological distancing because if you're in the International
222
+ [913.640 --> 918.480] Space Station orbiting the planet, then your problems back on Earth are 250 miles away.
223
+ [918.480 --> 923.080] And if you look in the other direction, you can see the vast expanse of the universe.
224
+ [923.080 --> 925.880] So everything is put into perspective.
225
+ [925.880 --> 932.120] To paraphrase the astronaut, Ed Gibson, when you look back at your planet, then your life
226
+ [932.120 --> 938.200] and your concerns seem diminutive in comparison to the size of the universe.
227
+ [938.200 --> 943.040] Now I doubt I will ever make it into space, but maybe some of you will.
228
+ [943.040 --> 948.120] But for the rest of us back down on Earth, we can all experience a better, healthier
229
+ [948.120 --> 949.480] mental life.
230
+ [949.480 --> 954.080] We simply have to remember to try and shift our perspective to become more allocentric.
231
+ [954.080 --> 958.360] We have to grow up and stop being big babies.
232
+ [958.360 --> 959.360] Thank you.
transcript/allocentric_261LDpwV_TY.txt ADDED
@@ -0,0 +1,8 @@
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 5.220] We performed a mixed-design study to compare the effect of horizontal and vertical fields
2
+ [5.220 --> 10.200] of view on egocentric distance perception in four different realistic virtual environments.
3
+ [10.200 --> 14.700] The results indicated more accurate distance judgments with larger horizontal fields of
4
+ [14.700 --> 18.120] view, with no significant effect of vertical field of view.
5
+ [18.120 --> 22.280] More accurate distance judgment in indoor environments compared to outdoor environments
6
+ [22.280 --> 23.280] was observed.
7
+ [23.280 --> 28.060] Also, participants judged distances more accurately in colored environments versus
8
+ [28.060 --> 29.060] uncolored environments.
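As a concrete illustration, a mixed design like this (field of view varied within subjects, other factors between subjects) is commonly analyzed with a mixed ANOVA. Below is a minimal sketch using the pingouin library; the column names, factor coding, and numbers are invented for illustration and are not taken from the study.

```python
# Minimal sketch of a mixed-design analysis: one within-subjects factor
# (horizontal field of view) and one between-subjects factor (environment).
# All values and column names here are fabricated placeholders.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
rows = []
for subj in range(24):
    env = "indoor" if subj % 2 == 0 else "outdoor"   # between-subjects
    for fov in ("narrow", "wide"):                   # within-subjects
        # judged/actual distance ratio; 1.0 = perfectly accurate
        acc = rng.normal(0.95 if fov == "wide" else 0.85, 0.05)
        rows.append({"subject": subj, "environment": env,
                     "fov": fov, "accuracy": acc})

df = pd.DataFrame(rows)
aov = pg.mixed_anova(data=df, dv="accuracy", within="fov",
                     subject="subject", between="environment")
print(aov[["Source", "F", "p-unc"]])
```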
transcript/allocentric_3c2MJ71DEWg.txt ADDED
@@ -0,0 +1,416 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 8.500] Thank you.
2
+ [8.500 --> 11.820] So first I want to thank the organizers of this great summit.
3
+ [11.820 --> 13.420] It's been a fantastic experience for me.
4
+ [13.420 --> 19.740] I think that I am learning at least as much and probably more than I plan to tell you.
5
+ [19.740 --> 20.740] So I'm making out.
6
+ [20.740 --> 22.620] I hope you guys are liking it too.
7
+ [22.620 --> 24.620] Anyway, we are going to shift gears.
8
+ [24.620 --> 27.780] I am a clinical cognitive and behavioral neurologist.
9
+ [28.780 --> 30.780] There will be much less talk about mechanism.
10
+ [30.780 --> 32.780] It will also be something of a smorgasbord talk.
11
+ [32.780 --> 40.780] Because in recent years we have seen a real explosion of interest in clinical applications of TDCS.
12
+ [40.780 --> 50.780] And really there are far too many for me to try to address all of the ways in which clinicians have tried to use this technology in the last few years.
13
+ [50.780 --> 60.780] So I'm really going to cherry pick a few topics that either I do some work in or I'm interested in or I think seem especially promising.
14
+ [60.780 --> 62.780] So I have nothing to disclose.
15
+ [62.780 --> 66.780] I really should learn to be more entrepreneurial about this.
16
+ [66.780 --> 71.780] Alright, so the way I frame this is just by asking a few core questions.
17
+ [71.780 --> 76.780] Literally, who, where, why, and when about using TDCS clinically?
18
+ [76.780 --> 77.780] So let's start with why.
19
+ [77.780 --> 82.780] Why would you think to use TDCS in a clinical population?
20
+ [82.780 --> 87.780] So we will spend some time thinking about why it might be a good therapy, what are some of its advantages?
21
+ [87.780 --> 90.780] And then we will talk a little bit about the where and how.
22
+ [90.780 --> 96.780] How do you go about stimulating a patient, what kinds of parameters would be important to think about?
23
+ [96.780 --> 99.780] You have already heard about them, but we will sort of recap that.
24
+ [99.780 --> 101.780] And then we will get to the heart of our discussion.
25
+ [101.780 --> 103.780] A little bit of what?
26
+ [103.780 --> 106.780] What diseases are of interest to us at least at this time?
27
+ [106.780 --> 110.780] And here I'll focus on, like I said, I'm cherry picking a little bit.
28
+ [110.780 --> 117.780] I'll focus on four areas in particular, cognitive remediation, and I'll sort of extend that to cognitive enhancement a little bit,
29
+ [117.780 --> 120.780] although we'll have another talk about that a little bit later in the day.
30
+ [120.780 --> 124.780] And that'll link intimately with cognitive neuroscience as well.
31
+ [124.780 --> 127.780] We'll talk about TDCS to manipulate different cognitive domains.
32
+ [127.780 --> 136.780] And then we'll focus on stroke recovery and a few particular kinds of deficits within stroke and what TDCS is being used for.
33
+ [136.780 --> 146.780] And then I'll just say a couple of words about chronic pain and depression because those are a couple of areas where there's been some rising interest in using TDCS.
34
+ [146.780 --> 155.780] And then finally, I just want to touch very briefly as a clinician who has individuals emailing him and calling him and sometimes texting him,
35
+ [155.780 --> 164.780] asking if they should get TDCS to improve various aspects of either their dysfunction or increasingly their normal function.
36
+ [164.780 --> 166.780] I want to talk a little bit about the who of TDCS.
37
+ [166.780 --> 173.780] And this sort of enters into clinical ethics, who gets stimulated, and who stimulates these patients.
38
+ [173.780 --> 182.780] And so we'll talk a little bit about that, especially as self-made or do it yourself TDCS is becoming more popular.
39
+ [182.780 --> 187.780] Alright, so first, our rationale for TDCS as a therapy.
40
+ [187.780 --> 195.780] Well, there are a number of reasons why one would want to use TDCS as a clinical therapy.
41
+ [195.780 --> 197.780] First of all, you've heard a lot.
42
+ [197.780 --> 203.780] I'll recap in just one slide, but you've heard a lot about the relevant mechanisms of action.
43
+ [203.780 --> 209.780] There are very good reasons to believe that it's doing a lot at the neural level.
44
+ [209.780 --> 212.780] And so, you know, first of all, we believe it does something.
45
+ [212.780 --> 217.780] Secondly, there are a lot of practical advantages with respect to clinical implementation,
46
+ [217.780 --> 224.780] and also those practical advantages afford some flexibility with respect to clinical approaches.
47
+ [224.780 --> 230.780] So without belaboring this, because you've heard two talks really touching on this topic in detail,
48
+ [230.780 --> 238.780] there are a number of mechanisms of action TDCS that have been observed or speculated that give us confidence that it has activity at the neural level
49
+ [238.780 --> 242.780] that can be translated into clinical applications.
50
+ [242.780 --> 246.780] So you've heard a lot about how it modifies the synaptic micro-environment,
51
+ [246.780 --> 254.780] and we've touched on how it induces what we think are LTP-like effects in neural systems.
52
+ [254.780 --> 264.780] We may have other prolonged neurochemical changes, and we actually heard a little bit about how it has complex interactions with other neurotransmitter systems.
53
+ [264.780 --> 272.780] And then we did hear from Michael H. G. a little bit about how the effects of TDCS are not only local, not only at the site of stimulation,
54
+ [272.780 --> 276.780] but there are also connectivity-driven remote effects.
55
+ [276.780 --> 280.780] So you're not just stimulating at the site underneath the pads,
56
+ [280.780 --> 284.780] but you're stimulating other areas that are connected to them through synaptic connections,
57
+ [284.780 --> 289.780] or through entire networks of neurons devoted to specific operations.
58
+ [289.780 --> 293.780] And then there, one thing that wasn't really touched on,
59
+ [293.780 --> 301.780] but has been discussed a little bit in the literature, is that there's a potential for non-neuronal structures to be affected in ways that might affect neural activity as well.
60
+ [301.780 --> 307.780] So for example, electrical effects on vasomotor tone and cerebral blood vessels, that kind of thing,
61
+ [307.780 --> 315.780] which hasn't really fully been explored to the point of understanding how it affects things, but is another potential mechanism.
62
+ [315.780 --> 323.780] So now that we have the sense, you know, you've had sort of two hours to get your head around the idea,
63
+ [323.780 --> 328.780] that TDCS really does something in the brain, and that something could really be,
64
+ [328.780 --> 331.780] have a significant effect on downstream behaviors.
65
+ [331.780 --> 335.780] Let's talk a little bit about some of the practical advantages of the technology.
66
+ [335.780 --> 339.780] First, I want to talk about safety and its tolerability.
67
+ [339.780 --> 343.780] It is very well tolerated. It is very safe to date.
68
+ [343.780 --> 351.780] No serious adverse effects from TDCS have been demonstrated in normal or clinical populations.
69
+ [351.780 --> 355.780] And you can look at a number of safety papers with respect to this.
70
+ [355.780 --> 359.780] I chose this paper primarily because we wrote it.
71
+ [359.780 --> 364.780] And so this is over 130 subjects, a little over 270 TDCS sessions.
72
+ [364.780 --> 369.780] We had no serious adverse effects. All the reported side effects were mild.
73
+ [369.780 --> 374.780] They included at the time of stimulation, tingling, itching, burning, a little bit of discomfort.
74
+ [374.780 --> 381.780] They went away rapidly after stimulation where subjects reported only the mildest of effects.
75
+ [381.780 --> 388.780] Another advantage, just a practical, logistical advantage of doing TDCS, is that it's portable.
76
+ [388.780 --> 393.780] If you were playing 20 questions and you were trying to guess at a TDCS unit,
77
+ [393.780 --> 396.780] and you asked the question, is it smaller than a bread box, the answer is yes.
78
+ [396.780 --> 399.780] It's very convenient in that way.
79
+ [399.780 --> 403.780] That convenience lends itself to pairing with other therapies.
80
+ [403.780 --> 415.780] Once you have the unit strapped to your head, you can pair it with other therapies like occupational therapy, physical therapy, speech therapy, different behavioral and cognitive tasks.
81
+ [415.780 --> 420.780] In those ways, it is quite amenable to co-intervention.
82
+ [420.780 --> 424.780] Then of course, the price point is right.
83
+ [424.780 --> 430.780] The costs range from the hundreds of dollars to, if you get the unit with all the bells and whistles,
84
+ [430.780 --> 434.780] I mean we're really talking about in the thousands of dollars, but not too much more.
85
+ [434.780 --> 441.780] Relative to medical technologies in general, this is a very inexpensive device.
86
+ [441.780 --> 448.780] For better or for worse, it can be implemented without advanced medical training.
87
+ [448.780 --> 455.780] While building the next generation of units might be rocket science,
88
+ [455.780 --> 458.780] turning on this generation of ones and using them is not.
89
+ [458.780 --> 468.780] In that way, it has a number of practical advantages that lend themselves to its clinical implementation.
90
+ [468.780 --> 476.780] We can think about using TDCS in two categories of ways in clinical populations.
91
+ [476.780 --> 480.780] The first is as a type of replacement therapy.
92
+ [480.780 --> 489.780] By that, I mean that there may be some kinds of treatments where the side effects of whatever medication you're taking are intolerable.
93
+ [489.780 --> 494.780] Or we talked about how inexpensive, portable and accessible the technology is,
94
+ [494.780 --> 501.780] it might be the case that the standard of care therapy in a particular place, say someplace in the developing world,
95
+ [501.780 --> 505.780] maybe unavailable or perhaps too expensive to implement.
96
+ [505.780 --> 512.780] Well, a device as portable and easily applied as TDCS can actually stand in in that situation.
97
+ [512.780 --> 520.780] The more common way in which TDCS ends up being explored in clinical populations is as a type of augmentation,
98
+ [520.780 --> 526.780] that idea being that it can be used to enhance the efficacy of existing therapies,
99
+ [526.780 --> 536.780] whether those therapies be medication therapies or some type of behavioral treatment that can be paired with stimulation.
100
+ [536.780 --> 541.780] Just to talk a little bit, and you've heard in our first two talks a lot of introduction about this point,
101
+ [541.780 --> 549.780] so just touch on it briefly, just to talk a little bit about the parameters that would be relevant for a clinical study.
102
+ [549.780 --> 554.780] I think one of the first and most important of them, if you haven't gotten this impression already,
103
+ [554.780 --> 557.780] is the placement of the electrodes.
104
+ [557.780 --> 563.780] This might seem, by now, two talks into it, like a self-evident point to this audience.
105
+ [563.780 --> 571.780] However, if you look out in the literature as well as in the lay literature of individuals who are starting to apply TDCS to themselves,
106
+ [571.780 --> 576.780] there's a certain indiscriminate quality with respect to the specific mechanisms,
107
+ [576.780 --> 580.780] what type of cognition or what type of mental operation one is trying to manipulate,
108
+ [580.780 --> 585.780] or what type of function one is trying to manipulate with respect to where one is putting the electrode.
109
+ [585.780 --> 589.780] The first point is that the electrode placement counts.
110
+ [589.780 --> 595.780] Just as an example of this, these are two papers from our lab.
111
+ [595.780 --> 600.780] The only point here I want to demonstrate, and this is modeling done with Marom's help,
112
+ [600.780 --> 606.780] is that placement of electrodes really has an effect on where we think the current is distributed.
113
+ [606.780 --> 612.780] Here is placement of the electrodes over the dorsolateral prefrontal cortex with a cathode,
114
+ [612.780 --> 615.780] and over the mastoid on the opposite side for the reference electrode,
115
+ [615.780 --> 624.780] yields us this distribution of current; compare that to placement of the anode and the cathode over the bilateral temporoparietal junction,
116
+ [624.780 --> 629.780] and it just yields us a different estimate of where the maximal current is.
117
+ [629.780 --> 634.780] It really does seem to matter with respect to where you put your electrodes,
118
+ [634.780 --> 638.780] including, and we've touched on this point a little bit already, the reference electrode.
119
+ [638.780 --> 646.780] This often neglected electrode that can be placed either extra-cephalically,
120
+ [646.780 --> 651.780] sometimes it's placed on the shoulder or behind the mastoid or in various other places;
121
+ [651.780 --> 655.780] the other electrode is sometimes placed over the forehead, very common montage,
122
+ [655.780 --> 663.780] and there's evidence to suggest that the estimated distribution of current varies pretty strongly,
123
+ [663.780 --> 666.780] depending on where you place that reference electrode.
124
+ [666.780 --> 672.780] So the first point in terms of clinical parameters for stimulation is what is it that you're trying to manipulate,
125
+ [672.780 --> 675.780] and where are you putting your electrodes to get there?
126
+ [675.780 --> 679.780] I'll also say, again using another set of models of our own,
127
+ [679.780 --> 682.780] that electrode size matters.
128
+ [682.780 --> 685.780] This is actually a paper of his from a few years ago,
129
+ [685.780 --> 694.780] and what it demonstrates is there's two different sizes of reference electrode placed over the supraorbital space,
130
+ [695.780 --> 699.780] one very large, one very small; we've talked about this with Nitsche in the first talk,
131
+ [699.780 --> 705.780] that it makes a big difference with respect to where we think the current is distributed.
132
+ [705.780 --> 712.780] So electrode size, electrode placement, current distribution, those things are really of critical importance.
133
+ [712.780 --> 720.780] And then of course, I think it goes without saying that the intensity of stimulation is a core parameter that needs to be addressed,
134
+ [720.780 --> 726.780] and it's one that I think is being explored and needs to be explored in a somewhat more nuanced way.
135
+ [726.780 --> 733.780] And we heard in our first talk, and just emphasize it, that the effects of stimulation at one intensity,
136
+ [733.780 --> 739.780] that the relationship between effect and intensity may not scale linearly.
137
+ [739.780 --> 744.780] So the effect that you expect at two milliamps with an electrode of a certain polarity,
138
+ [744.780 --> 751.780] might not just be an exaggeration of the effect that you received at one milliamp with the electrode of that polarity.
139
+ [751.780 --> 755.780] So that's a second dimension that is critical to think about in these kinds of studies.
140
+ [755.780 --> 761.780] And then of course, it goes without saying how long you're stimulating, and then how often you're stimulating.
141
+ [761.780 --> 764.780] We've touched on this point as well in our mechanisms talk,
142
+ [764.780 --> 771.780] but the one thing I'll say is that for clinical purposes, it is pretty clear that multiple sessions,
143
+ [771.780 --> 782.780] if you're trying to induce some lasting effect in a clinical population, it appears to be the case that the studies that do that,
144
+ [782.780 --> 790.780] that really induce that kind of enduring effect, have multiple sessions of stimulation paired with some type of behavior typically.
145
+ [790.780 --> 796.780] Alright, and so, as I told you, the placement of the electrodes is important.
146
+ [796.780 --> 802.780] Having some understanding of where we think the current is going is important.
147
+ [802.780 --> 808.780] And so that underscores the growing importance of having some type of way of predicting.
148
+ [808.780 --> 813.780] And one technique that is emerging, I think with a growing number of clinicians who are interested,
149
+ [813.780 --> 820.780] or clinical researchers who are interested in TDCS, is to model these kinds of current flows.
150
+ [820.780 --> 827.780] And so, again, this is work that Marom has been very pivotal in, so I'm just showing another one of his slides.
151
+ [827.780 --> 834.780] And so, what it demonstrates here is that you can estimate the flow of current between electrodes,
152
+ [834.780 --> 837.780] and again, it can be used to predict where you're going.
153
+ [837.780 --> 854.780] But also, what's illustrated here is that with conventional stimulation, you can often have a distribution of current that is much more distributed than what you might have anticipated if you thought that the only place where you were going to receive maximum current was under the pads.
154
+ [854.780 --> 861.780] Alright, and so here, for instance, in this pretty typical montage of one electrode over the motor cortex and one over the frontal pole,
155
+ [861.780 --> 868.780] you've got this distribution of current, which is bilateral, and is sort of a swath across much of the frontal lobes.
156
+ [868.780 --> 881.780] And so, not to, again, sort of point to work that someone else is doing here, but in the same room, there is a development underway of these high definition TDCS systems.
157
+ [881.780 --> 900.780] And so, as someone focusing on the clinical aspects, I would be remiss if I didn't mention that systems are underway, or are being used and implemented, that really aim to focus stimulation in a way that has higher spatial resolution.
158
+ [901.780 --> 911.780] Alright, so let's shift gears now. Stop talking about why TDCS is so great to use and how we should use it, and talk about what it is that we are starting to use it for.
159
+ [911.780 --> 929.780] And I want to start by focusing on studies in cognition, because I think that TDCS is one technology that has allowed cognitive neuroscience to have a much more translational bent, you know, studies of cognition that actually end up having practical clinical implications.
160
+ [930.780 --> 939.780] And so, here I'm going to focus on just a few of the many different types of cognitive studies that have been done using TDCS, and I'm certain you're going to hear about more later.
161
+ [939.780 --> 945.780] I'm going to focus just a little bit on language, learning and memory, mathematical reasoning, and executive functions.
162
+ [946.780 --> 957.780] So, in the domain of language, there have been a number of studies using TDCS in healthy individuals that have some degree of applicability potentially to clinical populations.
163
+ [957.780 --> 972.780] So, they include improved acquisition of novel names, so you make up sort of generate novel names for objects, you teach individuals those names, and it turns out that individuals getting stimulation over the peri-silbian language circuit will acquire those names more robustly.
164
+ [972.780 --> 986.780] The same with a novel grammar, so what's illustrated here in this figure is a study in which the investigators created a new grammar, taught this new grammar to individuals, and they were getting left hemisphere stimulation.
165
+ [986.780 --> 1000.780] And what they found was that after they were learning with TDCS, they were better able to detect errors of the new grammar, grammatical errors in this new grammar they had never known before, when getting TDCS compared to Sham.
166
+ [1000.780 --> 1016.780] Also, TDCS has been used to increase verbal fluency. In our lab, this is where a couple of years ago from a fellow at the time, who's now faculty member at Georgetown, Peter Terkel-Talb, TDCS has been used at least transiently to enhance reading abilities.
167
+ [1016.780 --> 1023.780] So, this was a study in which we studied normal subjects, and they received anodil stimulation over the left.
168
+ [1023.780 --> 1030.780] This is the posterior temporal cortex or temporal parietal junction, however you want to think about it, cathode over the right.
169
+ [1030.780 --> 1041.780] And what we found was that they had an increase, these are again normal individuals getting TDCS in reading efficiency on standardized measures of reading.
170
+ [1041.780 --> 1051.780] And so, these were, again, healthy individuals. This is an area of the brain that's been implicated in visual word form recognition and the ability to read.
171
+ [1051.780 --> 1059.780] And so, you know, we're able to manipulate this in a way that we think might have some applicability down the road to populations like individuals with developmental dyslexia.
172
+ [1060.780 --> 1069.780] This was a study that was alluded to a little bit earlier in the session today, a study by recent colleagues involving motor learning.
173
+ [1069.780 --> 1083.780] And here, TDCS was applied. Subjects were learning this complicated motor task. I had to sort of play this little video game with a force transducer where they would squeeze and the little sensor here or the little indicator here would jut out.
174
+ [1083.780 --> 1090.780] Depending on how hard they squeezed and they had to sort of go in a very complicated pattern or maybe two then back then one then back then three then back.
175
+ [1090.780 --> 1098.780] So, they learned this very complicated task. They used TMS to identify where the motor cortex was and then stuck TDCS there.
176
+ [1098.780 --> 1107.780] And after five days of training, they acquired the skill more rapidly than subjects who received sham.
177
+ [1107.780 --> 1119.780] And after about three months of testing, right, this is day 85 out here, they still had acquisition of this or retention of this motor skill better than subjects who had received sham TDCS.
178
+ [1119.780 --> 1128.780] So, again, work in normal populations that points to work that could be done in the reacquisition of skills in patient populations.
179
+ [1128.780 --> 1145.780] I'll focus for a moment on a couple of investigations that look at frontal lobe functions because there are a lot of patient populations with cognitive deficits where we think that some intervention on executive function in frontal lobe abilities could be useful.
180
+ [1145.780 --> 1154.780] One that I'll focus on first is again for our group and it involves mental flexibility with common objects.
181
+ [1154.780 --> 1161.780] So, what we asked the subjects to do here was to provide novel uses for common objects.
182
+ [1161.780 --> 1172.780] So, for example, you would see this object, right, this very clunky looking shoe and a person might say, well, well, if you made me think of something new to do with that, I might use it to hammer in nails.
183
+ [1172.780 --> 1182.780] I mean, use it as a hammer. Okay, great. So, what we did was we actually inhibited or we used cathodal TDCS over the frontal lobe.
184
+ [1182.780 --> 1196.780] And what we found was that using cathodal TDCS actually decreased the response time. Individuals were able to respond more quickly as to what alternative use they would come up with for this kind of object.
185
+ [1196.780 --> 1207.780] And moreover, they were much less likely to default. In other words, a number of times when you're asked to come up with an unusual use for a thing that you're very used to using one way, you don't come up with anything.
186
+ [1207.780 --> 1216.780] They were much less likely to actually default on that answer than individuals who had either gotten anodil stimulation there or sham stimulation there.
187
+ [1216.780 --> 1230.780] So, we thought that it had some impact on their mental flexibility, in fact that perhaps inhibiting that region sort of freed them of a certain amount of cognitive control allowed them to think more flexibly.
188
+ [1230.780 --> 1241.780] Another area, again, focusing on frontal lobe functions, where I think that there's a lot of potential for a future clinical purchase for a TDCS is an appetite of behavior.
189
+ [1241.780 --> 1250.780] And that can include things like cravings and potentially down the road, things like addiction. So, this is a study I realized the print is really small.
190
+ [1250.780 --> 1259.780] So, I can barely read it myself, but I think you can all see that the lines separate quite nicely. This is a study of food craving.
191
+ [1259.780 --> 1270.780] So, this is a 19 subjects. They were asked to evaluate how much they were craving certain foods. They were reporting food cravings. And then they got stimulation or, again, of the frontal lobe.
192
+ [1270.780 --> 1286.780] And they reported, both during stimulation and after stimulation, that they felt like that craving was somehow diminished, that their ability to control their own behavior and resist if that food were to be presented to them was somehow increased.
193
+ [1286.780 --> 1297.780] And this is one of a family of studies. The reason I chose it is because, well, it's such a ubiquitous issue, right? I mean, imagine if we had some way to diminish the degree to which we crave unhealthful foods.
194
+ [1297.780 --> 1304.780] But also, sort of, touches on the broader topic of appetite of behavior that can be modulated using this technology.
195
+ [1304.780 --> 1323.780] And then, some work that really garnered public attention has sort of caught the public eye a couple of years ago. And there's been some work extending it with transcranial random noise stimulation, was Roy Cohen-Kodosh's work involving what he referred to as numerical competence.
196
+ [1323.780 --> 1333.780] And this will take a little bit of unpacking that actually did this study. So this has to do with how people process number concepts.
197
+ [1333.780 --> 1341.780] And so what this group did was they taught individuals a certain set of symbols, the symbols are previously meaningless.
198
+ [1341.780 --> 1355.780] And so what they taught them was that these symbols have a relationship to one another. So this squiggly shape that doesn't look like anything in particular is greater than that squiggly shape is less than this squiggly shape is greater than, right, the odd or ordinal relationship.
199
+ [1355.780 --> 1362.780] And stimulated them as they were learning these relationships in two different montages versus sham.
200
+ [1362.780 --> 1383.780] And then what they found was when they subjected these individuals to two behavioral tasks, the individuals who got stimulation with right and total TBCS were more likely to think of these objects that they had been taught these ordinal relationships and to think of them in number like ways.
201
+ [1383.780 --> 1399.780] And so the way they explored that is that they had what was called a number stoop task. So these numbers would appear side by side and they might be mismatched in shape, right, one might be bigger than the other or the other smaller.
202
+ [1399.780 --> 1413.780] And so what they did was they either had congruent trials and congruent trials and congruent trials. The bigger shape was actually the shape that had the bigger value that the subject had learned in the incongruent trials, they were mismatched. So the bigger shape had a smaller value.
203
+ [1413.780 --> 1428.780] And so as individuals sort of took on or understood the value of these things, they generated an incongruency effect. It took them longer to respond if the size of the object and its value were mismatched.
204
+ [1428.780 --> 1437.780] What they found is that TDCS could enhance that idea that these items have these specific values, right, by enhancing this interference effect.
205
+ [1437.780 --> 1460.780] The second way they looked at it was, and this takes a little sort of discussion as well, explaining, it turns out that when we're sort of early on, before we've really formed very strong number concepts, we don't tend to think of numbers or allocate them in an exactly linear way, the way that we do when we sort of acquire these concepts really well and understand them.
206
+ [1460.780 --> 1473.780] So if we make a number line, you or I all understand that one is equally distant to two as nine is to ten. Well it turns out that small children don't necessarily do that before they sort of solidify some of these number space relationships.
207
+ [1473.780 --> 1490.780] And so what they found was that individuals who had not benefited or had the montage that didn't work. So left an nodal or sham, they had this non-linear relationship with these objects when they were asked to put the objects in a line by value.
208
+ [1490.780 --> 1498.780] Whereas the individuals who had received stimulation sort of put them in a linear order, the way we do when we understand things to be numbers.
209
+ [1498.780 --> 1509.780] So it was sort of indirect evidence that these individuals were taking on a sort of more number like concept with stimulation of the Pridal Cortex.
210
+ [1509.780 --> 1516.780] Alright, so then focusing, again shifting gears a little bit focusing on post stroke rehabilitation.
211
+ [1516.780 --> 1523.780] There have really been three areas in which TDCS has been applied and we'll touch on them briefly.
212
+ [1523.780 --> 1534.780] Parasus, visual spatial neglects and aphasia. And so in all three of these, I want to introduce this concept that's going to come up several times.
213
+ [1534.780 --> 1540.780] And that's the notion of interhemispheric interactions and interhemispheric inhibition.
214
+ [1540.780 --> 1546.780] So this is an idea that dates back actually depending on what kinds of deficits you're talking about decades.
215
+ [1546.780 --> 1554.780] But it's the idea that normally in the normative state, we obviously have a lot of connections running between the two hemispheres.
216
+ [1554.780 --> 1563.780] Many of them through the Corpus Colossum. Well it turns out that physiologically many of these connections are inhibitory in nature.
217
+ [1563.780 --> 1576.780] And so the way this model, the way this theory goes, is that if you have a unilateral elusion, one hemisphere, obviously it's not functioning as well as it had, the other one is intact.
218
+ [1576.780 --> 1587.780] But in addition to being intact, it's also released from inhibition, right, from the inhibition that it had been experiencing from its contralateral now injured hemisphere.
219
+ [1587.780 --> 1600.780] So with that being the case, the way this model goes is that there's increased inhibition from the intact hemisphere due to its increased activity to the injured hemisphere.
220
+ [1600.780 --> 1616.780] And if you assume that there are peri-leasional areas of the injured hemisphere that are in the process of rehabilitation trying to recover, if I can anthropomorphize them, trying to recover, you know, you don't really want to add this added insult of transclosal inhibition.
221
+ [1616.780 --> 1630.780] So that's how that model goes. And so with respect to motor recovery, that engenders a few different approaches that you could take with respect to parisus.
222
+ [1630.780 --> 1643.780] And so looking at a few studies over the last seven, eight years or so, you know, there have been a number of studies where the principal approach has been to excite the damaged hemisphere.
223
+ [1644.780 --> 1652.780] Using anodal stimulation, and we sort of go back and forth about whether or not that's the right way to frame what anodal stimulation really doing.
224
+ [1652.780 --> 1660.780] But, you know, according to our conventional thoughts about TDCS, that anodal stimulation increased the activity of this hemisphere.
225
+ [1660.780 --> 1668.780] And so, you know, there have been a number of studies, both in chronic and subacute populations that implement TDCS in that way.
226
+ [1668.780 --> 1684.780] But there have also been a number of studies that take the opposite approach. Right? Let's tamp down the extra excitation, the pathological activity of the right hemisphere to diminish its interhemispheric inhibition.
227
+ [1684.780 --> 1696.780] So there have been a number of studies that looked at using cathodal stimulation and sort of tamping down activity in that hemisphere in order to allow for perilesional areas of the damaged hemisphere to recover.
228
+ [1697.780 --> 1710.780] And then finally, one of the advantages of TDCS over TMS is that you can actually apply bilateral stimulation. Right? So you can have a cathode and an anode. So why not leverage that?
229
+ [1710.780 --> 1723.780] So there are emerging studies, a few in the last few years and more recently, looking at doing both: applying anodal stimulation to the ipsilesional side and cathodal stimulation to the contralesional side.
230
+ [1724.780 --> 1734.780] I just want to focus on one more recent study that came out this year, in part because it has a larger sample; many of these studies are small in terms of their patient populations.
231
+ [1734.780 --> 1747.780] This is a slightly larger one, but also because it was one of the first to directly compare anodal stimulation to cathodal stimulation of the contralesional or intact hemisphere to sham.
232
+ [1748.780 --> 1760.780] And just looking at results in this subacute population followed out to three months. Again, this is another one of those sets of graphs where it's too small for you to actually read it, but the lines dissociate.
233
+ [1760.780 --> 1772.780] So you can tell the point I'm trying to make: anodal stimulation of the ipsilesional hemisphere, the damaged hemisphere, and cathodal stimulation of the contralesional hemisphere are up here.
234
+ [1773.780 --> 1778.780] Sham stimulation is down here in pretty much all cases.
235
+ [1778.780 --> 1787.780] And these are various measures of motor recovery, of motor strength, hand grip, both in the upper extremity as well as in the lower extremity.
236
+ [1787.780 --> 1795.780] And then there are a number of stroke scales, clinical stroke scales, same story. You can't read these, but just follow the dissociation.
237
+ [1796.780 --> 1803.780] Sham's down here, right? In this case, higher is worse, so there's sham and there's anodal and cathodal stimulation.
238
+ [1803.780 --> 1810.780] So we're alluding to the idea that there are multiple approaches that one could adopt in terms of facilitating recovery with TDCS.
239
+ [1813.780 --> 1821.780] So let me again, within the domain of stroke recovery, shift to a more cognitive deficit, and that is aphasia.
240
+ [1822.780 --> 1826.780] And so I'm not sure how many of you are familiar with aphasia syndrome.
241
+ [1826.780 --> 1827.780] Yes, sir.
242
+ [1827.780 --> 1828.780] Can I ask one question?
243
+ [1828.780 --> 1829.780] Yeah.
244
+ [1829.780 --> 1834.780] Has anyone shown that bilateral anodal cathodal also works?
245
+ [1834.780 --> 1836.780] Yes. Yes. So there are...
246
+ [1836.780 --> 1838.780] Because there are negative findings out there as well.
247
+ [1838.780 --> 1850.780] There are. There are. So there are some mixed findings with respect to bilateral anodal cathodal stimulation, but there also are some positive, I mean, as you know, sort of it's a mixed bag.
248
+ [1850.780 --> 1852.780] So it's something that people are exploring more.
249
+ [1852.780 --> 1853.780] It's one of those things that...
250
+ [1853.780 --> 1863.780] Now that you've sort of stopped talking about it, it's sort of interesting, because on a theoretical basis, if this model were strictly true, you would expect that it would be the montage that would give you the most bang for the buck.
251
+ [1863.780 --> 1867.780] But that hasn't really sort of borne that much fruit yet.
252
+ [1872.780 --> 1877.780] All right, so for those of you who aren't clinically familiar with aphasia, I just wanted to say a word about it.
253
+ [1878.780 --> 1885.780] It's an acquired deficit of language in the context of stroke. It actually affects about a fifth of patients who have stroke.
254
+ [1885.780 --> 1891.780] And stroke, as you may or may not realize, is the number one cause of morbidity in this country.
255
+ [1891.780 --> 1893.780] It's the number one cause of disability.
256
+ [1893.780 --> 1903.780] So we're talking about something that's present in a fifth of cases of this disease that's the leading cause of disability; it affects about a million people in this country, with about 100,000 new patients annually.
257
+ [1904.780 --> 1909.780] It usually results from injury to a left perisylvian circuit, to the left hemisphere.
258
+ [1909.780 --> 1919.780] You can broadly categorize aphasic deficits according to lesion location, where more anterior lesions often engender a non-fluent aphasia,
259
+ [1919.780 --> 1925.780] where patients have poor grammar, slow speech, that kind of thing, and more posterior lesions...
260
+ [1925.780 --> 1933.780] that is, lesions of the more posterior temporal lobe, typically engender a more fluent aphasia, where comprehension is often affected.
261
+ [1933.780 --> 1936.780] And as I said, it's a very common cause of morbidity.
262
+ [1937.780 --> 1944.780] So in thinking about aphasia, this idea of interhemispheric interactions again comes into play.
263
+ [1944.780 --> 1952.780] You can imagine several ways in which one might apply TDCS in order to try and facilitate language recovery.
264
+ [1953.780 --> 1967.780] So if you have, you know, normally a left hemisphere perisylvian circuit that's functioning to subserve language, after injury of that circuit, there's some thinking that perilesional areas of the left hemisphere compensate,
265
+ [1967.780 --> 1979.780] that they sort of take on some of the function of damaged left hemisphere areas, in which case you would think that your optimal strategy would be to try and enhance activity of left hemisphere perilesional areas.
266
+ [1979.780 --> 1986.780] So skipping over this figure for a second, over here on the far right is that interhemispheric inhibition model I've told you about, right?
267
+ [1986.780 --> 2002.780] So you can imagine that in addition to these areas, these perilesional areas, struggling to do their best to recover from aphasia, that they are also receiving deleterious interhemispheric inhibitory inputs from regions of the right hemisphere.
268
+ [2002.780 --> 2010.780] So according to that model, an alternative approach that you might consider taking might be to actually apply some form of inhibitory stimulation, right?
269
+ [2010.780 --> 2013.780] Maybe cathodal TDCS to that hemisphere.
270
+ [2013.780 --> 2030.780] And then finally, maybe it's the case, and there's certainly evidence from functional imaging, and behavioral evidence as well, suggesting that this might be true, that right hemisphere areas aren't necessarily all bad when they have increased activity in the setting of stroke and aphasia,
271
+ [2030.780 --> 2037.780] and maybe they play some compensatory role. So in that case, you might actually be interested in taking the opposite approach, right?
272
+ [2037.780 --> 2049.780] Maybe you want to facilitate activity of the right hemisphere, for instance, perhaps, just theoretically, if left hemisphere lesions are especially large and there's no perilesional area to speak of to actually try and enhance.
273
+ [2049.780 --> 2055.780] Maybe it's the right hemisphere that's trying to do the compensatory work with respect to language.
274
+ [2055.780 --> 2066.780] And it turns out, corresponding to these various theories of how aphasia recovery might work, that there are a number of studies that address this in both hemispheres.
275
+ [2066.780 --> 2071.780] So basically that each of these approaches has been attempted in clinical populations.
276
+ [2071.780 --> 2081.780] So that includes anodal stimulation of the left hemisphere, with the idea of getting these perilesional areas more excited.
277
+ [2081.780 --> 2102.780] And cathodal stimulation of the right hemisphere, anodal stimulation of the right hemisphere; basically all of these different approaches have been tried, with, for each of those montages, at least some evidence of efficacy, although not universally positive evidence.
278
+ [2102.780 --> 2112.780] And again, the point of this is not for you to read it. I'm just putting it up there to show that there have been a number of different studies, but I'll also highlight that there are a number of limitations.
279
+ [2112.780 --> 2120.780] So the limitations, just to highlight them, again, not for you to read, but for me just to point out that they generally have small samples, these aphasia studies.
280
+ [2120.780 --> 2123.780] I mean, we're talking mostly single digit patient studies.
281
+ [2123.780 --> 2128.780] The clinical syndromes that these patients have are very heterogeneous.
282
+ [2128.780 --> 2135.780] So remember I told you that there are non-fluent aphasias, there are fluent aphasias, some patients have both, there are global aphasias.
283
+ [2135.780 --> 2144.780] So many of these studies will involve multiple different aphasia subtypes, as well as lesion locations, and chronicities, subacute versus chronic aphasia.
284
+ [2144.780 --> 2169.780] There's also variability in stimulation parameters that may be coming into play. And so as we just talked about, both in our earlier talks and today, it turns out that the stimulation parameters, like intensity, may have a dramatic effect on what kind of a neurophysiologic effect you're expecting, and consequently potentially what kind of behavioral effect you're expecting.
285
+ [2170.780 --> 2177.780] And then there's what I think of as one of the real limitations of any rehabilitation study that doesn't have it, and that is good follow-up.
286
+ [2177.780 --> 2190.780] So only a handful of studies in aphasia, and I can actually extend this into the motor domain as well, actually follow these patients out for any considerable length of time to see if these benefits that they experience are persistent.
287
+ [2191.780 --> 2201.780] I can't leave this topic without showing at least a little bit of some of the data that we're collecting in our laboratory right now using TDCS in patients with chronic aphasia.
288
+ [2201.780 --> 2214.780] So this is a protocol where they come in, they get stimulated two weeks in a row, these are two five-day weeks, and then we follow them out months afterwards; it's sham controlled, it's a partial crossover study, so the sham subjects end up getting stimulated.
289
+ [2215.780 --> 2221.780] And what we find is that they have improvements in, these are all non-fluent patients, improvements in measures of aphasia.
290
+ [2221.780 --> 2227.780] So this is the Western aphasia battery's aphasia quotient, so it's an overall summary of the severity of aphasia.
291
+ [2227.780 --> 2239.780] And they do show, well, I don't know if you want to call it modest, I mean here's the scale here, they're sort of moving within aphasia category, but moving a fair amount within aphasia category compared to patients who are receiving sham.
292
+ [2240.780 --> 2246.780] One point I'll point out, and I wasn't, oh, and here's another way of displaying the data, which makes it look a little bit rosier, right?
293
+ [2246.780 --> 2252.780] This is a percentage change in the Western aphasia battery quotient for patients getting real stimulation versus sham stimulation.
294
+ [2252.780 --> 2257.780] And you see the sham patients really get nothing and the real patients appear to be benefiting.
295
+ [2257.780 --> 2266.780] One interesting point I'll point out about our work here is that it turns out the way we do this study is that we're agnostic to which of these mechanisms is at work for each of these patients.
296
+ [2266.780 --> 2283.780] So we literally bring them in and on different days of the week we'll stimulate them, anode left, cathode left, anode right, cathode right, and have them perform a naming task that day to see if there's one particular montage for each patient that may benefit them the best.
297
+ [2283.780 --> 2287.780] And so patients are being stimulated in whichever is their optimal montage.
298
+ [2287.780 --> 2300.780] One interesting point is that we actually find that a number of our patients, even when it isn't necessarily their optimal montage, will actually respond to cathodal stimulation of the left hemisphere, which was not something that we had expected.
299
+ [2300.780 --> 2310.780] But we do stimulate at a very high intensity, two milliamps. And so one interesting finding from the talk we had this morning is that stimulation at different intensities might actually have different effects.
300
+ [2310.780 --> 2318.780] All right, you've seen this slide now a few times over the course of this summit. I just wanted to make the point with respect to aphasia.
301
+ [2318.780 --> 2332.780] You know, the conventional stimulation may give you this wide swath of effect. That's poor spatial resolution. And you'd think that that would be a drawback in this kind of research. But maybe it's not. Maybe for our purposes, it's a kind of advantage.
302
+ [2332.780 --> 2344.780] So this is a meta analysis of functional imaging work that we had done some time ago on studies involving non-fluent and fluent aphasia patients with chronic aphasia compared to normal subjects.
303
+ [2344.780 --> 2357.780] And what they demonstrate, I don't know if you could see these areas of activation, is that patients with aphasia activate a bilateral network of regions when they're trying to perform language tasks, at least when they're in the chronic state.
304
+ [2357.780 --> 2372.780] And so one thing we think is that perhaps, you know, the so-called disadvantage of spatial resolution in TDCS when you're trying to hit a broad network of areas that subserves some remodeled cognitive function like language may actually be an advantage.
305
+ [2372.780 --> 2384.780] Finally, a couple of studies have looked at the effect of TDCS in neglect. For those of you who aren't familiar with clinical neglect, it actually affects about two-thirds of right hemisphere stroke patients acutely.
306
+ [2384.780 --> 2395.780] So it's very, very common. It is the failure to report, respond, or orient to meaningful or novel stimuli that are on the opposite side of space from where you had your brain injury.
307
+ [2395.780 --> 2405.780] It also tends to be a lateralized function. So whereas language tends to be lateralized to the left hemisphere, it's almost always the case, or at least usually the case, that neglect is associated with right hemisphere lesions.
308
+ [2405.780 --> 2415.780] It's a terrible prognostic indicator. Pound for pound, you are worse off having neglect as a persistent deficit than you are having hemiparesis.
309
+ [2415.780 --> 2431.780] And so again, this is another example, since it tends to be a unilateral injury that gives rise to neglect, or at least a lateralized deficit, where stimulating perilesionally or interhemispherically can have effects.
310
+ [2431.780 --> 2438.780] And you can sort of bring in that model that I showed you with respect to paresis and aphasia to this paradigm as well.
311
+ [2438.780 --> 2456.780] So there have been studies, really just a handful, I'll show you a couple, in which anodal stimulation of the right parietal cortex, and I should mention that lesions of the parietal lobe are the most common lesions associated with neglect, have resulted in improvements in measures of neglect function, or visuospatial function.
312
+ [2456.780 --> 2466.780] This is line bisection; this is a task where individuals are shown lines and are asked to cut them in half. So the patient who is neglecting half of space skews away from the middle.
313
+ [2466.780 --> 2474.780] So you can actually correct that to some extent by applying anodal stimulation of the right parietal cortex.
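+ To make that bisection measure concrete, here is a minimal sketch, not from the talk, of how bisection bias is commonly quantified; the function name and the example numbers are illustrative assumptions only.
+
+ def bisection_bias(line_length_mm, mark_position_mm):
+     """Signed deviation of the patient's mark from the true midpoint,
+     as a percentage of half the line length; rightward errors are positive."""
+     midpoint = line_length_mm / 2.0
+     return 100.0 * (mark_position_mm - midpoint) / midpoint
+
+ # Hypothetical example: a 200 mm line marked 140 mm from its left end,
+ # the kind of rightward skew a left-neglect patient might produce.
+ print(bisection_bias(200.0, 140.0))  # 40.0 -> strong rightward bias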
314
+ [2474.780 --> 2486.780] And a later paper demonstrated that contralesional cathodal and ipsilesional anodal TDCS can actually improve performance on that task.
315
+ [2486.780 --> 2496.780] So again, inhibiting, or at least we think, potentially inhibiting the intact hemisphere as well as stimulating the damaged hemisphere.
316
+ [2496.780 --> 2500.780] Again, sort of speaking to those interhemispheric models we were talking about.
317
+ [2500.780 --> 2508.780] In our lab, we've been exploring whether or not transcranial direct current stimulation can be used to try and fractionate different types of neglect symptoms.
318
+ [2508.780 --> 2520.780] Now this study happens to be in healthy subjects, but it's inspired by our interest in applying it to patients with neglect, so I'm presenting it here, because we think we can apply it to patients in a symptom specific way.
319
+ [2520.780 --> 2528.780] So when a patient is neglecting, they often have dissociations in what their neglect is like. You can have different neglect subtypes.
320
+ [2528.780 --> 2534.780] So some patients have what's called egocentric neglect. They neglect with respect to their own body's frame of reference.
321
+ [2534.780 --> 2538.780] So they neglect things to the left of their space.
322
+ [2538.780 --> 2550.780] And then there are other patients who actually have what's called allocentric neglect. And this is actually a copy drawn by a patient, an example of allocentric neglect, where their neglect is tied to the things that they are looking at.
323
+ [2550.780 --> 2558.780] So as they look at things, they ignore the left side of the thing. And there's been some speculation that these might have different neural substrates.
324
+ [2558.780 --> 2567.780] So we had a study in which we showed normal healthy subjects an array of targets, and they were supposed to detect the target.
325
+ [2567.780 --> 2575.780] And the targets could either be on the left side or the right side; it was a combination. The targets were sort of on the left and right sides of space relative to them.
326
+ [2575.780 --> 2584.780] But also the target itself, on the object, the thing they're looking for, is a gap that could be on the left or right side of the object.
327
+ [2584.780 --> 2604.780] So there's both an allocentric component and an egocentric component. We stimulated over the parietal cortex; we did a baseline block and then followed them up after stimulation, and found that we had a dissociation where there was much more of an effect on allocentric processing, in that individuals were more rapid to respond
328
+ [2604.780 --> 2614.780] to the location of that target on the left side of the object, even when that object is on the right side of the field; they had this accentuation of allocentric processes.
329
+ [2614.780 --> 2622.780] So one thing that we're looking into is whether or not we can sort of fractionate these deficits and apply them to patients in specific ways.
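+ To make that design concrete, here is a minimal sketch with hypothetical reaction times rather than the lab's data: the task crosses an egocentric factor (which side of the display the object is on) with an allocentric factor (which side of the object the gap is on), and the allocentric effect is summarized across both object positions.
+
+ from statistics import mean
+
+ # Hypothetical mean RTs (ms), keyed by (object_side, gap_side); not real data.
+ pre  = {("left", "left"): 540, ("left", "right"): 520,
+         ("right", "left"): 535, ("right", "right"): 515}
+ post = {("left", "left"): 515, ("left", "right"): 512,
+         ("right", "left"): 505, ("right", "right"): 510}
+
+ # The allocentric signature: a post-stimulation speedup for gaps on the LEFT
+ # side of the object, pooled over both object positions, including objects
+ # sitting entirely in the right side of the display.
+ speedup = mean(pre[(s, "left")] - post[(s, "left")] for s in ("left", "right"))
+ print(f"Left-of-object speedup: {speedup:.1f} ms")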
330
+ [2622.780 --> 2631.780] Just a slide on TDCS for pain. It's not an area of work for me, but I think it's a very interesting area of growth using this technology.
331
+ [2631.780 --> 2649.780] Many of the studies that investigate this stimulate over the motor cortex, and the thinking is that there are connections between the motor cortex and subcortical areas like the thalamus, the cingulate gyrus, and other deeper structures that are involved in a sort of nociceptive network.
332
+ [2649.780 --> 2657.780] So the idea is that you can use the motor cortex to serve as an access point to connections to that network.
333
+ [2657.780 --> 2670.780] And so just to highlight a couple of different pain studies: it's been used to decrease analgesic use in patients getting knee replacements, as well as in patients who are undergoing ERCP.
334
+ [2670.780 --> 2676.780] For those of you who don't know what that is, that's endoscopic retrograde collangiopancreatography, and it's very painful.
335
+ [2676.780 --> 2688.780] Almost as painful as this is to say. Anyway, the results aren't universal. They're not unanimous. So for instance, there's a recent study where it didn't seem to help for patients getting lumbar surgery.
336
+ [2688.780 --> 2702.780] There are several studies now looking at its potential positive effects in fibromyalgia pain, chronic spinal cord injury, although again here, the jury might still be out. There is some debate in the literature as to whether or not it's helpful.
337
+ [2702.780 --> 2713.780] And then it's being explored more for migraine, migraine prophylaxis, both with this montage I was telling you about, stimulation over the motor cortex, as well as stimulation over the visual cortex.
338
+ [2713.780 --> 2723.780] As you may or may not know, migraine sufferers will often have visual phenomena, visual auras, as some of the earliest symptoms of an oncoming migraine.
339
+ [2723.780 --> 2731.780] And so it turns out that, at least in one investigation, cathodal stimulation over V1 was helpful in reducing migraine symptoms.
340
+ [2731.780 --> 2743.780] And also chronic pain in patients with multiple sclerosis, and phantom limb pain, that strange pain that individuals who are amputees get, where they have that illusory sensation of a limb that's no longer there.
341
+ [2743.780 --> 2747.780] Alright, a few words about depression. How am I doing for the...
342
+ [2747.780 --> 2749.780] Ten minutes?
343
+ [2749.780 --> 2760.780] Ten minutes, I'm great. Okay, so, a few words about depression. And I focus here. I actually think that the data for depression right now are pretty mixed.
344
+ [2760.780 --> 2773.780] The reason I focus on this is that in terms of areas where non-invasive brain stimulation has found some clinical purchase, we know that depression is the only FDA approved clinical indication for TMS.
345
+ [2773.780 --> 2781.780] And so it's sort of an obvious go-to area with respect to clinical syndromes that we could stimulate with hopes of improvement.
346
+ [2782.780 --> 2788.780] And so the most common montage that's been seen in the literature is anodal stimulation of the dorsolateral prefrontal cortex.
347
+ [2788.780 --> 2810.780] Interestingly, many of the studies that have been done have pointed not only to the effects on mood, but also to beneficial effects on cognitive performance, various aspects of cognitive performance that in my mind are sort of linked to the function of the dorsolateral prefrontal cortex, like working memory, aspects of attention, the ability to inhibit inappropriate responses on tasks.
348
+ [2810.780 --> 2821.780] That kind of thing. And like I said, the studies are relatively few. Down here is an example from one of the earlier studies by Fregni and colleagues from a few years ago.
349
+ [2821.780 --> 2829.780] And what this is demonstrating is improvement. The black is the subjects receiving TDCS, active TDCS versus patients who are getting sham.
350
+ [2829.780 --> 2837.780] On a variety of different tests of executive function, the bars that really stand out here are digit span.
351
+ [2837.780 --> 2846.780] So I give you a series of numbers and you have to give them back to me. In one setting I give them to you forward and you give them back to me forward, and in another setting I give them to you and you have to give them back to me backwards.
352
+ [2846.780 --> 2851.780] It's sort of thought of as a test of executive function and working memory, and it seems to improve in these patients who got stimulation.
353
+ [2851.780 --> 2862.780] But like I said, I think recently the weighted average of studies is a little bit mixed. I think the jury is still out. I can just offer my opinion about the matter.
354
+ [2862.780 --> 2880.780] So for instance, a recent meta-analysis looked at all the randomized controlled studies of patients being stimulated for depression and, to make a long story short, found that the confidence interval of the effect did not sort of put it over the finish line with respect to an overall positive average effect.
355
+ [2880.780 --> 2891.780] Now, you can sort of parcellate that out. It turned out that if you looked at the studies that looked at TDCS as a monotherapy, those studies seemed to be significantly better than sham.
356
+ [2891.780 --> 2903.780] And we just heard earlier in the day about a study in which TDCS combined with the medication citalopram, through the mechanisms that we discussed earlier, might give more sort of bang for the buck, or an extra boost.
357
+ [2903.780 --> 2908.780] But I think the jury is still out and it's an area of active investigation.
358
+ [2908.780 --> 2920.780] Yeah, as far as the effect size and the variability, if you were to compare that, let's say, to where rTMS was before the big trials, or even drugs, is this sort of what you'd expect?
359
+ [2920.780 --> 2932.780] Yeah, so I think early in the days of TMS for depression, you also saw some positive studies, some negative studies.
360
+ [2932.780 --> 2946.780] A lot of the studies suffered from some of the same problems that studies in rehabilitation and treatment with TDCS now suffer from, like small sample sizes, variability in terms of the patient populations being chosen, variability with regards to the parameters.
361
+ [2946.780 --> 2956.780] So, you know, I suspect that we'll learn more and that the answer will become clearer as we sort of tighten up some of those parameters. So I think the short answer to your question is probably.
362
+ [2956.780 --> 2969.780] So I just want to mention that I've touched on, sort of cherry-picked, a number of topics, but I'm really scratching the surface, and that is my way of illustrating the various applications that people have been thinking about for TDCS.
363
+ [2969.780 --> 2982.780] I'll just point to some of the studies that are ongoing that we're doing. If you sort of take that and multiply that by every, every lab that's studying TDCS, you can get an example or some idea of the various ways in which TDCS could be used.
364
+ [2982.780 --> 3002.780] So in our lab, we're currently investigating whether it can be used to modulate executive dysfunction in patients with multiple sclerosis, whether it can be used to enhance speech fluency in patients with a degenerative type of aphasia called progressive non-fluent aphasia, and whether it can be used to improve cognitive dysfunction in patients who are withdrawing from smoking as they're trying to quit.
365
+ [3002.780 --> 3015.780] It turns out that when patients are trying to quit smoking in the period immediately after the onset of their abstinence, they have cognitive deficits. And there's some suggestion that if you can improve those, it actually improves their ability to abstain.
366
+ [3015.780 --> 3025.780] And so we're looking at that. And some work that was actually initiated by Adam Woods, in the audience, showed that TDCS can be used to modulate causal reasoning.
367
+ [3025.780 --> 3038.780] And so we've taken that. Now we're starting to look at whether or not it can do so in patients who have schizophrenia, who suffer from deficits of causal reasoning, sort of cause and effect in their model of how the world works.
368
+ [3038.780 --> 3047.780] All right. So if TDCS is so great and can be applied to all these different applications, why isn't it in common clinical use right now?
369
+ [3047.780 --> 3058.780] And so we already touched on a number of these points. TDCS currently has no FDA approved clinical indications. And a lot of the studies that I've pointed to are at the sort of proof of concept stage.
370
+ [3058.780 --> 3076.780] And that's not good enough to put something in clinical practice, at least in this country. The FDA has a clinical trial process. And before something becomes FDA approved for clinical use, it has to go through phase one, which is safety, phase two, efficacy, phase three, sort of larger, pivotal studies in both safety and efficacy.
371
+ [3076.780 --> 3084.780] And there have to be two of those before it can be approved as a treatment. And that of course has implications for things like whether an insurance company will cover it and so on and so forth.
372
+ [3084.780 --> 3099.780] So there are a number of steps to go through before TDCS becomes a part of common clinical practice. Just to point out, there are some, what I think of as surmountable hurdles, some of which have already been alluded to with respect to applying TDCS clinically.
373
+ [3099.780 --> 3110.780] They include things, and I'm going to take it by phase, by study phase. You know, dose effect curves. We talked a lot about how TDCS at different intensities, different durations might actually have different effects.
374
+ [3110.780 --> 3123.780] So that's something that needs to be fleshed out if you're going to introduce it as a clinical intervention. Also, the patient populations that we're talking about implementing TDCS in are at risk populations, sort of by definition.
375
+ [3123.780 --> 3143.780] And the safety of this technology, which we think of as very safe, hasn't really fully been fleshed out in these potentially risky populations, including populations I should add, who are commonly taking those medications that affect neurotransmitters that we talked about having an effect on how TDCS works.
376
+ [3143.780 --> 3158.780] So psychiatric and neurologic populations. Recruitment and eligibility are always challenges for clinical trials, especially using a technology that physicians and clinicians in the community haven't heard of. They might be reticent to refer their patients to get their brain zapped.
377
+ [3158.780 --> 3166.780] And then we talked about the problem of heterogeneous patient populations affecting the interpretability of studies.
378
+ [3166.780 --> 3180.780] A problem that we've sometimes run into is that these studies, I emphasized before, that clinical studies by and large involve multiple sessions of TDCS. Well, it turns out that as a treatment, that can be a cause of attrition.
379
+ [3180.780 --> 3189.780] If subjects have to come in days in a row and then come in weeks after that to get reassessed. And so that can stand as a surmountable hurdle, but a potential obstacle.
380
+ [3189.780 --> 3200.780] And then there are, as in any kind of study, control and blinding issues, although I think that that's better in TDCS than for some other technologies, because it's more readily controlled with sham conditions.
381
+ [3200.780 --> 3207.780] And then finally, we touched on this over and over again, the clinical approaches that have been used to date have been heterogeneous in their methodology.
382
+ [3207.780 --> 3215.780] And it turns out that those parameters may have dramatic effects on what you're seeing in terms of effect sizes and patients.
383
+ [3215.780 --> 3231.780] Last point. Who should we stimulate? So it turns out that disease, especially with respect to cognitive complaints, cognitive deficits, psychological states, mood states, falls somewhere on a spectrum.
384
+ [3231.780 --> 3239.780] It can be a little bit tricky to decide where normal cognition ends and a diagnosable deficit begins.
385
+ [3239.780 --> 3251.780] And I think a question that clinicians have to reflect on in a very practical sense is when am I giving a treatment to a patient and when am I enhancing a cognitively normal individual?
386
+ [3251.780 --> 3266.780] And does that matter to me as a clinician? And here I think there are a number of things that have to be taken into account irrespective of whether you feel like that's fine and all humans should have the ability to enhance themselves as much as they want, or whether you want to restrict yourself to patients.
387
+ [3266.780 --> 3277.780] Something to consider is the risk benefit ratio. Now we talked about how safe TDCS is, but if we're talking about stimulating people with relatively mild deficits, there are at least theoretical risks.
388
+ [3277.780 --> 3294.780] For instance, if we think that TDCS has effects sufficient to affect cognition in individuals on a long-term basis, it's unclear what makes us think that it can't have inadvertent effects on cognition.
389
+ [3294.780 --> 3302.780] And while that has not been demonstrated, I'm not saying that that's an imminent threat, it's at least something to consider and investigate.
390
+ [3302.780 --> 3317.780] And then finally, I put in here that if we're talking about risk benefit ratio and stimulating patients who are relatively modestly affected, many, many people out in the community are taking the kinds of medications that we talked about having variable effects on stimulation.
391
+ [3317.780 --> 3325.780] And so as we sort of flesh out what those effects might be, it's worth sort of keeping that in mind in terms of who we're stimulating.
392
+ [3325.780 --> 3341.780] And then finally, who should stimulate? If you go on the internet today, if you go on to YouTube, you're going to find people who are stimulating themselves on camera, you're going to find do it yourself, TDCS websites, you can find blueprints to build your own TDCS machine.
393
+ [3341.780 --> 3347.780] You could go to RadioShack with 100 bucks, come back home, go to your garage and come out with a TDCS unit.
394
+ [3347.780 --> 3355.780] And so it's not controlled by the FDA, and there are commercially available units that you can get for just a couple hundred bucks.
395
+ [3355.780 --> 3359.780] So the question arises, who really should be doing the stimulating?
396
+ [3359.780 --> 3366.780] With direct to consumer marketing, it raises the issue potentially of public safety.
397
+ [3366.780 --> 3371.780] At the very least, people might misuse it and burn their skin, things like that.
398
+ [3371.780 --> 3383.780] But public safety in terms of whether it could be used inappropriately, again, we talked a little bit about indiscriminate use with respect to electrode position, or overuse, if that's a possibility.
399
+ [3383.780 --> 3391.780] And while, like I said, we've said over and over again, it's a pretty safe technology as far as we know, we have to be cognizant of that possibility.
400
+ [3391.780 --> 3394.780] On the other hand, we have to respect autonomy.
401
+ [3394.780 --> 3401.780] And to a large extent, we have the right to do a lot with our bodies and how we treat them.
402
+ [3401.780 --> 3403.780] And so we have to have some respect for that.
403
+ [3403.780 --> 3411.780] So that, in my opinion, is a potential source of future tensions, or clinical ethical tension, at least for clinicians, or career clinicians.
404
+ [3411.780 --> 3418.780] And it'll be up to us to sort of sort out our fiduciary responsibility with respect to individuals who want stimulation.
405
+ [3418.780 --> 3425.780] And that may branch out into the domain of public education about, well, how do you do TDCS safely?
406
+ [3425.780 --> 3429.780] And maybe even public policy if it gets that far.
407
+ [3429.780 --> 3434.780] So to summarize, we've talked about the advantages of TDCS for clinical use.
408
+ [3434.780 --> 3438.780] And they are many. It's safe, it's convenient, it's cheap.
409
+ [3438.780 --> 3440.780] It has a variety of potential clinical applications.
410
+ [3440.780 --> 3449.780] And we talked about these in some detail, psychiatry, neurology, physiatry, and rehab, pain management, many others.
411
+ [3449.780 --> 3454.780] I believe that cognitive neuroscience directly informs potential TDCS treatments.
412
+ [3454.780 --> 3463.780] And we talked about a variety of different studies where you can manipulate cognition in ways that seem like they might down the road be applied to patients.
413
+ [3463.780 --> 3473.780] There are some surmountable challenges, but challenges nonetheless to the development of TDCS as therapies that can be used widely in clinical settings.
414
+ [3473.780 --> 3487.780] And we talked about that. And then finally, just touching on that ethical issue of cosmetic stimulation and self stimulation, I think that it is an emerging, but not an insurmountable or inscrutable challenge for clinicians.
415
+ [3488.780 --> 3493.780] So I thank everyone in the laboratory for cognition and neural stimulation, and I'd be happy to take the questions.
416
+ [3493.780 --> 3496.780] Thank you.
transcript/allocentric_5TjEcK0f5jY.txt ADDED
@@ -0,0 +1,36 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 4.720] Okay, today we're going to be talking about neglect, and specifically we're going to be talking about
2
+ [5.200 --> 9.840] hemineglect, and for neuro-ophthalmology that's going to be visual
3
+ [10.720 --> 16.480] hemineglect, and basically that means you're neglecting half. It's not really a visual problem. It's
4
+ [16.480 --> 22.720] inattention. So the problem with the neglect people is different, and they have a different
5
+ [22.720 --> 28.240] set of complaints than the homonymous hemianopsia patients. So it's usually a right-sided
6
+ [28.240 --> 34.720] parietal lesion, and it often follows a very large MCA distribution stroke or tumor
7
+ [35.280 --> 39.360] hemorrhage, something like that. And it's the right side because there's redundancy
8
+ [39.360 --> 45.200] on the left side, and if you have a lesion on the right, there is no redundancy, so it can knock out
9
+ [45.200 --> 53.040] the whole hemifield. And so patients who have a left homonymous hemianopsia,
10
+ [53.360 --> 59.760] can still move their head and see, and they know that something's to the left, they just can't see it.
11
+ [60.320 --> 65.040] So one of the things that we do is we have them draw a clock, and as you know, if you're drawing a
12
+ [65.040 --> 71.840] normal clock, you get a 12, a 3, a 6, a 9, and if you say make the time, say, 5 to 2, or something
13
+ [71.840 --> 76.480] like that, you'll get a normal clock like this. Even a homonymous hemianopsia patient is going to draw a
14
+ [76.480 --> 81.680] normal clock, because they know what a clock looks like. But in a patient who has neglect, they might
15
+ [81.680 --> 87.520] bunch everything all up over here, and they basically won't put anything on this side.
16
+ [88.320 --> 93.200] And if you ask them to make a spontaneous drawing, they'll just only have half.
17
+ [94.000 --> 98.240] Some of these patients have such severe neglect that they only shave one side of their face,
18
+ [98.240 --> 103.520] or they only dress one side, or the glasses might be whole like this, because they really don't
19
+ [104.160 --> 110.240] know they have another side. And the extreme of this hemineglect on the left side,
20
+ [112.640 --> 119.680] is a very strange complaint, which is somatoparaphrenia, somato meaning bodily.
21
+ [121.360 --> 128.880] So in somatoparaphrenia, the patients don't even know that that's their body. So you might show
22
+ [128.880 --> 136.480] the patient their hand, and if it's their left hand, they'll deny it's their hand. They'll say
23
+ [136.560 --> 142.720] that's your hand, or that's my sister's hand. And what's fascinating about these patients with
24
+ [142.720 --> 149.520] somatoparaphrenia, if you put a mirror right here, and it projects over to here, they'll say that's
25
+ [149.520 --> 156.400] my hand. Even though they just said that hand wasn't theirs. So it's only for things on the left
26
+ [156.400 --> 162.720] side. And so for example, today we had a very fascinating patient with this, whose husband said
27
+ [162.720 --> 171.920] when he puts the food on the plate, like spaghetti and meatballs, she totally won't eat this side.
28
+ [172.560 --> 178.480] And so then he just rotates the plate around, and she eats the other side. A person with a
29
+ [178.480 --> 183.840] homonymous hemianopsia would never do that. A person with a homonymous hemianopsia learns to put
30
+ [183.840 --> 189.760] the food on the right side, but also even if you were to do this, they wouldn't do that half thing.
31
+ [190.000 --> 196.560] So the thing, the reason to know about hemineglect, is it's an inattention problem. It seems similar
32
+ [196.560 --> 202.560] to a homonymous hemianopsia, but it has distinctive and characteristic differences, both in the way
33
+ [202.560 --> 208.080] they copy and draw things, as well as which activities of daily living are affected. And one of
34
+ [208.080 --> 215.280] the other tests you can do is a bilateral simultaneous somatosensory stimulation test, and it'll extinguish on
35
+ [215.280 --> 222.880] the neglect side. So usually right parietal, usually MCA stroke, and might have the very weird
36
+ [222.880 --> 224.280] complaint of somatoparaphrenia.
transcript/allocentric_5mIGIS_OblE.txt ADDED
@@ -0,0 +1,78 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 7.280] No book, article, or I should say words, can capture the intensity of relationships in
2
+ [7.280 --> 9.880] collective societies.
3
+ [9.880 --> 16.520] So brilliantly smothering, in your face, sometimes in the form of kisses on the cheek and invigorating
4
+ [16.520 --> 21.640] are these relationships that the self is lost to the collective and the harmony of the
5
+ [21.640 --> 26.760] group becomes more important than silly old you.
6
+ [26.760 --> 33.120] As isms, individualism and collectivism are often politicized, viewed as either good
7
+ [33.120 --> 34.880] or bad.
8
+ [34.880 --> 40.800] Despite their flaws to capture and explain cultural phenomena and the biases they evoke,
9
+ [40.800 --> 47.920] these isms do hint at relational and learning styles that teachers should understand.
10
+ [47.920 --> 52.640] Zaretta Hammond says, quote, I don't want to stereotype cultures in an oversimplified
11
+ [52.640 --> 58.720] frame, but to simply offer the archetype of collectivism versus individualism as a way
12
+ [58.720 --> 65.840] of understanding the general cultural orientation among diverse students in the classroom.
13
+ [65.840 --> 70.920] Sociologist Geert Hofstede says, quote, in a collectivist society, the relationship comes
14
+ [70.920 --> 78.800] first, the task comes second, in the individualistic society, the task comes first and the relationship
15
+ [78.800 --> 82.120] may come afterwards.
16
+ [82.120 --> 87.840] This relationship first approach is essential for American teachers to understand.
17
+ [87.840 --> 92.480] When I lived in the country Georgia, when a guest entered your home, even if the guest
18
+ [92.480 --> 99.240] came unannounced, you stopped what you were doing and served them, the person took priority
19
+ [99.240 --> 101.840] over the task.
20
+ [101.840 --> 107.240] This is such a relational paradigm shift for many Americans that it seems wrong, leading
21
+ [107.240 --> 112.760] to teachers misinterpreting students' cultural behavior as misbehavior, and leading to students
22
+ [112.760 --> 119.920] misinterpreting American teachers as cold, impersonal, and putting work ahead of people.
23
+ [119.920 --> 124.880] As a culturally relevant teacher, the onus is on you, not the child, to bridge these
24
+ [124.880 --> 127.520] cultural gaps.
25
+ [127.520 --> 132.880] According to Hofstede's Cultural Dimensions Index, the United States is the most individualistic
26
+ [132.880 --> 135.160] society in the world.
27
+ [135.160 --> 139.240] This means many American teachers are going to have blind spots when it comes to their
28
+ [139.240 --> 142.840] relational style of students.
29
+ [142.840 --> 148.800] In the United States, we are often told to pursue aspirations and dreams for ourselves.
30
+ [148.800 --> 150.560] It is a personal journey.
31
+ [150.560 --> 154.920] It is believed that you must find yourself and create yourself, and if others stand
32
+ [154.920 --> 159.480] in the way, you must continue to believe in yourself.
33
+ [159.480 --> 164.520] The goal is to become self-sufficient and independent, and if that means leaving the
34
+ [164.520 --> 167.400] collective, so be it.
35
+ [167.400 --> 173.080] Social harmony, though a good thing, can also be a detriment to societal progress and
36
+ [173.080 --> 175.600] self-growth.
37
+ [175.600 --> 180.120] Hofstede says the key word in collectivist groups is harmony.
38
+ [180.120 --> 185.160] There should be harmony inside the ingroup, even if people disagree, they should maintain
39
+ [185.160 --> 187.160] a superficial harmony.
40
+ [187.160 --> 191.240] Otherwise, the ingroup will be weakened.
41
+ [191.240 --> 196.720] This is why some of my students in South Korea, though knowing the correct answer in class,
42
+ [196.720 --> 199.200] would refuse to raise their hands.
43
+ [199.200 --> 205.880] They did not want to show off, but wanted to remain humble for the harmony of the group.
44
+ [205.880 --> 210.320] This is also why Korean immigrants in the past needed to be coached on how to interview
45
+ [210.320 --> 213.080] for jobs in the United States.
46
+ [213.080 --> 218.080] The emphasis on advocating for yourself and bragging about your accomplishments was
47
+ [218.080 --> 220.000] viewed as wrong.
48
+ [220.000 --> 225.060] When interviewees were asked about their English abilities, they would downplay them, despite
49
+ [225.060 --> 231.280] speaking excellent English, maintaining humility and not standing out was important for
50
+ [231.280 --> 234.320] group cohesion.
51
+ [234.320 --> 241.280] As teachers, it is our job to not only build relationships with students, but to see relationships
52
+ [241.280 --> 243.280] in a new light.
53
+ [243.280 --> 248.120] In collectivist cultures, interdependence is often seen positively.
54
+ [248.120 --> 252.440] This is why, once you have built rapport and trust with students, you can use what
55
+ [252.440 --> 260.280] Lisa Delpit calls, quote, a communicative style that appeals to affiliation.
56
+ [260.280 --> 265.800] Asking students from collectivist backgrounds to do the work for you, the teacher, is a
57
+ [265.800 --> 271.120] technique that works because it caters to a student's desire to belong.
58
+ [271.120 --> 276.440] In fact, Delpit encourages teachers to be authoritative in their use of affiliation
59
+ [276.440 --> 281.440] because this mirrors the home culture of many students of color.
60
+ [281.440 --> 287.080] In America, you are taught to do the work for yourself to find your why.
61
+ [287.080 --> 289.320] Motivation is a personal affair.
62
+ [289.320 --> 298.000] In collectivist cultures, motivation comes in a more group-oriented and communal form.
63
+ [298.000 --> 303.200] American teachers also need to build genuine relationships with students.
64
+ [303.200 --> 309.120] Many Americans are blind to the fact that a business or work relationship doesn't work
65
+ [309.120 --> 311.520] in many collectivist contexts.
66
+ [311.520 --> 317.800] A relationship is a relationship and having merely a work relationship seems disingenuous
67
+ [317.800 --> 319.320] and fake.
68
+ [319.320 --> 325.280] This is why many cultures refer to Americans at least relationally as cold.
69
+ [325.280 --> 330.920] This means teachers must not use relationships as a technique to leverage students for this
70
+ [330.920 --> 335.120] will cause students to distrust you more.
71
+ [335.120 --> 340.120] Instead teachers need to focus on the art of small talk, forging genuine curiosity of
72
+ [340.120 --> 346.440] our students' lives, and at times prioritizing relationships over work.
73
+ [346.440 --> 354.480] This realness, authenticity, and bonding often needs to take place before learning occurs.
74
+ [354.480 --> 360.280] I will say it one more time, unless you have felt it and experienced it, the intensity
75
+ [360.280 --> 366.520] of relationships in collectivist societies is unlike anything we have in the United States.
76
+ [366.520 --> 371.720] To be culturally relevant and to act as a cultural bridge builder, teachers must think about
77
+ [371.720 --> 376.200] their relational style in the classroom differently.
78
+ [376.200 --> 379.920] Thanks for watching and please subscribe to Tolentino Teaching.
transcript/allocentric_94HekWSIqLM.txt ADDED
@@ -0,0 +1,137 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 16.380] For all mobile organisms, and particularly mammals, including ourselves, knowing where we are
2
+ [16.380 --> 20.720] and being able to find our way around, find our way back to our home for example, find
3
+ [20.720 --> 27.560] our way to sources of resources like food, is a crucial cognitive capability.
4
+ [27.560 --> 32.640] And we've recently begun to understand what the neural basis of this kind of spatial
5
+ [32.640 --> 38.280] navigation is knowing where you are and knowing how to get to places that you need to get to.
6
+ [38.280 --> 43.040] So we're now beginning to really understand what's happening in the brain that enables
7
+ [43.040 --> 48.720] us to know where we are and know how to find our way around and remember where the important
8
+ [48.720 --> 51.880] places for us are in the environment.
9
+ [51.880 --> 60.240] And this breakthrough in understanding really began in the late 60s and early 70s with
10
+ [60.240 --> 66.760] John O'Keefe here at UCL discovering place cells, neurons within the hippocampus, a part
11
+ [66.760 --> 71.680] of the brain, in animal models like rats and mice.
12
+ [71.680 --> 76.160] These neurons fire whenever the animal is in a particular part of its environment and
13
+ [76.160 --> 78.440] a different neuron fires when it's somewhere else.
14
+ [78.440 --> 83.880] And so together this big population of neurons, if you look at the activity as it varies
15
+ [83.880 --> 87.920] as the animal moves around its environment, you can tell where the animal is.
16
+ [87.920 --> 94.360] So which of the place cells are firing, firing little electrical impulses to other neurons
17
+ [94.360 --> 99.520] in the brain, that tells you where it is in the environment and those neurons are telling
18
+ [99.520 --> 104.480] the rest of the brain as the animal moves around all the time, where is it in its environment.
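+ As a minimal sketch of that readout idea (illustrative only, not the actual recordings or analysis): if each place cell is modelled with a Gaussian place field, the animal's position can be decoded from the population activity by a firing-rate-weighted average of the field centres; all the numbers below are assumptions.
+
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ n_cells = 100
+ centres = rng.uniform(0, 1, size=(n_cells, 2))  # field centres in a 1 m x 1 m box
+ sigma = 0.1                                     # assumed place field width (m)
+
+ def firing_rates(pos):
+     """Each cell fires most strongly when the animal is at its field centre."""
+     d2 = np.sum((centres - pos) ** 2, axis=1)
+     return np.exp(-d2 / (2 * sigma ** 2))
+
+ def decode(rates):
+     """Population-vector readout: rate-weighted average of field centres."""
+     return (rates[:, None] * centres).sum(axis=0) / rates.sum()
+
+ true_pos = np.array([0.3, 0.7])
+ print(decode(firing_rates(true_pos)))  # close to [0.3, 0.7]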
19
+ [104.480 --> 110.760] And shortly after this discovery, in the 80s, Jim Ranck and his colleagues in New York
20
+ [110.760 --> 115.960] discovered head direction cells and they're like a neural compass.
21
+ [115.960 --> 122.160] So the place cells are active according to where the animal is in its environment; head direction
22
+ [122.160 --> 125.200] cells are active according to which way it's facing.
23
+ [125.200 --> 128.040] So it doesn't matter where it is, just where it's facing.
24
+ [128.040 --> 132.800] A given head direction cell will fire whenever the animal is facing north, for example,
25
+ [132.800 --> 133.800] wherever it is.
26
+ [133.800 --> 136.840] A different one will fire when it's facing in a different direction.
27
+ [136.840 --> 141.000] So across the population of head direction cells, the pattern of activity is telling the
28
+ [141.000 --> 148.080] rest of the brain which way is the animal facing, all the time as it's moving around.
29
+ [148.080 --> 154.480] And then a third kind of spatial cell was discovered much more recently in 2005 by the
30
+ [154.480 --> 159.160] Mosers in Norway, and these are the grid cells.
31
+ [159.160 --> 163.280] And they're a little bit like place cells in the sense that as the animal moves around its
32
+ [163.280 --> 168.000] environment, a given cell will fire depending on the location of the animal.
33
+ [168.000 --> 172.400] But a given grid cell will fire whenever the animal enters any one of a series of locations that
34
+ [172.400 --> 177.960] are distributed about the environment of the animal in a regular triangular array.
35
+ [177.960 --> 183.560] It's a very surprising thing to see given the complexities of behaviors, these animals
36
+ [183.560 --> 184.560] wandering around.
37
+ [184.560 --> 189.400] But a given grid cell will fire whenever the animal goes into any of these locations organized
38
+ [189.400 --> 192.120] in a triangular array across the environment.
39
+ [192.120 --> 198.160] And a different grid cell will fire on a similar array of locations slightly shifted from
40
+ [198.160 --> 199.400] the other cell.
41
+ [199.400 --> 206.360] So that together, a population of these grid cells, the activity will move from one to another
42
+ [206.360 --> 208.840] as the animal moves around.
43
+ [208.840 --> 213.640] And so again, like the play cells, they're telling the rest of the brain in a special kind
44
+ [213.640 --> 215.280] of way where the animal is.
45
+ [215.280 --> 221.120] You could work out where the animal is from what pattern of grid cell activity there is.
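+ A standard textbook idealization of that firing pattern (a modelling convention, not the discoverers' analysis code) is the sum of three plane waves 60 degrees apart, which produces the triangular array of fields; shifting the phase gives the shifted copy belonging to a different grid cell. The spacing and phase values here are arbitrary assumptions.
+
+ import numpy as np
+
+ def grid_rate(x, y, spacing=0.5, phase=(0.0, 0.0)):
+     """Idealized grid cell rate at (x, y): three cosines 60 degrees apart.
+     'spacing' is the distance between fields; 'phase' shifts the lattice,
+     which is how one grid cell's fields differ from its neighbour's."""
+     k = 4 * np.pi / (np.sqrt(3) * spacing)  # wave number for the lattice
+     angles = np.deg2rad([0, 60, 120])
+     total = sum(np.cos(k * ((x - phase[0]) * np.cos(a) +
+                             (y - phase[1]) * np.sin(a))) for a in angles)
+     return max(total / 3.0, 0.0)  # rectify to a non-negative rate
+
+ # Same lattice, shifted phase: the second cell peaks at offset locations.
+ print(grid_rate(0.0, 0.0), grid_rate(0.0, 0.0, phase=(0.25, 0.0)))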
46
+ [221.120 --> 224.960] And these are found in the entorhinal cortex, which is just next to the hippocampus, and
47
+ [224.960 --> 229.120] they project into where the place cells are in the hippocampus.
48
+ [229.120 --> 237.400] But because they have this funny repeating, regular pattern of firing in the world, and
49
+ [237.400 --> 241.600] each one has a shifted copy of that same firing pattern in terms of where the cell fires
50
+ [241.600 --> 243.160] in the world.
51
+ [243.160 --> 248.280] It's easy to imagine that these cells could be updating their firing pattern across the
52
+ [248.280 --> 251.520] population of grid cells according to the movements of the animal.
53
+ [251.520 --> 255.720] So as the animal moves in one direction, the activity passes from one grid cell to the
54
+ [255.720 --> 260.040] next one, whose firing patterns are shifted relative to the first cell.
55
+ [260.040 --> 264.960] And that will be true wherever it is in the environment because of this funny repeating
56
+ [264.960 --> 267.200] firing patterns that these cells have.
57
+ [267.200 --> 274.400] And so people think that these grid cells are a way of interfacing knowledge about self-motion
58
+ [274.400 --> 282.120] of the animal, including humans, we think, with the representation of where that animal
59
+ [282.120 --> 283.760] or person is within the world.
60
+ [283.760 --> 288.840] So the place cells could tell you where you are, and the grid cells could update that knowledge
61
+ [288.840 --> 293.760] given that you know you've moved 10 meters to the North, for example, you now know where
62
+ [293.760 --> 297.200] you should be given where you were and how you've moved.
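+ In code, that updating step is just dead reckoning; here is a minimal sketch with hypothetical step sizes: knowing where you were and how you've moved is enough to update where you should now be.
+
+ position = [0.0, 0.0]               # (east, north) in metres
+ moves = [(0.0, 10.0), (3.0, -2.0)]  # hypothetical self-motion estimates
+ for dx, dy in moves:                # integrate each movement step
+     position[0] += dx
+     position[1] += dy
+ print(position)  # [3.0, 8.0]: where you should be, given how you've moved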
63
+ [297.200 --> 306.400] More recently still, here we found some cells which indicate our location relative to
64
+ [306.400 --> 309.800] the environment around us: boundary vector cells.
65
+ [309.800 --> 316.120] So whenever you have a large extended environmental feature, there are cells again in these same
66
+ [316.120 --> 323.800] areas near to the hippocampus which indicate that the animal, or perhaps the person, is
67
+ [323.800 --> 329.040] a particular distance and direction away from a big building or a large extended environmental
68
+ [329.040 --> 330.040] feature.
69
+ [330.040 --> 335.800] Colin Lever discovered these cells, working with myself and John O'Keefe.
70
+ [335.800 --> 341.120] And more recently, Jim Knierim in the United States has found cells which indicate the distance
71
+ [341.120 --> 345.080] and direction of the animal from individual objects.
72
+ [345.080 --> 351.200] So what we're beginning to see altogether is that cells in and around the hippocampus
73
+ [351.280 --> 356.080] in this part of the brain, in humans that's sort of in here in the middle of the medial
74
+ [356.080 --> 361.840] temporal lobes, all these different cells encoding for our location and our direction and
75
+ [361.840 --> 367.880] being able to update their activity given our own movements and also cells representing
76
+ [367.880 --> 374.360] where we are relative to environmental features or objects within our environment, mean that
77
+ [374.360 --> 381.560] we can understand really at the neural level how we can know where we are and where other
78
+ [381.560 --> 385.000] things are around us and where we're heading.
79
+ [385.000 --> 392.720] And more importantly, perhaps for the idea of navigation and spatial memory, is that
80
+ [392.720 --> 397.760] it's likely that these patterns of firing of neurons which define where we are and where
81
+ [397.760 --> 403.280] we think our environment is around us can be stored, so that if there's an important
82
+ [403.280 --> 409.040] location like your home, you can store the pattern of activity that indicates that location.
83
+ [409.040 --> 413.160] And now when you're somewhere else, you could retrieve that pattern of activity and compare
84
+ [413.160 --> 418.200] it to the current pattern of activity and work out the distance and direction between
85
+ [418.200 --> 422.960] them so that you know how to get back to where you were if that's where you want to go to.
86
+ [422.960 --> 428.920] And one aspect about the regular repeating firing of the grid cells is that it's a bit
87
+ [428.920 --> 436.080] like a binary code. It's a very powerful code for potentially very large scale spaces
88
+ [436.080 --> 442.600] so that if you know the firing of the grid cells across the population of grid cells at
89
+ [442.600 --> 448.800] one location and also at your current location, you can work out the vector between them, the
90
+ [448.800 --> 454.600] distance and direction between these locations even if they're very far apart in principle.
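+ A minimal sketch of that vector computation (illustrative of the idea only; it assumes the stored and current activity patterns have already been decoded to coordinates, and the positions are hypothetical): the homing vector is just the difference between the two locations, read out as a distance and a direction.
+
+ import math
+
+ def homing_vector(current_xy, goal_xy):
+     """Distance and bearing from the current location to a remembered goal."""
+     dx, dy = goal_xy[0] - current_xy[0], goal_xy[1] - current_xy[1]
+     distance = math.hypot(dx, dy)
+     bearing = math.degrees(math.atan2(dy, dx))  # 0 = east, 90 = north
+     return distance, bearing
+
+ # Hypothetical decoded positions (e.g., from grid/place population activity):
+ home, here = (0.0, 0.0), (30.0, 40.0)
+ dist, heading = homing_vector(here, home)
+ print(f"Head {heading:.0f} degrees for {dist:.0f} m")  # -127 degrees, 50 m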
91
+ [454.600 --> 458.960] And so it could be that this system is a powerful way of knowing where you are and working
92
+ [458.960 --> 464.920] out how to get to where you need to get to, which as I said is a very important property
93
+ [464.920 --> 472.840] for most mobile organisms. So looking into the future, hopefully beginning to understand
94
+ [472.840 --> 479.040] the neural mechanisms behind spatial memory, we'll enable us to understand for example
95
+ [479.040 --> 484.120] why people who start to get damage to this part of the brain, they become us as in
96
+ [484.120 --> 490.520] Alzheimer's disease, start to lose their way and start wandering off and getting lost,
97
+ [490.520 --> 496.120] which is a problem which creates great difficulties for their carers. And perhaps also it will become
98
+ [496.120 --> 502.760] possible to make artificial devices, driverless cars or robots that can find their way around
99
+ [502.760 --> 507.320] in a similar way to humans, not necessarily because that's the best way to find your way
100
+ [507.400 --> 515.080] around if you're a mechanical device, satanav may be more accurate, but if artificial navigational
101
+ [515.080 --> 519.800] devices can understand how humans find their way around, then it makes them, but perhaps easier
102
+ [519.800 --> 525.240] to interact with and that they can have built in knowledge of what kinds of aspects of finding
103
+ [525.240 --> 530.280] our way around that humans find difficult and which ones they find easy. So from all of these
104
+ [530.760 --> 540.120] scientific experiments and developments recently recognised for example with the 2014 Nobel Prize
105
+ [540.120 --> 546.840] for Physiology or Medicine, we have got a nice detailed understanding of how these different
106
+ [546.840 --> 553.560] types of neurons behave, but actually always in very simple circumstances in the lab. By and
107
+ [554.120 --> 559.640] large, perhaps due to constraints of having simple, understandable experiments and also constraints
108
+ [559.640 --> 565.000] of only being able to afford small amounts of lab space, most of these experiments are done in rather
109
+ [565.000 --> 569.320] simple environments, rather small scale environments and it's still an open question how
110
+ [570.520 --> 575.720] this sort of representation of your location and direction and the grid cell firing patterns
111
+ [576.360 --> 585.000] will really play out in the natural environment of a human in a complex city or a rat in a large
112
+ [585.000 --> 592.440] scale environment of many hundreds of metres with lots of complicated narrow routes and so on.
113
+ [592.440 --> 598.280] And so although we've got a nice understanding of the simplest possible situations and what
114
+ [598.280 --> 603.960] these neurons are doing, it's still not clear how this will play out in the complexity of everyday
115
+ [603.960 --> 612.200] life and how it will really explain everyday navigation in complicated situations, but it's a very
116
+ [612.200 --> 620.520] good first step. So although most of these initial experiments have been done in rodents,
117
+ [621.480 --> 627.080] we can now with functional neuroimaging look for the signs of the same kind of coding in the human
118
+ [627.080 --> 633.480] brain, often while people navigate in a virtual reality video game while their brain is being scanned
119
+ [633.480 --> 639.560] and indeed we can make specific predictions of what kind of patterns of metabolic activity we
120
+ [639.560 --> 644.360] should see in the scanner given that we know what the individual neuron should be doing if the person's
121
+ [644.360 --> 650.840] spatial memory is working like a rat's or a mouse's spatial memory. And perhaps surprisingly we've
122
+ [650.840 --> 657.320] seen many strong confirmatory examples of the same kind of coding. You can see evidence for
123
+ [657.320 --> 663.160] the presence of place cells and head direction cells and grid cells and boundary vector cells in
124
+ [663.240 --> 668.680] functional neuroimaging experiments with people exploring in virtual environments while their brain
125
+ [668.680 --> 674.360] is being scanned. And with epilepsy patients who have intractable epilepsy and need to have
126
+ [675.240 --> 681.240] the focus of epilepsy actually removed from their brain, then electrical activity is recorded in
127
+ [681.240 --> 688.040] many cases from these patients for many days. And if they play a virtual reality video game where they
128
+ [688.440 --> 698.200] virtually move around, you can also see examples of recordings of individual neurons that show where
129
+ [698.200 --> 703.160] they were or which way they were headed in this virtual environment. These experiments have
130
+ [703.160 --> 709.880] been done by Mike Kahana and Itzhak Fried, largely, and many collaborators. So although experiments are
131
+ [709.880 --> 717.720] much easier to control perhaps and implement in rodents mice and rats that are foraging around for
132
+ [717.720 --> 724.280] a piece of food, we can take those important results and work out what they imply for human
133
+ [724.280 --> 730.280] experiments. And where we've looked we usually see something rather similar. Of course in humans
134
+ [730.280 --> 735.720] there's much more complexity there as well and all sorts of verbal representations and
135
+ [737.560 --> 743.000] semantic knowledge and so on, which we don't usually study in rodents and in fact would probably be
136
+ [743.080 --> 749.960] impossible and that's added on as well but these basic spatial building blocks that we see in rodents
137
+ [749.960 --> 758.920] give us a starting point to look in humans and so far it seems like that's a valid starting point.
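
A toy sketch of one such model-based prediction mentioned above. A standard example (the specific regressor is my assumption, not a quote from this talk) is that six-fold symmetric grid-cell firing should make the scanner signal vary as cos(6 * (theta - phi)) with running direction theta, for some unknown grid orientation phi:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 500)       # running direction on each trial
phi_true = np.deg2rad(15.0)                  # hidden grid orientation
bold = 0.5 * np.cos(6 * (theta - phi_true)) + rng.normal(0, 0.3, theta.size)

# Estimate phi on half the data, then test the predicted regressor on the rest.
train, test = slice(0, 250), slice(250, None)
b_sin = np.sum(bold[train] * np.sin(6 * theta[train]))
b_cos = np.sum(bold[train] * np.cos(6 * theta[train]))
phi_hat = np.arctan2(b_sin, b_cos) / 6
r = np.corrcoef(bold[test], np.cos(6 * (theta[test] - phi_hat)))[0, 1]
print(np.rad2deg(phi_hat), r)   # recovered orientation near 15 degrees, r well above 0
```
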
transcript/allocentric_BUObbn7i_qo.txt ADDED
@@ -0,0 +1,48 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 4.320] I'm Evonne Ng and I'll be presenting work on inferring the body pose of a camera
2
+ [4.320 --> 8.000] wearer from Ego-centric video in collaboration with Donglai Xiang,
3
+ [8.000 --> 13.840] Hanbyul Joo, and Kristen Grauman. Recent years have seen a growth of wearable cameras in a variety
4
+ [13.840 --> 18.880] of different industries where they have enabled more immersive and realistic user experiences.
5
+ [19.440 --> 23.360] However, in order to seamlessly integrate with a user's actions,
6
+ [23.360 --> 28.880] the system must be able to infer the body pose of the camera wearer, also known as the Ego pose,
7
+ [28.880 --> 34.640] which reveals important information about the user's physical activities. Yet Ego pose estimation
8
+ [34.640 --> 40.640] is particularly challenging due to the extensive occlusions and motion blur that often accompany
9
+ [40.640 --> 46.960] an Ego-centric perspective. More importantly, the person of interest is often completely out of view
10
+ [46.960 --> 53.360] of the camera. This brings us to our goal, which is to use Ego-centric video captured by a wearable
11
+ [53.360 --> 59.440] camera to infer the body pose of the camera wearer, who happens to be behind the camera,
12
+ [59.440 --> 66.080] and hence unseen. Prior pose work has typically focused on third-person pose estimation,
13
+ [66.080 --> 72.240] which involves detecting visible poses of people in view of the camera, as opposed to out of view.
14
+ [72.240 --> 78.000] An early attempt at Ego-centric pose estimation was inside out mocap, but this involves using a
15
+ [78.000 --> 84.240] multi-camera setup. And while there exists fireworks that uses only a single camera, these approaches
16
+ [84.240 --> 90.480] rely heavily on cues from Ego-motion, which limits actions to large sweeping movements such as walking
17
+ [90.480 --> 97.200] or sitting. This brings us to our key insight, which is to leverage interactions to estimate the
18
+ [97.200 --> 104.080] unseen first-person pose. We know this link exists, for example in the image, I might have covered
19
+ [104.080 --> 110.080] a few body poses, but using information from the interactions, we can accurately infer what the
20
+ [110.080 --> 117.280] missing body pose is. You2Me uses the inferred pose of the second person in view of the camera
21
+ [117.280 --> 124.720] to improve the estimation of the first-person pose. From an input video, we extract three features.
22
+ [124.720 --> 130.400] The dynamic motion feature captures scene-invariant cues pertaining to the motion of the camera wearer,
23
+ [130.400 --> 135.120] while the static scene feature attends to the surrounding visual context that may be associated
24
+ [135.120 --> 142.000] with certain poses. Finally, we extract a second-person pose feature. This feature expresses the
25
+ [142.000 --> 147.840] central concept that the camera wearer's pose is strongly governed by interaction dynamics,
26
+ [147.840 --> 154.320] which are directly tied to the interactee's pose. We leverage recent successes in third-person
27
+ [154.400 --> 161.840] pose estimation via OpenPose to extract the second-person pose. These features are fed as input into
28
+ [161.840 --> 168.640] an LSTM that outputs a frame-by-frame sequence of the camera wearer's predicted body pose (see the sketch below). To train and
29
+ [168.640 --> 174.800] test our method, we introduce a novel data set consisting of two collections. Our Panoptic Studio
30
+ [174.800 --> 180.480] collection contains highly accurate ground-truth skeletons, but it was taken in a limited setting.
31
+ [180.480 --> 185.920] We therefore also provide an in-the-wild data set captured by two Kinect sensors.
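
A minimal sketch of the three-feature-plus-LSTM pipeline described above, in PyTorch. Every dimension and name below is an illustrative assumption, not the paper's released code:

```python
import torch
import torch.nn as nn

class EgoPoseLSTM(nn.Module):
    """Per-frame motion, scene, and second-person pose features are
    concatenated and fed to an LSTM that emits the camera wearer's
    3-D pose frame by frame (all dimensions are assumptions)."""
    def __init__(self, motion_dim=128, scene_dim=256, pose2d_dim=50,
                 joints=17, hidden=512):
        super().__init__()
        self.lstm = nn.LSTM(motion_dim + scene_dim + pose2d_dim,
                            hidden, batch_first=True)
        self.head = nn.Linear(hidden, joints * 3)   # 3-D joints per frame

    def forward(self, motion, scene, pose2d):       # each: (batch, frames, dim)
        x = torch.cat([motion, scene, pose2d], dim=-1)
        h, _ = self.lstm(x)
        return self.head(h)                          # (batch, frames, joints * 3)

# Toy usage: a batch of 2 clips, 30 frames each.
model = EgoPoseLSTM()
out = model(torch.randn(2, 30, 128), torch.randn(2, 30, 256),
            torch.randn(2, 30, 50))
print(out.shape)   # torch.Size([2, 30, 51])
```
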
32
+ [185.920 --> 190.560] Each collection contains enactments from different activities that are conducive to
33
+ [190.560 --> 197.840] dyadic interactions. We stress that our method is tested on unseen people and is trained across
34
+ [197.840 --> 203.840] all of the activities at once. As shown in these examples, our method successfully captures the
35
+ [203.840 --> 211.440] link between how other people respond in their body pose as a function of one's own ego body pose.
36
+ [213.040 --> 218.720] Quantitatively, our approach outperforms current state of the art for ego pose estimation,
37
+ [218.720 --> 223.600] a state of the art third-person approach adapted to the first-person setting, and a baseline stronger than
38
+ [223.600 --> 229.360] random guessing. All quantitative results are also demonstrated separately in the Panoptic Studio
39
+ [230.320 --> 234.880] setting. As opposed to current state of the art, our approach successfully predicts gestural
40
+ [234.880 --> 242.800] upper body movements of the camera wearer. In ablation studies, removing the second-person pose
41
+ [242.800 --> 249.280] feature most significantly decreases performance. This is best shown here where the OpenPose feature
42
+ [249.280 --> 255.120] allows our approach to accurately predict down to which arm moves forward in a hand game.
43
+ [255.840 --> 260.960] While we achieve successful results using 2D OpenPose skeletons, we get further
44
+ [260.960 --> 266.800] improvements by feeding in the 3D skeletal ground truths of the interactee. Finally,
45
+ [266.800 --> 272.880] we verify that our network learns appropriate correlations from the inferred second-person pose
46
+ [272.880 --> 278.240] by feeding in incorrect 2D skeletons, which causes the network's performance to drop.
47
+ [279.200 --> 287.440] In summary, we introduce a method for ego pose estimation in dyadic interactions that explicitly
48
+ [287.440 --> 303.600] leverages first and second-person interaction dynamics to achieve improved ego pose estimates.
transcript/allocentric_D8FMbC7RoIg.txt ADDED
@@ -0,0 +1,353 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 9.840] So today we're going to talk a little bit about communication as it relates to perception
2
+ [9.840 --> 13.800] and perception checking.
3
+ [13.800 --> 16.760] We know there are all kinds of limits on perception.
4
+ [16.760 --> 20.400] We know that we don't perceive everything.
5
+ [20.400 --> 25.880] And some of those limits are just within the bounds of physics, right?
6
+ [25.880 --> 29.280] So right now my wife could be talking.
7
+ [29.280 --> 30.920] She's three miles away.
8
+ [30.920 --> 34.480] And so if she is, I wouldn't know.
9
+ [34.480 --> 38.640] I don't even have a perception of where she is.
10
+ [38.640 --> 44.560] And sometimes the realities within physics actually change things before they get from one
11
+ [44.560 --> 45.560] place to another.
12
+ [45.560 --> 48.760] You can see these little girls here playing with prisms, right?
13
+ [48.760 --> 51.240] And you can see the way that the light is changed.
14
+ [51.240 --> 59.240] And so therefore the perception of the light is changed just by physical objects
15
+ [59.240 --> 64.040] and physical things that are happening.
16
+ [64.040 --> 67.280] Our perceptions are also affected by biology.
17
+ [67.280 --> 73.880] There are biological processes that go into our perception that are just part of
18
+ [73.880 --> 74.880] it.
19
+ [74.880 --> 78.320] We know that insects, right?
20
+ [78.320 --> 84.800] Can see wavelengths that human beings just can't see because they respond to those wavelengths
21
+ [84.800 --> 88.280] in ways that human beings can't.
22
+ [88.280 --> 96.320] We also know that people with two X chromosomes and people with an XY chromosome, their eyes
23
+ [96.320 --> 104.320] function differently so that people with the X chromosomes have more cones in their eyes
24
+ [104.320 --> 112.240] that are receptive to color and receptive to things sitting still, things that are just
25
+ [112.240 --> 114.040] in one place.
26
+ [114.040 --> 124.440] Whereas those with XY chromosomes tend to perceive more things that are moving and perceive
27
+ [124.440 --> 128.360] less gradation in color.
28
+ [128.360 --> 131.760] This is a fact.
29
+ [131.760 --> 134.760] There's really no way around it.
30
+ [134.760 --> 143.100] But even within individuals, I don't probably have exactly the same layout and design of
31
+ [143.100 --> 151.100] cones and rods as another person with XY chromosomes.
32
+ [151.100 --> 154.420] I just don't so I'm going to see things differently.
33
+ [154.420 --> 155.420] Yeah.
34
+ [155.420 --> 162.940] And you hear about it a lot in heterosexual relationships where the woman often complains
35
+ [162.940 --> 167.300] that the guy can't see something that is sitting right in front of him.
36
+ [167.300 --> 170.740] Well, he can't because it's not moving.
37
+ [170.740 --> 176.940] It's harder for the male eye to attend to things that are sitting.
38
+ [176.940 --> 181.980] Whereas we do notice the people walking around a room and sometimes that can be frustrating
39
+ [181.980 --> 186.460] to other people too who are sitting with us and we want them to be focused on us.
40
+ [186.460 --> 192.540] Maybe we're sitting at a restaurant, right, and we're talking, but our eye is drawn
41
+ [192.540 --> 198.380] to the movement of the wait staff walking around the restaurant.
42
+ [198.380 --> 202.820] That's what we're going to look at rather than the person sitting still in front of us.
43
+ [202.820 --> 208.580] And that's just biology that has nothing to do with socialization.
44
+ [208.580 --> 212.660] This is a biological fact.
45
+ [212.660 --> 220.980] And since things like physics and biology affect perception, there's not much that we can
46
+ [220.980 --> 223.060] really do about those.
47
+ [223.060 --> 228.860] We can maybe make some certain kinds of glasses to adjust for certain things.
48
+ [228.860 --> 239.500] But for the most part, there's not a lot that we can do about our physics or our biology.
49
+ [239.500 --> 246.100] And so that's not what we really focus on when we're studying communication.
50
+ [246.100 --> 251.820] There are however, some influences on our perception that do have to do with communication.
51
+ [251.820 --> 253.940] And those are the ones that we're going to focus on today.
52
+ [253.940 --> 260.660] I'm going to use Adler, Proctor and Manning's differentiation of these influences on perception.
53
+ [260.660 --> 271.180] They list selection, organization, interpretation and negotiation as four personal influences
54
+ [271.180 --> 273.180] that go on in our perception.
55
+ [273.180 --> 275.500] So I'm going to talk about each of these.
56
+ [275.500 --> 282.540] We do tend to select particular stimuli for our perception.
57
+ [282.540 --> 289.940] We tend to select stimuli that are intense, stimuli that are repetitive, stimuli that
58
+ [289.940 --> 294.260] are different, desirable or fitting.
59
+ [294.260 --> 301.340] Those are the ways that we kind of select which stimuli we're going to pay attention to.
60
+ [301.340 --> 305.980] We tend to select stimuli that are more intense.
61
+ [305.980 --> 311.900] When I was a kid, I remember being told that red cars get more speeding tickets.
62
+ [311.900 --> 315.380] And I haven't done any research ever to know if that's true.
63
+ [315.380 --> 321.980] But if it is, I would know why it is because the police are more likely to just notice a
64
+ [321.980 --> 326.980] red car once they notice that they might check the speed and that would be the reason.
65
+ [326.980 --> 328.580] Like I said, I don't really know that that's true.
66
+ [328.580 --> 334.420] But I can certainly understand why it would be true if it is because I have those experiences
67
+ [334.420 --> 335.420] myself.
68
+ [335.420 --> 345.220] I walk in and I see this bright light or I see this flash of color and I notice it.
69
+ [345.220 --> 348.220] I notice my eye goes right to that.
70
+ [348.220 --> 353.060] Or I hear a loud boom and I get scared.
71
+ [353.060 --> 356.980] And all of that is stuff where you have this intense stimulation.
72
+ [356.980 --> 364.380] And so you suddenly are paying attention to that intense stimulation.
73
+ [364.380 --> 370.820] We also tend to select stimuli that are repetitive where we see it and then we see it again
74
+ [370.820 --> 372.300] and then we see it again.
75
+ [372.300 --> 375.700] And then we start to wonder why am I seeing this everywhere?
76
+ [375.700 --> 378.420] It just seems to pop up again and again and again.
77
+ [378.420 --> 381.500] And then we actually seem to notice it popping up more.
78
+ [381.500 --> 387.860] The last two nights I had a dream that had Denver in it, the city in Colorado.
79
+ [387.860 --> 391.060] And it's not a normal thing for me to dream about Denver.
80
+ [391.060 --> 395.060] And the fact that it happened once, I was like, okay, I had a dream with Denver in it.
81
+ [395.060 --> 398.660] Then last night it happened again, this dream with Denver in it.
82
+ [398.660 --> 403.100] And so I noticed it because it happened again.
83
+ [403.100 --> 409.260] And so you see things again and then you see them again and you tend to notice them again.
84
+ [409.260 --> 415.540] I know somebody who often, when they're complaining,
85
+ [415.540 --> 424.020] says, and once again, because at one point that person noticed a pattern.
86
+ [424.020 --> 431.140] And then you start to pay attention to find the pattern over and over and over again.
87
+ [431.140 --> 433.420] Right? So it starts out you notice a pattern.
88
+ [433.420 --> 437.620] And then you might even see a pattern where there isn't so much one.
89
+ [437.660 --> 442.260] Because you say, and then you start to think, well, once again, this person is doing this thing.
90
+ [442.260 --> 448.900] Well, if you're using that phrase, "well, once again," you might find that you're just looking for the repetition
91
+ [448.900 --> 456.780] because you saw the repetition once or twice or more.
92
+ [456.780 --> 465.180] We focus on stimuli as well that kind of stand out that are different than the previous stimulus that we've seen.
93
+ [465.220 --> 469.700] Or just this thing that just doesn't fit, right?
94
+ [469.700 --> 472.380] When I was a kid, there was on Sesame Street this song.
95
+ [472.380 --> 475.500] Three of these things belong together.
96
+ [475.500 --> 478.620] Three of these things are kind of the same anyway.
97
+ [478.620 --> 479.500] Sorry.
98
+ [479.500 --> 485.540] But in that case, there would be one of these things that just doesn't belong here now.
99
+ [485.540 --> 487.020] It's time to play our game.
100
+ [487.020 --> 489.500] But we tend to notice that thing that doesn't belong.
101
+ [489.500 --> 492.860] That thing that's just like, why is that here?
102
+ [492.860 --> 494.540] Why is that there?
103
+ [494.580 --> 495.540] And this is good.
104
+ [495.540 --> 507.420] So I've noticed in my face-to-face classes that if somebody has green hair, I remember that student's name.
105
+ [507.420 --> 513.500] Or if somebody just is different than other people, I remember that.
106
+ [513.500 --> 515.380] And I catch it right away.
107
+ [515.380 --> 516.700] And why is that?
108
+ [516.700 --> 519.620] Just because it's different than what I'm seeing all the time.
109
+ [519.660 --> 521.860] And that's why it stands out.
110
+ [521.860 --> 526.420] And that's why I attend to it.
111
+ [526.420 --> 530.100] And then later I end up remembering more about it.
112
+ [530.100 --> 533.100] We also select what's desirable, right?
113
+ [533.100 --> 536.340] When we're selecting things to perceive.
114
+ [536.340 --> 541.180] And when I say desirable, I don't mean good.
115
+ [541.180 --> 547.860] I mean, we tend to select things that we want to see, right?
116
+ [547.860 --> 549.340] And that doesn't mean those things aren't real.
117
+ [549.380 --> 550.700] You're just seeing what you want to see.
118
+ [550.700 --> 555.500] Some people say, implying that what they're seeing isn't really there.
119
+ [555.500 --> 556.740] You just wish it were.
120
+ [556.740 --> 560.140] No, no, the things that we select to see are there.
121
+ [560.140 --> 564.180] But we notice them because they're the thing we want to see.
122
+ [564.180 --> 569.700] When I was a kid, I grew up during the Satanic panic, as they call it.
123
+ [569.700 --> 577.900] And during that time, it was common for people to notice Satanic symbols
124
+ [577.900 --> 585.220] in our media and in our lives and point out that there must be that there are
125
+ [585.220 --> 589.700] Satanists controlling the world.
126
+ [589.700 --> 601.540] And the thing with that is if you look for Satanic symbols, you will find them.
127
+ [601.540 --> 606.660] I had one person tell me as I was a little kid, third grade told me that the fact
128
+ [606.660 --> 612.060] that my teacher had made a five-pointed star on my paper to show that I had done a good
129
+ [612.060 --> 616.660] job meant that my teacher was probably a Satanist, right?
130
+ [616.660 --> 619.660] So you start looking for five-pointed stars.
131
+ [619.660 --> 622.380] You start looking for goats heads.
132
+ [622.380 --> 626.380] Not the things that poke you, but things that look like a goat.
133
+ [626.380 --> 631.020] You start looking for a triangle inside a circle.
134
+ [631.020 --> 635.460] You start looking for all these things that people told me are Satanic.
135
+ [635.460 --> 637.740] And you're going to find them.
136
+ [637.740 --> 643.580] You're going to find them over and over and over and over again.
137
+ [643.580 --> 645.540] That's what you're going to look for.
138
+ [645.540 --> 651.700] The same as once you start to say, well, you know, it drives me nuts about my partner
139
+ [651.700 --> 660.500] in this interpersonal situation is that my partner tends to eat all the raisins out of
140
+ [660.500 --> 665.140] the trail mix and just leave me with peanuts.
141
+ [665.140 --> 670.300] So you see your partner eating a raisin and you're, like, now noticing it.
142
+ [670.300 --> 674.420] You're eating a raisin because you were looking for that.
143
+ [674.420 --> 678.300] And the fact that you were looking for it meant that you perceived it and your partner really
144
+ [678.300 --> 680.580] is eating a raisin.
145
+ [680.580 --> 688.340] But as you're using that to build up evidence for what you want to see, you select those
146
+ [688.340 --> 695.020] stimuli that are the thing for which you're looking and tend to ignore those stimuli
147
+ [695.540 --> 699.540] that you're not looking for.
148
+ [699.540 --> 705.460] And the last kind of thing that works into our selection is we select what is fitting.
149
+ [705.460 --> 707.980] Okay, here's what I mean by that.
150
+ [707.980 --> 713.180] You see this picture here of the guy reading the book and there's a light.
151
+ [713.180 --> 716.620] What do you think he'd say if you asked him what he was looking at?
152
+ [716.620 --> 721.060] Do you think he would say the light reflecting off the pages?
153
+ [721.060 --> 722.540] I doubt it, right?
154
+ [722.540 --> 724.140] He doesn't pay attention to the light.
155
+ [724.140 --> 725.820] The light's there.
156
+ [725.820 --> 731.860] It's entering physiologically and biologically and physically.
157
+ [731.860 --> 737.340] He's seeing it, but he doesn't perceive it because he's focused on perceiving the words
158
+ [737.340 --> 739.340] in the book.
159
+ [739.340 --> 741.740] And lots of things are like that.
160
+ [741.740 --> 748.060] When you're first learning to drive, it's really hard to pay attention to both the gauges
161
+ [748.060 --> 749.380] and the road.
162
+ [749.380 --> 750.380] Right?
163
+ [750.380 --> 754.380] And you realize that if you look down at your speedometer to make sure you're not speeding,
164
+ [754.380 --> 756.380] you're not looking at the road.
165
+ [756.380 --> 763.180] You eventually learn to do that, but it is a learning process to look at both your gauges
166
+ [763.180 --> 764.740] and the road.
167
+ [764.740 --> 769.900] And this is kind of the difficulty that we have is that we see things.
168
+ [769.900 --> 771.540] We hear things, right?
169
+ [771.540 --> 776.740] I might hear some children playing in the background behind me, but I'm not really focused
170
+ [776.740 --> 777.740] on that right now.
171
+ [777.740 --> 783.620] And so if you ask me, what were you perceiving, I probably start talking about the computer
172
+ [783.620 --> 789.020] screen in front of me or something like that, not what I hear behind me, right?
173
+ [789.020 --> 791.460] Because that's just not what I'm looking at.
174
+ [791.460 --> 795.740] So we select what is fitting.
175
+ [795.740 --> 803.300] And after we've gone through all that processing to decide what we're actually going to perceive,
176
+ [803.300 --> 810.500] we then move on to the next stage, which is we have to organize what we've perceived.
177
+ [810.500 --> 811.660] So what do we do here?
178
+ [811.660 --> 814.380] We now have to organize.
179
+ [814.380 --> 818.460] And there's different things that go into our organization.
180
+ [818.460 --> 824.340] One is we have to decide which of these things, the ones that have somehow made it through all of
181
+ [824.340 --> 835.660] our processes, we choose to attend to, which ones we actually let ourselves know are
182
+ [835.660 --> 836.660] there.
183
+ [836.660 --> 845.620] We still have to decide what is the important thing in this situation, what matters here.
184
+ [845.620 --> 852.300] And that process of organizing in our mind, what is most important, what is least important,
185
+ [852.300 --> 856.260] is going to affect then what we perceive the world as being.
186
+ [856.260 --> 860.900] Next we move through the perceptual schema and stereotypes.
187
+ [860.900 --> 865.700] Now these can be confusing to some people, and that's why I put them together.
188
+ [865.700 --> 873.780] It's because there is an extent to which perceptual schema taken to the extreme are stereotypes,
189
+ [873.780 --> 874.780] right?
190
+ [874.780 --> 886.340] So if I'm out and about and I see a black man, right, and so he's a black man, he's probably
191
+ [886.340 --> 892.940] of African descent, and those are things that I can perceive about him as being pretty
192
+ [892.940 --> 896.700] likely, and that fits into a perceptual schema.
193
+ [896.700 --> 901.340] There's nothing racist about knowing that there is a black man here, right?
194
+ [901.340 --> 904.500] There's nothing, no problem with that.
195
+ [904.500 --> 910.140] Now if I start to take some of the social stories that have been told to me about this is
196
+ [910.140 --> 917.940] how black people behave, I'm moving to stereotypes and even into racism if I let myself go there,
197
+ [917.940 --> 922.700] if I were to allow myself to fall into these stories, right?
198
+ [922.700 --> 928.300] So you've got the perceptual schema which is putting things into categories, and it's
199
+ [928.300 --> 934.900] not wrong to put things into categories, it's necessary to put things into categories.
200
+ [934.900 --> 942.660] But when you start to take those categories and make them the entire story of an individual,
201
+ [942.660 --> 948.460] especially an individual person, but really any individual perception, then you start
202
+ [948.460 --> 950.380] to move into stereotypes.
203
+ [950.380 --> 955.220] And stereotypes are extremely problematic because they limit our perception in ways that
204
+ [955.220 --> 957.420] aren't really accurate.
205
+ [957.420 --> 962.380] The last thing is that we do in our organizing is we engage in punctuation.
206
+ [962.380 --> 969.020] Now punctuation is not like what you learned in your English class when we're talking about
207
+ [969.020 --> 970.460] it in communication.
208
+ [970.460 --> 975.500] So when we talk about punctuation here, we're looking at what do you see as the cause and
209
+ [975.500 --> 977.860] what do you see as the effect?
210
+ [977.860 --> 980.780] That's really what we're looking for.
211
+ [980.780 --> 987.860] So we see something happen, we see something happen before it or simultaneously with it,
212
+ [987.860 --> 991.140] and we think one thing caused the other thing.
213
+ [991.140 --> 997.060] This is a common thing, we'll talk about it at another video where we talk about our
214
+ [997.060 --> 1002.460] emotions, but sometimes people say, you made me angry.
215
+ [1002.460 --> 1008.180] And we know that another person can never make you angry.
216
+ [1008.180 --> 1017.620] But we feel this need to put in this punctuation where there is a cause and there is an effect.
217
+ [1017.620 --> 1022.660] And these two things happened: you did this thing and I was angry.
218
+ [1022.660 --> 1027.660] And so I decide that you caused me to be angry.
219
+ [1027.660 --> 1029.780] We do a lot of things like that.
220
+ [1029.780 --> 1035.700] And sometimes most of the time probably, we're right about this causes that, right?
221
+ [1035.700 --> 1041.140] You know, I turn the key on my car and it fires the ignition.
222
+ [1041.140 --> 1044.980] My turning of the key was the cause of that.
223
+ [1044.980 --> 1049.380] We could get into all kinds of first causes, second causes, and Aristotle and stuff like that.
224
+ [1049.380 --> 1051.460] But we're not going to today.
225
+ [1051.460 --> 1058.500] But yeah, the punctuation, how you decide what causes what, what the relationships between
226
+ [1058.500 --> 1060.420] perceptions are.
227
+ [1060.420 --> 1064.420] That's one of the ways that we organize our reality.
228
+ [1064.500 --> 1066.100] So then we come to the last two.
229
+ [1066.100 --> 1069.300] We have interpretation and negotiation.
230
+ [1069.300 --> 1075.380] And so when we talk about negotiation, we're talking about kind of a rhetorical act of trying to
231
+ [1075.380 --> 1077.700] get other people to adopt your perception.
232
+ [1077.700 --> 1081.700] And when we talk about interpretation, we really go into narrative.
233
+ [1081.700 --> 1088.820] So I want to focus just a little bit on the narratives that we use before I move on to talking about
234
+ [1088.820 --> 1093.860] negotiation, which involves a very important thing called perception checking.
235
+ [1093.860 --> 1098.100] When we start talking about interpretation and narrative, anybody who's really studied
236
+ [1098.100 --> 1101.140] communication immediately jumps over to Walter Fisher.
237
+ [1101.140 --> 1106.340] So first thing that comes to mind, Walter Fisher and his narrative paradigm, his narrative
238
+ [1106.340 --> 1113.700] paradigm is this idea that people make decisions based on how things fit in their stories, right?
239
+ [1113.700 --> 1116.980] So we hear lots of stories from other people.
240
+ [1116.980 --> 1120.580] We experience lots of things and try and put them into stories.
241
+ [1120.580 --> 1122.740] And when we do that, we look for two things.
242
+ [1122.740 --> 1125.220] The first is narrative fidelity, right?
243
+ [1125.220 --> 1129.220] So within the story that we're telling does this make sense.
244
+ [1129.220 --> 1132.740] So if you have the mama bear, the papa bear, and the baby bear come home from the walk,
245
+ [1132.740 --> 1136.820] and the mama bear says somebody's been eating my porridge, and the papa bear says somebody's
246
+ [1136.820 --> 1139.940] been eating my porridge, and the baby bear, well, doesn't say anything.
247
+ [1139.940 --> 1140.740] Bears can't talk.
248
+ [1141.460 --> 1143.060] We're going to throw that story out, right?
249
+ [1143.060 --> 1147.300] We're not going to take that as a true story that we're going to apply to our life because
250
+ [1148.020 --> 1150.980] you just said bears can talk.
251
+ [1150.980 --> 1153.140] You had bears talking, and then you said bears can't talk.
252
+ [1153.140 --> 1154.260] You contradicted yourself.
253
+ [1155.060 --> 1156.900] And in those situations, we just tend to throw it out.
254
+ [1156.900 --> 1158.260] We don't even think about it.
255
+ [1158.260 --> 1162.580] When we have this thing that contradicts the story within itself, we just throw it out.
256
+ [1163.220 --> 1166.900] We also tend to throw out things that don't fit into narrative coherence.
257
+ [1166.900 --> 1173.540] And a narrative coherence is all of the stories that you've told yourself that make up your world view.
258
+ [1174.020 --> 1181.140] So that when you hear a story from somebody else, when you perceive something,
259
+ [1181.460 --> 1186.100] you think does this fit with the stories I'm already telling, right?
260
+ [1186.900 --> 1191.620] Maybe you are out and about, and you see a little tiny person,
261
+ [1192.260 --> 1197.460] and that little tiny person has little wings flies into your field of view, lights up, and flies away.
262
+ [1198.340 --> 1202.580] You've got some choices there to make about this,
263
+ [1202.580 --> 1208.180] so in order to make your story coherent, maybe you had a hallucination,
264
+ [1208.660 --> 1210.500] that could be a story that you tell yourself.
265
+ [1210.500 --> 1215.140] And that would make sense because you've learned that there are no fairies,
266
+ [1215.940 --> 1223.860] or maybe you haven't fully accepted as coherent the idea that there are no fairies.
267
+ [1224.340 --> 1227.060] And so you don't think it's a hallucination.
268
+ [1227.060 --> 1228.900] You think, wow, I saw a fairy.
269
+ [1231.780 --> 1235.380] And so you've got to make the story make sense in your other stories.
270
+ [1236.260 --> 1241.140] Now, Ernest Bormann took this idea of narrative in kind of a different direction.
271
+ [1241.140 --> 1243.220] You notice that we don't just do this ourselves.
272
+ [1243.220 --> 1245.700] I'm not just making sense of my stories myself.
273
+ [1245.700 --> 1247.860] I make sense of my stories with other people.
274
+ [1248.660 --> 1254.500] And so maybe you'll tell me a story about a professor who's really mean to you.
275
+ [1254.500 --> 1257.300] And then I'll think, oh, you know what?
276
+ [1257.300 --> 1260.100] I remember this time when a professor was really mean to me.
277
+ [1260.100 --> 1263.140] We start telling these stories back and forth.
278
+ [1263.780 --> 1267.060] And in this moment, when we're telling these stories,
279
+ [1267.060 --> 1271.780] we build what Bormann calls a fantasy theme.
280
+ [1271.780 --> 1274.020] And some people think, well, fantasy is not real.
281
+ [1274.580 --> 1276.820] It's perfectly real.
282
+ [1277.460 --> 1280.580] But it's a way we've made sense of the world all together.
283
+ [1280.580 --> 1286.820] So it kind of ties in with narrative and with our next concept, which is perception checking.
284
+ [1287.860 --> 1291.540] And the truth is whether we articulate it this way or not,
285
+ [1291.540 --> 1295.700] when we talk about perceptions, we all know these things.
286
+ [1295.700 --> 1299.620] We know what a lot of the things affecting our perception are.
287
+ [1300.580 --> 1305.940] And so we need to engage in perception checking.
288
+ [1306.500 --> 1310.820] Perception checking is where you check out with other people.
289
+ [1310.820 --> 1312.740] This is what I'm perceiving.
290
+ [1312.740 --> 1314.420] Is this really what's going on?
291
+ [1316.660 --> 1321.140] Now, this notion of perception checking isn't really a new idea.
292
+ [1321.140 --> 1326.580] We can trace this all the way back to Plato and what he called dialectic or what his translators
293
+ [1326.580 --> 1328.340] call dialectic.
294
+ [1328.340 --> 1334.340] So Plato here, he talked about how we can know what's true.
295
+ [1334.340 --> 1340.500] We can know what's real by comparing with other people.
296
+ [1340.500 --> 1345.860] And so as we interrogate our perceptions with other people's perceptions,
297
+ [1345.860 --> 1350.020] we start to come closer to something that might be true.
298
+ [1352.100 --> 1356.820] So Adler Proctor and Manning have a really good method that they describe in their textbook,
299
+ [1356.820 --> 1361.460] Looking Out, Looking In, of how we can do perception checking.
300
+ [1362.180 --> 1365.380] So we see something, we perceive something.
301
+ [1365.380 --> 1368.660] And we know it could be one way or it could be another.
302
+ [1369.460 --> 1373.060] And so you need at least two possible interpretations.
303
+ [1373.060 --> 1376.260] Sometimes it's hard to find that second interpretation,
304
+ [1376.260 --> 1380.660] because we're always so married to the narratives that we already have.
305
+ [1380.660 --> 1384.580] But you try and find two possible interpretations or more,
306
+ [1385.140 --> 1391.140] and then go to somebody else who was maybe the person who did the behavior or was involved in the
307
+ [1391.140 --> 1397.540] event and say, you know, I see these multiple interpretations of the behavior.
308
+ [1398.500 --> 1407.620] This is what I'm perceiving; is this also how you would interpret your behavior or their behavior?
309
+ [1407.620 --> 1409.860] Is this also what's going on with you?
310
+ [1409.860 --> 1415.300] And when we do that, we can come a little closer to maybe a full perception.
311
+ [1415.940 --> 1419.380] Now, here's Adler, Proctor and Manning again.
312
+ [1419.380 --> 1424.260] They say that there are a few things to consider when doing your perception checking.
313
+ [1424.260 --> 1425.700] One is the completeness.
314
+ [1425.700 --> 1430.740] Right? So how much of the picture do you really have?
315
+ [1430.740 --> 1433.460] How much do you really know of what's going on?
316
+ [1433.460 --> 1435.860] Another is nonverbal congruency.
317
+ [1435.860 --> 1441.060] Right? And so sometimes what a person says is their perception
318
+ [1441.060 --> 1444.100] doesn't match the way they're acting.
319
+ [1444.100 --> 1447.300] And so do two things not quite mix?
320
+ [1447.300 --> 1449.940] A third thing is our cultural rules.
321
+ [1449.940 --> 1459.780] So our cultural rules put us in more high-context or low-context cultures.
322
+ [1459.780 --> 1468.180] Right? So high context cultures are cultures where a whole lot of what is meant by something is
323
+ [1468.180 --> 1470.900] carried through in the traditions and in the culture.
324
+ [1471.860 --> 1478.740] Whereas in low context cultures a lot more of what is meant by something takes place
325
+ [1478.740 --> 1483.140] in the language itself and in explaining what you're going to do.
326
+ [1483.140 --> 1488.740] So we sometimes call high-context cultures allocentric because they're focused
327
+ [1488.740 --> 1490.740] on looking at what's around stuff.
328
+ [1490.740 --> 1496.500] Whereas low-context cultures, we sometimes call them logocentric because they're more about
329
+ [1496.500 --> 1497.540] explaining things.
330
+ [1497.620 --> 1498.900] And then there's face saving.
331
+ [1499.620 --> 1504.740] The person who's helping you with your perception checking might be trying to save face for
332
+ [1504.740 --> 1512.020] themselves. No, no, no, that's not what I meant because they realize that would be a bad
333
+ [1512.020 --> 1519.300] persona to be putting out or they might be face saving with you saying, oh well, you're right
334
+ [1519.300 --> 1520.820] trying to make you feel better.
335
+ [1520.820 --> 1526.340] And either way, they're not trying to mess up your perception.
336
+ [1526.340 --> 1533.700] But they're trying to give you a perception that helps you perceive somebody maybe yourself
337
+ [1533.700 --> 1534.740] in a positive light.
338
+ [1535.940 --> 1541.220] Now I'd really like to hear from you maybe do a bit of perception checking on me, right?
339
+ [1541.220 --> 1547.860] So you've learned all of this stuff or you understand all of this stuff about perception
340
+ [1547.860 --> 1550.100] and perception checking, right?
341
+ [1550.100 --> 1556.260] So what I want to know is: tell me about a time when maybe it would have helped if you had
342
+ [1556.260 --> 1560.500] done some perception checking but you just went with your own perception, or when you
343
+ [1560.500 --> 1564.820] realized that somebody else would have done well to do some perception checking
344
+ [1564.820 --> 1570.100] but they really didn't, and that caused a problem in your friendship or your relationship
345
+ [1570.100 --> 1571.940] or other aspects of your communication.
346
+ [1573.220 --> 1576.660] All of that is really good.
347
+ [1576.660 --> 1581.060] So if you're one of my students what I want you to do is put it down in the discussion
348
+ [1581.060 --> 1585.940] prompt below. I want you to talk about, you know, how you've engaged in perception checking,
349
+ [1585.940 --> 1590.420] should have engaged in perception checking or maybe how somebody else has engaged in perception
350
+ [1590.420 --> 1592.980] checking or should have engaged in perception checking.
351
+ [1592.980 --> 1598.500] For the rest of you those of you who are not my students in my class at WNMU I would still
352
+ [1598.500 --> 1600.260] love to hear from you, right?
353
+ [1600.260 --> 1604.500] So I'd love for you to talk about these situations in the comments below.
transcript/allocentric_EAikQVqvqnY.txt ADDED
@@ -0,0 +1,361 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 6.000] Good evening everybody. You're watching the School Psyched podcast. Thank you for hanging out with us.
2
+ [6.000 --> 13.000] We're running a little bit late, so I really appreciate you guys hanging out and sticking with us.
3
+ [13.000 --> 17.000] Really excited about our guests tonight. A little bit star-struck.
4
+ [17.000 --> 22.000] Before we get going: I'm Rachael, I'm a school psychologist working in the state of Maryland.
5
+ [22.000 --> 29.000] But I also wanted to mention that we have kind of a little side project going on right now that's posted to our page.
6
+ [29.000 --> 36.000] We're using a Google form to acquire some information from you guys about working conditions for school psychologists.
7
+ [36.000 --> 44.000] So it's just a quick Google form that asks about where you live and you can just specify the state if you want to remain a little bit more anonymous.
8
+ [44.000 --> 50.000] You can list your district. You can tell us a little bit of a review, salary, you know, basic stuff.
9
+ [50.000 --> 56.000] So, and then you can view everyone else's responses, which I think is super helpful for those of us that are moving around.
10
+ [56.000 --> 64.000] So check out on the Facebook page, that Google form, which also has the link to the responses that you can sort according to role and salary and state and things like that.
11
+ [64.000 --> 70.000] So that's just another thing that we've got going on. But anyways, I'm going to turn it over to Rebecca now. Rebecca?
12
+ [70.000 --> 75.000] Hi everybody. I'm Rebecca. I'm a school psychologist working in the state of Connecticut.
13
+ [75.000 --> 81.000] I want to remind you a little bit about all the ways we would love for you to participate tonight.
14
+ [81.000 --> 91.000] If you are watching this video live, please feel free to sign in to YouTube and comment right in the live chat box alongside your video screen.
15
+ [91.000 --> 99.000] You can also comment on either of the Facebook pages, school psyched your school psychologist or school psyched podcast page.
16
+ [99.000 --> 113.000] And you can post your comments anywhere you'd like. You can post right under the last post of the page, which was for this podcast event, or you can post privately in messages or anywhere really.
17
+ [113.000 --> 127.000] I'll be looking for those notifications on either page and on Twitter using the hashtag psychedpodcast; looking forward to hearing your questions and thoughts and connections. And here is Anna.
18
+ [127.000 --> 145.000] Hi, guys. I'm Anna. I'm a school psych in New York state. I'd like to introduce our guest today, Dr. Kevin McGrew. He is the director of the Institute for Applied Psychometrics, IAP, and Visiting Lecturer in Educational Psychology, School Psychology program, at the University of Minnesota.
19
+ [145.000 --> 158.000] He received his BA in psychology 1974 and MS in school psychology 1975 from Moorhead State University, Moorhead, Minnesota. He received his doctorate in Educational Psychology at the University of Minnesota, 1989.
20
+ [158.000 --> 173.000] McGrew has 12 years of experience practicing as a school psychologist in the states of Iowa and Minnesota. He subsequently was a professor of applied psychology and educational psychology at St. Cloud State University in St. Cloud, Minnesota.
21
+ [173.000 --> 185.000] He was the research director for the Woodcock-Muñoz Foundation from 2005 to 2014. He also served as the Associate Director for Measurement Learning Consultations, MLC, in 2008-2014.
22
+ [185.000 --> 200.000] He is currently serving as an intelligence theory and testing consultant to two major international test development projects. He has served in that capacity from 2014 to present for the Dharma Burmaka Foundation of the University of Indonesia.
23
+ [200.000 --> 208.000] I'm sorry, I'm going to skip that a little bit. I'm sorry.
24
+ [208.000 --> 213.000] I know. So he's really got a lot going on. Dr. McGrew, thank you so much for joining us tonight.
25
+ [213.000 --> 217.000] It's great to be here. Sorry for the technical problems.
26
+ [217.000 --> 223.000] We're so happy that you're here. We're just like kind of a little bit giddy.
27
+ [223.000 --> 229.000] Well, do I have to kind of compress things now, or do we, do we?
28
+ [229.000 --> 243.000] I'm able to stay a little bit later for sure. So if you have the time and are willing to give it, I'm willing to go beyond nine. I can't speak to my co-hosts here, but I'll skip some slides then.
29
+ [243.000 --> 247.000] I added two at the last minute. I'm going to get rid of those.
30
+ [247.000 --> 250.000] Sounds good.
31
+ [250.000 --> 254.000] So you've got a presentation for us.
32
+ [254.000 --> 257.000] Yeah, one of the people behind the Woodcock Johnson, right?
33
+ [257.000 --> 259.000] Yeah.
34
+ [259.000 --> 264.000] What I'm going to do is do a brief presentation. And then I'd like to have more time for discussion and questions.
35
+ [264.000 --> 270.000] So I'm going to go really fast. This is a presentation that Joel Schneider and I did at NASP this last year.
36
+ [270.000 --> 275.000] A month ago; it was a mini-skills session. So it's two hours of material. I'm going to try doing 30 minutes.
37
+ [275.000 --> 283.000] So I'm going to talk fast. The slides are available. There'll be some slides in here that aren't in the PDF, but, you know, that's just the way it is.
38
+ [283.000 --> 289.000] What I'm going to do is just get started. First, make sure I get the technology working correctly.
39
+ [289.000 --> 292.000] Am I live?
40
+ [292.000 --> 294.000] Yeah, I see.
41
+ [294.000 --> 297.000] Okay. What this is, is basically me summarizing.
42
+ [297.000 --> 301.000] And if you can't see anything, let me know. This is what Joel Schneider and I did.
43
+ [301.000 --> 309.000] It's basically our update of our CHC chapter in Flanagan's upcoming book, Contemporary Intellectual Assessment.
44
+ [309.000 --> 312.000] Kind of the state of the art of what CHC is all about.
45
+ [312.000 --> 316.000] And I'm skipping these slides. You want me to come back to them. They're really cool.
46
+ [316.000 --> 320.000] But we don't have time for those.
47
+ [320.000 --> 323.000] It's, in fact, really
48
+ [323.000 --> 327.000] just stuff I put in at the last minute to give a big picture context.
49
+ [327.000 --> 332.000] And I'm going to skip those. So are you seeing what is intelligence?
50
+ [332.000 --> 333.000] Yes.
51
+ [333.000 --> 335.000] Okay. This is the beginning of Joel's and my presentation.
52
+ [335.000 --> 339.000] So is it one thing or many things?
53
+ [339.000 --> 342.000] Actually, it's one thing and many things.
54
+ [342.000 --> 348.000] And before I get to going too far, there's a big debate in academic school psychology right now.
55
+ [348.000 --> 355.000] Where a bunch of people are arguing about which factor model is correct in representing intelligence tests.
56
+ [355.000 --> 359.000] And it's just a, you know, 'tis-so, 'tis-not thing. It's a huge debate.
57
+ [359.000 --> 363.000] And a lot of information is coming out in school psychology just about, don't do this with your IQ test.
58
+ [363.000 --> 366.000] Or don't do this. You can't do this. You should do this.
59
+ [366.000 --> 371.000] And it's basically about these two different types of statistical models, which I'm actually getting really kind of frustrated with.
60
+ [371.000 --> 374.000] Because what they're trying to do is represent this.
61
+ [374.000 --> 379.000] You know that the brain is a dynamic interaction of different brain networks.
62
+ [379.000 --> 382.000] So all these models are wrong.
63
+ [382.000 --> 385.000] You know, they're useful heuristics to help understand what tests are measuring.
64
+ [385.000 --> 389.000] But they really don't explain the structure and what's going on with human intelligence.
65
+ [389.000 --> 394.000] So I don't want to get into those kind of debates today. I'm working on a response to all that kind of stuff.
66
+ [394.000 --> 402.000] Because I'm getting kind of frustrated with the discussion; it seems to be all about statistics.
67
+ [402.000 --> 408.000] Okay. So this is kind of a nice one. These are Joel Schneider slides. He's the Van Gogh of PowerPoint.
68
+ [408.000 --> 414.000] It's a very nice structural model of intelligence: G at the top, down to broad abilities, narrow abilities and tests.
69
+ [414.000 --> 418.000] It's very symmetrical. It's neat and tidy.
70
+ [418.000 --> 421.000] It's, you know, it can represent the Horn-Cattell model.
71
+ [421.000 --> 432.000] But John Horn himself said, structural theory doesn't describe natural phenomena well because natural phenomena don't fit into nice equation systems.
72
+ [432.000 --> 439.000] To understand human intelligence, you've got to look at things in terms of all these different kinds of nonlinear things that are going on.
73
+ [439.000 --> 446.000] Like that model I showed of the brain. It's not just a series of linear equations, which is what kind of bothers me about a lot of academic school psychology.
74
+ [446.000 --> 450.000] My factor model's better than your factor model, right now.
75
+ [450.000 --> 461.000] And what Joel presents here is that the weird and wild reality is that human capabilities are really more funky and more nonlinear.
76
+ [461.000 --> 468.000] And to really help understand intelligence, we need to look at functional theories, which is where Joel and I are spending most of our time.
77
+ [468.000 --> 477.000] And we don't really have a lot of interest in debating my model's better than your model. We want to go farther and I'll be getting into that.
78
+ [477.000 --> 484.000] And what Joel basically says, these are his beautiful slides. Are they showing up well?
79
+ [484.000 --> 485.000] Are you guys there?
80
+ [485.000 --> 488.000] Yep, they look really good. I'm very much impressed.
81
+ [488.000 --> 497.000] They're not mine. They're Joel's. This is what you think you see when you read chapters and stuff and technical manuals: nice little structural models of what the tests measure in CHC.
82
+ [497.000 --> 511.000] What really is going on is the weird and wild reality underneath. There's a lot of stuff going on under the hood that just doesn't fit into those nice models that well.
83
+ [511.000 --> 524.000] And in fact, Joel and I are iterating toward stuff like this, where we are integrating a causal relationship: processing speed to working memory, which causes fluid reasoning.
84
+ [524.000 --> 535.000] Actually, there should be arrows between the CHC abilities, integrated with interests and personality characteristics, to really understand what's going on with humans.
85
+ [535.000 --> 544.000] But for now, the CHC, what I'm going to talk about and I'll get back to the dynamic models later is the CHC model is really a taxonomy.
86
+ [544.000 --> 554.000] And this is my presentation of it as a periodic table of human cognitive elements. It's the new CHC model organized like a periodic table.
87
+ [554.000 --> 565.000] And that's what we're updating in our chapter. Basically, here's this static model of CHC, what's changed since the last iteration of it.
88
+ [565.000 --> 575.000] And I'm going to give you a quick summary. I'm skipping fluid intelligence and crystallized intelligence; some changes there, but they're more cosmetic and such.
89
+ [575.000 --> 589.000] But in terms of you, the practicing school psychologist, looking at what's changed with the taxonomy: I use cross-battery assessment, or I use a Woodcock-Johnson, and the authors say it measures working memory or attention control or memory span.
90
+ [589.000 --> 608.000] Well, the GWM taxonomy is now changing in the newest version of CHC, and we have auditory short-term storage, coded WA, and we explain in the book where these codes come from; it's keeping the historical tradition of where this stuff came from originally, from the 1950s and 60s.
91
+ [608.000 --> 630.000] There's visual short-term storage and attention control, and we provide the evidence for all this in the book chapter. So if you're really into CHC assessment and cross-battery assessment, you can see that this is not the nomenclature you're used to dealing with, but the theory needs to move on and the taxonomy needs to evolve, because the fathers of it would have wanted that.
92
+ [630.000 --> 659.000] I'm not going to spend a lot of time on those things because, you know, you can look at the slides and study them. Probably the biggest change in CHC started about 2013 or '14, and Joel and I have been talking about it. When I was the first one to put the CHC model together, I called it extended Gf-Gc theory in a 1997 chapter; it later became called CHC based on Dr. Woodcock's involvement.
93
+ [659.000 --> 668.000] And I had to make a decision about how do I integrate Carroll's model with Horn and Cattell's model so I could classify tests for the 1997 chapter.
94
+ [668.000 --> 681.000] And one of the biggest problems was what to do with GLR, because Horn and Carroll had different beliefs about general memory and learning and general retrieval fluency; Horn had different things.
95
+ [681.000 --> 696.000] I did my best approximation of what I thought it was, and we came up with this idea of GLR, and there's been marital discord in the family of CHC ever since that was done, ever since I presided over that initial marriage of
96
+ [696.000 --> 706.000] retrieval fluency abilities and some memory abilities. And even though I made the caveat that it was only an initial attempt and a proposed framework, for some reason it stuck.
97
+ [706.000 --> 735.000] It even stuck in my own writing, which I have to apologize for. It was an attempt to resolve some tensions between the models that didn't make sense; GLR was the first attempt to reconcile them, but there were problems right from the beginning, and we finally figured out that they needed to go through a separation. So in Joel's and my chapter in 2012 in Flanagan's book, we separated GLR into learning efficiency, which is meaningful memory, associative memory,
98
+ [735.000 --> 750.000] and other abilities, and retrieval fluency, which is naming fluency (like RAN tasks), speed of lexical access, and ideational fluency. But we kept GLR intact as a union of two discordant, unhappy marital partners.
99
+ [750.000 --> 764.000] So it was a trial separation. A lot of things happened, and we've explained in our chapter where the data and the support come from; it was very convincing that basically GLR should be GL and GR now.
100
+ [764.000 --> 793.000] And actually Carroll was correct. So the biggest thing in CHC theory is that now we talk about GL, which is the level abilities: associative memory, meaningful memory, and those types of things. It's efficiency in terms of how much it costs a person to learn, and some people need to expend more effort than others, so it's not the same thing as cognitive efficiency on the Wechslers or the Woodcock-Johnson in terms of mental processing.
101
+ [793.000 --> 807.000] It's just how easily somebody learns and stores information, putting stuff into the file cabinet of your brain by associative or meaningful memory mechanisms, plus other abilities; we have three listed there right now.
102
+ [807.000 --> 835.000] Before I go to GR: another change, or not a change, a tweak, that's happening to CHC theory is that facets are becoming fashionable. Facets have been around the psychology of intelligence since the 1960s with Humphreys: basically, you shouldn't just look at the ability or cognitive-processing component of a task; you should also look at the content features. Are they manipulating numbers, are they manipulating
103
+ [835.000 --> 864.000] visual-spatial material? So in some of the domains we are now adding a facet dimension. And GR is now the fluency with which you can retrieve information: it's getting stuff out of the file cabinet, out of your network of knowledge, not putting it in, but going in, rummaging around in there, finding it, and getting it out as quick as you can. There are three facets to GR: retrieval of
104
+ [865.000 --> 870.000] ideas, retrieval of words, and retrieval of figures. You can see the narrow abilities in there.
105
+ [870.000 --> 873.000] You'll see that some of them are in bold.
106
+ [873.000 --> 889.000] Joel and I have gone back to Carroll's original work and looked at all the factor loadings and such, and basically we are now tweaking CHC to say some narrow abilities are more important than others. We call them major narrow abilities, and some are minor.
107
+ [889.000 --> 901.000] And so in GR we have ideational fluency, expressional fluency, speed of lexical access, and naming facility or fluency; they seem to be the biggest and most important ones in retrieval fluency.
108
+ [902.000 --> 922.000] Another major change, or not a change, a revision, is in processing speed. We think there's pretty convincing evidence that there are two major types based upon the facets: cognitive processing speed and academic processing speed.
109
+ [923.000 --> 948.000] You'll see, when you do factor analysis and other kinds of statistical things like multidimensional scaling, that the academic abilities kind of hang together and the cognitive ones hang together; eventually they collapse into one super factor, but we now think there's a distinction there. We are also proposing... actually, I'm going to skip this; it's not worth the time.
110
+ [948.000 --> 950.000] Let me pass that and come back to it.
111
+ [950.000 --> 969.000] Probably one of the biggest problems with the CHC taxonomy, and with what test developers like myself do, and all the cross-battery stuff, is that for almost every test out there, the authors or the cross-battery-type assessment systems say everything is perceptual speed or rate of test-taking.
112
+ [969.000 --> 990.000] If you really know the theory well, it's P or R9. And any practicing school psychologist who gives, you know, the Wechsler processing speed tests or the Woodcock-Johnson processing speed tests will find discrepancies in those scores, even though they're all classified as P, or perceptual speed.
113
+ [990.000 --> 1014.000] So Joel and I again went back to Carroll's original 1993 work, where he talked about a distinction between types of processing speed. People who've been researching processing speed since the 1950s and 60s have always said there's a substructure under perceptual speed, and Ackerman, who's done a lot of work with adults, found a four-factor structure under perceptual speed; his first two factors are very similar to what Carroll suggested.
114
+ [1014.000 --> 1043.000] So we now think that perceptual speed is really an intermediate-stratum-level ability; it sits between the broad and the narrow. There's also an academic fluency, which is an intermediate stratum: under academic fluency we have number facility, reading speed, and writing speed, and under perceptual speed there seem to be at least two broad categories, perceptual speed where you search and scan, and perceptual speed where you compare and do pattern recognition.
115
+ [1044.000 --> 1073.000] To make this a little more concrete, I just took three different tests here (this is not in the chapter). Letter pattern matching from the Woodcock-Johnson is now considered perceptual speed, searching and scanning; number pattern matching is considered searching and scanning; Wechsler symbol search would be PS, searching and scanning; and then pair cancellation from the Woodcock-Johnson would be perceptual speed,
116
+ [1074.000 --> 1103.000] comparison and pattern recognition; the same with Wechsler cancellation and maybe Wechsler coding. So we think that perceptual speed needs to be broken into two types of narrow abilities. And you'll also see that there's a coding after each one for the stimulus content: letter pattern matching is letters, which we read and write; number pattern matching is numbers, so it would have GQ after it; symbol search is primarily visual-spatial symbols. So we think, to help psychologists understand
117
+ [1104.000 --> 1119.000] why different speeded tests diverge in profiles: one reason is that there's a dichotomy between the two different perceptual speed processes, and sometimes it might be related to the facet, the content or the stimulus characteristics.
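To make that two-way coding concrete in another form, here is a minimal sketch in Python; the test names come from the talk, but the subtype and facet labels are illustrative stand-ins, not the chapter's official CHC codes:

    # Hypothetical coding table for the dual classification described above:
    # each speeded test gets (1) a perceptual-speed process subtype and
    # (2) a stimulus-content facet. Labels are illustrative, not official.
    SPEEDED_TEST_CODES = {
        "WJ Letter Pattern Matching": ("search/scan", "letters (Grw)"),
        "WJ Number Pattern Matching": ("search/scan", "numbers (Gq)"),
        "Wechsler Symbol Search": ("search/scan", "symbols (Gv)"),
        "WJ Pair Cancellation": ("compare/pattern-recognition", "symbols (Gv)"),
        "Wechsler Cancellation": ("compare/pattern-recognition", "symbols (Gv)"),
    }

    def explain_divergence(test_a, test_b):
        """List the coding dimensions on which two speeded tests differ."""
        proc_a, facet_a = SPEEDED_TEST_CODES[test_a]
        proc_b, facet_b = SPEEDED_TEST_CODES[test_b]
        diffs = []
        if proc_a != proc_b:
            diffs.append(f"process: {proc_a} vs. {proc_b}")
        if facet_a != facet_b:
            diffs.append(f"content facet: {facet_a} vs. {facet_b}")
        return diffs or ["same coding on both dimensions"]

    # Two tests both classified "P" in the old nomenclature can still diverge:
    print(explain_divergence("WJ Letter Pattern Matching", "WJ Pair Cancellation"))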
118
+ [1119.000 --> 1139.000] I think Flanagan is already working on this, or maybe has incorporated it into her cross-battery software, I can't remember. But this will, you know, require people to recode things, change interpretations of some tests, and maybe revisit how you look at speeded tests when you've got a bunch of them.
119
+ [1139.000 --> 1151.000] And while we're talking about speed: I'm a big fan of the work of a number of people, including Phillip Ackerman.
120
+ [1151.000 --> 1168.000] He has a theory called the PPIK model; primarily, for adolescents and adults, intelligence is knowledge. So here's one way to think about reorganizing CHC: there's GC and GKN, which is domain-specific knowledge, plus reading and writing and quantitative knowledge.
121
+ [1168.000 --> 1181.000] That's intelligence-as-knowledge, as Ackerman calls it, what we sometimes call the acquired knowledge systems. That was Cattell's original GC; it was a super big thing, not what we now think of as GC.
122
+ [1181.000 --> 1208.000] Then there is what Ackerman calls intelligence-as-process: that's GF, GWM, GV, GL, and this fits well with Kahneman's work on thinking fast and slow. He's a Nobel Prize winner for his work in behavioral economics, and his huge book from the last two to three years is Thinking, Fast and Slow. These are controlled cognitive operations that require you to do something that's not automatic; you have to
123
+ [1208.000 --> 1235.000] do controlled, deliberate processing. This is what Cattell originally considered GF. And then there is intelligence-as-process on the speed and fluency side, which relates to Kahneman's stuff. This is in the cognitive science literature, not so much the psychometric: there, you know, GR is a speed or fluency factor, GS is, and so is reaction time.
124
+ [1235.000 --> 1257.000] So it's another way to look at organizing the cognitive taxonomy and tests. And there's pretty consistent evidence suggesting that maybe there's a general speed ability comparable to what people consider general intelligence, but the jury is still kind of out on that.
125
+ [1257.000 --> 1286.000] So we're going through the biggest changes; there are a lot. GV has been one of the most studied domains in psychometric models of intelligence. Here we have the GV domain, and visualization is in the most bold; that's the most important GV ability to measure, then spatial relations and imagery. Joel Schneider and I believe, considering evidence from neurocognitive research, brain network research, and other things, that imagery
126
+ [1287.000 --> 1295.000] is something we need to be assessing, especially as it relates to creativity and such.
127
+ [1295.000 --> 1316.000] But the biggest forthcoming change in the visual-spatial domain is a distinction that was in Carroll's 1993 book; he just didn't elaborate on it, but the research since then has made it very clear. There's small-scale spatial ability, which is, you know, tasks like block design, or spatial relations on the
128
+ [1316.000 --> 1345.000] Woodcock-Johnson: manipulating objects, where there's an allocentric spatial transformation. And then there's large-scale spatial ability, which is wayfinding, sense of direction, perspective-taking, navigation abilities. There's a huge, growing literature on large-scale spatial navigation ability; it's becoming more important in our society because of Google Maps, virtual reality systems, and getting around with GPS and such.
129
+ [1345.000 --> 1351.000] There are not a lot of tests of large-scale spatial ability at this stage, but you're going to be seeing more and more coming in that area.
130
+ [1351.000 --> 1366.000] Probably the most important thing: there are some recent meta-analyses that take small-scale spatial ability, which is what we give on our intelligence tests, and large-scale spatial tasks, which are primarily experimental, and in the meta-analysis the two actually correlate at just 0.27.
131
+ [1366.000 --> 1376.000] What this means is you could be really great at a Rubik's cube but get lost in a new city even if you have a map, or vice versa; they're not the same type of abilities.
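The arithmetic behind that claim is just the squared correlation; a quick back-of-the-envelope check in Python:

    # Shared variance between small-scale and large-scale spatial ability,
    # using the meta-analytic correlation quoted above (r = 0.27).
    r = 0.27
    shared = r ** 2                                # proportion of variance in common
    print(f"shared variance:   {shared:.1%}")      # about 7.3%
    print(f"unshared variance: {1 - shared:.1%}")  # about 92.7%
    # With ~93% of the variance unshared, mental-rotation skill says very
    # little about wayfinding skill, and vice versa.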
132
+ [1376.000 --> 1383.000] And there's strong neuroscience research suggesting they have different neural substrates.
133
+ [1384.000 --> 1396.000] So for spatial thinking, in terms of CHC theory, we now need to start looking at tasks and building tasks that take into account the frame of reference: is it small-scale or large-scale?
134
+ [1396.000 --> 1412.000] And there's also another dimension, which I haven't mentioned: movement. Are the tasks static or dynamic? Dynamic tasks have to do with, like, looking at things on radar screens, or a football being thrown and somebody tracking it and catching it,
135
+ [1412.000 --> 1427.000] or stuff in virtual reality where there's movement of the visual-spatial materials. That's a very active area of research right now that we've got to think about, but probably nothing to worry about right now in terms of assessments.
136
+ [1427.000 --> 1437.000] Most GV tests we have now are down in this one little quadrant: they're static tasks, like block design and block rotation, and they're small-scale, or allocentric.
137
+ [1438.000 --> 1451.000] There's a lot of development going on right now in more large-scale and dynamic visual-spatial stuff, especially with today's technology and iPads and such.
138
+ [1452.000 --> 1471.000] I'm going to skip that slide; you can read it. Basically, GA abilities have been the Rodney Dangerfield of intelligence for a long time: they don't get respect. But GA abilities are important, just as important as the visual-spatial abilities, as a major component of intelligence.
139
+ [1471.000 --> 1485.000] And in GA we're also adding a facet dimension, based upon research by some individuals over in Germany with the Berlin intelligence structure model: there are speech abilities, like phonetic coding or phonological processing,
140
+ [1485.000 --> 1490.000] resistance to auditory stimulus distortion, and speech sound discrimination, but then there are also nonverbal
141
+ [1490.000 --> 1506.000] GA abilities, which are things like sound localization and maintaining and judging rhythm. You're going to see, I predict, a big explosion in some of these nonverbal abilities, especially maintaining and judging rhythm, because we're finding that synchronized
142
+ [1506.000 --> 1523.000] tapping and clapping to metronomes and such seems to be tapping into a very fundamental, important part of human intelligence, and is a potential marker for some early literacy problems and dyslexia.
143
+ [1523.000 --> 1548.000] I'm going to skip the Ramseyer stuff there, but I basically just said this: researchers, especially at the Auditory Neuroscience Laboratory at Northwestern, are finding that maintaining and judging rhythm is a very good marker test of underlying brain efficiency, especially the evoked brainstem response, which deals with processing sounds in terms of milliseconds.
144
+ [1548.000 --> 1556.000] So you're going to be seeing more development of auditory temporal processing type tests.
145
+ [1556.000 --> 1562.000] So here's kind of a summary of CHC as it is now, the version I call version 2.5 or something.
146
+ [1562.000 --> 1577.000] The big thing is that GL and GR have been split; there are the definitions. I'm getting near the end here. What Joel and I have also suggested is that, you know, we're trying to be humble:
147
+ [1577.000 --> 1590.000] it seems odd that these two guys, McGrew and Schneider, seem to be the caretakers of the CHC taxonomy. There are certain reasons why that's happened, and we're somewhat uncomfortable with it.
148
+ [1590.000 --> 1599.000] So we've decided that any changes that are made, like the ones I've articulated, all basically reference back to Carroll's original work and then look at new research.
149
+ [1599.000 --> 1619.000] We've also specified in our chapter that updating a theory can't be done by just one person, and we've developed and articulated criteria that should be used, maybe by a consortium of people or a committee or a group, to help nominate new factors.
150
+ [1619.000 --> 1632.000] And I'm not going to go through that; I just want you to know it's being put out there, because there needs to be more than just two minds working on this.
151
+ [1632.000 --> 1646.000] And according to those criteria, there is a new family member in CHC; it meets all the criteria. Some of you will like it, some of you won't. It's basically emotional intelligence: GEI.
152
+ [1646.000 --> 1661.000] I actually didn't like the idea of it being in there, but this is the cognitive component of social-emotional intelligence, the Mayer-Salovey-Caruso model in particular. There's sufficient factor research, developmental research,
153
+ [1661.000 --> 1675.000] and other kinds of research suggesting that GEI belongs as a cognitive component of the CHC model. It's got emotion perception, emotion knowledge, emotion management, and emotion utilization; you'll have to read up on that in our chapter when it comes out in April.
154
+ [1675.000 --> 1683.000] So what I'd like to do is kind of finish. I've got some contingency slides.
155
+ [1683.000 --> 1692.000] So what I've just given you is a very quick overview, and you can read the chapter when it comes out; I heard it was April sometime.
156
+ [1692.000 --> 1712.000] Joel and I are not really all that interested in arguing my bifactor model versus your hierarchical model, which one's better, and all the statistical debates that seem to go on. We are moving more toward wanting to figure out how CHC abilities work together, to help understand human intelligence and school learning.
157
+ [1712.000 --> 1741.000] And this: I'm going to follow the arrows if we've got time here. Joel was nice enough to put my name on this at the last minute, but he's working with a doctoral student analyzing norm data from the major test batteries, coming up with structural causal models: GS causes working memory, working memory then goes to GF, which then goes to GC, and you can see there are different linkages.
158
+ [1741.000 --> 1755.000] There's a dynamic, maybe causal, system here, not just a simple hierarchical factor model. And actually Joel and his student are the ones really doing the work.
159
+ [1756.000 --> 1771.000] So, before I end: here's what we typically look at for a CHC taxonomy. The top line are the major abilities; the other ones are tentative abilities in the model, such as olfactory, kinesthetic, and psychomotor.
160
+ [1771.000 --> 1793.000] What Joel and I are moving toward, and think we need to be spending more time on, is rearranging these taxonomies to make sense of how they might work together. This is one we had in our 2012 chapter: we organized CHC abilities by motor, perceptual processing, attentional control abilities, acquired knowledge, and still the functional idea of long-term storage.
161
+ [1794.000 --> 1806.000] Here's another one that we presented in that chapter but didn't talk about much: another functional organization, where CHC abilities are organized by sensory domain-specific, functional, or conceptual groupings.
162
+ [1807.000 --> 1821.000] This is another way of looking at CHC; it's the Ackerman framework: there's intelligence as knowledge, intelligence as process, intelligence as process speed and fluency, and then physical competencies.
163
+ [1822.000 --> 1829.000] And then there might be a GS ability above it. And this is my favorite one coming up:
164
+ [1830.000 --> 1850.000] an information-processing model. This is one Joel and I have been working on for a while. It's just conceptual, it's not supposed to be the perfect model, of how the CHC abilities might be organized: those are tied to sensory-perceptual systems, and those are acquired knowledge up in your own personal cloud storage. GL,
165
+ [1851.000 --> 1875.000] which is putting stuff into your cloud, and GR, pulling stuff out; then working memory and fluid intelligence are the bottleneck of everything. So this is a dynamic causal model, and as you see there's no G in these models, because we think (I think) G is a statistical artifact, but that's for another time. We're working on these kinds of models right here.
166
+ [1875.000 --> 1886.000] So with that, the rest of my slides are contingency slides; I'm going to skip them and open it up for questions.
167
+ [1887.000 --> 1891.000] All right sounds good I know that we've had a couple questions come in.
168
+ [1892.000 --> 1899.000] I'm going to jump in with the first one, and then we'll see what we can get to.
169
+ [1899.000 --> 1905.000] So, a viewer wanted to know: where do attention and executive functioning fit into CHC?
170
+ [1906.000 --> 1908.000] Okay, let me get my thing off so you can see me.
171
+ [1909.000 --> 1910.000] Sure or whatever works.
172
+ [1912.000 --> 1913.000] Oh now you're seeing the screen of my TV.
173
+ [1917.000 --> 1918.000] I'm going to look on the little.
174
+ [1919.000 --> 1921.000] Okay, where do executive functions fit in?
175
+ [1922.000 --> 1923.000] Yeah attention executive functioning.
176
+ [1923.000 --> 1928.000] Okay I've got a whole module sitting here on the side if we want to really get into that.
177
+ [1929.000 --> 1941.000] We talk about this in our chapter, because there's some really good research done, especially by Steve Bowden and his student Jewsbury, over in Australia at the University of Melbourne.
178
+ [1942.000 --> 1952.000] It's kind of like mini-Carroll analyses of massive data sets, and what they have found is that, in terms of psychometrics, executive functioning does not come out as a single,
179
+ [1953.000 --> 1960.000] separate, distinct factor beyond what the CHC taxonomy already explains.
180
+ [1961.000 --> 1968.000] The thinking, and what I think, is that executive functioning is a bunch of different
181
+ [1969.000 --> 1977.000] neurological functions, different areas of the brain, that, you know,
182
+ [1977.000 --> 1998.000] some of them are active during GF tasks, some of them are active during other tasks, and collectively the synchronization of them is kind of executive functions. But they're not a psychometric trait per se; they're different parts of different brain networks that kind of go on and off like a modem.
183
+ [1998.000 --> 2017.000] You know, like you see your modem flashing, and sometimes certain things are active. So executive functions are kind of a coordinated synchronization, a turning on and off, of these inhibition, switching, and updating kinds of things. It's not a psychometric trait per se; it's more of a neurocognitive,
184
+ [2018.000 --> 2024.000] emergent-property kind of trait. It's not a CHC type of ability, so it's something different.
185
+ [2024.000 --> 2028.000] That's my thinking, and the same with attentional control.
186
+ [2029.000 --> 2034.000] Attentional control: if you look at the brain network research (I've got tons of slides on this),
187
+ [2035.000 --> 2051.000] there are like seven to ten brain networks being found in the contemporary brain network research, and there are three that are really critical. There's one called the central executive network, which is primarily the frontal lobe and the parietal lobe connected by a white matter tract called the
188
+ [2051.000 --> 2067.000] arcuate fasciculus; that's the P-FIT model of intelligence. And you've got the default brain network, which is mind-wandering, where you kind of think about things spontaneously, and you have what's called the salience network, which is the air traffic control system between the two of them.
189
+ [2068.000 --> 2079.000] So when you need to do really focused problem solving, the salience network (certain parts of the brain working together) shuts down your mind-wandering, task-unrelated thoughts
190
+ [2079.000 --> 2091.000] so you can focus and do problem solving with the central executive network. When you're in a more creative mode or something, the salience network will put you back into the default mode, so you can start pulling stuff out of your knowledge and such.
191
+ [2092.000 --> 2095.000] So attentional control is really the switching back and forth:
192
+ [2096.000 --> 2105.000] how well you can turn that on and off. And it relates to different networks of the brain working together, like the modem analogy I mentioned.
193
+ [2106.000 --> 2112.000] So cool and interesting. Another viewer said this is incredibly fascinating.
194
+ [2113.000 --> 2116.000] What are his thoughts on tests catching up with the changes in theory?
195
+ [2117.000 --> 2118.000] What are my thoughts about it?
196
+ [2119.000 --> 2124.000] Yeah, what are your thoughts about the tests catching up with the changes in theory?
197
+ [2124.000 --> 2125.000] Oh, it's going to take a while.
198
+ [2125.000 --> 2126.000] It's going to take a while.
199
+ [2127.000 --> 2134.000] The whole process of developing tests is it's a commercial enterprise.
200
+ [2135.000 --> 2137.000] You know, there's publishers, there's a business model behind it.
201
+ [2138.000 --> 2145.000] And if you notice (you know, some of my best friends work at Pearson), the Wechslers took a long time to get to CHC.
202
+ [2146.000 --> 2147.000] They're there now, basically, with the fifth edition.
203
+ [2148.000 --> 2155.000] To say it real bluntly, they didn't want to make a lot of changes that would cause too much consternation amongst users.
204
+ [2156.000 --> 2157.000] And that would cause sales problems.
205
+ [2158.000 --> 2163.000] So I think that the changes are going to be slow primarily because of the constraints of the business model.
206
+ [2164.000 --> 2168.000] There's a lot of really good stuff going on by people who are doing just pure research.
207
+ [2170.000 --> 2179.000] I think we're not far away from some innovative psychometric test development stuff because of the technology, what you can do
208
+ [2180.000 --> 2188.000] With an iPad and the computational stuff you can do under the hood on your desktop or your laptop computer.
209
+ [2189.000 --> 2194.000] There's really some potential that within maybe 10 years we're going to see some things.
210
+ [2195.000 --> 2199.000] Joel and I talk about a lot of ideas for that kind of stuff in there.
211
+ [2200.000 --> 2203.000] I don't think you're going to see anything too soon though.
212
+ [2204.000 --> 2205.000] Not in the next five years or so.
213
+ [2206.000 --> 2219.000] Yeah, when you were talking about GV and the broader abilities, because, you know, you mentioned Google Maps, and I myself say that I'm spatially challenged; I cannot find my way around anywhere.
214
+ [2220.000 --> 2234.000] But yeah, the iPads immediately sprang to mind; that kind of opens up the door for a lot of new and interesting and different tests, I feel, kind of computer-game-ish rather than what we typically have.
215
+ [2235.000 --> 2247.000] Yeah, there are people developing game-type tests and correlating them with the psychometric measures that are out there, but these aren't people who are developing tests for commercial purposes; they're just researchers.
216
+ [2248.000 --> 2256.000] I think Pearson was the first one through the wall on iPad administrations, and the first one through the wall always gets bloodied.
217
+ [2257.000 --> 2260.000] So I think they took a lot of heat for it; I admire them for doing that.
218
+ [2260.000 --> 2287.000] I think every next generation of an intelligence test is going to be on some kind of computer platform, probably still with a paper, hands-on version too, which opens up, to me, some really exciting potential ideas in terms of test development and also in terms of how you score the tests. For example, if I've got time:
219
+ [2287.000 --> 2297.000] I don't know how many of you have ever given the retrieval fluency test in the Woodcock-Johnson: name all the colors you can, or all the animals you can think of, in a minute.
220
+ [2298.000 --> 2306.000] I can't remember all the categories, but you've got three probes now.
221
+ [2306.000 --> 2321.000] I've seen psycholinguistics people who have taken that kind of information, recording all the responses, and they use latent semantic analysis and tools from what's called network science, and they build this network map.
222
+ [2322.000 --> 2325.000] It's a very squiggly kind of thing. It looks like a three dimensional fishing net.
223
+ [2326.000 --> 2335.000] And you can see where certain nodes are very powerful, with a lot of connections coming in, and how tightly certain ideas cluster together, and then there's another one over here, and over here.
224
+ [2336.000 --> 2347.000] The technology is there. I think if we could capture people's responses when they do those types of tests, and also vocabulary tests, any verbal test,
225
+ [2348.000 --> 2358.000] and if the test publishers, when they norm their tests (I have this on my mental list for the next time we do it), would capture all those responses, then you could develop, say, here's the model for an eight-year-old:
226
+ [2358.000 --> 2372.000] what the normative model looks like, in three dimensions, of how vocabulary is organized or idea production is organized. And you could take a person's responses, put them in, and get some quotients, like a retrieval quotient:
227
+ [2373.000 --> 2378.000] how good, you know, the person's retrieval is compared to the normative group.
228
+ [2379.000 --> 2384.000] How deep is their network of knowledge or ideas? Is it a very sparse network? How is it organized?
229
+ [2384.000 --> 2391.000] So I think that's kind of exciting; maybe we'll be able to get more into what's really going on and how we process information,
230
+ [2392.000 --> 2396.000] even with existing tests; the technology is there.
231
+ [2397.000 --> 2400.000] I'm always way ahead, thinking about what we could or should be doing.
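As a rough sketch of the network-science idea just described, here is a toy version in Python using the networkx library; linking consecutive responses is a crude stand-in for the latent-semantic-similarity edges the actual research derives:

    # Toy semantic-fluency network: nodes are unique responses, edges link
    # consecutive responses (a crude proxy for semantic similarity).
    import networkx as nx

    responses = ["dog", "cat", "lion", "tiger", "shark", "whale",
                 "dolphin", "cat", "horse", "cow", "dog"]

    G = nx.Graph()
    G.add_edges_from(zip(responses, responses[1:]))

    # Simple structural summaries, loosely analogous to the "retrieval
    # quotient" idea: how rich, dense, and hub-like is the network?
    print("unique ideas:", G.number_of_nodes())
    print("links:", G.number_of_edges())
    print("density:", round(nx.density(G), 3))
    print("top hubs:", sorted(G.degree, key=lambda kv: -kv[1])[:3])
    print("clustering:", round(nx.average_clustering(G), 3))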
232
+ [2401.000 --> 2412.000] Wow, we have some other great questions here, and some of them are kind of similar, so I may ask two together.
233
+ [2412.000 --> 2427.000] Eric asks: tests often attempt to separate factors. Do you have any thoughts about tests connecting with the interconnectedness of the model you showed?
234
+ [2428.000 --> 2437.000] And then another question is, you know, some scholars have claimed that several CHC-inspired tests are over-factored.
235
+ [2437.000 --> 2439.000] Oh, I've thought a lot about that.
236
+ [2440.000 --> 2446.000] Oh, yeah, yeah, yeah, that's kind of what I got into earlier about the debate that's going on.
237
+ [2447.000 --> 2452.000] The bifactor people versus the hierarchical model people. And I'll just put up one slide here.
238
+ [2453.000 --> 2456.000] What maybe I already did it. I did it. I already put it up there.
239
+ [2457.000 --> 2459.000] Let me see here if I can find something quickly.
240
+ [2459.000 --> 2466.000] You're seeing my directory structure, right?
241
+ [2467.000 --> 2468.000] Oh, yeah.
242
+ [2470.000 --> 2474.000] Okay, you'll at least see something shortly.
243
+ [2475.000 --> 2478.000] I don't know if this is going to answer the question, but I'm going to do like a...
244
+ [2479.000 --> 2481.000] You got something there?
245
+ [2482.000 --> 2485.000] Now did you click on the green screen share button to the left?
246
+ [2486.000 --> 2487.000] Oh, yeah, yeah, yeah, yeah, yeah.
247
+ [2489.000 --> 2492.000] Maybe I should just.
248
+ [2494.000 --> 2497.000] Forbillies to say what I want to say. Okay, screen share.
249
+ [2506.000 --> 2508.000] Let me just say, let me just say, oh,
250
+ [2510.000 --> 2512.000] Paraphrase the question again?
251
+ [2512.000 --> 2523.000] Okay, so one question was about tests connecting with the interconnectedness of the models that you showed, with everything moving all over.
252
+ [2524.000 --> 2531.000] And then the other one is: at what point, if we're trying to measure these small components, do you think we're over-factoring?
253
+ [2532.000 --> 2536.000] I was trying to get a nice PowerPoint that I'm working on to respond to this kind of stuff.
254
+ [2536.000 --> 2538.000] Um,
255
+ [2540.000 --> 2544.000] Oh, let me... okay, you've got a car engine, right?
256
+ [2545.000 --> 2550.000] There's an ignition system, an electronic system, the fuel injection system; there are all these different systems, right?
257
+ [2551.000 --> 2553.000] Now,
258
+ [2554.000 --> 2559.000] How do you usually judge the quality of an engine? What's one of the metrics?
259
+ [2560.000 --> 2562.000] Horsepower.
260
+ [2563.000 --> 2566.000] I had some slides that really show this with animation and such.
261
+ [2567.000 --> 2571.000] Can you see horsepower in the engine? Is there a horsepower piece in there?
262
+ [2572.000 --> 2574.000] Right? No, there's no horsepower piece.
263
+ [2575.000 --> 2577.000] But horsepower is what comes out of these things all working together.
264
+ [2578.000 --> 2580.000] And that's called an emergent property.
265
+ [2581.000 --> 2585.000] And in factor analysis (I've got to put my slides up there, but you're going to see my creativity here):
266
+ [2586.000 --> 2593.000] this is G. These are the little arrows coming down from G to the four different factors.
267
+ [2594.000 --> 2596.000] Five, if I can get my thumb in there, okay?
268
+ [2597.000 --> 2602.000] That's the classic model of intelligence, of how people are doing factor analysis:
269
+ [2603.000 --> 2605.000] that G is causing the stuff down here.
270
+ [2607.000 --> 2609.000] You go like this.
271
+ [2610.000 --> 2614.000] Instead, G is actually just an emergent property of the brain, like horsepower is.
272
+ [2615.000 --> 2618.000] There's no thing in the brain that is G.
273
+ [2619.000 --> 2629.000] It might be neural efficiency, or brain networks working together; it might be an indicator of how well a brain is working.
274
+ [2630.000 --> 2631.000] Just like horsepower is.
275
+ [2632.000 --> 2640.000] And if you go to this kind of model (there's an article by Kovacs and Conway from about two, three years ago on what they call the process overlap theory model),
276
+ [2641.000 --> 2646.000] if this model is right, this whole thing about overfactoring falls apart.
277
+ [2647.000 --> 2655.000] Because when everything is interconnected and correlated, you get a positive manifold, and G is a statistical, mathematical necessity.
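That positive-manifold point can be demonstrated in a few lines of simulation; a sketch assuming numpy, with an arbitrary uniform inter-test correlation of 0.3:

    # Simulate 6 tests whose generating model specifies only modest pairwise
    # correlations (no common cause is built in), then inspect the
    # eigenvalues of the observed correlation matrix.
    import numpy as np

    rng = np.random.default_rng(0)
    n, r = 6, 0.3
    corr = np.full((n, n), r)
    np.fill_diagonal(corr, 1.0)

    scores = rng.multivariate_normal(np.zeros(n), corr, size=5000)
    eigvals = np.linalg.eigvalsh(np.corrcoef(scores, rowvar=False))[::-1]
    print(np.round(eigvals, 2))
    # The first eigenvalue (about 1 + (n-1)*r = 2.5) dominates the rest
    # (about 0.7 each), so a "general factor" pops out of the math from
    # positive manifold alone.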
278
+ [2656.000 --> 2662.000] And here's where some of this is coming from: in the field of psychopathology, people have been proposing a P factor.
279
+ [2663.000 --> 2665.000] That's a general psychopathology trait.
280
+ [2666.000 --> 2675.000] And some psychometricians, especially over in the Netherlands, who are really bright people, are saying: wait, don't do the same thing in psychopathology that the people in intelligence did wrong.
281
+ [2676.000 --> 2684.000] Don't assume there's a piece up in your brain that causes all of it, you know, schizophrenia, psychosis, manic depression, and that kind of stuff.
282
+ [2685.000 --> 2692.000] Maybe the statistical factor we think we have is just an emergent property of the brain.
283
+ [2693.000 --> 2698.000] And there is no G. So when people argue, hey, you're overfactoring tests, they have to buy into the fact that there's a G.
284
+ [2699.000 --> 2708.000] I have a lot more I could say about that (I've got some good readings on it), but I don't think we're overfactoring, especially when you get away from that assumption.
285
+ [2709.000 --> 2725.000] There's now a thing called network psychometrics; it gets into how measures are related in, like, three-dimensional space, and it's more of a network model, network analysis.
286
+ [2726.000 --> 2727.000] Okay.
287
+ [2730.000 --> 2736.000] I'm probably going to get myself in trouble, but I was an early factor analysis crazy person too.
288
+ [2737.000 --> 2740.000] I believed everything that came out of factor analysis was the truth.
289
+ [2741.000 --> 2752.000] And I've learned since then, especially when I worked with Jack McArdle, John Horn's mathematician and factor analysis guru, one of the best in the country. He said to me: Kevin, factor analysis is only one tool.
290
+ [2752.000 --> 2761.000] It's only one lens to look at things. You've got to look at neurocognitive evidence, behavioral genetic evidence, how things correlate with other things, and the brain structure stuff.
291
+ [2762.000 --> 2772.000] Factor analysis is also a limited tool, and I think a lot of people in academic school psychology (this gets me in a lot of trouble) are guilty of the law of the instrument.
292
+ [2773.000 --> 2775.000] You give a child a hammer, everything's a nail.
293
+ [2777.000 --> 2778.000] Okay.
294
+ [2778.000 --> 2794.000] Okay, so we've got some really good questions. I'm going to read you the next one. Alberto asks: is there any evidence for the existence of orthographic processing abilities, perhaps under GV, GS, or alphanumeric GRW?
295
+ [2795.000 --> 2801.000] "I was wondering if GA is being underassessed; there seems to be a focus on phonetic..." Oh, is this two questions? I'm sorry.
296
+ [2802.000 --> 2803.000] Yes, sorry.
297
+ [2804.000 --> 2805.000] Yeah, they're together.
298
+ [2806.000 --> 2807.000] Sorry.
299
+ [2808.000 --> 2819.000] It's not my ignorance, it's my lack of knowledge: I don't know what orthographic processing is, and I keep asking Nancy, my other co-author on the Woodcock-Johnson. Jensen's got a test of it out there.
300
+ [2820.000 --> 2822.000] It doesn't isolate as a factor.
301
+ [2823.000 --> 2824.000] Okay.
302
+ [2825.000 --> 2827.000] That doesn't mean that it isn't a process.
303
+ [2828.000 --> 2836.000] In my mind it may be kind of like executive functions: a bunch of different abilities working together as an amalgam.
304
+ [2836.000 --> 2842.000] It's not a factor. In a factor model, you know, you've got all these things down here, and this is a clear factor, here's a clear factor.
305
+ [2843.000 --> 2849.000] When I talk about amalgams, it's like going across. It's a number of different abilities working together in concert.
306
+ [2850.000 --> 2852.000] And so I don't know if it's a clear
307
+ [2853.000 --> 2862.000] psychometric trait like GF and GV. I think it's maybe more of an amalgam of things working together, and that's why it doesn't show up in studies as a factor.
308
+ [2862.000 --> 2872.000] Just the same as executive functions: it's more a functional thing, where different parts of the brain work together, like a modem.
309
+ [2873.000 --> 2874.000] So.
310
+ [2874.000 --> 2881.000] Interesting. I know I hear a lot of talk online and stuff about orthographic processing and I don't really know so much about it.
311
+ [2882.000 --> 2884.000] I don't know; I think I have a deficit in it.
312
+ [2886.000 --> 2889.000] You've seen my spelling and my handwriting; it's atrocious.
313
+ [2889.000 --> 2903.000] Okay, so I'm going to read the next question, the one I accidentally started reading before. Robert is wondering if GA is being underassessed. There seems to be a focus on phonetic coding; should we be supplementing with tests such as the SCAN?
314
+ [2904.000 --> 2905.000] Yes. Yes.
315
+ [2905.000 --> 2926.000] Yes. GA is being way underassessed. First of all, Gustafsson and colleagues have shown that under phonetic coding there are actually a lot of different abilities, at different levels in terms of complexity of processing.
316
+ [2926.000 --> 2943.000] Some are simple phonetic coding and some are more complex than that. But the abilities that are being really underassessed are temporal processing: processing of sound wave information in terms of milliseconds.
317
+ [2943.000 --> 2958.000] We didn't have the technology before, but that's the work I was showing and alluding to. GA is underassessed, and there's been an explosion of research in the last 10 to 15 years about the importance of processing sounds at the millisecond level.
318
+ [2958.000 --> 2981.000] And there are tests that can be developed today, especially with today's technology. Nina Kraus's group out of the Auditory Neuroscience Lab at Northwestern (their web pages would be good ones to go to) is correlating psychometric measures with these millisecond-level measures. And that's where this maintaining and judging rhythm comes in, where they're having kids, you know, do stuff to metronomes and measuring the millisecond performance as an assessment tool.
319
+ [2981.000 --> 2998.000] I know it's also a therapy tool, and extremely interesting findings are coming out which, to me, suggest the next versions of intelligence tests need to have more measures of temporal processing in terms of milliseconds, which can be easily done with today's technology, like maintaining and judging rhythm.
320
+ [2998.000 --> 3008.000] And there might be some others. But yes, GA is being very underassessed. In some respects, I think GA is more important to human intelligence than GV.
321
+ [3008.000 --> 3018.000] There's an article out there (I can't remember who wrote it) in Trends in Cognitive Sciences that argues auditory abilities are the scaffolding on which language is built.
322
+ [3018.000 --> 3025.000] And language is what's used for thinking; it's the tool for thought. So it's called the auditory scaffolding hypothesis.
323
+ [3025.000 --> 3035.000] We don't measure auditory abilities well because we didn't have the technology to do it back in the '60s, '70s, '80s, and '90s. But now we do. So yes, it's underassessed.
324
+ [3035.000 --> 3047.000] I've got a question, and I'm probably going to show my lack of knowledge here, because I worked in a cross-battery district about three years ago, so I'm a bit rusty now.
325
+ [3047.000 --> 3054.000] But I know that then and I only worked there for one year. So I have kind of a limited understanding.
326
+ [3054.000 --> 3068.000] But a lot of my colleagues would kind of downplay GV, that it wasn't super important, because, according to them, it didn't correlate as well with academics as some of the other areas.
327
+ [3068.000 --> 3077.000] But the way you're talking about GV, it sounds like you feel it's pretty important. Can you clarify GV for me?
328
+ [3077.000 --> 3087.000] Yeah, well, Barb Wendling and I wrote an article in Psychology in the Schools in 2010, a special issue on CHC, and we had a section called the GV mystery.
329
+ [3087.000 --> 3099.000] We reviewed all the studies that were available based on CHC theory, and we couldn't find strong relations between current psychometric tests of GV and anything.
330
+ [3099.000 --> 3122.000] But there's an extremely diverse body of research outside of psychometric testing, where they do stuff with other kinds of three-dimensional processing, that shows visual-spatial abilities are extremely important, especially for the STEM fields: science, technology, engineering, and mathematics. We know why:
331
+ [3122.000 --> 3135.000] it's the ability to do visual thought experiments in your head. The problem is that the tests we have in our current intelligence batteries just don't seem to tap into it well. So GV is very important.
332
+ [3135.000 --> 3149.000] There's a National Academy of Sciences publication that came out sometime in the last 10 years (you can get it free) that talks about visual-spatial intelligence and why visual-spatial thinking is more important in our society than ever.
333
+ [3149.000 --> 3165.000] The problem is, on the Woodcock-Johnson, on the Wechslers, and all the other tests, we don't do a very good job of measuring the stuff that people are doing in their heads. With virtual reality coming, I think better assessments are possible with computers.
334
+ [3165.000 --> 3179.000] I think we can start doing stuff with people manipulating and moving things, dynamic moving things, perspective-taking and such. So I think GV is important; we just haven't been able to measure it well yet in applied intelligence tests.
335
+ [3179.000 --> 3182.000] Very important.
336
+ [3182.000 --> 3195.000] I have one last question, and I'm going to try to articulate it so it makes sense, but I found it fascinating when you talked about the horsepower and the model.
337
+ [3195.000 --> 3197.000] I think I can do it too.
338
+ [3197.000 --> 3200.000] I've got some really nice slides on this.
339
+ [3200.000 --> 3216.000] So if horsepower is the process, and the factors are the parts of the engine, then personality, in Ackerman's theory, would maybe be one of those factors?
340
+ [3216.000 --> 3222.000] I think he has personality alongside knowledge versus process. Is that right?
341
+ [3222.000 --> 3229.000] Yes, Ackerman's PPIK:
342
+ [3229.000 --> 3240.000] intelligence as process, personality, interests, and intelligence as knowledge, or something like that. Yes, I've got a series of slides here which I could put up.
343
+ [3240.000 --> 3249.000] So what do you think about personality measurement? I guess, should
344
+ [3249.000 --> 3257.000] school psychologists look at that more, at the temperament and personality of our students?
345
+ [3257.000 --> 3270.000] Not so much personality. There's Richard Snow's work on conative abilities, probably the most important underappreciated work in school psychology; he was the most brilliant educational psychologist around.
346
+ [3270.000 --> 3280.000] He talks about conative abilities, and in there you've got motivational beliefs, which have to do with intrinsic motivation,
347
+ [3280.000 --> 3288.000] locus of control, those kinds of things: self-beliefs, self-efficacy.
348
+ [3288.000 --> 3296.000] I have some great slides on this somewhere. And there's volitional control, which has to do with self-regulated learning.
349
+ [3296.000 --> 3303.000] So it's not personality per se; it's more cognitive thinking dispositions:
350
+ [3303.000 --> 3309.000] how you get engaged in tasks. I think school psychologists should be doing a lot more there.
351
+ [3309.000 --> 3320.000] If you go to my web page, called the MindHub, and look under categories, I have a thing called the MACM, the Model of Academic Competence and Motivation.
352
+ [3320.000 --> 3333.000] I tried to take all the research on personality and non-cognitive, non-IQ abilities and come up with a conceptual model of what things we should maybe be assessing in kids beyond intelligence.
353
+ [3333.000 --> 3341.000] And yes, we should; the problem is there aren't very good tools out there.
354
+ [3341.000 --> 3349.000] So interesting. I love the emotional intelligence pieces that you're looking at.
355
+ [3349.000 --> 3356.000] I think that's so interesting, and the future of learning more about all of that seems hopeful and bright.
356
+ [3356.000 --> 3362.000] Yeah, there's a lot more; I could just keep talking about it.
357
+ [3363.000 --> 3372.000] And we are running out of time. We really appreciate you joining us tonight and sharing all this information with us; the PowerPoints really blew our minds.
358
+ [3372.000 --> 3380.000] Thank you so much. So we're going to wrap up. I think we had a good chunk of viewers out there tonight, so thanks everyone for tuning in tonight.
359
+ [3380.000 --> 3385.000] Join us again on April 15th for ADHD Essentials. Thank you, Dr. McGrew.
360
+ [3385.000 --> 3387.000] Thanks for the opportunity.
361
+ [3387.000 --> 3389.000] All right. Thank you. Night everybody.
transcript/allocentric_EmFQUDV67xQ.txt ADDED
@@ -0,0 +1,456 @@
1
+ [0.000 --> 5.880] Hi and welcome to this session on how to score and how to interpret the Oxford
2
+ [5.880 --> 12.420] cognitive screen. So the particular learning objectives for this talk are
3
+ [12.420 --> 17.120] really about the scoring. So you've seen a little bit before hopefully about the
4
+ [17.120 --> 21.440] introduction and the background and then how to actually manage it, how to move
5
+ [21.440 --> 27.480] your papers around and this little video and talk is about how you then
6
+ [27.480 --> 32.440] actually go on and score it, interpret the scores, and how you use the wheel to
7
+ [32.440 --> 40.000] report it. So section one how to score. So we have a particular scoring template
8
+ [40.000 --> 45.520] which looks like the one on the right and then we use a visual snapshot report
9
+ [45.520 --> 50.280] which looks like the big circle here on the left. So it specifically
10
+ [50.280 --> 55.640] denotes different areas within attention, memory, language, number and praxis
11
+ [55.640 --> 60.600] that we are testing within the OCS, and the aim at the end is for you to determine
12
+ [60.600 --> 65.960] whether or not the patient's performance, so their score, falls within or outside
13
+ [65.960 --> 71.680] the normative cutoffs, and then to note whether or not there is an impairment
14
+ [71.680 --> 77.440] in each of those domains. We do that simply by crossing out one of the areas
15
+ [77.440 --> 84.440] where the impairment is. So here, this kind of little form is typically at the
16
+ [84.440 --> 92.120] end of your participant pack, and for each of the scores it says what
17
+ [92.120 --> 97.520] the max score is, and then gives you an overview at the end of what the
18
+ [97.520 --> 101.040] cutoffs are. So if you want, you can put your own participant's score next to it and
19
+ [101.040 --> 106.360] determine: okay, is that less than the cutoff, impaired, yes or no, and then cross
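A minimal sketch of that cutoff logic in Python; the max scores and cutoffs below are the ones this talk goes on to mention for the first three tasks, and the OCS manual remains the authoritative source:

    # Task: (max score, cutoff). A score below the cutoff means the
    # corresponding slice of the wheel gets crossed out as impaired.
    CUTOFFS = {
        "picture naming": (4, 3),
        "semantics": (3, 3),
        "orientation": (4, 4),
    }

    def check_impairment(task, score):
        max_score, cutoff = CUTOFFS[task]
        assert 0 <= score <= max_score, "score out of range"
        impaired = score < cutoff
        verdict = "impaired (cross out this slice)" if impaired else "within cutoff"
        return f"{task}: {score}/{max_score} -> {verdict}"

    print(check_impairment("picture naming", 2))  # impaired
    print(check_impairment("orientation", 4))     # within cutoff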
20
+ [106.360 --> 113.480] out the specific slice of the pie where the problem may have occurred.
21
+ [113.480 --> 119.220] So I'll now go task by task and walk you through each of the tasks and how they're
22
+ [119.220 --> 126.280] scored, give you some examples of typical errors, and then give you an
23
+ [126.280 --> 130.680] overview of the typical questions we've had so I've been running quite a few of
24
+ [130.680 --> 136.640] these workshops over the years and I received a series of questions over
25
+ [136.640 --> 140.600] different sessions and different times I'm trying to put all of them together in
26
+ [140.600 --> 145.520] this one presentation which may be a bit too much but hopefully if you say have
27
+ [145.520 --> 151.480] a particular query about one of the tasks you can just skip to that one but if
28
+ [151.480 --> 156.440] you're new to this it may be worth you just going through all of this
29
+ [156.440 --> 162.760] presentation one by one. Okay the first task is a picture naming task as you
30
+ [162.760 --> 166.960] know from the demonstration there's four pictures and so the score is very
31
+ [166.960 --> 174.400] simply out of four. Typical cutoff for this one is three so people can make one
32
+ [174.400 --> 179.200] error and still be okay this is the impairment incidence from one of our
33
+ [179.200 --> 185.880] samples in acute stroke so you can see it's very very common to have impairments
34
+ [185.880 --> 193.080] within this task. So here are some example responses so you might have a patient
35
+ [193.080 --> 200.200] who is quite a phasic doesn't give you any response verbally but shows you for
36
+ [200.200 --> 205.160] example on the second one an eating gesture and tries to mimic how you would
37
+ [205.160 --> 211.760] open a drawer and then maybe doesn't respond so that would score zero on this
38
+ [211.760 --> 216.240] particular task but nonetheless these examples are being given because I
39
+ [216.240 --> 220.560] would hope that you make note of this and you can see that they do understand
40
+ [220.560 --> 227.000] what the meaning of it is and they can see what concepts are being brought
41
+ [227.000 --> 233.480] about but they don't necessarily have the ability to name it. Similarly you
42
+ [233.480 --> 238.840] could have someone who makes this what we would call semantic errors so they
43
+ [238.840 --> 244.280] would say rhino instead of hippo and then some sort of fruit some sort of box
44
+ [244.280 --> 248.960] some sort of fruit and again this person was scored zero but for a very
45
+ [248.960 --> 254.760] different reason as the to the first person so they clearly have an ability to
46
+ [254.760 --> 260.320] speak and to name things but they're making errors in the specificity of the
47
+ [260.320 --> 267.640] semantic categories. Finally another example and these are all taken from real
48
+ [267.640 --> 272.000] experiences over the years and I still remember the first time they said oh this
49
+ [272.000 --> 277.360] this is like a rock it's not a rock how but then you move on and so for the
50
+ [277.360 --> 282.920] second one they said slippers and then the last one's covered and the final one
51
+ [282.920 --> 287.440] is a pair and so this person would score only one out of four because covered
52
+ [287.440 --> 292.800] wouldn't be taken as correct although we would take chest of drawers or draw
53
+ [292.800 --> 301.360] yeah sorry chest of drawers that's correct. Okay so what's happening here is that
54
+ [301.360 --> 307.040] this person seems to have a vision problem rather than a naming problem and if
55
+ [307.040 --> 311.840] you squint hard enough you may be able to see the slipper it looks like more
56
+ [311.840 --> 317.280] one of those Dutch wooden shoes if you squint and you can see that the melon
57
+ [317.280 --> 323.280] may appear to look like a shoe. Try it see if you can if you can make it work for
58
+ [323.280 --> 327.880] you but it's come up quite a few times and it's actually quite a regular error.
59
+ [327.880 --> 332.520] Okay so common questions we get on this task I think's like you know why are they
60
+ [332.520 --> 336.720] grayscale why aren't you using photographs and this is not very much on purpose
61
+ [336.720 --> 341.520] we wanted to be sensitive enough as we all know it's much easier to name things
62
+ [341.520 --> 347.480] from a color photograph than from this kind of more abstract to the grayscale
63
+ [347.480 --> 354.120] drawing and it's done to make it difficult enough really and then like the
64
+ [354.120 --> 357.920] example I gave you you know what's happening if this is a perceptual
65
+ [357.920 --> 362.120] difficulties and this is why they're scoring below cutoff so they'd still be
66
+ [362.120 --> 366.320] impaired on the task but you would make it clear note that these are visual
67
+ [366.320 --> 370.320] errors and you would want to investigate further what's happening with these
68
+ [370.320 --> 376.000] kind of visual errors is it just a matter of detailed vision and they just need
69
+ [376.000 --> 380.320] glasses or is there other things going on and you might want to investigate and
70
+ [380.320 --> 387.360] assess further remember this is just a screen and it can't do everything for you
71
+ [387.360 --> 391.640] but it will definitely give you the hint of what to look at further and that's the
72
+ [391.640 --> 398.720] case here okay semantics then so the semantics task looks like this and they
73
+ [398.720 --> 402.800] just being asked to point to the fruit point to the animal and point to the
74
+ [402.800 --> 407.960] tool the score is simply out of three and the cutoff is three so in healthy
75
+ [407.960 --> 413.080] population norms nobody makes mistakes here the impairment incidents in our
76
+ [413.080 --> 419.160] sample was a lot lower as well as you can see and so typical examples when they
77
+ [419.160 --> 424.040] fail to do this correctly as I did I just have no idea and can't follow the
78
+ [424.040 --> 429.360] basic instructions and don't respond or you might get something like where they
79
+ [429.360 --> 433.080] respond to something within a category so again seem to have some sort of
80
+ [433.080 --> 440.880] semantic issue but still are able to understand sufficiently that they understand
81
+ [440.880 --> 445.000] the instructions so again you get kind of two levels of where things might be
82
+ [445.000 --> 450.320] going wrong in both cases they would be considered impaired on this task and
83
+ [450.320 --> 455.880] you would want to understand what's underlying it by doing federal assessments
84
+ [456.400 --> 461.500] so we often get asked and can we actually use the Oxford patients with
85
+ [461.500 --> 465.280] receptive aphasia if they've got difficulty in understanding language in
86
+ [465.280 --> 470.600] written or spoken form and this is something I've briefly spoken about when I
87
+ [470.600 --> 474.760] talked about background and we've tried to make as much as possible things
88
+ [474.760 --> 480.120] available by demonstrating by showing examples and so forth but if there is a
89
+ [480.120 --> 486.480] very severe receptive aphasia and they cannot even pass this task or understand
90
+ [486.480 --> 491.040] what you're asking them to do here then it is likely that very few of the
91
+ [491.040 --> 495.600] following tasks will be able to complete and maybe you want to call it a day
92
+ [495.600 --> 503.320] and stop the assessment here so the next question is an orientation one this
93
+ [503.320 --> 508.520] one is simple four questions about the city or town people are in at the
94
+ [508.520 --> 513.160] moment of a assessment what part of day it is so here you're just looking for
95
+ [513.160 --> 518.760] things like morning or afternoon or evening you're also telling them explicitly
96
+ [518.760 --> 522.600] without looking at a clock can you tell me what part of the day it is just
97
+ [522.600 --> 527.480] roughly approximately and then you ask the for the month and the year and so
98
+ [527.480 --> 533.360] similarly as with the previous task the normative data is at ceiling for
99
+ [533.360 --> 538.800] these tasks and questions where people don't tend to make any mistakes and so
100
+ [538.800 --> 545.360] anyone showing impairment here by making a single error would be classed as
101
+ [545.360 --> 550.800] impaired incidents of impairment in our sample was about quarter of our patients
102
+ [550.800 --> 556.400] that we assessed so some example responses would be again in the case of
103
+ [556.400 --> 561.240] full expressive aphasia if they are not responding but then you move to
104
+ [561.240 --> 565.640] multiple choice and you can find out all multiple choice questions are
105
+ [565.640 --> 573.320] correct and so they would score four out of four and second example is where
106
+ [573.320 --> 578.720] people might say they're hometown instead of the current place they're at say
107
+ [578.720 --> 583.120] something wrong on part of the day get the month correct get the year incorrect
108
+ [583.120 --> 588.520] and kind of show this kind of disorientation and so failure on orientation and
109
+ [588.520 --> 594.800] time and space common questions we get here is about using the multiple choice
110
+ [594.800 --> 602.640] questions do you adjust the scores and if they need MCQ do they automatically
111
+ [602.640 --> 608.200] score lower and so the answer to both of those is no so the way the scoring has
112
+ [608.200 --> 614.160] been set up is for it to not be affected by aphasia so there's no automatic
113
+ [614.160 --> 620.120] penalty for you not being able to use expressive language so in that respect
114
+ [620.120 --> 626.600] when you go to multiple choice because people can't say the responses there is
115
+ [626.600 --> 632.720] no penalty and the scores would just be as normal so any correct answer would
116
+ [632.720 --> 637.240] count as a point that said it may be that the reason that you're trying
117
+ [637.280 --> 641.200] multiple choice is not because they can't give you a free response but more
118
+ [641.200 --> 646.120] because you kind of want to prompt a little bit more to see if they would get
119
+ [646.120 --> 650.520] it correct if you were to give them multiple choice in that case you would just
120
+ [650.520 --> 655.480] be scoring on the free responses so say that example who gave the wrong part of
121
+ [655.480 --> 660.120] day you might say oh can I just show you a few options and you can you just have
122
+ [660.120 --> 665.440] another go and have another think about what part of day it is now just to see if
123
+ [665.440 --> 671.480] adding the multiple choice helps them figure out what is the correct answer but
124
+ [671.480 --> 676.360] those things are done more as a qualitative kind of extra trying to understand
125
+ [676.360 --> 680.920] a little bit more in depth what's going on but they wouldn't score for those
126
+ [680.920 --> 684.560] and you wouldn't need to do those as well so say especially if you're pressed
127
+ [684.560 --> 690.800] for time you just want to get the scores if people can speak then you wouldn't
128
+ [690.800 --> 698.600] show MCQ but you may want to at times just to figure out what they're like
129
+ [698.600 --> 703.920] without it affecting the scores this course would be just on free response
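
A minimal sketch, assuming nothing beyond the rule just described: MCQ answers score as normal when free response is impossible, and a probe MCQ after a free response never changes the score. The function name and fields are ours, not the OCS form's.

```python
def orientation_item_score(free_response_correct, mcq_correct):
    """Score one orientation question (1 or 0).

    free_response_correct: True/False, or None when no free response was
        possible (e.g. severe expressive aphasia) and MCQ had to be used.
    mcq_correct: True/False for the multiple-choice answer, if shown.
    """
    if free_response_correct is None:
        # MCQ used because of aphasia: scored as normal, no penalty.
        return 1 if mcq_correct else 0
    # A free response exists: a purely qualitative probe MCQ is ignored
    # and the score rests on the free response alone.
    return 1 if free_response_correct else 0

# An aphasic patient who gets all four MCQs right still scores 4 out of 4:
print(sum(orientation_item_score(None, True) for _ in range(4)))  # 4
```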
+ [703.920 --> 709.720] Okay, the visual field test, as we've shown, is just a simple confrontation task
+ [709.720 --> 714.840] where you hold your hands in the two upper quadrants and in the two lower
+ [714.840 --> 720.360] quadrants and you're asking the participant to just look at your nose. So it's
+ [720.360 --> 725.160] very simple: if they can detect the movements in the upper left and upper
+ [725.160 --> 732.000] right and lower left and lower right then they get the point. So here we often
+ [732.000 --> 736.800] get asked, you know, what if they're missing one side due to severe neglect rather than
+ [736.800 --> 740.520] due to a visual field impairment, so they are missing everything that's
+ [740.520 --> 745.440] happening on their left side not because they have a hemianopia but just because
+ [745.440 --> 750.160] neglect is so severe that they're completely oriented to the other side and
+ [750.160 --> 756.640] not noticing anything happening on that side? So the main question then is how
+ [756.640 --> 761.200] do you dissociate neglect from hemianopia, and so what you have to do here is
+ [761.200 --> 765.960] actually run the neglect assessment so you can tell if this is the case. When
+ [765.960 --> 770.200] you then end up running the neglect task, so the hearts cancellation task,
+ [770.200 --> 775.080] you can then see whether they are scanning across. So if people are not scanning
+ [775.080 --> 781.840] across and they're missing in this kind of visual field test, then you can't
+ [781.840 --> 787.040] dissociate; you might say it could be that they have both, but they definitely
+ [787.040 --> 792.360] also have neglect. The only time you can say, well, they clearly only have a
+ [792.360 --> 797.240] hemianopia or a quadrantanopia, is when they pass the visual field test, sorry,
+ [797.240 --> 801.960] when they don't pass the visual field test but they do pass and go and search
+ [801.960 --> 809.360] everywhere in the broken hearts test. Next, this is a task on sentence reading.
+ [809.360 --> 815.600] There's quite a complex sentence here that has got 15 words split over four lines,
+ [815.600 --> 820.360] so we've tried to keep it centered, and the scoring is very simple: you kind of go
+ [820.360 --> 824.480] along as people are reading it and you tick off each of the words.
+ [824.480 --> 828.920] Importantly, we'd also very much recommend that you write down the kind of
+ [828.920 --> 834.200] errors people make rather than just do a tick and a cross. You don't really need
+ [834.200 --> 839.080] it for the score; in the end you can just score whatever is correct and incorrect,
+ [839.080 --> 844.040] but you do need it for yourself, to maybe interpret or to maybe guide what kind
+ [844.040 --> 849.440] of fuller assessment may be needed. The cutoff here on the UK norms is 14; that means
+ [849.440 --> 854.000] people can make one error and still be okay. Once people start making more than one
+ [854.000 --> 859.640] error then they would be considered impaired on this task. So here's some
+ [859.640 --> 866.000] example responses. A classic example would be something like "any of the lands got a
+ [866.000 --> 872.360] quay fought the Kolo Nell on his yacht", so you get a series of types of errors
+ [872.360 --> 879.280] here which may hint at different things. So for example "lands" instead of "islands",
+ [879.680 --> 884.640] missing the first half, seems to point towards having an issue on the left hand side,
+ [884.640 --> 890.720] also completely missing the word "sitting": they've clearly got a very severe
+ [890.720 --> 896.560] left-sided neglect, which is causing them to either miss complete words or miss parts
+ [896.560 --> 901.360] of words, which is something we call neglect dyslexia, when they're missing specific
+ [901.360 --> 908.360] parts of a word that are lateralized. So it could be either omissions or it could be
+ [908.520 --> 916.280] substitutions, where they might say something like "hands" rather than "lands", so
+ [916.840 --> 921.880] these kind of lateralized errors point towards the possibility of neglect dyslexia.
+ [921.880 --> 926.120] You've only got the one word here so you're not entirely sure, but here's another one then:
+ [927.080 --> 932.280] "fought" instead of "thought". So this is starting to very much paint a picture where you might want
+ [932.280 --> 938.120] to do a fuller reading assessment to see what's going on. On the other side here we have
+ [938.120 --> 945.720] these kind of regularizations, as we call them, so "quay" read as it's spelled instead of "key", and this kind of sounding
+ [945.720 --> 956.280] out of colonel as "Kolo Nell" instead of "kernel", and then this "yacht". So there's another, oh sorry, there's
+ [956.280 --> 962.120] another hint here that there's something wrong with that kind of route to reading. So in psychology
+ [962.760 --> 969.000] we have this classic dual-route model of reading. One is the phonetic way, which is how
+ [969.000 --> 974.040] a lot of the children now are being taught how to read: you know how to pronounce each part of the word, o,
+ [974.600 --> 981.640] okay, ff, put it together, and that route works for a lot of the simple regular words. But then
+ [981.640 --> 986.760] with these kind of irregular words you have to recognize them as a whole, and you have to use this
+ [986.760 --> 994.680] other route to reading, which is this kind of word recognition type route, where when one of those
+ [994.680 --> 1001.800] routes gets impaired it's possible that you get this kind of pattern. So these are just hints;
+ [1001.800 --> 1007.240] I mean I'm not saying the sentence reading is going to allow you to diagnose all of these things for sure,
+ [1007.240 --> 1015.080] but it will hint at these potential issues that you may want to assess further.
+ [1015.560 --> 1021.880] So a common question here is often, you know, why do you have these complex irregular words in here?
+ [1021.880 --> 1027.640] As I've tried to explain already, it's looking for these things like surface dyslexia, as it's known,
+ [1027.640 --> 1037.560] where people regularize this kind of word as they read, so they might say "is-lands" for example. So on the number
+ [1037.560 --> 1044.120] section what you get is these kind of numbers to write, and people are asked to write each number
+ [1044.120 --> 1052.120] separately. The cutoff for this is three, so people have to be perfect, because all of our healthy
+ [1052.120 --> 1060.440] normative data show that this is a task again at ceiling in the neurologically healthy population.
+ [1061.080 --> 1067.320] Similarly, the second task is a calculation task, where there's some simple calculations, just some
+ [1067.320 --> 1073.320] sums and subtractions; we explicitly chose not to include any multiplications or divisions
+ [1073.320 --> 1080.200] because they tend to weigh much more heavily on language domains than these kind of simple calculations.
+ [1080.200 --> 1086.520] We do have one of each that has got a bridging function, where you have to use a little bit more working
+ [1086.520 --> 1095.480] memory to keep the carry and move it to the next set of units. So you know, for seven plus nine
+ [1095.480 --> 1101.320] you go from the units to the teens, and you have to remember that you have to carry one over.
+ [1102.280 --> 1107.560] So the cutoff for that one is three, and that's because quite a lot of our healthy norm sample
+ [1108.280 --> 1114.360] also struggles, particularly with this last more complicated subtraction, 36 minus 17,
+ [1115.000 --> 1122.840] equaling 19. Okay, so here are then some example responses of typical errors. So we very often see
+ [1122.920 --> 1132.840] this "extra zero", as we call it, so people write 700 and then 8, or 15,000 with three zeroes and then 200,
+ [1133.320 --> 1140.520] and so this is this kind of semantic error where, for them, you know, 700 is 700, and it's an
+ [1140.520 --> 1146.120] error in this kind of place coding, of where the units and the tens and the hundreds belong,
+ [1146.680 --> 1154.040] and it's very common in acute stroke to see these kind of errors; it is much, much less common to see
+ [1154.040 --> 1162.840] these errors six months down the line. So when we try to understand what it means, we probably think
+ [1162.840 --> 1169.560] it has something to do with executive control and an ability to inhibit this kind of prepotent "700"
+ [1170.200 --> 1180.360] response, and whether we can override it with knowing, oh actually, it is 708 and only needs the one zero to make
+ [1180.360 --> 1190.040] it a hundred. Another error might be some sort of perseveration on the numbers, where single numbers
+ [1190.040 --> 1197.800] get repeated and they struggle to keep in mind the full number, so you often see these kind of
+ [1198.280 --> 1206.360] repeated numbers, and sometimes people write very, very long strings of either
+ [1206.360 --> 1212.760] numbers or letters in response to this kind of task where you're asked to write down some numbers.
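
To make the place-coding idea concrete, here is a toy Python illustration of correct transcoding versus the literal "write each chunk out in full" error described above; both helper functions are hypothetical, purely for illustration.

```python
def place_coded(parts):
    # Correct transcoding composes value * place, e.g. 708 = 7*100 + 0*10 + 8*1.
    return sum(value * place for value, place in parts)

def lexicalised(chunks):
    # The 'extra zero' error writes each spoken chunk out in full and
    # concatenates the digits instead of nesting them in the place frame.
    return int("".join(str(chunk) for chunk in chunks))

print(place_coded([(7, 100), (0, 10), (8, 1)]))  # 708
print(lexicalised([700, 8]))                     # 7008
print(lexicalised([15000, 200]))                 # 15000200
```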
+ [1214.440 --> 1221.800] On the calculation, these are some typical responses, like here, where they struggle to do the
+ [1221.800 --> 1230.040] carry operation, as well as struggle to see the minus sign and kind of continue adding up,
+ [1231.320 --> 1237.080] for example on the third one, and again an error in the carry in this last one.
+ [1239.160 --> 1244.040] In patients with severe aphasia you will often see that when they can't respond, and they can't say it
+ [1244.040 --> 1250.040] and they also can't write very well, you can move straight to multiple choice, and if they get all
+ [1250.040 --> 1254.600] of the multiple choice correct that would give them a score of four out of four. So again there would
+ [1254.600 --> 1263.000] be no penalty for having to use MCQ because of an aphasia. So common questions here are about these
+ [1263.000 --> 1268.760] types of errors and whether you score them differently, and so the answer again is no: an error is an error in
+ [1268.760 --> 1274.040] terms of the normal cutoffs, it's the same. But in terms of, again, trying to understand a little bit more
+ [1274.040 --> 1279.240] what's going on, and maybe directing some of the further assessments, it's very important that you write
+ [1279.240 --> 1286.200] down the errors; it really helps to interpret, which is why we always suggest it's a good thing to
+ [1286.200 --> 1291.720] not just see the scores but also see the performance of the patients. It gives you a much richer picture.
+ [1293.160 --> 1300.200] So MCQs are not scored differently, unless you're just trying to use an MCQ to further probe whether
+ [1300.200 --> 1305.160] they might be able to do it with an MCQ, in which case you would ignore those kind of scores; you
+ [1305.160 --> 1312.280] would only count the MCQ scores if they are not able to do free response. But in the end they're
+ [1312.280 --> 1319.480] not scored differently if they're being used because of a language problem. And finally we often get
+ [1319.480 --> 1324.840] asked if they can write things down to work out the answer, and the answer is no. So the idea is that
+ [1324.840 --> 1330.040] if they want to write down the response because they can't say it, that's okay, but they shouldn't be using
+ [1330.120 --> 1337.640] pen and paper to try and calculate, or to try and write down the operations to try and work it out
+ [1337.640 --> 1345.960] and show the workings. Okay, the next task is the hearts cancellation task. So this is our neglect
+ [1345.960 --> 1351.880] task, and the first thing to explain is that you run the practice and you explain,
+ [1353.240 --> 1358.360] and so on. On this already we have some common questions around what you do if the participant
+ [1358.360 --> 1363.400] can't hold a pen, and although that isn't really about scoring, I've put it here because
+ [1363.400 --> 1368.520] sometimes people would say we couldn't assess it because they couldn't hold a pen, so we couldn't
+ [1368.520 --> 1373.720] actually test it, and so we couldn't score it. But importantly, if they can't hold a pen we would
+ [1373.720 --> 1379.640] still assess it: try and get them to show you just with their finger, and you as the examiner can
+ [1379.640 --> 1386.520] cross out, because this task is really about trying to see if they search across a space, if they can
+ [1386.520 --> 1393.160] orient and look for things around the space, and it's not about holding a pen or not holding a pen.
+ [1393.160 --> 1399.720] So if they can show you in another way, just by pointing, that they are exploring all of the space,
+ [1399.720 --> 1403.880] that's a valid response. So please do assess patients who can't hold a pen as well.
+ [1405.880 --> 1412.440] Finally, on the scoring then of this: doing the practice, so if somebody has an object-centered problem,
+ [1412.440 --> 1420.200] so I've tried to explain this before, so when they consistently cross out lateralized
+ [1420.200 --> 1426.520] hearts, so for them they look like complete hearts, they might do the task correctly, so show
+ [1426.520 --> 1434.200] that they understand, but systematically cross out hearts with a gap on one side. If you have to do
+ [1434.200 --> 1441.640] the practice once or twice, it doesn't impact the score; the score in the end is just on the actual
+ [1441.640 --> 1446.520] cancellation, and there's nothing on how many practices you do. But again these are observations
+ [1446.520 --> 1452.680] that may be relevant, so we do encourage you to make notes and to keep track of these things, just as a
+ [1452.680 --> 1461.960] wider observational picture. So here's an example of egocentric neglect, where the patient has
+ [1461.960 --> 1470.280] crossed out only a series of hearts on one particular side of the page, and then this is an
+ [1470.280 --> 1475.560] example of pure allocentric neglect, where they've crossed out throughout all of the page but
+ [1475.560 --> 1483.640] systematically made these lateralized errors. So for the scoring, this is what it looks like. You can
+ [1483.640 --> 1490.840] actually divide up that whole page that you've just seen, let me go back, so there's little dots
+ [1490.840 --> 1499.880] here, as you can see, and what they denote is that if you want to score it box by box you can draw a
+ [1499.880 --> 1507.480] line here and draw a line across, and you'll see that you have one, two, three, four, five, six, seven, eight,
+ [1507.480 --> 1514.760] nine, ten boxes, and that's what these are here, this is box one to ten. And so in each of them there's
+ [1514.760 --> 1518.680] five complete hearts, there's five hearts with a gap on the left, and there's five hearts with a gap
+ [1518.680 --> 1524.760] on the right. So you would go through and count how many in each of those areas: how many of the
+ [1524.760 --> 1529.880] complete hearts did they get, how many of the left gaps, how many of the right gaps. So perfect
+ [1529.880 --> 1536.200] performance obviously is to have five out of five on all of the complete hearts and have zero
+ [1537.000 --> 1542.360] lateralized errors. But in the end you count them up over here, so you say total number of left-gap
+ [1542.360 --> 1549.320] hearts, say zero, total number of right-gap hearts, say five, and you will find that you then have an
+ [1549.320 --> 1555.880] object asymmetry of left minus right, so you'd have an object asymmetry of minus five.
+ [1557.320 --> 1564.840] Similarly you record the time, and then you also count up the total correct, so those are all of those,
+ [1566.360 --> 1572.520] and do the space asymmetry. So for space asymmetry you get the total correct in boxes seven, eight, nine,
+ [1572.520 --> 1579.480] ten, so say in this case they've got them all, say 20, minus the total correct
+ [1581.160 --> 1588.600] in these boxes, and say they didn't go to the extreme ends but they got these, so say 10,
+ [1588.600 --> 1595.160] so in that case you would have 20 minus 10, so you'd have an asymmetry of 10. So the problem
+ [1595.880 --> 1601.880] would be left-lateralized because it's a positive value, right, whereas if you had a negative
+ [1601.880 --> 1608.360] value you would have things missing on the other side. So that's the thing to always remember, and I'll
+ [1608.360 --> 1614.200] show you again at the end this kind of left versus right: all the positive asymmetry values denote
+ [1614.760 --> 1621.080] left neglect, all the negative asymmetry values, be they object or space, denote right neglect.
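
A small sketch of the asymmetry arithmetic just walked through. The sign convention follows the description here (positive values denote left neglect, negative values right neglect); exactly which box numbers sit on which side of the page is our assumption, so defer to the OCS manual for the definitive layout.

```python
def object_asymmetry(left_gap_crossed, right_gap_crossed):
    # Gap hearts are distractors, so crossing one is an error;
    # left minus right, e.g. 0 - 5 = -5 as in the example above.
    return left_gap_crossed - right_gap_crossed

def space_asymmetry(correct_boxes_7_to_10, correct_boxes_1_to_4):
    # Complete hearts found in the extreme boxes on one side of the page
    # minus those on the other, e.g. 20 - 10 = +10 in the worked example.
    # (Which box numbers fall on which side is assumed here.)
    return correct_boxes_7_to_10 - correct_boxes_1_to_4

print(object_asymmetry(0, 5))   # -5  -> right-sided object neglect
print(space_asymmetry(20, 10))  # +10 -> left-sided space neglect
```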
+ [1622.840 --> 1628.600] So here are the norms and cutoffs. There's a cutoff for the overall performance, so people who
+ [1629.160 --> 1635.720] miss more than eight of the hearts, so of the correct ones, would be considered impaired.
+ [1638.040 --> 1646.120] So it's okay to miss a few, right; that would be within the normal range of the
+ [1646.120 --> 1654.440] neurologically healthy data. On the space asymmetry, we would more conservatively probably say
+ [1654.440 --> 1659.080] you have to have an asymmetry of more than three, be that positive or negative.
+ [1659.880 --> 1667.080] On object asymmetry, none of our healthy participants showed any errors in cancelling
+ [1667.080 --> 1675.080] hearts with a gap, so anything that's more than zero would be considered outside of the norm
+ [1675.080 --> 1680.440] cutoff data. In terms of clinical relevance, that may be a different question, right: if they just
+ [1680.440 --> 1686.760] make the one error I'd be very hesitant to call that a very clear object neglect; once they
+ [1686.760 --> 1692.360] start making two, three, four, five, six and up, that's when it becomes much more
+ [1694.520 --> 1700.760] clear that this is not just a one-off error. So although the norms are zero,
+ [1700.760 --> 1707.560] we would typically have a cutoff of more than one, so you don't count this one-off error as a clear
+ [1708.360 --> 1717.080] impairment. So this is the impairment incidence: half the people we tested acutely
+ [1718.360 --> 1724.360] showed an impairment on the level of accuracy, so got less than 42 correct, and then for these kind of
+ [1724.360 --> 1733.480] spatial asymmetries you can see more often left-sided neglect than right-sided neglect, and
+ [1733.480 --> 1741.320] similarly more often left-sided object neglect than right-sided object neglect. Common questions here:
+ [1741.320 --> 1748.120] you know, there's a time limit here of three minutes, why is that, and what happens if they fail
+ [1748.120 --> 1756.040] to finish in that time? So the reason why there is a time limit is to keep the interval as stable as
+ [1756.040 --> 1761.080] possible between encoding of the sentence and then the recall task, where you have to
+ [1761.080 --> 1766.360] recall the sentence; we don't want to make that gap too wide, for it to become too difficult.
+ [1768.440 --> 1775.720] And in terms of what if they fail to finish, should we still score this? Yes, definitely, because
+ [1775.720 --> 1782.600] all of the healthy participants were able to finish in that time, and needing more than three minutes
+ [1782.600 --> 1788.440] is clinically important. It may well be that they indeed don't have neglect but have this
+ [1788.440 --> 1794.600] kind of slowed processing, and that's a relevant thing to notice. Although this task is aimed at
+ [1794.600 --> 1802.040] detecting neglect, an impairment in the overall accuracy also hints at sustained attention,
+ [1802.840 --> 1809.320] hints at broader selective attention issues and slower processing. Again, this would be a hint;
+ [1809.320 --> 1815.240] it's a screen, so we urge you to assess further, but it will definitely give you a very clear hint
+ [1815.400 --> 1824.040] at what may be the key problem here. Okay, in terms of praxis, the task is to copy
+ [1825.720 --> 1832.120] what you see, just like in a mirror, and so you get a first presentation and a second
+ [1832.120 --> 1839.400] presentation if you need it. So basically you do the first presentation, like this and like this, and
+ [1839.400 --> 1845.400] they copy, and this is the first gesture and second gesture. If they get it correct you go tick
+ [1846.120 --> 1851.080] for the first one and tick for the second one, and that's it, you don't repeat, and they score three out
+ [1851.080 --> 1860.920] of three. If however they make an error, so say they do this, then you would repeat the task, and
+ [1860.920 --> 1866.840] so you would have a tick and a cross here for the first presentation, and then if the second time
+ [1866.840 --> 1873.640] round they get it all correct, they'd have tick and tick and they would score two out of three. If they
+ [1873.640 --> 1878.840] still make a mistake in the second presentation but they get one right, they would score one out of
+ [1878.840 --> 1884.120] three, and if after the second presentation they still didn't get any right, they would score zero
+ [1884.120 --> 1890.600] out of three. And so that's the same for the one underneath as well: three out of three, perfect
+ [1890.600 --> 1897.080] first time; two out of three, perfect second time; one out of three, still an error in the second
+ [1897.080 --> 1905.080] presentation; zero out of three, completely incorrect even after the second presentation. On the
+ [1905.080 --> 1911.720] finger positions, you basically show them a finger position and ask them to copy it, and so
+ [1911.720 --> 1919.160] you only have one box, so it's either correct or incorrect, first time, second time, and that's the
+ [1919.160 --> 1926.440] same for the second one. So typical example responses of errors might be, for the second one, that
+ [1926.440 --> 1934.600] they do something like this, or that they orient it the wrong way around, and that kind of gets us
+ [1934.600 --> 1939.800] to the common question of how you then score one out of three on these finger positions, because
+ [1939.800 --> 1947.320] there aren't two gestures. So basically: three out of three for first time perfect, two out of three for
+ [1947.320 --> 1955.160] second time perfect, and one out of three for second time not perfect but recognizable, so either
+ [1955.160 --> 1961.240] like some sort of orientation error or something that shows you they kind of have it, but not quite.
+ [1963.640 --> 1969.480] And so that's how you can still score one out of three on the second presentation: you can get
+ [1969.480 --> 1978.040] some credit for having a recognizable gesture even though it's not perfect. And another common
+ [1978.040 --> 1982.120] question here is, you know, what if there is severe arthritis preventing this full straightening
+ [1982.120 --> 1987.960] of the fingers, or rather than this you get something that's more like this, but it's simply because
+ [1987.960 --> 1994.360] they cannot straighten their hand? In that case you would give the score. So remember, you're only
+ [1994.360 --> 1999.560] trying to assess what you're trying to assess: this is about motor planning and
+ [1999.560 --> 2009.480] about mirroring, so if that is intact then the full score is given. This is not an easy task;
+ [2009.480 --> 2016.200] several of our healthy normative control participants also make errors or need the second presentation, so
+ [2016.280 --> 2024.760] the cutoff is very conservative here, at eight. Okay, moving on to memory, we're on task nine, and there's
+ [2024.760 --> 2033.960] only 10 tasks in the OCS, so we're getting through it. So memory, and this is about first asking free
+ [2033.960 --> 2043.960] recall: do you remember the sentence you read before? And they either remember or they don't,
+ [2043.960 --> 2050.360] and for each one they don't remember or get incorrect, you move on to do recognition. So say they
+ [2050.360 --> 2056.760] say, oh it was something about islands and a colonel but I can't quite remember what, then you only show page two
+ [2056.760 --> 2061.880] of the multiple choice and say, okay, which one of these four words was in there, and similarly
+ [2061.880 --> 2067.240] show the last one: which one of these four words was in there? So if the patient achieves full
+ [2067.240 --> 2072.440] marks for recall, do you then automatically award the points for recognition, and the same for a
+ [2072.440 --> 2078.440] partial score? So yes, indeed: if the recall was fully there then you don't test the recognition
+ [2078.440 --> 2084.120] and the points just carry over, and so if you get four out of four here you automatically get four out of four
+ [2084.120 --> 2090.280] there. Say in our example the person got two out of four here and then got both of the extra ones
+ [2090.280 --> 2097.800] correct in multiple choice, so they get four out of four. And there we often get asked why there's
+ [2097.800 --> 2103.560] no specific cutoff for verbal recall only, and that's very much to do with our idea of not
+ [2104.200 --> 2108.520] penalizing people with aphasia and treating everyone the same across the board.
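
A minimal sketch of the carry-over rule just described, with our own (hypothetical) data layout: recalled items carry their point over automatically, and only the remaining items are scored by recognition.

```python
def recognition_total(recall_correct, mcq_correct_by_item):
    """recall_correct: list of bools, one per target word.
    mcq_correct_by_item: dict with MCQ outcomes for the items that
    were not recalled and therefore went to recognition."""
    total = 0
    for idx, recalled in enumerate(recall_correct):
        if recalled:
            total += 1  # full recall carries over automatically
        elif mcq_correct_by_item.get(idx, False):
            total += 1  # found on recognition instead
    return total

# Two of four recalled, the other two then picked out correctly in MCQ:
print(recognition_total([True, True, False, False], {2: True, 3: True}))  # 4
```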
+ [2110.280 --> 2119.480] So the cutoff is only based on the recognition total scores, and the cutoff is also three, so people are
+ [2119.880 --> 2129.240] allowed to make one mistake and would still be considered unimpaired. Okay, the final bit of memory is then a
+ [2129.240 --> 2135.880] recognition, which is just task recognition, incidental recognition of the things they've seen before,
+ [2137.560 --> 2143.320] and you just count the number of correct answers. And then finally, and maybe the most
+ [2143.400 --> 2150.040] difficult one, so if you need a little break, put it on pause, then we can go through this one. So
+ [2150.040 --> 2156.360] this is the task switching task. This one is probably harder to score and harder to interpret, so
+ [2156.360 --> 2162.840] we'll walk you through what the task is. So as you know, there's three parts to it: there's two baselines,
+ [2162.840 --> 2167.560] where you first just connect the circles, then you just connect the triangles, and then finally
+ [2167.880 --> 2178.840] you alternate between them. So the cutoff score is made on the executive score, and this executive
+ [2178.840 --> 2185.320] score is the sum of the baselines, so the sum of circles and triangles, minus the score on the mixed
+ [2185.320 --> 2190.040] trail, and we'll talk you through some examples so you can see how that works and why that was
+ [2190.920 --> 2197.480] set up to be like that. So for the first example responses, I'm just going to show you
+ [2198.280 --> 2204.920] examples of patients I've had before. So this would be a simple failure to understand the instruction:
+ [2206.040 --> 2213.560] kind of repetitive behavior, this kind of perseveration, and just drawing lines without really
+ [2213.560 --> 2223.400] understanding what the task is. Next, this happens quite often as well, where people again don't
+ [2223.400 --> 2229.160] quite understand the task and are just trying to connect two shapes at a time rather than make a full
+ [2229.160 --> 2235.720] trail. So this is why we run the practice, so we try and explain it, but sometimes even after explaining
+ [2235.720 --> 2246.760] and trying to do the joint practice, people still don't really get how to do the final task. And finally,
+ [2246.760 --> 2252.440] this would be an example of someone who starts off okay and then stops switching and continues on
+ [2252.440 --> 2258.200] one shape. This is a very common error and a very classic problem in the switching aspect, and
+ [2258.200 --> 2265.160] that's what this task is really about with that executive score, because what we're trying to understand
+ [2266.040 --> 2272.600] is not just simple instruction comprehension and trail making, but this kind of added cost of having
+ [2272.600 --> 2280.520] to switch, this kind of added executive load: what that does and whether that particularly is impaired.
+ [2282.760 --> 2289.640] So here's another example that maybe some of you have come across, which looks like this on the
+ [2289.640 --> 2298.200] switching task, and so they would have scored, say, six out of 13. And what's happening here is, even
+ [2298.200 --> 2303.160] though we've tried to really keep it central, some patients with very severe neglect, and especially
+ [2303.160 --> 2308.520] if they have both object and space neglect, show this kind of incidental neglect where they're
+ [2308.520 --> 2313.800] not specifically being told, oh, try and get all of the space, but just, you know, make this trail;
+ [2314.920 --> 2322.120] often you see this exacerbated neglect response where they just fail to explore one side of the shapes.
+ [2324.280 --> 2328.840] But this is again why we have the baselines, because they're doing quite a good job at switching:
+ [2329.320 --> 2334.760] you can see, you know, they're clearly switching in the space that they can see, and then when you
+ [2334.760 --> 2342.040] compare the baseline performance to this kind of executive score you will find that these people
+ [2342.040 --> 2347.480] would not be classed as impaired, because they don't seem to have this added executive switching
+ [2348.200 --> 2353.880] deficit; in fact they do just as well or better than on the single baseline tasks, and that's
+ [2353.880 --> 2355.560] what this task is getting at.
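
The executive score lends itself to a two-line sketch; the numbers below are invented for illustration, and the impairment direction (a large positive difference) follows the discussion here rather than any published cutoff value, which you should take from the OCS norms.

```python
def executive_score(circles_correct, triangles_correct, mixed_correct):
    # Sum of the two baseline accuracies minus the mixed-trail accuracy.
    return (circles_correct + triangles_correct) - mixed_correct

# A patient who fails everything (baselines and mixed alike) gets a small
# difference score, so they are not flagged on switching specifically;
# that pattern is exactly the one discussed in the next question below.
print(executive_score(1, 1, 1))    # 1
# A selective switching failure shows up as a large difference instead:
print(executive_score(12, 12, 4))  # 20
```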
+ [2355.640 --> 2366.040] I'll illustrate this point. Okay, so a common question is, you know, but what if they fail, like your first
+ [2366.040 --> 2371.560] participants who don't really get the task? They still won't have an executive score beyond the
+ [2371.560 --> 2377.320] cutoff, but surely these people are impaired? And this is a crucial part to think about: so
+ [2377.320 --> 2384.280] when we call something an executive impairment, we want to specifically look at this higher-order switching
+ [2384.280 --> 2392.120] impairment that's above and beyond simple trail making; it's about this level of high-level
+ [2392.120 --> 2400.520] switching that goes on. So indeed they would not be impaired on the switching, but they would be
+ [2400.520 --> 2407.480] impaired overall; we would make clear notes that what's happening here is impairment on each of the
+ [2407.480 --> 2414.680] tasks, and it's basically much more likely to be an impairment in understanding of complex instructions
+ [2415.320 --> 2423.640] than an executive high-level switching impairment. So, the manual states to record the time; why do you
+ [2423.640 --> 2429.160] have to do this, and is there a maximum time allowed? So yeah, we have a maximum time for each of the
+ [2429.160 --> 2434.760] baselines, which is 30 seconds; there's no maximum time on the switching.
+ [2435.720 --> 2441.000] But the timing is there because we've recorded it in the healthy population, and sometimes these kind of
+ [2441.000 --> 2448.040] more subtle deficits may be obvious in timing only and not in accuracy. We've really tried to
+ [2448.040 --> 2453.640] design these tasks to be more about accuracy than reaction time, because there are a lot of confounds
+ [2453.640 --> 2458.680] when you're using time in these kind of neurological populations: they may already have a general
+ [2458.680 --> 2464.520] slowing due to motor deficits, which is nothing to do with their cognitive deficits. But nevertheless,
+ [2464.600 --> 2474.040] if you know that they're fine in that respect, you can look at the time it takes to do the
+ [2474.040 --> 2480.520] switching compared to the baseline, and there are norms for that in the original normative paper, but
+ [2481.400 --> 2490.280] most of the time we focus on accuracy; it's there if you need it, though. Okay, next section: how to interpret those
+ [2490.280 --> 2497.800] cutoffs. So here we go again, this is the same overview I've shown you before, and you can see here
+ [2498.600 --> 2507.800] that we've added this in: if the asymmetry is more than one versus less than minus one, this
+ [2507.800 --> 2514.120] denotes left neglect, this denotes right, and again left and right, and so we've added that in here
+ [2515.000 --> 2521.560] so you can have this kind of quick reference to keep looking at, and all the other scores are here
+ [2521.560 --> 2529.640] as well, but importantly also this kind of overall score and so on. So then this is the detailed version,
+ [2530.760 --> 2535.640] in case that other one was a little bit small for you, but we've gone over all of these tasks
+ [2535.640 --> 2542.200] specifically, and you can see for some of these basically people need to be at ceiling and anything
+ [2542.200 --> 2549.000] below ceiling is impaired, and sometimes one or two errors are allowed within the normal range.
+ [2551.320 --> 2558.120] Okay, so here are some extra questions overall, about whether there are any norms for the total score of the OCS.
+ [2558.760 --> 2564.280] So there is no straightforward adding up of all scores and getting to a total score, as you
+ [2564.280 --> 2571.000] may know from things like the MoCA, and that's mainly because it would be very severely
+ [2571.000 --> 2576.520] weighted by some of the tasks; for example, the hearts cancellation task is out of 50, whereas a
+ [2576.520 --> 2582.520] lot of the other scores are only out of four, and if we totalled them all up it'd be very much driven by
+ [2582.520 --> 2588.280] the performance on some of the biggest-scored items. So we don't do that, and you can't get a total
+ [2588.280 --> 2595.560] score in that way. In order to get some idea about severity overall, we have suggested that people
+ [2595.560 --> 2601.880] use the total number of tasks impaired. So we've just walked through all of them, there's about 10 tasks,
+ [2601.880 --> 2607.640] and in that sense you might get a bit of an idea of severity overall across domains; whether they're
+ [2607.640 --> 2612.920] impaired in one task versus seven tasks will give you a different clinical picture.
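
A trivial sketch of that severity summary: simply count how many task-level cutoffs the patient falls outside. The task names are shorthand of ours, not the official OCS labels.

```python
def tasks_impaired(results):
    """results maps task name -> True if below that task's cutoff."""
    return sum(1 for impaired in results.values() if impaired)

example = {"naming": True, "semantics": False, "orientation": False,
           "visual_field": False, "reading": True, "numbers": False,
           "hearts": True, "praxis": False, "memory": False,
           "trails": False}
print(tasks_impaired(example))  # 3 of the 10 domains impaired
```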
+ [2614.920 --> 2620.680] It's worth thinking about this, though, because it may be that you have a relatively small impairment
+ [2620.680 --> 2626.200] in several tasks versus a very, very severe impairment in one task, so this kind of severity
+ [2627.080 --> 2631.480] depends on how you look at it. If we do this, we're kind of talking about overall burden
+ [2632.200 --> 2639.560] of domain impairments, and higher numbers mean more specific domains and tasks impaired,
+ [2641.000 --> 2646.680] and the norms for that are simply the same, with anything more than zero being outside of
+ [2646.680 --> 2653.000] the norms, right, because by definition each of the tasks has its own cutoff. So for a total
+ [2653.000 --> 2660.440] impairment score, anything higher than zero would be an impairment. So we also often
+ [2660.440 --> 2666.120] get asked about this norm data: you know, most of the norm data from our original paper was gathered in
+ [2666.120 --> 2671.880] people aged over 65, and so does this mean that we can't use the screen in younger stroke survivors?
+ [2672.440 --> 2678.840] And the answer obviously is no: please do use it in younger stroke survivors, but maybe be aware
+ [2679.480 --> 2688.520] that those cutoffs may be quite liberal in that sense for maybe younger people. But if they are
+ [2688.520 --> 2695.160] impaired on the older cutoffs then they would definitely be impaired, so it's only going in one
+ [2695.160 --> 2702.600] direction, if that makes sense. So say a person scores two out of four on the naming: that would be impaired
+ [2702.600 --> 2707.720] for the over-65s, and that will definitely be impaired for the under-65s as well. So the cutoffs definitely
+ [2707.720 --> 2714.360] do apply; maybe they're a little bit generous for younger people, but at the same time this is just
+ [2714.360 --> 2722.600] the screen, just looking at this very kind of gross domain-specific impairment, and for that definitely
+ [2722.920 --> 2729.800] you can use it. More norms are being gathered, and several of the translations are gathering
+ [2730.600 --> 2736.760] wider age-range norms, but we chose specifically to go for the older population because that is
+ [2737.400 --> 2747.160] the population that we end up seeing. Next up: how to use the wheel of cognition. So this is what
+ [2747.160 --> 2753.240] it looks like, and this is how we report. So for example you would cross out two areas where they were
+ [2753.240 --> 2761.000] impaired and you'd make a little comment, and similarly you would do so as you go around and do all the other
+ [2761.000 --> 2766.440] tasks. It's nice to also note when people do very well, so, you know, clearly in this case for
+ [2766.440 --> 2773.720] example there's a language impairment but they could still do calculation, so it's nice to highlight
+ [2774.360 --> 2781.400] the elements of preserved function as well. This patient also shows some sequencing and
+ [2782.040 --> 2789.720] praxis issues, poor verbal memory specifically on the language component, likely related to the fact
+ [2789.720 --> 2794.920] that they have poor language encoding, and they had issues with the reading and the naming,
+ [2795.240 --> 2804.840] and in this case also object neglect. So here the executive score, as you can see, is not
+ [2804.840 --> 2810.200] crossed out, so they weren't particularly impaired there, but they had poor complex instruction understanding,
+ [2811.000 --> 2815.320] both in baseline and switching, and these were the scores. So these kind of bits
+ [2815.880 --> 2822.680] around the frame are there to write comments, to write specific scores, to write observations, so we
+ [2822.680 --> 2830.920] encourage people very much to use that space and to add more than just coloring in the wheel, and
+ [2830.920 --> 2837.480] actually use it to report some of these extra elements. Nevertheless, you can see at a glance here
+ [2837.480 --> 2843.480] where people's strengths and weaknesses are, just looking across the different domains.
+ [2844.600 --> 2851.960] Here's a second example. This person made these specific neglect dyslexia (question mark)
+ [2853.320 --> 2857.240] errors, didn't seem to have a visual field deficit, but they still did this,
+ [2859.960 --> 2866.120] showed these extra zeros happening in the writing, and they stopped switching four shapes in.
+ [2867.720 --> 2873.880] We checked the task understanding at the end and that was all good, so it wasn't that they didn't
+ [2873.880 --> 2877.640] understand. So perhaps, you know, you might think, is this kind of hinting at something like
+ [2878.200 --> 2882.520] goal neglect, where they do understand the rules and they know what they have to do, but then they
+ [2882.520 --> 2888.840] fail to follow those instructions when it actually comes down to it? And so again this is something
+ [2888.840 --> 2896.360] you might want to look at in a bit more depth. And that's the last example, and that's
+ [2896.360 --> 2902.600] the end of this session. So thanks very much for listening to this one as well, and we've got the
+ [2902.680 --> 2908.600] website oxtest.org where you find everything, and then my lab website about ongoing research;
+ [2908.600 --> 2913.000] if you're interested come and take a look. Thanks very much.
transcript/allocentric_EoEVXS8K5w4.txt ADDED
@@ -0,0 +1,133 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 7.120] Blogs model of allocentricity and psychocentricity is one of the biggest models in travel and tourism theory.
2
+ [7.120 --> 12.720] But I will admit that does sound a little bit complicated. So in today's video,
3
+ [12.720 --> 17.440] I'm going to make it super simple so that you understand what blog was talking about,
4
+ [17.440 --> 22.320] how why it matters. If you are new here, my name is Dr. Toby Stanton and I'm here to teach you all
5
+ [22.320 --> 29.520] things travel and tourism. So let's start at the beginning. What is Blogs model of allocentricity
6
+ [29.520 --> 35.760] and psychocentricity? Well Stanley Blogs model of allocentricity and psychocentricity has been
7
+ [35.760 --> 42.960] why he taught and cited for almost 50 years. And I'm guessing that you are studying this theory too
8
+ [42.960 --> 51.120] or else find here only kidding fear not if you are here to learn more about blog then you will do
9
+ [51.120 --> 57.200] just that. So make sure you stick around right until the end and you will be a blog expert.
10
+ [57.200 --> 62.560] So Blogs model is largely regarded as a cornerstone of tourism theory. In other words,
11
+ [62.560 --> 69.280] it's pretty important. In fact, Blogs model has provided foundations for many other studies
12
+ [69.280 --> 75.280] throughout the past four decades and this has helped tourism industry stakeholders to better
13
+ [75.280 --> 82.480] understand what's going on and manage their tourism provision. In fact, Blogs model was the precursor.
14
+ [82.560 --> 87.280] In other words, it came before the famous theory that Butler has made Butler's tourism
15
+ [87.280 --> 92.320] area life cycle. And if you don't know what that is, make sure you check it out. Blog wanted to
16
+ [92.320 --> 98.400] examine the way in which tourism destinations develop. How do they grow? How and why do they decline?
17
+ [98.400 --> 104.400] And how can we make relatively accurate predictions to help us better understand and manage the
18
+ [104.400 --> 110.160] tourism provision at hand? Blogs research found that there were or are distinct correlations between
19
+ [110.160 --> 115.360] the appeal of a tourist destination to different types of tourists and the rise and fall in
20
+ [115.360 --> 122.240] popularity of this destination. Essentially, Blogs delineated these types of tourists according to
21
+ [122.240 --> 129.440] their personalities. He then plotted these along a continuum in a bell shaped normally distributed curve
22
+ [129.440 --> 135.200] and it was this curve that identified the rise and fall of destinations. You said this would be
23
+ [135.920 --> 142.400] I hear you, I hear you. Okay, that did sound a bit complicated. Let me simplify it. To put it simply,
24
+ [142.400 --> 149.920] Blogs theory demonstrates that the popularity of a destination will rise and fall over time,
25
+ [149.920 --> 155.520] depending on which types of tourist visit the destination. Okay, hopefully you get that. But that's
26
+ [155.520 --> 160.560] not the full story. Keep watching. To really understand this theory, let's start with a little bit of
27
+ [160.560 --> 168.080] history. Why did Blogs do this research in the first place? Well, Blogs research began back in 1967
28
+ [168.080 --> 173.280] when he worked for market research company Behaviour Science Corporations, also known as Bacicco.
29
+ [173.280 --> 179.280] Blog was working on a consulting project whereby he was sponsored by 16 domestic and foreign
30
+ [179.280 --> 185.280] airframe manufacturers on various magazines. The intention was to examine and understand the
31
+ [185.360 --> 190.960] psychology of certain segments of travellers. During this time, the commercial aviation industry
32
+ [190.960 --> 196.160] was only just developing. Airlines wanted to better understand their potential customers.
33
+ [196.160 --> 202.240] They wanted to turn non-flyers into flyers and they wanted Blog to help. This saw the
34
+ [202.240 --> 207.600] bird of Blogs research into tourism motivation that later spanned into decades of research into
35
+ [207.600 --> 214.240] the subject. So why do these tourist destinations rise and fall in popularity? Well, Blogs model of
36
+ [214.240 --> 220.880] allocentridity and psychocentridity demonstrated that this does indeed happen. Essentially, Blogs
37
+ [220.880 --> 227.920] suggested that as a destination grows and develops and also declines, it attracts different types of
38
+ [227.920 --> 232.960] people. Blog pointed out that as a destination reaches a point in which it is widely popular,
39
+ [232.960 --> 238.080] with a well-established image, the types of tourists will be different from those who have
40
+ [238.080 --> 243.680] visited the destination before and before it became widely developed. In other words, the mass
41
+ [243.680 --> 248.720] tourism market attracts very different people from the niche and the non-mass tourism fields.
42
+ [248.720 --> 254.080] And Blog also pointed out that the area eventually loses its positioning in the tourism market.
43
+ [254.080 --> 258.720] The total tourist arrivals decrease gradually over the years and the types of tourists who are
44
+ [258.720 --> 263.920] attracted will once again change. Now there are loads of examples of this throughout the world,
45
+ [263.920 --> 272.800] but I'll give you one. Let's take Goa. 20, 30 years ago Goa was a hippie backpacker destination.
46
+ [272.800 --> 278.240] There weren't that many people that went there. However, tourism then started to develop and grow and
47
+ [278.240 --> 282.720] whilst I wouldn't say certainly with the international market, I wouldn't say that Goa is
48
+ [283.680 --> 289.280] necessarily mass tourism, but it's definitely been going in that direction for some time.
49
+ [289.280 --> 295.360] And there have been companies like Monarch and Thomas Cook, who no longer exist, who were offering
50
+ [295.360 --> 299.600] package tours. Now there are lots of other travel agents who are doing this nowadays, but they were
51
+ [299.600 --> 304.640] two big names that I thought were worth mentioning. So the people that Thomas Cook was flying out there
52
+ [304.640 --> 311.440] and I took one of their flights myself were not the hippie backpackers. In fact, there were lots of
53
+ [311.440 --> 316.720] sort of middle-aged older people who are staying in a higher quality accommodation and they have
54
+ [316.720 --> 321.680] more money to spend. So they're different types of tourists, they're different people, they're going
55
+ [321.680 --> 327.040] to act different, they're going to want a different thing. Okay, so all of this has led to Blog
56
+ [327.040 --> 334.560] developing a typology. Blog developed a typology that is basically a way to group people or to
57
+ [334.560 --> 340.480] classify them based on certain characteristics. In this case, Blog classifies tourists based on
58
+ [340.480 --> 346.240] their motivations. Blog examined traveler motivations and came up with classifications for tourists.
59
+ [346.240 --> 352.720] He came up with two, aloe centric and psychocentric, which were then put at extreme ends of the scale.
60
+ [352.720 --> 358.480] As you can see in this diagram, psychocentric tourists are placed on the far left of the scale
61
+ [358.480 --> 363.760] and aloe centric tourists are placed at the far right. The idea is then that a tourist can be
62
+ [363.760 --> 369.440] situated at any place along this scale. Okay, so you get that there's a scale but we still don't
63
+ [369.440 --> 374.320] know what these words mean, right? I'm going to tell you. Let's start by looking at aloe centric
64
+ [374.320 --> 380.720] tourists. In Blog's model of aloe centricity and psychocentricity, the aloe centric tourist is
65
+ [380.720 --> 386.960] most likely associated with destinations that are un or underdeveloped. These tourists might be
66
+ [386.960 --> 392.480] the first tourist to visit an area. They might be the first intrepid explorers. The ones brave enough
67
+ [392.480 --> 397.920] to travel into the unknown. Is it bad that I've got frozen, come to my mind as I say that? You can
68
+ [397.920 --> 404.640] tell I've got two small kids. Aloe centric tourists typically like adventure. They're not afraid of
69
+ [404.720 --> 411.760] the unknown and they like to explore. No familiar food? Heck, let's give it a try. Nobody speaks
70
+ [411.760 --> 418.880] English? I'll get by just fine with hand gestures and my translation app. No Western toilets. My
71
+ [418.880 --> 424.160] thighs are as strong as steel. Aloe centric tourists are often found traveling alone. They're not
72
+ [424.160 --> 429.280] phased that the destination they're visiting doesn't have a chapter in their guidebook. In fact,
73
+ [429.280 --> 435.040] that's what excites them. Aloe centric tourists enjoy cultural tourism. They are ethical
74
+ [435.040 --> 440.640] travelers and they love to learn and research has suggested that only 4% of the population is
75
+ [440.640 --> 446.880] predicted to be purely aloe centric. Whilst many people do have aloe centric tendencies, they're
76
+ [446.880 --> 452.640] more likely to sit further along Blog's scale and be classified as near or centric aloe centric.
77
+ [452.640 --> 457.120] Okay, so let's summarise some of the characteristics that are associated with
78
+ [457.120 --> 462.400] aloe centric tourists. Aloe centric tourists commonly are independent travelers,
79
+ [463.040 --> 470.640] excited by adventure. Egeta learn. They like to experience the unfamiliar. They are put off by
80
+ [470.640 --> 477.280] group tours packages and mass tourism. They enjoy cultural tourism. They are ethical tourists.
81
+ [477.280 --> 483.280] They enjoy a challenge. They are advocates of sustainable tourism and they enjoy embracing
82
+ [483.440 --> 489.440] slow tourism. On the opposite end of the spectrum we have the psycho centric tourists.
83
+ [489.440 --> 495.120] In Blog's model of aloe centricity and psycho centricity, psycho centric tourists are most
84
+ [495.120 --> 500.960] commonly associated with areas that are well developed or even overdeveloped for tourism.
85
+ [500.960 --> 506.320] Many people will have visited the area before them. It's been tried and tested. These tourists
86
+ [506.320 --> 510.000] feel secure knowing that their holiday choice will provide them with the comforts and
87
+ [510.000 --> 515.520] familiarity that they know and love. What is there to do on holiday? I'll find out from
88
+ [515.520 --> 520.640] the rep at the welcome meeting. Want the best spot by the pool? I will get up super early and
89
+ [520.640 --> 528.000] put my towel down on that sun lounger. Thirsty? Get me to the all-inclusive bar. Psychocentric tourists
90
+ [528.000 --> 533.440] travel in organised groups. Their holidays are typically organised for them by their travel
91
+ [533.440 --> 538.000] agent. These travelers seek the familiar. They are happy in the knowledge that their holiday
92
+ [538.000 --> 542.640] resort will provide them with their home comforts. The standard activity level of psychocentric
93
+ [542.640 --> 548.720] tourists is low. These tourists enjoy holiday resorts that are all inclusive. They are proponents
94
+ [548.720 --> 554.080] of enclave tourism, meaning that these people are likely to stay put in their hotel or their
95
+ [554.080 --> 559.920] resort for the majority of the duration of their holiday. These are often repeat tourists who choose
96
+ [559.920 --> 567.040] to visit the same destination year on year. So I told you I'd make this easy. Let's summarise
97
+ [568.000 --> 574.480] what are the typical characteristics associated with psychocentric tourists. They enjoy familiarity.
98
+ [575.280 --> 580.480] They like to have their home comforts whilst on holiday. They give preference to known brands.
99
+ [581.280 --> 587.520] They travel in organised groups. They enjoy organised tours, package holidays and all-inclusive
100
+ [587.520 --> 593.600] tourism. They like to stay within their holiday resort. They typically do not experience much
101
+ [593.600 --> 598.320] of the local culture. They do not learn much about the area they are visiting or people that
102
+ [598.320 --> 603.920] live there. They often pay one flat fee to cover the majority of the holiday costs. And they are
103
+ [603.920 --> 610.640] regular visitors to the same area or resort. So because Plog's model is essentially a spectrum,
104
+ [610.640 --> 615.040] you don't have to be one or the other. You can sit somewhere in between. So there is also
105
+ [615.040 --> 620.480] something called a mid-centric tourist. The reality is that not many tourists neatly fit into either
106
+ [620.480 --> 626.800] of these allocentric or psychocentric categories, which is why Plog developed a scale. As you can see
107
+ [626.800 --> 631.920] in the diagram, the largest category of tourists fall somewhere within the mid-centric category on
108
+ [631.920 --> 639.040] the spectrum. Tourists can lean towards allocentric or towards psychocentric, but ultimately most people
109
+ [639.040 --> 644.800] are going to sit somewhere in the middle. Mid-centric tourists like some adventure, but also some of
110
+ [644.800 --> 650.240] their home comforts. Perhaps they put together their holiday themselves through dynamic packaging. But then
111
+ [650.240 --> 655.760] they spend the majority of their time in their holiday resort. Or maybe they book an organised package,
112
+ [655.760 --> 660.560] then they choose to break away from the crowds and explore the local area. So hopefully you are now
113
+ [660.560 --> 667.120] beginning to understand how Plog's model works. And it is actually quite simple, despite the use of
114
+ [667.120 --> 673.280] quite complicated sounding words. So before I finish this video, I'm just going to round up by saying
115
+ [673.280 --> 679.200] what are the good things and the bad things about Plog's model. Plog's model of allocentricity and
116
+ [679.200 --> 685.280] psychocentricity has been widely cited throughout the academic literature for many years. It's a
117
+ [685.280 --> 689.520] cornerstone theory in travel and tourism research that has formed the basis for further research
118
+ [689.520 --> 695.760] and analysis in a range of contexts. In doing this, Pog's theory has encouraged critical thinking
119
+ [695.760 --> 700.720] throughout the tourism community for several decades. And it's difficult to find a textbook that
120
+ [700.720 --> 707.440] doesn't pay reference to his work. However, Pog's model of allocentricity and psychocentricity
121
+ [707.520 --> 711.920] is not without its critics. In fact, many academics have questioned its
122
+ [712.480 --> 718.640] real-world validity over the years. Some common criticisms include the research being based on
123
+ [718.640 --> 724.000] the US population, so it might not be applicable for other nations. The concepts of personality
124
+ [724.000 --> 729.520] and motivation are pretty subjective terms that may be viewed differently by different
125
+ [729.520 --> 735.360] people. This is exemplified when put onto the global stage with different cultural contexts too.
126
+ [735.360 --> 740.960] Not all destinations will move through the curve continuum that Plog describes. In other words,
127
+ [740.960 --> 746.720] not all destinations will strictly follow this path. And it is difficult to categorise people
128
+ [746.720 --> 752.720] into groups. Behaviours and preferences do change over time and people change too. Things change
129
+ [752.720 --> 758.400] depending on the time of the year or the day of the week. It's not always so clear cut. So there
130
+ [758.400 --> 764.720] we have it. That is Plog's model. If you understand it now, do give me a big thumbs up to say
131
+ [765.360 --> 769.920] you've got it. And if you don't quite get it, let me know what your questions are and I will do
132
+ [769.920 --> 774.160] my best to answer them in the comments below. And if you have found this helpful, make sure you
133
+ [774.160 --> 776.560] subscribe to my channel.
transcript/allocentric_FTZHpKQbqbQ.txt ADDED
@@ -0,0 +1,567 @@
1
+ [30.000 --> 38.280] Hi, Bill Mobley for the Brain Channel on UCTV and it's my pleasure today to be speaking
2
+ [38.280 --> 46.960] with Dr. Ramachandran, Vilayanur Ramachandran, who is professor of psychology at UCSD and
3
+ [46.960 --> 52.720] who is director of the Center for Brain and Cognition, also at UCSD.
4
+ [52.720 --> 60.560] He's a remarkable person with a remarkable intellect and a remarkable set of insights about
5
+ [60.560 --> 62.080] how the brain works.
6
+ [62.080 --> 67.320] So I thought, to introduce him, I'd talk a little bit about what other people have said about
7
+ [67.320 --> 71.320] him and then what he said about the brain.
8
+ [71.320 --> 78.020] From Richard Dawkins comes the quote, Ramachandran is a latter-day Marco Polo, journeying the
9
+ [78.020 --> 83.740] silk road of science to strange and exotic Cathays of the mind.
10
+ [83.740 --> 90.900] From Mike Merzenich, Ramachandran is a scientist with the kind of creativity that's really
11
+ [90.900 --> 93.740] quite rare.
12
+ [93.740 --> 96.940] It's usually not allowed in some sense.
13
+ [96.940 --> 102.620] You're not supposed to be a butterfly like Ramachandran.
14
+ [102.620 --> 107.660] My colleague describes his approach to science as opportunistic.
15
+ [107.660 --> 112.500] In quotes, you come across something strange, what Thomas Kuhn, the famous historian and
16
+ [112.500 --> 115.980] philosopher of science called anomalies.
17
+ [115.980 --> 121.100] Something seems weird, doesn't fit the big picture of science.
18
+ [121.100 --> 122.340] People just ignore it.
19
+ [122.340 --> 124.020] It doesn't make any sense.
20
+ [124.020 --> 128.300] They say the patient is crazy if it's about a patient.
21
+ [128.300 --> 134.620] A lot of what I've done is to rescue these phenomena from oblivion.
22
+ [134.620 --> 137.660] Dr. Ramachandran is quite a remarkable person.
23
+ [137.660 --> 139.940] Again, a remarkable intellect.
24
+ [139.940 --> 147.900] He's the author of hundreds of papers in the literature and of terrific books: The Tell-Tale
25
+ [147.900 --> 153.620] Brain, Phantoms in the Brain, A Brief Tour of Human Consciousness.
26
+ [153.620 --> 157.340] I wanted to use one of his quotes.
27
+ [157.340 --> 160.340] I think it summarizes in a wonderful way.
28
+ [160.340 --> 162.420] This is from The Tell-Tale Brain.
29
+ [162.420 --> 169.220] How can a three-pound mass of jelly that you can hold in your palm, imagine angels, contemplate
30
+ [169.220 --> 174.660] the meaning of infinity, and even question its own place in the cosmos?
31
+ [174.660 --> 179.380] Especially awe-inspiring is the fact that any single brain, including yours, is made
32
+ [179.380 --> 184.820] up of atoms that were forged in the hearts of countless far-flung stars billions of
33
+ [184.820 --> 186.140] years ago.
34
+ [186.140 --> 191.140] These particles drifted for eons and light years until gravity and chance brought them
35
+ [191.140 --> 193.620] together here, now.
36
+ [193.620 --> 199.300] These atoms now form a conglomerate, your brain, that can not only ponder the very
37
+ [199.300 --> 205.900] stars that gave it birth, but also think about its own ability to think and wonder about
38
+ [205.900 --> 208.660] its own ability to wonder.
39
+ [208.660 --> 213.740] With the arrival of humans, it has been said, the universe has suddenly become conscious
40
+ [213.740 --> 214.740] of itself.
41
+ [214.740 --> 219.980] Thus truly, it is the greatest mystery of all.
42
+ [219.980 --> 221.580] The brain.
43
+ [221.580 --> 223.620] Welcome to the brain channel.
44
+ [223.620 --> 225.020] It's a great pleasure to have you here.
45
+ [225.020 --> 226.540] I've known you for a number of years.
46
+ [226.540 --> 234.980] I've always been so pleased at your remarkable ability to turn things on their heads.
47
+ [234.980 --> 239.180] Tell us a little bit more about yourself and what your current interests are in brain
48
+ [239.180 --> 240.180] science.
49
+ [240.180 --> 244.180] Well, my interest in brain science, as you pointed out in that paragraph, which you
50
+ [244.180 --> 247.940] very kindly quoted, arises from the big questions.
51
+ [247.940 --> 252.780] Everyone begins with these as medical students: what is consciousness, what is self-awareness,
52
+ [252.780 --> 255.540] what is body image, and things of that nature.
53
+ [255.540 --> 260.540] But in science very often, you can't directly, head on, tackle the big questions, like what
54
+ [260.540 --> 263.300] is consciousness; it's too nebulous a topic.
55
+ [263.300 --> 269.220] We can approach it, chip away at the problem, doing experiments, which is my approach.
56
+ [269.220 --> 272.460] Looking at odd phenomena that have been discarded and ignored for a long time.
57
+ [272.460 --> 273.860] This is not new to neuroscience.
58
+ [273.860 --> 278.460] It's an old tradition in science in general; continental drift was ignored for a long
59
+ [278.460 --> 279.460] time.
60
+ [279.460 --> 281.660] Its significance was only recognized later.
61
+ [281.660 --> 282.660] Bacterial transformation.
62
+ [282.660 --> 283.660] That's another example.
62
+ [283.660 --> 288.900] Even X-rays: when they were first discovered, they were considered an oddity.
64
+ [288.900 --> 292.060] And Röntgen published it in a newspaper first before sending it to a scientific
65
+ [292.060 --> 295.300] journal because he was worried that it might get rejected.
66
+ [295.300 --> 298.540] But it turned out to turn physics on its head when he discovered it.
67
+ [298.540 --> 300.780] So neurology is full of these oddities and anomalies.
68
+ [300.780 --> 302.300] And this is what we tend to focus on.
69
+ [302.300 --> 305.580] Not because they're oddities, but because they might illuminate something important and
70
+ [305.580 --> 309.180] interesting about normal brain function.
71
+ [309.180 --> 314.100] And a lot of the time it's a wild goose chase, but every now and then you hit the jackpot.
72
+ [314.100 --> 322.020] You know, one of the comments that you make is that these anomalies are at first easily rejected.
73
+ [323.020 --> 325.780] It just isn't true.
74
+ [325.780 --> 327.700] What the patient says can't be true.
75
+ [327.700 --> 329.980] The patient's crazy.
76
+ [329.980 --> 334.460] And I think it probably would be your perspective.
77
+ [334.460 --> 335.460] No patient is crazy.
78
+ [335.460 --> 342.780] At least not crazy about phenomena that otherwise a sentient, thoughtful, reasonable person is
79
+ [342.780 --> 344.020] commenting upon.
80
+ [344.020 --> 350.580] Their brain is generating images, perhaps even hallucinations, that for them are very
81
+ [350.580 --> 355.580] real and that all of them have a basis in brain function.
82
+ [355.580 --> 356.580] Right, absolutely.
83
+ [356.580 --> 359.540] A lot of the time when you think the patient's crazy, it means you're not smart enough
84
+ [359.540 --> 360.540] to figure it out.
85
+ [360.540 --> 363.700] You also look for consistency across patients who are not talking to each other.
86
+ [363.700 --> 368.940] If they produce the same bizarre story, the chances of them being crazy are quite small.
87
+ [368.940 --> 373.060] An example of this would be a curious disorder we've studied recently with my colleague Paul
88
+ [373.060 --> 377.700] McGeoch, a postdoc and fellow who is now in Edinburgh.
89
+ [377.700 --> 380.900] He and I looked at a phenomenon called apotemnophilia, or xenomelia.
90
+ [380.900 --> 381.900] Xenomelia.
91
+ [381.900 --> 382.900] Xenomelia.
92
+ [382.900 --> 386.100] It refers to a person, I don't want to call him a patient because they don't regard themselves
93
+ [386.100 --> 387.100] as patients.
94
+ [387.100 --> 392.700] An otherwise normal person: leads a normal life in society, holds a job, has a family,
95
+ [392.700 --> 395.340] perfectly fluent in conversation, not mentally disturbed in any way.
96
+ [395.340 --> 397.340] Often mildly depressed in some ways.
97
+ [397.340 --> 402.620] But the person is otherwise quite normal, but has harbored a secret desire all his life
98
+ [402.620 --> 406.540] or her life, her entire life, to have his arm or leg removed.
99
+ [406.540 --> 407.540] Amputated.
100
+ [407.540 --> 411.460] Now just think about how upsetting this sounds to one of us quote unquote normal people.
101
+ [411.460 --> 412.460] Right.
102
+ [412.460 --> 416.460] Here's a perfectly, I saw a dean of a medical school not long ago who wanted his arm
103
+ [416.460 --> 417.460] removed.
104
+ [417.460 --> 420.980] It was only after he retired that he first came out to speak about it.
105
+ [420.980 --> 422.780] He's so embarrassed about it.
106
+ [422.780 --> 430.900] So xenomelia is the condition in which a person believes that it would be appropriate,
107
+ [430.900 --> 434.060] and really desires, for one of their limbs to be amputated.
108
+ [434.060 --> 435.620] That is correct.
109
+ [435.620 --> 438.500] And there are all kinds of vague Freudian theories about why this happens.
110
+ [438.500 --> 443.700] There are chat groups among patients, people who have this condition.
111
+ [443.700 --> 446.220] And one standard view is that it's a cry for attention.
112
+ [446.220 --> 447.220] It doesn't make any sense.
113
+ [447.220 --> 451.740] Why not remove the ear or nose or something? It's always an arm or a leg.
114
+ [451.740 --> 456.020] And more often it's the left side than the right side, so there's laterality.
115
+ [456.020 --> 459.660] And also they'll take a felt pen and they'll draw an exact, precise line along where they want
116
+ [459.660 --> 460.660] the amputation.
117
+ [460.660 --> 464.220] They'll draw an irregular line just below the elbow or just above the elbow, which varies
118
+ [464.220 --> 465.220] from patient to patient.
119
+ [465.220 --> 468.980] And you take a photograph and then do a surprise retest.
120
+ [468.980 --> 470.980] A month later you call them again and ask them to come to the lab.
121
+ [470.980 --> 473.380] You haven't told them ahead of time you're going to bring them back.
122
+ [473.380 --> 474.380] Ask them to draw the line.
123
+ [474.380 --> 477.540] They'll draw the precise same irregular line.
124
+ [477.540 --> 482.860] Already hinting at the fact that this is not some sort of cry for help or some vague
125
+ [482.860 --> 487.180] psychological propensity to get rid of the arm; it may have a physiological basis.
126
+ [487.180 --> 489.700] Otherwise, why should the precise line matter?
127
+ [489.700 --> 494.700] So our first idea was that maybe it turns out there's a complete map of the body surface
128
+ [494.700 --> 498.500] on the surface of the brain, the postcentral gyrus. There's a big vertical furrow on the side
129
+ [498.500 --> 499.500] of the brain.
130
+ [499.500 --> 501.300] It's called the central sulcus.
131
+ [501.300 --> 503.900] Behind the central sulcus is a vertical strip of cortex.
132
+ [503.900 --> 507.580] On that vertical strip of cortex is a complete map, point to point, of the surface of the body
133
+ [507.580 --> 508.580] on the surface of the brain.
134
+ [508.580 --> 514.700] So we thought maybe in these people congenitally the arm is missing and the brain is hard
135
+ [514.700 --> 516.860] wired to have a missing arm.
136
+ [516.860 --> 520.140] Therefore, the arm feels strange or alien and they want it removed.
137
+ [520.140 --> 521.140] That's kind of how it would work.
138
+ [521.140 --> 523.020] It's just as if it doesn't belong to them.
139
+ [523.020 --> 524.020] It doesn't belong to them.
140
+ [524.020 --> 526.900] So we went and looked at that area of the brain and it's completely normal.
141
+ [526.900 --> 528.620] Disproving our theory.
142
+ [528.620 --> 532.420] But then we went further up in the brain and this region represents what we call the
143
+ [532.420 --> 533.420] body image.
144
+ [533.420 --> 536.740] You all have a sense of your body: close your eyes, you have a vivid sense of all your body parts,
145
+ [536.740 --> 539.900] your moving body parts, and you open your eyes, there's a confirmation of your body parts.
146
+ [539.900 --> 544.420] There's a convergence of visual, somatosensory, auditory, all these inputs, in the superior
147
+ [544.420 --> 550.100] parietal lobule of the right hemisphere to construct what we call a body image, a dynamic, vibrant
148
+ [550.100 --> 553.980] image of your own body, in that region of the brain.
149
+ [553.980 --> 556.140] Now that region of the brain was missing the arm.
150
+ [556.140 --> 558.380] We showed this using imaging techniques.
151
+ [558.380 --> 563.340] So the information from the arm, from the hand and the arm, skin, bones and muscles, goes
152
+ [563.340 --> 568.620] up to the postcentral gyrus, the first map, but there's nowhere for it to get to in the
153
+ [568.620 --> 571.300] second map, the body image map.
154
+ [571.300 --> 575.860] So there's this clash: the signals arrive with nowhere to get to.
155
+ [575.860 --> 579.540] So the discrepancy is picked up by a structure called the amygdala in the brain and it
156
+ [579.540 --> 582.460] makes you uncomfortable.
157
+ [582.460 --> 592.260] So the lack of connectivity between the primary sensory station and where that information
158
+ [592.260 --> 597.860] becomes organized into a body image, that connection is missing for this part of
159
+ [597.860 --> 600.500] the limb that one wishes to amputate.
160
+ [600.500 --> 604.820] And so they regard it as foreign, they regard it as not them.
161
+ [604.820 --> 607.500] Well, if you ask them, they're very specific about it.
162
+ [607.500 --> 610.580] They say it doesn't feel like it's foreign but it feels like it's too intrusive, it's
163
+ [610.580 --> 611.580] too much a part of them.
164
+ [611.580 --> 612.580] Ah, see, okay.
165
+ [612.580 --> 614.580] It's very consistent across subjects.
166
+ [614.580 --> 618.020] And they say they want it removed; about half of them go get it removed.
167
+ [618.020 --> 619.020] Half of them.
168
+ [619.020 --> 622.140] It's illegal in this country but they go across the border and get it removed half of
169
+ [622.140 --> 623.140] them.
170
+ [623.140 --> 625.980] And then when they get it removed, 90% of them feel very good about it.
171
+ [625.980 --> 627.620] The depression lifts.
172
+ [627.620 --> 629.500] They feel finally they're a complete person.
173
+ [629.500 --> 635.740] So here's an example of a very strange, odd, quote unquote disorder, spooky even, which
174
+ [635.740 --> 639.140] you might think is psychiatric, but it turns out in fact there's a simple neurological
175
+ [639.140 --> 640.460] explanation you can come up with.
176
+ [640.460 --> 644.420] You can come up with a theory, test your theory, and then do brain imaging if required and
177
+ [644.420 --> 646.060] to show that you're on the right track.
178
+ [646.060 --> 647.140] And these are your tools.
179
+ [647.140 --> 651.620] The tools that you're using begin with the individual person.
180
+ [651.620 --> 653.660] Let's not call them patient.
181
+ [653.660 --> 655.140] Begin with the person.
182
+ [655.140 --> 661.340] Gather the kind of information that would seem reasonable for their complaint and presumably
183
+ [661.340 --> 666.580] do some kind of an examination to confirm that yes, they do feel you touch
184
+ [666.580 --> 669.620] the limb and there is sensation there.
185
+ [669.620 --> 674.580] It's just that that sensation doesn't get organized in a way in this case that's normal.
186
+ [674.580 --> 676.620] So that's the neurological method of course.
187
+ [676.620 --> 677.620] That's right.
188
+ [677.620 --> 679.580] The way you're describing it, it could have been done a hundred years ago.
189
+ [679.580 --> 680.580] Yeah.
190
+ [680.580 --> 681.580] A long time ago.
191
+ [682.580 --> 692.260] In reading the papers and reading what you've spoken about, it seems that misrepresentation
192
+ [692.260 --> 700.660] of a physical reality that all of us would appreciate is in essence the underlying substrate
193
+ [700.660 --> 706.460] for a lot of the problems, a lot of the
194
+ [706.460 --> 708.260] issues that you've looked at.
195
+ [708.260 --> 710.700] Especially when it concerns body image, that's true.
196
+ [710.700 --> 715.500] For example, we've studied a phenomenon called RSD, reflex sympathetic dystrophy. The
197
+ [715.500 --> 720.700] name sounds strange, but what happens is you typically have a small injury in your finger.
198
+ [720.700 --> 722.340] Say you touch a hot kettle or a flame.
199
+ [722.340 --> 726.900] You withdraw your hand, the protective reflex to avoid further tissue injury.
200
+ [726.900 --> 730.500] Wired in by natural selection.
201
+ [730.500 --> 734.940] But sometimes the opposite happens: there's a more permanent injury like a metacarpal
202
+ [734.940 --> 739.020] bone fracture, more severe damage, I should say.
203
+ [739.020 --> 745.500] What happens is that the injured finger becomes immobilized, or
204
+ [745.500 --> 750.260] quote unquote temporarily paralyzed, this immobilization reflex is again to allow tissue
205
+ [750.260 --> 751.260] to heal.
206
+ [751.260 --> 752.260] So it's also adaptive.
207
+ [752.260 --> 757.220] It's like the pain: similar, but the adaptive function is different and the manifestation
208
+ [757.220 --> 758.220] is different.
209
+ [758.220 --> 760.060] Here the arm is temporarily quote unquote paralyzed.
210
+ [760.060 --> 765.020] But then the bone heals and after a few weeks the injury heals, the inflammation subsides,
211
+ [765.020 --> 770.380] the swelling subsides, the redness subsides, the pain subsides and then you start moving
212
+ [770.380 --> 771.380] the finger again.
213
+ [771.380 --> 773.740] That's the normal sequence of events, full recovery.
214
+ [773.740 --> 777.700] But in a certain percentage of patients, maybe about two or three percent, this doesn't
215
+ [777.700 --> 778.900] happen.
216
+ [778.900 --> 783.100] The pain persists with a vengeance, the inflammation does not subside, the finger swelling does
217
+ [783.100 --> 784.100] not subside.
218
+ [784.100 --> 788.500] In fact, it spreads to the entire hand from the finger, entire arm from the finger, arm
219
+ [788.500 --> 794.420] gets swollen, becomes hairless, bone starts atrophying and sweating changes and all the
220
+ [794.420 --> 796.940] signs of inflammation are amplified.
221
+ [796.940 --> 802.620] So a tiny little injury that would normally be handled perfectly well now basically takes
222
+ [802.620 --> 803.620] over the whole limb.
223
+ [803.620 --> 804.900] It takes over all of them.
224
+ [804.900 --> 809.100] And it's been considered incurable largely, although there's some sympathetic involvement,
225
+ [809.100 --> 813.180] sympathetic ganglionectomy sometimes helps, but it's notoriously ineffective.
226
+ [813.180 --> 817.060] We hit on the technique of using a mirror because our hypothesis was, this is a form of
227
+ [817.060 --> 820.980] what we call learned pain, inspired by earlier work on phantom limbs.
228
+ [820.980 --> 825.420] Every time the patient tries to move his finger, he gets an ouch signal
229
+ [825.420 --> 831.060] saying, ouch, it hurts, ouch, so that there's a Hebbian link, a memory link, established between
230
+ [831.060 --> 835.060] the motor command to move the hand and the pain signal coming back from the hand.
231
+ [835.060 --> 837.220] So the brain just gives up and refuses to move the hand.
232
+ [837.220 --> 840.900] So it's immobilized by pain.
233
+ [840.900 --> 845.820] So we then put up a mirror, and, as you say, the experiments were based on previous
234
+ [845.820 --> 850.380] experiments that were done with mirrors to cure phantom pain.
235
+ [850.380 --> 855.140] So then you move the normal hand, and you see the reflection
236
+ [855.140 --> 859.340] of the normal hand in the mirror superimposed on where the paralyzed hand is.
237
+ [859.340 --> 864.060] But now, when you move the normal hand, it looks like the paralyzed hand is moving, and
238
+ [864.060 --> 867.660] it's an optical trick, without any pain.
239
+ [867.660 --> 871.780] So the brain is told: look, you're able to move your hand, your quote
240
+ [871.780 --> 875.100] unquote paralyzed hand, fine, and it's not painful.
241
+ [875.100 --> 877.780] So you unlearn this learned association.
242
+ [877.780 --> 881.900] So this is based on a handful of patients initially, by Patrick Wall and his colleagues.
243
+ [881.900 --> 889.180] And now there's been a whole double blind control, placebo control study on 50 patients in
244
+ [889.180 --> 895.220] Europe showing that the 20 patients who were on the mirror, all of them without exception,
245
+ [895.220 --> 899.900] pain fell from an evaluation of eight on a scale of 10, excruciating
246
+ [899.900 --> 904.900] pain, down to about two, barely noticeable pain, in all 20 patients.
247
+ [904.900 --> 908.020] It's about as good as it gets in pain research.
248
+ [908.020 --> 913.980] And the placebo control is just an opaque mirror, and the third control is visual imagery.
249
+ [913.980 --> 917.300] In fact, with imagery some of them increased the pain; imagining the movement sent them into
250
+ [917.300 --> 918.300] more pain.
251
+ [918.300 --> 922.540] But with the mirror, a pretty dramatic reduction. Then they did a crossover, once on imagery and
252
+ [922.540 --> 928.340] once on the opaque mirror, and again, with the mirror, the pain got reduced to about two.
253
+ [928.340 --> 930.580] So now it's widely used in clinics.
254
+ [930.580 --> 932.460] And it's a huge problem.
255
+ [932.460 --> 938.020] So the idea is that you're providing the brain information that it otherwise wouldn't
256
+ [938.020 --> 939.020] have.
257
+ [939.020 --> 940.020] It's correct.
258
+ [940.020 --> 942.180] It interprets that movement is no longer necessarily painful.
259
+ [942.180 --> 944.260] In fact, it's not painful.
260
+ [944.260 --> 950.980] And so in a way, the idea is your reprogramming brain circuits to allow the normal function
261
+ [950.980 --> 953.460] to continue on the injured side.
262
+ [953.460 --> 954.460] That's correct.
263
+ [954.460 --> 960.340] What's amazing is, when Patrick Wall and Blake and others, Candy McCabe and others, first
264
+ [960.340 --> 961.340] did this.
265
+ [961.340 --> 966.660] One of the things they noticed is that the atrophic, dystrophic arm, the painful dystrophic
266
+ [966.660 --> 972.460] inflamed arm, the actual temperature changed online, while they were watching the mirror; the
267
+ [972.460 --> 974.940] temperature changed and the swelling subsided as they were watching.
268
+ [974.940 --> 979.180] So this is about as good an example of mind-body medicine as it gets, where you
269
+ [979.180 --> 981.660] have skin temperature, which you cannot fake.
270
+ [981.660 --> 982.660] You can fake pain.
271
+ [982.660 --> 986.460] It can be placebo and all that, even though there are all the controls for
272
+ [986.460 --> 987.460] that.
273
+ [987.460 --> 989.220] You can't fake a temperature change in a finger.
274
+ [989.220 --> 993.340] So exactly what pathway mediates it remains obscure, but it's fascinating
275
+ [993.340 --> 994.980] to me to see that happen.
276
+ [994.980 --> 995.980] But there's no magic.
277
+ [995.980 --> 998.020] There's no magic in the mirror.
278
+ [998.020 --> 999.020] No.
279
+ [999.020 --> 1003.700] The mirror is providing your brain an alternative source of information that allows it to function
280
+ [1003.700 --> 1008.580] more normally and presumably for those circuits that are meant to work in the brain to work
281
+ [1008.620 --> 1012.100] normally, as they haven't maybe for months at that point in time.
282
+ [1012.100 --> 1013.100] That's correct.
283
+ [1013.100 --> 1014.940] Yeah, very cool.
284
+ [1014.940 --> 1019.140] And the phantom limb pain, maybe we'll talk a little bit about phantom limbs and phantom
285
+ [1019.140 --> 1023.100] limb pain and the use of mirrors in that context.
286
+ [1023.100 --> 1024.100] Sure.
287
+ [1024.100 --> 1027.940] When an arm is removed, the patient often continues to vividly feel the presence of the missing
288
+ [1027.940 --> 1028.940] arm.
289
+ [1028.940 --> 1031.100] It's called a phantom limb, as everybody knows.
290
+ [1031.100 --> 1036.420] Now in about half the patients, the phantom is immobilized, frozen in a particular position.
291
+ [1036.420 --> 1040.740] Many of them, they can move the phantom freely and it's less painful, but the ones in whom
292
+ [1040.740 --> 1046.020] the phantom is immobilized often say that the phantom hand is excruciatingly painful.
293
+ [1046.020 --> 1047.020] And they can't do anything about it.
294
+ [1047.020 --> 1050.020] The painful edge, they can't scratch a phantom limb.
295
+ [1050.020 --> 1053.580] The phantom limb, they can't massage the phantom.
296
+ [1053.580 --> 1058.060] So this is very frustrating and a serious clinical problem, but have the patients found them
297
+ [1058.060 --> 1061.900] pain, a phantom limb experience found them pain, excruciating pain, sometimes driving
298
+ [1061.900 --> 1064.980] them to do a severe depression even to the side.
299
+ [1064.980 --> 1065.980] Yes.
300
+ [1065.980 --> 1069.500] There have been various pharmacological approaches, some of them help a bit,
301
+ [1069.500 --> 1071.340] but usually they're not very effective.
302
+ [1071.340 --> 1076.220] So again, we hit on the technique of using a mirror, so let's assume I am the patient
303
+ [1076.220 --> 1081.620] with a phantom limb and I put my phantom limb on the left side, it's cramped in an awkward
304
+ [1081.620 --> 1086.780] position and it's painful, like that or like that.
305
+ [1086.780 --> 1090.260] Then you ask him to try and move his hand, the patient says, I wish I could move the hand,
306
+ [1090.260 --> 1091.260] but I can't.
307
+ [1091.260 --> 1092.260] It's stuck.
308
+ [1092.260 --> 1095.660] And you do the same thing you do with the RSD: you put the normal hand on the other
309
+ [1095.660 --> 1096.660] side of the mirror.
310
+ [1096.660 --> 1100.700] You peek inside the mirror, and you see the reflection of
311
+ [1100.700 --> 1105.860] the normal hand, superimposed on the felt location of the phantom.
312
+ [1105.860 --> 1108.060] So it's as though you optically resurrected his phantom.
313
+ [1108.060 --> 1111.260] He's looking at his phantom limb for the first time in five years or ten years or three
314
+ [1111.260 --> 1114.980] years or one year, however long ago the amputation was done.
315
+ [1114.980 --> 1119.620] Then you ask him to send mirror-symmetric commands to both hands, like moving them horizontally,
316
+ [1119.620 --> 1125.700] or clench and unclench the fist, or clap, or wave goodbye, while looking in the mirror.
317
+ [1125.700 --> 1129.860] So he's going to close that loop, the sensory-motor loop; he's going to get the visual
318
+ [1129.860 --> 1132.500] illusion that the phantom is obeying his command.
319
+ [1132.500 --> 1134.300] The fist is opening, right?
320
+ [1134.300 --> 1139.180] So he's got a clenched phantom fist for the last year, an excruciatingly painful position.
321
+ [1139.180 --> 1140.180] He can't open it.
322
+ [1140.180 --> 1144.660] The visual feedback of the phantom opening suddenly kicks in and he actually starts opening
323
+ [1144.660 --> 1145.660] his phantom.
324
+ [1145.660 --> 1148.300] And when he opens it, the painful cramp is relieved.
325
+ [1149.180 --> 1153.740] This again has been confirmed with clinical trials by Jack Tsao and his colleagues
326
+ [1153.740 --> 1156.540] at Walter Reed.
327
+ [1156.540 --> 1157.540] Right.
328
+ [1157.540 --> 1158.540] And used widely.
329
+ [1158.540 --> 1163.780] Again, the surprising effect of visual feedback in correcting.
330
+ [1163.780 --> 1169.180] You know, one question: I'm a right-hander, I'm right-handed.
331
+ [1169.180 --> 1173.580] I wonder if I could learn to write left handed using the mirror.
332
+ [1173.580 --> 1174.780] That's a good question.
333
+ [1174.780 --> 1177.980] I think you can do so even without using the mirror, but it's slow and tedious.
334
+ [1177.980 --> 1180.700] But with the mirror, I suppose you could accelerate it.
335
+ [1180.700 --> 1185.060] And the great neurologist Brown-Séquard, the French neurologist in the 19th century, actually
336
+ [1185.060 --> 1186.860] tried this.
337
+ [1186.860 --> 1189.860] He didn't use the mirror, but he just tried educating people to use the left hand to
338
+ [1189.860 --> 1196.660] write, claiming that those people who suffered a stroke would now be more able to cope
339
+ [1196.660 --> 1200.420] with the disability than people who had not been so trained.
340
+ [1200.420 --> 1204.900] He was arguing that if you train the left hand to write, some of the other cognitive
341
+ [1204.900 --> 1207.820] abilities would also transfer to the right hemisphere.
342
+ [1207.820 --> 1210.340] So maybe even language would transfer, sparing them in this way.
343
+ [1210.340 --> 1213.220] I don't know how far he got with that, but...
344
+ [1213.220 --> 1220.540] But the notion then that you have brain circuits that are incredibly flexible at some level,
345
+ [1220.540 --> 1230.820] that with training or without proper use, change in ways that are either helpful or unhelpful
346
+ [1230.820 --> 1232.500] is kind of the bottom line.
347
+ [1232.500 --> 1241.980] So the dynamics of brain circuit biology are terribly exciting, and also right now, mechanistically, there is
348
+ [1241.980 --> 1243.540] much to learn about them.
349
+ [1243.540 --> 1244.540] Right.
350
+ [1244.540 --> 1245.540] Yeah.
351
+ [1245.540 --> 1252.740] You know, I wanted to turn to a recent paper that I thought was just really, really interesting.
352
+ [1252.740 --> 1255.940] And part of it's just because it's odd.
353
+ [1255.940 --> 1258.580] Sleep paralysis and the bedroom intruder.
354
+ [1258.580 --> 1259.580] Oh, okay.
355
+ [1259.580 --> 1260.580] Okay.
356
+ [1260.580 --> 1266.300] So this is a paper that came out in 2014 with you as a senior author and the first author
357
+ [1266.300 --> 1269.860] is Baland Jalal.
358
+ [1269.860 --> 1271.980] So I'll just mention a couple things very quickly.
359
+ [1271.980 --> 1274.980] So, sleep paralysis.
360
+ [1274.980 --> 1279.060] What the sleeper experiences, and this is not uncommon, for our listeners:
361
+ [1279.060 --> 1285.420] during sleep paralysis, the sleeper experiences a transient period of gross motor paralysis.
362
+ [1285.420 --> 1290.780] Although the sensorium is clear and the eye and respiratory movements are completely
363
+ [1290.780 --> 1292.900] intact.
364
+ [1292.900 --> 1303.340] During sleep paralysis, the intrusion of rapid eye movement related mentation into emerging
365
+ [1303.340 --> 1306.020] wakefulness is common.
366
+ [1306.020 --> 1309.980] In effect, dreaming with one's eyes open.
367
+ [1309.980 --> 1315.060] Now these hallucinations occur in all sensory modalities and commonly involve seeing
368
+ [1315.060 --> 1320.900] hearing and sensing the presence of menacing intruders in one's bedroom.
369
+ [1320.900 --> 1323.780] This is very strange.
370
+ [1323.780 --> 1326.700] But it's common, not so common, but pretty common.
371
+ [1326.700 --> 1331.060] The intruder is often perceived as a shadowy humanoid figure.
372
+ [1331.060 --> 1337.380] The figure may approach the sleeper's body, sit on the bed, strangle and even sexually
373
+ [1337.380 --> 1339.660] assault the sleeper.
374
+ [1339.660 --> 1344.780] Supernatural accounts of this hallucinated intruder are common across cultures and
375
+ [1344.780 --> 1351.420] include nocturnal incubus and succubus assaults, old hag attacks, ghost visitations, and
376
+ [1351.420 --> 1353.420] alien abductions.
377
+ [1353.420 --> 1360.100] So for those who have been abducted by aliens, there's hope.
378
+ [1360.100 --> 1365.260] Dr. Ramachandran has ways of thinking about that that can help us all to understand.
379
+ [1365.260 --> 1368.020] Talk to us about the intruder.
380
+ [1368.020 --> 1371.220] The intruder, when one is waking up.
381
+ [1371.220 --> 1374.740] Well, ordinarily you get sensory feedback from the arms and legs, anchoring you in your body.
382
+ [1374.740 --> 1380.460] I can temporarily adopt an allocentric view of myself, say I'm pretending I'm giving
383
+ [1380.460 --> 1384.060] a lecture, I'm rehearsing a forthcoming lecture and I'm walking on the podium.
384
+ [1384.060 --> 1386.940] But you temporarily entertain this thought, but you don't literally float out of your
385
+ [1386.940 --> 1387.940] body.
386
+ [1387.940 --> 1388.940] Right.
387
+ [1388.940 --> 1389.940] Because you're anchored in your body.
388
+ [1389.940 --> 1390.940] Ego-centric.
389
+ [1390.940 --> 1391.940] Ego-centric.
390
+ [1391.940 --> 1392.940] Ego-centric.
391
+ [1392.940 --> 1396.660] But during sleep paralysis, there's a temporary escape from the ego-centric feedback information
392
+ [1396.660 --> 1398.660] that gives you this ego-centric perspective.
393
+ [1398.660 --> 1403.420] And you then, you literally start floating out of your body on outer body experience.
394
+ [1403.420 --> 1408.420] And, undoubtedly, not just all these abduction cases, even concepts of soul, your ancient
395
+ [1408.420 --> 1412.340] concept of soul could be derived, given how common this phenomenon is.
396
+ [1412.340 --> 1413.340] Sure.
397
+ [1413.340 --> 1414.740] We partly derive from such experiences.
398
+ [1414.740 --> 1416.980] But we don't talk about it much.
399
+ [1416.980 --> 1420.860] Because if we talked about it, people would say, we're crazy.
400
+ [1420.860 --> 1422.380] Happy to laugh.
401
+ [1422.380 --> 1423.380] But we're not.
402
+ [1423.380 --> 1427.100] Let me talk a little bit about allocentric versus ego-centric.
403
+ [1427.100 --> 1428.580] Let's define the terms.
404
+ [1428.580 --> 1434.300] In an ego-centric mode, I see myself, if not the center of the universe, at least to
405
+ [1434.300 --> 1437.940] which much information is being relayed in reference.
406
+ [1437.940 --> 1440.940] I'm in the center of the circle.
407
+ [1440.940 --> 1443.420] Allocentric is I'm looking out.
408
+ [1443.420 --> 1447.900] There are many other centers, and I happen to be observing them.
409
+ [1447.900 --> 1449.900] It's a reasonable first level of guidance.
410
+ [1449.900 --> 1451.900] You're adopted on the other person's perspective.
411
+ [1451.900 --> 1452.900] Right.
412
+ [1452.900 --> 1455.140] And I'm going to go with that.
413
+ [1455.140 --> 1461.660] Allocentric for me, and for in your words, allows one, an allocentric perspective, allows
414
+ [1461.660 --> 1467.460] one to put ones to stand in the other guy's shoes, to understand what that person might
415
+ [1467.460 --> 1469.740] be experiencing.
416
+ [1469.740 --> 1477.020] Now here I refer to the social conquest of Earth by E.O. Wilson, who argues that the allocentric
417
+ [1477.020 --> 1485.100] perspective was essential for evolution, that understanding the others around the
418
+ [1485.100 --> 1494.540] Iron Age or the pre-iron age campfire allowed one to understand what the issues were that
419
+ [1494.540 --> 1500.420] one needed to deal with to successfully move the tribe forward.
420
+ [1500.420 --> 1501.420] Right.
421
+ [1501.420 --> 1502.420] That's absolutely right.
422
+ [1502.420 --> 1507.020] We're able to have a sophisticated theory of other people's intentions and minds.
423
+ [1507.020 --> 1511.300] Even for something as simple as imitation, imitation learning.
424
+ [1511.300 --> 1516.620] So, a polar bear, for example, takes a few hundred thousand years to evolve a coat through
425
+ [1516.620 --> 1518.340] natural selection.
426
+ [1518.340 --> 1523.300] But a human baby or human infant watching his mother or father slay a polar bear and skin
427
+ [1523.300 --> 1524.700] it and wear the coat.
428
+ [1524.700 --> 1527.660] Does it in just one trial or just half a dozen trials?
429
+ [1527.660 --> 1532.380] So you skip a hundred thousand years of evolution by one trial of imitation learning.
430
+ [1532.380 --> 1536.940] This obviously requires the child to put itself in the parent's shoes to adopt that parent's
431
+ [1536.940 --> 1539.860] point of view in the hunt and subsequent skinning of the bear.
432
+ [1540.260 --> 1543.980] This is thought to be mediated by a group of neurons called mirror neurons, but given that this is
433
+ [1543.980 --> 1547.860] still a little bit preliminary, we won't go into that too much.
434
+ [1547.860 --> 1552.460] But nevertheless, it's this skill set that humans have and very likely other primates and
435
+ [1552.460 --> 1560.540] very likely other animals that allows them to take this other than my own perspective
436
+ [1560.540 --> 1561.540] as a way...
437
+ [1561.540 --> 1562.540] The allocentric perspective you were trying to describe.
438
+ [1562.540 --> 1569.420] Yeah, as a way of helping to organize the group, helping to ensure that your own agenda
439
+ [1569.420 --> 1572.420] is ultimately reflected in the group's activities.
440
+ [1572.420 --> 1575.460] It's very pro-evolutionary.
441
+ [1575.460 --> 1578.620] It's very pro passing on your DNA to the next generation.
442
+ [1578.620 --> 1579.620] Absolutely.
443
+ [1579.620 --> 1580.620] And it's in the brain.
444
+ [1580.620 --> 1582.620] You phrased it well, I think.
445
+ [1582.620 --> 1583.620] Good.
446
+ [1583.620 --> 1587.940] You've had such a terrific career and continue to have...
447
+ [1587.940 --> 1589.780] You continue to have a wonderful...
448
+ [1589.780 --> 1591.860] What are the projects you're working on right now?
449
+ [1591.860 --> 1596.340] What are we going to learn about V.S.
450
+ [1596.340 --> 1598.660] Ramachandran's next adventure?
451
+ [1598.660 --> 1600.940] And when are we going to learn about it?
452
+ [1600.940 --> 1603.420] Well, we became interested in a condition called synesthesia.
453
+ [1603.420 --> 1606.580] It was originally described by Francis Galton in the 19th century.
454
+ [1606.580 --> 1611.380] And certain people who are otherwise completely normal in the population have the following
455
+ [1611.380 --> 1612.380] quirk.
456
+ [1612.380 --> 1615.060] Every time they see a number, they see it tinged a particular color.
457
+ [1615.060 --> 1617.860] So I draw the number five on a sheet of paper.
458
+ [1617.860 --> 1618.860] It's red.
459
+ [1618.860 --> 1619.860] Six looks green.
460
+ [1619.860 --> 1622.860] Seven looks chartreuse, eight looks indigo, nine looks yellow and so on and so forth.
461
+ [1622.860 --> 1624.860] It's different for different synesthetes.
462
+ [1624.900 --> 1625.860] And why does this happen?
463
+ [1625.860 --> 1629.580] Again, it's an example of an anomaly or a quirk, because they're not patients.
464
+ [1629.580 --> 1632.380] They're found in the normal population.
465
+ [1632.380 --> 1635.780] And it runs in families, so Galton said it may have a genetic basis.
466
+ [1635.780 --> 1639.860] And also, it's about eight or nine times more common among artists, poets and novelists,
467
+ [1639.860 --> 1640.860] creative people.
468
+ [1640.860 --> 1643.620] It's controversial, but we think it's true.
469
+ [1643.620 --> 1644.620] So why would that be?
470
+ [1644.620 --> 1646.980] So the first thing is to show these people are not crazy.
471
+ [1646.980 --> 1648.180] One view is that they're making it up.
472
+ [1648.180 --> 1649.780] And why would somebody make up something like that?
473
+ [1649.780 --> 1650.780] Five or six or seven?
474
+ [1650.780 --> 1653.220] But leaving that aside, we needed proof.
475
+ [1653.220 --> 1656.060] So we created a display. First of all, we found it's much more common than Galton
476
+ [1656.060 --> 1657.060] thought.
477
+ [1657.060 --> 1659.780] It's not one in a thousand or one in a few hundred; it's one in fifty people that has
478
+ [1659.780 --> 1660.780] synesthesia.
479
+ [1660.780 --> 1661.780] So there are two or three in my class.
480
+ [1661.780 --> 1663.780] since I teach a large undergraduate class.
481
+ [1663.780 --> 1664.780] Wonderful, yeah.
482
+ [1664.780 --> 1668.460] So we brought them in and we had a matrix of fives.
483
+ [1668.460 --> 1670.380] So for this subject, two is red and five is green.
484
+ [1670.380 --> 1674.460] So you have a bunch of fives scattered on the screen.
485
+ [1674.460 --> 1677.020] Among them, there's one or two twos.
486
+ [1677.020 --> 1681.540] And most of us have great difficulty in finding the twos, camouflaged by the fives.
487
+ [1681.540 --> 1684.300] And these people spot them very, very quickly, much more quickly than you and I, because
488
+ [1684.300 --> 1687.260] they see these red twos pop out against background of green fives.
489
+ [1687.260 --> 1690.540] If they're crazy, how come they're better at the task?
490
+ [1690.540 --> 1693.140] This shows that it's an authentic, genuine phenomenon, a sensory phenomenon.
491
+ [1693.140 --> 1698.100] Because they tell you, phenomenologically, I see red color against the background of
492
+ [1698.100 --> 1701.460] green fives.
493
+ [1701.460 --> 1706.540] This shows that it's sensory, it's automatic, it's authentic, they're not making it up.
494
+ [1706.540 --> 1707.860] Questions, why does it happen?
495
+ [1707.860 --> 1711.500] So we did some brain imaging studies and what we found was that if you go to the fusiform
496
+ [1711.500 --> 1714.980] gyrus of the brain, tucked away in the medial temporal lobes on the sides of the
497
+ [1714.980 --> 1718.540] brain, there's a structure called the fusiform gyrus.
498
+ [1718.540 --> 1722.020] In the fusiform gyrus, there's a color area of the brain where sensory signals related to
499
+ [1722.020 --> 1723.740] color are processed.
500
+ [1723.740 --> 1727.620] And right next to the color area is the number area, where the visual appearance of numbers is
501
+ [1727.620 --> 1728.620] processed.
502
+ [1728.620 --> 1730.180] So we said this can't be a coincidence.
503
+ [1730.180 --> 1733.220] Maybe there's some sloppy wiring in these people between the number area and color area,
504
+ [1733.220 --> 1736.340] which are ordinarily clearly segregated in all of us.
505
+ [1736.340 --> 1739.220] But in these people, there's some cross wiring, accidental cross wiring.
506
+ [1739.220 --> 1742.020] The clue comes from Galton's own observation that it runs in families.
507
+ [1742.020 --> 1745.820] Maybe there's a gene that causes cross wiring because normally in the infant brain or in
508
+ [1745.820 --> 1748.180] the fetal brain, everything is connected to everything.
509
+ [1748.180 --> 1751.140] There's a tremendous redundancy of connections.
510
+ [1751.140 --> 1755.580] These get pruned by pruning genes and if these genes mutate, you get defective pruning
511
+ [1755.580 --> 1757.900] between adjacent brain modules.
512
+ [1757.900 --> 1762.940] So the number and color area ordinarily segregated get connected by these redundant connections.
513
+ [1762.940 --> 1767.340] Every time you see a number, it activates not just a number neuron, but cross activates
514
+ [1767.340 --> 1769.740] a color neuron and you see a corresponding color.
515
+ [1769.740 --> 1773.900] This is the theory and we tested this using brain imaging and found this to be true.
516
+ [1773.900 --> 1777.460] But then the question arises, why is it more common among creative people among artists,
517
+ [1777.460 --> 1778.460] poets and novelists?
518
+ [1778.460 --> 1779.860] Well, again, this is just a hunch.
519
+ [1779.860 --> 1781.260] We haven't tested this.
520
+ [1781.260 --> 1785.300] But maybe that when you talk, when you speak about somebody who's artistically creative
521
+ [1785.300 --> 1787.180] or poetic, what do you mean?
522
+ [1787.180 --> 1790.260] You mean, you're capable of analogy and metaphor.
523
+ [1790.260 --> 1793.620] Like when Shakespeare said, it is the east, and Juliet is the sun.
524
+ [1793.620 --> 1795.500] You don't mean Juliet is literally the sun,
525
+ [1795.500 --> 1797.940] because that would mean she's a glowing ball of fire.
526
+ [1797.940 --> 1799.460] What does that mean?
527
+ [1799.460 --> 1803.780] If you're a schizophrenic, you might say that, but what we usually mean is that she's
528
+ [1803.780 --> 1809.220] warm like the sun, radiant like the sun, nurturing like the sun, so on and so forth.
529
+ [1809.220 --> 1811.500] So it's a metaphor.
530
+ [1811.500 --> 1815.780] Metaphor involves linking seemingly unrelated concepts and ideas which are located
531
+ [1815.780 --> 1817.500] in far-flung regions of the brain.
532
+ [1817.500 --> 1823.100] So if the synesthesia gene is expressed only in the fusiform gyrus, because of transcription
533
+ [1823.100 --> 1825.380] factors, you get this quirk called synesthesia.
534
+ [1825.380 --> 1827.860] Seeing five as red and six as green is completely useless.
535
+ [1827.860 --> 1832.660] But if you express more diffusely throughout the brain, you get hyper-connectivity throughout
536
+ [1832.660 --> 1837.140] the brain, increasing the propensity to link seemingly unrelated ideas located in
537
+ [1837.140 --> 1840.740] different parts of the brain, hence the propensity for metaphorical thinking and creativity
538
+ [1840.740 --> 1842.580] and artistic talent and literary talent.
539
+ [1842.580 --> 1845.900] We say that's the hidden agenda of the gene.
540
+ [1845.900 --> 1847.500] That's why it's still so prevalent.
541
+ [1847.500 --> 1850.700] Why would one in fifty people see five as red and six as green?
542
+ [1850.700 --> 1853.660] Not because of that, but because it has a hidden agenda, namely it makes them outliers
543
+ [1853.660 --> 1856.780] in the population more creative, more poetic and all that.
544
+ [1856.780 --> 1861.900] So here is an example of how you start with this quirk synesthesia known for 100 years,
545
+ [1861.900 --> 1866.180] show that it's a real phenomenon, not some bogus phenomenon, not something that's fabricated
546
+ [1866.180 --> 1867.980] by the patient, the subject.
547
+ [1867.980 --> 1871.980] You find out what the neural underpinnings are in the fusiform gyrus, and then point
548
+ [1871.980 --> 1875.620] out its broader implications for understanding human nature, elusive aspects of human
549
+ [1875.620 --> 1877.260] nature like creativity.
550
+ [1877.260 --> 1881.500] This has led us now to asking questions about savant syndrome.
551
+ [1881.500 --> 1885.340] That's one of the things we're interested in working on; we haven't started yet.
552
+ [1885.340 --> 1891.100] These people have an extraordinary ability to, for example, name four-digit prime numbers.
553
+ [1891.100 --> 1894.700] And the question of why it happens is still up in the air.
554
+ [1894.700 --> 1897.380] What about the genetic basis then of the synesthesia?
555
+ [1897.380 --> 1898.620] Are you looking for genes?
556
+ [1898.620 --> 1900.580] Well, people at Rockefeller were looking for genes.
557
+ [1900.580 --> 1902.380] I don't know how far they've got with it.
558
+ [1902.380 --> 1904.340] It's quite a recent enterprise.
559
+ [1904.340 --> 1909.620] But one would expect that if you find a lot of the families, you'd be able to find the genes.
560
+ [1909.620 --> 1913.780] Fascinating work, fascinating life, fascinating person.
561
+ [1913.780 --> 1914.780] Thanks for being with us.
562
+ [1914.780 --> 1915.780] Thank you Bill.
563
+ [1915.780 --> 1918.980] And it's Bill Mobley for the Brain Channel and UCTV.
564
+ [1918.980 --> 1927.020] I hope you'll continue to tune in and hear not just this episode again, if it makes
565
+ [1927.020 --> 1933.140] sense to you, but also to listen to past episodes and continue to check us out, because
566
+ [1933.140 --> 1938.020] there'll be additional terrific guests just like my colleague here in the future.
567
+ [1938.020 --> 1939.220] Thanks very much for being with us.
transcript/allocentric_FqBzVmlXQMA.txt ADDED
The diff for this file is too large to render. See raw diff
 
transcript/allocentric_GddQd53mgEk.txt ADDED
@@ -0,0 +1,497 @@
1
+ [0.000 --> 6.840] This gentleman here is Francis Crick and he is of course famous for being one of the
2
+ [6.840 --> 11.680] co-discoverers of the double helix structure of the DNA molecule and he
3
+ [11.680 --> 14.860] won the Nobel Prize for that. In the latter part of his life he's been
4
+ [14.860 --> 19.280] thinking about the other big problem which is human intelligence and the brain.
5
+ [19.280 --> 24.640] In 1979 he wrote this essay called Thinking About the Brain which had a huge
6
+ [24.640 --> 29.480] impact on my life. And this was a critical essay. He said
7
+ [29.480 --> 32.880] you know what people have been studying the brain for a long time even back in
8
+ [32.880 --> 37.880] 1939 and they had a massive huge amount of data. But he said in spite of this
9
+ [37.880 --> 41.120] accumulation of detailed knowledge how the human brain works is still
10
+ [41.120 --> 46.760] profoundly mysterious. He said we would clearly get more data over the time and
11
+ [46.760 --> 49.560] we're going to come up with new techniques for measuring and understanding how
12
+ [49.560 --> 53.440] the brain works. He says but it may not matter because we're just not thinking
13
+ [53.440 --> 57.640] about the problem correctly. He said to understand the brain we need new ways of
14
+ [57.640 --> 62.440] thinking about it. More experimental data will not be sufficient. So this is a
15
+ [62.440 --> 66.240] real sort of wake up call for neurosciences. We're just not going about this in the
16
+ [66.240 --> 70.360] right way. And then here's a longer quote I want to read to you. He says what
17
+ [70.360 --> 74.800] what is conspicuously lacking is a framework of ideas within which to
18
+ [74.800 --> 79.080] interpret all these different approaches. It is not that most neurobiologists do not
19
+ [79.080 --> 82.800] have some general concept of what is going on. The trouble is the concept is not
20
+ [82.800 --> 88.480] precisely formulated. Touch it and it crumbles. Ouch. He's being really critical.
21
+ [88.480 --> 92.480] He's saying people act as if we understand this the brain or what's going on.
22
+ [92.480 --> 97.560] But he says we really don't. And to me when I read this article I said wow that's
23
+ [97.560 --> 100.640] incredible. I want to spend my life working on that, and that's what we do at
24
+ [100.640 --> 104.520] Numenta. We work on developing this framework for understanding the
25
+ [104.520 --> 110.720] neocortex. Now when we think about the human brain, the neocortex is about
26
+ [110.720 --> 114.760] 75% of the brain. That's a picture of the neocortex right here. The other
27
+ [114.760 --> 120.160] 25% consists of the brain stem and the spinal cord and the cerebellum.
28
+ [120.160 --> 124.560] And it's like a post that sticks up inside of the neocortex. The old part of
29
+ [124.560 --> 128.000] the brain, the old 25% of the brain, handles things like the control of
30
+ [128.000 --> 133.400] breathing and heart rate, reflex reactions; maybe even running and walking are
31
+ [133.400 --> 136.800] controlled by that, and emotions. But when you think about everything we
32
+ [136.800 --> 140.160] associate with intelligence, it's really the neocortex. We think about
33
+ [140.160 --> 143.760] perception and language and thought and planning; all your conscious
34
+ [143.760 --> 147.680] perceptions are in the neocortex. And even today how it works is still a
35
+ [147.680 --> 152.120] mystery. It's also, in my opinion, the most important scientific problem of all
36
+ [152.120 --> 158.600] time. We as humans are really basically a neocortex, and all
37
+ [158.600 --> 163.600] of our issues and problems in society are related to our brains. It is how we
38
+ [163.600 --> 167.400] think, it is all of our arts and sciences, and I believe understanding the
39
+ [167.400 --> 170.760] neocortex will be important for the long-term survival of our species. I'll
40
+ [170.760 --> 175.440] get back to that more at the end of my talk. So here's my talk outline. I'm going
41
+ [175.440 --> 178.800] to go through some background material because I can't assume everybody knows a
42
+ [178.800 --> 182.640] lot of neuroscience. I want to get you up to speed where we are. And then I'm
43
+ [182.640 --> 185.500] going to introduce this new framework for intelligence and
44
+ [185.500 --> 188.440] court of a computation. And then the end of the talk I'll talk about some of the
45
+ [188.440 --> 192.640] implications for the new theory. All right let's go into some background material.
46
+ [193.160 --> 199.400] Here's a model of the neocortex. It's a dinner napkin. And if you take a
47
+ [199.400 --> 203.440] neocortex out of your head and iron it flat, it would be about this big. And about
48
+ [203.440 --> 207.320] this thick a little bit thicker. It's about 1500 square centimeters in area and
49
+ [207.320 --> 213.120] 2 and a half millimeters thick. In this tissue, in this neocortex, there are
50
+ [213.120 --> 217.880] somewhere between 15 and 20 billion neurons. Now one of the tenets of
51
+ [218.400 --> 222.840] neuroscience is that every perception and every thought, everything that has
52
+ [222.840 --> 227.760] ever occurred to you and that you perceived, is basically the activity of neurons. So
53
+ [227.760 --> 230.480] some of these neurons are active, most of them are inactive at any point in
54
+ [230.480 --> 233.520] time, and the ones that are active represent your current thoughts and
55
+ [233.520 --> 237.600] perceptions. The neurons are connected together. There are thousands of synapses
56
+ [237.600 --> 241.120] or connections per neuron. So there are somewhere between 50 and 100
57
+ [241.120 --> 246.840] trillion synapses in the neocortex. And a second tenet of neuroscience is that those
58
+ [246.840 --> 250.040] synapses contain all the knowledge you have about the world. So everything you
59
+ [250.040 --> 253.680] know, everything you've ever learned is stored in those connections. Now if you
60
+ [253.680 --> 257.440] look at the surface of the neocortex, it's very uniform. You don't see any
61
+ [257.440 --> 262.040] demarcations. But we now know that different areas do different
62
+ [262.040 --> 265.560] things. This is first discovered when people had trauma and they had an injury
63
+ [265.560 --> 267.920] in one spot and they said oh they can no longer see you or they can't do
64
+ [267.920 --> 271.080] language or they can't think about certain types of things. But we now have
65
+ [271.080 --> 274.520] mapped out the neocortex in great detail. These different
66
+ [274.520 --> 277.880] regions are connected to each other with bundles of nerve fibers through the
67
+ [277.880 --> 282.840] white matter. And some of those regions are assigned to different things. So let's
68
+ [282.840 --> 287.400] just talk about that and go back to the slideshow. So now we have here I've
69
+ [287.400 --> 291.120] shown some of those regions highlighted in blue: some visual
70
+ [291.120 --> 294.120] regions, some auditory regions, some somatic sensory regions. There are multiple
71
+ [294.120 --> 298.040] regions in those blue highlighted areas. You see there's a large part of the
72
+ [298.040 --> 302.280] cortex you can't easily associate with vision auditory, somatic input. And the
73
+ [302.280 --> 305.320] basic way people think about this is the following. They say the input
74
+ [305.320 --> 309.240] comes in from some sensory organ, such as the retina, and projects to the first
75
+ [309.240 --> 313.520] visual region in the back of your brain. And that visual region extracts some
76
+ [313.520 --> 316.880] sort of simple features. And then those simple features are passed to the
77
+ [316.880 --> 320.680] next region, which extracts some complex features, which project to the next
78
+ [320.680 --> 324.160] region. After a few of these you end up with some sort of object representation.
79
+ [324.160 --> 328.120] People believe the same basic thing is happening in
80
+ [328.120 --> 331.840] a different modality such as touch. And so your skin projects to these
81
+ [331.840 --> 335.840] somatic sensory regions and a similar type of process occurs. And somehow in the
82
+ [335.840 --> 339.960] higher regions of your cortex, there are some sort of multi-modal associations, multi-
83
+ [339.960 --> 343.520] modal objects and so on. This is the basic idea of how many people think about
84
+ [343.520 --> 347.120] the brain today. It's actually not really true and I'll get to that in a moment.
85
+ [347.120 --> 352.360] You can actually map out how regions of the brain actually connect to each
86
+ [352.360 --> 355.960] other. Now this has been done. The first really great map was done in
87
+ [355.960 --> 360.560] 1991 by two people, Felleman and Van Essen, and this is what they reported. This picture
88
+ [360.600 --> 364.680] on the left shows each of those little rectangles is a region of a
89
+ [364.680 --> 370.120] macaque monkey's neocortex. It's basically showing the somatic sensory regions
90
+ [370.120 --> 373.960] on the left and the visual regions on the right. And each line is
91
+ [373.960 --> 377.640] a bundle of nerve fibers, millions of nerve fibers, that are going between each
92
+ [377.640 --> 383.040] of these regions. And you can see it's very complicated. There's connections
93
+ [383.040 --> 386.840] going all over the place and there's parallel, horizontal connections of parallel
94
+ [386.880 --> 390.960] regions and there's level skipping going on. So we can say, first of
95
+ [390.960 --> 394.200] all, this is a very complex structure. It's not a simple flow chart like
96
+ [394.200 --> 398.880] in the upper right. It's certainly not strictly hierarchical. In fact, in 1991
97
+ [398.880 --> 402.480] they reported that 40% of all possible connections between regions exist,
98
+ [402.480 --> 405.760] which is much greater than you'd get in a hierarchy. And now we know that
99
+ [405.760 --> 410.360] numbers much, much higher with new techniques, we know that there's a far greater
100
+ [410.360 --> 413.840] connectivity. So that's kind of confusing. Now the strange thing is,
101
+ [413.840 --> 417.520] no matter where you look in the neocortex, if you look at any one of these regions,
102
+ [417.520 --> 422.640] the local circuitry anywhere is remarkably the same. So you have all these
103
+ [422.640 --> 425.440] regions doing different things connected in the current way, but then every
104
+ [425.440 --> 430.280] region looks like it's doing the same thing. This was first noted by the perhaps
105
+ [430.280 --> 435.040] the most famous neuroscientist of all time, Ramón y Cajal, a Spaniard. And right
106
+ [435.040 --> 439.960] around the late 1800s and early 1900s, he started mapping out all the types
107
+ [439.960 --> 444.600] of cells in the brain. They had just discovered these staining techniques where
108
+ [444.600 --> 448.000] you could take a bunch of neural tissue and would stain just some of the neurons.
109
+ [448.000 --> 451.000] And this was important because if you stained all the neurons, it would be
110
+ [451.000 --> 454.080] big black masses. So by staining some of the neurons, they could start seeing
111
+ [454.080 --> 457.960] what they look like. So here on the left is a picture of a
112
+ [457.960 --> 460.760] slice of the neocortex. Actually, you see it's two and a half millimeters thick.
113
+ [460.760 --> 466.240] And this stain highlights the cell bodies. So the neuron bodies. And you can
114
+ [466.240 --> 469.360] see they have different shapes. They have different sizes and they have
115
+ [469.360 --> 473.000] different packing densities. And they started noticing that they appear to be
116
+ [473.000 --> 476.320] layers. And so they started saying, oh, it looks like there's six layers of
117
+ [476.320 --> 480.200] neurons in the neocortex. And this looked the same everywhere. Then the
118
+ [480.200 --> 483.480] stain on the image on the right is a different stain that not only stains the
119
+ [483.480 --> 489.000] cell body, but stains the dendrites and the axons that come out of the
120
+ [489.000 --> 492.080] cell. So now you can see how they're connected. You can see there's a lot of
121
+ [492.080 --> 495.840] vertical connectivity going on across the layers. And some layers have more
122
+ [495.840 --> 501.680] horizontal connectivity. So for the last 120 years, many neuroscientists have been
123
+ [501.680 --> 505.200] mapping out what these circuits look like, what these neurons look like, how
124
+ [505.200 --> 509.800] they're connected together. It's an incredibly rich field of data. People like
125
+ [509.800 --> 514.680] ourselves who are theorists, we create models from this. So we build pictures
126
+ [514.680 --> 517.800] like this where we're looking at the different cell layers and trying to figure
127
+ [517.800 --> 520.960] how they connect together and what they're doing and the different types of
128
+ [520.960 --> 525.160] connections between them. When you do some of these, first we could say there are
129
+ [525.160 --> 530.920] dozens of neuron types in the neocortex. They are organized in layers.
130
+ [530.920 --> 535.720] How many layers you get depends on how you measure them. You can say six to
131
+ [535.720 --> 539.640] ten depending on which way you break them apart. But there are prototypical
132
+ [539.640 --> 542.800] projections across layers, meaning that everywhere you look, the same type of
133
+ [542.800 --> 546.920] connection and the information basically goes vertically. You come in one layer
134
+ [546.920 --> 550.840] and it goes up and it goes down and back and forth across the layers. There are
135
+ [550.840 --> 554.240] horizontal connections but they're more limited and they only come from
136
+ [554.240 --> 558.200] certain layers. Another thing that was discovered was that all regions of
137
+ [558.200 --> 562.680] the neocortex have a motor output. I've shown that here, labeled as one of
138
+ [562.680 --> 566.680] the green arrows coming back out. And this is surprising. You might have heard
139
+ [566.680 --> 570.440] that there's a motor section in the neocortex, but we now know that all parts of
140
+ [570.440 --> 573.920] the neocortex are motor cortex. You might say, well, what would the visual cortex
141
+ [573.920 --> 577.160] have to say about movement? Well, it turns out that the regions in the visual
142
+ [577.160 --> 581.000] cortex project to an old part of the brain that moves the eyes. And the
143
+ [581.000 --> 583.640] regions in the auditory cortex projects to a part of the brain that
144
+ [583.640 --> 587.480] orients the head. So when you're listening or when you're seeing it's an active
145
+ [587.480 --> 591.720] sense. You don't just see by getting input from the eyes. You see by moving the
146
+ [591.720 --> 595.360] eyes and getting inputs. And that's how the cortex works as a general principle.
147
+ [595.360 --> 599.720] And then of course, as I've said already, every part of the neocortex
148
+ [599.720 --> 604.960] has similar circuits. Now there are variations, and so people point out,
149
+ [604.960 --> 607.360] oh, this part of the cortex has a little more of this. This part of the
150
+ [607.360 --> 610.480] cortex has a little bit less of that. And so there are variations that are going
151
+ [610.480 --> 614.640] on but there's this complex common circuitry everywhere. And so there's
152
+ [614.640 --> 618.640] something fundamental going on everywhere in the neocortex. And it's
153
+ [618.640 --> 623.200] complex. The first person who really made some sense of this was a
154
+ [623.200 --> 626.640] gentleman named Vernon Mountcastle. And he had this huge idea. By the way, he is
155
+ [626.640 --> 631.120] like one of the fathers of neuroscience and he was really the guy who
156
+ [631.760 --> 635.440] has some of the most elegant writings about the neocortex. He was at Johns
157
+ [635.440 --> 640.560] Hopkins. And he had this big idea. In 1978 he published a monograph.
158
+ [640.560 --> 644.640] And in that monograph he made the following claims. He says well look the reason
159
+ [644.640 --> 648.400] all areas of the neocortex look the same is because they
160
+ [648.400 --> 652.960] performed the same basic function. They're all doing the same thing. And what
161
+ [652.960 --> 656.720] makes one region visual cortex and another region auditory cortex is
162
+ [656.720 --> 659.920] what it's connected to. He was literally saying you could take a part of the neocortex
163
+ [659.920 --> 663.040] and stick an auditory nerve into it and it would become auditory cortex.
164
+ [663.040 --> 667.040] Or you could stick a visual nerve into any piece and it would become visual
165
+ [667.040 --> 673.360] cortex. Which is an amazing idea. And then he said a small area of the
166
+ [673.360 --> 676.880] cortex, about a one millimeter square area, which he called the
167
+ [676.880 --> 680.960] cortical column is the unit of replication and contains this sort of
168
+ [680.960 --> 685.360] common cortical algorithm. He chose a millimeter square because in that square
169
+ [685.360 --> 687.920] millimeter you have all the different cell types, all the different
170
+ [687.920 --> 690.640] connections, all the different physiological response properties.
171
+ [690.640 --> 695.040] It wasn't that this cortical column is a physical thing. It was that it was a
172
+ [695.040 --> 699.040] small enough unit that contained everything. We sometimes make pictures like
173
+ [699.040 --> 704.320] this to illustrate this. So here we can imagine a slice of the neocortex,
174
+ [704.320 --> 707.120] and you're looking at the neocortical sheet, and we're showing these individual
175
+ [707.120 --> 711.600] columns packed in there. In a human there'd be 150,000 of these
176
+ [711.600 --> 714.800] columns. Now again they're not physical like this. If you actually look at the
177
+ [714.800 --> 718.400] cortex, with a few exceptions you wouldn't see this, but this is the
178
+ [718.400 --> 721.920] way to think about it, and this is the way a lot of neuroscientists think about what
179
+ [721.920 --> 726.240] a column looks like. Now Mountcastle's idea was one of the biggest ideas ever in
180
+ [726.240 --> 732.560] science. I put it right up there with Darwin. Darwin said that we have this
181
+ [732.560 --> 737.840] tremendous diversity of life and all of it comes about because of a single
182
+ [737.840 --> 741.520] algorithm repeated over and over again. And Mountcastle was saying we have this
183
+ [741.520 --> 744.400] tremendous diversity of intelligence. Everything we think of as
184
+ [744.400 --> 747.600] intelligence, whether it's language or music or arts or physics and
185
+ [747.600 --> 753.040] mathematics it's all based on a single algorithm. It's an incredible idea.
186
+ [753.040 --> 757.200] It's so incredible that many neuroscientists today have trouble dealing with it.
187
+ [757.200 --> 760.240] It's like they don't know how to interpret it or what to do about it, but it's
188
+ [760.240 --> 765.280] clearly true and it should be a foundation of all neocortical theory.
189
+ [765.280 --> 769.920] Okay, so the next question I want to address is what does the neocortex do,
190
+ [769.920 --> 773.280] and over the years people have proposed different ways of looking at it, but I and
191
+ [773.280 --> 777.200] other neuroscientists, many of us, have come up with this perspective.
192
+ [777.200 --> 781.440] The thing about the neocortex, what it does, is it learns a model of the world.
193
+ [781.440 --> 784.880] So when you're born you don't know about the world you don't know about
194
+ [784.880 --> 788.480] buildings and cars and people, you don't know about trees, you don't know
195
+ [788.480 --> 791.600] about computers you don't know about everything and you don't know any
196
+ [791.600 --> 795.360] languages you don't know any words you have to learn all of this and so you have
197
+ [795.360 --> 798.320] to learn this model of the world and in your cortex you have a model of the
198
+ [798.320 --> 801.680] world and and when you interact with that model you'll see it makes
199
+ [801.680 --> 805.440] predictions. So let's talk about what that model consists of. There are
200
+ [805.440 --> 809.120] thousands and thousands of things you know you know how they look how they feel
201
+ [809.120 --> 813.760] and how they sound objects in the world that you interact with every day
202
+ [813.760 --> 817.600] you know where these objects are located to relative to other objects so
203
+ [817.600 --> 821.760] I'm sitting here looking I'm in a room and I see a door and a screen and a floor
204
+ [821.760 --> 824.880] and a table and chairs. These things I recognize, but they also have
205
+ [824.880 --> 828.400] relationships to each other and I would not expect to see the door on the
206
+ [828.480 --> 834.400] ceiling or on the floor I know how objects behave so for example a door has hinges
207
+ [834.400 --> 838.080] and it can open and close and has a latch that goes up and down or my computer has
208
+ [838.080 --> 843.600] all kinds of behaviors that it changes as I interact with it and we do this
209
+ [843.600 --> 847.760] we also learn this for both physical and abstract objects so I can model
210
+ [847.760 --> 850.880] physical things that are in front of me but I also model abstract
211
+ [850.880 --> 854.320] concepts, maybe like democracy, or places that I have never been to, and yet I
212
+ [854.320 --> 858.560] still have models of them and then finally it's a predictive model
213
+ [858.560 --> 862.400] so you have this model in your cortex and it's constantly making predictions
214
+ [862.400 --> 866.560] about what's going to happen next and this is its value now it makes predictions
215
+ [866.560 --> 869.920] at all levels so even every time I touch something I have a prediction what I'm
216
+ [869.920 --> 872.400] going to feel, and every time I move my eyes I predict what I'm going to
217
+ [872.400 --> 874.640] see. You're not consciously aware of this,
218
+ [874.640 --> 878.320] but I also can make predictions about long term things so if I'm trying to
219
+ [878.320 --> 881.200] apply to get a grant for my science, I might say,
220
+ [881.200 --> 884.800] should I apply early or late, or should I use this language or that language, which will
221
+ [884.800 --> 888.320] improve my chances of getting accepted so we're trying to predict the outcomes
222
+ [888.320 --> 892.080] of our actions all the time I wrote an entire book about how the cortex
223
+ [892.080 --> 896.080] builds predictive models; it's called On Intelligence. So this is what the brain
224
+ [896.080 --> 899.680] does. There is an incredibly complex model of the world that is stored
225
+ [899.680 --> 902.240] in your neocortex, and we want to understand how it happens.
226
+ [902.240 --> 905.840] So the question we want to know is how does the neocortex
227
+ [905.840 --> 909.200] learn this model using the circuitry that we've talked about
228
+ [909.200 --> 912.720] so now we can switch to our new framework for this
229
+ [912.720 --> 915.840] so I'm going to start with a thought experiment we had. This occurred just a
230
+ [915.840 --> 919.360] couple years ago and it was a real revelation and sort of a bunch of things
231
+ [919.360 --> 922.880] came out of this thought experiment we were asking
232
+ [922.880 --> 927.680] how it is I can predict what I'm going to feel when I touch an object such as
233
+ [927.680 --> 932.320] a cup like this. And it sounds simple, but it's a very complex question.
234
+ [932.320 --> 936.880] the answer and what we realized was the following I asked
235
+ [937.280 --> 940.640] what does the brain need to know what does the cortex need to know to predict
236
+ [940.640 --> 943.840] what I'm going to feel when I move my finger to a new location place it down
237
+ [943.840 --> 946.960] on top of the cup so I'm going to touch the rim of the cup here and I can
238
+ [946.960 --> 950.320] imagine what I'm going to feel before I feel it the cortex needs to know
239
+ [950.320 --> 954.080] several things it needs to know that it's touching a coffee cup and
240
+ [954.080 --> 956.880] it says, oh, I know what this object is. That's a
241
+ [956.880 --> 960.720] requirement: to know it. It also needs to know where the finger is going to be
242
+ [960.720 --> 964.560] on the cup after the finger comes down. If I move the finger in a different
243
+ [964.560 --> 967.680] direction I'll touch something else and I'll feel something different so I
244
+ [967.680 --> 971.280] have to know where it's going to be and it needs to know what object it's
245
+ [971.280 --> 975.280] touching now if I'm if my finger is first on the side of the cup and I'm about to
246
+ [975.280 --> 979.280] move it I have to know where is originally and where it will be based on a
247
+ [979.280 --> 983.040] movement so essentially saying okay where's my finger now where will it be
248
+ [983.040 --> 986.960] after I move and then I can make a prediction about the cup
249
+ [986.960 --> 991.120] now this is a location the where it is is a location relative to the cup it's
250
+ [991.120 --> 994.080] not relative to my body. It's the same thing if the cup is at a different
251
+ [994.080 --> 999.520] angle and so on so the cortex needs to know a location in the reference frame of
252
+ [999.520 --> 1002.800] the cup. It's kind of hard to imagine it would do that, but we know it
253
+ [1002.800 --> 1006.560] must need to know that. Now, if you think about it, when you touch the cup with
254
+ [1006.560 --> 1010.160] multiple fingers at the same time, or with your whole hand, all parts of your
255
+ [1010.160 --> 1014.480] skin are making the same type of predictions so my different fingers are
256
+ [1014.480 --> 1018.800] touching different parts of the cup, but each one is independently and simultaneously
257
+ [1018.800 --> 1022.640] predicting what it's going to feel and therefore each part of my skin has to
258
+ [1022.720 --> 1026.320] know where it is relative to this cup there isn't a single location every
259
+ [1026.320 --> 1030.720] part of my input has to have a location and so this tells us that in the
260
+ [1030.720 --> 1035.760] primary sensory cortex in the neocortex in this case touch but the same is
261
+ [1035.760 --> 1039.360] going to happen in a vision or addition there has to be a representation in
262
+ [1039.360 --> 1044.560] the neural tissue of the location on the object that I'm touching this is a
263
+ [1044.560 --> 1049.280] really interesting idea and we ran with it okay back to the slides a year ago
264
+ [1049.280 --> 1053.280] just about a year ago we published our first paper on this idea and it was
265
+ [1053.280 --> 1056.560] called, the title of the paper was, A Theory of How Columns in the Neocortex
266
+ [1056.560 --> 1060.400] Enable Learning the Structure of the World, and I'll give you just briefly
267
+ [1060.400 --> 1065.600] the upshot of what's in that paper. We argued that a single column in the
268
+ [1065.600 --> 1069.120] cortex, say receiving input from the tip of your finger,
269
+ [1069.120 --> 1073.200] is able to learn complete models of objects. How does it do that? It needs
270
+ [1073.200 --> 1077.760] to have a location signal which I'm illustrating here in blue so we have an
271
+ [1077.760 --> 1080.560] input coming from the finger and you have another input coming from a
272
+ [1080.560 --> 1083.760] different layer in the cortex which we believe is in layer six and that
273
+ [1083.760 --> 1088.720] represents the the location of the input relative to the object so now I have
274
+ [1088.720 --> 1092.240] two things I have the actual sensation coming from the finger and I had the
275
+ [1092.240 --> 1096.000] location on the object these arrive in layer four this is
276
+ [1096.000 --> 1100.480] well known in the anatomy. And so now if I think about that, I say, okay, I know the
277
+ [1100.480 --> 1104.240] sensation and where it is, and if I move my finger multiple times I can
278
+ [1104.240 --> 1107.920] basically build up a model of the object what the different features are
279
+ [1107.920 --> 1112.240] at different locations, and we propose that that's being assembled in an upper layer,
280
+ [1112.240 --> 1116.480] layer two-three. This object layer is a stable representation; this would
281
+ [1116.480 --> 1120.560] represent the coffee cup it's the stable representation meaning as I move my
282
+ [1120.560 --> 1124.400] finger the input changes and location changes but the representation in the
283
+ [1124.400 --> 1128.480] output layer is stable so I associate a set of features and locations with an
284
+ [1128.480 --> 1133.440] object we then went on to show that what would happen if I had multiple columns
285
+ [1133.440 --> 1136.960] at the same time so imagine I have three fingers touching the cup at the same
286
+ [1136.960 --> 1142.000] time each one is sensing a different location has a different input each one on
287
+ [1142.000 --> 1148.480] its own would not be able to determine what the object is but they can by
288
+ [1148.480 --> 1152.640] voting meaning at this object layer they can also be uncertain say well I don't
289
+ [1152.640 --> 1155.360] really know what this is it could be a b and c and the other one says this could
290
+ [1155.360 --> 1158.880] be b c and d and the other one says this could be b x and q and they say
291
+ [1158.880 --> 1162.880] then it must be b, the coffee cup. We modeled this, we showed simulations of this
292
+ [1162.880 --> 1165.920] using realistic neurons we showed the capacity of the system but the
293
+ [1165.920 --> 1170.800] basic idea is, if you only had one sensor, like one finger, imagine
294
+ [1170.800 --> 1173.920] I were asking you to recognize an object by putting your finger in a black
295
+ [1173.920 --> 1177.280] box and you touch with one touch you almost certainly couldn't recognize
296
+ [1177.280 --> 1180.320] what the object is. You can by moving your finger to
297
+ [1180.320 --> 1183.360] multiple locations, but if you can grab it with your whole hand at once you can
298
+ [1183.360 --> 1187.040] recognize it in a single sensation. So multiple columns can infer objects in a
299
+ [1187.040 --> 1191.520] single sensation by voting on object identity this is touched but you should
300
+ [1191.600 --> 1195.600] realize that the same thing is going on in vision and in other sensory modalities.
301
+ [1195.600 --> 1199.280] when you think about the eyeball or the retina it's not a single thing it's like
302
+ [1199.280 --> 1204.960] it's like the skin it's a set of sensory organs arranged on the retina
303
+ [1204.960 --> 1208.560] and each one of those is projecting to a column and each one it's like the
304
+ [1208.560 --> 1211.440] visual cortex doesn't look at an image, it looks at lots of
305
+ [1211.440 --> 1214.640] little pieces of it, just like when you get the inputs from your fingers.
306
+ [1214.640 --> 1218.960] so the same basic structure works in all modalities
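To make the column model and the voting step concrete, here is a minimal Python sketch. This is illustrative only, not Numenta's HTM implementation: the object names, locations, and features are hypothetical, each column's model is reduced to a dict of {location: feature} pairs, and "voting" is reduced to set intersection over candidate objects.

# Each learned object is a set of (location, feature) associations,
# the kind of model a single column builds by moving over the object.
objects = {
    "coffee_cup": {"rim": "rounded_edge", "side": "curved", "handle": "ring"},
    "pen":        {"tip": "point",        "side": "curved", "clip": "ridge"},
    "bowl":       {"rim": "rounded_edge", "side": "curved", "base": "flat"},
}

def candidates(location, feature):
    # One column's guess: all objects consistent with this feature at this location.
    return {name for name, model in objects.items()
            if model.get(location) == feature}

# One finger, one touch: ambiguous.
print(candidates("side", "curved"))              # {'coffee_cup', 'pen', 'bowl'}

# Three fingers at once, three columns voting by intersecting their candidates.
vote = set(objects)
for loc, feat in [("rim", "rounded_edge"), ("side", "curved"), ("handle", "ring")]:
    vote &= candidates(loc, feat)
print(vote)                                      # {'coffee_cup'}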
307
+ [1219.840 --> 1224.080] we had a big question though like we proposed that this location
308
+ [1224.080 --> 1228.560] representation is in layer six one of these lower layers but how could that be
309
+ [1228.560 --> 1232.000] where could it come from what does it look like how would the brain know a
310
+ [1232.000 --> 1234.560] location and what does it mean to know a location
311
+ [1234.560 --> 1239.520] on an object in this paper we didn't answer that question we left it as
312
+ [1239.520 --> 1243.920] a question, but we suggested where to look to find the answer,
313
+ [1243.920 --> 1247.280] and that turned out to be correct. We suggested we should look in
314
+ [1247.280 --> 1250.720] something called the entorhinal cortex. Now what's going on in the entorhinal
315
+ [1250.720 --> 1252.560] cortex? You may have heard of these: in the entorhinal
316
+ [1252.560 --> 1255.920] cortex there's some things called grid cells here i have a picture of a
317
+ [1255.920 --> 1259.600] rodent and a human and it shows these two
318
+ [1259.600 --> 1262.800] older and smaller brain structures that are not part of the neocortex, called
319
+ [1262.800 --> 1266.720] the hippocampus and the entorhinal cortex. You can see in the human they're like the
320
+ [1266.720 --> 1270.560] size of maybe your pinky they're sort of wrapped around on the inside
321
+ [1270.560 --> 1274.160] towards the older part of the brain. Grid cells are a very hot topic;
322
+ [1274.880 --> 1277.600] a lot of people have been studying them, and Nobel Prizes have been awarded for
323
+ [1277.600 --> 1284.800] them. And what they do is they represent the location of your body
324
+ [1284.800 --> 1290.480] relative to an environment so and we have them too so let me just give you
325
+ [1290.480 --> 1293.200] what do i mean by representing the location of your body relative to the
326
+ [1293.200 --> 1297.920] environment so i'm in a room right now and i have a sense where i'm in the room
327
+ [1297.920 --> 1300.080] and remember, when you have a sense for something, there are cells
328
+ [1300.080 --> 1304.000] representing that. That's the grid cells in my entorhinal cortex,
329
+ [1304.000 --> 1308.240] and they're representing where i am and even if i close my eyes and i take a
330
+ [1308.240 --> 1311.920] couple steps over here i have a i have a different perception of where i am in
331
+ [1311.920 --> 1315.120] them i know now that the stool is further away that windows further away
332
+ [1315.120 --> 1319.040] if i step back here i know i'm closer to it again so this sense of where i am
333
+ [1319.040 --> 1324.000] is actually these grid cells in the entorhinal cortex, and they're updating as I move.
334
+ [1324.000 --> 1327.840] And because they update even if I'm not looking at anything, you know that
335
+ [1327.840 --> 1331.680] they're updated by my movements themselves; I don't need a sensory input to tell me this.
336
+ [1331.680 --> 1334.560] the brain says as you move i know you're in a different location i'm going to
337
+ [1334.560 --> 1339.280] represent location differently okay let's go back to this so these grid cells
338
+ [1339.280 --> 1343.120] represent the location of the body relative to an environment. The big idea we
339
+ [1343.120 --> 1347.280] have is that grid cells also exist in the neocortex, that they were preserved
340
+ [1347.280 --> 1350.640] in evolutionary time but they're now used for something different the
341
+ [1350.640 --> 1353.600] cortical grid cells represent the location of a
342
+ [1353.600 --> 1357.520] sensory input relative to the object you're sensing and i'll go into this more
343
+ [1357.520 --> 1362.640] in detail first of all i need to tell you how grid cells work now this is
344
+ [1362.640 --> 1367.200] complicated and if i lose you on this we'll come back and you don't need to
345
+ [1367.200 --> 1369.680] hold the details about this but the details are important and they're
346
+ [1369.680 --> 1373.280] interesting so let's just talk about some basic things about grid cells
347
+ [1373.280 --> 1377.040] how they represent location. Typically this is done with a rodent such as a rat
348
+ [1377.040 --> 1381.520] or a mouse and here we have a rodent walking around in a room
349
+ [1381.520 --> 1384.800] and if I were to stick a probe into one of the grid cells in its entorhinal
350
+ [1384.800 --> 1387.680] cortex and say, well, when does that grid cell become active, when does
351
+ [1387.680 --> 1391.360] it fire you would see that it becomes active at different locations in this
352
+ [1391.360 --> 1394.960] environment whenever the the rodent is in one of those red spots that cell
353
+ [1394.960 --> 1398.320] becomes active and when it's not in the red spot it's not it's relatively
354
+ [1398.320 --> 1402.880] inactive. And this occurs no matter how the animal moves around, and that's
355
+ [1402.880 --> 1406.400] where the term grid comes from because it's a sort of a grid-like pattern in
356
+ [1406.400 --> 1409.760] the room where the where this cell becomes active
357
+ [1409.760 --> 1413.440] now we know as i just mentioned a moment ago that the grid cells activity is
358
+ [1413.440 --> 1417.440] updated by motor command because you can this even happens in the dark if the
359
+ [1417.440 --> 1419.760] animal moving around in the dark it doesn't have to see anything for these
360
+ [1419.760 --> 1423.280] grid cells to say hey we're in a new location now
361
+ [1423.280 --> 1426.320] this isn't very useful to knowing exactly where you are because it could be
362
+ [1426.320 --> 1430.320] any one of those spots if we then probe the next cell over one right
363
+ [1430.320 --> 1434.400] close to the first one a different grid cell we might see that it becomes
364
+ [1434.400 --> 1439.600] active in these blue areas and so it's very you can see they're very
365
+ [1439.600 --> 1442.240] similar spacing and similar tiling going on here; it's just
366
+ [1442.240 --> 1446.000] represented one little further over in fact there are grid cells that
367
+ [1446.000 --> 1449.680] represent every spot in this room, but they all have the same sort of
368
+ [1449.680 --> 1453.840] tiling.
369
+ [1453.840 --> 1456.880] So grid cells on their own can tell you something about where you are in the
370
+ [1456.880 --> 1460.160] room but they can't represent a unique location
371
+ [1460.160 --> 1464.080] so how does the brain get around that the basic way we believe and other people
372
+ [1464.080 --> 1466.880] believe it gets around as this follows
373
+ [1466.880 --> 1471.120] imagine i had two grid cell modules meaning two sets of grid cells
374
+ [1471.120 --> 1474.800] module one and module two and they differ slightly they might differ in
375
+ [1474.800 --> 1478.160] their spacing of the where the the firing fields are and they might
376
+ [1478.160 --> 1482.400] differ in their orientation relative to the room now if i want to know where
377
+ [1482.400 --> 1485.760] i am in the room if i looked at the cells in module one i might say well it
378
+ [1485.760 --> 1487.680] could be at any one of those red spots because that's
379
+ [1487.680 --> 1491.600] self-active if i looked at module two i could be any one of those green spots
380
+ [1491.600 --> 1494.640] because that's where this cell is active
381
+ [1494.640 --> 1498.000] and but if i look at the two modules together and say which two cells are
382
+ [1498.000 --> 1500.320] firing one from module one and one from module two
383
+ [1500.320 --> 1503.200] it turns out you end up with a unique location
384
+ [1503.200 --> 1506.320] and so you can now say by looking at the two cells together
385
+ [1506.320 --> 1510.320] aha i know where i am because that's the only place where these two cells are
386
+ [1510.320 --> 1513.280] when those two cells are active it must be at this location
387
+ [1513.280 --> 1516.720] and if you have a set of modules the number of locations you
388
+ [1516.720 --> 1519.920] you can identify uniquely grows exponentially with the set of modules
389
+ [1519.920 --> 1524.960] and so you can create a very very large sort of space of locations
390
+ [1524.960 --> 1529.760] in the in the world and so now the animal can know exactly where it is
391
+ [1529.760 --> 1534.480] one of the things that i won't explain it in more detail but state is that the
392
+ [1534.480 --> 1537.440] the representation of location is different than the kind of thing you learned
393
+ [1537.440 --> 1542.800] in high school you know it's not like x y and z and in this case the
394
+ [1542.800 --> 1546.320] location meaning which cells are firing which grinsals are firing is unique
395
+ [1546.320 --> 1551.200] to the position in the room and to the room so if i know my location i also
396
+ [1551.200 --> 1553.760] i know where i am in the room and i know what room i'm in it's the
397
+ [1553.760 --> 1558.240] unique in all the world so it's a very unique thing
398
+ [1558.240 --> 1563.120] okay that's the basis by grinsals now the theory is
399
+ [1563.120 --> 1567.680] and this is many people study this is that the underlying cortex used grinsals
400
+ [1567.680 --> 1571.840] to learn in map environments like how do i learn the spaces in the rooms i'm
401
+ [1571.840 --> 1575.920] in and we do the same thing too and so the grinsals represent your location
402
+ [1575.920 --> 1579.760] of the body relative to the room i show two rooms here that are different in
403
+ [1579.760 --> 1582.320] the sense that i've shown one having a green
404
+ [1582.320 --> 1586.240] wall and one having a blue wall and so the rat or the road will see these
405
+ [1586.240 --> 1588.880] in different rooms even though the same size
406
+ [1588.880 --> 1591.920] and i've marked three different locations or labeled three different locations
407
+ [1591.920 --> 1595.600] in each room and room one i've labeled location a b and c and room two
408
+ [1595.600 --> 1598.880] I've labeled locations d, e and f. Remember, every location in the room has a
409
+ [1598.880 --> 1602.400] unique representation and as the animal moves these
410
+ [1602.400 --> 1605.040] updates these location representations change
411
+ [1605.040 --> 1609.520] notice if i go from a to b to c or if i go direct from a to c i'm always
412
+ [1609.520 --> 1613.120] going to get to c. This is called path integration. So no matter how I get
413
+ [1613.120 --> 1615.600] there you're always going to get the same location
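Path integration, as used here, just means the location state is updated by the movements alone, so any route between two points ends in the same representation. A minimal Python sketch (hypothetical 2-D coordinates standing in for the grid-cell state):

def integrate(start, moves):
    # The location is updated by each motor command; no sensory input needed.
    x, y = start
    for dx, dy in moves:
        x, y = x + dx, y + dy
    return (x, y)

a_to_b_to_c = integrate((0, 0), [(2, 1), (1, 3)])   # a -> b -> c
a_to_c      = integrate((0, 0), [(3, 4)])           # a -> c directly
print(a_to_b_to_c == a_to_c)                        # True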
414
+ [1615.600 --> 1620.240] now if you think about this since each of these all these locations are unique
415
+ [1620.240 --> 1624.080] to the room that a room can be defined as a location space of all the
416
+ [1624.080 --> 1628.480] possible location representations in the world there's a unique set
417
+ [1628.480 --> 1632.800] that are assigned to each of these rooms and and so we can think about a room
418
+ [1632.800 --> 1635.360] having this location space even if the animals never
419
+ [1635.360 --> 1638.720] visited some corner of the room, it still has a representation for that
420
+ [1638.720 --> 1642.800] location. Now what we're proposing is the following: the neocortex does
421
+ [1642.800 --> 1647.360] something very similar but instead of the animal your body moving on the room
422
+ [1647.360 --> 1650.960] it's your sensory patches of your sensory organs moving relative to
423
+ [1650.960 --> 1654.320] objects in the world so in the cortex the grid cells represent the
424
+ [1654.320 --> 1658.560] location of a sensor input relative to the object in this case where my finger
425
+ [1658.560 --> 1661.360] is relative to the pen or relative to the coffee cup
426
+ [1661.360 --> 1665.680] i've labeled three locations on the cup x y and z i've really built four
427
+ [1665.680 --> 1669.440] locations relative to the pen and as you move your finger
428
+ [1669.440 --> 1672.960] the cortex updates this location relative those objects notice that i went
429
+ [1672.960 --> 1676.480] for i go from v to t and i go through location w
430
+ [1676.480 --> 1680.240] w is not on the pen but it's still in the location space of the pen
431
+ [1680.240 --> 1683.280] so location space is bigger than just the object the object is maintained in
432
+ [1683.280 --> 1686.720] this location space and so the the cortex would be tracking where that
433
+ [1686.720 --> 1691.440] finger is as it moves so every object in the in the world now has its own unique
434
+ [1691.440 --> 1695.680] location space and this is the key to understanding how the cortex models the
435
+ [1695.680 --> 1699.040] world: objects have their own location spaces, and we have to start thinking
436
+ [1699.040 --> 1704.320] along those lines. So now we go back to the drawing I showed earlier, and I
437
+ [1704.320 --> 1707.600] said, well, we didn't know how the location relative to the object was represented.
438
+ [1707.600 --> 1711.840] We now believe that there are grid cell modules in each cortical column.
439
+ [1711.840 --> 1714.320] i showed those by these green rectangles here
440
+ [1714.320 --> 1718.640] and so they provide the mechanism for representing the location and
441
+ [1718.640 --> 1722.640] everything we've learned about grid cells can now be applied to the cortex as well.
442
+ [1722.640 --> 1726.800] we had another paper which is just posted
443
+ [1726.800 --> 1732.000] just a few weeks ago, which talks about the mechanisms of how these grid cells
444
+ [1732.000 --> 1735.840] in layer six and layer four actually interact. It's a more detailed paper on a
445
+ [1735.840 --> 1740.640] subset of this whole overall theory, and that's the one mentioned as Lewis et al.
446
+ [1740.640 --> 1745.280] So let's review our proposal so far. We propose that grid cells
447
+ [1745.280 --> 1749.120] exist in every cortical column, and they represent the location of the
448
+ [1749.120 --> 1753.200] input to the column relative to the object being sensed. Each column
449
+ [1753.200 --> 1756.720] is thereby able to learn complete models of objects because it knows these location
450
+ [1756.720 --> 1762.320] signals, and the objects in the world have their own unique location spaces.
451
+ [1762.320 --> 1765.920] So once we understood this, there's a whole series of other problems
452
+ [1765.920 --> 1769.040] that we've been puzzling over for years where it became
453
+ [1769.120 --> 1772.480] clear how to solve them i'll just go through a few of them
454
+ [1773.520 --> 1776.640] and as i mentioned here this is uh this basically defines a location-based
455
+ [1776.640 --> 1779.840] framework for understanding the neocortex. So let's talk about one of
456
+ [1779.840 --> 1782.960] these things one of these things is compositional structure everything in the
457
+ [1782.960 --> 1785.760] world is composed of other things. So a door, for
458
+ [1785.760 --> 1791.360] example a door: it has panels and a shape, but it also has a handle that
459
+ [1791.360 --> 1794.160] goes up and down or turns rotates it also got a little
460
+ [1794.160 --> 1798.720] aspect goes in and out it also has hinges and hinges have pins and the
461
+ [1798.720 --> 1802.720] screws and so on um so everything every object in the world is like this
462
+ [1802.720 --> 1807.920] cars are consists of other things and so the cortex has to learn objects as
463
+ [1807.920 --> 1811.280] composed of other objects arranged in particular ways
464
+ [1811.280 --> 1814.160] the example uh we're going to use again in our coffee cup
465
+ [1814.160 --> 1817.680] in this case we have the coffee cup with the Numenta logo on it.
466
+ [1817.680 --> 1821.200] the cup is a previously learned object i mean i know coffee cups
467
+ [1821.200 --> 1825.520] and the logo is a previously learned object i've seen that elsewhere in the
468
+ [1825.520 --> 1828.960] world and i'm trying to learn a new thing a new composition which is the cup
469
+ [1828.960 --> 1832.320] with the logo i've never seen that combination before
470
+ [1832.320 --> 1835.360] and i want to learn it very quickly and efficiently i don't want to have to
471
+ [1835.360 --> 1838.800] relearn the cup or relearn the logo i want to be able to say here's a new
472
+ [1838.800 --> 1841.760] object consists of something i already know and something else i already know
473
+ [1841.760 --> 1845.280] bingo it's done here's the new object and i want to i basically want to carry on
474
+ [1845.280 --> 1847.920] all knowledge associated with the logo and all knowledge associated with the
475
+ [1847.920 --> 1851.280] cup into a new object so we realize this can be done
476
+ [1851.280 --> 1855.520] by thinking about location spaces the two objects the cup and the logo have
477
+ [1855.520 --> 1858.320] their own location spaces i've been i've labeled three
478
+ [1858.320 --> 1862.720] points or three locations on the cup a b and c and i've labeled three locations
479
+ [1862.720 --> 1866.960] on the logo x y and z and when the logo is positioned
480
+ [1866.960 --> 1871.520] relative to the cup they they're in this case if i move the logo on top of the
481
+ [1871.520 --> 1875.840] cup then a and x are physically the same location meaning point a on the
482
+ [1875.840 --> 1880.320] cup space is physically the same location as point or location x on the logo
483
+ [1880.320 --> 1885.280] space the same is true with b and y and c and one so those blue arrows in some
484
+ [1885.280 --> 1889.600] sense represent a like a transformer a way of saying if i can go between the
485
+ [1889.600 --> 1893.680] the points in the space of the cup and the points in space of logo i can define
486
+ [1893.680 --> 1896.960] where the logos on the cup are defined in the new object
487
+ [1896.960 --> 1900.080] and so every point of logo and every point of cup there's a one-to-one
488
+ [1900.080 --> 1903.520] corresponds between those two so basically there's one sort of blue arrow one
489
+ [1903.520 --> 1906.560] sort of transform that can be taken represent this new object
490
+ [1906.560 --> 1912.640] and how can that be done um we are proposing a new type of cell to do this
491
+ [1912.640 --> 1916.800] we call it uh displacement cells that solve this problem it's a fundamental
492
+ [1916.800 --> 1920.880] problem it has to be solved by the cortex here i'm going to illustrate uh a
493
+ [1920.880 --> 1924.240] little bit about how displacement cells work just to give you a flavor for it
494
+ [1924.240 --> 1928.080] now one of showing here are three grid cell modules these are the actual
495
+ [1928.080 --> 1931.920] imagine these little rectangles actually populations and neurons in
496
+ [1931.920 --> 1936.320] your columns and your cortex and i'm showing at one time the green dot
497
+ [1936.320 --> 1939.360] represents which cell is active it's it's not this rubbing
transcript/allocentric_Ikg0gmekByE.txt ADDED
@@ -0,0 +1,987 @@
1
+ [0.000 --> 4.560] This is from Campinas, in the state of São Paulo, where I was.
2
+ [4.560 --> 9.160] And so now we have the word, let's say, future.
3
+ [9.160 --> 12.840] So we have a lexical item, "future", that
4
+ [12.840 --> 15.280] is directly a temporal term.
5
+ [15.280 --> 21.080] But when we look at what's in there as an ideogram or pictogram
6
+ [21.080 --> 25.280] or a graphic production or depending on how you want to take it,
7
+ [25.280 --> 29.120] you will see that you have a text that says there, for example,
8
+ [29.120 --> 32.820] is how to prepare the citizens of the students
9
+ [32.820 --> 36.200] high school for the challenges of the future in society.
10
+ [36.200 --> 38.920] Here it says something like high school.
11
+ [38.920 --> 43.720] And then what is interesting, different from grammar
12
+ [43.720 --> 46.880] and linguistic production, is that you
13
+ [46.880 --> 49.200] don't have a particular rule for how
14
+ [49.200 --> 51.920] you're supposed to explore this figure.
15
+ [51.920 --> 57.200] You may go with your own eyes working.
16
+ [57.200 --> 60.760] You may go with your eyes over the text down there first,
17
+ [60.760 --> 67.280] or maybe, I'll say, or maybe you focus your attention
18
+ [67.280 --> 68.760] over here or maybe over there.
19
+ [68.760 --> 71.280] So there's no particular prescription that
20
+ [71.280 --> 74.480] gives a step-by-step procedure.
21
+ [74.480 --> 76.400] What's the question for cognitive science?
22
+ [76.400 --> 77.160] It will be OK.
23
+ [77.160 --> 78.280] We have a term future.
24
+ [78.280 --> 80.440] We have something to be pairing about the future.
25
+ [80.440 --> 83.520] But why is this person projected frontwards?
26
+ [83.520 --> 86.920] What is it about frontwards or front?
27
+ [86.920 --> 90.040] That has anything to do with the future.
28
+ [90.040 --> 92.360] And I did some experiments like in La Ngo.
29
+ [92.360 --> 94.920] I was manipulating this figure that I said, OK,
30
+ [94.920 --> 98.680] this is the educational system A. This will happen.
31
+ [98.680 --> 101.520] The educational system B was like this.
32
+ [101.520 --> 104.040] And then the guy was falling over here.
33
+ [104.040 --> 106.640] And then the educational system C was like this.
34
+ [106.640 --> 108.240] And then the question was, which one
35
+ [108.240 --> 111.760] is the best educational system preparing for the future?
36
+ [111.760 --> 113.680] And which one is the worst?
37
+ [113.680 --> 118.920] And then the answer is the best being A.
38
+ [118.920 --> 123.360] And then the worst was C. So we get transitive inferences:
39
+ [123.360 --> 126.200] better than this one, worse than that one.
40
+ [126.200 --> 128.640] The question then is why?
41
+ [128.640 --> 129.800] And people don't know why.
42
+ [129.800 --> 131.440] People just say, well, it's obvious.
43
+ [131.440 --> 132.120] This is better.
44
+ [132.120 --> 134.440] The other one is not as good as to hear
45
+ [134.440 --> 135.960] that one goes further away.
46
+ [135.960 --> 139.160] But there's no temporal reasoning explanation
47
+ [139.160 --> 142.400] about times that is directly, we just
48
+ [142.400 --> 145.120] use of these forms of mappings, but we've
49
+ [145.120 --> 148.960] not necessarily understand them in any conscious way.
50
+ [148.960 --> 151.040] And of course, we can come up with stories
51
+ [151.040 --> 154.480] and we can call them, as cognitive scientists...
52
+ [154.480 --> 155.560] Confabulations.
53
+ [155.560 --> 156.080] Yes.
54
+ [156.080 --> 159.320] Somebody could you over the line at the front in the book?
55
+ [159.320 --> 160.480] So here's now example.
56
+ [160.480 --> 164.280] Now we go beyond text and beyond just illustrations.
57
+ [164.280 --> 169.320] Now it's real action, motor action, in real time.
58
+ [169.320 --> 171.800] So here's a delay scene.
59
+ [171.800 --> 172.800] We're here in a delay.
60
+ [172.800 --> 173.520] Maybe it's me.
61
+ [173.520 --> 174.480] I'm a little bit dyslexic.
62
+ [174.480 --> 175.600] I read this three times.
63
+ [175.600 --> 176.840] Maybe it's me.
64
+ [176.840 --> 179.160] It's the We're Pleth country place,
65
+ [179.160 --> 182.320] skilled nursing and rehabilitation home,
66
+ [182.320 --> 183.880] and this is their slogan.
67
+ [183.880 --> 186.680] Helping residents today, remember tomorrow's yesterday.
68
+ [186.680 --> 189.320] I don't know.
69
+ [189.320 --> 192.160] What would that be today?
70
+ [192.160 --> 196.320] Tomorrow's yesterday, today.
71
+ [196.320 --> 198.280] What does that mean?
72
+ [198.280 --> 199.120] All right.
73
+ [199.120 --> 199.640] So.
74
+ [199.640 --> 206.920] So this is not fully natural language in the sense
75
+ [206.920 --> 209.640] that the guy is not talking to an
76
+ [209.640 --> 211.800] interlocutor in a fully natural scene.
77
+ [211.800 --> 215.360] But still it can give some interesting information
78
+ [215.360 --> 216.520] just to illustrate the point.
79
+ [216.520 --> 218.760] So here we have a text that has clearly
80
+ [218.760 --> 222.440] some temporal lexical items that I've worked tomorrow
81
+ [222.440 --> 223.840] or the work yesterday.
82
+ [223.840 --> 226.120] They have other grammatical issues there.
83
+ [226.120 --> 230.400] But for the point here is that when he is saying or referring
84
+ [230.400 --> 233.920] to that, certain things are going to happen with his hands
85
+ [233.920 --> 235.000] and mouth and face.
86
+ [235.000 --> 237.720] So we say, wouldn't that be today?
87
+ [237.720 --> 241.120] So he said, here we go.
88
+ [241.120 --> 243.800] That's the mouth shape for it.
89
+ [243.800 --> 244.880] Wouldn't that?
90
+ [244.880 --> 245.600] OK.
91
+ [245.600 --> 248.720] And then we're going to go about 40 milliseconds
92
+ [248.720 --> 249.840] made by a firm something.
93
[249.840 --> 252.560] You can see now the finger, the index finger,
[252.560 --> 255.920] is starting to come out.
[255.920 --> 261.120] He's about to say "be." Wouldn't that, that's the "be," starting now.
[261.120 --> 262.120] Wouldn't that be?
[262.120 --> 265.040] By now the hand shape is incredibly well formed,
[265.040 --> 266.680] with the index coming out.
[266.680 --> 269.440] So the hand morphology is very well defined.
[269.440 --> 273.480] And now the interesting thing is that with the articulation
[273.480 --> 279.680] of the syllable of "today," he's also going to go: today.
[279.680 --> 282.800] That's the "-ay" of today, eyes close, et cetera.
[282.800 --> 284.720] And now he stays there, pointing.
[284.720 --> 288.280] So this is an example of what's called abstract pointing.
[288.280 --> 293.400] He's not pointing at a guacamole stain or a name or something.
[293.400 --> 295.440] He's pointing at today.
[295.440 --> 299.160] So it's all orchestrated within a few hundred milliseconds:
[299.160 --> 302.440] just as he's about to say it, the hand is already coming out
[302.440 --> 305.360] with the finger, landing when he says today.
[305.360 --> 307.680] And then he wants to contrast that with tomorrow:
[307.680 --> 313.120] that's tomorrow, hand back in and out,
[313.120 --> 318.320] this time pointing in front of him.
[318.320 --> 321.400] And then he goes back to today, et cetera.
[321.400 --> 324.520] So this is just to illustrate that if you ask this person
[324.520 --> 327.600] what he is doing at this moment, he would be like, no,
[327.600 --> 328.840] he doesn't have a clue.
[328.840 --> 332.120] And yet he is coordinating all these various motor actions
[332.120 --> 333.960] in real time, very specifically.
[333.960 --> 336.200] And the pointings are in this sense abstract.
[336.200 --> 337.800] They have temporal reference and they
[337.800 --> 342.200] have very specific orientations in space.
[342.200 --> 346.840] So when you do tons of analyses of these sorts,
[346.840 --> 349.320] the linguistic material, the illustrations
[349.320 --> 354.480] and graphic productions, and then analyze the gesture production,
[354.480 --> 355.880] co-speech gesture production,
[355.880 --> 358.960] then when you get the summary, this is the one-liner,
[358.960 --> 361.440] it is that temporal expressions are primarily
[361.440 --> 364.800] construed in terms of one-dimensional space.
[364.800 --> 368.360] So we have many forms of space: if you take, let's say, a surface,
[368.360 --> 370.680] and you pour some water on top of the surface,
[371.280 --> 375.240] it's going to go in a center-periphery sort of motion.
[375.240 --> 378.160] That is not the type of space we recruit for time
[378.160 --> 380.040] in everyday cases.
[380.040 --> 382.040] It tends to be one-dimensional.
[382.040 --> 385.600] So that's something that, sort of, apparently,
[385.600 --> 390.160] we all do, as we will see. Because now
[390.160 --> 393.480] the question is: if we are recruiting space and spatial tools,
[393.480 --> 396.400] we want to know in more detail what type of space
[396.400 --> 399.800] and what type of spatial tools are recruited
[399.800 --> 404.200] for supporting the abstract notion of time.
[404.200 --> 408.600] So we're going to unpack a little bit the types of space.
[408.600 --> 411.880] One thing that's been studied quite a bit in linguistics
[411.880 --> 414.240] and anthropology, and experimental psychology
[414.240 --> 417.480] as well, is spatial frames of reference,
[417.480 --> 422.520] and the summary, sort of sped up, is that there are at least,
[422.520 --> 425.840] well, there may be more, but there are at least these three forms
[425.840 --> 429.720] in which observers characterize the relative positions
[430.160 --> 430.960] of spatial objects.
[430.960 --> 434.440] So the case of "the pig is in front of the cow,"
[434.440 --> 439.040] wherever the observer is, is called the object-centered
[439.040 --> 440.560] frame of reference, because it does not depend
[440.560 --> 443.400] on where the observer is when we're
[443.400 --> 445.560] characterizing, or construing, the pig
[445.560 --> 447.560] as being in front of the cow.
[447.560 --> 449.880] There's another one, which is called egocentric,
[449.880 --> 452.400] in which the observer would say something
[452.400 --> 454.560] like "the pig is to the right of the cow."
[454.560 --> 458.040] In this case, depending on where the observer is,
[458.320 --> 461.320] the form would change dramatically.
[461.320 --> 465.640] There, the reference is along the configuration
[465.640 --> 466.480] of these two objects.
[466.480 --> 468.680] Here, the reference relies
[468.680 --> 470.280] on the position of the observer.
[470.280 --> 473.040] And then we have another form, which is called geocentric.
[473.040 --> 475.040] Some people call it absolute.
[475.040 --> 479.120] And this example would be a cardinal, absolute one,
[479.120 --> 481.560] in the sense that the observer may say something
[481.560 --> 483.760] like "the pig is east of the cow."
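The three construals can be made concrete with a small sketch, a hypothetical Python illustration added here; the scene layout, coordinates, and variable names are all assumptions, not anything from the talk:

    import numpy as np

    # Hypothetical scene: cow at the origin facing north, pig due north of it,
    # observer standing to the east and looking west at the pair.
    cow_pos    = np.array([0.0, 0.0])
    cow_facing = np.array([0.0, 1.0])          # the cow's intrinsic front
    pig_pos    = np.array([0.0, 1.0])
    obs_facing = np.array([-1.0, 0.0])         # observer looks west

    rel = pig_pos - cow_pos

    # Object-centered: does the pig lie along the cow's own front axis?
    print("pig in front of the cow:", bool(rel @ cow_facing > 0))        # True

    # Egocentric: does the pig lie toward the observer's right-hand side?
    obs_right = np.array([obs_facing[1], -obs_facing[0]])                # rotate facing by -90 degrees
    print("pig right of the cow (for this observer):", bool(rel @ obs_right > 0))  # True

    # Geocentric (absolute): compare fixed cardinal coordinates (x = east).
    print("pig east of the cow:", bool(rel[0] > 0))                      # False

Only the geocentric judgment stays fixed when the observer walks around the scene; the egocentric one can flip, which is exactly the dissociation the rotation manipulations later in the talk exploit.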
[483.760 --> 486.400] Now, the truth is that we tend to use all of them
[486.400 --> 487.440] for different purposes.
[487.440 --> 489.360] So if you give directions for how to get from here
[489.360 --> 492.800] up to LA, you may focus on this one.
[492.800 --> 496.200] If you want to say to someone, oh, pass me that bottle of water,
[496.200 --> 498.480] you may say something like "the bottle
[498.480 --> 501.920] to the right of the salt," or maybe "the bottle
[501.920 --> 503.680] that's in front of so-and-so," and so on.
[503.680 --> 507.440] So at tabletop, small scales,
[507.440 --> 511.640] we may use this; at bigger scales we may use that, and so on.
[511.640 --> 514.520] But there are some groups around the world that tend to do otherwise;
[514.520 --> 516.200] this is an interesting finding.
[516.200 --> 520.760] Lots of scholars working in Australia,
[520.760 --> 526.120] and in Nepal, and India, and Bali, and so on, found that
[526.120 --> 529.320] people there would tend to prefer this form for everything.
[529.320 --> 531.360] So they would say things like, oh, pass me that
[531.360 --> 535.040] north bottle of water, and things like that.
[535.040 --> 536.120] So these are the kinds of things
[536.120 --> 537.160] that now we want to explore.
[537.160 --> 540.400] If people weight these spatial frames
[540.400 --> 543.320] of reference in a different way,
[543.320 --> 545.800] is that going to shape the type of space
[545.800 --> 548.200] that potentially could be recruited
[548.200 --> 553.080] for grounding temporal relations?
[553.080 --> 554.400] Now, that's about space.
[554.400 --> 558.880] Let me now make yet another distinction, regarding time.
[558.880 --> 562.280] This goes back to the work of the philosopher McTaggart,
[562.280 --> 565.400] so this is now a century ago.
[565.400 --> 569.520] The distinction, really, between deictic time and sequence time.
[569.520 --> 572.960] Deictic time refers to the type of time
[572.960 --> 577.320] that has a center, a deictic center, technically
[577.320 --> 580.640] speaking; essentially, it has a now.
[580.640 --> 584.840] And once you have a now, you have a future and a past.
[584.840 --> 587.360] If you don't have a now, you do not have future,
[587.360 --> 589.560] and you do not have past.
[589.560 --> 593.680] So really, the future and past are fundamentally
[593.680 --> 598.760] deictic categories, resulting from the center, now.
[598.760 --> 602.360] So you would say "she left two days ago";
[602.360 --> 604.880] that would be relative to a now.
[604.880 --> 606.240] "The week ahead looks good."
[606.240 --> 611.040] This term "ahead" operates on spatial terms,
[611.040 --> 613.960] but also assumes that there is a now.
[613.960 --> 616.840] So if I, let's say, want to refer to the first
[616.840 --> 619.400] week of November as "the week ahead,"
[619.400 --> 622.720] I can only use this term at this very moment.
[622.720 --> 626.040] But in mid-November, if I want to refer to the beginning
[626.040 --> 629.880] of November, I cannot use the word "ahead" anymore.
[629.880 --> 632.760] Sequence time doesn't have a now.
[632.760 --> 634.920] In that sense it is formally and logically
[634.920 --> 635.440] simpler.
[638.000 --> 640.840] Some people call it tenseless time.
[640.840 --> 643.720] And the relationships that you normally use
[643.720 --> 646.360] are the earlier-than and later-than relationships.
[646.360 --> 650.600] So you'd say things like "spring follows winter."
[650.600 --> 652.480] That's true, no matter when.
[652.480 --> 654.600] No need for a now; "two days before
[654.600 --> 656.120] Thursday," things like that.
[656.120 --> 658.200] Or actually any storytelling.
[658.200 --> 660.080] You tell a story of the Second World War.
[660.080 --> 661.680] You say, OK, at the beginning this happened.
[661.680 --> 662.440] Then this happened.
[662.440 --> 665.240] Oh, I forgot to say that right before that, this happened.
[665.240 --> 667.680] And then before that, that happened.
[667.680 --> 670.120] It works independently of the now, in that case.
[670.120 --> 672.560] This is going to be important for many reasons.
[672.560 --> 676.680] But today I want to focus on deictic time.
[676.680 --> 681.000] Here's an example, by the way, of a linguistic expression
[681.000 --> 684.800] that could be understood in a deictic sense or in a sequence
[684.800 --> 685.800] sense.
[685.800 --> 689.320] To say "move the meeting forward,"
[689.320 --> 693.560] it could be construed here in a deictic sense or a sequence
[693.560 --> 695.160] sense, with consequences.
[695.160 --> 701.720] For example, you say something like: the Wednesday meeting,
[701.720 --> 703.440] we can't do it on Wednesday.
[703.440 --> 707.360] Let's move the Wednesday meeting forward two days.
[707.360 --> 710.000] When is the meeting?
[710.000 --> 713.440] How many votes for Friday?
[713.440 --> 716.360] How many votes for Monday?
[716.360 --> 717.040] There we go.
[717.040 --> 720.560] Some people say, well, how come? People in this place,
[720.560 --> 723.280] same language, more or less,
[723.280 --> 726.800] and you have radically different inferences out of that.
[726.800 --> 729.040] Well, the short answer is that, well, some
[729.040 --> 733.400] people interpret the "forward" in a deictic sense.
[733.400 --> 738.560] So two days ahead, in the deictic sense, in front of us,
[738.560 --> 740.640] like Jay Leno pointing in front.
[740.640 --> 743.840] And some people interpret it as the front of the sequence,
[743.840 --> 745.640] like the pig in front of the cow.
[745.640 --> 748.520] And there, in front is earlier; therefore, Monday.
[748.520 --> 750.240] That's the one-liner.
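The two readings can be shown with a toy calculation, a hedged sketch in Python; the calendar date is invented for illustration, not from the talk:

    import datetime as dt

    meeting = dt.date(2014, 11, 5)          # a Wednesday (hypothetical date)

    # Deictic reading: "forward" = further into the ego's future -> Friday
    print(meeting + dt.timedelta(days=2))   # 2014-11-07

    # Sequence reading: "forward" = earlier in the sequence of days -> Monday
    print(meeting - dt.timedelta(days=2))   # 2014-11-03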
[750.240 --> 752.000] Anyway, so there are linguistic expressions
[752.000 --> 756.960] that could construe, like in this case, one type of time
[756.960 --> 759.240] or the other, and it's very important to keep them
[759.240 --> 760.560] apart for what follows.
[760.560 --> 763.760] So we're going to be focused on deictic time
[763.760 --> 766.120] in this particular presentation.
[766.120 --> 770.520] So how is deictic time mapped to space?
[770.520 --> 775.080] Well, after studying many, many cultures and gathering
[775.080 --> 777.720] all the information, so many, many studies over the world,
[777.720 --> 782.800] you get essentially one pattern that is really widespread,
[782.800 --> 786.600] which is this egocentric pattern, which is pervasive
[786.600 --> 794.000] across cultures: the present is the deictic center,
[794.000 --> 794.520] technically.
[794.520 --> 797.960] So spatially it is collocated with the ego.
[797.960 --> 801.040] So this is now Jay Leno pointing, for today,
[801.040 --> 805.040] to where he is standing, sitting in that case.
[805.040 --> 806.920] And if the speaker is walking and says, well,
[806.920 --> 807.920] let's do it right now,
[807.920 --> 809.400] and then walks around saying, well,
[809.400 --> 810.520] I don't know, today,
[810.520 --> 812.600] then it's no more than: we're pointing here,
[812.600 --> 813.800] and then we're pointing here.
[813.800 --> 816.840] So it's really about collocation.
[816.840 --> 819.640] Then future is in front of ego and past is behind ego.
[819.640 --> 823.480] This is essentially the short, Mickey Mouse version
[823.480 --> 826.280] of it: what you have is a speaker and his face.
[826.280 --> 828.440] And you have, at least in English-like languages,
[828.440 --> 829.440] some variations.
[829.440 --> 834.200] You may have the temporal entities in the landscape,
[834.200 --> 837.600] so to speak, and the observer moving.
[837.600 --> 841.760] So you will get things like "we're approaching
[841.760 --> 843.360] the end of the year."
[843.360 --> 845.720] So the end of the year is a location in space
[845.720 --> 847.840] and you move towards that location.
[847.840 --> 850.840] Or you may say something like "Halloween is coming."
[850.840 --> 854.160] So then you're standing and the temporal event
[854.160 --> 855.520] is moving towards you.
[855.520 --> 858.840] The common thing is that future is in front of the observer
[858.840 --> 860.760] and past is behind the observer.
[860.760 --> 864.640] The deictic center is collocated with the observer.
[864.640 --> 867.200] And you get this also in sign languages,
[867.200 --> 871.880] in ASL, for example. It's attested with linguistic analyses
[871.880 --> 874.920] and also all types of psychological experiments
[874.920 --> 876.240] you can do in the lab.
[876.240 --> 879.760] Or gesture analysis, in the sense of the example
[879.760 --> 882.480] I gave you with Jay Leno.
[882.480 --> 885.480] Now, the question, of course, is, well,
[885.480 --> 890.840] this is tremendously widespread, all over the place.
[890.840 --> 892.360] But is it universal?
[892.360 --> 894.760] And this is where I want to begin now.
[894.760 --> 897.680] The remainder of the talk is to sort of see what happens
[897.680 --> 904.800] with this apparently universal orientation for deictic time.
[904.800 --> 907.400] So let's go first with the Aymara.
[907.400 --> 910.160] I will summarize briefly, for those who were not here
[910.160 --> 913.200] a few years ago, just briefly summarize
[913.200 --> 915.680] the findings we've obtained recently with that.
[915.680 --> 918.560] So if you just start with linguistic expressions,
[918.560 --> 924.520] here is an illustration, for example, of a past expression,
[924.520 --> 925.840] an expression involving the past.
[925.840 --> 928.040] It's "ancha nayra pachana."
[928.040 --> 930.040] If you do the morpheme-by-morpheme gloss,
[930.040 --> 932.920] you get something like: ancha is "a lot."
[932.920 --> 935.880] Nayra is the morpheme we really care about here,
[935.880 --> 940.680] because it denotes eye, or sight, or front.
[940.680 --> 944.080] Pacha, roughly speaking, is time; it's more complicated than that,
[944.080 --> 945.560] but let's leave it there for the moment.
[945.560 --> 949.080] And -na is a suffix that works very much like English
[949.080 --> 954.880] when you say "on Monday," "in September," or "at night,"
[954.880 --> 956.080] something like that.
[956.080 --> 962.320] So it fixes time in some particular dimension.
[962.320 --> 964.800] The literal translation of that morpheme gloss
[964.800 --> 970.280] would be something like "a lot eye/front time at."
[970.280 --> 971.360] When is it used?
[971.360 --> 974.080] What does it really mean? They say it for something like
[974.080 --> 976.480] "a long time ago."
[976.480 --> 979.640] So here we have the way to say "long time ago,"
[979.640 --> 982.400] and you essentially recruit a morpheme
[982.400 --> 986.200] denoting something like eye, or sight, or front.
[986.200 --> 990.200] So, hypothesis: when you start gathering a lot of these data,
[990.200 --> 992.560] you say, well, is it something about past
[992.560 --> 997.480] and the front of the speaker, eye, or sight, or front,
[997.480 --> 1002.080] that is now anchoring this form of understanding?
[1002.080 --> 1003.680] How about future expressions?
[1003.680 --> 1005.920] Here's one: "akata qhiparu."
[1005.920 --> 1008.200] "Akata qhiparu": do the morpheme gloss,
[1008.200 --> 1011.880] and you get something like: aka is "here" or "this."
[1011.880 --> 1015.680] -ta and -ru work very much like the English "from" and "to"
[1015.680 --> 1016.080] words.
[1016.080 --> 1019.920] So to say "from San Diego to LA," you would say
[1019.920 --> 1022.680] "San Diego-ta, LA-ru," something like that.
[1022.680 --> 1025.560] Qhipa is the morpheme that's relevant here.
[1025.560 --> 1029.120] It denotes back or behind, the anatomical back.
[1029.120 --> 1033.720] The literal translation here: "this/here from, back to(wards)."
[1033.720 --> 1035.080] When is it used?
[1035.080 --> 1036.960] "From now on."
[1036.960 --> 1039.520] So this is what a mom would say: okay, from now on,
[1039.520 --> 1042.000] you will eat all the food I prepare for you,
[1042.000 --> 1046.040] meaning all the cooking events in the future
[1046.040 --> 1048.120] are covered by that expression.
[1048.120 --> 1050.600] So that's the kind of thing that would now recruit the
[1050.600 --> 1053.960] term back, the behind, the back.
[1053.960 --> 1056.960] Now, this was very exciting the first time we encountered
[1056.960 --> 1061.840] it, but of course there is a question, which is:
[1061.840 --> 1066.120] is this truly an egocentric counterexample to the universal?
[1066.120 --> 1069.720] This is the point: we need to make sure that
[1069.720 --> 1074.160] the front and the back are operating on an ego, not
[1074.160 --> 1075.320] on something else.
[1075.320 --> 1077.080] Why is that relevant?
[1077.080 --> 1082.080] Because, let's say, you take the word "before" in English.
[1082.840 --> 1087.600] "Before" has that morpheme in it, "fore," like front,
[1087.600 --> 1090.720] and when we say "the day before yesterday,"
[1090.720 --> 1093.120] that day happens further in the past
[1093.120 --> 1095.600] than yesterday, and yet it recruits the term
[1095.600 --> 1097.480] "fore," or front.
[1097.480 --> 1100.760] Same thing with "after": "the day after tomorrow" recruits
[1100.760 --> 1105.000] the "aft," like the aft of a ship or an aircraft,
[1105.000 --> 1108.520] the rear part, to denote
[1108.520 --> 1111.480] future, relative future, in that sense.
[1111.480 --> 1114.880] So the question, and this is what happens also with the
[1114.880 --> 1118.240] example with the Wednesday meeting, is:
[1118.240 --> 1122.080] if you're thinking sequentially, and in that sequence you
[1122.080 --> 1125.240] have a front and a back, it could be that the front and
[1125.240 --> 1128.480] back, like in English "before" and "after,"
[1128.480 --> 1132.560] are recruiting front and aft relative to a sequence, but not
[1132.560 --> 1134.000] relative to an ego.
[1135.160 --> 1138.800] So what we need to find out is what we would do in
[1138.800 --> 1141.240] English with "the week ahead looks good":
[1141.240 --> 1144.000] if you really look at all the linguistic material,
[1144.000 --> 1146.680] it's going to be "the week ahead of us."
[1146.680 --> 1149.600] You would find markers saying the reference point of
[1149.600 --> 1153.200] "ahead" in those cases is us: some community,
[1153.200 --> 1155.360] some humans, and so on.
[1155.360 --> 1160.360] As opposed to "it is 20 minutes ahead of 1 p.m.,"
[1160.760 --> 1164.280] in which the reference point is another temporal mark.
[1164.280 --> 1166.920] So what we need to know is: what is the reference point?
[1166.920 --> 1171.240] If the reference point is really ego, then this
[1171.240 --> 1174.920] Aymara case is truly a counterexample to this widespread
[1174.920 --> 1177.280] pattern, thought at some point to be universal.
[1178.280 --> 1180.800] But if it's not, if it's just like English "before" and
[1180.800 --> 1182.240] "after," then it's no big deal.
[1182.240 --> 1185.600] It's just like English "before" and "after," or
[1185.600 --> 1190.720] "avant" in French, or "antes" in Spanish, and so on.
[1190.720 --> 1193.440] So the linguistic test: a linguist would say,
[1193.440 --> 1195.720] just check for the reference point.
[1195.720 --> 1198.240] The problem is that in Aymara, you check for the reference points,
[1198.240 --> 1201.160] and for reasons I'm not going to go into about
[1201.160 --> 1203.160] Aymara grammar, you cannot find them.
[1203.160 --> 1207.360] So all you find is front and back, but you can't tell, just
[1207.360 --> 1210.600] through the linguistic material, what the reference point is
[1210.600 --> 1211.680] for those expressions.
[1212.680 --> 1216.320] So it's not conclusive just looking at the linguistic material;
[1216.320 --> 1218.840] you have to actually look at something else.
[1218.840 --> 1222.240] And etymology, of course, is not enough either:
[1222.240 --> 1225.640] it may inspire or bring some hypotheses, but
[1225.640 --> 1229.240] it's not the last word, because it doesn't tell you much
[1229.240 --> 1232.240] about the cognitive reality of the speaker.
[1232.240 --> 1235.400] So it could be that the term meant something 500 years ago;
[1235.400 --> 1236.600] that's where it came from.
[1236.600 --> 1241.720] But today, it just doesn't have any role in the actual,
[1241.720 --> 1243.840] let's say, motor production of the speaker,
[1243.840 --> 1245.520] like in gesture, for example.
[1245.520 --> 1248.520] So how can we tell?
[1248.520 --> 1252.320] Well, this is where we say we have some challenges here,
[1252.320 --> 1256.640] because we cannot rely only on the language as a source.
[1256.640 --> 1258.480] We can't just study that.
[1258.480 --> 1260.560] It's not transparent in that sense.
[1260.560 --> 1264.680] Then we need to also study the community, which brings in
[1264.680 --> 1266.160] issues of ecological validity.
[1266.160 --> 1268.240] We want to boost the ecological validity.
[1268.240 --> 1272.480] Avoid, let's say, tests that we bring from the West and things
[1272.480 --> 1272.840] like that.
[1272.840 --> 1277.200] We want something that would maximize ecological validity,
[1277.200 --> 1280.280] external validity, and internal validity.
[1280.280 --> 1282.640] So that's why we don't want to have, let's say,
[1282.640 --> 1286.440] overt judgments of people saying: OK, where is the future for you?
[1286.440 --> 1288.240] Tell me.
[1288.240 --> 1290.200] Because then we're essentially
[1290.200 --> 1293.280] asking them to come up with a story.
[1293.280 --> 1294.880] So this is when we said, let's study
[1294.880 --> 1296.960] gesture production, in the same vein
[1296.960 --> 1298.920] as what I illustrated with this example.
[1298.920 --> 1301.960] What happens when we analyze the details of
[1301.960 --> 1303.520] co-speech gesture production?
[1303.520 --> 1306.880] Then we can make explicit hypotheses about future
[1306.880 --> 1310.320] and past and see where they are pointing.
[1310.320 --> 1313.200] Are they pointing more along the lines of, let's say, an object,
[1313.200 --> 1315.640] this being the front and the back of the object,
[1315.640 --> 1318.040] with the morphemes operating
[1318.040 --> 1320.960] like qhipa and nayra on that? That's one story.
[1320.960 --> 1323.040] If the pointings go this way, really
[1323.040 --> 1325.200] relative to the body of the speaker,
[1325.200 --> 1328.520] then we're in the business of anchoring the deictic center
[1328.520 --> 1329.720] on the ego.
[1329.720 --> 1333.200] So that's what we did.
[1333.200 --> 1335.880] So this is a study in the highlands,
[1335.960 --> 1337.000] as I was saying.
[1337.000 --> 1341.120] And we did this particular study in the northernmost
[1341.120 --> 1344.600] tip of Chile, at the border with Bolivia,
[1344.600 --> 1346.960] and Peru over here.
[1346.960 --> 1350.800] We had about 20 hours of raw video of people talking
[1350.800 --> 1355.520] about temporal expressions, from 17 towns in northern Chile.
[1355.520 --> 1360.120] All of them at about 4,000 meters above sea level.
[1360.120 --> 1363.200] That's like 13,000 feet, more or less.
[1363.200 --> 1364.480] It's pretty high.
[1364.480 --> 1367.920] And centered on everyday temporal expressions:
[1367.920 --> 1369.720] anecdotes, stories, and so on.
[1369.720 --> 1371.480] Essentially, the method we used here
[1371.480 --> 1374.800] was a gestural elicitation paradigm.
[1374.800 --> 1376.760] We wanted them to explain things.
[1376.760 --> 1380.840] And so we put to them things like the temporal equivalent
[1380.840 --> 1385.560] of something like "an apple a day keeps the doctor away."
[1385.560 --> 1386.800] What does that mean?
[1386.800 --> 1388.960] So then you would explain to someone:
[1388.960 --> 1392.760] okay, what we mean is that it means so-and-so, blah, blah, blah.
[1392.760 --> 1395.080] And what we want to see is what happens with their hands
[1395.080 --> 1397.120] when they're explaining that.
[1397.120 --> 1398.760] So we did a bunch of those.
[1399.760 --> 1401.880] So we essentially were analyzing the pointing
[1401.880 --> 1405.400] directionality co-occurring with these temporal terms.
[1406.800 --> 1408.480] And what you can do is this:
[1408.480 --> 1411.960] we did this not just in Aymara, but also in Castellano Andino,
[1411.960 --> 1416.960] which is a kind of creole between Aymara and Spanish,
[1417.240 --> 1419.840] where, by the way, you can also find the same patterns;
[1419.840 --> 1423.080] so this example is more or less from that.
[1423.080 --> 1425.560] So you take the gesture production,
[1425.560 --> 1429.760] you can take the screen, and then you can sort of pinpoint
[1429.760 --> 1432.560] exactly when certain words have been said,
[1432.560 --> 1434.400] and then you see where their hands are.
[1434.400 --> 1436.760] So this guy is saying something like
[1436.760 --> 1439.440] "from last year until this year."
[1439.440 --> 1442.760] And so when he's starting to say "from last,"
[1442.760 --> 1445.880] he's pointing towards the front with the left hand,
[1445.880 --> 1449.520] index out, and completes that with
[1449.920 --> 1453.240] "until this year": now you have the pointing down,
[1453.240 --> 1456.800] very much like Jay Leno, in collocation.
[1456.800 --> 1458.920] So we did tons of these.
[1458.920 --> 1463.520] I'm now going to just show you some real gestures,
[1463.520 --> 1465.800] so you get a sense of how it works.
[1465.800 --> 1467.040] Here's one.
[1467.040 --> 1472.040] The top left is based on a misunderstanding.
[1472.160 --> 1475.080] So this guy is talking about achachilas,
[1475.080 --> 1477.840] which is the term for ancestors,
[1477.840 --> 1480.360] and he, the speaker,
[1480.360 --> 1484.240] thinks they're talking about the time of the Incas,
[1484.240 --> 1485.840] before the arrival of the Spaniards,
[1485.840 --> 1487.200] in that sense of ancestors,
[1487.200 --> 1490.520] while the other is just talking about the great-grandparents.
[1490.520 --> 1493.440] So he's going to clarify that to him.
[1493.440 --> 1501.440] So what is that?
[1501.440 --> 1505.360] Achachila is a term for ancestors.
[1505.360 --> 1508.200] This person here on the right says,
[1508.200 --> 1510.520] oh, you mean the Incas' time?
[1510.520 --> 1513.560] And it's: no, no, no, that's way before.
[1513.560 --> 1515.360] You have a bilingual production;
[1515.360 --> 1520.360] the index comes out pointing to the front, and so on.
[1521.360 --> 1522.360] He says it,
[1522.360 --> 1524.360] there, he says it,
[1524.360 --> 1527.560] he's talking about the old times.
[1527.560 --> 1530.240] So at the moment he says "the old times,"
[1530.240 --> 1534.360] the hand comes out, the index, as he says it.
[1535.360 --> 1537.840] The old times.
[1537.840 --> 1540.320] Now, we were rotating people; I didn't mention that.
[1540.320 --> 1541.720] So, in order to really dissociate:
[1541.720 --> 1544.560] with the pointing, if I'm pointing this way,
[1544.560 --> 1546.800] you don't know whether I'm pointing towards the door
[1546.800 --> 1548.560] or in front of me.
[1548.560 --> 1550.280] So if you rotate the person and say, okay,
[1550.280 --> 1552.320] now continue talking, because, you know,
[1552.320 --> 1554.640] if I then go like this, you have better, you know,
[1554.640 --> 1557.280] better info saying: oh, this is about the body,
[1557.280 --> 1558.280] and it's not the door.
[1558.280 --> 1560.840] If I go like that, that is probably anchored
[1560.840 --> 1564.080] on other landmarks.
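The logic of that rotation manipulation can be written down as a small decision rule, a schematic Python sketch; the function names and the 20-degree tolerance are my assumptions, not the study's coding criteria:

    def ang_dist(a, b):
        # Smallest angular difference between two directions, in degrees.
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    def classify_anchor(before_deg, after_deg, rotation_deg, tol=20.0):
        # before_deg / after_deg: pointing directions in world coordinates,
        # measured before and after rotating the speaker by rotation_deg.
        shift = (after_deg - before_deg) % 360.0
        if ang_dist(shift, 0.0) <= tol:
            return "external anchor (pointing stays fixed in the world)"
        if ang_dist(shift, rotation_deg) <= tol:
            return "ego anchor (pointing rotates with the body)"
        return "unclear"

    print(classify_anchor(90.0, 90.0, 180.0))    # external anchor
    print(classify_anchor(90.0, 270.0, 180.0))   # ego anchor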
[1564.080 --> 1565.760] And so these two are about the past;
[1565.760 --> 1568.600] here's an example of future, on the right side now.
[1568.600 --> 1571.320] "Akamarat mararu." Sorry.
[1571.320 --> 1576.320] So "akamarat mararu" is something,
[1576.320 --> 1577.680] wait a minute, sorry.
[1577.680 --> 1580.880] There we go. No, I can't hear it.
[1580.880 --> 1585.560] So "akamarat mararu" is an expression
[1585.560 --> 1588.960] that means "from here until next year."
[1588.960 --> 1593.480] So the speaker is going to produce a gesture
[1593.480 --> 1596.760] that goes like this, and then it's going to show the...
[1596.760 --> 1599.280] So, "from now until next year":
[1599.280 --> 1603.400] a contralateral gesture, and then an ipsilateral gesture
[1603.400 --> 1605.320] with different hand movements.
[1605.320 --> 1606.840] So just to summarize the whole thing,
[1606.840 --> 1610.080] I'm just going to show you the video quickly.
[1610.800 --> 1612.720] "Akamarat,"
[1612.720 --> 1614.040] so here is what I was just telling you:
[1614.040 --> 1617.920] you get "akamarat," from this moment,
[1617.920 --> 1623.360] this moment, and then this moment.
[1623.360 --> 1627.440] And now this is the expression I was referring to.
[1627.440 --> 1628.760] "Akamarat mararu."
[1628.760 --> 1630.600] "Akamarat mararu." Sorry.
[1630.600 --> 1635.280] "Akamarat mararu... mararu... mararu."
[1635.280 --> 1637.840] So here is what I was just telling you.
[1637.840 --> 1643.840] You get "akamarat," "from this year," so you have this pointing with the right hand.
[1643.840 --> 1647.840] It's going to come there: collocation, "from this year."
[1647.840 --> 1653.840] "Until next": so this is a contralateral with the right hand, index out.
[1653.840 --> 1658.840] A second time, and now supported with an ipsilateral with the thumb.
[1658.840 --> 1661.840] So it's a totally different hand morphology,
[1661.840 --> 1664.840] apparently pointing at something behind him.
[1664.840 --> 1666.840] And this is now about the future.
[1666.840 --> 1672.840] So, summarizing: many forms, many handshapes; I'm not going to go into all the details,
[1672.840 --> 1679.840] but we also checked, you know, Aymara people from the community who don't speak the language.
[1679.840 --> 1686.840] And then you will get cases like this person saying something about the roots of the old history,
[1686.840 --> 1692.840] this culture, and so on, pointing towards the back, like what you and I would do, for example.
[1692.840 --> 1695.840] Or here's another speaker talking about the future.
[1695.840 --> 1702.840] This is an Aymara member of the community, but someone who doesn't speak the language.
[1702.840 --> 1709.840] And these people would point to the front for the future, like we would do.
[1709.840 --> 1714.840] These are just some examples. You know, is it just anecdotal?
[1714.840 --> 1717.840] Well, no, you really observe the pattern in this area.
[1717.840 --> 1723.840] Essentially the pattern (this case, for example, is past gestures only)
[1723.840 --> 1727.840] you really see among those who speak Aymara or the creole,
[1727.840 --> 1737.840] while among those who only speak Spanish you see much more the pattern that we have in the West, so to speak,
[1737.840 --> 1740.840] or, say, in English or in Europe or so.
[1740.840 --> 1750.840] Now, the sad thing in this slide is that the pattern is only observed, or primarily observed, among people who are 65 or older.
[1750.840 --> 1757.840] So it means that in this region (and note, we are not claiming this is what's happening in Bolivia or Peru)
[1757.840 --> 1761.840] this is a pattern that is disappearing. This study is now already like nine years old or something,
[1761.840 --> 1767.840] so these people are now 75; many are probably now dead, unfortunately.
[1767.840 --> 1774.840] And the truth is that this is going to be out of the picture in maybe 10 years,
[1774.840 --> 1787.840] meaning that all these efforts at trying to save languages are focused essentially on trying to preserve the phonology and the acoustics and sometimes the grammar,
[1787.840 --> 1796.840] but the forms of thinking that go with languages, like this one, for example, may disappear even though the language is not endangered in any way.
[1796.840 --> 1800.840] There are more than two million speakers of Aymara today.
[1800.840 --> 1807.840] All right. So the Aymara case could be striking, in the sense that all of a sudden you have a culture that construes,
[1807.840 --> 1818.840] egocentrically, the past as being in front and the future behind, but in a certain way it's still egocentric.
[1818.840 --> 1823.840] So the question we had at some point was: could it be that there is a culture that would ground
[1823.840 --> 1831.840] deictic time not in egocentric form, but on something like absolute, geocentric patterns,
[1831.840 --> 1837.840] very much like what I showed here with the cow and the pig, based on north and south or something like that?
[1837.840 --> 1844.840] And it was actually Jürg Wassmann, an anthropologist from Heidelberg, who contacted me after reading the Aymara work.
[1844.840 --> 1851.840] He said, well, this is very good, but you know, I've been working for many years with this group, the Yupno, in Papua New Guinea,
[1851.840 --> 1859.840] and I have a sense that they have a completely different notion of time that you have not described in your previous work.
[1859.840 --> 1863.840] I may be wrong, I don't know, but would you like to come with me?
[1863.840 --> 1866.840] So yes, I said. And then, of course,
[1866.840 --> 1872.840] we organized the trip and we did this study I'm about to present.
[1872.840 --> 1885.840] So the question is: what are the properties of the construal if you now have a culture that spontaneously and primarily grounds temporal relationships,
[1885.840 --> 1891.840] deictic time, in something like geocentric coordinates?
[1891.840 --> 1901.840] So now we go to New Guinea, this part of the world, and the Yupno live up in the Finisterre Range.
[1901.840 --> 1909.840] It's southeast of Madang, and we're going to go up to about 2,000 meters above sea level.
[1909.840 --> 1922.840] And we're going to focus on the area of the upper Yupno valley, where the Yupno river begins.
[1922.840 --> 1931.840] And there you essentially have a very enclosed geographic setting that has no communication whatsoever with the rest, essentially.
[1931.840 --> 1940.840] There's no road, no electricity, and the only way you can get there is walking, a several-day walk from the coast.
[1940.840 --> 1947.840] So it looks essentially sort of like this: the mouth of the river goes down that way.
[1947.840 --> 1959.840] So this is the only access to this entire open upper valley here, and this is where all these little towns are located.
[1959.840 --> 1972.840] The community has about 5,000 speakers, and each of these villages that you see there has about 100 to 400 people, depending on the town.
[1972.840 --> 1983.840] All right, so now, why this? As I was saying, Jürg Wassmann was the person who contacted me and said, well, in the cosmology and many aspects of everyday life among the Yupno,
[1983.840 --> 2005.840] terrain declivity seems to be really important. So topographic relations matter for all kinds of things: for characterizing cosmology, making sense of the terrain, making sense of all types of things of everyday life, all somehow influenced by topographic distinctions.
[2005.840 --> 2015.840] And this shows up in the language in a big, big way. So you have topographic contrasts appearing in spatial adverbs, in verbs of motion, and in the deictic system.
[2015.840 --> 2028.840] So you don't have things like, for example, demonstratives like "that," or "those," or "this," which are totally neutral relative to properties like terrain declivity.
[2028.840 --> 2037.840] Apparently you have to say something like "that one, going down the slope," or "that one, hanging up the slope," and so on.
[2037.840 --> 2044.840] So you have to make distinctions that are built with topographic constraints and markers.
[2044.840 --> 2057.840] Now, there was only one term we could find that, sort of, gave us a hint that maybe there was something like this, which is the term "orufmo," which means "down there on the other side," literally.
[2057.840 --> 2070.840] But it is used metaphorically, so to speak, as something like "a few years ago." So something that has to do with terrain topography is used for something like the past.
[2070.840 --> 2078.840] The question, of course, is: is this anecdotal? Is this the only expression that happens to be like that, for some reason we don't know?
[2078.840 --> 2082.840] Or is it really a solid pattern that is important to study?
[2082.840 --> 2102.840] So this is the background of the whole project. We started with preliminary interviews, sort of getting a sense of what the most frequently used temporal expressions are, the ones that would show up all the time when discussing issues of everyday life.
[2102.840 --> 2115.840] And then we came up with about 15 expressions that covered deictic time (past, present, and future, all the categories) on different scales: days and weeks, months, lifetimes, and so on.
[2115.840 --> 2125.840] So then what we did was to improve the methodology we used with the Aymara, refining it in many ways that I will try to characterize right now.
[2125.840 --> 2135.840] One of them was that we recorded each of these expressions in the local dialect, with a local accent, so it came out something like this:
[2135.840 --> 2137.840] "Alibziyyun."
[2137.840 --> 2154.840] "Alibziyyun," meaning "a long time ago." And now this is a true local voice, so it's not just us or some of the people in the group coming up with the terms; people would actually listen to local speakers saying these terms.
[2154.840 --> 2166.840] So the paradigm was essentially the same as before: what does it mean, how would you compare some of these expressions; people would have to untangle some questions, and so on.
[2166.840 --> 2180.840] So we had what we call semi-structured interviews, sometimes with pairs of adults, sometimes individuals, depending; sometimes inside, sometimes outside the house; we'll see later why that's important.
[2180.840 --> 2191.840] And then we also changed the direction the participants were sitting in, because we wanted to disentangle the ego from the topographic.
[2191.840 --> 2198.840] So people ended up pointing in all directions, or being seated facing different directions.
[2198.840 --> 2204.840] And in the part I'm going to talk about today, we had 27 participants, more men than women.
[2204.840 --> 2210.840] You would say, why is that? And the short answer is because women were busy working and men weren't.
[2210.840 --> 2218.840] So the men were available; they had machetes and everything, but: here we are, we're hunters, but we're ready to work with you.
[2218.840 --> 2225.840] And the women actually have to take care of the babies and the fire and the land and absolutely everything.
[2225.840 --> 2233.840] So it was very hard to get women who had some time for us. We had to do some of the interviews actually out there in the fields.
[2233.840 --> 2237.840] That's another topic, but very interesting.
[2237.840 --> 2241.840] Lots of asymmetries in that regard.
[2241.840 --> 2256.840] Okay. We recorded all kinds of orientations, houses, camera positions, everything, because what we wanted was to minimize, in the field, all the measurements that would distract or chase people away.
[2256.840 --> 2264.840] So we recorded everything, and the idea was to reconstruct in the lab everything from scratch. So that's what we did.
[2264.840 --> 2274.840] And this is now the method we came up with. So first, out of the tons of hours from all these people, we annotated all the manual gestures.
[2274.840 --> 2284.840] There are also head gestures, knee gestures, elbow gestures, toe gestures, and so on, but it's only the manual gestures I'm going to show you today.
[2284.840 --> 2296.840] And so for every gesture production that co-occurred with a temporal term, one of these 15 I mentioned, we would see what happened, when, at what time, and so on.
[2296.840 --> 2302.840] And we came up with about 845 manual temporal gestures.
[2302.840 --> 2309.840] Now, many of those, of course, don't have a very precise handshape. Some of them are like, oh yeah, it was long ago,
[2309.840 --> 2316.840] or something like that, but you don't know what that was. Was it a pointing, was it the fist coming out, or maybe a finger, the index?
[2316.840 --> 2325.840] So the hand morphology sometimes is not clear, or maybe the directionality is not clear, and so on.
[2325.840 --> 2335.840] We said, okay, out of these 845, let's keep the subset of these gestures that would actually tell us something about the hypotheses.
[2335.840 --> 2344.840] So we coded, with blind coders, not aware of the hypotheses or of anything about the Yupno,
[2344.840 --> 2355.840] for directionality; for what we call strokeness, which is this feature of gesture production that speaks about the acceleration and deceleration of the gesture
[2355.840 --> 2366.840] (a good stroke would be something like this: an acceleration and then an abrupt deceleration);
[2366.840 --> 2373.840] and then also for displacement: is it like this, or is it a big displacement from the body?
[2373.840 --> 2388.840] So we selected the best 215 gestures, the ones we retained with the clearest morphological profile, without knowing whether they were about future or past or anything; just, what are the best 215?
[2388.840 --> 2400.840] And we also capped some of the participants, because some produced more gestures than others, and we didn't want the database to be biased by one big gesture producer.
[2401.840 --> 2415.840] Good. Then we had two more blind coders, coding now from the videos, back in San Diego. Here I especially want to talk about the top view.
[2415.840 --> 2429.840] They were reconstructing the directionality of the pointing. So if the coder would see some pointing like that, they would have to, with this bar, reconstitute where the pointing was heading, now from the top camera.
[2429.840 --> 2439.840] From the top view, where would that pointing be going? We also did it with the front and the side views, but I'm not talking about that today.
[2439.840 --> 2448.840] So with these three views, we had this reconstruction, using coders.
[2448.840 --> 2461.840] And then, because we had the measurements of the angle for each camera, we asked the coders to locate the shoulder line relative to the camera.
[2461.840 --> 2475.840] So we had some other people saying: okay, if the camera was, let's say, like this very camera in this room, then the person looking at the camera would have to reconstitute my position of shoulders and nose here relative to that angle.
[2475.840 --> 2491.840] So then we can do some simple geometric moves. With all of those, now we have various layers, and the final layer, which is the most relevant one, is to put everything on top of the topographic information of the Yupno valley.
[2491.840 --> 2502.840] Where are all these gestures pointing, where is future, where is past, and so on, when we put them on top of the topography of the terrain?
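A minimal sketch of that layering, assuming a simple additive top-view geometry (the function names and the model itself are my simplifying assumptions, not the study's actual coding pipeline):

    def camera_to_valley(pointing_deg_in_view, camera_yaw_deg):
        # Shift a pointing angle coded in the top-view image by the camera's
        # measured yaw, giving an angle in valley (terrain) coordinates.
        return (pointing_deg_in_view + camera_yaw_deg) % 360.0

    def relative_to_body(pointing_valley_deg, shoulder_facing_deg):
        # 0 = straight ahead of the torso, 180 = straight behind it.
        return (pointing_valley_deg - shoulder_facing_deg) % 360.0

    # A gesture coded at 40 deg in the image, camera yawed 75 deg from the
    # valley's reference axis, speaker's torso facing 115 deg:
    v = camera_to_valley(40.0, 75.0)          # 115.0 in valley coordinates
    print(v, relative_to_body(v, 115.0))      # 115.0 0.0 -> straight ahead

With both outputs available, each pointing can be compared against the terrain and against the body, which is what lets the analysis separate topographic anchoring from egocentric anchoring.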
[2503.840 --> 2513.840] So, results. The first thing we wanted to know is: is there a deictic center? And then, if there is a deictic center, what happens with past and future?
[2513.840 --> 2528.840] Okay. So the first thing we found is that indeed there is a deictic center and, like everything else we've seen so far, the deictic center is co-produced with a pointing toward the speaker's location, a collocation with the speaker.
[2528.840 --> 2548.840] I can give all the statistics later if someone is interested. But when people were pointing toward the ground, which essentially is something you cannot see from the top-view angle, it co-occurred with temporal expressions that had to do with now, today, these days, something like that.
[2549.840 --> 2570.840] Now, here is the second problem: we now have to analyze directional data. And I don't know how many of you do statistics in your work, but most of the statistics from the 19th century on is based on what is called linear statistics, even when it's very sophisticated, multivariate, and so on.
[2570.840 --> 2580.840] Essentially, all the variables we use (cholesterol levels, reaction time, income, whatever) go from little to more.
[2580.840 --> 2596.840] The problem here is that when you have directional data, it's very hard to pick a zero. When you go from little to more, low cholesterol to high cholesterol, you know, let's say, what zero is, and so on.
[2596.840 --> 2613.840] But in the case of directional data, you encounter a new problem: depending on where your zero is going to be, for values that lie on one or the other side of the zero, if you calculate an average it's going to give you a completely wrong direction.
[2613.840 --> 2624.840] So let's say this is our zero, and I measure, let's say, 5 degrees, and then 355 degrees: if I calculate the mean between 5 and 355, you get 180, which points the opposite way.
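To make the pitfall concrete, along with the standard remedy in directional statistics (averaging unit vectors rather than raw angles), here is a minimal illustrative Python sketch; this is my illustration, not the study's analysis code:

    import numpy as np

    def circular_mean_deg(angles_deg):
        # Mean direction: average the unit vectors, then take the angle
        # of the resulting vector (the basic move of directional statistics).
        a = np.deg2rad(np.asarray(angles_deg, dtype=float))
        return np.rad2deg(np.arctan2(np.sin(a).mean(), np.cos(a).mean())) % 360.0

    angles = [5.0, 355.0]              # two pointings just either side of zero
    print(np.mean(angles))             # 180.0 -- linear mean, opposite direction
    print(circular_mean_deg(angles))   # ~0.0  -- correct mean direction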
[2624.840 --> 2642.840] So when you get a bunch of directions, it's a mess. So, okay, how do we deal with this problem? You look at other parts of the literature, like people who study the movements of birds or, you know, fish in the ocean, or the average direction of the breeze in LA today, the wind.
[2643.840 --> 2652.840] Moving all around like that, what is the average? When it moves on one side, like here, no problem. But when it moves all over 360 degrees, it's a problem.
[2652.840 --> 2661.840] So we needed to get into spherical statistics to analyze these data, because they were all pointings in a topographic valley.
[2661.840 --> 2669.840] Another problem we faced is that, I don't know how many of you are aware of this, but spherical statistics are very recent.
[2669.840 --> 2679.840] Just to give an example, the simple correlation coefficient, Pearson's r, goes back to the 19th century, Karl Pearson.
[2679.840 --> 2685.840] The correlation for two directional variables is from 1982.
[2686.840 --> 2694.840] So you don't have the fancy regressions and the fancy things, because they are being developed as we speak, and they're very simple.
[2694.840 --> 2702.840] So we had that kind of problem, but luckily what existed covered the kinds of questions we had.
[2702.840 --> 2706.840] But it's something to keep in mind when you work with spherical or directional stats.
[2706.840 --> 2715.840] Anyway, so here we put the results now. What happens is that when we go to the Yupno valley: here's the river; here's the mouth, heading that way.
[2715.840 --> 2724.840] The source of the Yupno is up here. This is the village, and this is now amplified and zoomed in over here.
[2724.840 --> 2732.840] So, roughly speaking: I could give you all the stats and the spherical statistics, which are very sexy, but anyway, that's another talk.
[2733.840 --> 2739.840] The one-liner: the past, the past production,
[2739.840 --> 2745.840] and this means gestures that were produced while saying something about the past,
[2745.840 --> 2754.840] no matter what bodily orientation the speaker had, tended to go this way, and this is the cone, going essentially downhill,
[2754.840 --> 2760.840] toward where you walk into the valley. And the future was heading this way.
[2760.840 --> 2767.840] We can't interpret yet what this kind of deviation over here is; maybe it's pointing towards the source of the river.
[2767.840 --> 2773.840] We don't know that yet. We're going to go back next year and find out. We've got new funding, so we're happy about that.
[2773.840 --> 2788.840] We'll see. The summarizing point is that you get this nice dissociation between gestures produced for past and future, knowing now that the gestures for the deictic center, the present, are collocated.
[2789.840 --> 2800.840] Many other interesting things come out. So when you look at this, this is like all the outdoor data; I'll say something about indoor data in a minute.
[2800.840 --> 2809.840] So, once speakers were outside of their houses, which are the only built things you see in this area, this is the pattern you see.
[2810.840 --> 2818.840] And one interesting observation, other than it being topographically centered, not egocentric, is that it's not aligned anymore.
[2818.840 --> 2825.840] So the usual idea that the future goes one way and the past the other way, here it just breaks down.
[2825.840 --> 2838.840] So you can do all the stats again, and essentially you have a broken line, and we did extra statistics to figure out that it's broken not only from the top view, like this, but also from the front view.
[2839.840 --> 2849.840] So gestures towards the past tend to be more around this angle, and for gestures towards the future, the slope is much bigger.
[2849.840 --> 2864.840] So even from the front it's broken. So essentially, in 3D, it's kind of broken, like this. So there isn't the aligned timeline that we assume it to be in other places.
+ [2864.840 --> 2872.840] Now, another shocking result for us, at least, was that when we did the interviews indoors, a completely different pattern emerged.
691
+ [2872.840 --> 2884.840] So, this is a group of people, like many others described before in Australia, as I was saying, who heavily rely on geosensical coordination of characterizing space.
692
+ [2884.840 --> 2892.840] So, pass me that downhill orange, or pass me that uphill apple, those will be kind of the patterns in a certain way.
693
+ [2892.840 --> 2913.840] But, all the societies that have been described as relying heavily on absolute or geosensric frames of references, they tend to keep it no matter where they are inside the house, people try to bring them some other place, rotating them wherever they end up knowing exactly what the North is and so on.
694
+ [2913.840 --> 2929.840] What happened here was very different, as soon as you enter these houses, which is essentially like a wooden igloo or something like that, you can see the scale, you enter like a dark universe.
695
+ [2929.840 --> 2941.840] These things have no windows, only a fireplace in the middle. Fire is on every time they're in there, for illumination, for cooking, for warmth, for everything.
696
+ [2941.840 --> 2947.840] And there's a lot of smoke, as a consequence, there's no windows, it's just smoke like crazy.
697
+ [2947.840 --> 2957.840] The first two or three days I had to literally drag myself on the ground, because it was the least smoke I could find in this world.
698
+ [2957.840 --> 2961.840] As soon as you were moving up, the amount of smoke was absolutely pretty good.
699
+ [2961.840 --> 2986.840] So, the point here is that when we were inside, we noticed, and this is not with no graphicers, that people were talking about this side of the house as downhill, and this side of the house as uphill with lexical terms, irrespective of the rotation of the house and irrespective of the rotation of the house in the valley, when there was downhill uphill.
700
+ [2986.840 --> 3004.840] So then we had the idea: well, maybe let's check that and do a little experiment, having people discriminate between similar objects and saying, okay, point to the uphill, you know, apple, or grab the downhill bag, or something like that, with different terms.
701
+ [3004.840 --> 3015.840] And it was 90-something percent clear that inside these houses, as soon as you enter the house, downhill and uphill are relative to the house.
702
+ [3015.840 --> 3026.840] And there's no more valley anywhere. So then we said, well, this is different from what we were reading about topographic construals of space.
703
+ [3026.840 --> 3028.840] How about now time?
704
+ [3028.840 --> 3037.840] When we did that experiment, when we did the observations, just using the same spherical stats, we came out with a similar pattern.
705
+ [3037.840 --> 3046.840] So future gestures would be pointing away from the entrance, and past gestures would be pointing towards the entrance.
706
+ [3046.840 --> 3053.840] So this is now collapsing all the houses that we were using in this case, three different houses with different orientations.
707
+ [3053.840 --> 3068.840] So for us, that was a very interesting observation because we haven't seen anything like this in which you really, as you walk in, you change completely the pattern of the frames of reference.
708
+ [3068.840 --> 3071.840] Now, let me just give you a quick view here.
709
+ [3071.840 --> 3077.840] This is, for example, a speaker talking about yesterday terms over here, tomorrow is over there.
710
+ [3077.840 --> 3086.840] And this is the rotation I was illustrating. So here the person is talking about yesterday by pointing downhill towards his back.
711
+ [3086.840 --> 3090.840] After rotation, he continues pointing downhill, now towards the front.
712
+ [3090.840 --> 3101.840] This is indoors: the person sitting by the fire, pointing towards the entrance with the right hand; rotated, and now it's the left hand.
713
+ [3101.840 --> 3108.840] This is the case for tomorrow. This is what I was describing: the idea that there's more slope for future terms.
714
+ [3108.840 --> 3115.840] In this case, the top of the hill is in front of him, but when rotated, it goes behind him.
715
+ [3115.840 --> 3125.840] This is now happening inside the house, the same speaker now pointing away from the entrance, away from the light.
716
+ [3125.840 --> 3129.840] And this is rotated now pointing again away.
717
+ [3129.840 --> 3135.840] So really clearly dissociating the ego from these external markers.
718
+ [3135.840 --> 3140.840] Just to give you an example, here's some quick real world gestures.
719
+ [3140.840 --> 3145.840] Today, Akma, Akjo, Isadam.
720
+ [3145.840 --> 3147.840] That's the sequence again.
721
+ [3147.840 --> 3150.840] Today is the go.
722
+ [3150.840 --> 3151.840] Akma.
723
+ [3151.840 --> 3152.840] Yes today.
724
+ [3152.840 --> 3154.840] Now, Isadam.
725
+ [3154.840 --> 3156.840] Change your hand.
726
+ [3156.840 --> 3159.840] Today is the go.
727
+ [3159.840 --> 3162.840] Yes today.
728
+ [3162.840 --> 3165.840] Today is the go.
729
+ [3165.840 --> 3171.840] Rotated, and then talking later about this again.
730
+ [3171.840 --> 3173.840] Yes today.
731
+ [3173.840 --> 3174.840] And tomorrow.
732
+ [3174.840 --> 3179.840] Yesterday is outdoors.
733
+ [3179.840 --> 3187.840] And then finally, even though everything I said in this talk was about manual gestures, I just want to illustrate another topic:
734
+ [3187.840 --> 3192.840] The incredibly pervasive use of the head and nose in this culture for pointing.
735
+ [3192.840 --> 3197.840] Usually, we point with nose and head when we have our hands busy.
736
+ [3197.840 --> 3199.840] So if we're carrying books and someone asks, where is the toilet?
737
+ [3199.840 --> 3202.840] Oh, it's over there, and you point with your head.
738
+ [3202.840 --> 3205.840] Well, when the hands are free, normally we use our hands.
739
+ [3205.840 --> 3211.840] But in this culture we were really shocked to see the amount of nose pointing.
740
+ [3211.840 --> 3214.840] Actually, we have a paper in the journal Gesture
741
+ [3214.840 --> 3217.840] only on nose pointing, because it's amazing.
742
+ [3217.840 --> 3218.840] It's another topic.
743
+ [3218.840 --> 3221.840] You need different muscles, the levator labii.
744
+ [3221.840 --> 3224.840] You can move it like crazy in order to make your nose point.
745
+ [3224.840 --> 3226.840] I cannot reproduce that.
746
+ [3226.840 --> 3231.840] You need a lifetime of training to really point precisely with your nose.
747
+ [3231.840 --> 3236.840] It has lots of advantages, because it frees the other articulators for modifying the pointing.
748
+ [3236.840 --> 3241.840] You could signal something as precise or not precise, which you can do at the same time as the nose point.
749
+ [3241.840 --> 3243.840] Anyway, here's a cool thing.
750
+ [3243.840 --> 3246.840] Here's an example of a temporal head gesture.
751
+ [3246.840 --> 3251.840] I have your hand, hand, hand, hand, hand, hand, hand, hand, hand.
752
+ [3251.840 --> 3257.840] So it goes like that, higher, higher, and then towards the top of nothing.
753
+ [3257.840 --> 3265.840] Now, one last thing we found also something interesting is the fact that, especially when they were inside the houses,
754
+ [3265.840 --> 3269.840] there would be lots of straight upward pointing.
755
+ [3269.840 --> 3273.840] There's no mountain over there.
756
+ [3273.840 --> 3278.840] We were wondering whether some of the pointing was about God or things like that.
757
+ [3278.840 --> 3280.840] We weren't sure.
758
+ [3280.840 --> 3286.840] All the statistics I showed here, outdoors
759
+ [3286.840 --> 3292.840] and inside the house, are not considering these cases of pointing straight up.
760
+ [3293.840 --> 3296.840] We saw a few of these outside of the houses.
761
+ [3296.840 --> 3300.840] We believe, we think, and this is our hypothesis, again, we're going to check that next year,
762
+ [3300.840 --> 3304.840] that probably, when you change from village to village,
763
+ [3304.840 --> 3308.840] the direction of the source of the river changes, and this becomes a problem:
764
+ [3308.840 --> 3312.840] if you visit another village, you need something more common,
765
+ [3312.840 --> 3321.840] and this could be like a signature straight up, an unmistakable up,
766
+ [3321.840 --> 3325.840] that doesn't change when you move around.
767
+ [3325.840 --> 3330.840] But we observed, really, this upward pointing gesture
768
+ [3330.840 --> 3334.840] mostly with future expressions, by the way, but with the present progressive as well.
769
+ [3334.840 --> 3337.840] Okay, let me conclude now.
770
+ [3337.840 --> 3346.840] Universal trends here: humans construe time using space.
771
+ [3346.840 --> 3350.840] It tends to be a one-dimensional space extended in different ways.
772
+ [3350.840 --> 3355.840] It could be this type of linear space, or it could be a sagittal space,
773
+ [3355.840 --> 3357.840] in which I'm part of the space.
774
+ [3357.840 --> 3361.840] It could be a more complicated form, like cyclic or helix-like,
775
+ [3361.840 --> 3366.840] in which, if you take a topological segment of a helix
776
+ [3366.840 --> 3370.840] or a circle, it has the same properties as the one-dimensional line.
777
+ [3370.840 --> 3373.840] So some of these spaces the speaker is part of,
778
+ [3373.840 --> 3379.840] and some of the spaces are allocentric, but it's one-dimensional nonetheless.
779
+ [3379.840 --> 3382.840] And then another thing that seems to be universal,
780
+ [3382.840 --> 3386.840] is that the deictic center seems to be associated with ego location.
781
+ [3386.840 --> 3389.840] And now we observe it even in a group like the Yupno,
782
+ [3389.840 --> 3392.840] which has a dramatically different notion,
783
+ [3392.840 --> 3394.840] but still, there is still the present.
784
+ [3394.840 --> 3396.840] The deictic center is still collocated.
785
+ [3396.840 --> 3401.840] And the other thing, as we saw with some of the gestures going like the day before yesterday,
786
+ [3401.840 --> 3407.840] yesterday now, and tomorrow, preserving transitivity of distance from the speaker,
787
+ [3407.840 --> 3410.840] is also something that we observed in Aymara,
788
+ [3410.840 --> 3414.840] and many other studies I've shown with other groups.
789
+ [3414.840 --> 3418.840] So this is also something that a student of mine, Kensy Cooperrider,
790
+ [3418.840 --> 3421.840] with whom I was doing all this work, along with Jürg Wassmann,
791
+ [3421.840 --> 3426.840] was also trying to figure out in terms of gesture production:
792
+ [3426.840 --> 3429.840] how these properties are produced, sorry,
793
+ [3429.840 --> 3431.840] how they could be documented,
794
+ [3431.840 --> 3433.840] in this particular culture.
795
+ [3433.840 --> 3436.840] So with Kensy and Jürg, we were wondering,
796
+ [3436.840 --> 3440.840] what are the factors motivating the Aymara and the Yupno patterns?
797
+ [3440.840 --> 3443.840] The work with Aymara I mostly did with Eve Sweetser,
798
+ [3443.840 --> 3448.840] a linguist at UC Berkeley; the work with the Yupno, with Jürg Wassmann and Kensy Cooperrider,
799
+ [3448.840 --> 3451.840] so we were wondering, what is motivating these patterns?
800
+ [3451.840 --> 3453.840] Why these patterns?
801
+ [3453.840 --> 3457.840] What is it that brings them forth this way?
802
+ [3457.840 --> 3461.840] So in the case of the Aymara, we believe that there is something like
803
+ [3461.840 --> 3465.840] an over-emphasis on visual perception as a source of knowledge.
804
+ [3465.840 --> 3468.840] Aymara has a really strong use of evidentials,
805
+ [3468.840 --> 3470.840] and this also shows up in Castellano,
806
+ [3470.840 --> 3473.840] the Creole type I was describing.
807
+ [3473.840 --> 3476.840] So in Aymara, everything you say,
808
+ [3476.840 --> 3480.840] where is the parking lot, have you seen Bob and so on,
809
+ [3480.840 --> 3483.840] everything you answer to those questions,
810
+ [3483.840 --> 3486.840] you would have to say whether you saw it with your own eyes,
811
+ [3486.840 --> 3490.840] or whether you heard that Bob was there or something like that,
812
+ [3490.840 --> 3492.840] or you read it in the book,
813
+ [3492.840 --> 3495.840] as opposed to seeing it with your own eyes.
814
+ [3495.840 --> 3497.840] So for those you have different markers,
815
+ [3497.840 --> 3501.840] and they get also into the Creole, sometimes in a very funny way,
816
+ [3501.840 --> 3506.840] because they recruit grammatical distinctions that exist in Spanish,
817
+ [3506.840 --> 3512.840] but they use those Spanish-based distinctions for Aymara needs.
818
+ [3512.840 --> 3516.840] So sometimes this turns out in completely crazy constructions,
819
+ [3516.840 --> 3519.840] which are very clear in Castellano Andino,
820
+ [3519.840 --> 3523.840] but make absolutely no sense in Spanish,
821
+ [3523.840 --> 3525.840] in standard Spanish.
822
+ [3525.840 --> 3528.840] Alright, so what we think is something like,
823
+ [3528.840 --> 3530.840] there's something like a metaphor here:
824
+ [3530.840 --> 3532.840] knowing is seeing.
825
+ [3532.840 --> 3536.840] What you know is what you actually see.
826
+ [3536.840 --> 3540.840] So if you want to talk about whether last summer was hot or cold,
827
+ [3540.840 --> 3544.840] or was it wet or dry, that's something you saw.
828
+ [3544.840 --> 3548.840] And in that sense you would sort of invoke the properties of seeing: the
829
+ [3548.840 --> 3554.840] frontal visual field, and bodily orientation with respect to that.
830
+ [3554.840 --> 3557.840] And what's outside of the visual field then,
831
+ [3557.840 --> 3560.840] it would be those things that you cannot talk about.
832
+ [3560.840 --> 3563.840] I can't tell you right now what books are behind me.
833
+ [3563.840 --> 3567.840] If I want to know that, then I have to turn and get that in my visual field.
834
+ [3567.840 --> 3569.840] So that's what we think is going on.
835
+ [3569.840 --> 3575.840] Of course, it's not a sufficient condition, in the sense that
836
+ [3575.840 --> 3579.840] there are many other cultures that have a very strong use of evidentials as well,
837
+ [3579.840 --> 3582.840] but not necessarily having this bodily orientation.
838
+ [3582.840 --> 3586.840] So this is still work in progress.
839
+ [3586.840 --> 3589.840] The cultural use of the markers was also interesting.
840
+ [3589.840 --> 3594.840] In Aymara you're not allowed to talk about your childhood using the marker for "I saw it."
841
+ [3594.840 --> 3599.840] Let's say, oh, I remember my first pet, it was blah, blah, blah.
842
+ [3599.840 --> 3602.840] If what you know is what your parents told you about that,
843
+ [3602.840 --> 3605.840] then you should use the marker saying, I didn't see that.
844
+ [3605.840 --> 3608.840] Also, when you had a good time at a party and you were drunk,
845
+ [3608.840 --> 3612.840] you can't say the following: oh my god, wasn't that a great party?
846
+ [3612.840 --> 3614.840] No, you have to use the other marker:
847
+ [3614.840 --> 3617.840] I was told it was a great party.
848
+ [3617.840 --> 3619.840] And it truly is.
849
+ [3619.840 --> 3621.840] It is interpreted in the culture like, say, oh, come on,
850
+ [3621.840 --> 3623.840] you're telling me you were not drunk?
851
+ [3623.840 --> 3626.840] If you use the marker saying, I was there, and I saw it.
852
+ [3626.840 --> 3631.840] No, I said, I was told it was great, and I was told I drank a lot.
853
+ [3631.840 --> 3633.840] So that's what I was told.
854
+ [3633.840 --> 3637.840] So the practices really are embedded in all these forms.
855
+ [3637.840 --> 3640.840] So what comes out in the Aymara, then, apparently,
856
+ [3640.840 --> 3646.840] is something like a landscape, let's say, characterizing it briefly here:
857
+ [3646.840 --> 3651.840] you would have the observer, only that now everything in the future is outside of the visual field,
858
+ [3651.840 --> 3658.840] preserving the properties of the non-seen, non-known aspects,
859
+ [3658.840 --> 3662.840] and preserving the transitivity: further away behind is further away
860
+ [3662.840 --> 3668.840] in the future, and the opposite for the past, where things can be visible and observable.
861
+ [3668.840 --> 3673.840] Now, we didn't observe any motion in these cases, so that's why I didn't put any arrows.
862
+ [3673.840 --> 3676.840] So there's no motion involved in this type of construal.
863
+ [3676.840 --> 3682.840] Things are in front or behind, but we don't know anything from the data we have
864
+ [3682.840 --> 3684.840] that anything is approaching anything.
865
+ [3684.840 --> 3689.840] It's just, it's a static picture, at least based on the data.
866
+ [3689.840 --> 3691.840] How about the Yupno?
867
+ [3691.840 --> 3697.840] Well, we know that there's a, this sort of centrality of the topographic spatial system
868
+ [3697.840 --> 3704.840] that has already been documented before: how pervasive the characterization of spatial relationships is
869
+ [3704.840 --> 3706.840] using topographic markers.
870
+ [3706.840 --> 3710.840] So these are not cardinal, absolute forms, like north and south;
871
+ [3710.840 --> 3713.840] it's all slopes and terrain.
872
+ [3713.840 --> 3720.840] And it seems also, as documented by Jürg Wassmann, that there's this sort of association
873
+ [3720.840 --> 3724.840] between the macro-scale downhill and the ancestral past.
874
+ [3724.840 --> 3729.840] So in their cosmology, they talk about the origin of the group coming from, you know,
875
+ [3729.840 --> 3736.840] the ancestors coming from the shores and walking all their way up, and the motion is up into the valley.
876
+ [3736.840 --> 3741.840] And this is now, if we want to make sense of the data we observe inside the houses
877
+ [3741.840 --> 3745.840] and the data we observe outside, so the outdoor and the indoor data
878
+ [3745.840 --> 3753.840] with a single, coherent foundation, to preserve parsimony.
879
+ [3753.840 --> 3757.840] We think what's going on is something like an entrance schema.
880
+ [3757.840 --> 3761.840] In the sense that, because there are big slopes,
881
+ [3761.840 --> 3766.840] the houses have to be built with a horizontal footprint.
882
+ [3766.840 --> 3769.840] So the slope is like that and the house like this.
883
+ [3769.840 --> 3774.840] And every time you enter, you have to go up with several steps, between two and five steps.
884
+ [3774.840 --> 3778.840] We have an entire catalog of the steps to get into the houses.
885
+ [3778.840 --> 3783.840] The point is that you only enter houses going up.
886
+ [3783.840 --> 3789.840] So we think that maybe the corresponding temporal inference
887
+ [3789.840 --> 3797.840] is that if you profile the entering action, the process of entering, the earlier moments of the entering,
888
+ [3797.840 --> 3802.840] which are in the past relative to the moment when you're inside, they're down.
889
+ [3802.840 --> 3809.840] And when you go in, in the temporal sequence, you're moving towards the inside.
890
+ [3809.840 --> 3813.840] So we think that what's going on with the future is just by contrast.
891
+ [3813.840 --> 3818.840] But this is the entrance schema that is probably, this is the hypothesis,
892
+ [3818.840 --> 3824.840] being profiled, from which you can now derive the temporal relationship between the now of being inside
893
+ [3824.840 --> 3829.840] and the past of being outside, whether it is the ancestral past or the individual past,
894
+ [3829.840 --> 3831.840] but relative to the house.
895
+ [3831.840 --> 3837.840] Now, what is novel among the Yupno is first the geocentric topographic construal,
896
+ [3837.840 --> 3841.840] which is something that has never been documented.
897
+ [3841.840 --> 3843.840] And this is very new.
898
+ [3843.840 --> 3846.840] And this was actually a shocker for us.
899
+ [3846.840 --> 3850.840] When we were doing the work with Aymara, we assumed that all these gestures,
900
+ [3850.840 --> 3855.840] all these things were kind of a consequence of the use of language and linguistic expressions
901
+ [3855.840 --> 3858.840] that have this metaphorical content.
902
+ [3858.840 --> 3864.840] Get your metaphors straight, then look for gestures just to have motor-behavioral evidence of it.
903
+ [3864.840 --> 3870.840] Well, here we found there were no metaphorical expressions for time based on space.
904
+ [3870.840 --> 3874.840] All the temporal expressions except the one I gave at the beginning of Oromo,
905
+ [3874.840 --> 3880.840] they're all just temporal expressions like tomorrow, like after and so on.
906
+ [3880.840 --> 3884.840] We don't use tomorrow in a spatial sense, for example.
907
+ [3884.840 --> 3886.840] So all these were temporal.
908
+ [3886.840 --> 3889.840] There was no metaphorical language.
909
+ [3889.840 --> 3893.840] So the only thing that gave us a hint was gesture production.
910
+ [3893.840 --> 3897.840] And this says something else, which is that there are many papers out there
911
+ [3897.840 --> 3903.840] that, when they look at the local linguistic structure in particular cultures, conclude,
912
+ [3903.840 --> 3907.840] oh, this culture doesn't have a spatial notion of time, or something like that,
913
+ [3907.840 --> 3912.840] when they just look to see, are there any metaphors, spatial metaphors for time?
914
+ [3912.840 --> 3916.840] Oh, we checked, we checked, we checked, we didn't find any. Conclusion? None.
915
+ [3916.840 --> 3920.840] Well, we would have reached the same conclusion here, had we only done that.
916
+ [3920.840 --> 3924.840] It's only when you analyze gesture production that you get a hint.
917
+ [3924.840 --> 3931.840] So you could have a conceptualization that is statistically incredibly significant or robust,
918
+ [3931.840 --> 3933.840] with very specific patterns,
919
+ [3933.840 --> 3944.840] but only visible through bodily motion, in this case, and not through linguistic markers or terms like lexical items.
920
+ [3944.840 --> 3947.840] The other thing is the asymmetry of the broken line.
921
+ [3947.840 --> 3952.840] So in that sense, the timeline that we tend to have, even for the Aymara case,
922
+ [3952.840 --> 3957.840] front and back, in this case now there's no real line; it's essentially slope driven,
923
+ [3957.840 --> 3961.840] and depending on the terrain, there's slope going in any direction.
924
+ [3961.840 --> 3966.840] And the other thing, which was novel, is the adaptation of the pattern when going inside.
925
+ [3966.840 --> 3974.840] All of a sudden, as soon as you're inside this house, all the relationships of the topographic map
926
+ [3974.840 --> 3978.840] are now recruited and mapped onto the internal structure.
927
+ [3979.840 --> 3982.840] So to close, I want to go back to the quote,
928
+ [3982.840 --> 3987.840] if men did not have the same conception of time, space, cause, number, etc.,
929
+ [3987.840 --> 3992.840] all contact between their minds would be impossible, and with that, all life together.
930
+ [3992.840 --> 3998.840] Well, in a certain way, he was right at a certain level, when you think about, let's say, the deictic centers.
931
+ [3998.840 --> 4003.840] Well, they all tend to be collocated; they're all spatially collocated with the speaker.
932
+ [4003.840 --> 4010.840] In that sense, there is something fundamental that seems to be shared up to the data we have right now.
933
+ [4010.840 --> 4015.840] But of course there's huge variation in whether you anchor time topographically,
934
+ [4015.840 --> 4019.840] whether there is a line or not, whether it goes uphill, or in front, or behind.
935
+ [4019.840 --> 4023.840] These are all really radically different structures.
936
+ [4023.840 --> 4030.840] Not only that, the grounding and probably the whole motivation for bringing forth that particular structure
937
+ [4030.840 --> 4033.840] have tremendous variability.
938
+ [4033.840 --> 4038.840] So we can't just say that all of that is shared in a fundamental way.
939
+ [4038.840 --> 4042.840] So I want to go back to kind of the idea of biological evolution here.
940
+ [4042.840 --> 4047.840] If we really want to understand life and how the different forms of life evolve,
941
+ [4047.840 --> 4049.840] we cannot get rid of the outliers.
942
+ [4049.840 --> 4054.840] It's actually the variability that tells us the incredible story of biological evolution.
943
+ [4054.840 --> 4063.840] But for some reason, maybe influenced by experimental psychology or other techniques in which averaging is the way to go, and so on,
944
+ [4063.840 --> 4069.840] these cases that occur in very isolated places tend to be dismissed.
945
+ [4069.840 --> 4076.840] Sometimes, of course, there are issues with methods; of course you can't do brain scans in the way you would like to,
946
+ [4076.840 --> 4079.840] let's say, with these people, because,
947
+ [4080.840 --> 4085.840] even if you bring people to the scanner, you won't get the signals, because the noise will be tremendous.
948
+ [4085.840 --> 4089.840] It will be something like: no, let me out, I want to go be somewhere else.
949
+ [4089.840 --> 4093.840] So the future and past signals you're trying to see
950
+ [4093.840 --> 4096.840] will be just very weak relative to these other big signals.
951
+ [4096.840 --> 4103.840] So the point is, we should really take these forms, I would say, not as anecdotal variation,
952
+ [4104.840 --> 4111.840] but really as central forms of variation that we need to understand seriously in order to have a better story of who we are as humans,
953
+ [4111.840 --> 4115.840] and how do we come up with abstract notions?
954
+ [4115.840 --> 4120.840] And for that we need to take all these variations very seriously.
955
+ [4120.840 --> 4126.840] And with that, I want to thank all the collaborators in northern Chile for the Aymara project,
956
+ [4127.840 --> 4132.840] all the collaborators in the Yupno Valley, and other people in my lab.
957
+ [4132.840 --> 4133.840] And thank you.
958
+ [4133.840 --> 4139.840] Applause
959
+ [4139.840 --> 4143.840] You, of course, find yourself subjected to the tyranny of time.
960
+ [4143.840 --> 4146.840] I'm just wondering if there's time for one or two questions.
961
+ [4146.840 --> 4147.840] Sure, sure.
962
+ [4147.840 --> 4149.840] This is sure. I didn't take the other question.
963
+ [4149.840 --> 4151.840] Okay, it's a point of question.
964
+ [4152.840 --> 4158.840] Okay, they have the larger valley, but within a larger valley you can find, like here on campus, smaller slopes facing different ways.
965
+ [4158.840 --> 4163.840] You could also get to a place, like in one of the buildings, where the nearest uphill would point you in a different direction.
966
+ [4163.840 --> 4169.840] So which way are they pointing? Are they in fact using the larger mountain as the primary frame,
967
+ [4169.840 --> 4172.840] or is it, in a sense, that they switch the mountains?
968
+ [4172.840 --> 4173.840] Can they do that?
969
+ [4173.840 --> 4175.840] So you're talking about space.
970
+ [4176.840 --> 4183.840] So that's right. We, I didn't go into all the details of how the system transposes as you move around.
971
+ [4183.840 --> 4189.840] You can say, you know, you're facing this way in that building, even though you're facing a different one,
972
+ [4189.840 --> 4192.840] and you may be gesturing in that sense.
973
+ [4192.840 --> 4198.840] So here, our study wasn't actually based on space proper.
974
+ [4199.840 --> 4209.840] We checked these distinctions with the standard, you know, tabletop kinds of tasks with objects, to be sure that this was the case,
975
+ [4209.840 --> 4212.840] and then we were focused on temporal relations.
976
+ [4212.840 --> 4218.840] So we just looked at, you know, is there any deictic center, and where are they pointing when they're talking about past and future?
977
+ [4218.840 --> 4220.840] So in that sense it was simplistic.
978
+ [4220.840 --> 4226.840] Now the other thing we did with space was to say, well, as soon as you move into this space, the indoor houses,
979
+ [4226.840 --> 4232.840] then something happens with these distinctions, downhill, uphill, which now operate at a different level.
980
+ [4232.840 --> 4235.840] And then we wanted to see, is that robust and stable?
981
+ [4235.840 --> 4239.840] Yes, it was, and then the next question was, well, is that recruited for time?
982
+ [4239.840 --> 4240.840] But you're right.
983
+ [4241.840 --> 4247.840] Now, to understand in much more depth all the nuances of temporal positioning and pointing,
984
+ [4247.840 --> 4254.840] sorry, of spatial pointing and positioning, we would need to dig in and look for those cases.
985
+ [4254.840 --> 4255.840] Right, thanks.
986
+ [4255.840 --> 4257.840] Thank you.
987
+ [4257.840 --> 4258.840] Thank you.
transcript/allocentric_JFj8kWm_N-Y.txt ADDED
@@ -0,0 +1,115 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 4.400] Hi, everyone. Today, I'll be introducing some ideas from Maurice Merleau-Ponty's
2
+ [4.400 --> 8.240] very important book, The Phenomenology of Perception from 1945.
3
+ [9.360 --> 17.360] Merleau-Ponty is best known for his contention that phenomenology and philosophy in general
4
+ [17.360 --> 24.720] need to emphasize the living body, in order to get away from the problems of mind-body dualism,
5
+ [24.720 --> 32.000] which to his mind are pseudo-problems, and in order to ground philosophy in our lived experience.
6
+ [32.000 --> 38.480] For Merleau-Ponty, a lot of philosophy is just a bunch of pseudo-problems. We imagine that we can
7
+ [38.480 --> 43.280] remove ourselves from the world and be these disembodied minds, and then we come up with all of these
8
+ [43.280 --> 49.200] ideas about how disconnected we are, and wonder what the connection is between us and the world.
9
+ [49.200 --> 54.320] Wonder how do I know that there are other people? How do I know that I'm not a brain in a vat somewhere?
10
+ [55.120 --> 61.120] And he says that actually if we start from the facticity of the world, and if we start from
11
+ [61.120 --> 69.360] our lived experience within that, the world is there. And all our ideas are possible only on the
12
+ [69.360 --> 74.880] background of perception. So there's so much that we don't actually need to worry about once we
13
+ [74.880 --> 83.600] really start from the experience of the body in space. He says the world and reason are not
14
+ [83.600 --> 88.560] problematical. And so this might lead you to think that because Merleau-Ponty thinks that a lot
15
+ [88.560 --> 93.840] of problems in philosophy are just unnecessary pseudo-problems, that he wants us to return to good
16
+ [93.840 --> 99.200] common sense. But that's not his claim either. He actually thinks that common sense can be wildly
17
+ [99.200 --> 106.640] misleading. This is why for him we need the method of phenomenology, which is a method of philosophy
18
+ [106.640 --> 114.320] that begins with our experience in the rich, complex texture in which it is lived, on the horizon
19
+ [114.320 --> 122.320] in which we experience it through our bodies, and do our best to find key or essential features
20
+ [122.320 --> 126.960] of that experience, even as we recognize that we can never remove ourselves and have a view from
21
+ [126.960 --> 134.560] nowhere. Phenomenology, according to Merleau-Ponty, is the study of essences. We want to find the essence
22
+ [134.560 --> 140.800] of perception or the essence of consciousness, for example. But the way to find those essences
23
+ [141.600 --> 149.280] is not to imagine that they are removed from existence, but to investigate existence on its own
24
+ [149.280 --> 155.520] terms and uncover the structures that we find within it. He says that our philosophical efforts
25
+ [155.520 --> 163.200] should be concentrated on re-achieving a direct and primitive contact with the world,
26
+ [163.200 --> 170.240] and giving that contact a philosophical status. So the goal is really to give a direct description
27
+ [170.240 --> 177.600] of experience as it really is. Phenomenology as a method in general is focused on description.
28
+ [178.160 --> 183.680] And with Merleau-Ponty, you can see this in his work as well. He really wants us to describe experience
29
+ [183.680 --> 190.320] as it is. When we're describing experience as it is, we have a tendency to fall into two ways
30
+ [190.320 --> 198.160] of viewing the world that he finds insufficient. The first is intellectualism, which is often associated
31
+ [198.160 --> 205.200] with abstraction or idealism in philosophy. He thinks that intellectualism tends to remove us
32
+ [205.200 --> 210.720] from the world and imagine that we can understand things in a vacuum, and that, for him, is wrong.
33
+ [211.520 --> 216.720] But then the other side of it is that we often tend to see the world through the lens of
34
+ [216.720 --> 224.320] empiricism where we are reducing what exists to what we can directly perceive and not really
35
+ [224.320 --> 231.760] investigating the structures that make that perception possible and that make it historically contingent.
36
+ [231.760 --> 238.640] In order to really be able to describe essences, for Merleau-Ponty, we have to undertake what he calls,
37
+ [238.640 --> 243.200] following Edmund Husserl, the founder of phenomenology, and other people in this tradition,
38
+ [243.200 --> 250.240] the phenomenological reduction. The phenomenological reduction is an act of bracketing our natural
39
+ [250.240 --> 258.240] attitude, which presumes that we are seeing the world as it is in reality. By putting the natural
40
+ [258.240 --> 266.960] attitude out of play, we step back to watch the world, but we do not withdraw from it and imagine
41
+ [266.960 --> 272.240] that we are abstracted from it, right? And so there's this tension in Merleau-Ponty between wanting to
42
+ [272.240 --> 277.440] stick with things as they appear and also recognizing that we can best understand things as they appear
43
+ [277.440 --> 284.080] if we have an attempt at some detachment, but that detachment is always relative to our embeddedness
44
+ [284.080 --> 287.920] within the world. We can't fully accomplish a detachment from the world.
45
+ [289.840 --> 296.240] But we can pass from the fact of existence to the nature of existence in order to understand the
46
+ [296.240 --> 302.400] fact of our existence more clearly. Merleau-Ponty thinks that in this way, phenomenology unites
47
+ [302.400 --> 309.120] extreme subjectivism with extreme objectivism, and ultimately helps us get out of the binary between
48
+ [309.120 --> 315.840] subject and object altogether. I've mentioned that Merleau-Ponty emphasizes the living body in his
49
+ [315.840 --> 322.960] philosophy, and so you might wonder, okay, well, what exactly does that mean? Merleau-Ponty begins
50
+ [322.960 --> 328.720] by drawing on the work of Edmund Husserl, who articulates the key distinction for phenomenology
51
+ [328.720 --> 338.080] between the body as considered as a third person objective inert entity or what in German is the
52
+ [338.080 --> 345.520] Körper, and a second way of thinking about the body, which is as living, subjective, first person,
53
+ [345.520 --> 352.640] in German, Leib. Merleau-Ponty thinks that I cannot understand myself as a subject apart from my
54
+ [352.640 --> 359.840] body, and when I think about my body, it's not the body as a kind of object. It's the living body.
55
+ [360.800 --> 368.640] It's my body as a means of expression. So Merleau-Ponty thinks that one of the main problems with philosophy
56
+ [368.640 --> 375.760] is its tendency towards mind-body dualism. He thinks that we go astray when we imagine ourselves as
57
+ [375.760 --> 382.000] disembodied minds or as inert bodies or as a combination of the two, which you find a lot in
58
+ [382.000 --> 386.640] dualistic philosophies. When we're talking about the body, we're talking about something that is
59
+ [386.640 --> 393.920] in between pure subject and object. There is no inner self for Merleau-Ponty. We know ourselves
60
+ [393.920 --> 401.200] only in and through the world. So I actually am the exterior that I present to others, but there are
61
+ [401.200 --> 406.800] sedimented, layered dimensions of this. One of the things that Merleau-Ponty is known for is emphasizing
62
+ [406.880 --> 412.560] the historical nature of the person, and that also means a historical nature of the body.
63
+ [413.680 --> 419.840] We as humans are in a constant state of becoming, and this means that we don't have a fixed essence,
64
+ [419.840 --> 426.160] but rather that we are in this process of becoming over time through personal and collective history.
65
+ [426.160 --> 436.320] The body is lived as a here and now for Merleau-Ponty. It's not a thing that exists in space,
66
+ [436.960 --> 443.040] but rather an orientation towards space. There would actually be no such thing as space for
67
+ [443.040 --> 449.200] Merleau-Ponty without a body. We have to think space from the body rather than thinking the body
68
+ [449.200 --> 456.480] from space. So this doesn't mean that he's against some modicum of scientific objectivity. It's
69
+ [456.480 --> 462.480] just that he thinks we have to start with phenomenology. We go astray when we try and apply abstract
70
+ [462.480 --> 467.440] scientific categories on to lived experience. We need to have the opposite approach.
71
+ [468.000 --> 472.880] This has actually been pretty influential in recent decades within scientific studies of
72
+ [472.880 --> 477.840] consciousness, where Merleau-Ponty is known as one of the main people to articulate the
73
+ [477.840 --> 485.040] enactivist view of consciousness. For Merleau-Ponty, movement is crucial to understanding how we live
74
+ [485.120 --> 492.400] in the world. Motility is the spatiality of the body brought about in action. Consciousness
75
+ [492.400 --> 500.080] and original intentionality actually take place in motility, he thinks, where the body is engaged
76
+ [500.080 --> 506.400] with the possibilities that its surroundings present it with. He moves away from the view that we
77
+ [506.400 --> 514.400] get in people like Descartes, that I am an "I think," and he says that I am an "I can." My body is an
78
+ [514.400 --> 521.440] I can. Each of us has what Merleau-Ponty calls a body schema, which is a tacit knowledge of the
79
+ [521.440 --> 528.960] body's situation in space. It's a pre-reflective bodily awareness that is dynamic and directed
80
+ [528.960 --> 535.280] towards possibilities. So when I reach out for the doorknob, I have an intuitive sense of where my
81
+ [535.280 --> 540.560] hand is relative to the doorknob, of how far I need to reach. I might be wrong in some cases,
82
+ [540.560 --> 544.880] probably not if it's a doorknob I've touched many times before, but perhaps if it's the first time.
83
+ [545.680 --> 552.560] But either way, I am pre-reflectively oriented towards reaching for the doorknob, and that orientation
84
+ [552.560 --> 558.640] involves a tacit sense of my body in space. It also means that my primary way of engaging with both
85
+ [558.640 --> 565.760] my body and things in the world is as possibilities for me. Merleau-Ponty is influenced in this respect by
86
+ [565.840 --> 571.920] Heidegger's conception of the distinction between something being present at hand or something
87
+ [571.920 --> 577.440] being ready to hand. The ready-to-hand nature of existence is really key for Merleau-Ponty.
88
+ [578.000 --> 582.880] So let's think about this back in terms of space. Merleau-Ponty suggests that there are two
89
+ [582.880 --> 590.480] primary ways that we might conceive of space, objective space, and oriented space. Objective space
90
+ [590.480 --> 596.720] is an external or homogeneous way of thinking about it, almost as if it's a grid, right? Something is
91
+ [596.720 --> 603.840] four feet away from something else. This is how we often think about the body as being in space,
92
+ [603.840 --> 609.840] right? An entity in space. This is a positional way of thinking about it that is third person,
93
+ [610.400 --> 616.880] and also is often described as allocentric space, a view from outside of a given situation,
94
+ [616.880 --> 622.880] perhaps a view from above. So if you think about the way that when you look up directions,
95
+ [622.880 --> 629.680] what you first see is a flat surface where the place that you're going to is in one spot,
96
+ [629.680 --> 634.400] and the place where you are is in another spot, and they're sort of seen as if from above,
97
+ [634.400 --> 639.680] that is allocentric space, or what Merleau-Ponty calls objective space. There's a second way of
98
+ [639.680 --> 645.440] thinking about space that he finds really important for phenomenology, which actually grounds in his
99
+ [645.440 --> 653.440] view objective space, and that is oriented space, or bodily space. Here, the body inhabits space.
100
+ [655.680 --> 662.400] This form of space is characterized by motricity or motility, and it is fundamentally situational
101
+ [662.400 --> 671.200] and first person. Rather than being allocentric, it's egocentric. So when you click start on your
102
+ [671.200 --> 677.520] Google maps, and you go from seeing the place where you are and the place where you're going,
103
+ [677.520 --> 683.520] as points on a two-dimensional surface, to now having the two-dimensional surface shift,
104
+ [683.520 --> 691.120] where you are here, and where you're going is projected as a place that is in front of you,
105
+ [691.120 --> 697.040] that is egocentric space, or the best analog that Google maps can give us because it's still
106
+ [697.680 --> 702.240] two-dimensional, right? And for Merleau-Ponty, science goes wrong a lot of times by thinking about
107
+ [702.240 --> 708.720] objective space as fundamental and oriented space as derivative. Again, he wants to invert the way
108
+ [708.720 --> 713.600] that science usually thinks about things and say, no, we have to think of objective space from oriented
109
+ [713.600 --> 719.360] space, because oriented space is how space shows up for all of us, including the scientists.
110
+ [720.160 --> 726.400] Ultimately for Merleau-Ponty, perception is the background from which all acts stand out,
111
+ [726.400 --> 733.360] and so our philosophy needs to begin with perception. But perception is not thought of in the
112
+ [733.360 --> 743.600] traditional empirical way as sort of receiving of percepts into some inner space, but as our
113
+ [743.600 --> 748.800] active engagement with the world around us, because our consciousness is fundamentally embodied.
114
+ [749.440 --> 753.520] Hope you enjoyed this video. If you want more about phenomenology, we often talk about it on my
115
+ [753.520 --> 763.520] podcast Overthink, and you can also find some related videos in our channel here.
transcript/allocentric_KfHUtWHQ8vM.txt ADDED
@@ -0,0 +1,487 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 2.000] Cheers. Cheers. Cheers.
2
+ [4.000 --> 5.000] Hmm.
3
+ [7.000 --> 9.000] It's sort of an oaky afterbirth.
4
+ [9.000 --> 12.000] Hmm. What was that?
5
+ [12.000 --> 16.000] There are many negative emotions that we as human beings can experience.
6
+ [16.000 --> 18.000] Despair, rage, jealousy.
7
+ [18.000 --> 23.000] But is there any emotion quite as uncomfortable, as cringe,
8
+ [23.000 --> 25.000] embarrassment at the actions of others?
9
+ [25.000 --> 28.000] We've all done something embarrassing at some point or another.
10
+ [28.000 --> 32.000] And while it could be argued that the entire appeal of television shows like The Office
11
+ [32.000 --> 35.000] is just laughing at the shameful actions of another person,
12
+ [35.000 --> 40.000] Many people report being unable to watch shows like that due to their pure distilled cringe.
13
+ [40.000 --> 45.000] Even fans of The Office will likely tell you that there are some scenes that are just hard to watch.
14
+ [45.000 --> 49.000] But while The Office is just one prominent media example,
15
+ [49.000 --> 53.000] surely we've all personally experienced cringe at some point in our lives,
16
+ [53.000 --> 56.000] be it on social media or in real life.
17
+ [56.000 --> 59.000] While posting cringe is a relatively new phenomenon,
18
+ [59.000 --> 64.000] and several have suggested that we live in an age of cringe culture that thrives on the awkward actions of others.
19
+ [64.000 --> 69.000] Being cringe in public and cringing at others in public is far from new behavior
20
+ [69.000 --> 71.000] and is as old as human society itself.
21
+ [71.000 --> 76.000] While politicians and celebrities are frequently in the news for cringy behavior nowadays,
22
+ [76.000 --> 79.000] and cause their supporters or fans to wince at their actions,
23
+ [79.000 --> 84.000] stop this.
24
+ [84.000 --> 85.000] Stop it.
25
+ [85.000 --> 86.000] Even that's not new.
26
+ [86.000 --> 90.000] As Lyndon B. Johnson was reported to have liked telling people about his, um,
27
+ [90.000 --> 96.000] Johnson, and frequently show it off, surely to the dismay and embarrassment of all around him,
28
+ [96.000 --> 99.000] including one event when asked why the US was at war with Vietnam,
29
+ [99.000 --> 103.000] LBJ allegedly whipped it out to a reporter, pointed to it and said,
30
+ [103.000 --> 104.000] that's why.
31
+ [104.000 --> 108.000] Okay, maybe that's not cringe, that's just hardcore chad energy.
32
+ [108.000 --> 111.000] But just wait cause we'll get into some peak cringe as we go forward.
33
+ [111.000 --> 116.000] We can certainly feel cringe not just for strangers, but our friends too, right?
34
+ [116.000 --> 118.000] Maybe particularly our friends.
35
+ [118.000 --> 120.000] Ever been around a friend who's had one too many?
36
+ [120.000 --> 124.000] Why do we seemingly feel this pain at the embarrassment of others
37
+ [124.000 --> 128.000] and is secondhand embarrassment actually more uncomfortable than our own embarrassment?
38
+ [128.000 --> 133.000] But before we go forward, if you're interested in learning new things like why cringe is uncomfortable,
39
+ [133.000 --> 136.000] then you might be interested in completing or expanding upon your education,
40
+ [136.000 --> 140.000] and you can do just that with this video sponsor, Coursera.
41
+ [140.000 --> 144.000] Coursera is an online learning platform that allows you to take your education or career
42
+ [144.000 --> 149.000] to the next level with thousands of courses on subjects ranging from computer science to languages,
43
+ [149.000 --> 152.000] and of course my favorite social science.
44
+ [152.000 --> 158.000] More than 200 universities and companies, including major universities like Yale, Princeton, Duke, and Johns Hopkins,
45
+ [158.000 --> 161.000] have partnered with Coursera to not just offer you useful courses,
46
+ [161.000 --> 165.000] but real-world benefits in the form of everything from certificates
47
+ [165.000 --> 168.000] to bachelors and even master's degrees.
48
+ [168.000 --> 172.000] If you're like me, you always want to learn more, and Coursera can help you take that love of learning
49
+ [172.000 --> 177.000] and turn it into new options for success with professional certificates to make your resume shine.
50
+ [177.000 --> 182.000] With everything that's going on in the world, a lot of us have had to put parts of our lives on hold,
51
+ [182.000 --> 188.000] but with Coursera, you can continue to improve your resume or even earn your degree entirely online.
52
+ [188.000 --> 193.000] So if you're looking to further your education or your career, consider checking out Coursera.
53
+ [193.000 --> 197.000] So you too can learn useful skills or just more about psychology and communication.
54
+ [197.000 --> 201.000] And speaking of which, let's get into the psychology of cringe.
55
+ [209.000 --> 213.000] Cringe as a psychological phenomenon has been studied since the late 1980s,
56
+ [213.000 --> 218.000] and likely became of interest due to the unique fashion choices of the age.
57
+ [218.000 --> 223.000] It was initially described by Miller (1987) as empathetic embarrassment.
58
+ [223.000 --> 227.000] Much like pornography, embarrassment is something that we recognize when we see it,
59
+ [227.000 --> 230.000] in the body language and facial expressions of others.
60
+ [230.000 --> 234.000] Thus whenever someone suffers the flustered discomfort of embarrassment,
61
+ [234.000 --> 239.000] observers may recognize and empathetically come to share in the same flustered feeling
62
+ [239.000 --> 244.000] even though they themselves have not been embarrassed, because humans are social animals.
63
+ [244.000 --> 247.000] Miller, in a first experiment, paired dyads of strangers together,
64
+ [247.000 --> 252.000] and they were tasked with playing one of three guessing games: either a cooperative,
65
+ [252.000 --> 254.000] a competitive, or an individual game.
66
+ [254.000 --> 258.000] In the cooperative game, participants were asked to select the answer to questions such as,
67
+ [258.000 --> 261.000] quote, would you rather go on a first date to the movies or to a party?
68
+ [261.000 --> 264.000] But they didn't answer this question for themselves,
69
+ [264.000 --> 267.000] but rather how they anticipated the other player would answer.
70
+ [267.000 --> 270.000] If they guessed right, they would win a point for their team.
71
+ [270.000 --> 273.000] In the competitive condition, subjects were told guessing correctly
72
+ [273.000 --> 277.000] would only award him or her a point, while their opponent would win a point
73
+ [277.000 --> 279.000] when he or she guessed correctly.
74
+ [279.000 --> 282.000] Finally, in the individual condition, participants were asked these questions,
75
+ [282.000 --> 287.000] not in reference to their fellow participants, but rather in regards to the average student on campus,
76
+ [287.000 --> 292.000] and were told they would win a point for how many of these questions that they answered correctly.
77
+ [292.000 --> 298.000] Afterwards, a coin flip decided that one subject would be an actor and the other an observer.
78
+ [298.000 --> 302.000] The observer then left the room, but could still see the actor through one-way glass
79
+ [302.000 --> 304.000] and hear him or her through headphones.
80
+ [304.000 --> 307.000] Next, the actor was given a set of cards that contained instructions
81
+ [307.000 --> 310.000] regarding some embarrassing action that he or she was to perform,
82
+ [310.000 --> 313.000] knowing they were being watched by the other subject.
83
+ [313.000 --> 315.000] Who was a complete stranger?
84
+ [315.000 --> 318.000] These acts included dancing to music for 60 seconds,
85
+ [318.000 --> 322.000] laughing for 30 seconds as if he or she had just heard a funny joke,
86
+ [322.000 --> 325.000] singing the entire star-spangled banner with the lyrics provided,
87
+ [325.000 --> 330.000] and the rockets' red glare...
88
+ [330.000 --> 335.000] Or imitating a five-year-old throwing a temper tantrum to avoid going to bed for 30 seconds.
89
+ [335.000 --> 339.000] I'm suffering already just thinking about the stimuli.
90
+ [339.000 --> 344.000] Observers were asked either to empathize with the actor or simply observe his or her actions,
91
+ [344.000 --> 349.000] and further, the observer's galvanic skin response that is changes in the electrical conductivity of the skin,
92
+ [349.000 --> 354.000] including sweating, were measured while bearing witness to this embarrassing display.
93
+ [354.000 --> 358.000] Observers felt the actor was the most embarrassed when asked to be empathetic,
94
+ [358.000 --> 361.000] when they had previously played the independent form of the game,
95
+ [361.000 --> 365.000] in which they neither competed against nor cooperated with the actor.
96
+ [365.000 --> 369.000] In contrast, the actor was seen as the least embarrassed when simply being observed
97
+ [369.000 --> 372.000] and similarly having played the independent game with them.
98
+ [372.000 --> 378.000] The personal embarrassment felt in response to watching this cringe varied by participants' sex.
99
+ [378.000 --> 382.000] Men experienced the most empathetic embarrassment when asked to be empathetic
100
+ [382.000 --> 385.000] after previously having played the independent game,
101
+ [385.000 --> 392.000] and the lowest levels of embarrassment when asked to be empathetic after having played the competitive game against the embarrassed actor.
102
+ [392.000 --> 397.000] In turn, women experienced the most second-hand embarrassment when they were empathizing with the actor
103
+ [397.000 --> 401.000] and had previously engaged in either the competitive or the cooperative game,
104
+ [401.000 --> 406.000] and felt the least embarrassment when they were mere observers who had previously competed against the actor.
105
+ [406.000 --> 411.000] In short, both men and women tend to feel more cringe when they are empathizing with another person,
106
+ [411.000 --> 415.000] but women experienced slightly more when they have directly interacted with others,
107
+ [415.000 --> 421.000] while men feel slightly more when they have had little interpersonal interaction with a cringey person,
108
+ [421.000 --> 423.000] when they have kept them at a distance.
109
+ [423.000 --> 427.000] The strongest levels of galvanic skin response occurred in those asked to be empathetic
110
+ [427.000 --> 430.000] who had also played the cooperative game with the actor,
111
+ [430.000 --> 434.000] and the weakest response in observers who had played the independent game.
112
+ [434.000 --> 439.000] This indicates that people who are being empathetic towards others are more likely to feel with them,
113
+ [439.000 --> 443.000] not just emotionally, but physically, when that person is doing something cringey,
114
+ [443.000 --> 448.000] and we may even sweat in response to this uncomfortable situation.
115
+ [448.000 --> 452.000] Further, perceptions of the embarrassment of the actor was related to personal levels of embarrassment,
116
+ [452.000 --> 454.000] as well as feelings of sorryness and sympathy,
117
+ [454.000 --> 458.000] while only personal embarrassment was related to galvanic skin response.
118
+ [458.000 --> 462.000] Taken together, experiencing cringe can actually make us sweaty.
119
+ [462.000 --> 464.000] Mom's spaghetti. Never forgety.
120
+ [464.000 --> 469.000] A second experiment predominantly replicated the first,
121
+ [469.000 --> 474.000] but this time participants played a variation of the prisoner's dilemma game, called Beat the Bank,
122
+ [474.000 --> 478.000] wherein it was explained that cooperation against the bank would help both players
123
+ [478.000 --> 483.000] mutually earn the most reward, while competition would result in decreased rewards in turn.
124
+ [483.000 --> 486.000] Participant dyads were moved into separate rooms,
125
+ [486.000 --> 491.000] and after making a series of decisions, again encouraged to be cooperative for their own and mutual benefit,
126
+ [491.000 --> 496.000] were told that the other subject had either made cooperative choices nine out of ten times,
127
+ [496.000 --> 500.000] or cooperative choices only three out of ten times.
128
+ [500.000 --> 506.000] In both cases, subjects were informed that this competitive choice resulted in the dyad failing to beat the bank.
129
+ [506.000 --> 511.000] As in the previous experiment, a coin flip separated the dyad into actor and observer,
130
+ [511.000 --> 514.000] and the actor performed the aforementioned embarrassing actions.
131
+ [514.000 --> 520.000] In general, the embarrassment of the observers was less intense than that of the actors, as you might expect.
132
+ [520.000 --> 523.000] However, subjects' susceptibility to embarrassment,
133
+ [523.000 --> 527.000] that is their general tendency towards embarrassment, influences results,
134
+ [527.000 --> 531.000] in that observers who were uniquely more susceptible towards being embarrassed,
135
+ [531.000 --> 537.000] reported stronger reactions towards the behavior of the actor than those low in embarrassability.
136
+ [537.000 --> 540.000] The more embarrassed the observers believed the actors to be,
137
+ [540.000 --> 543.000] the more empathetic embarrassment they themselves experienced,
138
+ [543.000 --> 547.000] but in a fascinating turn, observer reports of empathetic embarrassment
139
+ [547.000 --> 552.000] were entirely not correlated with self-reports of embarrassment from the actors.
140
+ [552.000 --> 559.000] Oftentimes, we feel more embarrassed for another person than that person feels for him or herself.
141
+ [559.000 --> 564.000] Subjects higher in trait embarrassability expressed greater autonomic response,
+ [564.000 --> 569.000] sweating more while watching the actor than those low in embarrassability.
+ [569.000 --> 574.000] If you've ever seen someone do something cringey on the internet and felt ashamed for that person,
+ [574.000 --> 580.000] there's a very real possibility that the embarrassment that you felt is not only entirely your own,
+ [580.000 --> 584.000] but a feeling just not shared by the person posting cringe.
+ [584.000 --> 588.000] Whether the actor had competed against or cooperated with the observer
+ [588.000 --> 593.000] had no significant effect on these findings, indicating that we are equally as likely
+ [593.000 --> 597.000] to experience vicarious embarrassment towards people that we have a reason to like,
+ [597.000 --> 598.000] or to dislike.
+ [599.000 --> 605.000] Cringe, therefore, is not dependent on whether or not the cringey person has previously been cooperative.
+ [605.000 --> 610.000] With that in mind, to further understand whether friendship or familiarity with another person influences
+ [610.000 --> 616.000] the effects of vicarious embarrassment, Chekroun and Nugier (2011) conducted a series of experiments
+ [616.000 --> 621.000] to further understand the effects of cringe originating from a stranger or from a friend.
+ [621.000 --> 624.000] In their first experiment, French university students were asked to imagine
+ [624.000 --> 629.000] meeting with a group of other students while on campus, one being French, one Swiss, and one Belgian.
+ [629.000 --> 633.000] At some point during the conversation, either the Belgian or the French student
+ [633.000 --> 638.000] was described as lighting up a cigarette despite there being numerous no-smoking signs nearby.
+ [642.000 --> 646.000] Subjects' levels of shame, embarrassment, and guilt were measured,
+ [646.000 --> 649.000] and they were asked how they might respond to the smoker,
+ [649.000 --> 651.000] if they would want to continue to associate with the smoker,
+ [651.000 --> 656.000] if the behavior violated social norms, and how the behavior might make observers feel,
+ [656.000 --> 661.000] either about the French or about the Belgians, and how that behavior might thereby change
+ [661.000 --> 665.000] the opinions of observers towards the participant him or herself.
+ [665.000 --> 669.000] Respondents were more concerned with how others might see themselves and the French in general
+ [669.000 --> 671.000] when the smoker was French.
+ [671.000 --> 676.000] The most commonly reported intended response to the smoker was shooting the student a disapproving look
+ [676.000 --> 679.000] and a polite request for the smoker to stop.
+ [679.000 --> 684.000] Subjects who were told the smoker was more similar to themselves, being French rather than Belgian,
+ [684.000 --> 688.000] were more likely to report that he or she would intervene in some form.
+ [688.000 --> 692.000] Subjects were more ashamed when the smoker was French, and greater reports of shame
+ [692.000 --> 696.000] were related to increased propensity to intervene in the scenario.
+ [696.000 --> 702.000] Shame was further related to concerns both for one's own self-image and the image of French people in general
+ [702.000 --> 704.000] when the smoker was French.
+ [704.000 --> 708.000] A second study replicated the same intergroup interaction conditions as the first,
+ [708.000 --> 712.000] wherein a French student lit a cigarette around a group of international students.
+ [712.000 --> 717.000] But this time, in the in-group condition, participants read that a fellow French student lit a cigarette
+ [717.000 --> 722.000] in a no-smoking area in a group comprised entirely of French students.
+ [722.000 --> 726.000] Subjects were more concerned for their own self-image and the image of French people in general
+ [726.000 --> 730.000] in the scenario wherein the smoker was in the presence of non-French students.
+ [730.000 --> 737.000] 81% of subjects said that they would intervene against the smoker when the students around them were from other nations,
+ [737.000 --> 741.000] compared to only 62% who would intervene in a group of all French students.
+ [741.000 --> 749.000] Similarly, respondents reported more shame in the intergroup multinational context than in the in-group all-French context,
+ [749.000 --> 753.000] and those who felt shame were more likely to say that they would intervene.
+ [753.000 --> 759.000] Moreover, those more concerned with the potential harm to their group or personal image were more likely to feel shame.
+ [759.000 --> 763.000] That is, when we are concerned about our own impression management,
+ [763.000 --> 769.000] we experience more shame at the inappropriate or embarrassing actions of someone who is more similar to us.
+ [769.000 --> 775.000] And as a result, we are more likely to step in and tell that person who is similar to us to knock it off.
+ [775.000 --> 780.000] It was not cake, it was cream father cut out.
+ [780.000 --> 786.000] In their final study, the researchers were interested in the influence of stereotypes on vicarious shame and embarrassment.
+ [786.000 --> 792.000] The procedure was identical to the first study, but before answering questions about the event, some subjects read a news article
+ [792.000 --> 800.000] about perceptions of the French from other Europeans, in which the French were described as a disrespectful, arrogant, and dirty people.
+ [800.000 --> 807.000] This is verbal abuse.
+ [807.000 --> 819.000] Those who read the article about French stereotypes were more concerned with negative perceptions about the French when they were also more concerned with how the smoker's behavior might damage their personal or group identity.
+ [819.000 --> 826.000] Guilt, rather than shame, mediated the relationship between the stereotyping article and intentions to intervene against the smoker.
+ [826.000 --> 837.000] As such, we might expect that our degree of second-hand embarrassment would be affected by how much we know or like someone who is being cringey, which was examined by Stocks et al. (2011).
+ [837.000 --> 848.000] In their first experiment, subjects were recruited, supposedly by their university, to read reports and listen to audio diaries from students to help develop strategies for the transition between high school life and university life.
+ [848.000 --> 854.000] First, subjects read a description of a fellow student, Zach, which manipulated how much students would like him.
+ [854.000 --> 861.000] In the liking condition, Zach was described as helping an old, confused woman find her house and carrying her groceries home for her.
+ [861.000 --> 872.000] In the disliking condition, Zach refused to help the old woman, was rude to her, and was responsible for her falling over and dropping her groceries, telling her that she got what she deserved for bothering him.
+ [873.000 --> 875.000] So, a real douchebag.
+ [875.000 --> 880.000] Participants then read a transcript and listened to audio of Zach describing a particularly bad day.
+ [880.000 --> 886.000] He slept in, missed his first class, and later went on a date arranged by his friends with an attractive woman named Sarah.
+ [886.000 --> 894.000] While on their date, Zach laughed so hard at one of her jokes that he shot soda out of his nose, and then, while laughing, and I quote,
+ [894.000 --> 899.000] ripped a big one and it smelled really bad and we both know who did it.
+ [905.000 --> 909.000] This vicarious embarrassment stuff is going to vicariously kill me, I swear to god.
+ [909.000 --> 912.000] And by the way, this is the true danger of women trying to be funny.
+ [912.000 --> 920.000] Respondents liked Zach less when he was cruel to the old woman, cared less about Zach's welfare, were less distressed by the story,
+ [920.000 --> 929.000] and felt both less empathetic concern for Zach as well as less empathetic embarrassment for him, compared to when Zach was a likable figure who helped out the old woman.
+ [929.000 --> 934.000] Thus, we can perceive the same embarrassing event differently based on how much we like a cringey person,
+ [934.000 --> 940.000] and are more empathetic towards someone who has done something embarrassing when we think that person is more likable.
+ [940.000 --> 949.000] A second experiment replicated the first, but this time some participants were asked to be objective, some were asked to try and put themselves into Zach's position during the awkward date story,
+ [949.000 --> 952.000] and a third group was asked to try and empathize with Zach.
+ [952.000 --> 960.000] Afterwards, subjects were asked if he or she would like to receive update emails about Zach's welfare and college transition over the next six weeks.
+ [960.000 --> 969.000] Those asked to put themselves in Zach's shoes reported more feelings of personal distress in response to the date story, as well as more empathetic embarrassment.
+ [969.000 --> 973.000] In turn, those asked to empathize with Zach reported more empathetic concern for him.
+ [973.000 --> 985.000] Those asked to be objective, or to take Zach's perspective, were equally as likely to ask to be updated about Zach's condition, while those who were asked to be empathetic towards Zach were twice as likely to request updates.
+ [985.000 --> 990.000] Simply trying to think about how someone else might feel during an embarrassing situation, then,
+ [990.000 --> 993.000] increases our capacity for vicarious embarrassment,
+ [993.000 --> 999.000] while imagining ourselves as the embarrassed person simply increases feelings of personal distress.
+ [999.000 --> 1006.000] While we can feel embarrassed for someone, trying to feel embarrassment with someone just makes us uncomfortable too.
+ [1006.000 --> 1009.000] I would really prefer if you would be quiet.
+ [1011.000 --> 1013.000] But yes, you are correct.
+ [1013.000 --> 1020.000] Second-hand embarrassment at the actions of people that we like or are friends with is not just something that we can measure with social science instruments.
+ [1020.000 --> 1034.000] It's something that people recognize and will tell you in their own words, as seen in a qualitative analysis from Killian, Steinman, and Haymes (2017) involving customer interactions in retail environments, potentially the cringiest of all environments.
+ [1034.000 --> 1039.000] I want someone to stop it. Where is time for someone to stop?
+ [1039.000 --> 1048.000] Subjects were given an example of an embarrassing customer interaction at a doctor's office, wherein one woman needed a prescription, and when the employee was not able to help this woman immediately,
+ [1048.000 --> 1056.000] she became upset and started to argue with the employee, which made another woman who was in the office feel vicariously embarrassed for her.
+ [1056.000 --> 1058.000] I think we've all been there.
+ [1058.000 --> 1063.000] Subjects were then asked to recall a similar event that he or she had experienced within his or her own life.
+ [1063.000 --> 1077.000] While several recalled events involving strangers, such as a rude and impatient girl in the school cafeteria, multiple respondents noted that having a relationship with the person acting embarrassingly was particularly awkward, with one stating, quote,
+ [1077.000 --> 1089.000] if the person is a total stranger, then I'll just observe and think, oh my god, turn around and go. Perhaps, I think about the situation.
+ [1089.000 --> 1102.000] But if it is a friend or someone I know, then I will say something, or even try to resolve the situation somehow, because it is somehow linked to me.
+ [1102.000 --> 1109.000] Subjects further noted that he or she might be more likely to intervene due to this friendship affiliation, with another stating, quote,
+ [1109.000 --> 1132.000] while these participants were describing friends and acquaintances, some described the actions of their family, with a mother commenting on how she feels when her children misbehave in stores, saying, quote,
+ [1132.000 --> 1161.000] clearly, we recognize then that when our friends or family are doing something cringey, it potentially reflects poorly upon ourselves, which in turn manifests as vicarious embarrassment.
+ [1161.000 --> 1171.000] Another qualitative analysis, from a student research presentation conducted by German et al. (2019), found similar results in student stories related to the classroom environment.
+ [1171.000 --> 1180.000] Subjects were asked to recall a situation where another student did something cringey, and were asked how they felt during this event, as well as what actions they would take in response to it.
+ [1180.000 --> 1186.000] They broke down the stories shared into three types: criticism, awkward acts, and forgetfulness.
+ [1186.000 --> 1198.000] Criticism stories, the most common, included instances where a student's grades were disparaged by a professor in front of the class, or a professor asking a student about his parents, only to find out that the student was an orphan.
+ [1198.000 --> 1215.000] Finally, the least common type, forgetful stories, included an instance where a student forgot what to say during a public speaking class, or, absolutely horrifyingly, a professor leaving her lavalier mic on while going to the bathroom during a lecture, so the whole class got to hear her go.
+ [1215.000 --> 1224.000] 39% of respondents reported feeling empathy for those in these situations, the most commonly reported emotion, followed by shock, anger, and awkwardness.
+ [1224.000 --> 1241.000] It's not a radical idea, then, to suggest that, because we feel more second-hand embarrassment at the behaviors of those with whom we are associated, a politician or group leader who is seen as doing embarrassing things would elicit a form of vicarious shame on the part of members of an entire political party.
+ [1241.000 --> 1246.000] And love him or hate him, Trump has sure done some embarrassing things, particularly on Twitter.
+ [1246.000 --> 1264.000] Thus, Paulus et al. (2019) examined the use of the term embarrassment from US-based Twitter accounts over time, and related this use to major scandals or events concerning Donald Trump, and found that, as you might expect, when Trump did something heavily criticized, the use of the word embarrassed majorly increased on the social media website.
+ [1264.000 --> 1282.000] The researchers tracked the use of the word embarrassment over time on Twitter, from 2015, under the Obama administration, through August 2017, well into the Trump administration, and found a 45% increase in the total usage of the term embarrassment between the 2016 debates and the end of data collection.
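As a rough sketch of how this kind of term tracking can be done (the study's actual pipeline is not described here, and the corpus below is hypothetical), one can count tweets per day that match an "embarrass"-rooted word:

    import re
    from collections import Counter

    EMBARRASS = re.compile(r"\bembarrass\w*", re.IGNORECASE)

    def daily_embarrassment_counts(tweets):
        """Count tweets per day containing any 'embarrass*' term.

        `tweets` is an iterable of (date_string, text) pairs -- a
        hypothetical stand-in for a real Twitter corpus.
        """
        counts = Counter()
        for date, text in tweets:
            if EMBARRASS.search(text):
                counts[date] += 1
        return counts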
+ [1282.000 --> 1294.000] Embarrassment peaked during the 2016 presidential debates, when Trump refused to shake hands with Angela Merkel, and when Trump pushed Montenegrin Prime Minister Duško Marković out of the way during a 2017 NATO summit.
+ [1294.000 --> 1304.000] Specifically, during the peak days of embarrassment, tweets explicitly including the word Trump comprised between 20 and 35% of all embarrassment-related messages.
+ [1304.000 --> 1320.000] Now, while it's completely reasonable to assume that these mentions of embarrassment came primarily from the left, if instead, as we've seen, we tend to be more embarrassed by those within our own in-group, it may be that these expressions of embarrassment arose more from conservatives or Trump supporters than from Democrats.
+ [1320.000 --> 1334.000] However, considering that the peak embarrassment events occurred during international faux pas, it may be that this embarrassment was a result of shared identity as Americans, rather than any specific party or political affiliation.
+ [1334.000 --> 1344.000] An American football team kneeling for The Star-Spangled Banner while standing for God Save the Queen is perhaps a perfect exemplar of political group-based embarrassment, for example.
+ [1344.000 --> 1366.000] Since we've seen that empathy and affiliation both play a role in vicarious embarrassment, so too, then, should perspective taking, or imagining how we would feel in another person's shoes or situation, which was examined by Hawk, Fischer, and Van Kleef (2011).
+ [1366.000 --> 1378.000] In their first study, female Dutch participants were paired into dyads, much as with Miller's study; however, the other supposed participant was really just a research assistant Confederate. No, still not that type of Confederate.
+ [1378.000 --> 1389.000] After the Confederate was artificially chosen as the actor in the experiment, she left the room, and the subject watched a pre-recorded video, supposedly a live feed of the Confederate dancing in the other room.
+ [1389.000 --> 1399.000] In the embarrassed condition, the Confederate displayed physical signs of shame or awkwardness, including gaze aversion, smiling, touching her face, hair, and clothing, and downward head movements.
+ [1399.000 --> 1403.000] While in the non-embarrassed condition, the Confederate remained cool and aloof while dancing.
+ [1403.000 --> 1416.000] Respondents then reported on their own emotions regarding the video and were asked to evaluate it as objectively as possible, the objective condition, while those in the perspective-taking condition were asked to report on the emotions of the Confederate dancer.
+ [1416.000 --> 1430.000] Participants felt that the dancer was more embarrassed when she showed physical signs or symbols of awkwardness, but those asked to think about the feelings of the dancer reported more sensations of embarrassment regardless of whether or not she looked physically uncomfortable.
+ [1430.000 --> 1443.000] Thus, people can feel more embarrassed when they imagine themselves in the position of someone doing something embarrassing, similarly to how they do when they view themselves as part of a shared social group when someone in that group is violating social norms.
+ [1443.000 --> 1451.000] A second study sought to identify emotional contagion and mimicry by examining the body language of subjects exposed to an embarrassed person.
+ [1451.000 --> 1462.000] Subjects were placed in a cubicle in front of a computer with a webcam. Subjects watched the video of the dancing woman from the previous experiment, and their body language was, unbeknownst to them, recorded via webcam as they watched.
+ [1462.000 --> 1468.000] They then reported on their feelings of empathetic embarrassment for the dancer and their degree of perspective taking with her.
+ [1468.000 --> 1475.000] Under the auspices of studying rhythm, participants were then asked to dance along or sing along with a song with which he or she was unfamiliar.
+ [1475.000 --> 1479.000] At this point, they were told that the webcam would record their actions.
+ [1479.000 --> 1487.000] Those who watched the awkward version of the video of the girl dancing expressed more empathetic embarrassment than those who watched the non-awkward version,
+ [1487.000 --> 1493.000] while there was generally no such effect for those asked to sing, between the embarrassed or non-embarrassed Confederate videos.
+ [1494.000 --> 1499.000] Relatedly, those who were asked to dance were more able to take the perspective of the woman in the film.
+ [1499.000 --> 1506.000] Whether subjects danced or sang, they were more likely to mimic awkward body language when the woman dancing looked uncomfortable.
+ [1506.000 --> 1511.000] Have you ever watched some cringey thing online and found yourself averting your eyes or wincing?
+ [1511.000 --> 1519.000] Well, these data would indicate that this is a physical reaction that we experience when a cringey person seems ashamed him or herself,
+ [1520.000 --> 1524.000] and, as a byproduct, increases our own experience of empathetic embarrassment.
+ [1537.000 --> 1548.000] Because embarrassment is such an uncomfortable emotion, why do we feel it in the first place, and, perhaps more importantly, why do we feel embarrassment for others when we haven't done anything shameful ourselves?
+ [1549.000 --> 1555.000] Well, Erving Goffman proposed that embarrassment is not just a social emotion, but a pro-social emotion.
+ [1555.000 --> 1565.000] Specifically, Goffman's early (1956) analysis of embarrassment hypothesized that embarrassment signals an individual's underlying prosociality and trustworthiness.
+ [1565.000 --> 1575.000] To test this hypothesis, Feinberg, Willer, and Keltner (2012) conducted a series of experiments to identify the relationship between displays of embarrassment and perceptions of trust.
+ [1575.000 --> 1593.000] In a preliminary study, students were asked to record a short video describing a time when he or she was embarrassed, such as tripping and falling over or passing gas in public, for use in subsequent experiments, so the stimuli in this research were not actors playing a role, but real embarrassing stories from real people.
+ [1594.000 --> 1609.000] In their first experiment, subjects watched these embarrassing videos and were asked about the embarrassment they felt towards them, as well as their general levels of embarrassability.
+ [1609.000 --> 1616.000] And then their degrees of prosociality were assessed using a measure of altruism and generosity via a dictator game.
+ [1616.000 --> 1628.000] In this version of a dictator game, participants were allocated 10 raffle tickets, each worth one entry into a drawing for a $50 gift certificate, and were told they could divide these tickets between themselves and another person.
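As a worked example of the arithmetic here (the function below is my own illustration, not the study's materials): each ticket is one entry in the $50 drawing, so every ticket given away trades one of the allocator's own entries for the other person's benefit.

    def dictator_split(tickets_given, total_tickets=10):
        """Summarize one dictator-game allocation."""
        assert 0 <= tickets_given <= total_tickets
        return {
            "kept": total_tickets - tickets_given,        # entries retained
            "given": tickets_given,                       # entries donated
            "generosity": tickets_given / total_tickets,  # 0.0 selfish .. 1.0 fully generous
        }

    # A participant who hands over 4 of 10 tickets keeps 6 entries:
    print(dictator_split(4))  # {'kept': 6, 'given': 4, 'generosity': 0.4}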
+ [1628.000 --> 1635.000] Participants who reported more embarrassability were more likely to be generous and to report higher levels of altruism.
+ [1636.000 --> 1649.000] In a second study, the researchers selected an example of a very embarrassing story and a minimally embarrassing story from the aforementioned stimuli, and asked participants how prosocial or antisocial they thought the embarrassed person was.
+ [1649.000 --> 1662.000] Respondents viewed women as generally more prosocial than men, regardless of whether the women were minimally or maximally embarrassed, and in turn viewed men, regardless of their level of embarrassment, as more antisocial than women.
+ [1662.000 --> 1681.000] Because embarrassment is a low-status emotion, researchers expected women to be seen as better people when they expressed shame, perhaps because we tend to see women as less agentic, as having less control over their lives, and therefore less to blame and more prosocial when women disclose some embarrassing event in their own past.
+ [1681.000 --> 1686.000] In other words, when girls do it, it's cute; when guys do it, it's cringe.
+ [1686.000 --> 1707.000] In order to reduce potential bias introduced by differences in the speech patterns used in the shameful story videos collected from subjects, a third study only utilized photos of people looking embarrassed, averting their gaze and holding a compressed smile, or looking prideful, with a wider smile and an upturned head.
+ [1707.000 --> 1717.000] Participants in this study reported how prosocial, trustworthy, and moral they thought the photographed person was, and how much he or she might want to interact with the photographed person.
+ [1717.000 --> 1723.000] Respondents felt that the embarrassed person was more prosocial and had a greater desire to affiliate themselves with that person.
+ [1723.000 --> 1735.000] Moreover, desire for affiliation was mediated through perceived prosociality, in that the more trustworthy or moral an embarrassed person was seen as being, the more likely respondents were to say that they would want to hang out with that person.
+ [1736.000 --> 1757.000] In a fourth experiment, these same stimuli were used, but this time, rather than choosing to associate with the photographed person, participants were told the photos were of other participants in the experiment and played the dictator game again, with the person in the photo as the receiver and themselves as the sender, allocating 10 raffle tickets for a chance to win $50 between themselves and the stranger.
+ [1757.000 --> 1765.000] While subjects gave more tickets to the embarrassed person in general, this relationship was strongly mediated by perceived prosociality.
+ [1765.000 --> 1774.000] That is, embarrassed people were seen as more prosocial, and the more prosocial they were seen as, the more raffle tickets that embarrassed person was allocated.
+ [1774.000 --> 1790.000] A final study in this set sought to delineate embarrassment from shame, and while I've used the terms mostly interchangeably up to this point, they are a little bit different. To illustrate the difference, in the first segment of this experiment, subjects were placed in a room with a Confederate purported to be another experimental subject.
+ [1790.000 --> 1803.000] The researchers entered the room and explained that the participant would take some opinion task with no right-or-wrong answers, while the Confederate, the other person in the room, would complete a set of example questions from the Graduate Record Examination, or GRE,
+ [1803.000 --> 1811.000] which is a standardized test required for admission into master's and doctoral programs, contains complex math and reading questions, and is generally a real pain.
+ [1811.000 --> 1821.000] After completing the tasks, the experimenter returned and excitedly congratulated the Confederate on a perfect score, something no other participant had ever done.
+ [1821.000 --> 1832.000] To separate embarrassment from shame, in the embarrassment condition, the Confederate averted her gaze, shook her head, or nervously touched her face in response to this news, while in contrast, in the pride condition,
+ [1832.000 --> 1840.000] the Confederate raised her arms, smiled, and raised her head upward, and in the neutral condition, she gave little to no emotional response.
+ [1840.000 --> 1855.000] Participants' compassion and sympathy towards the Confederate were measured, and then the two played a modified, computerized trust game, a version of the Prisoner's Dilemma game, deciding how to divvy up 10 raffle tickets, wherein each ticket given to the opponent would be doubled by the researcher.
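To see why the doubling rule makes generosity pay off jointly, here is a quick worked sketch of the rule as described; the function itself is my own illustration, not code from the study.

    def trust_round(tickets_sent, total_tickets=10):
        """One round of the doubling trust game described above.

        The sender keeps (total - sent) tickets; every ticket sent
        is doubled by the researcher before reaching the receiver.
        """
        assert 0 <= tickets_sent <= total_tickets
        sender_keeps = total_tickets - tickets_sent
        receiver_gets = 2 * tickets_sent
        return sender_keeps, receiver_gets

    # Sending everything maximizes the pair's combined tickets:
    print(trust_round(0))   # (10, 0)  -> joint total 10
    print(trust_round(10))  # (0, 20)  -> joint total 20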
+ [1855.000 --> 1865.000] Subjects gave more tickets and resources to the Confederate when she looked embarrassed, compared to when she looked neutral or prideful, in response to her getting a high score.
+ [1865.000 --> 1873.000] Further, the tendency to give more to the embarrassed person may not have only indicated perceptions of weakness, but also a perception of humility.
+ [1873.000 --> 1884.000] While we can cringe at the embarrassment of others, we also feel pity for them, and may see them as better people, or just good people in a bad situation, without information to the contrary.
+ [1884.000 --> 1890.000] The response to cringe is context dependent; is everyone equally susceptible to vicarious embarrassment, then?
+ [1890.000 --> 1899.000] A study of correlates to empathetic embarrassment from Yusul et al. (2014) developed a measurement of vicarious embarrassment and tested it against various personality variables.
+ [1899.000 --> 1911.000] They found that the experience of empathetic embarrassment was related positively to susceptibility to embarrassment, empathy, perspective taking, and fear of negative evaluations, and was related negatively to self-esteem.
+ [1911.000 --> 1920.000] A second study provided participants with an example of something embarrassing, in this case a video of a contestant on X-Factor Bulgaria, specifically this clip.
+ [1924.000 --> 1938.000] Responses were a bit different when it involved a media figure, namely in that vicarious embarrassment was unrelated to perspective taking, indicating that while we can empathize with regular people, particularly those that we know and like,
+ [1938.000 --> 1947.000] and we can understand those people's embarrassment, we struggle with understanding the perspective of someone who is in even a minor position of fame or fortune.
+ [1947.000 --> 1967.000] While embarrassment is a social emotion itself, vicarious embarrassment is reliant upon socialization to even exist, as it depends on empathizing with the emotions of others, and as such, often we don't want to be helpful towards or associated with people who have been embarrassed, and instead want to differentiate ourselves from a cringey or embarrassed person.
+ [1967.000 --> 1975.000] But why? Could it be because cringe is physically painful? Well, to find out, let's look into the neuropsychology of cringe.
+ [1982.000 --> 1995.000] When our friends do something embarrassing, we are more likely to feel embarrassed for them and with them, indicating that the pain of our friends, or of people similar to us, can be our pain when that pain arises from social awkwardness.
+ [1995.000 --> 2007.000] But this pain is not just a minor passing thing; it's something we can actually see within brain activity, because social pain, as it turns out, is something we feel just like we feel physical pain.
+ [2007.000 --> 2020.000] Müller-Pinzler et al. (2015) sought to understand the relationship between social and empathetic embarrassment and brain activity, to further understand why the cringe of others so often makes us cringe as well.
+ [2020.000 --> 2026.000] Subjects were shown images and descriptions of two scenarios while their brain activity was measured via fMRI.
+ [2026.000 --> 2036.000] The embarrassing scenario depicted a woman at a grocery store counter who wasn't able to pay for her purchase, and the woman was described either as the participant's friend or as a stranger.
+ [2036.000 --> 2042.000] While in the neutral scenario, the woman, once again described as a friend or a stranger, returned some books to the library.
+ [2042.000 --> 2048.000] After each scenario, respondents were asked to think about how much vicarious embarrassment these scenes elicited.
+ [2048.000 --> 2055.000] The researchers found increased activation in various parts of the brain during the embarrassing condition in regards to both friends and strangers.
+ [2055.000 --> 2062.000] This activation increased significantly in the anterior cingulate cortex, a region associated with ethics, morality, and emotions;
+ [2062.000 --> 2071.000] the left anterior insula, a region associated with empathy; the medial prefrontal cortex, a region associated with conflict monitoring and emotional information;
+ [2071.000 --> 2080.000] the brain stem, which regulates autonomic functions such as breathing and heart rate; and the right temporal pole, a region associated with emotions and socially relevant memory.
+ [2080.000 --> 2090.000] As we can see right away then, watching anyone else be embarrassed stimulates the parts of our brains related to social emotions, as well as basic brain functions like heart rate.
+ [2090.000 --> 2100.000] Moreover, there was increased activation in the anterior cingulate cortex and left anterior insula when the embarrassing actor was described as a friend rather than a stranger.
+ [2101.000 --> 2107.000] Once again, these two regions are associated with ethics, morality, and emotion, and with empathy, respectively.
+ [2107.000 --> 2118.000] Thus, we might expect damage to these regions to inhibit the ability to feel cringe, but they may also be a key component in developing skills necessary for high level or conahart's gameplay.
+ [2118.000 --> 2129.000] Further, there was more activation in the precuneus, a region associated with self-reflection, for friends than there was for strangers, and increased connectivity between the precuneus and the anterior cingulate cortex,
+ [2129.000 --> 2140.000] illustrating that the part of our brain related to personal reflection communicates with the part of our brain related to morality and emotion specifically when we see our friends being ashamed.
+ [2140.000 --> 2142.000] Can you totally see the pressure?
+ [2142.000 --> 2144.000] Like an x-ray.
+ [2144.000 --> 2152.000] Because empathy, which we just saw is related to activation in the left anterior insula, and which we know is related to feelings of vicarious embarrassment,
+ [2152.000 --> 2158.000] Krach et al. (2011) sought to further understand the relationship between empathy and embarrassment in the brain.
+ [2158.000 --> 2167.000] As with previous experiments, subjects were asked to imagine themselves as a person doing something embarrassing, but the intentionality and awareness of each event differed.
+ [2167.000 --> 2172.000] For example, an accidental but aware embarrassing event would be falling and slipping in the mud.
+ [2172.000 --> 2179.000] An accidental but unaware event would be walking around with one's fly open or with toilet paper stuck to the bottom of one's shoe.
+ [2179.000 --> 2190.000] An intentional and aware event would be belching at a high-end restaurant, and an intentional but unaware event would be wearing a t-shirt with some kind of self-aggrandizing statement on it, such as I am sexy.
+ [2190.000 --> 2192.000] My mother didn't quote a shure on me and it said,
+ [2192.000 --> 2194.000] The birthday party!
+ [2194.000 --> 2207.000] Subjects rated how embarrassing each of these acts was, from a first-person perspective or from the perspective of another person, and were asked how likely they were to feel with another person, being more emotional and empathetic,
+ [2207.000 --> 2216.000] or how likely they were to take the perspective of another person, processing information more cognitively, thinking about the things as if they were happening to themselves.
+ [2216.000 --> 2226.000] Interestingly, embarrassment was experienced more strongly when thinking about something shameful happening to another person than happening to the self, for all but one type.
+ [2226.000 --> 2233.000] The most embarrassing type of event, both personally and vicariously, was the accidental and aware type,
+ [2233.000 --> 2246.000] again, something like spilling a glass of red wine on your shirt at a restaurant. Outside of the aware and accidental type, respondents felt more embarrassed when asked to think about others than when asked to think about him or herself, in every case.
+ [2246.000 --> 2261.000] Feeling embarrassment for another person vicariously was related to people who reported being empathetic, emotional, and cognitive, while feeling embarrassment for the self was primarily related only to those who tended to be more cognitive and less emotional or empathetic.
+ [2261.000 --> 2273.000] A second study utilized the same types of events as those presented in the first experiment, which were illustrated and then shown to participants along with descriptions of these events, this time while their reactions were measured via fMRI.
+ [2273.000 --> 2289.000] Example scenarios included: aware and accidental, seeing someone rip their pants while bending down; unaware and accidental, seeing someone wear their pants in such a way that their underwear is visible; aware and intentional, seeing someone talking on their phone in a movie theater; unaware and intentional,
+ [2289.000 --> 2296.000] seeing a pedestrian on the street wearing a VIP necklace; and the neutral condition, seeing a woman checking out a book from a library.
+ [2296.000 --> 2308.000] All types of vicarious embarrassment were related to increased activation in the anterior cingulate cortex and the left anterior insula, which, again, we know are related to ethics, morality, emotion, and empathy.
+ [2308.000 --> 2320.000] The greatest activation in both the ACC and the left anterior insula was in response to the accidental and aware scenario, wherein subjects thought about someone ripping their pants in public, for example.
+ [2320.000 --> 2332.000] Additionally, increased activations were noted in the thalamus, the periaqueductal gray, the brainstem, and the cerebellum, all structures that have been associated with empathetic perceptions of pain in others.
+ [2332.000 --> 2352.000] Activation in both the ACC and the left anterior insula was consistently related to cognitive perceptions, while the relationship to emotional and empathetic perceptions was less consistent, indicating that we understand quite logically why something is embarrassing, and recognition of something being embarrassing is related to greater neural activation.
+ [2352.000 --> 2359.000] Vicarious embarrassment is not something that we just feel for others; it's something that we understand cogently.
+ [2359.000 --> 2379.000] In our daily lives, we're probably far less likely to come across someone doing something cringey or embarrassing in public than we are online, or through other forms of media, and there's perhaps no larger intercultural cringe phenomenon than reality television, which was assessed via fMRI analysis by Melchers et al. (2015).
+ [2379.000 --> 2391.000] Subjects first watched a short clip from a German reality TV program, they didn't specify which, wherein something embarrassing happened to someone, so because they didn't specify, I'm going to use Klaus the Forklift Driver.
+ [2397.000 --> 2406.000] This embarrassing scene was compared to a neutral control scene, and then subjects were placed into an fMRI machine and shown stills from the program they had just watched.
+ [2406.000 --> 2412.000] Participants experienced more compassion for the protagonists of the scenes when some embarrassing event occurred.
+ [2412.000 --> 2416.000] Amusement was related to neither vicarious embarrassment nor compassion.
+ [2416.000 --> 2429.000] The researchers noted greater activation during the vicarious embarrassment scene than the control scene in the bilateral middle temporal gyrus, the bilateral supramarginal gyrus, the right inferior frontal gyrus, and the left gyrus rectus.
+ [2430.000 --> 2437.000] The middle temporal gyrus has been associated with the need to take the perspective of another person and the processing of social rejection.
+ [2437.000 --> 2443.000] The supramarginal gyrus has been related to reduced emotional egocentricity and to enhanced perspective taking.
+ [2443.000 --> 2454.000] Other research has illustrated that the supramarginal gyrus may explain differences in perceptions of the pain of others, indicating once again involvement in the process of perspective taking.
+ [2454.000 --> 2461.000] The inferior frontal gyrus is involved in emotional empathy, cognitive empathy, and self-reported feelings of compassion.
+ [2461.000 --> 2467.000] Finally, the left gyrus rectus has similarly been related to subjects' ability for emotional perspective taking.
+ [2467.000 --> 2482.000] Thus, the vicarious embarrassment that we feel when someone is doing something cringey on television is related to activation in the parts of our brains that are themselves related to being empathetic, feeling compassion, and understanding the perspectives of others.
+ [2482.000 --> 2493.000] A similar experiment regarding German reality television programs specifically, this time the shows Germany's Next Top Model and Farmer Wants a Wife, which I've never heard of, but I would imagine looks something like this,
+ [2493.000 --> 2499.000] Hey, I'm Plague of Grypsons watching Rosalmania. I get the hell of my property.
+ [2499.000 --> 2511.000] was conducted by Hynan and Melcher (2014), this time assessing differences in the brain's grey and white matter volumes, the size and shape of the brain, in response to embarrassing situations.
+ [2511.000 --> 2519.000] Grey matter is distinguished from white matter in that grey matter contains cell bodies, dendrites, and axon terminals, and is where the brain's synapses are,
+ [2519.000 --> 2527.000] while white matter helps connect grey matter areas to one another. Subjects watched short clips from these programs while being monitored via fMRI.
+ [2527.000 --> 2537.000] Highly empathetic individuals exhibited decreased white matter volumes in the posterior cingulate cortex, the Rolandic operculum, the precuneus, and the insula.
+ [2537.000 --> 2551.000] While in contrast, individuals who rated the clips as more amusing and hilarious presented with increased grey and white matter volumes in the inferior frontal and Rolandic operculum, the medial cingulate cortex, the insula, and the paracentral lobule.
+ [2551.000 --> 2562.000] These results are indicative that more empathetic people have smaller white matter volumes in the brain regions associated with empathy, processing of emotional cues, and social pain.
+ [2562.000 --> 2574.000] That is, people who are less empathetic may have brain structures that simply require more time to process the pain associated with embarrassment, while this connectivity is more direct in more empathetic people.
+ [2574.000 --> 2585.000] Put simply, the brains of more empathetic and less empathetic people are potentially just a little bit different, and as such, the way we all respond to embarrassing events differs.
+ [2585.000 --> 2598.000] Since empathetic embarrassment is feeling embarrassment for someone else, how do different perspectives, either taking the role of an embarrassed person or imagining ourselves as an embarrassed person, affect our neurological processing?
+ [2598.000 --> 2613.000] For answers, we can look to Mayer et al. (2020), who used fMRI to assess activation during egocentric perspective taking and allocentrism in reaction to embarrassing events wherein an embarrassed person is aware or unaware of the shameful action.
+ [2613.000 --> 2617.000] Allocentrism is focusing on the emotions of someone else.
+ [2617.000 --> 2634.000] As with previous studies we've looked at, participants were shown a series of images accompanied by a description that depicted someone doing something embarrassing in public, in which that person was either aware or unaware of the reactions that others would have to this behavior, as well as neutral images of social situations.
+ [2634.000 --> 2644.000] For example, an unaware embarrassing event would include someone having bad breath, falling asleep and drooling on themselves during a train ride, or having spinach stuck in their teeth,
+ [2644.000 --> 2653.000] while an aware embarrassing event would include someone walking into a light pole, tripping and dropping a tray of food at the cafeteria, or forgetting one's speech during a presentation.
+ [2653.000 --> 2666.000] Participants were the most embarrassed when thinking about the feelings of another person, rather than taking their perspective, when that person was aware of the embarrassing event, such as tripping and spilling their tray in the cafeteria.
+ [2666.000 --> 2682.000] Generally, reports of embarrassment were lower when the person involved in the shameful event was unaware of what was happening, such as having spinach stuck in their teeth, but in these cases embarrassment was stronger in those asked to imagine themselves as the embarrassed but unaware person,
+ [2682.000 --> 2688.000] indicating we are most embarrassed when we don't know that we're doing something embarrassing at the time.
+ [2688.000 --> 2691.000] I wouldn't mind kissing that man between the cheeks, so to speak.
+ [2691.000 --> 2695.000] And he realizes there is something distinct about the way he speaks.
+ [2695.000 --> 2698.000] Tobias, you blow hard.
+ [2698.000 --> 2706.000] Moreover, more empathy was reported when thinking about another person rather than thinking about the self in an awkward situation.
+ [2706.000 --> 2716.000] In terms of neural activation, similar levels of activity were recorded in the medial prefrontal cortex whether subjects were asked to be allocentric, again, thinking about the emotions of others,
+ [2716.000 --> 2719.000] or egocentric when processing these events.
+ [2719.000 --> 2729.000] The medial prefrontal cortex has been associated with perceptions of negative evaluations by others, and thus is central to both first-hand and vicarious experiences of embarrassment.
+ [2729.000 --> 2739.000] Differences in activation were similar but not identical in other regions as well, including the anterior insula, which was slightly more active when being allocentric rather than egocentric.
+ [2739.000 --> 2745.000] The anterior insula has been related to social exclusion, including heartaches after the breakup of a romantic relationship.
+ [2745.000 --> 2755.000] In contrast, there was slightly more egocentric activation in the anterior cingulate cortex, which, as previously mentioned, is a region associated with ethics, morality, and emotion.
+ [2755.000 --> 2766.000] These results indicate that we recognize things are more unfair when they happen to us, but still don't quite understand the pain of social judgment and social exclusion when it happens to others.
+ [2766.000 --> 2776.000] There was greater activation in the left and right parietal lobules when the embarrassment was shared by the observer and the actor, both aware that the embarrassing event was occurring.
+ [2776.000 --> 2792.000] The inferior parietal lobule bilaterally has been related to bodily pain; thus embarrassment, both on the part of ourselves, but seemingly particularly on the part of others, is processed by the same part of our brain that processes physical pain and suffering.
+ [2792.000 --> 2797.000] Cringe is painful, not just psychologically, not just emotionally, but physically.
+ [2797.000 --> 2801.000] When the world star guys freak out, there's a problem.
+ [2801.000 --> 2806.000] Whew! Well, if you guys made it through all of that neuropsychology, congratulations.
+ [2806.000 --> 2817.000] And while we're not quite done yet, let's expand to further understand the physiological impact of cringe on the human body, starting with some research on something viewers of my channel may be very sick of hearing me say.
+ [2818.000 --> 2820.000] And that's... schadenfreude.
+ [2820.000 --> 2832.000] The German term for enjoyment at the embarrassment of others, contrasted now with the shame felt at the embarrassment of others, which, as with all good things, has its own unique German word: fremdscham.
+ [2832.000 --> 2836.000] And we can learn more about it in the study from Paulus et al. (2018).
+ [2836.000 --> 2841.000] The same stimuli that we've seen in this video over and over again were used in this study,
+ [2841.000 --> 2845.000] images and short descriptions, the responses to which were measured via fMRI.
+ [2845.000 --> 2853.000] But this time, the researchers were looking at the enjoyment associated with the embarrassment of others versus the shame associated with the embarrassment of others.
+ [2853.000 --> 2863.000] The researchers found increased activation in the left anterior insula in response to instances of fremdscham compared to instances of schadenfreude, in response to the experiences of others.
+ [2863.000 --> 2867.000] Again, the left anterior insula is a region related to empathy.
+ [2867.000 --> 2879.000] So, as we might expect, feeling pain or shame at the suffering of others is related to the part of the brain associated with empathy, while feeling joy at the suffering of others is, to a far lesser degree.
+ [2879.000 --> 2890.000] Similarly, self-reported feelings in response to the embarrassing stimuli presented with greater activation in the left anterior insula in response to fremdscham over schadenfreude.
+ [2890.000 --> 2897.000] In contrast, activation of the left nucleus accumbens was noted in cases of schadenfreude, but not in cases of fremdscham.
+ [2897.000 --> 2901.000] The nucleus accumbens, in general, is the reward center of the brain,
+ [2901.000 --> 2906.000] indicating that while schadenfreude may feel rewarding, fremdscham does not.
+ [2906.000 --> 2918.000] Once again, seeing someone else being embarrassed, when we feel that pain with them, is not particularly enjoyable, but rather is processed quite similarly to physical pain.
+ [2918.000 --> 2925.000] Going outside of the brain, Harris (2001) examined cardiovascular responses to embarrassment in a social setting.
+ [2925.000 --> 2931.000] Participants were attached to a blood pressure monitor and asked to sing The Star-Spangled Banner alone in the room while being recorded.
+ [2931.000 --> 2938.000] Subjects then had 10 minutes to relax, and an additional 5 minutes to take an unrelated survey, to establish their relaxed heart rate.
+ [2938.000 --> 2949.000] Then, the researcher entered the room accompanied by two research assistant confederates and played back the recording of the subject singing The Star-Spangled Banner on a television in the room.
+ [2949.000 --> 2961.000] The researcher and their assistants sat in the middle of the room, between the subject and the television, so that their faces could be clearly seen by the participant, towards whom they glanced and smiled during the screening.
+ [2961.000 --> 2964.000] Truly, only a mad scientist would do something so cruel.
+ [2964.000 --> 2966.000] Why would you do that?
+ [2966.000 --> 2969.000] Because I can.
+ [2969.000 --> 2978.000] Afterwards, participants answered questions about how embarrassed, anxious, happy, fearful, amused, nervous, and angry he or she felt during this experience.
+ [2978.000 --> 2990.000] They found that systolic blood pressure, that is, the pressure that the heart exerts while beating, and diastolic blood pressure, that is, arterial pressure in between heartbeats, were both elevated when participants were being embarrassed,
+ [2990.000 --> 2995.000] being forced to watch themselves singing in front of smiling researcher strangers.
+ [2995.000 --> 3001.000] Blood pressure remained slightly elevated compared to the control period for five minutes after the embarrassing event.
+ [3001.000 --> 3010.000] Heart rate spiked massively while watching the tape, but then dropped far below the baseline and slowly began to creep back towards normal over the five-minute period.
+ [3010.000 --> 3029.000] 43% of subjects described the situation as embarrassing, with the second most common descriptor being funny or amusing, and physiological data drawn from people who specifically described the interaction as embarrassing were similar to those who did not use the term embarrassing, or a similar word like awkward, to describe the situation.
+ [3029.000 --> 3039.000] Even if we don't recognize an event as specifically embarrassing, uncomfortable social situations seemingly cause an increase in heart rate and blood pressure regardless.
+ [3039.000 --> 3044.000] In a second study, the researchers sought to understand the potential effects of emotional suppression.
+ [3044.000 --> 3056.000] The experiment was identical to that conducted in the first study, but this time some participants were specifically told to maintain a neutral facial expression while watching the recordings of themselves singing in front of the researcher and their confederates.
+ [3056.000 --> 3061.000] Faces of subjects were recorded during the event, along with their cardiovascular activity.
+ [3061.000 --> 3072.000] They found that subjects asked to suppress their emotions had even higher systolic and diastolic blood pressure during and after the embarrassing situation than those not asked to suppress their emotions.
+ [3072.000 --> 3090.000] The baseline heart rate of those suppressing their emotions was lower than that of those not suppressing; however, as soon as the embarrassing event began, those attempting to hide their feelings immediately experienced a huge spike in heart rate that far surpassed those not suppressing, and maintained more beats per minute up to five minutes
+ [3090.000 --> 3095.000] after the event than those not suppressing their emotions.
+ [3095.000 --> 3105.000] Those asked to suppress emotions had longer shifts in gaze, more smile control for longer periods, fewer smiles and face touches, fewer blinks, and they swallowed more.
+ [3105.000 --> 3117.000] Thus, while on the outside it might be difficult to detect that someone is intentionally trying not to show emotions, their blood pressure and heart rate betray the true discomfort of an embarrassing social interaction.
+ [3117.000 --> 3146.000] Thus, it's not just your brain that processes shame or embarrassment similarly to pain; embarrassing situations make our hearts beat faster and cause our blood pressure to skyrocket, remaining elevated even after the embarrassing event is over, which in tandem is related to increased galvanic skin response, illustrating the very real physiological, psychological, and emotional effects of cringe. And because we know we feel cringe vicariously, this increased cardiovascular activity
+ [3146.000 --> 3153.000] is not just likely to occur when we are embarrassed, but when we see someone else being embarrassed as well.
+ [3153.000 --> 3161.000] So, is everyone as likely to feel cringe all the time, or are only some of us uniquely susceptible to cringe, psychologically or physiologically?
438
+ [3161.000 --> 3168.000] Muller-Pinsler at all 2015 found that the degree of embarrassment we experience is related to social anxiety.
439
+ [3168.000 --> 3173.000] Subjects in this experiment were placed in a room with three Confederates and took a quick intelligence test.
440
+ [3173.000 --> 3181.000] Soon after, a researcher announced that the participant had the highest IQ out of the group and as such was selected for further testing via FMRI.
441
+ [3181.000 --> 3186.000] While within the machine, respondents answered more questions purportedly designed to measure their intelligence.
442
+ [3186.000 --> 3191.000] While this testing occurred, some were told that the other three test takers were observing the progress,
443
+ [3191.000 --> 3199.000] while others merely saw photos of the other test takers, but were told they were not watching, and the eye movements of the subjects were monitored during the test.
444
+ [3199.000 --> 3209.500] Some subjects saw that they scored poorly, performing better than only 5-15% of the population, while others heard that they scored better than 40-60% of the population, a mediocre score,
445
+ [3209.500 --> 3215.500] and some saw that they scored better than 85-99% of the population, and obviously quite high score.
446
+ [3215.500 --> 3223.500] Afterwards, subjects were returned to the room with the Confederates and answered a questionnaire about his or her feelings of embarrassment, pride or anxiety.
447
+ [3223.500 --> 3228.500] Participants reported greater embarrassment when they performed at an average or below average level.
448
+ [3228.500 --> 3236.500] Moreover, embarrassment was by far the most prominent emotion, when subjects performed better than only 5-15% of the population.
449
+ [3236.500 --> 3246.500] Subjects reported more embarrassment, particularly in the low performance condition, when their performance was public, rather than private, however pride was not influenced by publicity whatsoever.
450
+ [3246.500 --> 3259.500] People's dilation was greater when participants thought others were watching their actions, particularly when they performed poorly and were related to greater activation in the right insula, a region associated with sympathetic social arousal.
451
+ [3259.500 --> 3271.500] When the test was public, greater activation was reported in the medial prefrontal cortex and the prechineus, regions associated with mentalizing, indicating subjects were thinking about how others might have been reacting to their responses.
452
+ [3271.500 --> 3286.500] Variation in the amount of time spent gazing at the faces of others was related to greater activation in the fusing form gyros, a region related to comprehension of facial expressions, indicating subjects were perhaps looking for some kind of emotional feedback from the photos on the screen.
453
+ [3286.500 --> 3294.500] However, gazing mediated the relationship between neural activation of the medial prefrontal cortex and precuneus and reports of social anxiety.
454
+ [3294.500 --> 3306.500] That is, the more socially anxious people reported themselves to be, the more time they spent looking at the faces of other people on the screen, and subsequently, the more concerned they were about how others might react to their poor performance.
455
+ [3306.500 --> 3316.500] Taken all together, these complicated data indicate that people who are more socially anxious are also more aware of judgments from others when they are doing something embarrassing.
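"Gazing mediated the relationship" is a standard statistical mediation claim, and the arithmetic behind it can be illustrated with a toy regression sketch. This is a minimal illustration with synthetic data and made-up effect sizes, not the study's actual pipeline; the variable names are placeholders:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200

    # Synthetic stand-ins for the three measures discussed above.
    anxiety = rng.normal(size=n)                  # self-reported social anxiety
    gaze = 0.6 * anxiety + rng.normal(size=n)     # time spent gazing at faces
    activation = 0.7 * gaze + rng.normal(size=n)  # mPFC / precuneus signal

    def ols(y, X):
        """Least-squares coefficients of y on X (intercept fitted, then dropped)."""
        X = np.column_stack([np.ones(len(y)), X])
        return np.linalg.lstsq(X, y, rcond=None)[0][1:]

    c = ols(activation, anxiety)[0]   # total effect: anxiety -> activation
    a = ols(gaze, anxiety)[0]         # path a: anxiety -> mediator (gaze)
    c_prime, b = ols(activation, np.column_stack([anxiety, gaze]))

    print(f"total c = {c:.2f}, indirect a*b = {a*b:.2f}, direct c' = {c_prime:.2f}")
    # Mediation shows up as the indirect path a*b carrying most of the
    # total effect, with the direct effect c' shrinking toward zero.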
456
+ [3316.500 --> 3318.500] Four-leaf clover, make a wish.
457
+ [3318.500 --> 3320.500] Wish you weren't so far inward, bud.
458
+ [3320.500 --> 3334.500] So, if cringe is physically painful, increases sweating, and raises our blood pressure, is there anything that makes the potentially nasty side effects of cringe any better, since social anxiety only seems to make them worse?
459
+ [3334.500 --> 3344.500] Well, perhaps, as with so many things, we can look for answers to that old cuddle bug, that most beloved of human hormones, oxytocin, as seen in data from Gang et al. 2018.
460
+ [3344.500 --> 3352.500] Oxytocin is a hormone produced during skin-to-skin contact and is related to human pair bonding, including both romantic and parental relationships.
461
+ [3352.500 --> 3361.500] Some subjects in this study were given an intranasal snort of oxytocin, and then shown images of embarrassing everyday situations similar to the stuff we've seen before.
462
+ [3361.500 --> 3371.500] While their reactions were measured via fMRI, as well as skin conductance reactivity, they found that subjects reported more general embarrassment when they had been administered oxytocin.
463
+ [3371.500 --> 3385.500] While you might think that means that people feeling physically close to others are more negatively affected by cringe, those given oxytocin showed a lower skin conductance response, meaning they were potentially less sweaty when thinking about something embarrassing happening.
464
+ [3385.500 --> 3399.500] Subjects given oxytocin also experienced decreased activation in the right amygdala and the dorsal anterior insula. There was also a negative association between right amygdala activation and the degree of skin reactivity in those given oxytocin.
465
+ [3399.500 --> 3414.500] While in those given a placebo, this relationship was positive. The dorsal anterior insula has been specifically associated with increased arousal during embarrassing social situations, and the right amygdala has been associated with negative emotions, including fear in other research.
466
+ [3414.500 --> 3430.500] Thus, it seems that oxytocin may provide resilience against embarrassment. So if you're feeling particularly affected by some serious cringe, grab someone nearby you and rub your face all over them. That will probably create more personal embarrassment too, but you just might feel a little better for it.
467
+ [3430.500 --> 3436.500] Or at least you will before the arrest. So with all of that in mind and before the cops show up, let's come to some conclusions.
468
+ [3444.500 --> 3466.500] Cringe is a powerful thing. It can make us laugh to see someone else being embarrassed, but it can also make us feel shame when that person is in some way associated with us, be they a friend or a member of the same social group. And that shame can cause physiological effects, from sweaty palms to increased blood pressure and heart rate, to activation of the regions of the brain associated with physical pain.
469
+ [3466.500 --> 3478.500] Cringe hurts, but it's particularly likely to hurt when someone we know is being cringy. While we can sometimes enjoy someone being cringy (The Office wouldn't have been the most popular comedy show on television for years if that wasn't the case),
470
+ [3478.500 --> 3488.500] many of us feel not just psychologically uncomfortable, but physiologically uncomfortable, when watching someone like Michael Scott parade around his utter lack of self-awareness.
471
+ [3488.500 --> 3498.500] As a social emotion, we luckily developed this reaction to prevent ourselves from doing something cringy, knowing just how bad it makes us feel to see someone else do it.
472
+ [3498.500 --> 3506.500] Thus, while vicarious embarrassment often makes us feel awkward, it exists to discourage cringy behavior in ourselves, or so at least I would surmise.
473
+ [3506.500 --> 3512.500] Maybe our brains just evolved to be averse towards cringe. But hey, what do you guys think?
474
+ [3512.500 --> 3517.500] Does seeing someone doing something embarrassing make you laugh or does it make you cringe and feel uncomfortable?
475
+ [3517.500 --> 3525.500] When someone you know is making a fool of themselves in public, do you stop to intervene, or do you avert your gaze and perhaps pretend you don't know that person?
476
+ [3525.500 --> 3529.500] Let me know what you guys think about the concept of cringe in the comments down below.
477
+ [3529.500 --> 3534.500] If you liked this video, please consider subscribing and sharing it with your friends, cringy or otherwise.
478
+ [3534.500 --> 3541.500] I want to give an enormous thank you to all of my absolutely wonderful supporters on Patreon and Subscribe star.
479
+ [3541.500 --> 3547.500] You guys are amazing. YouTube has been suppressing my channel for a while now, and I really, really appreciate your ongoing support.
480
+ [3547.500 --> 3551.500] If you want to help out the channel, you can go ahead down to the link below and check out Coursera.
481
+ [3551.500 --> 3555.500] You can support me on the aforementioned platforms or buy some merch from my merch store.
482
+ [3555.500 --> 3561.500] Thank you guys so much for watching. Again, your continued support means so much to me, despite how cringy it may sound.
483
+ [3561.500 --> 3565.500] Take care and as always dear friends, all ton of old.
484
+ [3565.500 --> 3567.500] I'm not even supposed to go.
485
+ [3567.500 --> 3574.500] Attention all users, it appears that somebody's been posting cringe.
486
+ [3574.500 --> 3579.500] Thanks for the post bro, really cool.
487
+ [3589.500 --> 3592.500] Comrade, you just posted cringe.
transcript/allocentric_LtGY85JXTUM.txt ADDED
@@ -0,0 +1,536 @@
1
+ [0.000 --> 2.000] The
2
+ [22.500 --> 26.880] 2018 was a terrible year at the University of Bristol.
3
+ [26.880 --> 34.880] We were in the middle of a crisis in student mental health, a crisis which was impacting upon the whole community.
4
+ [34.880 --> 37.880] But it wasn't just Bristol.
5
+ [37.880 --> 45.880] Other universities were also experiencing a tsunami of mental health problems in their students.
6
+ [45.880 --> 48.880] This wasn't even a new problem.
7
+ [48.880 --> 59.880] Looking back in my early career, about 40 years ago I had published a paper on the stress of moving to university back in 1987.
8
+ [59.880 --> 71.880] But the difference was that this time round the students were being hampered, they were being distressed, dismayed, to the extent that they couldn't engage in their studies anymore.
9
+ [71.880 --> 74.880] And the data backs this tsunami up.
10
+ [74.880 --> 83.880] The rise in mental health problems in the student population was exponential: a sevenfold increase in the ten years up to 2022.
11
+ [83.880 --> 86.880] So I had to do something.
12
+ [86.880 --> 100.880] So I looked around, and I discovered that a former student of mine, whom I taught at Harvard, Laurie Santos, had now reached this amazing position as head of a residential college at Yale.
13
+ [100.880 --> 106.880] So she was looking after students in her pastoral role.
14
+ [106.880 --> 111.880] And she too was encountering this terrible problem with her students and decided to do something about it.
15
+ [111.880 --> 116.880] So she put this course on called Psychology and the Good Life.
16
+ [116.880 --> 123.880] And it was a phenomenal success, the most popular course ever at Yale.
17
+ [123.880 --> 130.880] So I obviously contacted Laurie and said, hey Laurie, we've got a real problem going on here as well.
18
+ [130.880 --> 135.880] And Laurie, in her typical gracious, generous way, shared her notes with me.
19
+ [135.880 --> 142.880] And then we started working together, and I put on my version, a little bit more pithy: The Science of Happiness.
20
+ [142.880 --> 147.880] And I ran it as a pilot just to see if anyone would turn up.
21
+ [147.880 --> 150.880] 600 people turned up.
22
+ [151.880 --> 168.880] And then university then gave me the green light to create a new course that was credit-bearing for first year students that combined lectures on positive psychology, neuroscience, my own interest in child development, a little bit of philosophy.
23
+ [168.880 --> 173.880] And what made this course very different is it combined lectures with practical activities.
24
+ [173.880 --> 182.880] But what made it unique is that this course awarded credit on the basis of engagement alone. There were no graded examinations.
25
+ [182.880 --> 188.880] So the students thought this is going to be such an easy breeze.
26
+ [188.880 --> 191.880] More fool them.
27
+ [191.880 --> 200.880] Not only did they have to come to all the live lectures, they were not allowed to use computers and we banned smartphones.
28
+ [200.880 --> 209.880] Because you see, I argued, look, you need to be engaging. And you cannot be engaging with me as a lecturer if you're looking at your phone.
29
+ [209.880 --> 212.880] So I had them. They couldn't have their phones.
30
+ [212.880 --> 226.880] They had to turn up every week to small group meetings of about eight students, which were mentored by senior students who we had taught to engage in activities, to review the content, to discuss and talk about these problems.
31
+ [226.880 --> 232.880] So they had to undertake the various activities, the evidence-based practices that we were talking about.
32
+ [232.880 --> 238.880] And they had to write about them in their diaries and their journals every week. And we monitored these.
33
+ [238.880 --> 246.880] We also got them to do a group project where they had to get on with each other for the first time, thrown in at the deep end.
34
+ [246.880 --> 258.880] And then we invited them to also undertake their own assessments by completing various questionnaires on mental health, from happiness and loneliness to anxiety, before the course.
35
+ [258.880 --> 265.880] And then we followed them up afterwards to see if there had been any effect.
36
+ [265.880 --> 271.880] We have now been running the course for six years and we find the same pattern over and over again.
37
+ [271.880 --> 280.880] And this is an amazing gold mine of data, of course. We're producing a whole multitude of papers. These are just some.
38
+ [280.880 --> 289.880] And what we find is that there's a 10 to 15 percent increase every year in happiness, or well-being if you want to call it that, depending on which measure you're talking about.
39
+ [289.880 --> 291.880] So it works.
40
+ [291.880 --> 300.880] Now, you might think 10 to 15 percent doesn't sound really transformative, but who wouldn't want to be 10 to 15 percent happier, healthier, or wealthier?
41
+ [300.880 --> 306.880] And that's just the average. For some students, it was life transforming. That's the good news.
42
+ [306.880 --> 316.880] The bad news is this paper here, which I'm sure some of you are thinking about. Does this really last? Well, there's good and bad news.
43
+ [316.880 --> 323.880] The bad news is about six months after the courses have finished, the students go back down to their baseline measures again.
44
+ [323.880 --> 335.880] But those students, about half the population we sampled, who kept up with the activities maintained their levels of happiness up to two years later and went on to graduate.
45
+ [335.880 --> 346.880] So, I haven't got the luxury of 10 to 12 weeks to take you through the entire course, but of course, I do want to point out that you don't just suddenly become happy.
46
+ [346.880 --> 352.880] This lecture tonight will not be changing your lives. If you want to be a happier person, you have to put in the effort.
47
+ [352.880 --> 360.880] There's no point going to a lecture for 10 weeks, and then not doing anything afterwards. It's like going to the gym. In my mind, mental health is the same as physical health.
48
+ [360.880 --> 369.880] There's no point walking in and trying to pick up a heavy weight. You'll give yourself a hernia. Much better to practice over long term, just building up your strength.
49
+ [369.880 --> 375.880] The same way you build up your physical strength, you can build up your mental strength.
50
+ [375.880 --> 383.880] So, as I said, I don't have the 10 weeks, but of course, I have a book to promote: The Science of Happiness.
51
+ [383.880 --> 395.880] Now, look, why do we need another book on happiness? There are thousands of books on happiness; I'm sure you know some. Many celebrity books on happiness. Everyone has an opinion on happiness. What could I possibly be offering?
52
+ [395.880 --> 402.880] Well, first, everything I say is evidence-based, so it's all based on research. Maybe that's not too surprising.
53
+ [402.880 --> 413.880] But I think I've got a unifying account for why happiness is so difficult and elusive, and why I think many positive psychology interventions work.
54
+ [413.880 --> 420.880] And I'm going to give you the punchline now. They work because they change the way we think about ourselves.
55
+ [420.880 --> 433.880] I'm going to argue that in order to become a happier person, we have to review the way we think about ourselves in relation to our problems, in relation to others, in relation to the environment.
56
+ [433.880 --> 446.880] That's the secret to becoming a happier person. Now, of course, this is the Royal Institution. And, of course, any opportunity to speak in this auspicious hall invites you to try demonstrations.
57
+ [446.880 --> 461.880] And so, of course, don't be shy. I'll be asking you at various points to take part in some of my demonstrations, if only to illustrate the points. So, pay attention. I'm hoping to keep you entertained for the next hour.
58
+ [461.880 --> 470.880] Okay. But first, some ground rules. Someone asked me in a reception just a moment ago, what is happiness? Well, it depends on who you speak to.
59
+ [470.880 --> 481.880] Happiness means different things. But in my mind, there are two components to it, which are worth noting. First, there's the emotional component, the feelings, the moods of positive joy, elation.
60
+ [481.880 --> 498.880] But there's also a sense of happiness, which is more cognitive. How am I getting on in life? Am I progressing? Is my life worthwhile? Am I using it? Am I content? So, I think there's an emotional component to happiness, as well as this more cognitive component.
61
+ [498.880 --> 514.880] Okay. So, why does it seem to dissipate? Why do our students seem to kind of go back down to baseline? Well, for one very obvious reason, in order to experience happiness, you've got to know what it's like to be unhappy.
62
+ [514.880 --> 521.880] Because, if you were constantly happy, you wouldn't notice a difference. Our brains only really ever detect differences.
63
+ [521.880 --> 535.880] Okay. So, what about these students who got this extra boost to 15 percent? Does that mean they're eternally happy? No. They also have moments of unhappiness. But the difference is that when they encounter moments of unhappiness, they will recover much more quickly.
64
+ [535.880 --> 547.880] So, in many ways, I really should have called the book How Not to Be Unhappy. But there's another interesting reason why happiness appears to be elusive.
65
+ [547.880 --> 563.880] It comes down to the fact that happiness is an emotion, and emotion and motivation have the same origin, in the Latin movere, meaning to move. So they're drives. And drives by their nature must satiate.
66
+ [563.880 --> 573.880] We have hunger drives, sex drives. You must get used to them in order to compel you to do them again. So, happiness is something that we're pursuing. We're working towards.
67
+ [573.880 --> 589.880] And we can never achieve it, because we need to maintain this drive to keep acting towards it. Indeed, Aristotle talked about the pleasure principle, because every decision you make is to some extent premised on the assumption that you're going to be happier afterwards or avoiding something which is unpleasant.
68
+ [589.880 --> 601.880] But then this gets to a problem. Because if you get used to things, then you have this issue of adaptation. And this leads to the so-called hedonic treadmill.
69
+ [601.880 --> 627.880] Now, if I was to go out in the street or ask my students what do you think would make you happy? You know what they say? And I guess many of you would say money. I'd like a large sum of money. Now, don't get me wrong. Contrary to popular opinion, money can buy you happiness. It can. Absolutely it can. It just depends on how much you have and where you are and what your disposition is and what your starting point is.
70
+ [627.880 --> 646.880] But what I also know is that you can never have too much money. Or in other words, enough is never enough. Because once you've enjoyed the benefits and utility of some money, you want more money. And so you work harder and you're pursued and you're compelled to keep going after this because it's a drive.
71
+ [646.880 --> 655.880] And this leads to the hedonic treadmill. And it means that you keep moving forward. You keep chasing it. You never seem to get there.
72
+ [655.880 --> 671.880] Okay. So this is our opportunity to try a demonstration. I can't induce emotional adaptation in you, but what I can do is sensory adaptation, because adaptation is a feature of the central nervous system.
73
+ [671.880 --> 681.880] So what I'm going to do now is I'm going to ask you to look at the center of the screen and hopefully no one has a tendency for epilepsy, but look at the center because I'm going to rotate it.
74
+ [681.880 --> 698.880] And I want you to stare at the center of the screen while I tell you what I'm doing at this point. What I'm doing is fatiguing, or adapting, a set of cells in the back of your brain, the visual cortex, which are selectively tuned to rotational motion in the clockwise direction.
75
+ [698.880 --> 711.880] And as you're looking to it, your brain is adapting. Now the way that we perceive the world is combining all the information from all the relevant sensors so that if you fatigue one set of sensors, you experience a distortion.
76
+ [711.880 --> 716.880] So now look at the back of your hand.
77
+ [716.880 --> 728.880] It should be writhing. If you've been fatiguing, you can see it writhing. Yes? Some of you have it; others weren't really focusing.
78
+ [728.880 --> 737.880] That writhing is the consequence. It's a much faster version of the same effect as when you're sitting on a bus or a train, you're driving along, and you come to a halt.
79
+ [737.880 --> 743.880] And suddenly the world seems to be moving in the opposite direction. That's a demonstration of adaptation.
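The fatigue being demonstrated can be captured by a toy opponent-channel model: the adapted channel's gain decays during prolonged stimulation, so when the motion stops, the unadapted opposing channel briefly dominates and you perceive drift in the other direction. A minimal sketch with made-up constants, not a model of real neurons:

    import numpy as np

    dt, tau, floor = 0.1, 5.0, 0.3   # time step (s), decay constant (s), gain floor
    gain_cw, gain_ccw = 1.0, 1.0     # clockwise / counterclockwise channels

    for _ in np.arange(0.0, 30.0, dt):            # 30 s of clockwise rotation
        gain_cw += dt * (floor - gain_cw) / tau   # adapted channel loses gain

    # Stimulus removed: perceived motion ~ imbalance between the two channels.
    aftereffect = gain_cw - gain_ccw
    print(f"CW gain after adaptation: {gain_cw:.2f}")
    print(f"Residual motion signal: {aftereffect:.2f} (negative = illusory CCW drift)")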
80
+ [743.880 --> 757.880] But the other reason I wanted to show you that is it's a good example of how our conscious awareness, our perceptions, our experiences are determined by the brain, and the brain has design features that can be manipulated or biased.
81
+ [757.880 --> 764.880] So this is a recurrent theme I'll be talking about tonight: our happiness, or lack of it, is somewhat dependent on our brain.
82
+ [764.880 --> 769.880] Now this is not a new idea. The Stoic philosopher Epictetus said:
83
+ [769.880 --> 779.880] Men are disturbed not by things, but by the views which they take of things. In other words, it's not what happens to you, it's how you respond that matters.
84
+ [779.880 --> 787.880] Two people could face exactly the same adversity. One will be absolutely decimated by it. The other person will draw a line under it and move on.
85
+ [787.880 --> 794.880] Why is that? Why do some of us seem to see the glass half empty and others see it half full?
86
+ [794.880 --> 802.880] Well, again, that's a complex answer. But part of it comes down to our biology, our dispositions, what we get from our parents.
87
+ [802.880 --> 813.880] So we know this from the field of behavioral genetics, where you look at the relationship between some personality aspect in identical twins versus non-identical twins.
88
+ [813.880 --> 818.880] You can estimate how much of the variation is down to the genes.
89
+ [818.880 --> 825.880] And to give you the bottom line, happiness is about the same as intelligence. Around about 50% in terms of heritability.
90
+ [825.880 --> 836.880] What does that mean? Well, that means that some of us are very much like our parents and some of us are very unlike our parents, but on average overall, the relationship is only 50%.
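The twin comparison behind that figure has a standard back-of-the-envelope form, Falconer's formula (not named in the talk), which doubles the gap between identical (MZ) and fraternal (DZ) twin correlations:

    h^2 = 2\,(r_{MZ} - r_{DZ})

For illustration only: if identical twins correlated at 0.60 on a happiness measure and fraternal twins at 0.35, the heritability estimate would be 2(0.60 - 0.35) = 0.50, the roughly 50% quoted here. Those example correlations are invented for the arithmetic, not taken from the talk.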
91
+ [837.880 --> 844.880] Which means that there's a lot of room for life events, experiences, education, and what we discover.
92
+ [844.880 --> 850.880] And this is where positive psychology can make a difference by changing the way that you think about things.
93
+ [850.880 --> 853.880] The problem with that, of course, is this thing.
94
+ [853.880 --> 864.880] Okay? This three-pound lump of tissue that's evolved because it has a design feature, which means that you pay more attention to things which are going badly in your life.
95
+ [864.880 --> 869.880] Why would you have this negativity bias to focus on threats or problems?
96
+ [869.880 --> 880.880] Well, strategically, it's much better to pay attention and respond to potential threats than it is to notice the status quo or pay attention to good things.
97
+ [880.880 --> 886.880] If you can react to threats, then this protects you and keeps you in the gene pool.
98
+ [887.880 --> 895.880] Not only do we focus more on the criticism. As a writer, I can tell you, when you get a negative review on Amazon, it's much worse.
99
+ [895.880 --> 897.880] All the positive reviews make no difference.
100
+ [897.880 --> 904.880] Anyone will tell you that a criticism stings. And this just reflects this bias.
101
+ [904.880 --> 911.880] But we're also wired to respond automatically in a way which reveals this bias.
102
+ [911.880 --> 922.880] For example, how many of you have driven along and someone has cut you off, and you suddenly swore and stuck your finger up and honked your horn?
103
+ [922.880 --> 924.880] Is it just me?
104
+ [924.880 --> 926.880] Yeah, okay. Thank God for that.
105
+ [926.880 --> 928.880] That's an example of road rage.
106
+ [928.880 --> 931.880] Okay, road rage is a good example of the fight or flight response.
107
+ [932.880 --> 945.880] Our body is wired to respond to potential threats with a whole set of physiological responses: a release of cortisol, the stress hormone, and adrenaline; glucose floods into the body, and your heart rate and breathing increase.
108
+ [945.880 --> 949.880] And it's all to mobilize you for action, the so-called fight or flight.
109
+ [949.880 --> 955.880] But it's also readily triggered by negative emotions: rage, anger, fear, panic.
110
+ [955.880 --> 960.880] These are automatic events.
111
+ [960.880 --> 965.880] Now in the past, from an evolutionary point of view, these would generally be good responses.
112
+ [965.880 --> 971.880] But today's society seems to present so many triggering factors that many of us are living very stressful lives.
113
+ [971.880 --> 983.880] Thankfully, we've also evolved ways of dealing with this, which is the second system, the one that allows you to contemplate, to evaluate, to simulate.
114
+ [983.880 --> 988.880] You know that retaliating on the road and getting into an altercation is probably not a good thing.
115
+ [988.880 --> 990.880] So you can rein them in with reason,
116
+ [990.880 --> 993.880] these automatic thoughts, these ways of thinking.
117
+ [993.880 --> 997.880] And that's really where positive psychology tries to work.
118
+ [997.880 --> 1001.880] It can't really change these evolutionarily embedded ways.
119
+ [1001.880 --> 1007.880] But what we can do is try to bolster this more cognitive way of thinking about problems.
120
+ [1007.880 --> 1009.880] Okay, so that's laid the ground rules.
121
+ [1009.880 --> 1012.880] Okay, so what are the things that we need to change?
122
+ [1012.880 --> 1015.880] Well, this is where we get to the seven lessons.
123
+ [1015.880 --> 1019.880] Now the first lesson is alter your ego.
124
+ [1019.880 --> 1024.880] And this is after my heart, my research interests, about the sense of self.
125
+ [1024.880 --> 1028.880] I have been fascinated by the self for many decades now.
126
+ [1028.880 --> 1030.880] And what do I mean by the self?
127
+ [1030.880 --> 1035.880] Well, it's that conscious awareness of who we are, the sort of sense of, you know, knowing and thinking.
128
+ [1035.880 --> 1037.880] But also knowledge about our past.
129
+ [1037.880 --> 1043.880] So it's a combination of our autobiographies, as well as our stream of consciousness as we're having it.
130
+ [1043.880 --> 1046.880] And we tend to think that ourselves don't change.
131
+ [1046.880 --> 1049.880] But in fact, they do change quite significantly.
132
+ [1049.880 --> 1051.880] But you're never aware of it.
133
+ [1051.880 --> 1052.880] You don't wake up each day.
134
+ [1052.880 --> 1054.880] I think, oh, I'm different from yesterday.
135
+ [1054.880 --> 1056.880] You experience continuity.
136
+ [1056.880 --> 1060.880] But where we do see very obvious transitions in the self is in development.
137
+ [1060.880 --> 1064.880] Because children and anyone with children will know how they change so radically.
138
+ [1064.880 --> 1070.880] They've got personalities, but their sense of self definitely undergoes very significant transitions.
139
+ [1070.880 --> 1077.880] The great Swiss psychologist Jean Piaget described the young infant as being egocentric.
140
+ [1077.880 --> 1081.880] In other words, they can only conceive of everything from their own perspective.
141
+ [1081.880 --> 1085.880] They cannot conceive of an external world.
142
+ [1085.880 --> 1089.880] So they have this very distorted view.
143
+ [1089.880 --> 1094.880] Now, to avoid problems with copyright, because I knew this was being broadcast.
144
+ [1094.880 --> 1097.880] I decided to deploy the efforts of ChatGPT.
145
+ [1097.880 --> 1101.880] So when I entered an egocentric child, this is what came up.
146
+ [1101.880 --> 1104.880] And I think it kind of captures the point I was trying to make.
147
+ [1104.880 --> 1109.880] This sort of sense of the child being at the center of the nurturing universe and their parents all applauding them.
148
+ [1109.880 --> 1113.880] And so the child sees themselves as dominant.
149
+ [1113.880 --> 1119.880] But of course, that's all very well and fine, when you're very young.
150
+ [1119.880 --> 1125.880] But if you do not change that outlook as you start to interact with other children,
151
+ [1125.880 --> 1129.880] then this is a real cause, a real source of problems.
152
+ [1129.880 --> 1132.880] Because children must learn to get on.
153
+ [1132.880 --> 1134.880] They must learn to cooperate.
154
+ [1134.880 --> 1136.880] They must learn to share.
155
+ [1136.880 --> 1139.880] Sharing is something it takes quite a bit of time to do.
156
+ [1139.880 --> 1150.880] And you know, childhoods are long because there's a whole set of rules that children must assimilate and learn to live by.
157
+ [1150.880 --> 1154.880] Now, there's some interesting demonstrations of how egocentric children can be.
158
+ [1154.880 --> 1157.880] This is a photograph by a colleague of mine.
159
+ [1157.880 --> 1164.880] Don't be surprised if, when you want to play hide and seek, an egocentric child runs into the bathroom and then pulls the towel over their head.
160
+ [1164.880 --> 1167.880] Why would they do something so peculiar?
161
+ [1167.880 --> 1169.880] Clearly, their body is visible.
162
+ [1169.880 --> 1175.880] Well, the reason is, because they're so egocentric, they literally think that others see the world the same way they do.
163
+ [1175.880 --> 1185.880] So they think: if I can't see you, well, it stands to reason you can't possibly see me. Or consider false beliefs.
164
+ [1185.880 --> 1190.880] If I show an egocentric child, oops, that's not going to work.
165
+ [1190.880 --> 1199.880] If I show an egocentric child a box of Smarties (never work with animals or kids).
166
+ [1199.880 --> 1201.880] And I said, what do you think's in here?
167
+ [1201.880 --> 1203.880] The child will say smarties.
168
+ [1203.880 --> 1206.880] A three-year-old kind of knows these are confectionery.
169
+ [1206.880 --> 1211.880] If I then show them that in fact it contains pencils, well, they think that's hilarious.
170
+ [1211.880 --> 1214.880] They're really easy to amuse.
171
+ [1214.880 --> 1218.880] But the point is, if I then ask the child, what did you think was in there when I first showed you?
172
+ [1218.880 --> 1220.880] The child will say, pencils.
173
+ [1220.880 --> 1224.880] Having conveniently forgotten a moment ago, they were holding a false belief.
174
+ [1224.880 --> 1226.880] That's kind of interesting.
175
+ [1226.880 --> 1231.880] But if I then ask the child, imagine your friend Billy comes into the room, and I show him the tube.
176
+ [1231.880 --> 1233.880] What will he say is in there?
177
+ [1233.880 --> 1237.880] And the child says, full of confidence, pencils.
178
+ [1237.880 --> 1241.880] Similarly, thinking that everybody shares the same knowledge states that they do.
179
+ [1241.880 --> 1244.880] Again, this is egocentric thinking.
180
+ [1244.880 --> 1253.880] In order to be able to cooperate and communicate and negotiate, you have to understand that other people have different views, different perspectives, and different thoughts about the world.
181
+ [1253.880 --> 1255.880] So you need to work together.
182
+ [1255.880 --> 1260.880] So we grow up out of this egocentric view of the world.
183
+ [1260.880 --> 1262.880] This sort of sense that you're the most important person.
184
+ [1262.880 --> 1271.880] Well, when I say we grow up, most of us grow up, some of us retain a very egocentric view of the world, some of us even become president.
185
+ [1271.880 --> 1279.880] But my point is that we all vary along this dimension.
186
+ [1279.880 --> 1285.880] And it's all too easy, especially when we're stressed to revert to our childhood ways of thinking.
187
+ [1285.880 --> 1288.880] This is a theme I developed in the earlier book of mine.
188
+ [1288.880 --> 1290.880] Because these things never go away.
189
+ [1290.880 --> 1294.880] It takes effort to think less egocentric.
190
+ [1294.880 --> 1299.880] But unless you do this, then you're going to suffer from what Mark Leary calls the curse of the self.
191
+ [1299.880 --> 1303.880] Because when you are the center of your world, then you blow everything out of proportion.
192
+ [1303.880 --> 1307.880] You think you're paramount, your problems are worse than everyone else's problems.
193
+ [1307.880 --> 1312.880] And given the negativity bias I just told you about, you distort things.
194
+ [1312.880 --> 1314.880] Much better to become more allocentric.
195
+ [1314.880 --> 1317.880] Where you see yourself connected with others.
196
+ [1317.880 --> 1323.880] Because not only does that mean that you're more likely to get some support, because frankly, who wants to be friends with a narcissist?
197
+ [1323.880 --> 1326.880] Maybe I should strike that.
198
+ [1326.880 --> 1331.880] The point is, you can then benefit from the reciprocity of support.
199
+ [1331.880 --> 1338.880] Moreover, you can see that other people have things going on in their lives, which are way more important than your problems.
200
+ [1338.880 --> 1347.880] So it gives you perspective, which is why we advise our students to write gratitude letters.
201
+ [1347.880 --> 1350.880] So gratitude letters are well known positive intervention.
202
+ [1350.880 --> 1360.880] It forces you to recognize that you have benefited from the help of other people, but also to recognize that others are less fortunate than you.
203
+ [1360.880 --> 1363.880] So I think this helps to do that.
204
+ [1363.880 --> 1367.880] Okay, so one of the lessons from this first chapter.
205
+ [1367.880 --> 1373.880] Well, one of the things I do is teach you how to become less egocentric.
206
+ [1373.880 --> 1376.880] And for this, we're going to use a little bit of audience participation.
207
+ [1376.880 --> 1382.880] Okay, so I want you in the next moment to think about a problem that you have.
208
+ [1382.880 --> 1383.880] It could be a personal problem.
209
+ [1383.880 --> 1388.880] I don't want a global problem like the war or climate change.
210
+ [1388.880 --> 1390.880] Choose a problem which is relevant to you.
211
+ [1390.880 --> 1392.880] Personal to you. Like it could be financial.
212
+ [1392.880 --> 1393.880] It could be health.
213
+ [1393.880 --> 1394.880] It could be relationships.
214
+ [1394.880 --> 1395.880] It could be work problems.
215
+ [1395.880 --> 1397.880] And we've all got them.
216
+ [1397.880 --> 1398.880] Okay.
217
+ [1398.880 --> 1400.880] Because I want you to process that problem in the following way.
218
+ [1400.880 --> 1405.880] Now, I don't want you to talk about your problem out loud in a public forum like this.
219
+ [1405.880 --> 1407.880] I want you to use your internal voice.
220
+ [1407.880 --> 1409.880] But if you're alone, you could do it out loud.
221
+ [1409.880 --> 1412.880] But here, I'd like you to talk to yourself in the following way.
222
+ [1412.880 --> 1414.880] I want you to discuss your problem.
223
+ [1414.880 --> 1417.880] Reflect upon it using first person language.
224
+ [1417.880 --> 1418.880] I, me, and so on.
225
+ [1418.880 --> 1422.880] So let's take an example.
226
+ [1422.880 --> 1426.880] I am worried about my Royal Institution lecture, you see.
227
+ [1426.880 --> 1431.880] I'm worried about my pronunciation during the Royal Institution lecture
228
+ [1431.880 --> 1435.880] because I think it makes me look foolish and this upsets me.
229
+ [1435.880 --> 1438.880] I want you now to do something similar with your own problem, in your head.
230
+ [1438.880 --> 1446.880] Just do it now for a moment.
231
+ [1446.880 --> 1449.880] Now, if you have a real problem, how does that make you feel?
232
+ [1449.880 --> 1450.880] Probably not too good.
233
+ [1450.880 --> 1452.880] I've just reminded you of something that you weren't thinking about.
234
+ [1452.880 --> 1455.880] It made you acknowledge it's a problem and recognize it's upsetting you.
235
+ [1455.880 --> 1457.880] What a jerk I am.
236
+ [1457.880 --> 1459.880] But I have a quick solution.
237
+ [1459.880 --> 1461.880] I want you to do the same thing again.
238
+ [1461.880 --> 1464.880] But this time, do not use any first person terms like I or me.
239
+ [1464.880 --> 1466.880] I want you to use your own name.
240
+ [1466.880 --> 1471.880] And I want you to use non-first person terms like he or she, him, her, whatever it is that you use.
241
+ [1471.880 --> 1472.880] And review the problem.
242
+ [1472.880 --> 1474.880] Going back to what I'm saying.
243
+ [1474.880 --> 1478.880] Bruce is worried about his Royal Institution lecture because he's stumbling over his words.
244
+ [1478.880 --> 1480.880] And this is upsetting him.
245
+ [1480.880 --> 1482.880] I want you to do it with your own problem.
246
+ [1482.880 --> 1488.880] Do it now.
247
+ [1488.880 --> 1489.880] Okay.
248
+ [1489.880 --> 1493.880] So in comparison, which did you find less distressing?
249
+ [1493.880 --> 1499.880] Put your hand up if you thought talking about your problem using I was less distressing.
250
+ [1499.880 --> 1504.880] Put your hand up if you thought talking about it the third person was less distressing.
251
+ [1504.880 --> 1511.880] It's almost invariable that when you do this, it seems to create a real psychological distance between you and your problem.
252
+ [1511.880 --> 1514.880] And there are many ways you can do this, which I talk about in the chapter.
253
+ [1514.880 --> 1521.880] But psychological distancing is a powerful way to disengage from the egocentric view and become allocentric.
254
+ [1521.880 --> 1525.880] Because we always talk about ourselves in the first person, unless we're royalty.
255
+ [1525.880 --> 1528.880] And we say, you know, we are not amused.
256
+ [1528.880 --> 1531.880] But most of us use I.
257
+ [1531.880 --> 1536.880] And therefore, we immerse ourselves in the intensity of negative emotions.
258
+ [1536.880 --> 1546.880] But when you're using non-first person terms, talking about Bruce, you can use it to amplify positive emotions and attenuate those negative ones.
259
+ [1546.880 --> 1550.880] Because it's like you're talking to yourself as a friend.
260
+ [1550.880 --> 1557.880] So there's a good example from lesson one about how you can become less egocentric and more allocentric.
261
+ [1557.880 --> 1560.880] Okay. How are we doing for time?
262
+ [1560.880 --> 1565.880] All right. Number two, avoid isolation.
263
+ [1565.880 --> 1569.880] Now this chapter really outlines something peculiar about our species.
264
+ [1569.880 --> 1572.880] First of all, we have an unusual life strategy.
265
+ [1572.880 --> 1574.880] We live for a very long period of time.
266
+ [1574.880 --> 1579.880] But we also have the longest childhood of any animal if you think about it.
267
+ [1579.880 --> 1583.880] Why is that? Well, part of it is because we have these very large brains.
268
+ [1583.880 --> 1587.880] Now, Robin Dunbar has proposed what he calls the social brain hypothesis.
269
+ [1587.880 --> 1592.880] When you look at the brains of mammals which live in groups of different complexity,
270
+ [1592.880 --> 1596.880] those that live in the most complex social environments have larger brains.
271
+ [1596.880 --> 1601.880] And we, by comparison, have the biggest brains, seven times larger than you would expect.
272
+ [1601.880 --> 1608.880] Why? Well, according to ChatGPT, it's because we live in these very intensely complex social environments.
273
+ [1608.880 --> 1610.880] Okay.
274
+ [1610.880 --> 1616.880] The problem with having a big brain is, well, it takes a long time to mature: a childhood of up to 18 years.
275
+ [1616.880 --> 1620.880] But also delivering a big brain is really challenging.
276
+ [1620.880 --> 1624.880] Childbirth is particularly dangerous for our species.
277
+ [1624.880 --> 1631.880] I think this is one of the reasons we're fairly unique in the animal kingdom in having assisted deliveries.
278
+ [1631.880 --> 1634.880] Chimpanzees can go off and have babies after a couple hours by themselves.
279
+ [1634.880 --> 1637.880] Most humans require some form of assistance.
280
+ [1637.880 --> 1643.880] Moreover, the period immediately after birth is also a vulnerable period because you have to raise a child.
281
+ [1643.880 --> 1648.880] In general, I would say that raising children requires coordinated, cooperative efforts.
282
+ [1648.880 --> 1650.880] And I think that's been with us for quite a while.
283
+ [1650.880 --> 1654.880] And this is why we've evolved all these emotions of attachment and love and so on.
284
+ [1654.880 --> 1658.880] We are so codependent that you couldn't really exist if you were on your own.
285
+ [1658.880 --> 1663.880] So that's why being excluded is such a painful experience.
286
+ [1663.880 --> 1668.880] Ostracism is something which resonates painfully for all of us.
287
+ [1668.880 --> 1672.880] And it begins early, because when the child's in their home environment, it's fine,
288
+ [1672.880 --> 1676.880] but when they go into the environment where their peers start to become more important,
289
+ [1676.880 --> 1680.880] then they become acutely sensitive to the possibility of being excluded.
290
+ [1680.880 --> 1685.880] To the extent that it registers in the same regions of the brain where pain registers.
291
+ [1685.880 --> 1690.880] So in these studies, this is a magnetic resonance imaging study.
292
+ [1690.880 --> 1700.880] They got participants to play a game, and they induced a sense of being ostracized by excluding the player.
293
+ [1700.880 --> 1705.880] And what you find is that an area known as the anterior cingulate, here, lights up.
294
+ [1705.880 --> 1708.880] This is the same area which lights up under pain.
295
+ [1708.880 --> 1713.880] Pain, if you think about it, is nature's way of warning you that you've got to do something to change your behavior.
296
+ [1713.880 --> 1715.880] Take your hand off the fire and so on.
297
+ [1715.880 --> 1722.880] In the same way, social exclusion, or social pain, tells you you need to do something to change your circumstances.
298
+ [1723.880 --> 1730.880] Not only is isolation problematic when you're young, it's really problematic as you get older.
299
+ [1730.880 --> 1732.880] And this is an increasing problem of loneliness.
300
+ [1732.880 --> 1736.880] Now I don't expect you to be able to read all of this, but this is a meta-analysis,
301
+ [1736.880 --> 1740.880] which is a study of all the studies looking at the morbidity risks for various factors.
302
+ [1740.880 --> 1746.880] And here we have famously social relationships have a greater morbidity risk than more familiar ones,
303
+ [1746.880 --> 1752.880] such as smoking 15 cigarettes a day and alcohol consumption lack of exercise and obesity.
304
+ [1752.880 --> 1759.880] So being left alone is not just mentally distressing, it's physically distressing as well.
305
+ [1759.880 --> 1763.880] So this is the lesson avoid loneliness.
306
+ [1763.880 --> 1767.880] Lesson number three, reject negative comparisons.
307
+ [1767.880 --> 1771.880] Ah, I mean in many ways this is how the brain experiences life.
308
+ [1772.880 --> 1774.880] It's always making comparisons.
309
+ [1774.880 --> 1778.880] Every sensation, every experience you have is a comparison.
310
+ [1778.880 --> 1780.880] You're a comparison engine.
311
+ [1780.880 --> 1783.880] Your brain is always drawing these comparisons.
312
+ [1783.880 --> 1789.880] So food tastes saltier if you've just been eating something sweet; something sounds louder if you've come in from a quiet room, and so on.
313
+ [1789.880 --> 1792.880] This shows up in perception as the famous Ebbinghaus illusion.
314
+ [1792.880 --> 1794.880] The two circles look different.
315
+ [1794.880 --> 1798.880] The two inner circles look different, but in fact they're exactly the same.
316
+ [1798.880 --> 1801.880] That's because the brain is comparing them to the context.
317
+ [1801.880 --> 1808.880] In one context, the circle seems smaller, but if you remove the context, you can see that in fact they are identical.
318
+ [1808.880 --> 1811.880] Now we are the same.
319
+ [1811.880 --> 1812.880] Okay?
320
+ [1812.880 --> 1817.880] How we think about our status and how we're getting on is relative to what everyone else is doing.
321
+ [1817.880 --> 1820.880] So we too are subject to the biases of context.
322
+ [1820.880 --> 1823.880] If you take them away, our brains are the same.
323
+ [1823.880 --> 1827.880] Now this produces some interesting sort of distortions.
324
+ [1827.880 --> 1831.880] With the Olympics coming up, I want you to look out for this on the television the next time you get an opportunity.
325
+ [1831.880 --> 1836.880] Because if you look at the finalist podium, there's usually one person who is unhappy.
326
+ [1836.880 --> 1840.880] And it's not the person who came last in the lineup.
327
+ [1840.880 --> 1845.880] So here we have, I think it's from 2016.
328
+ [1845.880 --> 1850.880] There's two very, very happy people here and one Frenchman who is not very happy at all.
329
+ [1850.880 --> 1856.880] If you look at the emotional expressions, it's typically the silver medalist who's the least happy.
330
+ [1856.880 --> 1861.880] Now why is that? Well, because he's comparing himself to the gold medalist.
331
+ [1861.880 --> 1863.880] He's the one who's going to go down in the record books.
332
+ [1863.880 --> 1868.880] The silver medalist is saying, if I just tried a bit harder, I would have got it, damn it.
333
+ [1868.880 --> 1873.880] The bronze medalist on the other hand, he's not comparing himself to the gold medalist or silver medalist.
334
+ [1873.880 --> 1877.880] He's comparing himself to every schmuck who didn't even get onto the podium.
335
+ [1877.880 --> 1880.880] So he's going, hey, I got a medal.
336
+ [1880.880 --> 1881.880] Okay.
337
+ [1881.880 --> 1884.880] Now I think we all do this to a greater or lesser extent.
338
+ [1884.880 --> 1889.880] Some of us are really social-comparison oriented, to use the technical term.
339
+ [1889.880 --> 1897.880] I think I'm a good-looking charismatic, successful professor until I compare myself with Brad Pitt,
340
+ [1897.880 --> 1900.880] and maybe Barack Obama, or maybe Elon Musk.
341
+ [1900.880 --> 1906.880] My point is there's always someone, on any dimension of your personality you can think of, who is doing better than you.
342
+ [1906.880 --> 1911.880] I'd imagine, if we ever had the opportunity, and this may be something the RI should think about,
343
+ [1911.880 --> 1916.880] if you invited one of these guys here and got them to be honest, we could all identify aspects of our lives
344
+ [1916.880 --> 1918.880] where we feel inadequate compared to someone else.
345
+ [1918.880 --> 1926.880] But that's not the way we think, because social media presents this endless evidence of everyone having a much better life than ourselves.
346
+ [1926.880 --> 1933.880] In fact, it encourages us to post our best parties, our best dresses, our best friends, whatever.
347
+ [1933.880 --> 1939.880] Everyone's doing so well because we're seeking validation, which is why ostracism works as well,
348
+ [1939.880 --> 1945.880] because we're so fearful of missing out, we need to have all the support, but then we compare and despair.
349
+ [1945.880 --> 1947.880] I think that's the real problem of social media.
350
+ [1947.880 --> 1954.880] It's this tendency to produce a distortion of reality, and therefore we feel inadequate.
351
+ [1954.880 --> 1961.880] Before I move on, there is one situation where both the silver medalist and the bronze medalist can feel inadequate,
352
+ [1961.880 --> 1965.880] especially if they're boys competing in a wrestling competition.
353
+ [1966.880 --> 1970.880] As you can see, they've been beaten.
354
+ [1970.880 --> 1976.880] Okay, lesson number four, become more optimistic.
355
+ [1976.880 --> 1978.880] This is really challenging.
356
+ [1978.880 --> 1983.880] As I said earlier on, we have a brain which has this design feature to pay attention to negative things,
357
+ [1983.880 --> 1987.880] they impact on us more, we pay more attention to it.
358
+ [1987.880 --> 1991.880] If people are reading the newspaper, we know that their eyes are looking at the negative information,
359
+ [1992.880 --> 1993.880] they're remembering it more.
360
+ [1993.880 --> 2000.880] So we have this real difficulty of being balanced because we're seeking it out, bad is stronger than good.
361
+ [2000.880 --> 2004.880] And yet some people do seem to have a disposition which is more positive.
362
+ [2004.880 --> 2008.880] Now, what is it about optimists and pessimists?
363
+ [2008.880 --> 2012.880] Well, first of all, optimists are more willing to try.
364
+ [2012.880 --> 2016.880] They don't give up as easily; pessimists say, what's the point? It's never going to work.
365
+ [2016.880 --> 2021.880] The optimists in general end up being more productive, because people like to be around positive people.
366
+ [2021.880 --> 2025.880] They don't like to be around people who don't sort of, you know, try.
367
+ [2025.880 --> 2028.880] They end up being happier and healthier.
368
+ [2028.880 --> 2033.880] The same sort of mortality studies show it. One recent study in 2019, again a
369
+ [2033.880 --> 2039.880] very large study over 25 years, showed that optimists live on average eight to ten years longer
370
+ [2039.880 --> 2042.880] than the most pessimistic people in that study.
371
+ [2042.880 --> 2047.880] So there is a real benefit to being optimistic, but how do you become more optimistic?
372
+ [2047.880 --> 2053.880] Well, this is the work of Marty Seligman, and he's one of the founding fathers of the positive psychology movement.
373
+ [2053.880 --> 2062.880] He and his colleagues looked at optimism and pessimism, and they discovered there are different ways in which the two groups interpret events.
374
+ [2062.880 --> 2065.880] Remember Epictetus: it's not what happens, it's what you make of it.
375
+ [2065.880 --> 2072.880] They identified three dimensions: permanence, pervasiveness, and personalization, taking things personally.
376
+ [2072.880 --> 2076.880] So imagine you're a student and you fail a term paper.
377
+ [2076.880 --> 2080.880] The optimists and pessimists would differ in the way they make sense of it.
378
+ [2080.880 --> 2085.880] The pessimist would take that example and say, I'm never going to be able to pass exams, okay?
379
+ [2085.880 --> 2088.880] This is a fail, I'm never going to be able to change.
380
+ [2088.880 --> 2093.880] The optimist will draw a line under it and say, all right, I'll do better next time.
381
+ [2093.880 --> 2101.880] In terms of pervasiveness, the pessimist is likely to extrapolate and say, I failed this exam, I'm a failure at everything I do in life.
382
+ [2101.880 --> 2110.880] Okay? Whereas the optimist is much more likely to curtail it or silo it and say, okay, that's one exam, but I've got other things going on in my life that are going well.
383
+ [2110.880 --> 2117.880] And in terms of responsibility, the pessimist is likely to internalize it with an internal locus of control.
384
+ [2117.880 --> 2120.880] It's my fault, I'm the one responsible for failure.
385
+ [2120.880 --> 2125.880] Whereas the optimist is more likely to deflect. Maybe it's Professor Hood. He thinks it's a great lecture,
386
+ [2125.880 --> 2130.880] but I don't think he's really a good teacher, you know? His problem, not mine.
387
+ [2130.880 --> 2137.880] So clearly you've got to find a balance because if you don't recognize your responsibility in any of these situations, you're never going to change, are you?
388
+ [2137.880 --> 2141.880] So you've got to recognize that you need to take some responsibility, you need to find a balance.
389
+ [2141.880 --> 2146.880] But if you're going to strike the balance, try and become more optimistic because of all the benefits.
390
+ [2146.880 --> 2148.880] How do you do that?
391
+ [2148.880 --> 2153.880] Well, what we recommend is Seligman's ABCDE technique.
392
+ [2153.880 --> 2160.880] And if you're thinking about doing this, I would advise you to invest in a good journal or diary and a pen or a good pencil.
393
+ [2160.880 --> 2165.880] Don't do it digitally, you don't process thoughts as well when you do it on a computer.
394
+ [2165.880 --> 2170.880] When you're writing it down, the effort it takes actually makes you process it more deeply.
395
+ [2170.880 --> 2174.880] And this technique comes in two stages. First, ABC.
396
+ [2174.880 --> 2183.880] ABC is: take an event that has upset you, and articulate in as much detail as you can the nature of the Adversity, what you Believed was happening, and the Consequences.
397
+ [2183.880 --> 2188.880] So let's say this is tomorrow.
398
+ [2188.880 --> 2194.880] I was giving this lecture, the Royal Institution Discourse, and I tried to do a demonstration, and the box fell apart, and it was awful.
399
+ [2194.880 --> 2199.880] Everyone was laughing at me, and I'm thinking, oh my god, what a fool I must have looked.
400
+ [2199.880 --> 2206.880] That's my belief. What are the consequences? I'll never be invited back to the R.I. again. No one will buy my book.
401
+ [2206.880 --> 2214.880] And the point is, you're articulating in as much detail as possible the nature of the problem, because you need as much evidence and detail as you can get in order to dispute it.
402
+ [2214.880 --> 2218.880] This is what you do in the second half. In the second half, you step out of yourself.
403
+ [2218.880 --> 2223.880] Like, let's say, a defense lawyer. You are now defending the client.
404
+ [2223.880 --> 2234.880] You take every example and piece of evidence that you've just written down and you challenge it because you can always find an alternative way of thinking about anything.
405
+ [2234.880 --> 2244.880] People were laughing? Maybe they were enjoying it. You can take anything, and you can always find a more positive spin on everything.
406
+ [2244.880 --> 2253.880] Having just gone through that process, you can then recognize that something that was upsetting you only half an hour earlier, you've now actually realized was not so bad.
407
+ [2253.880 --> 2263.880] And this should energize you. It should give you confidence to go out and try again, and maybe prepare and get a better set of props next time, rather than a rubbish tube of Smarties.
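If you want a concrete template for these journal entries, the two-stage structure can be written down as a simple record. A minimal sketch; the field names are my own shorthand rather than Seligman's exact worksheet, and the sample text paraphrases the speaker's example:

    from dataclasses import dataclass, field

    @dataclass
    class ABCDEEntry:
        adversity: str                # A: what happened, in detail
        belief: str                   # B: what you believed it meant
        consequence: str              # C: how you felt, what you did
        disputation: list = field(default_factory=list)  # D: counter-evidence
        energization: str = ""        # E: how you feel after disputing

    entry = ABCDEEntry(
        adversity="Demo prop fell apart mid-lecture.",
        belief="I looked like a fool; I'll never be invited back.",
        consequence="Felt embarrassed and anxious.",
    )
    # Stage two: argue the other side, like a defense lawyer.
    entry.disputation += [
        "The audience laughed, but the rest of the talk went well.",
        "One failed prop says nothing about the whole lecture.",
    ]
    entry.energization = "Relieved; next time, bring sturdier props."
    print(entry)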
408
+ [2263.880 --> 2274.880] OK, so that's the ABCDE technique. But don't be delusional; you've got to recognize there's a reality to pay attention to. OK, lesson number five: control your attention.
409
+ [2274.880 --> 2278.880] So we should be about 40 minutes into the talk. Yeah, about right.
410
+ [2278.880 --> 2282.880] Which means that a lot of you are not paying attention.
411
+ [2283.880 --> 2290.880] Actually, I never really talk longer than 40 minutes because I know that's the limit for the attention span.
412
+ [2290.880 --> 2301.880] Except for a Royal Institution discourse lecture. But anyway, I am not disappointed, because mind wandering is happening right now in a lot of you, if not most of you, because it's difficult.
413
+ [2301.880 --> 2308.880] I'm talking quite fast. I'm giving you lots of information. Your brain is the most metabolically hungry organ in the body.
414
+ [2309.880 --> 2318.880] It's effortful listening to someone, but we tend to just mind wander all the time. Control your attention is this lesson, by the way.
415
+ [2318.880 --> 2324.880] You might think mind wandering is pleasant, a bit daydreamy: oh, I wonder what I'm going to do after this, that might be fun.
416
+ [2324.880 --> 2330.880] But in fact, a lot of mind wandering is negative rumination. It's worrying about things.
417
+ [2331.880 --> 2339.880] We know this from the work of Killingsworth and my friend Daniel Gilbert. They did this very innovative study with smartphones.
418
+ [2339.880 --> 2345.880] They got people to download an app, and this app would notify them at various points of the day, randomly, and ask them three questions.
419
+ [2345.880 --> 2350.880] What are you doing right now? Are you thinking about what you're doing? And by the way, how happy are you?
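The probing protocol itself is simple to sketch. A toy version of a random experience-sampling schedule; the waking window, probe count, and wording here are assumptions for illustration, not the actual app's design:

    import random

    WAKING_HOURS = (9, 21)   # assumed waking window
    PROBES_PER_DAY = 3

    QUESTIONS = [
        "What are you doing right now?",
        "Are you thinking about what you're doing?",
        "How happy are you (0-100)?",
    ]

    def schedule_probes(days, seed=0):
        """Pick random (hour, minute) probe times for each day."""
        rng = random.Random(seed)
        return {
            day: sorted((rng.randint(*WAKING_HOURS), rng.randint(0, 59))
                        for _ in range(PROBES_PER_DAY))
            for day in range(days)
        }

    for day, times in schedule_probes(days=2).items():
        for h, m in times:
            print(f"day {day}, {h:02d}:{m:02d} -> ask the three questions")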
420
+ [2351.880 --> 2357.880] And what they found was astounding. Actually, we've just replicated this in our student population as well.
421
+ [2357.880 --> 2369.880] So I'm quite sure this is robust. It turns out mind wandering happens 50% of the waking hours. Half of the time, we're not thinking about what we're doing.
422
+ [2369.880 --> 2375.880] Which is a little worrying if you're driving a car or flying a plane. Thank God for autopilot. That's a lie.
423
+ [2376.880 --> 2389.880] And it happens in a third of all activities, with one exception, which is sex, which is not too surprising, because if there's one occasion where you really should be concentrating, it's during an intimate act like that.
424
+ [2389.880 --> 2398.880] But what was really interesting is that, yeah, we have positive moments of thinking about the upcoming Glastonbury or whatever we're going to do and getting excited about that.
425
+ [2398.880 --> 2405.880] But when our thoughts are neutral or negative and you measure the happiness, it's much worse, even compared with the positive events.
426
+ [2405.880 --> 2410.880] So this is why the paper was entitled, A Wandering Mind Is an Unhappy Mind. What can you do about it?
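
A minimal sketch of how experience-sampling data of this kind can be summarized. Everything below is hypothetical (field names, probabilities, and happiness scores were chosen only to mirror the pattern the study reported); the real study analyzed tens of thousands of actual smartphone responses.

    import random

    def make_probe():
        # One simulated probe: is the mind wandering, and how happy (0-100)?
        wandering = random.random() < 0.5          # roughly half of waking probes
        valence = random.choice(["pleasant", "neutral", "unpleasant"])
        base = {"pleasant": 62, "neutral": 54, "unpleasant": 45}[valence] if wandering else 65
        return {"wandering": wandering, "happiness": base + random.gauss(0, 5)}

    probes = [make_probe() for _ in range(10000)]
    rate = sum(p["wandering"] for p in probes) / len(probes)
    on_task = [p["happiness"] for p in probes if not p["wandering"]]
    off_task = [p["happiness"] for p in probes if p["wandering"]]
    print(f"mind wandering on {rate:.0%} of probes")
    print(f"mean happiness on task: {sum(on_task) / len(on_task):.1f}")
    print(f"mean happiness while wandering: {sum(off_task) / len(off_task):.1f}")
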
427
+ [2410.880 --> 2417.880] Well, paradoxically, oh, I need to remind you this. What's going on in the brain here? And this is really fascinating.
428
+ [2417.880 --> 2427.880] It turns out that when you are not on task in your mind is wandering, then there's a series of systems in the brain which fire into action.
429
+ [2427.880 --> 2439.880] They're called the default mode network. And I just want to take a moment to tell you this was discovered by chance by some of the early pioneers looking at fMRI, which is functional magnetic resonance imaging.
430
+ [2439.880 --> 2448.880] This is where you can measure blood flow. And in these early studies, in order to measure what's going on when you're looking at visual targets or hearing words, you had to have a baseline measure.
431
+ [2448.880 --> 2453.880] So the early researchers said, okay, just lie there and don't think of anything. We just need your baseline measures.
432
+ [2453.880 --> 2458.880] And what they thought would happen is that you get flattened brain activation. In fact, they found the opposite.
433
+ [2458.880 --> 2464.880] When you're told not to think of anything and just lie there, the default mode network kicks into action.
434
+ [2464.880 --> 2471.880] And this network, we now know, is a representation of self and others. And what we think is going on is it's running simulations.
435
+ [2471.880 --> 2477.880] I wish I hadn't said that, and what's going to happen, and what's going to happen tomorrow. So you're doing this housekeeping.
436
+ [2477.880 --> 2493.880] There's housekeeping in your brain looking at all these potential things. So the default mode network is the basis for mind wandering and it's associated with negative thinking. So what can you do? Well, interestingly, the way to stop mind wandering is actually to clear your mind.
437
+ [2493.880 --> 2501.880] Meditation is known to be very effective in reducing mind wandering. So again, imaging studies; this is Brewer's work.
438
+ [2501.880 --> 2512.880] The green, the red and the blue are just different forms of meditation. And what they did is they looked at meditators defined as those who were experts with at least 10,000 hours or so.
439
+ [2512.880 --> 2522.880] And the controls were others who had just been taught the techniques. So they didn't have a lot of experience in it. And then they put them in the scanner and said, okay, just clear your mind, don't think of anything.
440
+ [2522.880 --> 2539.880] And what you can see is that in the meditators, their BOLD signals, the blood oxygenation levels, drop. So they're becoming deactivated. Whereas in the novices, it starts to rise, which is the signal for the default mode network, same for the prefrontal and the more anterior regions.
441
+ [2539.880 --> 2548.880] So this is confirmation that meditation seems to attenuate the default mode network, which is normally associated with being unhappy.
442
+ [2548.880 --> 2565.880] So what's going on? Well, all meditations, depending on what forms they are, kind of share a similar mechanism of switching your attentional spotlight from the internal dialogue, the internal little critic in your head, this little voice, which is saying you're useless, Bruce, why do you bother?
443
+ [2565.880 --> 2577.880] And switching it towards your sensations: pay attention to your breathing, think about this, think about that, basically trying to draw your attention away from the internal dialogue in your head.
444
+ [2577.880 --> 2589.880] Because you can't simultaneously attend to your thoughts and your bodily sensations like breathing or distance sounds, because attention operates a little bit like a spotlight.
445
+ [2589.880 --> 2599.880] A spotlight can be intense, you can focus a beam, and that will amplify the experience. So if you're enjoying pleasure, focus your attention on it, and it'll be even more pleasurable.
446
+ [2599.880 --> 2605.880] You can focus your attention on pain, and that makes it more painful, or you can have diffuse attention.
447
+ [2605.880 --> 2616.880] But the point is attention cannot be split easily, and if you're not attending to something, you won't notice it. In fact, attention and consciousness are interdependent in many ways.
448
+ [2616.880 --> 2627.880] And this gives me an opportunity to demonstrate. I'm so grateful I got this wonderful demonstration from Michael Cohen at Amherst, who sent it to me. I knew about it, but I asked him, and he said I could show it.
449
+ [2627.880 --> 2641.880] Because what I'm going to show you now is a sequence. This is actually a movie, because as you're looking at this movie, it's changing right in front of your very eyes. So there we have our famous little spiral in the middle there, so that's what you saw before.
450
+ [2641.880 --> 2649.880] But as you're looking at this, because your attention is kind of spread and diffuse, you're not noticing that it's changing right in front of your very eyes.
451
+ [2649.880 --> 2660.880] In fact, it's almost a completely different thing. Now some of you may have spotted something, but my expectation is that most of you have not spotted anything. So put your hands up if you've not spotted anything.
452
+ [2660.880 --> 2668.880] Okay, that's the majority. Let me show you what the first slide was. Here we go. That's the first slide.
453
+ [2668.880 --> 2676.880] And that's the end of the film. Okay, so oops, there we go. That's the end of the film. That's the first slide.
454
+ [2676.880 --> 2687.880] But remember I said the brain is a comparison engine. Those changes are so subtle that if you don't know about them, you won't see them; they don't enter into consciousness.
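
A minimal sketch of how a gradual-change movie like this one can be generated: cross-fade from the first slide to the last over so many frames that no single frame-to-frame difference is large enough to be noticed. It assumes numpy and imageio (with an ffmpeg backend) are installed; the file names are placeholders.

    import numpy as np
    import imageio.v2 as imageio

    first = imageio.imread("first_slide.png").astype(np.float64)
    last = imageio.imread("last_slide.png").astype(np.float64)

    writer = imageio.get_writer("gradual_change.mp4", fps=30)
    for t in np.linspace(0.0, 1.0, 30 * 60):     # one minute of video
        frame = (1.0 - t) * first + t * last     # per-frame change is tiny
        writer.append_data(frame.astype(np.uint8))
    writer.close()

Because the brain compares frame to frame, the tiny per-frame differences never cross the threshold for conscious detection, even though the first and last frames differ dramatically.
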
455
+ [2687.880 --> 2698.880] So that's the mechanism behind attention. Whoops, now where are we going next? So you might think, okay, how many of us have lain in bed at night thinking, stop thinking about this?
456
+ [2698.880 --> 2707.880] You know, last night, I was like, don't think about the lecture tomorrow. Just stop doing that. Because, you know, it's this intrusive thought. That's the worst thing you can do.
457
+ [2707.880 --> 2712.880] Okay, and I'm going to use another demonstration here. Are we doing 50 minutes? Okay, I can work it out.
458
+ [2712.880 --> 2721.880] Okay, for the next minute, you can think about anything you like. Anything. But the one thing you can't think about, you're not allowed to think about is a white bear.
459
+ [2721.880 --> 2729.880] So we're going to start there. Watch the clock. Go now. If it pops into your head, be honest. Put your hand up. So think about anything, but not the white bear. Go.
460
+ [2729.880 --> 2737.880] Okay, all right, the hands are rising. It usually takes a little bit longer than that. I think you guys' attention is fair. All right, some of you are managing it.
461
+ [2737.880 --> 2749.880] It's really tough. And this is what Dostoevsky pointed out. He wrote back in the 19th century that if you pose yourself this task, the one thing you cannot do is ignore the thought. You can't do it.
462
+ [2749.880 --> 2761.880] And the reason, as my late colleague Dan Wegner pointed out, is what's called ironic thought suppression. It's ironic because the very thing you don't want to think about becomes the most prominent thing in your mind.
463
+ [2761.880 --> 2773.880] The reason is because of attention. Okay, because you're trying to stop yourself thinking it. Okay, I can't think of this bear. Paradoxically, you're making that the strongest representation in your mind.
464
+ [2773.880 --> 2779.880] And also you're monitoring your stream of thought to see if it's popped into your head again. So how do you deal with it?
465
+ [2779.880 --> 2794.880] Well, you can distract yourself. Remember, attention cannot be split. And that, I think, is in many ways why gaming has become so popular, because the games are designed to attract your attention, to pull your attention in.
466
+ [2794.880 --> 2803.880] The trouble is it doesn't really help the unresolved problem. So I would suggest that you could possibly postpone the thought and then process it at a later date.
467
+ [2803.880 --> 2814.880] I'm a big fan of keeping a journal, because I think it allows you to process things but also gives you tangible evidence that the problem which was paramount last year has gone away.
468
+ [2814.880 --> 2825.880] We seem to forget how easily the problems which we thought we could never get over actually do disappear. So I think processing thoughts, writing them down, is very valuable.
469
+ [2825.880 --> 2835.880] Or you could try meditation because one of the things you're taught is if a thought comes into your head just acknowledge it and let it float out the window like a cloud. Don't give it any emphasis.
470
+ [2835.880 --> 2848.880] Okay. So that's lesson number five. Lesson number six. Now this really does cycle back to what I was saying about lesson number two, the fact that we're such a social animal.
471
+ [2848.880 --> 2860.880] The irony is that we now live in incredibly dense societies where there are lots of opportunities to form these connections. And it's the one thing we don't want to do.
472
+ [2860.880 --> 2872.880] If you travel on the tube you'll see that everyone is staring at their phones. And actually people actively avoid interacting because we think it's just going to be so awful and awkward.
473
+ [2872.880 --> 2889.880] And we don't like having to strike up conversations. So Nick Epley did the study in Chicago and London. And he chose London specifically because people said this is the last thing: I couldn't imagine anything worse than speaking to someone else on public transport.
474
+ [2890.880 --> 2903.880] Seriously. And I'll actually come back to an anecdote in a moment. But he did this study where he got commuters and he paid them, and then he measured their happiness. And then they were given one envelope which had one of three instructions.
475
+ [2903.880 --> 2913.880] They were told to commute out that day and sit in solitude or to strike up a conversation with a stranger or to do what they normally do which was the control.
476
+ [2914.880 --> 2922.880] And then they went out and did their commute and they were called up later in the day. They also got a second group of commuters who were asked to predict.
477
+ [2922.880 --> 2928.880] How do you think that would be? Okay, how positive do you think that experience would be? So here's the data.
478
+ [2928.880 --> 2939.880] And what you can see here is the data for the trains and the buses. These are the predictions. So the white square is solitude, and that's going up the scale.
479
+ [2939.880 --> 2947.880] So they were saying that would be best. That would be the most positive. And the least positive, in other words negative, would be forming a connection.
480
+ [2947.880 --> 2954.880] What did the people who actually had to do these instructions report? Well, you've probably guessed it. Entirely the opposite pattern.
481
+ [2954.880 --> 2968.880] People forced to strike up a conversation actually enjoyed it more than being forced to sit in solitude. And this is what's called pluralistic ignorance. In other words, everyone thinks it will be so awkward to strike up a conversation that no one does it.
482
+ [2969.880 --> 2977.880] Okay. And interestingly, the people who were talked to, you might think, oh God, they're going to move away from me. They actually enjoyed being talked to as well.
483
+ [2977.880 --> 2983.880] And the anecdote I was going to refer to is that, I think it was an American journalist who heard about this research.
484
+ [2983.880 --> 2997.880] He actually tried a campaign on the tube saying, strike up a conversation with me. And this Londoner had a counter campaign. I can't remember exactly the wording, but: like hell you will, I'd rather drink acid.
485
+ [2997.880 --> 3005.880] So anyway, so people just think this will be so awful. But in fact, it is generally good. So do strike up a conversation. Okay.
486
+ [3005.880 --> 3026.880] Lesson number seven: get out of your own head. What did ChatGPT come up with for that image? Well, not surprisingly, something psychedelic. Now, that is kind of relevant, because in the book I do talk about this recent development in clinical research: psychedelic-assisted therapy.
487
+ [3026.880 --> 3038.880] And it's going on here in London; I think at King's and Imperial there are researchers on this, and in America. Obviously, it's under constraint because these are illegal drugs. But psilocybin,
488
+ [3038.880 --> 3050.880] in addition to the obvious euphoria and vivid hallucinations, has been shown to actually have an impact on those with intractable depression, with remarkable findings.
489
+ [3050.880 --> 3067.880] Not a large number of studies, but enough to get people really excited about it. It just takes one session. And months later, these people are still good. And the reason is, well, the one lasting legacy of this experience they report is that they feel their sense of self has been changed.
490
+ [3067.880 --> 3076.880] They feel a greater connection with humanity and the environment and the cosmos. So I think it's also maybe no coincidence that
491
+ [3076.880 --> 3087.880] psilocybin works on the serotonin system, which, guess what, is the mechanism for the default mode network. So maybe what we're seeing is an alteration of the representation of self and others.
492
+ [3087.880 --> 3096.880] Now look, I'm not advocating that people should do this, because actually it can be dangerous if you have certain dispositions. That's why it should be done under clinical supervision.
493
+ [3096.880 --> 3104.880] But there are other ways to alter your sense of self. And this has been a feature of many civilizations and rituals.
494
+ [3104.880 --> 3114.880] And they don't all have to be religious, some of them are secular. But here is a religious ceremony. These are the famous whirling dervishes of Turkey, these are the Sufis.
495
+ [3114.880 --> 3128.880] And I did not know this, but I have subsequently discovered that this dance is a symbolic death of the ego. This bizarre hat on the head is supposed to be the tombstone of the ego, and the skirt is the shroud.
496
+ [3128.880 --> 3139.880] And they whirl around like this, constantly, into a trance state. And they have this dissolution of self, this annihilation of the sense of self.
497
+ [3139.880 --> 3156.880] Buddhists, anatta: Buddhism advocates the annihilation of self because of all the problems associated with it. Join a choir. Choirs are really great. People lose their sense of self; they feel more connected as the harmony of the sound rises.
498
+ [3156.880 --> 3163.880] And people have this emotional experience, which isn't just the musical experience, it's an emotional experience.
499
+ [3163.880 --> 3169.880] Or if you're very lucky, take a trip out into space.
500
+ [3169.880 --> 3177.880] One of the most common reports of astronauts, and I believe, you know, Kevin Fong has certainly talked to them, is the so-called overview effect.
501
+ [3177.880 --> 3193.880] When you go out into outer space and look back at the planet, they typically report an unexpected, profound sense of humanity and connection, and seeing that the problems that they have down there are inconsequential in comparison to the vastness of the universe and the beauty of the planet.
502
+ [3193.880 --> 3200.880] And this is called the overview effect. And I think that this report resonates with what Carl Sagan talked about.
503
+ [3200.880 --> 3207.880] I was speaking to someone earlier and I said I would be mentioning something about physics. And this is the famous pale blue dot, the small pale blue dot.
504
+ [3207.880 --> 3216.880] And in it, you can see a pale little dot caught in a sunbeam. And Sagan, and I've kind of paraphrased, he's like, that's home, that's us.
505
+ [3216.880 --> 3232.880] On it, everyone you love, everyone you know, everyone you ever heard of, every human being, whoever was, lived out their lives, every hero, and coward, every creator, and destroyer of civilizations, every king, and peasant, every corrupt politician,
506
+ [3232.880 --> 3244.880] and we've got lots of them, every superstar, every supreme leader, every saint and sinner in the history of the species lived there, on a mote of dust suspended in a sunbeam.
507
+ [3244.880 --> 3256.880] And this is the image from Voyager as it's leaving our solar system. Now, some people might think, oh god, that sounds terribly humbling. I just think it's beautiful, and I think it captures
508
+ [3256.880 --> 3264.880] this sense that we've got to see ourselves as connected, and enjoy the fragility of life and the brief moments that we have.
509
+ [3264.880 --> 3276.880] Or you could go detecting. This is me, and yes, I am a nerdy detectorist; this is me out with my metal detector in the fields in Somerset.
510
+ [3276.880 --> 3289.880] And you might think, oh my god, you must be mind wandering all the time, wandering around the fields. And yeah, my mind does wander, but the beauty of metal detecting, it's a bit like fishing on land, okay: you're doing this, and you're, oh gosh, is that treasure?
511
+ [3289.880 --> 3296.880] And then you get down into it. So your thought processes are constantly being interrupted, and I find this incredibly soothing.
512
+ [3296.880 --> 3306.880] And in fact, there's a study, a Danish study, showing that metal detecting has been therapeutic for people with PTSD. And I can totally understand why; it's a totally absorbing hobby.
513
+ [3306.880 --> 3318.880] So how do you achieve happiness, okay, I'm taking you through the seven lessons and let me just kind of summarize what I'm going to say. It's really about recognizing that there are different forms of happiness.
514
+ [3318.880 --> 3327.880] You can have hedonism, you know, we're going to be seeing a lot of that in the next couple of days at Glastonbury, and I know some people are going there, you know, spotted you guys.
515
+ [3327.880 --> 3337.880] And that will be an opportunity to have intense emotional experiences, and that's all very well and fine. But it's very fleeting, okay.
516
+ [3337.880 --> 3347.880] That was Aristippus. Aristotle, he was a much more sort of bookish type of person. He said, no, flourishing, eudaimonia, this is what you should be doing. You should be enriching the lives of others.
517
+ [3347.880 --> 3356.880] You should be living a morally good life, and this will convey back upon you all the benefits of making other people happy.
518
+ [3356.880 --> 3370.880] I think it can be both, okay. I think there's something to make yourself happy, but there's also something to be said for making others happy. It's getting the balance right. If you are so self-centered, then others must suffer as a consequence. There's a limit, okay.
519
+ [3370.880 --> 3383.880] But if you are so other focused, there's a danger of becoming overly empathic to their problems, and losing your own sense of purpose and self-respect. So you've got to find this balance. And that, I think, is the secret to happiness.
520
+ [3383.880 --> 3397.880] And again, ChatGPT, and now you can see the failures of AI. It can't spell selfish. When I said, write "selfish", it had some weird thing, and it doesn't even connect up properly. So I'm not worried about my job being about to be taken away.
521
+ [3397.880 --> 3410.880] But this is the balance that we should seek, okay. Now, I just want to just point out something that maybe you haven't appreciated. And something I've discovered as I've been teaching this for six years.
522
+ [3410.880 --> 3422.880] That the happiness that you derive from your own pleasure, as it were, is inauthentic. You could do retail therapy. You could decide, I'm going to go out and treat myself today.
523
+ [3422.880 --> 3437.880] But the trouble is that level of joy and happiness doesn't last very long. It's inauthentic. Because if you are the instigator, purveyor, and recipient of happiness, it's never a surprise. You know when you're bored of it, and then you move on.
524
+ [3437.880 --> 3452.880] On the other hand, if you direct your efforts to making lots of other people happy, it's authentic because they're not expecting it. And you don't know how they're receiving it, but you can imagine that they're having a great time, and you never know when they're tired of your efforts.
525
+ [3452.880 --> 3464.880] And you know what, after my course is over, I often have students writing to me to express, I think, really profound sentiments about how this course has changed them.
526
+ [3464.880 --> 3477.880] And I think this is part of the issue, that when you force yourselves to try and be more allocentric, there are these unforeseen consequences, which are generally really positive. And that's the beauty of it.
527
+ [3477.880 --> 3488.880] I'm always continually surprised by the little gestures and the messages of love, for want of a better word, that I'm receiving from my students. And nothing can be more gratifying than that.
528
+ [3488.880 --> 3504.880] Last year, I was out with my metal detector, and it was on my birthday, which made it more auspicious. And I discovered this Roman coin, which is probably not surprising because I live very close to Bath.
529
+ [3505.880 --> 3514.880] And then I looked and discovered that it was none other than Marcus Aurelius. Now, those of you who enjoyed Gladiator, he was the one played by Richard Harris.
530
+ [3514.880 --> 3523.880] Marcus Aurelius was the last of the five good emperors, okay. And he was a Stoic philosopher as well.
531
+ [3523.880 --> 3535.880] Famously, he used to be followed around by an assistant, and if anyone should bow to him or be obsequious to him, the assistant had to whisper into the emperor's ear:
532
+ [3535.880 --> 3549.880] You're just a man. You're just a man. And as for happiness, he said, look, the happiness of your life depends on the quality of your thoughts.
533
+ [3550.880 --> 3561.880] And I held that coin in my hand in that field in Somerset, and I looked at it and I thought, what a lucky guy I am.
534
+ [3562.880 --> 3570.880] And these are the teams I work with, okay. I can't take all the credit. This is the team who delivered the Science of Happiness at the University of Bristol.
535
+ [3570.880 --> 3578.880] And I'm truly blessed to be working with such an amazing team. And I know I'm 30 seconds too short, but that's my hour for you.
536
+ [3579.880 --> 3586.880] Thank you.
transcript/allocentric_OOXcH9dJsWA.txt ADDED
@@ -0,0 +1,465 @@
1
+ [0.000 --> 21.000] Scientists are an extraordinary species.
2
+ [21.000 --> 25.680] They are not only curious, as Michael mentioned, but they can be mobilized for the right social
3
+ [25.680 --> 26.680] cause as well.
4
+ [26.680 --> 30.240] I think this event is a proof of that.
5
+ [30.240 --> 32.400] It's not a new phenomenon, of course.
6
+ [32.400 --> 37.640] It was the same thing many years ago when we needed to mobilize people from all over
7
+ [37.640 --> 38.640] the world.
8
+ [38.640 --> 42.080] This is an event that was the first meeting I ever organized in my life.
9
+ [42.080 --> 53.040] And there are some dignitaries there that came to us behind the Iron Curtain and brought
10
+ [53.040 --> 58.920] us very important knowledge, not only about scientific issues, but about how the rest of
11
+ [58.920 --> 60.760] the world is operating.
12
+ [60.760 --> 67.080] That meant a lot to me that John O'Keefe visited us several times and later on, of course,
13
+ [67.080 --> 69.240] other people also came.
14
+ [69.240 --> 76.000] So you can see from my eyes and the way I behave that I move my hands and do a lot of things,
15
+ [76.000 --> 80.040] but in fact, I can stay here completely immobile.
16
+ [80.040 --> 84.640] I can close my eyes and I can give this talk from my memory.
17
+ [84.640 --> 87.040] There is no need to move.
18
+ [87.040 --> 93.600] And the reason why I can do that is because there are assemblies somewhere in my brain
19
+ [93.600 --> 96.880] that were initiated by this microphone.
20
+ [96.880 --> 101.960] And the moment it happened, that particular information that was carried by this assembly
21
+ [101.960 --> 105.960] is given to another one, which in turn gives the information to another one and another
22
+ [106.040 --> 107.040] one.
23
+ [107.040 --> 114.000] And if I had the opportunity to speak here for 10 hours, I could do it without moving around.
24
+ [114.000 --> 120.000] And the interesting thing, of course, is that the same machinery, the same structure,
25
+ [120.000 --> 125.560] the hippocampus and the entorhinal cortex, is at work while we are giving talks like this.
26
+ [125.560 --> 127.120] So how does it work?
27
+ [127.120 --> 132.360] There is a deep relationship between navigation and memory.
28
+ [132.360 --> 137.680] But these are the two types of navigation that John mentioned in his introductory part.
29
+ [137.680 --> 139.840] He called them differently.
30
+ [139.840 --> 144.160] But basically, this is what we can call the dead reckoning type of navigation.
31
+ [144.160 --> 148.280] This is the kind of navigation that Christopher Columbus used.
32
+ [148.280 --> 154.360] You have to remember how many miles, nautical miles, and what kind of angles you make.
33
+ [154.360 --> 159.440] And once you are done, then you can calculate the vector how to come back.
34
+ [159.440 --> 162.960] Now this is a special way of navigating because it all depends on me.
35
+ [162.960 --> 165.120] All the information has to be in me.
36
+ [165.120 --> 168.360] In fact, I can close my eyes and I can move around.
37
+ [168.360 --> 173.080] And I come back to the same podium from what we call idiothetic memory, from my body
38
+ [173.080 --> 175.040] or from my mind.
39
+ [175.040 --> 181.440] Now once you navigate the world and you explore every single part of the environment,
40
+ [181.440 --> 182.840] then you can make a map.
41
+ [182.840 --> 188.480] In fact, I can do the same thing when I move around in a completely dark room here, I come
42
+ [188.480 --> 193.040] to Mike from one angle to another angle and Mike is an object.
43
+ [193.040 --> 195.400] And it's a node in a network.
44
+ [195.400 --> 200.240] And the more I wander around, the more nodes I have, and we will have a graph.
45
+ [200.240 --> 205.680] Once we have a graph, then we can have this extremely flexible map that John mentioned
46
+ [205.680 --> 212.480] that is not only that I visited 96 parts, but I can go from anywhere to anywhere else.
47
+ [212.480 --> 216.920] So the difference between the two, that this is a map that I can give to anybody, it's
48
+ [216.920 --> 217.920] allocentric.
49
+ [217.920 --> 218.920] It's independent of me.
50
+ [218.920 --> 222.360] It's not my collection of information.
51
+ [222.360 --> 227.320] It is, I think, you who gave the names egocentric and allocentric.
52
+ [227.320 --> 229.840] This is the allocentric map.
53
+ [229.840 --> 236.680] The interesting thing, of course, is that this mechanism, this machinery that has been
54
+ [236.680 --> 244.440] worked out by nature, initially, for navigating space, can change a little bit and can be internalized
55
+ [244.440 --> 247.800] in such a way that you no longer need cues from the environment.
56
+ [247.800 --> 254.040] I no longer need any cues from you or from my computer; I can keep going with my thoughts
57
+ [254.040 --> 257.520] because I can completely disengage from the environment.
58
+ [257.520 --> 264.800] So the good news is that on the other side of the equation, this machinery is working
59
+ [264.800 --> 267.120] also in two different ways.
60
+ [267.120 --> 271.760] One is what we call egocentric or personal memories.
61
+ [271.760 --> 275.120] These are the precious memories that we all collected in our lifetimes.
62
+ [275.120 --> 280.680] What makes the difference between you and somebody else is the memories that we own, that we collected
63
+ [280.680 --> 281.680] all of it.
64
+ [281.680 --> 290.040] This is a special, egocentric kind of memory.
65
+ [290.040 --> 296.360] Now just like the way how we make maps, if I have one experience and I have the same experience
66
+ [296.360 --> 301.080] over and over again, for example, when I met the first dog in my life, that was a personal
67
+ [301.080 --> 302.080] experience.
68
+ [302.080 --> 307.960] But when I met many, many, many dogs, then the specific spatiotemporal conditions of
69
+ [307.960 --> 311.720] those things that led me to recognize a dog are irrelevant.
70
+ [311.720 --> 314.000] We just have the idea of a dog.
71
+ [314.000 --> 315.000] This is abstract in nature.
72
+ [315.000 --> 317.600] It's an abstract thing just like the map.
73
+ [317.600 --> 319.640] This is called semantic information.
74
+ [319.640 --> 324.720] In order to get semantic information in, under regular and simple conditions, we have to
75
+ [324.720 --> 326.280] go through this process.
76
+ [326.280 --> 333.280] So there is a nice one to one relationship between the machinery that was initially prepared
77
+ [333.280 --> 337.160] with the help of the external environment to navigate in space.
78
+ [337.160 --> 341.200] Now what we do is we navigate mentally.
79
+ [341.200 --> 344.480] We can travel back to the past, we call it memory.
80
+ [344.480 --> 349.880] We can travel forward into the future and we can call it planning.
81
+ [349.880 --> 357.440] So how do we get closer to these ideas, which were written up, with that one,
82
+ [357.440 --> 362.080] more than a couple of years ago? We do experiments, but first we need a good hypothesis.
83
+ [362.080 --> 367.360] The good hypothesis usually comes from theories, and the good theories are laid down in books.
84
+ [367.360 --> 372.520] The best book in this business is O'Keefe and Nadel, 1978.
85
+ [372.520 --> 378.640] The idea was back then and it still is that the reason why place cell one and place cell
86
+ [378.640 --> 383.600] two and place cell three are coming one after the other is because somehow the environmental
87
+ [383.600 --> 386.960] constellations make them fire.
88
+ [386.960 --> 391.240] So this is a way to generate sequences that is when we wander around that different neurons
89
+ [391.240 --> 397.720] will be active one after the other as we have seen in three talks already.
90
+ [397.720 --> 403.560] There is another way of doing sequences which I call internally generated sequences,
91
+ [403.560 --> 409.040] meaning that there is a self-organized system that once it has an initial condition it just
92
+ [409.040 --> 410.280] can't help.
93
+ [410.280 --> 412.640] It keeps generating sequences forever.
94
+ [412.640 --> 419.280] I can't help it: when you ask me a question, it lingers in my mind and it keeps generating
95
+ [419.280 --> 422.400] one assembly after the other and without any issues.
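
A minimal sketch of what an internally generated sequence means computationally: a network whose asymmetric connections hand activity from one cell assembly to the next, so that a single initial condition yields a whole sequence with no further input. The directed-ring wiring and winner-take-all update below are illustrative only, not a model of the hippocampal circuit.

    import numpy as np

    n_units, n_steps = 20, 40
    W = np.zeros((n_units, n_units))
    for i in range(n_units):
        W[(i + 1) % n_units, i] = 1.0   # unit i excites unit i + 1: a directed ring

    x = np.zeros(n_units)
    x[0] = 1.0                          # the initial condition (the "cue")
    for t in range(n_steps):
        drive = W @ x
        x = (drive == drive.max()).astype(float)   # one assembly active at a time
        print(t, int(np.argmax(x)))     # the active assembly advances: 1, 2, 3, ...
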
96
+ [422.400 --> 423.680] So how do we test it?
97
+ [423.680 --> 427.640] The good thing is I said let's test John's idea.
98
+ [427.640 --> 436.200] Which means that in theory we can freeze a rodent, a rat, here and now but by some magic
99
+ [436.200 --> 439.480] we maintain the hippocampal theta oscillation system.
100
+ [439.480 --> 446.000] The prediction of the theory is that the here and now will be carried by a subset of the
101
+ [446.000 --> 450.960] hippocampal cells and as long as the animal is in this particular place in the world that
102
+ [450.960 --> 455.480] those subset of cells should fire forever because the here and now is determined by those
103
+ [455.480 --> 456.800] subset of cells.
104
+ [456.800 --> 459.600] So we can do an approximate experiment for that.
105
+ [459.600 --> 464.720] We train an animal in a hippocampal dependent task which is a spontaneous alternation task
106
+ [464.720 --> 469.680] which means that the animal can choose here. If it's rewarded on the right with water,
107
+ [469.680 --> 470.680] he has to remember.
108
+ [470.680 --> 473.480] Ah, I collected water from the right.
109
+ [473.480 --> 477.040] Next time there's no point going there; I have to go in the opposite direction.
110
+ [477.040 --> 480.520] I have to alternate one or the other.
111
+ [480.520 --> 486.640] The only thing that we did here, this is an old task, is that we asked the animal to
112
+ [486.640 --> 493.000] run in a running wheel, facing always the same direction and running approximately
113
+ [493.000 --> 494.520] with the same speed.
114
+ [494.520 --> 499.320] So we have done everything in our power to make sure that the information from the world
115
+ [499.320 --> 500.640] is constant.
116
+ [500.640 --> 504.520] The information from the body, what we call the idiothetic information that comes from running,
117
+ [504.520 --> 506.040] is also constant.
118
+ [506.040 --> 512.600] So the prediction from the map theory is that when we find a cell that happens
119
+ [512.600 --> 515.840] to fire, that should fire forever.
120
+ [515.840 --> 524.000] And we should find a lot of cells that don't fire whatsoever, because they don't define
121
+ [524.000 --> 528.680] or decide about the x-y coordinates of this map here.
122
+ [528.680 --> 534.680] So I'm going to show you one neuron that happens to be in this situation, and this cracking
123
+ [534.680 --> 539.080] sound, which everybody has played before me, means that a single neuron fires.
124
+ [539.080 --> 543.400] You can see that there was some firing and then you can see that the animal went to the
125
+ [543.400 --> 544.400] left.
126
+ [544.400 --> 550.240] Now, for 15 seconds; this is our criterion, that we have to make the animal run for 15 seconds.
127
+ [550.240 --> 553.680] There was no firing whatsoever, the animal goes to the right.
128
+ [553.680 --> 562.720] Comes back, fires it will go to the left.
129
+ [562.720 --> 569.320] No firing.
130
+ [569.320 --> 572.920] You can make a prediction, the animal will go to the right.
131
+ [572.920 --> 576.920] Now there is good firing.
132
+ [576.920 --> 584.160] We have fast learners, so we already know that the animal has to go to the right.
133
+ [584.160 --> 588.240] Oh, left.
134
+ [588.240 --> 590.080] Now what should happen now?
135
+ [590.080 --> 594.080] No firing.
136
+ [594.080 --> 596.680] I take votes.
137
+ [596.680 --> 600.480] The animal will go to the right.
138
+ [600.480 --> 606.320] But sometimes we make mistakes and animals also make mistakes.
139
+ [606.320 --> 609.520] And the question is what makes up the mind of the animal?
140
+ [609.520 --> 612.360] Is it the mind itself or these neurons?
141
+ [612.360 --> 615.360] And so this animal has been running for a while, and you could hear there was an action
142
+ [615.360 --> 616.360] potential.
143
+ [616.360 --> 619.160] Oops, there are two more.
144
+ [619.160 --> 622.520] Now we are hesitant, because we are not sure whether it will go to the left or the
145
+ [622.520 --> 623.520] right.
146
+ [623.520 --> 626.080] The cell tells you it has to go to the right.
147
+ [626.080 --> 631.280] But the behavior that we observe from outside tells us the opposite.
148
+ [631.280 --> 634.600] You can see that the animal's behavior follows the neuron.
149
+ [634.600 --> 635.960] So this is good.
150
+ [635.960 --> 638.360] What you have seen here is two things.
151
+ [638.360 --> 641.920] One is that this neuron was active for a short period of time.
152
+ [641.920 --> 647.520] It didn't obey the rule that you are here, therefore this neuron should fire forever.
153
+ [647.520 --> 650.720] It just had a very short lifetime.
154
+ [650.720 --> 655.200] Now we can have many neurons recorded just like this.
155
+ [655.200 --> 659.680] This neuron is not very useful for anything interesting.
156
+ [659.680 --> 661.080] Why is that?
157
+ [661.080 --> 664.920] Because this neuron fired only for about two seconds, and the animal has to remember at
158
+ [664.920 --> 668.280] least for another 13 seconds that it has to make the correct turn.
159
+ [668.280 --> 670.320] So it has to have a partner.
160
+ [670.320 --> 674.000] So our neuron is somewhere here and it fired.
161
+ [674.000 --> 677.880] But the same neuron, on the opposite runs, when the initial condition was different,
162
+ [677.880 --> 679.040] didn't fire.
163
+ [679.040 --> 684.520] But it could give the information to another neuron, another neuron, another neuron.
164
+ [684.520 --> 686.800] So this trajectory is nothing else
165
+ [686.800 --> 695.040] but the activity in this n-dimensional space, in what we call the CA3 system that Edvard already
166
+ [695.040 --> 698.920] has shown, and it travels very nicely in one direction.
167
+ [698.920 --> 703.200] The next time, the initial condition is different, and the travel of this trajectory is
168
+ [703.200 --> 704.200] uniquely different.
169
+ [704.200 --> 710.800] And if we have about 65,000 memories, then there are 65,000 unique assembly sequences.
170
+ [710.800 --> 716.040] So now when you have enough number of neurons you can see that the animal is not getting
171
+ [716.040 --> 717.040] anywhere.
172
+ [717.040 --> 722.280] It's at the same spot, yet many, many, many neurons fire along the journey, and the entire
173
+ [722.280 --> 727.800] memory journey is tiled by some of these cells, and they are uniquely different.
174
+ [727.800 --> 732.600] It's enough to take a very short slice of time, and every single time I, the experimenter,
175
+ [732.600 --> 741.480] can make a good prediction of whether the animal will go to the right or left 15 seconds later,
176
+ [741.480 --> 743.480] including errors.
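
A minimal sketch of the kind of prediction being described: train a decoder on a short slice of delay-period firing rates and predict the upcoming left or right choice. The data below are simulated stand-ins (assuming numpy and scikit-learn are available); in the experiment the rates come from the recorded hippocampal cells and the labels from the observed turns.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_trials, n_cells = 200, 50
    choice = rng.integers(0, 2, size=n_trials)        # 0 = left, 1 = right
    tuning = rng.normal(0.0, 1.0, size=n_cells)       # per-cell choice preference
    rates = rng.poisson(5.0, size=(n_trials, n_cells)).astype(float)
    rates += 3.0 * np.outer(choice, np.clip(tuning, 0.0, None))

    model = LogisticRegression(max_iter=1000).fit(rates[:150], choice[:150])
    print("held-out accuracy:", model.score(rates[150:], choice[150:]))
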
177
+ [743.480 --> 752.680] So this is the power of the system that shows that indeed it's not only about guided information
178
+ [752.680 --> 755.800] from the outside world but it can be guided from inside also.
179
+ [755.800 --> 757.200] This is a rodent.
180
+ [757.200 --> 761.160] Every single time we showed something like this, the psychologists and the cognitive scientists,
181
+ [761.160 --> 766.360] the kind you have here, said, it's not kosher enough.
182
+ [766.360 --> 770.840] The reason for that is because the episodic memory has a test.
183
+ [770.840 --> 773.640] The test is in the eating.
184
+ [773.640 --> 775.160] How does the saying go?
185
+ [775.160 --> 779.200] The proof of the pudding is in the eating.
186
+ [779.200 --> 784.680] Here the test of course is taking the same experiment to the human level.
187
+ [784.680 --> 790.720] I did a mini-sabbatical at the Hebrew University, and I met a nice neurosurgeon, my
188
+ [790.720 --> 793.720] friend, whom I respect a lot, Itzhak Fried.
189
+ [793.720 --> 797.720] And he is doing pretty much the same kind of experiments in humans that I do in rodents or
190
+ [797.720 --> 798.720] we do in rodents.
191
+ [798.720 --> 803.560] He just puts electrodes in epileptic patients in the hippocampus and he has the opportunity
192
+ [803.560 --> 810.760] to record from those neurons and he can do things that none of us can do which is asking
193
+ [810.760 --> 813.720] questions from the patients.
194
+ [813.720 --> 822.960] So once you can do that... The problem we have here is we can't ask the animal whether
195
+ [822.960 --> 824.720] it has a spontaneous recall.
196
+ [824.720 --> 831.440] When you have a spontaneous recall, like when I remember the first time I saw John, there
197
+ [831.440 --> 833.880] is no cue; it just happens.
198
+ [833.880 --> 837.680] So this is the test we cannot do in rodents so we have to do it in humans.
199
+ [837.680 --> 845.680] So here is a nice movie that was recorded by Itzhak Fried's group.
200
+ [845.680 --> 850.560] What happens here is that there is only one neuron, not so many as we have, but they can
201
+ [850.560 --> 854.720] ask questions in the form of movie clips.
202
+ [854.720 --> 857.160] And so there is a trajectory that goes in this direction.
203
+ [857.160 --> 862.040] Here is a cell that can be part of many trajectories, but not every one of them.
204
+ [862.040 --> 866.040] You can expect that at least there should be some unique ones.
205
+ [866.040 --> 872.280] You can see that immediately there was robust firing in this area.
206
+ [872.280 --> 874.520] The other movies don't do anything interesting.
207
+ [874.520 --> 878.600] No, I'm not pregnant with your sweatshirts, your waist, your lower leg, the later.
208
+ [878.600 --> 885.200] Now it's silent; there is not much activity, but you can really pick it up and see it.
209
+ [886.200 --> 892.200] This is a Tom Cruise cell that is activated by this particular feature.
210
+ [897.200 --> 901.760] It is not particularly different from what you can do in the laboratory in the rodent.
211
+ [901.760 --> 904.440] This part of the experiment is distinct.
212
+ [904.440 --> 909.840] A few minutes later, the experimenter asked, what did you see?
213
+ [909.840 --> 912.120] So this is spontaneous recall.
214
+ [912.120 --> 916.120] There is no external environment.
215
+ [916.120 --> 921.400] And you can see that there is no activity whatsoever here even though one after the other
216
+ [921.400 --> 924.920] the items or the movie clips were recalled verbally.
217
+ [924.920 --> 927.080] And this is a very intelligent audience here.
218
+ [927.080 --> 930.920] You can already tell me what is going to happen here.
219
+ [930.920 --> 936.040] What is going to happen here is not exactly what you think, because first the neuron will
220
+ [936.040 --> 943.040] fire, and about 200 milliseconds later the person will say the right answer.
221
+ [943.040 --> 949.040] So what I have shown here is that the activity initially goes from the outside world all
222
+ [949.040 --> 952.600] the way through at least five synapses to the hippocampus.
223
+ [952.600 --> 957.920] And when you spontaneously recall, the activity starts in the hippocampus and goes up and reconstructs,
224
+ [957.920 --> 959.440] just the way it has been said.
225
+ [959.440 --> 963.560] Memory is a reconstruction, and it is shown very nicely here.
226
+ [963.560 --> 970.560] So this is the kind of thing that the hippocampus can do, both during navigation and memory recall,
227
+ [970.560 --> 973.240] when the animal is awake and attending.
228
+ [973.240 --> 980.320] But those memories are not like a digital camera taking a picture.
229
+ [980.320 --> 986.720] In fact, it takes time to consolidate that information, and that happens after the initial experience.
230
+ [986.720 --> 991.520] The hippocampus, like many other structures in the brain, shows at least two different
231
+ [991.520 --> 992.520] states.
232
+ [992.520 --> 995.840] One is what we call the active state, or preparatory state.
233
+ [995.840 --> 997.680] The other is the consummatory state.
234
+ [997.680 --> 1002.320] And when we record from the hippocampus, you can very beautifully see that there are two
235
+ [1002.320 --> 1003.320] distinct patterns.
236
+ [1003.320 --> 1006.880] Anybody in this room can tell me that this is different from this one.
237
+ [1006.880 --> 1012.000] And if you look at the electrical activity of the hippocampus we can tell precisely what
238
+ [1012.000 --> 1013.000] the animal does.
239
+ [1013.000 --> 1015.320] We can't tell where the animal is.
240
+ [1015.320 --> 1018.400] That's what's left for the grid cells and the place cells.
241
+ [1018.400 --> 1022.120] But we can tell exactly what the animal does.
242
+ [1022.120 --> 1027.960] So I called these sharp wave patterns, but if we zoom in a little bit, then you can see
243
+ [1027.960 --> 1031.600] a little wiggling of the trace here.
244
+ [1031.600 --> 1033.800] They're called ripples.
245
+ [1033.800 --> 1035.160] Baptized by John O'Keefe.
246
+ [1035.160 --> 1040.000] So now I combine the two and say these are sharp wave ripples because they have two different
247
+ [1040.000 --> 1042.520] mechanisms but they are useful for something.
248
+ [1042.520 --> 1046.760] And I was convinced very early on, when I first saw it, that this is an extraordinary pattern, for
249
+ [1046.760 --> 1049.360] the following reason.
250
+ [1049.360 --> 1055.800] That this is the most synchronous pattern in the mammalian brain.
251
+ [1055.800 --> 1060.240] It starts from here, the CA3 region; it goes up to the entorhinal cortex, and
252
+ [1060.240 --> 1064.200] it broadcasts from the entorhinal cortex to the entire neocortex.
253
+ [1064.200 --> 1066.720] So what can it do?
254
+ [1067.720 --> 1071.680] Well, I have my own movie.
255
+ [1079.920 --> 1086.120] They are both going to sleep now and that means their brains are going to work.
256
+ [1086.120 --> 1091.840] Because when mammals sleep our brains are active consolidating our memories.
257
+ [1091.840 --> 1096.720] If memories are not consolidated, Smela and her hamster will not recognize each other
258
+ [1096.720 --> 1100.920] when they wake up.
259
+ [1100.920 --> 1101.920] This is György Buzsáki.
260
+ [1101.920 --> 1102.920] I'm sorry for the...
261
+ [1102.920 --> 1105.920] I didn't want to bring this in 2000.
262
+ [1105.920 --> 1106.920] I couldn't get it out.
263
+ [1106.920 --> 1107.920] I couldn't get it out.
264
+ [1107.920 --> 1111.800] To the discovery of how this memory consolidation works.
265
+ [1111.800 --> 1118.120] Dr. Buzsáki is a professor of neuroscience at the New York University School of Medicine.
266
+ [1118.120 --> 1123.080] In his laboratories, scientists study how the internal structures of the brains of mammals
267
+ [1123.080 --> 1133.200] communicate and how daytime impressions are secured as memories while we are sound asleep.
268
+ [1133.200 --> 1139.120] In the late 80s, György Buzsáki discovered an important element of the memory process.
269
+ [1139.120 --> 1141.120] It takes place in the middle of the brain.
270
+ [1141.120 --> 1143.120] The hippocampus, a name like this, means...
271
+ [1143.120 --> 1147.320] The entire army of Adesia, this or so.
272
+ [1147.320 --> 1152.320] Even though this structure looks different from species to species, there is one in every mammal.
273
+ [1152.320 --> 1156.320] And it keeps track of our daily memories.
274
+ [1156.320 --> 1163.320] When Buzsáki made his discovery, he was listening to neurons firing in the hippocampus of a sleeping rodent.
275
+ [1163.320 --> 1168.320] I was listening to the loudspeaker and how neurons come together and work together very powerfully.
276
+ [1168.320 --> 1170.320] That was an astonishing pattern for me.
277
+ [1170.320 --> 1175.320] And then I began to explore in depth what this could be.
278
+ [1175.320 --> 1179.320] It had been known for some time that sleep is important for memory.
279
+ [1179.320 --> 1182.320] But it was not known how. A bit like Beethoven's Fifth:
280
+ [1182.320 --> 1183.320] you hear it once,
281
+ [1183.320 --> 1185.320] it's so striking, and you remember it forever.
282
+ [1185.320 --> 1189.320] During sleep, the hippocampus instructs the brain about what to remember
283
+ [1189.320 --> 1194.320] and does so in bursts of time-compressed information of what we learn during daytime.
284
+ [1194.320 --> 1198.320] Shuffled and played back in fast forward and fast reverse.
285
+ [1198.320 --> 1204.320] They are replayed, so to speak, again and again and again in small fragments.
286
+ [1204.320 --> 1207.320] And this is the fragment that we identified.
287
+ [1207.320 --> 1211.320] This is called the hippocampal sharp wave ripple.
288
+ [1211.320 --> 1215.320] This is a pattern that lasts for about 100 milliseconds or so.
289
+ [1215.320 --> 1219.320] You have about 2,000 of those patterns every single night.
290
+ [1219.320 --> 1224.320] If I erase those patterns from your brain, you will not remember this interview tomorrow.
291
+ [1224.320 --> 1228.320] Same thing, if I erase your suffering, you will become your brain.
292
+ [1228.320 --> 1230.320] You won't remember anything wrong.
293
+ [1230.320 --> 1233.320] It's just a story. Maybe a story or a story.
294
+ [1233.320 --> 1237.320] Today, the sharp wave ripples are widely acknowledged in neuroscience
295
+ [1237.320 --> 1241.320] and a part of the two-stage model of memory.
296
+ [1241.320 --> 1245.320] This model explains why we must sleep to remember well.
297
+ [1245.320 --> 1249.320] And it predicts that we might one day be able to improve our memories,
298
+ [1249.320 --> 1255.320] learn faster and alleviate the negative effects of brain diseases.
299
+ [1255.320 --> 1258.320] So how does this memory consolidation work?
300
+ [1258.320 --> 1262.320] And how might it at some point make us learn better and remember more?
301
+ [1262.320 --> 1268.320] The answer is in the brain and in the laboratory.
302
+ [1268.320 --> 1269.320] Hi David.
303
+ [1269.320 --> 1271.320] Are we in the hippocampus yet?
304
+ [1271.320 --> 1278.320] Humans, rodents, all mammals seem to consolidate memories in the same way.
305
+ [1281.320 --> 1287.320] While awake, the brain processes information and stores it transiently in the hippocampus,
306
+ [1287.320 --> 1290.320] exactly where Buzsáki made his discovery.
307
+ [1290.320 --> 1296.320] And that makes sense because the hippocampus is now known as the work desk of our memory.
308
+ [1296.320 --> 1298.320] What happens during stage one?
309
+ [1298.320 --> 1301.320] You are listening to me, we are having a conversation,
310
+ [1301.320 --> 1307.320] and fragments of this conversation are detected and stored transiently,
311
+ [1307.320 --> 1309.320] mostly in the hippocampus.
312
+ [1309.320 --> 1316.320] Neural information about sounds, feelings, smells, visions, places, persons,
313
+ [1316.320 --> 1321.320] or whatever we experience is stored here during the first part of our two-stage memory.
314
+ [1321.320 --> 1327.320] But just like a work desk is limited in size, so is the hippocampus.
315
+ [1327.320 --> 1333.320] The brain needs a library for memories, a safe place where it can consolidate and protect information,
316
+ [1333.320 --> 1336.320] so we can retrieve it again.
317
+ [1336.320 --> 1340.320] Where are the bookshelves in the brain?
318
+ [1340.320 --> 1344.320] Right here in the neocortex.
319
+ [1344.320 --> 1349.320] The large areas in the outer brain where the information was originally processed.
320
+ [1349.320 --> 1356.320] These are the areas of the brain that receive the nightly flashes of compressed memories.
321
+ [1356.320 --> 1362.320] What Buzsáki saw decades ago was the hippocampus writing memories on the neocortex,
322
+ [1362.320 --> 1369.320] organizing memories like a library and returning books to the shelves after use.
323
+ [1369.320 --> 1372.320] This is the second part of the two-stage memory model,
324
+ [1372.320 --> 1375.320] and it rounds up the role of the hippocampus in the brain.
325
+ [1375.320 --> 1377.320] The hippocampus is,
326
+ [1377.320 --> 1381.320] if you want, an appendage to the large neocortex.
327
+ [1381.320 --> 1385.320] Its inputs are coming from the neocortex, and its outputs are going back to the neocortex.
328
+ [1385.320 --> 1387.320] So this structure cannot do a lot of things.
329
+ [1387.320 --> 1393.320] The only thing that it can do reasonably well is modify its inputs and organize its inputs.
330
+ [1393.320 --> 1396.320] And this is exactly what the hippocampus is about:
331
+ [1396.320 --> 1402.320] organizing the different information that is stored in different parts of the neocortex.
332
+ [1402.320 --> 1406.320] This process of organizing memories goes on night after night,
333
+ [1406.320 --> 1411.320] and it turns out if the nightly flashes of information from the hippocampus are disturbed,
334
+ [1411.320 --> 1413.320] so are the memories.
335
+ [1413.320 --> 1416.320] Memories can be manipulated during sleep.
336
+ [1416.320 --> 1420.320] We can erase memories, we can make memories disappear,
337
+ [1420.320 --> 1423.320] we can tamper with memory pretty easily.
338
+ [1423.320 --> 1426.320] The interesting thing would be, of course, how to improve them.
339
+ [1426.320 --> 1428.320] Well, you're... without the insidious...
340
+ [1428.320 --> 1430.320] This is an important challenge,
341
+ [1430.320 --> 1434.320] because many diseases like epilepsy or Alzheimer's disease,
342
+ [1434.320 --> 1439.320] involve disturbances in the communication between the hippocampus and the neocortex.
343
+ [1439.320 --> 1441.320] And that creates memory problems,
344
+ [1441.320 --> 1443.320] as it happened for Richard Shane,
345
+ [1443.320 --> 1448.320] who had epilepsy before he was finally cured with brain surgery.
346
+ [1448.320 --> 1451.320] For 22 years, I had roughly...
347
+ [1451.320 --> 1454.320] I'm not sure how many I had when I was sleeping, but I had...
348
+ [1454.320 --> 1457.320] 3,000 seizures.
349
+ [1457.320 --> 1460.320] Could the memories of patients with epilepsy be helped?
350
+ [1460.320 --> 1465.320] If the communication between the hippocampus and the neocortex is brought back to normal?
351
+ [1465.320 --> 1470.320] Every single animal model of Alzheimer's disease and autism,
352
+ [1470.320 --> 1474.320] comes with a distorted form of sharp wave ripples.
353
+ [1474.320 --> 1477.320] So we know that that pattern is impaired.
354
+ [1477.320 --> 1483.320] So another question is, how we can change the balance between the abnormal patterns
355
+ [1483.320 --> 1487.320] and the normal, good sharp wave ripple patterns?
356
+ [1487.320 --> 1490.320] Can we restore them by any means?
357
+ [1490.320 --> 1495.320] Improving the memory process is a huge challenge for neuroscience,
358
+ [1495.320 --> 1501.320] but it is one of the promising perspectives of the work in leading laboratories.
359
+ [1501.320 --> 1505.320] That one day, we might succeed in this challenge.
360
+ [1505.320 --> 1509.320] One day, it might be possible to secure for more people
361
+ [1509.320 --> 1513.320] the consolidated memory of a good night's sleep.
362
+ [1515.320 --> 1519.320] So, sorry for the propaganda part,
363
+ [1519.320 --> 1523.320] but this whole story predicts at least two things:
364
+ [1523.320 --> 1527.320] that if the substrate is the same for making maps and making memories,
365
+ [1527.320 --> 1531.320] then erasing these sharp waves should have an impact on both.
366
+ [1531.320 --> 1535.320] So the first experiment is a simple one.
367
+ [1535.320 --> 1540.320] Erase every possible sharp wave ripple during sleep.
368
+ [1540.320 --> 1543.320] And we can do it in a rodent, in a complicated way.
369
+ [1543.320 --> 1547.320] It doesn't really matter how; the goal here is that the animal learns
370
+ [1547.320 --> 1551.320] that there are three arms in this multiple arm maze,
371
+ [1551.320 --> 1553.320] and they have to find food there.
372
+ [1553.320 --> 1556.320] After every single learning session, we put the animal back to the home cage,
373
+ [1556.320 --> 1562.320] and then erase every single ripple where this replay occurs.
374
+ [1562.320 --> 1566.320] The sleep itself is not affected at all.
375
+ [1566.320 --> 1568.320] All you have is a sleep without sharp-wave ripples.
376
+ [1568.320 --> 1572.320] The animal wakes up and tries to learn again, and it learns again and again.
377
+ [1572.320 --> 1575.320] And it turns out that without sharp-wave ripples,
378
+ [1575.320 --> 1581.320] the performance of this animal over days is as bad as lesioning the entire hippocampus.
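As a rough illustration of how sharp-wave-ripple detection is commonly implemented for this kind of manipulation (band-pass the LFP around 150-250 Hz and threshold the envelope), here is a minimal Python sketch. The sampling rate, filter band, threshold, and duration criterion are assumed illustrative values, not the parameters of the experiment described in the talk.

```python
# Minimal, illustrative sharp-wave-ripple detector (offline version).
# Real closed-loop "erasure" additionally needs causal filtering and an
# intervention (e.g., stimulation) triggered within milliseconds.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 1250.0  # LFP sampling rate in Hz (assumed)

def detect_ripples(lfp, band=(150.0, 250.0), n_sd=5.0, min_dur=0.015):
    """Return (start, stop) times in seconds of candidate ripple events."""
    b, a = butter(3, [band[0] / (FS / 2), band[1] / (FS / 2)], btype="band")
    ripple_band = filtfilt(b, a, lfp)            # ripple-band component
    envelope = np.abs(hilbert(ripple_band))      # instantaneous amplitude
    thresh = envelope.mean() + n_sd * envelope.std()
    above = envelope > thresh
    edges = np.flatnonzero(np.diff(above.astype(int)) != 0) + 1
    if above[0]:
        edges = np.r_[0, edges]                  # event already in progress
    if above[-1]:
        edges = np.r_[edges, above.size]         # event runs to end of trace
    starts, stops = edges[0::2] / FS, edges[1::2] / FS
    keep = (stops - starts) >= min_dur           # discard very brief noise
    return list(zip(starts[keep], stops[keep]))
```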
379
+ [1581.320 --> 1584.320] So this is a very important and interesting pattern.
380
+ [1584.320 --> 1588.320] So let's zoom in and see what happens in this particular time window,
381
+ [1588.320 --> 1592.320] which is about 50 to 100 milliseconds, when it occurs.
382
+ [1592.320 --> 1596.320] So here is a situation, very similar to what you have seen before.
383
+ [1596.320 --> 1601.320] When the animal is asked very simply just to run from here to here and back,
384
+ [1601.320 --> 1603.320] and there is reward and there is reward.
385
+ [1603.320 --> 1606.320] These are the place cells that you are already familiar with.
386
+ [1606.320 --> 1609.320] This entire journey is tessellated.
387
+ [1609.320 --> 1611.320] There are place cells everywhere.
388
+ [1611.320 --> 1617.320] But before the animal runs, it sits there, and then there is a sharp wave there.
389
+ [1617.320 --> 1620.320] And you ask what happens during the sharp wave.
390
+ [1620.320 --> 1625.320] And you can see that here at a single time window, only a few neurons fire.
391
+ [1625.320 --> 1627.320] Here, nearly all of them fire.
392
+ [1627.320 --> 1631.320] This is the most synchronous pattern of the mammalian brain, as I mentioned before.
393
+ [1631.320 --> 1634.320] But the interesting thing, of course, is that before the journey,
394
+ [1634.320 --> 1638.320] the animal is subconsciously, if you want, recapitulating or planning,
395
+ [1639.320 --> 1644.320] whatever the term you use, the sequence is the same as the sequence on the track.
396
+ [1644.320 --> 1648.320] And at the end of the journey, the animal is rewarded.
397
+ [1648.320 --> 1653.320] Now, the animal recapitulates the journey, but in the reverse order.
398
+ [1653.320 --> 1658.320] So that's an interesting thing that we can manipulate time and space back and forth
399
+ [1658.320 --> 1663.320] during this short pattern in waking state, but also during sleep.
400
+ [1663.320 --> 1665.320] So I'm showing you an example here.
401
+ [1666.320 --> 1668.320] Now, we have many more neurons.
402
+ [1668.320 --> 1674.320] The entire track is represented by at least one type of a neuron.
403
+ [1674.320 --> 1681.320] And then we can ask: what can we think about this pattern that occurred during the sharp wave?
404
+ [1681.320 --> 1686.320] If we had to put the animal's activity back into the brain while,
405
+ [1686.320 --> 1689.320] imaginarily, the animal is running on the maze.
406
+ [1689.320 --> 1691.320] This is called the Bayesian reconstruction method.
407
+ [1691.320 --> 1696.320] Can we put the animal somewhere here, at the beginning or the middle or the end of the maze?
408
+ [1696.320 --> 1701.320] And you can see that it works very well during waking, but it works equally well during sleeping,
409
+ [1701.320 --> 1704.320] both forward and backwards.
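The Bayesian reconstruction mentioned here is the standard memoryless decoder commonly associated with Zhang et al. (1998): assuming each place cell fires as an independent Poisson process at the rate given by its place field f_i(x), the posterior over position for spike counts n_i in a window of length tau is P(x | n) ∝ P(x) · Π_i f_i(x)^(n_i) · exp(−tau · Σ_i f_i(x)). A minimal sketch, with array names and shapes assumed for illustration:

```python
# Minimal sketch of Bayesian position decoding from spike counts
# (standard Poisson decoder; names and shapes are illustrative).
import numpy as np

def decode_position(counts, place_fields, tau=0.02):
    """
    counts:       (n_cells,) spike counts in one window of length tau seconds
    place_fields: (n_cells, n_positions) mean firing rate (Hz) of each cell
                  in each position bin, estimated while the animal ran
    returns:      (n_positions,) posterior probability over position bins
    """
    rates = np.clip(place_fields, 1e-9, None)    # avoid log(0)
    # log P(n | x) for independent Poisson cells (n!-terms dropped,
    # since they do not depend on x); a flat prior over x is assumed
    log_like = counts @ np.log(rates) - tau * rates.sum(axis=0)
    post = np.exp(log_like - log_like.max())
    return post / post.sum()
```

Sliding this decoder over a ripple in short windows yields a sequence of decoded positions, which is how forward and reverse replay trajectories like the ones described here can be read out.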
410
+ [1704.320 --> 1711.320] So this is indeed a good pattern, and I have shown you in the previous experiment
411
+ [1711.320 --> 1716.320] that we can indeed interfere with the consolidation of the memory.
412
+ [1716.320 --> 1722.320] Now, if the consolidation of the memory is analogous to consolidation or stabilizing a map,
413
+ [1722.320 --> 1726.320] then you can ask what happens in a learning environment with play cells?
414
+ [1726.320 --> 1729.320] So we now use a different manipulation, which is optogenetic,
415
+ [1729.320 --> 1736.320] a different maze. This is called the honeycomb maze, with many, many, sorry,
416
+ [1736.320 --> 1739.320] what is it called? The honeycomb is your terminology.
417
+ [1739.320 --> 1744.320] Okay, this is a complicated maze with 96 holes, and only three of them are baited.
418
+ [1744.320 --> 1748.320] The animal has to learn that there is water reward there.
419
+ [1748.320 --> 1753.320] And we wonder if the animal learns something, whether the place cells remain stable,
420
+ [1753.320 --> 1756.320] and is there a role for hippocampal sharp waves here?
421
+ [1756.320 --> 1759.320] All we have to do is, when the animal is rewarded and there is a sharp wave there,
422
+ [1759.320 --> 1762.320] we have to erase it, or at least erase part of it.
423
+ [1762.320 --> 1768.320] So I'm just showing you what happens in this situation, which the animal has learned already, every day.
424
+ [1768.320 --> 1770.320] The animal is very familiar with the task.
425
+ [1770.320 --> 1775.320] It's just the kind of thing that we have when we go to the airport and leave our car there,
426
+ [1775.320 --> 1780.320] where we leave the car, and the animal is asking itself,
427
+ [1780.320 --> 1783.320] where is the water? Yesterday it was here, I can't find it now,
428
+ [1783.320 --> 1786.320] now I have to find a new constellation of these three things.
429
+ [1786.320 --> 1791.320] And as you can see, the animal is walking around for a long time.
430
+ [1791.320 --> 1796.320] In fact, it takes about five minutes before it collects all the three rewards.
431
+ [1796.320 --> 1801.320] But by the end of the day, it goes pretty fast.
432
+ [1801.320 --> 1802.320] Oops.
433
+ [1809.320 --> 1812.320] This is a well-known pattern.
434
+ [1812.320 --> 1814.320] The animal collects the reward here, goes to the other one,
435
+ [1814.320 --> 1817.320] and the whole thing can happen in about 10 seconds.
436
+ [1817.320 --> 1822.320] So now we can ask the question, what would happen if, under this recording electrode,
437
+ [1822.320 --> 1827.320] that we can place in the brain, we would get rid of this pattern,
438
+ [1827.320 --> 1831.320] the sharp-wave ripple, or at least part of it, and silence all the neurons,
439
+ [1831.320 --> 1836.320] all the place cells that were active under this electrode when the animal was out on the maze.
440
+ [1836.320 --> 1844.320] And only a few hundred microns away, there will be other neurons that are not affected whatsoever.
441
+ [1844.320 --> 1848.320] So remember, in the memory experiment what we have done is that we shut off the entire hippocampus
442
+ [1848.320 --> 1851.320] for the entire time of the ripple.
443
+ [1851.320 --> 1857.320] And here we leave everything normal, the animal can learn; only a handful of neurons are affected.
444
+ [1857.320 --> 1863.320] And we ask: how do those neurons behave, this handful of neurons,
445
+ [1863.320 --> 1868.320] which were silenced during the sharp-wave activity, so that they were not part of this concert.
446
+ [1868.320 --> 1872.320] They didn't have an opportunity to interact with the other place cells.
447
+ [1872.320 --> 1874.320] And the answer is very simple.
448
+ [1874.320 --> 1877.320] This is, this is a control experiment, or control recordings.
449
+ [1877.320 --> 1884.320] There is no sharp-wave erasure or killing, and you can see that the place cells are pretty stable.
450
+ [1884.320 --> 1890.320] But those neurons that didn't have the opportunity to be part of the game,
451
+ [1890.320 --> 1896.320] because at the time when neurons were interacting strongly with each other, they were silenced.
452
+ [1896.320 --> 1899.320] You can see that the stability is pretty low.
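"Stability" in analyses like this is typically quantified as the spatial correlation of a cell's firing-rate map across two sessions or session halves; the talk does not state the exact metric, so the following is an assumed, minimal version for illustration.

```python
# Minimal sketch of a place-field stability score: Pearson correlation
# between a cell's binned rate maps from two sessions (assumed metric).
import numpy as np

def stability(rate_map_a, rate_map_b):
    """Pearson r between two same-shaped rate maps; NaN bins
    (e.g., unvisited locations) are ignored."""
    a, b = np.ravel(rate_map_a), np.ravel(rate_map_b)
    ok = ~(np.isnan(a) | np.isnan(b))
    return np.corrcoef(a[ok], b[ok])[0, 1]
```

Comparing the distribution of such scores between control cells and cells silenced during ripples is the kind of contrast being described here.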
453
+ [1899.320 --> 1906.320] Overall, what I have shown in these experiments is, indeed, that sharp-wave ripples are necessary both for memory consolidation
454
+ [1906.320 --> 1910.320] and perhaps for consolidation of the spatial map.
455
+ [1910.320 --> 1915.320] So to summarize, cortical circuits have dual functions.
456
+ [1915.320 --> 1918.320] One is, they can respond very effectively to environmental cues.
457
+ [1918.320 --> 1922.320] But when those environmental cues are not present, they just can't help it.
458
+ [1922.320 --> 1925.320] They self-generate their activity and work within.
459
+ [1925.320 --> 1933.320] And that work is not going away while we fall asleep, because in fact the brain is very active during sleep,
460
+ [1933.320 --> 1936.320] but in a different dynamic.
461
+ [1936.320 --> 1946.320] Perhaps the most important takeaway message is that spatial navigation created by nature for the animal to find food
462
+ [1946.320 --> 1956.320] is the way to create an internalized version of that, and then we can mentally travel back into the past or forward into the future.
463
+ [1956.320 --> 1961.320] And we may still remember the last two conclusions, that hippocampal sharp-wave
464
+ [1961.320 --> 1966.320] ripples are necessary for both memory and creating the spatial map.
465
+ [1966.320 --> 1968.320] And thanks very much.
transcript/allocentric_QOkrS1v7Ywk.txt ADDED
@@ -0,0 +1,97 @@
1
+ [0.000 --> 3.200] Hi, I'm Dr. York.
2
+ [3.200 --> 10.160] Not that kind of doctor.
3
+ [10.160 --> 12.000] I'm a doctor of communication.
4
+ [12.000 --> 16.680] Did you know only 7% of communication is the words that we say?
5
+ [16.680 --> 20.880] That means 93% of communication is nonverbal communication.
6
+ [20.880 --> 23.840] For example, gestures mean a lot.
7
+ [23.840 --> 28.040] The most open thing you can possibly do is show the palms of your hands.
8
+ [28.040 --> 32.480] Now speaking of hands, let's talk about handshakes.
9
+ [32.480 --> 35.040] What do you need to give a really good handshake?
10
+ [35.040 --> 36.040] Help!
11
+ [36.040 --> 38.040] That's right, Sean.
12
+ [38.040 --> 39.040] Help.
13
+ [39.040 --> 42.280] A handshake says a lot about a person.
14
+ [42.280 --> 43.880] Rule number one.
15
+ [43.880 --> 46.360] Always stand up to shake someone's hand.
16
+ [46.360 --> 51.280] Rule number two, don't be a bone crusher.
17
+ [51.280 --> 52.760] Rule number three.
18
+ [52.760 --> 55.160] Don't be a weakling.
19
+ [55.160 --> 62.360] So, gentlemen, please be sure to shake a woman's hand the same way you shake a man's
20
+ [62.360 --> 63.360] hand.
21
+ [63.360 --> 66.840] So if you're someone fancy like the president and you take a number of pictures, make
22
+ [66.840 --> 70.160] sure you're on the camera's, the audience's, left side.
23
+ [70.160 --> 72.760] So when you shake someone's hand, your hand is on top.
24
+ [72.760 --> 76.200] Makes you look more powerful for those pictures.
25
+ [76.200 --> 78.400] But you can't always be on that side.
26
+ [78.400 --> 81.280] So if you are on the wrong side, there's some defensive measures.
27
+ [81.280 --> 82.280] Like this one.
28
+ [82.280 --> 83.280] The wrap.
29
+ [84.280 --> 92.720] Or a second defensive measure is you pull them toward yourself so no one's hand is on top.
30
+ [92.720 --> 97.360] People like good-looking people, and clothes say a lot about a personality.
31
+ [97.360 --> 99.960] For example, always wear traditional clothing.
32
+ [99.960 --> 100.960] Blues.
33
+ [100.960 --> 101.960] Blacks.
34
+ [101.960 --> 102.960] Greys.
35
+ [102.960 --> 104.440] Even a pop of red are fantastic.
36
+ [104.440 --> 108.440] Stay away from the neon green colors.
37
+ [108.440 --> 113.240] For example, during a job interview, you'd want to stay away from a neon green tie because
38
+ [113.240 --> 118.200] the interviewer would be paying more attention to your tie than what you have to say.
39
+ [118.200 --> 120.560] Non-verbal communication can also show that we're listening.
40
+ [120.560 --> 121.560] Isn't that right?
41
+ [121.560 --> 122.560] Absolutely, Dr. York.
42
+ [122.560 --> 123.560] In fact.
43
+ [123.560 --> 124.560] Thank you.
44
+ [124.560 --> 127.360] So if you are speaking to someone, if you're currently talking, you should be looking
45
+ [127.360 --> 131.120] at the other person in the eye, about 60 to 70% of the time.
46
+ [131.120 --> 136.560] On the other hand, if you are listening to someone, increase that to about 90% of the
47
+ [136.560 --> 139.440] time to show engagement, to show that you are listening.
48
+ [139.440 --> 145.040] Also, be sure that you're not looking around the room looking to trade up in the conversation.
49
+ [145.040 --> 148.560] Non-verbal communication can also help you detect lies.
50
+ [148.560 --> 154.400] Look for things like too much eye contact, hiding behind barriers or stiff body movements.
51
+ [154.400 --> 159.320] But make sure you are getting a baseline for the individual, what they usually do, to make
52
+ [159.320 --> 163.920] sure whether they are lying or there's just nervous energy.
53
+ [163.920 --> 169.880] Usually college students and police officers have roughly the same rate of detecting
54
+ [169.880 --> 170.880] lies.
55
+ [170.880 --> 176.160] So next time you are interrogating someone, it's not what you say.
56
+ [176.160 --> 178.240] It's how you say it.
57
+ [178.240 --> 181.800] As a professor of communication, I'm usually aware of my student's feelings.
58
+ [181.800 --> 186.700] And there are seven universal micro expressions that everyone has, whether they're from St.
59
+ [186.700 --> 190.000] Louis, Missouri, Tokyo, or Mongolia.
60
+ [190.000 --> 194.160] And I'm about to show you all seven of these expressions while doing something fun,
61
+ [194.160 --> 195.160] messing with students.
62
+ [195.160 --> 199.080] Class I'd like to remind you that I've canceled the final exam.
63
+ [199.080 --> 203.840] However, to make up for those missed points, we're going to have an exam today.
64
+ [203.840 --> 206.560] Sean, thank you so much for joining us.
65
+ [206.560 --> 209.720] Here's your surprise exam.
66
+ [209.720 --> 214.160] And since Sean didn't make it a priority to be on time today, I'm doubling the points
67
+ [214.160 --> 216.680] on this exam.
68
+ [216.680 --> 217.680] And here's where it gets fun.
69
+ [221.000 --> 223.000] I don't even like tuna.
70
+ [223.000 --> 224.000] Yeah.
71
+ [224.000 --> 227.000] Those look like a big F.
72
+ [227.000 --> 234.400] Here's a tip for interviewing on air or for a job interview.
73
+ [234.400 --> 237.120] Have you ever played the lava game as a child?
74
+ [237.120 --> 239.680] Pretend the back half of your chair is lava.
75
+ [239.680 --> 246.040] This will force you to either sit up straight or lean forward looking engaged.
76
+ [246.040 --> 253.040] My research shows that you can increase memory recall by 22% just through nonverbal communication.
77
+ [253.040 --> 256.800] One of those steps, excuse me, is getting rid of all the barriers.
78
+ [256.800 --> 257.800] You don't need those anyway.
79
+ [257.800 --> 259.800] Sorry, buddy.
80
+ [259.800 --> 264.400] Would you believe me if I told you that you could increase your own confidence through
81
+ [264.400 --> 266.600] nonverbal communication?
82
+ [266.600 --> 269.120] Simply the Superman pose.
83
+ [269.120 --> 275.080] So if you do the Superman pose for just two minutes, your testosterone levels increase
84
+ [275.080 --> 278.600] and your cortisol levels decrease, which helps manage stress.
85
+ [278.600 --> 281.520] Everyone do the Superman pose.
86
+ [281.520 --> 284.040] So remember, the podium is kryptonite.
87
+ [284.040 --> 287.480] Do the Superman pose once a day and keep the kryptonite away.
88
+ [287.480 --> 290.400] So as you can tell, communication is a powerful tool.
89
+ [290.400 --> 293.120] It's not just what we say that's important.
90
+ [293.120 --> 298.240] Everything from gestures to handshaking to micro expressions to lie detection to how we
91
+ [298.240 --> 299.640] dress.
92
+ [299.640 --> 301.640] Everything says something to everyone.
93
+ [301.640 --> 304.760] You're speaking volumes to everyone around you.
94
+ [304.760 --> 306.360] Even when you don't say a word.
95
+ [306.360 --> 307.360] Huh.
96
+ [307.360 --> 308.360] Because I'm happy.
97
+ [308.360 --> 311.360] Clap along if you feel like a room without a roof.
transcript/allocentric_SKhsavlvuao.txt ADDED
@@ -0,0 +1,28 @@
1
+ [60.000 --> 67.000] I'm going to do a little bit of the same thing.
2
+ [67.000 --> 72.000] I'm going to do a little bit of the same thing.
3
+ [72.000 --> 79.000] I'm going to do a little bit of the same thing.
4
+ [79.000 --> 84.000] I'm going to do a little bit of the same thing.
5
+ [84.000 --> 91.000] I'm going to do a little bit of the same thing.
6
+ [91.000 --> 98.000] I'm going to do a little bit of the same thing.
7
+ [98.000 --> 103.000] I'm going to do a little bit of the same thing.
8
+ [103.000 --> 109.000] I'm going to do a little bit of the same thing.
9
+ [109.000 --> 116.000] I'm going to do a little bit of the same thing.
10
+ [116.000 --> 123.000] I'm going to do a little bit of the same thing.
11
+ [123.000 --> 130.000] I'm going to do a little bit of the same thing.
12
+ [130.000 --> 136.000] I'm going to do a little bit of the same thing.
13
+ [136.000 --> 143.000] I'm going to do a little bit of the same thing.
14
+ [143.000 --> 150.000] I'm going to do a little bit of the same thing.
15
+ [150.000 --> 157.000] I'm going to do a little bit of the same thing.
16
+ [157.000 --> 163.000] I'm going to do a little bit of the same thing.
17
+ [163.000 --> 170.000] I'm going to do a little bit of the same thing.
18
+ [170.000 --> 177.000] I'm going to do a little bit of the same thing.
19
+ [177.000 --> 184.000] I'm going to do a little bit of the same thing.
20
+ [184.000 --> 191.000] I'm going to do a little bit of the same thing.
21
+ [191.000 --> 196.000] I'm going to do a little bit of the same thing.
22
+ [196.000 --> 202.000] I'm going to do a little bit of the same thing.
23
+ [202.000 --> 209.000] I'm going to do a little bit of the same thing.
24
+ [209.000 --> 216.000] I'm going to do a little bit of the same thing.
25
+ [216.000 --> 217.000] I'm going to do a little bit of the same thing.
26
+ [217.000 --> 223.000] I'm going to do a little bit of the same thing.
27
+ [223.000 --> 224.000] I'm going to do a little bit of the same thing.
28
+ [224.000 --> 231.000] I'm going to do a little bit of the same thing.
transcript/allocentric_TGwnvyUlc18.txt ADDED
@@ -0,0 +1,95 @@
1
+ [0.000 --> 7.840] Well, as you probably guessed, grades are simply a tool, which means grades have embedded
2
+ [7.840 --> 11.120] within them a world view, a way of understanding the world.
3
+ [11.120 --> 16.880] And when we use grades, we have to adopt that world view that changes our top down thinking
4
+ [16.880 --> 19.840] and changes the world you are perceiving and living in.
5
+ [20.480 --> 32.320] Hello everybody and welcome to this week's From Theory to Practice, where I take a look at the
6
+ [32.320 --> 36.720] research so you don't have to. Now as you know, we're taking a journey through my new book,
7
+ [36.720 --> 40.800] 10 Things Schools Get Wrong and How We Can Get Them Right. And this week we're going to take
8
+ [40.800 --> 45.920] a look at chapter 3, which is entitled, grades, the Problem with Modern Assessment. Now as you can
9
+ [45.920 --> 50.720] expect, this is a huge topic. So rather than tacklet head on in this video, I want to hit it from
10
+ [50.720 --> 54.720] the side and see if we can come at it from kind of a side angle. So the article I've selected this
11
+ [54.720 --> 61.280] week is called, Can Language Restructure Cognition by Majeed and colleagues. Now to understand this
12
+ [61.280 --> 66.560] paper, we have to go back to the very foundations of the brain and how it works. Now for the longest
13
+ [66.560 --> 71.040] time, we used to think the brain worked in what's called a bottom up manner. Essentially the world is
14
+ [71.040 --> 76.480] out there, signals come into our body through all of our senses, activate the brain and that's our
15
+ [76.480 --> 81.840] experience of reality. We now know that's not true. The human brain, by and large, works on what's
16
+ [81.840 --> 88.160] called a top down system. Essentially this is where parts of the brain can feed back and change how
17
+ [88.160 --> 93.120] those signals from the world interact with and activate the brain up here. So as a simple example,
18
+ [93.120 --> 97.760] let's just take a look at this simple image. Some of you right now are seeing a nice pretty young
19
+ [98.160 --> 102.640] lady and some of you are seeing a more run down older woman. Now regardless of what you're seeing,
20
+ [102.640 --> 107.600] what I want you to do is flip. Change now and see the other image. If you're looking at the old
21
+ [107.600 --> 113.040] woman, see the young woman. If you're looking at the young woman, see the old woman. Now as you know,
22
+ [113.040 --> 118.000] this image isn't changing in any way, shape or form. The signals coming in and activating your
23
+ [118.000 --> 124.880] brain are identical right now as they were 10 seconds ago, but you can see two different things.
24
+ [124.880 --> 130.800] That's the power of top down processing. You can push back and dictate how your brain reacts
25
+ [130.800 --> 136.720] to those signals. Now this leads to a very important question. What drives our top down control?
26
+ [136.720 --> 141.840] What's guiding this thing to change how all of the rest of the brain works? The easiest way to
27
+ [141.840 --> 147.600] to conceptualize it is it's your world view. The stories you use to make sense of the world
28
+ [147.600 --> 152.560] changes your top down processing, changes the world you are perceiving and living in. So as you
29
+ [152.560 --> 157.920] can assume, our world view comes from a dozen different sources from our parents, from our family,
30
+ [157.920 --> 163.600] our friends, our experiences, our schooling, the things we learn. But there's one major influence
31
+ [163.600 --> 168.240] on our world view, on our top down understanding of the world that most people miss. They don't
32
+ [168.240 --> 176.080] think of it. And it's our tools. Every tool we choose to use comes embedded with its own world view.
33
+ [176.080 --> 182.480] And when we use those tools, we assume and must then also use that world view. And that's what
34
+ [182.480 --> 188.240] this paper takes a look at. This paper says, okay, language is one of our most basic tools.
35
+ [188.240 --> 193.120] And if tools have an ability to change our world view, can language change our perception,
36
+ [193.120 --> 197.680] our memory, our cognition, and the world we are living in? And to see where these researchers
37
+ [197.680 --> 201.840] arrived, I want to play a quick memory game with you. So what I'm going to do is I'm going to
38
+ [201.840 --> 206.000] show you a picture of a table and there will be three objects that can be placed on that table,
39
+ [206.000 --> 210.640] a house, a doll, and a tree. Now the most you're ever going to see at any one time is only two
40
+ [210.640 --> 214.320] objects. So I'm going to show you a series of pictures and what I want you to do is try and
41
+ [214.320 --> 219.680] memorize the location of each of the objects on the table as they sit. So let's start. So here we've
42
+ [219.680 --> 224.560] got our table and we've got the house and we've got the doll. I want you to memorize their positions.
43
+ [227.360 --> 231.440] Now in picture two, we have the same house from the previous picture, but we have a new object,
44
+ [231.440 --> 235.840] a tree. So I want you to try and memorize the position of these two objects as they stand.
45
+ [238.560 --> 243.840] Picture three, all we see is the doll. I want you to imagine that I gave you that tree object.
46
+ [243.840 --> 249.040] Now here's the question. Using our memories from the last two images, where on this table
47
+ [249.040 --> 253.520] does the tree belong? So again, putting all the images together now in our mind,
48
+ [253.520 --> 258.320] where on this table would you place it in relation to this doll? Now I'm going to go out on a limb
49
+ [258.480 --> 263.200] here and I'm going to guess you put the tree here. Congratulations, not a very difficult game.
50
+ [263.200 --> 266.960] You've got a wonderful memory. Great job. But check this out. When we play the same exact
51
+ [266.960 --> 271.200] game with members of the Aboriginal tribes from Northern Australia and Aboriginal tribes from
52
+ [271.200 --> 279.680] Mexico, they place the tree here. Now why might that be? It turns out this difference has to do
53
+ [279.680 --> 285.920] with language. In English, most of our language for space concerns relative location. It's to my left,
54
+ [285.920 --> 290.160] it's in front of me. It's to the right of that thing over there. Whereas in Northern Aboriginal
55
+ [290.160 --> 296.240] and some Mexican languages, space is always absolute. It's always tied to the cardinal points of
56
+ [296.240 --> 301.040] northeast, south and west. So whereas I might say, oh, there's a scorpion in front of your foot,
57
+ [301.040 --> 305.680] they would say there's a scorpion south of your foot. Nothing is relative. Everything is
58
+ [305.680 --> 310.400] absolute. So why would that impact their placement of the tree? Well, let's go back to our images.
59
+ [310.480 --> 316.240] As you can see, I kept rotating that table between different images. Now us as English speakers
60
+ [316.240 --> 322.240] who think in relative terms, we rotated our body with that table as well. So when we saw picture
61
+ [322.240 --> 327.600] one, we imagined ourselves in front of the table and we swung with the table when we moved to
62
+ [327.600 --> 332.880] picture two and swung back when we moved picture three, which means the tree was always going to
63
+ [332.880 --> 339.360] be to the left of the doll. Now in absolute languages, there is no spatial relativity. So they don't
64
+ [339.440 --> 345.520] swing between images. So in absolute terms, the house is south of the doll. The tree is south of
65
+ [345.520 --> 351.760] the house. Therefore, the tree has to be south of the doll. Now remember, this wasn't a perceptual
66
+ [351.760 --> 358.400] game. This was a memory game, which means the tool of language can literally change our memory
67
+ [358.400 --> 363.040] for events and how things unfold. And it works with movement as well. So let's say I put you at a
68
+ [363.040 --> 368.480] table facing this direction and I moved a little toy car down then towards you. Once I flipped that
69
+ [368.480 --> 372.880] table to the other direction and I say move the car the same way I just moved it. Most of you will
70
+ [372.880 --> 378.080] move it to your left then towards you. But when we play this same game with absolute speakers,
71
+ [378.080 --> 383.360] because the car first moved south and then east, once we flipped the table, they will move the
72
+ [383.360 --> 390.560] car south and then east. This is the power of our tools to change, to interact with, to influence
73
+ [390.560 --> 394.720] our top-down perceptions of the world.
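The table-rotation logic above can be made concrete with a toy coordinate transform: an absolute (allocentric) coder stores the tree-doll offset in fixed compass coordinates, while a relative (egocentric) coder stores it in a body-centered frame that rotates with the observer. The scenario and numbers below are invented purely for illustration.

```python
# Toy contrast between relative (egocentric) and absolute (allocentric)
# spatial memory, mirroring the rotating-table example (values invented).
import numpy as np

def rotate(vec, degrees):
    """Rotate a 2-D vector counterclockwise about the origin."""
    t = np.radians(degrees)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    return rot @ vec

# The tree sits 1 unit west of the doll in world coordinates
# (x = east, y = north).
tree_offset_world = np.array([-1.0, 0.0])

# Absolute coder: recalls the same world-frame offset no matter how the
# observer turns -- the tree stays "west of the doll".
allocentric_recall = tree_offset_world

# Relative coder: stored "to my left" while facing north; after turning
# 180 degrees with the table, that body-frame memory now points to the
# opposite compass side.
egocentric_recall = rotate(tree_offset_world, 180)

print(allocentric_recall)  # [-1.  0.]  -> still west of the doll
print(egocentric_recall)   # ~[ 1.  0.] -> east of the doll (flipped)
```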
74
+ [394.720 --> 399.360] So let's bring this back: what does this mean for us as teachers? Well, first, this means all of our students are going to have unique world views.
75
+ [399.360 --> 403.600] And a lot of the times when we clash with our students, we assume, oh, they're not trying hard
76
+ [403.600 --> 408.560] enough, or, oh, they're not putting in the effort. When in actuality, no, they might not be seeing
77
+ [408.560 --> 414.080] understanding, smelling, tasting, perceiving the same world we are. Which means one of the essential
78
+ [414.080 --> 418.960] things we have to do is form relationships with our students. We have to know their stories,
79
+ [418.960 --> 422.800] their world views, how they're experiencing the world. Because in those world views are going to
80
+ [422.800 --> 427.600] be the clues we need to understand how they're thinking and how we can bring them to that next level.
81
+ [428.160 --> 432.720] And beyond individual students, we now have to think, okay, what is the worldview of our school
82
+ [432.720 --> 437.680] at large? What are the stories we are using to define our purpose, to drive and guide our kids?
83
+ [437.680 --> 443.200] Those global world views we are using to organize our school days and our pedagogy are also dictating
84
+ [443.200 --> 447.840] how we're defining and understanding student learning, growth, and development. So what are the
85
+ [447.840 --> 451.920] stories we're using collectively and how is that influencing our interaction with students?
86
+ [452.000 --> 455.440] But let's bring it back now. So remember, we're talking about chapter three of the book,
87
+ [455.440 --> 461.200] grades and modern assessment. So what does any of what we've just discussed have to do with grades?
88
+ [461.200 --> 466.960] Well, as you probably guessed, grades are simply a tool. Which means grades have embedded within
89
+ [466.960 --> 472.800] them a worldview, a way of understanding the world. And when we use grades, we have to adopt that
90
+ [472.800 --> 479.120] worldview that changes our top down thinking and that changes the world we live in. So what is the
91
+ [479.200 --> 483.600] worldview of grades? What is the story hidden within grades? Well, that's what we take a look at in
92
+ [483.600 --> 487.840] chapter three of the books. So now that you kind of have the framework, now that chapter will help you
93
+ [487.840 --> 492.080] dig deeper into the concept of grades and see what is this really doing and is this really what we
94
+ [492.080 --> 496.160] want. So thank you all so much for hanging out with me. If you like what you heard, please subscribe
95
+ [496.160 --> 498.960] and comment below. Otherwise, I'll see you guys next time. Bye, y'all.
transcript/allocentric_Y89Cd_0wXik.txt ADDED
@@ -0,0 +1,26 @@
1
+ [0.000 --> 6.320] Goal-directed reaching: the allocentric coding of target location renders an offline mode of control.
2
+ [7.040 --> 9.280] Via Experimental Brain Research.
3
+ [10.080 --> 10.800] Abstract.
4
+ [11.520 --> 15.760] Reaching to a veridical target permits an egocentric spatial code,
5
+ [15.760 --> 21.120] that is, absolute limb and target position, to effect fast and effective online
6
+ [21.120 --> 27.040] trajectory corrections supported via the visuomotor networks of the dorsal visual pathway.
7
+ [27.360 --> 33.280] In contrast, a response entailing decoupled spatial relations between stimulus and response
8
+ [33.280 --> 39.680] is thought to be primarily mediated via an allocentric code, that is, the position of a target relative
9
+ [39.680 --> 44.480] to another external cue, laid down by the visuoperceptual networks of the
10
+ [44.480 --> 50.240] ventral visual pathway. Because the ventral stream renders a temporally durable percept,
11
+ [50.240 --> 55.600] it is thought that an allocentric code does not support a primarily online mode of control,
12
+ [55.600 --> 61.760] but instead supports a mode wherein a response is evoked largely in advance of movement onset via
13
+ [61.760 --> 68.000] central planning mechanisms, that is, offline control. Here, we examine whether reaches
14
+ [68.000 --> 74.080] defined via egocentric and allocentric visual coordinates are supported via distinct control modes,
15
+ [74.080 --> 79.120] that is, online versus offline. Participants performed target-directed
16
+ [79.120 --> 83.040] and allocentric reaches in limb-visible and limb-occluded conditions.
17
+ [83.520 --> 89.680] Notably, in the allocentric task, participants reached to a location that matched the position of
18
+ [89.680 --> 95.840] a target stimulus relative to a reference stimulus. And to examine online trajectory amendments,
19
+ [95.840 --> 102.000] we computed the proportion of variance explained, that is, R² values, by the spatial
20
+ [102.000 --> 108.160] position of the limb at 75 percent of movement time relative to a response's ultimate movement
21
+ [108.320 --> 114.080] end point. Target-directed trials performed with limb vision showed more online corrections and
22
+ [114.080 --> 119.600] greater end point precision than their limb-occluded counterparts, which in turn were associated with
23
+ [119.600 --> 124.800] performance metrics comparable to allocentric trials performed with and without limb vision.
24
+ [125.440 --> 131.280] Accordingly, we propose that the absence of ego-motion cues, that is, limb vision,
25
+ [131.280 --> 137.280] and/or the specification of a response via an allocentric code renders motor output served via
26
+ [137.280 --> 144.880] the slow visuoperceptual networks of the ventral visual pathway.
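The R² analysis described in this abstract regresses each reach's final end point on the limb's position at 75% of movement time: a high R² means the end point was largely determined by then (little late correction, consistent with offline control), while a low R² implies amendments continued late in the reach (online control). A minimal sketch of that computation, with data and array names assumed for illustration:

```python
# Minimal sketch of the proportion-of-variance (R^2) analysis described
# above (data and names are illustrative, not from the study itself).
import numpy as np

def r_squared(pos_at_75, endpoints):
    """
    pos_at_75: (n_trials,) limb position on one axis at 75% of movement time
    endpoints: (n_trials,) final reach position on the same axis
    Returns the proportion of end-point variance explained by pos_at_75.
    """
    r = np.corrcoef(pos_at_75, endpoints)[0, 1]
    return r ** 2
```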
transcript/allocentric_ZkZjfqo6h3I.txt ADDED
@@ -0,0 +1,4 @@
1
+ [0.000 --> 5.000] Ego-centric smaller-person experience through a change in visual perspective.
2
+ [5.000 --> 10.000] We developed a wearable visual translator that provides the perspective of a smaller person
3
+ [10.000 --> 13.000] by shifting the wearer's eyesight level down to their waist.
4
+ [13.000 --> 21.000] We investigated how the Ego-centric smaller-person experience changes the wearer's perceptions, actions, and interactions.
transcript/allocentric_Zq0a__Ltr3Q.txt ADDED
@@ -0,0 +1,413 @@
1
+ [0.000 --> 4.240] So, I'm very happy to be here.
2
+ [4.240 --> 6.040] It's such an honor to be here.
3
+ [6.040 --> 15.400] And the first thing, I have to echo Nicole in thanking Mr. Schwab; it is the first time that I can
4
+ [15.400 --> 21.160] give a talk with the title of endowed professor, the Charles Schwab Endowed Professor.
5
+ [21.160 --> 29.960] So, and thank you for the opportunity to let me work on that.
6
+ [29.960 --> 40.200] I wanted to spend a couple of minutes talking about how the UCSF dyslexia center was born
7
+ [40.200 --> 46.120] and how it is really kind of modeling itself on the success of the Memory and Aging Center
8
+ [46.120 --> 47.120] at UCSF.
9
+ [47.120 --> 53.840] So, the memory and aging center is a hub that is being built in the past 20 years at UCSF
10
+ [53.840 --> 57.720] for the study of aging, cognitive disorders in aging.
11
+ [57.720 --> 67.440] And it really takes the strength of the medical world to advance the cause and diminish the
12
+ [67.440 --> 69.640] stigma of aging.
13
+ [69.640 --> 76.720] And it really is a multidisciplinary team that puts together clinicians, scientists, social
14
+ [76.720 --> 82.200] workers, nurses, community advocates, all together for the same goal.
15
+ [82.200 --> 88.080] So a few years ago when my own daughter was identified with the dyslexia, I looked around
16
+ [88.080 --> 93.160] and I said, well, where is the place, like the memory and aging center for her?
17
+ [93.160 --> 99.720] Why isn't anybody in the medical world or in a multidisciplinary way helping me figure
18
+ [99.720 --> 100.720] this out?
19
+ [100.720 --> 106.680] If I can't figure it out, and I'm a behavioral neurologist, I have all these titles and
20
+ [106.680 --> 110.200] how can anybody else figure it out?
21
+ [110.200 --> 115.480] And so right at the time, two visionary parents from the Charles Armstrong School, Steve
22
+ [115.480 --> 121.560] Carnivalian, Dave Evans, happen to come at UCSF and say, we want a better care and better
23
+ [121.560 --> 126.920] understanding of our children who are Charles Armstrong and also visionary educator like
24
+ [126.920 --> 128.240] Claudia Kutche.
25
+ [128.240 --> 130.920] And so the whole thing came together and said, we have to do this.
26
+ [130.920 --> 140.760] We have to build a center that really protects, takes care, does research, advocate for our
27
+ [140.760 --> 144.120] children because there is not a single person that can do it.
28
+ [144.120 --> 147.880] There is not a single person that knows about the brain, that knows about school, that
29
+ [147.880 --> 150.040] knows about education methods.
30
+ [150.040 --> 152.640] We all have to do it together.
31
+ [152.640 --> 153.960] So this was the story.
32
+ [153.960 --> 162.920] We're not quite as big as the Memory and Aging Center yet, but with the generous help of philanthropists
33
+ [162.920 --> 166.800] like Mr. Schwab, we are getting there.
34
+ [166.800 --> 175.640] So of course, to the mission of the Memory and Aging Center, we had to add a piece, which is the educational
35
+ [175.640 --> 176.640] and school piece.
36
+ [176.640 --> 181.840] And we were lucky to be able to work with excellent schools starting with the Charles Armstrong
37
+ [181.840 --> 187.680] school and other schools in the Bay Area, like Chartwell and Athena Academy, that are already
38
+ [187.680 --> 193.480] hubs where children with learning differences and neurodiversity are taken care of.
39
+ [193.480 --> 199.680] So the idea is that we have a center in which we do research, we do medical care.
40
+ [199.680 --> 205.560] I think these children have fallen through the crack and adults and aging individuals
41
+ [205.560 --> 207.680] with neurodiversity have fallen through the cracks.
42
+ [207.680 --> 209.280] Is it an education problem?
43
+ [209.280 --> 211.600] Is it a workplace problem?
44
+ [211.600 --> 213.400] Is it a university problem?
45
+ [213.400 --> 217.200] And the medical world has not been behind them.
46
+ [217.200 --> 220.480] And in our society, the medical world is very powerful.
47
+ [220.480 --> 221.480] We need it.
48
+ [221.480 --> 226.920] We need to take care of families and children and adults without the need of spending thousands
49
+ [226.920 --> 230.760] of dollars for private evaluation.
50
+ [230.760 --> 234.920] So we have the medical care part and then the partnership with the schools.
51
+ [234.920 --> 241.040] And also being the Bay Area, we need to take advantage of the incredible technology expertise
52
+ [241.040 --> 248.160] so that we can better identify and better help children and adults with specific cognitive
53
+ [248.160 --> 250.600] or behavioral challenges.
54
+ [250.600 --> 253.000] So where do these behavioral challenges come from?
55
+ [253.000 --> 257.880] I'm a behavioral neurologist which is a little bit of a hybrid between neurologists and
56
+ [257.880 --> 258.880] a psychiatrist.
57
+ [258.880 --> 264.880] So we really study the output of the brain which is cognition and behavior.
58
+ [264.880 --> 271.240] So we think of human behavior as an emergent property of different networks in the brain.
59
+ [271.240 --> 273.440] And we can think of networks like muscles.
60
+ [273.440 --> 277.120] So there are different pathways that do different things.
61
+ [277.120 --> 282.600] A big distinction that we can make to start with is left hemisphere and right hemisphere.
62
+ [282.600 --> 289.000] We think that the left hemisphere is very involved in linguistic processes but also processes
63
+ [289.000 --> 294.480] that involve rote memory, memorizing things.
64
+ [294.480 --> 299.560] The right hemisphere is the hemisphere more involved in visual spatial functioning and
65
+ [299.560 --> 301.320] in social emotional functioning.
66
+ [301.320 --> 303.560] This is something that is a little bit newer.
67
+ [303.560 --> 308.960] We are all kind of used to thinking of language and memory and visual spatial functioning coming
68
+ [308.960 --> 310.280] from the brain.
69
+ [310.280 --> 317.440] But social emotional reactions and abilities also come from specific "muscles" of the brain
70
+ [317.440 --> 319.080] that we need to look at.
71
+ [319.080 --> 325.600] So when Mr. Schwab says "I was a charming boy," that means he also has a strength in social
72
+ [325.600 --> 326.760] emotional functioning.
73
+ [326.760 --> 327.760] It doesn't.
74
+ [327.760 --> 336.520] It's not just something that we acquired, but is also an intrinsic strength that comes
75
+ [336.520 --> 339.240] from specific brain networks.
76
+ [339.240 --> 345.760] So the idea of a whole brain assessment in dyslexia but also for other learning differences
77
+ [345.760 --> 350.000] is that we need to look at all these aspects because they work in synergy.
78
+ [350.000 --> 353.480] These different networks in the brain and I'll show you some evidence of it.
79
+ [353.480 --> 357.840] We now understand more and more that they work in synchrony one with the other.
80
+ [357.840 --> 362.320] So if one is working differently, the other probably will too, and it might be the
81
+ [362.320 --> 366.720] basis of some of the strengths that we see.
82
+ [366.720 --> 372.600] So here is an example, really an analysis that we just finished with one of our neuroimaging
83
+ [372.600 --> 376.040] engineer colleagues that really shows you this concept.
84
+ [376.040 --> 380.800] So red means areas of the brain that are activated and we don't need to go into details
85
+ [380.800 --> 384.320] in what they do and blue are the ones that are deactivated.
86
+ [384.320 --> 389.280] So you can see that there is a balance when certain areas are activated, others need
87
+ [389.280 --> 390.920] to be deactivated.
88
+ [390.920 --> 397.600] And for the red ones to activate optimally, there needs to be a downside in the blue
89
+ [397.600 --> 398.600] ones.
90
+ [398.600 --> 401.720] So there is really a balance that we just starting to understand.
91
+ [401.720 --> 405.840] So we don't think that the strengths of the dyslexic brain come just from the fact that
92
+ [405.840 --> 408.920] well, I couldn't read fast so I need to look at pictures.
93
+ [408.920 --> 414.560] No, the brain is probably, that's our theories, wired from the beginning to have a strength
94
+ [414.560 --> 419.680] in this other function, for instance, visual spatial functioning.
95
+ [419.680 --> 425.800] So at some point I think we will be able to identify slow readers from their, even from
96
+ [425.800 --> 430.120] their specific strengths.
97
+ [430.120 --> 438.480] So going back to our networks, one of the, say, founding principles of our research is
98
+ [438.480 --> 442.200] that we can have these predictable patterns of strengths and weaknesses.
99
+ [442.200 --> 447.560] And dyslexia is not a unitary concept. This, which also comes from the pioneering work
100
+ [447.560 --> 455.040] of Maryanne Wolf, whom you will hear in a little bit, is that reading slowly is a symptom.
101
+ [455.040 --> 462.760] And so if this left hemisphere network is, for some reason, in development or in aging,
102
+ [462.760 --> 469.840] which are two most vulnerable times for our brain, functioning differently, you'll have
103
+ [469.840 --> 471.320] very common symptoms.
104
+ [471.320 --> 476.720] So in aging, the common symptom is not finding words, having difficulty finding words.
105
+ [476.720 --> 481.680] In children, while in school, the most common symptom is reading slowly.
106
+ [481.680 --> 483.200] And then we have to go deeper.
107
+ [483.200 --> 489.600] We have to understand where it comes from, from which of those errors, pathways, or areas
108
+ [489.600 --> 490.760] is coming from.
109
+ [490.760 --> 498.880] Because if we don't, we're not going to be able to help the child or the adult in a specific
110
+ [498.880 --> 500.040] way.
111
+ [500.040 --> 507.240] And we've done this in the aging world by looking at anomia, so word finding problems,
112
+ [507.240 --> 512.120] and dissecting it and trying to find which are the cognitive and neural basis of it,
113
+ [512.120 --> 517.240] we were able to identify different disorders of aging that before we were unable to identify,
114
+ [517.240 --> 520.520] and we're very close to targeted treatments because of that.
115
+ [520.520 --> 525.920] So this is the hypothesis also in the dyslexic neurodiversity brain.
116
+ [525.920 --> 529.600] We really need to understand where the symptom is coming from.
117
+ [529.600 --> 533.840] Kind of like if someone has a fever, a child has a fever, we wouldn't
118
+ [533.840 --> 536.080] just stop it and give them paracetamol.
119
+ [536.080 --> 541.600] We just want to know where that fever is coming from, then we can treat it properly or help
120
+ [541.600 --> 543.880] it properly.
121
+ [543.880 --> 548.720] So if we think about all these networks in balance one with the other, we can start making
122
+ [548.720 --> 553.920] sense of some of these terms and comorbidities that we've all went through with our children
123
+ [553.920 --> 560.240] or with ourselves, and do I have this, do I have that, do I have comorbid ADHD or comorbid
124
+ [560.240 --> 565.920] dyscalculia, and we can start thinking it from a system neuroscience perspective.
125
+ [565.920 --> 571.320] So we can think that dyslexia can come from different cognitive mechanisms, so someone
126
+ [571.320 --> 578.000] might have difficulty remembering words fast, or someone might have difficulties processing
127
+ [578.000 --> 586.280] sounds, or having a visual or understanding difficulty or an executive difficulty and
128
+ [586.280 --> 593.000] sensory motor, maybe many of our children have difficulties with fine motor processing,
129
+ [593.000 --> 596.680] and all of these different mechanisms can cause a slow reading.
130
+ [596.680 --> 601.720] But we need to identify which mechanism is the cause of the slow reading.
131
+ [601.720 --> 605.680] And then because of how those brain networks are organized, as you saw in the previous
132
+ [605.680 --> 611.120] picture, which ones work in balance with each other, which ones are close to each other,
133
+ [611.120 --> 615.520] then we can start understanding other functions, like social emotional, which is in the right
134
+ [615.520 --> 620.920] hemisphere, very near the visual semantic processing, and math.
135
+ [620.920 --> 624.360] Math is a whole other domain kind of like language.
136
+ [624.360 --> 628.000] Is it high-level math, or is it math facts?
137
+ [628.000 --> 635.160] And we can start looking at this as a neurodiversity spectrum in which there are different symptoms
138
+ [635.160 --> 640.760] that most often than others can co-occur with each other.
139
+ [640.760 --> 647.360] But then, if we think of the way that these networks and functions are in balance with
140
+ [647.360 --> 652.640] each other, we can actually start thinking of how they are associated with specific strengths.
141
+ [652.640 --> 661.680] So a child or a person who has a rote memory weakness for words will likely have a strength
142
+ [661.680 --> 667.040] in the network that is in balance with that network, which likely, and we'll see some
143
+ [667.040 --> 672.520] examples, would be visual strengths or semantic strengths, which are understanding patterns
144
+ [672.520 --> 679.600] or getting meanings, understanding meaning and problem solving in a way that is different
145
+ [679.600 --> 684.880] and faster or more efficient than other people.
146
+ [684.880 --> 689.200] Or another example, if there is a fine motor weakness, then we could predict that the
147
+ [689.200 --> 692.680] network in balance might be the one in social emotional processing.
148
+ [692.680 --> 699.800] So we can predict that maybe children and adults who have a dysgraphia or dyspraxia,
149
+ [699.800 --> 705.120] these other terms that are used might be very strong in social emotional and need to be
150
+ [705.120 --> 710.200] valued and protected because of that.
151
+ [710.200 --> 714.160] So let's look at some specific examples.
152
+ [714.160 --> 718.520] This is an example of a child that we saw in the dyslexia center.
153
+ [718.520 --> 720.080] Oh, and there is a point.
154
+ [720.080 --> 724.200] Great, thank you.
155
+ [724.200 --> 730.000] Who had a very small, oh, now I need to choose where to point.
156
+ [730.000 --> 733.280] So here, whoa, that doesn't get that far.
157
+ [733.280 --> 739.200] So anyway, you can see the brain in the middle is the brain of this child that we were studying
158
+ [739.200 --> 746.080] and you can see that little red dot, which is exactly where auditory processing happens.
159
+ [746.080 --> 752.360] So this child had difficulty with auditory processing and you can, in the right schematics,
160
+ [752.360 --> 756.800] you can see a smaller blue dot in that same area.
161
+ [756.800 --> 762.120] And we can expect that the same network in the right hemisphere might be enhanced.
162
+ [762.120 --> 769.440] And so he might have strength with visual attention and emotional processing.
163
+ [769.440 --> 775.480] So from this approach, we realized that there were different phenotypes of dyslexia with
164
+ [775.480 --> 777.120] different strengths and weaknesses.
165
+ [777.120 --> 781.880] These are three children all from the Charles Armstrong school, all diagnosed with dyslexia.
166
+ [781.880 --> 784.320] We don't need to go through the details.
167
+ [784.320 --> 789.000] We can look at the patterns visually: the three children, each of a different color,
168
+ [789.000 --> 790.000] are very different.
169
+ [790.000 --> 793.560] They are, all three of them, very bright.
170
+ [793.560 --> 796.840] They all have very high nonverbal reasoning.
171
+ [796.840 --> 802.680] But the patterns of strength and weaknesses in those networks is very different.
172
+ [802.680 --> 804.480] Their anatomies are also very different.
173
+ [804.480 --> 807.880] We have this technology now that we can look at single brains.
174
+ [807.880 --> 812.800] This is a technique that is being used and that we borrowed from neurosurgery.
175
+ [812.800 --> 820.720] So for the people that have to go into surgery to get a tumor resected in the brain, the surgeons
176
+ [820.720 --> 825.200] really want to know where the pathways are, especially the ones involved in language,
177
+ [825.200 --> 826.880] so that they don't touch them.
178
+ [826.880 --> 832.920] And so we borrowed some of the engineers and neuroimagers from that field to look at exactly
179
+ [833.080 --> 837.240] how the pathways are organized in each individual child.
180
+ [837.240 --> 844.920] And you can see that the symmetry between right and left of the brain of some of these pathways,
181
+ [844.920 --> 850.400] you can see the blue one or the dark green one are kind of different and almost opposite
182
+ [850.400 --> 851.400] in a case.
183
+ [851.400 --> 856.120] So the one in the middle, in the high middle, the green pathway is much bigger on the left
184
+ [856.120 --> 857.400] than on the right.
185
+ [857.400 --> 861.760] And in the lower right, for example, the green pathway is much bigger on the right than
186
+ [861.760 --> 862.840] on the left.
187
+ [862.840 --> 864.520] So very, very different.
188
+ [864.520 --> 871.520] And at this point, treated in a very protective, wonderful school like Charles Armstrong,
189
+ [871.520 --> 874.960] still treated in the same way, still remediated in the same way.
190
+ [874.960 --> 877.880] And for one child it is working and for the other one it is not.
191
+ [877.880 --> 883.520] And that was basically the reason why Stephen and Dave came to us at UCSF to start with
192
+ [883.520 --> 884.520] this.
193
+ [884.520 --> 889.600] Like, why is the approach working for some kids and not for others?
194
+ [889.600 --> 894.920] So let's look at some strengths that we are looking at.
195
+ [894.920 --> 896.520] So we do these evaluations.
196
+ [896.520 --> 900.640] Many of you have had children that came through the program.
197
+ [900.640 --> 904.600] It's a long evaluation that we know we need to do.
198
+ [904.600 --> 909.160] It's kind of our discovery cohort that then through technology, we hope will make the
199
+ [909.160 --> 910.680] assessment easier.
200
+ [910.680 --> 917.640] It is now as long as about 20 hours; it involves imaging, genetics, psychiatry, neuropsychology,
201
+ [917.640 --> 924.320] emotion, visual, spatial, evaluation of all aspects of the whole brain.
202
+ [924.320 --> 925.880] Some happens at UCSF.
203
+ [925.880 --> 928.160] Some happens in the school.
204
+ [928.160 --> 933.440] But it's a long commitment, but everybody seems to be happy and enjoying it.
205
+ [933.440 --> 936.720] So we'll look at four strengths.
206
+ [936.720 --> 941.200] The first one that I'm really proud of because it's very, very new and is very dear to my
207
+ [941.200 --> 947.360] heart, especially because I think many of these children are not protected enough and they
208
+ [947.360 --> 954.440] have a gift of empathy, a gift of emotional processing that we have not previously recognized.
209
+ [954.440 --> 958.360] So as we said, the left hemisphere is more involved in language.
210
+ [958.360 --> 962.560] The right hemisphere is more involved in emotion processing.
211
+ [962.560 --> 968.440] So the hypothesis is if these two, some of the networks within these two hemispheres are
212
+ [968.440 --> 969.760] in balance.
213
+ [969.760 --> 975.800] Reading and language difficulties might be associated with emotional enhancement.
214
+ [975.800 --> 983.000] So just as the children might have difficulty recognizing words because of the left hemisphere
215
+ [983.000 --> 989.160] difference, they might have an enhanced ability to recognize facial expressions and to produce
216
+ [989.160 --> 995.320] facial expressions because of the balance between these two systems in the brain.
217
+ [995.320 --> 996.840] So I'm a behavioral neurologist.
218
+ [996.840 --> 1000.080] I study language.
219
+ [1000.080 --> 1001.320] That is my expertise.
220
+ [1001.320 --> 1005.760] I was lucky enough that in a place like UCSF, there is the expert
221
+ [1005.760 --> 1008.440] in measuring emotion physiology.
222
+ [1008.440 --> 1012.840] And so I could just convince her, Virginia Sturm, and say, hey, we need to look at this, and
223
+ [1012.840 --> 1017.560] she could take all her tools and her lab and concentrate the efforts on seeing all of
224
+ [1017.560 --> 1019.040] our children.
225
+ [1019.040 --> 1027.960] And this is a very important kind of bridge between psychiatry and systems-based neurology, because
226
+ [1027.960 --> 1031.720] we, for the first time, are able to measure emotions.
227
+ [1031.720 --> 1036.600] We don't just ask parents, well-meaning but anxious parents, about their children.
228
+ [1036.600 --> 1040.960] We are actually looking at the children's response.
229
+ [1040.960 --> 1043.040] So children are in this lab.
230
+ [1043.040 --> 1047.360] In this room at UCSF, they watch videos that have been standardized and coded.
231
+ [1047.360 --> 1052.960] We measure heart rate, respiration rate, skin conductance, basically all these measures
232
+ [1052.960 --> 1060.560] in the body that are really the products of emotions.
233
+ [1060.560 --> 1063.280] So they are hooked up and this is an example.
234
+ [1063.280 --> 1070.440] Many of you have seen this video.
235
+ [1070.440 --> 1077.360] So the child is looking at the video and we measure her facial expression and her heart
236
+ [1077.360 --> 1086.040] rate and respiration moment by moment as she's watching that video.
237
+ [1086.040 --> 1087.520] It's almost impossible, right?
238
+ [1087.520 --> 1089.280] Not to react.
239
+ [1089.280 --> 1093.840] And actually one of the things that we ask the children, we look at emotional regulation,
240
+ [1093.840 --> 1100.840] we also ask them not to express their feelings while they watch the videos.
241
+ [1100.840 --> 1106.080] So the finding that is really consistent, and it's probably going to be, you know, our
242
+ [1106.080 --> 1113.320] first or second paper from the center, is that there is enhanced emotional reactivity and understanding
243
+ [1113.320 --> 1118.520] in children with dyslexia, in the classic phonological type of dyslexia.
244
+ [1118.520 --> 1122.800] So this is a strength, it's going to be a strength for them in their life, but it
245
+ [1122.800 --> 1127.760] also puts some at risk, especially at critical times in their life.
246
+ [1127.760 --> 1134.880] So from a young age to adulthood and into aging, we need to protect them, because
247
+ [1134.880 --> 1140.120] understanding emotions more means that we're going to react to them more.
248
+ [1140.120 --> 1144.960] So this is an example of a reaction to disgust, and you can see the difference between the child
249
+ [1144.960 --> 1146.280] on the left,
250
+ [1147.280 --> 1153.280] watching this disgusting video, and the child on the right.
251
+ [1153.280 --> 1158.280] It's a pretty disgusting video of ear wax being cleaned.
252
+ [1158.280 --> 1169.280] So it can't get more different than this, and it's really something that we want to look at.
253
+ [1169.280 --> 1170.280] Okay.
254
+ [1170.280 --> 1175.280] And now we have brain images from all of these children, and so if we ask, is this just
255
+ [1175.280 --> 1179.840] a random fluke that you are finding? No, because we actually see brain correlates
256
+ [1179.840 --> 1180.840] of it.
257
+ [1180.840 --> 1186.120] So there are changes in the dyslexic brain, I won't get into too many details, but in the
258
+ [1186.120 --> 1193.480] right hemisphere regions that are involved in emotional understanding and emotional regulation.
259
+ [1193.480 --> 1197.280] So a really important point: we shouldn't just look at language in these
260
+ [1197.280 --> 1201.640] children, we should really take this whole-brain approach.
261
+ [1201.640 --> 1204.440] So another skill, visual spatial memory.
262
+ [1204.440 --> 1209.200] This is the task that we use for visual spatial memory.
263
+ [1209.200 --> 1216.120] Very different from the classic task that is used in the educational, usually academic,
264
+ [1216.120 --> 1219.760] batteries in which children are asked to copy figures.
265
+ [1219.760 --> 1224.360] Here they're playing this video game in which they're navigating around a virtual environment;
266
+ [1224.360 --> 1227.680] it is a very well studied task.
267
+ [1227.680 --> 1230.480] We know, we even know how mice do this.
268
+ [1230.480 --> 1233.080] We know which genes regulate this task.
269
+ [1233.080 --> 1236.720] We know which neural networks are associated with it.
270
+ [1236.720 --> 1240.800] So they need to learn 15 turns in this virtual environment.
271
+ [1240.800 --> 1245.560] And most of them use a strategy that is called allocentric, so they take landmarks from
272
+ [1245.560 --> 1247.600] this virtual neighborhood.
273
+ [1247.600 --> 1248.600] It's really hard.
274
+ [1248.600 --> 1252.320] 15 turns that they need to memorize.
275
+ [1252.320 --> 1255.240] And then they have to get it right twice in a row.
276
+ [1255.240 --> 1258.080] And then we ask them again after 45 minutes.
277
+ [1258.080 --> 1261.920] What I really like about it is it doesn't involve fine motor movements.
278
+ [1261.920 --> 1268.560] So it kind of dissociates visual memory from fine motor skills which are necessary to copy
279
+ [1268.560 --> 1271.320] figures.
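
A minimal sketch of how such a route-learning task could be scored, assuming a simple per-trial error count; the trial format and function names are illustrative, not the study's actual software.

from typing import List

CRITERION_RUNS = 2   # two error-free runs in a row, as described above
N_TURNS = 15         # the 15 turns to be memorized

def trials_to_criterion(errors_per_trial: List[int]) -> int:
    """Number of trials needed to reach two consecutive error-free runs,
    or -1 if the criterion is never reached."""
    streak = 0
    for i, errors in enumerate(errors_per_trial):
        streak = streak + 1 if errors == 0 else 0
        if streak == CRITERION_RUNS:
            return i + 1
    return -1

def delayed_recall_score(delay_errors: int) -> float:
    """Fraction of turns still produced correctly at the 45-minute retest."""
    return (N_TURNS - delay_errors) / N_TURNS

# Toy usage: a child errs on 4 turns, then 1, then produces two clean runs.
print(trials_to_criterion([4, 1, 0, 0]))     # -> 4
print(delayed_recall_score(delay_errors=1))  # -> 0.933...
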
280
+ [1271.320 --> 1273.120] So how do they do?
281
+ [1273.120 --> 1278.760] Not all kids with dyslexia do extremely well, but there is definitely a subgroup.
282
+ [1278.760 --> 1280.160] You can see them circle there.
283
+ [1280.160 --> 1281.920] The distribution is different.
284
+ [1281.920 --> 1285.800] Out of 30 kids with dyslexia, there are nine who are exceptional.
285
+ [1285.800 --> 1295.640] So while in the typical neurotypical population, there are 14% of super visual learners.
286
+ [1295.640 --> 1297.480] This is really visual memory.
287
+ [1297.480 --> 1302.160] In the dyslexic cohort, there is 30%.
288
+ [1302.160 --> 1306.960] And it's not correlated to general measure of visual IQ.
289
+ [1306.960 --> 1310.080] So what I think this indicates is that it's something different.
290
+ [1310.080 --> 1312.960] It's not just that they're smarter visual-spatially.
291
+ [1312.960 --> 1315.440] They're doing this task in a different way.
292
+ [1315.440 --> 1322.440] Because of this difference in the wiring and in the synchronous functioning of their brain
293
+ [1322.440 --> 1323.960] networks.
294
+ [1323.960 --> 1325.720] Furthermore, we would have missed them.
295
+ [1325.720 --> 1330.320] If we just did the copying of figures in the classic neuropsych testing, we would have
296
+ [1330.320 --> 1331.320] missed them.
297
+ [1331.320 --> 1332.320] It doesn't correlate.
298
+ [1332.320 --> 1337.760] So one can do really poorly in the classic tests of copying figures and remembering them
299
+ [1337.760 --> 1345.000] after a few minutes and still do extremely well at this task.
300
+ [1345.720 --> 1346.920] How about in the brain?
301
+ [1346.920 --> 1356.040] In the children who have this superior visual memory, we definitely see bigger structures,
302
+ [1356.040 --> 1359.560] more connected structures in the right hemisphere.
303
+ [1359.560 --> 1365.880] So you can see in the panel at the top, on the right, there is this structure that is
304
+ [1365.880 --> 1370.680] called the superior longitudinal fasciculus on the right and the same structure on the left.
305
+ [1370.680 --> 1374.120] And you can see visually that it's thicker on the right than on the left.
306
+ [1374.120 --> 1378.600] So again, there is a neural correlate of that.
307
+ [1378.600 --> 1385.440] Another important function that we've talked about before is semantic conceptual abilities.
308
+ [1385.440 --> 1390.880] Again, there is a spectrum, but there is a subgroup of children that are really superior
309
+ [1390.880 --> 1398.720] in finding the pattern and in grouping information based on its meaning.
310
+ [1398.720 --> 1402.560] And these are usually the kids who have rote memory difficulties.
311
+ [1402.560 --> 1409.200] So if someone has difficulty in remembering things by rote, the way to remember better
312
+ [1409.200 --> 1412.600] is to make stories out of it, to see patterns.
313
+ [1412.600 --> 1419.880] And then that becomes a problem solving and a creative thinking ability.
314
+ [1419.880 --> 1424.520] So not everybody, but again, there is a subset of dyslexic children that are really good
315
+ [1424.520 --> 1425.520] at this.
316
+ [1425.520 --> 1431.280] And in our cohort so far, it's still small, we need more to make this a more robust finding,
317
+ [1431.280 --> 1434.880] but this is a different subset from the visual spatial super learners.
318
+ [1434.880 --> 1441.880] So there is this association that is not always so easy, so we need to look at the individuals
319
+ [1441.880 --> 1443.600] to really identify it.
320
+ [1443.600 --> 1446.160] Again, we see a neural correlates.
321
+ [1446.160 --> 1453.240] This other pathway, which is called the arcuate fasciculus, in the "semantic learners", which is a temporary
322
+ [1453.240 --> 1456.680] name that we're using, is much thicker on the left than on the right.
323
+ [1456.680 --> 1462.960] So there is a neural signature associated with this strength in these children.
324
+ [1462.960 --> 1467.320] Lastly, and I thought this was very interesting, it's very preliminary, but I thought it
325
+ [1467.320 --> 1472.280] was very interesting today talking about entrepreneurship.
326
+ [1472.280 --> 1478.520] So there is a field of neuroscience and neurology that is called neuroeconomics, which studies
327
+ [1478.520 --> 1483.160] the way that people make decisions and how they deal with uncertainty.
328
+ [1483.160 --> 1487.120] So many economic decisions are made in the face of uncertainty.
329
+ [1487.120 --> 1491.920] And so there is a whole field of neuroscience studies how people make these decisions
330
+ [1491.920 --> 1495.080] and how they deal with uncertainty.
331
+ [1495.080 --> 1499.800] Now we all know from our own experience, or our children's experience, that these children
333
+ [1499.800 --> 1502.400] have to deal with uncertainty every day.
333
+ [1502.400 --> 1505.160] They look at a word and they just try.
334
+ [1505.160 --> 1506.160] They don't know.
335
+ [1506.160 --> 1511.800] They are uncertain on how they're going to approach it and what the result is going
336
+ [1511.800 --> 1512.800] to be.
337
+ [1512.800 --> 1520.120] So Krista Watson in the center is an expert in neuroeconomics, and she used a task to
338
+ [1520.120 --> 1524.800] look at this in our children and see how they react to uncertainty.
339
+ [1524.800 --> 1528.200] So is this a strength, the way they make decisions?
340
+ [1528.200 --> 1531.480] Is it a strength that could have to do with entrepreneurship?
341
+ [1531.480 --> 1537.320] And understanding how children with dyslexia make decisions, can
342
+ [1537.320 --> 1542.880] it help us from the school point of view and the remediation point of view, can it help
343
+ [1542.880 --> 1548.720] us create better learning strategies for them?
344
+ [1548.720 --> 1553.280] So this is the task; again, a very well established task that of course has
345
+ [1553.280 --> 1558.320] never been used in children with neurodiversity.
346
+ [1558.320 --> 1564.480] So they have to fish, and what they know is that they have several trials; every red
347
+ [1564.480 --> 1572.800] fish is five points, but if they catch the yellow fish, they lose everything.
348
+ [1572.800 --> 1579.520] So there is a different amount of yellow fish in the tank, but they're uncertain.
349
+ [1579.520 --> 1582.200] Let's see if I can play it again.
350
+ [1582.200 --> 1590.640] Because at some point the algorithm actually makes them get a yellow fish.
351
+ [1590.640 --> 1598.080] So basically what the task does is it changes the probability of them getting a yellow fish
352
+ [1598.080 --> 1602.000] and losing everything and the children need to learn whether for that trial they're
353
+ [1602.000 --> 1607.040] okay with the amount of points they have collected or they want to take the extra risk
354
+ [1607.040 --> 1608.720] and go another time.
355
+ [1608.720 --> 1615.560] So there is a very specific cognitive model, and you know the task is constructed in a very
356
+ [1615.560 --> 1621.240] precise way with Bayesian probabilities that we don't need to go into, but the idea is
357
+ [1621.240 --> 1628.340] that there are two main concepts that we can look at in this task: how children
358
+ [1628.340 --> 1632.000] assess risk and how consistent they are.
359
+ [1632.000 --> 1637.920] So to give an example of what children say during this task: for the first construct,
360
+ [1637.920 --> 1641.480] it's "this time I'm going to catch the yellow fish, I just know it."
361
+ [1641.480 --> 1646.280] And for the second one, it's "well, I got five points already, but I want to see what happens
362
+ [1646.280 --> 1647.360] if I fish again."
363
+ [1647.360 --> 1654.640] So consistency basically means how high they set the bar for themselves.
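
A minimal sketch of how these two constructs could be expressed as a decision rule, written as a toy simulation: "threshold" stands in for how high the bar is set (consistency) and "risk_tolerance" for how much loss probability is accepted. The point values and probability schedule are illustrative assumptions, not the study's actual Bayesian task.

import random

def play_trial(threshold, risk_tolerance, p_yellow=0.1, p_step=0.1):
    """One trial: each red fish is worth 5 points; a yellow fish wipes out
    the trial. The yellow-fish probability rises with every extra cast."""
    points = 0
    while points < threshold and p_yellow <= risk_tolerance:
        if random.random() < p_yellow:
            return 0                 # caught the yellow fish, lost everything
        points += 5                  # caught a red fish
        p_yellow += p_step           # the next cast is riskier
    return points                    # stopped and banked the points

# Toy usage: compare the average earnings of two decision styles.
random.seed(0)
cautious = sum(play_trial(threshold=10, risk_tolerance=0.2) for _ in range(1000))
bold = sum(play_trial(threshold=30, risk_tolerance=0.6) for _ in range(1000))
print(cautious / 1000, bold / 1000)
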
364
+ [1654.640 --> 1660.480] And these are our preliminary results, showing that children with dyslexia really have different
365
+ [1660.480 --> 1667.000] decision making, and this is again the more classic phonological phenotype of dyslexia:
366
+ [1667.080 --> 1672.640] what they do is they set the bar very high and they tolerate risk.
367
+ [1672.640 --> 1677.520] They have higher tolerance for risk, and so I will let the entrepreneurs and the next
368
+ [1677.520 --> 1685.560] speakers think about this and how this relates to entrepreneurship.
369
+ [1685.560 --> 1693.600] So in summary, I hope I convinced you that a whole-brain approach is a novel and very
370
+ [1693.600 --> 1698.480] relevant approach to look at children with neurodiversity, with dyslexia but also other
371
+ [1698.480 --> 1703.520] learning differences; that it is not enough to just look at how they spell and read; that
372
+ [1703.520 --> 1706.080] we need to get an idea of their strengths and weaknesses.
373
+ [1706.080 --> 1713.160] There are many strengths that can lead to many successful careers in their life, and hopefully
374
+ [1713.160 --> 1716.880] we will work with Nicole to figure that out.
375
+ [1716.880 --> 1722.640] Our work is concentrated on the K to 12 and the aging cohorts.
376
+ [1722.640 --> 1728.680] So we have the whole age span here between the two institutions.
377
+ [1728.680 --> 1735.920] We think that interventions should include knowledge about the strengths of the children
378
+ [1735.920 --> 1742.320] and that the approach should be having the right teaching method but also specific cognitive
379
+ [1742.320 --> 1749.520] training tools for the specific difficulties and that's where the technology will help us.
380
+ [1749.520 --> 1754.680] And then, none of the different professions can solve this problem alone.
381
+ [1754.680 --> 1759.360] We need to work together, clinician scientists, teachers, legislators.
382
+ [1759.360 --> 1764.760] We have one of our first studies in a prison here in California to look at the prevalence
383
+ [1764.760 --> 1768.400] of learning disabilities in the prison population.
384
+ [1768.400 --> 1775.200] We know that we need to stop the downward spiral that we heard about before.
385
+ [1775.200 --> 1776.680] So what is stopping us?
386
+ [1776.680 --> 1778.680] What is stopping us? Nothing is stopping us.
387
+ [1778.680 --> 1787.240] What is needed for the dyslexia center to match the impact of the Memory and Aging Center?
388
+ [1787.240 --> 1796.480] And one frustration that I've had in starting this center, one that fortunately was remediated
389
+ [1796.480 --> 1803.080] by the generosity of Mr. Schwab and other donors, is that NIH, the National Institutes of
390
+ [1803.080 --> 1805.720] Health, does not fund this research.
391
+ [1805.720 --> 1809.720] And this goes back to our initial thought of these kids falling through the cracks.
392
+ [1809.720 --> 1811.320] Is it a school problem?
393
+ [1811.320 --> 1812.480] Or is it a medical problem?
394
+ [1812.480 --> 1814.400] Or is it a scientific problem?
395
+ [1814.400 --> 1817.040] And amid all these questions, they fall through the cracks.
396
+ [1817.040 --> 1822.360] So look at the amount of funding that NIH gives to Alzheimer's disease or to aging disorders
397
+ [1822.360 --> 1824.080] compared to dyslexia.
398
+ [1824.080 --> 1829.360] We really need to work together on changing this and although these are not diseases, we
399
+ [1829.360 --> 1831.600] don't want to medicalize too much.
400
+ [1831.600 --> 1839.040] We need to bring the medical and health world into the field.
401
+ [1839.040 --> 1841.160] And this is what happened at the Memory and Aging Center.
402
+ [1841.160 --> 1843.400] This is the funding scheme of the Memory and Aging Center.
403
+ [1843.400 --> 1848.080] It started with a very generous donation as an endowed chair to Dr. Miller who is a director
404
+ [1848.080 --> 1853.280] of the center and then both philanthropy and NIH funding started going together.
405
+ [1853.280 --> 1860.120] So I hope that the future of the dyslexia center will depend on our generosity, but that
406
+ [1860.120 --> 1867.240] we will soon be able to also convince NIH that this is a problem that needs to be tackled.
407
+ [1867.240 --> 1869.880] So I want to thank everybody for being here.
408
+ [1869.880 --> 1874.600] The children and the families, many of you have trusted us with your children.
409
+ [1874.600 --> 1875.600] Thank you.
410
+ [1875.600 --> 1877.560] It's an incredible honor.
411
+ [1877.560 --> 1883.080] All our staff and collaborators, teachers, doctors, psychologists, basic scientists, all
412
+ [1883.080 --> 1885.920] working together for the same cause.
413
+ [1885.920 --> 1889.920] And again, thank you to Mr. Schwab and our other generous supporters.
transcript/allocentric_aiWpeqABPw8.txt ADDED
@@ -0,0 +1,376 @@
1
+ [90.000 --> 95.680] Hello everyone, my name is Vicki Chan and I am the National Manager of Programs and
2
+ [95.680 --> 98.560] Community Engagement here at APDA.
3
+ [98.560 --> 103.280] It is my pleasure to welcome everyone to this session entitled The Science Behind PD and
4
+ [103.280 --> 104.280] Art.
5
+ [104.280 --> 108.280] Our speaker is Dr. Alberto Cucca.
6
+ [108.280 --> 115.040] Dr. Cucca is a movement disorders neurologist currently working in clinical research
7
+ [115.040 --> 120.560] at H. Lundbeck, a pharmaceutical company based in Copenhagen, Denmark.
8
+ [120.560 --> 127.960] Dr. Cucca graduated cum laude from the University of Trieste Medical School, where he also completed
9
+ [127.960 --> 135.240] his neurology residency and where he is currently working towards his PhD in neural and cognitive
10
+ [135.240 --> 136.240] neuroscience.
11
+ [136.240 --> 143.360] In 2016, Dr. Cucca was the first recipient of the Marlene and Paolo Fresco postdoctoral
12
+ [143.360 --> 150.840] fellowship in Parkinson's disease and related disorders at NYU Grossman School of Medicine
13
+ [150.840 --> 152.240] in New York City.
14
+ [152.240 --> 158.640] His research is focused on the rehabilitative potential of art therapy in Parkinson's disease.
15
+ [158.640 --> 163.520] We're honored to have Dr. Cucca with us today and he will answer questions live after
16
+ [163.520 --> 164.520] his presentation.
17
+ [164.520 --> 167.520] So be sure to include your questions in the chat.
18
+ [167.520 --> 170.200] Dr. Cucca, take it away.
19
+ [170.200 --> 171.200] Thank you very much.
20
+ [171.200 --> 173.160] Thank you for the nice introduction.
21
+ [173.160 --> 174.160] Hello, everyone.
22
+ [174.160 --> 175.440] Thank you for having me here.
23
+ [175.440 --> 179.680] I'm truly delighted to be part of this great initiative of the American Parkinson's
24
+ [179.680 --> 180.680] disease association.
25
+ [180.680 --> 188.480] I'd like to thank all the organizers and those ones who worked on making this event possible.
26
+ [188.480 --> 193.640] The topic of my talk is the science behind Parkinson's disease and art, which sounds quite
27
+ [193.640 --> 195.960] an ambitious topic.
28
+ [195.960 --> 200.040] And I hope you'll forgive me if I'll try to take a more humble approach.
29
+ [200.040 --> 205.720] And I'll try in this half an hour to highlight what could be the rehabilitative and therapeutic
30
+ [205.720 --> 212.560] potential of creativity-based complementary interventions for people living with Parkinson's
31
+ [212.560 --> 213.560] disease.
32
+ [213.560 --> 220.320] And I will focus specifically on visual art making, not because this is the only kind of
33
+ [220.320 --> 224.560] art therapy that is available for people living with Parkinson's disease.
34
+ [224.560 --> 229.960] Indeed, there are different forms of art therapy, including dance therapy or drama therapy.
35
+ [229.960 --> 234.800] And some of the speakers in this conference will talk about this.
36
+ [234.800 --> 242.120] But I will focus mostly on the impact of visual art making on a specific subset of symptoms
37
+ [242.120 --> 246.160] of Parkinson's disease, which are the so-called visual spatial symptoms.
38
+ [246.160 --> 252.720] But before getting into there, just to share my disclosure and to introduce a little bit
39
+ [252.720 --> 253.720] myself.
40
+ [253.720 --> 260.240] As Vicky said before, I work in clinical research for a pharmaceutical company that is based
41
+ [260.240 --> 261.240] in Copenhagen.
42
+ [261.240 --> 263.080] It's called Lundbeck.
43
+ [263.080 --> 268.080] I also hold an adjunct faculty appointment at NYU in the Department of Neurology, where
44
+ [268.080 --> 270.320] I completed part of my training.
45
+ [270.320 --> 277.760] And I'm also involved in the neuro and cognitive neuroscience doctoral program at the University
46
+ [277.760 --> 279.200] of Trieste in Italy.
47
+ [279.200 --> 281.600] As you can tell by my accent, I am Italian.
48
+ [281.600 --> 283.360] I was born and raised in Venice.
49
+ [283.360 --> 290.400] I have no relevant specific disclosures for the topic of my today's talk.
50
+ [290.400 --> 298.200] So let's start diving in by trying to define what art means from a neuroscientific perspective.
51
+ [298.200 --> 303.040] Of course, art means different things to different people and artists will have their opinion
52
+ [303.040 --> 305.280] on what constitutes a form of art.
53
+ [305.280 --> 307.800] Philosophers will presumably do as well.
54
+ [307.800 --> 312.200] Actually, every human being, for that matter, will have a definition of art.
55
+ [312.200 --> 315.880] That's actually intrinsic inherent to the definition of art.
56
+ [315.880 --> 321.800] But if we want to constrain the question on a pure neurological and neuroscientific level,
57
+ [321.800 --> 330.880] I think it's safe to say that art is really a highly symbolic communicative system.
58
+ [330.880 --> 339.400] It is based on referential cognition and it allows us to convey messages, bypassing roadblocks
59
+ [339.400 --> 346.600] that could be encountered in more conventional forms of expression like verbal language or
60
+ [346.600 --> 348.000] written language.
61
+ [348.000 --> 355.800] So really through art, shape and meaning can be given to ideas and emotions and feelings
62
+ [355.800 --> 359.040] which would otherwise remain unformulated.
63
+ [359.040 --> 365.920] And this capability of art to communicate something is really the foundation of art therapy
64
+ [365.920 --> 369.040] as a mental health profession.
65
+ [369.040 --> 374.440] The American Parkinson's, the American Art Therapy Association defines art therapy as a mental
66
+ [374.440 --> 382.720] health profession which uses the creative process of art making to foster emotional well-being,
67
+ [382.720 --> 387.240] psychosocial adaptation and psychological wellness.
68
+ [387.240 --> 395.320] So art is used as a medium to, for example, express potentially masked or latent or under-recognized
69
+ [395.320 --> 401.520] sources of psychological distress and then address them through formal counseling, for
70
+ [401.520 --> 402.520] example.
71
+ [402.520 --> 406.920] So as such, art therapists are not really much interested in the aesthetic quality of the
72
+ [406.920 --> 412.720] art creations of their clients, but they're taking advantage of the process of making art
73
+ [412.720 --> 419.960] to help support the mental wellness and the psychological well-being of their clients.
74
+ [419.960 --> 425.240] From a neurological perspective, as neurologists, one of the things we were mostly
75
+ [425.240 --> 433.720] interested in at NYU was that our interactions with the process of making art are actually
76
+ [433.720 --> 439.920] key to sophisticated neurological functions that are elaborated at different levels of our
77
+ [439.920 --> 442.040] central nervous system.
78
+ [442.040 --> 447.960] And many of these functions are the so-called visual spatial functions.
79
+ [448.360 --> 457.400] In a nutshell, visual spatial functions reflect the cognitive use that our brain can make of the
80
+ [457.400 --> 460.640] incoming visual perceptual information.
81
+ [460.640 --> 467.080] So they're not like basic sensory functions like my ability to see something or not see
82
+ [467.080 --> 470.880] something on a low resolution level.
83
+ [470.880 --> 476.360] Rather, visual spatial functions reflect the cognitive use that my brain is capable
84
+ [476.360 --> 479.000] of making of that visual information.
85
+ [479.000 --> 485.120] For example, to adapt my behavior, I will make a practical example of a visual spatial
86
+ [485.120 --> 489.960] function, especially for those who are following us from New York City.
87
+ [489.960 --> 495.440] Getting on a subway train, especially when it's crowded there, it actually is a good example
88
+ [495.440 --> 500.480] of visual spatial function because we need to decide whether it's safe enough for us to
89
+ [500.480 --> 501.800] get on the train.
90
+ [501.800 --> 505.760] We need to take into account the speed of the sliding doors of the train.
91
+ [505.760 --> 510.680] We need to factor in our relative motion towards the target of our trajectory.
92
+ [510.680 --> 517.520] And we even have to take into account the direction of people that are walking towards us or
93
+ [517.520 --> 521.920] as probably happens in New York, trying to walk over us.
94
+ [521.920 --> 527.680] And our brain has to provide us with an accurate answer on whether it's safe enough for us
95
+ [527.680 --> 532.560] to move forward in a few hundred milliseconds.
96
+ [532.560 --> 538.880] And so this really reflects the use, the cognitive use, of visual information to generate
97
+ [538.880 --> 545.960] a map, to generate topographic coordinates which will guide my movements in space safely
98
+ [545.960 --> 548.200] and effectively.
99
+ [548.200 --> 554.040] And unfortunately, people with Parkinson's disease can experience signs and symptoms related
100
+ [554.040 --> 556.200] to visual spatial dysfunction.
101
+ [556.200 --> 559.880] They can report difficulties in estimating distances between objects.
102
+ [559.880 --> 566.680] For example, they can experience difficulties in perceiving the direction of moving items
103
+ [566.680 --> 572.800] or negotiating obstacles while they're walking through narrow and crowded spaces.
104
+ [572.800 --> 579.160] And in questionnaire studies, up to 78% of people with Parkinson's disease without known
105
+ [579.160 --> 586.360] ocular or ophthalmological problems endorse at least one difficulty related
106
+ [586.360 --> 590.080] to impaired visual spatial functions.
107
+ [590.080 --> 592.640] So we can move forward.
108
+ [592.640 --> 601.720] And basically, one of the greatest successes and advances in the field of neuroscience has
109
+ [601.720 --> 608.760] been from our perspective to be able to characterize what could be the neural substrates,
110
+ [608.760 --> 614.520] the areas of the brain that are involved with the different stages of the process of making
111
+ [614.520 --> 615.520] art.
112
+ [615.520 --> 620.680] By taking advantage of techniques like resting state functional MRI or task-related
113
+ [620.680 --> 627.760] functional MRI, it is nowadays possible with a fair degree of approximation to understand
114
+ [627.760 --> 634.640] which are the areas of our brain that get activated while we are engaged into the process
115
+ [634.640 --> 640.520] of drawing, for example, or painting or creating a work of visual art.
116
+ [640.520 --> 649.360] For example, we have to think and visualize what we are intended to portray and to draw.
117
+ [649.360 --> 655.200] And this process of visual imagery presumably activates certain areas of our visual system
118
+ [655.200 --> 658.800] that are concerned with shape recognition.
119
+ [658.800 --> 665.800] Then we have to store temporarily the intended target that we plan to paint or to draw into
120
+ [665.800 --> 674.080] some sort of spatial temporary memory while we are about to start executing our artwork.
121
+ [674.080 --> 679.600] And then we have to guide our movements as we progress through the creation of the artifacts
122
+ [679.600 --> 685.200] to correct and adapt the movements so that what we are trying to draw and paint actually
123
+ [685.200 --> 688.280] matches what we want to draw or paint.
124
+ [688.280 --> 694.360] And so that stage of the process of making art will also activate other areas of our brain,
125
+ [694.360 --> 701.040] particularly the dorsal visual system that is involved in the control of visually guided
126
+ [701.040 --> 703.040] movements.
127
+ [703.040 --> 711.200] So this process of making art can recruit and engage those very functions that can be
128
+ [711.200 --> 713.520] affected in people with Parkinson's disease.
129
+ [713.520 --> 717.320] And this is truly the main rationale of the Explore Art PD study.
130
+ [717.320 --> 725.200] We reasoned at NYU that art therapy could be used as a rehabilitative strategy to enhance
131
+ [725.200 --> 730.160] and improve those visual spatial functions that can be affected in people with Parkinson's
132
+ [730.160 --> 731.160] disease.
133
+ [731.160 --> 737.400] And the reason why we believe so is because we have a growing body of evidence from
134
+ [737.400 --> 745.440] anecdotal reports and scientific studies that the process of creating
135
+ [745.440 --> 749.840] visual art does engage visual spatial functions.
136
+ [749.840 --> 755.880] And the proof of that is that artists have been, we could say, successful neuroscientists
137
+ [755.880 --> 762.520] for centuries, they have been purposefully and deliberately taking advantage of the computational
138
+ [762.520 --> 769.160] features of the way our visual system operates to process visual information; they have been
139
+ [769.160 --> 776.000] taking advantage of these mechanisms to do what? To elicit specific aesthetic percepts
140
+ [776.000 --> 779.000] or to achieve their conceptual goals.
141
+ [779.000 --> 785.160] And they've been doing this basically to bypass the limitations that are dictated by the
142
+ [785.160 --> 789.040] physics of nature, the physics of the artifacts.
143
+ [789.040 --> 795.880] For example, if I have to portray a three-dimensional scene on a canvas which is in two dimensions,
144
+ [795.880 --> 801.520] I need to manipulate the spatial relationships of the things of the elements of the visual
145
+ [801.520 --> 807.560] scene so that my brain will be tricked in some way to believe that what I'm seeing can
146
+ [807.560 --> 813.640] be realistically perceived as a three-dimensional space when in fact it's just a two-dimensional
147
+ [813.640 --> 815.120] representation.
148
+ [815.120 --> 821.040] So artists have been quite proficient in doing this, in taking advantage of these alternative
149
+ [821.040 --> 822.040] physics.
150
+ [822.040 --> 829.480] And the reason is because our visual system fundamentally is wired up in a way that makes
151
+ [829.480 --> 834.360] it mostly concerned in detecting things that are meaningful, in detecting things that
152
+ [834.360 --> 839.520] are important rather than physically consistent.
153
+ [839.520 --> 842.320] And I have a few examples to convince you about that.
154
+ [842.320 --> 848.560] But before getting there, I think I'd like to spend a few words on the physiology of the
155
+ [848.560 --> 852.280] visual system, on how our visual system operates.
156
+ [852.280 --> 860.440] And basically our visual system consists of two major components, the what system and
157
+ [860.440 --> 862.400] the where system.
158
+ [862.400 --> 868.360] The what system is the phylogenetically more recent acquisition of our visual system.
159
+ [868.360 --> 872.320] It's the component of the visual system that we share with other primates.
160
+ [872.320 --> 878.120] And as the name suggests, this part of the visual system is concerned with the detailed
161
+ [878.120 --> 885.560] specification, with the detailed characterization, of the visual stimuli.
162
+ [885.560 --> 891.440] As such, this component, the what system is very sensitive to color changes.
163
+ [891.440 --> 895.560] Obviously, color informs us on the nature of the things we're seeing.
164
+ [895.560 --> 900.000] We are capable to understand whether something is poisonous, for example, or it could be
165
+ [900.000 --> 903.120] a potential source of food based on its color.
166
+ [903.120 --> 908.440] So the what system is really sensitive even to minimal changes of color.
167
+ [908.440 --> 910.480] It has a very high resolution.
168
+ [910.480 --> 915.600] It has a very detailed, high resolution that allows this part of the visual system to
169
+ [915.600 --> 921.760] really scrutinize the granular aspects of what we're looking at.
170
+ [921.760 --> 927.760] And it is mostly activated when visual stimuli are presented in the central part of our visual
171
+ [927.760 --> 931.520] field, in the so-called foveal region.
172
+ [931.520 --> 937.200] This system requires time to kick in, but it also shows slow adaptation.
173
+ [937.200 --> 943.440] Now everything I just told you about the what system can be basically flipped and reversed
174
+ [943.440 --> 948.640] and can be said about the other component of the visual system, which is the where system.
175
+ [948.640 --> 956.320] The where system operates in many aspects in the opposite way than the what system.
176
+ [956.320 --> 963.120] It is a more ancient, a more evolutionarily ancient component of the visual system that we
177
+ [963.120 --> 966.080] share, for example, with reptiles.
178
+ [966.080 --> 972.040] It is not concerned at all about the specific attributes of the visual stimuli.
179
+ [972.040 --> 977.080] What really matters to the where system is to be able to tell us who is where.
180
+ [977.080 --> 982.760] It's to be able to tell us the location of the various components of the visual scene,
181
+ [982.760 --> 984.600] of the whole visual scene.
182
+ [984.600 --> 990.440] As such, the where system is color blind.
183
+ [990.440 --> 998.160] And it is usually activated when stimuli are presented peripherally and transiently.
184
+ [998.160 --> 1004.960] And the most important computational feature of this component of the visual system, the
185
+ [1004.960 --> 1011.440] where system, is that it is very sensitive to changes in perceived brightness.
186
+ [1011.440 --> 1018.960] In other words, in luminance discontinuities, the where system takes advantage of changes
187
+ [1018.960 --> 1025.960] in perceived brightness to infer that where there is a discontinuity of brightness, in other
188
+ [1025.960 --> 1031.640] words where there's a contrast, that means that whatever is there, it's different from
189
+ [1031.640 --> 1035.440] its immediate surrounding, from its immediate background.
190
+ [1035.440 --> 1040.880] And the where system will be able to take advantage of that information to tell us who is
191
+ [1040.880 --> 1045.840] where in relation to the other aspects, in relation to the other elements of the visual
192
+ [1045.840 --> 1047.200] scene.
193
+ [1047.200 --> 1054.800] And the where system usually kicks in quite fast, but then it also shows a very rapid decay.
194
+ [1054.800 --> 1059.840] So keeping these notions in mind of the general physiology of the visual system, I want to
195
+ [1059.840 --> 1069.120] show you how really the artists are capable to, in some ways, trick our visual system by
196
+ [1069.120 --> 1074.400] activating mostly the what system or the where system or a different combination of the what
197
+ [1074.400 --> 1077.320] and the where system at the same time.
198
+ [1077.320 --> 1082.880] And a good example of the fact that the where system is most concerned about contrast discontinuities
199
+ [1082.880 --> 1089.640] comes, for example, with the first lesson that we can take from artists, which is we are
200
+ [1089.640 --> 1096.600] capable to know where things are, generally speaking, before knowing what things are.
201
+ [1096.600 --> 1101.160] And so the fact that the where system takes advantage of contrast discontinuities allows
202
+ [1101.160 --> 1107.120] us to understand immediately that the shadow that is cast in the foreground of the painting
203
+ [1107.120 --> 1112.000] on the left side of the slide belongs to the Archangel Gabriel in this case.
204
+ [1112.000 --> 1119.800] And that's enough for our system to associate each shadow with its projecting, its casting
205
+ [1119.800 --> 1121.920] object, without ambiguity.
206
+ [1121.920 --> 1128.080] So for us, it makes perfect sense that we have the shadow of the Archangel Gabriel going
207
+ [1128.080 --> 1132.960] in that particular direction, even though it is physically impossible that in the same place
208
+ [1132.960 --> 1138.080] and with the same source of light, we have one shadow moving from the left to the right side.
209
+ [1138.080 --> 1142.720] And at the same time, we have the shadow coming from the outer space forward.
210
+ [1142.720 --> 1148.040] We don't care because what the where system really cares about is contrast discontinuity.
211
+ [1148.040 --> 1154.240] I have another example of that, which is Piazza San Marco, the square of my city, Venice.
212
+ [1154.240 --> 1159.000] And you can see again that because of what I was telling you before about how our visual
213
+ [1159.000 --> 1164.000] system works, we don't seem to pay particular attention to the fact that it's impossible
214
+ [1164.000 --> 1168.160] that people on the left side of the piazza will have their shadow going leftward.
215
+ [1168.160 --> 1171.960] And at the same time, people on the right side of the piazza will have their shadow going
216
+ [1171.960 --> 1175.480] into the opposite direction, which is obviously not possible.
217
+ [1175.600 --> 1179.000] But again, we don't seem to pay much attention about that.
218
+ [1179.000 --> 1183.600] We can move forward because I have a few other examples to convince you about the ability
219
+ [1183.600 --> 1188.240] of artists to take advantage of the way our visual system operates.
220
+ [1188.240 --> 1189.800] This is a famous example.
221
+ [1189.800 --> 1193.800] It's one of the most famous smiles in art.
222
+ [1193.800 --> 1202.760] And a lot of people wonder why Mona Lisa's smile seems so cheerful and apparent
223
+ [1202.760 --> 1208.600] when I'm looking away from it, whereas the moment I keep staring directly at her smile
224
+ [1208.600 --> 1212.520] after a few seconds, the smile seems to fade away.
225
+ [1212.520 --> 1214.520] Why is her smile so elusive?
226
+ [1214.520 --> 1217.480] Why is her smile so difficult to catch?
227
+ [1217.480 --> 1222.880] A potential explanation that has been advanced by a neuroscientist like Margaret Livingstone
228
+ [1222.880 --> 1228.240] is that the dorsal visual system that is mostly concerned with the emotional content
229
+ [1228.240 --> 1235.360] of the images is activated when stimuli are presented peripherally and transiently.
230
+ [1235.360 --> 1242.280] And so for us, literally the best way to catch Mona Lisa's smile is to look away from
231
+ [1242.280 --> 1243.720] Mona Lisa's smile.
232
+ [1243.720 --> 1248.080] So we can move forward with the other examples.
233
+ [1248.080 --> 1255.120] This is a way artists can take advantage of equiluminant colors, which means colors that
234
+ [1255.120 --> 1262.720] have the same quantity of perceived luminance so that we can find ourselves in a situation
235
+ [1262.720 --> 1268.600] where we recognize Monet's sun because the ventral system recognizes a different color
236
+ [1268.600 --> 1271.680] of the sun as compared to its immediate surroundings.
237
+ [1271.680 --> 1277.200] But then the dorsal system, the where system, is incapable of telling us exactly where the
238
+ [1277.200 --> 1278.200] sun is.
239
+ [1278.200 --> 1282.560] And so the sun has this ambiguously floating quality that is an effect that has been
240
+ [1282.560 --> 1285.160] mastered by impressionists.
241
+ [1285.160 --> 1291.680] Another example of how artists can take advantage of the visual system is by using certain
242
+ [1291.680 --> 1298.520] view-invariant visual memory templates, those abstract categorizations of the visual stimuli
243
+ [1298.520 --> 1304.760] by virtue of which we can immediately perceive two dancing figures, actually presumably two
244
+ [1304.760 --> 1309.960] feminine dancing figures, right away, even though the image is actually made up only of
245
+ [1309.960 --> 1315.120] a collection of triangles of different shapes and different orientations.
246
+ [1315.120 --> 1320.840] And finally, the two components of the visual system can be recruited in what is technically
247
+ [1320.840 --> 1325.720] called a diachronic fashion, to elicit perceptions of motion.
248
+ [1325.720 --> 1330.360] I don't know if these effects will work, but I will encourage you to keep looking at
249
+ [1330.360 --> 1336.160] the center of the spiral at the top of this slide, and the Van Gogh at the bottom of this slide.
250
+ [1336.160 --> 1344.000] And perhaps we can try to have the spiral start by clicking on it.
251
+ [1344.000 --> 1347.960] And I will encourage you to keep looking at the center of the spiral for a few seconds,
252
+ [1347.960 --> 1351.240] disregarding the background, just the center of it.
253
+ [1351.240 --> 1358.640] And after a few seconds you have been looking at the center, you can lower your gaze and
254
+ [1358.640 --> 1361.720] enjoy the Starry Night of Van Gogh.
255
+ [1361.720 --> 1368.720] So I hope I was able to convince you that art can engage and recruit visual spatial functions
256
+ [1368.720 --> 1371.200] in quite a sophisticated way.
257
+ [1371.200 --> 1377.400] Now let's move forward and let's talk about the Explore Art PD study.
258
+ [1377.400 --> 1383.560] So our reasoning was to look at whether 20 sessions of art therapy can improve and restore those
259
+ [1383.560 --> 1393.520] visual spatial functions that can be particularly functionally interfering for
260
+ [1393.520 --> 1398.960] our patients, as they can impact a broader array of motor activities of daily living.
261
+ [1398.960 --> 1404.040] So patients with Parkinson's disease were tested with an extensive battery of clinical
262
+ [1404.040 --> 1410.000] and psychological tests and then they underwent 20 sessions of art therapy and then they were
263
+ [1410.080 --> 1413.680] retested according to the same procedures as at baseline.
264
+ [1413.680 --> 1418.240] And this study has been published in 2021 and I will just show you the main findings
265
+ [1418.240 --> 1422.200] of the study in the next slide.
266
+ [1422.200 --> 1428.120] So we can see that basically the way patients were moving their eyes to scan the environment
267
+ [1428.120 --> 1434.080] around them with an eye tracking procedure significantly improved after art therapy,
268
+ [1434.080 --> 1439.920] with the onset of more efficient visual exploration strategies. And we look at that by looking
269
+ [1439.920 --> 1446.600] at the saccades, at the rapid conjugate movements that we do when we switch our gaze
270
+ [1446.600 --> 1450.160] from one fixation point to another fixation point.
271
+ [1450.160 --> 1456.200] And these ocular motor behaviors, this capability to scan the environment around us, significantly
272
+ [1456.200 --> 1462.000] improved as a result of our art therapy intervention and became virtually indistinguishable from what
273
+ [1462.000 --> 1467.000] it was observed in age-matched healthy controls.
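
A minimal sketch of one common way to extract saccades from gaze recordings, a simple velocity-threshold detector; the 500 Hz sampling rate and 30 deg/s threshold are generic assumptions, not the study's actual analysis parameters.

import numpy as np

def detect_saccades(x_deg, y_deg, fs=500.0, vel_thresh=30.0):
    """Return (start, end) sample indices of saccades, i.e., runs of samples
    whose angular gaze velocity exceeds vel_thresh (deg/s)."""
    vx = np.gradient(x_deg) * fs
    vy = np.gradient(y_deg) * fs
    fast = np.hypot(vx, vy) > vel_thresh
    edges = np.diff(fast.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if fast[0]:
        starts = np.r_[0, starts]
    if fast[-1]:
        ends = np.r_[ends, fast.size]
    return list(zip(starts, ends))

# Toy usage: a fixation, a 10-degree rightward gaze shift, another fixation.
t = np.arange(0, 0.3, 1 / 500)
x = np.interp(t, [0, 0.10, 0.14, 0.30], [0, 0, 10, 10])
y = np.zeros_like(x)
print(detect_saccades(x, y))   # one saccade, roughly samples 50 to 70
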
274
+ [1467.000 --> 1472.280] Another important result of our study was the improvements in visual constructional
275
+ [1472.280 --> 1477.520] abilities and also the improvements in figure to background segregation which is basically
276
+ [1477.520 --> 1483.800] the ability to recognize and segregate single visual stimuli when they're embedded into more
277
+ [1483.800 --> 1488.880] complex sensory patterns, with a computerized visual test called the Navon test.
278
+ [1488.880 --> 1493.320] You can see in the right side of this slide that patients with Parkinson's disease as compared
279
+ [1493.320 --> 1497.840] to controls made a higher number of errors, but you can also see that following art therapy
280
+ [1497.840 --> 1503.680] the number of errors significantly dropped down and became again basically comparable to
281
+ [1503.680 --> 1506.400] what we observe in controls.
282
+ [1506.400 --> 1512.640] And the next slide also, it's quite interesting because it shows that following our art therapy
283
+ [1512.640 --> 1518.200] intervention we observe an increased connectivity in areas of the brain that are specifically
284
+ [1518.200 --> 1523.600] concerned with the processing of visual information, particularly V2 and particularly the
285
+ [1523.600 --> 1529.320] inferior temporal gyrus, which is concerned with shape recognition and other highly complex
286
+ [1529.320 --> 1530.920] perceptual dimensions.
287
+ [1530.920 --> 1536.400] So really art therapy seems to have the potential to induce a functional reorganization of neuronal
288
+ [1536.400 --> 1540.360] networks that are concerned with the elaboration of the visual information.
289
+ [1540.360 --> 1547.320] I'm approaching the conclusion of the results of this study with the next component, which is:
290
+ [1547.800 --> 1551.600] what happened to the motor behavior of our patients?
291
+ [1551.600 --> 1557.040] Well, we were hopeful that by improving somehow their perception this could translate also
292
+ [1557.040 --> 1562.640] into an improvement in motor function, particularly those functions like walking and
293
+ [1562.640 --> 1567.600] balance control that rely heavily on accurate visual feedback.
294
+ [1567.600 --> 1569.760] And that seems to be indeed the case.
295
+ [1569.760 --> 1574.920] We observed a significant improvement in the UPDRS Part III, but we also observed a significant
296
+ [1574.920 --> 1581.080] improvement in a test assessing the gait profile of our patients that is called the Timed Up
297
+ [1581.080 --> 1590.080] and Go test, meaning that presumably, by improving the visual spatial feedback of our patients,
298
+ [1590.080 --> 1597.280] they were also able to carry out their movements in the space in a more effective fashion, linking
299
+ [1597.280 --> 1604.040] perception to action, which I think is one of the most interesting implications of our study.
300
+ [1605.040 --> 1609.800] I think that I am approaching the conclusion of my talk.
301
+ [1609.800 --> 1618.320] So the key point that I tried to share with you is that getting engaged in the creation
302
+ [1618.320 --> 1628.640] of visual art can recruit and improve visual spatial skills that can be affected in some
303
+ [1628.640 --> 1631.680] people living with Parkinson's disease.
304
+ [1631.680 --> 1636.720] And this seems indeed to be the case, based on the preliminary findings of our exploratory
305
+ [1636.720 --> 1637.720] study.
306
+ [1637.720 --> 1642.560] And these improvements in perception potentially could be translated also into an improved
307
+ [1642.560 --> 1644.480] motor behavior.
308
+ [1644.480 --> 1652.960] These are complementary interventions that are usually well liked and they also provide
309
+ [1652.960 --> 1658.400] presumably broader benefits that extend well beyond the improvement of visual spatial
310
+ [1658.400 --> 1659.400] function.
311
+ [1659.400 --> 1666.320] They can also improve psychosocial adaptation, they can improve self-efficacy, self-esteem,
312
+ [1666.320 --> 1670.760] self-awareness. And I will conclude with a personal remark.
313
+ [1670.760 --> 1679.080] As a neurologist, one of the things that really impressed me when I was discussing
314
+ [1679.080 --> 1687.400] the results with some of the participants of the study is that I witnessed a sort of
315
+ [1687.400 --> 1694.440] identity shift process taking place as they were getting engaged into art therapy.
316
+ [1694.440 --> 1699.880] They were capable of looking at themselves as someone who's capable of creating, someone
317
+ [1699.880 --> 1706.040] who's capable of affirming; not only as a person who loses functionality over time because
318
+ [1706.040 --> 1711.680] of the progressive nature of the underlying disease, but actually as someone who can affirm
319
+ [1711.680 --> 1713.520] and communicate something.
320
+ [1713.520 --> 1718.960] And this, I believe, has an incredible healing potential.
321
+ [1718.960 --> 1726.400] And with the final slide where I would like to thank all the people that helped me in
322
+ [1726.400 --> 1732.160] the Explore Art PD project, including Dr. Fagan, the Executive Director of our Division
323
+ [1732.160 --> 1738.360] and all the other collaborators in different institutes and in different academic institutions,
324
+ [1738.360 --> 1742.920] I will thank you for your attention, and I'd be happy to take any questions.
325
+ [1742.920 --> 1744.440] Thank you very much.
326
+ [1744.440 --> 1752.200] Thank you, Dr. Cucca.
327
+ [1752.200 --> 1755.120] The connection between art and PD is fascinating.
328
+ [1755.120 --> 1758.320] We have time for a few questions.
329
+ [1758.320 --> 1759.320] So let's start.
330
+ [1759.320 --> 1766.840] I know art therapy comes in different forms, but what is the best form of art for a person with
331
+ [1766.840 --> 1771.000] Parkinson's disease?
332
+ [1771.000 --> 1772.000] Thank you, Vicki.
333
+ [1772.000 --> 1776.280] So there was a little bit of difficulty on the audio on my end.
334
+ [1776.280 --> 1781.520] I think that the question was whether there's a specific form of art that can be recommended
335
+ [1781.520 --> 1784.520] to people with Parkinson's disease.
336
+ [1784.520 --> 1790.040] This is an emerging area, so we cannot really generate conclusive recommendations and say,
337
+ [1790.040 --> 1796.640] that this particular art therapy is most suited for that kind of patient versus another.
338
+ [1796.640 --> 1803.280] But generally speaking, we can say that the art media that can be used in art therapy
339
+ [1803.280 --> 1810.520] practice can be adapted based, for example, on the degree of motor impairment or also on the
340
+ [1810.520 --> 1812.600] psychological profile of the patient.
341
+ [1812.600 --> 1817.600] There are certain materials that are more malleable, like clay, for example, that can be more
342
+ [1817.600 --> 1824.960] easily utilized by people who can have a more significant motor impairment, or there can
343
+ [1824.960 --> 1831.680] be other media that can more favor, say, a more challenging engagement into the process
344
+ [1831.680 --> 1839.200] of making art, like, for example, ornamental fabrics or drawing, where accurate eye-hand coordination
345
+ [1839.200 --> 1841.400] is required to complete the work.
346
+ [1841.400 --> 1848.200] So the program can be adapted and can be tailored on a single patient basis.
347
+ [1848.200 --> 1849.200] Great.
348
+ [1849.200 --> 1852.760] Thank you so much, Dr. Cucca.
349
+ [1852.760 --> 1855.800] Last question from Paul in Delaware.
350
+ [1855.800 --> 1861.560] My artistic abilities are as extensive as drawing stick figures.
351
+ [1861.560 --> 1867.080] Are there any programs available for the artistically challenged Parkinson's patient?
352
+ [1867.960 --> 1868.960] Well, thank you.
353
+ [1868.960 --> 1875.800] This is an excellent question and it allows me to say that being formally trained or having
354
+ [1875.800 --> 1881.560] any prior experience in art making is absolutely not required.
355
+ [1881.560 --> 1883.040] It's not a requirement.
356
+ [1883.040 --> 1890.280] As I said, as I mentioned before, art therapists will not care about how good you are or about
357
+ [1890.280 --> 1893.640] the aesthetic quality of your artwork.
358
+ [1893.640 --> 1895.720] It's the process that matters.
359
+ [1895.720 --> 1902.440] So it's really the capability of getting engaged into the purposeful recruitment of those
360
+ [1902.440 --> 1907.840] functions that can be rehabilitated by participating in the program.
361
+ [1907.840 --> 1913.240] In some ways, it's similar to when we recommend, for example, aerobic exercise to our patients.
362
+ [1913.240 --> 1917.400] It's not really needed to be able to run a marathon to start doing aerobic exercises,
363
+ [1917.400 --> 1918.400] right?
364
+ [1918.400 --> 1922.760] You don't need to be a professional runner to start moving.
365
+ [1922.760 --> 1927.080] You don't need to be a professional artist to start getting engaged into art therapy.
366
+ [1931.240 --> 1931.760] Great.
367
+ [1931.760 --> 1933.920] Thank you so much, Dr. Couga.
368
+ [1933.920 --> 1936.200] That's all the time we have right now.
369
+ [1936.200 --> 1941.040] If Dr. Couga got you in the mood to explore your inner artists, we encourage you to join
370
+ [1941.040 --> 1946.640] the Connecting Through Art session coming up soon at 1.45 pm Eastern Time.
371
+ [1946.640 --> 1951.440] Thank you once again, Dr. Couga, and thanks to all of you who have attended this session.
372
+ [1951.440 --> 1957.440] If you have a question that we did not have time for, please feel free to submit it on
373
+ [1957.440 --> 1965.280] the Ask the Presenter Community Board topic, and we will try to get you an answer.
374
+ [1965.280 --> 1968.600] We'd also love to know what you thought of this session.
375
+ [1968.600 --> 1973.960] So please click the rate button at the bottom of your screen to provide us with feedback.
376
+ [1973.960 --> 1978.720] We've got a short break, and then we'll jump right into our afternoon activity sessions.
transcript/allocentric_bvMm8gfFbZ8.txt ADDED
@@ -0,0 +1,81 @@
1
+ [0.000 --> 5.380] Arctic, a dataset for dexterous bimanual hand object manipulation.
2
+ [5.380 --> 8.480] Humans constantly manipulate complex objects.
3
+ [8.480 --> 10.820] We open our laptops cover to work.
4
+ [10.820 --> 13.060] We apply water spray to clean windows.
5
+ [13.060 --> 14.880] We use our fingers to cut with scissors.
6
+ [14.880 --> 19.360] We intuitively understand that inanimate objects do not move by themselves.
7
+ [19.360 --> 22.880] And their state changes are typically caused by our hand motion.
8
+ [22.880 --> 26.200] This understanding, however, is not yet the case for machines.
9
+ [26.200 --> 30.480] This is partly because existing datasets focus on grasping rigid objects.
10
+ [30.480 --> 35.720] They contain few or no examples of dexterous object manipulation and they do not study object
11
+ [35.720 --> 37.760] articulation with hand motion.
12
+ [37.760 --> 42.920] To enable the study of dexterous articulated hand object manipulation, we collect a novel
13
+ [42.920 --> 45.020] dataset called Arctic.
14
+ [45.020 --> 48.440] Arctic focuses on the movement of hands interacting with objects.
15
+ [48.440 --> 53.640] It goes beyond static grasps, it includes highly dexterous bimanual manipulation and dynamic
16
+ [53.640 --> 55.520] hand object contact.
17
+ [55.520 --> 57.520] Arctic is a large-scale dataset.
18
+ [57.520 --> 64.520] It contains 2.1 million video frames of 10 subjects interacting with 11 articulated objects.
19
+ [64.520 --> 69.160] Each frame is paired with accurate 3D meshes of hands and articulated objects.
20
+ [69.160 --> 73.400] Here we show rendered ground truth overlaid on videos.
21
+ [73.400 --> 78.840] Images in Arctic are taken from 8 allocentric static views and 1 egocentric moving view.
22
+ [78.840 --> 80.640] Arctic is highly accurate.
23
+ [80.640 --> 85.400] The videos are captured synchronously with a high quality mocap system, comprised of
24
+ [85.400 --> 89.080] 54 motion capture cameras to minimize the effect of occlusions.
25
+ [89.080 --> 92.000] Here we show objects in Arctic.
26
+ [92.000 --> 94.920] We compare Arctic with existing hand object datasets.
27
+ [94.920 --> 100.600] Our dataset is the first to capture the full human body, both hands and articulated objects.
28
+ [100.600 --> 103.720] It focuses on highly dexterous motion.
29
+ [103.720 --> 107.880] Because of dexterous manipulation, hand poses in Arctic are a lot more diverse than those
30
+ [107.880 --> 109.400] in other datasets.
31
+ [109.400 --> 113.400] We show a t-SNE plot of hand poses in terms of 3D joints.
32
+ [113.400 --> 116.520] Poses in our dataset are shown as blue dots.
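(To make the visualization concrete: a pose-diversity plot of this kind can be produced with scikit-learn's t-SNE. The sketch below is illustrative only; the pose array is random stand-in data, not Arctic's actual joints.)

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Stand-in for real data: N flattened 3D hand poses (21 joints x 3 coords).
rng = np.random.default_rng(0)
poses = rng.normal(size=(500, 63))

# Embed the high-dimensional poses into 2D; each dot is one hand pose.
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(poses)
plt.scatter(emb[:, 0], emb[:, 1], s=4, c="tab:blue")
plt.title("t-SNE of hand poses (illustrative)")
plt.show()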
33
+ [116.520 --> 120.480] Here we show frequently contacted regions for hands and objects in Arctic.
34
+ [120.480 --> 124.320] Brighter regions represent a higher chance of being in contact.
35
+ [124.320 --> 127.280] Arctic has more diverse contact regions than others.
36
+ [127.280 --> 132.240] We establish baselines for two novel tasks: consistent motion reconstruction and interaction
37
+ [132.240 --> 133.800] field estimation.
38
+ [133.800 --> 138.960] For the reconstruction task, our goal is to take color images from the video and to reconstruct
39
+ [138.960 --> 142.400] both hands and an articulated object for every frame.
40
+ [142.400 --> 146.960] In this task, we require the reconstructed meshes to have temporally consistent contact
41
+ [146.960 --> 147.960] and motion.
42
+ [147.960 --> 151.800] Here we show an example of an inconsistent prediction.
43
+ [151.800 --> 155.680] Consistent contact means that the predicted hand should touch the same object region within
44
+ [155.680 --> 157.280] a temporal window,
45
+ [157.280 --> 161.960] if that is the case in the ground truth. Consistent motion, on the other hand, means that the
46
+ [161.960 --> 166.440] hand and object vertices that are in contact should move in the same direction within the temporal
47
+ [166.440 --> 167.440] window.
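(As a rough illustration of these two notions, here is a minimal Python sketch. The array names, the 5 mm contact threshold, and the cosine agreement test are assumptions for exposition, not the dataset's official metric.)

import numpy as np

def contact_and_motion_consistency(hand_seq, obj_seq, thresh=0.005):
    # hand_seq: (T, Nh, 3) hand vertex trajectory; obj_seq: (T, No, 3) object vertices.
    agree, total = 0, 0
    for t in range(len(hand_seq) - 1):
        # nearest object vertex (and its distance) for every hand vertex at frame t
        d = np.linalg.norm(hand_seq[t][:, None] - obj_seq[t][None, :], axis=-1)
        idx, dist = d.argmin(axis=1), d.min(axis=1)
        in_contact = dist < thresh                      # consistent contact: the region stays touched
        dh = hand_seq[t + 1] - hand_seq[t]              # hand vertex displacement
        do = obj_seq[t + 1][idx] - obj_seq[t][idx]      # displacement of the touched object region
        cos = (dh * do).sum(-1) / (np.linalg.norm(dh, axis=-1)
                                   * np.linalg.norm(do, axis=-1) + 1e-9)
        agree += int((cos[in_contact] > 0).sum())       # consistent motion: same direction
        total += int(in_contact.sum())
    return agree / max(total, 1)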
48
+ [167.480 --> 171.520] To provide a baseline method, we present a model called ArcticNet.
49
+ [171.520 --> 176.800] We evaluate two variations of ArcticNet, a single frame baseline and a temporal baseline.
50
+ [176.800 --> 181.120] The temporal baseline allows reasoning about recent hand object motions.
51
+ [181.120 --> 185.600] Here we show the predicted 3D meshes from the temporal baseline to indicate the feasibility
52
+ [185.600 --> 186.840] of the task.
53
+ [186.840 --> 189.840] The predictions resemble the ground truth on the right.
54
+ [189.840 --> 191.440] Our baseline is not perfect.
55
+ [191.440 --> 194.440] First, the predicted object can still have jittery motion.
56
+ [194.440 --> 198.520] Secondly, the 2D alignment for hand and object is not always perfect.
57
+ [198.520 --> 202.520] Third, hands do not always provide stable contact in time.
58
+ [202.520 --> 207.960] This indicates that the task is very challenging and there is sufficient room for future work.
59
+ [207.960 --> 211.760] Contact is important for modeling hand object interaction.
60
+ [211.760 --> 216.840] When two hands interact with objects, our hands are not always in contact with the object,
61
+ [216.840 --> 218.240] but can be near.
62
+ [218.240 --> 222.840] To capture the relative hand object positions, even not in contact, we introduce the task
63
+ [222.840 --> 224.920] of interaction field estimation.
64
+ [224.920 --> 229.720] For each vertex of our hand, the task is to estimate its shortest distance to the object.
65
+ [229.720 --> 234.640] Similarly, for each object vertex, we also estimate its shortest distance to the hand.
66
+ [234.640 --> 238.720] Since there are two hands, we have four interaction fields to estimate in total.
67
+ [238.720 --> 241.880] Brighter colors represent smaller distances in the interaction fields.
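(A minimal sketch of how such fields could be computed from vertex sets, using scipy's KD-tree; the function and variable names are illustrative assumptions, not the released code.)

import numpy as np
from scipy.spatial import cKDTree

def field(src, dst):
    # Shortest distance from each vertex in src (N, 3) to the point set dst (M, 3).
    return cKDTree(dst).query(src)[0]

def interaction_fields(left, right, obj):
    # The four fields named above: each hand onto the object, the object onto each hand.
    return {
        "left_to_obj": field(left, obj),
        "right_to_obj": field(right, obj),
        "obj_to_left": field(obj, left),
        "obj_to_right": field(obj, right),
    }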
68
+ [241.880 --> 244.640] We present a baseline method, Interfield.
69
+ [244.640 --> 247.440] We evaluate both a single frame and a temporal method.
70
+ [247.440 --> 250.040] Here we show predictions from our temporal model.
71
+ [250.040 --> 255.160] The prediction correlates well with the ground truth for the left hand and the right.
72
+ [255.160 --> 259.440] We use ground-truth meshes for visualization purposes; they are not used as input.
73
+ [259.440 --> 264.200] To conclude, we present a large-scale video dataset called Arctic, containing accurate 3D
74
+ [264.200 --> 268.480] meshes of two hands dexterously manipulating articulated objects.
75
+ [268.480 --> 271.880] We present two novel tasks on Arctic:
76
+ [271.880 --> 274.240] consistent motion reconstruction and
77
+ [274.240 --> 276.360] interaction field estimation.
78
+ [276.360 --> 279.840] We introduce ArcticNet, a baseline for the reconstruction task.
79
+ [280.320 --> 281.400] and InterField,
80
+ [281.400 --> 283.680] a baseline for interaction field estimation.
81
+ [283.680 --> 287.040] Our data, models, and code are available for research.
transcript/allocentric_d3bfdfuruRg.txt ADDED
@@ -0,0 +1,12 @@
1
+ [0.000 --> 8.640] Ego-centric.
2
+ [8.640 --> 10.640] Adjective.
3
+ [10.640 --> 13.440] 1. Selfish. Self-centered.
4
+ [13.440 --> 16.400] 2. Egotistical.
5
+ [16.400 --> 20.720] 3. Relating to spatial representations.
6
+ [20.720 --> 25.440] Linked to the reference frame based on one's own location within the environment, as
7
+ [25.440 --> 29.760] when giving a direction as, right rather than, north.
8
+ [29.760 --> 33.440] Opposed to allocentric.
9
+ [33.440 --> 35.440] Ego-centric.
10
+ [35.440 --> 37.440] Noun.
11
+ [37.440 --> 41.440] 1. A person who is Ego-centric.
12
+ [55.440 --> 57.440] Ego-centric.
transcript/allocentric_dhD_mNoStPs.txt ADDED
The diff for this file is too large to render. See raw diff
 
transcript/allocentric_gmc4wEL2aPQ.txt ADDED
@@ -0,0 +1,474 @@
1
+ [0.000 --> 14.000] I'm really happy to be here.
2
+ [14.000 --> 19.080] I've spent many happy hours in this building, some in the chairs you're sitting in.
3
+ [19.080 --> 25.440] A couple of times before, speaking when Terry was running this seminar.
4
+ [25.440 --> 32.160] Many more hours with Maneesh and many other colleagues and students upstairs.
5
+ [32.160 --> 37.440] So, as Maneesh said, I'm going to talk a bit about this book I've written.
6
+ [37.440 --> 45.280] It doesn't capture everything I've done, but the things that I've done about space. It's cheap.
7
+ [45.280 --> 48.920] So, it's under $20.
8
+ [48.920 --> 57.320] So, and I have way too much to tell you, and I'm going to try to tell you too much.
9
+ [57.320 --> 61.480] So, all creatures must move and act in space to survive.
10
+ [61.480 --> 65.720] So, this is a small baby. The baby doesn't yet talk.
11
+ [65.720 --> 73.560] The baby has learned an enormous amount, and the baby has solved many, many problems without language.
12
+ [73.640 --> 78.920] And these are great apes, and I could say a similar thing.
13
+ [78.920 --> 87.320] They don't speak, they vocalize a bit, they cry, but language as we know it, they don't have.
14
+ [87.320 --> 93.160] And yet they solve many intricate problems. They can do rudimentary math.
15
+ [93.160 --> 101.960] They can tell the difference between 80 things and 85 things without counting.
16
+ [102.040 --> 106.840] So, all creatures must move and act in space in order to survive.
17
+ [107.640 --> 116.360] And even plants need to be able to move towards the sun away from the wind in order to survive.
18
+ [117.000 --> 123.560] So, my thesis, rather audacious, in this book is that all thought begins as spatial thought.
19
+ [125.400 --> 131.160] Spatial thought is moving and interacting in space with the things in it.
20
+ [131.960 --> 137.320] So, spatial thinking is the foundation of thought, not the entire edifice.
21
+ [138.200 --> 143.800] I like language, I use it a lot, but the foundation.
22
+ [144.440 --> 151.000] So, some of the argument comes from the recent work that, or the work that it's not, some of it
23
+ [151.000 --> 157.800] isn't recent, that won the Nobel Prize, from John O'Keefe's laboratory and the Mosers.
24
+ [158.360 --> 164.600] They hooked up electrodes to single cells in rats running around an environment
25
+ [165.240 --> 172.600] and found that different places, when the rat was in different places, it ignited specific cells.
26
+ [172.600 --> 178.920] So, one of them is up there. The rat moves here, that cell fires, the rat moves here,
27
+ [178.920 --> 186.040] another cell fires. They wrote a book, O'Keefe and Nadel, saying this was
28
+ [186.920 --> 194.520] a cognitive map. The problem was, it wasn't a map. Those spatial cells, those place cells weren't
29
+ [194.520 --> 203.080] displayed spatially. So, the Mosers, working in O'Keefe's lab, both postdocs, found a place where
30
+ [203.080 --> 211.560] these cells are laid out spatially, in entorhinal cortex, one synapse away. So, these were called grid
31
+ [211.560 --> 219.160] cells and recent work, and they're created by the movement of the rat. The rat has to be moving,
32
+ [219.160 --> 227.880] it's keeping track of the movement and the orientation of the head. So, they're established by movement.
33
+ [230.200 --> 237.720] Recent work with human beings, implanting single-cell electrodes in hippocampus and entorhinal cortex,
34
+ [237.880 --> 245.640] which is done sometimes before epilepsy surgery. So, recent work on human beings has found that
35
+ [245.640 --> 251.800] these single cells in hippocampus, fire for events, people and ideas, not just places.
36
+ [252.680 --> 258.760] They gather information from all over the cortex, they're multimodal and encapsulate them in a
37
+ [258.760 --> 266.680] single cell. What's more, they're arrayed in grid cells. So, grid cells array conceptual
38
+ [266.680 --> 273.640] information, temporal information, and event information. So, that makes the case very strongly
39
+ [273.640 --> 281.160] that spatial thinking is the foundation of all thought. The same brain structures that are
40
+ [281.720 --> 288.680] representing space, locations in space are also representing conceptual and temporal and social
41
+ [288.680 --> 299.160] relations. So, the metaphor I use, and I checked this with my hippocampal friends, and nobody screamed,
42
+ [300.280 --> 307.640] is that the hippocampus creates checkers for ideas or people or events, and the grid cells are like
43
+ [308.360 --> 315.000] a checker board, and they array them in these spatial or conceptual or temporal spaces.
44
+ [315.640 --> 322.840] What's more, they can be erased. So, the grid cells are constantly being rewritten while
45
+ [322.840 --> 331.640] they're active. Okay, so the book, this is the cover. I didn't design the cover, but
46
+ [331.640 --> 338.440] I'm fond of the design partly because some people see a man running, and some people see a network,
47
+ [339.080 --> 343.480] and it's meant to be both. So, it's a little bit, do you see the forest or the trees?
48
+ [345.000 --> 351.000] And these are some of the topics that I cover. I talk about, and I will give you capsules of these.
49
+ [351.000 --> 356.840] The spaces we act in. I just gave you the brain. Abilities I won't give you. Perspective is key.
50
+ [358.360 --> 365.400] Gesture, language, cognitive tools, creativity, the design of the world. So, before I get into tiny
51
+ [365.400 --> 371.240] tidbits of each of these, I want to show you some people thinking, you're going to get to watch
52
+ [371.240 --> 381.880] thought in action. So, we bring people into the laboratory, we close the door, so they're alone.
53
+ [381.880 --> 386.200] They're going to be reading a description of space. It's a complicated description,
54
+ [386.200 --> 393.560] locating eight or four landmarks in a spatial array. It's hard, and they're going to be tested.
55
+ [393.960 --> 401.080] Okay? So, they study very hard. These are students, they know how to study. So, this is a bit of one of
56
+ [401.080 --> 407.080] them. Edna's a charming town nestled in an attractive valley, entered on River Highway.
57
+ [407.080 --> 411.640] River Highway runs east west at the southern border of the town of Edna.
58
+ [411.640 --> 416.840] Toward the eastern border, River Highway intersects with Mountain Road, which runs north of it.
59
+ [416.840 --> 425.480] So, you get the idea and you won't be tested. So, here is one woman, and so watch her. She's reading
60
+ [425.480 --> 430.520] that description and watch her hands. She's not watching her hands.
61
+ [439.000 --> 444.680] So, I think you can see she's making a map. Why? She's making lines for the paths,
62
+ [444.680 --> 454.360] and she's poking dots for the places. And 75% of our participants do this, and when they do it,
63
+ [454.360 --> 461.640] they remember better. They perform better on the test. And if we tell them to sit, I'm sorry?
64
+ [461.640 --> 468.600] Oh, no. They aren't primed at all to do it. We don't tell them. In fact, if you tell people to
65
+ [468.600 --> 477.560] gesture, it's mixed. So, as I said, 75% of the people do it. When they do it, they do better
66
+ [477.560 --> 486.440] on the test. And if we tell them to sit on their hands, they do worse. And some of them say,
67
+ [486.440 --> 494.200] I can't think without my hands. So, I'm going to continue in this one. I'll show you one more.
68
+ [494.200 --> 500.360] We've by now done this with all kinds of stimuli. It works best with spatial environments
69
+ [500.360 --> 503.480] and with mechanical systems. You had a question.
70
+ [506.920 --> 513.160] Sometimes people talk to themselves. Sometimes not. It really varies. And there are people that
71
+ [513.160 --> 521.240] don't gesture and do fine. But, you know, anytime you study people, you get variability. You can
72
+ [521.240 --> 528.840] do fine. People do the strangest things. So, this person is reading a description of how a car
73
+ [528.840 --> 535.960] brake works. It's again hard. They get to read it four times. So, from the brake fluid reservoir,
74
+ [535.960 --> 541.880] brake fluid enters and travels sideways and down the tube. As the brake fluid accumulates at the
75
+ [541.880 --> 547.800] bottom of the tube, pressure is exerted on the small pistons inside the wheel cylinders.
76
+ [547.800 --> 553.080] This causes the pistons to move outward toward the brake drum. It's hard for me to read it without
77
+ [553.080 --> 565.240] gesturing. So, we'll watch him, and hope he comes up. He's just great. There. Yeah, he's a dancer.
78
+ [567.000 --> 573.080] So, you can watch him. He's trying to figure out the car brake. And you can see him kind of
79
+ [573.080 --> 578.840] reading, testing, reading, testing. I mean, as I say, you're watching someone think.
80
+ [585.960 --> 590.200] That's not a gesture.
81
+ [590.200 --> 608.760] Okay. So, you got it. Okay. And now I'll go back to the other. So, the gestures model the situation
82
+ [608.760 --> 616.760] described in the text. The models are spatial-motor; they aren't visual. They aren't
83
+ [616.760 --> 624.440] looking at their hands. Preventing gesture reduces comprehension. And we get the feeling,
84
+ [624.440 --> 632.200] watching them. The gestures are translating the language into thought by creating a model.
85
+ [632.200 --> 638.200] The language itself is arbitrary. If that were in Chinese, it would look very different.
86
+ [639.000 --> 643.880] We've also looked at gestures for others. I'll tell you one study of many.
87
+ [644.840 --> 652.760] We had students watch an explanation of how a car engine works. Half of them watched one video,
88
+ [652.760 --> 658.840] half the other. One video used gestures that showed structure, the shapes of the parts.
89
+ [658.840 --> 662.920] We didn't think that would have an effect because that kind of information is easy.
90
+ [663.720 --> 672.600] The other one showed action. And action is really hard for people to get. That's a longer story.
91
+ [672.600 --> 678.600] Again, they were tested. They could see these videos four times. They were tested with true-false
92
+ [678.600 --> 688.120] questions on structure and action. They made a videoed explanation of the engine. And they also
93
+ [688.120 --> 694.840] made a visual explanation of the engine. So, what we found was the people who watched action
94
+ [694.840 --> 701.640] gestures got more action questions correct. But what was more striking to us is they really
95
+ [701.640 --> 708.600] incorporated the action just from seeing the gestures. The text was the same. But just from
96
+ [708.600 --> 715.400] the gestures, there was more action in their visual explanations. There were more action gestures.
97
+ [715.400 --> 722.120] The gestures were invented. They weren't the ones they saw for the most part. And they used more
98
+ [722.120 --> 728.600] action words, even though they hadn't seen more action, heard more action words. So, this is a
99
+ [728.600 --> 735.160] visual explanation from someone who saw structural gestures. Again, same text. Another one.
100
+ [736.760 --> 743.240] This is someone who saw action gestures. And you see much more action there. There were arrows.
101
+ [743.240 --> 750.680] We counted arrows. Many more arrows. You have that explosion there. And you have bubbles there.
102
+ [751.480 --> 758.520] So, there were many more depictions of action just from seeing the gesture. So, seeing those
103
+ [758.520 --> 764.840] action gestures made them understand the action in a much deeper way. Here's another one.
104
+ [764.840 --> 771.560] They also got the stages more clearly because the stages are distinguished by actions.
105
+ [772.920 --> 778.680] So, gestures and graphics. Gestures are, both are a more direct expression of thoughts
106
+ [778.680 --> 785.480] than language, which is pretty arbitrary. Gestures are actions. So, they're especially good for
107
+ [785.480 --> 792.600] actions. However, they're fleeting. Graphics are static. And they stay around for you to think
108
+ [792.600 --> 799.080] about and inspect and maybe change. So, gestures are for here and now. And as I said, there are
109
+ [799.080 --> 807.720] great apes that use gestures, mostly for sex. But they use them. Okay. Graphics are for
110
+ [807.720 --> 818.760] other eyes and other times. So, they're in some sense the first solid evidence for symbolic
111
+ [818.760 --> 825.080] thinking in people. We don't know when language evolved. But we can see when people started
112
+ [825.080 --> 832.520] putting their mind in the world. I'll get to that in a second. I'm already going to see that
113
+ [832.520 --> 839.160] I'm going to have to skip stuff. Okay. So, space is special. It's supramodal. It's kind of the
114
+ [839.160 --> 848.280] confluence of vision. But blind people can have really good spatial cognition. So, you get it
115
+ [848.280 --> 856.120] from hearing, from smell, from kinesthetics. Where there's wind, what pavement feels like, all of
116
+ [856.120 --> 862.360] these are cues that blind people use. But it's a confluence of many senses. It's essential for
117
+ [862.360 --> 867.240] survival. If we didn't know how to get home or get food in our mouths, we'd be dependent on
118
+ [867.240 --> 874.040] people forever. It's the basis for other knowledge you've already seen. It's not like geometry or
119
+ [874.200 --> 882.360] physical measurement. Our mental representations of the spaces that we act in are distorted and
120
+ [882.360 --> 892.840] they're distorted by our action and our perception. We live in many spaces and we create many spaces
121
+ [892.840 --> 900.680] by our actions and for them. So, the space of the body, I'm going really fast over the space of
122
+ [900.760 --> 907.880] the body. Here, function, or action, trumps size. So, our representations of bodies, you can see from
123
+ [907.880 --> 915.320] children's drawings. They're the same all over the world. Big heads, big hands. And the brain, the
124
+ [915.320 --> 924.200] brain exaggerates the places in our bodies that are important for our action and shrinks the ones
125
+ [924.200 --> 931.320] that are less important. So, the space of the body is distorted in that way around our actions.
126
+ [931.800 --> 941.320] The space around the body, here we've looked at what kinds of reference frames people use to
127
+ [941.320 --> 952.680] understand the world around them and they use their own body axes and gravity. And again, gravity
128
+ [952.680 --> 960.120] affects our actions enormously and the way things work. The body axes vary by asymmetry. Head-
129
+ [960.120 --> 969.880] feet is more essential for keeping things in mind than left-right, and head-feet than front-back.
130
+ [969.880 --> 978.360] So, it's again a longer story. Space of exploration. A remarkable feat of the human mind and also we
131
+ [978.360 --> 985.320] saw of the rat's mind, is to experience a world from exploration and create a map of it. That's
132
+ [985.320 --> 992.280] as if from an overview. So, we experience the world as we see it and move through it, but we are
133
+ [992.280 --> 999.240] able to create a mental representation of a larger space, one that we can't see from one place
134
+ [999.240 --> 1007.240] in our mind. So, this strikes me as a remarkable feat of human beings. And we talk about the embedded
135
+ [1007.240 --> 1013.960] route, the one where we're walking through it as a route and the overview as a survey. And human
136
+ [1013.960 --> 1020.520] beings can switch back and forth, something we've studied and others. And when we talk, we tend to
137
+ [1020.520 --> 1026.680] mix those perspectives. So, perspective is a key issue. It's a spatial concept. It's a key
138
+ [1026.680 --> 1032.840] issue everywhere. And one part of it is whether we're seeing the world from our point of view,
139
+ [1032.920 --> 1039.960] egocentric or allocentric from above, but another is whose perspective are we taking mine or yours?
140
+ [1040.600 --> 1047.320] And some work with Bridget Martin Hard showed that when this is her husband, Patrick,
141
+ [1047.320 --> 1053.960] and when action is involved, we automatically take the perspective of the other. We're more likely
142
+ [1053.960 --> 1062.040] to take the perspective of the other than our own. So, if we tell people where is the bottle,
143
+ [1062.760 --> 1071.240] where did Patrick put the book with respect to the bottle? And I'm standing next to you
144
+ [1072.040 --> 1078.040] asking that question. So, from our point of view, the bottle is on the left, but people answer
145
+ [1078.040 --> 1083.960] from Patrick's point of view and say, on the right. And if we make them take their own point of view,
146
+ [1083.960 --> 1091.000] takes them longer. So, when we see action, we're inclined to take the perspective of the actor,
147
+ [1091.000 --> 1097.960] perhaps to understand the action, perhaps to prepare our own. Is he going to throw the book at us
148
+ [1097.960 --> 1103.720] or is he going to give it to us? We have to prepare our own. So, perspective taking turns out
149
+ [1105.160 --> 1111.080] to underlie and I'm not going to be able to go through this, to underlie the empathy, certainly
150
+ [1111.080 --> 1117.560] problem solving, prediction, and creativity. We found that when we ask people to think of new
151
+ [1117.560 --> 1124.040] uses for old objects, a common creativity task, they're really good at it when we ask them to adopt
152
+ [1124.040 --> 1131.960] different roles of different people. The space of exploration is full of distortions. And
153
+ [1132.920 --> 1140.440] we group things hierarchically. We use landmarks as landmarks and things get drawn into landmarks.
154
+ [1140.440 --> 1146.840] So, people think that Jock's house is closer to the Eiffel Tower than the Eiffel Tower is to Jock's house,
155
+ [1146.840 --> 1153.160] perspective I just told you about. There are a whole range of ways that we construct the space
156
+ [1153.160 --> 1161.480] that we explore and it's full of distortions. So, we liken that to a collage. It's not coherent,
157
+ [1161.480 --> 1170.040] in two dimensions or even three; not a mental map. And each of these distortions of space
158
+ [1170.040 --> 1176.520] has a parallel distortion in social judgments, in temporal judgments, in conceptual judgments.
159
+ [1176.840 --> 1183.800] Something I won't be able to go through, suggesting that maybe all judgments are cognitive
160
+ [1183.800 --> 1192.680] collages. We just gather information from all over how we gather it depends, and we try to make
161
+ [1192.680 --> 1200.520] something coherent or something reasonable from this collage of information. Social spaces.
162
+ [1200.520 --> 1210.600] So, anyone who's observant about ourselves and about babies can't help but know that we spend
163
+ [1210.600 --> 1216.920] an enormous amount of time playing games that don't involve language, that are social games from
164
+ [1217.880 --> 1223.880] rolling a ball with a baby back and forth, teaching turn-taking and trust, to basketball,
165
+ [1224.520 --> 1231.800] which is incredibly complicated, and we're doing those actions, or the good players in split
166
+ [1231.800 --> 1238.840] seconds. So, why are we spending so much time playing these social interaction games that don't
167
+ [1238.840 --> 1246.360] involve language? Even Scrabble. It's about words, but the way we communicate is by making a word.
168
+ [1246.360 --> 1255.080] We don't speak. So, okay. So, then there are the spaces we create in the mind,
169
+ [1255.080 --> 1265.080] in gesture, on a page, in the world, in words, on the page. So, a bit on language. Our minds go
170
+ [1265.080 --> 1271.320] from thought to thought, the way our feet go from place to place. We saw that in place in the
171
+ [1271.320 --> 1278.440] grid cells, and we talk about actions and thought in the ways that we talk about, in the ways that
172
+ [1278.440 --> 1285.320] our hands act on objects. So, splitting hands and feet. We raise ideas, pull them together, tear
173
+ [1285.320 --> 1292.760] them apart, turn them upside down, push them forward, toss them out. Now, Lakoff and Johnson
174
+ [1292.760 --> 1298.520] presented these as metaphors, but I don't know if they're metaphors. We have no other way
175
+ [1299.080 --> 1307.400] of talking about actions on thought, except these ways. So, I think we internalize the actions,
176
+ [1307.400 --> 1313.880] and then use them on these thoughts that we objectify or reify. There are metaphoric
177
+ [1313.880 --> 1320.040] uses all over the place. We talk about being out of depth, the top of the class, feeling up,
178
+ [1320.040 --> 1327.400] falling into depression. These are all communicated with gesture too: we've grown closer,
179
+ [1327.400 --> 1333.160] grown far apart. So, those actions on objects get turned into gestures on thought.
180
+ [1335.400 --> 1340.440] We put the world in the mind, I just talked about that. Now, I'm going to talk a little bit
181
+ [1340.440 --> 1346.520] about putting the mind into the world. And we do that to enhance our physical lives,
182
+ [1346.520 --> 1352.600] but we do, I'm going to talk about the ways we put thought in the world to enhance our cognitive
183
+ [1353.560 --> 1362.360] lives. Our cognitive tools can inform and educate. They can augment memory and information
184
+ [1362.360 --> 1369.160] processing by offloading it to something out there. They use space to represent literal
185
+ [1369.160 --> 1375.000] and metaphoric space and action. They promote inference and discovery because I can look at the
186
+ [1375.000 --> 1381.480] page and think about it. I don't have to keep the page in mind and think. They allow, they're
187
+ [1381.480 --> 1388.920] public. So, they allow creation, revision, and inference by a community. And I put forth
188
+ [1388.920 --> 1398.360] they're uniquely human. Nobody's seen a great ape make a map in the sand. Now, I'm open to other examples.
189
+ [1398.360 --> 1404.840] I'd be very interested. And as I said, they're the earliest evidence of symbolic thought. So,
190
+ [1404.840 --> 1410.280] let's look now at some of the ways that people put thought in the world. This is the former
191
+ [1410.280 --> 1415.240] oldest map. And if you still look at websites, you're going to get this one. Note that there are
192
+ [1415.240 --> 1421.320] two perspectives. It's a big stone. That's why it's survived from Babylonia. And it's
193
+ [1421.320 --> 1428.840] representing the overview of the paths and frontal views of the landmarks. So, it's mixing
194
+ [1428.840 --> 1434.920] perspectives, which is extremely common, even though the linguists and the geographers say we
195
+ [1434.920 --> 1443.560] shouldn't be doing that. This is the current oldest map. It's about two inches by one inch.
196
+ [1443.560 --> 1449.960] So, it's very small. It would fit in your pocket. It was from not long ago in a cave in Spain.
197
+ [1449.960 --> 1458.440] So, here it's very enlarged. It goes back 13,000 years. It again shows the paths and the landmarks
198
+ [1458.440 --> 1466.920] at the same time. And it depicts the scene outside the cave. So, can you imagine being an archaeologist
199
+ [1466.920 --> 1473.400] walking in this cave, picking up this small stone and seeing it's a map of the place we're in,
200
+ [1473.400 --> 1481.160] and how exciting that would be. So, space, we also show this. There are also ancient examples of
201
+ [1481.160 --> 1488.760] the spaces of the skies. This one is actually showing, it's going back, what, 5,000 years,
202
+ [1488.760 --> 1497.240] and it's showing an asteroid entering the skies. It's something we wouldn't notice without a
203
+ [1497.240 --> 1504.120] telescope anymore, because we can't see stars anymore. These are Eskimo coastal maps. They're small,
204
+ [1504.120 --> 1509.640] they fit in a mitten, and they show the outline of the coast. These are for canoeists that go up
205
+ [1509.640 --> 1515.720] and down the coast, so they can be explored with their hand instead of their eyes. And if they fall
206
+ [1515.720 --> 1522.360] out of the boat, they float. More floating maps. This is South Sea Islanders. The bamboo strips are
207
+ [1522.360 --> 1529.080] the ocean currents, the shells are the islands. They can't be seen, one from another. But people
208
+ [1529.080 --> 1535.560] learned how to navigate with these, also using the stars and the currents of the ocean,
209
+ [1535.560 --> 1543.560] and at least some of them got back. Okay, more maps, hands. This was used by North Coast Indians
210
+ [1543.560 --> 1551.880] to show the major cities. Anyone from Michigan holds up a hand to show where they live.
211
+ [1551.880 --> 1558.840] This is a New Yorker example. Okay, so we have space of the world around us of the stars,
212
+ [1558.920 --> 1567.640] time, events. This is Chauvet, and it's showing animals. There are people, not here, but in others.
213
+ [1568.200 --> 1574.520] And this is probably a stampede that it's showing. This is what's called Newspaper Rock,
214
+ [1574.520 --> 1580.360] somewhere in Utah, a whole set of petroglyphs. And again, showing a hunting scene. You can see
215
+ [1580.360 --> 1587.960] stick figures with their bows and arrows and the animals they're hunting. This is another
216
+ [1589.000 --> 1595.880] celestial event. It is a petroglyph. Here's the petroglyph. Here's the drawing of it. It was found
217
+ [1595.880 --> 1602.200] in Kashmir. It was dated recently, and an astronomer in India
217
+ [1602.200 --> 1611.640] asked, so why are there two suns? The Indian astronomer showed that there was a
218
+ [1611.640 --> 1620.280] supernova on that day. And it was a striking enough event that somebody took
220
+ [1620.280 --> 1624.920] the trouble of carving it into a petroglyph and keeping it there for the future.
221
+ [1625.720 --> 1632.840] So we have events, people, objects, things. This is how to make bread. It's in an Egyptian tomb.
222
+ [1634.200 --> 1640.600] You wonder what it's doing there. Time: calendars from many different cultures,
223
+ [1640.600 --> 1650.120] some are circular. That's a story I could tell. Some are more linear, like the calendars that we
224
+ [1650.120 --> 1657.960] use. Tallies, one-to-one correspondences, rudimentary math, more sophisticated counting.
225
+ [1657.960 --> 1662.760] Some people use more than their 10 fingers to count. They were much cleverer than we.
226
+ [1663.640 --> 1670.920] Peruvian quipus for counting, an abacus. So ancient graphics show people, animals, things.
227
+ [1671.640 --> 1680.760] Place and space. Time and events and number. Again, this rudimentary number, one-to-one correspondences.
228
+ [1680.760 --> 1688.440] Modern graphics show the same things. And these are things that must be important to people.
229
+ [1689.000 --> 1692.680] And in fact, the brain takes a special note of them.
230
+ [1694.120 --> 1701.000] Okay. So that's ancient depictions, ancient ways of putting the mind in the
231
+ [1701.000 --> 1709.240] world. I'm going to show you some more contemporary ones. More or less modern graphics sort of began
232
+ [1709.240 --> 1714.600] with the Enlightenment. And Diderot was one of the first. This is Diderot's
233
+ [1714.600 --> 1722.600] encyclopedia of all things, including the mind. The beginning has a network of
234
+ [1722.600 --> 1729.240] faculties of the mind, an early cognitive scientist. So here he's teaching people what a
235
+ [1729.240 --> 1735.400] diagram is. People would have seen depictions of scenes, like at the top. That's a factory
236
+ [1735.400 --> 1742.520] making pins. That would be familiar to people. The diagrammatic part wouldn't be. So he's got the
237
+ [1742.520 --> 1748.920] top part in a box. The bottom part in a box separating them. The bottom part is a catalogue
238
+ [1748.920 --> 1756.920] of the instruments used above. It's something familiar to us. But the sizes don't correspond to
239
+ [1756.920 --> 1763.480] the actual sizes. The sizes are distorted, so you can see them. The lighting,
240
+ [1763.480 --> 1769.000] you can see at the top, is normal lighting. They all have that kind of lighting, just as if the engraver
241
+ [1769.080 --> 1777.720] used one window with lighting. And the shadows and the lighting are the way natural light would fall,
242
+ [1777.720 --> 1786.120] or the way the etchers thought natural light would fall. The lighting here is to accentuate
243
+ [1786.120 --> 1794.520] the three-dimensionality. So it's using the features that we normally associate with scenes.
244
+ [1794.520 --> 1800.280] It's using them in different ways in order to communicate better. So things are lined up in
245
+ [1800.280 --> 1807.560] rows and columns, not the way they're used. So this is an early diagram that's teaching what a
246
+ [1807.560 --> 1814.280] diagram is. This is an 18th century graph and it's showing balance of payment. It's plotting
247
+ [1814.280 --> 1822.680] balance of payment over time. And somebody looked at all the graphs, this is already 20 years old,
248
+ [1822.680 --> 1829.960] in both scientific journals and places like the economist and USA Today and found that 75%
249
+ [1829.960 --> 1837.000] of them are plotting one or two variables against time, which is again the early uses of it.
250
+ [1837.000 --> 1842.840] So I'm going to skip that because you've seen it. This is work with Maneesh. Other ways that
251
+ [1842.840 --> 1848.680] diagrams are used: visual instructions to show you how to put a TV cart together. This is what
252
+ [1849.080 --> 1857.240] high spatials produced. And this is what Maneesh and his team turned into an algorithm where each
253
+ [1857.240 --> 1863.720] is using the cognitive design principles that we uncovered through the psychological research
254
+ [1865.160 --> 1873.160] and the design principles are show each step and each new part is a new step. Show the perspective
255
+ [1873.400 --> 1881.080] of action and embellish with guidelines and arrows. And this could be an explanation for a car
256
+ [1881.080 --> 1887.560] engine. You'd want to follow the same principles. Other ways of using graphics, comics, I'm a big
257
+ [1887.560 --> 1895.640] fan of comics. Not so much the subject matter as the way they use the visuals. So this is using
258
+ [1895.640 --> 1902.520] space and time together and that's something quite common. You have the time in the windows
259
+ [1902.520 --> 1909.480] going left to right and top to bottom, and superimposed on that is space. There's a map in the
260
+ [1909.480 --> 1916.520] Mayan codices that did something similar. So it's a natural way to think. Larry Gonick drawing
261
+ [1916.520 --> 1922.280] spatial analogies to tell you about acceleration; Harold and the Purple Crayon,
262
+ [1923.240 --> 1927.320] Harold drawing his own story, for those of you who know the book.
263
+ [1927.720 --> 1935.880] And Steinberg again using sort of metaphor or humorous ways of using space.
264
+ [1937.400 --> 1947.000] So good graphics schematize. How are we doing on time? They eliminate the irrelevant and exaggerate
265
+ [1947.000 --> 1953.720] the essential. They show multiple perspectives. This London Tube map does not, but you saw plenty of
266
+ [1953.720 --> 1959.240] examples that do and they're multi-modal. They have symbols and language in addition to
267
+ [1960.280 --> 1967.560] the visuals. They use spatial relations and marks and elements in ways that convey meaning
268
+ [1967.560 --> 1976.200] directly. Again in contrast to language. So proximity in space signifies proximity on an
269
+ [1976.200 --> 1985.480] abstract dimension. We've grown closer or grown apart. Centrality is centrality and directionality.
270
+ [1985.480 --> 1992.600] The vertical is loaded again because of gravity. So it takes power and resources and health to go up.
271
+ [1993.160 --> 2000.840] So just about everything goes up. Horizontal seems to be more or less arbitrary. There's nothing in
272
+ [2000.840 --> 2012.120] the world that captures that. But our cultural artifacts or our cultural customs do. So reading and
273
+ [2012.120 --> 2020.840] writing order, and also the order of math, and they don't always agree. So again there are parallels
274
+ [2020.840 --> 2027.240] in language and gesture. I'm going to gesture events in time along a horizontal line
275
+ [2027.320 --> 2036.360] from my right, from my left to right. I just did it the opposite for you. So centrality. We ask
276
+ [2036.360 --> 2043.640] people to make their social networks and of course they're in the middle. Gravity. We looked at
277
+ [2043.640 --> 2049.320] diagrams for evolution and geological ages in all the introductory texts we could find in the
278
+ [2049.400 --> 2056.760] library. And the present time, man, it's always man, is always at the top, because that's the best.
279
+ [2059.880 --> 2065.240] So horizontal use of space. Writing order correlates with a number line with
280
+ [2065.240 --> 2072.280] agency and power. I'll give you some examples with speed with aesthetics. And value seems to go
281
+ [2072.280 --> 2079.000] with handedness. It's the only thing that goes with handedness. And it also goes with reading order.
282
+ [2079.320 --> 2086.840] So power. We have Adam on the left and Eve on the right. This is quite common in pictures.
283
+ [2088.040 --> 2096.280] And in fact, I saw an exhibit at the British Museum once of art in India. And when
284
+ [2096.280 --> 2103.240] the language shifted from left to right to right to left the painting shifted. Okay all of a
285
+ [2103.240 --> 2109.160] sudden the Maharaja is not running from left to right. He's running from right to left with his
286
+ [2109.160 --> 2116.840] harem chasing after him. But it shifts when the language direction shifts. So this is soccer goals.
287
+ [2118.840 --> 2125.080] We see motion going in reading order as more forceful, violent and beautiful. And in fact more
288
+ [2125.080 --> 2132.200] fouls are called when, from the referee's point of view, the ball is going opposite reading order.
289
+ [2132.200 --> 2138.760] So nobody's tested that yet in Israel or one of the Arab countries they need to. So the
290
+ [2138.760 --> 2147.960] butt of jokes nothing is left nothing is right. Elements. So we can have elements that we talked about
291
+ [2147.960 --> 2153.080] the meaningful use of space. Now the meaningful use of elements and I'm going to argue that they
292
+ [2153.080 --> 2159.720] communicate quite directly. They can be iconic. So they look like what they're representing. Metaphoric
293
+ [2159.800 --> 2166.680] like scales of justice and meaningful marks which is what I'll dwell on and symbolic.
294
+ [2171.160 --> 2179.240] So symbolic we have or the meaningful marks we have dots. I don't know otherwise that one got
295
+ [2179.240 --> 2188.520] truncated. Dots indicate places or ideas, as they do in the brain, and lines indicate
296
+ [2188.600 --> 2196.280] paths between ideas. So relations, arrows are asymmetric relations and containers can either be
297
+ [2196.280 --> 2203.320] something like a set or an area or a box, as Diderot used it. And I'm not going to be able to
298
+ [2203.320 --> 2210.280] talk about all of these, but there are shared meanings for these: for dots and lines and
299
+ [2210.280 --> 2217.240] boxes and arrows. And we looked at that in a variety of experiments looked at the shared meanings
300
+ [2217.240 --> 2222.680] by presenting the visuals and asking people to interpret them or by presenting something in
301
+ [2222.680 --> 2228.680] language and asking people to construct a visual. And when you get the same thing you can say the
302
+ [2228.680 --> 2238.440] meaning is shared. So with Jeff Zacks we looked at inferences from graphs. We gave people
303
+ [2238.440 --> 2244.600] one of those graphs, either A-B, or labeled with the heights of kids eight and twelve, or the heights of
304
+ [2245.080 --> 2251.640] men and women. So they only saw one and they were asked to interpret it. And our idea was that
305
+ [2251.640 --> 2258.360] bars separate. They contain and separate. So they say there are more Bs than As, or Bs are
306
+ [2258.360 --> 2267.960] greater than As, but they don't imply a relationship. A line implies a relationship. So it says A and B
307
+ [2267.960 --> 2274.280] share a dimension; they just have different values on the same dimension.
308
+ [2274.280 --> 2281.000] And so we expected people to give us discrete comparisons here and trends with the lines. And
309
+ [2281.000 --> 2288.600] that's in fact what happened. And if we describe trends and ask people to create a visual we get lines.
310
+ [2288.600 --> 2294.760] If we describe discrete comparisons and ask people to create a visual we get bars.
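(The finding is easy to see for yourself: the same two values drawn as bars read as a discrete comparison, drawn as a line they read as a trend. A small illustrative matplotlib sketch, with made-up numbers:)

import matplotlib.pyplot as plt

labels, heights = ["A", "B"], [1.2, 1.5]
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(7, 3))
ax1.bar(labels, heights)                  # reads as "B is greater than A"
ax1.set_title("Bars: discrete comparison")
ax2.plot(labels, heights, marker="o")     # reads as "height rises from A to B"
ax2.set_title("Line: a trend")
plt.tight_layout()
plt.show()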
311
+ [2295.880 --> 2300.280] I'm going to skip route maps, but you can see that there are lines and dots, and
312
+ [2300.280 --> 2309.400] the visual elements tend to be these sorts of visual elements in the sketch maps that people
313
+ [2309.400 --> 2316.520] produce, and they map onto language. So the route maps aren't analog even though they
314
+ [2316.520 --> 2324.680] could be. They're distorted in much the way language is. So just as we say "turn" without specifying
315
+ [2324.920 --> 2334.280] the angle of the turn, the angles are often not right angles. And the curvature is discrete: either it's straight
316
+ [2334.280 --> 2341.160] or it's curved. So you either go down a straight path or you go around when it's curved.
317
+ [2341.880 --> 2348.040] So we found that these descriptive and depicting elements are parallel suggesting the same
318
+ [2348.040 --> 2353.400] underlying representation. And when you think about it the brain doesn't have words,
319
+ [2353.400 --> 2359.080] doesn't have propositions, doesn't have images; it has neurons and glia and all kinds of other
320
+ [2359.080 --> 2366.360] things that are affecting our thinking. So the representation that generates either pictures or
321
+ [2366.360 --> 2374.440] language seems to be the same. So I showed you, or you saw, these schematic maps; this is how
322
+ [2374.440 --> 2382.280] to get to Taco Bell from one of the dorms, and it's making right angles where there are none.
323
+ [2382.280 --> 2391.080] And the distances are way off. So why do they work? They work because of context. Language
324
+ [2391.080 --> 2399.240] too works in a context. We have a mutual understanding for interpreting our language. And a sketch
325
+ [2399.240 --> 2406.520] map is also going to work in a context. And if I come to a fork in the road and I think it's a
326
+ [2406.520 --> 2412.680] right angle but it's not, I'm not going to make a right angle with my car I'm going to drive along
327
+ [2412.680 --> 2420.760] the road. So we have, again, similar understandings. So this yielded cognitive design principles,
328
+ [2420.760 --> 2429.160] and again, Maneesh and Chris Stolte took over: that paths and nodes are important, but distances
329
+ [2429.160 --> 2435.080] and exact directions aren't. So instead of this as years ago none of you remember this
330
+ [2435.640 --> 2442.680] or so only some of you. These are the maps you used to get from websites and they don't tell you how
331
+ [2442.680 --> 2447.960] to get on and off the highway. The information you need isn't there. It's cluttered by all kinds of
332
+ [2447.960 --> 2456.120] irrelevant information. And these are the first passes of the maps that Maneesh and Chris
333
+ [2456.120 --> 2462.120] produced, and they were loved, for good reason. And it just shows you what I love about
334
+ [2463.080 --> 2470.760] engineers. You know, we do research; engineers gather and distill information from so many different places
335
+ [2470.760 --> 2478.440] and produce a tool that we can use. Okay we get similar things in gesture pointing to landmarks
336
+ [2478.440 --> 2484.760] tracing paths and things. What's interesting about gesture is they make a narrative about it.
337
+ [2484.760 --> 2492.200] It's got to be getting metal and then arrows and then I'm going to really jump. So arrows are
338
+ [2492.200 --> 2499.000] interesting partly they don't seem to appear before the 20th century and I'd be eager for someone
339
+ [2499.000 --> 2507.080] to find examples, early examples of arrows. You do have hands and fingers and even feet. There's a
340
+ [2507.080 --> 2515.640] lovely petroglyph in Ephesus with feet telling you how to get to the brothel. It's bringing you there.
341
+ [2515.640 --> 2521.720] So there are the arrow like things but arrows per se. I don't think anyone's found one before
342
+ [2521.720 --> 2529.640] the 20th century. So they have a sort of natural interpretation in that the erosion lines look
343
+ [2529.640 --> 2535.720] arrow-like, and of course arrows fly, but they have many uses. So this is work with Julie
344
+ [2535.720 --> 2542.200] Heiser. We gave people one of these drawings, one of these diagrams, either with or without arrows,
345
+ [2542.200 --> 2552.120] and asked them to tell us what it's conveying. And we found that without arrows we got structural
346
+ [2552.120 --> 2558.440] descriptions. People told us where the parts were in relation to each other so you could tell
347
+ [2558.440 --> 2566.520] it by the language: the verbs were 'is' and 'has'. And with arrows we suddenly got step by step, beginning
348
+ [2566.520 --> 2575.480] to end functional action descriptions and again active verbs. So it was unambiguous how we
349
+ [2575.480 --> 2581.480] coded them: the arrows changed the meaning. Then we gave another group of people either a structural
350
+ [2581.480 --> 2588.680] description or an action description and asked them to produce a diagram and again this is very
351
+ [2588.680 --> 2595.240] typical. So they drew these diagrams just from the descriptions it's pretty good but you can see
352
+ [2595.240 --> 2601.960] from a structural description they label the parts. When we gave them an action functional description
353
+ [2601.960 --> 2607.480] they don't label the parts even though they were labeled in the description but they give us arrows
354
+ [2607.560 --> 2615.560] indicating the action. So arrows convey meaning they change something from structure to action.
355
+ [2615.560 --> 2621.480] The trouble with arrows is they have many meanings. Bob Horn, whom you may know because he spoke
356
+ [2621.480 --> 2629.560] here many years ago, and his very interesting Visual Language, came up with 107; that seems
357
+ [2629.560 --> 2636.520] too many. But you can get seven or eight: they can point, connect, they can indicate the temporal next
358
+ [2636.520 --> 2643.400] step they can indicate causality they can show movement direction manner of movement wavy arrows
359
+ [2643.400 --> 2650.840] staccato arrows that can indicate increases decreases they can indicate invisible forces. So these
360
+ [2650.840 --> 2658.280] were very common in World War II and the arrows are being used they're color coded by the side of
361
+ [2658.280 --> 2665.160] the war, and the width is indicating the number of troops, and the direction is where the
362
+ [2665.160 --> 2672.440] troops are moving these are pretty standard and comprehensible. We looked at the diagrams in STEM
363
+ [2673.160 --> 2680.360] books. I have those diagrams if somebody wants them; we analyzed what kinds of diagrams are
364
+ [2680.360 --> 2689.400] used in STEM books. So this one is foreign DNA entering cells; it uses arrows with three different
365
+ [2689.400 --> 2696.200] meanings without disambiguating. So this is hard for students. I mean I could figure it out
366
+ [2696.200 --> 2705.720] I'm no expert in this but there are cases where so these are labeling trench and open foreign
367
+ [2705.720 --> 2714.280] DNA. The foreign DNA is entering, that's movement; this arrow is the next step. So it's being used to
368
+ [2714.280 --> 2720.760] label, indicate movement, and the next temporal step, without really disambiguating. Again,
369
+ [2720.760 --> 2726.520] the context was sufficient for me to figure it out I worry about other people and then you get
370
+ [2726.520 --> 2732.840] beautiful things like the rock cycle and you know the geologists will say what's the problem and
371
+ [2732.840 --> 2739.800] then I say what does this arrow mean what does that arrow mean and they say oops so even they
372
+ [2739.800 --> 2745.960] can't interpret it and I have plenty of examples of them where they're beautiful but in even
373
+ [2745.960 --> 2752.040] abstract ones the arrows aren't disambiguated here if you've been to Venice you know you encounter
374
+ [2752.760 --> 2758.760] this kind of situation very frequently where the arrows pointed in both directions this is how
375
+ [2758.760 --> 2767.480] to pass a bill in congress right you go in circles okay so these schematic elements are like
376
+ [2767.480 --> 2774.120] schematic spatial terms: a point is a place or an idea, a line is a path or a relation, and an arrow
377
+ [2774.680 --> 2782.280] is an asymmetric relation. Like spatial terms, it becomes a visual vocabulary for diagrams; you can create diagrams
378
+ [2782.280 --> 2790.680] out of these, adding the iconic ones. And again, they're schematic, and the context
379
+ [2790.680 --> 2797.640] is going to disambiguate is it a romantic relationship or a mathematical relationship again the
380
+ [2797.640 --> 2805.000] context presumably will tell me so Steinberg got there first lines have many meanings here's the
381
+ [2805.000 --> 2811.720] this is wall-size, and there's Steinberg drawing, and it's the Grand Canal, and a clothesline,
382
+ [2811.720 --> 2818.680] and the sidewalk from above, and the rail at the top of a railroad bridge, and so on, but
383
+ [2819.240 --> 2826.200] you're not even aware that it's the same line because you quickly interpret it so neat lines are
384
+ [2826.200 --> 2833.800] great for getting you from A to B or telling you how a car brake works but then there are messy lines
385
+ [2833.800 --> 2840.760] and this is Gehry, and this is a drawing that he made, and we started studying messy lines. This
386
+ [2840.760 --> 2848.280] is work with Masaki Suwa; we looked at architects, and we gave them an assignment to make a
387
+ [2848.280 --> 2857.080] museum with certain constraints and they make discoveries in their sketches and it's exactly
388
+ [2857.080 --> 2864.600] the ambiguity that allows them to reconfigure and make new discoveries. Artists do the same.
389
+ [2864.600 --> 2870.680] This is a former graduate student, a fine artist, who studied artists, and
390
+ [2870.680 --> 2876.520] they can't talk while they're drawing same with the architects language interferes it's
391
+ [2877.160 --> 2884.120] a conversation between the eye and the hand and the page and artists say the ideas emerge from
392
+ [2884.120 --> 2892.520] the page so I'm going to skip here this is using diagrams for inference we've shown that diagrams
393
+ [2892.520 --> 2900.040] promote collaboration and now I'm going to try to end so this is a world that nature gives us
394
+ [2900.040 --> 2907.960] it's pretty messy and in fact messy desks promote new ideas this is the world we design
395
+ [2909.080 --> 2916.760] so how do we design the world we put the mind into the world one prominent way we organize our
396
+ [2916.760 --> 2922.520] lives is around themes we put all the things that belong for cleaning in the bathroom all the
397
+ [2922.520 --> 2928.440] things that belong for cooking in the kitchen and so forth so we gather many different kinds of
398
+ [2929.000 --> 2934.040] things put them in the same place because they're going to be used together I call them themes we
399
+ [2934.040 --> 2941.480] also have categories of things and our categories go on shelves so we put our plates small plates here
400
+ [2941.480 --> 2947.960] our bowls here, our cups there, and again we're organizing them into categories and
401
+ [2947.960 --> 2956.680] hierarchies of categories, in our homes and in the supermarket. And on our bookshelves, our
402
+ [2956.680 --> 2964.360] books might be ordered by topics they might be ordered by size but we get orders in there so again
403
+ [2964.360 --> 2973.320] this is a vegetable stand: categories, hierarchies, lines, boxes. The world is organized that way: lines
404
+ [2973.320 --> 2981.960] and rows and columns and boxes and 3D assembly lines one to one correspondences is beginning to
405
+ [2981.960 --> 2989.880] sound like programming one to one correspondences repetitions cycles embeddings in our place settings
406
+ [2990.440 --> 2997.320] and kids grow up with this there's so much intelligence around us the caves didn't have so much
407
+ [2998.200 --> 3005.160] similarly here: repetitions, symmetry, embedding. We use those patterns, which catch the eye because
408
+ [3005.160 --> 3013.240] they're good gestalts; we use those patterns deliberately in our visualizations. So the periodic
409
+ [3013.240 --> 3020.360] table train schedules bar charts this is the likelihood of a computer issue being solved by
410
+ [3020.360 --> 3027.880] reconfiguring, reinstalling antivirus, uninstalling, or turning it on and off. Okay, and this is
411
+ [3027.880 --> 3035.560] the amount of time it took me to draw each bar. So I introduce the concept of "spraction"; it's a
412
+ [3035.560 --> 3041.160] contraction, and it's an ugly word, all new words are ugly; it's a contraction of space,
413
+ [3042.360 --> 3048.760] action, and abstraction. And the idea is: actions in space create abstractions. You just saw a bunch
414
+ [3048.760 --> 3056.200] of them those actions get internalized and re externalized as gestures as actions on thought
415
+ [3056.840 --> 3066.200] and the patterns get used and externalized as communications, as diagrams. So this is
416
+ [3066.200 --> 3073.080] spraction; it's a kind of cycle we can enter at any point, linking space, action, abstraction.
417
+ [3073.480 --> 3082.280] so we not only put our minds in the world we design the world and we saw that but we not only design
418
+ [3082.280 --> 3089.720] the world we diagram the world so the world is a diagram and with that I end
419
+ [3090.360 --> 3092.360] you
420
+ [3096.200 --> 3102.520] that's just thanking the funders so we're probably out of time I don't know
421
+ [3106.760 --> 3113.000] I know I've overwhelmed you it's too much to take in yeah
422
+ [3120.680 --> 3124.360] it's it's actually really matter it's not language but what I'm hearing here is a bit more
423
+ [3125.560 --> 3131.640] uh more nuanced and curious um you know if you can talk a bit about the places where you
424
+ [3131.640 --> 3137.160] are sort of uh isn't important so in the case of like Nicaraguan Sign Language for example you had a
425
+ [3137.160 --> 3141.400] bunch of deaf children who were brought into the school and they knew their language at the
426
+ [3141.400 --> 3147.080] beginning was very filled with gestures and actions and movements um but in the next generations
427
+ [3147.080 --> 3151.880] it's a much more symbolic much more uh like less than actions
428
+ [3154.280 --> 3161.080] and this so thank you for the question which also informs people of the transition from
429
+ [3161.080 --> 3168.360] gestural sort of language to one that has a real syntax and semantics so
430
+ [3169.160 --> 3177.880] so again I'm fond of language to differentiate even differentiate what graphics do and what
431
+ [3177.880 --> 3184.600] the annotation of graphics do it becomes difficult and it's overlapping clearly deaf people manage
432
+ [3184.600 --> 3193.240] perfectly well without um hearing or can manage and blind people can manage quite well without um
433
+ [3193.320 --> 3199.640] without seeing although blind people gesture and blind children gesture again indicating it's for
434
+ [3199.640 --> 3207.000] their own thought and I think they're complementing each other and there are many cases where the
435
+ [3207.000 --> 3212.840] gestures are really giving you more information than the language so if you ask someone where to go
436
+ [3213.480 --> 3218.600] so I was once trying to find a store in London and I knew you had to make two turns from where we
437
+ [3218.600 --> 3223.800] were to get there and I asked a woman in a store and she said you go straight on straight on straight
438
+ [3223.800 --> 3230.280] on and I said don't you have to turn and she said yes but you know her head is always pointing
439
+ [3230.280 --> 3235.400] forward so and she doesn't know the name of the street so for her it's straight on straight on
440
+ [3235.960 --> 3243.960] and if you get lost in a foreign country and ask and you know four words of Italian um and not more
441
+ [3243.960 --> 3248.520] you watch the gesture and the gesture will tell you is the road curving is it going up and down
442
+ [3248.920 --> 3256.280] so there and that's our canonical way of speaking is it's language grew up you and I talking or
443
+ [3256.280 --> 3263.960] groups talking and it was multimodal it used the intonation of our voice which can negate what
444
+ [3263.960 --> 3272.600] we're saying by irony so it is human beings are remarkable in being able to use all kinds of
445
+ [3272.600 --> 3278.600] resources to make meaning and some of it is we've had experience undoubtedly making meaning
446
+ [3278.600 --> 3285.240] in those different ways so we can understand things from writing but we all know that emails get
447
+ [3286.040 --> 3293.480] misinterpreted and in in many ways and that watching someone's face and seeing their gesture
448
+ [3293.480 --> 3298.840] and hearing their intonation will change it so you lose something and it's similar again in space
449
+ [3298.920 --> 3307.000] and blind people manage without seeing so these things are complementing so I'm not answering
450
+ [3307.000 --> 3314.040] your question exactly one thing that I think gestures do really nicely that language often forgets
451
+ [3314.040 --> 3320.520] to do is they set up a general scheme of understanding things so simple examples on the one
452
+ [3320.520 --> 3326.680] hand and on the other and then I can keep referring to those places in space I've set them up
453
+ [3326.680 --> 3331.960] and I can keep referring to them and I can even create a mental space and I'm often doing that
454
+ [3331.960 --> 3339.880] without saying and you're seeing that and getting it from this scene so there are many studies
455
+ [3339.880 --> 3346.680] showing that if you take away the gestures from an oral communication like this you lose a lot
456
+ [3347.640 --> 3356.120] if you take away the language you're going to lose a lot too so that's not a direct answer but
457
+ [3356.120 --> 3362.520] maybe some of the nuances you were looking for. One more question for me. One more question.
458
+ [3365.080 --> 3369.720] And I don't hear or see so I'm coming back to hear you.
459
+ [3371.320 --> 3377.160] Do you think that the abstractions the way we interpret diagrams and graphs are universal or
460
+ [3377.160 --> 3381.480] do they vary across cultures and are they nature or is it because of Western education
461
+ [3381.560 --> 3385.000] that when we see it we think oh yeah this must mean this or something like that.
462
+ [3385.000 --> 3390.360] And those are good questions and the answer always with cultural things is yes and no
463
+ [3392.120 --> 3399.000] or with education yes and no. So somebody went to this is just an example somebody went to
464
+ [3399.000 --> 3404.680] and again gestures and diagrams have similar visual vocabulary I hope that was clear.
465
+ [3404.680 --> 3410.520] Somebody went to New Guinea years ago this is probably 40 years ago and there were river traders
466
+ [3410.520 --> 3416.520] that go up and down the river they're not literate and they're trading and they've never seen a map
467
+ [3416.520 --> 3423.480] they've never been in a school room and asked them to make a map and it's a small sample so these
468
+ [3423.480 --> 3430.520] aren't real data they're anecdotes but what they drew was a line with circles where they stopped
469
+ [3431.400 --> 3439.480] so that idea that the river they regularized it to a line it's not a line and that the stops
470
+ [3439.480 --> 3447.000] were these dots was their way of expressing so there's going to be some
471
+ [3447.000 --> 3453.880] universality there's going to be some cultural specificity on either diagrams or
472
+ [3455.000 --> 3463.320] and those are harder and harder to study because right okay I think you want to end at this
473
+ [3463.320 --> 3466.520] point. I mean for a couple minutes if you want to come up at this point. Yeah I can
474
+ [3469.240 --> 3473.240] let's thank our speaker
transcript/allocentric_hDy_SaQng68.txt ADDED
@@ -0,0 +1,19 @@
1
+ [0.000 --> 8.920] Imagine you are preparing yourself to go to AsiaHaptics. To plan your trip, you will take
2
+ [8.920 --> 14.300] a look at the map from the airport to the Mitsui Garden Hotel. You will be using a top
3
+ [14.300 --> 20.000] view strategy. Then, to get a better knowledge and to create reference landmarks, you can
4
+ [20.000 --> 26.200] take a look at the position of the buildings to find the hotel using a side view strategy.
5
+ [26.200 --> 33.760] But how would you do it in absence of vision? You need a guide. We propose a haptic system
6
+ [33.760 --> 41.480] composed of two different and complementary devices: the TAMO-3 and the Vibrotactile headband.
7
+ [41.480 --> 46.200] TAMO-3 is a tactile mouse providing a feedback on 3 degrees of freedom.
8
+ [46.200 --> 53.840] Height, roll and pitch. Users can create a mental map of the explored space integrating
9
+ [53.840 --> 60.400] the tactile perception on one finger with the proprioception of the arm. TAMO-3 offers a top
10
+ [60.400 --> 66.160] view exploration of the map. The Vibrotactile headband is composed of seven
11
+ [66.160 --> 72.840] electro-mechanical tactors and supports an egocentric exploration.
12
+ [72.840 --> 78.400] The high resolution of the forehead creates a tactile fovea that facilitates the detection
13
+ [78.400 --> 84.080] of objects of interest around the user. The Vibrotactile headband offers the side view
14
+ [84.080 --> 88.400] perspective. To show how simple it is to use our
15
+ [88.400 --> 95.080] haptic system, we built a treasure hunt with treasure chests hidden in the bottom of several
16
+ [95.080 --> 102.360] dips. Each participant will play as pirates seeking out as many treasures as they
17
+ [102.360 --> 108.680] can in a given time. The position of the dip is signaled by changing the slope of the
18
+ [108.680 --> 115.520] surface texture. The Vibrotactile headband will also provide a directional cue to complement
19
+ [115.520 --> 118.400] the exploration. Come and try it!
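The TAMO-3 behaviour narrated above, a height/roll/pitch cue at the point under the mouse, can be illustrated with a short sketch. This is a minimal illustration under assumed conventions (a made-up heightmap and finite-difference slopes), not the device's actual implementation; the function and map below are hypothetical.

```python
import numpy as np

def tactile_cue(heightmap, x, y):
    # Return (height, roll, pitch) at cell (x, y); the two slopes are
    # simple finite differences along the map axes.
    h = heightmap[y, x]                                # height under the finger
    pitch = heightmap[y + 1, x] - heightmap[y - 1, x]  # fore-aft slope
    roll = heightmap[y, x + 1] - heightmap[y, x - 1]   # left-right slope
    return h, roll, pitch

# Flat map with one "dip" hiding a treasure chest:
m = np.zeros((5, 5))
m[2, 2] = -1.0
print(tactile_cue(m, 1, 2))  # (0.0, -1.0, 0.0): the roll cue signals the nearby dip
```

The vibrotactile headband would complement such a cue with a direction toward the point of interest, as the narration says.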
transcript/allocentric_iTB6WoxmJ7A.txt ADDED
@@ -0,0 +1,1042 @@
1
+ [0.000 --> 2.000] an educational psychologist and I
2
+ [3.000 --> 10.100] I'm interested in individual differences in many different traits that are related to educational achievement and
3
+ [10.680 --> 12.680] occupational achievement
4
+ [12.880 --> 14.880] since we have such a close
5
+ [16.000 --> 19.960] audience today, I am totally okay with you asking me questions
6
+ [21.120 --> 26.740] Kind of during this session and I'm happy to answer your questions if you don't get something
7
+ [26.740 --> 29.740] Please ask your question and I will be happy to
8
+ [30.860 --> 35.700] Provide some explanations. Okay, so what is spatial ability?
9
+ [35.980 --> 41.820] spatial ability is broadly defined as our ability to
10
+ [43.020 --> 46.100] process, perceive, store, retrieve, generate
11
+ [47.740 --> 52.620] spatial and visual information so basically if you manage to get here today
12
+ [53.620 --> 58.460] Then you can operate your spatial ability well done
13
+ [59.340 --> 61.340] So yeah, so
14
+ [61.580 --> 68.060] Basically we use spatial ability our spatial ability every day when we navigate around the city for example
15
+ [68.060 --> 72.780] Or need to solve a mathematical task and so on and I will talk more about this later
16
+ [74.260 --> 76.260] so spatial ability is
17
+ [76.740 --> 78.740] very important
18
+ [78.900 --> 85.020] for education and for example here is one study. I'm personally a fan of this study
19
+ [85.700 --> 91.500] This is a longitudinal study so the one which lasts for several years and in this study
20
+ [93.220 --> 96.180] school children were tested for spatial ability
21
+ [97.020 --> 99.020] verbal ability and maths ability and
22
+ [99.660 --> 107.140] They were tracked for 30 years and even 30 years after this testing so when they were 40 years old
23
+ [107.780 --> 114.180] their abilities like maths verbal spatial were still predictive of their achievement
24
+ [115.180 --> 120.820] So and this achievement was measured as a number of scientific publications and patents
25
+ [121.060 --> 124.060] So this is a very important thing I say
26
+ [124.900 --> 131.940] Because like 30 years and we still can predict something from spatial ability which is measured at age 13
27
+ [131.940 --> 135.340] So when we are at school. So this is pretty much pretty important thing
28
+ [136.220 --> 138.220] Are we on the same page so far?
29
+ [138.740 --> 140.420] Good
30
+ [140.420 --> 147.020] Yeah, so now this is by the way this is the second study from the same group. This is a large project
31
+ [148.060 --> 149.820] It is called
32
+ [149.820 --> 154.500] the Study of Mathematically Precocious Youth and it is overseen by professor David Lubinski
33
+ [156.540 --> 161.420] In America and basically he established a very large
34
+ [162.580 --> 164.580] study of kids
35
+ [165.060 --> 167.060] gifted in maths and
36
+ [167.060 --> 174.140] One of his main interest is spatial ability because spatial ability is like a very good predictor of
37
+ [175.220 --> 179.540] Future success in maths. So this is again one of his studies
38
+ [179.540 --> 189.660] It is a pretty dated one but still important because what he showed in this study is that there is a kind of a profile of abilities
39
+ [190.620 --> 198.340] Which is linked to different professions? So for example on this slide you can see so this in this study they
40
+ [201.100 --> 208.160] Investigated links between different majors. So you can see art social science humanities and this verbal spatial and
41
+ [208.380 --> 210.380] Maths ability I've mentioned previously
42
+ [210.900 --> 212.900] and so these are
43
+ [213.300 --> 214.820] mostly
44
+ [214.820 --> 218.660] As far as I remember MSC degrees so master's degrees
45
+ [219.660 --> 220.660] And
46
+ [220.660 --> 225.340] What you can see is that the distribution of these three abilities is quite different
47
+ [225.340 --> 231.980] So for example overall as you can see these abilities are slightly lower in people who did
48
+ [232.660 --> 237.780] Masters in social science or arts but most importantly is that
49
+ [239.100 --> 241.100] You can see that for example
50
+ [242.060 --> 244.060] Maths ability
51
+ [244.060 --> 247.660] So kind of the averages for these abilities differ
52
+ [248.060 --> 254.420] You can see for example that the patterns are quite different. So for example verbal ability is higher in arts
53
+ [256.620 --> 258.620] students but for example
54
+ [259.460 --> 263.700] Maths ability is high in engineering and spatial ability is kind of in between them
55
+ [265.060 --> 268.340] But still I guess this kind of is very
56
+ [269.380 --> 274.340] Good kind of image of how these strengths and difficulties works actually
57
+ [274.340 --> 285.620] So I've mentioned that we need spatial ability both to navigate around the city and to for example solve a maths
58
+ [285.620 --> 288.900] task or I don't know like to build a
59
+ [289.460 --> 292.140] Castle out of the blocks for example like Lego
60
+ [293.140 --> 296.660] So these kinds of abilities seem a little bit different
61
+ [297.860 --> 299.860] You see for example
62
+ [299.980 --> 305.260] I'm navigating around the city I'm using the map or I'm looking at some maybe
63
+ [306.300 --> 313.580] Buildings I can remember points of interest and stuff and I remember the path and the other thing is to rotate for example
64
+ [313.580 --> 319.820] Something in your brain. So like can you imagine you probably remember such tasks from your school?
65
+ [321.700 --> 326.180] So this is the question actually for psychology now so whether
66
+ [326.660 --> 331.900] spatial ability is one ability or many abilities and there were actually
67
+ [332.540 --> 339.860] Couple of research. Well, it's a whole body of research recently that investigated this question and this
68
+ [340.220 --> 342.220] There were all kinds of different
69
+ [343.860 --> 348.500] Research kind of different research methods were used for example to investigate this
70
+ [348.500 --> 355.460] So for example some of these studies were cross-cultural people investigated and I also contributed to this agenda
71
+ [356.300 --> 358.300] People investigated for example
72
+ [359.140 --> 363.420] whether spatial ability is a single construct in
73
+ [364.100 --> 365.860] People who
74
+ [365.860 --> 369.500] Come from different cultures of for example from Russia and from China
75
+ [370.740 --> 373.540] There was some research in
76
+ [374.220 --> 376.220] populations of kids with different
77
+ [376.660 --> 381.980] Expertise for example maths expertise or expertise in sports and basically
78
+ [382.620 --> 390.260] What was investigated whether there is a single underlying ability that for example drives our ability to navigate in a city and
79
+ [390.340 --> 392.340] drives our ability to operate
80
+ [393.060 --> 399.420] objects in our heads and so there were a couple of studies that were actually genetically informative
81
+ [401.100 --> 406.780] These are based on twin design. I don't know. Maybe you've heard about such studies
82
+ [407.220 --> 412.380] Yeah, so basically the design
83
+ [414.900 --> 421.180] You probably know that we have well not we have but there are dizygotic and monozygotic twins and
84
+ [423.380 --> 432.260] They are very similar, but monozygotic twins are like identical twins and they come from one egg and so on
85
+ [433.100 --> 435.540] Yeah, yeah, yeah, they're almost
86
+ [436.220 --> 445.540] Yes, exactly they are similar like 99.9 something percent similar, but there are dizygotic twins who are
87
+ [446.460 --> 448.460] for example
88
+ [448.460 --> 451.540] As similar as brother and brother for example
89
+ [452.660 --> 454.660] genetically and so
90
+ [456.180 --> 461.540] These kind of we all actually are similar almost I mean like 99%
91
+ [462.460 --> 464.460] similarity in our genetics, but
92
+ [465.620 --> 469.020] This one percent of different kind of genes
93
+ [469.700 --> 478.940] Is shared well 100% of these different genes is shared by monozygotic twins, but only 50% of these genes are shared by dizygotic twins
94
+ [478.940 --> 480.940] And so based on these difference
95
+ [481.300 --> 483.300] we can infer
96
+ [483.820 --> 485.820] the genetic kind of
97
+ [489.060 --> 492.660] Contribution to different traits and so this
98
+ [493.220 --> 498.660] Twin studies actually provided a lot of insights into psychology recently, so
99
+ [499.500 --> 502.740] This is called behavioral genetics and it's like a huge field now
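The inference sketched here, turning the monozygotic/dizygotic similarity gap into an estimate of genetic contribution, is classically done with Falconer's formula. Below is a minimal sketch with made-up twin correlations, not data from any study mentioned in the talk.

```python
# ACE decomposition from twin correlations (Falconer's formula).
def falconer(r_mz: float, r_dz: float) -> dict:
    a2 = 2 * (r_mz - r_dz)  # A: additive genetic variance (heritability)
    c2 = r_mz - a2          # C: shared-environment variance
    e2 = 1 - r_mz           # E: non-shared environment and measurement error
    return {"A": a2, "C": c2, "E": e2}

# Hypothetical correlations for a spatial-ability test:
print(falconer(r_mz=0.70, r_dz=0.40))  # roughly {'A': 0.6, 'C': 0.1, 'E': 0.3}
```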
100
+ [503.700 --> 509.340] And so what they've shown for example in one study is that pretty recent one
101
+ [510.020 --> 514.060] Is that so they've measured 16 different spatial abilities?
102
+ [514.060 --> 519.020] So people needed to navigate it in a city people needed to solve some spatial tasks and so on
103
+ [519.020 --> 521.780] There were 16 different psychological tests
104
+ [522.620 --> 524.620] And so what they've shown is that
105
+ [525.220 --> 529.900] Spatial abilities are kind of clustered into three bigger abilities
106
+ [530.220 --> 534.780] So object manipulation visualization and navigation so navigation is
107
+ [535.500 --> 539.380] How we kind of navigate in a city for example over and using a map
108
+ [540.660 --> 546.260] Visualization is how we actually can imagine some stuff and object manipulation is how we can manipulate this
109
+ [546.620 --> 549.660] Different objects like for example 2d or 3d object
110
+ [550.140 --> 554.340] But what's most important here is that they actually shown that
111
+ [554.820 --> 556.820] these three
112
+ [556.820 --> 559.300] Clusters are also underlain by a
113
+ [560.100 --> 562.460] General kind of spatial ability so
114
+ [563.260 --> 565.860] One of the results from this study is that
115
+ [567.340 --> 574.780] Actually, there is a single spatial ability rather than multiple different abilities and this is actually good news. I think
116
+ [574.780 --> 576.780] I
117
+ [578.380 --> 583.300] Will talk a little bit more about this so I've also actually contributed to this research
118
+ [584.140 --> 586.140] We published a paper this year
119
+ [586.740 --> 595.460] We also investigated 16 different spatial ability tests and what we've shown is that they actually correlate with each other at least to a degree
120
+ [595.460 --> 602.420] So correlate you probably you know this word I guess so they are associated so if I for example solve
121
+ [602.420 --> 604.420] a
122
+ [605.180 --> 610.380] Mental rotation task good there is some probability that I will also
123
+ [610.980 --> 617.300] Orient in a city good so and vice versa, but so there is no kind of causal links here
124
+ [617.300 --> 619.300] But I maybe come back later to this
125
+ [620.660 --> 626.780] And so what we've shown is that they all all are correlated, but most importantly here
126
+ [627.580 --> 632.700] We kind of used a relatively novel approach to psychological data
127
+ [632.700 --> 637.660] We use network analysis to investigate the complex links between different
128
+ [638.220 --> 645.100] Spatial ability facets and what we found is that navigation using a compass is in the center of
129
+ [646.020 --> 648.860] the spatial ability network so
130
+ [650.580 --> 654.940] What does central mean here actually so it's one of the most influential?
131
+ [655.220 --> 657.220] So you can imagine for example
132
+ [657.660 --> 661.100] social media or not a social media social networks
133
+ [661.100 --> 666.460] So for example if there is one influential person like a star or whatever Instagram
134
+ [667.420 --> 671.780] Tiktok whatever if they I mean he or she
135
+ [672.940 --> 680.860] If they post something post something for example like a video or like whatever a text a lot of people
136
+ [681.500 --> 692.220] Repost it and then it's kind of in their network. There will be a lot of so this new piece of news spreads in this network
137
+ [692.220 --> 695.900] So the same thing is here so this kind of
138
+ [696.580 --> 700.060] Navigation using compass seems to be in the center of this network and
139
+ [701.940 --> 707.660] Why it is important for me because my hypothesis here is that if we kind of
140
+ [708.660 --> 715.340] Create an educational intervention which targets navigation using this compass
141
+ [715.340 --> 722.020] We can potentially influence all other spatial ability facets. I mean like all other this smaller
142
+ [722.940 --> 724.460] spatial ability
143
+ [724.460 --> 728.940] Abilities well in this sense and this could potentially
144
+ [729.380 --> 734.220] Improve our educational achievement because there it is very well documented that
145
+ [734.700 --> 740.740] Improvement in spatial ability can improve for example maths ability so we can solve maths tasks better
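The network idea above, tests as nodes, correlations as weighted edges, and centrality as "influence", can be sketched with networkx. The edge weights below are invented for illustration, and the simple strength centrality stands in for whatever index the published analysis actually used.

```python
import networkx as nx

edges = [  # hypothetical correlations between spatial-ability facets
    ("navigation_by_compass", "map_reading", 0.55),
    ("navigation_by_compass", "mental_rotation", 0.48),
    ("navigation_by_compass", "perspective_taking", 0.52),
    ("mental_rotation", "visualization", 0.60),
    ("map_reading", "visualization", 0.41),
]
G = nx.Graph()
G.add_weighted_edges_from(edges)

# Strength centrality: the summed edge weight at each node.
strength = {n: sum(d["weight"] for _, _, d in G.edges(n, data=True)) for n in G}
print(max(strength, key=strength.get))  # -> navigation_by_compass
```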
146
+ [742.260 --> 743.780] and
147
+ [743.780 --> 749.500] Another thing here is that this navigation according to direction probably is kind of a master
148
+ [750.540 --> 752.540] spatial ability here because
149
+ [753.620 --> 756.780] It potentially can involve all other
150
+ [758.380 --> 761.500] Spatial abilities because for example when we navigate in the city
151
+ [762.220 --> 769.420] We remember some places of interest we visited and so this is other kind of facet of spatial ability
152
+ [770.100 --> 775.500] We also for example when we go in the city we kind of
153
+ [776.500 --> 780.060] Take a look from a different degree on different objects
154
+ [780.060 --> 782.820] So for example if I go across the building
155
+ [782.820 --> 787.900] I will see it from different angles and I memorize it and this is kind of one of the
156
+ [788.380 --> 795.980] other spatial ability facets like mental rotation or perspective taking so it's kind of all of them are linked and
157
+ [796.380 --> 803.220] It was not a surprise for me that this was in the center because it seems to be very important even from an evolutionary perspective
158
+ [803.380 --> 805.620] It seems as a very important thing and
159
+ [807.220 --> 809.220] Yeah, I see a lot of
160
+ [810.100 --> 812.340] Potential actually in this study because
161
+ [813.060 --> 820.260] Novel technology like for example virtual reality can provide us with relatively cheap and easy to implement
162
+ [820.740 --> 823.940] Way to improve spatial ability so for example
163
+ [823.940 --> 831.660] This is a study which was published a couple of years ago and what they've done they asked people to navigate into a virtual city and
164
+ [834.540 --> 836.700] People needed to perform some tasks and so on
165
+ [836.700 --> 841.420] But what they found is that spatial ability actually improves when we navigate in the city
166
+ [841.420 --> 846.740] And in the virtual city. So this is a relatively
167
+ [847.780 --> 853.140] Easy way to improve this very important ability which we all hold
168
+ [854.300 --> 855.940] so
169
+ [855.940 --> 857.940] So fast so good
170
+ [858.340 --> 860.340] Okay
171
+ [860.420 --> 866.380] So yeah another thing which is very important for education I've already mentioned that
172
+ [867.380 --> 876.020] spatial ability is actually a malleable thing so it can be changed can be trained unlike for example intelligence
173
+ [877.660 --> 882.500] Or working memory this is kind of a huge debate around that but
174
+ [883.220 --> 890.060] Spatial ability is one of the traits that actually shows that it kind of it can be trained it can be improved and this is one of the
175
+ [891.060 --> 896.740] Not most influential, but maybe most famous studies on spatial ability
176
+ [899.660 --> 903.420] So it's even kind of all psychologists can kind of
177
+ [904.220 --> 911.180] Remember it when you mention London taxi drivers. They will instantly remember about this. So this was the study
178
+ [912.380 --> 916.100] In which London taxi drivers were investigated
179
+ [916.460 --> 918.980] There is a special program
180
+ [919.780 --> 921.780] Educational one in London
181
+ [922.260 --> 927.460] It lasts for two years and in this program taxi drivers future taxi drivers
182
+ [927.460 --> 934.060] learn to orient in London. So they are not allowed to use GPS like a proper London taxi driver
183
+ [934.060 --> 938.420] is not allowed to use GPS or like a map and stuff
184
+ [939.260 --> 943.780] He or she needs to remember all the places and London is huge
185
+ [944.420 --> 946.420] As you know
186
+ [946.420 --> 953.380] So and they study for a couple of years in this program and what was shown in this study
187
+ [954.220 --> 958.100] This is by the way one of the most influential scientific journal
188
+ [958.500 --> 966.180] You've probably heard about science nature and this is proceedings of national Academy of Sciences one of the most recognized journals in the field
189
+ [966.420 --> 969.060] And so what they've shown in this study is that
190
+ [970.020 --> 973.060] Actually, so they used an FMRI
191
+ [974.060 --> 978.820] You've probably heard about this also technology. It's functional magnetic resonance imaging
192
+ [980.060 --> 984.900] So they they actually scan their brains and what they found is that
193
+ [986.340 --> 988.340] The grey matter volume
194
+ [988.860 --> 994.460] in the hippocampus of these taxi drivers was kind of larger
195
+ [995.020 --> 997.020] Well, the
196
+ [997.380 --> 1001.780] volume was increased in these taxi drivers compared to
197
+ [1003.340 --> 1007.820] Healthy controls and like people who are not taxi drivers and so on and so forth
198
+ [1007.820 --> 1009.980] And this was one of the first
199
+ [1010.660 --> 1015.900] Studies that robustly showed this kind of hippocampus spatial ability link
200
+ [1016.180 --> 1020.100] There were numerous studies after that. They for example compared
201
+ [1021.100 --> 1029.540] Taxi drivers to bus drivers because bus drivers used the same routes every day and they didn't find such dramatic differences
202
+ [1030.220 --> 1035.220] Kind of related to this activity. They for example investigated links between
203
+ [1036.500 --> 1041.460] Years of being a taxi driver and grey matter volume kind of increase
204
+ [1042.020 --> 1047.860] And so and so there is a huge body of studies that keep kind of providing evidence
205
+ [1048.420 --> 1050.420] Regarding this and this is one of the
206
+ [1052.300 --> 1058.580] Most influential studies on the brain plasticity so we can potentially change our brain with training
207
+ [1061.620 --> 1063.620] Since I've already mentioned
208
+ [1064.260 --> 1072.260] Brain research and spatial ability and maths for several times. I think it's high time to say couple of words about maths
209
+ [1072.660 --> 1080.380] So actually there is a lot of research that shows links between maths ability and spatial ability and
210
+ [1083.580 --> 1086.660] This is a very well-documented thing this link
211
+ [1090.740 --> 1096.540] What so you've probably heard about so there are huge meta analysis actually that shows this link
212
+ [1097.540 --> 1103.940] that show this link. You probably heard about this. Meta-analysis is like, for example: I'm a researcher
213
+ [1103.940 --> 1108.420] I have read like 50 papers which shows some
214
+ [1109.260 --> 1113.460] Contradictory evidence so for example some people say that these are correlated
215
+ [1113.460 --> 1116.820] Then other papers say no they are not correlated
216
+ [1117.620 --> 1118.860] this
217
+ [1118.860 --> 1124.220] Study shows no link at all and so all this is messy and stuff and there is a special
218
+ [1124.620 --> 1127.900] statistical technique that allows to kind of
219
+ [1129.460 --> 1132.500] Summarize all this research and so this is a meta analysis
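In its simplest fixed-effect form, that pooling technique is an inverse-variance weighted average of the per-study effects. A minimal sketch with made-up numbers (the real meta-analyses discussed use more elaborate random-effects models):

```python
def pooled_effect(effects, variances):
    # Fixed-effect meta-analysis: weight each study by 1 / variance.
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return est, 1.0 / sum(weights)  # pooled estimate and its variance

# Three hypothetical studies of the spatial-maths correlation:
est, var = pooled_effect([0.30, 0.45, 0.25], [0.010, 0.020, 0.015])
print(round(est, 3))  # ~0.32, a precision-weighted average across studies
```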
220
+ [1132.740 --> 1137.060] So there are a couple of meta analysis that actually show that spatial ability and
221
+ [1137.540 --> 1140.220] Maths ability are linked and there are huge
222
+ [1140.900 --> 1147.100] Reviews published now which kind of aim to investigate like why are they linked?
223
+ [1147.420 --> 1152.820] so why are they linked and one of the key hypothesis now is that
224
+ [1153.820 --> 1161.380] Actually maths ability and spatial ability actually share some common brain mechanisms and this study
225
+ [1162.700 --> 1164.700] nicely shows
226
+ [1164.700 --> 1171.060] the areas that are activated for example when people solve maths task and when people solve
227
+ [1173.780 --> 1175.780] Spatial task and you can see that they are
228
+ [1176.380 --> 1182.640] In some places they actually coincide. I mean like this the same areas pretty much same areas are activated
229
+ [1182.820 --> 1187.340] And so it's the intraparietal sulcus and parietal cortex so these areas here and
230
+ [1190.380 --> 1192.860] What is
231
+ [1192.860 --> 1194.860] Interesting here is that
232
+ [1194.860 --> 1204.660] Kind of from an evolutionary perspective researchers now discuss so why do we still kind of why do we use these areas like
233
+ [1207.300 --> 1212.780] which were developed to kind of help us to orient in
234
+ [1212.820 --> 1219.100] space to kind of solve maths tasks and this is a very interesting discussion now and it's ongoing still
235
+ [1219.900 --> 1226.260] In between them right because it's so this accuracy is actually a measure of our mental number line
236
+ [1226.260 --> 1231.580] And so they've shown that spatial abilities can predict this number line
237
+ [1231.900 --> 1237.220] Our accuracy in performing this task and this number line is also underlies
238
+ [1237.780 --> 1241.180] Our approximate symbolic number calculation basically
239
+ [1241.780 --> 1249.080] Easy arithmetic tasks performance and so they've shown this in an longitudinal design at the age of five six and eight
240
+ [1249.180 --> 1252.800] They measure these tasks and this actually kind of
241
+ [1254.620 --> 1257.460] Provides some evidence for a causal link
242
+ [1258.100 --> 1264.500] Because well it is very complex thing to establish causality in psychology and then I guess in all
243
+ [1265.500 --> 1270.340] In all research, but in psychology it's very complicated to
244
+ [1271.980 --> 1278.260] To figure out what was before and what was affected and where this kind of arrow goes
245
+ [1279.020 --> 1282.100] But this longitudinal design because of the time
246
+ [1282.740 --> 1288.260] Helps us to provide to kind of to establish this causal links because our
247
+ [1290.140 --> 1292.860] Arithmetic ability at the age of eight
248
+ [1293.860 --> 1297.920] Cannot affect our spatial skill at five. You see what I mean
249
+ [1299.060 --> 1303.620] Yes, so and this is a kind of a very reliable way to establish this causality
250
+ [1304.500 --> 1312.740] There are also two other hypotheses. I can principle mention them, but they are kind of beyond our today discussion
251
+ [1314.020 --> 1318.620] So why spatial ability and maths ability link but they are beyond our discussion today
252
+ [1319.620 --> 1323.580] So I keep kind of arguing that
253
+ [1325.060 --> 1328.180] Spatial ability is very important for education and
254
+ [1330.580 --> 1340.780] I really like this study so I actually review for a couple of scientific journals and this paper came to me kind of couple of years ago and
255
+ [1342.300 --> 1347.340] I was I reviewed it and was really very eager to
256
+ [1347.740 --> 1355.220] Start kind of talking about it, but I couldn't because it was prohibited for me to discuss the paper which is not yet published
257
+ [1355.220 --> 1357.260] But now it's published so I can talk about this
258
+ [1358.140 --> 1360.140] so
259
+ [1360.300 --> 1364.460] This is a very huge study so they've taken a huge
260
+ [1364.980 --> 1369.620] Actually three data sets of school children from America and
261
+ [1370.740 --> 1375.780] So you can see that they were like 140,000 participants in it and
262
+ [1376.420 --> 1380.540] And what they've shown is that 4 to 6% of
263
+ [1382.820 --> 1385.940] US K-12 educational system are actually
264
+ [1386.820 --> 1392.060] spatially gifted so they have very large spatial ability I mean like very high spatial ability
265
+ [1393.620 --> 1395.620] but
266
+ [1395.620 --> 1397.620] And they are not
267
+ [1398.020 --> 1404.500] Gifted in verbal or maths ability and this is a very important point here. So they've only found that these people
268
+ [1404.900 --> 1410.300] Show very high spatial ability and they are not very good in maths or verbal ability and
269
+ [1410.780 --> 1419.500] What they've shown is that these kids are at increased risk of showing high academic disengagement
270
+ [1420.260 --> 1422.260] so they
271
+ [1422.740 --> 1424.980] For example, they were found to
272
+ [1425.420 --> 1431.620] be breaking school rules they have behavioral and emotional problems and so on and one of the reasons
273
+ [1432.020 --> 1441.420] Researchers argued is that they are kind of they feel their ability to be under used but under kind of so basically
274
+ [1441.980 --> 1450.060] They do not feel needed so they could not find a place in the educational system to to kind of
275
+ [1450.700 --> 1453.780] Use their spatial ability which is very high
276
+ [1455.300 --> 1457.300] And that's why they have shown this
277
+ [1458.380 --> 1460.380] demotivation or disengagement
278
+ [1460.780 --> 1463.380] And they find this paper is very important
279
+ [1464.220 --> 1466.220] We also published a paper on
280
+ [1467.060 --> 1477.220] Russian school children kind of following this study our sample is smaller, but still kind of good and what we found is actually
281
+ [1485.100 --> 1488.820] The number of kids who are spatially gifted
282
+ [1489.220 --> 1495.820] Is always the same so we found that we also have like four to six to seven percent
283
+ [1496.300 --> 1500.820] Kids who are spatially gifted and if you remember from school
284
+ [1501.060 --> 1509.540] We are not really heavy on spatial ability in school. We have maths and more maths and for example Russian language and like languages in general
285
+ [1509.540 --> 1511.860] For example English one, but
286
+ [1512.460 --> 1514.460] Spatial ability like geometry or
287
+ [1516.060 --> 1518.060] Geography actually are
288
+ [1518.060 --> 1520.060] slightly
289
+ [1520.700 --> 1522.700] Kind of beyond
290
+ [1523.220 --> 1529.540] Focus of our teachers, and so this is I guess this might be a problem later on
291
+ [1531.180 --> 1534.860] So we probably need to think about how to provide
292
+ [1535.700 --> 1541.460] Some educational kind of help to these kids who are spatially gifted
293
+ [1542.460 --> 1551.260] There was a thing I keep forgetting this it's like blueprinting. What is the English proper name for this school subject?
294
+ [1551.740 --> 1553.020] Blueprinting
295
+ [1553.020 --> 1554.020] Duh
296
+ [1554.020 --> 1559.940] I just keep forgetting this I keep forgetting how it's called in English, but yeah, so basically
297
+ [1560.420 --> 1566.460] Yeah, a technical drawing. Yeah, it's a good thing. So we had technical drawing was also huge, but
298
+ [1567.020 --> 1572.540] And actually they keep talking that we need to reiterate we need to bring it back to the educational system
299
+ [1572.540 --> 1575.900] I've heard this this year in the beginning of this year
300
+ [1577.620 --> 1582.420] Okay another very important thing regarding spatial ability is
301
+ [1583.660 --> 1585.660] Gender differences
302
+ [1587.700 --> 1589.700] So
303
+ [1589.780 --> 1594.380] There is a lot of research in this matter now you can imagine
304
+ [1594.620 --> 1596.620] So there are actually
305
+ [1597.340 --> 1599.340] mixed findings
306
+ [1599.700 --> 1603.420] documented regarding gender differences in maths ability
307
+ [1604.500 --> 1605.700] so
308
+ [1605.700 --> 1610.140] Some studies argue that females and males do not differ in maths ability
309
+ [1611.380 --> 1614.980] Some studies argue that males on average demonstrate higher
310
+ [1615.620 --> 1619.580] maths ability than females and I would say that the body of research
311
+ [1620.860 --> 1623.500] Regarding this direction of the link
312
+ [1625.100 --> 1627.580] Is is well, it's most
313
+ [1628.380 --> 1631.700] there is a lot of evidence on that so
314
+ [1632.420 --> 1637.260] It is highly likely that males outperform females in maths
315
+ [1637.260 --> 1643.100] I mean like there were a couple of meta analysis there a couple of huge studies international ones that show that
316
+ [1644.100 --> 1649.500] Males actually outperform females in spatial ability or in maths ability
317
+ [1650.380 --> 1651.900] But
318
+ [1651.900 --> 1656.740] There are actually some studies that show that females outperform males in some populations for example
319
+ [1656.740 --> 1659.420] There was a study that show that in China
320
+ [1660.220 --> 1662.220] females outperform males
321
+ [1662.220 --> 1664.220] If we are talking about verbal ability
322
+ [1665.460 --> 1668.940] This is a robust finding females talk
323
+ [1669.740 --> 1673.460] Process verbal information communicate better than
324
+ [1674.140 --> 1676.140] than males
325
+ [1677.140 --> 1679.140] What do you think about spatial ability?
326
+ [1683.140 --> 1686.740] Probably equal no differences. Okay, are there hypotheses?
327
+ [1689.660 --> 1691.660] Females will outperform
328
+ [1691.820 --> 1693.820] Okay, the second thought maybe
329
+ [1700.060 --> 1702.060] Okay, are there ideas?
330
+ [1706.380 --> 1708.380] Okay
331
+ [1708.380 --> 1710.380] Any ideas
332
+ [1715.340 --> 1717.340] Okay, okay
333
+ [1718.100 --> 1724.060] So yeah, there are gender differences in spatial ability with weak to moderate effect sizes and
334
+ [1724.580 --> 1728.300] Males outperform females mostly so on average
335
+ [1729.100 --> 1734.940] This does not actually speak for any individual female or any individual male so on average
336
+ [1735.940 --> 1740.300] Males outperform females, but there are of course females who show
337
+ [1740.820 --> 1746.780] Very high spatial ability and of course there are males who could not do anything with spatial tasks
338
+ [1747.100 --> 1749.100] And I will talk about this
339
+ [1749.180 --> 1756.940] We also showed some of these links even at the level of experts in the field. So for example, we had
340
+ [1756.940 --> 1771.700] People with expertise in STEM and we compared males and females and selected females still show the kind of
341
+ [1772.340 --> 1782.300] This lower performance in spatial tasks compared to males and one of the most disturbing things for me here was that even selected
342
+ [1782.700 --> 1795.380] Females. So for example who participated in Olympia's who won some STEM competitions who performed at the conferences still have lower spatial ability than some males who will not selected to STEM
343
+ [1795.380 --> 1803.820] They're like general population males. This is quite disturbing and this paper is under view now. Let's see what we've got at the end of the day, but yeah
344
+ [1803.820 --> 1818.940] However, there is some evidence that females actually could outperform males in spatial tasks and
345
+ [1819.940 --> 1824.260] one of the key kind of explanations of this
346
+ [1825.260 --> 1833.140] This kind of link and not this link, but for this pattern of results is that
347
+ [1833.660 --> 1842.420] Women have better verbal ability as we already know and they could use their verbal ability to kind of supplement the spatial ability which is slightly lower
348
+ [1842.980 --> 1844.980] So they could label things
349
+ [1846.460 --> 1850.140] So there are a couple of studies that actually show that
350
+ [1850.140 --> 1859.900] Women use their verbal ability for labeling and they memorize for example places better if it is so
351
+ [1860.460 --> 1868.420] These are pretty dated studies, but they still kind of discussed so you can see that they were done before 2000
352
+ [1869.660 --> 1871.660] whatever
353
+ [1871.660 --> 1873.660] 20 years ago
354
+ [1875.020 --> 1877.020] and
355
+ [1878.020 --> 1883.460] Since we've already mentioned evolutionary kind of explanations for this
356
+ [1884.380 --> 1886.380] difference there is a
357
+ [1886.900 --> 1890.780] huge debate regarding this so people still discussing
358
+ [1891.340 --> 1893.340] why females
359
+ [1894.380 --> 1899.260] underperform and kind of have lower performance than males in spatial ability
360
+ [1899.860 --> 1903.660] This study is from 2003, but it's still kind of
361
+ [1904.660 --> 1906.660] There is nothing
362
+ [1907.300 --> 1909.300] better I would say in this
363
+ [1909.980 --> 1911.980] in this area and
364
+ [1912.580 --> 1914.580] These hypotheses are still discussed
365
+ [1916.380 --> 1918.980] So again, you've mentioned this
366
+ [1920.340 --> 1922.340] hunter-gatherer
367
+ [1922.340 --> 1930.660] theories and this is one of the hypothesis that is discussed too, but for example and you see that they are in different direction
368
+ [1931.660 --> 1933.660] So for example
369
+ [1933.940 --> 1943.340] There are some hypothesis that actually argue that females should have higher spatial ability because female foraging so kind of going for berries and stuff
370
+ [1945.300 --> 1954.060] They need to remember the locations and so on and there is no evidence kind of definitive for neither of hypothesis
371
+ [1954.340 --> 1961.900] including not only humans, but also in other animals
372
+ [1962.060 --> 1968.220] There is a lot of research in this area in animals too and there is no definitive answer yet
373
+ [1970.740 --> 1972.740] However, there are some
374
+ [1974.060 --> 1976.060] novel hypothesis that
375
+ [1977.300 --> 1978.820] actually
376
+ [1978.820 --> 1989.700] provide kind of newer explanations so for example, this hypothesis was mentioned there so male warfare, but this is a study from 2022
377
+ [1989.700 --> 1991.700] I've read it like
378
+ [1991.700 --> 1993.700] whatever couple of months ago and
379
+ [1994.220 --> 2002.820] What they've argued is that so here for example in this on this slide they say that well men travel long distances to kill competitors and capture
380
+ [2003.380 --> 2010.140] females this is why they have higher spatial ability, but this paper whoa something's going on
381
+ [2010.660 --> 2016.380] This paper argues that actually these competitions included the use of blunt force and
382
+ [2018.220 --> 2020.660] projectile weapons so
383
+ [2021.300 --> 2023.540] males did not only kind of
384
+ [2024.540 --> 2027.900] Traveled a lot to find someone to fight with but they also
385
+ [2028.620 --> 2030.620] tried to hit them and
386
+ [2030.620 --> 2035.300] since spatial ability is linked to motor ability
387
+ [2036.980 --> 2042.620] This probably had driven increased spatial ability in males
388
+ [2043.620 --> 2045.620] And by the way, I didn't mention this before
389
+ [2046.980 --> 2052.060] spatial and motor skills are quite related and there are
390
+ [2052.940 --> 2060.420] numerous studies that showed that actually some brain areas are also shared between motor and spatial skills and there are
391
+ [2060.740 --> 2062.740] very nice and
392
+ [2062.740 --> 2064.740] really
393
+ [2064.740 --> 2069.740] tricky research in this area so for example, there is one study that I'm using recently
394
+ [2073.300 --> 2079.740] Participants of the studies of psychological studies usually solve some task using their laptops for example in the labs and
395
+ [2080.740 --> 2087.260] In one of the studies they were asked to perform in a mental rotation task so they
396
+ [2087.540 --> 2094.140] The figure was shown from different angles and they needed to say whether this is the same figure or a different
397
+ [2094.140 --> 2101.100] figure and stuff like that, but what was done in this study is that they fixed one of their arms
398
+ [2101.100 --> 2106.740] So they couldn't move one of their arms they could still use mouse to click
399
+ [2107.740 --> 2112.820] Correct answers, but one of the arms was fixed and what they found is that
400
+ [2113.620 --> 2116.540] people with this arm fixed
401
+ [2118.020 --> 2120.020] Solved this spatial tasks
402
+ [2121.020 --> 2126.900] Kind of their performance in these spatial tasks was lower compared to those whose arms were free
403
+ [2127.140 --> 2132.620] This is a very fun thing. I mean it is like amazing to me because this kind of
404
+ [2133.100 --> 2137.420] Restriction of movement affects our ability to solve spatial tasks
405
+ [2144.020 --> 2152.260] You mean like right left-handedness or something like this I don't really remember this but I guess the opposite arm was fixed
406
+ [2155.740 --> 2159.740] There is a lot of interesting stuff there so yeah
407
+ [2159.740 --> 2166.580] But the good thing there is that we can kind of
408
+ [2167.660 --> 2173.500] Address this underperformance of females in spatial ability basically this
409
+ [2177.060 --> 2183.060] Gender differences in spatial ability are actually called or considered in
410
+ [2184.620 --> 2188.420] Contemporary scientific literature as being one of the reasons for
411
+ [2189.140 --> 2196.020] Women's underperformance in STEM or the gender gap in STEM and so this is one of the
412
+ [2197.100 --> 2205.580] This is the reason why we need to address this somehow and so researchers in my area are now trying to find some ways to help
413
+ [2206.980 --> 2210.020] Improve the spatial ability especially in females
414
+ [2211.100 --> 2215.300] So one of the studies I find interesting is that we can teach
415
+ [2216.260 --> 2218.260] People to solve
416
+ [2218.380 --> 2219.900] spatial tasks
417
+ [2219.900 --> 2224.020] Better using some strategies and so this study actually showed
418
+ [2224.940 --> 2229.140] That we differ in these strategies females use
419
+ [2231.020 --> 2234.740] Different strategies from males to solve spatial tasks, but
420
+ [2236.100 --> 2238.420] If we teach these strategies to
421
+ [2238.820 --> 2244.500] To people, to males and females first of all they start to use them frequently
422
+ [2244.700 --> 2249.420] This is a very messy graph. I will just say what's going on there
423
+ [2251.100 --> 2256.080] So they start to use them more frequently. This is a very good point we can learn
424
+ [2256.460 --> 2263.900] But another study in 2000 so the previous paper was published in 2012 this paper is published two years later
425
+ [2264.220 --> 2271.980] What they found is that this strategy training eliminated sex differences in spatial ability good thing
426
+ [2282.060 --> 2286.780] They are not they you can see from these slides for example
427
+ [2286.900 --> 2293.780] So males and females differ slightly so for example you can see that the spatial-imagistic strategy
428
+ [2293.900 --> 2299.460] for example is used more by males than by females
429
+ [2300.900 --> 2309.580] But for example in alternative strategies they are almost similar but when they were instructed with these different strategies
430
+ [2309.580 --> 2316.120] We can see that females started to use alternative strategies a lot. They are not gender kind of linked
431
+ [2316.120 --> 2318.560] I turn to the means like you know uses
432
+ [2320.560 --> 2326.520] No, they use this like spatial-imagistic was one of the most frequent ones
433
+ [2327.040 --> 2329.040] All everyone used it
434
+ [2329.040 --> 2331.040] It's described to the left
435
+ [2331.040 --> 2334.000] But then they started to use alternative ones
436
+ [2334.680 --> 2340.680] The new ones like spatial-diagrammatic spatial-analytic algorithmic. Yeah, yeah, they color matched
437
+ [2343.240 --> 2345.240] Yeah
438
+ [2346.120 --> 2360.360] So and this is a very good thing because if we teach people properly then this gender gap will diminish eventually
439
+ [2360.960 --> 2364.120] Will be diminished eventually another good thing is that
440
+ [2365.360 --> 2372.640] Video games help to reduce gender differences. I like this study really because
441
+ [2372.840 --> 2377.040] What was shown there is that if we
442
+ [2378.080 --> 2384.000] Allow people to play video games the study used a four-hour kind of session
443
+ [2385.040 --> 2388.520] I don't remember whether it was kind of divided into several ones
444
+ [2388.600 --> 2394.880] But anyway four hours of spatial games it was Tetris in this study
445
+ [2395.920 --> 2397.920] They found that
446
+ [2398.080 --> 2403.520] spatial ability increased in both males and females, but what is
447
+ [2404.600 --> 2407.000] Most interesting thing there is that
448
+ [2408.000 --> 2410.800] females gained more than males
449
+ [2412.840 --> 2415.720] There is no proper explanation for this I guess
450
+ [2416.480 --> 2421.960] Probably because they were at lower level in the beginning they gained more
451
+ [2422.440 --> 2423.680] but
452
+ [2423.680 --> 2426.400] Good thing is that they were equal at the end of the day
453
+ [2427.200 --> 2433.960] In the spatial ability and this is not the only study there was a huge meta analysis published in 2014
454
+ [2434.440 --> 2440.080] Which actually showed a lot of video games that help to improve spatial ability
455
+ [2440.440 --> 2445.040] Including some like adventure or like shooters even like Call of Duty for example
456
+ [2445.040 --> 2448.680] I do remember that it was there. So yeah
457
+ [2449.440 --> 2453.400] Next time when you find your kids playing video games don't blame them
458
+ [2456.760 --> 2458.760] another
459
+ [2458.760 --> 2463.120] Thing which is important for this discussion is spatial anxiety
460
+ [2464.160 --> 2466.160] You know what anxiety is
461
+ [2467.160 --> 2471.880] And there is a kind of a special kind of anxiety which is
462
+ [2472.640 --> 2478.680] Specific to spatial tasks. So I know a lot of people who say oh, I could not orient in a city
463
+ [2478.680 --> 2481.880] I am totally spatially dumb or something like this and
464
+ [2482.880 --> 2488.160] so this thing is called spatial anxiety and
465
+ [2489.440 --> 2490.960] Basically
466
+ [2490.960 --> 2496.560] It was shown to be one of the drivers of this gender differences in spatial ability and
467
+ [2498.960 --> 2509.880] This is not a very huge surprise here, but females on average experience more anxiety any anxiety and
468
+ [2510.880 --> 2514.520] spatial anxiety is not kind of an exception here
469
+ [2516.040 --> 2524.400] So one of the ways to improve spatial ability like I mean like performance is to reduce spatial anxiety in females
470
+ [2525.960 --> 2527.960] And there are actually some
471
+ [2529.680 --> 2535.400] Documented ways to do this and this topic was introduced maybe
472
+ [2536.400 --> 2542.640] it was introduced even earlier, but in 1994 this paper was published that kind of
473
+ [2547.360 --> 2551.520] stressed this idea
474
+ [2553.520 --> 2555.520] And this is actually was
475
+ [2557.520 --> 2559.520] Further kind of
476
+ [2560.000 --> 2565.000] So there is a further evidence on that so this is a study
477
+ [2566.240 --> 2571.880] That was published a couple of years ago. It is a huge study. It's a meta analysis also by the way
478
+ [2571.880 --> 2573.880] It's a huge study that
479
+ [2574.160 --> 2576.280] Analyzed a lot of studies that used
480
+ [2577.760 --> 2579.760] MRI so this
481
+ [2582.200 --> 2586.040] Method to investigate brain activity and so
482
+ [2586.440 --> 2592.680] What they found is that there is an increased activation in lentiform nucleus
483
+ [2594.040 --> 2597.080] This is the area of the brain that is linked to
484
+ [2597.960 --> 2601.680] negative emotions and they argued that basically
485
+ [2602.040 --> 2610.320] This is spatial anxiety here. So people were solving spatial tasks and what they found is that females activated in this area
486
+ [2610.640 --> 2613.440] more than males and they
487
+ [2613.840 --> 2617.000] Suggested that this is probably because of
488
+ [2619.320 --> 2622.960] spatial anxiety so spatial anxiety was kind of
489
+ [2624.400 --> 2626.400] online
490
+ [2626.680 --> 2628.680] the one which
491
+ [2629.120 --> 2634.360] disrupted people for example online while they were solving these tasks
492
+ [2634.360 --> 2636.360] The
493
+ [2639.360 --> 2644.520] Mechanisms for this kind of is not very clear
494
+ [2645.920 --> 2648.360] There is a general framework for anxiety
495
+ [2649.360 --> 2653.160] Which says that well we have a limited capacity mental
496
+ [2654.000 --> 2659.000] In for example in our working memory so we can process limited number of tasks for example
497
+ [2659.000 --> 2662.080] And so when we experience anxiety we have a dual task
498
+ [2662.080 --> 2665.360] So we need to put some resources into solving a task at hand
499
+ [2665.560 --> 2669.600] So for example solving math task or spatial task or whatever driving a car
500
+ [2669.680 --> 2677.760] But when we have an anxiety higher anxiety then our brain need to process the second task and kind of we have
501
+ [2677.960 --> 2686.800] Limited resources and these two processes start to compete for this resource and that's why we have lower performance when we have a specific anxiety
502
+ [2687.320 --> 2692.160] And it is very also very well documented thing for example in the area of math anxiety
503
+ [2692.160 --> 2697.520] I've investigated this for a couple of years in my career and there is a lot of
504
+ [2698.000 --> 2701.080] Also brain evidence I mean like neuro evidence for that
505
+ [2703.120 --> 2704.800] So yeah
506
+ [2704.800 --> 2706.800] spatial anxiety and
507
+ [2707.040 --> 2710.280] another thing is regarding strategies as
508
+ [2712.240 --> 2716.000] You've asked about males and female strategies
509
+ [2716.560 --> 2718.000] but
510
+ [2718.000 --> 2720.000] What they found is that
511
+ [2722.560 --> 2725.200] females first of all had some
512
+ [2728.120 --> 2730.120] Kind of
513
+ [2730.760 --> 2736.800] Not problems, but they had troubles kind of controlling themselves during spatial ability tasks
514
+ [2736.800 --> 2741.240] So because they found this increased activation in the right subgyral
515
+ [2741.240 --> 2744.200] region and this
516
+ [2744.840 --> 2749.680] Activation in this region is linked usually to efforts to mental efforts
517
+ [2749.680 --> 2752.080] So we need to control our activity
518
+ [2752.240 --> 2758.240] This is a huge thing which is called executive function and one of the executive functions is called executive control
519
+ [2758.440 --> 2765.080] So females kind of struggled more with the task. They needed to put more effort into solving this task and
520
+ [2766.440 --> 2769.640] To the strategies what they found this is a
521
+ [2769.800 --> 2775.240] Kind of slightly dodgy explanation here. I must admit but what they found is that
522
+ [2778.280 --> 2780.280] The part of the brain which is
523
+ [2781.600 --> 2790.040] reflects kind of usage of an egocentric strategy was activated in females when they needed to
524
+ [2790.680 --> 2796.200] Use an allocentric strategy to solve the task and this is kind of them
525
+ [2796.840 --> 2802.680] Probably using wrong strategy to solve this task and this kind of bring us back to the discussion of
526
+ [2803.240 --> 2807.160] Teaching strategies to solve this task because if we teach people
527
+ [2807.880 --> 2809.880] to to use
528
+ [2809.880 --> 2813.880] Strategies that are kind of linked to the tasks at hand
529
+ [2814.040 --> 2816.400] Then they will probably do these tasks better
530
+ [2816.640 --> 2821.880] Allocentric and egocentric strategies are basically about perspective
531
+ [2822.680 --> 2829.800] Egocentric is us, kind of from our perspective, from me, from ego, and allocentric is like me looking at the map for example and
532
+ [2830.120 --> 2832.120] We can kind of look
533
+ [2833.320 --> 2839.640] We can perceive a lot of kind of it's not a first first person perspective that would be easy to discuss
534
+ [2841.560 --> 2845.240] Yeah, kind of it's a little bit more complicated, but yeah, it's
535
+ [2846.920 --> 2848.920] Pretty good explanation
536
+ [2852.600 --> 2854.600] Yeah
537
+ [2856.040 --> 2858.120] Space is actually a key thing
538
+ [2859.000 --> 2863.640] For a lot of different processes in our brain because we live in space
539
+ [2865.160 --> 2869.400] You can see my son on the left, who was nine months old
540
+ [2870.520 --> 2877.880] when I've taken this picture, and his selection of words was quite limited because he was only starting to talk
541
+ [2878.600 --> 2880.600] He had like
542
+ [2881.000 --> 2884.200] Key things were like Kisa and key for Kisa
543
+ [2885.320 --> 2888.520] Buffo Babushka for Granny and so on and
544
+ [2890.200 --> 2895.480] This is the first time I've seen that my son can process
545
+ [2896.280 --> 2900.120] Spatial information he could do it before, but it was kind of
546
+ [2901.400 --> 2903.400] somewhere
547
+ [2904.600 --> 2909.880] Yeah, it was hidden, but I mean at that time he knew exactly where things were: the cars
548
+ [2910.120 --> 2911.880] He was
549
+ [2911.880 --> 2913.880] kind of interested in them at that time and still is
550
+ [2916.120 --> 2920.920] He knew exactly where the car was, the car he liked to look at
551
+ [2921.800 --> 2923.320] He knew where
552
+ [2923.320 --> 2928.920] Granny lived, for example; when she came to visit us he could point to this place
553
+ [2929.400 --> 2934.840] And this is the first time I've seen that spatial ability is there and it is linked to language
554
+ [2936.520 --> 2938.520] Like very closely
555
+ [2938.680 --> 2944.760] Very closely related. But on spatial abilities, there is a lot of research in very small kids
556
+ [2945.720 --> 2948.040] and it has been shown that they use spatial ability to
557
+ [2949.000 --> 2952.120] Orient in a room for example. They need to control their body
558
+ [2952.360 --> 2958.040] They know where their hands are, or they start to learn where their hands are, and as I've mentioned already
559
+ [2958.120 --> 2963.400] Spatial ability is very closely related to motor ability and so this link is there and
560
+ [2964.200 --> 2966.520] Kind of they process spatial information from
561
+ [2967.160 --> 2968.520] Like very early
562
+ [2968.920 --> 2972.680] But the research is limited there, I must admit, because it's complicated to
563
+ [2973.240 --> 2976.600] To investigate anything in small kids
564
+ [2978.360 --> 2980.360] and
565
+ [2980.360 --> 2985.800] Indeed the relations between spatial and verbal ability are documented pretty well
566
+ [2987.880 --> 2990.680] And what is important here is that
567
+ [2991.560 --> 2995.480] spatial ability can be predictive for example of
568
+ [2997.240 --> 2998.280] uh
569
+ [2998.280 --> 3003.560] verbal abilities. So for example, good spatial ability, higher spatial ability, predicts higher
570
+ [3004.200 --> 3006.840] verbal ability later on and this is a
571
+ [3007.720 --> 3011.960] especially important for some languages, for example Chinese. It's a very spatial language
572
+ [3012.920 --> 3014.920] we have this complex
573
+ [3016.440 --> 3022.200] Symbols there and we need to remember exactly where they are situated even within one
574
+ [3023.080 --> 3025.080] symbol
575
+ [3025.240 --> 3026.440] So
576
+ [3026.440 --> 3028.120] spatial ability is
577
+ [3028.120 --> 3030.120] very important for this
578
+ [3030.600 --> 3035.160] There is even some research in dyslexia, you know this kind of
579
+ [3037.240 --> 3038.440] disorder
580
+ [3038.840 --> 3042.280] and for example one of the hypotheses there is that
581
+ [3043.720 --> 3045.480] probably this
582
+ [3045.480 --> 3049.640] People diagnosed with dyslexia have some issues with
583
+ [3050.520 --> 3054.120] Processing spatial information and kind of
584
+ [3056.920 --> 3060.600] This is one of the reasons why they have troubles reading for example
585
+ [3062.200 --> 3064.200] Uh
586
+ [3066.440 --> 3070.280] And yeah, this is one of the studies from the Chinese language
587
+ [3071.640 --> 3073.640] This is quite complex, but it's again
588
+ [3074.360 --> 3075.400] um
589
+ [3075.400 --> 3083.880] longitudinal study and they investigate spatial ability in kindergarten so there's like very small kids there like
590
+ [3084.520 --> 3088.040] three four five years old and what they found is that
591
+ [3089.080 --> 3091.080] spatial visualization
592
+ [3091.880 --> 3095.800] Contributed to orthographic awareness and then contributed to word reading
593
+ [3096.360 --> 3097.640] so
594
+ [3097.640 --> 3099.640] these longitudinal designs
595
+ [3100.360 --> 3101.400] uh
596
+ [3101.400 --> 3106.360] Kind of keep pointing to the importance of spatial ability for verbal processing
597
+ [3107.880 --> 3109.240] and
598
+ [3109.240 --> 3110.360] Even
599
+ [3110.360 --> 3112.360] We can see these links
600
+ [3112.680 --> 3115.880] Even embedded into our language
601
+ [3116.760 --> 3119.000] So there is a lot of things
602
+ [3119.800 --> 3123.000] In our language that is spatial in nature
603
+ [3123.720 --> 3124.920] uh
604
+ [3125.720 --> 3129.800] I just bring this kind of case of time as an example
605
+ [3130.440 --> 3134.040] Because we have this spatial metaphor of time so the time flies
606
+ [3134.600 --> 3140.040] um time never stops and stuff like that and it's all spatial it's all kind of
607
+ [3140.840 --> 3142.840] linked to movement and
608
+ [3142.840 --> 3144.600] We also
609
+ [3144.600 --> 3149.160] use gestures when we talk, so for example time is going from left to the right for example
610
+ [3149.720 --> 3151.720] uh or
611
+ [3151.720 --> 3153.720] It actually depends on the language
612
+ [3153.720 --> 3155.720] I mean like for example in Arabic
613
+ [3156.280 --> 3158.760] language it's uh
614
+ [3158.760 --> 3161.960] The other way around so the time flies
615
+ [3162.760 --> 3164.760] not from
616
+ [3165.080 --> 3170.520] Yeah, not from left to right but from right to left actually. And the very interesting thing, there is a
617
+ [3171.160 --> 3173.160] uh
618
+ [3173.320 --> 3179.480] An effect, it is also a very well-documented thing, it is called the SNARC effect, symbolic number
619
+ [3180.440 --> 3181.880] approximate
620
+ [3181.880 --> 3184.120] representation
621
+ [3184.120 --> 3185.320] Something else
622
+ [3185.320 --> 3187.320] uh basically this is uh
623
+ [3187.960 --> 3189.960] about numbers
624
+ [3190.280 --> 3191.960] we
625
+ [3192.840 --> 3198.440] Kind of our perception of numbers is linked to space also
626
+ [3199.080 --> 3201.560] Because smaller numbers are to the left
627
+ [3202.600 --> 3206.520] And bigger numbers are to the right so for example zero is to the left and
628
+ [3207.080 --> 3210.920] Large numbers are to the right and we even respond faster. You've
629
+ [3211.560 --> 3217.080] asked me about left- and right-handedness. If we need to respond to a larger number
630
+ [3217.880 --> 3219.880] By right hand we do it faster
631
+ [3220.760 --> 3224.600] Compared to responding to the large number by left hand because we have this
632
+ [3225.720 --> 3227.720] Uh kind of competing
633
+ [3228.120 --> 3229.720] Uh
634
+ [3229.720 --> 3231.720] Information
635
+ [3232.120 --> 3234.120] Sorry, say again
636
+ [3234.120 --> 3238.280] Yeah, it changes it changes but uh
637
+ [3239.240 --> 3241.800] Why I've started to talk about this SNARC effect
638
+ [3242.680 --> 3248.520] Is that it is reversed in people who read from right to left so this is linked to language
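As a rough illustration of how the SNARC effect mentioned here is usually quantified (the reaction times below are invented, not from any study): the right-minus-left response-time difference is regressed on number magnitude, and a negative slope indicates the effect.

```python
# Sketch of the standard SNARC analysis with made-up reaction times (ms).
import numpy as np

numbers  = np.array([1, 2, 8, 9])
rt_left  = np.array([420, 425, 455, 460])   # mean RT, left-hand responses
rt_right = np.array([455, 450, 420, 415])   # mean RT, right-hand responses

drt = rt_right - rt_left                    # right-hand advantage per number
slope, intercept = np.polyfit(numbers, drt, 1)
print(f"slope = {slope:.1f} ms/unit")       # -10.0 here: the right hand gets
# faster as magnitude grows, the classic left-to-right SNARC pattern.
# In right-to-left readers, as the talk notes, the slope flips sign.
```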
639
+ [3249.160 --> 3255.880] And this is kind of a very interesting thing and this is actually I guess one of the key reasons for why actually
640
+ [3255.960 --> 3259.560] I'm doing this research actually because it's super interesting
641
+ [3260.200 --> 3265.480] And this is um also this embodiment is one of the
642
+ [3266.040 --> 3268.040] theories which is also huge now in
643
+ [3268.680 --> 3270.680] Research of human actually
644
+ [3271.560 --> 3274.360] cognition. Some people say that this theory actually
645
+ [3274.840 --> 3279.880] competes with cognitive theory, cognitive science, but what is it, what does it say?
646
+ [3279.960 --> 3285.080] It says that our brain and cognition have nothing apart from our bodies
647
+ [3285.800 --> 3286.600] Uh
648
+ [3286.600 --> 3292.280] We don't have any other experience coming into the brain, and all kind of our
649
+ [3293.480 --> 3296.920] Cognitive processes are actually linked to our body
650
+ [3297.880 --> 3299.400] There were some
651
+ [3299.400 --> 3308.040] studies, which I find a little bit dodgy also, but for example there are some studies that show that we process words
652
+ [3308.840 --> 3312.280] Uh, that are presented to us in the
653
+ [3312.920 --> 3314.920] uh
654
+ [3314.920 --> 3316.920] Bottom of the screen
655
+ [3317.080 --> 3323.480] faster if they are if their semantics is linked to something we usually perceive
656
+ [3324.920 --> 3327.640] Uh on the bottom for example boots
657
+ [3328.920 --> 3334.520] Or grass or something like this and if we say something like sand and presenting to the
658
+ [3335.240 --> 3339.480] Upper part of the screen people process it faster
659
+ [3339.880 --> 3342.440] This dodgy this is sounds like
660
+ [3343.400 --> 3345.880] Something weird, but I think it's very interesting
661
+ [3347.960 --> 3349.720] And this is also
662
+ [3349.720 --> 3353.800] true for numbers, for example. We also perceive numbers in a similar way
663
+ [3354.440 --> 3356.440] uh
664
+ [3356.440 --> 3359.640] Bigger numbers are higher and smaller numbers
665
+ [3360.680 --> 3363.080] Lower and if we present them
666
+ [3364.360 --> 3366.600] Kind of congruently
667
+ [3366.680 --> 3368.280] Yeah, congruent is the correct one
668
+ [3368.280 --> 3373.160] Congruently to the magnitude, then we perceive them faster
669
+ [3374.840 --> 3376.440] Um
670
+ [3376.440 --> 3381.560] There is also some people who argue and by the way verbal spatial and uh
671
+ [3382.360 --> 3384.600] Oh, that was quite some time ago
672
+ [3384.600 --> 3386.600] verbal spatial and
673
+ [3386.600 --> 3388.840] maths ability are
674
+ [3388.840 --> 3391.320] Kind of all parts of intelligence
675
+ [3392.120 --> 3396.360] So they are all linked there is a g general cognitive ability
676
+ [3397.000 --> 3400.920] Uh, factor, and it's like intelligence, they are all parts of intelligence
677
+ [3401.800 --> 3404.680] Uh, but there is also a very
678
+ [3405.720 --> 3414.280] Interesting direction of research which tries to say that spatial and verbal ability might be kind of competitive not a competitive but
679
+ [3414.840 --> 3416.680] um
680
+ [3416.760 --> 3420.760] Kind kind of can have different differential effects on us
681
+ [3421.480 --> 3423.480] And this study is very
682
+ [3423.560 --> 3428.520] Interesting in this sense because what it's a genetically informative study by the way
683
+ [3428.600 --> 3435.320] So what they've done they've collected DNA from people they are created uh they were
684
+ [3436.600 --> 3437.640] Uh
685
+ [3437.720 --> 3443.480] conducted a so-called genome-wide association study, so they basically found correlations in genes
686
+ [3445.240 --> 3449.080] Correlations between genes different genes and we have a lot of them and
687
+ [3449.960 --> 3457.080] a trait like spatial or verbal ability, and then they've created a polygenic score. This is beyond our discussion
688
+ [3457.080 --> 3458.280] but
689
+ [3458.280 --> 3460.280] The interesting thing is that
690
+ [3460.280 --> 3461.800] genes
691
+ [3461.960 --> 3469.000] that were correlated with spatial ability were also associated with higher body mass index and
692
+ [3470.280 --> 3472.280] lower risk for schizophrenia
693
+ [3473.640 --> 3474.680] and
694
+ [3474.680 --> 3477.080] And they also were linked to less openness
695
+ [3477.960 --> 3482.760] a personality trait that reflects curiosity and creativity. But conversely
696
+ [3483.800 --> 3492.360] genes that were linked to verbal ability they were linked to lower body mass index and had no association with risk of schizophrenia
697
+ [3492.840 --> 3494.200] so
698
+ [3494.200 --> 3500.120] They correlate and we know that they correlate and there is also some research that shows that they correlate
699
+ [3500.840 --> 3503.240] there is like generalist gene hypothesis
700
+ [3505.000 --> 3507.000] the same genes
701
+ [3507.160 --> 3508.920] contribute to
702
+ [3508.920 --> 3516.440] Maths verbal and spatial kind of ability, but also there are some research that shows that actually there are some specific genes
703
+ [3516.760 --> 3521.080] That kind of have differential effects on ourselves. This is a very interesting thing
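Since the talk leans on the notion of a polygenic score without unpacking it, here is a minimal sketch of the arithmetic behind one; the effect sizes and genotypes are invented for illustration, not taken from the study discussed.

```python
# A polygenic score is, at its simplest, a weighted sum of allele counts:
# each person's score = sum over variants of (GWAS effect size * number of
# trait-associated alleles carried, 0/1/2). All numbers below are made up.
import numpy as np

gwas_betas = np.array([0.08, -0.03, 0.05])   # per-allele effects from a GWAS
genotypes  = np.array([[0, 1, 2],            # person A's allele counts
                       [2, 0, 1],            # person B
                       [1, 2, 0]])           # person C

scores = genotypes @ gwas_betas
print(scores)   # [0.07 0.21 0.02]; higher score = higher predicted trait value
```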
704
+ [3523.640 --> 3528.200] Since we've started to talk about genes and spatial ability
705
+ [3529.720 --> 3532.440] It is a very complicated matter to investigate
706
+ [3533.080 --> 3537.320] genes that are related to behavior with molecular methods
707
+ [3538.120 --> 3539.560] We've already discussed twin
708
+ [3540.280 --> 3543.320] twin designs, so we have monozygotic and dizygotic twins
709
+ [3544.360 --> 3552.040] Two monozygotic twins are more similar than two dizygotic twins, and looking at them we kind of can infer
710
+ [3553.800 --> 3557.000] genetic contributions because of the genetic differences
711
+ [3557.720 --> 3560.280] We can infer genetic contribution to some trait
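A worked version of the twin logic just described is Falconer's classic estimate: monozygotic twins share roughly all their segregating genes and dizygotic twins about half, so doubling the gap between the two twin correlations estimates heritability. The correlations below are invented, chosen only so the result lands on the 64% figure quoted later in the talk.

```latex
% Falconer's formula for heritability from twin correlations
% (r_MZ, r_DZ here are illustrative, not the study's values):
h^2 = 2\,(r_{MZ} - r_{DZ}) = 2\,(0.72 - 0.40) = 0.64
```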
712
+ [3561.240 --> 3567.240] But molecular studies are more complex and they are very expensive because we need to collect DNA
713
+ [3567.240 --> 3573.560] We need to analyze the data, pre-process it, and so on and so forth, and
714
+ [3575.560 --> 3577.560] It is very complicated to find
715
+ [3578.840 --> 3582.200] genes that are associated with different behavior
716
+ [3583.080 --> 3585.480] So for example this study did not find
717
+ [3585.880 --> 3586.680] And
718
+ [3587.880 --> 3591.240] genes that are related to spatial ability
719
+ [3592.440 --> 3597.080] In the genes that are associated with cognitive kind of diseases
720
+ [3599.080 --> 3602.280] I can imagine their kind of feelings in this case
721
+ [3603.720 --> 3605.720] But there are also some studies that
722
+ [3607.000 --> 3610.120] managed to find these links and for example in this study
723
+ [3610.600 --> 3613.400] It was these are pretty small studies
724
+ [3613.400 --> 3615.640] So for example in this the first study
725
+ [3617.000 --> 3619.000] There were there were
726
+ [3619.480 --> 3627.480] 400,000 people in this study it's like a lot we need huge international efforts to conduct such studies
727
+ [3628.280 --> 3633.480] But for example in this study there were 2,000 people and they didn't manage to find this correlation
728
+ [3633.480 --> 3635.000] And didn't manage to find this link
729
+ [3635.960 --> 3641.320] And in this study there was like 1,000 people it's like a pretty small sample
730
+ [3642.200 --> 3646.120] But they used a different approach they had like gene candidates
731
+ [3647.560 --> 3648.920] And what they found is that
732
+ [3649.720 --> 3651.720] there are actually
733
+ [3651.720 --> 3657.160] a correlation of a gene with spatial ability that is also linked to long-term
734
+ [3657.160 --> 3661.000] potentiation and hippocampus, here we have hippocampus again
735
+ [3661.240 --> 3665.000] So this kind of research can help us to look
736
+ [3666.280 --> 3669.560] Kind of to get new insights into the mechanisms
737
+ [3669.560 --> 3675.480] So in one of the previous studies we've seen that well, the hippocampus links to
738
+ [3677.800 --> 3683.320] being a taxi driver, to the experience of a taxi driver. So what
739
+ [3684.200 --> 3685.720] And now we can see that well
740
+ [3687.400 --> 3690.280] There are some genes that are linked to this hippocampus
741
+ [3691.480 --> 3694.440] So this is another kind of way to look at this
742
+ [3694.680 --> 3701.320] There are also some for example research that shows
743
+ [3701.880 --> 3705.080] That for example expression of genes
744
+ [3707.640 --> 3713.160] And gene expression in the brain can differ for example in people with different spatial ability
745
+ [3713.720 --> 3718.680] And for example what they found is that there are some genes that are associated with spatial ability
746
+ [3719.000 --> 3721.880] Okay, but these genes are usually expressed in the brain
747
+ [3722.600 --> 3726.840] And they are usually expressed in the frontal area of the brain
748
+ [3727.560 --> 3731.240] And prefrontal area and the prefrontal area is the area
749
+ [3731.960 --> 3734.600] That is linked to executive functions for example
750
+ [3734.600 --> 3739.880] We've talked about executive functions already today. This is our ability to control our emotions and so on
751
+ [3740.680 --> 3742.840] And so again we have we can
752
+ [3743.640 --> 3748.360] Make this bridge from genes to behavior via kind of brain
753
+ [3749.000 --> 3753.160] So we know that prefrontal cortex is somehow involved in spatial ability
754
+ [3753.480 --> 3759.800] We know that there are some genes that are expressed in the prefrontal cortex, and now we know
755
+ [3760.760 --> 3765.400] What are these genes so we can kind of have this chain of logic
756
+ [3766.120 --> 3768.120] From spatial ability on the
757
+ [3769.320 --> 3772.760] From our behavior to our genes and this is very important
758
+ [3774.920 --> 3776.920] So yeah
759
+ [3777.720 --> 3781.480] In spite of all these molecular studies of genetics
760
+ [3783.880 --> 3785.880] What is
761
+ [3785.880 --> 3787.880] Importance here is that
762
+ [3789.880 --> 3791.880] Twin studies for example show that
763
+ [3795.400 --> 3797.400] spatial ability is driven
764
+ [3798.040 --> 3804.600] Is explained by so variations in spatial ability is explained by variation in
765
+ [3807.080 --> 3808.920] genes and this variation is
766
+ [3809.800 --> 3815.160] Kind of explains 64% of variation in spatial ability so basically
767
+ [3816.280 --> 3821.000] 64% of this trait is explained by genes
768
+ [3822.920 --> 3826.760] But how come we could not find the genes themselves?
769
+ [3827.240 --> 3831.080] This is a tricky part. This is called missing heritability and
770
+ [3832.200 --> 3835.560] This is not a unique thing for spatial ability. We still
771
+ [3836.440 --> 3840.120] Could not find genes that are linked to intelligence for example
772
+ [3841.720 --> 3844.920] Heritability for intelligence is some
773
+ [3845.960 --> 3848.920] Pretty similar some some like 54%
774
+ [3850.200 --> 3855.800] But we only managed to explain 10% in intelligence
775
+ [3856.360 --> 3858.360] 10% of the variation intelligence
776
+ [3859.800 --> 3862.360] From genes only but that's pretty much already
777
+ [3863.160 --> 3866.360] Still we do not know where the other 40% goes
778
+ [3867.000 --> 3871.080] So this is the thing we need to to work on in
779
+ [3872.280 --> 3874.280] Future studies
780
+ [3875.000 --> 3877.000] but
781
+ [3877.000 --> 3879.000] There are some there is some research
782
+ [3880.280 --> 3884.520] on other animals that could potentially bring us some evidence
783
+ [3885.800 --> 3888.040] regarding our
784
+ [3888.040 --> 3891.160] Kind of behavior so for example, these are mountain chickadees
785
+ [3891.320 --> 3894.200] I hope I pronounced this correctly and so
786
+ [3895.000 --> 3898.840] This is a pretty recent study and what they've shown is that
787
+ [3900.840 --> 3902.840] There are so
788
+ [3903.160 --> 3910.200] There are individual differences in spatial ability in these birds, so some birds are better than others
789
+ [3910.520 --> 3913.160] This is an interesting point already
790
+ [3914.120 --> 3916.920] But what is more important is that they found
791
+ [3917.560 --> 3920.760] Some links between spatial ability and genes
792
+ [3921.640 --> 3924.520] of these birds. So the birds were sequenced, so their
793
+ [3925.880 --> 3932.280] DNA was sequenced and what they found is that there are some links between spatial ability and genetics and what is
794
+ [3932.680 --> 3933.880] again
795
+ [3933.880 --> 3938.280] found in this study is that again the function of the hippocampus was linked with these genes
796
+ [3939.320 --> 3940.680] and
797
+ [3940.680 --> 3942.120] So kind of we have
798
+ [3942.120 --> 3944.120] evidence from different sources, we can
799
+ [3944.120 --> 3952.200] see how the evidence kind of keeps emerging, we keep kind of collecting evidence for something here
800
+ [3952.600 --> 3958.280] We don't know all answers still but we have this evidence collected and
801
+ [3958.920 --> 3963.880] For example when I was talking about sex differences in spatial ability the sex differences
802
+ [3964.600 --> 3971.880] Exist in other species too. For example, there are these small mice or rats. They also show sex differences
803
+ [3972.680 --> 3974.360] There are some
804
+ [3974.360 --> 3976.360] Kind of differences that
805
+ [3977.320 --> 3979.320] are in a different direction or
806
+ [3980.120 --> 3986.120] Of a smaller magnitude, but they are still there and so for example in this
807
+ [3987.000 --> 3995.160] Let's see, it was a vole or something like this, it's like small mice, and they also have sex differences in
808
+ [3996.440 --> 3998.120] Oh meadow meadow
809
+ [3998.120 --> 4001.240] Meadow voles it is called and so they had this
810
+ [4001.960 --> 4006.920] spatial ability differences, but it was linked to whether
811
+ [4007.560 --> 4012.280] the exact species was monogamous or not
812
+ [4013.160 --> 4017.240] So if males had a lot of sexual partners around
813
+ [4019.560 --> 4024.680] Males in this species were better in spatial ability compared to females
814
+ [4025.000 --> 4028.680] But if they were monogamous, there were no sex differences in spatial ability
815
+ [4029.160 --> 4031.160] interesting
816
+ [4033.480 --> 4038.440] And yeah, so probably other species could help us find something interesting
817
+ [4039.080 --> 4045.160] regarding ourselves, because for example this study is also a pretty new one. It was published in July
818
+ [4045.800 --> 4047.400] 2022
819
+ [4047.400 --> 4051.320] What they found, this is probably a little bit of a stretch
820
+ [4052.120 --> 4055.480] Because here I discussed not spatial ability, but migration
821
+ [4056.040 --> 4058.040] Migration is a little bit more complex
822
+ [4059.000 --> 4063.080] Kind of compared to regular spatial ability, but what they found is that
823
+ [4063.640 --> 4067.320] They compared two types of hoverflies one day one
824
+ [4068.440 --> 4070.280] species
825
+ [4070.360 --> 4072.840] One type of hoverflies
826
+ [4072.840 --> 4077.560] do migrate and the other does not, and what they found is that they differ in
827
+ [4078.680 --> 4082.280] 151543 genes
828
+ [4082.920 --> 4087.400] And these genes show like a lot of different traits
829
+ [4087.720 --> 4092.440] I mean like they correlate with a lot of different traits that include like metabolism,
830
+ [4092.520 --> 4096.040] muscle structure, hormonal regulation, so on and so forth
831
+ [4096.040 --> 4102.680] So probably probably this kind of research could also bring us new insights into spatial ability
832
+ [4104.840 --> 4106.840] Um
833
+ [4106.840 --> 4108.840] Yeah and last but not the least
834
+ [4109.720 --> 4113.640] I also I think I have some time still yeah, I have some like 10 minutes or so
835
+ [4114.920 --> 4116.680] so
836
+ [4116.680 --> 4120.120] The thing I wanted to mention is a Nobel Prize like
837
+ [4121.320 --> 4123.800] one of the most recognized
838
+ [4125.080 --> 4127.720] Scientific award so one of the awards
839
+ [4129.000 --> 4134.040] One of the Nobel Prizes was given to people who actually investigated spatial ability
840
+ [4136.360 --> 4138.360] this
841
+ [4138.840 --> 4141.400] prize was given to three people
842
+ [4143.320 --> 4145.320] Who identified
843
+ [4145.640 --> 4148.680] So mass media said that they identified GPS in the brain
844
+ [4149.560 --> 4153.640] So what they found, it was awarded, by the way, in 2014, so a recent one
845
+ [4154.280 --> 4160.120] What they found is that they found a positioning system in the brain, so it is called an inner GPS
846
+ [4160.680 --> 4164.680] That makes it possible to orient ourselves in space
847
+ [4165.640 --> 4169.880] And they found brain cells in the hippocampus again
848
+ [4171.640 --> 4176.360] Uh, that were activated only when the rat was in a particular
849
+ [4177.400 --> 4180.280] Um place in a room
850
+ [4181.320 --> 4182.840] Uh
851
+ [4182.840 --> 4187.400] And other nerve cells were activated when it was in the other parts of the room
852
+ [4187.400 --> 4193.000] So they found that these place cells are kind of linked to the room
853
+ [4193.480 --> 4201.320] And what this suggests is that we have a map in our brain which is linked to the places we visit
854
+ [4203.000 --> 4205.000] and this was further
855
+ [4207.000 --> 4210.440] Uh this was further kind of confirmed in the
856
+ [4211.320 --> 4214.840] uh following studies and they found this kind of called grid cells
857
+ [4215.640 --> 4218.280] Uh so this like a coordinate system in the brain
858
+ [4219.000 --> 4221.800] And it was also linked to the places I will show
859
+ [4222.440 --> 4223.800] So this is the first study
860
+ [4224.120 --> 4233.560] John O'Keefe did it in, whenever the first study was done, I think it's 1971, yeah
861
+ [4233.640 --> 4235.800] So this was a very short paper
862
+ [4237.400 --> 4239.400] He had a couple of mice
863
+ [4239.400 --> 4246.120] Uh, a couple of rats, and what they've done is that they installed an electrode in the brain of a rat
864
+ [4246.680 --> 4249.000] And the rat was running around the room
865
+ [4249.080 --> 4253.000] And what they found is that there was a particular pattern of activation of this
866
+ [4253.480 --> 4259.320] neurons in the hippocampus when, uh, the rat visited particular places
867
+ [4260.360 --> 4262.360] Uh and uh
868
+ [4263.240 --> 4267.000] The following studies further confirmed this kind of hypothesis
869
+ [4267.800 --> 4273.720] And what they've showed is that there are kind of so this to the left we can see
870
+ [4274.680 --> 4278.840] This is by the way the entorhinal cortex, a very close structure to the hippocampus
871
+ [4279.400 --> 4281.080] but
872
+ [4281.080 --> 4283.880] It kind of, well, if I say that it reflects
873
+ [4284.600 --> 4286.600] hippocampus to a degree
874
+ [4287.160 --> 4293.400] activity of the hippocampus to a degree, that would be true because we can see some similar patterns when they activate together
875
+ [4293.880 --> 4298.760] But it's probably a little bit more complex as it usually in science
876
+ [4299.560 --> 4302.360] But so what they found is that there are kind of
877
+ [4302.520 --> 4304.520] Uh
878
+ [4304.520 --> 4306.920] Locations so to the left
879
+ [4307.480 --> 4312.280] Uh, you can see the trajectory of the rat and to the right you can see the
880
+ [4312.840 --> 4317.720] patterns of activation, so like firing rates of the neurons in the brain
881
+ [4318.600 --> 4321.880] Um and you can see that they actually match
882
+ [4322.680 --> 4326.600] each other, so they resemble each other, quite similar things
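For readers wondering how plots like these are produced, here is a rough occupancy-normalized firing-rate-map sketch with synthetic data (the trajectory, spike rule, and bin sizes are all invented; this is not O'Keefe's pipeline):

```python
# Occupancy-normalized firing-rate map: bin the animal's positions, bin the
# spike locations, and divide spike counts by time spent in each bin.
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 1.0, size=(10_000, 2))   # synthetic (x, y) trajectory
dt = 0.02                                       # seconds per position sample

# Pretend the cell fires mostly when the animal is near (0.7, 0.3):
near_field = np.linalg.norm(pos - np.array([0.7, 0.3]), axis=1) < 0.15
spike_pos = pos[near_field & (rng.random(len(pos)) < 0.3)]

edges = np.linspace(0.0, 1.0, 21)               # 20 x 20 spatial bins
occupancy, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=[edges, edges])
spikes, _, _ = np.histogram2d(spike_pos[:, 0], spike_pos[:, 1], bins=[edges, edges])

rate_map = spikes / (occupancy * dt + 1e-9)     # Hz per bin; peak marks the place field
print(rate_map.max())
```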
883
+ [4329.000 --> 4332.200] Uh, so yeah, okay take home message
884
+ [4332.760 --> 4337.320] Uh spatial ability is quite important for academic achievement and other areas
885
+ [4337.880 --> 4342.760] I didn't mention this but it's also important for sports and music even and stuff like that
886
+ [4343.400 --> 4347.160] Spatial ability can be improved and this is very important. We can improve it
887
+ [4347.720 --> 4348.920] Uh
888
+ [4349.080 --> 4351.720] It is spatial ability rather than spatial abilities
889
+ [4352.600 --> 4354.840] Uh, there are vast gender differences
890
+ [4355.880 --> 4358.520] But they could be addressed by training
891
+ [4359.640 --> 4361.400] Like educational interventions and so on
892
+ [4362.360 --> 4367.480] Uh, spatial ability is important for verbal and maths ability. Oh, I, I've edited the point out
893
+ [4367.480 --> 4373.480] But I didn't mention this. Okay. I can mention this now. So education needs to be spatialized. Yeah, it needs
894
+ [4375.720 --> 4377.480] There are
895
+ [4377.480 --> 4380.680] A whole body of research that investigates different
896
+ [4381.720 --> 4385.080] Ways to improve spatial ability. I've mentioned a couple of them already so like
897
+ [4385.720 --> 4390.040] Uh, for example like blocks like like a blocks like bricks
898
+ [4390.440 --> 4392.440] Um
899
+ [4392.440 --> 4397.240] Computer games, origami was shown to be effective. There is some evidence that sport
900
+ [4397.720 --> 4400.200] Participation could improve spatial ability
901
+ [4400.360 --> 4401.800] Computer games
902
+ [4401.800 --> 4409.240] But also there are there is a lot of uh things in schools that are already done for example that could potentially improve spatial ability
903
+ [4409.480 --> 4411.480] for example robotics classes or
904
+ [4412.280 --> 4416.840] microelectronics classes or some clay, for example
905
+ [4417.720 --> 4418.840] uh
906
+ [4418.840 --> 4420.840] workshops and stuff and
907
+ [4421.160 --> 4424.760] Also, uh what is argued by researchers is that we need to
908
+ [4425.640 --> 4427.000] uh
909
+ [4427.000 --> 4431.240] kind of spatialize education by for example using
910
+ [4432.040 --> 4437.400] using our kind of novel technologies more, so for example
911
+ [4438.040 --> 4440.040] presentations schemes
912
+ [4440.680 --> 4446.520] Uh, these all these different videos and stuff like that that this could potentially kind of
913
+ [4446.920 --> 4450.360] help those, for example, who have high spatial ability but
914
+ [4451.240 --> 4453.000] struggle with verbal ability
915
+ [4453.720 --> 4455.160] tasks
916
+ [4455.160 --> 4459.640] and yeah, the genetic underpinnings of spatial ability are quite complex, but we are
917
+ [4460.680 --> 4462.680] going there
918
+ [4462.680 --> 4464.200] That's it
919
+ [4464.440 --> 4467.880] If you have any questions you can email me and we can discuss
920
+ [4471.160 --> 4473.160] Any questions
921
+ [4473.560 --> 4476.840] Comments concerns corrections conclusions
922
+ [4477.880 --> 4479.880] I've run out of words starting from
923
+ [4481.640 --> 4483.640] This letter
924
+ [4486.760 --> 4488.760] Yeah
925
+ [4497.800 --> 4499.800] Good good
926
+ [4503.560 --> 4507.880] Is it international or uh this is a very good question
927
+ [4509.320 --> 4511.320] At least now now
928
+ [4511.320 --> 4513.320] Uh, I've mentioned migration already
929
+ [4514.120 --> 4516.120] So well my lab was uh
930
+ [4517.160 --> 4519.160] I used to work in
931
+ [4519.160 --> 4520.920] my university
932
+ [4520.920 --> 4522.920] in St. Petersburg in Russia
933
+ [4523.480 --> 4525.880] Currently I'm working there no more
934
+ [4527.320 --> 4528.680] Uh
935
+ [4528.680 --> 4531.720] And I'm looking for a new position that's that's the short answer
936
+ [4534.120 --> 4536.440] Uh, there was a Monty Python
937
+ [4538.040 --> 4542.040] Short video you probably remember this it was about the parrot
938
+ [4542.680 --> 4546.200] Do you remember there was a dead parrot there was a short sketch
939
+ [4546.680 --> 4553.000] Uh, there was a dead parrot in there and uh when people were discussing it like the parrot is no more
940
+ [4553.800 --> 4555.800] So the parrot is no more
941
+ [4557.640 --> 4559.640] Because the parrot is dead
942
+ [4559.640 --> 4561.640] The parrot is no more
943
+ [4563.240 --> 4565.240] The web page may be some
944
+ [4565.720 --> 4567.560] The page on the social network
945
+ [4568.520 --> 4570.200] Would you use about your
946
+ [4570.840 --> 4573.880] Yeah, yeah, because it seems super interesting
947
+ [4574.040 --> 4576.120] Yeah, we have a vk page
948
+ [4576.920 --> 4582.040] I can send you the link probably if you email me or I can show it to you after this
949
+ [4582.120 --> 4588.200] session, and I have some professional networks, like for example where I just post my papers that are out
950
+ [4588.440 --> 4590.200] Yeah, I have some
951
+ [4590.200 --> 4592.200] You can follow me there if you like
952
+ [4594.120 --> 4598.680] Very interesting. I was I was voting for like females being stronger or something
953
+ [4598.840 --> 4603.000] Just because like as I observed in our at least modern culture
954
+ [4603.960 --> 4607.480] females tend to be more kind of aware of their bodies
955
+ [4607.800 --> 4617.080] And I was thinking that this could be also influencing, but it doesn't seem like, uh, at least in the research that you mentioned, that like movement, or that it, uh
956
+ [4617.560 --> 4620.760] is being a strong thing influencing spatial ability
957
+ [4620.840 --> 4626.280] Yeah, yeah, they share some brain mechanisms. We're still not really
958
+ [4626.840 --> 4630.760] Uh, sure about how they are linked. I mean like for example
959
+ [4631.240 --> 4635.720] There are there is some research there is like competing research in this area
960
+ [4636.200 --> 4638.040] Uh, so for example
961
+ [4638.040 --> 4640.920] There is there are some studies that show that expert
962
+ [4641.560 --> 4645.960] Groups like professional athletes have increased spatial ability
963
+ [4646.760 --> 4653.400] This is one kind of piece of evidence, but another piece of evidence shows that if we uh
964
+ [4654.600 --> 4660.520] For example provide people with some training physical one. I mean like they I don't know like ask them to
965
+ [4661.240 --> 4663.960] Whatever do karate for example for some time
966
+ [4665.320 --> 4666.520] Uh
967
+ [4666.600 --> 4675.000] There would be some gains in spatial ability, but maybe not, and so there is competing evidence there because some types of sports work
968
+ [4675.720 --> 4678.520] better probably and so these are kind of
969
+ [4679.400 --> 4685.640] We see some big differences and we see some small differences in the trained groups and this is kind of
970
+ [4685.960 --> 4689.240] strange; maybe, maybe these are the years of training
971
+ [4690.040 --> 4697.000] So they they've trained they've participated in sports for I don't know several years like starting from school or whatever
972
+ [4697.320 --> 4704.040] And maybe these are years of practice that brought this kind of spatial ability to this level or
973
+ [4705.560 --> 4713.960] Maybe they had their spatial ability higher from the very beginning and that is why they get into sports because they could compete
974
+ [4714.360 --> 4715.560] they're
975
+ [4715.560 --> 4717.560] kind of they have this
976
+ [4717.880 --> 4723.080] ability that is needed to be super super successful in this area
977
+ [4723.960 --> 4725.800] so
978
+ [4725.800 --> 4730.040] The evidence is not there yet. I mean like definitive answer is not there yet
979
+ [4730.760 --> 4738.040] Maybe we need to start a longitudinal study and see how it goes in a couple of years
980
+ [4739.720 --> 4745.080] Maybe female awareness of their bodies would benefit somehow in the future
981
+ [4754.840 --> 4757.720] Yeah, I forgot I think I've edited the
982
+ [4758.200 --> 4760.200] presentation and I've deleted this slide
983
+ [4761.880 --> 4763.880] the question is like
984
+ [4764.200 --> 4766.200] have you
985
+ [4768.120 --> 4773.400] Specialized research on it or maybe that you research some practical techniques
986
+ [4773.480 --> 4781.240] Maybe you collaborate with the teachers and let them and it's the first part of the question and the second part of the question is
987
+ [4781.800 --> 4785.080] What is the perception of these ideas from them?
988
+ [4786.040 --> 4793.800] I don't know, from the people who are in the, like, from the teachers in general. Okay, a very good question. Yeah, so
989
+ [4795.320 --> 4800.440] My contribution to this research was I've mentioned a couple of papers that we've done
990
+ [4800.440 --> 4805.800] So one of the things is to for example document these sex differences for example, I've shown our study
991
+ [4806.280 --> 4813.560] Another thing we've done was, for example, on where we should target this educational intervention, the study on this, the network study
992
+ [4814.440 --> 4816.440] um
993
+ [4816.440 --> 4820.360] One thing I'm proud of is that we've developed
994
+ [4821.080 --> 4823.960] validated and published in open access a test
995
+ [4825.160 --> 4832.120] A good test to measure spatial ability and we put it in open access and it's available to everyone actually now
996
+ [4833.800 --> 4837.320] For people to use it for for researchers for example, or for teachers
997
+ [4837.880 --> 4843.160] uh in their educational practice because for example one of the researchers most recognized in the field says
998
+ [4843.640 --> 4845.560] Well
999
+ [4845.560 --> 4847.560] Get tested. I mean like
1000
+ [4848.040 --> 4853.080] Go and test your ability to kind of make some decisions regarding your future maybe for example
1001
+ [4853.400 --> 4858.760] This is a very tricky thing because there are not that many valid and good tests
1002
+ [4859.800 --> 4860.760] for
1003
+ [4860.760 --> 4866.040] Psychological traits because if they are good, they are probably closed. I mean like they cost money
1004
+ [4866.440 --> 4867.480] Yeah
1005
+ [4867.480 --> 4871.800] But if they are not, they are available, but we do not really know whether they work
1006
+ [4872.360 --> 4874.760] So what we've done is that we've published such a test
1007
+ [4877.320 --> 4879.320] And it's now available
1008
+ [4879.320 --> 4882.120] It's in Russian it's validated in Russian samples so
1009
+ [4883.320 --> 4885.320] Yeah
1010
+ [4886.040 --> 4888.040] Coming back to teachers
1011
+ [4888.200 --> 4890.200] they
1012
+ [4891.800 --> 4897.640] Recognized the importance of spatial ability. I can say this from my communications with them
1013
+ [4898.280 --> 4905.320] Uh, I know a couple of for example geography teachers and they say well, this is very important and there is actually there are some
1014
+ [4905.880 --> 4909.640] Uh a couple of studies I've seen that showed that for example
1015
+ [4910.040 --> 4913.640] Geospatial training can help to improve spatial ability
1016
+ [4914.440 --> 4921.160] Uh, geospatial training like some for example, here's the whatever field and you can
1017
+ [4921.800 --> 4922.760] Um
1018
+ [4922.760 --> 4929.160] Draw a map of it, something like this; or, there is a map, find something in it, or something like this, and
1019
+ [4930.200 --> 4932.200] It was shown to improve spatial ability
1020
+ [4933.560 --> 4935.560] so um
1021
+ [4935.800 --> 4941.560] The only thing there is that it needs to be systematic. I mean like it would not
1022
+ [4943.320 --> 4945.320] Work properly if
1023
+ [4945.560 --> 4948.840] it only comes from teachers; then it has to be policy makers
1024
+ [4949.560 --> 4951.080] to kind of
1025
+ [4951.960 --> 4957.480] make some decisions regarding implementation of spatial training into educational programs. It's not like that
1026
+ [4957.880 --> 4965.160] So they could do it themselves, but that would I'm afraid it would have small impact on
1027
+ [4965.960 --> 4967.960] overall kind of
1028
+ [4968.040 --> 4970.040] educational system and educational performance
1029
+ [4973.480 --> 4978.360] Because potentially the issue is, if there is really a link with academic achievements, and
1030
+ [4978.920 --> 4983.000] It's like something that was not properly highlighted
1031
+ [4984.280 --> 4987.000] Yeah, one of the slides I mentioned
1032
+ [4989.080 --> 4991.080] What which one
1033
+ [4992.680 --> 4994.680] Where is that
1034
+ [4995.720 --> 5002.360] Yeah, the cost of education. Yeah, it's that cost; even one scientific paper was called the cost of education because, well
1035
+ [5003.800 --> 5005.320] people
1036
+ [5005.320 --> 5006.840] somehow
1037
+ [5006.840 --> 5011.480] ignore spatial ability in education and so it's maybe a question of awareness
1038
+ [5011.480 --> 5016.120] So we need to keep kind of pushing this idea for it to kind of
1039
+ [5024.120 --> 5026.120] Okay, good. Thank you
1040
+ [5029.480 --> 5031.480] Okay, what should I do
1041
+ [5031.720 --> 5033.720] Yeah
1042
+ [5035.640 --> 5037.640] That's it. I'm super
transcript/allocentric_ihKXQbYeV5k.txt ADDED
@@ -0,0 +1,65 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 17.080] Hello, my name is James Hymn, and I will be informing you of the many non-verbal communications
2
+ [17.080 --> 20.720] displayed in Pixar's short film, For The Birds.
3
+ [20.720 --> 25.200] If you are not familiar with this film, For The Birds is about a group of stuck-up birds
4
+ [25.200 --> 28.560] that outcast an awkward, uncoordinated bird.
5
+ [28.560 --> 32.160] As the story begins, you see a bird fly and land on a wire.
6
+ [32.160 --> 34.480] Let's call him bird A for the scene.
7
+ [34.480 --> 40.040] As the second bird enters the scene, which will be bird B, he lands very close to bird
8
+ [40.040 --> 46.320] A. This is Proximix, the use of physical space.
9
+ [46.320 --> 51.040] This would be the first code of non-bubble communication that we would discuss.
10
+ [51.040 --> 56.680] Due to the lack of space, we see bird A's unhappy reaction.
11
+ [56.680 --> 62.360] Not only did bird B land extremely close to bird A, which invaded his personal space,
12
+ [62.360 --> 69.160] but also spread his wings, which invaded his territory.
13
+ [69.160 --> 74.800] Also notice that these birds are identical, which leads us to our next non-verbal code,
14
+ [74.800 --> 79.040] physical appearance.
15
+ [79.040 --> 84.120] As we observe these birds, all of them are pretty identical, except for one.
16
+ [84.120 --> 85.120] This bird here.
17
+ [85.520 --> 91.520] He is tall, naked, uncoordinated and goofy looking.
18
+ [91.520 --> 98.320] According to page 236, in non-verbal communication, physical appearance is the visual attributes,
19
+ [98.320 --> 104.840] like hair, body type, and other physical features that either make you attractive or unattractive.
20
+ [104.840 --> 109.120] It is obvious here that he is considered unattractive because of these features.
21
+ [109.120 --> 113.120] Immediately, they take advantage of this and begin to mock him.
22
+ [115.520 --> 121.600] As the awkward bird gets their attention again, we notice a change in their facial expressions.
23
+ [121.600 --> 127.360] On page 225, it states that a person's character is clearly written on their face.
24
+ [127.360 --> 133.840] Facial expressions are our next non-bubble code, as they are clearly expressed here.
25
+ [133.840 --> 139.040] They are all startled and express different emotions as we can see, shocked, angered,
26
+ [139.040 --> 141.600] non-salon, and worried.
27
+ [141.600 --> 147.600] You can tell by the angle of their eyebrows how they feel.
28
+ [147.600 --> 152.640] As they scamper down the wire away from him, they clearly have facial expressions of annoyance
29
+ [152.640 --> 154.160] and disgust.
30
+ [154.160 --> 157.200] Skipping ahead a little, let's look at this chubby guy here.
31
+ [157.200 --> 160.760] Throughout this scene, his eyes are slanted, which shows anger.
32
+ [160.760 --> 164.080] I love how Pixar put in little details like this to show us
33
+ [164.080 --> 165.520] this character.
34
+ [165.520 --> 169.520] If you take a look at his beak, you'll see that it has a lot of scratches, which shows
35
+ [169.520 --> 177.600] he uses it a lot, especially compared to the beak of other birds.
36
+ [177.600 --> 183.200] In this clip, you can see the difference between his beak and the beak of another bird.
37
+ [183.200 --> 187.280] As two birds begin to hammer on the toes of the outcast, one bird begins to chirp as if
38
+ [187.280 --> 192.960] he's chanting, which gets everyone to join him.
39
+ [192.960 --> 197.120] This shows that he is using vocalics to demonstrate his power to influence the others to join
40
+ [197.120 --> 198.120] in.
41
+ [198.120 --> 202.120] This is the form of kinesics and vocalics.
42
+ [202.120 --> 207.160] His initial chirps are loud and have a steady speech rate to provoke others to join in.
43
+ [207.160 --> 211.280] He also uses his eyes to signal others.
44
+ [211.280 --> 222.080] As he becomes aware of the environment, his chirps tend to switch to a louder and higher pitch.
45
+ [223.080 --> 225.600] The speech rate also speeds up.
46
+ [225.600 --> 229.920] In doing this, he shows dominance over the birds to get their attention to help stop
47
+ [229.920 --> 234.160] the hammering by the middle two birds.
48
+ [234.160 --> 238.040] In my personal life, I can use non-verbal communication to improve my relationships with
49
+ [238.040 --> 239.040] others.
50
+ [239.040 --> 241.240] I can start with my girlfriend's family.
51
+ [241.240 --> 244.880] My girlfriend is half Portuguese and her family likes to greet with kisses and also likes
52
+ [244.880 --> 247.800] to communicate closely with one another.
53
+ [247.800 --> 250.600] When I was first introduced to this, I thought it was a little awkward and I felt they
54
+ [250.600 --> 252.520] were invading my personal space.
55
+ [252.520 --> 255.560] Now with understanding, I realized that it is in their culture.
56
+ [255.560 --> 261.280] If I open my mind to it and realize that they mean no harm, I can strengthen our relationship.
57
+ [261.280 --> 263.800] I'm also aware that backing up could be disrespectful.
58
+ [263.800 --> 270.240] In my personal life, I can apply these codes to not judging people off of physical appearance.
59
+ [270.240 --> 274.720] For instance, if I go in for a job interview, instead of judging my potential boss and thinking
60
+ [274.720 --> 279.160] he's a nerd because of the way he dresses or artifacts like glasses, I should listen
61
+ [279.160 --> 282.360] to him and get to know his personality.
62
+ [282.360 --> 287.120] Also by being aware of my own body, I can send the message that I am confident and know
63
+ [287.120 --> 290.600] what I'm doing when applying for the job.
64
+ [290.600 --> 296.680] Applying these codes and being aware of my own actions can help me land my dream job.
65
+ [296.680 --> 299.120] Again, my name is James Ham and thank you for your time.
transcript/allocentric_ixW35N_AXSA.txt ADDED
@@ -0,0 +1,587 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 8.000] This is unSILOed, brought to you by Alumni FM, connecting people through stories.
2
+ [8.000 --> 16.000] All right, welcome to unSILOed. This is Greg LaBlanc and I'm here with Barbara Tversky who is an
3
+ [16.000 --> 24.000] emeritus professor of psychology at Stanford University and also a professor of psychology at Columbia University.
4
+ [24.000 --> 26.000] Welcome Barbara.
5
+ [26.000 --> 28.000] Thank you pleasure to be here.
6
+ [28.000 --> 36.000] Oh, I forgot to mention also that you are the author of this book Mind in Motion which we're going to discuss today.
7
+ [36.000 --> 43.000] And you know, the book is full of all sorts of fascinating insights into this area.
8
+ [43.000 --> 50.000] I don't know if is there a defined term that describes this this area of psychology.
9
+ [50.000 --> 55.000] Is it spatial psychology or the psychology of of movement?
10
+ [55.000 --> 63.000] I mean, I think that you make the claim in the book that spatial thinking is the foundation of all thought.
11
+ [63.000 --> 74.000] That's a rather bold claim, but I remember when I learned about kind of amoebas back in the day as sort of the art distant ancestors.
12
+ [74.000 --> 79.000] You know, they only could think about one thing which was kind of the sugar gradient right.
13
+ [79.000 --> 92.000] You know attraction toward you know sugar and knowing where the sugar is and so their entire brain was just this this this map right this this gradient is.
14
+ [92.000 --> 98.000] Could you elaborate on this claim that spatial thinking is the found it's a rather bold claim.
15
+ [98.000 --> 113.000] It's an all right claim. And first, what is this area of psychology? It's spatial thinking: thinking about space, what's in it, the very spaces we inhabit.
16
+ [113.000 --> 116.000] And all of that is crucial to life.
17
+ [116.000 --> 121.000] Sometimes it's referred to as embodied cognition.
18
+ [121.000 --> 132.000] I keep that as an open parenthesis we can return to that if you like, but it's a loaded term and it means very different things to different researchers.
19
+ [132.000 --> 150.000] So I avoided it wisely or not because I didn't want to spend pages defining the term and saying how I was using it differently from other people I thought that wouldn't be of interest to more general readers.
20
+ [150.000 --> 157.000] So back to spatial foundation is a spatial thinking is the foundation of thought.
21
+ [157.000 --> 175.000] Moving in space is essential to life. If we didn't move in space we wouldn't find food, shelter, avoid dangers, and be attracted to good things. Even plants move: they turn toward the sun, away from the wind.
22
+ [175.000 --> 195.000] And your amoeba might go toward sugar, but it is maybe going to avoid other things. So the basic motion is really approach or avoid, go toward or away from something, and you can see that that immediately ties to emotion.
23
+ [195.000 --> 211.000] Is it attractive or repulsive, do you want it or do you want to avoid it? So that from the get-go, one-celled organisms, even viruses that are hardly alive.
24
+ [211.000 --> 239.000] That dichotomy of going toward or going away is essential to survival. So if we jump from amoeba to primates, we find in the brains of rodents, they've been most studied, you can put electrodes in individual neurons, it doesn't hurt them, they can wander free with these electrodes.
25
+ [239.000 --> 255.000] And two remarkable findings that won the Nobel Prize: place cells in the hippocampus, they're individual cells that fire when a rat is in particular places in the environment.
26
+ [255.000 --> 271.000] These are not spatially mapped, and that was a puzzle for some twenty, thirty years until the Mosers, working in O'Keefe's lab; they were the three recipients, there were many other people of course involved in the work.
27
+ [271.000 --> 297.000] The Mosers found, one synapse away, right next door to the hippocampus in the entorhinal cortex, what are called grid cells, and they map the place cells on a sort of map; it's not completely accurate with respect to external space but it preserves proximities and orders pretty well.
28
+ [297.000 --> 324.000] So studying that in humans is harder because we don't go around planting single electrodes in the brains of humans, except when they need to undergo brain surgery, and then many of them volunteer, because again it's painless and hopefully harmless, for these sorts of experiments.
29
+ [324.000 --> 353.000] And in humans, what's totally remarkable and mind-blowing, and it was research that was coming out as I was writing, is place cells in humans don't just code places: they code people, they code temporal events, they code ideas. And what's key to those single cells is they gather information from all over the cortex, multimodal information, into a single cell.
30
+ [353.000 --> 382.000] That gets activated when an idea comes up, when a person comes up, when an event comes up, and then these get mapped in the entorhinal cortex in the grid cells; they get mapped by temporal relations, by social relations, by conceptual relations. So the same brain structures that are coding places in space in spatial arrays are
31
+ [383.000 --> 398.000] coding events and people and times in conceptual, temporal and social arrays. So that feels like a strong statement, I mean, like strong findings supporting my statement.
32
+ [398.000 --> 424.000] We have evidence for spatial thinking in the way that we talk: we put forth ideas, we tear them up, we toss them out, we pull them together, so that the way we talk about ideas, the way we talk about our actions on ideas, like throwing them out and tearing them up, is the way we talk about actions on objects.
33
+ [424.000 --> 433.000] And you almost can't talk about actions on ideas about thinking without using that language.
34
+ [433.000 --> 453.000] So and then you if you watched my gestures are gestures support that thinking we're using the body to think about tossing out ideas turning them upside down and so forth so that they those actions that we're thinking about get exactly.
35
+ [454.000 --> 480.000] externalized as gestures and the gestures are like actions on objects we tear up paper or we flip bottles around but there is no object there there's an imaginary idea that we're acting on so those the gestures the language and the brain all support go together to support this.
36
+ [480.000 --> 492.000] And audacious idea that spatial thinking is the foundation of all so I append that it's not the entire edifice just the foundation.
+ [492.000 --> 508.000] Now, in the book you mentioned that psychology made a big advance back when we moved our attention away from behaviorism toward investigation of the mind, getting into the black box of, well, the
+ [508.000 --> 537.000] you know, the circular cranium or whatever it is that we have up here. I'm wondering if we have gone astray a little bit in psychology, because I think the most exciting areas of cognitive science right now are the areas around artificial intelligence, right? And there's this metaphor of the mind as sort of a computational device. And you mentioned in the book
+ [537.000 --> 558.000] that, you know, just by putting all of these ideas of yours into a book, you're freezing your thought in words, and this necessarily is linear, this necessarily is sequential, which is in some ways kind of an artificial way of thinking.
+ [558.000 --> 583.000] Is psychology spending perhaps too much time on the metaphor of the mind as a computer? Should we be rethinking the mind as having a spatial component, and what would be the implications for how artificial intelligence is approaching this theory of cognition?
+ [583.000 --> 611.000] So, I mean, artificial intelligence is going in many directions, as is brain research, as is mind research; there's so much talent that has been drawn into those issues because they're so exciting and so basic to being human and acting in the world. So it's almost as if the metaphor is reversed: it's not that we see the brain as a computer, but
+ [611.000 --> 640.000] we're seeing the computer as a brain, and many, many of the people active in AI are really trying to mimic the mechanisms of the brain, which don't have words, right? It's activation of neurons, different kinds of neurons. That's been a challenge to AI, because much of AI doesn't distinguish what kinds
+ [641.000 --> 670.000] of neurons there are, what their specific connectivity is, and so forth, and the brain has a lot of specialization along those lines that isn't completely understood. So, I mean, there's so much left to do that's truly exciting. But many researchers in AI are now realizing they really need to think about a body: a body in the world, experiencing things, seeing things,
+ [671.000 --> 693.000] making sense of what it sees, making sense of how the body interacts with things in the world, finding its way in space, acting on objects, the sorts of things we take for granted that we do every day: making our morning coffee, getting to work, when we can go back to work.
+ [693.000 --> 716.000] So all of those things need to be taken account of in one way or another by AI; words won't be sufficient. Although they've done a sort of remarkable job: just feeding lots of Wikipedia entries into an AI and letting it run loose has led to
+ [716.000 --> 745.000] programs that seem as if they're thinking and inferring, but you can trick them into making absolutely absurd mistakes, mistakes by our standards. So I think there are many groups now working on what could be called embodied AI, using insights about how the body behaves in the world, and
+ [745.000 --> 764.000] there are, again, truly exciting advances along that line: that you need to add a body, a body acting in a world, to get an AI to truly understand and be able to explain
+ [764.000 --> 788.000] what it's doing. Now, explanations: people can't always explain what we're doing or why we're thinking things, but we can try, and sometimes we're on the mark, sometimes not. But having an AI that can explain why it came to that inference and not some other inference:
+ [788.000 --> 799.000] that's a goal that hasn't been reached yet, and probably can't be reached without understanding how we behave in the world.
+ [799.000 --> 809.000] You talk about action as sort of performing an integration function, where you are taking all of this sense data, and
+ [809.000 --> 829.000] it comes in through isolated channels, but really it's the action which allows you to integrate them. And you talk about children: you know, when they're looking at their hand and observing the motion of the hand, the sense data that comes in through their eyes and the sense data that comes in through their
+ [829.000 --> 850.000] body are, you know, very, very different things, and they have to somehow integrate them. And I remember when I was in college I took psychedelic mushrooms, and I remember doing exactly that same thing, and thinking, oh my gosh, these are two completely different sensations that I had just
+ [850.000 --> 874.000] conflated automatically without thinking. It was really a remarkable discovery for me at that time. Yeah, we take for granted how the body integrates what we see with what we feel, and it is magical; this is, again, something we can't explain.
+ [874.000 --> 900.000] And you developed a way of getting insight into it, and that's lovely: all of a sudden, something that is familiar and automatic and unchallenged, and all of a sudden you realize what complexity underlies matching the sensation in your hand with what you're seeing your hand do. And there are lovely experiments now dissociating that.
+ [900.000 --> 919.000] So people can become identified with a rubber hand, because you can stroke the real hand while they're seeing the rubber hand and create the illusion that that rubber hand is their real hand.
+ [919.000 --> 930.000] So there are ways, short of mushrooms, of dissociating those things, but I don't think people come to the insight the way that you did.
+ [931.000 --> 958.000] One of my favorite studies was done by a colleague, Maggie Shiffrar, in her lab. She dressed people in black (this is a kind of standard procedure: you dress them in black and put tiny lights on all the joints) and filmed them moving: jumping, playing ping pong, dancing, walking, standing on their hands, you name it.
+ [958.000 --> 981.000] And what you get when you look at those videos is just this array of lights that are moving. When they're not moving, you don't know what it is; when they start moving, right away you can see that's a man walking, that's a woman walking, that's a child walking, they're jumping. You get the action just from the joint movement.
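A minimal sketch of what a point-light display reduces a body to: per-frame joint coordinates only, with frame-to-frame motion carrying the percept. The joint names and the two fabricated frames below are invented for illustration; real displays track a dozen or more joints over many frames.

# Point-light display, stripped to its essentials: keep only joint
# coordinates per frame and discard all appearance. Two invented
# frames of a "walker" stand in for a real motion-capture sequence.
frames = [
    {"head": (0.00, 1.7), "shoulder": (0.00, 1.5), "hip": (0.00, 1.0),
     "knee": (0.10, 0.5), "ankle": (0.20, 0.0)},
    {"head": (0.05, 1.7), "shoulder": (0.05, 1.5), "hip": (0.05, 1.0),
     "knee": (0.25, 0.5), "ankle": (0.15, 0.0)},
]

def as_light_points(frame):
    # A single static frame is just an uninterpretable cloud of dots.
    return list(frame.values())

def joint_velocities(f0, f1, dt=1.0 / 30):
    # Frame-to-frame joint motion is what observers read as "a person
    # walking" (and, remarkably, even as a particular person walking).
    return {j: ((f1[j][0] - f0[j][0]) / dt, (f1[j][1] - f0[j][1]) / dt)
            for j in f0}

print(as_light_points(frames[0]))
print(joint_velocities(frames[0], frames[1]))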
+ [981.000 --> 1007.000] So what Maggie did was film a lot of people doing this and bring them back into the lab later. Some of the people you knew, some of them you didn't know, and you were asked to look at those videos of someone dancing or playing ping pong and identify: who is that? Is it you, is it your friend, is it someone you don't know?
+ [1007.000 --> 1024.000] So people were pretty good at recognizing their friends; above chance, they could say, yeah, that's my friend X doing those things. Remarkably, they were best at recognizing themselves.
+ [1025.000 --> 1049.000] Even though these are ordinary people, not dancers, and they haven't been looking in the mirror while they've been dancing, playing ping pong, and so forth, so they've rarely seen themselves as they appear to others. Nevertheless, they were able to map that motion they were watching onto their own bodies, like trying on clothes, and it fit.
+ [1050.000 --> 1073.000] Or at least that's the theory; we can't really know what was happening, although by now, for all I know, there's been good brain work on that (I can't keep up with everything). But the idea that we can map what we see onto what we feel is, I think, remarkable, and there are other examples of it.
+ [1073.000 --> 1102.000] And again, it's not something we're quite aware of. But if you think about being on the dance floor, watching other people with their moves, and then being able to translate that into your own body, making those moves, and knowing when they're right and knowing when they're wrong just from your own feeling: that's a quite remarkable feat of the human mind. It's a kind of mirroring, and there are certainly brain bases
+ [1103.000 --> 1132.000] for that kind of mirroring. We imitate: when people smile at us, we smile back, even strangers on the street; yawning is contagious, even in dogs. So that kind of incorporation of what we see in the world into our own bodies, by mirroring, is quite remarkable, and again, we're not always aware of it.
+ [1133.000 --> 1162.000] So the things that we pay attention to in the world are dictated in part by the organization of our mental machinery, right? So when we look at a painting, for instance: they've done a lot of eye tracking, and they show that we spend most of our time looking at the faces in portraits, and maybe we also look a little bit at the hands, and we don't really spend a lot of time looking at, you know, the small of the back or the bicep or the
+ [1163.000 --> 1192.000] shin, and we don't look a whole lot at the floor that's around the body. And this is not something that's intentional, it's not something that's chosen; it's just sort of dictated by what we, as a result of our evolution, have determined are the most important pieces of information. I remember when I studied painting, it took an enormous amount of discipline to devote as much attention to a square inch of, you know,
+ [1192.000 --> 1205.000] hardwood flooring as to a square inch of eyeball. But this is all reflected in sort of the amount of brain machinery devoted to these different things that we see, right?
+ [1205.000 --> 1221.000] Sure. I mean, there are dedicated places, multiple places, in the brain for faces, because we're social creatures and faces matter: who's familiar, who isn't familiar, who's friend, who isn't. So identifying faces,
+ [1221.000 --> 1250.000] identifying faces and bodies and body postures: there are multiple places in the brain that do the computations that recognize faces as opposed to bodies, for example, or trees. So for the things in the world that are important for our own existence, for, as you say, evolutionary reasons, the computations are highly developed
+ [1250.000 --> 1260.000] in the brain. Right. I had another question.
+ [1260.000 --> 1279.000] Well, so, like the homunculus, right, when they draw the picture of the body mapped onto the brain. (Your question aroused many things, but there were many; it was a long one.) Yeah, so can you redeploy these parts of the brain? You mentioned the hippocampus earlier, and the hippocampus, I think, has kind of become famous lately because
+ [1279.000 --> 1294.000] of all the work that was done on the London taxi cab drivers, and, you know, it was shown that it can actually physically expand in size as it gets put to better use, kind of like your muscles.
+ [1294.000 --> 1306.000] But, you know, we certainly don't need to have those maps anymore, because we have Google Maps, and so (we'll get into mapping, I hope, later)
+ [1306.000 --> 1321.000] but, you know, can those parts of the brain... I think they have been redeployed by memory experts, and so if you're trying to remember a lot of facts, if you put them into some imaginary location, you can remember things better. Is that right?
+ [1321.000 --> 1349.000] So, yes. I mean, there were again several questions there, and I'll try to remember all of them. Yes, the brain has some plasticity, and of course it declines with age. And those locations, the areas of the brain that do the computations for particular kinds of things in the world, are subject to neural plasticity.
+ [1349.000 --> 1375.000] So those face areas, the areas that are sensitive to recognizing individual faces (not just that there's a face, but individual faces), become employed, in dog experts, for recognizing dogs. Now, that wouldn't work for you and me, unless you're a dog expert, and I don't know about it.
+ [1375.000 --> 1387.000] But if you develop expertise in judging dogs, that area of the brain that distinguishes individual faces will adapt to that.
+ [1387.000 --> 1416.000] There's more plasticity for babies. So for babies who are born blind or deaf, the areas that usually represent sound or vision come to represent something else; in particular, touch comes to take over the visual area, which is particularly interesting. And there were studies done a number of years ago getting volunteers to walk around with blindfolds
+ [1417.000 --> 1445.000] for several weeks, and in those several weeks the occipital area at the back of the brain, which for the most part represents vision, came to represent touch. And for the congenitally blind who learn braille, that's where braille happens: the visual system is taken over by the fingers. And it probably has to do with the kinds of computations
+ [1445.000 --> 1463.000] just as much as the input, because here the input is changing but the computations aren't: the computations the occipital lobe does are needed for reading braille with touch, so they take over.
+ [1463.000 --> 1492.000] So again, these sorts of things... I'm not an expert on the brain; I've boned up a bit for the book, but it's not my primary area of research. It's fascinating, so I might be a little out of date, but this is my understanding at this point. As for using Google Maps: you know, it's extremely helpful for people who are map-challenged.
+ [1493.000 --> 1517.000] And there are, I know, people like that, who find great difficulty using maps, and then having strict route directions (even verbal, not map-like) or using Google Maps can help. Does it interfere with our own abilities to make inferences from maps? Sure.
+ [1517.000 --> 1537.000] But we can't fix cars anymore, and we can't fix refrigerators, and nobody can compute a square root anymore; our calculators do it for us. Even mechanics now just switch in modules with parts; many of them can't fix cars. So everything is a trade-off.
+ [1537.000 --> 1566.000] And it might be a good trade-off, using a calculator to compute a square root. When I was a graduate student (this is, you know, centuries ago) I did all my dissertation calculations by hand. It's absurd in these days to do it by hand; computers can do it efficiently, faster, with fewer errors. It would be quite crazy
+ [1567.000 --> 1584.000] to do those things by hand. So everything's a trade-off, and again, Google Maps can help a lot of people; it might mean that you're less adept at using a map, but the trade-off is worth it for you.
+ [1584.000 --> 1609.000] As an economist, I like how you repeat over and over again the first law of cognition, which is that there are no benefits without costs. That really warmed my heart, because throughout the book you're talking about these trade-offs, and about, you know, redeploying the facial machinery. You know, I spend a lot of time around horses, but I got into it late in life, so I still can't tell them apart.
+ [1609.000 --> 1619.000] I've got to look at the brand or the scars or something, whereas the people who have grown up around them can tell them all apart.
+ [1619.000 --> 1638.000] But, you know, there's a lot of memory that we don't need. There's a classic experiment in cognition: ask people to draw a penny, and nobody gets it right. Now nobody uses cash anymore, but at the time people used cash, and no one could do it.
+ [1638.000 --> 1667.000] No one could draw the penny right, because all you have to do is look at the color and the size; you don't need to discriminate. And even for you, with the horses: you're not judging hundreds of horses and working on that for many, many years. All you need to know is which horse is yours, and there might be a single feature, or a couple of them, that say which horse is yours. So it's not a skill you need to
+ [1667.000 --> 1681.000] acquire. Well, I'll talk a bit more about directions and maps. I'm a map buff and have loved geography since I was a child, but you make the distinction between
+ [1681.000 --> 1723.560] [transcription unintelligible]
+ [1723.560 --> 1730.040] Take the challenge to really bright people, and they can't explain directions. So it's
+ [1730.040 --> 1735.480] not that they aren't empathetic. They don't have the ability to take a global map
+ [1735.480 --> 1741.440] and turn it into a route. And even for people that are good at that, it takes
+ [1741.440 --> 1747.600] some practice to get from a global map to a route with a particular
+ [1747.600 --> 1752.720] starting point and a particular ending point. You're changing from thinking
+ [1752.720 --> 1759.080] about things as north, south, east, and west to thinking about things from
+ [1759.080 --> 1763.720] your perspective of moving through an environment. What's on your left? What's
+ [1763.720 --> 1768.560] on your right? What's in front of you when you turn? So it's a really poor
+ [1768.560 --> 1775.560] test of empathy. Spatial perspective taking has not been shown to be
+ [1775.560 --> 1782.360] correlated with social perspective taking, and many people have tried to
+ [1782.360 --> 1790.120] find that correlation. It may pop up, but it's really hard to find, and we've done
+ [1790.120 --> 1797.480] extensive work on spatial perspective taking, reversing perspective, taking
+ [1797.480 --> 1803.560] other perspectives, and it doesn't correlate with social perspective taking. So
+ [1803.560 --> 1812.400] it's a special skill. And there are many tests of spatial thinking; they don't
+ [1812.400 --> 1818.760] correlate with navigational ability. It seems to be, again, a special skill that
+ [1818.760 --> 1827.920] isn't related to more general spatial skills. It's a special skill. So
+ [1827.920 --> 1832.480] it requires... I mean, there's beautiful brain work on it. Some was
+ [1832.480 --> 1837.400] Eleanor Maguire's on the London taxi drivers, and she's still working on it, but
+ [1837.400 --> 1841.080] many other people have done beautiful work on it,
+ [1841.080 --> 1850.400] one at Penn working on how we put together space, how we know where
+ [1850.400 --> 1856.160] we are from the world around us. So it requires thinking of ourselves in a
+ [1856.160 --> 1861.280] global way, understanding how those landmarks, the things we're seeing, are related to each other.
+ [1861.280 --> 1867.320] It's really quite complicated, and the schematic I gave you
+ [1867.320 --> 1873.400] with the rats is a simplification of just wandering around the world. So,
+ [1873.400 --> 1878.840] unless you have another question, I'd like to expand on those two
+ [1878.840 --> 1884.120] perspectives, the one embedded in the world and the one that's an overview,
+ [1884.120 --> 1896.480] because there are social analogs. (By all means.) So it
+ [1896.480 --> 1903.320] relates to some of the work we and others have done on creativity. Are you
+ [1903.320 --> 1909.280] sitting in your own place, looking around you, and just altering a little bit the
+ [1909.280 --> 1914.400] sorts of things you're seeing, the individual objects, the individual viewpoints?
+ [1914.400 --> 1921.440] If it's a political map, I'm thinking of my viewpoint; what about an adversary, someone
+ [1921.440 --> 1926.400] from a different party: what is their viewpoint, how do they see the landscape
+ [1926.400 --> 1931.280] around them, how do I see the landscape around them? Then I might go to other
+ [1931.280 --> 1939.320] countries: how do they see it? I mean, there are huge differences in how the Russians saw
+ [1939.320 --> 1944.880] the Second World War and the events in it, the temporal map of it, and how
+ [1944.880 --> 1949.600] Americans saw it and how Brits saw it. They all saw it from their own point of
+ [1949.600 --> 1956.720] view. But you can also go above and take a more analytic perspective: what are
+ [1956.720 --> 1965.960] the events, what is the temporal arrangement, and go way more abstract, that is. So
+ [1965.960 --> 1970.960] it's north, south, east, west; this happened before, after; this was happening here,
+ [1970.960 --> 1975.960] this was happening there; and you're not necessarily integrating them into a
+ [1975.960 --> 1982.480] route, but you're seeing the global map. So those two perspectives, going
+ [1982.480 --> 1988.360] above and seeing a complex structure, or being on the ground and seeing what's
+ [1988.360 --> 1996.640] around you and imagining what's around someone else, are different ways to
+ [1996.640 --> 2004.160] approach problem-solving and prediction and many other sorts of psychological
+ [2004.160 --> 2010.560] inferences that we make, and going back and forth between the particular on the
+ [2010.560 --> 2018.640] ground and the more abstract, I think, helps you get a better picture in the end.
+ [2018.640 --> 2024.880] Right, and I love that insight, which is a really broad insight from this
+ [2024.880 --> 2032.040] idea of spatial thinking. You describe how the most useful or functional ways of
+ [2032.040 --> 2036.800] looking at the world are not always the most accurate ways of looking at the
+ [2036.800 --> 2046.040] world. Right? That when we're navigating space, we kind of exaggerate: the stuff that is nearest
+ [2046.040 --> 2052.800] to us is larger in our mind, and the stuff that's further away is sort of smaller, and this
+ [2052.800 --> 2060.160] is true not just spatially but also temporally. So we look at 1300 AD and 1500 AD, and they seem
+ [2060.160 --> 2073.000] super close, but we think of 1989 and 1991, and they seem further away. So depending
+ [2073.000 --> 2079.360] on the function, should we be kind of stress-testing how we view the world, or is this just
+ [2079.360 --> 2085.840] a natural part of making sense of the world? Yeah, I mean, the world is always changing,
+ [2085.840 --> 2092.880] so it probably makes more sense for us to exaggerate things that are close to us, people that
+ [2092.880 --> 2098.640] are close to us, events in time that are close to us, because they've had more influence on our
+ [2098.640 --> 2106.160] activities and behavior. Similarly, the things that are close to us in real space are more
+ [2106.160 --> 2114.960] likely to have an effect on us than the things that are far away. So some of that egocentricity,
+ [2115.520 --> 2124.800] exaggerating the close and minimizing the distant, probably makes sense in our behavior, but
+ [2127.360 --> 2132.800] for certain kinds of thinking, like getting proportion, getting balance, not getting panicked,
+ [2132.800 --> 2140.880] it helps to think more broadly. So the past year we've been talking a lot about the 1918 pandemic
+ [2140.880 --> 2148.320] and finding surprising analogues to our experience of the past year and a quarter.
+ [2149.600 --> 2156.960] So in many ways you need both. What's accurate (I think you were hinting at it) depends on what
+ [2156.960 --> 2164.800] we're trying to do. So if I'm trying to get food for today, it's different from running a food
+ [2164.800 --> 2176.480] supply or growing it. So I want the information that's relevant to what I need to do, and that
+ [2177.280 --> 2185.680] means exaggerating the importance of certain information. It certainly means minimizing all kinds
+ [2185.680 --> 2192.240] of information that isn't important. I mean, maps do that: they don't show you every building,
+ [2192.240 --> 2198.720] they don't show you the trees; they aren't like aerial photographs, which would be kind of useless.
+ [2199.520 --> 2207.280] They pull out what you need. If you're a pedestrian, it's different from if you're a bicyclist;
+ [2208.480 --> 2217.920] you need a 3D map for the mountains, but you don't need that on the whole for driving. So what's
+ [2218.240 --> 2227.440] there, what you need for whatever you're working on, is what will determine what information you
+ [2227.440 --> 2235.760] keep and what information you take away. But that inevitably distorts. We were talking before about
+ [2235.760 --> 2241.440] the Steinberg map of New York. And you're in New York right now, and you know that map has been
+ [2241.440 --> 2246.240] reproduced countless times, and people have it on their walls, and I think that it's been reproduced
+ [2246.240 --> 2251.920] for pretty much every city in the world, and I think the reason why it resonates with people is
+ [2251.920 --> 2259.520] because we recognize that that's how we view the world. You also mentioned the London tube
+ [2259.520 --> 2265.360] map, and even though the image is not in the book, I immediately knew what you were talking about, and how,
+ [2265.360 --> 2271.920] you know, the brilliance of the map is that it really simplifies things and gives you the information
+ [2272.000 --> 2280.080] you need and leaves out all the rest. If you try to figure out what the actual geography of London
+ [2280.080 --> 2284.560] is from that map, you're not going to always get a very accurate depiction.
+ [2284.560 --> 2290.800] Yeah, and it straightens the tube lines; they're either horizontal, vertical, or diagonal, and if you
+ [2290.800 --> 2298.000] ride the tube you know you're going in curves, but you don't need to know that. What the
+ [2298.000 --> 2305.520] designer of the tube map knew was that you needed to know where the lines intersected, so which line
+ [2305.520 --> 2312.080] you should switch to, and that there were more of them in central London, and people were coming
+ [2312.080 --> 2319.520] from the outskirts to central London, and so central London is way bigger than the rest of the map,
+ [2319.520 --> 2329.200] which is far less complicated, and the intersections are fewer. So all that is useful, but yes, it
+ [2329.200 --> 2337.280] distorts the distances and it distorts the shapes, but that complexity would make it much harder to use.
+ [2338.480 --> 2348.560] So it's considered a gem, and it's the paradigm; tube maps all over the world do that. And
+ [2348.640 --> 2355.200] there are huge arguments: they tried to change the one in New York, the subway map, and huge arguments
+ [2355.200 --> 2361.920] about what you're distorting and what not, and people are devoted to one thing and not another, and
+ [2362.720 --> 2370.320] can't see why you think this is important when obviously something else is important.
+ [2370.960 --> 2377.440] And we do that when we make brain maps, we do that when we make corporate diagrams: we do that
+ [2377.520 --> 2389.360] simplification, just showing the interconnections and maybe a hierarchy, and it really crystallizes what
+ [2389.360 --> 2396.720] we need to think about at the moment, decision trees and so forth.
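The design insight behind the tube map can be restated in data-structure terms: for choosing a route, a rider needs only the connectivity graph (which stations are adjacent, where the lines interchange), not the geometry. A sketch with invented station names; breadth-first search finds a fewest-stops route without ever consulting distances, curves, or compass bearings.

from collections import deque

# A schematic map reduced to what the rider actually needs: adjacency.
# Station names are invented for illustration; "B" is an interchange.
adjacency = {
    "A": ["B"],
    "B": ["A", "C", "X"],
    "C": ["B"],
    "X": ["B", "Y"],
    "Y": ["X"],
}

def fewest_stops(start, goal):
    # Breadth-first search over the station graph. True distances and
    # the curvature of the track are irrelevant to this computation,
    # which is exactly why the map can safely distort them.
    seen, queue = {start}, deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adjacency[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(fewest_stops("A", "Y"))   # ['A', 'B', 'X', 'Y']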
+ [2397.440 --> 2405.040] Yeah. You offer up eight laws of cognition in the book, one of which is that spatial thinking is sort of the foundation of
+ [2405.040 --> 2412.720] abstract thinking, and abstraction necessarily involves some kind of distortion, and so it sometimes
+ [2412.720 --> 2418.160] leads to some errors. And, you know, the favorite ones that you offered were the questions of,
+ [2418.800 --> 2425.920] you know, is Venice east or west of Naples? Is Reno east or west of Los Angeles? And of course my
+ [2425.920 --> 2433.440] favorite one: is Berkeley east or west of Stanford? And of course most people get them
+ [2433.440 --> 2443.760] all wrong. Why exactly is it that we get that wrong? How does this kind of alignment work in
+ [2443.760 --> 2448.800] our brains, and why is it super helpful to us, but why does it also lead to these errors?
+ [2449.680 --> 2459.360] So when I did that work on maps, showing that people upright the Bay Area and make the
+ [2459.360 --> 2470.000] boot of Italy go vertical (and it works in many places around the world), I was thinking of how
+ [2470.000 --> 2478.240] we organize perception, the perception of the environment that we're looking at, and
+ [2478.240 --> 2487.840] scene recognition. And two processes seem salient there. We group things by proximity; these are
+ [2487.840 --> 2495.440] gestalt effects that have been known and loved for many years. So things that are close we tend to
+ [2495.440 --> 2503.600] group together, and we tend to want things that are close or nearly alike to be going in the
+ [2503.600 --> 2512.400] same direction. So if I think of the Bay Area, and even if I know that it's tilted, I still
+ [2512.480 --> 2520.800] upright it in the frame of reference, so that the Bay Area, or Italy, is upright with respect to the
+ [2520.800 --> 2527.760] north-south-east-west frame of reference. We find it for South America: it looks tilted, but people
+ [2527.760 --> 2536.080] tend to upright it. And we find that for blobs, too: we just give people blobs and ask them to remember them,
+ [2536.080 --> 2543.440] and they tend to upright them or make them more horizontal. Kids do it; there's no evidence that kids
+ [2543.440 --> 2553.120] are different from adults (that work was never published; it was informal). So that's one way we
+ [2553.120 --> 2560.960] understand the space around us: by understanding it with respect to a frame of reference. And we
+ [2561.040 --> 2568.880] keep moving and integrating things, but we want things upright, or horizontal or vertical, with respect to
+ [2568.880 --> 2576.880] a frame of reference. I mean, the world is like that: we have gravity, and we have a horizon line that's
+ [2576.880 --> 2584.960] for the most part horizontal. The other part is that kind of grouping of things. So people think that South
+ [2585.120 --> 2591.680] America is below North America; they think Europe and the United States are lined up, when actually Europe is
+ [2591.680 --> 2598.560] farther north on the whole than the United States. So that's again a kind of grouping. We say "next to";
+ [2598.560 --> 2604.720] it comes out in language: something's next to something else, something's above something else. We
+ [2604.720 --> 2612.560] summarize brutally those sorts of things, and this is all over language. We don't have
+ [2612.560 --> 2622.000] exact terms; we don't say something is tilted 10 degrees, or 45 (45 has a special status). But the visual
+ [2622.000 --> 2630.320] system is attuned to horizontal and vertical; it's attuned to grouping. So these are basic processes
+ [2630.320 --> 2640.000] in perception that seem to be used also for organizing maps and experiences with space.
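The three pairs are easy to settle with raw coordinates, which makes the systematic error vivid. The longitudes below are approximate (negative means west of Greenwich), and in each pair the true answer runs against the uprighted, aligned mental map.

# Approximate longitudes in degrees; rounded values, but easily
# precise enough to settle east-versus-west for each pair.
lon = {
    "Venice": 12.34, "Naples": 14.25,
    "Reno": -119.81, "Los Angeles": -118.24,
    "Berkeley": -122.27, "Stanford": -122.17,
}

def east_or_west(a, b):
    side = "east" if lon[a] > lon[b] else "west"
    return f"{a} is {side} of {b}"

# All three answers run against intuition:
print(east_or_west("Venice", "Naples"))       # Venice is west of Naples
print(east_or_west("Reno", "Los Angeles"))    # Reno is west of Los Angeles
print(east_or_west("Berkeley", "Stanford"))   # Berkeley is west of Stanford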
+ [2640.960 --> 2647.920] The other thing you mentioned in that chapter was that when you're asking about proximity, it depends on
+ [2648.400 --> 2655.040] which direction you're asking. So if I say, you know, how close is your apartment to the Empire
+ [2655.040 --> 2660.240] State Building, you'll give me a different answer than if I say how close is the Empire State Building
+ [2660.240 --> 2667.760] to your apartment. I don't understand: how does that make sense? What's the process at work?
+ [2668.640 --> 2676.080] To some extent, the Empire State Building or the Eiffel Tower form neighborhoods, so they're
+ [2676.080 --> 2682.560] representing more than themselves. If somebody from out of state asks where I live, I might
+ [2682.560 --> 2689.600] say near the Empire State Building, near ground zero, because these are likely to be familiar to people;
+ [2689.600 --> 2696.720] they form neighborhoods. I'm near the Eiffel Tower, I'm near Notre Dame: those are again going to be
+ [2696.720 --> 2704.080] neighborhoods, and my house isn't a neighborhood; it isn't conceptually a neighborhood.
+ [2705.040 --> 2713.200] So it comes from how we think about those things. But it's also true (I mean, this was worked on
+ [2713.200 --> 2720.800] earlier): when people are asked to judge the similarity of magenta to red
+ [2721.120 --> 2730.800] or red to magenta, magenta is almost a red; it's a kind of red, but red isn't a
+ [2730.800 --> 2738.400] kind of magenta. So the distance from red to magenta is greater, psychologically, or is judged
+ [2738.400 --> 2745.760] psychologically greater, than the distance from magenta to red. And similarly, the distance to the Empire
+ [2745.760 --> 2754.240] State Building from my house will be judged as closer than the distance from the Empire
+ [2754.240 --> 2761.840] State Building to my house. So it's again a general phenomenon. These are spatial phenomena,
+ [2761.840 --> 2768.160] and we can find evidence for them in spatial judgments, but we can also find evidence for them in
+ [2768.160 --> 2777.280] conceptual judgments, which are quite similar. What's special about the landmark asymmetry and the
+ [2778.560 --> 2786.400] conceptual asymmetry? The colors are one of many other examples. My husband long ago did research showing
+ [2786.400 --> 2794.160] that people think that North Korea is more like the PRC, the People's Republic of China, than China is
+ [2794.240 --> 2801.520] like North Korea, and people like to talk about the similarity of a son to a father, not a father to a
+ [2801.520 --> 2812.240] son, right, because there's a kind of primacy: the father, or China, are paradigmatic, prototypical
+ [2812.240 --> 2820.640] examples, and North Korea and the son are not. So even though in some objective sense
+ [2820.720 --> 2828.720] the similarity has to be the same, people judge it as different. So this refutes any metric map
+ [2830.480 --> 2838.400] of conceptual relations, but also of spatial relations: if we accept the landmark
+ [2838.400 --> 2844.960] asymmetry as people's beliefs (and there's plenty of evidence all over the world for it), then our
+ [2844.960 --> 2852.240] mental maps are not Euclidean. They aren't the way real maps are; they're distorted, and
+ [2852.240 --> 2859.680] that's one of many distortions that we have when we make judgments. So again, the
+ [2860.400 --> 2866.800] errors in mental maps and the errors in conceptual judgments support this
+ [2866.800 --> 2874.720] idea that spatial thinking is foundational. But it also shows something about the way we make
+ [2874.720 --> 2881.360] decisions or predictions: we bring information, any information that seems relevant, together
+ [2882.080 --> 2888.560] and then try to make a judgment. It's not that I have a map in my head and I look at it; I just
+ [2888.560 --> 2895.120] bring the bits and pieces that I know, and that's going to be the way I make any judgment:
+ [2895.200 --> 2901.760] I'm going to bring together the relevant bits and pieces and then make a judgment.
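The judged asymmetry has a classic formalization in Tversky's contrast model of similarity, in which the subject's distinctive features count against similarity more heavily than the referent's. The feature sets and weights below are toy values invented to show how the asymmetry falls out, not data from the original studies.

# Tversky's contrast model: similarity of subject a to referent b is
#   sim(a, b) = theta*|A & B| - alpha*|A - B| - beta*|B - A|
# With alpha > beta, similarity is asymmetric: a sparse variant is
# judged more similar to a feature-rich prototype than vice versa.
THETA, ALPHA, BETA = 1.0, 0.8, 0.3

def sim(a, b):
    a, b = set(a), set(b)
    return THETA * len(a & b) - ALPHA * len(a - b) - BETA * len(b - a)

# Toy feature sets: the prototype is richer in salient features.
china = {"asian", "communist", "large", "nuclear", "un_member", "ancient"}
nkorea = {"asian", "communist", "nuclear"}

print(sim(nkorea, china))   # "North Korea is like China": 2.1 (higher)
print(sim(china, nkorea))   # "China is like North Korea": 0.6 (lower)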
+ [2902.640 --> 2908.640] Well, I grew up in Philadelphia, and we have a grid system, and the blocks are actually squares,
+ [2909.200 --> 2914.640] and they're aligned with north and south, and I think that shaped me, so that every time
+ [2914.640 --> 2919.520] I go to another city I'm working from that framework. And I remember when I moved to New York:
+ [2920.080 --> 2925.200] you know, the north-south blocks and the east-west blocks are very different in terms of their length,
+ [2925.200 --> 2930.240] and so that always screwed me up. And then when I moved to San Francisco, they've got this
+ [2930.240 --> 2936.240] big diagonal where the grid changes, and that confused the heck out of me. And, you know, living in
+ [2936.240 --> 2941.040] Berkeley, where the water is on the west, it took me about 10 years to figure out which direction
+ [2941.040 --> 2946.480] was north and south, because, you know, I lived in Cambridge and New York and Philadelphia,
+ [2946.480 --> 2952.000] all the places where the water is on the eastern side of things. So, you know, your
+ [2952.000 --> 2957.600] mental maps: you get accustomed to some version of a map, and then after that you
+ [2958.960 --> 2965.440] have a difficult time adjusting. Yeah, absolutely. You're orienting yourself with respect to the
+ [2965.440 --> 2972.000] grid pattern, if there is one, and grids go way back: the Romans used them, and wherever there were Roman
+ [2972.000 --> 2979.200] colonies in Europe you see them. They were popular and still are: Beijing is like that,
+ [2980.640 --> 2988.160] and many cities in Japan. It's harder to do when you have hills. So another way of building
+ [2988.160 --> 2994.320] roads is: I build my house, and then yours, and then another one, and the routes are built on top of
+ [2994.320 --> 3004.560] that. And that you can see in other European cities; that was studied in great depth by Bill
+ [3004.560 --> 3013.920] Hillier at the UCL school of architecture in London. But yeah, and I have traveled and
+ [3013.920 --> 3020.800] lived in many places: you're using great bodies of water, rivers, as well as a grid plan
+ [3020.800 --> 3029.360] as a way of orienting, and they don't quite jibe. One is more overall (where are the mountains, where are
+ [3029.360 --> 3039.840] the rivers, where are the bodies of water) and then there's also the local. So we're using those different
+ [3039.840 --> 3049.520] frames of reference, which are partial, to orient. The reason for making avenues wider and more widely
+ [3049.520 --> 3059.040] spaced than streets is actually a good one, because, having traveled quite a bit in colonial cities
+ [3059.040 --> 3067.760] in Mexico and related places where they're equal, you can get turned 90 degrees easily. You can
+ [3067.760 --> 3074.560] get turned 180 degrees in New York, with the avenues and the streets, but you can't
+ [3074.560 --> 3082.480] get turned 90. So having that asymmetry, the north-south ones wider and spaced farther apart
+ [3082.480 --> 3094.080] and the east-west ones closer, actually is, I think, a useful way to navigate.
+ [3094.080 --> 3100.000] Now, you spend a lot of time talking about gesture in the book as a form of communication, and I found
+ [3100.000 --> 3105.680] this very powerful. And the fact that I found most interesting was that this does not increase your
+ [3105.680 --> 3112.720] cognitive load; actually, since these are different, they're using
+ [3112.720 --> 3119.280] different modules, I guess, it doesn't require extra mental effort, and in fact when you add in gesture
+ [3120.400 --> 3126.240] the communication is more effective. And yet, you know, I've been sitting here with you, and you've
+ [3126.240 --> 3131.680] been gesturing quite a bit; I have not. And of course in Italy, where everyone... you
+ [3131.680 --> 3136.960] know, if you want somebody to shut up, you tie their hands behind their back, right? Whereas in
+ [3136.960 --> 3144.000] other cultures people aren't using their hands quite as much. And you reference how the
+ [3144.000 --> 3148.800] part of the brain that we use for speech is the same as the one that was originally for
+ [3149.600 --> 3160.080] using the hands. How can people use their gestures better to communicate, not only abstract
+ [3160.080 --> 3170.160] thoughts, but also intention and emotion? So, first of all, the area for speech didn't
+ [3170.160 --> 3175.680] completely overtake the area for the hand. This is speculation by the man who did the
+ [3176.400 --> 3184.560] early, beautiful work on mirror neurons, Rizzolatti, and it was highly speculative. Those areas
+ [3184.560 --> 3193.680] in the monkey are close, and also in humans they're close, but not overlapping. So his theory really was
+ [3193.680 --> 3202.240] that... Because there are single neurons in this mirror neuron system (it's different from
+ [3202.240 --> 3209.360] place cells), single neurons in monkeys that fire when the monkey throws something
+ [3210.000 --> 3217.440] and when the monkey sees someone else, even a human, throw something. So it's mapping the seeing
+ [3217.440 --> 3225.360] of the action to the doing of the action, and it's a small vocabulary of actions in monkeys that
+ [3225.440 --> 3235.200] do it. But the speculation is that instead of really throwing, I could use that gesture of throwing
+ [3235.200 --> 3243.440] as an indication of my intent. And we do follow people's hands for intention; we follow their eyes,
+ [3243.440 --> 3248.240] since they're going to look at where they're going to act; and then we follow their shoulders, because
+ [3248.240 --> 3256.480] they're going to turn to act on something. And we follow their hands as we interpret what other
+ [3256.480 --> 3263.440] people are going to do. I mean, think of being a catcher and watching the pitcher, right? They
+ [3263.440 --> 3271.120] have to have very sharp eyes to infer what the pitcher is going to do, and the batter has to do that,
+ [3271.120 --> 3277.440] and then the pitcher has to fake them out. You mentioned that you can tell, just by looking at
+ [3277.440 --> 3284.560] someone's hands, what their emotion, their intention is. And I remember I took an art history class where
+ [3284.560 --> 3293.200] we spent a couple weeks just looking at the hands in paintings, and, you know, we were supposed to infer
+ [3293.200 --> 3300.880] what the intent, the emotion, the state of mind was of the people in the paintings. And of
+ [3300.880 --> 3306.800] course, the way you would do this is you would actually put yourself physically in the position,
+ [3307.600 --> 3312.320] and put your hand in the position that you saw in the painting, and this would help you to
+ [3312.320 --> 3320.800] understand better what this person in the painting was feeling. Well, the mirror
+ [3320.800 --> 3327.200] neuron system says that happens automatically, and the experiments I told you about
+ [3327.200 --> 3335.920] earlier, about recognizing yourself dancing, also say you don't have to do it: you feel it in your
+ [3335.920 --> 3346.240] body; it resonates. That's the theory, that it resonates. So yes, we can; you know, how much
+ [3346.240 --> 3352.640] we're going to recognize from paintings depends on the skill of the painter and the painter's
+ [3352.640 --> 3361.040] ability to represent those actions well. I mean, if you look at the Last Supper, which I've had the
+ [3361.040 --> 3368.560] pleasure of doing several times, you see Da Vinci really knew about where people were looking
+ [3369.440 --> 3375.120] and the social interactions and where they're reaching and where they're gesturing, and you can
+ [3376.000 --> 3382.320] mentally animate exactly what's going on in that conversation around the Last Supper.
+ [3382.320 --> 3393.680] It's truly extraordinary. So yes. And colleagues of mine in Italy did absolutely brilliant
+ [3393.680 --> 3402.960] research: the task was having people watch videos of a hand reaching for a bottle, and the videos
+ [3402.960 --> 3409.440] were truncated before the hand reached the bottle, but observers were able to tell whether the
+ [3409.440 --> 3417.120] person reaching was going to drink from the bottle, or pour, or hand the bottle to somebody else,
+ [3417.120 --> 3424.560] just from the way the hand approached the bottle. Now, the bottle is grasped the same way no matter what.
+ [3425.280 --> 3433.840] So these are mysterious findings (we don't know what cues people are picking up), but it gets at,
+ [3433.840 --> 3442.000] again, that we're learning or understanding so much from human behavior, from looking at people. It isn't
+ [3442.000 --> 3449.760] words; it's not what they're saying. We know when people are bored, we know when people are rapt,
+ [3450.400 --> 3459.440] whether they're sad or happy. Yesterday my son sent me a photograph of his daughter,
+ [3460.000 --> 3467.680] whom I adore, and she had just gotten braces, and you could see in her face she was in pain and trying
+ [3467.680 --> 3477.440] to smile. So, you know, she's making this great effort and smiling, and you know also that she was in
+ [3477.440 --> 3485.600] pain. And those subtleties we pick up in a nanosecond. Again, thinking of basketball (I mentioned
+ [3485.600 --> 3492.960] baseball before): in basketball it happens so quickly, way faster than words, and we have to
+ [3492.960 --> 3501.040] figure out where to throw the ball, and in a way that my opponents won't know where I'm throwing it
+ [3501.040 --> 3508.320] or what I'm doing. Magicians know that: they know how to make you look where they want you to
+ [3508.320 --> 3514.400] look, by looking there or turning there, and then they do something that you're not going to notice
+ [3514.400 --> 3525.280] but is in full sight, with their hands. So it's those subtleties, you know; magicians and basketball
+ [3525.280 --> 3531.280] players pick them up almost intuitively. In fact, I think in the case of basketball players it's
+ [3531.280 --> 3541.520] intuitive, and remarkable, but it's just so much faster than words. And one of the fun facts
+ [3541.520 --> 3546.320] that you mentioned in the book is that basketball players know whether the ball is going to go in
+ [3547.760 --> 3552.960] way earlier than anybody else, right? And so, you know, in the Bay Area we've got
+ [3552.960 --> 3559.920] Steph Curry, and he's famous for it: as soon as he releases the ball, he just turns around and
+ [3559.920 --> 3565.040] celebrates, while the rest of us are wondering whether it's going to go in. He already knows,
+ [3566.000 --> 3569.120] as does anyone when they're observing other players; they know.
+ [3569.760 --> 3577.600] That seems right, and they're better than the refs. Yeah, well, again, it's remarkable. So there's something about
+ [3578.240 --> 3584.720] the way their body is moving with respect to the ball that they know it's going to go in;
+ [3584.720 --> 3592.320] of course, it's lots of practice. Are we losing something in our practice, in the move to online
+ [3592.320 --> 3597.840] communication, right? You know, we're doing this interview virtually; business meetings are virtual.
+ [3598.640 --> 3604.400] You know, we can still see each other's faces, and potentially maybe even bits and pieces of our hands,
+ [3604.400 --> 3611.840] but there's so much of the kind of bodily cues and corporeal movements that are
+ [3612.320 --> 3618.240] not as visible, and maybe we're not even making as many, because we feel more
+ [3618.240 --> 3625.120] constrained within this little visual box. Are the communications necessarily going to be
+ [3626.400 --> 3633.120] kind of emaciated through this medium? Oh, absolutely. I think everybody's realizing it in
+ [3633.120 --> 3639.920] Zoom. If you see that I'm gesturing a lot, it's intentional, because I know from a great deal
+ [3639.920 --> 3647.120] of research that people understand what other people are saying through their gestures; often the
+ [3647.120 --> 3656.080] gestures say things that the words don't say. So people giving instructions may, you know,
+ [3656.080 --> 3664.720] curve, may show a curvy road, when they're not saying it. So if you're in Italy, as I have been,
+ [3664.720 --> 3670.880] and gotten lost in the mountains and asked somebody for directions, and you know three words of
+ [3670.880 --> 3680.160] Italian, watching their bodies gives you a huge amount of information. And abstract things
+ [3680.160 --> 3687.440] are higher, lower. I remember being in a meeting in France many years ago, where I couldn't hear
+ [3687.440 --> 3694.640] the French (and my French has gotten rotten); watching the speakers gesture helped me
+ [3694.640 --> 3702.640] understand it no end. So I know gestures communicate. People gesture before the words come out,
+ [3703.680 --> 3710.080] so they prime you for things. Gestures can set up whole spatial schemas: on the left,
+ [3710.720 --> 3716.720] on the one hand, on the other hand, and then I just need to do this, and, you know, I'm piling up
+ [3716.720 --> 3724.560] arguments because I've set up this space that's representing on the one hand, on the other hand.
+ [3724.880 --> 3735.200] I can set up spaces for times of events, and in order. So gestures... to do it in words would take a lot
+ [3735.200 --> 3744.800] of words, but I can do it very quickly with gestures: on the top, on the bottom; feeling good, feeling
+ [3744.880 --> 3754.480] lousy. So I do gesture on purpose and deliberately, and it matches my thinking, I hope, and helps other
+ [3754.480 --> 3762.720] people. I know it helps other people; it also helps me. Yeah. So there's a whole lot of research:
+ [3763.760 --> 3770.480] having people sit on their hands and explain how to get from the railroad station to their house,
+ [3770.480 --> 3777.680] and they have trouble, not just finding words, but thinking it through. So if we put people
+ [3777.680 --> 3784.080] alone in a room and give them spatial descriptions, descriptions of how things work,
+ [3784.080 --> 3790.880] descriptions of different schedules of people (they're alone in a room, and learning this because
+ [3790.880 --> 3799.600] they're going to be tested), they make models with their hands of the road system, where things are,
+ [3800.240 --> 3808.960] of how the car brake works, of where the different times are. They'll make a schedule on the table,
+ [3808.960 --> 3819.120] in the air, using the knuckles of their hands, and when they do that they remember better. So setting up
+ [3819.840 --> 3828.320] your ideas in space helps you think. Now, not everyone does it; we've had people perform perfectly
+ [3828.400 --> 3839.280] on our tests without doing it. But somewhere between 50 and 70% of participants
+ [3839.280 --> 3847.280] in studies where the information can be spatialized, like time or mechanical systems,
+ [3847.840 --> 3857.120] between 50 and 70% spontaneously do it, and when they do it they remember better. So we do use
+ [3857.120 --> 3862.880] our body to think. I think you mentioned that you can measure teaching effectiveness, or performance
+ [3862.880 --> 3870.080] in a musical audition, simply by watching: without even listening you can predict with a high degree
+ [3870.080 --> 3876.400] of success. So, the last thing: I want to ask you basically two questions. One has to do with
+ [3878.160 --> 3886.080] what you call empathic design, and how thinking from different perspectives can help you to
+ [3886.080 --> 3892.320] both design better and explain better; and then also the importance of drawing.
+ [3892.320 --> 3896.480] I was really interested in what you had to say about drawing as a tool for
+ [3897.920 --> 3902.880] both learning and creativity and communication. And I remember going to a talk many years ago at an
+ [3902.880 --> 3908.720] architecture school where one of the architects was bemoaning the disappearance of sketching in
+ [3908.720 --> 3914.720] architecture, and how, you know, the use of CAD, and now, you know, Revit and other tools, are taking over.
+ [3915.040 --> 3922.160] And, you know, at first I was thinking this person was just kind of a Luddite, but then
+ [3922.160 --> 3927.040] I thought about it, and I realized that there was something about this act of drawing, because I'm
+ [3927.040 --> 3933.120] always drawing and always doodling; I'm always diagramming things and, you know, reconfiguring
+ [3933.120 --> 3939.680] things and rearranging things in space, on paper; here, I still have a pen, and I still
+ [3939.760 --> 3948.560] do all this stuff. So what is it about drawing that helps us to kind of generate concepts
+ [3949.280 --> 3956.000] better? And then, what is this idea of empathic design? How do you see that? So there are
+ [3956.000 --> 3964.160] two long questions; I can start with the second. You could think of drawing as frozen gestures; one
+ [3964.240 --> 3971.280] postdoc I worked with coined that term. I think they go beyond, because once you're putting something
+ [3971.280 --> 3982.480] on paper it's going to be more complete (it has to be), and you see gaps, and you also see implications
+ [3982.480 --> 3990.000] that you hadn't thought about. So both those things happen when you draw. So many architects
+ [3990.000 --> 3999.360] do prefer drawing for designing, and we studied many of them, and they all drew. We gave them a drawing
+ [3999.360 --> 4008.160] assignment; we took experienced architects and newly minted ones, and they draw, and their
+ [4008.160 --> 4014.080] early drawings are sketchy; they're not working out all the details, just the main ones:
+ [4014.720 --> 4020.320] where are things going to be located, what's the background, that sort of thing. But then, when they
+ [4020.320 --> 4026.720] look at their drawings again, they can make inferences that they couldn't make in their minds;
+ [4026.720 --> 4035.360] the mind is too small, the world is bigger. So, for example... when the mind overflows, that's
+ [4035.360 --> 4045.840] when we use words, gestures, drawings, arrangements of the salt and pepper on the kitchen table. It's a
+ [4045.840 --> 4055.840] way of externalizing ideas, for someone else or for ourselves. When we're working collaboratively,
+ [4055.840 --> 4064.000] it's our product, not yours or mine, and we can both alter it, and when we're designing or talking,
+ [4064.000 --> 4070.480] we're not looking at each other; we're looking at those objects on the table that we're moving around,
+ [4071.120 --> 4080.720] and again, that helps us understand and think; it's much more precise than words. So that's missing from
+ [4080.720 --> 4089.360] Zoom: that shared thinking space that we both need. That's missing from Zoom. Also missing from Zoom,
408
+ [4089.680 --> 4098.400] is or the platform we're using now any of them is the gestures and this camera isn't so terrible
409
+ [4098.400 --> 4107.360] but the zoom camera well it is when you start to gesture your hands get huge and so again I'm
410
+ [4107.360 --> 4113.040] self-consciously gesturing I'm trying to gesture in a way that my hands don't overwhelm
411
+ [4113.600 --> 4122.800] but those are missing and I know there are thoughtful people knowing that work will probably stay
412
+ [4122.800 --> 4131.280] partially remote that are working on making better video platforms that will include
413
+ [4132.000 --> 4142.400] will allow gesturing easily and will allow a shared thinking space the other thing that
414
+ [4142.400 --> 4150.080] happens in group meetings even one-on-one is we need to be able to look each other in the eye
415
+ [4150.960 --> 4158.000] and when you have an array of boxes people in boxes everybody's array is different we don't know
416
+ [4158.000 --> 4166.160] who's being looked at who's being looked at is crucial for the next speaker it's crucial for
417
+ [4166.160 --> 4172.160] attention for knowing what other people are attending so having these arrays of boxes
418
+ [4172.160 --> 4176.560] where you don't know where people are looking and your array's different from mine
419
+ [4176.560 --> 4180.240] so this increases cognitive load right because we're trying to figure it out and we can't
420
+ [4180.240 --> 4187.280] it interferes with normal communication we need to know what people are looking at what they're
421
+ [4187.280 --> 4193.920] attending to what they're seeing it helps us know what they're attending to so it's not cognitive
422
+ [4193.920 --> 4200.400] it can't be it's not cognitive load it's uncertainty we don't know what they're looking at we
423
+ [4200.400 --> 4207.360] couldn't figure it out even if we had unlimited cognitive load so there are many ways that
424
+ [4207.360 --> 4217.200] these video conferencing platforms need to be improved a shared thinking space seeing gestures
425
+ [4217.760 --> 4225.600] having a feeling of being around the table and even that was worked on I know by colleagues
426
+ [4225.600 --> 4234.000] even before the pandemic and the reliance on zoom but and it will and there are more
427
+ [4234.000 --> 4243.120] quite sophisticated conferencing they're usually proprietary conferencing videos that do a better
428
+ [4243.120 --> 4251.920] job of incorporating those three features and they're probably more so undrawings it helps the
429
+ [4251.920 --> 4258.640] architects make inferences so back to this experienced architects who's putting things in various
430
+ [4258.640 --> 4267.120] places their buildings a museum so the architect the experienced architect can then see the light's
431
+ [4267.120 --> 4274.160] going to fall badly in the winter so that's not in the diagram that's an inference that the
432
+ [4274.160 --> 4282.240] architect must make about the diagram that takes experience in experienced architects can see
433
+ [4282.240 --> 4288.560] perceptual patterns but they can't make these it's harder for them to make these conceptual ones
434
+ [4288.560 --> 4296.240] that the experienced architects see in a minute this is true for chest players it's true for musicians
435
+ [4296.240 --> 4304.880] and so for making inferences for what you see that aren't there takes practice experience knowledge
436
+ [4305.920 --> 4315.200] so um right that's the usefulness of sketches and design graduate student working with me studied
437
+ [4315.200 --> 4321.840] artists for them drawing as their main practice and they said things like I deliberately get myself
438
+ [4321.920 --> 4330.880] in trouble I don't want to do my usual tricks I know it's safe I can explore I get lost and it's I
439
+ [4330.880 --> 4338.080] could I'm gonna find my way out or I'll tear it up and start again so they enjoy that getting lost
440
+ [4338.080 --> 4344.240] just like if you're in a new your influence and you have plenty of time and there aren't too many
441
+ [4344.240 --> 4350.480] cars or tours getting lost is a pleasure as long as you know you're gonna find your way out
442
+ [4351.200 --> 4358.080] so you find things that you're not anticipating you're not worried about being lost so you're enjoying
443
+ [4358.080 --> 4368.960] what's around you um so that getting lost can be um a real pleasure for we've also found sketches
444
+ [4368.960 --> 4377.760] are good for students learning so we asked students to we taught students molecular bonding
445
+ [4378.560 --> 4387.200] over a number of days this is work with the former graduate student Eliza Bobick and then asked
446
+ [4387.200 --> 4394.080] half of them to make a visual a verbal explanation of chemical bonding this is what you normally do
447
+ [4394.080 --> 4402.160] on a test and we asked the other have to make visual explanations and then we retested them we
448
+ [4402.240 --> 4409.280] tested them before we had them make the explanations and the groups were equal and then we tested them
449
+ [4409.280 --> 4415.760] after they've made the explanations they didn't have access to any of the materials only what they
450
+ [4415.760 --> 4425.120] remembered um and then tested them again so what's interesting is both groups improved just from
451
+ [4425.120 --> 4431.520] conjuring up an explanation from what they had already learned without checking what they'd learned
452
+ [4431.520 --> 4438.560] so both groups improved just from making explanations but the visual group did much better
453
+ [4440.000 --> 4447.600] and we were delighted and they did better both on the behavior of the molecules of the chemical
454
+ [4447.600 --> 4455.760] bonding and on the structure we checked that separated the information for about a behavior or
455
+ [4455.760 --> 4462.880] causality from the information about structure so they did better on both and we thought if you
456
+ [4462.880 --> 4469.440] have that drawing in front of you you have a check for completeness is everything I need there
457
+ [4470.240 --> 4477.760] you have a check for coherence does this system work can it behave the way I think it
458
+ [4477.760 --> 4485.600] should so you have that check and you also have a platform for inferences so you can see well
459
+ [4485.600 --> 4494.560] I need to add causality here because I've only got structure so and with a list of sentences
460
+ [4495.360 --> 4504.640] it's much harder to do that checking is everything there is everything coherent um what more should I
461
+ [4504.640 --> 4512.720] be saying so the drawings give you that and um we think it's a tool that teachers can use
462
+ [4513.360 --> 4520.480] the other thing it does for teachers is it it tells them what the students are confused about
463
+ [4521.440 --> 4529.120] so the language is less likely to do that and the drawings give teachers insight into what
464
+ [4529.120 --> 4537.520] students are confused about and what they need to clarify and just final question about the
465
+ [4537.920 --> 4545.280] um um um uh empathic design and you mention that um you know mind-wandering and brain
466
+ [4545.280 --> 4551.760] you know brainstorming in an unstructured way is is not necessarily going to lead to greater
467
+ [4551.760 --> 4560.480] creativity but this idea of um empathic design which incorporates perspective taking uh you know
468
+ [4560.480 --> 4567.400] you know, moving, uh, literally moving vertically and, uh, rethinking problems
469
+ [4567.400 --> 4571.920] from other people's points of view, this helps you to become more creative.
470
+ [4571.920 --> 4573.920] How do you think about that?
471
+ [4573.920 --> 4580.280] Yeah, so again, you know, I'm an empiricist, so I need evidence.
472
+ [4580.280 --> 4585.600] And there had been a lot of studies on mind wandering.
473
+ [4585.760 --> 4592.000] You know, anything with mind in it is being mindful or mind wandering, which are opposites.
474
+ [4592.960 --> 4598.400] Anything with mind in it gets people's attention and they think it's wonderful.
475
+ [4598.400 --> 4605.440] So there have been a number of studies trying to show that mind wandering increases creativity.
476
+ [4605.440 --> 4610.800] And the typical test is alternative uses.
477
+ [4610.800 --> 4616.960] This is used in many engineering and design classes as a warm-up exercise.
478
+ [4616.960 --> 4621.840] So the typical example is a brick, think of other uses of a brick.
479
+ [4622.560 --> 4628.800] We found that wasn't a particularly productive example, so we pre-tested and found others.
480
+ [4630.560 --> 4636.320] Typical ones: uses of an umbrella, a shoe, and so forth.
481
+ [4636.320 --> 4643.920] We had a bunch of objects that we took from other people's experiments and pre-tested ping pong ball,
482
+ [4643.920 --> 4647.360] which we didn't use. So it's alternative uses.
483
+ [4647.360 --> 4649.920] And are you coming up with novel uses?
484
+ [4649.920 --> 4656.720] It's hard to because you keep coming back to the normal uses of an umbrella to keep the rain
485
+ [4656.720 --> 4662.240] off, maybe sun, but that's not very novel and same with bricks.
486
+ [4662.960 --> 4669.120] So the mind wandering might work because it releases what's called fixation.
487
+ [4669.120 --> 4673.920] Fixation is when you can't see alternative solutions.
488
+ [4673.920 --> 4681.040] You keep coming back to the ones that you had and going back to solving algebra problems
489
+ [4682.080 --> 4684.000] and other kinds of problems.
490
+ [4684.000 --> 4686.240] We've all experienced fixation.
491
+ [4686.240 --> 4692.560] It's all too common and even top-notch designers experience fixation.
492
+ [4693.120 --> 4700.320] So mind wandering brings in other stimuli, walking in the woods brings in other stimuli
493
+ [4701.040 --> 4706.800] so that and kind of releases it can help release you from fixation.
494
+ [4707.520 --> 4714.240] But what it doesn't do is give you a pathway to finding new solutions.
495
+ [4715.040 --> 4719.520] And it's again similar to behavior of children or dogs.
496
+ [4719.520 --> 4726.240] You tell them, don't do that, but you don't tell them what to do as an alternative.
497
+ [4726.240 --> 4731.920] So they keep doing that because that's the first response in their repertoire.
498
+ [4731.920 --> 4735.360] And you have to change the responses in their repertoire.
499
+ [4735.360 --> 4739.520] So they're doing the right thing instead of the wrong thing.
500
+ [4739.520 --> 4742.560] We do it on ourselves. Take the fruit, not the cake.
501
+ [4743.120 --> 4745.440] Right? We have to suppress the wrong.
502
+ [4746.080 --> 4748.240] So it's not just don't take the cake.
503
+ [4748.240 --> 4749.440] Give me something else.
504
+ [4750.640 --> 4753.840] And similarly for a child, don't just say don't hit your brother.
505
+ [4754.640 --> 4760.480] Do figure out another way of interacting or give the child another way of interacting.
506
+ [4761.040 --> 4762.800] So same in design.
507
+ [4762.800 --> 4765.440] So I might be released from fixation.
508
+ [4765.440 --> 4768.080] I still don't know how to search for new ideas.
509
+ [4768.880 --> 4773.280] So with the alternative uses, we tried a bunch.
510
+ [4773.920 --> 4776.000] Think of yourself in another place.
511
+ [4776.000 --> 4777.760] Think of yourself in another time.
512
+ [4777.760 --> 4780.400] Think about major events like parties.
513
+ [4781.280 --> 4789.360] But if you take each of those and each of those are important ways that we organize information
514
+ [4789.360 --> 4795.600] around people, places and events, time, those are ways that we organize in our head.
515
+ [4796.160 --> 4797.040] Information.
516
+ [4797.520 --> 4800.800] But there's a way to bundle all that.
517
+ [4800.800 --> 4803.120] And that's professional roles.
518
+ [4804.320 --> 4806.640] And we know a great deal about those.
519
+ [4806.640 --> 4808.560] Because from the time we're very little,
520
+ [4808.560 --> 4810.320] what do you want to be when you grow up?
521
+ [4810.320 --> 4811.920] A pilot, an engineer.
522
+ [4812.800 --> 4815.440] So that we know and we interact with them.
523
+ [4815.440 --> 4820.160] With librarians, with physicians and so forth.
524
+ [4820.160 --> 4822.240] So we know a lot about what they do.
525
+ [4822.640 --> 4829.280] And what we did with the unusual uses is say, think of a gardener.
526
+ [4830.000 --> 4832.800] How would a gardener use an umbrella?
527
+ [4833.760 --> 4839.040] How would a gardener use or how would a physician use it?
528
+ [4839.040 --> 4840.400] How would a policeman?
529
+ [4840.400 --> 4844.960] How would... so we gave them a bunch of professional or social roles.
530
+ [4845.600 --> 4851.040] And said think about how these people might use that object.
531
+ [4851.040 --> 4853.520] And see if you can come up with more ideas.
532
+ [4854.480 --> 4855.200] And they did.
533
+ [4856.480 --> 4860.000] So the people where we told them to take
534
+ [4861.920 --> 4864.800] different roles, an empathetic approach.
535
+ [4865.840 --> 4870.160] And this is the approach of the major design firms like IDEO.
536
+ [4870.480 --> 4877.840] It's put yourself in their shoes and think how they would do this or know how they would do this.
537
+ [4877.840 --> 4883.840] And in fact, it's amplified by a great deal of anthropological research of going into
538
+ [4883.840 --> 4888.880] communities and seeing what people are doing and how they're solving the problems
539
+ [4888.880 --> 4890.000] you're interested in.
540
+ [4890.720 --> 4893.760] So we told people to think about that.
541
+ [4893.760 --> 4896.000] They came up with many more uses.
542
+ [4896.720 --> 4899.920] The uses they came up with were much more novel.
543
+ [4900.880 --> 4905.120] And they not only used our roles, they invented new ones.
544
+ [4905.840 --> 4911.840] So they got the idea just thinking of other points of view, other perspectives.
545
+ [4912.400 --> 4916.320] And maybe you'll come up with new items.
546
+ [4916.320 --> 4917.280] And they did.
547
+ [4917.280 --> 4922.160] And the people that adopted more roles came up with more novel items.
548
+ [4922.640 --> 4924.800] And the experiment worked like a charm.
549
+ [4925.520 --> 4926.720] We did it twice.
550
+ [4928.000 --> 4934.880] The mind wandering was really no better than the controls that were given no extra instructions.
551
+ [4935.760 --> 4942.560] So and again, we think it succeeds because we've given people a route, a pathway.
552
+ [4943.680 --> 4952.080] So then you sort of think, will this work for other kinds of problems, taking other perspectives?
553
+ [4952.080 --> 4959.280] So in political decisions, maybe thinking about your adversaries or people with other points of view,
554
+ [4959.920 --> 4964.960] in economic decisions, think about what other companies, other countries might do,
555
+ [4965.760 --> 4969.120] in your shoes, and then maybe you'll alter your own.
556
+ [4969.680 --> 4974.720] So that empathetic route can work for many others.
557
+ [4976.400 --> 4979.440] For others, it's for other situations.
558
+ [4979.440 --> 4981.440] It's kind of more complicated.
559
+ [4981.440 --> 4989.200] So an example that I like especially is one that Mukherjee highlighted in a New Yorker article
560
+ [4990.240 --> 4996.160] that people, the metaphor people had for cancer, or the traditional one,
561
+ [4996.160 --> 5000.480] is cancer is an invading force and it must be destroyed.
562
+ [5001.360 --> 5004.240] So you do anything to destroy the cancer.
563
+ [5005.040 --> 5009.840] But then people realized that many people die with cancers,
564
+ [5010.800 --> 5015.120] with in fact multiple ones that never turned aggressive.
565
+ [5015.120 --> 5019.680] They didn't die of the cancers, but there's evidence that they had them.
566
+ [5020.480 --> 5026.480] So a new metaphor took hold and that's that cancer is a seed.
567
+ [5028.000 --> 5031.440] And you don't, you want to spoil the soil basically.
568
+ [5031.440 --> 5034.720] You want to prevent the seed from germinating and spreading.
569
+ [5035.200 --> 5038.240] And it's kind of a more peaceful metaphor for one thing,
570
+ [5038.960 --> 5043.120] but it does make you think differently about treatments.
571
+ [5044.080 --> 5050.320] And you're preventing the cancer from taking hold instead of killing whatever is there.
572
+ [5051.120 --> 5055.840] Because whatever you're using to kill the cancer kills other cells.
573
+ [5055.840 --> 5060.720] So it has side effects that are unwanted just as wars do.
574
+ [5062.080 --> 5071.920] So thinking about alternatives, and even that idea, I think, has great generality of thinking of
575
+ [5071.920 --> 5078.400] things in this military way, as the enemy, versus thinking of things that are seeded and you
576
+ [5078.400 --> 5084.880] want to prevent the growth. I think those metaphors can be applied to more than cancer,
577
+ [5084.880 --> 5092.640] to the way we deal with other countries, other people, and might do a great deal,
578
+ [5092.640 --> 5098.080] not just to improve cancer research and treatment, but our relations with the world.
579
+ [5099.040 --> 5103.840] Well, and of course this illustrates how all thinking is rooted in spatial thinking,
580
+ [5103.840 --> 5109.600] because we talk about viewing things from all sides, right, and taking different perspectives.
581
+ [5109.600 --> 5116.560] And, you know, it's about, you know, if you want to understand something, you have to kind of
582
+ [5116.560 --> 5123.040] move around it. And so thank you so much, Barbara. This has been fantastic. I really appreciate
583
+ [5123.040 --> 5128.160] you joining me. And of course, I will think of you next time I'm on the airplane and someone
584
+ [5128.160 --> 5134.640] hits me in the face with their backpack, because that was, that was hopefully that'll happen
585
+ [5134.640 --> 5139.760] sometime soon. Well, I'll be so happy. I'll be glad that I get hit in the face with the backpack
586
+ [5139.760 --> 5146.000] on the plane. But thank you. I really appreciate you joining me. I hope to see you sometime on campus.
587
+ [5146.000 --> 5153.600] Oh, this is UnSylo brought to you by Lama FM, connecting people through stories.
transcript/allocentric_l-KxSSf4gyM.txt ADDED
@@ -0,0 +1,955 @@
1
+ [0.000 --> 2.240] Okay, let me share my screen here.
2
+ [5.280 --> 6.920] Start up my slides.
3
+ [9.560 --> 13.880] Okay, so first I wanna say to Carl and David,
4
+ [13.880 --> 15.880] thank you for organizing this.
5
+ [15.880 --> 17.960] I see a lot of familiar faces on the screen.
6
+ [17.960 --> 20.280] I see a lot of new faces.
7
+ [20.280 --> 24.760] I know some of you have torn yourself away from the election.
8
+ [24.760 --> 29.760] And so I wanna promise you an anxiety free time here.
9
+ [29.760 --> 32.680] Carl tried to up the anxiety by making it sound like
10
+ [32.680 --> 35.080] I was gonna do a lot of math, but I'm not.
11
+ [35.080 --> 39.480] And so I hope you just enjoy a little respite from the anxiety.
12
+ [39.480 --> 41.680] And the other thing in contrast to the election,
13
+ [41.680 --> 45.720] I promised to reach a conclusion in my time.
14
+ [47.360 --> 50.720] So I'm gonna talk about the fly.
15
+ [50.720 --> 53.320] And in particular, I'm gonna ask the question,
16
+ [54.800 --> 56.960] sorry, this is not, yeah, there it goes.
17
+ [57.920 --> 60.920] How does a fly know which way it's traveling?
18
+ [60.920 --> 64.320] Now you might think that's an easy question
19
+ [64.320 --> 66.800] and just say, well, that way,
20
+ [66.800 --> 68.880] the direction that the fly is heading,
21
+ [68.880 --> 71.760] we call the heading direction or facing,
22
+ [71.760 --> 74.800] but a fly travels around in wind
23
+ [74.800 --> 79.120] and often is not going in the same direction
24
+ [79.120 --> 80.120] that it's facing.
25
+ [80.120 --> 82.920] It's drifting to the side due to this wind.
26
+ [82.920 --> 85.520] And therefore, it needs to, if it wants to know
27
+ [85.520 --> 87.320] where it's really going in the world,
28
+ [87.320 --> 89.920] needs to compensate for this.
29
+ [89.920 --> 91.600] And in its navigational system,
30
+ [91.600 --> 93.640] figure out where it's actually traveling.
31
+ [93.640 --> 95.640] So we distinguish between heading direction,
32
+ [95.640 --> 96.920] which is what I've shown,
33
+ [96.920 --> 98.000] and traveling direction,
34
+ [98.000 --> 101.520] which is the velocity vector of this fly.
35
+ [101.520 --> 103.480] So that would be in this case,
36
+ [103.480 --> 105.760] something like that, the traveling direction.
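To make the distinction concrete, here is a minimal Python sketch (mine, not from the talk; the wind value is invented): heading is the angle of the body axis, traveling direction is the angle of the ground-velocity vector, and drift is the difference.

```python
import math

heading = 0.0                      # body axis along +x, in radians
airspeed = 1.0                     # self-motion along the heading
wind = (0.0, 0.6)                  # hypothetical sideways wind (wx, wy)

# Ground velocity = motion along the heading plus wind drift.
vx = airspeed * math.cos(heading) + wind[0]
vy = airspeed * math.sin(heading) + wind[1]

traveling = math.atan2(vy, vx)     # angle of the velocity vector
drift = traveling - heading        # the sideways drift to compensate for
print(round(math.degrees(traveling), 1), round(math.degrees(drift), 1))
# -> 31.0 31.0: the fly faces 0 deg but actually travels at about 31 deg
```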
37
+ [106.720 --> 110.920] Now flies have ways of determining that they're drifting
38
+ [110.920 --> 112.120] or which way they're going.
39
+ [112.120 --> 113.400] In the visual system,
40
+ [113.400 --> 116.320] there are various optic flow sensors
41
+ [116.320 --> 119.000] that project the direction of motion
42
+ [119.000 --> 121.320] onto these vectors that I've shown.
43
+ [121.320 --> 123.960] And so that's the kind of calculation
44
+ [123.960 --> 126.680] I'm going to try to tell you about.
45
+ [126.680 --> 128.840] But those vectors, of course,
46
+ [128.840 --> 130.240] are attached to the fly.
47
+ [130.240 --> 132.840] They're part of the fly visual system.
48
+ [132.840 --> 135.560] But if you want to get to a specific target in the world
49
+ [135.560 --> 138.800] or you want to know what path did you fly on,
50
+ [138.800 --> 143.800] the fly needs to anchor these vectors to the world around.
51
+ [145.960 --> 149.400] So there has to be a transformation from allocentric
52
+ [149.400 --> 151.640] or the sort of egocentric,
53
+ [151.640 --> 153.600] rather, coordinates of the fly
54
+ [153.600 --> 155.680] into allocentric world coordinates.
55
+ [155.680 --> 158.000] So we'll talk about that too.
56
+ [158.000 --> 161.720] So this is a problem in representation.
57
+ [161.720 --> 165.400] How are these angles represented in the brain of the fly?
58
+ [165.400 --> 167.160] But it's also in computation.
59
+ [167.160 --> 169.880] Traveling direction has to be computed.
60
+ [169.880 --> 171.960] And that involves vector addition,
61
+ [171.960 --> 173.680] addition of these red vectors
62
+ [173.680 --> 176.360] or the motion projected onto those red vectors.
63
+ [176.360 --> 179.000] And it also requires a coordinate transformation
64
+ [179.000 --> 181.160] from the coordinates of the fly's body
65
+ [181.160 --> 183.920] to the coordinates of the world around.
66
+ [183.920 --> 185.920] So I'm going to tell you in great detail
67
+ [185.920 --> 187.400] how this works in the fly.
68
+ [187.400 --> 189.200] And the reason that I can do that
69
+ [189.200 --> 190.880] is because of these two guys.
70
+ [190.880 --> 193.120] So this is work done in collaboration
71
+ [193.120 --> 196.360] with Gaby Maimon and Cheng Lyu,
72
+ [196.360 --> 198.680] who's a graduate student with Gaby.
73
+ [198.680 --> 200.520] They did the experimental work,
74
+ [200.520 --> 203.000] but they also really provided the insight
75
+ [203.000 --> 205.960] of working their way through a complicated circuit.
76
+ [205.960 --> 208.600] I'll try to stress that as I go along.
77
+ [208.600 --> 212.040] And really, I just multiply cosines every once in a while.
78
+ [212.040 --> 213.720] That's my role in the collaboration.
79
+ [213.720 --> 215.960] So I'm really describing their work.
80
+ [215.960 --> 219.240] And I can say it's their very beautiful work.
81
+ [220.720 --> 221.880] Okay.
82
+ [221.880 --> 224.360] So this last picture is just to remind me
83
+ [224.360 --> 228.200] that flies are not the only animals that travel sideways
84
+ [228.200 --> 229.720] or at an angle sometimes.
85
+ [229.720 --> 231.840] Of course, every animal does this
86
+ [231.840 --> 234.320] and has to compensate for the fact that
87
+ [234.320 --> 238.440] at times we don't move exactly following our noses.
88
+ [239.440 --> 242.040] Okay. So let me start off with heading direction.
89
+ [242.040 --> 244.720] Some of you may know this very beautiful work
90
+ [244.720 --> 246.880] from the Jayaraman lab.
91
+ [246.880 --> 250.280] Johannes Seelig and Vivek Jayaraman discovered
92
+ [250.280 --> 253.720] a system in the fly that tracks its heading
93
+ [253.720 --> 256.880] relative to the surrounding world.
94
+ [256.880 --> 259.280] The second paper that I have cited here
95
+ [259.280 --> 261.480] has a very nice model of this too.
96
+ [261.480 --> 264.280] So there are really elegant models
97
+ [264.280 --> 267.520] and a pretty good understanding of this first step.
98
+ [267.520 --> 270.200] How is heading represented in the fly?
99
+ [270.200 --> 275.200] And that occurs in this donut shaped region of the brain
100
+ [276.760 --> 279.400] that you see in the picture called the ellipsoid body.
101
+ [279.400 --> 283.800] So that's the first player in our journey here.
102
+ [283.800 --> 286.640] And what Seelig and Jayaraman discovered
103
+ [286.640 --> 289.840] was that there was a hotspot of activity
104
+ [289.840 --> 293.840] in the neurons that innervate this ring.
105
+ [293.840 --> 296.040] And that that hotspot of activity
106
+ [296.040 --> 299.280] like a compass needle will track fixed objects
107
+ [299.280 --> 301.560] in the world as the fly turns.
108
+ [301.560 --> 304.600] So it's providing the heading direction
109
+ [304.600 --> 306.720] in a world coordinate system.
110
+ [307.720 --> 310.640] Here's just an example of that from work
111
+ [310.640 --> 314.480] of Sung Soo Kim in the Vivek Jayaraman lab.
112
+ [314.480 --> 317.600] So there's a picture of a forest scene.
113
+ [317.600 --> 319.640] It's been reduced in resolution
114
+ [319.640 --> 322.520] down to kind of fly level resolution.
115
+ [322.520 --> 327.520] And then shown to a fly in a virtual reality environment
116
+ [327.520 --> 332.520] where the imaging of this bump of activity
117
+ [332.680 --> 334.960] in the ellipsoid body could be done.
118
+ [335.280 --> 337.960] And here you can see that activity
119
+ [337.960 --> 340.720] is locked onto a particular angle.
120
+ [340.720 --> 345.440] But the key is that if you shift this scene around the fly,
121
+ [345.440 --> 349.120] like that, the bolus of activity shifts
122
+ [349.120 --> 351.840] in exactly a corresponding angle.
123
+ [351.840 --> 354.920] And you can see that here, where in blue,
124
+ [354.920 --> 357.680] this is the tracing of where in the ring,
125
+ [357.680 --> 361.400] ellipsoid body ring, the bump of activity was,
126
+ [361.400 --> 362.960] and then the red is just tracking
127
+ [362.960 --> 366.120] some arbitrary location in this picture.
128
+ [366.120 --> 369.040] And you can see that they track each other really quite well.
129
+ [369.040 --> 371.920] So that's the fly's compass system.
130
+ [371.920 --> 374.480] It looks in a schematic like this.
131
+ [374.480 --> 377.680] The fly turns, but the compass system keeps pointing
132
+ [377.680 --> 379.360] towards an object.
133
+ [379.360 --> 384.120] Now you might realize here that as the fly turns here
134
+ [384.120 --> 388.360] to the left, the fixed object effectively turns to the right.
135
+ [388.360 --> 391.480] So there's a minus sign here that's going to crop up
136
+ [391.480 --> 394.960] from now and again, just this anxiety free lecture.
137
+ [394.960 --> 398.320] So don't worry about it, but it'll come up.
138
+ [398.320 --> 401.040] But nevertheless, this system represents
139
+ [401.040 --> 402.520] the heading of the fly.
140
+ [402.520 --> 404.720] Now there's another player in this system.
141
+ [404.720 --> 407.280] Basically what I'm doing here is introducing you
142
+ [407.280 --> 410.920] to various pieces of what's called the central complex
143
+ [410.920 --> 412.160] in the fly.
144
+ [412.160 --> 415.400] And this new piece is called the protocerebral bridge
145
+ [415.400 --> 417.920] or just the bridge, I'll probably call it.
146
+ [417.920 --> 423.680] And it's another player in this heading representation.
147
+ [423.680 --> 425.360] And it works like this.
148
+ [425.360 --> 429.800] First of all, these regions are divided into segments,
149
+ [429.800 --> 433.000] into compartments, if you want.
150
+ [433.000 --> 434.680] And those compartments are innervated
151
+ [434.680 --> 438.080] by different members of this class of cells.
152
+ [438.080 --> 440.880] So it sort of looks like this.
153
+ [440.880 --> 443.800] The ellipsoid body ring is down at the bottom
154
+ [443.800 --> 447.000] and the protocerebral bridge at the top.
155
+ [447.000 --> 451.280] And what happens is that the bump of activity
156
+ [451.280 --> 454.320] that I've shown you is in neurons called EPGs.
157
+ [454.320 --> 457.280] These letters stand for where this neuron
158
+ [457.280 --> 458.040] innervates.
159
+ [458.040 --> 461.280] It stands for ellipsoid body, protocerebral bridge,
160
+ [461.280 --> 464.280] and another region called the gall, which won't come up
161
+ [464.280 --> 465.920] in my talk.
162
+ [465.920 --> 468.440] And so these neurons look like this.
163
+ [468.440 --> 471.520] They make the bump of activity in the ellipsoid body
164
+ [471.520 --> 475.160] that tracks the outside world.
165
+ [475.160 --> 478.280] And then they send that activity up to the bridge.
166
+ [478.280 --> 480.520] And they do it on both sides.
167
+ [480.520 --> 487.840] So it ends up that the compass is actually sort of copied
168
+ [487.840 --> 489.960] into three compasses.
169
+ [489.960 --> 494.200] And there are sets of cells that innervate
170
+ [494.200 --> 497.280] each of these compartments and going all the way around the ring.
171
+ [497.280 --> 500.240] I won't show you the whole set, but going all the way
172
+ [500.240 --> 504.040] across the protocerebral bridge and all the way around the ring.
173
+ [504.040 --> 507.360] And they are locked together because they're really
174
+ [507.360 --> 508.960] the activity of the same cells.
175
+ [508.960 --> 513.920] And so here from Cheng is a recording from Gaby's lab,
176
+ [513.920 --> 516.880] where you'll see as the fly turns,
177
+ [516.880 --> 519.440] the activity in these three regions
178
+ [519.440 --> 521.720] will lock and move all together.
179
+ [521.720 --> 525.320] So you can see two copies of the bump at the top
180
+ [525.320 --> 527.680] and the one copy that I've already shown you
181
+ [527.680 --> 530.320] in the ellipsoid body below.
182
+ [530.320 --> 534.200] Now, that allows you, those projections,
183
+ [534.200 --> 537.800] allow you to build a coordinate system for this structure.
184
+ [537.800 --> 539.640] In other words, we can build coordinates
185
+ [539.640 --> 543.960] going around the ring of the ellipsoid body, just angles,
186
+ [543.960 --> 545.000] I mean.
187
+ [545.000 --> 546.960] But because of these projections,
188
+ [546.960 --> 549.720] you can then assign those angles up
189
+ [549.720 --> 551.160] into the protocerebral bridge.
190
+ [551.160 --> 552.080] And it looks like this.
191
+ [552.080 --> 555.800] As I said, there are two more cycles up there.
192
+ [555.800 --> 558.400] They're not closed into rings, but nevertheless,
193
+ [558.400 --> 561.840] they go around the full 360 degrees on the left side
194
+ [561.840 --> 563.560] and on the right side.
195
+ [563.560 --> 566.000] And those angles will be very important
196
+ [566.000 --> 568.760] as we work our way through this circuit.
197
+ [568.760 --> 574.320] So here's what the EPG neurons do.
198
+ [574.320 --> 576.920] They take the bump that's created in the ellipsoid body,
199
+ [576.920 --> 579.440] transfer it up into the bridge.
200
+ [579.440 --> 581.560] And the bridge is important because that
201
+ [581.560 --> 586.720] allows this compass direction to be relayed to other cells
202
+ [586.720 --> 590.480] that then will go on and compute various things.
203
+ [590.480 --> 595.640] And so the next player in the game that we're going to look at
204
+ [595.640 --> 597.760] are called PFR cells.
205
+ [597.760 --> 599.920] That stands for protocerebral bridge,
206
+ [599.920 --> 602.880] fan-shaped body, which is another structure
207
+ [602.880 --> 604.520] in the central complex.
208
+ [604.520 --> 606.800] And then round body, which is, again, a structure
209
+ [606.800 --> 608.200] I won't talk about.
210
+ [608.200 --> 611.880] So the PFRs look a lot like the EPGs,
211
+ [611.880 --> 615.440] but look like they're playing the game between the bridge
212
+ [615.440 --> 617.720] and this fan-shaped body.
213
+ [617.720 --> 620.280] And in fact, one way to think, well,
214
+ [620.280 --> 624.480] they span the different columns and the different regions
215
+ [624.480 --> 626.640] in the protocerebral bridge in the same way.
216
+ [626.640 --> 630.360] There are sets of neurons for every compartment.
217
+ [630.360 --> 633.360] And one thing that does is allow you to take
218
+ [633.360 --> 636.440] these angle coordinates and reassign them
219
+ [636.440 --> 638.440] back down in the fan-shaped body.
220
+ [638.440 --> 643.400] So by looking at how these different cell types connect,
221
+ [643.400 --> 647.920] we can trace out an analog map of the circle
222
+ [647.920 --> 651.760] being mapped twice into the protocerebral bridge
223
+ [651.760 --> 654.440] and then once across the fan-shaped body.
224
+ [654.440 --> 659.440] So we've got yet another one of these circular structures.
225
+ [659.440 --> 660.120] OK.
226
+ [660.120 --> 663.120] And then the sort of flow pattern that I'm talking about
227
+ [663.120 --> 664.480] looks like this.
228
+ [664.480 --> 666.600] In the ellipsoid body, you build the bump
229
+ [666.600 --> 668.600] at an appropriate angle, depending
230
+ [668.600 --> 672.680] on your orientation or the flies orientation in the world.
231
+ [672.680 --> 677.720] The EPG neurons carry that activity up to the protocerebral bridge.
232
+ [677.720 --> 680.720] It's transferred to these PFR neurons
233
+ [680.720 --> 684.560] and they pull it down into the fan-shaped body.
234
+ [684.560 --> 686.920] So if you just looked, or here's a better diagram
235
+ [686.920 --> 688.880] that Cheng made that's a little clearer,
236
+ [688.880 --> 690.920] if you just look at this diagram, you would say, OK,
237
+ [690.920 --> 693.160] well, the role of these PFR neurons
238
+ [693.160 --> 696.280] is just to take the compass signal and transfer it
239
+ [696.280 --> 701.840] from the ellipsoid body where it is formed into the fan-shaped body.
240
+ [701.840 --> 706.040] And so that really started, that picture,
241
+ [706.040 --> 710.600] started the experiments that were done by Gaby and Cheng
242
+ [710.600 --> 712.520] to look to see if that's true.
243
+ [712.520 --> 714.760] Is that what the role of these neurons are?
244
+ [714.760 --> 718.760] Are they just transferring the compass from one place to another?
245
+ [718.760 --> 722.200] So most of the experiments that I will talk about
246
+ [722.200 --> 723.200] are done like this.
247
+ [723.200 --> 729.800] A fly is tethered, but flying, so it can fly along.
248
+ [729.800 --> 733.880] And the wings are monitored so that by monitoring
249
+ [733.880 --> 738.640] the wingbeat amplitude, they can determine
250
+ [738.640 --> 740.680] whether the fly is turning.
251
+ [740.680 --> 742.120] Of course, the fly is not really turning,
252
+ [742.120 --> 743.720] because it's glued in place.
253
+ [743.720 --> 749.200] But if the fly intends to turn, then this dot that I've shown
254
+ [749.200 --> 751.840] there moves around appropriately.
255
+ [751.840 --> 755.240] So this is a closed-loop virtual reality that makes the fly
256
+ [755.240 --> 757.240] sort of think that when it's turning,
257
+ [757.240 --> 761.920] the world is turning appropriately around it.
258
+ [761.920 --> 767.160] And then in this setup, they imaged simultaneously
259
+ [767.160 --> 771.920] both the compass system in the ellipsoid body.
260
+ [771.920 --> 773.120] That's what's on the left.
261
+ [773.120 --> 776.400] And then these PFR neurons that I'm talking about.
262
+ [776.400 --> 779.440] And often, they would find that they were nicely aligned.
263
+ [779.440 --> 782.320] Remember, we can assign angles the way
264
+ [782.320 --> 784.160] I showed you to these different regions.
265
+ [784.160 --> 787.040] So you can say, well, are these two bumps at the same angle?
266
+ [787.040 --> 789.960] And sometimes they were, but sometimes they weren't.
267
+ [789.960 --> 792.120] And that's kind of what first alerted them
268
+ [792.120 --> 796.440] that the PFRs might not simply be a copying system,
269
+ [796.440 --> 800.200] but might be a little more interesting than that.
270
+ [800.200 --> 803.760] And so what they did was you can see these blue dots
271
+ [803.760 --> 807.000] at the bottom of the virtual reality system.
272
+ [807.000 --> 811.280] Those are dots that allow them to impose a visual sense
273
+ [811.280 --> 814.480] of flow in the fly's environment.
274
+ [814.480 --> 819.320] So the first experiment was just to make an optic flow like this.
275
+ [819.320 --> 822.640] So it was as if the fly
276
+ [822.640 --> 824.960] was flying directly in its heading direction.
277
+ [824.960 --> 827.680] In other words, zero drift, just flying forward.
278
+ [827.680 --> 830.520] And they observed, and that's this open loop flow.
279
+ [830.520 --> 832.200] They observed when that happened, there
280
+ [832.200 --> 835.960] was a much better locking of these two,
281
+ [835.960 --> 838.480] these two systems to each other.
282
+ [838.480 --> 842.520] And that led them to do a systematic set of experiments
283
+ [842.520 --> 843.440] like this.
284
+ [843.440 --> 846.880] So what you can see here in the middle
285
+ [846.880 --> 852.000] is this optic flow that simulates direct forward flight.
286
+ [852.000 --> 855.400] But in addition, they could go, they could simulate backwards
287
+ [855.400 --> 858.480] flight, drifting to the left, drifting to the right, et
288
+ [858.480 --> 861.240] cetera, all these different angles of flight
289
+ [861.240 --> 862.960] could be simulated.
290
+ [862.960 --> 866.160] And again, with this simultaneous imaging,
291
+ [866.160 --> 868.040] they could look at the bump.
292
+ [868.040 --> 873.360] So in this top blue here, this is the track of the bump
293
+ [873.880 --> 878.800] as the fly is doing a turn in this virtual reality environment.
294
+ [878.800 --> 882.000] And here's the track of this second bump, the one
295
+ [882.000 --> 885.000] that's in the fan shape body, the PFR bump.
296
+ [885.000 --> 887.480] And you can see they're very nicely locked
297
+ [887.480 --> 889.880] and between these dashed lines,
298
+ [889.880 --> 892.040] the optic flow comes on.
299
+ [892.040 --> 894.960] And they're really very tightly locked here.
300
+ [894.960 --> 897.000] There's no difference between them.
301
+ [897.000 --> 900.840] On the other hand, when they simulated backward motion,
302
+ [900.840 --> 904.040] then when the backward motion was on,
303
+ [904.040 --> 906.720] these two bumps diverge from each other.
304
+ [906.720 --> 909.960] And in fact, diverge by 180 degrees.
305
+ [909.960 --> 912.520] And doing all the angles, they found
306
+ [912.520 --> 916.880] a very systematic relationship between the separation
307
+ [916.880 --> 920.120] of these bumps and the drift that they were simulating,
308
+ [920.120 --> 922.080] which rather than going through every panel here,
309
+ [922.080 --> 926.160] let me just summarize by saying that the difference
310
+ [926.160 --> 930.240] between the compass bump and this new PFR bump,
311
+ [930.240 --> 933.480] very accurately tracks the drift angle.
312
+ [933.480 --> 937.760] Now, if you accept that the EPG angle is the heading direction
313
+ [937.760 --> 940.840] and you sort of undo this, what you find is this,
314
+ [940.840 --> 946.000] PFR is a traveling direction bump in the world coordinates.
315
+ [946.000 --> 948.560] It signals the direction in the world.
316
+ [948.560 --> 951.040] You might notice here the axis is backwards.
317
+ [951.040 --> 954.280] This is the sign flip that I warned you about,
318
+ [954.280 --> 956.280] but told you not to worry about.
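As a hedged illustration of this readout (my own sketch; the wedge count and bump positions are invented, not measured), each bump's position can be decoded with a population vector, and the difference between the two decoded angles gives the drift:

```python
import numpy as np

def bump_angle(activity, angles):
    """Population-vector (circular-mean) estimate of a bump's position."""
    return np.angle(np.sum(activity * np.exp(1j * angles)))

angles = np.linspace(-np.pi, np.pi, 16, endpoint=False)  # wedge positions
epg = 1.0 + np.cos(angles - 0.3)   # toy heading (compass) bump at 0.3 rad
pfr = 1.0 + np.cos(angles - 1.0)   # toy traveling-direction bump at 1.0 rad

separation = bump_angle(pfr, angles) - bump_angle(epg, angles)
print(np.degrees(separation))      # ~40 deg: the drift angle read out
```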
319
+ [956.320 --> 962.840] So traveling direction is represented in these PFR neurons,
320
+ [962.840 --> 965.640] in the same way that the heading direction
321
+ [965.640 --> 970.600] is represented in these compass neurons or EPG neurons.
322
+ [970.600 --> 973.720] So obviously, the next question is, how does this happen?
323
+ [973.720 --> 976.840] How is the difference between these computed,
324
+ [976.840 --> 981.000] presumably on the basis of some kind of flow information,
325
+ [981.000 --> 984.480] optic flow information being carried by the visual system?
326
+ [985.440 --> 989.320] So let me just say how we attack this first.
327
+ [989.320 --> 993.120] So just summarize, there's a bump of activity
328
+ [993.120 --> 996.520] in the ellipsoid body that tells you heading direction.
329
+ [996.520 --> 998.800] There's a bump of activity
330
+ [998.800 --> 1002.400] that represents traveling direction in the fan-shaped body.
331
+ [1002.400 --> 1006.520] And we believe there's lots of evidence in the ellipsoid body
332
+ [1006.520 --> 1008.680] that that is a self-sustaining bump.
333
+ [1008.680 --> 1012.520] It's very much like the ring model,
334
+ [1012.520 --> 1015.480] that's quite famous in theoretical neuroscience.
335
+ [1015.480 --> 1016.880] It's a self-sustained bump.
336
+ [1016.880 --> 1021.320] And we believe also that in the fan-shaped body,
337
+ [1021.320 --> 1024.920] so this bump is also a self-sustained bump.
338
+ [1024.920 --> 1028.400] And it can exist anywhere from left to right
339
+ [1028.400 --> 1032.560] across the whole extent of this fan-shaped body.
340
+ [1032.560 --> 1034.880] So it's a line attractor, basically.
341
+ [1034.880 --> 1037.160] Now in a line attractor, what determines
342
+ [1037.160 --> 1041.160] where the bump actually ends up is the input.
343
+ [1041.160 --> 1045.520] And so what we focus on then is an input
344
+ [1045.520 --> 1049.320] that is going to turn out to be a sinusoidally shaped input
345
+ [1049.320 --> 1053.080] that is expressed across this structure.
346
+ [1053.080 --> 1057.080] Because if we find the max of that, we can say
347
+ [1057.080 --> 1058.640] there's no other influence.
348
+ [1058.640 --> 1062.240] This bump can equally exist at any location.
349
+ [1062.240 --> 1064.240] So it's going to go to the max of the inputs.
350
+ [1064.240 --> 1068.000] So if we want to understand where this bump is,
351
+ [1068.000 --> 1071.000] what we want to do is focus on the inputs
352
+ [1071.000 --> 1076.040] to these PFR neurons and determine where they're maximized.
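A toy version of that argument (mine, not the authors' network model): along a line attractor the bump can rest anywhere, so a weak sinusoidal input decides its position, and the bump settles at the input's peak.

```python
import numpy as np

columns = np.linspace(-np.pi, np.pi, 32, endpoint=False)
input_phase = 0.8                          # assumed location of the peak
drive = np.cos(columns - input_phase)      # sinusoidal input profile

bump_position = columns[np.argmax(drive)]  # where the bump ends up
print(bump_position, input_phase)          # equal, up to column spacing
```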
353
+ [1076.040 --> 1079.080] OK, so I told you a little about how this calculation
354
+ [1079.080 --> 1080.080] has to go.
355
+ [1080.080 --> 1082.640] There's a motion signal that has to come in
356
+ [1082.640 --> 1085.240] and be computed to determine drift.
357
+ [1085.240 --> 1088.080] That motion signal comes from the eyes,
358
+ [1088.080 --> 1090.920] but it actually comes in an inverted way.
359
+ [1090.920 --> 1094.480] So there's an inhibitory pathway.
360
+ [1094.480 --> 1096.440] And so actually the motion signal
361
+ [1096.440 --> 1098.000] that we're going to be talking about
362
+ [1098.000 --> 1102.920] is directed backwards at 45 degrees, which is actually 135,
363
+ [1102.920 --> 1106.400] and minus 135, in the backward direction.
364
+ [1106.400 --> 1109.880] So that's the motion signal that this system is getting.
365
+ [1109.880 --> 1113.600] It has to reference that to an external landmark
366
+ [1113.600 --> 1116.080] in order to be in allocentric coordinates,
367
+ [1116.080 --> 1117.480] and it has to add the vectors.
368
+ [1117.480 --> 1119.760] So that's the calculation we're talking about
369
+ [1119.760 --> 1123.400] in order to get this traveling direction vector.
370
+ [1123.400 --> 1125.240] The coordinates then or the vectors
371
+ [1125.240 --> 1128.040] we're talking about are a forward heading direction
372
+ [1128.040 --> 1133.040] and two backward angled motion direction signals.
373
+ [1133.040 --> 1135.040] And let me just talk about the algorithm
374
+ [1135.040 --> 1138.440] if you want, or what's the calculation that has to be done?
375
+ [1138.440 --> 1139.960] It looks like this.
376
+ [1139.960 --> 1142.440] If the fly is moving forward, then the motion
377
+ [1142.440 --> 1144.720] in the backward direction is reduced.
378
+ [1144.720 --> 1147.080] So those backward directions get shorter.
379
+ [1147.080 --> 1149.160] And if you add up these three vectors,
380
+ [1149.160 --> 1150.880] you get a forward signal.
381
+ [1150.880 --> 1152.600] So that's traveling forward.
382
+ [1152.600 --> 1155.280] If the fly is actually drifting backwards,
383
+ [1155.280 --> 1160.320] despite heading forwards, then those backward direction vectors
384
+ [1160.320 --> 1163.040] sense more motion in the backward direction.
385
+ [1163.040 --> 1165.440] They get longer, and the net vector
386
+ [1165.440 --> 1168.760] ends up pointing backwards.
387
+ [1168.760 --> 1172.640] If the fly is drifting to the right while it's flying along,
388
+ [1172.640 --> 1176.120] then what happens is one of these vectors gets longer.
389
+ [1176.120 --> 1177.440] It senses more motion.
390
+ [1177.440 --> 1180.680] The other vector gets shorter because it senses less motion.
391
+ [1180.680 --> 1183.560] You add them up and you get a rightward vector.
392
+ [1183.560 --> 1188.240] And then finally, if the fly simply turns and flies like that,
393
+ [1188.240 --> 1189.920] what has to happen is all these vectors
394
+ [1189.920 --> 1192.880] have to turn together in the world.
395
+ [1192.880 --> 1197.120] And so you end up getting a rightward motion,
396
+ [1197.120 --> 1200.240] even though the fly is still flying straight ahead.
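Here is a small Python sketch of that three-vector sum (my own illustration; the baseline lengths and gains are chosen for convenience, not taken from the fly). The two backward-angled vectors sit at plus and minus 135 degrees from the heading, their lengths grow with the motion projected onto them, and all three rotate with the heading so the answer comes out in world coordinates.

```python
import numpy as np

def unit(theta):
    """Unit vector at angle theta (radians)."""
    return np.array([np.cos(theta), np.sin(theta)])

def traveling_direction(heading, v_body):
    """Sum a forward heading vector with two backward-angled motion
    vectors whose lengths grow with the motion projected onto them."""
    u1, u2 = np.deg2rad(135.0), np.deg2rad(-135.0)   # backward axes
    total = np.sqrt(2) * unit(heading)               # heading vector
    total += (1.0 + v_body @ unit(u1)) * unit(heading + u1)
    total += (1.0 + v_body @ unit(u2)) * unit(heading + u2)
    return np.arctan2(total[1], total[0])

# Fly heads north (90 deg) while drifting rightward in its body frame:
angle = traveling_direction(np.pi / 2, np.array([1.0, -0.5]))
print(np.degrees(angle))   # ~63 deg: between north and east, as expected
```

With these particular gains the baseline terms cancel and the sum reduces exactly to the velocity vector rotated into world coordinates, which is why the toy works for all four cases described above.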
397
+ [1200.240 --> 1204.280] So that's the calculation that we have to account for.
398
+ [1204.280 --> 1205.640] The question is mechanism.
399
+ [1205.640 --> 1210.320] How is this done in the fly's central complex?
400
+ [1210.320 --> 1215.120] So let me introduce then a way of representing
401
+ [1215.120 --> 1217.720] and manipulating vectors that some of you
402
+ [1217.720 --> 1219.720] may have learned about in physics class.
403
+ [1219.720 --> 1221.360] I actually learned about in physics class,
404
+ [1221.360 --> 1223.600] but I paid absolutely no attention to it.
405
+ [1223.600 --> 1226.560] And never thought it would come back to actually
406
+ [1226.560 --> 1228.240] enter my life.
407
+ [1228.240 --> 1230.640] But in two dimensions, obviously, a vector
408
+ [1230.640 --> 1233.480] is characterized by an angle and a length.
409
+ [1233.480 --> 1239.720] And you can map that onto a sinusoidal wave, if you want,
410
+ [1239.720 --> 1244.240] sinusoidal function by saying that the position of the peak
411
+ [1244.240 --> 1246.560] represents the angle of the vector
412
+ [1246.560 --> 1249.160] and the amplitude of the sinusoid
413
+ [1249.160 --> 1250.760] represents the length of the vector.
414
+ [1250.760 --> 1255.840] So this is a mapping from vectors to sine waves.
415
+ [1255.840 --> 1257.680] Now, if you have another vector, obviously,
416
+ [1257.680 --> 1261.080] you can do the same thing, you represent it the same way.
417
+ [1261.080 --> 1265.160] And then a nice feature is that if you want to add those vectors,
418
+ [1265.160 --> 1267.200] all you have to do is add the sine waves.
419
+ [1267.200 --> 1272.040] And the peak phase and the amplitude of the resulting sum
420
+ [1272.040 --> 1276.440] will be the angle and the length of the resulting vector.
421
+ [1276.440 --> 1280.000] Now, this is used in engineering as a trick
422
+ [1280.000 --> 1286.880] to compute, to basically add sinusoidal waves
423
+ [1286.880 --> 1288.520] by just adding vectors.
424
+ [1288.520 --> 1290.520] But in the fly, it's the other way around.
425
+ [1290.520 --> 1291.800] What we're going to see is the fly
426
+ [1291.800 --> 1294.480] is representing vectors in terms of sine waves
427
+ [1294.480 --> 1297.640] and adding them and shifting them in various ways
428
+ [1297.640 --> 1299.600] in order to do vector calculations.
429
+ [1299.600 --> 1301.560] That's really the point in my talk is to convince you
430
+ [1301.560 --> 1303.800] that that statement is true.
431
+ [1303.800 --> 1306.200] There's another nice trick here that if you want
432
+ [1306.200 --> 1309.560] to do a coordinate transformation, and for example,
433
+ [1309.560 --> 1311.960] refer to this vector with that other angle,
434
+ [1311.960 --> 1315.920] well, you've just got to shift the corresponding wave over.
435
+ [1315.920 --> 1319.200] And you've done the coordinate transformation.
436
+ [1319.200 --> 1322.360] So this is a nice way to do vector representation,
437
+ [1322.360 --> 1324.240] vector addition, vector rotation.
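A compact Python sketch of this sine-wave arithmetic (my illustration; the 16 columns and the example vectors are arbitrary): encode each vector as a cosine whose peak sits at the vector's angle, add the samples to add the vectors, and roll the samples to rotate.

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 16, endpoint=False)  # column positions

def wave(length, phi):
    """Encode a vector (length, angle phi) as activity across columns."""
    return length * np.cos(x - phi)

def decode(w):
    """Recover (length, angle) by projecting the samples onto exp(i*x)."""
    z = (w * np.exp(1j * x)).sum() * 2 / len(x)
    return abs(z), np.degrees(np.angle(z))

a = wave(1.0, np.deg2rad(30))    # vector of length 1 at 30 deg
b = wave(0.5, np.deg2rad(-90))   # vector of length 0.5 at -90 deg
print(decode(a + b))             # adding the waves adds the vectors
print(decode(np.roll(a, 4)))     # rolling 4 of 16 bins rotates by 90 deg
```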
438
+ [1324.240 --> 1326.880] And this has certainly been noticed before.
439
+ [1326.880 --> 1330.320] It was proposed for the hippocampus
440
+ [1330.320 --> 1334.520] by O'Keefe, where the oscillation was in the time domain.
441
+ [1334.520 --> 1336.000] That's not what I'm talking about.
442
+ [1336.000 --> 1338.360] I'm talking about a nice oscillation in space
443
+ [1338.360 --> 1341.880] across these structures that I've been talking about.
444
+ [1341.880 --> 1343.560] And Touretzky, Redish, and Wan actually
445
+ [1343.560 --> 1347.520] realized that that could also be happening in the hippocampus.
446
+ [1347.520 --> 1349.480] So those are theoretical papers.
447
+ [1349.480 --> 1353.040] Then it was applied to insect navigation
448
+ [1353.040 --> 1355.280] by this, whoops, there's a typo there,
449
+ [1355.280 --> 1358.160] but the Wittmann and Schwegler paper.
450
+ [1358.160 --> 1361.600] And then finally, in a really very beautiful paper,
451
+ [1361.600 --> 1364.640] Barbara Webb and Stanley Heinze and collaborators,
452
+ [1364.640 --> 1367.680] applied these ideas to a circuit that's really very similar.
453
+ [1367.680 --> 1370.480] It's a circuit in bees, but it's very similar to the circuit
454
+ [1370.480 --> 1371.840] I'm going to talk about.
455
+ [1371.840 --> 1374.680] I will come back to this paper at the end,
456
+ [1374.680 --> 1380.640] but it has a lot of parallels with what I'm going to talk about.
457
+ [1380.640 --> 1382.000] OK.
458
+ [1382.000 --> 1385.800] So just to stress, the waves that I'm talking about
459
+ [1385.800 --> 1388.640] are spatial waves in a population of neurons
460
+ [1388.640 --> 1390.640] across these structures.
461
+ [1390.640 --> 1393.840] And we're going to then parameterize them.
462
+ [1393.840 --> 1395.920] The x-axis in these plots is going
463
+ [1395.920 --> 1398.680] to be either a position angle up in the bridge
464
+ [1398.680 --> 1400.680] if we're talking about activity there,
465
+ [1400.680 --> 1402.840] or a position angle in the fan-shaped body.
466
+ [1402.840 --> 1406.000] And I showed you how we defined those position angles
467
+ [1406.000 --> 1409.120] a little earlier, except for this minus sign,
468
+ [1409.120 --> 1411.400] I'm actually going to plot it as a function of minus
469
+ [1411.400 --> 1412.760] the position angle.
470
+ [1412.760 --> 1414.800] Otherwise, there'll be an irritating minus sign
471
+ [1414.800 --> 1417.040] that will crop into the talk.
472
+ [1417.040 --> 1418.800] So here's the idea.
473
+ [1418.800 --> 1421.240] We want to represent these three vectors
474
+ [1421.240 --> 1424.600] and the calculations that they perform
475
+ [1424.600 --> 1428.920] in terms of sinusoids in populations of neurons
476
+ [1428.920 --> 1431.440] across these structures, for example,
477
+ [1431.440 --> 1433.560] across the fan-shaped body.
478
+ [1433.560 --> 1436.720] So the first thing is what neurons are these.
479
+ [1436.720 --> 1439.600] Now, the heading direction one, I've already said,
480
+ [1439.600 --> 1440.960] that's these EPG neurons.
481
+ [1440.960 --> 1443.880] Those are the neurons that represent heading direction.
482
+ [1443.880 --> 1449.320] So they're a good candidate for our sinusoidal representation
483
+ [1449.320 --> 1450.320] of heading direction.
484
+ [1450.320 --> 1452.480] But what about the other two vectors?
485
+ [1452.480 --> 1455.280] So I just push a button on my computer
486
+ [1455.280 --> 1458.800] and up come these names, PFNVs.
487
+ [1458.800 --> 1461.360] So I want to acknowledge my collaborators,
488
+ [1461.360 --> 1463.960] because it was easy for me to pop these names up.
489
+ [1463.960 --> 1466.560] But they had to, from the thousands of cells
490
+ [1466.560 --> 1469.720] and hundreds of cell types in the central complex,
491
+ [1469.720 --> 1473.200] figure out that these were the candidate neurons.
492
+ [1473.200 --> 1475.560] I will try to convince you that they're the right neurons,
493
+ [1475.560 --> 1478.040] but I just want to acknowledge the insight
494
+ [1478.040 --> 1480.560] that went into being able to identify
495
+ [1480.560 --> 1484.520] that these are the neurons that carry these other two vectors
496
+ [1484.520 --> 1487.400] in their sinusoids.
497
+ [1487.400 --> 1490.400] Just to show you a PFN neuron is a good candidate,
498
+ [1490.400 --> 1492.640] here's a picture of one.
499
+ [1492.640 --> 1497.160] They get a signal in the bridge, that's the bridge up there.
500
+ [1497.160 --> 1500.120] So they are linked into the compass system.
501
+ [1500.120 --> 1503.760] They give an output down where these PFN neurons are,
502
+ [1503.760 --> 1509.200] so they can put their signal into the traveling wave bump
503
+ [1509.200 --> 1510.680] that we're talking about.
504
+ [1510.680 --> 1513.600] And finally, they get another input in a structure
505
+ [1513.600 --> 1515.480] called the noduli.
506
+ [1515.480 --> 1517.240] And that is a visual motion input.
507
+ [1517.240 --> 1519.000] So they're good candidates in the sense
508
+ [1519.000 --> 1521.160] that they get the visual motion signal,
509
+ [1521.160 --> 1522.720] and they come from the right place
510
+ [1522.720 --> 1523.840] and they go to the right place.
511
+ [1523.840 --> 1528.360] But we've got a ways to go to really show that they're correct.
512
+ [1528.360 --> 1531.880] So there are a number of conditions that have to be met
513
+ [1531.880 --> 1535.440] if this idea is going to get off the ground.
514
+ [1535.440 --> 1537.240] First of all, so remember, what I'm saying
515
+ [1537.240 --> 1540.520] is these vectors are represented by sinusoidal patterns
516
+ [1540.520 --> 1545.320] across the spatial structure of these different
517
+ [1545.320 --> 1547.200] central complex regions.
518
+ [1547.200 --> 1550.160] It's not carried by one neuron, but by a whole population
519
+ [1550.160 --> 1551.560] of neurons.
520
+ [1551.560 --> 1554.920] And it has to obey the rules of the game.
521
+ [1554.920 --> 1557.040] First of all, it has to be sinusoidal,
522
+ [1557.040 --> 1559.640] and it has to have the right phases and amplitudes
523
+ [1559.640 --> 1562.600] to represent the vectors that I'm showing you on the left.
524
+ [1562.600 --> 1564.640] So the first question you would ask is, OK,
525
+ [1564.640 --> 1565.960] are they sinusoidal?
526
+ [1565.960 --> 1569.920] That better be true, or we're not going to get there.
527
+ [1569.920 --> 1574.560] And so here is a picture of that: Chung took images
528
+ [1574.560 --> 1578.800] of PFN activity on the left side and on the right side
529
+ [1578.800 --> 1579.520] of the bridge.
530
+ [1579.520 --> 1581.560] I've separated them here.
531
+ [1581.560 --> 1583.160] And then interpolated them.
532
+ [1583.160 --> 1584.680] That interpolation
533
+ [1584.680 --> 1586.760] is what you're seeing in the brown circles.
534
+ [1586.760 --> 1588.720] And then the gray is a sinusoidal fit.
535
+ [1588.720 --> 1593.360] So you can see the spatial profile of these cells
536
+ [1593.360 --> 1597.840] across the bridge is really pretty sinusoidal.
537
+ [1597.840 --> 1601.560] As the fly turns, these sinusoids will shift their phase,
538
+ [1601.560 --> 1603.320] left or right.
539
+ [1603.320 --> 1604.960] But they'll stay sinusoidal.
540
+ [1604.960 --> 1606.880] So that's pretty good.
541
+ [1606.880 --> 1609.760] If you look at the EPGs in the same exact way,
542
+ [1609.760 --> 1611.800] imaging them across this structure,
543
+ [1611.800 --> 1613.640] they're less sinusoidal.
544
+ [1613.640 --> 1617.960] They're narrower than a sinusoid, as you see in blue here.
545
+ [1617.960 --> 1622.680] But there's an extra cell that carries the compass signal
546
+ [1622.680 --> 1628.320] down to these PFR neurons, the traveling direction neurons.
547
+ [1628.320 --> 1631.760] And we believe that the sum of those two, which
548
+ [1631.760 --> 1636.120] is what these PFRs get, is more sinusoidal.
549
+ [1636.120 --> 1640.040] And so I'm going to give us a pass on the sinusoidal activity
550
+ [1640.040 --> 1643.560] patterns and move on.
551
+ [1643.560 --> 1645.760] All right, the next thing that better be true
552
+ [1645.760 --> 1648.880] is that I told you that when the fly moves,
553
+ [1648.880 --> 1652.480] for example, backwards I've shown here,
554
+ [1652.480 --> 1654.680] these vectors should get longer.
555
+ [1654.680 --> 1656.520] And that's how come there's some,
556
+ [1656.520 --> 1659.160] ended up pointing in the backward direction.
557
+ [1659.160 --> 1662.600] And likewise, I showed you if the fly drifts to the right,
558
+ [1662.600 --> 1665.160] the one vector gets long, the other vector gets short.
559
+ [1665.160 --> 1667.640] And remember that the length of the vector
560
+ [1667.640 --> 1671.680] is encoded in the amplitude of the sine wave.
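To make the encoding concrete, here is a minimal sketch (our illustration, not the speakers' code) of the scheme just described: a vector with direction theta and length r is stored as a sinusoidal activity pattern a(x) = r*cos(x - theta) across column positions x, so phase encodes direction and amplitude encodes length, and adding two patterns column-by-column performs vector addition.

    import numpy as np

    angles = np.linspace(-np.pi, np.pi, 16, endpoint=False)  # column positions

    def encode(theta, r):
        """Sinusoidal population pattern representing a vector (theta, r)."""
        return r * np.cos(angles - theta)

    def decode(pattern):
        """Recover (theta, r) from a pattern via its first Fourier mode."""
        z = np.sum(pattern * np.exp(1j * angles)) * 2 / len(angles)
        return np.angle(z), np.abs(z)

    # Summing two encoded vectors column-by-column is vector addition:
    v1 = encode(np.deg2rad(135.0), 0.8)   # e.g., a "back-left" vector
    v2 = encode(np.deg2rad(-135.0), 0.3)  # e.g., a "back-right" vector
    theta_sum, r_sum = decode(v1 + v2)    # equals the 2-D vector sum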
561
+ [1671.680 --> 1674.440] So if this analogy is going to be true,
562
+ [1674.440 --> 1676.800] it better be true that the amplitudes
563
+ [1676.800 --> 1680.040] of these sinusoidal activity patterns
564
+ [1680.040 --> 1683.280] in the different neurons vary correctly
565
+ [1683.280 --> 1686.160] with the motion of the animal, with the drift motion
566
+ [1686.160 --> 1687.840] that we're talking about.
567
+ [1687.840 --> 1690.440] And so here's an example of that.
568
+ [1690.440 --> 1694.800] So again, in the bridge, Chung measured this profile
569
+ [1695.000 --> 1698.800] (I've already shown you this), but now measured it
570
+ [1698.800 --> 1703.880] when the fly was experiencing these visual optic flow signals
571
+ [1703.880 --> 1708.880] in different directions and they vary in amplitude
572
+ [1709.480 --> 1710.280] as you'll see in a second.
573
+ [1710.280 --> 1712.480] So here's the forward direction.
574
+ [1712.480 --> 1715.880] But now if you look at the same set of neurons
575
+ [1715.880 --> 1719.040] and imaging them when the fly is being stimulated
576
+ [1719.040 --> 1722.320] in the backward direction, you get a much bigger amplitude
577
+ [1722.320 --> 1727.320] of this sinusoidal variation in activity across the bridge.
578
+ [1727.560 --> 1729.800] And of course, that corresponds exactly
579
+ [1729.800 --> 1733.400] to these two red vectors getting longer in this case.
580
+ [1733.400 --> 1736.600] And now you can fill in the whole set of directions.
581
+ [1736.600 --> 1739.960] And if I show you various examples,
582
+ [1739.960 --> 1741.080] you'll see this makes sense.
583
+ [1741.080 --> 1745.200] So for example, this neuron, the left PFN
584
+ [1745.200 --> 1747.360] is encoding this red vector.
585
+ [1747.360 --> 1749.320] And so it should be very big
586
+ [1749.320 --> 1752.200] when the fly is drifting back and to the right
587
+ [1752.200 --> 1755.640] like that and it should be very short in this case
588
+ [1755.640 --> 1757.680] when it's in the opposite direction.
589
+ [1757.680 --> 1760.280] On the other hand, the bottom line is showing
590
+ [1760.280 --> 1762.440] the representation of this vector.
591
+ [1762.440 --> 1767.440] It should be big for motion here, back and to the left
592
+ [1767.440 --> 1769.200] and it should be small, let's see,
593
+ [1769.200 --> 1771.200] which is the opposite one, this one.
594
+ [1771.200 --> 1774.160] This one, sorry, should be small in this case
595
+ [1774.160 --> 1775.640] when it's up and to the right.
596
+ [1775.640 --> 1778.800] So these amplitudes certainly look like they're doing
597
+ [1778.800 --> 1781.400] the right thing to represent those vectors.
598
+ [1781.400 --> 1784.160] But of course, we can just calculate this.
599
+ [1784.160 --> 1787.640] For the time being, think of the fly
600
+ [1787.640 --> 1789.320] as moving at a constant speed.
601
+ [1789.320 --> 1791.640] I'm not going to talk about speed until the end of the talk.
602
+ [1791.640 --> 1793.600] So just think about a constant speed.
603
+ [1793.600 --> 1798.040] So then you know exactly how these projections should vary.
604
+ [1798.040 --> 1800.480] We're projecting the motion of the fly
605
+ [1800.480 --> 1803.600] onto these two vectors, and when you project,
606
+ [1803.600 --> 1805.520] you get a cosine, right?
607
+ [1805.520 --> 1809.360] And the cosine should be maximal at these plus and minus
608
+ [1809.360 --> 1812.120] 135 degree angles corresponding
609
+ [1812.120 --> 1814.680] to the direction of these backward vectors.
610
+ [1814.680 --> 1819.600] So this model predicts that the amplitude
611
+ [1819.600 --> 1821.920] as a function of the drift angle,
612
+ [1821.920 --> 1826.360] or the egocentric traveling angle of the fly
613
+ [1826.360 --> 1827.960] should follow this cosine.
614
+ [1827.960 --> 1830.040] This is a different cosine now, remember.
615
+ [1830.040 --> 1836.720] The basic activity pattern is a cosine shape
616
+ [1836.720 --> 1838.240] across the structure.
617
+ [1838.240 --> 1841.640] But now we're talking about the amplitude of that cosine
618
+ [1841.640 --> 1843.920] as a function of the drift angle.
619
+ [1843.920 --> 1845.680] And that's another cosine.
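Written out in our notation (an addition for clarity, with the fly's speed held fixed at v), the predicted amplitude of each PFN sinusoid as a function of the egocentric drift angle theta is the projection of the motion vector onto that population's preferred axis:

    A_{\pm}(\theta) = v\,\cos(\theta \mp 135^\circ)

so each population's amplitude is maximal when the drift direction lines up with its own backward-pointing preferred axis at plus or minus 135 degrees.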
620
+ [1845.680 --> 1849.640] And you can see the data very beautifully fit this idea.
621
+ [1849.640 --> 1854.480] So these vectors really are represented: when a vector varies
622
+ [1854.480 --> 1858.040] in length, the amplitude of the corresponding sine wave
623
+ [1858.040 --> 1861.560] seems to vary exactly as you would predict.
624
+ [1861.560 --> 1863.280] So I'm going to give a victory there,
625
+ [1863.280 --> 1865.640] but let me just add one thing.
626
+ [1865.640 --> 1869.040] And that is that we're assuming this front heading projection
627
+ [1869.040 --> 1871.280] vector is just constant.
628
+ [1871.280 --> 1875.840] And so it should not vary as they stimulate,
629
+ [1875.840 --> 1877.200] in these experiments,
630
+ [1877.200 --> 1880.120] different headings and different traveling directions.
631
+ [1880.120 --> 1880.880] And that's true.
632
+ [1880.880 --> 1883.760] So in the model, we'd expect a straight line.
633
+ [1883.760 --> 1885.560] And that's a pretty good fit to the data.
634
+ [1885.560 --> 1890.120] So it really looks like the sinusoids
635
+ [1890.120 --> 1893.960] are representing three vectors, one of constant length.
636
+ [1893.960 --> 1896.520] And the other two varying appropriately
637
+ [1896.520 --> 1898.880] with the motion of the fly.
638
+ [1898.880 --> 1901.000] So we'll give ourselves a victory then.
639
+ [1901.000 --> 1903.600] OK, the next thing that I told you about when
640
+ [1903.600 --> 1908.080] I did the algorithmic view is that if the fly turns,
641
+ [1908.080 --> 1910.280] all these vectors should turn together.
642
+ [1910.280 --> 1913.080] That's what locks the representation
643
+ [1913.080 --> 1914.920] to the external world.
644
+ [1914.920 --> 1921.280] So they'd better be locked in phase as the fly turns.
645
+ [1921.280 --> 1926.560] And so you would expect that to happen, actually,
646
+ [1926.560 --> 1928.880] because of something I didn't show before.
647
+ [1928.880 --> 1931.400] But in the same way, as I showed you,
648
+ [1931.400 --> 1935.840] the PFRs getting input, the way that the PFNs
649
+ [1935.840 --> 1938.040] know about the compass signal is they
650
+ [1938.040 --> 1940.760] get direct input from the EPGs like this.
651
+ [1940.760 --> 1945.360] And likewise, as you go around the circle.
652
+ [1945.360 --> 1947.760] So actually, I goofed up on the bottom.
653
+ [1947.760 --> 1950.480] So you would expect this.
654
+ [1950.480 --> 1953.840] And again, doing simultaneous imaging
655
+ [1953.840 --> 1959.400] of the EPG activity and the PFN, the peak of the sinusoid
656
+ [1959.400 --> 1964.800] of the PFN, you can see that as the fly turns in this arena,
657
+ [1964.800 --> 1968.520] it has a dot to orient itself to the external world,
658
+ [1968.520 --> 1970.640] they track each other very well.
659
+ [1970.640 --> 1973.320] It's probably easiest to see if you look at the right plot.
660
+ [1973.320 --> 1976.640] So you can see that the difference in phase between these
661
+ [1976.640 --> 1981.320] stays really relatively tightly locked to zero.
662
+ [1981.320 --> 1985.080] And so you would say, OK, great, the phases do shift
663
+ [1985.080 --> 1986.800] together with rotation.
664
+ [1986.800 --> 1989.440] We can give ourselves a victory there.
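As a side note, the phase-locking test just described amounts to computing a circular difference between the two co-imaged bump phases over time and checking that it hovers near zero; a minimal sketch (our illustration, with made-up sample values):

    import numpy as np

    def circ_diff(a, b):
        """Signed circular difference a - b in radians, wrapped to (-pi, pi]."""
        return np.angle(np.exp(1j * (a - b)))

    # hypothetical phase samples for the two co-imaged bumps
    epg_phase = np.deg2rad(np.array([10.0, 50.0, -120.0, 170.0]))
    pfn_phase = np.deg2rad(np.array([12.0, 47.0, -118.0, 173.0]))
    locked = np.abs(circ_diff(pfn_phase, epg_phase)).mean() < np.deg2rad(15)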
665
+ [1989.440 --> 1992.200] But we got one more thing to get right.
666
+ [1992.200 --> 1995.680] So the first was, are they sinusoidal?
667
+ [1995.680 --> 1998.960] The second was whether the amplitude is correct.
668
+ [1998.960 --> 2001.200] The third is whether they shift together,
669
+ [2001.200 --> 2004.240] but they have to also have the right relative phases.
670
+ [2004.240 --> 2006.720] In other words, these vectors that we're
671
+ [2006.720 --> 2011.840] trying to represent are offset from each other by 135 degrees.
672
+ [2011.840 --> 2014.320] Now, according to the rules of the game,
673
+ [2014.320 --> 2018.040] that means the phases of the different sinusoids
674
+ [2018.040 --> 2020.840] should also be offset by 135 degrees.
675
+ [2020.840 --> 2024.760] That's how the direction of the vectors were represented.
676
+ [2024.760 --> 2027.520] But unfortunately, I just showed you this picture.
677
+ [2027.520 --> 2030.560] And this picture shows that they are not offset.
678
+ [2030.560 --> 2033.280] They're all lined up with each other.
679
+ [2033.280 --> 2036.800] So unfortunately, we've got alignment of these vectors.
680
+ [2036.800 --> 2037.760] So what does that mean?
681
+ [2037.760 --> 2040.120] Well, we got a lot of things right.
682
+ [2040.120 --> 2042.400] But the picture that we would want
683
+ [2042.400 --> 2044.200] is what I'm showing here.
684
+ [2044.200 --> 2046.120] But the real picture looks like this.
685
+ [2046.120 --> 2047.760] They're all aligned with each other, which
686
+ [2047.760 --> 2051.040] means that the vectors we're representing so nicely
687
+ [2051.040 --> 2051.840] look like that.
688
+ [2051.840 --> 2054.560] They're all aligned with each other.
689
+ [2054.560 --> 2057.960] So at this point, we'd have to say, well, nice try,
690
+ [2057.960 --> 2061.640] but didn't look like it worked.
691
+ [2061.640 --> 2064.440] Because we failed in this direction thing.
692
+ [2064.440 --> 2065.840] But we're not done.
693
+ [2065.840 --> 2067.080] Obviously, I'm not done.
694
+ [2067.080 --> 2070.680] Otherwise, I wouldn't be talking about this.
695
+ [2070.680 --> 2073.400] And the point is that the measurements
696
+ [2073.400 --> 2076.360] that we're talking about were all made up in the bridge.
697
+ [2076.360 --> 2078.640] And the reason for that is because in the bridge,
698
+ [2078.640 --> 2080.480] you can separate left from right.
699
+ [2080.480 --> 2083.640] The left-side signals are on the left side of the bridge, and the right-side
700
+ [2083.640 --> 2085.000] signals are on the right side of the bridge.
701
+ [2085.000 --> 2087.720] But they'll get mixed up when they go into the fan-shaped
702
+ [2087.720 --> 2088.440] body.
703
+ [2088.440 --> 2090.240] So what I've been talking to you about
704
+ [2090.240 --> 2093.960] is the alignment of these signals when they're in the bridge.
705
+ [2093.960 --> 2097.840] Now, the summation that we're talking about, what we're
706
+ [2097.840 --> 2102.400] trying to do is identify the inputs to these PFR neurons.
707
+ [2102.400 --> 2105.520] And they get added up, not up in the bridge,
708
+ [2105.520 --> 2107.360] but down in the fan-shaped body.
709
+ [2107.360 --> 2109.560] So we still have some wiggle room.
710
+ [2109.560 --> 2113.880] And in fact, if I've been sort of providing a little drama
711
+ [2113.880 --> 2116.360] here by not showing you the connections,
712
+ [2116.360 --> 2120.200] but if I now show you the connections of these PFNs
713
+ [2120.200 --> 2123.320] to the fan-shaped body, you see something interesting.
714
+ [2123.320 --> 2127.040] Here, I've shown you PFNs that connect
715
+ [2127.040 --> 2130.200] to these yellow regions in the bridge.
716
+ [2130.200 --> 2133.080] But if you notice in the diagram, they're not
717
+ [2133.080 --> 2135.720] going to the yellow regions in the fan-shaped body.
718
+ [2135.720 --> 2138.080] They're going to the green and the orange.
719
+ [2138.080 --> 2140.720] And remember, we've aligned the angles all up
720
+ [2140.720 --> 2141.560] in these regions.
721
+ [2141.560 --> 2144.880] So in a sense, they're going to the wrong place.
722
+ [2144.880 --> 2149.440] They're offset relative to the alignment of angles
723
+ [2149.440 --> 2150.800] that I talked about.
724
+ [2150.800 --> 2155.200] Again, these are a whole set of neurons, one for every column,
725
+ [2155.200 --> 2159.080] or a set for every column, and for each of these
726
+ [2159.080 --> 2161.880] what are called glomeruli in the protocerebral bridge.
727
+ [2161.880 --> 2163.880] And they're all offset.
728
+ [2163.880 --> 2167.840] So if I gave you a summary here, it looks like this.
729
+ [2167.840 --> 2172.040] In the left bridge, they shift by 45 degrees to the right,
730
+ [2172.040 --> 2176.280] which in my bizarre convention is minus 45 degrees.
731
+ [2176.280 --> 2180.200] And in the right bridge, they shift one step to the left.
732
+ [2180.200 --> 2180.920] So that's good.
733
+ [2180.920 --> 2183.040] That means we're getting a phase shift, right?
734
+ [2183.040 --> 2187.680] It means that these sine waves look like this.
735
+ [2187.680 --> 2189.040] They're aligned.
736
+ [2189.040 --> 2192.120] If I showed you them in the bridge, they would all be aligned.
737
+ [2192.120 --> 2194.680] But if I now showed them to you in the fan-shaped body,
738
+ [2194.680 --> 2198.360] they look like this shifted by 45 degrees.
739
+ [2198.360 --> 2201.040] Well, that's a start, but it's not what we want.
740
+ [2201.040 --> 2204.760] That means we're representing vectors that look like this.
741
+ [2204.760 --> 2208.240] And so close, but we didn't get there yet.
742
+ [2208.240 --> 2212.160] But the next great news is that these PFNs,
743
+ [2212.160 --> 2214.600] remember, we're trying to get to these PFR neurons.
744
+ [2214.600 --> 2217.440] Those are the traveling direction neurons.
745
+ [2217.440 --> 2219.760] They don't go to the PFR neurons.
746
+ [2219.760 --> 2223.280] I showed you that they went to the fan-shaped body,
747
+ [2223.280 --> 2226.200] but they actually connect onto another set of neurons
748
+ [2226.200 --> 2228.520] called H delta B.
749
+ [2228.520 --> 2231.720] And these are interneurons.
750
+ [2231.720 --> 2234.720] And they have the property that they receive their input.
751
+ [2234.720 --> 2235.600] This is one of them.
752
+ [2235.600 --> 2237.680] Again, there's a whole set of these.
753
+ [2237.680 --> 2240.240] They receive their input at one part of the fan-shaped body,
754
+ [2240.240 --> 2242.680] but they put their output in a different part
755
+ [2242.680 --> 2243.640] of the fan-shaped body.
756
+ [2243.640 --> 2248.440] So they are going to provide an additional angular shift.
757
+ [2248.440 --> 2251.640] And they are the ones that carry the signal
758
+ [2251.640 --> 2254.720] to these traveling direction neurons, the PFRs.
759
+ [2254.720 --> 2257.920] So if we take into account the interneurons,
760
+ [2257.920 --> 2259.400] the interneurons look like this.
761
+ [2259.400 --> 2262.400] Again, there's different sets that are offset
762
+ [2262.400 --> 2264.960] and go between two places.
763
+ [2264.960 --> 2269.360] If we take them into account, they produce a 180 degree shift.
764
+ [2269.360 --> 2271.840] And if you do a little arithmetic,
765
+ [2271.840 --> 2273.960] you add up these two angles and bingo,
766
+ [2273.960 --> 2276.120] you've got your 135 degrees.
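Spelled out in the talk's sign convention (where the anatomical shift in the bridge is minus 45 degrees), the arithmetic is just

    -45^\circ + 180^\circ = 135^\circ, \qquad +45^\circ - 180^\circ = -135^\circ

so the two anatomical offsets compose to exactly the phase separations the two PFN sinusoids need.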
767
+ [2276.120 --> 2279.160] So the anatomy here saves the day.
768
+ [2279.160 --> 2282.360] And the beauty of this system being laid out
769
+ [2282.360 --> 2287.720] so neatly in angle space really allows you to see this.
770
+ [2287.720 --> 2290.280] And so we got it.
771
+ [2290.280 --> 2293.640] These waves, if you include the interneuron,
772
+ [2293.640 --> 2296.560] are offset by 135 degrees.
773
+ [2296.560 --> 2298.760] And things are looking pretty good.
774
+ [2298.760 --> 2303.120] Let me just show you that what you can do in fly neuroscience
775
+ [2303.120 --> 2306.880] these days due to the connectome data
776
+ [2306.880 --> 2309.360] that only relatively recently came out.
777
+ [2309.360 --> 2310.600] It's publicly available.
778
+ [2310.600 --> 2313.120] You can all take a look at it.
779
+ [2313.120 --> 2317.680] There will surely be a paper from Vivek Jayaraman's group
780
+ [2317.680 --> 2320.640] analyzing in great detail what the connectome has
781
+ [2320.640 --> 2322.240] to say about the central complex.
782
+ [2322.240 --> 2324.200] That's not quite out yet.
783
+ [2324.200 --> 2325.840] But the data's out.
784
+ [2325.840 --> 2327.880] And so we can make use of that.
785
+ [2327.880 --> 2329.920] And here's an analysis of that where
786
+ [2329.920 --> 2335.120] you can go synapse by synapse and trace through the pathway
787
+ [2335.120 --> 2339.000] and compute the shift basically implied
788
+ [2339.000 --> 2342.480] by every synapse, and then do these histograms and means.
789
+ [2342.480 --> 2347.640] And you can see you really get very close to these answers.
790
+ [2348.480 --> 2350.800] At the EM synapse level, you find
791
+ [2350.800 --> 2356.160] that these shifts really are there, quite accurately, in the data.
792
+ [2356.160 --> 2360.040] OK, so I'm going to turn this X into a check mark.
793
+ [2360.040 --> 2365.040] We really seem to have satisfied all of these conditions
794
+ [2365.040 --> 2367.560] just to remind you that these vectors seem
795
+ [2367.560 --> 2370.480] to be represented by sinusoidal activity patterns
796
+ [2370.480 --> 2373.640] across these structures.
797
+ [2373.640 --> 2376.440] Their amplitudes vary exactly as they need
798
+ [2376.440 --> 2381.280] to vary to represent the different projections of motion.
799
+ [2381.280 --> 2383.400] They shift together in phase, which
800
+ [2383.400 --> 2385.400] couples them to the external world.
801
+ [2385.400 --> 2387.920] Basically, they're all coupled to the compass.
802
+ [2387.920 --> 2390.280] And the compass is coupled to the external world.
803
+ [2390.280 --> 2393.280] And finally, they have the phase offset
804
+ [2393.280 --> 2403.760] to do the vector addition and get the right answer.
805
+ [2403.760 --> 2408.200] So here's a kind of a complete model description
806
+ [2408.200 --> 2410.640] of the whole system.
807
+ [2410.640 --> 2415.320] We can represent these different sinusoids at the bottom.
808
+ [2415.320 --> 2417.560] In the middle is this little vector diagram
809
+ [2417.560 --> 2419.720] that I showed you, I called algorithm.
810
+ [2419.720 --> 2421.840] But at the bottom are the activity patterns
811
+ [2421.840 --> 2424.560] that we expect in all these different cases.
812
+ [2424.560 --> 2430.200] You can see, for example, the EPG is just a sinusoid
813
+ [2430.200 --> 2433.360] (this is really the EPG plus Delta7)
814
+ [2433.360 --> 2435.880] that is centered in the forward direction,
815
+ [2435.880 --> 2439.400] unless the fly turns, and then it shifts over.
816
+ [2439.400 --> 2445.160] You can see the right and the left shifted by 135 degrees.
817
+ [2445.160 --> 2448.960] They have low amplitude when the fly is moving forward.
818
+ [2448.960 --> 2452.400] They have big amplitude when the fly is moving backwards.
819
+ [2452.400 --> 2457.920] And they have this non-symmetric amplitude when the fly goes right,
820
+ [2457.920 --> 2459.760] et cetera, and they all shift together.
821
+ [2459.760 --> 2462.600] And then finally, here is what we predict
822
+ [2462.600 --> 2466.360] for the input to this traveling direction system.
823
+ [2466.360 --> 2470.200] This again is across the fan-shaped body neurons,
824
+ [2470.200 --> 2474.440] PFR neurons that live in the columns at different angles
825
+ [2474.440 --> 2478.560] should be getting a profile of input that looks like this.
826
+ [2478.560 --> 2480.560] And finally, what I told you is that, you know,
827
+ [2480.560 --> 2484.080] to figure out where the bump of PFR activity is,
828
+ [2484.080 --> 2486.800] we just have to look for the maximum of that input
829
+ [2486.800 --> 2492.320] because the line attractor should just drift to that maximum.
830
+ [2492.320 --> 2495.960] And if you look, the maximum is exactly at the right place.
831
+ [2495.960 --> 2498.360] It's at the angle of the purple vectors,
832
+ [2498.360 --> 2500.320] which is the traveling direction.
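Putting the pieces together, here is a compact sketch of the whole readout (our reconstruction under stated assumptions, not the speakers' model code): two PFN-like sinusoids phase-shifted by plus and minus 135 degrees, with cosine-tuned amplitudes riding on a baseline b, plus a heading-locked input whose only job is to cancel the baseline's spurious backward component; the argmax of the summed input lands at the allocentric traveling direction.

    import numpy as np

    angles = np.linspace(-np.pi, np.pi, 3600, endpoint=False)

    def bump(phase, amp):
        """A sinusoidal activity profile across columns."""
        return amp * np.cos(angles - phase)

    heading, drift, b = np.deg2rad(30.0), np.deg2rad(160.0), 0.5  # test values

    left = bump(heading + np.deg2rad(135), b + np.cos(drift - np.deg2rad(135)))
    right = bump(heading - np.deg2rad(135), b + np.cos(drift + np.deg2rad(135)))
    forward = bump(heading, np.sqrt(2) * b)  # offsets the baseline's backward pull

    travel = angles[np.argmax(left + right + forward)]
    expected = np.angle(np.exp(1j * (heading + drift)))  # heading + drift, wrapped
    assert np.isclose(travel, expected, atol=1e-2)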
833
+ [2500.320 --> 2503.720] So basically, this is the picture of how traveling direction
834
+ [2503.720 --> 2505.480] is computed.
835
+ [2505.480 --> 2509.160] And you can compare the model to the data.
836
+ [2509.160 --> 2512.200] The model, as I described it, would get traveling direction
837
+ [2512.200 --> 2513.560] absolutely perfect.
838
+ [2513.560 --> 2516.160] But if you actually put in the amplitudes
839
+ [2516.160 --> 2519.840] that we measure rather than the idealized amplitudes,
840
+ [2519.840 --> 2523.920] actually, you get really an excellent fit to the data arguing
841
+ [2523.920 --> 2529.920] that, again, this PFR phase is representing the direction.
842
+ [2529.920 --> 2531.680] The computation is done in egocentric coordinates,
843
+ [2531.680 --> 2539.400] but the result is the allocentric traveling direction of the fly.
844
+ [2539.400 --> 2543.760] OK, so in a fly, you can do assorted manipulations
845
+ [2543.760 --> 2547.240] to try to provide more evidence
846
+ [2547.240 --> 2549.120] that this idea is right.
847
+ [2549.120 --> 2552.080] One is I haven't mentioned much about walking flies,
848
+ [2552.080 --> 2556.760] but there's evidence that this system also works in walking flies.
849
+ [2556.760 --> 2563.760] And one thing that Gabby and Chung did in walking flies
850
+ [2563.760 --> 2569.520] was to block synaptic transmission by genetic manipulations
851
+ [2569.520 --> 2571.680] in the compass system.
852
+ [2571.680 --> 2574.600] So this is a fly that doesn't have a compass.
853
+ [2574.600 --> 2579.200] And you would expect that that would detach this traveling
854
+ [2579.200 --> 2581.920] direction from the external world.
855
+ [2581.920 --> 2584.720] It would still sense motion, but it would
856
+ [2584.720 --> 2587.000] be uncoupled from the external world.
857
+ [2587.000 --> 2588.800] And that's exactly what you find is
858
+ [2588.800 --> 2593.360] that the phase distribution, which was nicely aligned,
859
+ [2593.360 --> 2597.400] so this is in flies without drift,
860
+ [2597.400 --> 2601.800] nicely aligned to the world before,
861
+ [2601.800 --> 2604.880] has now become, in this line,
862
+ [2604.880 --> 2607.560] which is the manipulated fly, much more flat.
863
+ [2607.560 --> 2611.720] So as predicted, the system seems to have decoupled
864
+ [2611.720 --> 2614.360] from the external world in that manipulation.
865
+ [2614.360 --> 2618.440] Another manipulation is to artificially shrink
866
+ [2618.440 --> 2620.840] the length of these backward vectors.
867
+ [2620.840 --> 2623.160] That's by silencing the PFNs.
868
+ [2623.160 --> 2626.560] Remember, those are the neurons that represent the backward vectors.
869
+ [2626.560 --> 2629.560] So if you silence them, it's like you're shrinking the thing.
870
+ [2629.560 --> 2633.160] And that should make the fly think it's moving forward.
871
+ [2633.160 --> 2636.040] And this is done in those cases where the fly is not
872
+ [2636.040 --> 2637.920] getting any visual motion.
873
+ [2637.920 --> 2639.640] So in other words, what we would say
874
+ [2639.640 --> 2644.280] is that this manipulation is simulating optic flow.
875
+ [2644.280 --> 2647.080] And that's what happens.
876
+ [2647.080 --> 2650.840] From a poorly aligned system, you get a much better aligned
877
+ [2650.840 --> 2654.720] system just as if you turned on the optic flow.
878
+ [2654.720 --> 2659.120] And finally, you can excite these PFNs.
879
+ [2659.120 --> 2662.720] That means you're simulating backwards motion.
880
+ [2662.720 --> 2665.840] And indeed, instead of a forward peak,
881
+ [2665.840 --> 2668.240] now you get a peak 180 degrees backwards.
882
+ [2668.240 --> 2671.600] So the manipulations seem to agree.
883
+ [2671.600 --> 2673.840] And so let me just take you through the circuit.
884
+ [2673.840 --> 2677.280] You have these EPG neurons representing heading direction.
885
+ [2677.280 --> 2682.400] They translate that compass signal to another set of neurons
886
+ [2682.400 --> 2684.520] that get visual motion.
887
+ [2685.520 --> 2687.800] Those neurons do vector addition.
888
+ [2687.800 --> 2690.400] There's this 180 degree correction.
889
+ [2690.400 --> 2693.480] And you get the signal going to the PFRs.
890
+ [2693.480 --> 2695.840] The PFRs also get the EPG signal.
891
+ [2695.840 --> 2698.840] And I don't know if you followed, but basically,
892
+ [2698.840 --> 2702.440] the only role of that is to do an offset.
893
+ [2702.440 --> 2709.120] Because there's a baseline activity of these PFNs that would
894
+ [2709.120 --> 2712.520] make the fly continually think it's moving backwards.
895
+ [2712.520 --> 2716.160] If you didn't offset by a forward direction signal.
896
+ [2716.160 --> 2719.120] And then finally, the PFRs do a max operation
897
+ [2719.120 --> 2722.440] and out comes this traveling direction signal.
898
+ [2722.440 --> 2726.200] OK, let me just talk a little bit about a few things.
899
+ [2726.200 --> 2729.040] I've talked about this particular system.
900
+ [2729.040 --> 2731.960] There are actually more PFN neurons.
901
+ [2731.960 --> 2734.160] I've talked about PFN V neurons.
902
+ [2734.160 --> 2736.200] There are PFN D neurons.
903
+ [2736.200 --> 2740.400] If you look at the anatomy, they have the 45 degree shift,
904
+ [2740.400 --> 2743.200] like these guys, but they don't have the interneuron.
905
+ [2743.200 --> 2745.080] And if you've been paying attention,
906
+ [2745.080 --> 2749.080] you should immediately be able to know what signal they carry.
907
+ [2749.080 --> 2751.440] They have to carry a forward direction signal,
908
+ [2751.440 --> 2753.640] because they're not 180 degree flipped.
909
+ [2753.640 --> 2754.680] And indeed, they do.
910
+ [2754.680 --> 2757.920] So their amplitude is modulated as if they're
911
+ [2757.920 --> 2760.560] carrying a forward direction signal.
912
+ [2760.560 --> 2762.840] We haven't talked about them because they seem,
913
+ [2762.840 --> 2765.680] in flying flies, to have a subdominant role.
914
+ [2765.680 --> 2768.400] But they're definitely there in the circuit.
915
+ [2768.400 --> 2772.480] And the final thing I'm going to talk about is path integration.
916
+ [2772.480 --> 2774.840] So we talked about traveling direction.
917
+ [2774.840 --> 2779.040] Obviously, that's the relevant direction in an animal
918
+ [2779.040 --> 2781.440] that's not moving straight forward.
919
+ [2781.440 --> 2783.800] And the egocentric-to-allocentric transformation:
920
+ [2783.800 --> 2788.040] that's what you need to do path integration's vector addition.
921
+ [2788.040 --> 2793.320] So we have a lot of the ideas here for path integration.
922
+ [2793.320 --> 2797.080] And that was really the topic of this paper I mentioned
923
+ [2797.080 --> 2800.200] before, in particular, the really beautiful theoretical work
924
+ [2800.200 --> 2803.160] of Barbara Webb, who built these ideas
925
+ [2803.160 --> 2807.680] into a really very complete model of path integration.
926
+ [2807.680 --> 2811.120] That I would urge you to read.
927
+ [2811.120 --> 2816.880] It's very elegant; in particular, what it generates,
928
+ [2816.880 --> 2823.560] as an insect moves along, is a vector pointing back
929
+ [2823.560 --> 2825.160] to the home direction.
930
+ [2825.160 --> 2830.280] And in animals like ants and bees, that's a very important signal.
931
+ [2830.280 --> 2835.000] So the question is, is this system, the one I've talked about, doing that?
932
+ [2835.000 --> 2836.880] There's evidence that it's not.
933
+ [2836.880 --> 2839.000] One is the speed modulation.
934
+ [2839.000 --> 2841.560] Of course, I mentioned that I was going
935
+ [2841.560 --> 2843.560] to talk about angle and not speed.
936
+ [2843.560 --> 2847.080] If you look at the speed modulation of this bump, which
937
+ [2847.080 --> 2849.760] is very sensitive to traveling direction angle,
938
+ [2849.760 --> 2852.080] it's not very speed-modulated.
939
+ [2852.080 --> 2855.560] So it's as if speed has been kind of left out in this system.
940
+ [2855.560 --> 2857.160] And you can't do path integration
941
+ [2857.160 --> 2859.680] without incorporating a speed signal.
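For contrast, here is the missing step in sketch form (our illustration, with hypothetical names, not anything from the talk's data): path integration would accumulate speed times the unit vector along the allocentric traveling direction, and the negative of that running sum is the home vector.

    import numpy as np

    def integrate_path(samples, dt=1.0):
        """Accumulate (travel_angle, speed) samples into a displacement vector."""
        pos = np.zeros(2)
        for travel_angle, speed in samples:
            pos += speed * dt * np.array([np.cos(travel_angle), np.sin(travel_angle)])
        return pos

    pos = integrate_path([(0.0, 1.0), (np.pi / 2, 0.5)])
    home_direction = np.arctan2(-pos[1], -pos[0])  # allocentric angle back to home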
942
+ [2859.680 --> 2863.280] The other thing is we don't see evidence for integration.
943
+ [2863.280 --> 2870.280] But I would just end by saying, these are the components
944
+ [2870.280 --> 2872.880] of what you would need for path integration.
945
+ [2872.880 --> 2875.880] The ideas, I think, are generally on the right track.
946
+ [2875.880 --> 2881.040] And so it should be that in the future,
947
+ [2881.040 --> 2883.280] this system, the fly, central complex,
948
+ [2883.280 --> 2886.520] would really offer the community a chance
949
+ [2886.520 --> 2892.280] to really work out path integration in a very detailed way.
950
+ [2892.280 --> 2894.880] So that's the system I've talked about.
951
+ [2894.880 --> 2897.240] I'll wind up now.
952
+ [2897.240 --> 2900.160] I want to acknowledge again my collaborators
953
+ [2900.160 --> 2902.440] for their really beautiful work.
954
+ [2902.440 --> 2907.160] And it's been just a pleasure and a privilege to work with them.
955
+ [2907.160 --> 2910.160] I will end there, and I'm happy to take questions.
transcript/allocentric_mFJK-t4s-sE.txt ADDED
@@ -0,0 +1,9 @@
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 4.360] How can you easily see, in your interlocutor, whether the tendency is more egocentric or
2
+ [4.360 --> 10.240] allocentric? So, egocentric: ego, I bring things back to myself, I
3
+ [10.240 --> 16.080] am self-centered; allo: it's toward others, I go toward others. Now, the idea
4
+ [16.080 --> 20.880] is very simple: you ask, for example, you see, or whatever, for someone
5
+ [20.880 --> 26.120] to pass you something. If the person genuinely wants to pass you something,
6
+ [26.120 --> 32.560] because she is naturally allocentric, she will have a tendency to extend her arm toward you.
7
+ [32.560 --> 37.160] If she has an egocentric tendency, you will notice that the arm is barely extended,
8
+ [37.160 --> 42.720] to force you to come toward her. And so, if she makes that kind of gesture, it is more
9
+ [42.720 --> 46.280] egocentrism, whereas if she extends her arm, it is more allocentrism.
transcript/allocentric_mQc1sNumTp8.txt ADDED
@@ -0,0 +1,575 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 5.000] I'm going to go to the next slide.
2
+ [5.000 --> 10.000] Both Larry and I are honored to be presenting here at the
3
+ [10.000 --> 14.000] symposium, and we're honored to present
4
+ [14.000 --> 17.000] this individual's work: Cheng Liu's.
5
+ [17.000 --> 20.000] Neither Larry nor I did any of the experiments.
6
+ [20.000 --> 23.000] You will see today, Cheng did all of them, and you will see,
7
+ [23.000 --> 25.000] I think, that he did an expansive
8
+ [25.000 --> 28.000] body of work with far-reaching conclusions, and
9
+ [28.000 --> 32.000] I'm excited for Larry and me to be able to share with you these results.
10
+ [32.000 --> 37.000] And we hope this experiment of an experimentalist and a theorist
11
+ [37.000 --> 41.000] giving a joint lecture works. Let's see how it goes.
12
+ [41.000 --> 45.000] So today's lecture will be about navigation.
13
+ [45.000 --> 52.000] And one issue when we're navigating the world is that sometimes our heads are pointed in the direction
14
+ [52.000 --> 54.000] we are traveling in.
15
+ [54.000 --> 59.000] And sometimes we're looking some other direction as we walk forward.
16
+ [59.000 --> 66.000] And with head movements, our gaze can shift plus or minus 90 degrees say as humans.
17
+ [66.000 --> 76.000] But in certain circumstances, our heads might be pointed 180 degrees offset from the traveling direction in the station wagon for many hours.
18
+ [76.000 --> 81.000] And in all these conditions, we have an internal sense of where we're going in the world.
19
+ [81.000 --> 89.000] And we can do it even though our head angle and our traveling angles might be very, very offset.
20
+ [89.000 --> 93.000] And the reason this distinction is important:
21
+ [93.000 --> 109.000] One reason is that the neurons we know about that indicate the orientation of our body with reference to external coordinates in neuroscience, at the cellular level, are head direction cells, which have been discovered in many species and which indicate which way
22
+ [109.000 --> 115.000] our heads are oriented, or an animal's head is oriented, in an environment.
23
+ [115.000 --> 122.000] But as these examples indicate, what the animal often cares about is which way it's traveling in the environment.
24
+ [122.000 --> 128.000] So are there signals related to traveling angle in the navigational centers of the nervous system?
25
+ [128.000 --> 131.000] And if so, how are they constructed?
26
+ [131.000 --> 137.000] And this is such a general question that we don't need to go to the human brain, but we can study this in an insect.
27
+ [137.000 --> 158.000] And in the fly, as in those examples, you can have a traveling
28
+ [158.000 --> 163.200] angle offset from your heading angle. And in Drosophila over the past six years or so,
29
+ [164.160 --> 167.520] work from many labs, including Michael Dickinson's lab,
30
+ [167.520 --> 174.800] the Vettgerman's lab, Rachel Wilson's lab, my lab, and many others, have studied a heading signal
31
+ [175.840 --> 179.920] in the flybrain. And what I wanted to tell you about today is the discovery of a traveling
32
+ [179.920 --> 187.920] signal and how it's built. So let's start going into the brain. And specifically the part of
33
+ [187.920 --> 192.960] the brain that carries these navigational signals, which is the central complex. The central
34
+ [192.960 --> 198.800] complex of Drosophila consists of four principal structures. One looks like this bicycle handlebar
35
+ [198.800 --> 204.560] shaped structure called the protocerebral bridge. If you go a little deeper in the brain, you find a
36
+ [204.560 --> 210.080] structure called the fan-shaped body. Beneath the fan-shaped body is a doughnut-shaped structure
37
+ [210.080 --> 217.440] called the ellipsoid body. And under the ellipsoid body are these sets of noduli. And today's talk
38
+ [217.440 --> 223.280] will include descriptions of signals in all four of these structures. The bridge, the fan-shaped
39
+ [223.280 --> 230.080] body, the circular ellipsoid body, and the noduli. And all of these structures, one of the things they
40
+ [230.080 --> 240.400] do is have an angular representation of space, or signals that tile angular space. And in the
41
+ [240.400 --> 247.040] ellipsoid body, it's very natural. The zero to 360 degrees around the fly are mapped around the
42
+ [247.040 --> 253.760] circular structure. The bridge has two circles that have been opened up and pieced together as shown
43
+ [253.760 --> 261.680] here. And the fan-shaped body has a fourth, angular axis. All of them cover angular space.
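As a concrete picture of this tiling (our illustration; the wedge count of 16 is an assumption for the sketch, not a quoted anatomical fact), one can discretize the 0-to-360-degree space into wedges and ask which wedge a given angle falls into:

    import numpy as np

    def wedge_index(angle_rad, n_wedges=16):
        """Index of the wedge (0..n_wedges-1) containing an angle in radians."""
        frac = ((angle_rad + np.pi) % (2 * np.pi)) / (2 * np.pi)
        return int(frac * n_wedges)

    w = wedge_index(np.deg2rad(90.0))  # which column a 90-degree signal lands in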
44
+ [262.560 --> 271.520] And so let's start with the bridge and the ellipsoid body. In these structures are, well, they're
45
+ [271.520 --> 277.360] made of neurons, many hundreds of neurons. And here's one class of them called EPG neurons that I
46
+ [277.360 --> 283.680] will always show in blue. And Larry will always show in blue. And EPG neurons have dendrites that
47
+ [283.680 --> 291.760] tile the ellipsoid body and their axons then tile the protocerebral bridge. And a key step forward
48
+ [291.760 --> 298.240] in this field was done by Johannes Seelig in Vivek Jayaraman's lab when he imaged calcium in the
49
+ [298.240 --> 304.640] dendrites of the EPG neurons in a fly that's walking on a little floating ball. And what they saw
50
+ [304.640 --> 310.000] was that there was a little bump of calcium activity that was acting like a little compass needle.
51
+ [310.000 --> 314.960] And so when the fly was stationary, this bump was stationary. And when the fly turned on the ball,
52
+ [314.960 --> 320.160] and in this case, it's in complete darkness, this little bump of activity turned in the brain like
53
+ [320.160 --> 326.720] a compass needle. So I'll play a video. The fly turns, the compass needle turns, the fly stable,
54
+ [326.720 --> 335.360] the bump is stable. Fly turns, it updates as follows. And we've imaged from these same neurons
55
+ [335.360 --> 341.360] in their axon terminals in the protocerebral bridge. And what you see in the bridge are two bumps
56
+ [341.360 --> 349.680] of activity. One on the left bridge and one on the right bridge, respecting the two open circles
57
+ [349.680 --> 355.040] that I told you about a little earlier. And these bumps move like little windshield wipers in the
58
+ [355.040 --> 360.080] bridge as the fly turns on the ball. And one thing that's a little different in the experiment
59
+ [360.080 --> 365.360] I'm about to show you a video of here is that we gave the fly a visual beacon to know its
60
+ [365.360 --> 370.160] orientation rather than the complete darkness measurement that I showed you in the prior video.
61
+ [370.160 --> 375.120] And when you give the fly a visual beacon that's yoked to the ball, so when the fly turns this blue
62
+ [375.120 --> 380.800] bar turns with the ball, it acts like the sun or a distant mountain, and the fly can
63
+ [380.800 --> 384.960] use it and the brain can use it to estimate the fly's orientation. So what you'll see in the video
64
+ [384.960 --> 391.600] is when the fly turns on the ball, this bar rotates with the ball and these bumps track the angular
65
+ [391.600 --> 406.640] position of the bar like little windshield wipers. And these compass signals in the brain, you can
66
+ [406.640 --> 411.520] study them in a walking fly or you can study them as I'm showing here in a tethered flying fly,
67
+ [411.520 --> 417.680] which is the preparation Michael Dickinson and I developed in his lab. And when you do this, you can
68
+ [419.520 --> 424.400] track the wings instead; there's no ball. So you track the wings and you can have a little
69
+ [424.400 --> 430.720] visual stimulus that then moves in closed loop with the wings, just like with the ball, analogously.
70
+ [430.720 --> 435.200] And most of the experiments I'll tell you about today were in tethered flying flies.
71
+ [435.200 --> 441.760] So the EPG neurons have this more detailed anatomy. A single neuron has dendrites in this wedge
72
+ [441.760 --> 446.160] say of the ellipsoid body and goes up and projects to this glomerulus of the left bridge.
73
+ [446.800 --> 451.600] The neighboring EPG goes to the right bridge, left bridge, right bridge, left bridge, right bridge,
74
+ [451.600 --> 456.560] as follows. And so now I hope it's clear why a single bump in the dendrites of the ellipsoid
75
+ [456.560 --> 462.320] body EPG dendrites manifests as two separate bumps in the bridge. It's the same exact neurons
76
+ [462.320 --> 468.560] that are active in the two structures. And the EPG neurons fill up the two structures and in fact
77
+ [468.560 --> 473.440] all the neurons we will tell you about today inside the central complex are columnar neurons in
78
+ [473.440 --> 479.120] this way, meaning they tile. They have many constituent elements that tile the structures that they
79
+ [479.120 --> 486.320] innervate. And so if a fly is standing at some angle relative to the sun, these bumps will be at
80
+ [486.320 --> 491.120] some location in the brain. And when the fly turns with reference to the sun, that's what we're
81
+ [491.120 --> 497.360] mimicking with that blue bar. These bumps will rotate in the brain. And if the fly turns 180,
82
+ [497.360 --> 501.600] the bumps might go to the bottom. And at this location, you can see that when the
83
+ [501.600 --> 507.040] bump kind of opens up, that shows that it's an open circle because it's on two sides of the bridge.
84
+ [507.040 --> 511.440] And then when it keeps rotating and the whole system rotates around. So these are bumps of
85
+ [511.440 --> 516.400] activity that we think inside the fly's brain indicates which way am I oriented relative to the
86
+ [516.400 --> 522.160] sun or some external cue. Another way of saying that is a world-referenced orientation signal,
87
+ [522.160 --> 530.560] or an allocentric one; world-referenced is allocentric. So the question for this talk is, what about
88
+ [530.560 --> 536.880] if the fly's traveling in a direction that's not the direction it's heading? Is there a bump in the
89
+ [536.880 --> 541.280] ellipsoid body or elsewhere in the central complex that says which way the fly is moving rather than
90
+ [541.280 --> 547.520] which way the fly's head is oriented? And so the experiments that kind of reveal this to Chang
91
+ [547.520 --> 552.720] are shown here. Here's one experiment where we are imaging the calcium signal of the EPG neurons
92
+ [552.720 --> 558.240] in the ellipsoid body. You cut the ellipsoid body open into a line. You see the bump kind of moves
93
+ [558.240 --> 563.840] up and down on this plot over time. That's the bump rotating around the circle, plot it as shown.
94
+ [563.840 --> 569.600] And you can extract this peak of this activity and that's shown here in blue over time. And you can
95
+ [569.600 --> 574.080] plot on the same plot the position of the bar and closed loop and you can see that this bump in the
96
+ [574.080 --> 580.080] brain is following the position of the dot in this case and this flying experiment. So this dot is
97
+ [580.080 --> 584.400] rotating in closed loop with the fly's turning behavior. So here it is. Here's this signal that's
98
+ [584.400 --> 589.840] indicating which way the fly's oriented relative to this dot. And the question is, is there a signal
99
+ [589.840 --> 594.000] that indicates which way the fly's traveling relative to the dot? Now the fly's glued to a plate.
100
+ [594.960 --> 600.000] So it's not actually traveling anywhere, but it might be trying to. And so what Chang was doing
101
+ [600.000 --> 604.480] in this experiment was actually co-imaging this blue bump with other bumps. And he did a series of
102
+ [604.480 --> 608.720] experiments where he imaged a bunch of other signals in the central complex. And there's many other
103
+ [608.720 --> 614.880] columnar neurons that have bumps of activity. And one of these other neurons that he imaged
104
+ [615.920 --> 622.240] tiles the fan-shaped body. They're called PFR neurons. And they also express a bump of activity
105
+ [622.240 --> 627.440] that moves left and right in the fan-shaped body. And so Chang was co-imaging these neurons with
106
+ [627.440 --> 632.800] the EPG neurons in the experiment I just showed you. And what he saw was interesting in that the bump
107
+ [632.800 --> 639.440] in the PFR neurons was broader, first of all. And sometimes like here, this broad bump follows the
108
+ [639.440 --> 645.040] arc of the blue bump. You can see that the peaks are tracking together. But at other times like here,
109
+ [645.040 --> 650.400] the blue bump was relatively stable and the purple bump is whizzed across the fan-shaped body
110
+ [650.400 --> 657.200] as shown here. So there's a bump that sometimes is aligned with the EPG bump and sometimes isn't.
111
+ [657.200 --> 661.120] And so that suggested to us that the reason it's misaligned at times may be because the
112
+ [661.120 --> 666.880] fly's trying to fly sideways or backwards. And this purple bump is indicating that. And so the way
113
+ [666.880 --> 671.520] we tested this in the flight experiments is, if you look in this arena, sometimes we presented to
114
+ [671.520 --> 678.880] the fly optic flow. So a sea of dots beneath the fly. So it's still controlling the closed loop
115
+ [678.880 --> 684.080] sun stimulus. But we presented this optic flow to make the fly feel like it's getting blown forward.
116
+ [685.680 --> 690.880] In this case, optic flow or backwards. And this is very compelling if you've ever been in a virtual
117
+ [690.880 --> 697.360] reality bubble, or with goggles on for humans. It's very compelling, and we think the fly's
118
+ [697.360 --> 702.640] finding it compelling too. You see the sea of dots that make you feel like you're moving one direction
119
+ [702.640 --> 708.720] or another relative to your body axis. Okay. So in parallel with this closed loop sun, we presented
120
+ [708.720 --> 713.120] this optic flow. And I just want to mention that this optic flow stimulus was adapted from
121
+ [713.120 --> 718.560] code from Michael Dickinson's lab. And in fact, Peter Weir, who was a postdoc with Michael Dickinson,
122
+ [718.560 --> 723.520] was the first to image fan-shaped body neurons in response to these optic flow stimuli.
123
+ [724.160 --> 730.240] And what we saw in these experiments was that when we presented a static optic flow, that's what I
124
+ [730.240 --> 736.720] was showing to you here. Now when we presented optic flow that simulated the fly's body moving forward,
125
+ [737.440 --> 743.680] we saw a remarkable and very strong effect, which is that the fly could still turn left and right
126
+ [743.680 --> 749.040] and the bump position here indicated that it stayed locked onto the position of
127
+ [749.040 --> 755.280] the dot. But now the purple and blue bumps aligned very tightly. So when the fly's body was
128
+ [755.280 --> 759.920] traveling in the direction it was oriented, these bumps aligned. And when you didn't give a fly
129
+ [759.920 --> 764.720] a cue that it's moving forward, they were misaligned again. So we could do this sort of experiment
130
+ [764.720 --> 771.200] not just anecdotally but systematically. Here's another single example when we presented for
131
+ [771.200 --> 777.200] three seconds, optic flow forward. And you can see that in this sample trial, the blue and purple
132
+ [777.200 --> 783.040] bumps aligned. And this is now 13 flies: the difference between the blue and purple curves
133
+ [783.040 --> 788.000] over time. And they are not very different when you present open loop optic flow simulating forward
134
+ [788.000 --> 794.640] travel. When you simulate backward travel, they got misaligned by exactly 180 degrees.
135
+ [796.080 --> 801.120] And when you presented the intermediate drift angles, you got intermediate offsets that were
136
+ [801.120 --> 806.480] very consistent across individuals. And you could summarize these data by plotting the difference
137
+ [806.480 --> 812.160] between the blue and purple curves as a function of the optic flow angle. And these fit
138
+ [812.160 --> 818.480] decently along a linear axis, which means the difference between these two phases is saying which
139
+ [818.480 --> 826.880] way my body is getting blown. And maybe even more precisely, or more relevant to our question,
140
+ [826.880 --> 834.080] you could infer which way the fly's body is moving in world-centered coordinates from these
141
+ [834.960 --> 841.360] measurements. So you could say where is the sun in the arena or where is this blue phase? You could
142
+ [841.360 --> 846.880] say that's the way the body is oriented. And then you could add at every moment in time, you could
143
+ [846.880 --> 851.520] say which way the optic flow is moving and add that angle. And so from those two angles,
144
+ [851.520 --> 857.280] you can say which way the fly's body is moving relative to the world and plot that on the X axis.
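That angle addition is simple enough to write down; here is a minimal sketch (our reconstruction with made-up sample values, not the analysis code) of that X-axis quantity: the allocentric travel angle is the bar-referenced heading angle plus the egocentric optic-flow drift angle, wrapped back into the circular range.

    import numpy as np

    def wrap(a):
        """Wrap an angle in radians to [-pi, pi)."""
        return (a + np.pi) % (2 * np.pi) - np.pi

    heading_world = np.deg2rad(40.0)   # from the blue (EPG) bump / bar position
    drift_ego = np.deg2rad(-90.0)      # simulated sideways drift from optic flow
    travel_world = wrap(heading_world + drift_ego)  # predicted purple-bump angle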
145
+ [857.280 --> 861.760] And you can plot on the Y axis, where is the purple bump in the fan-shaped body? And then you see a
146
+ [861.760 --> 867.600] very nice beautiful fit. The position of this purple bump is closely tracking which way the fly
147
+ [868.320 --> 874.160] is moving in a world-centered reference frame. And we saw this not only in PFR neurons, but in their
148
+ [874.160 --> 879.760] input: they have a dominant input from a set of neurons called H delta Bs, which we will
149
+ [879.760 --> 885.680] always plot in red. H delta B neurons also have this traveling direction signal. And in fact,
150
+ [885.680 --> 891.040] it is in H delta B neurons that we think this signal is first built. And so let me summarize what
151
+ [891.040 --> 896.320] I just said in a cartoon. If you didn't follow the data, this is the point. When the fly's flying,
152
+ [897.040 --> 902.560] there are two bumps of activity: one in the fan-shaped body, one in the ellipsoid body. Both these
153
+ [902.560 --> 908.080] bumps indicate an orientation relative to the sun or relative to the world. So literally, they
154
+ [908.080 --> 912.960] represent east, north, west, south, and so on. And the reason we don't use these terms usually is
155
+ [912.960 --> 919.360] because in any given fly east might be here in the fan-shaped body or it might be here. But once you
156
+ [919.360 --> 926.800] get this offset dealt with, which could differ fly to fly, these are literally earth coordinate systems
157
+ [926.800 --> 932.640] like north, east, west, south. So when the fly's flying around, the bumps track which way the
158
+ [932.640 --> 937.040] fly's oriented. But when the fly gets blown backwards right there, the purple bump says I'm moving
159
+ [937.040 --> 941.760] south and the blue bump doesn't. And now the fly gets blown back again, the purple bump is saying
160
+ [941.760 --> 946.400] which way am I traveling in the world, even if it's not oriented with my heading and the blue bump
161
+ [947.200 --> 951.920] doesn't. And so that's the signal we found. And that's the first part of the talk. And now the
162
+ [951.920 --> 959.200] rest of the talk will be regarding this question: how do you build a bump of activity in the brain
163
+ [959.200 --> 964.000] that tracks which way you're traveling relative to the world, or an allocentric traveling direction
164
+ [964.000 --> 970.080] signal. And with that, I'll hand over to Larry. Gabby introduced to you these two directions,
165
+ [970.080 --> 976.000] the heading direction and the traveling direction. And I want to outline how the traveling direction
166
+ [976.000 --> 980.720] would be computed, just the components we need; we'll go into more detail as we progress,
167
+ [981.360 --> 986.400] in order to compute this traveling direction. And an important feature which Gabby presented to
168
+ [986.400 --> 991.760] you is that it's the traveling direction in the world, referenced to the world around it. So the first thing
169
+ [991.760 --> 998.480] that is going to be needed in order to calculate that is a reference to an external landmark like
170
+ [998.480 --> 1005.600] the Sun here through this EPG compass system. So in fact, we think that a lot of the computations
171
+ [1005.600 --> 1011.600] being done in the central complex are probably tied into the world around the fly through
172
+ [1011.600 --> 1016.400] this system. And that's definitely true of this traveling direction signal. So that links it to
173
+ [1016.400 --> 1022.640] the world. But of course, the fly also is going to need to know which way it's moving in the
174
+ [1022.640 --> 1029.920] world. And that's done through optic flow. As we progress, Gabby will explain to you how visual
175
+ [1029.920 --> 1035.680] neurons project the optic flow onto these four directions that are shown in brown and orange.
176
+ [1036.400 --> 1043.040] There's an orthogonal system that's rotated 45 degrees relative to the fly. And the optic
177
+ [1043.040 --> 1050.240] motion is projected onto those vectors, as I said. The key in linking these two (the right-hand side
178
+ [1050.240 --> 1056.880] would give you an egocentric view of motion; the left-hand side ties it to the world) is to tie
179
+ [1056.880 --> 1062.800] these four axes to the heading direction. In other words, the front two there, you wouldn't call
180
+ [1062.800 --> 1069.600] plus and minus 45 degrees, but heading plus and minus 45 degrees. Then when the fly rotates,
181
+ [1069.600 --> 1076.000] they get rotated along with it. And they're constantly referenced to the world. And that is how
182
+ [1076.000 --> 1082.320] the traveling direction signal is computed as a vector sum. Once you do this first step of anchoring
183
+ [1082.320 --> 1087.520] these vectors to the world, that's the coordinate transformation from egocentric to allocentric.
184
+ [1088.080 --> 1093.200] And then it's a vector summation problem, which I'll show you now. So if you take these four
185
+ [1093.200 --> 1100.000] vectors for any particular drift or optic flow, this would be a little optic flow that's more in
186
+ [1100.000 --> 1104.880] the forward direction, a little bit more on the right. Now you just add these vectors and you get
187
+ [1104.880 --> 1111.520] the traveling direction in the world. I used to teach first-year physics. This is a classic first-year
188
+ [1111.520 --> 1115.840] physics problem. And there's the answer that a good first-year physics student will give you.
189
+ [1116.480 --> 1122.480] This angle is given by this formula. Now if you've forgotten your first-year physics, don't worry,
190
+ [1122.480 --> 1127.360] because we're going to explain to you how the fly calculates this angle. It's exactly the same
191
+ [1127.360 --> 1132.560] calculation, but it's actually done in a clever and much simpler way. There's no ratio, there's
192
+ [1132.560 --> 1137.440] no arctangent. And so we're going to progress to tell you how the fly does this computation.
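
(A minimal sketch of that textbook vector-sum calculation, assuming four flow projections onto axes at heading plus/minus 45 and plus/minus 135 degrees; the function name and argument ordering are our own illustration, not the talk's notation.)

```python
import math

def traveling_direction(heading, flow_mags):
    """Allocentric traveling direction as the angle of a four-vector sum.

    heading   : heading relative to the world (radians)
    flow_mags : optic-flow projections onto the axes at heading
                +45, -45, +135, -135 degrees (assumed ordering)
    """
    offsets = [math.radians(a) for a in (45.0, -45.0, 135.0, -135.0)]
    x = sum(m * math.cos(heading + o) for m, o in zip(flow_mags, offsets))
    y = sum(m * math.sin(heading + o) for m, o in zip(flow_mags, offsets))
    return math.atan2(y, x)  # the ratio-plus-arctangent "physics student" answer

# Mostly forward drift: front projections large, back projections small.
print(math.degrees(traveling_direction(0.0, [1.0, 1.0, 0.1, 0.1])))  # ~0 degrees
```
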
193
+ [1138.160 --> 1144.880] The first question that we have to ask experimentally then is how and where are these four vectors
194
+ [1144.880 --> 1155.360] represented in the fly's brain? So the answer begins with the fact that the EPG cells, when they
195
+ [1155.360 --> 1164.080] project up to the bridge, connect up to a set of neurons that are called PFNd cells and PFNv
196
+ [1165.040 --> 1171.840] cells. And the anatomy of these cells was first described by Tanya Wolff in 2014.
197
+ [1172.560 --> 1178.080] And it has been described in even more detail with a recent release of the connectome from
198
+ [1178.960 --> 1184.480] Janelia, which will serve a role in kind of anchoring our anatomy here. So I'll be telling you
199
+ [1184.480 --> 1192.240] about connections that are analyzable in that beautiful dataset that Janelia released. So the EPGs,
200
+ [1192.240 --> 1198.960] we know now through those two sets of experiments or papers, connect to these PFNd and PFNv
201
+ [1198.960 --> 1206.240] cells directly. And when you co-image, when Chang co-imaged the EPG cells with a green fluorescent
202
+ [1206.240 --> 1212.640] calcium indicator and the PFNd cells in the bridge with a red fluorescent indicator, he saw a very
203
+ [1212.640 --> 1219.360] clear result, which is that the PFNds, these orange cells, have two bumps of activity just like the EPGs,
204
+ [1219.360 --> 1226.800] and those bumps track the EPG bumps kind of in a relatively boring way. So in addition to the
205
+ [1226.800 --> 1232.320] two bumps of EPGs moving like windshield wipers, we have four more bumps in duplicate: two PFNd
206
+ [1232.320 --> 1239.520] bumps and two PFNv bumps. So those we believe might get us part of the way there towards these four
207
+ [1239.520 --> 1245.520] vectors. The reason we think they might be related to these four vectors that would be building this
208
+ [1245.600 --> 1253.440] traveling signal is twofold. One is these four things are anchored to the heading angle that Larry
209
+ [1253.440 --> 1259.840] talked about. When the blue bumps rotate, all these orange and brown bumps will rotate together.
210
+ [1259.840 --> 1266.560] So they have that H plus-or-minus part of the calculation in there for free. And the second thing, as we'll show
211
+ [1266.560 --> 1272.160] you later, is that they connect monosynaptically to these H delta B neurons that we know have the
212
+ [1272.160 --> 1277.520] traveling signal in them. And so we have four signals that are connecting to our traveling
213
+ [1280.080 --> 1288.240] bump neurons. And so they have the potential to carry these four vectors. So can bumps,
214
+ [1288.240 --> 1292.000] they don't look like little arrows in the brain, they look like little bumps of activity, so
215
+ [1292.000 --> 1295.920] our next question is going to be: can bumps represent vectors?
216
+ [1296.880 --> 1303.680] So the answer to this is yes, if they're the right kind of bumps. So there's a very well-known
217
+ [1303.680 --> 1309.200] representation of vectors called a phasor representation, in which you take a vector like this, which
218
+ [1309.200 --> 1316.640] has a length L1 and an angle phi1, and you map it onto a sine wave. And that sine wave should
219
+ [1316.640 --> 1322.480] have a phase equal to the angle of the vector and an amplitude equal to the length of the vector.
220
+ [1322.480 --> 1330.240] So this is the phasor map between sinusoids and vectors. You can do this for a second vector:
221
+ [1330.800 --> 1336.080] it has a length L2 and that maps to the amplitude, and the phase maps to the angle again.
222
+ [1336.080 --> 1341.200] And then the payoff here is that if you want to add those two vectors, which you can do
223
+ [1341.840 --> 1347.120] by the vector method, all you have to do is add the two sinusoids. So you add the two sinusoids,
224
+ [1347.120 --> 1352.720] you'll get a resultant sinusoid that has its phase at the angle of the summed vector
225
+ [1352.720 --> 1358.880] and its amplitude equal to the length of the summed vector. So in engineering, this trick is used
226
+ [1358.880 --> 1365.120] on the right to sum sine waves using vectors. And what we're going to argue is that the
227
+ [1365.120 --> 1371.360] fly does it the other way. It uses the sinusoids to add the vectors rather than the vectors to add
228
+ [1371.360 --> 1378.560] the sinusoids. Now the idea here then is that these are spatial sinusoids, not temporal:
229
+ [1378.560 --> 1385.440] that populations of different cell types, like you see in the bottom there, through their activity
230
+ [1385.440 --> 1391.680] across the structures of the central complex, are going to map out these sinusoids.
231
+ [1392.400 --> 1397.760] And we'll have one for each vector we need; they'll get summed because they'll converge
232
+ [1397.840 --> 1405.440] onto a common target. And that's the proposal for how things work. Now this idea has been noted
233
+ [1405.440 --> 1410.880] before; there are papers that go back to proposing that maybe something like this is going on in the
234
+ [1410.880 --> 1419.440] hippocampus, associated with various navigational computations. Those are the first two
235
+ [1419.440 --> 1425.360] papers that I referenced there. The last three papers actually apply this idea to insects and
236
+ [1425.360 --> 1431.280] to navigation in insects. In particular, the last paper here, from Stone, Webb, and Heinze,
237
+ [1432.640 --> 1439.280] used these ideas in an interesting proposal about how path integration might be done in bees.
238
+ [1439.840 --> 1446.080] And it also used an analogous circuit in bees quite similar to the one we're describing.
239
+ [1446.720 --> 1453.200] So the question that we have to ask now of the experiments is not just whether there are PFN
240
+ [1453.200 --> 1456.800] bumps, but are they sinusoids? Because that's critical to this idea.
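
(A minimal numerical sketch of the phasor trick, with made-up example vectors: encode each vector as a spatial sinusoid whose amplitude is the length and whose phase is the angle; summing the sinusoids pointwise yields the sinusoid of the summed vector.)

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)  # positions along the structure

def encode(length, angle):
    """Phasor encoding: vector (length, angle) -> spatial sinusoid."""
    return length * np.cos(theta - angle)

# "Add" two vectors by adding their sinusoids pointwise.
s = encode(1.0, np.radians(30)) + encode(0.5, np.radians(120))

# Decode the sum: peak position gives the angle, peak height the length.
angle_hat, length_hat = theta[np.argmax(s)], s.max()

# Check against ordinary component-wise vector addition.
x = 1.0 * np.cos(np.radians(30)) + 0.5 * np.cos(np.radians(120))
y = 1.0 * np.sin(np.radians(30)) + 0.5 * np.sin(np.radians(120))
print(np.degrees(angle_hat), np.degrees(np.arctan2(y, x)))  # both ~56.6 degrees
print(length_hat, np.hypot(x, y))                           # both ~1.12
```
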
241
+ [1459.360 --> 1468.240] So the way you can test this is by taking the bumps, in this case in the EPG measurements, and
242
+ [1468.960 --> 1473.600] phase-nulling them, meaning you move the bumps so they're always in the same position on every frame.
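
(A minimal sketch of such phase-nulling, with assumed array shapes: estimate each frame's bump phase from the population vector, circularly shift the bump to position zero, then average across frames.)

```python
import numpy as np

def phase_null_average(frames):
    """frames: (n_frames, n_glomeruli) calcium signals with a rotating bump.
    Shift each frame so its bump phase sits at index 0, then average."""
    n = frames.shape[1]
    angles = 2 * np.pi * np.arange(n) / n
    out = np.zeros(n)
    for f in frames:
        phase = np.angle(np.sum(f * np.exp(1j * angles)))  # population-vector phase
        shift = int(np.round(phase / (2 * np.pi) * n))     # phase -> glomerulus index
        out += np.roll(f, -shift)                          # rotate bump to index 0
    return out / len(frames)
```
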
243
+ [1474.560 --> 1479.840] And then you average the GCaMP signals that you see, the calcium signals, and look at the shape that
244
+ [1479.840 --> 1487.760] you get out. And when you do that with EPG bumps, these blue bumps, you can see a variety of shapes,
245
+ [1487.760 --> 1497.600] but often you'll see a shape that is sub-sinusoidal, as shown here. And with a sinusoidal fit shown as a
246
+ [1498.480 --> 1504.880] dotted line, you can see it's not very good. And this shape of the two bumps in the bridge makes sense,
247
+ [1504.880 --> 1508.880] because if you project this bridge through the known anatomy down to the ellipsoid body,
248
+ [1509.840 --> 1518.480] a beautiful, kind of Gaussian-looking bump in the ellipsoid body would yield this, because every
249
+ [1518.480 --> 1523.120] other signal goes left bridge, right bridge, left bridge, right bridge. You get this half bump on
250
+ [1523.120 --> 1530.960] the left, half bump on the right, and they're often kind of too skinny. So the EPG bumps are, in many
251
+ [1530.960 --> 1538.720] measurements, not sinusoidal. What about the PFNd and PFNv? It turns out that they
252
+ [1539.600 --> 1544.880] comport really, really nicely to sinusoids as measured in the bridge. And not only that,
253
+ [1544.880 --> 1552.880] kind of the beauty of the Drosophila system is that you can imagine a hypothesis rooted in
254
+ [1552.880 --> 1558.560] detailed anatomy for how you convert a non-sinusoidal pair of bumps into a sinusoidal pair.
255
+ [1559.520 --> 1565.600] And the idea is this: when the EPGs go up to the bridge, they talk directly to the PFNd
256
+ [1565.600 --> 1572.080] and PFNv cells. But they also talk indirectly to those same cells through these interneurons.
257
+ [1572.960 --> 1579.360] So EPG cells hit these delta 7 cells very strongly with their synapses. And here's one example of
258
+ [1579.360 --> 1586.480] a delta 7 neuron that has this kind of weird and beautiful anatomy. It has dendrites that peak
259
+ [1586.480 --> 1592.000] at these two positions, and you can see they kind of anatomically drop off in their intensity away from their
260
+ [1592.000 --> 1597.280] peaks. And these three locations are where the axon terminals are of the cell. So let me draw
261
+ [1597.280 --> 1603.360] this cell schematically on our bridge here. And you can see that when the bump is at the top of
262
+ [1603.360 --> 1609.520] the ellipsoid body, which is the kind of situation we're going to be simulating, this specific
263
+ [1610.080 --> 1614.880] delta 7 green cell is going to be activated very hard, if you assume that synaptic,
264
+ [1615.680 --> 1621.200] sorry, anatomical overlap co-varies with physiological activation. So this green cell should
265
+ [1621.200 --> 1625.600] be activated very strongly, which means the outputs of this cell should be very high
266
+ [1626.560 --> 1633.360] to its downstream targets. And so we can draw this kind of schematically in a simple model where we
267
+ [1633.360 --> 1639.200] schematize the dendritic weights as varying, going up and down sinusoidally twice over the bridge.
268
+ [1639.200 --> 1645.840] That's shown here, up and down twice. And then the same cell's axon has outputs here, here, and here.
269
+ [1645.840 --> 1651.120] So I've modeled the single cell as its dendrites here and its axons here. And in this model,
270
+ [1651.120 --> 1656.640] you can take this average activity from the EPGs, take a dot product with the simulated dendritic
271
+ [1656.640 --> 1661.760] density. That's just multiplying point by point and adding everything up. And you can see the peaks
272
+ [1661.760 --> 1666.640] are aligned. So you're going to get a large value. It's going to give you a nice output. And so these
273
+ [1666.640 --> 1671.760] three locations are going to be very active. And the key is that this is just one of eight
274
+ [1671.840 --> 1676.400] different delta seven neurons. The next delta seven neuron has everything shifted over to the left.
275
+ [1676.400 --> 1680.720] So if you look at the schematic, here's another delta seven neuron with its axons here.
276
+ [1681.680 --> 1686.000] Now this blue neuron is going to hit a little offset from the peak. So the output at these two
277
+ [1686.000 --> 1692.000] locations is going to be not quite as high but still high. And another delta seven neuron has its
278
+ [1692.000 --> 1696.880] axons here. Its dendrites are very thin here. So this cell would be very
279
+ [1696.880 --> 1702.720] inactive. So you can plot all eight delta seven model neurons, all eight modeled outputs.
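
(A toy version of that dot-product model; the input bump shape and the eight-unit size are our own illustration. The point is that projecting any bump onto shifted sinusoidal weight profiles keeps only its fundamental Fourier component, so the outputs across units trace a pure sinusoid.)

```python
import numpy as np

n = 8                                   # glomeruli in the toy map
pos = 2 * np.pi * np.arange(n) / n

# A non-sinusoidal, skinny "EPG-like" bump.
epg = np.exp(3 * np.cos(pos))
epg /= epg.max()

# Eight delta7-like units, each with sinusoidal dendritic weights,
# shifted by one glomerulus relative to its neighbor.
outputs = np.array([np.dot(epg, np.cos(pos - 2 * np.pi * k / n)) for k in range(n)])

# The outputs across units are an exact sinusoid, whatever the input shape.
resid = outputs - outputs.max() * np.cos(pos - pos[np.argmax(outputs)])
print(np.allclose(resid, 0))  # True
```
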
280
+ [1702.720 --> 1708.320] And what you find in this simple model is that if you take this weird EPG-shaped thing and
281
+ [1708.960 --> 1715.600] play it through the model, you get a beautiful sinusoid out. A couple of things to say: the sinusoid
282
+ [1715.600 --> 1721.200] is phase-inverted, so this is the trough where you see the peak here. But we're almost certain that
283
+ [1721.200 --> 1726.240] the delta seven neurons are inhibitory, because they're glutamatergic and there's functional
284
+ [1726.240 --> 1730.960] evidence as well to suggest that they're inhibitory on many downstream cells. So what we think is
285
+ [1730.960 --> 1737.920] happening is that the EPGs hit their downstream cell and these inhibitory reshaping neurons kind
286
+ [1737.920 --> 1746.320] of reformat these weird bumps into sinusoids. The same idea is in the analysis of the connectome paper
287
+ [1746.320 --> 1752.560] from the Vivek Jayaraman lab, as analyzed by Dan Turner-Evans. And so just to summarize schematically,
288
+ [1752.560 --> 1756.960] you take weird-shaped bumps and you can turn them into sinusoids. How do you do it? You have
289
+ [1756.960 --> 1762.080] direct inputs to the cells and indirect inputs through the delta sevens. And we think these
290
+ [1762.080 --> 1770.720] delta sevens reshape these four bumps into four sinusoids. And so we have four sine waves now in
291
+ [1770.720 --> 1775.920] the bridge that have an interesting property. Not only are they in the bridge, they're yoked
292
+ [1775.920 --> 1781.360] to the fly's heading. So when the fly turns, remember I told you the EPG bumps would move in
293
+ [1781.360 --> 1786.560] the bridge like this. And these are driving these four sinusoids, so they're going to rotate as well.
294
+ [1786.560 --> 1791.680] So every time the fly is turning, you have these sinusoids rotating in the bridge, which means they're
295
+ [1791.680 --> 1796.960] yoked to the fly's internal sense of heading. And sinusoids, as Larry said, can represent vectors.
296
+ [1796.960 --> 1803.440] So we're in good shape. And I guess I'll throw it to Larry by asking can we declare victory? We have four
297
+ [1803.680 --> 1812.000] sinusoids linked to the fly's heading. Yeah, so we have a lot of good news here.
298
+ [1812.800 --> 1818.240] As Gabby said, we've got our four sinusoids. It seems like the system puts some effort into
299
+ [1818.240 --> 1824.320] making sure they're really sinusoids, which fits with the phasor idea. They're all linked to
300
+ [1824.320 --> 1830.000] heading direction because they're all tied in with the EPG heading signal. So this axis I've drawn
301
+ [1830.000 --> 1835.360] on the right is the heading angle. And I don't have a nice movie like Gabby's, but they would
302
+ [1835.360 --> 1843.280] all move together back and forth if the fly turned. So that's all great. We've made the coordinate
303
+ [1843.280 --> 1851.360] transformation to world coordinates because all of these phases are now tied to heading and that's
304
+ [1851.360 --> 1857.520] what we needed with the vectors. But we've got one problem. And that is that these guys are all aligned.
305
+ [1857.520 --> 1865.040] That means that the four PFN cell classes represent four vectors that are all parallel to each
306
+ [1865.040 --> 1872.000] other. And that's not what we wanted. So we almost made it. But what we really want is that these
307
+ [1872.000 --> 1877.680] four vectors are split out and differ from the heading direction by plus and minus 45 in the
308
+ [1877.680 --> 1884.960] frontward direction for the PFNds, and by plus and minus 135 for the PFNvs in the backward direction.
309
+ [1885.520 --> 1891.840] So what I have to introduce then is another question about the data, which is: are the PFN sinusoids
310
+ [1891.840 --> 1901.840] shifted in this way? So it turns out that the four sinusoids in the bridge, which I've drawn up here,
311
+ [1901.840 --> 1908.000] two brown bumps on the left and two orange on the right, are aligned with the EPG bumps in
312
+ [1908.000 --> 1912.720] the bridge. That's another way of restating what Larry just said: in the bridge, all the bumps are
313
+ [1912.720 --> 1920.720] aligned with the EPG bump. But the key issue is that the way the traveling direction signal is
314
+ [1920.720 --> 1926.560] going to be built is through interactions down here in the fan-shaped body. So what we need to analyze
315
+ [1926.560 --> 1931.680] is how these neurons project to the fan-shaped body and interact with the H delta B neurons that
316
+ [1931.680 --> 1936.000] they synapse onto. And so that's what I'm going to show next. Let's look at these two
317
+ [1936.960 --> 1944.400] single PFNv neurons, and I'm going to draw their dendrites as squares and their axon terminals
318
+ [1944.400 --> 1951.600] as circles. They have this kind of beautiful crossover anatomy that was shown by Tanya Wolff and
319
+ [1951.600 --> 1957.440] by the connectome release more recently. So what does this mean? This means that along the fan-shaped
320
+ [1957.520 --> 1966.320] body's axis, if you say this is zero, the two corresponding cells here are, of course,
321
+ [1966.320 --> 1972.400] offset by 90 degrees, plus or minus 45 degrees from the center point. And so you have a 45-degree
322
+ [1972.400 --> 1979.520] shift just in how they project down here. Now these orange cells are not the 45-degree shifts we want;
323
+ [1979.520 --> 1984.560] these cells are going to be the 135-degree shift. And the way this works is that the H delta B
324
+ [1984.560 --> 1989.920] cells introduce a second shift of 180 degrees. So now let me show you what these cells
325
+ [1989.920 --> 1995.040] schematically look like. H delta B cells have dendrites in one column of the fan-shaped body.
326
+ [1995.760 --> 2004.000] And they have axon terminals that are offset by 180 degrees here. So if you add 45 to minus 180,
327
+ [2004.000 --> 2010.880] you get a net shift of minus 135. And so this bump of activity gets moved and placed here in the
328
+ [2010.880 --> 2016.560] fan-shaped body, or this neuron's activity, I should say. The symmetric process happens from the left
329
+ [2016.560 --> 2023.680] bridge. And so we have our plus 135 and minus 135 shifts. And PFNds, they have the same first-
330
+ [2023.680 --> 2030.480] order anatomy as PFNvs. The difference is that PFNds connect very weakly,
331
+ [2030.480 --> 2035.440] relatively weakly I should say, to the dendrites of H delta Bs. They do connect to the dendrites,
332
+ [2036.000 --> 2042.720] but more weakly than they connect to the axon terminals. So it's kind of an interesting and odd anatomy.
333
+ [2042.720 --> 2048.560] This PFNd cell will drive this axon terminal, we believe, strongest: stronger than the
334
+ [2049.200 --> 2055.360] dendrites of the cell here. And this PFNd cell will drive this axon terminal here. And so if you
335
+ [2055.360 --> 2061.280] look at the axons of the H delta B cells, you expect that these four common positions in the
336
+ [2061.280 --> 2067.120] bridge will lead to four offset positions in the fan-shaped body that correspond to plus 45, minus
337
+ [2067.120 --> 2078.480] 45, plus 135, minus 135. And these same anatomical insights are in a paper by Jenny Lu and Rachel
338
+ [2078.480 --> 2087.600] Wilson, with theoretical work by Shaul Druckmann. That's on bioRxiv. So let me kind of summarize here with
339
+ [2087.600 --> 2094.800] our sinusoid and vector diagram, which is summarizable as follows. We have these four bumps
340
+ [2094.800 --> 2100.000] aligned with the EPGs in the bridge, but the way these bumps, these sinusoidal bumps, project to
341
+ [2100.000 --> 2108.320] the fan-shaped body is with a unique anatomy. So PFNds go down plus 45 (I should say minus 45,
342
+ [2108.320 --> 2114.240] the way we flip the axons here, for technical reasons I won't get into). This, meaning this vector,
343
+ [2114.240 --> 2122.080] gets positioned here relative to the front. This one goes the other direction, 45. PFNvs go
344
+ [2122.080 --> 2127.280] here, but then the H delta Bs bring the peak over to here, and vice versa for the other one.
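
(A small sketch of just that offset bookkeeping, with our own function names: each PFN pathway carries an anatomical shift of plus or minus 45 degrees, and routing through the H delta B dendrite-to-axon arc adds another 180, giving the four net phases the vector sum needs.)

```python
def net_shift(anatomical_deg, through_dendrites):
    """Net phase shift onto the H delta B axons, wrapped into (-180, 180]."""
    shift = anatomical_deg + (-180 if through_dendrites else 0)
    return (shift + 180) % 360 - 180

# PFNd synapses land near the H delta B axon terminals: anatomical shift only.
print(net_shift(+45, False), net_shift(-45, False))   # 45 -45
# PFNv synapses land on H delta B dendrites, whose axons sit 180 degrees away.
print(net_shift(+45, True), net_shift(-45, True))     # -135 135
```
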
345
+ [2128.320 --> 2133.920] And so the four bumps in the bridge are sinusoids that can represent vectors, and not only that,
346
+ [2134.720 --> 2143.280] we think they get summed by the H delta B cells in this kind of 90-degree-offset way, with peaks
347
+ [2143.360 --> 2149.760] at the exact positions Larry has been talking about. So now I think we can ask: are
348
+ [2149.760 --> 2154.960] these properties sufficient? Can we say that we can build a model for how the traveling direction
349
+ [2154.960 --> 2162.160] signal is built? So we've made great progress here. We've got our four vectors represented by
350
+ [2162.160 --> 2168.240] sine waves. The sine waves are oriented in the right way to represent the angles of the vectors,
351
+ [2168.240 --> 2173.280] but there's one piece missing, and that is that, if you remember when I made this phasor analogy,
352
+ [2173.840 --> 2180.000] the phase of the sine wave corresponded to the angle of the vector that's the part we've accounted for.
353
+ [2180.000 --> 2185.360] But also, the amplitude of the sine wave should be proportional to the length of the vector, and if
354
+ [2185.360 --> 2191.040] you remember the lengths of these vectors are going to reflect the projection of the optic flow
355
+ [2191.040 --> 2198.000] onto these different directions. So we've got to have a case where optic flow modulates the amplitude
356
+ [2198.400 --> 2204.560] of these four sinusoids that we're talking about. Let me just illustrate that for one of the
357
+ [2204.560 --> 2213.440] vectors, the one corresponding to PFNd on the right side. So here's sort of the null point, where
358
+ [2213.440 --> 2218.720] we've got a vector and a sine wave. If there happens to be a lot of optic flow in that particular
359
+ [2218.720 --> 2224.640] direction, the vector gets longer, and that means the sinusoid has to get higher in amplitude if this
360
+ [2224.640 --> 2230.160] is going to work, so that when we add up these sinusoids into the H delta B, we're
361
+ [2230.160 --> 2236.000] going to get the right answer. Likewise, if there's low optic flow in this particular direction,
362
+ [2236.000 --> 2241.520] the sine wave should shrink. So again, we're going to come up now with a question for the
363
+ [2241.520 --> 2248.560] experimentalists, which is: are there amplitude modulations of these sinusoidal spatial patterns
364
+ [2248.640 --> 2257.200] in these cell types, and are they related to optic flow? So a remarkable observation by
365
+ [2257.200 --> 2263.680] Chang was that there are extremely strong modulations of the amplitude of these signals in the
366
+ [2263.680 --> 2271.360] protocerebral bridge with optic flow. So here I'm showing you the mean of nine flies when
367
+ [2271.360 --> 2277.520] he was imaging PFNds. We've phase-nulled all the signals and averaged them so we can see the
368
+ [2277.520 --> 2283.840] shape and amplitude of the GCaMP signal in the left-bridge PFNds and the right-bridge PFNds,
369
+ [2283.840 --> 2289.520] and the same for the Vs. When we presented optic flow that in this case simulates the fly drifting
370
+ [2289.520 --> 2297.840] directly backwards relative to its body, the V's became very bright and the D's became very weak.
371
+ [2297.840 --> 2302.320] And by the way, these dotted lines are sinusoidal fits and so you can see that they still are
372
+ [2302.320 --> 2308.640] sinusoidal, they just change amplitude. When the fly was simulated to be going forward,
373
+ [2309.440 --> 2316.480] the amplitude modulation inverted where the D's became bilaterally higher and the V's became
374
+ [2316.480 --> 2325.200] bilaterally weaker. And in these kind of off center, off axis directions, we saw these asymmetries
375
+ [2325.200 --> 2331.040] in amplitudes. So the answer is that we saw extremely strong modulations in amplitude, very dramatic
376
+ [2331.040 --> 2342.240] ones. And we can quantify these amplitude modulations as follows. You can take the sinusoidal fits
377
+ [2342.240 --> 2348.400] and extract from them the amplitude of the fit, or do it directly from the data, and you can see
378
+ [2348.400 --> 2354.240] that the amplitude of the left-bridge PFNds grows and shrinks as follows with the optic flow
379
+ [2354.240 --> 2360.080] direction and that's what I'm plotting here kind of the amplitude fit. And you can do the same thing
380
+ [2360.080 --> 2364.480] for the right bridge. You can see that it would peak right around here and that's what I'm showing
381
+ [2364.480 --> 2370.000] on the right. You can do the same thing for the V's and you see these strong modulations that
382
+ [2370.000 --> 2375.840] themselves are sinusoidal. So this is a sinusoidal modulation of the amplitude of a sinusoid.
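
(Putting the pieces together as a toy forward model; the cosine amplitude tuning, the baseline, and the gain knobs are illustrative assumptions rather than fits to the data. Each PFN sinusoid's amplitude follows a cosine of the drift direction relative to that pathway's preferred direction, the four phase-shifted sinusoids are summed, and the peak of the sum reads out the allocentric traveling direction. Setting a pathway's gain to zero mimics the silencing experiments described later in the talk.)

```python
import numpy as np

theta = np.linspace(-np.pi, np.pi, 360, endpoint=False)   # fan-shaped body axis
OFFSETS = np.radians([45, -45, 135, -135])                 # net shifts: PFNd L/R, PFNv L/R

def travel_bump(heading, drift, gains=(1, 1, 1, 1)):
    """Sum of four heading-anchored sinusoids (the H delta B input).

    heading : allocentric heading (radians)
    drift   : egocentric drift direction relative to the body axis (radians)
    gains   : per-pathway scale factors; zero out a pair to mimic silencing
    """
    total = np.zeros_like(theta)
    for off, g in zip(OFFSETS, gains):
        amp = g * (1.0 + np.cos(drift - off))          # baseline + flow projection
        total += amp * np.cos(theta - (heading + off))
    return total

def readout(bump):
    return theta[np.argmax(bump)]                      # bump peak = traveling direction

h = np.radians(30)
print(np.degrees(readout(travel_bump(h, np.radians(180)))))  # backward drift: ~ -150
print(np.degrees(readout(travel_bump(h, 0.0))))              # forward drift:  ~ 30
print(np.degrees(readout(travel_bump(h, np.radians(180), gains=(1, 1, 0, 0)))))
# with the backward (PFNv) pathways silenced, the bump stays near heading: ~ 30
```
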
383
+ [2377.520 --> 2383.920] So these are not, experimentally, generically observed: if you look at EPGs,
384
+ [2383.920 --> 2389.760] they're relatively stable in amplitude. And one of the things I think Larry and I want to do with
385
+ [2389.760 --> 2395.840] our hour here, which we have because it's a joint talk, is really to show the level of comprehensiveness
386
+ [2395.840 --> 2402.880] we can provide to the understanding of these signals. Like with the delta sevens, we
387
+ [2402.880 --> 2408.960] could say, hey, you know, maybe these neurons are contributing to reshaping bumps into sinusoidal
388
+ [2408.960 --> 2416.880] bumps. Here we can use the anatomy of the inputs to the central complex to give a hypothesis about
389
+ [2416.880 --> 2421.200] where these modulations are coming from, from outside the central complex. What is bringing an
390
+ [2421.200 --> 2426.960] optic flow sense from the eye into these neurons to change the amplitude? And we think we know the
391
+ [2426.960 --> 2434.080] answer. So for the PFNvs, what we need is that the left bridge grows in amplitude and shrinks in
392
+ [2434.080 --> 2439.600] amplitude with this property. So you need to modulate all these neurons kind of synchronously.
393
+ [2439.600 --> 2443.520] And so the way this works is that all these left-bridge neurons have another
394
+ [2443.520 --> 2448.880] neuropil where they get input, called the noduli. So this is, sorry, the right-bridge PFNvs
395
+ [2448.880 --> 2454.480] get input in the left noduli. They all kind of converge and have a massive, strong input
396
+ [2454.480 --> 2459.600] right here from a set of neurons that are called LNO neurons. And so before I show you those neurons,
397
+ [2459.600 --> 2465.360] let me just show you that when you image the PFNvs in the left noduli and take the mean GCaMP
398
+ [2465.360 --> 2472.000] activity here, it comports really well to the mean GCaMP, or amplitude, modulation we saw here.
399
+ [2472.000 --> 2476.160] So let's look at the right bridge. This is now imaging in the noduli. So this is a different
400
+ [2476.160 --> 2480.480] set of experiments where we image these neurons down here, and you can see they have a similar
401
+ [2480.480 --> 2486.480] modulation. So something, we think, that's bringing in inputs right here is driving this modulation,
402
+ [2486.480 --> 2491.040] and that's percolating up to the bridge; that's amplitude-modulating the sinusoid. What are those
403
+ [2491.040 --> 2496.240] neurons? Like I said, from the connectome and from previous light-microscopy anatomy, the best
404
+ [2496.240 --> 2500.640] candidates are these LNO1 cells that receive inputs outside the central complex and
405
+ [2500.640 --> 2506.480] densely innervate the right-bridge PFNvs in the left noduli. When you image those neurons,
406
+ [2506.480 --> 2512.560] you see a beautiful modulation as well, perfectly sign-inverted to the orange neurons'
407
+ [2512.560 --> 2519.120] modulation. So I believe this is an inhibitory input, a sign-inverting input, that's driving
408
+ [2519.120 --> 2525.200] these PFNvs, bringing the optic flow input into the system. When you look at the left bridge,
409
+ [2525.280 --> 2530.880] everything is symmetric, and you see that the LNO1s from the other side of the brain bring in
410
+ [2530.880 --> 2536.960] an input that could create this modulation. Symmetrically, let's go to the Ds now. When you go
411
+ [2536.960 --> 2544.640] down to the noduli, you see the matching signals, like you'd expect, and there are two neurons
412
+ [2544.640 --> 2550.640] called LNO2 that are likely to bring in optic flow inputs to these noduli, but we were technically
413
+ [2550.640 --> 2554.880] limited from imaging them. We didn't have a GAL4 line that allowed us to do it cleanly.
414
+ [2555.520 --> 2560.560] Luckily, the PFNds have a second input that tackles the left bridge or the right bridge
415
+ [2561.200 --> 2567.680] kind of en masse, and they're called SpsP neurons. There are two of them, and you can image them
416
+ [2567.680 --> 2572.960] in the left bridge and ask: are they bringing in optic flow input from outside the central
417
+ [2572.960 --> 2578.240] complex? And the answer seems to be yes. So when you image the SpsP neurons, you find that their
418
+ [2578.240 --> 2584.880] GCaMP amplitude is modulated in a way that's perfectly sign-inverted to the PFNds, and we
419
+ [2584.880 --> 2589.280] think that they're one of the inputs that brings this optic flow signal into the system,
420
+ [2589.280 --> 2595.600] modulating the amplitudes of the sinusoids so that the vector calculation can have the right
421
+ [2595.600 --> 2601.760] amplitudes for all the vectors. And so at this point, I think Larry will be able to put it all together
422
+ [2602.400 --> 2608.160] and give you a sense of how the model works for building a traveling direction signal inside the
423
+ [2608.160 --> 2615.280] system. Yeah, so let me put it all together and do that. So on the right, you're seeing
424
+ [2615.280 --> 2621.520] these four vectors that we've been talking about all along, and in the center is a schematic
425
+ [2621.520 --> 2628.880] of the central complex. The EPG signal here has been put in a certain place corresponding to a
426
+ [2628.880 --> 2635.120] particular heading of the fly, and then if you go down from the bridge into the fan-shaped body, what
427
+ [2635.120 --> 2642.000] you see is four sinusoids; those are the spatial patterns of the four PFNs representing the forward
428
+ [2642.000 --> 2648.320] and backward directions, and then a red dot, which would be the corresponding position of the H-delta-B,
429
+ [2648.320 --> 2653.680] which sums up the input from those four sine waves, and in this case is in the corresponding
430
+ [2653.680 --> 2658.240] position to the EPG. So this would be a case where the traveling direction and the heading
431
+ [2658.240 --> 2664.000] direction align. But let's look at what happens if the fly starts drifting backwards due to a
432
+ [2664.000 --> 2670.560] strong wind. So in the experiments, that's represented by having a convergent pattern of dots
433
+ [2670.560 --> 2677.040] on the screen, and Gabby showed you that the effect of that is to shrink the amplitude of the forward-
434
+ [2677.920 --> 2685.600] representing PFNds and expand the amplitude of the PFNvs. So if you look at the middle plot, that
435
+ [2685.600 --> 2693.920] corresponds to the dark brown sinusoids getting smaller and the orange ones getting bigger. That causes
436
+ [2693.920 --> 2702.880] an input that drives the H-delta-B bump all the way across to the 180-degree position. If I look
437
+ [2702.880 --> 2708.880] at it in terms of vectors, what's happening is the backward vectors are getting big and the frontward
438
+ [2708.880 --> 2714.480] vectors are getting small because of the backward drift. If you add up these four vectors the
439
+ [2714.480 --> 2721.040] way that the sinusoidal sum is doing it, you get an H-delta-B signal that indicates backward
440
+ [2721.040 --> 2726.960] traveling motion. On the other hand, if the fly moves forward, that would be represented in the experiment
441
+ [2726.960 --> 2736.640] by diverging optic flow. In this case, things flip: the PFNds get a big amplitude modulation and the
442
+ [2736.640 --> 2743.520] PFNvs get small. That's represented in the middle now by the orange sinusoids being big, the
443
+ [2743.520 --> 2750.560] other ones being small, and, well, yeah, there it is: it's sort of the brown ones getting big and the
444
+ [2750.560 --> 2756.240] orange ones getting small. That will drive the H-delta-B back to its original position. Now we're
445
+ [2756.240 --> 2761.040] heading forward. If you look at the vectors, what's happened is the forward ones are big, the backward
446
+ [2761.040 --> 2765.840] ones are small; you add them up and you get that forward direction. I'll just do one more of these: if
447
+ [2765.840 --> 2772.640] the fly is going sideways a little bit, that would be represented in the experiment by
448
+ [2772.640 --> 2782.080] moving the focus of the optic flow. That causes an asymmetric pattern in the PFNd
449
+ [2782.080 --> 2789.440] sinusoids. If you look in the middle, that means that two of these sinusoids on the right side
450
+ [2789.440 --> 2796.320] will be expanded in amplitude and the other two will be shrunk. That moves the H-delta-B bump to the right,
451
+ [2796.400 --> 2803.440] and in terms of vectors, we have big vectors on the left, small vectors on the right, and they add up
452
+ [2803.440 --> 2809.120] in that way. So this is how the system works. We can put it together into a model and compare it
453
+ [2809.120 --> 2816.160] to the data; that's what you'll see here. So on the left-hand side, the circles represent the H-delta-B
454
+ [2816.160 --> 2821.280] data that Gabby showed you right at the beginning of the talk, and the plus signs are the
455
+ [2821.280 --> 2828.560] model. You can do an analytic model here: if you notice, in Gabby's plots all of the fits that he
456
+ [2828.560 --> 2834.960] showed were cosines and sines of various forms, so you can do the whole calculation by trigonometry,
457
+ [2834.960 --> 2838.960] and it just reproduces exactly the calculation that I showed you in that equation
458
+ [2839.520 --> 2845.200] towards the beginning of the talk. On the other hand, this plot just uses the data, so there are no
459
+ [2845.200 --> 2851.440] parameters in this model: you just take the data from the PFNs, you add it the way this
460
+ [2851.440 --> 2857.520] model says it should be added, and you get those green pluses that agree very well. In the case
461
+ [2857.520 --> 2864.800] of the PFR on the right, you'll notice again a very good match of the model, but a less good match
462
+ [2864.800 --> 2869.520] to the diagonal. So there's this little deviation from the diagonal, and actually we understand that:
463
+ [2869.520 --> 2875.280] that's because there is an extra input from the PFR, and when we include that in the model, we match
464
+ [2875.280 --> 2882.560] this little deviation quite well. So I'm going to wind up my comments here, and Gabby will take it
465
+ [2882.560 --> 2888.880] to the end. But I just want to say that for a modeler or theorist, it's just a thrill to work
466
+ [2888.880 --> 2896.000] on a system where we've reached a level of precision that, in systems neuroscience, is really
467
+ [2896.000 --> 2902.160] quite remarkable. That's due, of course... flies have always been famous for genetics, and that's only
468
+ [2902.160 --> 2908.880] increased, but now, you know, Michael in the introduction talked about the advances in behavior,
469
+ [2908.880 --> 2916.880] and in the fly you can do these virtual experiments, and now we have the EM connectome. And so I think
470
+ [2916.880 --> 2923.840] it's kind of a new era of precision in both measurement and modeling that we've achieved, that
471
+ [2923.840 --> 2930.800] hopefully will expand to other systems over time. So I'm ready to declare victory, Gabby. I'm happy.
472
+ [2930.800 --> 2938.080] So what do you have to say? So maybe the last thing, data-wise, that's worth showing you is what
473
+ [2940.000 --> 2944.960] took Chang about a year to do. If Larry was able to declare victory, you wouldn't have to do
474
+ [2944.960 --> 2952.560] these experiments. But one of the great aspects of Drosophila, at least in the modern era,
475
+ [2952.560 --> 2959.920] is that we can do genetically precise perturbations. So I just want to show you four experiments that
476
+ [2959.920 --> 2965.840] Chang did, where he tried to perturb the system in a way that would test this vector model, and the
477
+ [2965.840 --> 2972.960] results turned out consistent with this overall view. So the first experiment he did was
478
+ [2972.960 --> 2978.720] to silence the EPG neurons. That should be a major hit to the system: you don't have any
479
+ [2978.720 --> 2984.720] anchoring signal, you can't orient any of these vectors to the external world. And so when you see
480
+ [2984.720 --> 2989.840] the bump in the fan-shaped body, if there even is a bump, it should be all over the map. The way to
481
+ [2989.840 --> 2994.400] represent that is that there shouldn't even be these... something very bad should be happening
482
+ [2994.400 --> 3000.320] to these vectors. And indeed, this is now in a walking fly; he did it there to be stringent,
483
+ [3000.320 --> 3005.280] because in walking flies we often see a nice alignment between this traveling bump and the
484
+ [3005.840 --> 3011.840] blue bar on the screen, because flies typically walk forward. What Chang saw in the context of an
485
+ [3011.840 --> 3017.680] EPG-silenced fly is that this purple bump still existed. So that's interesting: it means the
486
+ [3017.680 --> 3022.400] system is trying to estimate the traveling direction, you think, but it's having a horrible time
487
+ [3022.400 --> 3031.360] doing it, and the difference between the black and purple signals here is nearly uniform.
488
+ [3031.360 --> 3036.160] That means the purple bump is all over the map, all over the fan-shaped body, for a given bar position.
489
+ [3036.160 --> 3043.920] So the system indeed is really reliant on this EPG starting point. Next, Chang inhibited the PFNvs by
490
+ [3043.920 --> 3049.520] expressing in them a potassium channel that hyperpolarizes them. That should cause, in the vector
491
+ [3049.520 --> 3056.000] model, the backward-facing vectors to shorten, which means this should be a fly that thinks it's
492
+ [3056.000 --> 3061.840] traveling forward all the time. And indeed, when we image this fly flying in the context of
493
+ [3061.840 --> 3067.280] no optic flow (so this is a condition, remember, where the purple bump deviated from the blue a lot),
494
+ [3068.240 --> 3076.640] now they're very tightly aligned across our population, more so than in controls. So that's
495
+ [3076.640 --> 3082.480] consistent with the fly believing it's flying forward. What if you excite these
496
+ [3082.480 --> 3087.920] orange neurons? The way we're going to do it is by inhibiting those LNO1s, which inhibit them, so that
497
+ [3087.920 --> 3094.240] you get, you know, a disinhibition of the PFNvs. So we expect this fly to have kind of a horrible life,
498
+ [3095.040 --> 3100.960] when the light is on anyway, where it feels like it's going backward all the time. And so Chang did this
499
+ [3100.960 --> 3105.040] with two-photon activation of the optogenetic reagent, and that's what he's showing here, and you
500
+ [3105.040 --> 3109.840] can see the purple and blue bumps tend to be deviated by 180 degrees. And that was true across
501
+ [3109.840 --> 3117.040] the population: the two bumps deviated on average by 180. You can get the same result by shortening
502
+ [3117.040 --> 3123.520] the front two vectors through an optogenetic experiment, and that too caused the deviation to be, on average,
503
+ [3123.520 --> 3129.920] 180 degrees. And so that's where we'll end with the experiments. I have a little bit of a conclusion
504
+ [3131.280 --> 3135.920] to augment what Larry said beyond this, but just to say that this
505
+ [3136.560 --> 3142.160] set of perturbation experiments is what gives us even more confidence: that, and the
506
+ [3143.040 --> 3149.360] model's fit to the physiology, all together, gives us a very strong belief that this is a good working
507
+ [3149.360 --> 3155.520] view of what the system is trying to do and how it does it. All right, so for conclusions:
508
+ [3157.280 --> 3161.920] I think we've shown you that flies can explicitly track the direction their body is traveling, not
509
+ [3161.920 --> 3170.000] just the direction their head is pointing. They represent vectors, we think explicitly, two-
510
+ [3170.000 --> 3175.600] dimensional vectors, through sinusoidal patterns whose amplitude represents the length of the vector
511
+ [3175.600 --> 3183.280] and whose phase represents the angle. They can perform rotation of vectors, scaling of lengths,
512
+ [3183.360 --> 3193.440] and adding them. So vector arithmetic, we think, is happening in the system. And I want to end by saying
513
+ [3194.800 --> 3200.880] a few words to relate this work to mammalian neuroscience and our mammalian colleagues that
514
+ [3200.880 --> 3206.240] are listening to this talk. I don't do this work, and most of the members of my lab don't do this
515
+ [3206.240 --> 3212.080] work, just to get insight into the mammalian brain. The majority of brains on this planet
516
+ [3212.080 --> 3218.400] are insect brains, and so they're worth understanding in their own right. But I think there's also,
517
+ [3218.400 --> 3225.120] in this case, a relatively direct and kind of enjoyable analogy with work that both Larry and I
518
+ [3225.120 --> 3231.600] contributed to in parietal cortex. So let me build up to that. What am I getting at?
519
+ [3231.600 --> 3238.560] One way to think about the computation we just told you about is that this system is taking
520
+ [3238.560 --> 3245.200] optic flow input, which is egocentric: it reflects which way my body or head is moving; that's
521
+ [3245.200 --> 3251.440] what the optic flow says, you know, which way my body or head is moving relative to my body axis.
522
+ [3252.400 --> 3258.560] And what it's doing is converting that egocentric travel, body-centric travel, into a signal that says
523
+ [3258.560 --> 3264.800] which way am I moving relative to the world, relative to the sun. And how do you do it? You integrate
524
+ [3264.800 --> 3269.920] a sense of which way your body is moving relative to its axis with another signal that says
525
+ [3269.920 --> 3275.440] which way your body is oriented relative to the sun, or relative to the world. You combine these two,
526
+ [3276.160 --> 3280.560] and together you can say which way am I moving relative to the world. How do you combine
527
+ [3280.560 --> 3286.720] these two? You have a set of neurons that explicitly combines them. So PFNds and PFNvs are neurons
528
+ [3286.720 --> 3291.680] that are conjointly tuned; they have mixed selectivity. They're tuned both to allocentric heading,
529
+ [3292.480 --> 3296.960] so basically where the peaks of these PFN neurons are says which way am I oriented
530
+ [3296.960 --> 3304.160] relative to the sun, and then their amplitudes indicate how fast my body is moving relative to
531
+ [3304.160 --> 3309.760] the body axis. And so these kind of conjointly tuned neurons, what do we think they're doing? They're
532
+ [3309.760 --> 3316.000] transforming body-centered traveling to world-centered traveling; they're performing a coordinate
533
+ [3316.080 --> 3326.160] transformation. And what we can do in Drosophila, in the central complex, is image spatially
534
+ [3326.160 --> 3330.400] these bumps of activity, and everything makes sense and it's very beautiful to see. But
535
+ [3330.400 --> 3336.720] in mammalian systems, what people often do is image or record from single neurons
536
+ [3336.720 --> 3341.920] that are salt-and-pepper in the brain. And I just want to say a few things here. You know,
537
+ [3341.920 --> 3346.320] commonly, when you look at the mammalian brain and you record from single neurons, you'll find
538
+ [3346.320 --> 3352.880] cosine tuning to properties, and so one thing our mammalian colleagues, I believe, should be thinking
539
+ [3352.880 --> 3359.200] about is: are these potentially two-dimensional-vector-representing neurons? Are they operating in
540
+ [3359.200 --> 3364.560] a population of cells that is not spatially distributed as nicely as in Drosophila, but
541
+ [3364.640 --> 3371.920] exists as a vector representation? So that's one comment: cosine tuning might reflect that sort
542
+ [3371.920 --> 3376.800] of function, and we should consider it when we record in the mammalian brain. And the second comment
543
+ [3376.800 --> 3384.720] I want to make is, when you record in cortex, it's more the rule than the exception that you see mixed
544
+ [3384.720 --> 3392.880] selectivity for features. So if you had recorded from single neurons in the fly (we've done single-
545
+ [3392.880 --> 3397.680] neuron recordings), you would have found these PFN cells are sensitive both to heading, and that
546
+ [3397.680 --> 3404.240] heading tuning, the activity, is scaled by the optic flow. That's very nicely analogous to work
547
+ [3404.240 --> 3409.440] from Richard Andersen, and modeled by David Zipser, in the 80s, where they recorded from
548
+ [3409.440 --> 3415.440] parietal cortex neurons in area 7a. And what they saw in those single neurons is that they had
549
+ [3415.440 --> 3421.120] receptive fields that were yoked to the retina, you know, normal receptive fields for the position of a stimulus;
550
+ [3422.080 --> 3427.440] but if the eye was in a different part of the eye socket, that response was scaled multiplicatively
551
+ [3428.240 --> 3433.280] and proportionately. In a beautiful model, they imagined: oh, maybe what these neurons are doing is
552
+ [3433.280 --> 3438.960] converting stimulus position on the retina to stimulus position relative to the head; they're performing
553
+ [3438.960 --> 3444.160] a coordinate transformation in this model. And what I think we can add to this picture is that
554
+ [3444.160 --> 3450.400] this three-layer network idea, in a computational model initially by Zipser and Andersen but also by
555
+ [3450.400 --> 3457.680] others, including one of our speakers in this seminar. I think in Drosophila we have very strong
556
+ [3457.680 --> 3461.520] evidence that that is indeed what happens: there's a three-layer network that's doing a coordinate
557
+ [3461.520 --> 3468.160] transformation. And to Chang's credit, it took eight cell types, and we have a first-order functional
558
+ [3468.160 --> 3473.760] description of what roles they serve in that transformation. And that's a testament, I think, to
559
+ [3473.760 --> 3479.760] Drosophila and what we can do at a systems level of understanding, as Larry was saying, and it's what
560
+ [3479.760 --> 3484.960] might come out when we, you know, understand these circuits at that level of resolution. But
561
+ [3485.760 --> 3492.800] isn't it remarkable, personally, and I think for Larry as well, that we had spent time in cortex and
562
+ [3492.800 --> 3498.000] that anything we would do in the fly would bear any relation? And the idea that the higher brain functions
563
+ [3498.000 --> 3503.360] of the fly might be performing a coordinate transformation as one of their dominant functions, and
564
+ [3503.360 --> 3509.360] that being a hypothesized function for not just parietal cortex: people think in the
565
+ [3509.360 --> 3514.000] medial entorhinal-hippocampal system, in prefrontal cortex, there's a lot of
566
+ [3514.560 --> 3519.520] conceptions that mixed selectivity might serve this role of coordinate transformations. And so I
567
+ [3519.520 --> 3526.240] want to propose that in Drosophila we can tell you that a mixed-selectivity neuron is
568
+ [3526.240 --> 3532.800] performing that function, we believe, and maybe re-energize that view of the higher brain regions of
569
+ [3532.880 --> 3539.680] mammals as well, as one speculative hypothesis. And with that I'll end and say you can read about
570
+ [3539.680 --> 3545.440] this work in a bioRxiv paper we put out. A parallel study from Rachel Wilson and Jenny Lu
571
+ [3545.440 --> 3553.840] is also on bioRxiv, with many of the same results and ideas. And the vector-addition and
572
+ [3553.840 --> 3561.920] sinusoid ideas, as analyzed in the connectome, are in a paper, or closer to a book,
573
+ [3562.640 --> 3570.720] an enormous tome, a very extensive analysis of the fly connectome, the central complex connectome,
574
+ [3570.720 --> 3577.440] from the Vivek Jayaraman lab and those four senior authors. And with that, I'll end and thank
575
+ [3577.440 --> 3588.960] Chang for all his work. Thank you for listening; we'll be happy to take questions.
transcript/allocentric_u6v-LAy6Whk.txt ADDED
@@ -0,0 +1,502 @@
1
+ [0.000 --> 15.000] Would the four of you stand up, please?
2
+ [15.000 --> 19.040] In the coming ten seconds, you're all going to think of an emotion, something that you
3
+ [19.040 --> 23.720] could associate with something perhaps personal to you.
4
+ [23.720 --> 30.720] And when you're done, briefly run through the memory and the emotion involved.
5
+ [30.720 --> 32.320] No, no, no, don't come back to the room yet.
6
+ [32.320 --> 34.640] Just hold on to the emotion.
7
+ [34.640 --> 36.360] And so you're right there.
8
+ [36.360 --> 38.480] You look very authoritative.
9
+ [38.480 --> 41.480] Could you go into a little more detail?
10
+ [41.480 --> 43.280] Is that okay?
11
+ [43.280 --> 44.760] Thank you.
12
+ [44.760 --> 51.760] Look at me, please.
13
+ [51.760 --> 56.760] Do you have a picture in your mind?
14
+ [56.760 --> 58.560] An emotion too?
15
+ [58.560 --> 59.560] A memory too?
16
+ [59.560 --> 61.560] Is it happiness?
17
+ [61.560 --> 62.560] Thank you.
18
+ [62.560 --> 63.560] You can sit back.
19
+ [63.560 --> 66.040] Thank you.
20
+ [66.040 --> 72.560] You are the major skeptic here.
21
+ [72.560 --> 74.560] Disgust?
22
+ [74.560 --> 81.320] Thank you.
23
+ [82.320 --> 88.320] You don't actually have to show it to me, just in your mind.
24
+ [88.320 --> 95.120] I'm not entirely sure.
25
+ [95.120 --> 99.640] Would that be right if I say you kind of changed your mind?
26
+ [99.640 --> 100.640] You did?
27
+ [100.640 --> 101.640] Thank you.
28
+ [101.640 --> 105.320] Is it happiness now?
29
+ [105.320 --> 108.160] But I'm not entirely sure.
30
+ [108.160 --> 113.240] It's got to be contempt or fear.
31
+ [113.240 --> 114.240] Thank you.
32
+ [114.240 --> 115.240] There you go.
33
+ [115.240 --> 116.240] You can sit down.
34
+ [116.240 --> 120.320] And thank you.
35
+ [120.320 --> 126.480] What I want you to do, sir... you're focusing more on the memory rather than the emotion,
36
+ [126.480 --> 129.160] if I said that, would that be right?
37
+ [129.160 --> 132.560] Is it anger?
38
+ [132.560 --> 133.560] Thank you.
39
+ [133.560 --> 135.880] Your memories, I can't go into detail.
40
+ [135.880 --> 139.640] I'm not allowed to even.
41
+ [139.640 --> 144.760] What if I said it's about a conversation or an attitude from someone to you?
42
+ [144.760 --> 146.920] Is that, would that be?
43
+ [146.920 --> 147.920] Someone mentioned something.
44
+ [147.920 --> 154.160] It's like not like a complaint, but someone mentioned something.
45
+ [154.160 --> 157.040] Something about the commandeer, something about gender.
46
+ [157.040 --> 159.040] There you go.
47
+ [159.040 --> 160.040] Thank you so much.
48
+ [160.040 --> 161.040] You can sit down.
49
+ [161.040 --> 162.040] Thank you.
50
+ [162.040 --> 169.640] Before trying to know or understand even one sense among the five, people always want
51
+ [169.640 --> 172.080] to figure out the sixth sense.
52
+ [172.080 --> 176.040] Some happen to believe there is no such thing and I am one of them.
53
+ [176.040 --> 181.600] The key here would be using the five senses together more appropriately to create an illusion
54
+ [181.600 --> 182.920] of the sixth.
55
+ [182.920 --> 186.520] This is what mentalism is.
56
+ [186.520 --> 187.520] Good evening.
57
+ [187.520 --> 188.880] My name is Hadhi.
58
+ [188.880 --> 193.880] I'm a mentalist.
59
+ [193.880 --> 196.120] Thank you.
60
+ [196.120 --> 201.720] In a moment, you're all going to see two images on screen, two faces probably.
61
+ [201.720 --> 202.560] Have a look at it.
62
+ [202.560 --> 209.800] If you would, could you associate one of them with happiness and the other
63
+ [209.800 --> 211.800] one with sadness?
64
+ [211.800 --> 213.880] Is it very obvious?
65
+ [213.880 --> 216.440] That's my cute little niece there.
66
+ [216.440 --> 220.600] If you should try this.
67
+ [220.600 --> 222.720] For most of the people, they look the same.
68
+ [222.720 --> 226.320] But if you watch a little closer, you will see the emotion unfolding.
69
+ [226.320 --> 229.080] The person on the left is indeed happier than the other.
70
+ [229.080 --> 230.960] That isn't really happiness, actually.
71
+ [230.960 --> 238.320] If you watch closely, they say, the natural smile causes characteristic wrinkles around the
72
+ [238.320 --> 241.440] eyes, insincere people smile only with their mouth.
73
+ [241.440 --> 244.160] You will see this kind a lot at receptions.
74
+ [244.160 --> 245.160] Okay.
75
+ [245.160 --> 251.240] Thanks to the works of Professor Pollock and the micro-expression studies.
76
+ [251.240 --> 254.600] They suggest that you could imagine the back of the brain as a projector and then the face
77
+ [254.600 --> 256.120] comes as a screen.
78
+ [256.120 --> 260.880] Whatever projects there, even for a minute fraction of a second, should appear here.
79
+ [260.880 --> 262.720] Your thoughts are so safe with you, though.
80
+ [262.720 --> 267.920] It's just that emotion has signals.
81
+ [267.920 --> 272.360] All of us know that only about thirty-five percent of communication is verbal.
82
+ [272.360 --> 279.360] The rest is non-verbal, which involves expression, tonality, eye contact, gesture, posture,
83
+ [279.360 --> 288.160] appearance, etc., etc., which literally means that people always give up or give away.
84
+ [288.160 --> 289.160] That would be right.
85
+ [289.160 --> 293.200] People always give away much more than they realize.
86
+ [293.200 --> 295.560] These are only for the trained eyes though, fortunately.
87
+ [295.560 --> 300.760] And most of the people don't even know of its existence when it comes to communication.
88
+ [300.760 --> 302.880] So, this is my point.
89
+ [302.880 --> 307.920] If so, isn't it very obvious that a person will see two to three times more than an average
90
+ [307.920 --> 311.840] observer if he is following the non-verbal communication as well?
91
+ [311.840 --> 312.840] Just think about it.
92
+ [312.840 --> 314.680] I am not trying to make it complicated here.
93
+ [314.680 --> 315.680] Just do the math.
94
+ [315.680 --> 317.680] Are you with me so far?
95
+ [317.680 --> 318.680] Yes.
96
+ [318.680 --> 319.680] Okay.
97
+ [319.680 --> 320.680] Let me try this.
98
+ [320.680 --> 327.960] Have you ever heard of this most famous intellectual game called Twenty Questions?
99
+ [327.960 --> 328.960] How many of you?
100
+ [329.960 --> 330.960] That's a few.
101
+ [330.960 --> 332.960] Some call it reverse quiz.
102
+ [332.960 --> 333.960] There we go.
103
+ [333.960 --> 334.960] A little more.
104
+ [334.960 --> 335.960] Allow me.
105
+ [335.960 --> 339.960] If we do stand up, sir.
106
+ [339.960 --> 341.960] What's your name if I may have?
107
+ [341.960 --> 342.960] Abhishek.
108
+ [342.960 --> 345.960] Sir, would you mind as well?
109
+ [345.960 --> 346.960] Your name is?
110
+ [346.960 --> 349.960] Abhishek and Suraj.
111
+ [349.960 --> 353.960] This is how it's going to work.
112
+ [353.960 --> 357.960] I will explain how it's going to work.
113
+ [357.960 --> 363.960] What I want you to do is try and think of an object or a person.
114
+ [363.960 --> 365.960] Try and think of a person.
115
+ [365.960 --> 367.960] Someone, a well-known person, if you would.
116
+ [367.960 --> 370.960] Normally, how the game works, you should write it down.
117
+ [370.960 --> 372.960] what you are thinking, and give it to a panel.
118
+ [372.960 --> 375.960] They will judge whether the person is famous enough or not.
119
+ [375.960 --> 377.960] But we don't work that way.
120
+ [377.960 --> 378.960] So, make sure.
121
+ [378.960 --> 380.960] I hope you are thinking of a well-known person.
122
+ [380.960 --> 384.960] Now, you, sir, Suraj, what are you going to do?
123
+ [384.960 --> 387.960] You are going to think of an object.
124
+ [387.960 --> 390.960] Anything under the sun, or even beyond the sun.
125
+ [390.960 --> 392.960] But something really, really famous.
126
+ [392.960 --> 393.960] You got one?
127
+ [393.960 --> 394.960] Yes.
128
+ [394.960 --> 397.960] Now, forget 20 questions.
129
+ [397.960 --> 400.960] I'm going to frame my questions in such a manner that you could only answer
130
+ [400.960 --> 401.960] yes or no.
131
+ [401.960 --> 403.960] Forget 20 questions.
132
+ [403.960 --> 407.960] Let's say four to five questions, hopefully.
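An aside on the arithmetic of the game, not part of the performance itself: each yes/no answer carries at most one bit of information, so twenty questions can in principle separate 2^20 = 1,048,576 candidates, while four to five questions separate only 2^4 to 2^5 = 16 to 32. Narrowing to one specific famous object that quickly means the performer must be extracting far more than one bit per question, from tone, hesitation, and body language.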
133
+ [407.960 --> 410.960] You were taking.
134
+ [410.960 --> 412.960] You don't have to answer.
135
+ [412.960 --> 414.960] You only have to answer in your head.
136
+ [414.960 --> 415.960] How convenient.
137
+ [415.960 --> 416.960] Nothing out loud.
138
+ [416.960 --> 417.960] Just answer in your head.
139
+ [417.960 --> 418.960] But be genuinely honest.
140
+ [418.960 --> 421.960] I see two major skeptics in here.
141
+ [421.960 --> 423.960] So, you are making me nervous.
142
+ [423.960 --> 425.960] You understand the rule of the game.
143
+ [425.960 --> 427.960] No, you are supposed to answer in your head.
144
+ [427.960 --> 428.960] Remember.
145
+ [428.960 --> 429.960] Okay.
146
+ [429.960 --> 430.960] Whatever.
147
+ [430.960 --> 434.960] Just, here we go.
148
+ [434.960 --> 439.960] Have you checked?
149
+ [439.960 --> 440.960] Okay.
150
+ [440.960 --> 442.960] Mirroring the other person's body language.
151
+ [442.960 --> 443.960] That's good.
152
+ [443.960 --> 447.960] I will start with you then, Suraj.
153
+ [447.960 --> 451.960] Is it a big object, a massive one?
154
+ [451.960 --> 458.960] Okay.
155
+ [458.960 --> 460.960] Something like a household object.
156
+ [460.960 --> 461.960] That's confusing enough.
157
+ [461.960 --> 463.960] Let me refresh my question.
158
+ [463.960 --> 470.960] Something that we see inside kitchen.
159
+ [470.960 --> 471.960] Okay.
160
+ [471.960 --> 473.960] You're not trying to trick me, are you?
161
+ [473.960 --> 475.960] I believe you.
162
+ [475.960 --> 477.960] That is in the third question.
163
+ [477.960 --> 482.960] I just made it up.
164
+ [482.960 --> 483.960] A moving object.
165
+ [483.960 --> 485.960] Is that confusing?
166
+ [491.960 --> 493.960] Electronic?
167
+ [493.960 --> 495.960] Ah.
168
+ [495.960 --> 496.960] Accessory?
169
+ [496.960 --> 499.960] Okay, wait.
170
+ [499.960 --> 503.960] It's either the lab...
171
+ [503.960 --> 504.960] No, no.
172
+ [504.960 --> 506.960] It's a cell phone, isn't it?
173
+ [506.960 --> 507.960] Thank you very much.
174
+ [507.960 --> 508.960] You can sit down.
175
+ [508.960 --> 512.960] Thank you.
176
+ [512.960 --> 517.960] Don't change the person.
177
+ [517.960 --> 519.960] Who already is?
178
+ [519.960 --> 520.960] I believe you.
179
+ [520.960 --> 521.960] Don't change.
180
+ [521.960 --> 523.960] Whoever you are.
181
+ [523.960 --> 524.960] An Indian?
182
+ [524.960 --> 527.960] No, Indian.
183
+ [527.960 --> 533.960] Politician?
184
+ [533.960 --> 540.960] Alive?
185
+ [540.960 --> 545.960] A celebrity?
186
+ [545.960 --> 548.960] You're trying to react otherwise, or I could kind of guess.
187
+ [548.960 --> 553.960] So I have to use reverse psychology or something.
188
+ [553.960 --> 558.960] Thank you for confirming that.
189
+ [558.960 --> 559.960] People.
190
+ [559.960 --> 567.960] There is one in every crowd.
191
+ [567.960 --> 569.960] Is it an actor?
192
+ [569.960 --> 571.960] Be genuinely honest.
193
+ [571.960 --> 574.960] Is it...
194
+ [574.960 --> 577.960] What is his name?
195
+ [577.960 --> 579.960] Hello, I'm a mentalist.
196
+ [579.960 --> 580.960] Paul Walker?
197
+ [580.960 --> 581.960] Was that it?
198
+ [581.960 --> 582.960] Thank you.
199
+ [582.960 --> 583.960] Thank you.
200
+ [583.960 --> 584.960] Thank you.
201
+ [584.960 --> 595.960] When I perform in New York, not to show off, but that's where I live.
202
+ [595.960 --> 600.960] Whenever I perform there, I always call my country, our beautiful India, the land of
203
+ [600.960 --> 604.960] mystery and spiritual history, the land of mystery and spiritual history.
204
+ [604.960 --> 607.960] This is what I say every single time.
205
+ [607.960 --> 614.960] But I have generally no idea why people are so blind when it comes to the mysterious subjects.
206
+ [614.960 --> 619.960] I have no idea why we want so much to believe in the supernatural.
207
+ [619.960 --> 620.960] Don't accuse me yet.
208
+ [620.960 --> 628.960] My point is, when the natural itself is all over our culture and myth, why do we even need the supernatural?
209
+ [629.960 --> 636.960] As I mentioned earlier, people don't want to know anything about senses, but all they care about is the sixth sense.
210
+ [636.960 --> 639.960] Does it make any sense to you?
211
+ [639.960 --> 641.960] The people want their mystery at the end of the day.
212
+ [641.960 --> 645.960] They just need their mystery, no matter what.
213
+ [645.960 --> 654.960] No wonder most of the silly cons, such as psychics, Ouija boards, pendulums, jyotish, and even recently this mud brain stimulation.
214
+ [654.960 --> 655.960] Was that mid-brain?
215
+ [656.960 --> 659.960] That doesn't make any difference, it's all the same to me.
216
+ [659.960 --> 662.960] These silly cons are way too easy to sell here.
217
+ [664.960 --> 669.960] People ask about ESP a lot, extra sensory perception, ESP.
218
+ [669.960 --> 677.960] As human beings, we all make a genuine attempt, perhaps an honest attempt, to understand each other, closer and deeper.
219
+ [677.960 --> 679.960] For a mentalist, it's a default nature.
220
+ [679.960 --> 680.960] This is what we do.
221
+ [681.960 --> 685.960] We observe, study, and read people. But we all have these abilities.
222
+ [685.960 --> 687.960] You'd better believe that.
223
+ [687.960 --> 688.960] We all have these abilities.
224
+ [688.960 --> 690.960] You just have to know where to look.
225
+ [690.960 --> 693.960] And when you know that, look a little closer.
226
+ [693.960 --> 695.960] A little more than necessary.
227
+ [697.960 --> 705.960] All I'm referring to here is that little extra effort you put into your senses, into your sensory perceptions.
228
+ [705.960 --> 710.960] Maybe, and I believe this is all about extra sensory perception, ESP.
229
+ [710.960 --> 716.960] They say an average person uses 10 to 15% of their cerebral capacity, 10 to 15.
230
+ [716.960 --> 718.960] Think about it for a moment.
231
+ [718.960 --> 722.960] We don't see the things in front of us as we think we do.
232
+ [722.960 --> 724.960] It goes for all the senses.
233
+ [724.960 --> 727.960] I understand that's a bit complicated to digest.
234
+ [727.960 --> 730.960] So let's talk about observation a bit.
235
+ [730.960 --> 731.960] Okay.
236
+ [732.960 --> 734.960] Before we begin, I must admit this.
237
+ [734.960 --> 740.960] Before we begin, we talked to the organizers to arrange a few random audience members for us.
238
+ [740.960 --> 742.960] They have been chosen absolutely at random.
239
+ [742.960 --> 744.960] Two conditions we gave.
240
+ [744.960 --> 747.960] One is that they have to be somewhere visible.
241
+ [747.960 --> 752.960] So could you just, you were there, I think.
242
+ [752.960 --> 754.960] And I can't find the other one.
243
+ [754.960 --> 756.960] We haven't met though, right?
244
+ [756.960 --> 757.960] Just raise your hand.
245
+ [757.960 --> 758.960] Whoever the two people are.
246
+ [758.960 --> 759.960] Oh, you two.
247
+ [759.960 --> 760.960] Okay, good.
248
+ [760.960 --> 762.960] Just would you come up, please?
249
+ [762.960 --> 766.960] And the applause will continue until you reach there.
250
+ [766.960 --> 768.960] Because don't kill the mom.
251
+ [768.960 --> 771.960] Such a question, media.
252
+ [771.960 --> 773.960] Thank you so much for joining up here.
253
+ [773.960 --> 774.960] Right here.
254
+ [774.960 --> 775.960] Hi.
255
+ [775.960 --> 776.960] Hi.
256
+ [776.960 --> 777.960] Your name is?
257
+ [777.960 --> 778.960] Sindhaja.
258
+ [778.960 --> 779.960] I'm Adi.
259
+ [779.960 --> 780.960] Hi, Pranita.
260
+ [780.960 --> 784.960] So the second condition was, or would be, your watches.
261
+ [784.960 --> 785.960] Okay.
262
+ [785.960 --> 788.960] So here is an observation test.
263
+ [788.960 --> 789.960] What?
264
+ [789.960 --> 790.960] Just a little forward?
265
+ [790.960 --> 791.960] No, no, facing me.
266
+ [791.960 --> 792.960] Okay.
267
+ [792.960 --> 793.960] So the idea.
268
+ [793.960 --> 795.960] Is that a gift?
269
+ [795.960 --> 796.960] Yeah.
270
+ [796.960 --> 797.960] No.
271
+ [797.960 --> 799.960] I borrowed it from someone.
272
+ [799.960 --> 800.960] Someone.
273
+ [800.960 --> 801.960] Okay.
274
+ [801.960 --> 802.960] So it kind of, yeah.
275
+ [802.960 --> 805.960] But the point is, you definitely know the brand then.
276
+ [805.960 --> 806.960] Right?
277
+ [806.960 --> 807.960] No.
278
+ [807.960 --> 808.960] That is pathetic, don't you think?
279
+ [808.960 --> 812.960] How many times, how many times do we look at our watch every single day?
280
+ [812.960 --> 813.960] Think about it.
281
+ [813.960 --> 814.960] This is exactly my point.
282
+ [814.960 --> 815.960] I wanted her to say no.
283
+ [815.960 --> 817.960] Do you know, do you know your watch, which brand it is?
284
+ [817.960 --> 818.960] What is it?
285
+ [818.960 --> 819.960] What is it?
286
+ [819.960 --> 820.960] Okay.
287
+ [820.960 --> 822.960] No, I don't think they said.
288
+ [822.960 --> 826.960] Obviously, I don't have to ask my next question, which is the sub-brand.
289
+ [826.960 --> 828.960] Right under the main brand, on some watches,
290
+ [828.960 --> 830.960] they do write the sub-brand.
291
+ [830.960 --> 833.960] So like, it is in the code, whatever it is.
292
+ [833.960 --> 835.960] So there is no point asking.
293
+ [835.960 --> 839.960] See, it has nothing to do with intellectual awareness.
294
+ [839.960 --> 840.960] Okay.
295
+ [840.960 --> 842.960] My point is, to look is one thing.
296
+ [842.960 --> 845.960] To see what we look at is another.
297
+ [845.960 --> 848.960] And we often have no idea what we miss.
298
+ [848.960 --> 849.960] Wait a minute.
299
+ [849.960 --> 851.960] Before I do anything, would you hold on to this?
300
+ [851.960 --> 855.960] I want all of you in the room, everybody, would you just fold your hands?
301
+ [855.960 --> 856.960] Is that okay?
302
+ [856.960 --> 857.960] Everybody.
303
+ [857.960 --> 864.960] And how many of you know that seven out of ten people cross left over right?
304
+ [864.960 --> 865.960] Statistics say so.
305
+ [865.960 --> 872.960] I know people make arguments over using statistics for these terms, but they are there for a reason.
306
+ [872.960 --> 873.960] Look around, okay?
307
+ [873.960 --> 875.960] And you know what's interesting?
308
+ [875.960 --> 877.960] Try to do it the other way.
309
+ [877.960 --> 879.960] That's very irritating, right?
310
+ [879.960 --> 883.960] It is practically impossible to unlearn that gesture.
311
+ [883.960 --> 886.960] But that is not a subject here, not my point here.
312
+ [886.960 --> 888.960] I just want to find that someone, sir.
313
+ [888.960 --> 892.960] The person in the third row, the way you are not reacting.
314
+ [892.960 --> 893.960] Would you stand up, please?
315
+ [893.960 --> 894.960] Is that okay?
316
+ [894.960 --> 896.960] Step a little forward if you would.
317
+ [896.960 --> 901.960] And what I want you to do, I want you to stare at the screen for three seconds.
318
+ [902.960 --> 904.960] Screen, focus, but don't hurt yourself, okay?
319
+ [904.960 --> 907.960] Here we go.
320
+ [907.960 --> 908.960] And that's it, okay.
321
+ [908.960 --> 910.960] Now, look at me.
322
+ [910.960 --> 911.960] I want you to think of a time, okay?
323
+ [911.960 --> 912.960] A time on a clock.
324
+ [912.960 --> 913.960] Could be anything.
325
+ [913.960 --> 915.960] Don't tell what it is yet.
326
+ [915.960 --> 918.960] You could go back, take your time, sit back, relax.
327
+ [918.960 --> 922.960] But in between, try to set your watch to the particular time you are thinking of.
328
+ [922.960 --> 923.960] Okay?
329
+ [923.960 --> 925.960] If you have two times, go for the first one.
330
+ [925.960 --> 926.960] You got it.
331
+ [926.960 --> 927.960] Okay, thank you, please, sir.
332
+ [927.960 --> 929.960] Take your time.
333
+ [929.960 --> 930.960] So, hi.
334
+ [930.960 --> 931.960] Where are we?
335
+ [931.960 --> 932.960] Where are you hiding there?
336
+ [932.960 --> 933.960] Okay, it.
337
+ [933.960 --> 935.960] So the next question.
338
+ [935.960 --> 939.960] The indication in a watch system follows either
339
+ [939.960 --> 942.960] the numerical system, which is one, two, three, the numbers, or the
340
+ [942.960 --> 943.960] Roman letters.
341
+ [943.960 --> 944.960] You learned it in a government school?
342
+ [944.960 --> 945.960] I did.
343
+ [945.960 --> 946.960] You learned it in a government school.
344
+ [946.960 --> 947.960] So, I did.
345
+ [947.960 --> 949.960] So, we are very familiar with it.
346
+ [949.960 --> 950.960] You know the Roman letters, right?
347
+ [950.960 --> 951.960] We learned them, I would say, in a classroom.
348
+ [951.960 --> 952.960] Okay, good.
349
+ [952.960 --> 953.960] So, it's either one.
350
+ [953.960 --> 954.960] Which one does your watch follow?
351
+ [954.960 --> 955.960] Without looking.
352
+ [955.960 --> 956.960] I think Roman.
353
+ [956.960 --> 957.960] You think Roman.
354
+ [958.960 --> 959.960] You think Roman.
355
+ [959.960 --> 960.960] Think in courts.
356
+ [960.960 --> 961.960] What about you?
357
+ [961.960 --> 962.960] Roman.
358
+ [962.960 --> 963.960] That is confidence there.
359
+ [963.960 --> 965.960] Do you want me to have a look at it?
360
+ [965.960 --> 966.960] Shut up.
361
+ [966.960 --> 967.960] What about that?
362
+ [967.960 --> 968.960] It isn't, sir.
363
+ [968.960 --> 970.960] It's just a line, right?
364
+ [970.960 --> 971.960] Sorry.
365
+ [971.960 --> 972.960] It's just a line.
366
+ [972.960 --> 973.960] It isn't, it isn't Roman.
367
+ [973.960 --> 977.960] I purposely avoided that option to make you confused.
368
+ [977.960 --> 979.960] But see, both of you, it's different.
369
+ [979.960 --> 980.960] It's completely different.
370
+ [980.960 --> 984.960] There are two or three Roman letters, but nothing in there, right?
371
+ [984.960 --> 986.960] So, what are you missing?
372
+ [986.960 --> 989.960] I can see so many faces going there in between.
373
+ [989.960 --> 992.960] It doesn't help to peek at the last moment.
374
+ [992.960 --> 993.960] Okay.
375
+ [993.960 --> 995.960] There's an, here we go.
376
+ [995.960 --> 999.960] I just made you look at your watch purposely like three to four times.
377
+ [999.960 --> 1001.960] What is the exact time right now?
378
+ [1001.960 --> 1004.960] I don't want you to calculate.
379
+ [1004.960 --> 1006.960] Do you want to peek at the screen?
380
+ [1006.960 --> 1008.960] People do miss these points.
381
+ [1008.960 --> 1009.960] Come on, how many times?
382
+ [1009.960 --> 1014.960] You were looking, but you were not seeing everything as you should be, right?
383
+ [1014.960 --> 1015.960] This is my point.
384
+ [1015.960 --> 1018.960] And I will let you head back, but I'm not done with you.
385
+ [1018.960 --> 1019.960] So, please.
386
+ [1019.960 --> 1020.960] Thank you so much.
387
+ [1020.960 --> 1022.960] Round of applause for the...
388
+ [1022.960 --> 1025.960] Here we go.
389
+ [1025.960 --> 1027.960] What I wanted to do.
390
+ [1027.960 --> 1029.960] Step a little forward.
391
+ [1029.960 --> 1035.960] And stare at the screen for about four seconds, okay?
392
+ [1035.960 --> 1036.960] From now.
393
+ [1036.960 --> 1038.960] And that's it.
394
+ [1038.960 --> 1039.960] That's it.
395
+ [1039.960 --> 1040.960] That's it.
396
+ [1040.960 --> 1041.960] Name.
397
+ [1041.960 --> 1044.960] Generally, you have to be honest here, right?
398
+ [1044.960 --> 1045.960] Name the first time
399
+ [1045.960 --> 1046.960] to come to your mind.
400
+ [1046.960 --> 1049.960] If you have a time, a time on a clock.
401
+ [1049.960 --> 1050.960] Okay?
402
+ [1050.960 --> 1051.960] Okay?
403
+ [1051.960 --> 1052.960] You have something in your mind?
404
+ [1052.960 --> 1053.960] Yeah.
405
+ [1053.960 --> 1054.960] To the mic.
406
+ [1054.960 --> 1055.960] To the mic, yes.
407
+ [1055.960 --> 1057.960] Two forty-five.
408
+ [1057.960 --> 1062.960] Sir, does it ring any bell?
409
+ [1062.960 --> 1063.960] There.
410
+ [1063.960 --> 1067.960] There we go.
411
+ [1067.960 --> 1069.960] I think you will see the...
412
+ [1069.960 --> 1070.960] Yeah.
413
+ [1070.960 --> 1073.960] Maybe like two, three minutes off, because I've been talking.
414
+ [1073.960 --> 1074.960] Thank you, sir.
415
+ [1074.960 --> 1075.960] You can, you can, you can.
416
+ [1075.960 --> 1076.960] Thank you.
417
+ [1076.960 --> 1077.960] But, thank you.
418
+ [1077.960 --> 1078.960] Thank you.
419
+ [1078.960 --> 1079.960] Thank you.
420
+ [1079.960 --> 1081.960] If I ask you, look at me.
421
+ [1081.960 --> 1087.960] If I ask you, if I ask you to associate it with a day, a day of the week, Sunday,
422
+ [1087.960 --> 1096.960] Monday, through Saturday, any day, whatever is coming to you.
423
+ [1096.960 --> 1098.960] What is the day?
424
+ [1098.960 --> 1099.960] Should I say?
425
+ [1099.960 --> 1100.960] Thursday.
426
+ [1100.960 --> 1101.960] Thursday.
427
+ [1101.960 --> 1108.960] See, before I come here, yesterday or the previous day, I don't remember.
428
+ [1108.960 --> 1110.960] I happen to take a photo of my alarm clock.
429
+ [1110.960 --> 1111.960] Okay?
430
+ [1111.960 --> 1112.960] See, we have never met before.
431
+ [1112.960 --> 1113.960] Could you confirm that?
432
+ [1113.960 --> 1114.960] Yes.
433
+ [1114.960 --> 1115.960] Okay.
434
+ [1115.960 --> 1116.960] You didn't write anything and give it to someone or something.
435
+ [1116.960 --> 1118.960] It's just, you're not helping me in any way.
436
+ [1118.960 --> 1119.960] No.
437
+ [1119.960 --> 1120.960] Okay.
438
+ [1120.960 --> 1123.960] Because I saw some of the previous TED Talks and, in the comment section, you know, they
439
+ [1123.960 --> 1124.960] do.
440
+ [1124.960 --> 1126.960] And so many people are saying it's all a setup and all.
441
+ [1126.960 --> 1128.960] It is absolutely not a setup.
442
+ [1128.960 --> 1129.960] Okay.
443
+ [1129.960 --> 1132.960] So, before I open it, could you just, is this officially sealed?
444
+ [1132.960 --> 1133.960] There is no way you could know what's inside.
445
+ [1133.960 --> 1136.960] You could just tear it open.
446
+ [1136.960 --> 1142.960] Well, it's an 18-minute talk.
447
+ [1142.960 --> 1143.960] So.
448
+ [1143.960 --> 1146.960] Should I take it out?
449
+ [1146.960 --> 1148.960] It's a procedure.
450
+ [1148.960 --> 1149.960] What does it say?
451
+ [1149.960 --> 1152.960] It's a photo of my alarm clock which says 2:45.
452
+ [1152.960 --> 1153.960] Is that it?
453
+ [1153.960 --> 1154.960] Yeah.
454
+ [1154.960 --> 1155.960] The camera.
455
+ [1155.960 --> 1157.960] If you wanted.
456
+ [1157.960 --> 1158.960] Okay.
457
+ [1158.960 --> 1161.960] And you know the interesting thing.
458
+ [1161.960 --> 1163.960] You didn't say, was it morning or evening?
459
+ [1163.960 --> 1165.960] Do you have something in your mind or just-
460
+ [1165.960 --> 1166.960] Yeah.
461
+ [1166.960 --> 1167.960] You had?
462
+ [1167.960 --> 1168.960] PM.
463
+ [1168.960 --> 1169.960] Most of the people, when they think of a time, it's PM, right?
464
+ [1169.960 --> 1170.960] Yeah.
465
+ [1170.960 --> 1172.960] And was it the Thursday?
466
+ [1172.960 --> 1173.960] Thank you very much.
467
+ [1173.960 --> 1174.960] There we go.
468
+ [1174.960 --> 1178.960] No, no, no, no, wait, wait.
469
+ [1178.960 --> 1179.960] Why Thursday?
470
+ [1179.960 --> 1180.960] Was it random?
471
+ [1180.960 --> 1181.960] No.
472
+ [1181.960 --> 1184.960] You were thinking of it, like, you know, for any reason?
473
+ [1184.960 --> 1185.960] Yeah.
474
+ [1185.960 --> 1187.960] There's going to be another reason in my-
475
+ [1187.960 --> 1191.960] I had to make sure that you would think of it, because I don't have like 247 of those in my-
476
+ [1191.960 --> 1192.960] Okay.
477
+ [1192.960 --> 1196.960] So, um, you chose it at random.
478
+ [1196.960 --> 1200.960] Well, there is no such thing as random.
479
+ [1200.960 --> 1203.960] How many of you?
480
+ [1203.960 --> 1205.960] No such thing as random.
481
+ [1205.960 --> 1208.960] When your subconscious is there, back there always working.
482
+ [1208.960 --> 1214.960] That's an absolute, uh, remember this?
483
+ [1214.960 --> 1217.960] Did I ask you to ignore the kid?
484
+ [1217.960 --> 1218.960] We get influenced.
485
+ [1218.960 --> 1219.960] That's what they say.
486
+ [1219.960 --> 1224.960] We get influenced by everything we see, everything we hear, everything we listen to,
487
+ [1224.960 --> 1227.960] and you've been watching everything, from the beginning to the end.
488
+ [1227.960 --> 1229.960] And thank you very much indeed.
489
+ [1229.960 --> 1230.960] You could go right, thank you.
490
+ [1230.960 --> 1231.960] Watch out.
491
+ [1231.960 --> 1236.960] And guys, this skill, uh, it has its own curse.
492
+ [1236.960 --> 1238.960] It has its own curse.
493
+ [1238.960 --> 1241.960] Uh, it murders relationships first.
494
+ [1242.960 --> 1246.960] People call it a gift, but it has its own curse.
495
+ [1246.960 --> 1251.960] When you're trying to see the puzzle in everything, learning to solve people just by looking at them,
496
+ [1251.960 --> 1254.960] the signals are everywhere.
497
+ [1254.960 --> 1258.960] Once you start looking at it, it's impossible to stop.
498
+ [1258.960 --> 1259.960] Hope you enjoy it.
499
+ [1259.960 --> 1260.960] Thank you very much indeed.
500
+ [1260.960 --> 1261.960] Thank you.
501
+ [1261.960 --> 1263.960] Thank you.
transcript/allocentric_uMpEHwPgHzk.txt ADDED
@@ -0,0 +1,386 @@
1
+ [0.000 --> 24.000] Okay, greetings everyone. In the following talk, Lee Overman and I would like to present an archeological perspective on human spatial thinking.
2
+ [24.000 --> 36.000] I will use the first half of the talk to discuss early developments in hominin spatial cognition from about 3.3 million years ago to about 500,000 years ago.
3
+ [36.000 --> 45.000] What I hope to show is that some important developments in human spatial cognition occurred long before we were officially human.
4
+ [45.000 --> 51.000] I will then pass it on to Dr. Overman who will discuss some more recent developments.
5
+ [54.000 --> 74.000] I would like to start off with a fairly well known example from the recent past, which is Ernest Shackleton's journey from Elephant Island in Antarctica to South Georgia Island, which was a journey of about 1,500 kilometers across the stormiest oceans on Earth.
6
+ [74.000 --> 98.000] This is a remarkable bit of sailing. It was also a remarkable bit of navigation. Of course, he had some help. His navigator was the renowned Frank Worsley, and Worsley had a sextant and an accurate chronometer, and using these aids, he was able to navigate a perfect journey across the ocean.
7
+ [98.000 --> 124.000] This was actually not really unique. Micronesian navigators do similar things. For example, they were able to sail their outrigger canoes from Saipan in northern Micronesia all the way to central Micronesia, down to Puluwat down here, which is a journey of about 600 miles or about 900 kilometers.
8
+ [124.000 --> 132.000] Again, it's across the open ocean. In this case, they don't have a sextant and they don't have an accurate chronometer.
9
+ [132.000 --> 146.000] They do use a system of knowledge that is based on a very clever version of dead reckoning, which I think Dr. Overman will be talking about, and also celestial navigation using star positions.
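A rough sketch of the dead-reckoning idea mentioned here, as runnable Python; this is an illustration only, not the Micronesian technique itself, and the headings, speeds, and legs are invented for the example:

    import math

    def dead_reckon(legs, start=(0.0, 0.0)):
        # Each leg is (heading in degrees clockwise from north, speed, duration).
        # Accumulate east (x) and north (y) displacement leg by leg.
        x, y = start
        for heading, speed, duration in legs:
            t = math.radians(heading)
            x += speed * duration * math.sin(t)  # east component
            y += speed * duration * math.cos(t)  # north component
        return x, y

    # Example: 10 hours at 5 knots on heading 045, then 6 hours on heading 090.
    print(dead_reckon([(45.0, 5.0, 10.0), (90.0, 5.0, 6.0)]))

Real navigators then correct such running estimates against external fixes, which is what Worsley's sextant and chronometer were for.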
10
+ [146.000 --> 160.000] But what I'm going to talk about today a little bit is how did this evolve? That is where did this ability come from? And how old is it? How long have we been able to do this?
11
+ [160.000 --> 174.000] We humans use two basic methods of wayfinding and a number of hybrid intermediate forms. One form is known as route following, and you get this a lot from Google Maps.
12
+ [174.000 --> 194.000] If you ask how to get from point A to point B, this is usually what you get. For example, on the right you see, after Park Street, you need to take a right on Summer Street, which takes you to Holland Street, and so forth. So basically it's moving through a system of landmarks, from one landmark to another landmark.
13
+ [194.000 --> 205.000] And it's a very effective way for moving about landscapes one is already familiar with; it doesn't work very well for landscapes you know nothing about.
14
+ [205.000 --> 223.000] So the second system we use is often called survey knowledge. And this is where you use some kind of external spatial framework to orient yourself in space, to determine the location of your target,
15
+ [223.000 --> 239.000] and how to get from your current position to your intended position. This is what we use in the usual maps that people are familiar with. And that perspective takes a bird's eye view, that is, as if you're looking down on the surface of the earth.
16
+ [239.000 --> 246.000] There are other ways of doing it, but that's the most familiar one for us.
17
+ [246.000 --> 255.000] It turns out that these two systems of wayfinding, which is the preferred word, rely on different cognitive systems.
18
+ [255.000 --> 265.000] Route following, for example, relies on very basic topological relationships: turn left, turn right, it's inside, it's outside.
19
+ [265.000 --> 277.000] And it also relies a lot on long-term memory, on one's ability to remember landmarks on the landscape and remember what choice to make at a particular landmark.
20
+ [277.000 --> 292.000] And humans have developed a lot of very interesting mnemonic devices for doing this. One that a former student of mine talks about a lot was used by the Comanche in the Southern Plains of the United States when they were raiding into Mexico.
21
+ [292.000 --> 305.000] And they would sit down around a fire and ask the intended participants to memorize a series of landscape features.
22
+ [305.000 --> 324.000] And after they had succeeded in memorizing them, the features were recorded as little marks on a stick, so that when they were actually moving across the landscape they could move down from one notch to the next, remember what the landscape feature was supposed to be and what they were supposed to do when they got there.
23
+ [324.000 --> 328.000] And this is again a form of route following.
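A minimal sketch of route knowledge as data, in the spirit of the notched stick just described; the landmarks and actions are invented for illustration:

    # Route following: an ordered list of (landmark, action) pairs,
    # advancing one notch per landmark reached.
    route = [
        ("river crossing", "turn south"),
        ("twin buttes", "keep them on your left"),
        ("dry creek bed", "follow it downstream"),
    ]

    for notch, (landmark, action) in enumerate(route, start=1):
        print(f"Notch {notch}: at the {landmark}, {action}")

Note that nothing in this structure supports computing a novel shortcut; that is exactly the gap survey knowledge fills.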
24
+ [328.000 --> 350.000] Survey knowledge relies on some other cognitive systems, in particular something known as allocentric perception, which is one's ability to construct a third perspective, or a second perspective, that you don't have direct access to; that is, something that you cannot see, you have to imagine that perspective.
25
+ [350.000 --> 362.000] And it also relies on some understanding of spatial quantity, that is, spatial amount, and how far one has traveled.
26
+ [363.000 --> 386.000] Now my particular specialty is cognitive archaeology. And what can a cognitive archaeologist do to add to a study of human spatial thinking? Cognitive archaeology, by the way, is a fairly simple idea: you look at archaeological remains and try to reconstruct something about how people were thinking in the past.
27
+ [386.000 --> 404.000] There are two pieces of information that cognitive archaeology supplies to the study of cognitive evolution. One is the timing of developments, that is, when did something occur and, more importantly, in what sequence did these things occur.
28
+ [405.000 --> 421.000] So for example, in our case, we could ask the question: did route following and survey knowledge evolve at the same time or not, or did one precede the other? And as we'll see, route following has almost certainly been around longer than survey knowledge.
29
+ [421.000 --> 439.000] The second thing that archaeologists can supply is the context, that is, what was the world like, in a sense, when these particular abilities evolved: which hominin ancestor first demonstrated the ability, how did its way of life select for that particular ability.
30
+ [439.000 --> 454.000] There are a lot of caveats that go along with cognitive archaeology; we have to keep in mind that the archaeological record is actually a relatively poor sample of what people were doing in the past.
31
+ [455.000 --> 463.000] There is also a sliding scale of resolution; that is, the deeper we look in the past, the less is preserved and the less we know.
32
+ [463.000 --> 471.000] For example, until very late in the Stone Age, there is almost no direct evidence of wayfinding per se.
33
+ [472.000 --> 485.000] I think Lee will talk about one example for Neanderthals. When we get up closer to the modern world, after about 200,000 years ago, we can begin to see some things about wayfinding, but prior to that we can't.
34
+ [485.000 --> 500.000] So what do we have to do? We have to be clever. We have to try to extract useful insights from the evidence we have in abundance, and that is stone tools, and the earliest stone tools are about 3.3 million years old.
35
+ [501.000 --> 509.000] Now for the benefit of people who are listening who don't know anything about stone tools, what I'm going to do is play a video.
36
+ [509.000 --> 529.000] It's about 7 minutes long, prepared by Nick Toth from Indiana University. It presents basic stone knapping, that is, how you make a stone tool, and it also introduces an important tool type known as the hand axe, and I will have a lot to say about hand axes. So with any luck this will work.
37
+ [540.000 --> 550.000] My name is Nicholas Toth. I am the co-director of the Stone Age Institute and a professor of anthropology at Indiana University.
38
+ [551.000 --> 560.000] I want to show you what our ancestors started doing about 2.5 million years ago that we can see on the African continent.
39
+ [560.000 --> 573.000] The earliest archaeological record consists of very simple stone tools, and they discovered around this time that if you took two natural cobbles, say from a gravel bar of a river, you could hit one against the other one.
40
+ [576.000 --> 583.000] And knock off these razor sharp pieces of stone that archaeologists call flakes. So in a matter of a few seconds.
41
+ [583.000 --> 611.000] I'm producing these very sharp, razor-blade-like materials that can be used for cutting up the carcass of an animal that you might kill, or maybe you find one that's been left by carnivores with a little bit of meat left on the bones that you can cut off.
42
+ [611.000 --> 626.000] These are literally razor sharp. If I were to drag this across my finger I would cut myself quite badly. So when you think of the early stone tools don't think that they're not razor sharp. They're sharp as a surgical scalpel.
43
+ [626.000 --> 646.000] And we can use these for a range of activities, as our ancestors did in the early Stone Age, such as butchering animals and working wood; say you want to make a spear for hunting or a digging stick for digging up roots and tubers, from the branch of a tree, these are great tools for the shaping that you would do for that.
44
+ [646.000 --> 652.000] And so we see this trajectory in human evolution starting around two and a half million years ago.
45
+ [652.000 --> 674.000] These very simple types of stone tools, and it's soon after that, by about two million years ago, that we have really good evidence for the expansion of the human brain, for the spread out of Africa, and shortly after that the reduction in the size of the teeth and the jaws of our ancestors as well.
46
+ [674.000 --> 694.000] And starting about one and a half million years ago they started getting more ambitious and making forms that we call so-called hand axes, as they were probably held in the hand and used as cutting tools. This would be a more refined one, from perhaps about half a million years ago or so, made out of flint.
47
+ [694.000 --> 704.000] And so the idea of this is you would take a larger piece of stone and kind of imagine that form in the middle of it.
48
+ [704.000 --> 722.000] And then again using a hammer slowly but surely marching around the edges and knocking off flakes. I'm using the scar of the flake that was just knocked off as a striking platform to go in the other direction.
49
+ [722.000 --> 729.000] And the idea is you want to march all the way around this piece.
50
+ [729.000 --> 742.000] So I'm not thinking too much about the final shape yet. I'm just kind of trying to produce an edge around the circumference of this piece.
51
+ [752.000 --> 765.000] Okay, and so it takes me about another five minutes to go all the way around this. And then the next stage that our ancestors learn to do.
52
+ [765.000 --> 775.000] That's called hard hammer percussion, where you use a hard hammer to knock off flakes. Another thing they learned to do, which is really interesting and somewhat counterintuitive,
53
+ [775.000 --> 785.000] is that you actually steepen the edge. You dull the edge by removing these little flakes with a smaller hammer.
54
+ [785.000 --> 789.000] And then grinding the edge.
55
+ [789.000 --> 799.000] And then you take a softer material, maybe a piece of hardwood or a fragment of elephant tusk; in this case, I'm using an antler of a deer.
56
+ [799.000 --> 805.000] And you're biting right into the edge.
57
+ [805.000 --> 810.000] And you're able to remove much longer, thinner flakes when you do that.
58
+ [810.000 --> 816.000] And this is a way of controlling flaking.
59
+ [816.000 --> 819.000] Do it one more time.
60
+ [819.000 --> 826.000] You get a much thinner product at the end of it. This is called the soft hammer technique.
61
+ [826.000 --> 838.000] I'm going to try to drive a flake off of this side by striking on this side.
62
+ [838.000 --> 840.000] See how I did that.
63
+ [840.000 --> 846.000] And by doing that going around, you can carefully shape the edge.
64
+ [846.000 --> 852.000] And it makes a superb butchery tool.
65
+ [852.000 --> 858.000] And when people ask why did they bother to make something this big? They were butchering large animals.
66
+ [858.000 --> 865.000] The size of zebra and buffalo on occasion. These are big mammals that weigh hundreds and hundreds of pounds.
67
+ [865.000 --> 871.000] And would you rather use a razor blade for butchering one of these animals?
68
+ [871.000 --> 879.000] Or would you rather use basically a two-fisted tool like this? You have a lot more cutting edge, a lot more weight behind what you're doing as well.
69
+ [879.000 --> 887.000] And so when we look at the history of technology, you're seeing changes in the refinement of stone tools over time.
70
+ [887.000 --> 894.000] By the time you get to Neanderthals and early modern humans, they're probably starting to haft tools onto handles.
71
+ [894.000 --> 899.000] You get to using some kind of adhesive material like pitch from a pine tree.
72
+ [899.000 --> 904.000] Or by using sinew or other material to lash tools together as well.
73
+ [904.000 --> 919.000] And so we see this progression and refinement of technology through time, especially with stone, but starting around 80,000 years ago, we're also seeing refinement and bone working that they're starting to make bone tools for the first time as well.
74
+ [919.000 --> 932.000] And we strongly feel that technology is a very important element in the evolution of the human brain, that without technology we wouldn't have the high-quality diet that we need
75
+ [932.000 --> 943.000] for driving that brain evolution. Your brain is about 3% of your body weight, but about 20% of your energy intake.
76
+ [943.000 --> 954.000] So it's a very expensive organ to have. So we must have had a very good reason to have a large brain; it probably has to do with becoming very social animals.
77
+ [954.000 --> 961.000] We're living in larger groups. We have to deal with the number of individuals politically and socially.
78
+ [961.000 --> 967.000] And so more intelligent animals are able to interact socially in a better way.
79
+ [967.000 --> 980.000] And so probably another payoff of these larger brains over time is that we've become more inventive and experimental and try new techniques of making and using tools as well.
80
+ [980.000 --> 995.000] And we become the consummate, the most important dedicated tool maker in the history of the earth as we are today.
81
+ [995.000 --> 1010.000] All right, why isn't this working?
82
+ [1010.000 --> 1015.000] Okay, my to do this.
83
+ [1015.000 --> 1029.000] Okay, you just heard Nick Toth talk about basic stone knapping. This is a picture of Nick Toth's brain under a positron emission tomography scan, done about 20 years ago.
84
+ [1029.000 --> 1044.000] And I present this to give you some idea of what's going on in the brain when Nick Toth is stone knapping. In this case, what Nick did was just knock off basic flakes for about five minutes; after that they injected him with a
85
+ [1044.000 --> 1059.000] radioactive tracer, then he jumped in the machine that gave a scan of his brain. There are a couple of things I want to point out; first of all, up here, if you can see this.
86
+ [1059.000 --> 1069.000] This is the superior parietal lobe, and it's involved in spatial cognition; you can see it here also.
87
+ [1069.000 --> 1083.000] What this scan really tells you is that stone knapping is a visually guided motor procedure, and that guiding action in space is one of the important things that goes on in stone knapping.
88
+ [1083.000 --> 1110.000] Interestingly, what you don't see here is much activation of the frontal lobes. It turns out Nick Toth is an expert stone knapper, and as a consequence, when he knaps like you saw, he doesn't really engage the major working memory processes of the brain, because it's more or less an automatic procedure for him.
89
+ [1110.000 --> 1117.000] If we're going to talk about the evolution of spatial abilities a natural place to start is to talk about non human primates.
90
+ [1117.000 --> 1127.000] What do non-human primates do in terms of wayfinding? What is their spatial cognition like? We could go into a lot of detail about this, but I'm going to sprint through it.
91
+ [1127.000 --> 1139.000] If we look at chimpanzees in their natural circumstances out in the wild, if they're traveling from point A to point B, they almost invariably take an established path.
92
+ [1139.000 --> 1148.000] They almost never go across country and almost never break a new path; that is, they follow established routes.
93
+ [1148.000 --> 1168.000] So this would be a kind of route following, and this is how primates get around their territories. Anthropoid primates especially have a detailed memory of the resources that are available in their territories and where those resources are located, and they can get to them efficiently using route following.
94
+ [1168.000 --> 1178.000] We can also ask about this sort of general spatial cognition. This is a little harder to do. About 60 years ago,
95
+ [1178.000 --> 1190.000] there was a flurry of interest in what was called chimpanzee art, and a lot of zoos made a lot of money off of selling chimpanzee art. The way they would do this, they get a compliant chimpanzee.
96
+ [1190.000 --> 1207.000] That's a bit of an issue in and of itself. They sit it down in front of finger paints and basically ask it to create, and this example on the right is an example of a chimpanzee painting that was then sold for thousands of dollars.
97
+ [1207.000 --> 1224.000] What they don't tell you about this is that whoever is assisting the chimpanzee has to take the piece of art away, because the chimpanzee will continue to do this until the whole sheet is covered in brown.
98
+ [1224.000 --> 1241.000] Because they enjoy the process, they enjoy the color; they're not actually making images. They're engaging in a motor procedure. Very interesting and profitable for zoos, but it doesn't turn out to tell us very much about spatial cognition.
99
+ [1241.000 --> 1258.000] What most non-human primates use in their basic spatial repertoire are what we would call topological relationships: left and right, up and down, inside, outside, those sorts of basic spatial relationships.
100
+ [1258.000 --> 1277.000] And this happens to be true of the earliest stone tools. So you saw Nick Toth making one of these earlier, from about 3.3 million years ago, and I should say that Nick Toth video was made before a new site was found in 2015 with stone tools that date back to 3.3 million years.
101
+ [1277.000 --> 1283.000] But between 3.3 million years ago and 1.8 million years ago, that's a fairly long time.
102
+ [1283.000 --> 1307.000] Hominin technology was pretty simple: it consisted of knocking flakes off of cores, using the flakes for cutting, using the cores for crushing. Not much happens. The basic topological, excuse me, the basic spatial repertoire of these early hominins was basic topological relationships. They were not interested in the shape of the
103
+ [1307.000 --> 1327.000] tool; they were not really very interested in the shape of the edge. They were interested in whether it was sharp or it wasn't sharp. So this prevailed for a million and a half years, which is really quite a long period of time.
104
+ [1327.000 --> 1346.000] But about 1.8 million years ago this artifact appears on the scene. It's called a hand axe, probably slightly a misnomer: it probably was never used as an axe, or very rarely ever used as an axe, but it was probably held in the hand.
105
+ [1346.000 --> 1375.000] One particular way to understand a hand axe is that it was the hominin's solution to the problem of producing a large hand-held cutting tool, and John Gowlett, who is arguably the world's authority on hand axes, has come up with what he calls a sort of ergonomic design explanation for hand axes: that hand axes instantiate these
106
+ [1375.000 --> 1404.000] six ergonomic design features. One is called a glob butt, which is basically down here, which is basically the heavier end you hold in the palm of your hand; so it's a little thicker, it's a little heavier, it allows you to get a good grip. Forward extension is extension away from the palm: we're trying to produce an effective cutting tool with a long cutting edge, and so if you extend it,
107
+ [1404.000 --> 1417.000] and so if you extend it away from the palm you get a more effective cutting tool. Edge support is making the bifacial edge; you saw Nick doing that. Lateral extension is interesting; it's this dimension.
108
+ [1418.000 --> 1432.000] It turns out that if you try to make a cutting tool like this that's too narrow, it twists in the hand, and to cut down on the twisting you need to give it some lateral heft, and that's what this does. But then, stone tools are heavy.
109
+ [1432.000 --> 1454.000] And if you're using them they're very tiring, so if you reduce the weight that's helpful, and the only way really to do this with a large cutting tool is to reduce the thickness, because you don't want to reduce the length and you don't want to reduce the lateral extension. And then John argues for a bit of skewness, for handedness; I don't actually agree with him about that, tools get passed around.
110
+ [1455.000 --> 1456.000] So.
111
+ [1462.000 --> 1491.000] One of the features that appears very early on hand axes, and the one that actually helps us out a bit here, is something called overdetermination. So Gowlett's ergonomic imperatives fall short of a complete account of hand axe form; the hand axe stone knappers often added features that were not essential to the artifact's ergonomic functionality. In other words, they overdetermined the form.
112
+ [1492.000 --> 1510.000] And this overdetermination was linked, we think, to visual appeal, and it enables us to see some things about spatial cognition.
113
+ [1510.000 --> 1516.000] Overdetermination in the direction of regular forms is the most obvious.
114
+ [1516.000 --> 1524.000] Imposition of shape is one of the most salient features we see on hand axes.
115
+ [1524.000 --> 1536.000] And the shape that they imposed most often was symmetry, almost from the very beginning. So this is a hand axe from Olduvai Gorge, it's about 1.6 million years old.
116
+ [1536.000 --> 1547.000] The hominin stone knappers went to a great deal of trouble to produce a bilateral symmetry on a big flake of lava.
117
+ [1547.000 --> 1563.000] And if you go back and look at the order of flake removals, which one can do if one looks at this very carefully, it's clear that the knapper flipped the tool back and forth, trimming different parts of the edge, in order to achieve this bilateral symmetry.
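One way to make the idea of imposed bilateral symmetry concrete is to score an outline against its own mirror image. This is a toy measure in Python, not the procedure of any particular hand-axe study:

    import numpy as np

    def asymmetry_score(outline):
        # outline: (n, 2) array of x, y points along the artifact's edge.
        # Reflect across the vertical midline, then take the mean
        # nearest-neighbour distance; 0.0 means perfectly mirror-symmetric.
        pts = np.asarray(outline, dtype=float)
        mirrored = pts.copy()
        mirrored[:, 0] = 2 * pts[:, 0].mean() - mirrored[:, 0]
        dists = np.linalg.norm(mirrored[:, None, :] - pts[None, :, :], axis=2)
        return dists.min(axis=1).mean()

    # A symmetric diamond scores 0.0; nudge one point and the score rises.
    diamond = [(0.0, 1.0), (0.5, 0.0), (0.0, -1.0), (-0.5, 0.0)]
    print(asymmetry_score(diamond))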
118
+ [1566.000 --> 1583.000] So what does that tell us? First of all, I have to remind you of a few things about visual processing. Visual information travels through the eyeballs and passes to the occipital cortex of the brain, which does initial visual processing.
119
+ [1583.000 --> 1606.000] Vision is arguably the best understood of the sense systems, and we share almost all of our visual system with non-human primates; so things like stereoscopic vision and color vision are things that we share with anthropoid monkeys and also with apes.
120
+ [1607.000 --> 1618.000] So what happens with the imposition of form on artifacts? This is the first interesting thing that we can see in the evolution of human spatial abilities.
121
+ [1618.000 --> 1633.000] We saw earlier, when we saw Nick Toth's brain, that spatial cognition is primarily a parietal lobe function, and this is what is sometimes called the dorsal stream of visual processing.
122
+ [1633.000 --> 1644.000] Shape recognition, on the other hand, that is, the processing of shapes, occurs down here in the temporal lobes, and this is sometimes known as the ventral stream.
123
+ [1644.000 --> 1659.000] So the first thing that has to happen for the hominin to impose a shape on an artifact is to coordinate these two systems, which are generally not connected to one another, and we assume it happens someplace
124
+ [1659.000 --> 1673.000] in some kind of central control, if not working memory then some part of frontal lobe function.
125
+ [1673.000 --> 1688.000] So if we go back to our FLK West hand axe from Olduvai Gorge, we see that the hominin who made this was coordinating the dorsal and ventral streams of visual processing in order to impose this symmetrical shape.
126
+ [1688.000 --> 1699.000] Symmetry is an example of a good gestalt form, and recognition of symmetry occurs in cell groups of the fusiform gyrus of the temporal lobes.
127
+ [1699.000 --> 1716.000] Now it turns out that symmetry recognition is very old in an evolutionary sense; the ability to recognize a symmetry is shared with lots of organisms, and this is adaptive: predators are symmetrical, food is symmetrical, it's a good idea to be able to see symmetrical things.
128
+ [1717.000 --> 1723.000] But what's interesting here is the hominids start imposing this form on stone tools.
129
+ [1723.000 --> 1735.000] They also imposed radial symmetry on stone tools. These are artifacts that are a little bit more recent in time, about 1.5 million years old. This artifact on the left is very interesting.
130
+ [1735.000 --> 1751.000] It's a very large artifact, it's 20 centimeters in diameter, doesn't appear to have been used, and it's clearly modified. We don't know what they were doing with these things, it's kind of a puzzle, but they were clearly imposing a radial symmetry on stones.
131
+ [1751.000 --> 1764.000] Louis Leakey suggested these were bola stones, that they were used as projectiles, but you don't really need them to be perfectly round in order to be a projectile, unless you're playing cricket or baseball or something.
132
+ [1766.000 --> 1781.000] So for the next million years, after a million and a half years ago, hominins very slowly acquired techniques that gave them greater control over artifact shape. Nick Toth talked about some of these in the video.
133
+ [1781.000 --> 1792.000] They occasionally invested their hand axes with attractive aesthetic effects, and in doing so they utilized some higher-level visual processes, which I'm going to talk about next.
134
+ [1792.000 --> 1802.000] But they also utilized regularities in shape that reveal some interesting things about abilities in congruency, which is a component of Euclidean thinking.
135
+ [1802.000 --> 1812.000] So first let me talk a little bit about what Helmut Leder calls implicit visual effects.
136
+ [1812.000 --> 1831.000] These are effects that artists use today to enhance their visual presentations, and hominins used some of them as well. Then I'll say a word about allocentric perspective.
137
+ [1831.000 --> 1847.000] The first of these implicit effects is known as peak shift: if a stimulus provides pleasure, an exaggerated stimulus supplies more pleasure. That's basically what's behind it. It's an effect that's used a lot in political cartoons.
138
+ [1848.000 --> 1864.000] So as you know, political cartoonists like to exaggerate the sizes and the shapes of features as a way to make humor and also make political points; Donald Trump's orange hair is a great example, or
139
+ [1864.000 --> 1868.000] Barack Obama's ears are another example of peak shift.
140
+ [1868.000 --> 1882.000] But what we see in stone tools most often, the one that's most clear, is gigantism, which is making very, very large tools. This tool in the middle is a hand axe; it weighs well over three kilos.
141
+ [1882.000 --> 1911.000] It probably was not used as a hand tool; it may not even have been used at all. What the hominins did was invest a lot of effort to produce these very large stone tools, and there doesn't seem to have been much payback in terms of doing that: even if you used the one in the middle two-handed, you would get tired very quickly. So we think they were doing it as a way to show off, as a way to sort of visually impress someone else.
142
+ [1912.000 --> 1929.000] It's not the only form of peak shift. Another form is what I call hypertrophic forms, that is, taking the basic shape of a hand axe and exaggerating it in some way.
143
+ [1929.000 --> 1937.000] So the one on the left is a very long, narrow hand axe; the one on the right is what's called a twisted ovate, and I'll show you one of those a little bit later.
144
+ [1937.000 --> 1953.000] If you turn this hand axe and look at it from the profile, it looks like the person who knapped it held it in one hand and twisted it around the central pole; they have this nice S-twist on the edges. These are very beautiful artifacts for the most part.
145
+ [1953.000 --> 1958.000] A third form of peak shift is color. The hominins often selected
146
+ [1958.000 --> 1969.000] a class of raw material that had a beautiful color or a beautiful quality to it. This one on the right is probably, I think, in the running for the most beautiful hand axe ever found.
147
+ [1969.000 --> 1981.000] If you look at it carefully you can see it's made out of layered stone, it's actually a form of ironstone, and the knapper made the basic hand axe shape and then scalloped the edges, all the way down the edges.
148
+ [1981.000 --> 2004.000] So they made these little scalloped flakes along the edge that showed off the layered nature of the color. This is an absolutely beautiful artifact, and again, it's a way to both invest the artifact with aesthetic appeal and perhaps advertise one's skill as a stone knapper.
149
+ [2004.000 --> 2033.000] I want to say a little bit about the second of these implicit effects, which is prototypicality. As I think many of you know, the way the brain assigns membership to categories is by resemblance to prototypes, not by trait lists but by resemblance to prototypes, and the prototype of the hand axe is very close to a form called the hemi-lemniscate, which, if you know your mathematics, is half of a lemniscate of Bernoulli.
150
+ [2033.000 --> 2062.000] A hand axe turned on its side is very close to half of a lemniscate of Bernoulli. I'm not arguing that the hominins understood mathematics in this sense, but there is a regular shape that's associated with hand axes, and it appears to be a regularization of those ergonomic design features that Gowlett talked about earlier. That is, if you take that basic
151
+ [2062.000 --> 2071.000] glob butt, forward extension, etc., and regularize it, this is what you get.
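For reference, the standard equations for the lemniscate of Bernoulli (not given in the talk, but the geometry is elementary) are

\[
(x^{2} + y^{2})^{2} = a^{2}\,(x^{2} - y^{2}) \quad \text{(Cartesian)},
\qquad
r^{2} = a^{2}\cos 2\theta \quad \text{(polar)}.
\]

A hemi-lemniscate is a single lobe, e.g. the polar curve restricted to \(-\pi/4 \le \theta \le \pi/4\), which traces exactly the rounded teardrop outline being described.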
152
+ [2071.000 --> 2091.000] And this prototypical form, you might call it a teardrop form if you don't want to call it a hemi-lemniscate, was the primary target for stone knappers for about a million and a half years, and that's a staggeringly long period of time to have a single
153
+ [2091.000 --> 2118.000] target for your technology. We don't quite understand why that's the case. From an aesthetic point of view, the closer an artifact gets to this prototypical form, the more pleasing it is, the prettier we think it is, including to us today; so it wasn't just hominins half a million and a million years ago who were finding aesthetic appeal in these objects.
154
+ [2118.000 --> 2141.000] This one in the middle I'm especially fond of; it's from the site of Gesher Benot Ya'aqov, it's somewhere between 600 and 800 thousand years old, and it's just a beautiful artifact in black lava.
155
+ [2141.000 --> 2165.000] I want to say a little bit more about regular geometric forms, because Stanislas Dehaene, a French neuroscientist, recently made an interesting argument that people, that is, modern humans, are sensitive to regular geometric shapes, not just symmetries but all kinds of regular geometric shapes, pentagrams and so forth.
156
+ [2165.000 --> 2183.000] We're very good at seeing these things; we can select them very quickly out of arrays of line drawings. Non-human primates don't seem to be, and he's done the research to support this: we pick out regular geometric shapes very rapidly, non-human primates don't.
157
+ [2183.000 --> 2212.000] And Dehaene suggests that this perceptual bias is as important as language in grounding our abilities in mathematical thinking. I don't know about that, somebody may have some thoughts about that, but what struck me is that regular hand axes like we've been talking about either reflect the initial stirrings of this bias towards regular shape, or, and I think this is a little bit more provocative, that stone knapping, making hand axes for millions
158
+ [2213.000 --> 2232.000] of years, may have been the avenue by which this bias evolved. No one's really examined that very carefully; I'm tempted to do it, so let me take care of that. I think that's probably what we're going to do.
159
+ [2232.000 --> 2260.000] The third implicit effect is familiarity, and this just simply means that something that's a familiar form to you will give you more pleasure than an exotic form. We could argue about that a little bit, but this is really hard to detect in the archaeological record, because what you need are the products of a single social group, and we have almost no examples of that from the early Stone Age. The only example
160
+ [2260.000 --> 2288.000] comes from the English site of Boxgrove, which is a landscape that was exposed when sea level dropped, for about 80 years, before sea level rose again and covered the landscape. We have hundreds of hand axes from this landscape, and they were probably all manufactured by the same social group. You can see some examples on the right here; they look as if they were cut out with a cookie cutter. And in this case,
161
+ [2288.000 --> 2298.000] the hand axes date to about 500,000 years ago, but the hominins were clearly following a community norm of some kind.
162
+ [2298.000 --> 2317.000] Finally, this brings us, in a roundabout way, to allocentric perception. If you recall, allocentric perception is the ability to construct a point of view not available to you directly, one that either someone else has or that an imaginary perspective has.
163
+ [2317.000 --> 2340.000] Forty years ago, an evolutionary psychologist named Irwin Silverman put together a number of experiments investigating allocentric perception, and what he discovered is that this is one of the important abilities in mental rotation, you know, those rotating box tests you get on psychometric tests.
164
+ [2340.000 --> 2369.000] Allocentric perception correlates very strongly with one's ability to do mental rotation. Furthermore, he then examined whether it played an important role in human wayfinding, and the way he did this was to take students out in the woods, blindfold them, and ask them to point home, and it turned out that we're pretty good at that in some ways.
165
+ [2369.000 --> 2394.000] If you take a student, walk them through the woods, and then ask them to point where they came from, some students are really good at it, other students are not so good at it. Now, the point Silverman was trying to make was one about sex differences in spatial cognition. I'm not going to go there, that's a different topic, but what he was suggesting is that allocentric perception evolved in support of hunting.
166
+ [2394.000 --> 2413.000] So what he suggested is that hunters going out and looking for animals and then having to come home would have to sort of come up with a new route home, and that this would select for abilities in allocentric perception.
167
+ [2413.000 --> 2427.000] The interesting thing is that the evidence we can see for allocentric perception in hand axes occurs very late, probably by 500,000 years ago, maybe by 800,000 years ago.
168
+ [2427.000 --> 2437.000] The hand axe on this slide is symmetrical in three dimensions: it is symmetrical in profile,
169
+ [2437.000 --> 2447.000] it's symmetrical in plan, and if you took this hand axe and put it on a saw and sawed through it at any angle, the cross section would be symmetrical as well.
170
+ [2447.000 --> 2465.000] And the cross section is not available to the stone knapper; he or she can't see it while knapping the artifact. So it's a perspective that has to be constructed, and these hand axes with regular cross sections suggest that the hominins who were making them
171
+ [2465.000 --> 2471.000] had basic abilities in allocentric perception.
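As a toy illustration of what a regular cross section demands, here is a minimal Python sketch, not from the talk, that tests whether a 2D cross-section outline is mirror-symmetric; the outline points, the rounding tolerance, and the assumption that the symmetry axis is x = 0 are all invented for illustration.

    def is_bilaterally_symmetric(outline):
        # outline: list of (x, y) points; assumed symmetry axis is x = 0.
        # Round so that tiny irregularities don't break the comparison.
        pts = {(round(x, 2), round(y, 2)) for x, y in outline}
        mirrored = {(round(-x, 2), round(y, 2)) for x, y in outline}
        return pts == mirrored

    # A lens-shaped cross section like a hand axe's midsection.
    section = [(-1.0, 0.0), (-0.5, 0.4), (0.0, 0.5), (0.5, 0.4),
               (1.0, 0.0), (0.5, -0.4), (0.0, -0.5), (-0.5, -0.4)]
    print(is_bilaterally_symmetric(section))  # True

The point of the exercise is that the knapper had to satisfy a test like this for a plane he or she could never directly see.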
172
+ [2471.000 --> 2480.000] And I did want to show you a really lovely artifact from the Ashmolean Museum in Oxford.
173
+ [2480.000 --> 2484.000] It demonstrates a number of the knapping things that Nick Toth talked about.
174
+ [2484.000 --> 2495.000] What I wanted to point out is this nice sort of S shape on the edge. This is a twisted ovate. It's really a lovely artifact.
175
+ [2495.000 --> 2505.000] And again, it relied on a basic understanding of spatial relationships, including allocentric perception.
176
+ [2505.000 --> 2516.000] And then hominins, on the late artifacts, played around with symmetry a lot. They started to violate symmetry intentionally, which I think is interesting from an aesthetic perspective.
177
+ [2516.000 --> 2529.000] And finally, and I don't have time to talk about these very much: towards the very end of hand axes, the hominins started to make them in the shapes of animals, which I think is very interesting from a semiotic perspective.
178
+ [2529.000 --> 2539.000] That is, these are iconic hand axes, and these occur very late, probably 300,000 to 250,000 years ago.
179
+ [2539.000 --> 2548.000] This is the time when you find the first possible evidence for iconic objects like the Berekhat Ram figurine.
180
+ [2548.000 --> 2553.000] Okay. So what was spatial cognition like 500,000 years ago?
181
+ [2553.000 --> 2565.000] Hominins were imposing three-dimensional symmetry on artifacts. They were often violating symmetry in aesthetically pleasing ways.
182
+ [2565.000 --> 2573.000] They demonstrated technical brilliance; they really were wonderful stone knappers. They had a sensitivity to regular geometric forms.
183
+ [2573.000 --> 2585.000] They employed gigantism as an implicit visual effect. And in some cases at the end, they also made hand axes that actually looked like animals.
184
+ [2585.000 --> 2595.000] Stone knapping itself may have powered these developments in cognition through embodied and extended resources. So what, I mean, what does that suggest?
185
+ [2595.000 --> 2604.000] Well, it suggests that one of the important components of survey knowledge was in place probably before 500,000 years ago.
186
+ [2604.000 --> 2612.000] And this is allocentric perception. In other words, this component of modern spatial thinking evolved long ago,
187
+ [2612.000 --> 2628.000] before modern humans appeared on the scene. And I think that's a very interesting bit of information about human spatial cognition. And I will pass it on now to Lee, who will talk about more recent times.
188
+ [2628.000 --> 2632.000] Thank you.
189
+ [2632.000 --> 2645.000] I can share my screen.
190
+ [2645.000 --> 2649.000] Can you guys see my slides?
191
+ [2649.000 --> 2659.000] So I'm going to pick up the story with the emergence and development of cultural systems of space in the last 60,000 years. So we're going to jump forward in time a bit.
192
+ [2659.000 --> 2672.000] This story needs some context because over the past two millennia, which is not a lot of time in an evolutionary sense, the West has invented many devices for navigating, measuring, representing and labeling space.
193
+ [2672.000 --> 2681.000] There were so many devices, in fact, that traditional wayfinding was actually lost, which is kind of interesting in and of itself.
194
+ [2681.000 --> 2695.000] Traditional wayfinding was so utterly forgotten that by the 19th century, when the Victorians encountered it in other societies, they thought that so-called primitive man possessed an animal instinct for home.
195
+ [2695.000 --> 2710.000] Fortunately, this is no longer believed, but the same trend continues to this day because, you know, among Western people, knowledge of the stars is pretty rare. And certainly the ability to navigate by means of them is pretty rare.
196
+ [2710.000 --> 2723.000] But I'm going to talk about traditional societies, and they deal with space on three overlapping scales, from largest to smallest: navigating the unknown, traveling the known, and measuring cultural space.
197
+ [2723.000 --> 2737.000] As Thomas explained, navigating the unknown involves survey knowledge where space is organized into a stable map like framework. This is the allocentric or bird's eye view in which every point or landmark is related to every other point.
198
+ [2738.000 --> 2748.000] In contrast, traveling the known is a matter of route knowledge. And that's the egocentric or personal perspective in which every point or landmark is related to the traveler.
199
+ [2748.000 --> 2759.000] And finally, measuring cultural space begins with the body. And that part of the story will connect with the beginning of geometry, which is our formal science of space and shape.
200
+ [2760.000 --> 2769.000] Despite the loss of traditional wayfinding knowledge, especially in the west, modern people, even Western ones use both survey and route knowledge.
201
+ [2769.000 --> 2778.000] A personal example comes from living in Colorado Springs, where Pikes Peak always orients us to the west, even in unfamiliar parts of town.
202
+ [2779.000 --> 2798.000] In the past 200,000 years, Homo sapiens, that's us, reached and colonized virtually every environment on the planet. For Australia, the Americas and the Pacific, this migration occurred even more recently, within only the past 60,000 years or less.
203
+ [2799.000 --> 2810.000] Navigating, of course, involves keeping track of where you are relative to where you are going and of course relative to where you came from, especially if you have any hope of going back there.
204
+ [2810.000 --> 2827.000] So how did we get from stone tools to such impressive navigation? For millions of years, our ancestors interacted with stone tools in ways that enabled them to appreciate structural features and spatial relations across multiple dimensions.
205
+ [2828.000 --> 2847.000] The allocentric view is nothing more than this same ability applied to features and relations of land and sky. The features and relations in question simply differ as to whether they are localized and contiguous, as in objects, or distributed and non-contiguous, as in allocentric wayfinding.
206
+ [2847.000 --> 2858.000] Now, there are two basic strategies for navigating allocentrically. One is celestial navigation, determining position by means of the stars, planets, sun and moon.
207
+ [2858.000 --> 2866.000] The other is dead reckoning, determining or predicting your position from a starting point as adjusted by your heading, speed and drift.
208
+ [2866.000 --> 2873.000] Heading and speed are measured while drift involves estimating the effects of ocean currents and weather.
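To make that bookkeeping concrete, here is a minimal dead-reckoning sketch in Python; the flat-plane geometry, the units, and all the numbers are assumptions for illustration, not anything from the talk.

    import math

    def dead_reckon(x, y, heading_deg, speed_kn, hours, drift_e=0.0, drift_n=0.0):
        # Advance a known position using heading (degrees from north),
        # measured speed (knots), elapsed time, and an estimated drift
        # vector (knots east/north) standing in for current and weather.
        t = math.radians(heading_deg)
        x += (speed_kn * math.sin(t) + drift_e) * hours  # east displacement
        y += (speed_kn * math.cos(t) + drift_n) * hours  # north displacement
        return x, y

    # 6 knots on heading 045 for 10 hours, with a half-knot westward set.
    print(dead_reckon(0.0, 0.0, 45.0, 6.0, 10.0, drift_e=-0.5))

Heading and speed are the measured inputs; the drift terms are the estimated ones, which is exactly where the skill lies.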
209
+ [2873.000 --> 2885.000] We'll look at celestial navigation in the Pacific, which Tom mentioned at the beginning of his talk. Navigators memorize the rising and setting points of the brightest and most distinctive stars and planets.
210
+ [2885.000 --> 2897.000] To aid this memorization, navigators use what's called a star compass. This is a body of knowledge passed through oral tradition and a mental construct achieved through experience and practice.
211
+ [2897.000 --> 2905.000] It's not a physical device, although it has been given just such a form as you can see in the modern era on the screen.
212
+ [2905.000 --> 2917.000] Star compasses locate and name the places where the stars emerge from and return to the ocean. Navigators identify stars as they rise and set and this lets them know their position and direction.
213
+ [2917.000 --> 2929.000] They choose a star on the horizon and steer their waka or canoe towards it. When the star rises too high in the sky or sets beneath the horizon, they choose another star to follow, and so on throughout the night.
214
+ [2929.000 --> 2935.000] I've heard that seven to twelve stars are enough for one night's navigation.
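As a toy version of that follow-then-replace procedure, here is a short sketch; the star names and the altitude threshold are invented.

    star_path = ["star A", "star B", "star C"]  # memorized sequence for one course

    def next_guide_star(current, altitude_deg, path=star_path):
        # Keep steering by the current star until it rises too high to be a
        # useful horizon mark or sets below the horizon, then take the next.
        if altitude_deg > 25 or altitude_deg < 0:  # threshold is a made-up number
            i = path.index(current)
            return path[min(i + 1, len(path) - 1)]
        return current

    print(next_guide_star("star A", 30))  # -> 'star B'

A real navigator, of course, holds the whole sequence, and its corrections for wind and swell, in memory.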
215
+ [2935.000 --> 2943.000] In Micronesia, navigators use alignments between horizon stars and etak, or reference islands.
216
+ [2943.000 --> 2949.000] From the island of origin, the navigator knows that the etak lies in the direction of star X.
217
+ [2949.000 --> 2957.000] As the journey progresses, the relative direction of the etak changes until it falls towards star Y.
218
+ [2957.000 --> 2963.000] When the etak approaches the direction of star Z, the navigator knows he is near the destination island.
219
+ [2963.000 --> 2971.000] Now, what is shown here is downwind sailing. The navigator sets a course for a point upwind of the target island.
220
+ [2971.000 --> 2981.000] This is a strategy for improving the chances of finding it. Once this point has been reached, he turns the waka to let the wind carry it to the destination island.
221
+ [2981.000 --> 2989.000] When sailing against the wind, the destination island and etak are the same.
222
+ [2989.000 --> 2995.000] The navigator tacks left until he considers the etak to lie in the direction of star A.
223
+ [2995.000 --> 3001.000] He then tacks right until the etak lies in the direction of star F.
224
+ [3001.000 --> 3005.000] He repeats this with shorter and shorter tacks as the etak draws nearer.
225
+ [3005.000 --> 3013.000] There's an interesting nuance here, as it is the etak that is considered to move and the waka or canoe that is considered to be stationary.
226
+ [3013.000 --> 3027.000] This is quite a different conceptualization of the available variables than is found in western navigation, where of course it is the ship that is considered to move and terrestrial features like islands that are considered to be stationary.
227
+ [3027.000 --> 3033.000] Pacific navigators are a great example of combining survey and route knowledge.
228
+ [3033.000 --> 3042.000] For example, keeping track of direction into familiar waters involves feeling ocean swells as they refract around or reflect off of known islands.
229
+ [3042.000 --> 3054.000] Navigators feel the direction that swells come from and they note their relation to the guiding stars; then, if the sky becomes overcast, this helps them maintain their orientation.
230
+ [3054.000 --> 3062.000] In unfamiliar waters, change in the swell pattern is a good indication that islands or underwater reefs are near.
231
+ [3062.000 --> 3066.000] A device used to teach this knowledge is the shell map.
232
+ [3066.000 --> 3069.000] Unlike the star compass, this is an actual device.
233
+ [3069.000 --> 3077.000] Shells indicate islands or island groups and sticks show ocean swells and their direction.
234
+ [3077.000 --> 3081.000] Since stars aren't visible all the time, the sun and moon are also used.
235
+ [3081.000 --> 3087.000] The sun provides a directional point twice each day as it rises in the east and sets in the west.
236
+ [3087.000 --> 3092.000] At daybreak, the navigator notes the position of the waka in relation to the rising sun.
237
+ [3092.000 --> 3098.000] And as the sun gets higher in the sky, he looks toward where it will set in the evening.
238
+ [3098.000 --> 3110.000] At night, if clouds or fog pass in front of the guiding stars, the moon may still be visible, and it is a good indicator of direction, especially when it is located near the horizon.
239
+ [3110.000 --> 3116.000] Long journeys into unknown waters meant relying on celestial navigation as the means of fixing position.
240
+ [3116.000 --> 3123.000] Clouds or storms could cause sailors to become completely lost with no way to return home.
241
+ [3123.000 --> 3130.000] Whether they were lost or not, the significant difference in voyaging was that sailors had no way of knowing whether they'd find land.
242
+ [3130.000 --> 3137.000] Most likely they'd go out and look for signs of land until either they found them or they reached the point of no return.
243
+ [3137.000 --> 3142.000] And that's when they had only enough supplies left to turn around and try and go home.
244
+ [3142.000 --> 3145.000] We're going to look at those signs of land.
245
+ [3145.000 --> 3153.000] Visibility at sea is governed by the curvature of the earth, the height above sea level of both ship and land, and the distance between the two.
246
+ [3153.000 --> 3158.000] Large canoes could extend more than seven meters above the surface of the water.
247
+ [3158.000 --> 3166.000] The big island of Hawaii, which is over four kilometers high, can be seen from about 150 kilometers away.
248
+ [3167.000 --> 3176.000] For an atoll whose highest points are coconut trees, visibility is about a tenth of that distance, no more than about 15 kilometers.
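Those figures are consistent with the standard distance-to-horizon approximation (not given in the talk; refraction and haze ignored):

$$ d \approx \sqrt{2Rh} \approx 3.57\,\sqrt{h}\ \text{km}, \qquad h\ \text{in meters},\; R \approx 6371\ \text{km} $$

For coconut trees at roughly 15 m, $d \approx 3.57\sqrt{15} \approx 14$ km, close to the 15 km quoted; for a 4,000 m peak, geometry alone would allow about $3.57\sqrt{4000} \approx 226$ km, so the 150 km figure reflects practical limits like haze.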
249
+ [3176.000 --> 3182.000] Outside of visual range, certain signs indicate the proximity and direction of land.
250
+ [3182.000 --> 3191.000] For example, clouds tend to gather over an island, and of course they can be seen from much further away than the island itself.
251
+ [3191.000 --> 3200.000] Birds are another indicator of proximity and direction, particularly the white tern, a species that can range up to 200 kilometers from land.
252
+ [3200.000 --> 3218.000] Besides birds, clouds and ocean swells, other useful signs include wind direction, light reflected by lagoons, plants washed into the ocean, sea animal behaviors, localized wave characteristics, bioluminescence, water color, and te lapa,
253
+ [3218.000 --> 3225.000] rare and mysterious flashes of light said to emanate from land like lightning.
254
+ [3225.000 --> 3231.000] How about position? In Pacific navigation, sailors keep continual mental track of where they are.
255
+ [3231.000 --> 3235.000] So in a sense they always know and they don't write anything down.
256
+ [3235.000 --> 3244.000] In contrast, in Western seafaring, heading, speed and time are used to chart the ship's position on paper maps, and that's true even to this day,
257
+ [3244.000 --> 3251.000] although it's no longer the only system used on board ships. For heading, a compass is used.
258
+ [3251.000 --> 3258.000] For speed, in olden days, sailors threw overboard a log attached to a rope with knots tied every 14 meters.
259
+ [3258.000 --> 3263.000] As the rope passed through their hands, they counted the number of knots in 28 seconds.
260
+ [3263.000 --> 3270.000] Today the record of a ship's movement is still called the log and a ship's speed is still measured in knots.
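The arithmetic behind the unit works out neatly, using the historical knot spacing of about 14.4 m (rounded to 14 above):

$$ v = \frac{14.4\ \text{m}}{28\ \text{s}} \approx 0.514\ \text{m/s} \approx 1852\ \text{m/h} = 1\ \text{nautical mile per hour} = 1\ \text{knot} $$

So each knot that passed through the hands during the 28-second glass corresponded to one knot of speed.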
261
+ [3271.000 --> 3280.000] It's a little hard to see on the diagram, but the young man in the middle is holding an hourglass, so their time measurement was a bit of an estimation.
262
+ [3280.000 --> 3283.000] Western sailors also used local guides.
263
+ [3283.000 --> 3298.000] On the first of his three Pacific voyages, James Cook took on board the Endeavour an experienced Pacific navigator, a Tahitian named Tupaia, who helped Cook find New Zealand and later died in Indonesia of a shipborne illness.
264
+ [3298.000 --> 3305.000] Tupaia drew a map while he served with Cook, today thought to be an attempt to explain Pacific wayfinding to the Westerners.
265
+ [3305.000 --> 3316.000] The map does not use Western spatial concepts like proportional distance or geographic coordinates, and many of the islands it depicts cannot be matched to any known island.
266
+ [3316.000 --> 3323.000] However, scholars are currently using traditional wayfinding methods to try and decipher it.
267
+ [3324.000 --> 3331.000] Overland, a traditional method of keeping track of time and distance involves the amount of ground covered in a day's worth of walking.
268
+ [3331.000 --> 3337.000] The navigator keeps track of the number of days he has walked, and he knows about how much distance that involves.
269
+ [3337.000 --> 3346.000] My favorite account of this comes from Australia. An Aboriginal messenger painted his arm with stripes of mud, one for each day of his journey.
270
+ [3346.000 --> 3352.000] On his return trip, he would erase one stripe each day.
271
+ [3352.000 --> 3359.000] In the Kalahari distance and time involved in walking is contextual and understood as such.
272
+ [3359.000 --> 3364.000] For example, a woman gathering food covers less distance than a man going out to hunt.
273
+ [3364.000 --> 3378.000] Total distance is expressed as the number of sleeps, which is an indicator of how many days, with remaining portions of days indicated by pointing to the expected position of the sun upon arrival.
274
+ [3378.000 --> 3388.000] Route knowledge, in comparison, is the ability to construct a sequence of points, landmarks, and perspectives that comprise a path from one place to another.
275
+ [3388.000 --> 3401.000] Primates, with one exception, and that would be us, rely on route knowledge, as Tom mentioned in his talk, since they lack the kind of innate positioning or directional system found in birds and some other species.
276
+ [3401.000 --> 3416.000] In our branch of primates, the bipedal apes, route knowledge is suggested by the footprints at Laetoli in Tanzania, where the australopithecines, among the earliest bipedal apes, left their footprints some 3.6 million years ago.
277
+ [3416.000 --> 3423.000] Members of the group intentionally stepped into the previous footprints, creating a natural trackway.
278
+ [3423.000 --> 3438.000] Today, cross culturally, route knowledge involves establishing, memorizing, and sometimes marking trails and tracks, giving places descriptive and memorable names, codifying route knowledge in stories and songs, and using spatial memory.
279
+ [3438.000 --> 3447.000] Route knowledge is a really interesting endeavor, especially in environments that to our Western eyes have few distinguishing features.
280
+ [3447.000 --> 3455.000] In Australia, Aboriginal people traded goods and information over thousands of kilometers for thousands of years.
281
+ [3455.000 --> 3463.000] One of the mines where ochre was collected for trade appears to have been in continuous use for some 20 to 30,000 years.
282
+ [3463.000 --> 3475.000] In comparison, the famous Silk Road that once linked the West to the Middle East and Asia began some 2,100 years ago and lasted for only 1,600 years.
283
+ [3475.000 --> 3484.000] Some of the longest Aboriginal trading routes were created for pituri, an indigenous plant used for its medicinal and hallucinogenic properties.
284
+ [3484.000 --> 3491.000] Pituri was traded over an area of more than 800,000 square kilometers.
285
+ [3491.000 --> 3496.000] The Aboriginal trade routes that crisscross the continent are known as Dreaming Tracks.
286
+ [3496.000 --> 3505.000] In Aboriginal cosmology, the Dreaming is the period of time when the ancestors took animal forms and created the land's topographic features.
287
+ [3505.000 --> 3511.000] The features there today attest to these ancestral journeys.
288
+ [3511.000 --> 3514.000] Dreaming tracks aren't just etched on the land.
289
+ [3514.000 --> 3526.000] They also live in the memories of the people who inherited the routes from previous generations, who had inherited them as well, creating one of the oldest and longest chains of human memory known in the history of the world.
290
+ [3526.000 --> 3539.000] In 2016, two Australian researchers recorded Aboriginal stories from 21 coastal locations, including tales of a time when parts of the coastline now underwater were dry land.
291
+ [3539.000 --> 3553.000] These stories corresponded to geological evidence of post-glacial rise in sea levels, showing that they were transmitted over a period of some 7,000 to 13,000 years.
292
+ [3553.000 --> 3559.000] Dreaming Tracks and associated song cycles function as mnemonic devices for wayfinding.
293
+ [3559.000 --> 3567.000] They provide reliable descriptions and literal directions for travelers to follow, including places they have never personally visited before.
294
+ [3567.000 --> 3572.000] The land is the text and songs and stories are the means of reading it.
295
+ [3572.000 --> 3579.000] Associating stories and songs with specific places makes the information easier to remember and recall.
296
+ [3579.000 --> 3586.000] This strategy for spatial memory is akin to the so-called method of loci.
297
+ [3586.000 --> 3595.000] The method of loci is a technique for associating memorized information with spatial locations, like a street with shops or the rooms of a house.
298
+ [3595.000 --> 3605.000] It's also sometimes called the Greek memory palace, a name that reflects its origin as a memory strategy invented and used by ancient Greek orators.
299
+ [3605.000 --> 3614.000] When someone wants to memorize a set of items, he mentally visits the spatial locations and associates an item to be memorized with each one.
300
+ [3614.000 --> 3620.000] Retrieving the items then involves mentally visiting the spatial locations to see what each one holds.
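Here is a minimal Python sketch of that store-and-walk logic; the route and the items are invented for illustration.

    palace = ["front door", "hallway", "kitchen", "stairs"]  # a familiar route

    def memorize(items, loci=palace):
        # Bind each item to the next location along the route, in order.
        return dict(zip(loci, items))

    def recall(memory, loci=palace):
        # Recall is a mental walk: revisit the locations in route order.
        return [memory[place] for place in loci if place in memory]

    stored = memorize(["net", "paddle", "gourd"])
    print(recall(stored))  # ['net', 'paddle', 'gourd']

The ordering does the work: the route is overlearned, so it carries the arbitrary items along with it.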
301
+ [3620.000 --> 3627.000] While the method of loci can involve recollected or imagined places, Dreaming Tracks use the actual landscape.
302
+ [3627.000 --> 3634.000] However, both leverage spatial memory, a function of the hippocampus.
303
+ [3634.000 --> 3644.000] The hippocampus is a component of the limbic system, the part of the brain involved in behavioral and emotional responses that is sometimes called the old mammalian brain.
304
+ [3644.000 --> 3653.000] Besides its critical role in spatial memory, the hippocampus is also involved in consolidating short-term memories into long-term ones.
305
+ [3653.000 --> 3673.000] Named after the Greek word for seahorse because of its shape, the hippocampus is famous for becoming larger in the brains of London taxi drivers, who are required to pass a test demonstrating that they have memorized the city's 25,000 streets and all the businesses and landmarks on them and can flexibly plan alternative routes.
306
+ [3674.000 --> 3682.000] Storytelling as spatial memory was also once known in the lands that are now WEIRD: Western, educated, industrialized, rich, and democratic.
307
+ [3682.000 --> 3689.000] For example, epic poems from the early First Millennium, like the Odyssey, were stories in spatial form.
308
+ [3689.000 --> 3703.000] Putting stories in the form of a journey, a sequence of spatial events made those events easier to remember and recall, facilitating oral transmission between individuals and generations.
309
+ [3703.000 --> 3713.000] When oral traditions invent writing or come into contact with the idea of it, their stories may take the form of journeys, sequences of spatial events.
310
+ [3713.000 --> 3722.000] Here is a page from an Aztec codex that shows the journey from the mythical Aztec homeland to the Valley of Mexico where they ultimately settled.
311
+ [3722.000 --> 3736.000] Most of the content in the codex is pictorial with little footprints to indicate the idea of spatial connection, while any glyphs are restricted to names, particularly the names of geographic places.
312
+ [3736.000 --> 3748.000] Geographic names take a variety of forms in Aztec writing, including historical events, distinctive environmental features, and cultural associations.
313
+ [3748.000 --> 3757.000] In the Arctic, Inuit peoples built piles of rocks called inuksuit, stones that act as waypoints and mnemonic devices for travelers.
314
+ [3757.000 --> 3767.000] These constructions can be remarkably durable, and on one such route the inuksuit are thought to be more than 4,500 years old.
315
+ [3767.000 --> 3782.000] Using dogs to pull sleds dates back nearly 10,000 years. However, dogs are more than just transportation, since they also have instincts for homing, and these can be less impeded than human senses under conditions of poor visibility.
316
+ [3783.000 --> 3793.000] Importantly, dog sled travel is also slow enough that travelers can still feel the natural wind, an important aspect of orientation and direction.
317
+ [3793.000 --> 3807.000] Modern means like snowmobiles are so fast they lose the natural wind. The associated loss of orientation and direction, along with the greater speed, increases the chances of becoming lost.
318
+ [3808.000 --> 3819.000] Before we move on, let's look at wayfinding in Neanderthals, our cousin species who lived in Europe during the Middle Paleolithic, the period between 300,000 and 50,000 years ago.
319
+ [3819.000 --> 3832.000] Remember that modern spatial cognition emerged more than half a million years ago. This is sometime after the last common ancestor of Neanderthals and homo sapiens who lived nearly 2 million years ago.
320
+ [3832.000 --> 3843.000] Now, these timelines are estimates and thus imprecise. And Neanderthals had the same stone tool using heritage, and they had developed impressively complex stone tools.
321
+ [3843.000 --> 3854.000] Thus, it's quite possible that Neanderthals had a similar capacity for spatial cognition. We'll examine this possibility through a recent archaeological study.
322
+ [3854.000 --> 3867.000] This study analyzed landscape knowledge, navigational abilities and decision making at BAU, a French site inhabited by Neanderthals some 200,000 to 100,000 years ago.
323
+ [3867.000 --> 3877.000] The study analyzed more than 350 potential sources of raw stone for availability, quality and distance from the main Neanderthal camp.
324
+ [3877.000 --> 3888.000] Comparing availability to exploitation showed that the Neanderthals did not exploit most of the area's resources, including several sites with high quality stone located very near the main camp.
325
+ [3888.000 --> 3902.000] However, for what they did exploit, they could plan efficient routes to obtain the raw materials, and they could identify optimal alternative routes, not only from the main camp, but also from various locations within the area.
326
+ [3902.000 --> 3914.000] The ability to estimate access costs accurately implies a location specific awareness of directions and distances, which the authors of the study claim is best explained as survey knowledge.
327
+ [3914.000 --> 3921.000] From this, we're tempted to conclude that Neanderthals and Homo sapiens had the same capacity for spatial cognition.
328
+ [3921.000 --> 3933.000] However, territory size and exploration are also influenced by things like sociality and creativity, cognitive domains in which the two human species are known to have differed.
329
+ [3933.000 --> 3946.000] Thus, it's not surprising to learn that Neanderthals tended to have smaller ranges than Homo sapiens, and this also admits the possibility of some slight differences in spatial cognition.
330
+ [3946.000 --> 3960.000] We come now to the third and final of our three overlapping scales, the cultural measurement of space. Societies measure space in order to do things like construct shelters and estimate the size of fields.
331
+ [3960.000 --> 3972.000] Cross-culturally, measurements begin with the body, making man literally the measure of all things, as Plato observed in the early first millennium. An inch was the width of a man's thumb.
332
+ [3972.000 --> 3981.000] The palm or hand three to five digits across is still used today to measure the height of a horse, although it has been standardized to four inches.
333
+ [3981.000 --> 3986.000] A span was about nine inches, the length of the outstretched hand.
334
+ [3986.000 --> 3996.000] The foot, 12 inches, was the length of the average male foot. A mile was a thousand paces, as counted by the Romans in double steps.
335
+ [3996.000 --> 4007.000] Other measurements included the arm span, useful for measuring cloth, and the forearm or a cubit, as measured from the elbow to the tip of the middle finger.
336
+ [4007.000 --> 4014.000] At some point, measurements transitioned from the body to physical devices like ropes and rods.
337
+ [4014.000 --> 4024.000] These not only had more standardized dimensions, they were way more convenient when it came to measuring large things like agricultural fields and monumental constructions.
338
+ [4024.000 --> 4031.000] Some of these measurements are quite old, and it's astonishing to realize how old some of these cultural systems are.
339
+ [4031.000 --> 4036.000] They were used in Mesopotamia and Egypt more than 6000 years ago.
340
+ [4036.000 --> 4043.000] The measurement standard shown here is made of copper alloy and comes from the ancient Sumerian city of Nippur in what is now Iraq.
341
+ [4044.000 --> 4051.000] It defined the cubit as about 52 centimeters, just over 20 inches, which again makes us recollect the forearm.
342
+ [4051.000 --> 4060.000] In Egypt, the cubit was defined as seven palms in length, about 21 inches with a three inch palm.
343
+ [4060.000 --> 4066.000] As measurements are codified and standardized, they become increasingly involved in calculations.
344
+ [4066.000 --> 4075.000] This 5,000-year-old clay tablet from the ancient Sumerian city of Uruk contains the world's oldest documented calculations.
345
+ [4075.000 --> 4081.000] One calculation appears on its obverse or front, another on its reverse or back.
346
+ [4081.000 --> 4093.000] By the time calculation appears in the archaeological record, it is already impressively complex, suggesting that these cultural systems originated before the Neolithic.
347
+ [4093.000 --> 4102.000] As reconstructed, the tablet calculates area for an irregular quadrilateral by multiplying average length by average width.
348
+ [4102.000 --> 4111.000] The calculation on the reverse uses different lengths and widths to calculate the exact same area, which suggests the tablet was a schoolhouse exercise.
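A worked sketch of that average-length-times-average-width rule, with hypothetical side lengths in rods:

$$ A \approx \frac{a+c}{2} \times \frac{b+d}{2} $$

Lengths 8 and 6 with widths 5 and 3 give $7 \times 4 = 28$ square rods; a different quadrilateral with lengths 9 and 5 and widths 6 and 2 gives the same $7 \times 4 = 28$. Because the rule ignores the angles, genuinely different fields come out equal, which is the imprecision discussed just below.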
349
+ [4111.000 --> 4118.000] Both calculations involved a standard measurement, the rod, a length about 6 meters long.
350
+ [4118.000 --> 4122.000] However, the unequal sides meant the corners were not right angles.
351
+ [4122.000 --> 4134.000] With no way of specifying what the angles were, or dealing with them in the calculations, the calculations were necessarily imprecise, as they certainly shouldn't have ended up with the same answer.
352
+ [4134.000 --> 4144.000] This suggests that in the fourth millennium, people didn't quite yet know how to handle shape and size and angles as factors in geometric relations and calculations.
353
+ [4144.000 --> 4151.000] However, the calculations were likely accurate enough for the purposes they served at the time.
354
+ [4151.000 --> 4164.000] Dated to about 1,500 years later, this Old Babylonian mathematical tablet contains a square and its diagonals, not just a diagram but a study of geometric shape and relations.
355
+ [4164.000 --> 4167.000] This too was likely a schoolhouse exercise.
356
+ [4167.000 --> 4177.000] The number in green approximates the square root of 2, while the number in blue is its reciprocal, 1 over the square root of 2.
357
+ [4177.000 --> 4190.000] The second number is the hypotenuse of a right triangle with sides of equal length, one-half in this case, since 30 is half of 60 in the sexagesimal or base-60 number system used in Mesopotamia.
358
+ [4191.000 --> 4199.000] Both calculations are accurate to six places when compared to our modern versions in decimal or base-10.
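The agreement can be checked directly from the commonly cited reading of the tablet's digits for the square root of 2, which is 1;24,51,10 in sexagesimal:

$$ 1 + \frac{24}{60} + \frac{51}{60^{2}} + \frac{10}{60^{3}} \approx 1.414213, \qquad \sqrt{2} \approx 1.414214 $$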
359
+ [4199.000 --> 4206.000] Simply put, the mathematicians of the Old Babylonian period had figured out the geometric relations of right angles.
360
+ [4206.000 --> 4211.000] Here is another old Babylonian mathematical tablet known as Plimpton 322.
361
+ [4211.000 --> 4224.000] While its purpose remains a matter of ongoing debate, it is usually considered to be related to Pythagorean triples, prefiguring Pythagoras and his famous theorem by over 2,000 years.
362
+ [4224.000 --> 4230.000] While column one is mathematically related, its purpose remains unclear.
363
+ [4230.000 --> 4239.000] Here is a quick and admittedly modern calculation of the angle enclosed by the adjacent leg and hypotenuse in columns 2 and 3.
364
+ [4239.000 --> 4244.000] I want to highlight the fact that these angles do not appear anywhere on the tablet.
365
+ [4244.000 --> 4255.000] Interestingly, the angles increase within a very narrow range, which isn't something we might predict given the complete lack of any pattern in the lengths given in columns 2 and 3.
366
+ [4255.000 --> 4266.000] While this still doesn't tell us what the tablet means, it does suggest a methodical and rigorous exploration of the properties of right triangles.
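Here is a quick Python version of that admittedly modern check, using the published short-side and diagonal values for the first and last rows of the tablet (119 and 169; 56 and 106):

    import math

    # Angle between the long (adjacent) leg and the hypotenuse, recomputed
    # from the (short side, diagonal) columns of Plimpton 322.
    rows = {"row 1": (119, 169), "row 15": (56, 106)}

    for name, (short, diagonal) in rows.items():
        adjacent = math.sqrt(diagonal**2 - short**2)  # 120 and 90: exact triples
        angle = math.degrees(math.acos(adjacent / diagonal))
        print(name, round(angle, 1))  # about 44.8 and 31.9 degrees

The angles move through that roughly 45-to-32-degree band in small, fairly even steps, which is what suggests a deliberate survey rather than a random collection of triples.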
367
+ [4266.000 --> 4281.000] At roughly the same time, mathematicians in Egypt were calculating areas and volumes for rectangles, circles, triangles, trapezoids, cylinders, and of course pyramids, as shown by the Rhind Mathematical Papyrus.
368
+ [4281.000 --> 4289.000] Dated to the early second millennium, the Rhind Papyrus was likely a copy of an earlier text, although we don't know how much earlier.
369
+ [4289.000 --> 4298.000] Of note, the Egyptians, like the Babylonians, were only using right angles in their calculations at this point in time.
370
+ [4298.000 --> 4310.000] By the first millennium, Babylonian mathematicians were dividing the circle into 360 degrees, something again that reflects their sexagesimal or base-60 number system,
371
+ [4310.000 --> 4319.000] as shown on a tablet from the Assyrian city of Nineveh, in what is today Iraq. The city is thought by some to have been the location of the famous Hanging Gardens of Babylon.
372
+ [4319.000 --> 4325.000] In this system, which we still use today, a right angle is 90 degrees.
373
+ [4325.000 --> 4333.000] The groma, an early survey instrument for estimating right angles, also emerged in the early first millennium in Babylon and/or Egypt.
374
+ [4333.000 --> 4338.000] It consisted of two sticks, set at right angles on an arm attached to a main pole.
375
+ [4338.000 --> 4343.000] Cords whose ends were weighted descended from each end of both sticks.
376
+ [4343.000 --> 4357.000] The Groma would later be standardized by Roman surveyors, who used it to build about 80,000 kilometers of arrow-straight stone-paved roads, primarily for their military as it tried to conquer the known world.
377
+ [4358.000 --> 4371.000] In the mid-first millennium and borrowing much from the Babylonians and Egyptians, Greek mathematicians like Euclid were also investigating and codifying space and shape, creating the science we know today as geometry.
378
+ [4371.000 --> 4376.000] Here the story becomes quite familiar, so this seems like a good place to end.
379
+ [4377.000 --> 4384.000] But to tie it all together, we started by describing our ancestors' interaction with stone tools over millions of years.
380
+ [4384.000 --> 4393.000] This interaction ultimately yielded the ability to appreciate sets of features and relations in objects in multiple dimensions.
381
+ [4393.000 --> 4405.000] The ability to appreciate sets of features and relations that were localized and contiguous later expanded to include sets that were distributed and non-contiguous, which is allocentric way-finding.
382
+ [4405.000 --> 4424.000] The geometric diagram can be regarded as the same ability, applied again to sets of localized contiguous features and relations, but now in a medium that permitted their manipulation and analysis subsequent to the development of cultural systems for measuring space, as well as of course numbers.
383
+ [4424.000 --> 4434.000] These things in turn would ultimately provide a foundation for a later extension to distributed non-contiguous sets as in geographic coordinate systems.
384
+ [4434.000 --> 4444.000] We've covered space and shape across significant spans of time, multiple species of bipedal apes, including us, and the domains of way-finding, measuring space, and dealing with shape.
385
+ [4444.000 --> 4447.000] Thanks very much for your attention.
386
+ [4464.000 --> 4474.000] Thank you.
transcript/allocentric_vRWVgcPVaK4.txt ADDED
@@ -0,0 +1,696 @@
1
+ [0.000 --> 9.000] Thank you so much. I guess that needs a little pruning. That caps profile. Thank you so much.
2
+ [9.000 --> 15.000] Well, you saved me time because I don't have to introduce who I am. But basically, I'm a psychiatrist
3
+ [15.000 --> 22.000] and for the last 15 years have been doing mostly cognitive behavior therapy in the outpatient
4
+ [22.000 --> 32.000] clinics here. And I also, what's called a neuropsychiatrist, which not everybody knows,
5
+ [32.000 --> 37.000] it's a it's a sub-specialty that's kind of the interface between neurology and psychiatry.
6
+ [37.000 --> 43.000] But most of my time is really spent doing cognitive behavior therapy. So I'm just kind
7
+ [43.000 --> 51.000] of a weird psychiatrist that's doing multiple things. But I got into this virtual reality
8
+ [51.000 --> 57.000] subject because my research interest was in psychosomatic illnesses and was trying to develop
9
+ [57.000 --> 67.000] more motor and sensory interventions for these problems and began an exploratory pilot study
10
+ [67.000 --> 75.000] using virtual reality. And as I got immersed in that, I found all these applications
11
+ [75.000 --> 81.000] and this rich research that's been going on for the last 20 years and started to experiment
12
+ [81.000 --> 88.000] applying it as part of cognitive behavior therapy. And I have just been doing this
13
+ [88.000 --> 96.000] for about four or five years. So I'm not an expert and it's a very wide field. So I'm going to do my best to
14
+ [96.000 --> 104.000] just kind of give you the highlights of everything, but just know it's much deeper than I can even
15
+ [104.000 --> 111.000] express during these 50 minutes. But hopefully, I look forward most to the interactive part, when we have the question
16
+ [111.000 --> 120.000] and answer. So I'll try to go as quickly as possible through the knowledge component and then we can have as many questions
17
+ [120.000 --> 127.000] as possible. That's always the most fun and most meaningful. So I'm going to break it up into just giving you a
18
+ [127.000 --> 137.000] primer on the basics of virtual reality and then talk about the long history of the traditional VR for mental illness.
19
+ [137.000 --> 146.000] And then this new technology that's you're probably hearing about and because of all the technology developments,
20
+ [146.000 --> 155.000] there's this embodied VR and I'll tell you what impact this has for the future of possible treatments for psychiatric illness.
21
+ [155.000 --> 166.000] So let's just start with virtual reality basics. So if you read, if you really want to dive into the subject,
22
+ [166.000 --> 175.000] there's Jaron Lanier, who wrote Dawn of the New Everything. I'll tell you, he's one of the founders and first coined the term,
23
+ [175.000 --> 184.000] I believe, virtual reality. And I think the consensus now is that it's anything that's a computer generated 3D
24
+ [184.000 --> 194.000] experience and it usually is a head mounted display that you wear and it picks up movement of some type.
25
+ [194.000 --> 207.000] And it can incorporate mostly visual and auditory, but it can also incorporate other senses like touch and vibration and smell.
26
+ [207.000 --> 216.000] And it's usually a life size experience where you really feel like you've been taken somewhere else.
27
+ [216.000 --> 223.000] And this is different than augmented reality and I'll be talking to you a little bit more about the difference with augmented reality.
28
+ [223.000 --> 230.000] So basically there's this headset, and here's one right here.
29
+ [230.000 --> 242.000] And these traditional headsets really measure the three axes of the head, your head movements.
30
+ [242.000 --> 259.000] And what it does is replace visual reality. Our visual capture system, which usually informs us, is a very robust system that has a lot of influence over what we believe and what we feel emotionally.
31
+ [259.000 --> 267.000] So we replace that, and the only reality coming in is your head movement and where that is in space.
32
+ [267.000 --> 279.000] And that informs the computer generated system or input, which then can evoke beliefs and emotions.
33
+ [279.000 --> 288.000] So there's some jargon you probably just need to know. It's not as complicated as it sounds.
34
+ [288.000 --> 293.000] The sense of presence is a term that's thrown around a lot with VR.
35
+ [293.000 --> 302.000] It just means your experience, your psychological experience of being there, of just really being somewhere else.
36
+ [302.000 --> 321.000] And in research and social science it's kind of broken into three categories: social presence, which is how realistic it feels interacting with other people in the environment;
37
+ [321.000 --> 329.000] the spatial sense of presence, how realistic and how much you feel that you're in this new space;
38
+ [329.000 --> 339.000] and then there's your self-presence, how much you feel ownership over an avatar or a body that you're inhabiting, if you have one.
39
+ [340.000 --> 349.000] And it's usually at this point considered sort of how robust the device is, how well it gives you a sense of presence.
40
+ [349.000 --> 357.000] So it's sort of the gold standard for VR is how quickly you can get this sense of presence.
41
+ [357.000 --> 360.000] I hope that makes sense.
42
+ [361.000 --> 371.000] Yeah, it's too bad we can't have this interactive because then if you had questions, you could ask me, usually I do my talks like that, so it's a little different for me not to stop for questions, but I'll get used to it.
43
+ [371.000 --> 377.000] And so there's these presence measures that are used in research.
44
+ [377.000 --> 383.000] And the second one, the Witmer and Singer one, is probably the one that's used most often.
45
+ [383.000 --> 386.000] We don't know what it's correlated with yet.
46
+ [386.000 --> 395.000] So at least from the reading that I've done, so I wouldn't take it too seriously, but it's just you'll see measures of presence.
47
+ [395.000 --> 404.000] And then this term immersion, which is just the device's capability of creating presence and just a sense of being there.
48
+ [404.000 --> 413.000] And usually it takes people about five to six seconds to feel immersed and there, which is pretty remarkable.
49
+ [413.000 --> 420.000] So our multimodal sense sensory system is capable of readjusting quite quickly.
50
+ [420.000 --> 428.000] And then some really important aspects to think about with VR basics is you can have different perspectives.
51
+ [428.000 --> 438.000] So the most common is the first person perspective, this egocentric viewpoint, the one that we're usually feeling, where you're seeing things from your own viewpoint.
52
+ [438.000 --> 456.000] So this slide is, for example, a parent maybe learning parenting skills will have an experience of being themselves talking to their offspring and trying out new communication skills.
53
+ [456.000 --> 467.000] So sort of being John Malkovich experience, if you've seen that movie where people could beam into this one person's body and have that experience.
54
+ [467.000 --> 470.000] So that's the first person point of view.
55
+ [470.000 --> 475.000] But it also has the ability to bring in the second point of view.
56
+ [475.000 --> 479.000] So you could watch yourself from the other person's perspective.
57
+ [479.000 --> 488.000] We could actually create an experience where you're watching yourself do this skill from maybe your daughter or son's perspective.
58
+ [488.000 --> 491.000] So that's the second person, allocentric viewpoint.
59
+ [491.000 --> 500.000] And in psychology there's a lot of attention right now to this theory of mind and at the ability to be able to take different perspectives.
60
+ [500.000 --> 507.000] And there's different conditions that have strengths and weaknesses in this area.
61
+ [507.000 --> 513.000] The other point of view is the allocentric point of view, which is a third person point of view.
62
+ [513.000 --> 518.000] You can also create the experience so you're looking at both of you interacting.
63
+ [518.000 --> 531.000] And we know from trauma work and linguistics that usually when people tell their narrative from a third person rather than a first person, it seems to be more helpful.
64
+ [531.000 --> 544.000] And so there's potential here you can see for being able to capture and manipulate the theory of mind and perspectives.
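Underneath these viewpoint switches is a simple coordinate change. Here is a minimal 2D Python sketch, with invented numbers, of mapping a point from a viewer's egocentric frame (ahead/left of the viewer) into a shared allocentric, or world, frame.

    import math

    def ego_to_allo(ahead, left, viewer_x, viewer_y, facing_deg):
        # facing_deg is measured counterclockwise from the world +x axis.
        # Rotate the egocentric offsets into world axes, then translate
        # by the viewer's world position.
        t = math.radians(facing_deg)
        wx = viewer_x + ahead * math.cos(t) - left * math.sin(t)
        wy = viewer_y + ahead * math.sin(t) + left * math.cos(t)
        return wx, wy

    # A point 2 m straight ahead of a viewer at (5, 5) facing 90 degrees
    # lands at (5, 7) in world coordinates, regardless of who is looking.
    print(ego_to_allo(2.0, 0.0, 5.0, 5.0, 90.0))

Rendering the same scene from a second or third person viewpoint amounts to evaluating this kind of transform with a different viewer position and facing.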
65
+ [544.000 --> 552.000] And so the hardware that's being used is varied and very cheap and available.
66
+ [552.000 --> 557.000] So you can get a cardboard one, a Google Cardboard, for like $3.
67
+ [557.000 --> 561.000] The technology is really in the phones.
68
+ [561.000 --> 565.000] It's used with any type of phone.
69
+ [565.000 --> 573.000] And so you can use something very basic like this.
70
+ [573.000 --> 585.000] Or you can have a really comfy one for like $30 with foam and this is kind of the Mercedes of sets.
71
+ [585.000 --> 591.000] And so they're quite available as long as you have a phone.
72
+ [591.000 --> 599.000] And so there's another type of virtual reality or immersive reality called augmented or mixed reality.
73
+ [599.000 --> 606.000] So this actually layers virtual reality on top of real reality or live information.
74
+ [606.000 --> 615.000] So in this kind of setup, you can either add or take away information onto an experience.
75
+ [615.000 --> 619.000] So that's a bit different.
76
+ [619.000 --> 626.000] So, okay, what are the VR treatments in mental illness currently and the research?
77
+ [626.000 --> 633.000] Well, there's a few ways and you can imagine probably just listening about VR.
78
+ [633.000 --> 638.000] People start to think of all these amazing ways you could apply it.
79
+ [638.000 --> 644.000] And what's been done so far is probably in about four categories that I'll talk about.
80
+ [644.000 --> 649.000] in more depth about each one, but the first one, exposure, has been the most used.
81
+ [649.000 --> 657.000] And so this is part of a mechanism for desensitization.
82
+ [657.000 --> 666.000] So people get sensitized to things just like you get sensitized to allergens and maybe need allergy shots.
83
+ [666.000 --> 671.000] People get sensitized to stimuli, to cues.
84
+ [671.000 --> 674.000] You can be born with a certain sensitivity.
85
+ [674.000 --> 679.000] And we know that exposure and desensitization is helpful.
86
+ [679.000 --> 684.000] So virtual reality is able to be very specific and be personalized to cues.
87
+ [684.000 --> 692.000] The other way virtual reality has been used, really at the onset, was for distraction.
88
+ [692.000 --> 699.000] So for people with acute pain, burn patients, Hunter Hoffman did some of the first studies on this.
89
+ [699.000 --> 710.000] He found that people who were distracted using virtual reality or playing games used fewer opiates.
90
+ [710.000 --> 714.000] You can also use it for emotional distress to distract yourself.
91
+ [714.000 --> 719.000] I know James Gross just gave a talk to our department and talked about how
92
+ [719.000 --> 724.000] there are only two things you can do to regulate your emotions.
93
+ [724.000 --> 731.000] He's the researcher on affective regulation and emotion regulation.
94
+ [731.000 --> 743.000] He said, you know, when you're not overwhelmed, you can re-appraise things and regulate your emotions and bring some negative emotions down.
95
+ [743.000 --> 747.000] But when you're really overwhelmed, you really can't think.
96
+ [747.000 --> 750.000] And so the only thing you can do is distract.
97
+ [750.000 --> 760.000] So virtual reality really allows you to distract when you don't have access to your own imagination.
98
+ [760.000 --> 771.000] And so we can use it as well for stimulation for those folks that are sensory deprived like in geriatric settings.
99
+ [771.000 --> 778.000] You can add this. We know that the brain needs a certain amount of pleasure and mastery and novelty.
100
+ [778.000 --> 783.000] We often say four pleasant activities a day to keep the blues away.
101
+ [783.000 --> 790.000] And people get under stimulated and get depressed. So it can be used for that.
102
+ [790.000 --> 798.000] And then training. So what's really unique and cool about virtual reality is its effect on learning.
103
+ [798.000 --> 803.000] And it enhances learning of any kind because it's so engaging.
104
+ [803.000 --> 808.000] Anything you learn in VR and interact with, you're going to retain more.
105
+ [808.000 --> 823.000] So instead of having to listen to your therapist drone on and on about mindfulness, you could actually have a mindfulness experience and be taught how to do it and interact with it and have some specific feedback.
106
+ [823.000 --> 833.000] You're going to learn it. You're going to enjoy it. You're going to retain it more and much of the treatment for mental illness is training and learning skills.
107
+ [833.000 --> 836.000] So diaphragmatic breathing.
108
+ [836.000 --> 843.000] It could also be used for simulation learning for providers.
109
+ [843.000 --> 845.000] And for psychoeducation.
110
+ [845.000 --> 861.000] And then the other big component is research. So because you can deliver standardized experiences, it can be very much more precise and allow rigorous studies to be done that are controlled.
111
+ [861.000 --> 871.000] It also can be used as a measurement tool like for eye gaze and measuring avoidance behaviors and things like that.
112
+ [871.000 --> 881.000] And also what's being developed currently is also interfaces between biofeedback and virtual reality, EEG and virtual reality.
113
+ [881.000 --> 889.000] So there's many, many possibilities. There's probably many more than I'm saying here, but these are probably the most common currently.
114
+ [889.000 --> 900.000] And I was flabbergasted that there is about 20 years of data for VR for multiple illnesses.
115
+ [900.000 --> 909.000] And I didn't learn any of these in psychiatry in my residency or even after residency when I thought I was keeping up with the literature.
116
+ [909.000 --> 920.000] So mostly the field of psychology has really been the ones to develop this. And mostly for anxiety disorders.
117
+ [920.000 --> 934.000] So we've got social anxiety, four controlled trials; panic and agoraphobia, five controlled trials; fear of flying, two controlled trials; spider phobia, two controlled trials.
118
+ [934.000 --> 942.000] And there's many other trials. I'm just giving you the most robust level of evidence that we have, which is pretty good.
119
+ [942.000 --> 952.000] So because many of our other psychotherapies don't even have this level of evidence. For addiction, we just have one on smoking.
120
+ [952.000 --> 970.000] And there's some studies in the pipeline. And I know some internal studies by companies that look very promising. And I know in China, they have some studies and they use VR a lot in addiction programs.
121
+ [970.000 --> 990.000] And then pain: for acute pain with burn patients, as I said, Hunter Hoffman had a controlled trial. For PTSD and trauma, we have three controlled trials; eating disorders and obesity; for autism and social skills training, two just recently came out; and for schizophrenia,
122
+ [990.000 --> 1003.000] We've got four controlled trials now. But sadly for mood disorders, depression, bipolar things like that, there doesn't seem to be a great effect unless there's an anxiety component.
123
+ [1003.000 --> 1018.000] And I'll tell you a little bit more about that as we talk more. But so not as much evidence yet. And we need to develop ways of treating mood disorders with VR.
124
+ [1019.000 --> 1037.000] All right. So and although these studies have been around a lot a long time, the application in clinical practice is fairly new because we didn't have access to platforms and the technology to do it. And now that's becoming available with different platforms.
125
+ [1037.000 --> 1053.000] We have one that we just started using a couple of years ago in our VR clinic, which makes it really easy and providers in like an hour or two can really become pretty proficient at it if they know how to do exposure therapy and other and they know the protocols.
126
+ [1054.000 --> 1066.000] A few of these platforms actually have all the evidence based protocols. So it's nice standardized care with an evidence base.
127
+ [1066.000 --> 1080.000] And so let's talk about each of the disorders and the evidence for them in a little bit more detail so you can get an idea of what actually happens if you went in for VR therapy.
128
+ [1081.000 --> 1093.000] And one thing I want to say too is VR is just a tool in therapy, just like a Kleenex box. I think of it just like a chair would be.
129
+ [1093.000 --> 1106.000] And so most of the therapy requires that you have a good therapeutic alliance. The research shows that so the relationship between the provider and the patient is most important.
130
+ [1106.000 --> 1117.000] You need to have goals. You need to be aligned on the goals and the tasks. You need to also make sure that you've got relapse prevention and a supportive system.
131
+ [1117.000 --> 1129.000] And so there are many interventions you can deliver in a interpersonal skills training, motivational interviewing.
132
+ [1129.000 --> 1140.000] But the treatment in VR is just one component of the therapy. So really in practice it might be 5% of what you're doing.
133
+ [1140.000 --> 1150.000] We're not sticking people in to therapy and leaving the room and letting the VR do the therapy. I have a lot of people who think that that might be the case.
134
+ [1150.000 --> 1159.000] It's very curated and it's a very small percentage of what you do and it's more like a tool of psychotherapy.
+ [1159.000 --> 1172.000] So, anxiety disorders. I'll start with that, since we have the most evidence base here, and about a third of people are affected at some point in their lifetime with an anxiety disorder.
+ [1172.000 --> 1183.000] So it's quite a public health issue, and the standard treatment is psychotherapy, medications, and complementary health approaches.
+ [1183.000 --> 1191.000] And most of the psychotherapy does require that you have an exposure component to it in order to improve.
+ [1191.000 --> 1199.000] And let me tell you why. I'm just going to give you the principles, so you don't get into the details of everything.
+ [1199.000 --> 1212.000] But the principle with anxiety disorders and treatment is that all anxiety is maintained by avoidance and safety behaviors.
+ [1212.000 --> 1226.000] So you may think, oh well, the person is just catastrophizing and thinking incorrectly. That's true. But really what's maintaining anxiety is behavior, avoidance behaviors.
+ [1226.000 --> 1238.000] The thoughts might have an avoidance component too. And historically, it might actually have been a good thing to be anxious.
+ [1238.000 --> 1262.000] If you're on the savanna and you see a lion and you run, you avoid; that's adaptive. And then when you go near the place where you saw the lion, or the cave, you stay away, you get your family and your friends to move away, you get more and more afraid of lions, and you survive.
+ [1262.000 --> 1288.000] That's an adaptive positive feedback loop. But in most of daily life now, you can get cued in and get this positive feedback loop going, and that's maladaptive. For example, I was on a plane ride when my kids were little and we had a very turbulent flight.
+ [1288.000 --> 1301.000] Like six hours of severe turbulence. I had these little babies, and the flight attendants never said a thing. The pilot didn't say anything for like six hours. We were in pure terror.
+ [1301.000 --> 1317.000] And then 9/11 came along, and so I didn't go on a plane for like three years after that. I was so traumatized. And then, when I did start going on planes, I was so hypervigilant, making sure everything's okay.
+ [1317.000 --> 1331.000] Making sure we're on the shortest flight possible. I was doing all sorts of avoidance behaviors. So of course I got more and more anxious. And I wanted to get out of that.
+ [1331.000 --> 1346.000] So what did I do? I started to fly more, and I knew these principles. So I knew that I had to stop gripping really tightly whenever there was turbulence. I needed to act the opposite.
+ [1346.000 --> 1360.000] And I needed to stop being so hypervigilant. I needed to watch movies and do what I would normally do if I wasn't scared. And so I got about 85% better.
+ [1360.000 --> 1375.000] But then I was still 15% anxious. I wasn't back to my baseline. So I thought, what am I doing that is still a safety behavior? And I realized that every time there was turbulence,
+ [1375.000 --> 1388.000] I had this safety behavior in my mind. I'm like, okay, I hope the pilots are really thinking about how to make this as smooth as possible. They've got to control the plane really hard.
+ [1388.000 --> 1399.000] And so I would do this mental gymnastics. And then I realized that I didn't know anything about turbulence. I would never watch movies or anything about turbulence or planes.
+ [1399.000 --> 1408.000] So I started watching more movies, and I found out that with turbulence, you actually need to stop controlling the plane to make it smoother. The more you control, the more turbulent things get.
+ [1408.000 --> 1419.000] So I started to not be afraid of learning about airplanes. And then every time there was turbulence, I'd think, oh, just relax. I hope they're not controlling the plane.
+ [1419.000 --> 1433.000] And then the last 15% of my flight phobia went away. And I'm not afraid anymore. So those were little tiny avoidance behaviors, beyond your consciousness, that you have to figure out and let go of.
+ [1434.000 --> 1450.000] And so that is maybe an example of how anxiety disorders work. And over time, you can see: after that turbulent flight, it took me probably four or five years to get back down to baseline with my exposures.
+ [1450.000 --> 1459.000] So, habituation and desensitization. Exposure techniques can be done in three ways.
+ [1459.000 --> 1473.000] Usually therapists will start with an imaginal component, where you talk about what happened, or you talk about facing the thing that you're afraid of, or you talk about how you're going to let go of your safety behaviors or your avoidance.
+ [1473.000 --> 1478.000] And that's imaginal exposure. So that's some amount of exposure.
+ [1479.000 --> 1496.000] And then, if you don't have virtual reality, most would go right into in vivo exposure, where after a while the therapist is like, okay, now you're ready to go out and face the thing that you're afraid of.
+ [1496.000 --> 1504.000] And it's a big jump between imagining and then going out into the real world and exposing yourself to what you're afraid of.
+ [1504.000 --> 1511.000] And so there's a 25% dropout rate for exposure, because it hurts. It's hard and it's scary.
+ [1512.000 --> 1528.000] And so there does seem to be some evidence that if you have a middle step, this in virtuo step, you can increase the likelihood that a person gets to the in vivo exposure, or maybe even get through it quicker.
+ [1528.000 --> 1540.000] And you also have your therapist in the room to help you cope. So you do an imaginal exposure, then you add a virtual, or in virtuo, exposure.
+ [1540.000 --> 1552.000] And you can do it on a television screen, or you can do an immersive environment. Then you can have coaching, and you can desensitize even more before you actually go do the in vivo.
+ [1552.000 --> 1565.000] I think I would have gotten better a lot faster if I had had virtual reality to help me, actually. Who can afford to fly more than weekly? Because usually you need a weekly exposure to desensitize.
+ [1565.000 --> 1575.000] That's why airplane flying is so difficult to treat. But when you have a virtual experience, you don't have to go broke doing your exposure.
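To make that three-step progression concrete, here is a minimal sketch, in Python, of how a graded exposure ladder might be tracked across the imaginal, in virtuo, and in vivo steps. The class names, the SUDS (subjective units of distress) threshold, and the ratings are illustrative assumptions, not any clinic's actual protocol.

```python
# Minimal sketch: tracking habituation across a graded exposure ladder.
# All names and numbers are illustrative, not a real clinical protocol.
from dataclasses import dataclass, field

@dataclass
class ExposureStep:
    modality: str                 # "imaginal", "in virtuo", or "in vivo"
    description: str
    suds: list = field(default_factory=list)  # 0-100 distress, one per weekly session

    def habituated(self, threshold: int = 30) -> bool:
        # Treat the step as mastered once peak distress stays below the
        # threshold for the two most recent sessions.
        return len(self.suds) >= 2 and max(self.suds[-2:]) < threshold

ladder = [
    ExposureStep("imaginal", "talk through a turbulent flight"),
    ExposureStep("in virtuo", "VR cabin with turbulence, no gripping the armrest"),
    ExposureStep("in vivo", "short real flight, watch a movie as usual"),
]

ladder[0].suds = [70, 50, 25, 20]   # habituating across weekly sessions
ladder[1].suds = [65, 40, 38]       # not yet below threshold

for step in ladder:
    status = "ready to advance" if step.habituated() else "keep practicing"
    print(f"{step.modality:10s} {status}")
```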
+ [1575.000 --> 1592.000] Okay. So, VR for anxiety disorders. As I said, there's a lot of evidence. PTSD, OCD, and generalized anxiety are also often considered anxiety disorders; we clump those together.
+ [1592.000 --> 1606.000] And I don't know why, but there's a lot of pilot data for OCD, and people do use virtual reality for OCD, but there aren't any controlled trials yet. Same for generalized anxiety.
+ [1606.000 --> 1623.000] So what we know from the studies: they're still a bit underpowered, and the results all appear to be similar to standard CBT. So it looks like they're not inferior to CBT.
+ [1623.000 --> 1631.000] But as I said, it looks like the dropout rates might be a little less when you do VR.
+ [1631.000 --> 1643.000] And here would be an environment that we would use, maybe for fear of public speaking, in the office. And we can adjust it: you can have a medium grade,
+ [1643.000 --> 1660.000] you can have an easy level, maybe with two people in the room who are really positive, or you can adjust it to the advanced level, where you're in a big room and people are judgmental or negative, or even walking out of the room.
+ [1660.000 --> 1672.000] And then the other thing, which I think is actually more useful and which I use more than these protocols, is having people pick the content that's specific to them.
+ [1672.000 --> 1686.000] So this is personalized, precise medicine happening right here, where a person can find the cue, because often these are idiosyncratic sorts of fears, where it could be, you know, a yellow bookshelf or something.
+ [1686.000 --> 1695.000] And you're not going to have a whole protocol for that. People can find their cues and download them as 360-degree VR video.
+ [1695.000 --> 1707.000] And then I will either give them a cardboard viewer to take home, or they buy one of these, and they can do all sorts of exposures all day long at home with their specific cue.
+ [1707.000 --> 1714.000] So that's kind of how we're using it in the clinics.
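As a sketch of that "adjustable difficulty plus personalized cue" setup, here is what a session configuration might look like. The field names and values are hypothetical illustrations, not the actual API of any VR therapy platform.

```python
# Hypothetical session configuration for a public-speaking exposure;
# field names are illustrative, not any vendor's real API.
public_speaking_session = {
    "environment": "conference_room",
    "difficulty": "advanced",           # "easy" | "medium" | "advanced"
    "audience_size": 40,
    "audience_attitude": "judgmental",  # "positive" | "neutral" | "judgmental"
    "events": ["audience_member_walks_out"],
}

# Personalized cue: a 360-degree video the patient sourced themselves,
# loaded onto a cheap home viewer for daily practice between sessions.
personalized_cue_session = {
    "source": "patient_supplied_360_video",
    "cue": "yellow bookshelf",          # idiosyncratic cue from the example above
    "home_device": "cardboard_viewer",
    "practices_per_day": 3,
}
```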
+ [1714.000 --> 1723.000] The other very robust research topic has been eating disorders.
+ [1723.000 --> 1741.000] We have a study going on now: I have some collaborators, and we're about to launch a study looking at VR-enhanced CBT for eating disorders.
+ [1741.000 --> 1747.000] But basically, with eating disorders, one in 20 people have been affected at some point in their lives.
+ [1747.000 --> 1759.000] And the standard treatment that we know works is cognitive behavior therapy, as well as wellness and nutritional counseling.
+ [1759.000 --> 1765.000] But I am, you know, glossing over the details of this just for the sake of time.
+ [1765.000 --> 1769.000] But it's basically CBT and nutritional counseling.
+ [1769.000 --> 1781.000] And what the eating disorder specialists tell me is that many times the behaviors get better, but what's hard to treat is the body image and the body dissatisfaction that remains.
+ [1781.000 --> 1785.000] And that is a risk factor for going back to these behaviors.
+ [1785.000 --> 1807.000] So VR has a couple of advantages in treating eating disorders, we think. Not only can it help people change their behaviors towards food, so you can either have somebody practice inhibiting their response to food, or have them practice to stop restricting, because everybody's different.
+ [1807.000 --> 1813.000] Some people are restricting too much; some people are engaging too much with food.
+ [1813.000 --> 1829.000] And you can use VR to get them to practice those behaviors. But the other helpful thing is that you can have them work with their cognitive distortions about their size, especially for anorexia nervosa.
+ [1829.000 --> 1837.000] There are some programs that help people estimate what they look like and then compare it.
+ [1837.000 --> 1841.000] And I'll show you one of our programs where you can do that.
+ [1841.000 --> 1847.000] We haven't started delivering this, but other researchers and clinicians do use this.
+ [1847.000 --> 1863.000] And then the other thing, and I'll tell you a bit more when we get to embodied VR, is there is some evidence that we can actually reduce body dissatisfaction by updating implicit biases towards our body.
+ [1863.000 --> 1866.000] So I'll tell you more details there.
+ [1866.000 --> 1883.000] So here's one of the things that we're using, or will be using, in the clinic: the top one is for having people estimate their body size, and the bottom is a restaurant with different foods that you can program in.
+ [1883.000 --> 1907.000] So, schizophrenia and psychosis and paranoid delusions: what does that have to do with VR? Wouldn't VR be contraindicated for this population? I usually thought, they're not in touch with reality; why should we deliver something that's going to take them more out of touch with reality?
+ [1907.000 --> 1920.000] But actually there are some really good options. And as many as three in a hundred people have suffered at some point in their life with psychosis. So it's not rare.
+ [1920.000 --> 1934.000] Psychotherapy and medications, of course, are important, but we also know that cognitive behavior therapy is quite important and helpful for three things.
+ [1934.000 --> 1949.000] One thing is reappraising. People with psychotic disorders need to know how to reappraise and do reality testing. I don't know if anyone saw A Beautiful Mind.
+ [1949.000 --> 1964.000] John Nash, the economics Nobel laureate, who suffered with psychosis, learned that his hallucinations didn't age, and that's how he could know that somebody was actually a hallucination.
+ [1964.000 --> 1978.000] So there are things like that, the cognitive reappraisal, that are important to do. And then the second part is distress tolerance. People have to deal with and accept a lot of their hallucinations and symptoms
+ [1978.000 --> 2007.000] that can't be reversed. And so distress tolerance is important, and VR can help with learning skills to tolerate distress. And then lastly, there are the executive functioning problems. And there is cognitive enhancement therapy, practice with computers, that helps with the cognitive dysfunction that's often involved with psychosis.
+ [2007.000 --> 2036.000] So there have now been four controlled trials, mostly looking at social skills training and decreasing paranoia. And they have shown improvements in cognition and functioning, and also self-efficacy. And interestingly, one study had one group that was
+ [2036.000 --> 2052.000] vocational training plus VR, and another that was vocational training plus group therapy, and the VR did better than the group. So there was a comparison study, which is still kind of rare at this point.
+ [2052.000 --> 2065.000] And there's also a randomized controlled trial showing that it improves persecutory delusions. So lots of potential.
+ [2065.000 --> 2086.000] We are not delivering it yet, but we have Kate Hardy, who is one of the directors of the Inspire Clinic, who is very interested in starting to develop a way that we could deliver this. So it should be coming down the pipeline in our clinics at some point.
+ [2086.000 --> 2096.000] And this is just an example of what one of the studies showed, the difference. And this is probably too busy of a slide, I apologize.
+ [2096.000 --> 2107.000] So they compared virtual reality exposure where they just exposed people to what they're afraid of and didn't ask them to decrease their safety behaviors,
+ [2107.000 --> 2119.000] and that virtual reality exposure did not do as well as exposing them to what they're afraid of and having them stop their safety behaviors and do something different.
+ [2119.000 --> 2132.000] So it's really important not only to expose yourself to what you fear, but to actually let go of the safety behaviors and have an alternate behavior in response. I hope that makes sense.
+ [2133.000 --> 2146.000] Okay. And these are some of the scenarios that people use when they're treating psychosis with VR. Now, alcohol and addiction:
+ [2146.000 --> 2156.000] there's potential to treat that now, helping people practice refusal skills in the session.
+ [2156.000 --> 2168.000] There's some controversy about desensitization to cues; some people worry that you could possibly cue someone or create a craving.
+ [2168.000 --> 2177.000] But I think most of the time, people who've studied this have found that the cue desensitization is helpful.
+ [2177.000 --> 2192.000] Mood disorders, like major depression and dysthymia: as I said, there's not a great evidence base, although we can extrapolate to delivering mindfulness and relaxation skills.
+ [2192.000 --> 2206.000] So we know that relaxation and mindfulness skills, although they help the acceptability of anxiety treatments, don't actually influence the outcome there.
+ [2206.000 --> 2220.000] But it's the opposite with mood disorders: relaxation and mindfulness training actually can be a treatment and affect outcome.
+ [2220.000 --> 2230.000] So you can deliver that, and like I said, they don't have to listen to me droning on about how to do progressive muscle relaxation. They can have an experience.
+ [2230.000 --> 2239.000] And it also reduces the burden on the physician, and physician wellness is a big, big issue, especially in psychiatry.
+ [2239.000 --> 2252.000] I can get my note done in the electronic record while somebody's having their experience. I can be in a better frame of mind, which translates to better outcomes for the patients as well.
+ [2252.000 --> 2267.000] So yeah, we've got in our clinic progressive muscle relaxation, diaphragmatic breathing, mindfulness, and then pain reduction for oncology patients, which is being developed right now in our inpatient units.
+ [2267.000 --> 2278.000] And here's our clinic. We have a website with how to get referred, so you can definitely Google it and check out our clinic.
+ [2278.000 --> 2291.000] But let's get to the other juicy parts. That's all old stuff. What about this new embodied virtual reality? What the heck is that? And why is virtual reality getting so much attention?
+ [2291.000 --> 2309.000] Well, because, and I don't think we can overstate this, we now have the ability to inhabit another body and feel like we're in another body with a simulation.
+ [2309.000 --> 2321.000] And I think this is the first time in history we've been able to have a body transfer experience, which is going to open up a lot of different things.
+ [2321.000 --> 2335.000] And why this is so different is that although it's the same kind of tracking, just like the head gets tracked, it adds in body tracking by using sensors on the wall.
+ [2335.000 --> 2353.000] Although that's changing: some of them now won't even have sensors, and we now have a cordless version of this. Before, you could only have these embodiment experiences in multimillion-dollar labs; here, Jeremy Bailenson at the Virtual Human Interaction Lab was one of the pioneers.
+ [2353.000 --> 2372.000] But now we are able to bring this into offices and anywhere, due to these commercial gaming devices: the Oculus Rift, the HTC Vive down in the right corner. That's the one we use for my study.
+ [2373.000 --> 2387.000] And so what's so different about this compared to traditional VR? Well, embodied virtual reality has movement involved.
+ [2388.000 --> 2410.000] And so when you take body movement and use it to inform a computer-generated simulation, you not only can change cognition and emotion, but you now have access to changing sensation and actual movement.
+ [2410.000 --> 2419.000] And I'll try to explain this a little better, but sensation and movement are intimately linked in a feedback loop.
+ [2419.000 --> 2435.000] And these avatar experiences are available on gaming devices now in different programs. There's one program, High Fidelity, where you can go pick your body, design your body.
+ [2435.000 --> 2442.000] It's free of charge, and you meet people from all over the world. I mean, this is just weird.
+ [2442.000 --> 2459.000] And so you can imagine the things that are available now, that you could practice being somebody that you're not. I mean, a lot of psychotherapy is role playing, embodied experiences.
+ [2459.000 --> 2470.000] So they're using it now in medicine, not yet in psychiatry, but for stroke rehab, motor skills training and simulations, Parkinson's disease, and cerebral palsy.
+ [2470.000 --> 2491.000] The first embodied VR therapy was actually Ramachandran's mirror therapy, where they "amputated" a phantom limb using a mirror. That's really embodied virtual reality when you think about it, a very crude form of it.
+ [2491.000 --> 2508.000] And why and how this works is that pain and sensation respond to lack of movement. So you can imagine, if you're on the savanna and you break your arm, you feel a lot of pain, and then your arm is not moving.
+ [2508.000 --> 2520.000] And the more it doesn't move, the more the brain's like, this is really serious, don't move the arm, send more pain, because that arm needs to heal and you don't want to use it.
+ [2520.000 --> 2530.000] And so with lack of movement, when the brain sees something not moving, the reptilian brain picks up on disuse and it increases pain.
+ [2530.000 --> 2558.000] And so this is good when you're healing, right? It's a positive feedback loop, and it allows you to recover. But people can get into these chronic pain loops with disuse: the more you don't use something, the more pain you feel, the more you don't use it. Or you have a stroke and you don't use something, and you get these weird pain and sympathetic dystrophies and inflammation that start to occur.
+ [2558.000 --> 2569.000] And the way out of that, for all the pain specialists that I've talked to, they say movement is the first indication of pain getting better.
+ [2569.000 --> 2581.000] So if your brain sees movement, it says, oh, okay, things are not that bad, let's turn down the pain. And so you can decrease pain and sensation.
+ [2581.000 --> 2597.000] So that's just an example of how sensation and movement and visual information are connected. But there are many other systems in the body, like the immune system, which I'll talk about a little later, that are connected this way.
+ [2597.000 --> 2617.000] So the ability to have movement really is a game changer. There are many disuse syndromes, particularly unilateral ones, that mirror therapy has been shown to be quite helpful with. I won't bore you with all the details.
+ [2618.000 --> 2631.000] And so that's what made me interested. I have a lot of patients with psychosomatic illness. They'll have paralysis or weakness, many times on one side.
+ [2631.000 --> 2643.000] And so that got me interested in using mirror therapy. And then I collaborated with Jeremy Bailenson's lab, and it was right at the time that the HTC Vive was coming out.
+ [2643.000 --> 2657.000] And we were able to replicate what was going on in Jeremy's lab and do mirror visual feedback, mirror therapy, in embodied virtual reality.
+ [2657.000 --> 2668.000] And so we're now two years into the study, with another year to go, doing a randomized controlled trial to see if it helps with psychosomatic illness.
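The geometric core of that mirror visual feedback is simple: track the moving limb and render its motion on the opposite virtual limb by reflecting positions across the body's midline. Here is a minimal sketch of just that reflection, assuming a body-centered coordinate frame with x pointing to the patient's right; a real system would also mirror rotations and the full hand model.

```python
# Minimal sketch of the limb swap in VR mirror visual feedback:
# reflect the tracked right hand across the body midline (the x = 0
# plane in a body-centered frame) so the patient sees the motion
# on their virtual *left* hand.
def mirror_across_midline(position):
    x, y, z = position           # meters: right(+x), up(+y), forward(+z)
    return (-x, y, z)            # flip left/right; height and depth unchanged

tracked_right_hand = (0.35, 1.10, 0.40)   # illustrative tracker reading
virtual_left_hand = mirror_across_midline(tracked_right_hand)
print(virtual_left_hand)                  # (-0.35, 1.1, 0.4)
```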
+ [2668.000 --> 2695.000] So what else is so great about embodied virtual reality? Well, I want to convince you that we're in brand new territory by talking about some of the research that's been done. And this has been done with embodiment illusions, because before, we didn't have access to all these new commercial gaming devices.
+ [2695.000 --> 2707.000] And so we know a bit about the nervous system from these illusions, and the most notorious one, the one we have the most data on, is called the rubber hand illusion.
+ [2707.000 --> 2720.000] And I think it will inform us about virtual reality. So let me just show this to you. A picture is worth a thousand words.
+ [2755.000 --> 2814.900] [A rubber hand illusion demonstration video plays; the audio was not transcribed intelligibly.]
+ [2814.900 --> 2815.900] ...that illusion is.
+ [2815.900 --> 2825.140] And so people have different propensities for this.
+ [2825.140 --> 2829.820] Some people have this experience with the rubber hand illusion.
+ [2829.820 --> 2831.700] Some people don't have it.
+ [2831.700 --> 2836.220] And it seems to be a marker of interoceptive sensation.
+ [2836.220 --> 2843.820] And problems with interoceptive sensation are a marker for emotion regulation problems.
+ [2843.820 --> 2855.020] So people that have emotion regulation issues can more easily get immersed and embodied in these illusions.
+ [2855.020 --> 2857.820] So it can be a marker.
+ [2857.820 --> 2866.220] I think we can start to use virtual reality and possibly measure things in psychiatric illness.
+ [2866.220 --> 2870.620] The other thing that's a little disturbing is that we don't know very much about this.
+ [2870.620 --> 2882.660] And this researcher, Demasio, has demonstrated that the hand that's being disembodied during the rubber hand illusion actually starts to have an immune response.
+ [2882.660 --> 2884.820] The body starts to reject that hand.
+ [2884.820 --> 2890.620] So the visual system and the immune system are also intimately related.
+ [2890.620 --> 2893.740] People with trauma dissociate.
+ [2893.740 --> 2904.180] In a sense, they are also more easily taken into these kinds of immersive experiences.
+ [2904.180 --> 2920.500] So although it's a marker of a kind of plasticity, and we probably developed this capacity because of tool use, we've never been able to have such robust illusions before, and we don't know what the effects are going to be.
+ [2920.500 --> 2922.820] And we need to be very careful.
+ [2922.820 --> 2925.740] These aren't regulated devices.
+ [2925.740 --> 2930.900] And so the safety is unclear.
+ [2930.900 --> 2935.700] And I think that the medical community also needs to be very careful in using these.
+ [2935.700 --> 2942.220] And that's for the embodied devices. I think we're pretty safe with the headsets, the traditional virtual reality.
+ [2942.220 --> 2946.020] But everything after that, we're in a big experiment right now.
+ [2946.020 --> 2950.140] And I think we need to know that we're in an experiment.
+ [2950.140 --> 2956.860] The FDA is thinking about regulating these devices, which I think they should.
+ [2956.860 --> 2968.060] But I think what we're really coming to terms with is this visual capture, this dominance of the visual system.
+ [2968.060 --> 2971.500] And that things are possible that we don't even know yet.
+ [2971.500 --> 2978.180] And Jeremy Bailenson's lab has shown that we can actually learn to control an eight-armed lobster.
+ [2978.180 --> 2983.740] You can actually become a lobster and learn how to control that body.
+ [2983.740 --> 2994.900] So the exploratory uses right now are mirror therapy, integration with biofeedback, physical therapy, and body image disorders.
+ [2994.900 --> 3006.500] And there are cases of people improving their body dissatisfaction just by swapping bodies, so people with obesity swap into a very thin body.
+ [3006.500 --> 3011.700] And once they leave the environment, they actually feel more satisfied with their own body.
+ [3011.700 --> 3014.060] But you would think it'd be the opposite, right?
+ [3014.060 --> 3019.740] And then that seems to translate to health behaviors.
+ [3019.740 --> 3027.060] There are uses for teletherapies, because most of communication, about 80% of communication, is nonverbal.
+ [3027.060 --> 3031.660] You lose a lot if you're doing telepsychiatry on just a screen.
+ [3031.660 --> 3037.780] You only get a limited amount of information, and you miss eye gaze and things that are important for attachment.
+ [3037.780 --> 3047.820] So doing telepsychiatry in VR will probably be another possible novel use.
+ [3047.820 --> 3061.140] And then, I think most important, having a mindfulness exercise as an embodied experience is going to be even more immersive and interactive, and better retained.
+ [3061.140 --> 3067.980] You're going to learn even more, probably, and enhance learning.
+ [3067.980 --> 3081.300] And then lastly, and I'm finishing up now, is the ability to reprogram implicit bias and reprogram some of our unconscious belief systems.
+ [3081.300 --> 3088.940] So Jeremy Bailenson's lab is really the one who coined the term Proteus effect:
+ [3088.940 --> 3096.580] when you inhabit an avatar with certain traits, you change your beliefs about those traits.
+ [3096.580 --> 3101.460] So I think one of their first experiments was with superheroes.
+ [3101.460 --> 3109.760] They had a setup where you could either inhabit a superhero or a non-superhero.
+ [3109.760 --> 3118.320] And if you inhabited the superhero, after the experiment, they recorded people helping others more.
+ [3118.320 --> 3132.680] And when they do this with gender and race, they actually show with implicit bias testing that implicit bias has changed after people have inhabited a body with that characteristic.
+ [3132.680 --> 3144.000] So it could be a force for good, and maybe not good, in that a lot of this is passive reprogramming, maybe without your consent.
+ [3144.000 --> 3149.240] But there are implicit biases that we can reprogram.
+ [3149.240 --> 3157.040] And a lot of psychiatric illnesses have some elements of implicit bias.
+ [3157.040 --> 3167.980] We can work on theory of mind, where some disorders, like autism, have impairment in being able to understand the mind of the other.
+ [3167.980 --> 3184.140] We can do training with forced allocentric viewpoints, maybe have people practice empathy, and have people change their implicit beliefs about themselves and about others.
+ [3184.140 --> 3189.020] And then, as far as the body, there's this allocentric lock theory.
+ [3189.020 --> 3194.660] I'm hoping I can describe it well. It's a little bit complicated.
+ [3194.660 --> 3204.860] But basically, there's an egocentric view of ourselves that comes from information coming up from my internal sensations.
+ [3204.860 --> 3211.860] My interoceptive sensations will inform me of what my body looks like, what it feels like.
+ [3211.860 --> 3215.740] But our body perception is also coming from the outside.
+ [3215.740 --> 3218.260] What do other people think I look like?
+ [3218.260 --> 3223.140] Am I fat? Am I too tall? Am I too short?
+ [3223.140 --> 3229.700] You have this idea of what the world thinks of you.
+ [3229.700 --> 3237.900] So you have a model of who you are and your body, based on information coming up and information coming down.
+ [3237.900 --> 3246.700] In some psychiatric disorders, it appears that the allocentric viewpoint is dominating the view of your body.
+ [3246.700 --> 3249.340] It's not updating the models correctly.
+ [3249.340 --> 3257.740] For example, someone who's very thin, struggling with anorexia, might feel fat even though when they look down, they see that they're thin.
+ [3257.740 --> 3264.020] But this allocentric viewpoint is overriding that.
+ [3264.020 --> 3267.060] The same with people with body dysmorphia.
+ [3267.060 --> 3278.900] There may be ways to change that: by inhabiting these avatars, we can update models about the body.
+ [3278.900 --> 3282.900] That's exciting. Again, it's all exploratory at this point.
+ [3282.900 --> 3288.980] I don't think we can say anything for sure, but these are the things that are being explored.
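One way to picture the allocentric lock idea is as a weighted blend of the two streams: if the top-down allocentric model carries almost all the weight and never updates, the felt body size stays distorted no matter what the eyes report. The toy sketch below is only an illustration of that logic with made-up numbers; it is an assumption for exposition, not the formal theory.

```python
# Toy illustration of the "allocentric lock": perceived body size as a
# weighted blend of the bottom-up (egocentric, interoceptive/visual) and
# top-down (allocentric, "what the world sees") estimates.
# All numbers are made up for illustration.
def perceived_size(egocentric, allocentric, w_allocentric):
    return w_allocentric * allocentric + (1 - w_allocentric) * egocentric

looking_down = 1.0      # normalized: what the person's own view actually shows
stored_model = 1.6      # outdated allocentric model: "much larger"

# Locked: the allocentric model dominates, so the body still "feels fat".
print(perceived_size(looking_down, stored_model, w_allocentric=0.9))  # 1.54

# If embodying an accurate avatar lets the allocentric model update...
print(perceived_size(looking_down, 1.05, w_allocentric=0.9))          # 1.045
```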
+ [3288.980 --> 3293.020] One last thing: just notes of caution.
+ [3293.020 --> 3294.340] Some of the barriers are cost.
+ [3294.340 --> 3300.300] We don't get reimbursed any more for using these technologies.
+ [3300.300 --> 3306.580] It's unclear who's going to pay for them, although the cost is pretty low at this point.
+ [3306.580 --> 3317.260] When I do psychotherapy, I'm not getting any more RVUs or payment for using virtual reality as opposed to not using it.
+ [3317.260 --> 3321.700] Some people have perceived difficulty with the technology.
+ [3321.700 --> 3326.780] There is a 1% chance of cybersickness.
+ [3326.780 --> 3335.420] There are also visual disturbances, postural instability, and lucid dreaming, which I guess could be a positive or a negative.
+ [3335.420 --> 3344.540] There are gamers, and usually this is after hours or days of use, who will wake up and have reported not feeling their hands.
+ [3344.540 --> 3349.860] Luckily this is temporary, but we don't know what some of the side effects may be.
+ [3349.860 --> 3356.940] Really, most devices are recommending not more than 20 to 30 minutes of use.
+ [3356.940 --> 3363.620] There's a limit to how long you can be in VR, especially in these embodied experiences.
+ [3363.620 --> 3379.420] It does look like people who have migraines, or children, or people who have cybersickness within the first 10 minutes of use are at increased risk.
+ [3379.420 --> 3394.220] But there are ways to manage it: with better content, pacing people, having them spend less time, and having providers who let people out of the simulation whenever they notice it.
+ [3394.220 --> 3396.260] There will need to be some training.
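As a sketch of how those cautions might be enforced in software, here is a minimal session guard: cap session length in line with the 20-to-30-minute guidance, exit immediately on any sickness report, and flag early-onset sickness as a risk marker. The thresholds and function names are illustrative assumptions, not any device's actual safeguards.

```python
import time

MAX_SESSION_SECONDS = 25 * 60    # within the 20-30 minute guidance above
EARLY_ONSET_SECONDS = 10 * 60    # sickness in the first 10 minutes flags risk

def should_exit(session_start, sickness_reported):
    """Return (exit_now, reason) for the current moment in the session."""
    if sickness_reported:
        return True, "cybersickness reported: take the patient out now"
    if time.time() - session_start > MAX_SESSION_SECONDS:
        return True, "session time limit reached"
    return False, "continue"

def early_onset_risk(session_start, sickness_reported):
    # Sickness within the first ten minutes suggests this patient is at
    # increased risk in future sessions (per the talk's observation).
    return sickness_reported and (time.time() - session_start) < EARLY_ONSET_SECONDS

start = time.time()
print(should_exit(start, sickness_reported=False))   # (False, 'continue')
print(should_exit(start, sickness_reported=True))    # (True, 'cybersickness ...')
```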
+ [3396.260 --> 3405.900] Anyway, if you're interested in more, you can look up our VRIT website and what's happening in our department.
+ [3405.900 --> 3409.500] Okay. So, well, I went a little longer than I thought.
+ [3409.500 --> 3412.140] But now we're open for questions.
+ [3412.140 --> 3421.660] Did you mean to say that the immune system doesn't work in the disembodied limb, while it still works in the rest of the body at the same time?
+ [3421.660 --> 3426.860] Yeah, they report histamine.
+ [3426.860 --> 3438.020] Oh, yeah. So, you're asking whether the immune response is working fine in the rest of the body, but not in the limb that's disembodied.
+ [3438.020 --> 3446.860] You were asking me that as a question, and I'm saying yes, it's a histamine response that appears very localized in that area.
+ [3446.860 --> 3449.660] Yes, yes. Yeah.
+ [3449.660 --> 3456.660] Good. Yeah, I'm kind of intrigued by that.
+ [3456.660 --> 3469.820] I heard about embodied VR being used for having conversations with the self, so having this self-therapy session, and I'm interested in that and what your thoughts are on what the applications could be.
+ [3469.820 --> 3474.420] Yeah, me too. I know there was a report on that. Oh, sorry.
+ [3474.420 --> 3482.780] Yeah, there are reports of using embodied VR for self-therapy.
+ [3482.780 --> 3489.580] And yes, I have heard about that, and I am very interested in it.
+ [3489.580 --> 3506.260] There was a podcast recently about a researcher who did a program called Freud, which I'm trying to get my hands on, in which you go into the experience and you see a depiction of Freud, and you go in and you tell him your problem.
+ [3506.260 --> 3514.460] And then, instead of Freud answering, you go in and you are Freud, and you listen to yourself.
+ [3514.460 --> 3523.660] So from that you get an allocentric point of view, and people have found that they can see things differently and reappraise things that way.
+ [3523.660 --> 3526.380] So yeah, I think that's going to be great.
+ [3526.380 --> 3531.460] Or the empty chair technique; a lot of things we do in psychotherapy are actually embodied.
+ [3531.460 --> 3534.420] We use our imagination to embody.
+ [3534.420 --> 3539.540] But I think this is good, because a lot of times when people are emotionally dysregulated, they can't access their emotion.
+ [3539.540 --> 3542.140] They can't access their imagination.
+ [3542.140 --> 3549.180] Or I see people with TBI, traumatic brain injuries, and they just have a lot of problems imagining.
+ [3549.180 --> 3556.940] And so this is sort of like a prosthetic imagination for those people. Does that answer your question?
+ [3556.940 --> 3558.940] Or do you know of anything else that's good?
+ [3558.940 --> 3562.740] There was actually a great episode on Radiolab about that.
+ [3562.740 --> 3565.740] Yeah. I'd kind of recommend it.
+ [3565.740 --> 3570.300] It takes you through this person who has the experience of talking to themselves as Freud.
+ [3570.300 --> 3571.900] And it's very cool.
+ [3571.900 --> 3574.100] Yeah, I have a student trying to track it down.
+ [3574.100 --> 3576.180] We want to try to use it in our clinic.
+ [3576.180 --> 3577.940] Yeah, in the red.
+ [3577.940 --> 3584.860] I had heard some information, it was very limited: I have chronic vertigo,
+ [3584.860 --> 3589.140] and that VR could help people who have chronic vertigo.
+ [3589.140 --> 3590.780] Did you hear anything about that?
+ [3590.780 --> 3596.300] I heard something, but I don't know the details, but it makes sense to me. Oh, sorry.
+ [3596.300 --> 3609.260] The woman in the red said that she has chronic vertigo and has heard of VR being used for that.
+ [3609.260 --> 3616.860] And I have heard of it, but I don't remember the details. But I would think, yeah, it's a motor and sensory phenomenon.
+ [3616.860 --> 3622.500] And an illusion might definitely be something that could help.
+ [3622.500 --> 3631.260] Yeah. And I know there are very effective protocols for vertigo.
+ [3631.260 --> 3636.540] And maybe they could also put those into VR. That would be another option.
+ [3636.540 --> 3642.300] Yeah, because I know there are very good treatments and protocols for vertigo.
+ [3642.300 --> 3648.980] In ENT, yeah, ear, nose and throat. Yeah.
+ [3648.980 --> 3649.980] In the back.
+ [3649.980 --> 3658.860] You were mentioning executive functioning; is there anything on improving that, as in with ADD?
+ [3658.860 --> 3663.140] With VR? Oh, yeah.
+ [3663.140 --> 3672.740] So, is there any evidence of improvement in executive functioning in ADD using VR?
+ [3672.740 --> 3676.020] Possibly, but I don't know all the research. Yeah.
+ [3676.020 --> 3679.940] It's possible there is.
+ [3679.940 --> 3685.820] And yeah, I would think that there are all sorts of ways that it could help.
+ [3685.820 --> 3693.980] We do have a researcher in our department who's looking at using VR and biofeedback for kids with ADHD.
+ [3693.980 --> 3701.700] So I know people are exploring it, but I'm sorry, I'm just not an expert in that field, so I don't know.
+ [3701.700 --> 3703.700] Yeah.
+ [3703.700 --> 3707.300] You had a lot on exposure techniques.
+ [3707.300 --> 3713.900] And do you think all of this is exposure? What's the next step into interactive uses?
+ [3713.900 --> 3724.620] So yeah, the question was that we talked a lot about exposure, but what about the next step, with interaction?
+ [3724.620 --> 3726.620] Can you say more? What do you mean?
+ [3726.620 --> 3738.540] Well, I guess some of the things you were showing were stepping into interactive, like the Freud program or some of the feedback.
+ [3738.540 --> 3744.660] I was thinking more of the interactive, into the actual imagery of the brain or the body.
+ [3744.660 --> 3752.460] Like how neurosurgeons use it, but I know this is a whole different thing from psychiatry.
+ [3752.460 --> 3766.500] But maybe with the body sense, the body image, where people are interactively working with their previous body image and what they want to be; is there anything there that would help them?
+ [3766.500 --> 3767.500] Yeah.
+ [3767.500 --> 3775.700] So is it more exposure research still going on, or are they really starting to move to interactive questions?
+ [3775.700 --> 3780.540] No, I think, yeah, people are doing just wild things.
+ [3780.540 --> 3781.540] And we have vendors.
+ [3781.540 --> 3794.740] So once a month we have a consortium meeting, where vendors come and show us things that are being developed, and researchers too; it's just a collaborative process of exploring what's out there.
+ [3794.740 --> 3810.860] And people are doing all sorts of interactive things with mindfulness and body scans, drawing where your emotions are, and social experiences.
+ [3810.860 --> 3816.340] So yeah, exposure, we all know the ways we can use it.
+ [3816.340 --> 3819.540] And I think this interactive work is the next phase.
+ [3819.540 --> 3840.860] Can you embody, maybe, somebody that's having gender dysphoria? That could be one of the treatments: as they're going through a change, getting used to things like that.
+ [3840.860 --> 3843.140] There are so many options.
+ [3843.140 --> 3849.860] Did you have something in mind that's more on the surgical side?
+ [3849.860 --> 3851.860] I see. Yeah, yeah.
+ [3851.860 --> 3853.500] There's a lot of surgical simulation.
+ [3853.500 --> 3855.780] They're way ahead of us in psychiatry there.
+ [3855.780 --> 3861.380] So there's lots and lots of training going on with surgical simulation.
+ [3861.380 --> 3862.380] Yeah.
+ [3862.380 --> 3866.340] With virtual reality and mindfulness, what do they do it in?
+ [3866.340 --> 3872.660] Is it something you can do at home, buying little glasses, or something you have to do in a psychiatrist's office?
+ [3872.660 --> 3876.940] No; yeah, that's probably one of the most popular kinds of apps.
+ [3876.940 --> 3883.420] The mindfulness apps: there are things like Zen Zone, and I mean, there are just millions.
+ [3883.420 --> 3888.060] That seems to be the most popular intervention.
+ [3888.060 --> 3890.860] But a lot of them haven't been tested either.
+ [3890.860 --> 3895.380] We don't know if it's equivalent to in-person mindfulness training.
+ [3895.380 --> 3901.980] The randomized controlled trials have used it with depression.
+ [3901.980 --> 3904.700] But yeah, I mean, it's being developed.
+ [3904.700 --> 3905.700] Yeah.
+ [3905.700 --> 3908.700] Does that answer your question? Okay.
+ [3908.700 --> 3913.220] I have a son who won't do anything, and I'm just trying to look for ways to get him to do something.
+ [3913.220 --> 3916.420] He probably won't want to do anything anyway, but...
+ [3916.420 --> 3925.700] One of my favorite ones addresses the fact that a lot of people have trouble focusing on breathing.
+ [3925.700 --> 3935.100] It puts you in a peaceful setting in nature, and it shows you how to do your breath, the pattern of your breath.
+ [3935.100 --> 3949.620] You can set it at different patterns, and you see the breath, you just see the smoke going in and out. And I can feel my breath much more when I can see the breath coming in and out, and it guides me, because I never got it from a yoga teacher who tells me to breathe.
+ [3949.620 --> 3952.460] I'm like, what am I doing? I can't feel my breath.
+ [3952.460 --> 3954.860] But there it becomes an interoceptive sensation.
+ [3954.860 --> 3958.340] So there are things like that, which I think are novel, that are going to be developed.
+ [3958.340 --> 3962.500] But again, I'm not an expert. There's probably a lot more to it.
+ [3962.500 --> 3963.500] Good question.
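The pacing logic behind such a breathing guide is tiny: animate a cue on a fixed inhale/exhale rhythm and let the user synchronize with it. Here is a minimal sketch, with a console countdown standing in for the VR smoke animation; the 4-second-in, 6-second-out pattern is an illustrative assumption, not a quoted clinical setting.

```python
import time

def breathing_guide(inhale_s=4, exhale_s=6, cycles=3):
    """Paced-breathing cue: in VR this would animate smoke flowing in
    and out; here a console message stands in for the animation."""
    for cycle in range(1, cycles + 1):
        for phase, seconds in (("breathe in", inhale_s), ("breathe out", exhale_s)):
            print(f"cycle {cycle}: {phase} for {seconds}s")
            time.sleep(seconds)

breathing_guide()
```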
+ [3963.500 --> 3975.140] Regarding virtual reality in everyday life: ten years from now, a lot of our interactions are going to be in virtual reality; basically, you go to
+ [3975.140 --> 3979.660] buy something, and the cashier is virtual reality.
+ [3979.660 --> 3985.060] So how would the effect change based on how much virtual reality is involved in daily life?
+ [3985.060 --> 3986.860] Oh, that's interesting.
+ [3987.820 --> 3996.980] So how would this affect treatment if virtual reality is so ubiquitous and we're kind of desensitized to it?
+ [3996.980 --> 3997.980] Will it change anything?
+ [3997.980 --> 3999.980] I don't know. Maybe we'll be desensitized.
+ [3999.980 --> 4003.100] Maybe we won't learn as well, because, I don't know.
+ [4003.100 --> 4004.100] What do you think?
+ [4004.100 --> 4009.580] It's like something that is like a color TV compared to black and white.
+ [4009.580 --> 4012.420] Because in day-to-day life, people aren't using it.
+ [4012.420 --> 4018.860] When all the games become virtual reality, all the training and tutorials become virtual reality,
+ [4018.860 --> 4022.660] then you go to the treatment, and it's just the same thing.
+ [4022.660 --> 4026.740] Same as going to the bank: it's almost the same thing. What's the difference?
+ [4026.740 --> 4027.740] Yeah, yeah.
+ [4027.740 --> 4030.220] But it'll probably still be specific enough.
+ [4030.220 --> 4033.420] It'll be better than doing, I guess, regular treatment, maybe.
+ [4033.420 --> 4035.580] But you probably need to do both.
+ [4035.580 --> 4039.380] I mean, eventually you have to do the in vivo exposure in reality.
+ [4039.380 --> 4043.500] So it's just a step toward doing the real exposure.
+ [4043.500 --> 4044.500] But I don't know.
+ [4044.500 --> 4046.380] Yeah, that was an interesting question.
+ [4046.380 --> 4047.380] In the back.
+ [4047.380 --> 4053.300] We talked about the pain sensation, those kinds of things.
+ [4053.300 --> 4059.780] I think you mentioned that, in order to decrease the pain sensation, you make people see the movement.
+ [4059.780 --> 4065.980] Is there any specific movement that you show, or is it random movement?
+ [4065.980 --> 4068.300] Oh, yeah. Oh, that's a good question.
+ [4068.300 --> 4074.780] So, what type of movement do you show to decrease pain? Is that what you're asking? Yeah.
+ [4074.780 --> 4078.460] It depends on where the person's feeling the pain.
+ [4078.460 --> 4081.540] So usually you have to swap limbs.
+ [4081.540 --> 4098.220] So if they're having pain in the left upper limb, we'll put them in virtual reality where, when they're moving the right hand, it actually looks like they're moving the left hand.
+ [4098.220 --> 4100.700] So they swap it.
+ [4100.700 --> 4105.180] But it really depends; you customize it to where the pain location is.
+ [4105.180 --> 4106.940] But it has to be on one side.
+ [4106.940 --> 4110.300] If they've got it on both sides, it doesn't seem to work as well.
+ [4110.300 --> 4113.540] Unless it's a different kind of pain on both sides.
+ [4113.540 --> 4115.540] Yeah. Does that make sense? Okay.
+ [4115.540 --> 4118.540] Any other questions? Yeah?
+ [4118.540 --> 4130.020] Is there a source of information to find providers or clinics that use virtual reality?
+ [4130.020 --> 4141.020] And if there are places, how do we know if they have the appropriate training or accreditation?
+ [4141.020 --> 4147.020] That's a good question. Yeah. That's a very good question.
+ [4147.020 --> 4153.140] So the question is, how do we find providers who are delivering virtual reality?
+ [4153.140 --> 4159.780] How do we know if they're trained and well versed in how to deliver this?
+ [4159.780 --> 4167.460] We don't have any great standard of care or clinical practices right now.
+ [4167.460 --> 4171.460] And there are no trainings that really standardize this.
+ [4171.460 --> 4182.260] There's one company, Psious, which is probably the most common platform, and they have 800 users.
+ [4182.260 --> 4186.580] Probably somewhere on their site, or if you contact them, you could see who's doing it.
+ [4186.580 --> 4188.620] But we don't know the quality of the providers.
+ [4188.620 --> 4190.140] We don't know who's trained.
+ [4190.140 --> 4192.460] We don't have training programs.
+ [4192.460 --> 4200.740] That's one of the things we're developing and trying to accomplish: to get some standards of care, just like we have for regular CBT.
+ [4200.740 --> 4209.060] But probably, if you have somebody who's trained in cognitive behavior therapy, they're going to have most of the principles, because this is just a tool they're using.
+ [4209.060 --> 4222.020] So one good site is the Association for Behavioral and Cognitive Therapies, abct.org, and they have a Find a Therapist button.
+ [4222.020 --> 4229.500] So you can find a well-trained cognitive behavior therapist, and they might have a subspecialty in virtual reality.
+ [4229.500 --> 4233.860] But right now we don't have best practices in virtual reality.
+ [4233.860 --> 4236.460] So be careful. Yeah.
+ [4236.460 --> 4248.180] I mean, yes, Psious is probably the most common platform now. Yeah.
+ [4248.180 --> 4254.180] But still, you know, they're the most common one, and I asked them the other day how many people are using it; they're like, 800 in the whole world.
+ [4254.180 --> 4255.980] That's still small.
+ [4255.980 --> 4256.980] So yeah.
614
+ [4256.980 --> 4263.900] Dr. Bullock is the technology at least stable enough to where you can develop standards
615
+ [4263.900 --> 4269.980] care against it or is it forever chasing the moving target?
616
+ [4269.980 --> 4274.540] So the question is, is it stable enough to develop standards of care?
617
+ [4274.540 --> 4275.540] Yeah.
618
+ [4275.540 --> 4276.540] That's a good question.
619
+ [4276.540 --> 4280.020] So, I mean, I think with the protocols, yes.
620
+ [4280.020 --> 4283.220] So we have some protocols that we know work.
621
+ [4283.220 --> 4287.940] But the technology that devices are what's changing so quickly.
622
+ [4287.940 --> 4291.060] And so yeah, that's going to be hard to develop the standards of care.
623
+ [4291.060 --> 4293.220] I think I would more stay.
624
+ [4293.220 --> 4299.780] We have some standardized evidence-based interventions that have been studied in control trials with
625
+ [4299.780 --> 4301.100] these headsets.
626
+ [4301.100 --> 4304.140] But we don't have it for the immersive technology.
627
+ [4304.140 --> 4308.940] And yeah, I think because it's moving so fast and the devices are changing and that's,
628
+ [4308.940 --> 4309.940] is that what you're asking?
629
+ [4309.940 --> 4313.700] That's, it's going to be hard to create standards with those devices.
630
+ [4313.700 --> 4317.460] But I think we have a pretty good evidence base.
631
+ [4317.460 --> 4321.860] It's evidence informed right now, although we don't have a good training program for
632
+ [4321.860 --> 4327.900] physicians or for providers.
633
+ [4327.900 --> 4332.660] As far as the quality of graphics, like even characters are pretty basic right now.
634
+ [4332.660 --> 4336.700] So is it, does that affect how the user responds to that?
635
+ [4336.700 --> 4342.540] For instance, you look in Hollywood characters get more and more complex as far as detail.
636
+ [4342.540 --> 4343.540] So.
637
+ [4343.540 --> 4347.620] Yeah, there's this really interesting phenomenon.
638
+ [4347.620 --> 4354.100] With most of the exposures, the reptilian brain, it's kind of stupid.
639
+ [4354.100 --> 4358.020] It doesn't need a lot of detail to evoke the emotion.
640
+ [4358.020 --> 4363.460] It just needs really basic components like even with a spider.
641
+ [4363.460 --> 4370.300] And the problem is, it takes more and more money to get more and more realistic.
642
+ [4370.300 --> 4375.260] And if you're really close to realistic, but not quite realistic, you're getting to
643
+ [4375.260 --> 4377.460] what's called the uncanny valley.
644
+ [4377.460 --> 4379.100] And then it's just weird.
645
+ [4379.100 --> 4383.860] So usually people will either keep it really simple or you got to go all the way and pour
646
+ [4383.860 --> 4385.540] a bunch of money into it.
647
+ [4385.540 --> 4390.300] And since the landscape's changing so quickly, I think people aren't pouring a lot of money
648
+ [4390.300 --> 4396.300] into the details of it because of that investment cost is my understanding, although I'm not
649
+ [4396.300 --> 4397.300] a developer.
650
+ [4398.300 --> 4402.900] Do you think people would have a better response to more realistic avatars?
651
+ [4402.900 --> 4407.260] To be honest, because, yeah, with exposure I think it doesn't
652
+ [4407.260 --> 4411.420] matter, because you're dealing with the reptilian brain; maybe in learning environments.
653
+ [4411.420 --> 4416.100] Yeah, the more engaging it is, perhaps the novelty.
654
+ [4416.100 --> 4417.100] But again, I don't know.
655
+ [4417.100 --> 4419.860] I don't know if we have the answer to that.
656
+ [4419.860 --> 4420.860] Yeah.
657
+ [4420.860 --> 4424.940] Hi, you mentioned that the headsets are pretty stable.
658
+ [4424.940 --> 4429.940] Any research showing that they might be addictive for individuals, that they have long-term
659
+ [4429.940 --> 4430.940] effects?
660
+ [4430.940 --> 4431.940] We don't know that yet.
661
+ [4431.940 --> 4432.940] Yeah.
662
+ [4432.940 --> 4433.940] I haven't seen that.
663
+ [4433.940 --> 4434.940] Oh, yeah.
664
+ [4434.940 --> 4442.940] So are there any long-term effects with addiction and VR?
665
+ [4442.940 --> 4443.940] Addiction to VR.
666
+ [4443.940 --> 4444.940] Sorry.
667
+ [4444.940 --> 4445.940] Yeah.
668
+ [4445.940 --> 4450.940] Is there any evidence of addiction with VR?
669
+ [4450.940 --> 4451.940] That's a good question.
670
+ [4451.940 --> 4457.260] I haven't looked at the literature, but I think it's similar to the gaming.
671
+ [4457.260 --> 4460.460] And I think there's a number of hours, which is actually quite long.
672
+ [4460.460 --> 4463.940] It was quite disturbing when I read the literature on that.
673
+ [4463.940 --> 4470.340] I'm not remembering how many hours, but actually to be at risk for addiction, there's many,
674
+ [4470.340 --> 4472.340] many hours of gaming.
675
+ [4472.340 --> 4478.700] I think because of the cyber sickness issue, people should not be on for more than 20 minutes.
676
+ [4478.700 --> 4485.020] And I think that they get visual fatigue more easily on there.
677
+ [4485.020 --> 4487.020] So there's probably less risk.
678
+ [4487.020 --> 4488.500] But again, that's a good question.
679
+ [4488.500 --> 4491.500] I think I'm going to go home and look that up, see if there's anything about it.
680
+ [4491.500 --> 4492.500] Do you know anything about it?
681
+ [4492.500 --> 4496.300] Well, I was just curious about the research with obesity, right?
682
+ [4496.300 --> 4501.140] So the individual maybe desires a certain body type, and they finally achieve that.
683
+ [4501.140 --> 4507.820] And we can see a high and a positive effect, short-term, but maybe then that becomes cyclical
684
+ [4507.820 --> 4509.460] and then they depend on that.
685
+ [4509.460 --> 4516.420] And then they go back for VR treatment, especially if you know, 10, 20 years from now, it's everywhere
686
+ [4516.420 --> 4518.420] and it's kind of cheap.
687
+ [4518.420 --> 4524.260] I kind of wonder how that would socially affect things.
688
+ [4524.260 --> 4525.260] Right.
689
+ [4525.260 --> 4526.260] Right.
690
+ [4526.260 --> 4528.260] And I think, yeah, we don't know any of this now.
691
+ [4528.260 --> 4530.700] Yeah, we're all, we're just exploring this.
692
+ [4530.700 --> 4538.540] But I think at the present moment, there's probably less risk because of the visual fatigue.
693
+ [4538.540 --> 4540.540] All right.
694
+ [4540.540 --> 4541.540] Okay.
695
+ [4541.540 --> 4544.540] And I'm here after if anybody wants to ask any questions.
696
+ [4544.540 --> 4545.540] Thank you, Dr. Borscher.
transcript/allocentric_vkqjB6ofThA.txt ADDED
@@ -0,0 +1,21 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 4.960] [inaudible]
2
+ [24.940 --> 28.960] [inaudible]
3
+ [30.000 --> 60.000] [inaudible]
4
+ [60.000 --> 86.220] [inaudible] guys let's get the
5
+ [86.220 --> 93.220] I just wanna leave my bed
6
+ [93.220 --> 98.220] Don't feel like taking up my phone to live with this
7
+ [98.220 --> 103.220] I'm tired of losing you, I'm not too ready
8
+ [103.220 --> 107.220] That's the ghetto
9
+ [107.220 --> 113.220] I can't move, I'm waiting for this feeling
10
+ [113.220 --> 115.220] I just wanna leave my bed
11
+ [115.220 --> 117.220] That's the ghetto
12
+ [117.220 --> 121.220] I can't move, I'm waiting for this feeling
13
+ [121.220 --> 126.220] I'm tired of losing you, I'm not too ready
14
+ [126.220 --> 128.220] That's the ghetto
15
+ [128.220 --> 131.220] I can't move, I'm too ready
16
+ [131.220 --> 134.220] Oh, yes, I said it
17
+ [134.220 --> 136.220] I said it
18
+ [136.220 --> 138.220] No, no
19
+ [138.220 --> 140.220] I said it
20
+ [141.220 --> 144.220] I'm tired of losing you, I'm not too ready
21
+ [147.220 --> 152.220] I'm tired of losing you, I'm not too ready
transcript/allocentric_wOhLMEKLTKE.txt ADDED
@@ -0,0 +1,88 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 6.880] This week we're going to be talking about verbal and nonverbal communication and we have a special guest with us my
2
+ [6.880 --> 12.260] YouTube friend Mary Daphne and she is going to be helping us out. So let's get into it.
3
+ [18.060 --> 24.960] I like to introduce you to my friend Mary Daphne. She has her own YouTube channel. Hey Alex. Thanks for having me on your channel
4
+ [24.960 --> 29.920] I am so excited to be here. Mary Daphne's channel helps millennials boost their
5
+ [29.920 --> 39.120] social skills with a focus on relationships, productivity, and mental wellness, and at any point you can find a link to her channel in the description below this video.
6
+ [39.120 --> 50.760] So please take a look, subscribe and show her some support. The main point of this video is that we observe verbal and nonverbal communication at the same moment.
7
+ [50.760 --> 58.720] We don't usually separate these when we express ourselves and we don't usually separate them when we are listening to somebody else communicate.
8
+ [58.720 --> 71.080] We experience them all at the same moment. So both of these complement each other. A basic definition of verbal communication is the words and other utterances we use to express ourselves.
9
+ [71.080 --> 83.420] Not surprising. Nonverbal communication is all of the rest. It's how you look, like your gestures and your face, and also how you sound, like your tone of voice, pacing, and pauses.
10
+ [83.420 --> 98.300] So clearly verbal communication has specific meaning and some people think that nonverbal communication works the same way. Some people claim that they can literally read your nonverbals as if they know the secret code.
11
+ [98.300 --> 109.180] But is that really the case? That's what we're going to talk about. Let's ask Mary Daphne what she thinks. Do nonverbal cues like gestures have specific meanings or not?
12
+ [109.180 --> 124.060] You know, Alex, that's a very interesting question. One way we can think about nonverbal gestures is through emblems and illustrators. Emblems are the nonverbal cues that have a universal meaning within a specific culture.
13
+ [124.060 --> 137.500] For example, if we're talking about American culture, when we give a thumbs up, we know that means good job or circling our index finger next to our ear means that person is totally crazy.
14
+ [137.500 --> 150.380] A really cool thing to note, however, is that emblems are decided by a particular society. This means that what a gesture denotes in one culture could be very different from what it denotes in another culture.
15
+ [150.380 --> 164.100] I remember when I was teaching a communication class in Istanbul, Turkey, and there were some Middle Eastern exchange students who were offended by the thumbs up gesture because it means something derogatory in their cultural context.
16
+ [164.100 --> 173.940] So we have to be mindful that while emblems can have agreed upon meanings, they're inextricably linked to their cultural context.
17
+ [173.940 --> 190.180] And so the meaning can vary significantly from culture to culture. Now, illustrators are interesting because unlike emblems, we use these automatically and subconsciously to illuminate the words we're speaking.
18
+ [190.180 --> 201.060] It helps us paint a picture that our words are expressing. They're not explicitly taught to us or tied to a culture. They're often unique to the person, timing, and situation.
19
+ [201.060 --> 214.100] Let's say someone's excited to see their friend. In addition to saying, it's so great to see you, they use big sweeping gestures with their arms. These gestures signal joy and excitement.
20
+ [214.180 --> 226.420] On the flip side, if you're in a high stakes meeting and are super nervous, you're probably going to be more reserved with your illustrators, possibly using stiff or jerky movements.
21
+ [226.420 --> 233.700] And if you think about it, we still use these non-verbals, even if no one's there to see them, such as when we're on the phone.
22
+ [233.700 --> 237.780] They're deeply rooted in our communicative behavior.
23
+ [237.860 --> 247.380] You know, it's funny, Alex, when I'm editing my YouTube lessons, I don't even realize that I'm using illustrators because they just flow naturally.
24
+ [247.380 --> 256.340] Yeah, that's a good point. And I'm sure most people would likely agree. They don't usually think about the exact gestures they use.
25
+ [256.340 --> 264.020] And by the way, I will link to all of the sources that we are talking about in the description below this video so you can take a look at those.
26
+ [264.100 --> 272.500] So in addition to emblems and illustrators, could you give us more detail about how verbal and non-verbal communication overlap?
27
+ [272.500 --> 274.340] How do these work together?
28
+ [274.340 --> 281.780] There are six different ways that non-verbal and verbal communication interact in real life.
29
+ [282.580 --> 291.780] So the first one is repeating. You can think of this one as the non-verbal behavior reinforcing the verbal message.
30
+ [292.420 --> 298.660] In other words, you're repeating your message because you are essentially saying the same thing.
31
+ [299.540 --> 305.300] For example, saying nice job while giving a thumbs up achieves this.
32
+ [306.420 --> 311.940] Or you might say, don't interrupt me while putting your hand out as repetition.
33
+ [312.420 --> 314.900] The second one, substituting.
34
+ [315.620 --> 325.220] So with substituting, you're using a gesture or some other form of non-verbal communication in place of a word.
35
+ [326.340 --> 333.540] For instance, you might give a high five instead of saying, wow, that's awesome. Congratulations.
36
+ [334.420 --> 336.420] Number three, turn taking.
37
+ [337.380 --> 343.460] This one is about relying on non-verbal communication to signal turn taking.
38
+ [344.420 --> 351.460] Let's imagine you're in a group conversation and you notice that Jimmy hasn't gotten a chance to say anything yet.
39
+ [352.340 --> 361.860] Noticing that, you might lean in and do an outwardly gesture facing Jimmy to signal that maybe they would like to say something.
40
+ [361.860 --> 365.060] Number four, complementing the verbal message.
41
+ [365.620 --> 373.460] So with complementing, you can think of enhancing your verbal message with non-verbal communication.
42
+ [374.500 --> 380.260] You can use complementing to drive home a point or to clarify a message.
43
+ [380.900 --> 387.780] For example, if you're giving your manager a rundown of your team's epic progress this quarter,
44
+ [388.340 --> 394.980] you might gesture an upward chart signaling growth. Number five, emphasizing.
45
+ [395.700 --> 401.460] If I'm looking to really emphasize my point, I might use a strong gesture.
46
+ [402.100 --> 408.580] But I could also leverage paralinguistics, which includes tone of voice, volume,
47
+ [408.580 --> 416.420] inflection patterns, and pitch. Or I might change my word pacing or add some dramatic pause for that extra oomph.
48
+ [417.380 --> 428.980] These are a few examples of ways to truly accentuate your verbal message, especially in the context of a speech, presentation, meeting, or pitch.
49
+ [429.780 --> 433.300] And the last one, number six, contradicting.
50
+ [434.500 --> 443.060] This one is pretty interesting. This occurs when your non-verbals contradict your spoken words.
51
+ [443.700 --> 450.740] So for instance, let's imagine a friend tells you they are in so much pain, but they're smiling.
52
+ [451.540 --> 458.340] These are contradictory, right? Or imagine that a friend tells you that they had so much fun hanging out, but
53
+ [458.980 --> 463.620] they have a deadpan look on their face. It almost seems sarcastic, right?
54
+ [464.420 --> 471.300] Well, even if they're not being sarcastic, it sticks out like a sore thumb because what they are saying
55
+ [472.020 --> 475.540] does not match how they are saying it.
56
+ [476.180 --> 484.340] That's a really helpful list. And on the last point, contradiction, there is a related term for this when our non-verbal communication
57
+ [484.340 --> 490.740] doesn't match up with our verbal communication. In a poker game, they call this a tell to see if you're bluffing.
58
+ [490.980 --> 499.860] Researchers call this non-verbal leakage. And even if you say how you feel with your words, other feelings can leak out through your
59
+ [499.860 --> 502.820] non-verbals. So can you tell us a little bit more about this?
60
+ [502.820 --> 511.620] I love the terminology for this concept because it is such a visceral image. We can think of non-verbal leakage
61
+ [511.620 --> 518.820] as our gestures blowing our cover, so to speak, when we're attempting to conceal something.
62
+ [519.540 --> 526.660] For example, if we verbally express one thing, but our body language, facial expressions, and
63
+ [526.660 --> 536.100] tone of voice are screaming something else entirely, our non-verbals are leaking into our words and
64
+ [536.100 --> 543.540] altering our message. In other words, there's a disconnect between what we're saying and how we
65
+ [543.540 --> 552.260] are saying it. Research shows that in studies on deception, 98% of people expressed non-verbal leakage
66
+ [552.260 --> 559.220] when they were trying to hide a charged emotion like anger, jealousy, depression, or dishonesty.
67
+ [560.020 --> 564.260] So viewers might be wondering, well, what does non-verbal leakage look like?
68
+ [565.620 --> 573.220] Tone of voice is a huge giveaway, and so are body gestures that misalign with the verbal message.
69
+ [574.020 --> 581.460] But there's also something called micro expressions where an emotion will flash across the person's face.
70
+ [582.100 --> 588.020] But you'll have to pay close attention or you'll miss it because these happen as fast as
71
+ [588.020 --> 595.540] one-fifteenth to one-twenty-fifth of a second. The seven micro expressions, happiness, surprise,
72
+ [595.540 --> 603.700] fear, anger, sadness, disgust, and contempt are universal and exist across most cultures.
73
+ [604.820 --> 609.620] Funny story on a recent skiing adventure. My husband decided it would be fun
74
+ [610.340 --> 617.060] for us to go bootpacking, meaning climbing up the mountain, outside of the ski resort boundaries.
75
+ [618.180 --> 625.780] I was mortified because not only had I not been skiing in over 10 years, I had never been back
76
+ [625.780 --> 632.340] country where I would have to carry my skis up an icy mountain with all of my heavy gear on.
77
+ [633.060 --> 638.180] But you know, I could see that he was really excited about this. So I said, sure, that's awesome.
78
+ [638.980 --> 646.820] But as I said that my hands were trembling, I was sweating profusely and my eyebrows were deeply furrowed.
79
+ [647.460 --> 654.900] Major nonverbal leakage, right? Well fortunately I had my face mask and goggles on so he couldn't see
80
+ [654.900 --> 660.740] any of it. It wasn't until after we made it down the mountain that I fessed up about how nervous I was.
81
+ [661.860 --> 667.620] Much funnier in retrospect. That's a great example about how micro expressions and
82
+ [667.620 --> 673.780] nonverbal leakage work and how our nonverbal and verbal communication can sometimes send a mixed
83
+ [673.780 --> 679.140] message. So thank you for helping us out today, Mary Daphne. It was great to have you on the channel.
84
+ [679.140 --> 683.940] That was a lot of fun. Thank you so much for having me Alex. So I encourage you to follow the link
85
+ [683.940 --> 688.500] in the description below the video to take a look at Mary Daphne's channel. Also,
86
+ [688.500 --> 692.500] she'll be making a comment that I will pin to the top of the comments section. You can just
87
+ [692.500 --> 697.140] click on her name and get to her channel that way and be sure to say hi. So God bless and I will
88
+ [697.140 --> 698.100] see you all soon.
transcript/allocentric_yVT7dO_Tf4E.txt ADDED
@@ -0,0 +1,1467 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 25.720] [inaudible]
2
+ [25.720 --> 40.300] [inaudible]
3
+ [40.300 --> 41.180] [inaudible]
4
+ [70.300 --> 86.060] [inaudible]
5
+ [86.060 --> 90.780] So Jeff wrote a book, which is in the meantime
6
+ [90.780 --> 93.620] a classic, On Intelligence,
7
+ [93.620 --> 97.820] from 2004, describing his memory prediction framework
8
+ [97.820 --> 99.300] theory of the brain.
9
+ [99.300 --> 101.060] And he then started to maintain the belief
10
+ [101.060 --> 104.380] that it's time for computer science to learn from the brain
11
+ [104.380 --> 107.940] and for making computers more similar to the brain.
12
+ [107.940 --> 111.140] Jeff and I agreed then on the belief that the time
13
+ [111.140 --> 115.780] had come for a new attack on the problem of AI.
14
+ [115.780 --> 119.140] And that neuroscience would provide important cues.
15
+ [119.140 --> 122.700] Jeff wrote about the initiative, this was the Intelligence Initiative,
16
+ [122.700 --> 125.940] the precursor of the CBMM.
17
+ [125.940 --> 128.740] The initiative is exciting.
18
+ [128.740 --> 132.580] Over the last 30 years, I have seen many intelligence initiatives
19
+ [132.580 --> 134.660] come and go, but the positioning and thought
20
+ [134.660 --> 138.860] behind I-squared, that was the term, the Intelligence Initiative,
21
+ [138.860 --> 140.140] is the best I've seen.
22
+ [140.140 --> 144.500] And MIT is the ideal location for an initiative like this.
23
+ [144.500 --> 147.940] And since then, companies such as Mobileye
24
+ [147.940 --> 153.180] and especially DeepMind, which were then just tiny startups
25
+ [153.180 --> 156.260] when they participated in the MIT symposium
26
+ [156.260 --> 160.660] Brains, Minds and Machines, which I organized in 2011.
27
+ [160.660 --> 164.700] And those companies have achieved a lot of success in AI
28
+ [164.700 --> 168.780] by using two main algorithms, reinforcement learning
29
+ [168.780 --> 169.940] and deep learning.
30
+ [169.940 --> 173.980] And both of such algorithms were initially
31
+ [173.980 --> 178.540] inspired long ago by cognitive science and neuroscience.
32
+ [178.540 --> 182.460] So because of this, when I asked what
33
+ [182.460 --> 185.700] will be the next breakthrough in AI, of course,
34
+ [185.700 --> 187.940] I answered that I don't know.
35
+ [187.940 --> 191.620] But that it is a reasonable bet that it will also
36
+ [191.620 --> 194.180] come from neuroscience.
37
+ [194.180 --> 197.820] And it may well come from looking in more detail
38
+ [197.820 --> 201.460] at the anatomy and function of the layers
39
+ [201.460 --> 202.980] in each cortical area.
40
+ [202.980 --> 205.340] And this is what Jeff would speak about.
41
+ [205.340 --> 208.140] The title is, have we missed half of what the neocortex
42
+ [208.140 --> 210.740] does, allocentric location
43
+ [210.740 --> 212.540] as the basis for perception?
44
+ [212.540 --> 215.180] Please join me in welcoming Jeff Hawkins.
45
+ [215.180 --> 216.180] Thank you.
46
+ [216.180 --> 217.180] Thank you.
47
+ [217.180 --> 218.180] Thank you.
48
+ [218.180 --> 219.180] Thank you.
49
+ [219.180 --> 222.460] Thank you.
50
+ [222.460 --> 223.220] Thank you, Tommy.
51
+ [223.220 --> 225.380] That was very generous.
52
+ [225.380 --> 228.660] And it's nice to be back here.
53
+ [228.660 --> 234.300] I do view MIT as really setting the agenda in the field
54
+ [234.300 --> 236.820] that I like to participate in.
55
+ [236.820 --> 238.980] And I've almost completely forgotten about
56
+ [238.980 --> 241.940] the fact that my application for the graduate program
57
+ [241.940 --> 245.580] here was rejected many years ago.
58
+ [245.580 --> 246.100] That's good.
59
+ [246.100 --> 249.340] So I don't hold anything against you guys.
60
+ [249.340 --> 251.860] Anyway, so yes, that's the title of my talk.
61
+ [251.860 --> 254.460] And I won't explain it other than I'll just jump right into it
62
+ [254.460 --> 255.740] here.
63
+ [255.740 --> 258.780] Just a few words about my company,
64
+ [258.780 --> 261.140] because it's a bit unusual.
65
+ [261.140 --> 263.940] Numenta is a small business in Northern California.
66
+ [263.940 --> 267.260] We're really like a private research lab.
67
+ [267.260 --> 268.460] There's 12 people.
68
+ [268.460 --> 271.300] We're almost completely dedicated to neocortical
69
+ [271.300 --> 275.460] theory, scientists and engineers.
70
+ [275.460 --> 277.620] We have a rather ambitious goal, which
71
+ [277.620 --> 279.860] is to reverse engineer the neocortex.
72
+ [279.860 --> 281.860] I'm not embarrassed to say that.
73
+ [281.860 --> 282.940] It's an ambitious goal.
74
+ [282.940 --> 283.740] It's achievable.
75
+ [283.740 --> 285.140] We should all be working on it.
76
+ [285.140 --> 286.940] One way or the other.
77
+ [286.940 --> 291.820] And our approach is a very detailed biological approach.
78
+ [291.820 --> 295.420] We want to understand how the neurons and the circuitry,
79
+ [295.420 --> 300.180] as we see it in the mammalian neocortex, what it does
80
+ [300.180 --> 301.740] and what its function is.
81
+ [301.740 --> 302.660] We're not just after ideas
82
+ [302.660 --> 304.100] inspired by the brain.
83
+ [304.100 --> 307.060] That can come after you understand how the brain works.
84
+ [307.060 --> 309.500] So we really stick to the biology.
85
+ [309.500 --> 312.220] We test this empirically with collaborations
86
+ [312.220 --> 315.420] with experimental labs and via simulation.
87
+ [315.420 --> 316.860] And that's what I'm talking about today.
88
+ [316.860 --> 320.020] We have a second goal, which relates to what Tommy just
89
+ [320.020 --> 321.660] mentioned here.
90
+ [321.660 --> 323.500] And it's definitely second in our case,
91
+ [323.500 --> 326.700] which is to enable technology based on cortical theory.
92
+ [326.700 --> 330.420] So I'm still a believer that the way we're ultimately
93
+ [330.420 --> 332.020] going to get to truly intelligent machines
94
+ [332.020 --> 333.900] is we're going to the fastest path
95
+ [333.900 --> 335.900] there is to understand how the brain works.
96
+ [335.900 --> 339.900] And we have a very active open source community.
97
+ [339.900 --> 342.020] All of our stuff is very open.
98
+ [342.020 --> 342.940] All of our source code.
99
+ [342.940 --> 344.820] You can reproduce all of our experiments.
100
+ [344.820 --> 348.020] And we believe this ultimately, this endeavor,
101
+ [348.020 --> 350.660] whether it's us or other people, will be the basis for machine
102
+ [350.660 --> 354.260] intelligence as we will see it in the future.
103
+ [354.260 --> 355.540] OK.
104
+ [355.540 --> 356.700] I just want to remind you.
105
+ [356.700 --> 358.420] I know everyone here is a neuroscientist.
106
+ [358.420 --> 359.220] And you all know this.
107
+ [359.220 --> 362.820] But I just find it's a good idea just to review a few basics
108
+ [362.820 --> 364.940] before I delve into this.
109
+ [364.940 --> 366.300] Mammals have a neocortex.
110
+ [366.300 --> 367.500] Non-mammals don't.
111
+ [367.500 --> 370.620] In the human, it's about 70% of the volume of your brain.
112
+ [370.620 --> 371.500] This is my model.
113
+ [371.500 --> 373.180] I carry it with me all the time.
114
+ [373.180 --> 374.740] It's about this big an area.
115
+ [374.740 --> 376.900] And it's about 2 and a half millimeters thick.
116
+ [376.900 --> 379.500] And what's most remarkable about the neocortex
117
+ [379.500 --> 381.740] is the consistency of the microarchitecture
118
+ [381.740 --> 383.140] you see everywhere you look.
119
+ [383.140 --> 386.460] It's not 100% consistent, but it's remarkably consistent.
120
+ [386.460 --> 388.580] And so instead of focusing on the small differences,
121
+ [388.580 --> 391.380] we really are focusing on the common elements we see everywhere.
122
+ [391.380 --> 393.100] And so all the different regions of the cortex
123
+ [393.100 --> 394.260] do different things.
124
+ [394.260 --> 397.340] It appears, and this was first proposed by Vernon Mountcastle,
125
+ [397.340 --> 400.380] many years ago, that cortex is cortex.
126
+ [400.380 --> 402.660] And the way we see it, and the way we hear it,
127
+ [402.660 --> 404.660] and the way we feel, and the way we do language,
128
+ [404.660 --> 408.980] somehow, is all based on the same underlying fundamental
129
+ [408.980 --> 413.220] architecture, which is just a remarkable thing to think about.
130
+ [413.220 --> 415.740] But it appears to be true.
131
+ [415.740 --> 419.660] So, and Vernon Mountcastle also basically proposed this.
132
+ [419.660 --> 422.140] He says, well, the way to think about the neocortex
133
+ [422.140 --> 424.060] is just think about one little section of it
134
+ [424.060 --> 426.260] that goes through that 2 and a half millimeters.
135
+ [426.260 --> 427.620] He called it a column.
136
+ [427.620 --> 429.420] And he says, basically in that column,
137
+ [429.420 --> 430.980] you're going to have that central function.
138
+ [430.980 --> 432.460] So the goal is to really understand
139
+ [432.460 --> 436.140] what a column, a single, like perhaps a millimeter square by 2
140
+ [436.140 --> 437.900] and a half millimeters, does.
141
+ [437.900 --> 439.940] And if you can figure that out, you've got most of it
142
+ [439.940 --> 441.060] figured out.
143
+ [441.060 --> 444.220] So that's what we're going to talk about today, a cortical column.
144
+ [444.220 --> 446.780] Now if you open up a basic textbook,
145
+ [446.780 --> 448.620] Introduction to Neuroscience type of thing,
146
+ [448.620 --> 450.300] you'll see a picture like this.
147
+ [450.300 --> 452.820] And they'll say, oh, there's a bunch of layers in the cortex
148
+ [452.820 --> 455.420] inputs arrive into layer four, layer four projects
149
+ [455.420 --> 458.100] to layer two, three, layer two, three is the output,
150
+ [458.100 --> 459.940] goes to the next region, and then
151
+ [459.940 --> 461.540] layer two, three projects to layer five,
152
+ [461.540 --> 462.700] and that projects to layer six.
153
+ [462.700 --> 465.260] That's how information flows through the cortical column.
154
+ [465.260 --> 469.060] It's actually not bad, but it's leaving out quite a bit.
155
+ [469.060 --> 472.260] By my count, right now, we deal with roughly
156
+ [472.260 --> 474.900] about 12 different cellular layers.
157
+ [474.900 --> 476.980] Layer three is easily divided into two.
158
+ [476.980 --> 478.940] Layer five has three different cell types.
159
+ [478.940 --> 480.340] These may not be visible layers.
160
+ [480.340 --> 482.420] It doesn't mean the cells are actually stratified,
161
+ [482.420 --> 484.660] but the cells have different anatomy or morphology,
162
+ [484.660 --> 487.540] or physiology that can be uniquely identified.
163
+ [487.540 --> 489.340] Layer six is a very complicated layer.
164
+ [489.340 --> 492.100] It has these two, layer six A and six B,
165
+ [492.100 --> 493.540] which are sort of these very interesting layers,
166
+ [493.540 --> 496.140] and it's got a bunch of other cells down below there.
167
+ [496.140 --> 499.580] If you just follow, for example, the same as we did on the left
168
+ [499.580 --> 500.820] there, the feed forward circuit,
169
+ [500.820 --> 502.540] it gets complicated too.
170
+ [502.540 --> 505.380] So there are actually two inputs to every cortical column,
171
+ [505.380 --> 506.820] especially in non-primary ones.
172
+ [506.820 --> 508.140] Sometimes you have connections
173
+ [508.140 --> 510.060] directly from other cortical regions,
174
+ [510.060 --> 512.340] and sometimes they go through the thalamus into there.
175
+ [512.420 --> 514.500] So there's two sort of feed forward inputs.
176
+ [514.500 --> 517.100] They do arrive at layer four among other places,
177
+ [517.100 --> 520.780] but they only form about 10% of the synapses on layer four cells.
178
+ [520.780 --> 523.180] About 50% of the synapses on layer four cells
179
+ [523.180 --> 525.700] come, shown in this blue arrow, through this very kind of
180
+ [525.700 --> 528.900] unusual bidirectional connection with layer six A.
181
+ [528.900 --> 530.580] So if you can understand what layer four is doing,
182
+ [530.580 --> 532.660] you can't ignore what layer six A is doing,
183
+ [532.660 --> 535.140] because it's providing about half the input there.
184
+ [535.140 --> 537.100] Indeed, layer four projects to layer three.
185
+ [537.100 --> 538.980] That's the output layer, which goes direct
186
+ [538.980 --> 540.180] out of the cortical regions,
187
+ [540.180 --> 542.580] but layer three also projects down to layer five.
188
+ [542.580 --> 545.340] And here you see a very similar type of circuit
189
+ [545.340 --> 548.900] between layer six B and layer one of the layer fives.
190
+ [548.900 --> 551.420] You have a similar sort of parallel structure going on there,
191
+ [551.420 --> 553.860] where there's this very characteristic
192
+ [553.860 --> 556.860] bidirectional connection, then that projects to upper layer five,
193
+ [556.860 --> 558.700] at least in some species, that's upper layer five,
194
+ [558.700 --> 560.780] but it's the layer five thick-tufted cells.
195
+ [560.780 --> 563.220] And that becomes a second output of the cortical column,
196
+ [563.220 --> 565.460] and that is the one that goes through the thalamus.
197
+ [565.460 --> 568.540] So it's like these two sort of inputs and two outputs,
198
+ [568.540 --> 571.100] and there's this complicated circuit going on between.
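To keep the wiring just described straight, the two-input, two-output feed-forward skeleton can be written down as a simple adjacency map. This is a minimal sketch in Python of only the connections named above; the key names are illustrative labels rather than standard nomenclature, and feedback and inhibitory circuitry are deliberately left out.

# Feed-forward skeleton of one cortical column, as described in the talk.
column_feedforward = {
    "input_direct":    ["L4", "L6a"],           # direct cortico-cortical input
    "input_thalamic":  ["L4", "L6a"],           # input relayed through the thalamus
    "L6a":             ["L4"],                  # bidirectional pairing; ~50% of L4 synapses
    "L4":              ["L6a", "L3"],           # the inputs are only ~10% of L4 synapses
    "L3":              ["next_region", "L5"],   # first output of the column
    "L6b":             ["L5_upper"],            # second, parallel bidirectional pairing
    "L5_upper":        ["L6b", "L5_thick_tufted"],
    "L5_thick_tufted": ["thalamus"],            # second output, routed through the thalamus
}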
199
+ [571.100 --> 573.300] Now, there's a lot known about the cortical anatomy,
200
+ [573.300 --> 574.380] I'm not going to go through it,
201
+ [574.380 --> 576.580] but we can summarize a few things here.
202
+ [576.580 --> 578.780] We can say cortical columns are complex.
203
+ [578.780 --> 580.260] They're very complex.
204
+ [580.260 --> 582.740] At least 12 or more excitatory cellular layers.
205
+ [582.740 --> 584.620] There's two feed-forward pathways.
206
+ [584.620 --> 587.460] There's at least two feedback pathways.
207
+ [587.460 --> 588.700] I didn't show them here.
208
+ [588.700 --> 591.460] And there's numerous connections up and down
209
+ [591.460 --> 593.380] the column and in between columns.
210
+ [593.380 --> 596.220] And then of course, there's an entire inhibitory circuit,
211
+ [596.220 --> 598.220] which has at least as many cell types
212
+ [598.220 --> 599.420] and is equally complex.
213
+ [599.420 --> 603.180] So this is a very complex system here.
214
+ [603.180 --> 606.740] Now, the function of this thing is also going to be complex.
215
+ [606.740 --> 607.900] It's not going to be simple.
216
+ [607.900 --> 609.980] So anybody says, oh, it's a filter.
217
+ [609.980 --> 611.380] It's changing this or changing that.
218
+ [611.380 --> 613.140] That doesn't seem to be the case.
219
+ [613.140 --> 615.460] We should expect this thing to do a lot.
220
+ [615.460 --> 617.100] And in some sense, we're looking at,
221
+ [617.100 --> 618.900] and this is the thing that makes us think.
222
+ [618.900 --> 621.140] This is the source of everything.
223
+ [621.140 --> 623.980] In fact, whatever a column does,
224
+ [623.980 --> 625.980] has to apply to everything the cortex does,
225
+ [625.980 --> 628.180] because this is the circuitry of the cortex.
226
+ [628.180 --> 630.740] So when I think about, oh, how is this going to touch?
227
+ [630.740 --> 632.220] Or how am I going to see with this?
228
+ [632.220 --> 634.100] But it's also going to explain how we do language,
229
+ [634.100 --> 636.420] and it also has to say something about how we do neuroscience
230
+ [636.420 --> 638.380] and how we build buildings and so on.
231
+ [638.380 --> 642.380] So it's something really remarkable.
232
+ [642.380 --> 643.980] Now, I have two thoughts about this
233
+ [643.980 --> 646.940] before I get into the details of my talk.
234
+ [646.940 --> 648.740] One is, I just want to remind myself,
235
+ [648.740 --> 651.020] this is one of the most important scientific problems
236
+ [651.020 --> 652.700] of all time.
237
+ [652.700 --> 653.700] It's worth stating that.
238
+ [653.700 --> 655.740] It's worth remembering that.
239
+ [655.740 --> 658.700] It's up there with the discovery of genetics.
240
+ [658.700 --> 662.060] It's really kind of the core of who we are as humanity.
241
+ [662.060 --> 665.180] And it's the only structure that knows things.
242
+ [665.180 --> 668.940] This is the only structure that discovers things.
243
+ [668.940 --> 672.340] And of course, it defines us as a species.
244
+ [672.340 --> 675.260] So it's a really, very important thing to work upon.
245
+ [675.260 --> 680.220] Now, I've been working on this problem for a long time,
246
+ [680.220 --> 681.820] as have many of you.
247
+ [681.820 --> 683.820] And what we've been doing is we've
248
+ [683.820 --> 685.620] been sort of teasing apart pieces of it
249
+ [685.620 --> 686.900] and trying to understand a piece.
250
+ [686.900 --> 688.100] And we started another piece.
251
+ [688.100 --> 690.300] And we tried to fit those two pieces together.
252
+ [690.300 --> 691.460] And then so on.
253
+ [691.460 --> 694.700] And lately, we've had some success in getting those pieces.
254
+ [694.700 --> 698.060] We started putting it together in all of the interesting ways.
255
+ [698.060 --> 702.260] And actually, in the last month, less than a month,
256
+ [702.260 --> 704.700] we discovered another piece.
257
+ [704.700 --> 707.260] Even after I set up this talk,
258
+ [707.260 --> 709.300] and also a whole bunch of stuff
259
+ [709.300 --> 712.500] fell and fit together really, really well.
260
+ [712.500 --> 714.620] And so I'm going to tell you about that.
261
+ [714.620 --> 718.300] It goes beyond the abstract I submitted for today's talk.
262
+ [718.300 --> 720.140] The end of my talk, I'm going to give you
263
+ [720.140 --> 723.780] explicit proposals about what many of these layers are doing.
264
+ [723.780 --> 725.820] I'm going to be filling in a diagram here,
265
+ [725.820 --> 727.700] explaining what's going on here, at least I have
266
+ [727.700 --> 728.780] hypothesis for that.
267
+ [728.780 --> 730.380] It won't be everything.
268
+ [730.380 --> 732.540] But it's going to be an interesting foundation.
269
+ [732.540 --> 734.180] And I'm going to make the case for that.
270
+ [734.180 --> 736.020] Now, to do that, in the time I have allotted,
271
+ [736.020 --> 739.060] I have to move quickly through a whole series of concepts.
272
+ [739.060 --> 740.860] And typically, when you give a scientific talk,
273
+ [740.860 --> 743.500] you give one concept and you explain how you did it,
274
+ [743.500 --> 746.100] and what didn't work, and your experience, blah, blah, blah.
275
+ [746.100 --> 747.940] I don't have time for that.
276
+ [747.940 --> 750.220] I want you to understand that everything I present you here
277
+ [750.220 --> 751.260] is not just made up.
278
+ [751.260 --> 755.100] It was a lot of work, a lot of testing, a lot of,
279
+ [755.100 --> 756.500] it took a long time.
280
+ [756.500 --> 758.420] And I have a lot of confidence in it,
281
+ [758.420 --> 760.780] but I can't present the data to explain
282
+ [760.780 --> 762.620] that why I have that confidence.
283
+ [762.620 --> 764.900] So I just want you to at least give me the benefit of the doubt
284
+ [764.900 --> 766.580] that later when you ask me questions,
285
+ [766.580 --> 769.860] I can go into this stuff in great detail.
286
+ [769.860 --> 771.540] But I'm trying to tell a story here today,
287
+ [771.540 --> 773.780] and I want to get to that end picture.
288
+ [773.780 --> 774.980] Now, the way I'm going to tell a story
289
+ [774.980 --> 776.460] is the way we discovered it.
290
+ [776.460 --> 777.980] It's not the way we went about our work.
291
+ [777.980 --> 780.980] It may not be the best way, but it's the way I know.
292
+ [780.980 --> 782.740] So I'm going to start at the beginning.
293
+ [782.740 --> 784.380] The beginning, all of our work was based
294
+ [784.380 --> 787.060] on a single observation.
295
+ [787.060 --> 788.980] The observation is the cortex is constantly
296
+ [788.980 --> 790.740] making predictions of its inputs.
297
+ [790.740 --> 794.540] Every time I feel something, I have an expectation
298
+ [794.540 --> 795.660] what I'm going to feel.
299
+ [795.660 --> 798.060] And that expectation is a very detailed prediction.
300
+ [798.060 --> 800.100] As I move my hand along this lectern,
301
+ [800.100 --> 803.380] if there were even the slightest little dip here, I would notice it.
302
+ [803.380 --> 804.740] It would be like, that grabs my attention.
303
+ [804.740 --> 806.380] Or if it felt a little funny, if it felt like
304
+ [806.380 --> 808.060] jello or cold or something.
305
+ [808.060 --> 808.820] So I have this.
306
+ [808.820 --> 810.820] That tells me if I notice changes,
307
+ [810.820 --> 813.180] I must have had an expectation what it was going to be.
308
+ [813.180 --> 815.460] And the same thing as I move my eyes.
309
+ [815.460 --> 818.220] I'm constantly predicting what I'm going to see.
310
+ [818.220 --> 819.220] And the same with audition.
311
+ [819.220 --> 821.980] You're constantly trying to predict what I'm going to say
312
+ [821.980 --> 823.660] or what you're going to hear.
313
+ [823.660 --> 825.940] So we ask ourselves a question.
314
+ [825.940 --> 828.420] Which is, OK, our research paradigm has been
315
+ [828.420 --> 831.420] how do networks of neurons, as seen in the neocortex,
316
+ [831.420 --> 833.460] learn predictive models of the world?
317
+ [833.460 --> 836.900] It's not that the cortex is only doing predictions,
318
+ [836.900 --> 839.380] but it seems to be a fundamental component
319
+ [839.380 --> 840.380] of what the cortex does.
320
+ [840.380 --> 841.740] And if we tease apart prediction, we
321
+ [841.740 --> 844.540] might understand what some of the functional components
322
+ [844.540 --> 845.860] underlying that are.
323
+ [845.860 --> 847.180] So that's what we went about.
324
+ [847.180 --> 849.340] Now, this question, this research question,
325
+ [849.340 --> 851.780] can be broken into two parts.
326
+ [851.780 --> 854.620] If you think about the patterns that are coming into the brain,
327
+ [854.620 --> 857.580] you've got these sensory streams, millions of sensory bits
328
+ [857.580 --> 859.940] coming into the brain changing all the time.
329
+ [859.940 --> 861.300] Why are they changing?
330
+ [861.300 --> 862.700] Two fundamental reasons.
331
+ [862.700 --> 864.660] Either the world itself is changing,
332
+ [864.660 --> 866.580] and I'll call that extrinsic sequences,
333
+ [866.580 --> 868.300] like you're listening to a melody.
334
+ [868.300 --> 869.700] And you're learning the sequence.
335
+ [869.700 --> 872.740] And it's a pattern in time that matters.
336
+ [872.740 --> 873.820] That's one form.
337
+ [873.820 --> 876.500] The second form is when you move yourself.
338
+ [876.500 --> 878.020] So and you're doing this constantly.
339
+ [878.020 --> 880.060] Every time you move your eyes, several times a second,
340
+ [880.060 --> 882.700] every time you touch something, every time you do walk
341
+ [882.700 --> 884.460] and go around the room, there's a flood
342
+ [884.460 --> 885.460] of changes coming in.
343
+ [885.460 --> 888.060] And it's been known for a very long time, back to Helmholtz,
344
+ [888.060 --> 891.300] that you can't really understand the world in those
345
+ [891.300 --> 893.460] sensory inputs if you're not accounting for the behaviors
346
+ [893.460 --> 894.460] that go with them.
347
+ [894.460 --> 896.660] So it's the sensory motor sequences that
348
+ [896.660 --> 897.980] are leading to those.
349
+ [897.980 --> 899.260] And so that's a harder problem.
350
+ [899.260 --> 900.900] So we started with the first one,
351
+ [900.900 --> 902.540] and then we tackled the second one.
352
+ [902.540 --> 904.460] So on the first one, we had a paper that
353
+ [904.460 --> 908.540] came out in March of 2016 called Why Neurons Have Thousands
354
+ [908.540 --> 911.140] of Synapses, a Theory of Sequence Memory
355
+ [911.140 --> 912.220] in the Neocortex.
356
+ [912.220 --> 914.740] And in the end, the big idea is we suggested
357
+ [914.740 --> 918.100] that every pyramidal cell is actually a prediction machine.
358
+ [918.100 --> 919.820] And that the vast majority of the synapses
359
+ [919.820 --> 922.340] on the pyramidal cell are actually used for prediction.
360
+ [922.340 --> 923.580] I'm going to walk through that.
361
+ [923.580 --> 926.500] Then we showed if you took a cellular layer,
362
+ [926.500 --> 929.740] like you might say, one of the layers in one cortical column,
363
+ [929.740 --> 931.420] that a network of those neurons would
364
+ [931.420 --> 933.820] learn a type of sequence memory, a very powerful sequence
365
+ [933.820 --> 935.660] memory, a predictive memory.
366
+ [935.660 --> 938.540] And in order, we also had the need to do some proper
367
+ [938.540 --> 941.740] sparse activations to understand that.
368
+ [941.740 --> 943.660] So that's in that paper.
369
+ [943.660 --> 947.340] Then we just had a paper come out in October of this year
370
+ [947.340 --> 949.660] called A Theory of Columns in the Neocortex,
371
+ [949.660 --> 951.660] a theory of how columns in the neocortex
372
+ [951.660 --> 953.540] learn the structure of the world.
373
+ [953.540 --> 957.940] And in that paper, the big idea is we deduced that every column.
374
+ [957.940 --> 959.060] Every, you can think of it.
375
+ [959.060 --> 961.140] We were talking mostly about primary and secondary sensory
376
+ [961.140 --> 964.460] columns, but ultimately, I think it would be every column.
377
+ [964.460 --> 968.140] We deduce that it must have a sense of an allocentric location.
378
+ [968.140 --> 970.300] And I use allocentric in a very broad sense.
379
+ [970.300 --> 971.540] It just means other.
380
+ [971.540 --> 973.940] I'm not using it in the term specifically
381
+ [973.940 --> 976.620] as people who study, like, grid cells do and so on like that.
382
+ [976.620 --> 977.980] But really, you can think of, when I say
383
+ [977.980 --> 980.180] allocentric, and this has tripped some people up today,
384
+ [980.180 --> 981.700] you can think of it as object-centric.
385
+ [981.700 --> 983.980] So when I touch this little clicker here,
386
+ [983.980 --> 987.580] when my finger feels something, I'm arguing that the column
387
+ [987.580 --> 989.060] that's receiving the input from my finger
388
+ [989.060 --> 991.500] is also figuring out where it is on this object.
389
+ [991.500 --> 992.260] And we'll get into that.
390
+ [992.260 --> 993.980] So that was the big idea there.
391
+ [993.980 --> 997.220] And then as sensors move over objects
392
+ [997.220 --> 1000.260] and through the world, you learn models of complete objects.
393
+ [1000.260 --> 1002.060] And I'll walk you through that.
394
+ [1002.060 --> 1005.060] And then the third part here is our current research.
395
+ [1005.060 --> 1008.300] And this has not been published.
396
+ [1008.300 --> 1010.380] It's very new.
397
+ [1010.380 --> 1012.580] We asked the question, well, how could columns
398
+ [1012.580 --> 1015.740] compute this allocentric or object-centric location?
399
+ [1015.740 --> 1018.820] We had the idea that, well, let's look at grid cells
400
+ [1018.820 --> 1022.660] and place cells because they solve a similar problem.
401
+ [1022.660 --> 1025.500] And after we study this for a while,
402
+ [1025.500 --> 1027.940] we come to believe that cortical columns
403
+ [1027.940 --> 1030.900] contain analogs of grid cells and head direction cells.
404
+ [1030.900 --> 1034.540] That they're solving the same basic problem
405
+ [1034.540 --> 1037.980] that the entorhinal cortex is using to map environments.
406
+ [1037.980 --> 1039.100] It's been served.
407
+ [1039.100 --> 1042.660] And it's now being used to map physical structures, objects.
408
+ [1042.660 --> 1045.180] And it's a very parallel process.
409
+ [1045.180 --> 1046.860] And when we've understood that, now we're
410
+ [1046.860 --> 1048.460] starting to understand the function
411
+ [1048.460 --> 1050.460] of numerous layers and connections.
412
+ [1050.460 --> 1051.980] So I'm going to go through this in order.
413
+ [1051.980 --> 1054.500] I'm going to very quickly go through these points
414
+ [1054.500 --> 1057.900] and end up down here with the specific functions of layers
415
+ [1058.860 --> 1059.020] and connections.
416
+ [1059.020 --> 1060.380] So I'm going to go pretty quickly.
417
+ [1060.380 --> 1062.940] So let's start with one slide on the pyramidal neuron
418
+ [1062.940 --> 1065.020] as a prediction system.
419
+ [1065.020 --> 1066.740] If this is your typical pyramidal neuron,
420
+ [1066.740 --> 1068.340] it has thousands of synapses anywhere
421
+ [1068.340 --> 1072.860] from 5,000 to 30,000 synapses, only 10% or less than 10%
422
+ [1072.860 --> 1074.220] typically are proximal.
423
+ [1074.220 --> 1076.980] And actually drive that cell to fire.
424
+ [1076.980 --> 1079.700] 90% of them are on either the distal basal dendrites
425
+ [1079.700 --> 1081.020] or the apical dendrites.
426
+ [1081.020 --> 1082.380] And typically, they're completely
427
+ [1082.380 --> 1085.860] unable to make the cell fire, but a lot of great research
428
+ [1085.940 --> 1087.900] has been done to show that dendrites
429
+ [1087.900 --> 1089.540] are active processing elements.
430
+ [1089.540 --> 1093.580] So if you have somewhere around 15 active synapses,
431
+ [1093.580 --> 1096.700] that become active relatively close in time and space.
432
+ [1096.700 --> 1098.700] So they have to be within like 40 microns
433
+ [1098.700 --> 1100.220] on a dendrite segment.
434
+ [1100.220 --> 1102.820] That can generate a dendritic spike.
435
+ [1102.820 --> 1104.940] The dendritic spike can go to the soma.
436
+ [1104.940 --> 1107.060] Generally, it does not cause the cell to fire.
437
+ [1107.060 --> 1108.700] It depolarizes the cell.
438
+ [1108.700 --> 1111.100] So it raises its voltage, but not enough
439
+ [1111.100 --> 1112.140] to generate a spike.
440
+ [1112.140 --> 1114.340] That can be a sustained depolarization.
441
+ [1114.340 --> 1117.260] Hundreds of milliseconds up to a couple of seconds.
442
+ [1117.260 --> 1120.900] We are going to argue that that is a predictive signal.
443
+ [1120.900 --> 1123.260] So the proximal synapses, this is our theory.
444
+ [1123.260 --> 1126.020] The proximal synapses cause somatic spikes.
445
+ [1126.020 --> 1128.420] They define the classic receptive field of the neuron.
446
+ [1128.420 --> 1131.180] But the distal synapses cause dendritic spikes.
447
+ [1131.180 --> 1133.380] And they put the cell into a depolarized state
448
+ [1133.380 --> 1135.180] or predictive state.
449
+ [1135.180 --> 1138.620] What's the benefit of a cell being depolarized?
450
+ [1138.620 --> 1142.700] Our models and network models rely on that fact.
451
+ [1142.700 --> 1144.380] What happens is that a depolarized neuron
452
+ [1144.380 --> 1147.860] will fire a little bit sooner than another neuron.
453
+ [1147.860 --> 1149.500] If they both have the same receptive field.
454
+ [1149.500 --> 1151.940] They both have the same basic feed-forward receptive field.
455
+ [1151.940 --> 1153.420] The one that's going to be depolarized
456
+ [1153.420 --> 1155.780] will generate its first spike a little bit quicker.
457
+ [1155.780 --> 1157.420] And it's going to inhibit its neighbors
458
+ [1157.420 --> 1160.100] in a very fast inhibitory circuit.
459
+ [1162.540 --> 1164.180] So, and it turns out,
460
+ [1164.180 --> 1166.980] a typical pyramidal neuron
461
+ [1166.980 --> 1171.580] can recognize hundreds of unique patterns.
462
+ [1171.580 --> 1172.980] Hundreds of unique contexts in which its input is predicted.
463
+ [1171.580 --> 1172.980] This is how we model it.
464
+ [1172.980 --> 1175.700] When we, all of our simulations, we use this,
465
+ [1178.740 --> 1180.260] this is a picture of our software model for this thing.
466
+ [1180.260 --> 1181.980] It's basically in green there.
467
+ [1180.260 --> 1181.980] That's the proximal synapses.
468
+ [1181.980 --> 1184.380] And then we have the basal synapses
469
+ [1185.820 --> 1187.980] which I've labeled here, context.
470
+ [1187.980 --> 1190.060] It's an array of coincidence detectors.
471
+ [1190.060 --> 1191.820] And then the apical dendrites are similar.
472
+ [1190.060 --> 1191.820] These are like threshold detectors.
473
+ [1191.820 --> 1194.140] So this is our model of the neuron.
474
+ [1194.140 --> 1195.380] It has multiple states.
475
+ [1195.380 --> 1196.380] I won't get into it.
476
+ [1196.380 --> 1198.700] I also should point out the learning model here.
477
+ [1198.700 --> 1200.580] We rely on synaptogenesis.
478
+ [1200.580 --> 1203.100] So we're not changing weights of synapses.
479
+ [1203.100 --> 1205.820] We're actually growing new synapses in our model
480
+ [1205.820 --> 1207.940] in a very clever way that matches biology.
481
+ [1207.940 --> 1209.460] But I'm not going to get into it.
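As a concrete illustration of the neuron model described above, here is a minimal Python sketch of a single distal dendritic segment acting as a coincidence detector. The parameters (a segment of 20 synapses, a dendritic-spike threshold of 15, matching the number mentioned earlier) are illustrative assumptions; this is a sketch of the idea, not Numenta's actual implementation.

import numpy as np

N_CELLS = 5000      # cells in one layer
SEGMENT_SIZE = 20   # synapses on one distal dendritic segment
THRESHOLD = 15      # coincident active synapses needed for a dendritic spike

rng = np.random.default_rng(0)

# A distal segment is modeled as a small set of synapses, each onto one
# other cell in the layer.
segment = rng.choice(N_CELLS, size=SEGMENT_SIZE, replace=False)

def is_predictive(segment, active_cells):
    # The cell enters a depolarized, "predictive" state when enough of the
    # segment's synapses see currently active presynaptic cells; the
    # dendritic spike depolarizes the soma without making the cell fire.
    return int(np.isin(segment, active_cells).sum()) >= THRESHOLD

In the network models, a cell depolarized this way fires slightly sooner than its neighbors when the proximal feed-forward input arrives, and fast inhibition then silences the rest; that timing advantage is how the prediction expresses itself.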
482
+ [1209.460 --> 1212.580] Now, what are the properties of the sparse activations?
483
+ [1212.580 --> 1214.700] We have to cover this because you won't understand anything
484
+ [1214.700 --> 1215.820] else until I cover this.
485
+ [1215.820 --> 1217.900] And maybe you know this already, but I don't know.
486
+ [1217.900 --> 1220.180] So let's take, for example, we have one layer of cells.
487
+ [1220.180 --> 1220.860] It doesn't really matter.
488
+ [1220.860 --> 1221.980] We're just going to take a bunch of cells
489
+ [1221.980 --> 1224.300] and say it's like one layer in our cortical column.
490
+ [1224.300 --> 1225.780] Let's say it's 5,000 neurons.
491
+ [1225.780 --> 1228.900] And typically what we see is a very sparse activation.
492
+ [1228.900 --> 1231.020] So let's say 2% of our neurons are going to be active
493
+ [1231.020 --> 1232.180] at any point in time.
494
+ [1232.180 --> 1234.060] So we have 100 active neurons.
495
+ [1234.060 --> 1235.940] Now, at any point in time, there's 100.
496
+ [1235.940 --> 1237.860] And the moment later, there's another 100.
497
+ [1237.860 --> 1238.700] And another 100.
498
+ [1238.700 --> 1239.940] Later is another 100.
499
+ [1239.940 --> 1241.100] So first question, I'm going to ask,
500
+ [1241.100 --> 1243.980] is what is the representational capacity of a layer of cells?
501
+ [1243.980 --> 1246.220] How many different ways can I pick 100 out of 5,000?
502
+ [1246.220 --> 1247.620] Well, you're all not surprised.
503
+ [1247.620 --> 1249.540] It's very, very big.
504
+ [1249.540 --> 1250.220] Which you may not know.
505
+ [1250.220 --> 1252.300] You can type this into any browser and just say 5,000.
506
+ [1252.300 --> 1255.540] She was 100, and it'll tell you.
507
+ [1255.540 --> 1258.700] And in this case, it's 3 times 10 to the 200.
508
+ [1258.700 --> 1260.580] That's infinite, as far as we're concerned.
509
+ [1260.580 --> 1261.580] And we don't have to worry about that.
510
+ [1261.580 --> 1262.940] We can pick them all day long.
511
+ [1262.940 --> 1264.460] The second thing is, if you randomly
+ [1264.460 --> 1268.140] choose two sets of patterns, two activation patterns,
+ [1268.140 --> 1269.100] what's the likelihood,
+ [1269.100 --> 1271.420] what's the distribution, of the overlap?
+ [1271.420 --> 1273.620] How many cells would they have in common?
+ [1273.620 --> 1275.700] In this case, it's about two.
+ [1275.700 --> 1277.540] But then you can say, well, what's the chance it's
+ [1277.540 --> 1278.660] going to have 10 cells,
+ [1278.660 --> 1281.500] 20 cells, or 30 cells in common?
+ [1281.500 --> 1285.220] And it turns out that it's very, very unlikely.
+ [1285.220 --> 1288.420] It very quickly drops off to, like, never,
+ [1288.420 --> 1290.540] even though technically it could happen.
+ [1290.540 --> 1292.740] So you can pick random, what we call,
+ [1292.740 --> 1295.820] SDRs, sparse activations, all day long,
+ [1295.820 --> 1298.700] and they almost all overlap by just a few cells.
+ [1298.700 --> 1301.580] So they're very, very orthogonal in that sense.
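+ A minimal sketch (Python, standard library only) of those two claims: the capacity of 100-of-5,000 patterns, and how quickly the overlap between two random patterns falls off. The numbers mirror the talk; the code itself is just illustrative:
+
+     import math
+
+     n, w = 5000, 100                      # cells in the layer; active cells (2%)
+
+     # Capacity: the number of distinct 100-of-5,000 patterns, ~3 x 10^200.
+     print(f"capacity ~ {math.comb(n, w):.3e}")
+
+     # Overlap of two random patterns is hypergeometric, with mean w*w/n = 2.
+     def p_overlap(k):
+         return math.comb(w, k) * math.comb(n - w, w - k) / math.comb(n, w)
+
+     print(f"P(overlap = 2)   = {p_overlap(2):.3f}")
+     print(f"P(overlap >= 20) = {sum(p_overlap(k) for k in range(20, w + 1)):.3e}")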
+ [1301.580 --> 1303.940] Now, we can take advantage of this, because the neuron,
+ [1303.940 --> 1306.140] what it means is, the neuron only
+ [1306.140 --> 1308.380] has to form a few synapses; it
+ [1308.380 --> 1309.540] doesn't have to form connections to
+ [1309.540 --> 1310.620] all the cells that are active when it
+ [1310.620 --> 1312.100] wants to recognize a pattern.
+ [1312.100 --> 1314.860] So in this case, say I want this neuron to recognize
+ [1314.860 --> 1316.060] a pattern of 100 active cells.
+ [1316.060 --> 1317.740] These are the gray cells.
+ [1317.740 --> 1320.460] It only has connections, on one of its dendrites, to 10
+ [1320.460 --> 1321.740] of those, or 20 of those.
+ [1321.740 --> 1324.060] And it can reliably recognize that pattern.
+ [1324.060 --> 1326.300] Technically, it could have a lot of false positives,
+ [1326.300 --> 1327.140] but it just won't.
+ [1327.140 --> 1329.500] It's just never going to happen.
+ [1329.500 --> 1330.940] The second thing we can do now, this
+ [1330.940 --> 1333.700] is perhaps something you haven't seen before,
+ [1333.700 --> 1337.540] but maybe you have, is we can ask ourselves the question:
+ [1337.540 --> 1340.340] what happens if I form a union of patterns?
+ [1340.340 --> 1342.740] So instead of just invoking one pattern in this layer of
+ [1342.740 --> 1344.940] cells, I'm going to invoke 10 patterns.
+ [1344.940 --> 1347.420] That's 1,000 active cells, or 20% of the cells
+ [1347.420 --> 1348.580] that are active.
+ [1348.580 --> 1351.700] Well, you could say, wow, this cell is going to be in trouble now,
+ [1351.700 --> 1354.100] because some of its synapses will see activity by chance,
+ [1354.100 --> 1356.900] and it could have a false positive.
+ [1356.900 --> 1361.140] But if you do the math, it's still extremely unlikely.
+ [1361.140 --> 1364.580] So this cell, by connecting with 20 synapses to the whole
+ [1364.580 --> 1367.700] population here, can reliably pick out that pattern,
+ [1367.700 --> 1369.860] even though there's a whole bunch of other patterns going on.
+ [1369.860 --> 1372.460] And you can do unions much greater than that.
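+ A rough sketch of that math (Python, standard library). The synapse count and firing threshold below are illustrative assumptions, not parameters given in the talk:
+
+     import math
+
+     n, s = 5000, 20        # cells in the layer; synapses the cell samples
+
+     def p_false_positive(active, theta):
+         """P(at least theta of the cell's s synapses land on active cells)
+         when `active` cells are active at random (hypergeometric)."""
+         total = math.comb(n, active)
+         return sum(math.comb(s, k) * math.comb(n - s, active - k)
+                    for k in range(theta, s + 1)) / total
+
+     for theta in (10, 15):
+         print(theta,
+               p_false_positive(100, theta),    # one pattern active (2%)
+               p_false_positive(1000, theta))   # union of ten patterns (20%)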
+ [1372.460 --> 1375.060] We're going to rely on this property,
+ [1375.060 --> 1376.900] because here's what we think is going on:
+ [1376.900 --> 1380.740] every cellular layer in the column is representing things,
+ [1380.740 --> 1382.780] and often there's uncertainty.
+ [1382.780 --> 1385.740] And when there's uncertainty, it's going to use a union.
+ [1385.740 --> 1386.900] It's going to say, oh, I don't know,
+ [1386.900 --> 1388.660] it could be x, y, z, and so on.
+ [1388.660 --> 1391.460] And what it means is that the networks don't get confused
+ [1391.460 --> 1393.300] as they try to resolve that uncertainty.
+ [1393.300 --> 1395.780] As they bounce back and forth, they essentially
+ [1395.780 --> 1397.980] narrow down to the only consistent answer.
+ [1397.980 --> 1399.700] I'll explain some of this later.
+ [1399.700 --> 1402.820] But the point is, we think unions are happening everywhere,
+ [1402.820 --> 1406.260] and so the density of cell activity
+ [1406.260 --> 1408.100] basically represents uncertainty.
+ [1408.100 --> 1410.020] And when you've really got something, when you know what's going on,
+ [1410.020 --> 1411.860] it's going to be very sparse.
+ [1411.860 --> 1412.580] OK.
+ [1412.580 --> 1416.340] Then we said, OK, take a bunch of those pyramidal neurons
+ [1416.340 --> 1420.700] with sparse activation, and put them in a layer like this.
+ [1420.700 --> 1421.980] We add a few more things.
+ [1421.980 --> 1423.580] We're going to basically
+ [1423.580 --> 1425.420] put the cells into minicolumns.
+ [1425.420 --> 1427.660] You might say 10 cells per minicolumn.
+ [1427.660 --> 1428.740] And a minicolumn doesn't
+ [1428.740 --> 1430.180] have to be a physical structure.
+ [1430.180 --> 1432.420] All we're asking is that the cells in a minicolumn
+ [1432.420 --> 1435.700] share a common feed-forward receptive field property.
+ [1435.700 --> 1439.340] This is what Hubel and Wiesel saw many, many years
+ [1439.340 --> 1440.180] ago:
+ [1440.180 --> 1441.900] all the cells in a sort of vertical line
+ [1441.900 --> 1444.420] have the same sort of receptive field property.
+ [1444.420 --> 1445.940] You don't have to see the minicolumns.
+ [1445.940 --> 1447.740] You just have to have that property.
+ [1447.740 --> 1449.900] So the cells in a minicolumn are going
+ [1449.900 --> 1451.940] to respond to the same feed-forward pattern,
+ [1451.940 --> 1453.840] but they're going to form connections horizontally
+ [1453.840 --> 1455.780] that are unique.
+ [1455.780 --> 1459.740] And so here's the point.
+ [1459.740 --> 1461.900] Here's what would happen in two time periods,
+ [1462.180 --> 1463.340] time 0 and time 1.
+ [1463.340 --> 1466.540] If I have no predictive state and an input comes in,
+ [1466.540 --> 1468.620] it's going to activate all the cells in the minicolumns,
+ [1468.620 --> 1470.020] because they're all equally getting this input,
+ [1470.020 --> 1471.380] and they all look similar.
+ [1471.380 --> 1474.860] In the condition where there is a predictive state,
+ [1474.860 --> 1477.540] and I've represented those cells by the little red circles here,
+ [1477.540 --> 1479.260] this means that these cells are predicting
+ [1479.260 --> 1479.940] they're going to be active.
+ [1479.940 --> 1481.180] They're depolarized.
+ [1481.180 --> 1482.780] The same input comes in,
+ [1482.780 --> 1485.620] but it's going to select one of those cells.
+ [1485.620 --> 1488.420] The one that was predicted is going to fire first.
+ [1488.420 --> 1490.620] There's very fast inhibition,
+ [1490.620 --> 1493.460] and you basically form a sparse pattern.
+ [1493.460 --> 1496.260] The next moment after this, what will happen is
+ [1496.260 --> 1499.100] the active pattern will then predict other cells.
+ [1499.100 --> 1502.900] And so you can go through these sparse activations in time,
+ [1502.900 --> 1505.260] prediction and activation, prediction and activation.
+ [1505.260 --> 1507.460] And that's the basis of sequence memory.
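+ A minimal sketch (Python) of that activation rule, assuming `predicted` holds the cells depolarized by the previous time step:
+
+     def activate(winning_minicolumns, predicted, cells_per_minicolumn=10):
+         """Fire only the predicted cells in each winning minicolumn;
+         if a minicolumn had no prediction, burst all of its cells."""
+         active = set()
+         for mc in winning_minicolumns:
+             cells = {(mc, i) for i in range(cells_per_minicolumn)}
+             hits = cells & predicted
+             active |= hits if hits else cells   # sparse if predicted, burst if not
+         return active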
+ [1507.460 --> 1510.380] We have built this over years, and we've tested it,
+ [1510.380 --> 1511.940] and we've applied it commercially.
+ [1511.940 --> 1513.780] We understand it very well.
+ [1513.780 --> 1515.140] I'll just mention a few things.
+ [1515.140 --> 1516.500] It's very high capacity.
+ [1516.500 --> 1518.660] This is important to remember.
+ [1518.660 --> 1521.860] A slightly bigger network than this,
+ [1521.860 --> 1524.420] we've shown, can learn up to a million transitions,
+ [1524.420 --> 1527.660] meaning it's like 10,000 songs of 100 notes each.
+ [1527.660 --> 1528.980] It's really high capacity.
+ [1528.980 --> 1530.980] It's surprising.
+ [1530.980 --> 1532.580] It can learn high-order sequences.
+ [1532.580 --> 1534.740] So imagine you train it on two sequences:
+ [1534.740 --> 1537.220] A, B, C, D and X, B, C, Y.
+ [1537.220 --> 1538.940] If you show it A, B, C, it predicts D,
+ [1538.940 --> 1540.860] and if you show it X, B, C, it predicts Y.
+ [1540.860 --> 1543.100] It doesn't get confused by the B and the C.
+ [1543.100 --> 1545.140] Similarly, if I just show it the B and the C,
+ [1545.140 --> 1547.100] it's going to predict both D and Y,
+ [1547.100 --> 1549.340] because that's all it can do at that point in time.
+ [1549.340 --> 1551.100] But it does all these things automatically.
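+ A toy illustration (Python) of that high-order property: give each occurrence of a symbol its own "cell," keyed by the context it was learned in, the way minicolumn cells split on context. This is a sketch of the idea, not the actual algorithm:
+
+     def learn(seqs):
+         succ = {}                            # state -> set of successor states
+         for seq in seqs:
+             ctx = None                       # a state is (symbol, context)
+             for a, b in zip(seq, seq[1:]):
+                 state = (a, ctx)
+                 succ.setdefault(state, set()).add((b, state))
+                 ctx = state
+         return succ
+
+     def predict(succ, shown):
+         states = [s for s in succ if s[0] == shown[0]]   # any context fits
+         for sym in shown[1:]:
+             states = [t for s in states for t in succ.get(s, ()) if t[0] == sym]
+         return {t[0] for s in states for t in succ.get(s, ())}
+
+     succ = learn(["ABCD", "XBCY"])
+     print(predict(succ, "ABC"))   # {'D'}
+     print(predict(succ, "BC"))    # {'D', 'Y'}: a union under ambiguity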
+ [1551.100 --> 1554.260] It's extremely robust to noise and failure;
+ [1554.260 --> 1556.060] you can knock out 40% of anything,
+ [1556.060 --> 1557.980] and it still performs well.
+ [1557.980 --> 1560.140] And it has very desirable learning properties.
+ [1560.140 --> 1563.620] It's all local learning, very simple rules.
+ [1563.620 --> 1564.860] I won't get into all of that.
+ [1564.860 --> 1567.140] It satisfies many biological constraints.
+ [1567.140 --> 1569.540] Many people have implemented this by now,
+ [1569.540 --> 1571.740] and it's being used in some commercial applications.
+ [1571.740 --> 1574.580] But it is a biological model, first and foremost.
+ [1574.580 --> 1576.020] OK, that was the first section.
+ [1576.020 --> 1580.060] Now, in the second section, we asked how we're going to
+ [1580.060 --> 1583.500] learn predictive models of sensorimotor sequences.
+ [1583.500 --> 1585.620] Our first idea was to say, OK, let's start
+ [1585.620 --> 1589.540] with the same cellular layer: can we turn it into
+ [1589.540 --> 1590.340] a sensorimotor layer?
+ [1590.340 --> 1593.140] And we said, well, here's a basic idea.
+ [1593.140 --> 1595.660] What if we just added a motor-related context?
+ [1595.660 --> 1598.420] So instead of the context just being the previous state,
+ [1598.420 --> 1600.500] we could have a motor-related context.
+ [1600.500 --> 1603.620] And we were inspired because we said, look, we know that 50%
+ [1603.620 --> 1606.500] of the inputs to the layer 4 cells come from layer 6a.
+ [1606.500 --> 1607.740] So that's an idea.
+ [1607.740 --> 1608.660] Let's go for that.
+ [1608.660 --> 1610.460] And we asked ourselves, well, what would that motor-related
+ [1610.460 --> 1612.940] context be?
+ [1612.940 --> 1614.740] Well, this is the hypothesis:
+ [1614.740 --> 1616.300] by adding a motor-related context,
+ [1616.300 --> 1618.340] a cellular layer can predict its input, the sensory
+ [1618.340 --> 1619.820] inputs.
+ [1619.820 --> 1622.900] And then we said, what is the correct motor-related context?
+ [1622.900 --> 1624.740] We started working on this several years ago.
+ [1624.740 --> 1626.700] We tried different things,
+ [1626.700 --> 1628.540] and they kind of worked, but they didn't work really
+ [1628.540 --> 1629.140] well.
+ [1629.140 --> 1631.340] They didn't scale well, and so on.
+ [1631.380 --> 1634.780] But just a little under two years ago,
+ [1634.780 --> 1636.980] we had an insight about it.
+ [1636.980 --> 1639.300] This gets to that allocentric location.
+ [1639.300 --> 1642.380] So let me use my coffee cup as my prop.
+ [1642.380 --> 1645.460] I'm going to use this a lot during this talk.
+ [1645.460 --> 1648.500] You can just ask yourself a very simple question.
+ [1648.500 --> 1650.100] Imagine I'm not looking at this coffee cup.
+ [1650.100 --> 1651.020] I'm just touching it.
+ [1651.020 --> 1651.940] I'm familiar with it.
+ [1651.940 --> 1655.180] This is my coffee cup from my office,
+ [1655.180 --> 1658.180] and I'm holding it in my hand.
+ [1658.180 --> 1659.660] I'm about to move my finger.
+ [1659.660 --> 1661.780] Can I predict what I'm going to feel?
+ [1661.780 --> 1662.500] And yes, I can.
+ [1662.500 --> 1663.620] I know what I'm going to feel.
+ [1663.620 --> 1664.740] I know I'm going to feel this edge here.
+ [1664.740 --> 1666.380] I also know if I touch down here,
+ [1666.380 --> 1667.980] I'm going to get this sort of rough texture,
+ [1667.980 --> 1669.780] because this cup has a rough bottom.
+ [1669.780 --> 1671.980] It also has this little doodad here.
+ [1671.980 --> 1674.500] So as I move my finger, I make these predictions.
+ [1674.500 --> 1676.700] Before I touch it, I know what I'm going to feel.
+ [1676.700 --> 1679.060] Now, how could I know that?
+ [1679.060 --> 1681.020] First of all, the cortex
+ [1681.020 --> 1682.740] has to know that this is a cup.
+ [1682.740 --> 1683.420] It has to know that.
+ [1683.420 --> 1686.340] And it has to know where it's going to touch the cup.
+ [1686.340 --> 1687.500] It has to know that too.
+ [1687.500 --> 1688.780] If I'm going to predict what I'm going to feel,
+ [1688.780 --> 1690.820] it must know where, and the "where" it needs
+ [1690.820 --> 1693.180] is where on the cup it's going to touch.
+ [1693.180 --> 1694.420] It's not relative to my body.
+ [1694.420 --> 1696.100] It's relative to the cup.
+ [1696.100 --> 1698.460] I need to know the allocentric location before I can
+ [1698.460 --> 1701.180] possibly make that prediction.
+ [1701.180 --> 1702.740] That's a deduction.
+ [1702.740 --> 1706.460] And the predictions are going to be at a fairly fine granular
+ [1706.460 --> 1707.140] level.
+ [1707.140 --> 1708.980] Every part of my skin touching this cup
+ [1708.980 --> 1711.180] is predicting what it's going to feel.
+ [1711.180 --> 1711.980] And that's a lot of predictions.
+ [1711.980 --> 1713.460] It's not like some global prediction.
+ [1713.460 --> 1715.300] It's a very local prediction.
+ [1715.300 --> 1717.780] So we realized that that is a requirement,
+ [1717.780 --> 1720.700] and that's where this idea of the allocentric location
+ [1720.700 --> 1722.100] comes from.
+ [1722.100 --> 1724.500] OK, so our answer now is, hey,
+ [1724.500 --> 1726.220] if we have an allocentric location,
+ [1726.220 --> 1728.900] the location on the cup, how could we derive that?
+ [1728.900 --> 1730.780] We didn't know what it would look like.
+ [1730.780 --> 1731.540] We didn't know.
+ [1731.540 --> 1732.780] We just assumed we had it.
+ [1732.780 --> 1734.820] So in the beginning, we just did experiments
+ [1734.820 --> 1738.060] where we sort of randomly made up stuff.
+ [1738.060 --> 1739.820] And then we also realized we really
+ [1739.820 --> 1742.420] wanted a second layer in the network.
+ [1742.420 --> 1744.900] The second layer is what you'd typically call a pooling layer.
+ [1744.900 --> 1746.700] That's a term that a lot of people use.
+ [1746.700 --> 1748.300] If you don't know what it means, in this case
+ [1748.300 --> 1750.540] what I mean by it is: in the second layer,
+ [1750.540 --> 1753.220] we're going to essentially pick a sparse activation
+ [1753.220 --> 1754.580] of cells up there,
+ [1754.580 --> 1757.500] and it's going to stay constant while the lower layer is
+ [1757.500 --> 1758.700] changing.
+ [1758.700 --> 1760.180] In the upper layer, those cells
+ [1760.180 --> 1762.620] are going to learn to respond to the series
+ [1762.620 --> 1766.540] of individual sparse activations in the lower layer.
+ [1766.540 --> 1768.060] So if you think about the lower layer,
+ [1768.060 --> 1769.860] it's sort of representing the feature,
+ [1769.860 --> 1772.340] the sensory feature, at a location.
+ [1772.340 --> 1776.660] And you're basically modeling an object
+ [1776.820 --> 1779.220] as a set of features at locations.
+ [1779.220 --> 1780.300] It's going to be like a CAD file.
+ [1780.300 --> 1781.180] It kind of makes sense.
+ [1781.180 --> 1783.580] What else could a model of an object be?
+ [1783.580 --> 1786.020] And what's interesting here is that the output layer,
+ [1786.020 --> 1789.060] this object layer, is going to be stable over movements
+ [1789.060 --> 1790.060] of the sensor,
+ [1790.060 --> 1791.940] and the input layer will be changing
+ [1791.940 --> 1793.100] with each movement of the sensor.
+ [1793.100 --> 1794.660] You have a stable representation of the object
+ [1794.660 --> 1795.340] as you move.
+ [1795.340 --> 1797.460] And it doesn't matter in which order you move,
+ [1797.460 --> 1800.260] how you touch the object, as long as you know
+ [1800.260 --> 1802.620] the allocentric location, that magic signal.
+ [1802.620 --> 1804.900] We didn't know how to derive it yet, but that's the key signal.
+ [1804.900 --> 1806.060] So we modeled this,
+ [1806.060 --> 1807.780] and we did a lot of work with it.
+ [1807.780 --> 1809.740] So with an allocentric location input,
+ [1809.740 --> 1811.780] a column can learn models of complete objects,
+ [1811.780 --> 1813.620] or rather this two-layer network can,
+ [1813.620 --> 1816.500] by essentially using different locations
+ [1816.500 --> 1817.500] on the object over time.
+ [1817.500 --> 1819.140] So it's integration over time.
+ [1819.140 --> 1822.740] You can both learn models of objects and infer them.
+ [1822.740 --> 1823.860] I'll show you that.
+ [1823.860 --> 1826.340] Now, the next thing we realized is,
+ [1826.340 --> 1829.780] if you had a series of columns near each other,
+ [1829.780 --> 1833.020] imagine they were representing three of your fingertips,
+ [1834.260 --> 1835.820] and they're going to touch that coffee cup,
+ [1835.820 --> 1837.620] three fingers at a time.
+ [1837.620 --> 1839.980] Well, each finger is going to have its own location
+ [1839.980 --> 1840.500] on the object.
+ [1840.500 --> 1842.420] Each finger is going to have its own sensory
+ [1842.420 --> 1844.060] input, and those are unique.
+ [1844.060 --> 1845.340] But they're all going to be basically
+ [1845.340 --> 1847.100] trying to model the same object.
+ [1847.100 --> 1851.020] And if they're confused, they may not know what the object is.
+ [1851.020 --> 1852.940] But the output layers of these three should agree,
+ [1852.940 --> 1854.500] because they're going to be basically
+ [1854.500 --> 1855.860] representing the same thing.
+ [1855.860 --> 1857.620] And so if you form an associative link
+ [1857.620 --> 1860.700] between them in the output layer, they can vote together,
+ [1860.700 --> 1863.300] and they can help resolve ambiguity.
+ [1863.300 --> 1864.700] That's the basic idea.
+ [1864.700 --> 1867.100] So each column has partial knowledge of an object
+ [1867.100 --> 1870.020] as its sensor is moving,
+ [1870.020 --> 1872.780] and these long-range connections in the object layer
+ [1872.780 --> 1874.100] allow the columns to vote.
+ [1874.100 --> 1875.620] And inference will be much faster
+ [1875.620 --> 1877.820] when you're using multiple columns than with one column.
+ [1877.820 --> 1880.740] It's just like if I reach into a dark box:
+ [1880.740 --> 1883.060] I could use one finger to figure out what I'm touching,
+ [1883.060 --> 1884.420] or, if I grab it with my hand, I'll get it at once.
+ [1884.420 --> 1886.540] Or if I were looking at the world through a straw,
+ [1886.540 --> 1888.220] I'd have to move my straw around a bit.
+ [1888.220 --> 1890.060] But if I open my eyes and see the whole thing,
+ [1890.060 --> 1891.900] then I can do it very quickly.
+ [1891.900 --> 1895.700] So this is just a little cartoon animation,
+ [1895.700 --> 1899.220] just to illustrate some of this; it's not terribly accurate.
+ [1899.220 --> 1901.020] It's just for illustration purposes.
+ [1901.020 --> 1902.860] So imagine this finger is going to touch this cup
+ [1902.860 --> 1904.020] in three locations,
+ [1904.020 --> 1906.860] and I have one column with its input layer and an output layer.
+ [1906.860 --> 1909.300] As I move towards the spot I'm going to touch,
+ [1909.300 --> 1911.300] I have a predicted location signal.
+ [1911.300 --> 1914.220] That basically invokes a union of possible sensations
+ [1914.220 --> 1916.700] I might find at that location.
+ [1916.700 --> 1919.860] When I actually touch it, a sensory feature
+ [1919.860 --> 1922.060] comes in, and it selects one of those sensations.
+ [1922.060 --> 1923.620] It projects up to the output layer,
+ [1923.620 --> 1928.420] and that layer says, I know three objects consistent with this.
+ [1928.420 --> 1931.100] The coffee cup, the can, and the tennis ball all fit.
+ [1931.100 --> 1933.980] So I'll form a union representation up there.
+ [1933.980 --> 1935.820] Then I go to a new location.
+ [1935.820 --> 1937.220] I get a new location signal,
+ [1937.220 --> 1940.540] and it basically makes a prediction about what it might sense.
+ [1940.540 --> 1941.900] Then I actually sense something
+ [1941.900 --> 1943.900] and say, oh, I have this feature at this location.
+ [1943.900 --> 1945.180] I pass it up to the output layer,
+ [1945.180 --> 1946.940] and I eliminate the tennis ball, because that's
+ [1946.940 --> 1949.780] inconsistent with feeling a lip or an edge.
+ [1949.780 --> 1952.340] And then I go to the final sensation here:
+ [1952.340 --> 1954.740] new location, new sensory feature,
+ [1954.740 --> 1955.340] pass it up.
+ [1955.340 --> 1958.300] And I can eliminate the soda can,
+ [1958.300 --> 1960.940] because it's inconsistent.
+ [1960.940 --> 1964.260] If I do this with three fingers at the same time,
+ [1964.260 --> 1965.820] the hand grasping it,
+ [1965.820 --> 1968.140] I get three different locations, three different features.
+ [1968.140 --> 1969.860] In this case, we're showing them the same.
+ [1969.860 --> 1970.700] They pass them up.
+ [1970.700 --> 1972.900] In the output layer, we can say, oh, well, column one says
+ [1972.900 --> 1975.380] it could be the coffee cup or a ball.
+ [1975.380 --> 1977.660] The other two are saying it could be the coffee cup
+ [1977.660 --> 1980.860] or the can. They just quickly vote with each other,
+ [1980.860 --> 1982.460] and you eliminate until you're down to it.
+ [1982.460 --> 1984.020] The only thing that's consistent for all three of them
+ [1984.020 --> 1984.860] is the coffee cup.
+ [1984.860 --> 1987.380] So you resolve it very quickly.
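+ A minimal sketch (Python) of that narrowing process: each column keeps a union of candidate objects consistent with its evidence so far, and lateral voting intersects the columns' candidate sets. The object names are from the cartoon; everything else is assumed:
+
+     def narrow(candidates, observations):
+         """One column over time: intersect its candidate set with the
+         objects consistent with each new feature-at-location input."""
+         for consistent in observations:
+             candidates &= consistent
+         return candidates
+
+     def vote(columns):
+         """Several columns at once: intersect candidate sets laterally."""
+         return set.intersection(*columns)
+
+     # One finger, successive touches:
+     print(narrow({"cup", "can", "ball"}, [{"cup", "can"}, {"cup"}]))  # {'cup'}
+     # Three fingers, one touch each:
+     print(vote([{"cup", "ball"}, {"cup", "can"}, {"cup", "can"}]))    # {'cup'}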
+ [1987.380 --> 1990.820] We then tried this out on a more sophisticated problem.
+ [1990.820 --> 1994.820] We started with the YCB (Yale-CMU-Berkeley) benchmark, which
+ [1994.820 --> 1996.060] is about 80 objects.
+ [1996.060 --> 1999.580] They'll actually send them to you,
+ [1999.580 --> 2001.460] or you can just use the 3D CAD files.
+ [2001.460 --> 2004.620] Some of them are perishable food items, so we figured
+ [2004.620 --> 2006.180] we would go for the 3D CAD files.
+ [2008.460 --> 2013.140] And then we built a simulated robotic hand
+ [2013.140 --> 2014.700] using the Unity game engine.
+ [2014.700 --> 2019.780] We built sensory arrays on each of the fingers,
+ [2019.780 --> 2023.100] and we built a multi-column array representing each finger.
+ [2023.100 --> 2026.220] We used 4,096 neurons per layer per column.
+ [2026.220 --> 2028.980] So with three fingers, that would be about 24,000 neurons,
+ [2028.980 --> 2030.460] each with thousands of synapses.
+ [2030.460 --> 2032.260] And, not surprisingly, because it's a simulation,
+ [2032.260 --> 2034.220] it worked very well.
+ [2034.300 --> 2038.180] But there are just a few things to talk about here.
+ [2038.180 --> 2040.300] First we did it with one finger,
+ [2040.300 --> 2042.700] and the one finger is touching the object at different places.
+ [2042.700 --> 2046.260] With one touch, you can't really tell what the object is.
+ [2046.260 --> 2048.060] So this is a confusion matrix, which
+ [2048.060 --> 2050.380] shows the actual object on one axis
+ [2050.380 --> 2052.220] and what the system thought it
+ [2052.220 --> 2053.700] might have been on the other.
+ [2053.700 --> 2057.060] And you can see, obviously, the right answer is the diagonal.
+ [2057.060 --> 2059.140] After one touch, there's a lot of confusion.
+ [2059.140 --> 2061.620] After the second touch, things start
+ [2061.620 --> 2063.580] narrowing down quite a bit.
+ [2063.580 --> 2065.580] After six touches, you're doing really, really well.
+ [2065.580 --> 2067.500] And after 10 touches, you're essentially guaranteed to get it.
+ [2067.500 --> 2068.700] Now, there's a lot of variability,
+ [2068.700 --> 2070.900] because if you touch sort of unique features on the object,
+ [2070.900 --> 2072.700] you can narrow it down quicker than if you touch
+ [2072.700 --> 2073.940] non-unique features.
+ [2073.940 --> 2076.260] But this gives you the general idea.
+ [2076.260 --> 2079.860] We also did a lot of experiments looking at the number
+ [2079.860 --> 2081.420] of columns, or, if you
+ [2081.420 --> 2082.540] want, think of it as the number of fingers,
+ [2082.540 --> 2084.980] though we can do this abstractly.
+ [2084.980 --> 2088.740] And of course, what we'd expect is that the fewer columns
+ [2088.740 --> 2090.100] you're using, the more touches,
+ [2090.100 --> 2093.660] the more sensations, you need to recognize the thing.
+ [2093.660 --> 2095.140] And if you have more, then it quickly
+ [2095.140 --> 2098.580] settles down to where you can basically do it in one sensation.
+ [2098.580 --> 2101.580] And it gets harder depending on some other parameters.
+ [2101.580 --> 2103.100] There are a lot of parameters that
+ [2103.100 --> 2104.900] can make this harder or easier.
+ [2104.900 --> 2109.820] But the point is, we see that characteristic curve.
+ [2109.820 --> 2111.580] So that was the big idea there.
+ [2111.580 --> 2113.620] And then we really said, OK, we've
+ [2113.620 --> 2116.420] got to get to the heart of this allocentric location thing.
+ [2116.420 --> 2117.180] What's going on there?
+ [2117.180 --> 2118.700] What does that mean?
+ [2118.700 --> 2121.060] And as I said, we thought,
+ [2121.060 --> 2123.780] let's go look at the entorhinal cortex
+ [2123.780 --> 2124.860] to see what's going on there.
+ [2124.860 --> 2127.060] Now, I know there's a bunch of hippocampal people here,
+ [2127.060 --> 2129.140] and we were talking about this this morning.
+ [2129.140 --> 2131.500] There are various reasons why we chose to model
+ [2131.500 --> 2132.380] the entorhinal cortex.
+ [2132.380 --> 2133.660] Think of it as a modeling decision.
+ [2133.660 --> 2134.620] I won't get into it.
+ [2134.620 --> 2137.140] But don't get mad at me if I don't touch your favorite topic.
+ [2140.860 --> 2142.300] So we ended up here.
+ [2142.300 --> 2143.980] This wasn't our initial hypothesis.
+ [2143.980 --> 2146.020] Our initial hypothesis was that cortical columns
+ [2146.020 --> 2147.900] could contain analogs of grid cells.
+ [2147.900 --> 2149.460] And very recently, we realized they
+ [2149.460 --> 2151.380] had to have analogs of head direction cells too.
+ [2151.380 --> 2152.940] That was the last missing piece, one that I
+ [2152.940 --> 2155.220] didn't know about until just a few weeks ago.
+ [2155.220 --> 2159.020] So let's just talk about what goes on in the entorhinal cortex.
+ [2159.020 --> 2163.180] I won't claim to be an expert in this,
+ [2163.180 --> 2165.060] but we have run this by some experts,
+ [2165.060 --> 2166.220] and they said, it's OK,
+ [2166.220 --> 2167.060] you can say this, Jeff.
+ [2167.060 --> 2169.580] So we're going to go there.
+ [2169.580 --> 2172.860] One of the things the entorhinal cortex does
+ [2172.860 --> 2176.380] is allow an animal, typically we study rats,
+ [2176.380 --> 2178.820] to basically build maps of its environment,
+ [2178.820 --> 2181.100] to know where it is, to be able to make predictions
+ [2181.100 --> 2183.260] and know where things are; it's sort of the foundation
+ [2183.260 --> 2185.740] of navigation problems.
+ [2185.740 --> 2189.460] And grid cells, I won't go into all the details.
+ [2189.460 --> 2192.500] We all know about them, and some of the details are really
+ [2192.500 --> 2194.460] important, but I won't get into them.
+ [2194.460 --> 2196.540] They allow it to encode locations.
+ [2196.540 --> 2199.660] So the way to think about this: if you look at the rooms,
+ [2199.660 --> 2201.420] they're actually the same shape.
+ [2201.420 --> 2202.340] I should go back here.
+ [2202.340 --> 2204.740] They're the same shape, but they differ
+ [2204.740 --> 2206.380] in some salient feature.
+ [2206.380 --> 2208.660] And so the rat perceives them as different rooms,
+ [2208.660 --> 2210.900] and you would too, if you were in there.
+ [2210.900 --> 2214.300] And what we want to do is just have a representation
+ [2214.300 --> 2216.900] of the locations in those rooms.
+ [2216.900 --> 2220.020] Now, about the way grid cells do this,
+ [2220.020 --> 2221.260] I'll just tell you two things.
+ [2221.260 --> 2222.700] First of all, every point in this room
+ [2222.700 --> 2224.700] can be associated with a sparse activation
+ [2224.700 --> 2225.540] of the grid cells.
+ [2225.540 --> 2226.860] You have a bunch of grid cells; they're in these grid cell
+ [2226.860 --> 2227.540] modules.
+ [2227.540 --> 2229.020] But if you just look at which cells are active
+ [2229.020 --> 2231.100] and which cells are not active, it's sort of a sparse
+ [2231.100 --> 2232.140] representation.
+ [2232.140 --> 2234.740] And I've shown here three locations in these rooms.
+ [2234.740 --> 2238.020] Every location in these rooms has an associated pattern.
+ [2238.020 --> 2240.820] What's interesting about it is the locations in a room
+ [2240.820 --> 2241.900] are unique to the room.
+ [2241.900 --> 2245.660] So the actual coding of these locations in room 1
+ [2245.660 --> 2248.940] will be very different than the coding in room 2.
+ [2248.940 --> 2251.620] This is actually essential to the whole theory.
+ [2251.620 --> 2255.380] So A means that location in that room.
+ [2255.380 --> 2256.780] That's a sparse activation.
+ [2256.780 --> 2259.260] And X means that location in that room,
+ [2259.260 --> 2261.060] and R means that location in that room;
+ [2261.340 --> 2263.340] each is a very different pattern.
+ [2263.340 --> 2265.620] And of course, one of the most important things here
+ [2265.620 --> 2268.420] is that this location is updated by movement.
+ [2268.420 --> 2271.740] So even in the complete dark, if the rat is in that room
+ [2271.740 --> 2273.620] and it moves, it walks forward,
+ [2273.620 --> 2276.100] it updates its location information.
+ [2276.100 --> 2279.580] And one of the clever things is the path integration property.
+ [2279.580 --> 2281.500] Say I want to go from here to there.
+ [2281.500 --> 2283.660] I can go this way and then turn this way,
+ [2283.660 --> 2286.100] and I get the same representation as if I just went straight,
+ [2286.100 --> 2288.100] or went around in a circle.
+ [2288.100 --> 2290.580] And what's clever about this is that it works even
+ [2290.580 --> 2291.780] in novel environments
+ [2291.780 --> 2293.140] that it's never been in before.
+ [2293.140 --> 2295.140] So the rat may never have been in room 3,
+ [2295.140 --> 2297.700] but it will have that path integration property there,
+ [2297.700 --> 2299.380] even in the dark.
+ [2299.380 --> 2300.420] So that's kind of clever.
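+ A minimal 1-D sketch (Python with NumPy) of that path integration property: each grid-cell-like module tracks position modulo its own period, and together the phases form a location code that depends only on net displacement, not on the route. The module periods are made-up values:
+
+     import numpy as np
+
+     scales = np.array([3.0, 5.0, 7.0])    # module periods (assumed values)
+
+     def move(phase, delta):
+         return (phase + delta) % scales    # pure path integration, no vision
+
+     start = np.zeros(3)                    # anchored at some room origin
+     a = move(move(start, 2.0), 3.0)        # two legs of a detour...
+     b = move(start, 5.0)                   # ...or one straight move
+     assert np.allclose(a, b)               # same location code either way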
+ [2300.420 --> 2303.460] Now, the rat, or you, needs to know one more thing. It's fun to do this
+ [2303.460 --> 2304.380] in the dark yourself.
+ [2304.380 --> 2307.060] I do it at night.
+ [2307.060 --> 2309.660] It actually is fun to try to see how good you are,
+ [2309.660 --> 2312.700] how good you are at this.
+ [2312.700 --> 2316.300] You need to know the orientation of, in this case,
+ [2316.300 --> 2319.020] the animal's head relative to the room.
+ [2319.020 --> 2322.140] And so there are these things called head direction cells.
+ [2322.140 --> 2325.380] These are not driven by magnetic fields or anything
+ [2325.380 --> 2326.220] like that.
+ [2326.220 --> 2327.940] They are basically a set of cells which
+ [2327.940 --> 2330.540] indicates the direction of the head.
+ [2330.540 --> 2332.460] The anchoring of those head direction cells
+ [2332.460 --> 2333.460] is unique per room.
+ [2333.460 --> 2335.300] So the anchoring in a room
+ [2335.300 --> 2337.780] is not always aligned along an edge,
+ [2337.780 --> 2341.140] but it's consistent within the system.
+ [2341.140 --> 2345.180] And the orientation is also updated by movement.
+ [2345.180 --> 2347.260] So think about why you need this.
+ [2347.260 --> 2348.420] First of all, you're
+ [2348.420 --> 2350.940] going to need to know the orientation, the head direction,
+ [2350.940 --> 2352.740] if you're going to know where you're going to end up
+ [2352.740 --> 2353.260] in the room.
+ [2353.260 --> 2355.020] So if I walk forward two steps, well,
+ [2355.020 --> 2358.140] where I'm going to be depends on which way I was facing.
+ [2358.140 --> 2361.380] Also, if I want to predict what I'm going to see
+ [2361.380 --> 2363.860] or sense, I have to know where I am and which direction
+ [2363.860 --> 2366.180] I'm facing, because I could be in the same location facing different ways.
+ [2366.180 --> 2367.740] As the animal moves, both of these
+ [2367.740 --> 2368.940] are updated simultaneously.
+ [2368.940 --> 2371.020] You have to update the orientation.
+ [2371.020 --> 2372.140] I'm going to use the word orientation
+ [2372.140 --> 2374.140] because I'm trying to generalize it.
+ [2374.140 --> 2376.820] Orientation and location both get updated.
+ [2376.820 --> 2379.500] I might be updating just my orientation,
+ [2379.500 --> 2382.060] or I might be updating my location,
+ [2382.060 --> 2385.780] or I might be doing both, as when I move around in a curve like that.
+ [2385.780 --> 2387.940] So location and orientation are both necessary
+ [2387.940 --> 2389.680] to learn the structure of rooms and to predict
+ [2389.680 --> 2391.980] the sensory input.
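+ A 2-D sketch (Python) of why both have to be tracked: "walk forward" updates location through the current heading, while a turn updates the heading alone. This is just standard pose arithmetic, shown for illustration:
+
+     import math
+
+     def turn(pose, dtheta):
+         x, y, th = pose
+         return (x, y, (th + dtheta) % (2 * math.pi))
+
+     def forward(pose, d):
+         x, y, th = pose                    # location update depends on heading
+         return (x + d * math.cos(th), y + d * math.sin(th), th)
+
+     pose = (0.0, 0.0, 0.0)
+     print(forward(pose, 2.0))                      # facing +x: end at (2, 0)
+     print(forward(turn(pose, math.pi / 2), 2.0))   # after turning: end at (0, 2)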
+ [2391.980 --> 2395.740] So we think the same thing is going on as a cortical column
+ [2395.740 --> 2399.460] tries to model external objects in the world.
+ [2399.460 --> 2404.580] You can define a location space associated with individual objects.
+ [2404.580 --> 2406.860] So my coffee cup is like a room,
+ [2406.860 --> 2411.220] and the points on it are going to be both unique to the coffee cup
+ [2411.220 --> 2414.380] and unique to the location on the coffee cup.
+ [2414.380 --> 2415.780] And the same thing with the pen.
+ [2415.780 --> 2418.100] And it's going to have to be updated by movement.
+ [2418.100 --> 2421.340] In this case, in the case of my finger, the movement
+ [2421.340 --> 2426.140] is the movement of my finger relative to the cup.
+ [2426.140 --> 2427.460] And so we have to have that.
+ [2427.460 --> 2429.900] The second thing, and I only realized this recently, is that
+ [2429.900 --> 2433.220] to solve the problems of modeling objects
+ [2433.220 --> 2434.860] and modeling structures, you also need
+ [2434.860 --> 2437.340] to have an equivalent of an orientation.
+ [2437.340 --> 2440.620] So I've tried to show here a sort of sensor
+ [2440.620 --> 2444.500] on the tip of your finger, sensing point A
+ [2444.500 --> 2446.060] but from different orientations.
+ [2446.060 --> 2448.020] You can think of it this way.
+ [2448.020 --> 2450.300] I'm touching the lip of this cup,
+ [2450.300 --> 2452.660] and as I rotate my finger like this,
+ [2452.660 --> 2454.780] the sensation on my finger is changing,
+ [2454.780 --> 2458.780] but the location I'm sensing on the cup is not.
+ [2458.780 --> 2461.100] There is a feature of the cup there,
+ [2461.100 --> 2462.660] but I'm not actually sensing just the feature.
+ [2462.660 --> 2465.180] I'm sensing the feature at an orientation.
+ [2465.180 --> 2466.900] I can say that this feature is
+ [2466.900 --> 2468.860] the lip of this cup in the frame of the cup,
+ [2468.860 --> 2471.540] but the sensation I get changes as I change the orientation
+ [2471.540 --> 2473.940] of my finger relative to the object.
+ [2473.940 --> 2476.500] So we need something like that, too:
+ [2476.500 --> 2478.140] the orientation of the sensor patch relative to the object.
+ [2478.140 --> 2480.620] Now, I should state that I'm going
+ [2480.620 --> 2482.460] to give this whole theory in terms of touch,
+ [2482.460 --> 2484.420] but the whole thing applies to vision, too.
+ [2484.420 --> 2485.980] And I believe it applies to audition as well.
+ [2485.980 --> 2487.780] It's a little hard to think about that.
+ [2487.780 --> 2490.620] But we're not doing anything modality-specific here.
+ [2490.620 --> 2492.300] We're really trying to talk about generic properties
+ [2492.300 --> 2495.740] of sensor patches relative to things.
+ [2495.740 --> 2497.340] Anyway, we're going to argue that this is anchored
+ [2497.340 --> 2500.900] to the object, the way it's anchored to the room over there,
+ [2500.900 --> 2503.660] and this orientation has to be updated by movement.
+ [2503.660 --> 2507.020] So our basic idea is the following.
+ [2507.020 --> 2509.740] Location and orientation are both necessary.
+ [2509.740 --> 2512.220] That is, the location and orientation of my sensor patch,
+ [2512.220 --> 2514.700] whether it's a patch of my retina or my skin.
+ [2514.700 --> 2517.940] It's where it's sensing, not where the sensor is.
+ [2517.940 --> 2519.380] Where is it sensing?
+ [2519.380 --> 2522.260] Both are necessary to learn the structure of objects,
+ [2522.260 --> 2525.860] to predict sensory input, and to infer.
+ [2525.860 --> 2528.380] I view this as a deduced requirement,
+ [2528.380 --> 2531.620] and therefore I don't feel it's speculative.
+ [2531.620 --> 2535.820] But you may not agree with that.
+ [2535.820 --> 2539.500] So now, with this knowledge, we went back and did the following.
+ [2539.500 --> 2541.340] We started putting these pieces together
+ [2541.340 --> 2544.780] in ways that are interesting.
+ [2544.780 --> 2546.740] And this is where I'm going to lay out
+ [2546.740 --> 2549.700] the sort of basics of the theory.
+ [2549.700 --> 2551.820] This is my most complex slide.
+ [2552.100 --> 2555.180] If I lose you here, sorry, but I'll bring you back
+ [2555.180 --> 2556.780] in a moment.
+ [2556.780 --> 2558.700] Hopefully not; I think everyone here is really smart,
+ [2558.700 --> 2561.660] so I hope you'll figure this out; you've probably got it already.
+ [2561.660 --> 2567.140] I'm just going to say up front, without any further justification,
+ [2567.140 --> 2570.780] that layer 6a is representing the orientation of the sensor patch,
+ [2570.780 --> 2572.460] and layer 6b is representing location.
+ [2572.460 --> 2574.420] There are reasons for this.
+ [2574.420 --> 2575.860] I'll get to them in a second.
+ [2575.860 --> 2578.900] These are both going to be updated by motor input.
+ [2578.900 --> 2582.620] There's going to be a path-integration type of mechanism,
+ [2582.620 --> 2584.780] sort of grid-cell-like and head-direction-
+ [2584.780 --> 2585.900] cell-like,
+ [2585.900 --> 2587.060] and they're going to have properties
+ [2587.060 --> 2590.740] similar to those cells in the entorhinal cortex.
+ [2590.740 --> 2592.700] Now let's follow the circuitry as information flows
+ [2592.700 --> 2595.260] in the basic feed-forward pathway here.
+ [2595.260 --> 2598.540] You've got a sensation, which is arriving at layer 4,
+ [2598.540 --> 2601.660] and that's paired with this bidirectional connection,
+ [2601.660 --> 2603.060] this very characteristic connection,
+ [2603.060 --> 2606.220] between layer 6a and layer 4.
+ [2606.220 --> 2608.860] And what I'm going to argue there is that layer 4 is representing
+ [2608.860 --> 2611.300] sensation at an orientation.
+ [2611.300 --> 2613.060] Now again, if I didn't know the orientation,
+ [2613.060 --> 2615.340] I'd just have a bunch of cells that look like edge detectors
+ [2615.340 --> 2616.340] or something like that.
+ [2616.340 --> 2617.980] But in the context of an orientation,
+ [2617.980 --> 2619.260] I'll get a sparse pattern,
+ [2619.260 --> 2622.060] and it's a sparse pattern that represents sensation
+ [2622.060 --> 2623.300] at an orientation.
+ [2623.300 --> 2626.180] This is our sequence memory layer that I started with.
+ [2626.180 --> 2627.860] It can learn sequences, but it can also
+ [2627.860 --> 2629.260] learn sensorimotor sequences.
+ [2629.260 --> 2631.580] And so it forms this unique representation of sensation
+ [2631.580 --> 2633.260] at an orientation.
+ [2633.260 --> 2635.420] Now, the next layer up is going to be a pooling layer.
+ [2635.420 --> 2637.660] Imagine I'm pooling the input
+ [2637.660 --> 2640.540] as I rotate my finger at the same location like this.
+ [2640.540 --> 2643.340] It just takes a while for this to sink in,
+ [2643.340 --> 2645.300] but you end up with a stable representation
+ [2645.300 --> 2647.100] of the underlying feature, independent
+ [2647.100 --> 2649.140] of the orientation of the sensor.
+ [2649.140 --> 2651.180] So I would end up with a representation of whatever
+ [2651.180 --> 2653.420] thing I'm actually sensing at that point,
+ [2653.420 --> 2656.140] independent of whether I'm oriented this way, this way, or this way.
+ [2656.140 --> 2658.660] If I went through that motion, that's what would happen.
+ [2658.660 --> 2660.100] This layer, then, represents
+ [2660.100 --> 2663.660] the feature that is being sensed at that point.
+ [2663.660 --> 2665.820] At the moment, there's no concept of an object.
+ [2665.820 --> 2667.700] I'm not locating this on the object.
+ [2667.700 --> 2670.820] I'm just representing what I'm sensing with my finger.
+ [2670.820 --> 2673.420] Layer 3 then projects to layer 5; as we saw, that's
+ [2673.420 --> 2675.140] a classic projection.
+ [2675.140 --> 2677.260] And we're going to repeat this same circuit.
+ [2677.260 --> 2679.180] We're going to have the location information
+ [2679.180 --> 2680.980] projecting to layer 5b,
+ [2680.980 --> 2682.820] and that's now going to represent a feature at a location.
+ [2682.820 --> 2684.940] And this is another sequence memory.
+ [2684.940 --> 2688.100] Now we really have the feature at a location.
+ [2688.100 --> 2690.220] Our earlier experiments didn't do it this way, right?
+ [2690.220 --> 2691.420] And they had some problems.
+ [2691.420 --> 2693.780] But now, because I've added the orientation stage up above,
+ [2693.780 --> 2696.740] I really am locating the feature at a location.
+ [2696.740 --> 2698.380] This feature-at-location is a very
+ [2698.380 --> 2699.580] simple representation.
+ [2699.580 --> 2702.940] It's independent of the orientation of my sensor.
+ [2702.940 --> 2706.060] And then I pool over that in the upper layer here,
+ [2706.060 --> 2708.700] which I'm labeling layer 5a; it's really the layer
+ [2708.700 --> 2711.140] 5 thick-tufted cells in some species.
+ [2711.140 --> 2712.980] It's above in some species, below in others.
+ [2712.980 --> 2715.500] But let's pretend it's the one above here.
+ [2715.500 --> 2717.700] That pooling layer would then be stable over the object.
+ [2717.700 --> 2721.020] It would represent the object.
+ [2721.020 --> 2725.740] So we have this two-stage sensorimotor inference engine.
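+ A minimal sketch (Python) of the pooling step used at both stages: pick one stable sparse pattern and associate every changing lower-layer pattern with it, so that later any of those inputs can re-invoke the stable representation. The layer sizes are assumptions:
+
+     import random
+
+     def learn_pool(lower_patterns, n_cells=4096, n_active=40, seed=0):
+         """Return a stable sparse output pattern, plus an associative map
+         from each lower-layer pattern (e.g. the feature-at-location SDRs
+         seen while moving over an object) back to that stable pattern."""
+         rng = random.Random(seed)
+         stable = frozenset(rng.sample(range(n_cells), n_active))
+         assoc = {frozenset(p): stable for p in lower_patterns}
+         return stable, assoc
+
+     cup_code, cup_assoc = learn_pool([{1, 7, 42}, {3, 9, 51}])   # toy SDRs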
+ [2725.740 --> 2729.620] Now, think back: earlier I talked about how you could
+ [2729.620 --> 2732.380] share information between columns.
+ [2732.380 --> 2735.260] The only two things that are worth sharing here are the object
+ [2735.260 --> 2736.580] layer and the feature layer.
+ [2736.580 --> 2738.820] Those are the two things that neighboring columns might
+ [2738.820 --> 2739.660] be representing in common.
+ [2739.660 --> 2742.020] Everything else in here should not be projecting to other
+ [2742.020 --> 2744.140] columns, because it's unique to this column.
+ [2744.140 --> 2746.460] And sure enough, the two primary output layers of a
+ [2746.460 --> 2749.740] cortical column are always identified as layer 3 and the layer
+ [2749.740 --> 2752.140] 5 thick-tufted cells.
+ [2752.140 --> 2754.980] And those basically represent the feature that you're
+ [2754.980 --> 2757.740] sensing, independent of the object, and the object that
+ [2757.740 --> 2759.380] you're sensing.
+ [2759.380 --> 2761.940] Now, those actually can be shared with multiple columns,
+ [2761.940 --> 2765.460] and those become the feed-forward input to the next region.
+ [2765.460 --> 2768.140] It's worth noting, and this
+ [2768.140 --> 2769.500] will be the second point:
+ [2769.500 --> 2772.380] a column, therefore, is a two-stage sensorimotor model
+ [2772.380 --> 2774.620] for learning and inferring structure.
+ [2774.620 --> 2777.460] This is just deduced from the properties of touching.
+ [2777.460 --> 2779.340] And it's important.
+ [2779.340 --> 2782.780] Remember, a column usually cannot infer either the feature
+ [2782.780 --> 2784.980] or the object with a single sensation.
+ [2784.980 --> 2787.100] It's just not going to be possible.
+ [2787.100 --> 2788.420] You have two choices.
+ [2788.420 --> 2791.300] You can take the single column and integrate over time
+ [2791.300 --> 2794.060] by sensing, moving, sensing, moving; or your
+ [2794.060 --> 2796.860] eye could look at things through a straw: sense,
+ [2796.860 --> 2801.020] move, sense, move; or you can vote with neighboring columns.
+ [2801.020 --> 2803.980] And both of these strategies are employed in the brain.
+ [2803.980 --> 2806.900] The column, to be trained, has to move over the object,
+ [2806.900 --> 2811.580] but to infer, a column can rely on its neighbors.
+ [2811.580 --> 2814.660] As I said earlier, this system is most obvious for touch,
+ [2814.660 --> 2816.500] because it's easy to think about these columns
+ [2816.500 --> 2818.540] as being separate sensory patches that are moving
+ [2818.540 --> 2820.300] independently of each other.
+ [2820.300 --> 2823.020] But it also applies to vision fairly straightforwardly,
+ [2823.020 --> 2825.220] and we'd suggest that other sensory
+ [2825.220 --> 2826.540] modalities work in the same way.
+ [2826.540 --> 2828.860] We spent some time earlier this week trying to map these
+ [2828.860 --> 2830.980] ideas onto whisking in mice,
+ [2830.980 --> 2833.980] and I think that can be done.
+ [2833.980 --> 2836.660] And of course, as I said at the beginning of this talk,
+ [2836.660 --> 2840.620] because this architecture, if there is any truth to this,
+ [2840.620 --> 2844.420] is everywhere in the cortex,
+ [2844.420 --> 2847.220] it suggests that we infer and learn and manipulate
+ [2847.220 --> 2849.740] abstract concepts in the same way, the same way
+ [2849.740 --> 2851.300] that we manipulate objects in the world.
+ [2851.300 --> 2855.220] So the theory is that evolution discovered
+ [2855.220 --> 2858.700] a way of navigating and mapping out an environment.
+ [2858.700 --> 2861.300] It had to do this a long time ago, because all animals move,
+ [2861.300 --> 2863.460] and they have to figure out where they are and how to get home.
+ [2863.460 --> 2865.540] And then there's another theory that's
+ [2865.540 --> 2868.820] been published, that the entorhinal cortex is
+ [2868.820 --> 2870.940] sort of this three-layer structure in two parts.
+ [2870.940 --> 2873.340] And I forget the scientists who proposed this initially,
+ [2873.340 --> 2875.540] but they proposed that the neocortex
+ [2875.540 --> 2878.020] was formed by folding those two halves on top of one another
+ [2878.020 --> 2879.940] into a six-layer structure.
+ [2879.940 --> 2881.380] So we think what basically happened
+ [2881.380 --> 2883.740] is that evolution preserved much of what's
+ [2883.740 --> 2885.140] going on in the entorhinal cortex.
+ [2885.140 --> 2885.820] Not exactly;
+ [2885.820 --> 2888.100] there are differences.
+ [2888.100 --> 2889.180] But it preserved that,
+ [2889.180 --> 2892.940] and now it's learning how to model objects in the world.
+ [2892.940 --> 2894.740] And in the human brain, what happened
+ [2894.740 --> 2897.620] is we continued that, using that same mechanism
+ [2897.620 --> 2899.540] to model abstract concepts.
+ [2899.540 --> 2902.660] And so it's suggestive, just suggestive, that
+ [2902.660 --> 2904.940] when we think about things, whether it's mathematics
+ [2904.940 --> 2907.380] or physics, or brains, or neuroscience, or politics,
+ [2907.380 --> 2910.300] or whatever, we're going to be using a similar type of mechanism.
+ [2910.300 --> 2912.740] And what's interesting about this space,
+ [2912.740 --> 2915.180] this idea of location and orientation:
+ [2915.180 --> 2916.940] they're dimensionless.
+ [2916.940 --> 2919.660] They're defined by behavior.
+ [2919.660 --> 2921.660] And they're not metric.
+ [2921.660 --> 2922.860] It's not like x, y, and z.
+ [2922.860 --> 2924.420] It's sort of this very unusual way
+ [2924.420 --> 2925.980] of representing these things.
+ [2925.980 --> 2928.580] And if behaviors weren't physical behaviors,
+ [2928.580 --> 2933.580] but were mental behaviors, like mathematical transforms
+ [2933.580 --> 2937.540] or something like that, you could apply behaviors to abstract spaces.
+ [2937.540 --> 2941.860] And the thinking is that this might be the core of high-level
+ [2941.860 --> 2943.180] thought.
+ [2943.180 --> 2943.940] OK.
+ [2943.940 --> 2946.100] I want to add one more thing here.
+ [2946.100 --> 2948.820] This suggests we might want to rethink some ideas
+ [2948.820 --> 2952.820] about hierarchy that we've all had for a long, long time.
+ [2953.300 --> 2954.620] This is a cartoon drawing,
+ [2954.620 --> 2956.900] but it captures some of the basic essence.
+ [2956.900 --> 2960.860] We think of sensory input arriving at a primary sensory region,
+ [2960.860 --> 2961.660] like visual region one.
+ [2961.660 --> 2963.500] Here, we extract some simple features,
+ [2963.500 --> 2965.300] and then we converge onto the next region,
+ [2965.300 --> 2966.980] where we extract some complex features.
+ [2966.980 --> 2968.780] And then, somewhere up the hierarchy,
+ [2968.780 --> 2972.580] we actually start representing objects in their entirety.
+ [2972.580 --> 2975.180] The proposal I have today is quite different.
+ [2975.180 --> 2978.700] It says that every region has columns,
+ [2978.700 --> 2982.020] and every column is actually learning complete models of the world.
+ [2982.020 --> 2984.220] I mean, I'm not joking.
+ [2984.220 --> 2987.020] A single column can learn thousands of things.
+ [2987.020 --> 2990.100] And I've only talked about what some of the six layers do.
+ [2990.100 --> 2991.340] There's a lot more to be done.
+ [2991.340 --> 2993.060] But the idea is that these things are actually
+ [2993.060 --> 2994.700] very powerful modeling machines,
+ [2994.700 --> 3000.420] and you have a huge array of these models.
+ [3000.420 --> 3003.180] And they're all modeling the same stuff in the world.
+ [3003.180 --> 3004.900] Now, a couple of things here
+ [3004.900 --> 3006.100] I want to make really clear.
+ [3006.100 --> 3008.220] I'm not saying that the classic view was wrong.
+ [3008.220 --> 3010.100] I'm adding some new thoughts to it
+ [3010.100 --> 3013.980] that we hadn't really thought about before.
+ [3013.980 --> 3017.940] One is: what's the difference between all these columns?
+ [3017.940 --> 3020.340] Well, here's an odd thing about the cortex.
+ [3020.340 --> 3023.260] When we talk about how regions project to each other,
+ [3023.260 --> 3024.300] they never do it in a strict chain.
+ [3024.300 --> 3027.660] They always project to at least three regions above.
+ [3027.660 --> 3030.580] It's like: the LGN projects to V1,
+ [3030.580 --> 3032.580] but it also projects to V2 and V4.
+ [3032.580 --> 3033.900] And people say, yeah, but those connections
+ [3033.900 --> 3035.100] aren't really strong.
+ [3035.100 --> 3036.740] Well, they might be diverging.
+ [3036.740 --> 3039.380] The point is, nothing here requires
+ [3039.380 --> 3040.820] a strict hierarchy.
+ [3040.820 --> 3043.980] And so a secondary region could be looking
+ [3043.980 --> 3046.980] at the same sensory array, but over a wider area.
+ [3046.980 --> 3048.820] Now, why would it be doing that?
+ [3048.820 --> 3051.860] Imagine I'm going to recognize a letter E.
+ [3051.860 --> 3054.380] And I can do this.
+ [3054.380 --> 3056.540] I'm going to argue that I can do that in V1.
1288
+ [3056.540 --> 3060.300] Every column in my cell can recognize a letter E.
1289
+ [3060.300 --> 3062.980] And if that E was really, really small, right
1290
+ [3062.980 --> 3065.300] at the edge of my acuity, it's only going
1291
+ [3065.300 --> 3067.460] to be recognizable in V1.
1292
+ [3067.460 --> 3070.900] Because in the other regions, it just doesn't exist.
1293
+ [3070.900 --> 3072.180] It's too fuzzy.
1294
+ [3072.180 --> 3073.900] But if it gets a little bit bigger,
1295
+ [3073.900 --> 3076.740] then it might be recognized by the columns in both V1 and V2.
1296
+ [3076.740 --> 3079.500] But if it gets really big, then V1 can't do that anymore.
1297
+ [3079.500 --> 3080.580] It's just too big an area.
1298
+ [3080.580 --> 3082.180] I can't move over that.
1299
+ [3082.180 --> 3083.900] And so you could be representing things
1300
+ [3083.900 --> 3087.500] at different scales here, but as complete objects,
1301
+ [3087.500 --> 3089.860] and they're sort of overlapping.
1302
+ [3089.860 --> 3093.300] Now, what if I had two sensory arrays going at the same time?
1303
+ [3093.300 --> 3095.420] So I have now a vision and a touch array.
1304
+ [3095.420 --> 3098.700] And we are going to basically grasp the cup
1305
+ [3098.700 --> 3101.060] and see the cup at the same time.
1306
+ [3101.060 --> 3103.860] Well, you would be invoking models of the cup
1307
+ [3103.860 --> 3105.180] in many cortical columns.
1308
+ [3105.180 --> 3106.580] Because there would be columns receiving input from the retina
1309
+ [3106.580 --> 3108.020] that are sensing the cup, and columns
1310
+ [3108.020 --> 3110.620] in the somatosensory regions that are sensing the cup.
1311
+ [3110.620 --> 3112.260] And so multiple columns are trying
1312
+ [3112.260 --> 3114.020] to infer that this is a cup.
1313
+ [3114.020 --> 3115.460] They all have models of the cup.
1314
+ [3115.460 --> 3116.420] Some are derived visually.
1315
+ [3116.420 --> 3117.740] Some are derived tactilely.
1316
+ [3117.740 --> 3119.100] But they all model.
1317
+ [3119.100 --> 3122.820] Now, interestingly, if they all have models of the cup,
1318
+ [3122.820 --> 3124.700] and they're all sensing similar features,
1319
+ [3124.700 --> 3127.660] it's possible that they can vote in various ways here.
1320
+ [3127.660 --> 3129.860] And one of the things we see in the cortex,
1321
+ [3129.860 --> 3131.260] there's a lot of projections which
1322
+ [3131.260 --> 3133.500] don't make sense in a hierarchical fashion.
1323
+ [3133.500 --> 3136.540] You see projections from S2 going to V2.
1324
+ [3136.540 --> 3138.660] Well, that doesn't make sense in a hierarchical fashion.
1325
+ [3138.660 --> 3141.300] And so here, they can be voting on the cup.
1326
+ [3141.300 --> 3143.860] They can be voting on objects, they can be voting on features.
1327
+ [3143.860 --> 3145.220] They can go up and down the hierarchy.
1328
+ [3145.220 --> 3147.140] They can go across the corpus callosum.
1329
+ [3147.140 --> 3151.100] And it's interesting that,
1330
+ [3151.100 --> 3152.420] as long as you go to the right layers,
1331
+ [3152.420 --> 3154.140] you can form very sparse connections
1332
+ [3154.140 --> 3155.180] to different parts of the brain.
1333
+ [3155.180 --> 3156.540] And it works.
1334
+ [3156.540 --> 3158.500] You don't have to have a lot of connections at each column.
1335
+ [3158.500 --> 3161.140] You can just send one connection over here.
1336
+ [3161.140 --> 3162.500] It's kind of odd the way it works.
1337
+ [3162.500 --> 3165.300] But anyway, you can have all these connections
1338
+ [3165.300 --> 3166.020] that help vote.
1339
+ [3166.020 --> 3168.780] So the tactile system will be helping
1340
+ [3168.780 --> 3170.180] the vision system, and the vision system
1341
+ [3170.180 --> 3172.740] will be helping the somatosensory system.
1342
+ [3172.740 --> 3175.020] So these little non-hierarchical connections
1343
+ [3175.020 --> 3177.260] allow columns to vote on shared elements,
1344
+ [3177.260 --> 3178.820] such as objects and features.
1345
+ [3178.820 --> 3181.380] And that's kind of the thing we see up here.
1346
+ [3181.380 --> 3181.940] OK.
1347
+ [3181.940 --> 3183.940] So I'm almost done.
1348
+ [3183.940 --> 3186.580] The summary of the talk is we start with our goal,
1349
+ [3186.580 --> 3189.020] which is to understand the function and operation of the laminar circuits
1350
+ [3189.020 --> 3190.020] in the neocortex.
1351
+ [3190.020 --> 3191.860] Our methodology is to study how
1352
+ [3191.860 --> 3194.340] cortical columns make predictions of their inputs.
1353
+ [3194.340 --> 3197.820] We then proposed a pyramidal neuron model, which
1354
+ [3197.820 --> 3199.420] is basically a prediction machine.
1355
+ [3199.420 --> 3201.500] We say every pyramidal neuron is basically
1356
+ [3201.500 --> 3203.860] using 90% of its synapses for prediction.
1357
+ [3203.860 --> 3206.580] Each neuron predicts its activity in hundreds of contexts,
1358
+ [3206.580 --> 3210.380] and that prediction is manifest as a depolarization.
1359
+ [3210.380 --> 3213.060] We then showed that a single layer of neurons
1360
+ [3213.060 --> 3215.180] forms a predictive memory of high-order sequences.
1361
+ [3215.180 --> 3217.780] This has been well documented.
1362
+ [3217.780 --> 3219.220] As long as you have sparse activations,
1363
+ [3219.220 --> 3221.540] many columns, fast inhibition, and lateral connections
1364
+ [3221.540 --> 3223.580] that can be learned.
1365
+ [3223.580 --> 3225.540] We then defined a two-layer network,
1366
+ [3225.540 --> 3227.940] which forms a predictive memory of sensorimotor sequences,
1367
+ [3227.940 --> 3232.260] if I have some motor-derived context and a pooling layer.
1368
+ [3232.260 --> 3237.060] And of course we proposed next that the motor-derived context
1369
+ [3237.060 --> 3240.220] is an allocentric location, an object-centric location.
1370
+ [3240.740 --> 3244.740] Then we went further beyond that to say,
1371
+ [3244.740 --> 3246.540] hey, the cortical columns compute the
1372
+ [3246.540 --> 3249.300] equivalents of the location and orientation
1373
+ [3249.300 --> 3251.140] of the sensor relative to the object,
1374
+ [3251.140 --> 3254.220] and those are analogous to grid and head direction cells.
1375
+ [3254.220 --> 3257.580] And this begins to define a framework for a cortical column.
1376
+ [3257.580 --> 3261.340] It's certainly only a potential framework,
1377
+ [3261.340 --> 3264.060] but it ties a bunch of things together in a way that kind of makes sense.
1378
+ [3264.060 --> 3267.020] Columns learn models of objects as features at locations,
1379
+ [3267.020 --> 3270.660] using a two-stage sensorimotor inference model.
1380
+ [3270.660 --> 3274.500] And I went through the details, and the details there matter a lot,
1381
+ [3274.500 --> 3277.180] but that's the basic idea.
1382
+ [3277.180 --> 3280.380] And then, in sum total, the neocortex contains
1383
+ [3280.380 --> 3284.100] thousands of parallel models that are all modeling the world,
1384
+ [3284.100 --> 3286.860] with surprisingly high capacity, and that
1385
+ [3286.860 --> 3288.740] resolve uncertainty by associative linking
1386
+ [3288.740 --> 3291.580] and/or movements of the sensors.
1387
+ [3291.580 --> 3293.540] There's a couple things that I should point out
1388
+ [3293.540 --> 3296.580] that we didn't do, very big ones.
1389
+ [3296.580 --> 3297.660] Objects have behaviors.
1390
+ [3297.660 --> 3299.620] Now, I should point out that everything I've
1391
+ [3299.620 --> 3302.460] talked about so far is really about the what pathway.
1392
+ [3302.460 --> 3303.860] We haven't been talking about the whole cortex.
1393
+ [3303.860 --> 3305.940] We've been talking about how the what pathway
1394
+ [3305.940 --> 3308.820] models structure and so on.
1395
+ [3308.820 --> 3310.940] And if I'm talking about behaviors in the what pathway,
1396
+ [3310.940 --> 3315.060] not the where pathway, I'm talking about behaviors
1397
+ [3315.060 --> 3316.500] of the objects themselves.
1398
+ [3316.500 --> 3321.140] So my laptop has a behavior that the lid can open and shut.
1399
+ [3321.140 --> 3322.100] And I know that.
1400
+ [3322.100 --> 3323.580] Also, if I touch keys, they move.
1401
+ [3323.580 --> 3324.420] I know that.
1402
+ [3324.420 --> 3325.660] This thing has behaviors too.
1403
+ [3325.660 --> 3328.220] I press this button, something happens.
1404
+ [3328.220 --> 3330.340] Objects have their own set of behaviors.
1405
+ [3330.340 --> 3334.220] We have to add that into this model because it's not just
1406
+ [3334.220 --> 3337.140] the shape of an object, it can change.
1407
+ [3337.140 --> 3339.420] And the way that I think we're going to model behaviors,
1408
+ [3339.420 --> 3341.140] if you think about the model of objects
1409
+ [3341.140 --> 3343.540] as features at locations, those features
1410
+ [3343.540 --> 3345.020] can move in the object space.
1411
+ [3345.020 --> 3347.340] That would happen if I'm opening a laptop lid.
1412
+ [3347.340 --> 3350.220] Or the features can change at the particular location.
1413
+ [3350.220 --> 3353.740] So if I bring out my cell phone and it's on and I touch something
1414
+ [3353.740 --> 3355.900] on the screen, new features appear
1415
+ [3355.900 --> 3357.700] at the same locations where other features had been before.
1416
+ [3357.700 --> 3360.420] So the whole modeling of the behavior of objects
1417
+ [3360.420 --> 3364.700] is how features move and change at locations.
1418
+ [3364.700 --> 3365.420] We have to do that.
1419
+ [3365.420 --> 3367.020] We haven't done that yet.
1420
+ [3367.020 --> 3368.820] We need a detailed model of the hierarchy,
1421
+ [3368.820 --> 3369.740] including the thalamus.
1421
+ [3369.740 --> 3370.740] I didn't talk about the thalamus.
1422
+ [3370.740 --> 3372.500] We spend a lot of time talking about the thalamus.
1423
+ [3372.500 --> 3375.020] We have hypotheses of what it's doing and why we need it.
1425
+ [3375.020 --> 3376.580] But we have to finish that out.
1426
+ [3376.580 --> 3379.260] And, as I already mentioned, we need to sort of build
1427
+ [3379.260 --> 3381.780] the complementary where pathway.
1428
+ [3381.860 --> 3382.780] This is not modeled yet.
1429
+ [3382.780 --> 3386.380] We haven't described anything about how we generate behaviors
1430
+ [3386.380 --> 3388.980] and why I might move and how I would reach something.
1431
+ [3388.980 --> 3390.380] I haven't talked about that at all.
1432
+ [3390.380 --> 3393.300] I've just talked about how a what-pathway column
1433
+ [3393.300 --> 3396.660] would learn the structure of objects through movement.
1434
+ [3396.660 --> 3398.700] I want to put in a plug here.
1435
+ [3398.700 --> 3401.500] Collaborations: there are many testable predictions
1436
+ [3401.500 --> 3403.500] in this model; it is, in some sense, a greenfield.
1437
+ [3403.500 --> 3406.780] Because we're proposing that cortical columns,
1438
+ [3406.780 --> 3408.540] even primary ones, are doing a hell of a lot more
1439
+ [3408.540 --> 3410.540] than most people think.
1440
+ [3410.540 --> 3412.940] And so we spent a lot of time this week talking
1441
+ [3412.940 --> 3415.260] to various labs about how we could do that.
1442
+ [3415.260 --> 3416.380] And we welcome that.
1443
+ [3416.380 --> 3418.500] We'll have discussions and we can talk on the phone
1444
+ [3418.500 --> 3420.020] or here today and so on.
1445
+ [3420.020 --> 3422.060] And we're always interested in hosting visiting scholars
1446
+ [3422.060 --> 3422.860] and interns.
1447
+ [3422.860 --> 3424.700] We have a couple right now.
1448
+ [3424.700 --> 3426.500] And so if you want to come spend some time
1449
+ [3426.500 --> 3428.580] in Northern California, even for a short period of time,
1450
+ [3428.580 --> 3430.820] we have people come just for a couple of days
1451
+ [3430.820 --> 3432.060] who want to get immersed in what we do.
1452
+ [3432.060 --> 3434.700] We like having visitors like that.
1453
+ [3434.700 --> 3436.220] This is the team we have on the left.
1454
+ [3436.220 --> 3438.620] There are 12 people. I want to call out specifically
1455
+ [3438.620 --> 3441.300] Subutai Ahmad, who is with me right here.
1456
+ [3441.300 --> 3442.100] He's been with me.
1457
+ [3442.100 --> 3444.180] We've been partners for 12 years.
1458
+ [3444.180 --> 3445.860] And he's critical to the whole thing.
1459
+ [3445.860 --> 3447.900] And Marcus Lewis is one of our scientists.
1460
+ [3447.900 --> 3451.140] And he really helped understand the interaction
1461
+ [3451.140 --> 3453.100] between layer four and layer six and layer five
1462
+ [3453.100 --> 3455.540] and layer six B. I didn't really talk about his work here,
1463
+ [3455.540 --> 3457.700] but it sort of underlies everything we're doing.
1464
+ [3457.700 --> 3459.980] And he has some other insights into that.
1465
+ [3459.980 --> 3461.980] So I hope I didn't speak too quickly,
1466
+ [3461.980 --> 3462.900] but that's the end of my talk.
1467
+ [3462.900 --> 3463.740] Thank you.
transcript/challenge_0qj-w4nYvdk.txt ADDED
@@ -0,0 +1,34 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 9.260] There was 🥢
2
+ [18.260 --> 22.000] There was a beach
3
+ [82.000 --> 89.000] I'm not a fan of the game, I'm not a fan of the game.
4
+ [89.000 --> 96.000] I'm not a fan of the game, I'm not a fan of the game.
5
+ [96.000 --> 101.000] I'm not a fan of the game, I'm not a fan of the game.
6
+ [101.000 --> 108.000] I'm not a fan of the game, I'm not a fan of the game.
7
+ [108.000 --> 115.000] I'm not a fan of the game, I'm not a fan of the game.
8
+ [115.000 --> 122.000] I'm not a fan of the game, I'm not a fan of the game.
9
+ [122.000 --> 129.000] I'm not a fan of the game, I'm not a fan of the game.
10
+ [129.000 --> 137.000] I'm not a fan of the game, I'm not a fan of the game.
11
+ [137.000 --> 144.000] I'm not a fan of the game, I'm not a fan of the game.
12
+ [144.000 --> 151.000] I'm not a fan of the game, I'm not a fan of the game.
13
+ [151.000 --> 158.000] I'm not a fan of the game, I'm not a fan of the game.
14
+ [158.000 --> 165.000] I'm not a fan of the game, I'm not a fan of the game.
15
+ [165.000 --> 172.000] I'm not a fan of the game, I'm not a fan of the game.
16
+ [172.000 --> 179.000] I'm not a fan of the game, I'm not a fan of the game.
17
+ [179.000 --> 186.000] I'm not a fan of the game, I'm not a fan of the game.
18
+ [186.000 --> 192.000] I'm not a fan of the game, I'm not a fan of the game.
19
+ [192.000 --> 199.000] I'm not a fan of the game, I'm not a fan of the game.
20
+ [199.000 --> 206.000] I'm not a fan of the game, I'm not a fan of the game.
21
+ [206.000 --> 213.000] I'm not a fan of the game, I'm not a fan of the game.
22
+ [213.000 --> 219.000] I'm not a fan of the game, I'm not a fan of the game.
23
+ [219.000 --> 226.000] I'm not a fan of the game, I'm not a fan of the game.
24
+ [226.000 --> 233.000] I'm not a fan of the game, I'm not a fan of the game.
25
+ [233.000 --> 240.000] I'm not a fan of the game, I'm not a fan of the game.
26
+ [240.000 --> 246.000] I'm not a fan of the game, I'm not a fan of the game.
27
+ [246.000 --> 253.000] I'm not a fan of the game, I'm not a fan of the game.
28
+ [253.000 --> 260.000] I'm not a fan of the game, I'm not a fan of the game.
29
+ [260.000 --> 267.000] I'm not a fan of the game, I'm not a fan of the game.
30
+ [267.000 --> 273.000] I'm not a fan of the game, I'm not a fan of the game.
31
+ [273.000 --> 280.000] I'm not a fan of the game, I'm not a fan of the game.
32
+ [280.000 --> 287.000] I'm not a fan of the game, I'm not a fan of the game.
33
+ [287.000 --> 292.000] I'm not a fan of the game, I'm not a fan of the game.
34
+ [292.000 --> 298.000] I'm not a fan of the game, I'm not a fan of the game.
transcript/challenge_2RWZ-lPgMoM.txt ADDED
@@ -0,0 +1,2 @@
 
 
 
1
+ [0.480 --> 3.880] Thud lawyer
2
+ [4.480 --> 8.680] generations
transcript/challenge_3_dAkDsBQyk.txt ADDED
@@ -0,0 +1,31 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 8.000] First, there was PlayStation, aka PS1, then there's PS2, PS3, and now PS4.
2
+ [8.000 --> 13.000] And that makes sense. You'd think after Xbox, there'd be Xbox 2. But no.
3
+ [13.000 --> 20.000] Next came Xbox 360. And now, after 360, comes Xbox One.
4
+ [20.000 --> 26.000] Why one? Maybe that's how many seconds of thought they put into naming it.
5
+ [26.000 --> 28.000] Can you get the butter, please?
6
+ [28.000 --> 34.000] You know, however, with the Xbox One, I can control my entire entertainment system using voice commands.
7
+ [34.000 --> 38.000] Up until now, I've had to use Leonard.
8
+ [38.000 --> 40.000] Then you get the other one. Pass the butter.
9
+ [40.000 --> 46.000] Hey, hang on. I don't feel like you're taking this dilemma seriously.
10
+ [46.000 --> 50.000] Fine, Sheldon. You have my undivided attention.
11
+ [50.000 --> 54.000] Okay, now, the PS4 is more angular and sleek looking.
12
+ [54.000 --> 56.000] No way.
13
+ [56.000 --> 61.000] It's true. But the larger size of the Xbox One may keep it from overheating.
14
+ [61.000 --> 63.000] You wouldn't want your gaming system to overheat?
15
+ [63.000 --> 69.000] No, you absolutely would not. And furthermore, the Xbox One now comes with a Kinect included.
16
+ [69.000 --> 70.000] Included?
17
+ [70.000 --> 72.000] Yes.
18
+ [72.000 --> 81.000] Not sold separately. Although the PS4 uses cool new GDDR5 RAM, the Xbox One is still using the conventional DDR3 memory.
19
+ [82.000 --> 85.000] Why would they still be using DDR3? Are they nuts?
20
+ [85.000 --> 87.000] You're...
21
+ [87.000 --> 91.000] See, that's what I thought. But then, they go and throw in an eSRAM buffer.
22
+ [91.000 --> 93.000] Oh, you say... Who's that?
23
+ [93.000 --> 95.000] The Xbox.
24
+ [95.000 --> 96.000] You're kidding!
25
+ [96.000 --> 97.000] No, I am not.
26
+ [97.000 --> 102.000] This eSRAM buffer should totally bridge the 100 gigabit-per-second bandwidth gap between the two RAM types.
27
+ [102.000 --> 105.000] This is a nightmare. How will you ever make a decision?
28
+ [105.000 --> 107.000] You see? I don't know.
29
+ [107.000 --> 108.000] What should I do?
30
+ [108.000 --> 110.000] Please pass the buyer!
31
+ [110.000 --> 112.000] I don't know.
transcript/challenge_8yGhNwDMT-g.txt ADDED
@@ -0,0 +1,42 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 3.360] Team Exercise 22. Drawing.
2
+ [4.400 --> 8.720] In this exercise, the team will form a circle at equal distances from each other.
3
+ [9.600 --> 13.520] Tell the team to form pairs with someone they don't know so well yet.
4
+ [14.400 --> 16.160] Each pair takes two chairs.
5
+ [17.040 --> 19.200] They put the backs against each other and sit down.
6
+ [20.240 --> 23.280] Each pair receives a piece of paper and a pen.
7
+ [24.640 --> 28.000] The duos decide who is person A and who is person B.
8
+ [28.960 --> 33.040] Person B will be the first to make an abstract drawing that shows simple shapes.
9
+ [34.000 --> 39.200] Person A will try to recreate this drawing based on verbal instructions from person B.
10
+ [40.320 --> 43.280] Person B starts with describing his drawing.
11
+ [44.400 --> 48.000] First draw a medium-sized triangle in the middle of the paper.
12
+ [49.120 --> 54.240] Next, draw a circle on the bottom left of the triangle, slightly touching it.
13
+ [55.040 --> 59.600] Lastly, draw a rectangle that intersects with the top of the triangle.
14
+ [60.640 --> 63.200] The participants turn around and compare their drawings.
15
+ [64.000 --> 69.280] After having exchanged feedback about differences in the drawings and the way they communicated,
16
+ [69.280 --> 70.400] they switch roles.
17
+ [71.280 --> 75.120] Person B will now copy the drawing of person A,
18
+ [75.120 --> 79.360] without looking at the piece of paper, still only using spoken instructions.
19
+ [80.160 --> 86.640] The drawing may now also show specific objects or things, for example, a light bulb.
20
+ [88.080 --> 93.600] To not give the object away, the person describing the drawing may only use figurative instructions.
21
+ [93.600 --> 96.480] He can describe the image by all kinds of figures.
22
+ [97.040 --> 99.200] But of course he can say it's a light bulb.
23
+ [99.920 --> 101.440] But for example he can say,
24
+ [102.720 --> 105.200] draw a circle in the center of your paper.
25
+ [105.920 --> 109.680] Under the circle, draw a cylinder that looks like a screw.
26
+ [110.560 --> 113.200] Lastly, draw short stripes around it.
27
+ [114.400 --> 116.720] Now the two drawings are compared again.
28
+ [117.600 --> 120.560] And person B will find out what the object really was.
29
+ [121.440 --> 125.200] It will become clear what went well and where the communication could have been better.
30
+ [126.400 --> 130.880] After the couples have switched roles two times, new pairs will be formed and they will repeat the
31
+ [130.880 --> 132.400] exercise in the same manner.
32
+ [133.200 --> 138.720] The pairs may now choose if they prefer to draw abstract shapes or specific objects.
33
+ [139.760 --> 144.000] After having done the exercise a couple of more times, the team will form a circle again
34
+ [144.000 --> 146.880] and evaluate what they've experienced during the exercise.
35
+ [147.520 --> 150.480] What style of communication worked most efficiently?
36
+ [150.480 --> 152.000] And what style didn't work at all?
37
+ [152.800 --> 156.960] Is an abstract drawing more difficult to draw compared to a specific drawing?
38
+ [157.840 --> 161.360] Ask each participant what they think and let them share their experiences.
39
+ [162.160 --> 165.600] Your trainer guides the team and brings variations to the exercise.
40
+ [165.600 --> 169.040] You can read about these variations below this video on YouTube.
41
+ [169.040 --> 174.160] And please subscribe to our channel to see a new team exercise each Sunday on youtube.com
42
+ [174.160 --> 179.200] slash team exercises to improve cooperation and communication.