Tuesday, April 18, 2017

The Big Ideas in Cognitive Neuroscience, Explained



Are emergent properties really for losers? Why are architectures important? What are “mirror neuron ensembles” anyway? My last post presented an idiosyncratic distillation of the Big Ideas in Cognitive Neuroscience symposium, which featured six speakers at the 2017 CNS meeting. Here I'll briefly explain what I meant in the bullet points. In some cases I didn't quite understand what the speaker meant, so I consulted outside sources. At the end is a bonus reading list.

The first two speakers made an especially fun pair on the topic of memory: they held opposing views on the “engram”, the physical manifestation of a memory in the brain.1 They also disagreed on most everything else.


1. Charles Randy Gallistel (Rutgers University) What Memory Must Look Like

Gallistel is convinced that Most Neuroscientists Are Wrong About the Brain. This subtly bizarre essay in Nautilus (which was widely scorned on Twitter) succinctly summarized the major points of his talk. You and I may think the brain-as-computer metaphor has outlived its usefulness, but Gallistel says that “Computation in the brain must resemble computation in a computer.” 

  • The brain is an information processing device in the sense of Shannon information theory.
Shannon information involves a set of possible messages, encoded as bit patterns and sent over a noisy channel to a recipient that will hopefully decode the message with minimal error. In this purely mathematical theory, the semantic content (meaning) of a message is irrelevant. The brain stores numbers and that's that.
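
To make the meaning-is-irrelevant point concrete (my illustration, not Gallistel's), the Shannon entropy of a source is just the average number of bits needed per message, whatever the messages are about. A minimal sketch in Python:

    import math

    def shannon_entropy(probabilities):
        """Average bits per message for a source with these message probabilities."""
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    # Four equally likely messages cost 2 bits each, whether they encode
    # words, numbers, or odors: meaning never enters the calculation.
    print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
    print(shannon_entropy([0.9, 0.05, 0.03, 0.02]))   # ~0.62 bits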

  • Memories (“engrams”) are not stored at synapses.
Instead, engrams reside in molecules inside cells. The brain “encodes information into molecules inside neurons and reads out that information for use in computational operations.” A 2014 paper on conditioned responses in cerebellar Purkinje cells was instrumental in overturning synaptic plasticity (strengthening or weakening of synaptic connections) as the central mechanism for learning and memory, according to Gallistel.2 Most other scientists do not share this view.3

  • The engram is the interspike interval.
Spike train solutions based on rate coding are wrong. That is, the code is not carried by the firing rate of neurons. Instead, numbers are conveyed to engrams via a combinatorial interspike interval code, and the engrams themselves reside in cell-intrinsic molecular structures. In the end, memory must look like the DNA code.
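
Gallistel didn't spell out a concrete scheme, but here is a toy Python sketch (entirely my own construction, not his actual proposal) of the flavor of an interval code, where the pattern of interspike intervals, not the firing rate, carries a number:

    def encode_isi(number, short=0.005, long=0.010):
        """Write the bits of `number` as short (0) or long (1) intervals, in seconds."""
        return [long if b == '1' else short for b in format(number, '08b')]

    def decode_isi(intervals, threshold=0.0075):
        """Read the number back from the interval pattern."""
        return int(''.join('1' if isi > threshold else '0' for isi in intervals), 2)

    print(decode_isi(encode_isi(42)))  # 42
    # 42 (00101010) and 21 (00010101) contain the same number of long intervals,
    # so their mean firing rates are identical; only the pattern tells them apart.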

  • Emergent properties are for losers.
“Emergent property” is a code word for “we don't know.”



2. Tomás Ryan (@TJRyan_77) Information Storage in Memory Engrams

Ryan began by acknowledging that he had tremendous respect for Gallistel's talk, which was by turns powerful, illuminating, very categorical, polarizing, and rigid. But wrong. Oh so very wrong. Memory is not essentially molecular, we should not approach memory and the brain from a design perspective, and information storage need not mimic a computer.

  • The brain does not use Shannon information.
More precisely, “the kind of information the brain uses may be very different from Shannon information.” Why is that? Brains evolved in kludgy ways that don't resemble a computer. The information the brain uses may be encoded without having to be reduced to Shannon form, and may not be quantifiable in units.

  • Memories (“engrams”) are not stored at synapses.
Memory is not stored by changes in synaptic weights; on this, Ryan and Gallistel agree. The dominant view has been falsified by a number of studies, including one by Ryan and colleagues that used engram labeling. Specific “engram cells” can be labeled during learning using optogenetic techniques, and later stimulated to induce the recall of specific memories. These memories can be reactivated even after protein synthesis inhibitors have (1) induced amnesia, and (2) prevented the usual memory consolidation-related changes in synaptic strength.

  • We learn entirely through spike trains.
Everything we learn enters the brain via spike trains, yet spike trains are necessary but not sufficient to explain how information is coded in the brain. Instincts, by contrast, are transmitted genetically and are not learned via spike trains.

  • The engram is an emergent property.
And fitting with all of the above, “the engram is an emergent property mediated through synaptic connections” (not through synaptic weights). Stable connectivity is what stores information, not molecules.


3. Angela Friederici (Max Planck Institute for Human Cognitive and Brain Sciences) Structure and Dynamics of the Language Network

Following on the heels of the rodent engram crowd, Friederici pointed out the obvious limitation of studying language: it's a uniquely human trait, so there are no animal models.

  • Language is genetically predetermined.
The human ability to acquire language is based on a genetically predetermined structural neural network. Although the degree of innateness has been disputed, a bias or propensity of brain development towards particular modes of information processing is less controversial. According to Friederici, language capacity is rooted in “merge”, a specific computation that binds words together to form phrases and sentences.
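
For readers unfamiliar with the term: “merge” takes two syntactic objects and binds them into a new object that can itself be merged again, which is what makes the system hierarchical. A toy sketch in Python (my illustration of the bare operation, not Friederici's formalism):

    def merge(a, b):
        """Bind two syntactic objects into a new constituent."""
        return (a, b)

    # Repeated application yields hierarchical (nested) structure:
    dp = merge("the", "apple")   # a determiner phrase
    vp = merge("ate", dp)        # a verb phrase containing the phrase above
    print(vp)                    # ('ate', ('the', 'apple'))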

  • The “merge” computation is localized in BA 44.
This wasn't one of my original bullet points, but I found this statement rather surprising and unbelievable. It implies that our capacity for language is located in the anterior ventral portion of Brodmann's area 44 in the left hemisphere (the tiny red area in the PHRASE > LIST panel below).



The problem is that acute stroke patients with dysfunctional tissue in left BA 44 do not have impaired syntax. Instead, they have difficulty with phonological short-term memory (keeping strings of digits in mind, like remembering a phone number).

  • There is something called mirror neural ensembles.
I'll just have to leave this slide here, since I really didn't understand it, even on the second viewing.


“This is a poor hypothesis,” she said.


4. Jean-Rémi King (@jrking0) Parsing Human Minds

King's expertise is in visual processing (not language), but his talk drew parallels between vision and speech comprehension. A key goal in both domains is to identify the algorithm (sequence of operations) that translates input into meaning.

  • Recursion is big.
Despite these commonalities, the structure of language presents the unique challenge of nesting (or recursion): each constituent in a sentence can be made of subconstituents of the same nature, which can result in ambiguity.
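
To make the nesting-causes-ambiguity point concrete, here is a classic attachment ambiguity rendered as nested tuples (my example, not King's):

    # "saw the boy with the telescope": one word string, two nestings
    parse1 = ("saw", ("the", "boy"), ("with", ("the", "telescope")))    # the seeing is done with the telescope
    parse2 = ("saw", ("the", ("boy", ("with", ("the", "telescope")))))  # the boy has the telescope
    print(parse1 == parse2)  # False: same words, different structures, different meanings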


  • Architectures are important.
Decoding aspects of a sensory stimulus using MEG and machine learning is lovely, but it doesn't tell you the algorithm. What is the computational architecture? Is it sustained, feedforward, or recurrent?

Each architecture could be compatible with a pattern of brain activity at different time points. But do the classifiers trained at different time points generalize to other time points? This can be determined by a temporal generalization analysis, which “reveals a repertoire of canonical brain dynamics.”
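
In practice, temporal generalization means training a classifier at each time point and testing it at every other time point, yielding a time × time matrix of decoding accuracy. Here is a minimal sketch using scikit-learn (the generic recipe from King & Dehaene 2014, not King's exact pipeline; in a real analysis you would cross-validate rather than test on the training trials):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def temporal_generalization(X, y):
        """X: trials x sensors x timepoints; y: condition labels.
        Returns a timepoints x timepoints matrix of decoding accuracy."""
        n_times = X.shape[2]
        scores = np.zeros((n_times, n_times))
        for t_train in range(n_times):
            clf = LogisticRegression().fit(X[:, :, t_train], y)
            for t_test in range(n_times):
                scores[t_train, t_test] = clf.score(X[:, :, t_test], y)
        return scores

    # A narrow diagonal suggests a feedforward chain of distinct stages;
    # a filled square suggests a single sustained, stable representation.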


5. Danielle Bassett (@DaniSBassett) A Network Neuroscience of Human Learning: Potential to Inform Quantitative Theories of Brain and Behavior

Bassett previewed an arc of exciting ideas where we've shown progress, followed by frustrations and failures, which may ultimately provide an opening for the really Big Ideas. Her focus is on learning from a network perspective, which means patterns of connectivity in the whole brain. What is the underlying network architecture that facilitates these spatially distributed effects?



What is the relationship between these two notions of modularity?
[I ask this as an honest question.]

Major challenges remain, of course.

  • Build a bridge from networks to models of behavior.
Incorporate well-specified behavioral models, such as reinforcement learning and the drift diffusion model of decision making. These models are fit to the data to derive parameters, such as the learning rate α in reinforcement learning. Models of behavior can help generate hypotheses about how the system actually works.
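
For instance, in a simple reinforcement-learning model the learning rate α sets how strongly each prediction error updates the learned value; fitting α to a subject's choices is what “deriving parameters” means here. A minimal sketch (my illustration):

    def update_value(value, reward, alpha=0.1):
        """Delta-rule (Rescorla-Wagner) update: nudge the estimate toward the outcome."""
        return value + alpha * (reward - value)

    v = 0.0
    for reward in [1, 1, 0, 1]:      # a short sequence of outcomes
        v = update_value(v, reward)  # larger alpha means faster updating
    print(round(v, 3))               # 0.254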

  • Use generative models to construct theories.
Network models are extremely useful, but they're not theories. They're descriptors. They don't generate new frameworks for understanding what the data should look like. Theory-building is obviously critical for moving forward.


6. John Krakauer (@blamlab) Big Ideas in Cognitive Neuroscience: Action

Krakauer mentioned the Big Questions in Neuroscience symposium at the 2016 SFN meeting, which motivated the CNS symposium as well as a splashy critical paper in Neuron. He raised an interesting point about how the term “connectivity” has different meanings, i.e., the type of embedded connectivity that stores information (engrams) vs. the type of correlational connectivity when modules combine with each other to produce behavior. [BTW, is everyone here using “modules” in the same way?]

  • Machine learning will save us.
Krakauer discussed work on motor learning using adaptation paradigms and simple execution tasks. But there's a dirty secret: there is no computational model, no algorithmic theory of how practice makes you better at those tasks. Can the computational view get an upgrade from machine learning? Go out and read the manifesto by Marblestone, Wayne, and Kording: Toward an Integration of Deep Learning and Neuroscience. And you'd better learn about cost functions, because they're very important.4



  • Go back to behavioral neuroscience.
This is the only way to work out the right cost functions. Bottom line: Networks represent weighting modules into the cost function.4


OVERALL, there was an emphasis on computational approaches with nods to the three levels of David Marr:

computation – algorithm – implementation



We know from Krakauer et al. 2017 (and from CNS meetings past and present) that co-organizer David Poeppel is a big fan of Marr. The end goal of a Marr-ian research program is to find explanations, to reach an understanding of brain-behavior relations. This requires a detailed specification of the computational problem (i.e., behavior) to uncover the algorithms. The correlational approach of cognitive neuroscience and even the causal-mechanistic circuit manipulations of optogenetic neuroscience just don't cut it anymore.



Footnotes

1 Although neither speaker explicitly defined the term, it is most definitely not the engram as envisioned by Scientology: “a detailed mental image or memory of a traumatic event from the past that occurred when an individual was partially or fully unconscious.” The term was coined by Richard Semon in 1904.

2 This paper (by Johansson et al., 2014) appeared in PNAS, and Gallistel was the prearranged editor.

3 For instance, here's Mu-ming Poo: “There is now general consensus that persistent modification of the synaptic strength via LTP and LTD of pre-existing connections represents a primary mechanism for the formation of memory engrams.”

4 If you don't understand all this, you're not alone. From Machine Learning: the Basics:
This idea of minimizing some function (in this case, the sum of squared residuals) is a building block of supervised learning algorithms, and in the field of machine learning this function - whatever it may be for the algorithm in question - is referred to as the cost function.
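
Concretely (my toy example, not from that book): for a line fit, the cost is the sum of squared residuals, and “learning” is searching for the parameters that make it small:

    def cost(slope, intercept, xs, ys):
        """Sum of squared residuals for a candidate line."""
        return sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))

    xs, ys = [0, 1, 2, 3], [0.1, 0.9, 2.2, 2.9]
    print(cost(1.0, 0.0, xs, ys))  # a good line: cost 0.07
    print(cost(0.0, 1.5, xs, ys))  # a bad line: cost 4.77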


Reading List

Everyone is Wrong

Here's Why Most Neuroscientists Are Wrong About the Brain. Gallistel in Nautilus, Oct. 2015.

Time to rethink the neural mechanisms of learning and memory. Gallistel CR, Balsam PD. Neurobiol Learn Mem. 2014 Feb;108:136-44.

Engrams are Cool

What is memory? The present state of the engram. Poo MM, Pignatelli M, Ryan TJ, Tonegawa S, Bonhoeffer T, Martin KC, Rudenko A, Tsai LH, Tsien RW, Fishell G, Mullins C, Gonçalves JT, Shtrahman M, Johnston ST, Gage FH, Dan Y, Long J, Buzsáki G, Stevens C. BMC Biol. 2016 May 19;14:40.

Engram cells retain memory under retrograde amnesia. Ryan TJ, Roy DS, Pignatelli M, Arons A, Tonegawa S. Science. 2015 May 29;348(6238):1007-13.

Engrams are Overrated

For good measure, some contrarian thoughts floating around Twitter...

“Can We Localize Merge in the Brain? Yes We Can”

Merge in the Human Brain: A Sub-Region Based Functional Investigation in the Left Pars Opercularis. Zaccarella E, Friederici AD. Front Psychol. 2015 Nov 27;6:1818.

The neurobiological nature of syntactic hierarchies. Zaccarella E, Friederici AD. Neurosci Biobehav Rev. 2016 Jul 29. doi: 10.1016/j.neubiorev.2016.07.038.

Really?

Asyntactic comprehension, working memory, and acute ischemia in Broca's area versus angular gyrus. Newhart M, Trupe LA, Gomez Y, Cloutman L, Molitoris JJ, Davis C, Leigh R, Gottesman RF, Race D, Hillis AE. Cortex. 2012 Nov-Dec;48(10):1288-97.

Patients with acute strokes in left BA 44 (part of Broca's area) do not have impaired syntax.

Dynamics of Mental Representations

Characterizing the dynamics of mental representations: the temporal generalization method. King JR, Dehaene S. Trends Cogn Sci. 2014 Apr;18(4):203-10.

Brain Mechanisms Underlying the Brief Maintenance of Seen and Unseen Sensory Information. King JR, Pescetelli N, Dehaene S. Neuron. 2016;92(5):1122-1134.

A Spate of New Network Articles by Bassett

A Network Neuroscience of Human Learning: Potential to Inform Quantitative Theories of Brain and Behavior. Bassett DS, Mattar MG. Trends Cogn Sci. 2017 Apr;21(4):250-264.

This one is most relevant to Dr. Bassett's talk, as it is the title of her talk.

Network neuroscience. Bassett DS, Sporns O. Nat Neurosci. 2017 Feb 23;20(3):353-364.

Emerging Frontiers of Neuroengineering: A Network Science of Brain Connectivity. Bassett DS, Khambhati AN, Grafton ST. Annu Rev Biomed Eng. 2017 Mar 27. doi: 10.1146/annurev-bioeng-071516-044511.

Modelling and Interpreting Network Dynamics [bioRxiv preprint]. Khambhati AN, Sizemore AE, Betzel RF, Bassett DS. doi: https://doi.org/10.1101/124016

Behavior is Underrated

Neuroscience Needs Behavior: Correcting a Reductionist Bias. Krakauer JW, Ghazanfar AA, Gomez-Marin A, MacIver MA, Poeppel D. Neuron. 2017 Feb 8;93(3):480-490.

The first author was a presenter and the last author an organizer of the symposium.


Thanks to @jakublimanowski for the tip on Goldstein (1999).


Tuesday, April 04, 2017

What are the Big Ideas in Cognitive Neuroscience?


This year, the Cognitive Neuroscience Institute (CNI) and the Max-Planck-Society organized a symposium on Big Ideas in Cognitive Neuroscience. I enjoyed this fun forum organized by David Poeppel and Mike Gazzaniga. The format included three pairs of speakers on the topics of memory, language, and action/motor who “consider[ed] some major challenges and cutting-edge advances, from molecular mechanisms to decoding approaches to network computations.”

Co-host Marcus Raichle recalled his inspiration for the symposium: a similar Big Ideas session at the Society for Neuroscience meeting. But human neuroscience was absent from all the SFN Big Ideas, so Dr. Raichle contacted Dr. Gazzaniga, who “made it happen” (along with Dr. Poeppel). The popular event was standing room only, and many couldn't even get into the Bayview Room (which was too small a venue). More context:
“Recent discussions in the neurosciences have been relentlessly reductionist. The guiding principle of this symposium is that there is no privileged level of analysis that can yield special explanatory insight into the mind/brain on its own, so ideas and techniques across levels will be necessary.”

The two-hour symposium was a welcome addition to hundreds of posters and talks on highly specific empirical findings. Sometimes we must take a step back and look at the big picture. But since I'm The Neurocritic, I'll start out with some modest suggestions for next time.

• There was no time for questions or discussion.
• There were too many talks.
• It would be nice for all speakers to try to bridge different levels of analysis.
• This is a small point, but ironically the first two speakers (Gallistel, Ryan) did not talk about human neuroscience.

So my idea is to have four speakers on one topic (memory, let's say), with two at the level of Gallistel and Ryan1, and two who approach human neuroscience using different techniques. Talks are strictly limited to 20 minutes. Then there is a 20 minute panel discussion where everyone tries to consider the implications of the other levels for their own work. Then (ideally) there is time for 20 minutes of questions from the audience. However, since I'm not an expert in organizing such events, allotting 20 minutes for the audience could be excessive. So the timing could be restructured: 25 min for talks, 10-15 min for the panel, 5-10 min for the audience. Or combine the round table with audience participation.

Last year, Symposium Session 7 on Human Intracranial Electrophysiology (which included the incendiary tDCS challenge by György Buzsáki) had a round table discussion as Talk 5, which I thought was very successful.

Video of the Big Ideas symposium is now available on YouTube, but in case you don't want to watch the entire two hours, I'll present a brief summary below.


Big Box Neuroscience

Here's an idiosyncratic distillation of some major points from the symposium.

• The brain is an information processing device in the sense of Shannon information theory.
• The brain does not use Shannon information.
• Memories (“engrams”) are not stored at synapses.
• We learn entirely through spike trains.
• The engram is the interspike interval.
• The engram is an emergent property.
• Emergent properties are for losers.
• Language is genetically predetermined.
• There is something called mirror neural ensembles.
• Recursion is big.
• Architectures are important.
• Build a bridge from networks to models of behavior.
• Use generative models to construct theories.
• Machine learning will save us.
• Go back to behavioral neuroscience.

Maybe I'll explain what this all means in the next post. You can also check out the official @CogNeuroNews coverage.


ADDENDUM (April 18, 2017): The sequel is finally up: The Big Ideas in Cognitive Neuroscience, Explained


Footnote

1 Controversy is always entertaining, and these two had diametrically opposed views.




