November 26, 2012 / neurograce

Connecting Neuroscience and Education: Not a Direct Route

Education has always been known to “shape minds” and “build brains” and do other neuro-metaphor things. So, unsurprisingly, educators have cultivated an interest in how the brain works, why certain methods work on it while others fail, and what causes individual differences amongst students. Unfortunately, for a long time, neuroscience simply had no hope of answering these questions. As a result, plenty of pseudoscience filled the void between education and neuroscience. But with advances in neuroscience, some organizations feel that the gap is now bridgeable, or at the very least, the first bricks can be laid. Columbia and Harvard both have programs within their schools of education that focus on Educational Neuroscience. And countless other “brain-based teaching” training centers exist around the world, with their focus on informing teachers about neuroscience in order to impact how they run their classrooms.

But does this make sense? Can studying neuroscience in its current state really benefit teachers or their interactions with students? Is it even practical to try? Neuroscience is a field of unwieldy size and complexity. Only a small portion of neuroscience research is even remotely relevant to learning. But deciding which aspects to teach teachers, and then teaching those in isolation, is risky. It can allow studies to be misrepresented or taken out of context, and it doesn’t instill in educators an ability to discern between good and bad science. As a result, we get an epidemic of “neuromyths” amongst teachers, such as the idea of right- versus left-brainedness. And novel findings, such as the discovery of mirror neurons, get extrapolated into teaching techniques without scientific support. The veneer of neuroscience knowledge can give teaching programs the appearance of authority, such as when Dr. Mariale Hardiman of Brain-Targeted Teaching incorrectly describes different types of memory and explains their relation to current teaching techniques:

“Unfortunately, too often what is presented in our classrooms is designed for students’ working memories: students learn information so they can retrieve it on a test or quiz, then quickly forget much of it as they move on to the next topic.

During tasks that involve only working memory, the brain uses proteins that currently exist in brain synapses (Ratey, 2001). When information moves, however, from working to long-term memory systems, new proteins are created. Effective teaching can result in biochemical changes in the brain!”

Working memory! Biochemical changes! A citation! It must be real.

Even amongst those who do fully understand the neuroscience, connecting the output of these studies to teaching isn’t straightforward. Most lab studies investigating the neural mechanisms of learning and memory use animal models whose applicability to humans is far from established. Even imaging studies done on humans during learning tasks are simplified and take place in a lab in isolation, quite the opposite of the classroom environment. Education is a pragmatic subject: a neuroscience lab’s finding that a certain method should work in theory is meaningless if the method proves ineffective in the classroom. So neuroscience’s place in the training of educators is unclear.

There are, of course, some places of intersection between neuroscience and education. However, these come mostly in diagnosing, monitoring, and treating learning disorders. Imaging techniques can be used to verify learning disorders, and neuro-education supporters are quick to point out that it was PET data that settled the debate over whether dyslexia was a visual or a phonological problem. They also like to mention that fMRI data showing increased activity in language-processing areas correlates with increased reading performance after treatment for dyslexia. But these findings haven’t aided teachers of students without disorders, and even their impact on special-needs students is unclear. It is impractical to think neuroimaging could be used on a regular basis in the classroom as a means of tracking progress. And would that be helpful anyway? Does showing that brain activity correlates with performance add any information that performance alone couldn’t tell you? Does it have any effect on how teachers teach? I don’t see why it should. Again, knowing the neural mechanisms behind normal or disordered learning is, in itself, useless for teachers. What matters is establishing an application of that knowledge for the classroom. And I don’t believe it is the job of teachers to establish that connection.

So, programs that attempt to teach teachers about neuroscience are, to me, impractical. That is not to say that the field of education should be cut off from the gains in knowledge that neuroscience produces. But the bridge between the two needs to be built at a higher level. Before findings from neuroscience can be helpful to teachers, they must be processed. They have to be sent through a pipeline of testing that takes a very basic hypothesis, based on neural activity in a laboratory setting, and sees how it survives in ever more realistic contexts. Specialists such as cognitive neuropsychologists are well-suited to help guide relevant neuroscience findings into the realm of experimental education. From there, effective techniques can be distilled into methods that teachers can implement. It won’t be a smooth process. Much like in drug development, transferring basic science to the real world exposes a host of unforeseeable complications. But in education, the only test that matters is the real world.

November 19, 2012 / neurograce

It’s All Relative: The neural processing of time and how it could relate to lifespan length

Our perception of time is a wobbly thing. We all know it flies when we’re having fun. But if you’ve ever found yourself waiting for a bus in the cold, sitting through an especially boring lecture, or tripping down a flight of stairs, then you know it also has a habit of slowing down nearly to a halt. As it turns out, our ability to keep an internal stopwatch has been crucial for our survival. It lets us know whether two events could be causally related, and it allows for “cost-benefit analysis” by giving some measure of the amount of work we’re putting into a task. But, as with most big and important mental phenomena, we are far from a clear understanding of how it works.

Many researchers approach the question of how neurons could keep track of time by training animals to delay their activity before getting a reward. For example, as this article on time-perception research mentions, hummingbirds were able to learn that fake flowers in the lab were refilled with nectar every 20 minutes, and they returned to feed on them accordingly. Rats and other more experimentally accessible animals can be trained to perform similar behaviors while having their neural activity recorded.

Results from this kind of work suggest that rather than having the equivalent of a metronome in our brains—where, say, a group of neurons is continuously firing bursts at regular intervals—time is encoded by the interactions of neurons that are firing at different times. For example, the process starts with the arrival of one event creating a wave of neural activity. If another event arrives while that wave is still present, the second event’s own activity wave will be affected by the presence of the first. So, the second event’s activity wave will be slightly different than if it had occurred after the disappearance of the first wave. The response to the second event is therefore altered in a way that is dependent on its timing relative to the first.
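To make that idea concrete, here is a toy sketch in code. To be clear, this is my own caricature, not any published model: the exponential wave shape, the 100ms decay constant, and the simple summation rule are all assumptions, chosen only to show how a response can carry timing information without any metronome.

```python
import math

# Toy sketch of state-dependent timing: an event triggers a decaying
# "wave" of activity, and a second event's response is altered by
# whatever remains of the first event's wave.

TAU = 100.0  # assumed decay time constant of the wave, in ms

def wave(t_since_event_ms):
    """Amplitude of an event's activity wave t_since_event_ms after it."""
    return math.exp(-t_since_event_ms / TAU)

def response_to_second_event(gap_ms):
    """Response to event 2, given the gap since event 1.

    Simple summation of the two waves is a placeholder interaction
    rule; the point is only that the response depends on the gap.
    """
    residue = wave(gap_ms)  # what's left of the first wave
    return 1.0 + residue    # event 2's own wave rides on the residue

# A short gap leaves a large residue, so the combined response differs
# measurably from the long-gap case -- that difference is the clock:
print(response_to_second_event(50))   # second event arrives mid-wave
print(response_to_second_event(500))  # first wave has mostly decayed
```

Reading the elapsed time back out is then a matter of decoding which combined response occurred, rather than consulting any pacemaker-like burst generator.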

Naturally, these kinds of mechanisms are highly dependent on the duration of the neural response, a property that is determined by a variety of factors ranging from ion channel characteristics to network structure. How (and if) any of these factors are changed during occasions where we feel a difference in our perception of time is still a mystery. It is hard to induce the sensation of time dilation or constriction in a laboratory setting, and even harder to get a rat to tell you when it has felt it. The Eagleman lab, however, did manage to strap computers to human subjects and have them perform cognitive tests while in free-fall (a far more entertaining study than most undergrad psych majors are subjected to). The subjects reported a sense of slowed-down time, but their performance on the cognitive tests showed no increase in temporal resolution. They concluded from this that the feeling of having time to notice every detail doesn’t come because you are actually perceiving more information per second, but because you remember more of that perceived information. Emotionally charged events (such as free-falling with a computer tied to your wrist) have a way of enhancing our ability to lay down memories. So, for the same reason that people can recall where they were when Kennedy got shot, or you still know exactly what you were wearing when you had your first kiss, a life-threatening event leaves us feeling retrospectively that time had slowed down.

But if we want to get at neural mechanisms of temporal processing without having to push people off a cliff with a computer on their wrist AND an electrode in their head, we are going to have to work with animals. Rather than attempt to assess if lab rats feel the same perturbations in their perception of time during fear or boredom, we can try to understand the representation of temporal information by looking at cognitive behaviors that depend on it.  Specifically, we can look at timing through the lens of learning.

Classical Pavlovian conditioning says that if a stimulus (say, a bell ringing) that causes no instinctive response is repeatedly paired with a stimulus (like the presentation of food) that does cause a response (such as salivation), the initial stimulus presented alone can cause the response. And that is why Pavlov’s dogs salivate at the sound of a bell. Importantly, the ability to learn this association depends on how far apart the two stimuli are presented in time. If you ring a bell now and give the dog a steak in two days, he is unlikely to make the connection. So, a neural representation of time is necessary for this kind of training. The acceptable length of this inter-stimulus gap has been shown to vary depending on the stimulus pairings.

Interestingly, for a specific variation of this conditioning (eyeblink conditioning), we know that the gap time also varies across species. The eyeblink conditioned response comes about when a tone is presented right before the subject gets a puff of air in the eye. The airpuff makes the subject blink, and eventually they blink just from hearing the tone. In humans, the amount of time between the tone and the airpuff that results in the best learning of this association is around 800ms. In rabbits, it is around 500ms. And in rats, the few studies that have been done say 280ms. This suggests something like a difference in perspective across species. Humans, rabbits, and rats have distinct opinions on how far apart in time two events can be and still have an association between them. These species appear to be working on different timescales. For humans, with an average lifespan of ~80 years, things are a bit dilated. And compared to a rabbit (~10-year lifespan), the temporal processing of a rat (~2 years) is constricted. Of course, this relationship between the timescale of associative learning and average lifespan hasn’t been studied extensively, and factors as simple as brain size need to be controlled for. But it would be interesting to see if the trend holds for animals with impressive lifespans, like turtles, or such short-lived creatures as fruit flies (of course, the task would have to be re-designed, since the eyeblink response presupposes that the animal can blink). And if we do find that neural dynamics are scaled to align with lifespan, countless more questions would burst forth. What is different about the neural response? Does it stem from single-cell properties, differences in network structure, or both? How is this encoded genetically, and can it be altered during development? And perhaps most mysteriously, why does this mechanism exist? What could be the advantages of such a normalization of time processing across species?
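It helps to lay the numbers above side by side. This is just my own back-of-the-envelope arithmetic, using the rough lifespan figures cited in the text: the ordering matches, but the scaling is nowhere near proportional.

```python
# Optimal tone-airpuff interval (ms) and rough average lifespan (years)
# for the three species discussed above.
species = {
    "human":  {"isi_ms": 800, "lifespan_yr": 80},
    "rabbit": {"isi_ms": 500, "lifespan_yr": 10},
    "rat":    {"isi_ms": 280, "lifespan_yr": 2},
}

# Ordinally, the two measures line up: longer-lived species tolerate
# longer gaps between tone and airpuff.
by_isi = sorted(species, key=lambda s: species[s]["isi_ms"])
by_life = sorted(species, key=lambda s: species[s]["lifespan_yr"])
print(by_isi == by_life)  # True: same ordering

# But the scaling is far from linear: a 40x difference in lifespan
# (human vs. rat) maps onto only a ~2.9x difference in interval.
print(80 / 2)               # 40.0
print(round(800 / 280, 1))  # 2.9
```

So if neural dynamics are tuned to lifespan at all, the mapping would have to be strongly compressive, which is exactly the kind of thing that brain-size controls and the turtle and fruit-fly comparisons could help untangle.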

Overall, the study of temporal information processing, much like physicists’ study of time itself, is proving to be conceptually and practically challenging. It is also producing some confusing results that, when understood properly, could provide a profound advancement in our understanding of the brain. That day is probably pretty far off, but as long as we all have fun trying to get there, it will come soon enough.

November 12, 2012 / neurograce

The Big Picture: Benefits of an Interdisciplinary Field

The department of Neuroscience at the University of Pittsburgh was established in 1983. This makes my dear old alma mater home to one of the oldest such departments in the nation. They’re popping up all over the place nowadays (we’re up to 367 in the US), but make no mistake, Neuroscience as its own discipline is an infant by academic standards. So why is this? Was no one terribly interested in the brain before the 80s? Or is academic bureaucracy so toilsome that it took this long to get it recognized officially? The answer to the first is obviously no. To the second, well, yes, probably, but that’s not the main reason. What makes Neuroscience so difficult to institutionalize is exactly what makes it so powerful: its incredible interdisciplinariness.

Studying neuroscience requires integrating concepts from chemistry, genetics, psychology, evolutionary and molecular biology, pharmacology, and physiology. If you want to understand the methods, you’d better have a grasp on medicine, statistics, and physics as well. And depending on your specific interests, you might also need acoustics, mechanics, linguistics, computer science, and/or mathematics. When people started doing neuroscience research, they came at it from one of these already-established fields, which is why, even still, rather than earning a BS in Neuroscience, many undergrads are forced to “take the neuroscience track” of their biology, psychology, or computer science major. Neuroscience is highly complex, and does not lend itself to easy administration.

But, as I’ve said, it is this complexity and melding of fields at all levels that endows Neuroscience with such strong explanatory power. Because, after all, the world is not as divided as our institutions of higher learning would make it seem. We have put up these walls to make our study of the universe simpler, and indeed they have done so. But it is necessary, at times, to make sure they haven’t become obstacles instead. And that is what Neuroscience is doing. Its willingness to dart across disciplinary lines and snatch up any potentially useful concept or technique ensures that our progress won’t be limited by past perspectives. And, quite honestly, when you’re attacking something as mysterious and complicated as the brain, you have to be willing to take help from wherever you can get it.

There are other, more personal benefits to working between disciplines as well. Attending a biochemistry-heavy lecture on transmitter synthesis in the morning, then spending the afternoon researching computational models of decision making, is an exercise in mental dexterity. Being able to jump between conceptual mindsets is a good skill to have. Furthermore, having a hand in many different fields means you are constantly exposed to new information and whole new concepts. Each time, you are practicing the process of understanding, getting better and faster at it. This exposure also helps you remember how to be a novice. That may seem like a bad trait for a seasoned scientist, but the fact is that some of the best insights can come from those who aren’t entrenched in the pre-existing dogma. As Suzuki says, “In the beginner’s mind there are many possibilities, but in the expert’s mind there are few.” Taking the novice perspective because you are forced to as an outsider can remind you that it is good practice to consider your own positions through that lens, and to question your assumptions occasionally. As Neil Theise (a liver pathologist, complexity theorist, and fan of interdisciplinariness) stressed at a recent talk, you want to be able to see your data as it is presented to you, not as you expect it to be based on your existing beliefs. I believe practicing interdisciplinariness makes this easier. Finally, a field with a broad knowledge base can just lead to some pretty awesome ideas. My favorite example of the spoils of this in Neuroscience is groups like the Serre lab at Brown, which use computer vision to automate the tedious job of categorizing animal behavior. Thomas Serre develops computational models of visual information processing based on experimental data, and is applying those models to the design of lab equipment that can record rodent behavior and give quantitative descriptions of it to experimentalists. It’s a closed loop of neuroscience research!

Do you really want to know anything about Beowulf anymore anyhow?

Now, for all my touting of the benefits of interdisciplinariness and having a broad and open mind, I do still appreciate the need for focus and walls. The process of educating a scientist is essentially a funnel that ends in a PhD thesis based on a very specific set of knowledge. This, by necessity and design, comes at the expense of most other areas of knowledge. There simply isn’t enough time to learn everything about everything, and to try to do so would result in knowing not very much about anything. The world is indeed complicated, and we need to chunk it up into manageable bits if we want to be able to process it. I think the recent rise in interdisciplinary fields of all varieties simply reflects a need to partition things a little differently, and perhaps less rigidly. History has taught us that great insights come from people with broad interests: Da Vinci’s painting influenced his notions of light and color; Darwin was a naturalist, not just an ornithologist, entomologist, or botanist. Of course, not everyone can be a Darwin or a Da Vinci. The right balance is key. Specialization allows us to add to our knowledge base, one little pixel at a time. But someone needs to be looking at the big picture.

November 5, 2012 / neurograce

The Price of Ignorance: Why scientists need to educate lawmakers about scientific fact and policy

During elections, candidates talk a lot. As a result, the public can sometimes come away with a glimpse of how they truly view the world. Sadly, what has frequently been revealed is the woeful scientific ignorance of some members of the political process. Recently, a lot of these alarming revelations have been in the realm of women’s reproductive organs or the mechanisms of global climate activity. But we’ve seen these kinds of knowledge lapses in every area that policy even remotely touches. From Michele Bachmann’s anecdote about the HPV vaccine causing mental retardation to Christine O’Donnell’s belief that researchers are creating mice with human brains, politicians have revealed not just a lack of scientific knowledge, but also a lack of knowledge about how science is done. And, even more startling, in some cases they seem to wear their ignorance as a badge of honor.

This problem isn’t simply an embarrassment. It is a serious threat to the policy-making process. It’s not just that these politicians aren’t “into science.” They believe things about the world that are unequivocally false. And that can, quite obviously, have huge consequences in the law, even without anyone having bad intentions. If you were, for example, putting your money into an FDIC-backed savings account, you would have no reason to prepare for the possibility of suddenly losing that money. And so, when a congressman believes that rape cannot result in pregnancy, he, just as logically, sees no reason to make allowances for that scenario.

And that is why it is so crucial to make certain that the people in power are completely and accurately informed about the topics on which they legislate. The current method of determining who gets important positions, such as membership on the House Committee on Science, is (surprise, surprise) based more on politics than qualifications. Which is how we end up with people like Rep. Paul Broun, who unabashedly touts his belief in a 9,000-year-old earth. However, not all politicians are so strongly anti-science. Rather, they just need some education in it. Luckily, as scientists, there are things that we can do to achieve this. A talk by former Illinois congressman John Edward Porter outlines some of the ways scientists can reach out to policy makers. He suggests offering your services as a science policy advisor to a local politician, giving talks about your work to a general audience, and making the needs of the scientific community clearly known. There is a lot of red tape that comes from the inefficient design of research funding and inadequate infrastructure. For example, interdisciplinary work (which, almost by definition, tends to be highly innovative and promising) can have difficulty finding funding, due to “funding silos” which ensure that money stays strictly within the interests of one agency. A lack of simple procedures for collaborating with foreign scientists can also hold back a lot of potential progress. Politicians have the power to remove these roadblocks, but they need to be made aware of them first. By advancing the scientific process, we can advance the acquisition of knowledge and, hopefully (although it is proving difficult), use that knowledge to inform future policy.

There are also things that scientists can do independently of interacting directly with government officials. Making the general public properly informed about the facts behind controversial issues and the ways in which basic research has directly benefited their lives is a great place to start. Candidates have to listen to the wants of their electorate, so an informed and supportive voting base can be a scientist’s best friend. Also, being smart and efficient with the resources that we do receive through government funding will reduce any reason for policy makers to view science as wasteful. There will always be bouts of difficult financial times where budgetary decisions will need to be made, and it will be important for research funding to not be labeled as pork.

But these kinds of efforts aren’t simply about getting more money for scientists. As a nation, how we choose to view and invest in research and the scientific process will determine our level of progress and success. To live in a modern dark age where scientific illiteracy is accepted would inevitably stunt our growth as a culture. Although many advocacy groups probably also feel strongly that their goals are important for society as a whole, I really do believe that the science lobby is not a special interest. We are not doing work to try to sway opinions, but rather to determine facts. And the results of our work can have unforeseen benefits and ripples throughout many aspects of daily life. For that reason, getting the science right in law-making cannot be optional.

These beliefs don’t by their very nature need to be partisan. Sadly though, in our current political state, it would seem that when a wholly false scientific claim is uttered, it comes from the mouth of a Republican. It is also the Republican Party that is more likely to embrace a culture of scientific ignorance. And specific policy choices regarding climate legislation and funding priorities reflect that. Furthermore, there are some projects and initiatives that the Republican mantra of “Let the private sector do it!” simply could never achieve. The expense of basic research is not something most private industries see a benefit in taking on. Yet they use the results of such government-funded work to create new applications and technologies. This process is mutually beneficial as it allows companies to be successful and the work of scientists to have real-world implications. But to pretend that such a setup could exist in the absence of large government support is absurd.  For these reasons, as a scientist and person who supports fact-based reasoning, I cannot support the Republican Party.

Of course, each candidate and each race is different, so it is important to stay informed; organizations like the AAAS are a great way to do just that. And if you’re interested in doing the informing yourself, there are a variety of opportunities for scientists to assist politicians in shaping their science policy. Research America has an advocacy goal and provides resources accordingly. The AAAS also offers an amazing fellowship that places PhDs in government positions where “science translators” are needed (I’ll let you know the year that I plan to apply so that all you other qualified applicants can back off). And of course it’s always possible to simply contact government officials directly with your concerns or assistance.

Oh, and don’t forget to vote!

October 30, 2012 / neurograce

The “Big Science” approach to Neuroscience and why it succeeds even if it fails

“Big Science” is a way of going about research and discovery that, being generally logical, systematic, unified, and well-funded, seems to stand in contradiction to how science is normally done. It comes about when some large, unifying entity (the government, a coalition of nations, a very wealthy investor) decides that a problem is important and recruits an army of scientists to get it solved. Prominent examples include the Manhattan Project, the Human Genome Project, and the Large Hadron Collider. The field of Neuroscience has been experiencing its own “Big Science” boom as of late and, similarly to the above examples, both critics and supporters have been vocal.
The biggest target of scorn/praise is easily Henry Markram and his Blue Brain project. His stated goal of reverse engineering and replicating the human brain in a supercomputer in 10 years has been politely described as ambitious. What his group is currently slogging through is the creation of minutely detailed models of cortical columns, including everything from specific connectivity properties down to dendritic spine structures. It’s the type of data-heavy project that could only be achieved through Big Science. But despite its monopoly on press coverage, Blue Brain isn’t the only project of such caliber. The Allen Institute for Brain Science is pursuing similar goals, one of which is the atlasing of the entire brain, including not just physical structures but gene expression as well. Their “high-throughput pipeline” for data harvesting is strict, standardized, and state-of-the-art in order to most efficiently produce results. There are also the Connectome Projects, for mouse and human brains. These NIH-backed endeavors span several universities and have the goal of creating a detailed connectivity map of the entire brain (down to the cellular level for the mouse). Again, this is achieved through systematic methodologies and procedures, focused on anatomical imaging.

So what is it that critics of Big Neuroscience object to? There are people who disagree with the science itself. They claim that Blue Brain’s level of detail is unnecessary, or that knowing the mouse connectome won’t explain the brain. Another complaint is that the goals, despite appearances, are actually ill-defined and untestable: How do we know that the cortical column model is accurate and complete? What does it mean to obtain a cellular-level connectome when there is animal-to-animal variance on that level? Another major component of the critical choir is that the goals are not achievable. We simply don’t have the experimental techniques available yet to accurately model every neuron morphology or map every synapse. And throwing a bunch of scientists at that problem who are systematically repeating the same experiment won’t fix that. We also don’t even know what we need to know. As Bradley Voytek said in his post about Blue Brain, “the neurosciences right now are where physics was in the early 1900s. A bunch of people thought Newtonian mechanics could explain everything. Turns out, the physical universe is much more complicated than that.”

There is another theme to the opposition, and it stems from a concern over the public perception of science. These kinds of projects tend to draw mainstream media attention, and are therefore subject to some misinterpretation, simplification, and sensationalizing. It doesn’t help when the lead scientists themselves are party to the over-hyping. Blue Brain’s own self-description contains a smattering of “exceptional”, “perfected”, and “complete virtual brain”. The fear is that the difficulty of these challenges will be under-reported and the payoffs over-reported. This can create unrealistic expectations that, when unfulfilled, will reflect poorly on the field. This is especially problematic given the amount of money involved: the NIH gave a total of $38.5 million for the Human Connectome Project. The Allen Institute started its work with $100 million from Paul Allen. Blue Brain is currently seeking a one-billion-euro award from the Future and Emerging Technologies Flagship Initiatives. With the field’s reputation and that kind of cash on the line, success is more important than ever. But some say that putting the emphasis on a public measure of success, rather than on good science (which routinely ends in failure), is a dangerous practice.

But even with all these arguments of varying validity, I stand in support of Big Science. If all these projects live up to their hype, the leaps in knowledge will be incredible. But even if they don’t, there will be a wealth of collateral benefits from the process itself. Attempting projects that require such detail has, for example, revealed a systemic problem in our data reporting policies. Both Sean Hill of Blue Brain and Christof Koch of the Allen Institute have cited the insufficient detail found in the current literature as a reason for doing experiments in-house. In this way, a by-product of these large-scale data collection and modeling projects is pinpointing the weaknesses in normal small-science practices. It is especially helpful in advancing neuroinformatics which, as we all know, is essential for advancing the field as a whole. Because of the nature of their experiments, these projects create a huge database of neuronal properties that are both uniformly sampled and thoroughly recorded. And with the (worrisome) exception of Blue Brain, these projects make much of their data available to the public. This not only provides the research community with the actual data, but also with a standardized form for reporting such data and, in some cases, a model to plug it into. So even if the critics are right, and the goals of the project are too ambitious for our time, the infrastructure will be there waiting when the experimental results are achievable. These projects also spur advances in the automation of experimental procedures, since such automation is necessary to achieve results at the required rate. So, merely attempting Big Neuroscience on a specific problem can create positive ripples across the field.

Finally, despite the different approach, Big Science projects ultimately have the same objectives as any normal laboratory. Blue Brain’s detailed cortical model, for example, is just a means of finding out which aspects of neural networks are essential for computation. But rather than the usual approach of working from the simplest model and adding complexity, they take the opposite route. Many labs across the world investigate gene expression in their brain region of interest and correlate it with the behavior or disease they study. The Allen Institute is simply gathering all that information in one fell swoop. Even the over-hyping isn’t unique to Big Science; professors talk up their work to the media (and grant readers) all the time. And the effect is probably a net positive, as the start of a new project is likely to bring attention and favor to the field, while any failure is less likely to gain coverage. And even if these projects do fail by whatever measure they are being held to, that will provide an opportunity, just like in any scientific failure, to discover why, and to then apply those lessons to the next project, big or small. The benefits of these behemoths, in any case, outweigh the risks. So, let’s put away our pitchforks, and let the giants live.

October 21, 2012 / neurograce

Society for Neuroscience Conference: A Paradox

A small sampling of posters

S-F-N. Those three letters are instantly recognizable to any card-carrying neuroscientist. While they technically stand for “Society for Neuroscience”, what they usually refer to is the giant conference held by that society each year. This behemoth event brings together, on average, over 30,000 researchers, presenters, exhibitors, and vendors. Every hotel in town is jam-packed with neuroscientists and, after 5pm, so is every bar. You'd be hard-pressed to find a flight that doesn't have an attendant struggling with poster tubes, or a restaurant whose diners aren't donning SfN lanyards. It is a scientific spectacle beyond compare.

And it is that grandness, that sheer size, that makes SfN the conference that it is. But it is also what makes SfN so hard to love. It is absolutely overwhelming. Between the lectures, mini-symposia, and nano-symposia, at any given moment there are roughly 20 talks you could be listening to. Feel like going to a poster? Each of the twice-daily poster sessions offers about 1775 options at a time. It's unwieldy. At the same time, the variety of topics ensures that you will find at least a quarter of what is being presented completely unappealing. So what you have, in essence, is a handful of smaller conferences going on in parallel: the neuro-pathologists hanging out at symposia about Parkinson's mouse models, the bioengineers sticking to the BCI poster row, and the psychologists seeking out anything fMRI. It may seem contradictory, but at the conference with the widest range of neuroscience research available, it is incredibly easy to stay wrapped in the cocoon of your sub-domain. Your best bet for getting a glimpse into another field is to catch some of the poster titles you sprint past on your journey from poster A47 to FFF64.

But maybe you’ll be going too fast, because SfN isn’t just intellectually big. 30,000 people necessitates a big space as well. The New Orleans Morial Convention Center where the conference was just held is three floors and over a kilometer long, and we took up all of it. And like with any conference center, the space is not physically appealing. There are drab colors, bad lighting, and a temperature consistently three degrees below the level of human comfort. With this many people you probably also find yourself zig-zagging through slow-moving crowds and waiting in a 20-minute line to buy a $10 sandwich.

So why do we do it? Why do so many (hopefully) logical people drag themselves to this monster each year? It can't simply be for the science. There are countless other, smaller conferences that focus on the narrower fields of research that people end up sticking to at SfN anyway. No, I would argue that it is in fact the hugeness of it that makes people come. Yes, walking through the sea of projects and presenters can make you feel very small, and like your work is insignificant or irrelevant. But when you consider yourself as part of this whole, there is a sense of solidarity and the notion that this field is progressing. We're having a conversation. And while we may have different dialects, we're all speaking the same language. So for all the headaches that come from a gathering this size, there is also great power in it, and promise.

And, of course, we mustn’t forget the second great driver of SfN attendance: socializing. SfN is attended by scientists from all different countries, from all different labs, and in all different stages of their careers. It can be a great opportunity to network, and to catch up with old colleagues and meet potential new ones. Also, a talk over dinner can end up accomplishing a lot more than lengthy emails and Skype calls can when collaborating long-distance. And nothing helps new co-workers bond like working towards the mutual goal of getting back to the hotel in an unknown town at 3am. The socializing may seem like a by-product of the conference, but (as many SfN attendees are surprisingly willing to admit) it can be the most crucial and productive aspect.

So despite all the negatives of SfN: the exhausting size, the sensory and intellectual overload, the awkwardness that comes when my attempt to get a free Zeiss tote bag devolves into an elaborate lie about how my lab is in the market for a new multi-photon microscope…. I digress. But even with all that, I wouldn't want to dissolve SfN. It is a unique (and for first-timers, potentially emotional) experience. Being confronted annually with the vast knowledge being produced, as well as the far vaster amount still missing, can be a helpful exercise for neuroscientists of all levels. The size and disjointedness of the conference exactly reflect the field itself, and that is something we shouldn't ignore. In some ways SfN is like the internet: it has the depth to ensure that a user can spend all their time completely enveloped in the narrow topic they set out to find, but it also contains the breadth to allow exploration for those who seek it. It is not the most efficient way to disseminate our knowledge, but it is perhaps the most unifying. There may be a lot of divisions within our field, but when we invade and overtake a city each fall, we don't do it as electrophysiologists or molecular biologists or mathematicians; we do it as one giant horde of SfNers.

October 8, 2012 / neurograce

The future of brain-computer interface: A glimpse into the nano-membrane filled crystal ball

Brain-computer interface is… just what it sounds like. Some device is used to transfer information about the brain’s activity to a computer, or vice versa. So we end up with two flavors of BCI: recording (for the brain-to-computer direction), and stimulating (for the computer-to-brain path). Utilizing the lucky fact that electrical signals are the language of the brain, we can do both of these things with tiny electrodes placed either in, on, or next to cells. And with this dual pathway we can then read off motor cortex information in order to move a prosthetic limb, or send seizure-combating signals into an epileptic brain. It seems so simple, right?

The old way…

But, like the Texas penal system, what we've got is an execution problem. There is no easy way to get those electrodes to the cells. And once they're in, there are still risks of infection or rejection. The signal is also subject to degradation by movement, electrical noise, or electrode deterioration. These are problems routinely faced by researchers working with animals, and they are magnified tenfold when the possibility of human applications arises. Right now, non-invasive techniques such as EEG and fMRI are the most common methods of reading brain activity in humans. But none of those options has the spatial or temporal resolution to provide meaningful data about real-time information processing, which is critical for the goals of any BCI. For useful data, we need those tiny electrodes. But as long as implanting them remains such an invasive, messy, and potentially dangerous procedure, it won't be available to the masses.

Now, it should be noted, there is the minor matter of knowing what those neural signals we record actually mean. Or figuring out how to make a pattern of stimulation that has the intended effect. Cracking the neural code is obviously key to interfacing with the brain; it's hard to have a conversation when you don't speak the language. And right now we are far from fluent. But it is an area of intense research, and I believe the breakthroughs are coming. However, without the ability to utilize that knowledge in a physically plausible way, we don't stand to gain much from it. And that is why our lack of a good implementation is so problematic.

So, what then is the future of BCI execution? I'd put my money, if I had any, into nanotechnology. And not (entirely) because it just sounds futuristic. The fact is, the process of opening up the skull, placing a hard and very foreign object onto the brain, and closing it back up again is never going to be a clean one. We need our BCI devices to be more natural, and to deliver them in a non-invasive way. And the only way to do that is if they are very small, and made of more bio-friendly materials. And that is exactly what nanotechnology researchers such as John Rogers are working on. His group's recent Science paper (and his appearance on NPR) describes a biodegradable electrode, which could theoretically be implanted in the brain, utilized for diagnostic purposes, and then allowed to dissolve away safely. Longer-lasting versions could be used for months or even years. Furthermore, these kinds of ‘soft’ electrodes can conform better to the curves of the brain, leading to better signal quality. Another nanotech company has developed a coating for traditional electrodes that enhances signal quality and longevity by reducing the immune system's response to the electrode. So, through these measures, some of the danger of neural implantation can be reduced and the quality of recordings increased.

The future!

There is still, however, the issue of delivery. But, do not fear; nanotechnology has the answer to that too! Well, potentially. Because nanomembranes are so small, thin, and pliable, they can easily survive being delivered via injection (as this study looking at nano-scaffolds for bone regeneration shows). There is also already precedent for the injection of neural-stimulating electrodes. The BION system involves injecting traditional electrodes into peripheral nerves via a hypodermic needle. Those electrodes are then powered and controlled wirelessly, and allow for stimulation and recording of motor neurons. Now, translating this method into a useful BCI application would require getting the electrodes into the brain, not just the peripheral nervous system. And that is a problem of another magnitude, not easily solved even with nanomembranes in your toolbox. The blood-brain barrier is just annoyingly particular about what it lets through. And even if you could get the nanomembrane electrode in, you'd need a way to control where it plants itself. Trying to move a prosthetic arm with neural signals coming from your visual cortex is not advisable. So the injection would probably have to be targeted, meaning there are still some serious obstacles to a completely non-invasive procedure. But we have to leave some problems for future scientists…

Maybe the notion of tiny, injectable brain-controlling devices sounds crazy (or terrifying) to you. But it is the general direction we must go in if we want to really utilize all the knowledge that we're painstakingly gathering about how the brain works. Potential applications aren't limited to disease treatment or prosthetic limb control. Even perfectly healthy people could find BCI beneficial. The gaming industry, for example, has already gotten its feet wet in the BCI pool, using EPOC headsets to add another element of control to games. But these superfluous applications are only sensible if the BCI is incredibly low-risk. No one would endanger their health to make World of Warcraft slightly more entertaining… Ok, maybe some people would, but the FDA isn't going to approve it. So we are a long way off from BCI impacting the everyday life of the average person. But all crazy technology had to start somewhere. I'm excited to see how this particular one develops, and how quickly we can get ourselves into the future.

October 2, 2012 / neurograce

Standing on the shoulders of… people of roughly average height. A review of failed theories in Neuroscience

The history of scientific progress is occasionally portrayed as an epic tale, made up of some combination of serendipity and destiny. Great scientists of the past are looked at as heroes and pioneers who are meant to inspire the current generation. Focusing on the successes and breakthroughs is an effective technique for drawing people into the field. But it's a far from accurate portrayal of how science usually advances. The fact is, there are a lot of wrong turns on the path to understanding, and we sometimes continue on those wrong paths for a (retrospectively) embarrassingly long time. This may seem like a side of scientific research that we want to ignore or cover up, but I find it can actually be quite helpful to investigate it. As someone involved in doing research, it is important to realize that false results and incorrect conclusions can look just as legitimate and be supported by just as many smart people as correct ones. It reinforces the “question everything” mentality that we should all have and helps to develop a critical eye. Plus, with 20/20 hindsight, some of those failed theories, especially in neuroscience, can be pretty entertaining. So I've compiled a list of a few that made their way into the field over the past few hundred years. Even if you don't learn anything factual about the brain from them, you can at least go away knowing that you're smarter than Descartes.

The flow of fluids controls the actions of the brain, nerves, and muscles. The notion of “animal spirits” flowing through and controlling the body originated with the ancient Greeks, but stuck around (despite evidence against it) until the late 1700s. The theory places the ventricles (the fluid-filled chambers in the middle of the brain) at the center of the action by describing them as repositories of the spirits, which are sent out to the peripheral nerves as needed. The spirits flow through the peripheral nerves and then affect muscle fibers via hydraulic power. Descartes expanded this theory by incorporating his belief that the pineal gland was the seat of the soul. He posited that the pineal gland's soul-induced movements could alter the flow of the spirits, and thus alter thought and behavior.

Of course we know now that motor neurons work through electrical activity, not hydraulics. And that their activity stems from that of the cortex and other brain structures, not the neuron-less ventricles. Though to his credit, Descartes did have a neat idea of how memory works that is a pretty good analogy of our current understanding:

The pores or gaps lying between the tiny fibers of the substance of the brain may become wider as a result of the flow of animal spirits through them. This changes the pattern in which the spirits will later flow through the brain and in this way figures may be “preserved in such a way that the ideas which were previously on the gland can be formed again long afterwards without requiring the presence of the objects to which they correspond. And this is what memory consists in” 


That squares more or less with the notion of activity-dependent synaptic plasticity. So we’ll just say he broke even.
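Descartes' picture of pores widened by repeated flow maps loosely onto the modern Hebbian idea that coincident pre- and post-synaptic activity strengthens a connection, leaving a trace that can recreate the pattern later. A minimal sketch of that idea (the function name, learning rate, and activity patterns are all illustrative, not from any particular model):

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01):
    """One Hebbian step: strengthen each synapse in proportion to the
    product of its pre- and post-synaptic activity (the modern analogue
    of Descartes' 'widened pores'). Rows of w = post cells, cols = pre cells."""
    return w + lr * np.outer(post, pre)

w = np.zeros((3, 3))              # start with no memory trace
pre = np.array([1.0, 0.0, 1.0])   # presynaptic activity pattern
post = np.array([0.0, 1.0, 1.0])  # postsynaptic activity pattern
for _ in range(100):              # repeated co-activation of the same pattern
    w = hebbian_update(w, pre, post)
# Synapses between co-active pairs have strengthened; all others are untouched,
# so the experience is now "preserved" in the weights themselves.
```

The point of the sketch is only the structural parallel: the stored pattern lives in the changed connections, not in any ongoing activity, just as Descartes' spirits need not be present for the memory to persist.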

But it looks so scientific…

Phrenology, aka ‘you are your skull shape’. This is a classic example of brain science gone bad from the early 19th century. It is based on three basic principles: 1. Cognitive functions are localized in the brain. 2. The size of the brain area devoted to a function is proportional to its presence in a person. 3. The shape of the skull is an accurate measure of the shape of the brain. When you put those together, you get doctors massaging your head and then telling you that you have abnormally high “love of home” but a deficit in “agreeableness.” The problem with the science of phrenology is that those three principles have decreasing levels of accuracy. The first one is generally still agreed upon. Certain functions can be associated with specific brain areas and will disappear if there is a lesion there. However, phrenologists were more concerned with character traits than concrete cognitive functions, as the phrenology map shows. The notion of personality traits being localized to discrete patches scattered across the brain is not supported. As for the second, the importance of size holds only in a gross sense. Patients with significantly degraded hippocampi, for example, will have memory problems. But a little extra mass in primary visual cortex probably won't mean much. Finally, the third principle is simply wrong. Skull shapes vary from person to person, but have little relation to brain shape.

Sleep is like nightly hydrocephalus. Hydrocephalus is bad. It's caused by excessive fluid pressure on the brain and has symptoms that can include low pulse, inability to process sensory stimuli, and low muscle tone. Hey, those happen during sleep too! And as this article from 1841 points out, that is pretty good evidence that they're caused by the same thing. The specific mechanism the author suggests is that there is an increase in blood flow to the brain at night, which puts excess pressure on the brain and causes the symptoms of sleep. The mechanisms of sleep are tricky to figure out, but luckily the blood-induced hydrocephalus theory died out pretty quickly. Research nowadays credits the sleep cycle to the ability of hypothalamus cells to control the dynamics of the cortex.

The brain is one giant, connected cell. Microscope technology wasn't great in the late 19th century. And neurons are pretty densely packed in the brain. So looking at a slide full of tangled cell bodies, axons, and dendrites may not be terribly informative. Many people thought protoplasm was shared amongst cell bodies via micro-bridges. But cell theory suggested that neurons should be independent, membrane-bound cells, and the evidence was inconclusive. So the debate raged over the turn of the century. Camillo Golgi, a prominent scientist studying the nervous system, opposed the so-called neuron doctrine, favoring the old syncytium view instead. Another now-famous researcher, Santiago Ramon y Cajal, held the opposite view. He used, ironically enough, the stain that Golgi invented to stain neurons in a way that showed them as separate entities. We now accept that, in the vast majority of cases, two neurons are separated by a synapse. And the world of neuroscience would be very different if that were not true. But don't feel too bad for Golgi; he got the Nobel Prize in 1906 for his work. He just had to share it, with Ramon y Cajal.

A single neuron can be both excitatory and inhibitory. Whether a neuron increases or decreases the activity of its post-synaptic target depends on two things: what neurotransmitter it releases and what receptors the post-synaptic cell has. And in the middle of the 20th century, there was much speculation about both. Work on how ascending neurons excite their target motor neuron while inhibiting the antagonist muscle's motor neuron was hot at the time, and it suggested that the firing of a single neuron was causing both the excitation and the inhibition. At that time, the notion of multi-transmitter production was only beginning to be investigated in most cell types, but acetylcholine (ACh) was well-established as the transmitter of motor neurons. So, it was posited, ACh must have a different effect at different synapses. As it turns out, the inhibition seen in the antagonist fibers was caused by small inhibitory interneurons, called Renshaw cells, that the motor neuron excites. These interneurons release a different neurotransmitter (glycine) that is responsible for the inhibition. No manic-depressive motor neurons after all.
There’s always exceptions to the rule, but on the whole, we now believe that neurons produce one class of neurotransmitter and are either excitatory or inhibitory at all of their synapses. It’s what we call Dale’s Principle (not so much because Henry Dale came up with it, but more because John Eccle’s says so). Interestingly, there’s no reason, biologically speaking, that this needs to be true. It’s conceivable that cells could produce a variety of neurotransmitters (they do a lot more complicated things already), although perhaps segregating those into different synapses could be tough. But cells already have many different receptors, and so the control of inhibition/excitation could be determined on a synapse-by-synapse basis on the post-synaptic end. But evolution did not design it so. And it turns out that the reality of Dale’s principle has a huge impact on the computational abilities of the brain. As this paper shows, if Dale’s principle is violated (i.e, the absolute values of synaptic weights are randomized as opposed to being constant for a cell), spiking correlations can decrease and firing rate fluctuations can increase. This can have big consequences on information processing. So let’s all be glad for the consistency of our neurons.

No new neurons! Up until quite recently, neuroscientists would warn you to be very careful with your brain cells, because they're the only ones you're ever gonna get. Even our old friend Ramon y Cajal was a proponent of the fixed nature of the nervous system, and with his support the idea stuck for over 50 years. But unfortunately this was mostly the result of an absence of evidence being used as evidence of absence for decades. A smattering of studies throughout the 1960s and 70s hinted at the possibility of adult neurogenesis but were none too convincing. Then came the 80s, and along with the Rubik's cube and pocket calculators, scientists were gifted with BrdU. A synthetic nucleoside that can label new cells, BrdU was the perfect tool to investigate neurogenesis. And as this lovely review shows, they found it! But before you start bare-knuckle boxing or cracking walnuts with your skull, you should probably know that new neurons are still more the exception than the rule. They're found only in the olfactory bulb and hippocampus. The latter is involved with learning and memory, but the exact role of neurogenesis there is still unclear.

These are some of the major missteps of the field over the ages. And I'm sure there are millions of smaller ones scattered throughout the literature. I find it fun to study these, but it does make me wonder about what currently entrenched neuro-theories will someday be proved utterly false. I'd like to know which of my conceptions about the brain will be mocked in whatever the future equivalent of a blog is (hoverblog?). But sadly, without foresight we can only progress through doing careful studies and asking critical questions. Luckily, studying the past so as to not repeat it is a great way to learn how to do just that.


September 27, 2012 / neurograce

Quick!!! Help fund a great idea for Neuroscience education!

This Kickstarter project has 2 days left to fund the development of a kid-friendly interactive App to help teach neuroscience. This stuff is important, so you should fund it if you can! Let Ned the Neuron live!


September 24, 2012 / neurograce

Conscious Unawareness

I was at a neuroscience retreat a few months ago when some senior neuroscientists and I were talking about ways of quantifying and measuring perceptions and cognitive functions. I mentioned that I felt that this was a big problem in the study of consciousness, and one of the professors there replied, “Consciousness? That's a dirty word. Neuroscientists should never talk about consciousness.”

To me, such a notion seemed positively absurd. But I also knew it to be a fairly common sentiment in the field. It is rare to find a well-respected and prominent neuroscientist devoting their time to the study of consciousness openly and directly. Christof Koch is a notable exception, but even he gets a majority of his peer-reviewed publications through his work on visual attention. This absence is noticeable on a large scale by looking at the distribution of abstracts for this year’s Society for Neuroscience (SfN) conference. Of the 17,253 (woah) poster and talk abstracts, only 44 of them are tagged with the keyword “consciousness.”

I understand the aversion to this topic. For one, it has a bit of a stigma for being too far on the philosophical end of the spectrum. Real scientists don't bother with such semantic nonsense. But I would argue that all science once belonged to the realm of philosophy. Ancient Greek philosophers described love and strife as the forces behind the attraction and repulsion of objects, but luckily that didn't stop the pursuit of more accurate and analytical explanations. But if science is failing to provide the data, philosophy and myth are the only paths left. And with a topic like consciousness, which is such an equally integral and mysterious part of human experience, you can be sure that people will explore whatever path is available. And so it will remain a hot topic of philosophical debate until factual evidence can cool it down.

There is another, perhaps stronger, motive for avoiding the study of consciousness: it’s hard. It’s just incredibly difficult. It is a challenge to even decide how to approach it. There is nothing close to a uniformly-accepted definition for consciousness (I haven’t even attempted one here. That will be a topic for another post). Nor are there clear ways of testing and quantifying many of the current definitions available. And the usual approach of neuroscientists—discovering or creating an animal model—isn’t entirely feasible given the nature of this subject. But none of this constitutes a valid excuse for not studying consciousness. Part of the scientific process involves defining your terms and deciding what are reasonable questions to be asked about them. And we haven’t been halted by such difficulties in the past. Remember those SfN abstracts? Well 285 of them are devoted to the equally ill-defined and un-model-able disorder of schizophrenia. And that only affects 1% of the population. So rather than to say that we can’t study consciousness because we don’t even know what it is, I would say we need to study consciousness in order to find out what it is.

Finally, the notion of scientific “dirty words” or off-limits topics for neuroscientists goes against our whole mantra. I feel that in order to be in this field, you have to believe that everything that is seen, felt, remembered, experienced, etc., is a product of neural activity, and can be explained as such once we understand it. To say that such a key aspect of thought is not even approachable for scientific study is to acknowledge a crippling weakness in our field that I don't think is there. Furthermore, if not us, then who? What field is better suited to tackle this problem? Plenty have tried, including theology, mathematics, and physics. And from those we've gotten circular (and thus meaningless) answers such as ‘God is the source of consciousness and consciousness proves the existence of God.’ Or the nonsense that has come from trying to apply what is known about quantum mechanics to explain the workings of the brain (when the only tool you have is a hammer, every problem looks like a nail). Just because neuroscience might shy away from the issue due to the lack of a solid foundation doesn't mean everyone will. Rather than have the void filled by others, I think it is best that neuroscience claim its role as the rightful owner of this very tricky problem and work on solving it.

Luckily, there are some people out there who agree. The Mind Science Foundation (MSF) keeps this impressive database of people working in the field of consciousness research. It includes a fair amount of philosophers, writers, and other not-technically-scientists. But that’s because the goal of the MSF is to bring together all people who support the notion of a biological basis of consciousness and a scientific approach to the study of it. In this way, we can make the study of consciousness an interdisciplinary pursuit that is fueled primarily by neuroscientific findings. This pursuit doesn’t have to hide or ignore the above-mentioned difficulties inherent in the task. There just needs to be honesty about the current limitations and an effort to work around or remove them. And if this effort succeeds, then I look forward to the day when that terrible c-word can proudly become a part of every neuroscientist’s vocabulary.
