December 17, 2012 / neurograce

Thoughts on Thinking: What is and is not necessary for intelligence

Thought is a concept that is frequently relegated to the Potter Stewart-esque realm of “I know it when I see it.” To think isn’t merely to churn data and spit out an output for each input; we instinctively believe there is more behind it than that. When a friend pauses and stares off for a minute before answering a question, we allow them that time to think, having some notion of the process they’re going through. But we are not so empathetic when we see the colorful spinning wheel or turning hourglass after asking something of our computers.

A lack of a hard definition, however, makes studying thought a confusing endeavor.  We would ideally like to break it down into component parts, basic computations that can conceivably be implemented by groups of neurons.  We could then approach each independently, assuming that this phenomenon is equivalent to the sum of its parts. But the reductionist approach, while helpful for getting at the mechanisms of those individual computations, seems to strip the thought process of something essential, and is likely to lead to less than satisfying results.

Some of the great thinkers of cognition came together recently at Rockefeller University to both celebrate the 100th birthday of Alan Turing, and discuss this notion of what makes thought thought, how we can study it, and how (or, if) we can replicate it. The talks were wide-reaching and the crowd represented a variety of viewpoints, but common themes did emerge.  One of the most striking ones was the requirement of novelty.  That is, to have true intelligence requires the ability to answer novel questions based on currently-known information.  Recalling a fact is not intelligent.  Recalling a fact and integrating it into a new line of reasoning that was built purely from other recalled facts is.  The production of this new line of reasoning makes it possible to answer a new kind of question. And, as John Hopfield pointed out, the number of novel avenues of reasoning that can be explored and new questions an intelligent being can answer grows exponentially with the amount of information known.
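
A quick toy calculation can make Hopfield’s point concrete (this is my own framing, not his formalism): if we treat a “line of reasoning” as any ordered chain of a few distinct facts, the number of possible chains explodes as the number of known facts grows.

```python
# Toy calculation: treat a "line of reasoning" as an ordered chain of 2-5
# distinct facts, and count how many such chains n known facts allow.
# Purely illustrative of combinatorial growth, not a model of cognition.
from math import perm  # math.perm requires Python >= 3.8

for n in (5, 10, 20, 40):
    chains = sum(perm(n, k) for k in range(2, 6))
    print(f"{n:3d} facts -> {chains:,} possible chains")
```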

This kind of extension of given information into new dimensions requires mental exploration. Intelligent beings seem to be able to walk down different paths and get a sense of which is the right one to follow. They can build hypothetical worlds and predict what will happen in them. They can abstract situations away to the highest level and use that to find deep similarities between superficially disparate things. Josh Tenenbaum used this example from a 1944 psychological experiment to demonstrate just how instinctively we go beyond merely processing the information given to us and start adding new layers of understanding.

The fact that most (decent) people come away from that video with a sense of ill-will towards a shape shows how wild our mental exploration routinely gets. But what is it that allows us to make these new connections and forge novel cognitive paths? And is it possible to design entities that do it as well?

Currently, much of AI research is focused on the creation of artificial neural networks that can extract meaningful information from a small amount of input and use that to categorize future input of a similar kind. For example, by discovering commonalities across the shape of letters in hand-writing samples, a computer can identify letters in a new hand-writing sample and turn it successfully into text, even though it had never “seen” that exact sample before. It is important to note, however, that while the computer is answering something novel in the sense that it is telling us the text of a new hand-writing sample, the question itself is not novel. Identifying letters is exactly what the computer was trained to do and exactly what it does; nothing new there. Many people would argue that these kinds of networks, no matter how complex, could never end up creating the intelligence we experience in ourselves. Pattern recognition simply doesn’t capture enough of what intelligence requires. There is no deeper understanding of why the patterns exist. There aren’t abstract connections between concepts. There is no possibility for novel exploration.
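
As a concrete (and very toy) version of the hand-writing example, here is a minimal sketch using scikit-learn’s bundled 8x8 digit images as a stand-in; the dataset and the simple classifier are my choices for illustration. The model labels images it has never seen, but “which digit is this?” remains the only question it can answer.

```python
# Minimal sketch of the pattern-recognition setup described above, using
# scikit-learn's small bundled digit images as a stand-in for hand-writing.
# The trained model labels images it has never seen, but "which digit is
# this?" is still the only question it can answer.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=2000)
clf.fit(X_train, y_train)
print("accuracy on unseen samples:", clf.score(X_test, y_test))
```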

But what if we made it more complicated, by say training networks to do pattern recognition on patterns themselves? Or combined these networks with other AI technology that is better suited for learning causal relationships, such as Bayesian inference? That combinatorial approach is what helped IBM’s Watson become a Jeopardy! whiz. Watson got the frequently-ambiguous and complex clues as textual inputs and displayed amazing language analysis skills by coming up with the correct response. He was drawing connections between the words in the clue and concepts external to it. An incredibly impressive feat, but we still don’t say that Watson thought about the clues or understood the answer. Watson just did. He did as he was programmed to do. That gut response suggests that we believe automation and intelligence must be mutually exclusive.

But such a conviction is worrisome. It means that the bar we’re measuring intelligence by inches up anytime a machine comes close to reaching it. Somehow in the process of understanding thought, we seem to discredit any thought processes we can understand. Certainly, Watson wouldn’t pass the Turing test. He had a limited skill set and behavioral repertoire. He couldn’t tell a personal story, or laugh with the rest of us when he occasionally came up with an absurd response to a clue. But he was very capable in his realm, and the idea that he was not at all “thinking” seems in some way unfair. Perhaps not unfair to Watson, but unfair to our level of progress in understanding thinking. The idea that someday a robot could be built which by all appearances perfectly replicates human cognition can be unsettling to some people. We don’t want to think of ourselves as merely carbon-based computing machines. But we shouldn’t let hubris stand in the way of progress. We may miss the opportunity to explain thought because we are too afraid of explaining it away.

December 12, 2012 / neurograce

Uncharted Territory: How the processing of smell differs from other senses

unmappable

Anyone who has used a map (or as it’s more commonly called today, a smart phone) knows that it’s not so hard to turn the three-dimensional world into a useful 2-D representation. In fact, our brains do it instinctively. Spatial information enters via the retina, is carried along through the optic nerve and a few synapses, and eventually creates a wave of activity in the occipital lobe of the brain. This allows the external world to be mapped out in two dimensions across the surface of the visual cortex (with cells next to each other encoding areas of the visual field that are next to each other). The auditory system has an analogous process, although it may seem less intuitive. The cochlea, the snail-shaped organ in your inner ear, is responsible for creating a map of the frequencies of sounds. As that information gets passed along, low-frequency stimuli end up being processed at one end of the auditory cortex and high frequencies at the other. The somatosensory cortex is similarly mapped; cortical areas getting input from your hand are next to those getting input from your wrist, etc. But when it comes to smell, everything is a bit…muddled.
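
To make the idea of a sensory map concrete, here is a toy sketch of tonotopy: a one-dimensional strip of model cells whose preferred frequencies vary smoothly along the strip, so physical neighbors respond to acoustic neighbors. Every number in it is an arbitrary illustrative choice, not a measured property of auditory cortex.

```python
# Toy tonotopic map: preferred frequency varies smoothly (log-spaced) along a
# one-dimensional strip of model cells, so neighbors on the "cortex" respond
# to neighboring frequencies. All parameters are illustrative.
import numpy as np

n_cells = 100
preferred_hz = np.logspace(np.log10(200), np.log10(20000), n_cells)

def population_response(stimulus_hz, tuning_width_octaves=0.5):
    # Gaussian tuning in log-frequency (octave) space
    d_octaves = np.log2(preferred_hz / stimulus_hz)
    return np.exp(-0.5 * (d_octaves / tuning_width_octaves) ** 2)

response = population_response(1000.0)          # a 1 kHz tone
print("activity peaks at cell", response.argmax(), "of", n_cells)
```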

See, the organization of the cells that process smells isn’t quite so straightforward. There is not what we would call an “olfactory map” in the cortex, at least not one that scientists can recognize. The pipeline that gets information from the nose into the brain is actually quite an intriguing one. It starts with the olfactory receptor neurons in the nose coming into contact with whatever molecules are floating about in the air. Different olfactory receptor neurons will bind to different molecules, and when this happens these receptor cells become active. They then project to big chunks of cells in the olfactory bulb called glomeruli, and in turn activate these cells. Each glomerulus will only get inputs from olfactory receptor cells that respond to the same odor molecule. So there is a great convergence of information, with each glomerulus responding to specific odorants. But it all appears to be for naught, because at the very next stage of processing things get all tangled up again. The cells in the glomeruli project to the piriform cortex, but in a random way which destroys the order and segregation they initially had. Cells in the piriform cortex thus get input from a variety of receptor types and respond to specific odors based on that input. However, we don’t have the same kind of physical relationship amongst cells that we see in other sensory modalities. That is, one cell’s response to a stimulus may have nothing in common with the response of the cell directly next to it. Cell A can be highly active when there is a hint of vanilla in the air while its neighbor cell B may fire in response to a whiff of vinegar.
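
To see why this wiring scrambles any map, here is a toy sketch of the pipeline just described: receptors of the same type converge onto their own glomerulus, and the glomeruli then project to piriform-like cells through fixed but random weights. The cell counts, connection probability, and threshold are all made-up simplifications, not values from the literature.

```python
# Toy version of the olfactory pipeline described above: each receptor type
# converges onto its own glomerulus, then glomeruli project to "piriform"
# cells through fixed *random* weights, scrambling the orderly arrangement.
# Cell counts, connection probability, and threshold are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_receptor_types, n_piriform = 50, 200

# An odor activates a sparse subset of receptor types...
odor = (rng.random(n_receptor_types) < 0.1).astype(float)
glomeruli = odor  # ...and each active type lights up its own glomerulus

# Random, diffuse glomerulus-to-piriform wiring destroys the segregation.
W = (rng.random((n_piriform, n_receptor_types)) < 0.2).astype(float)
piriform = (W @ glomeruli > 1).astype(float)  # crude threshold nonlinearity

print("active piriform cells are scattered:", np.flatnonzero(piriform)[:10])
```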

This is odd because we generally view maps in cortex as serving some kind of purpose. To start, it’s energetically advantageous to have most of your connections be to nearby cells. If these connections are excitatory, then it would be nice if those nearby cells are trying to send a similar message. With this setup, local connections can be used to strengthen the signal, and with the addition of inhibitory interneurons that project farther away, weaken an opposing signal that’s a little way down the cortex. Basically, it’s easier to enhance the response of similarly-responsive cells and suppress the activity of unrelated cells if there is some meaningful layout to the cell landscape. But from our current vantage point, olfactory cortex looks like a complete jumble with no structure to be found.
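
As a toy sketch of that wiring logic, consider a one-dimensional map where each cell excites close neighbors and inhibits cells farther away (the classic center-surround arrangement); the kernel widths and gains below are arbitrary.

```python
# Toy sketch of "local excitation, longer-range inhibition" on an orderly
# 1-D map: nearby cells (carrying a similar message) reinforce each other,
# while broader inhibition suppresses cells farther down the map.
# Kernel widths and gains are arbitrary illustrative choices.
import numpy as np

n = 200
positions = np.arange(n)
dist = np.abs(positions[:, None] - positions[None, :])

excite = np.exp(-(dist / 3.0) ** 2)           # narrow, local excitation
inhibit = 0.5 * np.exp(-(dist / 15.0) ** 2)   # broader, weaker inhibition
W = excite - inhibit                          # center-surround interaction

stimulus = np.exp(-((positions - 100) / 5.0) ** 2)   # a bump of input
response = W @ stimulus
print("input peak:", stimulus.argmax(), "| response peak:", response.argmax())
```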

But then again, what kind of structure would we expect? With visual cortex it’s easy: areas that are physically close to each other in the external world should be represented by cells that are physically close to each other in the cortex. A similar notion is true for auditory cortex: frequencies that are near each other should activate cells that are near each other. But what makes a smell “near” another smell? The molecules that activate olfactory receptors, the physical things that make up what we know as smells, can have complex molecular structures that vary in numerous dimensions. One could compare, for example, molecular size, the number of carbon atoms, or what functional groups are attached to the end of a molecule. Haddad et al. actually came up with 1,664 different metrics to describe an odorant. Given all these different scales, to say that one odor molecule is similar to another doesn’t have much practical meaning, especially when we consider the fact that molecules which are incredibly similar physically can elicit completely different perceptual experiences. For example, carvone is a molecule that can smell like spearmint, but if its structure is flipped to the mirror-image form, it elicits the spicy scent of caraway seeds. So the notion of the brain being able to create a map of smells presupposes the existence of a simple relationship amongst odorants, which simply doesn’t exist.
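
To see why “similar” is so slippery here, a toy comparison (with entirely made-up descriptor values) shows that which of two odorants counts as nearer to a reference molecule can flip depending on which descriptors you compare.

```python
# Toy illustration of why odorant "similarity" is ill-defined: with many
# physico-chemical descriptors available (Haddad et al. list 1,664), which
# molecule counts as the nearest neighbor can flip depending on which
# descriptors you compare. All values below are made up.
import numpy as np

n_descriptors = 1664
reference = np.zeros(n_descriptors)
functional_idx = np.arange(50)        # pretend these 50 describe functional groups

odor_a = reference.copy()
odor_a[functional_idx] = 2.0          # big change, but only in those 50 descriptors

odor_b = reference + 0.5              # modest change spread over all 1,664

def distance(odor, idx=slice(None)):
    return np.linalg.norm((odor - reference)[idx])

print("all descriptors:   A closer than B?", distance(odor_a) < distance(odor_b))
print("50-feature subset: A closer than B?",
      distance(odor_a, functional_idx) < distance(odor_b, functional_idx))
```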

As with most mysteries of science, though, it presents a great opportunity. That is, the opportunity to be wrong. Perhaps piriform cortex does in fact have a grand organizing principle that is simply eluding our detection. The Sobel lab has produced some interesting work connecting odorant structure, receptor activity, and the perceived pleasantness of an odor. If they are able to find some kind of odor pleasure map, it would represent a new kind of topography for sensory cortex: one that is based on internal perception, not just external physical properties. This would produce a flood of new questions about how such organization develops, and the universality of smell preferences across species and individuals. What currently looks to our electrodes like evolution’s mistake could in fact be a very well planned-out olfactory city map. And learning how to read that map could lead to a whole new way of understanding our olfactory world.

December 3, 2012 / neurograce

Brain Donation: Removing the stigma in order to advance the science

According to the 2012 National Donor Registration Report Card, 101.4 million people in the US are registered organ, eye, and tissue donors. With nearly one third of Americans exhibiting such bodily generosity, we don’t seem to have a cultural problem with the notion of tissue donation. But your standard, back-of-the-driver’s-license donation commitment doesn’t cover what happens to your brain after you die. And the number of people who do seek out a specific brain donation plan is much, much smaller.

The reasons behind this “brain drain” are probably varied. For one, since we don’t have any Frankenstein-esque brain transplant technology (yet!), brain donations don’t have a direct ability to save lives. Thus, they provide less of a feel-good incentive for participants. Also, the organizations that collect brains for medical research aren’t highly publicized. So, many people simply don’t realize it’s an option, or that they have to make a separate commitment for it. But there is also the special nature of the brain that I am inclined to believe makes brain donation a uniquely difficult concept for the general public to sign on to. Most people don’t feel a strong connection between their sense of self and their left kidney. But not so with the brain. The notion of the seat of your consciousness being cut out, shipped to a lab, and treated like any other growth on a petri dish creates some understandable discomfort. We want to believe that our identity is somehow more resilient than that, that it can’t (or at least shouldn’t) simply be dissected.

But the fact is that no matter your philosophical or religious beliefs, that three-pound mass of cells isn’t going to do much for you once you’re gone. It can, however, do a lot for scientists studying human neurological diseases. Even if you die with a completely healthy brain, your donation provides important controls against which neuroscientists can compare pathological brains. Healthy blood-relatives of diseased patients are especially useful for this. In fact, the Harvard Brain Bank (the brain bank...get it?) has previously complained of a lack of normal brains with which to compare their more ample supply of diseased brains. So pick your most-hated neurological or psychological disorder, commit your brain to the study of it, and you’ll be able to rest in peace assured that you’re contributing to its cure. Here’s a list of some of your options:
Alzheimer’s
The Taub Institute at Columbia University
Boston University Alzheimer’s Disease Center
NYU Alzheimer’s Disease Center
Penn Memory Center
Parkinson’s
Parkinson’s Disease Foundation
Queen Square Brain Bank
Progressive supranuclear palsy
CurePSP
Multiple Sclerosis
National Multiple Sclerosis Society
Mental Illness (Schizophrenia, Alcoholism, etc)
Southwest Brain Bank
Using our Brains
Autism
MIND Institute
Narcolepsy
Stanford School of Medicine
Huntington’s
PREDICT-HD
Frontotemporal Degeneration
Association for Frontotemporal Degeneration
Brain Trauma
VA CSTE
Restless Leg Syndrome (no offense to the RLS people, but I imagine they have a hard time competing for brains against these other diseases)
RLS Foundation
General Brain Banks (these repositories process and store tissue and send it to a variety of labs upon request)
Brain Endowment Bank
University of Pittsburgh Medical Center
Harvard Brain Bank
The Human Brain and Spinal Fluid Resource Center
The Brain Observatory

The Brain Observatory is my personal favorite. The name is great, they do some really nice imaging work with their specimens, and I’ve always wanted to have a professional photo shoot. Also, one year ago today they took on the charge of dissecting and imaging the brain of the famously memory-impaired patient HM. Who wouldn’t want to be treated to the same star-quality experience?

A few procedural notes: Some of these banks are limited to only taking local donations (including the Brain Observatory, sadly). You may also be worried about how donating your brain might affect normal funeral procedures. The short answer is that it won’t. The removal process can be done at the hospital and is “minimally invasive” (although that description seems a bit generous). But it doesn’t cause any delays or prohibit an open casket. Importantly, since technically it is the next-of-kin that ultimately allows the donation to happen, your family has to be aware of your wishes. So sit them down, preferably not over dinner, to let them know your intentions. Or just write a post about it on your blog (hi family!).

Furthermore, your decision to donate is best made early, since some studies may want to collect some pre-mortem data from you. As the movie Head Games discusses, the VA CSTE lab at Boston University has recruited a number of current and former NFL players to participate in cognitive testing in addition to their commitment to donate. Their repeated head injuries can cause a lot of ongoing neurological changes throughout their life. This kind of longitudinal study can make the post-mortem contribution even more powerful.

To me, the urge to donate should be especially strong amongst fellow neuroscientists. We are the ones working to understand the brain. And we know the satisfaction of getting access to a lot of good quality data about it. We also know what is truly possible when we have that data. If you’ve spent your life donating the activity of your brain to neuroscience, why not give one last contribution to the field?

November 26, 2012 / neurograce

Connecting Neuroscience and Education: Not a Direct Route

Education has always been known to “shape minds” and “build brains” and do other neuro-metaphor things. So, unsurprisingly, educators have cultivated an interest in how the brain works, why certain methods work on it while others fail, and what causes individual differences amongst students. Unfortunately, for a long time, neuroscience simply had no hope of answering these questions. As a result, plenty of pseudoscience filled the void between education and neuroscience. But with advances in neuroscience, some organizations feel that the gap is now bridgeable, or at the very least, the first bricks can be laid. Columbia and Harvard both have programs within their schools of education that focus on Educational Neuroscience. And countless other “brain-based teaching” training centers exist around the world, with their focus on informing teachers about neuroscience in order to impact how they run their classrooms.

But does this make sense? Can studying neuroscience in its current state really benefit teachers or their interactions with students?  Is it practical to do so? Neuroscience is a field of unwieldy size and complexity. Only a small portion of neuroscience research is even remotely relevant to learning. But deciding what aspects to teach teachers and then teaching those in isolation is risky. It can allow for studies to be misrepresented or taken out of context, and doesn’t instill in educators an ability to discern between good and bad science. As a result, we get an epidemic of “neuromyths” amongst teachers, such as the idea of right vs left brain-ness. And novel findings, such as the discovery of mirror neurons,  get extrapolated into teaching techniques without scientific support. The artifice of neuroscience knowledge can give teaching programs the appearance of authority, such as when Dr. Mariale Hardiman of Brain-Targeted Teaching incorrectly describes different types of memory and explains their relation to current teaching techniques:

“ Unfortunately, too often what is presented in our classrooms is designed for students’ working memories-students learn information so they can retrieve it on a test or quiz then quickly forget much of it as they move on to the next topic.

During tasks that involve only working memory, the brain uses proteins that currently exist in brain synapses (Ratey, 2001). When information moves, however, from working to long-term memory systems, new proteins are created. Effective teaching can result in biochemical changes in the brain! “

Working memory! Biochemical changes! A citation! It must be real.

Even amongst those who do fully understand the neuroscience, connecting the output of these studies to teaching isn’t straightforward. Most lab studies investigating the neural mechanisms of learning and memory use animal models whose applicability to humans is in no way known. Even imaging studies done on humans during learning tasks are simplified and take place in a lab in isolation, quite the opposite of the classroom environment. Education is a pragmatic subject: a finding from a neuroscience lab suggesting that a certain method should work in theory is meaningless if that method proves ineffective in the classroom. So neuroscience’s place in the training of educators is unclear.

There are of course some places of intersection between neuroscience and education. However, these come mostly in diagnosing, monitoring, and treating learning disorders. Imaging techniques can be used to verify learning disorders, and neuro-education supporters are quick to point out that it was PET data that settled the debate over whether dyslexia was a visual or phonological problem. They also like to mention that fMRI data showing increased activity in language processing areas correlates with increased reading performance after treatment for dyslexia. But these findings haven’t aided teachers of students without disorders, and even their impact on special-needs students is unclear. It is impractical to think neuroimaging could be used on a regular basis in the classroom as a means of tracking progress. And would that be helpful anyways? Does showing that brain activity correlates with performance add any additional information that performance alone couldn’t tell you? Does it have any effect on how teachers teach? I don’t see why it should. Again, knowing the neural mechanisms behind normal or disordered learning is, in itself, useless for teachers. What matters is establishing an application of that knowledge for the classroom. And I don’t believe it is the job of teachers to establish that connection.

So, programs that attempt to teach teachers about neuroscience are, to me, impractical. That is not to say that the field of education should be cut off from the gains in knowledge that neuroscience produces. But the bridge between the two needs to be built at a higher level. Before findings from neuroscience can be helpful to teachers, they must be processed. They have to be sent through a pipeline of testing that takes a very basic hypothesis based on neural activity in a laboratory setting and sees how it survives in ever more realistic contexts. Specialists such as cognitive neuropsychologists are well-suited to help guide relevant neuroscience findings into the realm of experimental education. From there, effective techniques can be distilled into methods that teachers can implement. It won’t be a smooth process. Much like in drug development, transferring basic science to the real world exposes a host of unforeseeable complications. But in education, the only test that matters is the real world.

November 19, 2012 / neurograce

It’s All Relative: The neural processing of time and how it could relate to lifespan length

Our perception of time is a wobbly thing. We all know it flies when we’re having fun. But if you’ve ever found yourself waiting for a bus in the cold, sitting through an especially boring lecture, or tripping down a flight of stairs, then you know it also has a habit of slowing down nearly to a halt. As it turns out, our ability to keep an internal stopwatch has been crucial for our survival. It lets us know if two events could be causally related and allows for “cost-benefit analysis” by giving some measure of the amount of work we’re putting into a task. But, as with most big and important mental phenomena, we are far from a clear understanding of how it works.

Many researchers approach the question of how neurons could keep track of time by training animals to delay their activity before getting a reward. For example, as this article on the research of time perception mentions, hummingbirds were able to learn that fake flowers in the lab were refilled with nectar every 20 minutes, and returned to feed on them accordingly. Rats and other more experimentally-accessible animals can be trained to do similar behavior while having their neural activity recorded.

Results from this kind of work suggest that rather than having the equivalent of a metronome in our brains—where, say, a group of neurons is continuously firing bursts at regular intervals—time is encoded by the interactions of neurons that are firing at different times. For example, the process starts with the arrival of one event creating a wave of neural activity. If another event arrives while that wave is still present, the second event’s own activity wave will be affected by the presence of the first. So, the second event’s activity wave will be slightly different than if it had occurred after the disappearance of the first wave. The response to the second event is therefore altered in a way that is dependent on its timing relative to the first.
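
A toy sketch of this state-dependent idea, with a single leaky variable standing in for the whole population (a deliberate oversimplification): event one leaves a decaying trace, so the response to event two depends on how much of that trace remains, i.e. on the interval between the two events.

```python
# Toy sketch of the "interacting waves" idea: event 1 leaves a decaying trace
# of activity, and the network's response to event 2 depends on how much of
# that trace is still present, i.e. on the interval between the events.
# A single exponential stands in for the whole population; purely illustrative.
import numpy as np

def response_to_second_event(interval_ms, tau_ms=100.0):
    leftover = np.exp(-interval_ms / tau_ms)   # what remains of event 1's wave
    return 1.0 + 0.5 * leftover                # event 2's response is altered by it

for interval in (20, 100, 500):
    print(f"interval {interval:4d} ms -> response to event 2: "
          f"{response_to_second_event(interval):.3f}")
```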

Naturally, these kinds of mechanisms are highly dependent on the duration of the neural response, a property that is determined by a variety of factors ranging from ion channel characteristics to network structure. How (and if) any of these factors are changed during occasions where we feel a difference in our perception of time is still a mystery. It is hard to induce the sensation of time dilation or constriction in a laboratory setting, and even harder to get a rat to tell you when they’ve felt it. The Eagleman lab, however, did manage to strap computers to human subjects and have them perform cognitive tests while in free-fall (a far more entertaining study than most undergrad psych majors are subjected to). The subjects reported a sense of slowed-down time, but their performance on the cognitive tests showed no increase in their temporal resolution. They concluded from this that the feeling of having time to notice every detail doesn’t come because you are actually perceiving more information per second, but because you remember more of that perceived information. Emotionally-charged events (such as free-falling with a computer tied to your wrist) have a way of enhancing our ability to lay down memories. So, for the same reason that people can recall where they were when Kennedy got shot or you still know exactly what you were wearing when you had your first kiss, when we are in a life-threatening event we feel retrospectively that time had slowed down.

But if we want to get at neural mechanisms of temporal processing without having to push people off a cliff with a computer on their wrist AND an electrode in their head, we are going to have to work with animals. Rather than attempt to assess if lab rats feel the same perturbations in their perception of time during fear or boredom, we can try to understand the representation of temporal information by looking at cognitive behaviors that depend on it.  Specifically, we can look at timing through the lens of learning.

Classical Pavlovian conditioning says that if a stimulus (say, a bell ringing) that causes no instinctive response is repeatedly paired with a stimulus (like the presentation of food) that does cause a response (such as salivation), the initial stimulus presented alone can cause the response. And that is why Pavlov’s dogs salivate at the sound of a bell. Importantly, the ability to learn this association depends on how far apart the two stimuli are presented in time. If you ring a bell now and give the dog a steak in two days, he is unlikely to make the connection. So, a neural representation of time is necessary for this kind of training. The acceptable length of this inter-stimulus gap has been shown to vary depending on the stimulus pairings.
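
One simple way to write down that interval-dependence (a sketch of my own, not a model from the conditioning literature) is an eligibility trace: the bell leaves a decaying trace, and the bell-food association only strengthens in proportion to how much trace remains when the food arrives. Real conditioning also degrades when the gap is too short, which this toy rule ignores.

```python
# Toy sketch of interval-dependent conditioning: the bell leaves a decaying
# "eligibility trace", and the bell-food association strengthens in proportion
# to how much trace remains when the food arrives. Time constant and learning
# rate are arbitrary, and the very-short-interval case is not captured here.
import numpy as np

def learned_association(isi_seconds, n_pairings=50, tau=2.0, lr=0.1):
    w = 0.0
    trace_at_food = np.exp(-isi_seconds / tau)     # trace left by the bell
    for _ in range(n_pairings):
        w += lr * trace_at_food * (1.0 - w)        # simple saturating update
    return w

for isi in (0.5, 2.0, 10.0, 2 * 24 * 3600.0):      # half a second ... two days
    print(f"ISI {isi:>9.1f} s -> association strength {learned_association(isi):.3f}")
```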

Interestingly, for a specific variation of this conditioning (eyeblink conditioning), we know that the gap time also varies across species. The eyeblink conditioned response comes about when a tone is presented right before the subject gets a puff of air in their eye. The airpuff makes the subject blink, and eventually they blink just from hearing the tone. In humans, the amount of time between the tone and the airpuff that results in the best learning of this association is around 800ms. In rabbits, it is around 500ms. And in rats, the few studies that have been done say 280ms. This suggests something like a difference in perspective across species. Humans, rabbits, and rats have distinct opinions on how far apart in time two events can be and still have an association formed between them. These species appear to be working on different timescales. For humans, with an average lifespan of ~80 years, things are a bit dilated. And compared to a rabbit (~10 year lifespan), the temporal processing of a rat (~2 years) is constricted. Of course, this relationship between the timescale of associative learning and average lifespan hasn’t been studied extensively, and factors as simple as brain size need to be controlled for. But it would be interesting to see if the trend holds for animals with impressive lifespans, like turtles, or for such short-lived creatures as fruit flies (of course the task would have to be re-designed, since the eyeblink response presupposes that the animal can blink). And if we do find that neural dynamics are scaled to align with lifespan, countless more questions would burst forth. What is different about the neural response? Does it stem from single-cell properties or differences in network structure, or both? How is this encoded genetically, and can it be altered during development? And perhaps most mysteriously, why does this mechanism exist? What could be the advantages of such a normalization of time processing across species?

Overall, the study of temporal information processing, much like physicists’ study of time itself, is proving to be conceptually and practically challenging. It is also producing some confusing results that, when understood properly, could provide a profound advancement in our understanding of the brain. That day is probably pretty far off, but as long as we all have fun trying to get there, it will come soon enough.

November 12, 2012 / neurograce

The Big Picture: Benefits of an Interdisciplinary Field

The department of Neuroscience at the University of Pittsburgh was established in 1983. This makes my dear old alma mater home to one of the oldest such departments in the nation. They’re popping up all over the place nowadays (we’re up to 367 in the US), but make no mistake, Neuroscience as its own discipline is an infant by academic standards. So why is this? Was no one terribly interested in the brain before the 80s? Or is academic bureaucracy so toilsome that it took this long to get it recognized officially? The answer to the first is obviously no. To the second, well, yes probably, but that’s not the main reason. What makes Neuroscience so difficult to institutionalize is exactly what makes it so powerful: its incredible interdisciplinariness.

Studying neuroscience requires integrating concepts from chemistry, genetics, psychology, evolutionary and molecular biology, pharmacology, and physiology. If you want to understand the methods you better have a grasp on medicine, statistics, and physics as well. And depending on your specific interests, you might also need acoustics, mechanics, linguistics, computer science, and/or mathematics. When people started doing neuroscience research, they came at it from one of these already-established fields.  Which is why, even still, rather than earning a BS in Neuroscience, many undergrads are forced to “take the neuroscience track” of their biology, psychology, or computer science major. Neuroscience is highly complex, and does not lend itself to easy administrating.

But, as I’ve said, it is this complexity and melding of fields at all levels that endows Neuroscience with such a strong explanatory power. Because, after all, the world is not as divided as our institutions of higher learning would make it seem. We have put up these walls to make our study of the universe simpler, and indeed they have. But it is necessary, at times, to make sure they haven’t become obstacles instead. And that is what Neuroscience is doing. Its willingness to dart across disciplinary lines and snatch up any potentially useful concept or technique ensures that our progress won’t be limited by past perspectives. And, quite honestly, when you’re attacking something as mysterious and complicated as the brain, you have to be willing to take help from wherever you can get it.

There are other, more personal benefits to working between disciplines as well. Attending a biochemistry-heavy lecture on transmitter synthesis in the morning, then spending the afternoon researching computational models of decision making, is an exercise in mental dexterity. Being able to jump between conceptual mindsets is a good skill to have. Furthermore, having a hand in many different fields means you are constantly exposed to new information and whole new concepts. Each time, you are practicing the process of understanding, getting better and faster at it. This exposure also helps you remember how to be a novice. That may seem like a bad trait for a seasoned scientist, but the fact is that some of the best insights can come from those who aren’t entrenched in the pre-existing dogma. As Suzuki says, “In the beginner’s mind there are many possibilities, but in the expert’s mind there are few.” Taking the novice perspective because you are forced to as an outsider can remind you that it is good practice to consider your own positions through that lens, and to question your assumptions occasionally. As Neil Theise (a liver pathologist, complexity theorist, and fan of interdisciplinariness) stressed at a recent talk, you want to be able to see your data as it is presented to you, not as you expect it to be based on your existing beliefs. I believe practicing interdisciplinariness makes this easier. Finally, a field with a broad knowledge base can just lead to some pretty awesome ideas. My favorite example of the spoils of this in Neuroscience is groups like the Serre lab at Brown, which use computer vision to automate the tedious job of categorizing animal behavior. Thomas Serre develops computational models of visual information processing based on experimental data, and is applying those models to the design of lab equipment that can record rodent behavior and give quantitative descriptions of it to experimentalists. It’s a closed loop of neuroscience research!

Do you really want to know anything about Beowulf anymore anyhow?

Now, for all my touting of the benefits of interdisciplinariness and having a broad and open mind, I do still appreciate the need for focus and walls. The process of educating a scientist is essentially a funnel that ends in a PhD thesis based on a very specific set of knowledge. This, by necessity and design, comes at the expense of most other areas of knowledge. There simply isn’t enough time to learn everything about everything, and to try to do so would result in knowing not very much about anything. The world is indeed complicated, and we need to chunk it up into manageable bits if we want to be able to process it. I think the recent rise in interdisciplinary fields of all varieties simply reflects a need to partition things a little bit differently, and perhaps less rigidly. History has taught us that great insights come from people with broad interests: Da Vinci’s painting influenced his notions of light and color; Darwin was a naturalist, not just an ornithologist, entomologist, or botanist. Of course, not everyone can be a Darwin or a Da Vinci. The right balance is key. Specialization allows us to add to our knowledge base, one little pixel at a time. But someone needs to be looking at the big picture.

November 5, 2012 / neurograce

The Price of Ignorance: Why scientists need to educate lawmakers about scientific fact and policy

During elections, candidates talk a lot. As a result, the public can sometimes come away with a glimpse of how they truly view the world. Sadly, what has frequently been shown is the woeful scientific ignorance of some members of the political process. Recently, a lot of these alarming revelations have been in the realm of women’s reproductive organs or the mechanisms of global climate activity. But we’ve seen these kinds of knowledge lapses in every area that policy even remotely touches. From Michele Bachmann’s anecdote about the HPV vaccine causing mental retardation to Christine O’Donnell’s belief that researchers are creating mice with human brains, politicians have revealed not just a lack of scientific knowledge, but also a lack of knowledge about how science is done. And, even more startling, in some cases they seem to wear their ignorance as a badge of honor.

This problem isn’t simply an embarrassment. It is a serious threat to the policy-making process. It’s not just that these politicians aren’t “into science.” They believe things about the world that are unequivocally false. And that can, quite obviously, have huge consequences in the law, even without anyone having bad intentions. If you were, for example, putting your money into an FDIC-backed savings account, you would have no reason to prepare for the possibility of suddenly losing that money. And so, when a congressman believes that rape cannot result in pregnancy, he, just as logically, sees no reason to make allowances for that scenario.

And that is why it is so crucial to make certain that the people in power are completely and accurately informed about the topics on which they legislate. The current method of determining who gets important positions such as membership on the House Committee on Science is (surprise, surprise) based more on politics than qualifications. Which is how we end up with people like Rep. Paul Broun, who unabashedly touts his belief in a 9,000-year-old earth. However, not all politicians are so strongly anti-science. Rather, they just need some education in it. Luckily, as scientists, there are things that we can do to achieve this. A talk by former Illinois congressman John Edward Porter outlines some of the ways scientists can reach out to policy makers. He suggests offering your service as a science policy advisor to a local politician, giving talks about your work to a general audience, and making the needs of the scientific community clearly known. There is a lot of red tape that comes from the inefficient design of research funding and from inadequate infrastructure. For example, interdisciplinary work (which, almost by definition, tends to be highly innovative and promising) can have difficulty finding funding, due to “funding silos” which ensure that money stays strictly within the interests of one agency. A lack of simple procedures for collaborating with foreign scientists can also hold back a lot of potential progress. Politicians have the power to remove these roadblocks, but they need to be made aware of them first. By advancing the scientific process, we can advance the acquisition of knowledge and, hopefully (although it is proving difficult), use that knowledge to inform future policy.

There are also things that scientists can do independently of interacting directly with government officials. Making the general public properly informed about the facts behind controversial issues and the ways in which basic research has directly benefited their lives is a great place to start. Candidates have to listen to the wants of their electorate, so an informed and supportive voting base can be a scientist’s best friend. Also, being smart and efficient with the resources that we do receive through government funding will reduce any reason for policy makers to view science as wasteful. There will always be bouts of difficult financial times where budgetary decisions will need to be made, and it will be important for research funding to not be labeled as pork.

But these kinds of efforts aren’t simply about getting more money for scientists. As a nation, how we choose to view and invest in research and the scientific process will determine our level of progress and success. To live in a modern dark age where scientific illiteracy is accepted will inevitably stunt our growth as a culture. Although many advocacy groups probably also feel strongly that their goals are important for society as a whole, I really do believe that the science lobby is not a special interest. We are not doing work to try to sway opinions, but rather to determine facts. And the results of our work can have unforeseen benefits and ripples throughout many aspects of daily life. For that reason, getting the science right in law-making cannot be optional.

These beliefs don’t by their very nature need to be partisan. Sadly though, in our current political state, it would seem that when a wholly false scientific claim is uttered, it comes from the mouth of a Republican. It is also the Republican Party that is more likely to embrace a culture of scientific ignorance. And specific policy choices regarding climate legislation and funding priorities reflect that. Furthermore, there are some projects and initiatives that the Republican mantra of “Let the private sector do it!” simply could never achieve. The expense of basic research is not something most private industries see a benefit in taking on. Yet they use the results of such government-funded work to create new applications and technologies. This process is mutually beneficial as it allows companies to be successful and the work of scientists to have real-world implications. But to pretend that such a setup could exist in the absence of large government support is absurd.  For these reasons, as a scientist and person who supports fact-based reasoning, I cannot support the Republican Party.

Of course, each candidate and each race is different, so it is important to stay informed. ScienceDebate.org and the AAAS are great ways to do just that. And if you’re interested in doing the informing yourself, there are a variety of opportunities for scientists to assist politicians in shaping their science policy. Research America has an advocacy goal and provides resources accordingly. The AAAS also offers an amazing fellowship that places PhDs in government positions where “science translators” are needed (I’ll let you know the year that I plan to apply so that all you other qualified applicants can back off). And of course it’s always possible to simply contact government officials directly with your concerns or assistance.

Oh, and don’t forget to vote!

October 30, 2012 / neurograce

The “Big Science” approach to Neuroscience and why it succeeds even if it fails

“Big Science” is a way of going about research and discovery that, being generally logical, systematic, unified, and well-funded, seems to stand in contradiction to how science is normally done. It comes about when some large, unifying entity (the government, a coalition of nations, a very wealthy investor) decides that a problem is important and recruits an army of scientists to get it solved. Prominent examples include the Manhattan Project, the Human Genome Project, and the Large Hadron Collider. The field of Neuroscience has been experiencing its own “Big Science” boom as of late and, similarly to the above examples, both critics and supporters have been vocal.
The biggest target of scorn/praise is easily Henry Markram and his Blue Brain project. His stated goal of reverse engineering and replicating the human brain in a supercomputer in 10 years has been politely described as ambitious. What his group is currently drudging through is the creation of minutely-detailed models of cortical columns, including everything from specific connectivity properties down to dendritic spine structures. It’s the type of data-heavy project that could only be achieved through Big Science. But despite its monopoly on press coverage, Blue Brain isn’t the only project of such caliber. The Allen Institute for Brain Science is pursuing similar goals, one of which is the atlasing of the entire brain, including not just physical structures but gene expression as well. Their “high-throughput pipeline” for data harvesting is strict, standardized, and state-of-the-art in order to most efficiently produce results. There are also the Connectome Projects, for mouse and human brains. These NIH-backed endeavors span several universities and have the goal of creating a detailed connectivity map of the entire brain (down to the cellular level for the mouse). Again, this is achieved through systematic methodologies and procedures, focused on anatomical imaging.

So what is it that critics of Big Neuroscience object to? There are people who disagree with the very science. They claim that Blue Brain’s level of detail is unnecessary, or that knowing the mouse connectome won’t explain the brain. Another complaint is that the goals, despite appearances, are actually ill-defined and untestable: How do we know that the cortical column model is accurate and complete? What does it mean to obtain a cellular-level connectome when there is animal-to-animal variance at that level? Another major component of the critical choir is that the goals are not achievable. We simply don’t have the experimental techniques available yet to accurately model every neuron morphology or map every synapse. And throwing a bunch of scientists at that problem who are systematically repeating the same experiment won’t fix that. We also don’t even know what we need to know. As Bradley Voytek said in his post about Blue Brain, “the neurosciences right now are where physics was in the early 1900s. A bunch of people thought Newtonian mechanics could explain everything. Turns out, the physical universe is much more complicated than that.”

There is another theme to the opposition, and it stems from a concern over the public perception of science. These kinds of projects tend to draw mainstream media attention, and are therefore subject to some misinterpretation, simplification, and sensationalizing. It doesn’t help when the lead scientists themselves are party to the over-hyping. Blue Brain’s own self-description contains a smattering of “exceptional”, “perfected”, and “complete virtual brain”. The fear is that the intensity of these challenges will be under-reported and the payoffs over-reported. This can create unrealistic expectations that, when not fulfilled, will reflect poorly on the field. This is especially problematic given the amount of money involved: The NIH gave a total of $38.5 million for the Human Connectome Project. The Allen Institute started its work with $100 million from Paul Allen. Blue Brain is currently seeking a one billion Euro award from Future and Emerging Technologies Flagship Initiatives. With the field’s reputation and that kind of cash on the line, success is more important than ever. But some say putting the emphasis on some kind of public measure of success, rather than good science (which routinely ends in failures) is a dangerous practice.

But even with all these arguments of varying validity, I stand in support of the Big Science. If all these projects live up to their hype, the leaps in knowledge will be incredible. But even if they don’t, there will be a wealth of collateral benefits from the process itself. Attempting projects that require such detail has, for example, revealed a systemic problem in our data reporting policies. Both Sean Hill of Blue Brain and Christof Koch of the Allen Institute have cited the insufficient detail found in the current literature as a reason for doing experiments in-house. In this way, a by-product of these large-scale data collection and modeling projects is pinpointing the weaknesses in normal small-science practices. It is especially helpful in advancing neuroinformatics which, as we all know, is essential for advancing the field as a whole. Because of the nature of their experiments, these projects create a huge database of neuronal properties that are both uniformly sampled and thoroughly recorded. And with the (worrisome) exception of Blue Brain, these projects make much of their data available to the public. This not only provides the research community with the actual data, but also with a standardized form for reporting such data and, in some cases, a model to plug it into. So even if the critics are right, and the goals of the project are too ambitious for our time, the infrastructure will be there waiting when the experimental results are achievable. These projects also spur advances in the automation of experimental procedures, since such automation is necessary to achieve results at the required rate. So, merely attempting Big Neuroscience on a specific problem can create positive ripples across the field.

Finally, despite the different approach, Big Science projects ultimately have the same objectives as any normal laboratory. Blue Brain’s detailed cortical model, for example, is just a means of finding out which aspects of neural networks are essential for computation. But rather than the usual approach of working from the simplest model and adding complexity, they take the opposite route. Many labs across the world investigate gene expression in their brain region of interest and correlate it with the behavior or disease they study. The Allen Institute is simply gathering all that information in one swoop. Even the over-hyping isn’t unique to Big Science; professors talk up their work to the media (and grant readers) all the time. And the effect is probably a net positive, as the start of a new project is likely to bring attention and favor to the field, while any failure is less likely to gain coverage. And even if these projects do fail by whatever measure they are being held to, that will provide an opportunity, just like in any scientific failure, to discover why. And to then apply those lessons to the next project, big or small. The benefits of these behemoths, in any case, outweigh the risks. So, let’s put away our pitchforks, and let the giants live.

October 21, 2012 / neurograce

Society for Neuroscience Conference: A Paradox

A small sampling of posters

S-F-N. Those three letters are instantly recognizable to any card-carrying neuroscientist. While they technically stand for “Society for Neuroscience”, what they usually refer to is the giant conference held by that society each year. This behemoth event brings on average over 30,000 researchers, presenters, exhibitors, and vendors together. Every hotel in town is jam-packed with neuroscientists and, after 5pm, so is every bar. You’d be hard-pressed to find a flight that doesn’t have an attendant struggling with poster tubes, or a restaurant whose diners aren’t donning SfN lanyards. It is a scientific spectacle beyond compare.

And it is that grandness, that sheer size, that makes SfN the conference that it is. But it is also what makes SfN so hard to love. It is absolutely overwhelming. Between the lectures, mini-symposiums, and nano-symposiums at any given moment there are roughly 20 talks you could be listening to. Feel like going to a poster? Each of the twice-daily poster sessions offers about 1775 options at a time. It’s unwieldy. However, the variety of topics ensures that you will find at least a quarter of what is being presented completely unappealing. So what you have, in essence, is a handful of smaller conferences going on in parallel: with the neuro-pathologists hanging out at symposiums about Parkinson’s mouse models, the bioengineers sticking to the BCI poster row, and psychologists seeking out anything fMRI. It may seem contradictory, but at the conference with the widest range of neuroscience research available, it is incredibly easy to stay wrapped in the cocoon of your sub-domain. Your best bet for getting a glimpse into another field is to catch some of the poster titles you sprint past on your journey from poster A47 to FFF64.

But maybe you’ll be going too fast, because SfN isn’t just intellectually big. 30,000 people necessitate a big space as well. The New Orleans Morial Convention Center, where the conference was just held, is three floors and over a kilometer long, and we took up all of it. And as with any conference center, the space is not physically appealing. There are drab colors, bad lighting, and a temperature consistently three degrees below the level of human comfort. With this many people, you probably also find yourself zig-zagging through slow-moving crowds and waiting in a 20-minute line to buy a $10 sandwich.

So why do we do it? Why do so many (hopefully) logical people drag themselves to this monster each year? It can’t simply be for the science. There are countless other, smaller conferences that focus on the more narrow fields of research that people end up sticking to at SfN anyways. No, I would argue that it is in fact the hugeness of it that makes people come. Yes, walking through the sea of projects and presenters can make you feel very small, and like your work is insignificant or irrelevant. But when you consider yourself as part of this whole, there is a sense of solidarity and the notion that this field is progressing. We’re having a conversation. And while we may have different dialects, we’re all speaking the same language. So for all the headaches that come from a gathering this size, there is also great power in it, and promise.

And, of course, we mustn’t forget the second great driver of SfN attendance: socializing. SfN is attended by scientists from all different countries, from all different labs, and in all different stages of their careers. It can be a great opportunity to network, and to catch up with old colleagues and meet potential new ones. Also, a talk over dinner can end up accomplishing a lot more than lengthy emails and Skype calls can when collaborating long-distance. And nothing helps new co-workers bond like working towards the mutual goal of getting back to the hotel in an unknown town at 3am. The socializing may seem like a by-product of the conference, but (as many SfN attendees are surprisingly willing to admit) it can be the most crucial and productive aspect.

So despite all the negatives of SfN: the exhausting size, the sensory and intellectual overload, the awkwardness that comes when my attempt to get a free Zeiss tote bag devolves into an elaborate lie about how my lab is in the market for a new multi-photon microscope…. I digress. But even with all that, I wouldn’t want to dissolve SfN. It is a unique (and for first-timers, potentially emotional) experience. Being confronted annually with the vast knowledge being produced, as well as the far vaster amount still missing, can be a helpful exercise for neuroscientists of all levels. The size and disjointedness of the conference exactly reflect the field itself, and that is something we shouldn’t ignore. In some ways SfN is like the internet: it has the depth to ensure that a user can spend all their time completely enveloped in the narrow topic they set out to find, but it also contains the breadth to allow exploration for those who seek it. It is not the most efficient way to disseminate our knowledge, but it is perhaps the most unifying. There may be a lot of divisions within our field, but when we invade and overtake a city each fall, we don’t do it as electrophysiologists or molecular biologists or mathematicians; we do it as one giant horde of SfNers.

October 8, 2012 / neurograce

The future of brain-computer interface: A glimpse into the nano-membrane filled crystal ball

Brain-computer interface is… just what it sounds like. Some device is used to transfer information about the brain’s activity to a computer, or vice versa. So we end up with two flavors of BCI: recording (for the brain-to-computer direction), and stimulating (for the computer-to-brain path). Utilizing the lucky fact that electrical signals are the language of the brain, we can do both of these things with tiny electrodes placed either in, on, or next to cells. And with this dual pathway we can then read off motor cortex information in order to move a prosthetic limb, or send seizure-combating signals into an epileptic brain. It seems so simple, right?

The old way…

But, like the Texas penal system, what we’ve got is an execution problem. There is no easy way to get those electrodes to the cells. And once they’re in, there are still risks of infection or rejection. The signal is also subject to degradation by movement, electrical noise, or electrode deterioration. These are problems routinely faced by researchers working with animals, and they are only magnified tenfold when the possibility of human applications arises. Right now, non-invasive techniques such as EEG and fMRI are the most common methods of reading brain activity in humans. But none of those options have the spatial or temporal resolution to provide meaningful data about real-time information processing, which is critical for the goals of any BCI. For useful data, we need those tiny electrodes. But as long as implanting them remains such an invasive, messy, and potentially dangerous procedure, it won’t be available to the masses.

Now, it should be noted, there is the minor matter of knowing what those neural signals that we record actually mean. Or figuring out how to make a pattern of stimulation that has the intended effect. Cracking the neural code is obviously key to interfacing with the brain; it’s hard to have a conversation when you don’t speak the language. And right now we are far from fluent. But it is an area of huge research effort, and I believe the breakthroughs are coming. However, without the ability to utilize that knowledge in a physically plausible way, we don’t stand to gain much from it. And that is why our lack of a good implementation is so problematic.
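
For the recording direction, a common first-pass in the research literature is a linear decoder, and a toy sketch of that idea is below: fit a mapping from recorded firing rates to, say, intended hand velocity, then apply it to new activity. The synthetic data and plain least-squares fit are placeholders of my own, not anyone’s actual pipeline.

```python
# Toy sketch of the "reading off motor cortex" step: fit a linear decoder that
# maps recorded firing rates to intended 2-D hand velocity, then apply it to
# held-out activity. The neural data are synthetic and the least-squares fit
# is a stand-in for the real decoding problem, not an actual BCI pipeline.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_neurons = 500, 40

velocity = rng.normal(size=(n_samples, 2))                  # "intended" (vx, vy)
tuning = rng.normal(size=(2, n_neurons))                    # each cell's direction tuning
rates = velocity @ tuning + 0.5 * rng.normal(size=(n_samples, n_neurons))

train, test = slice(0, 400), slice(400, None)
decoder, *_ = np.linalg.lstsq(rates[train], velocity[train], rcond=None)

predicted = rates[test] @ decoder
error = np.mean(np.linalg.norm(predicted - velocity[test], axis=1))
print(f"mean decoding error on held-out samples: {error:.3f}")
```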

So, what then is the future of BCI execution? I’d put my money, if I had any, into nanotechnology. And not (entirely) because it just sounds futuristic. The fact is, the process of opening up the skull, placing a hard and very foreign object onto the brain, and closing it back up again is never going to be a clean one. We need our BCI devices to be more natural, and to deliver them in a non-invasive way. And the only way to do that is if they are very small, and made of more bio-friendly materials. And that is exactly what nanotechnology researchers such as John Rogers are working on. His group’s recent Science paper (and his appearance on NPR) describes a biodegradable electrode, which could theoretically be implanted in the brain, utilized for diagnostic purposes, and then be allowed to dissolve away safely. Longer-lasting versions could be used for months or even years. Furthermore, these kinds of ‘soft’ electrodes can fit better to the curves of the brain, leading to better signal quality. Another nanotech company has developed a coating for traditional electrodes that enhances the signal quality and longevity by reducing the response of the immune system to the electrode. So, through these measures, some of the danger of neural implantation can be reduced and the quality of recordings increased.

The future!

There is still, however, the issue of delivery. But do not fear; nanotechnology has the answer to that too! Well, potentially. Because nanomembranes are so small, thin, and pliable, they can easily survive being delivered via injection (as this study looking at nano-scaffolds for bone regeneration shows). There is also already precedent for the injection of neural-stimulating electrodes. The BION system involves injecting traditional electrodes into peripheral nerves via a hypodermic needle. Those electrodes are then powered and controlled wirelessly, and allow for stimulation and recording of motor neurons. Now, to translate this method into a useful BCI application would require getting the electrodes into the brain, not just the peripheral nervous system. And that is a problem of another magnitude, not easily solved even with nanomembranes in your toolbox. The blood-brain barrier is just annoyingly particular about what it lets through. And even if you could get the nanomembrane electrode in, you’d need to have a way to control where it plants itself. Trying to move a prosthetic arm with neural signals coming from your visual cortex is not advisable. So the injection would probably have to be targeted, meaning there are still some serious obstacles to a completely non-invasive procedure. But we have to leave some problems for future scientists…

Maybe the notion of tiny, injectable brain-controlling devices sounds crazy (or terrifying) to you. But it is the general direction we must go in if we want to really utilize all the knowledge that we’re painstakingly gathering about how the brain works. Potential applications aren’t limited just to disease treatment or prosthetic limb control. Even perfectly healthy people could find BCI beneficial. The gaming industry, for example, has already wet its feet in the BCI pool, using EPOC headsets to add another element of control to games. But these superfluous applications are only sensible if the BCI is incredibly low-risk. No one would endanger their health to make World of Warcraft slightly more entertaining… Ok, maybe some people would, but the FDA isn’t going to approve it.  So we are a long way off from BCI impacting the everyday life of the average person. But all crazy technology had to start somewhere. I’m excited to see how this particular one develops, and how quickly we can get ourselves into the future.