Category Archives: Brain

Scientists discover how the brain’s hypothalamus controls ageing – and manage to slow it down

If you are reading this and you don’t smoke, then your major risk factor for dying is probably your age. That’s because we have nearly eliminated mortality in early life, thanks to advances in science and engineering. But despite this progress, we still haven’t worked out how to eliminate the damaging effects of ageing itself.

Now a new study in mice, published in Nature, reveals that stem cells (a type of cell that can develop into many other types) in a specific area of the brain regulate ageing. The team even managed to slow down and speed up the ageing process by transplanting or deleting stem cells in the region.

The gap between generations is levelling out.

Ageing poses an important challenge for society. By 2050, there will be as many old people (age 65+) as children (under 15) on Earth for the first time. This change is reflected in unprecedented stress on our health and social care systems. Understanding how we can keep ourselves in good health as we age is becoming increasingly important.

The mechanisms that keep organisms healthy are relatively few in number and conserved between species, which means we can learn a lot about them by studying animals such as mice. Among the most important are senescent cells – dysfunctional cells which build up as we age and cause damage to tissue – chronic inflammation and exhaustion of stem cells. These mechanisms are thought to be connected at the cell and tissue level. As with a ring of dominoes, a fall anywhere can trigger a catastrophic collapse.

Vanishing cells

The researchers behind the new paper were studying the mouse hypothalamus, which we’ve known for some time controls ageing. This almond-sized structure at the centre of the brain links the nervous and endocrine (hormone) systems. The hypothalamus helps regulate many basic needs and behaviours including hunger, sleep, fear and aggression. In the human brain, initiation of behaviours is usually complex, but if you flee in blind panic or find yourself in a blazing rage, then your hypothalamus is temporarily in the driving seat.

The hypothalamus in the human brain. Life Sciences Database

The team looked at a specialised group of stem cells within the hypothalamus and monitored what happened to them as cohorts of mice aged. Mice normally live for about two years, but the researchers found that these cells began to disappear by about 11 months. By 22 months, they had vanished completely. The rate at which the stem cells were lost closely correlated with ageing changes in the animals, such as declines in learning, memory, sociability, muscle endurance and athletic performance.

But correlation doesn’t mean causation. To find out whether the decline was causing these ageing changes, the researchers deleted stem cells using a specially engineered virus that would only kill them in the presence of the drug ganciclovir. In 15-month-old mice, this virus–drug combination destroyed 70% of the hypothalamic stem cells. The animals prematurely displayed signs of ageing and died roughly 200 days earlier as a result. That’s significant, given that mice only live for about 730 days.

The group also implanted hypothalamic stem cells from newborn mice into middle-aged animals. In this case, the animals became more social, performed better cognitively and lived about 200 days longer than they otherwise would have.

These experiments also provided clues to how the hypothalamic stem cells were being lost in the first place. The implantation only worked when the stem cells had been genetically engineered to be resistant to inflammation. It seems that, as the animals aged, chronic, low-grade inflammation in the hypothalamus increased.

This inflammation is probably caused either by the accumulation of senescent cells or surrounding neurons entering a senescent-like state. Inflammation kills the hypothalamic stem cells because they are the most sensitive to damage. This then disrupts the function of the hypothalamus with knock-on effects throughout the organism. And so the dominoes fall.

Elixir of youth?

The ultimate goal of ageing research is identifying pharmaceutical targets or lifestyle interventions that improve human health in later life. While this is a study in mice, if we can show that the same mechanisms are at play in humans we might one day be able to use a similar technique to improve health in later life. But this remains a long way in the future.

Other interventions, such as removing senescent cells, also improve health, extending life by up to 180 days in mice. A logical next step is to see if these interventions “stack”.

Could we stop unsuccessful ageing in humans with the same technique?
Evgeny Atamanenko

The study also demonstrates that hypothalamic stem cells exert major effects through secreting miRNAs, which control many aspects of how cells function. MiRNAs are short, non-coding RNAs – molecules that are simpler than DNA but can also encode information. When miRNAs alone were supplied to mice lacking stem cells, the animals showed similar improvements to those that received the stem-cell treatment.

The delivery of miRNAs as drugs is still in its infancy, but the study suggests two potential routes: replenishing a hypothalamus denuded of stem cells, or preventing their loss in the first place by controlling the inflammation. The latter might be achieved either through the development of drugs which kill senescent cells or the use of anti-inflammatory compounds.

The research is important because it elegantly demonstrates how different health maintenance mechanisms interact. However, one downside is that only male mice were used. It is well known that the structure of the hypothalamus differs markedly between the sexes. Drugs and mutations which extend lifespan also usually show markedly different potency between males and females.

Whether humans will ever be able to live significantly longer than the current maximum lifespan of 125 years is hard to tell. But it seems the greatest barrier to a healthy later life is no longer the rate of progress but the speed with which we can turn our growing knowledge of the biology of ageing into drugs and lifestyle advice.

Richard Faragher, Professor of Biogerontology, University of Brighton

This article was originally published on The Conversation. Read the original article.

The brain: a radical rethink is needed to understand it

Henrik Jörntell, Lund University

Understanding the human brain is arguably the greatest challenge of modern science. The leading approach for most of the past 200 years has been to link its functions to different brain regions or even individual neurons (brain cells). But recent research increasingly suggests that we may be taking completely the wrong path if we are to ever understand the human mind.

The idea that the brain is made up of numerous regions that perform specific tasks is known as “modularity”. And, at first glance, it has been successful. For example, it can provide an explanation for how we recognise faces by activating a chain of specific brain regions in the occipital and temporal lobes. Bodies, however, are processed by a different set of brain regions. And scientists believe that yet other areas – memory regions – help combine these perceptual stimuli to create holistic representations of people. The activity of certain brain areas has also been linked to specific conditions and diseases.

This approach has been so popular partly because of technologies which are giving us unprecedented insight into the brain. Functional magnetic resonance imaging (fMRI), which tracks changes in blood flow in the brain, allows scientists to see brain areas light up in response to activities – helping them map functions. Meanwhile, optogenetics – a technique that uses genetic modification of neurons so that their electrical activity can be controlled with light pulses – can help us to explore their specific contribution to brain function.

An fMRI scan during working memory tasks.
John Graner/wikipedia

While both approaches generate fascinating results, it is not clear whether they will ever provide a meaningful understanding of the brain. A neuroscientist who finds a correlation between a neuron or brain region and a specific but, in principle, arbitrary physical parameter, such as pain, will be tempted to conclude that this neuron or this part of the brain controls pain. This is ironic because, even in the neuroscientist’s own head, the brain’s inherent function is to find correlations – whatever task it performs.

But what if we instead considered the possibility that all brain functions are distributed across the brain and that all parts of the brain contribute to all functions? If that is the case, correlations found so far may be a perfect trap of the intellect. We then have to solve the problem of how the region or the neuron type with the specific function interacts with other parts of the brain to generate meaningful, integrated behaviour. So far, there is no general solution to this problem – just hypotheses in specific cases, such as for recognising people.

The problem can be illustrated by a recent study which found that the psychedelic drug LSD can disrupt the modular organisation that can explain vision. What’s more, the level of disorganisation is linked with the severity of the “breakdown of the self” that people commonly experience when taking the drug. The study found that the drug affected the way that several brain regions were communicating with the rest of the brain, increasing their level of connectivity. So if we ever want to understand what our sense of self really is, we need to understand the underlying connectivity between brain regions as part of a complex network.

A way forward?

Some researchers now believe the brain and its diseases in general can only be understood as an interplay between tremendous numbers of neurons distributed across the central nervous system. The function of any one neuron is dependent on the functions of all the thousands of neurons it is connected to. These, in turn, are dependent on those of others. The same region or the same neuron may be used across a huge number of contexts, but have different specific functions depending on the context.

It may indeed be a tiny perturbation of these interplays between neurons that, through avalanche effects in the networks, causes conditions like depression or Parkinson’s disease. Either way, we need to understand the mechanisms of the networks in order to understand the causes and symptoms of these diseases. Without the full picture, we are not likely to be able to successfully cure these and many other conditions.

Map of neural connections.
Thomas Schultz/wikimedia, CC BY-SA

In particular, neuroscience needs to start investigating how network configurations arise from the brain’s lifelong attempts to make sense of the world. We also need to get a clear picture of how the cortex, brainstem and cerebellum interact together with the muscles and the tens of thousands of optical and mechanical sensors of our bodies to create one, integrated picture.

Connecting back to the physical reality is the only way to understand how information is represented in the brain. One of the reasons we have a nervous system in the first place is that the evolution of mobility required a controlling system. Cognitive, mental functions – and even thoughts – can be regarded as mechanisms that evolved in order to better plan for the consequences of movement and actions.

So the way forward for neuroscience may be to focus more on general neural recordings (with optogenetics or fMRI) – without aiming to hold each neuron or brain region responsible for any particular function. This could be fed into theoretical network research, which has the potential to account for a variety of observations and provide an integrated functional explanation. In fact, such a theory should help us design experiments, rather than only the other way around.

Major hurdles

It won’t be easy though. Current technologies are expensive – there are major financial resources as well as national and international prestige invested in them. Another obstacle is that the human mind tends to prefer simpler solutions over complex explanations, even if the former can have limited power to explain findings.

The entire relationship between neuroscience and the pharmaceutical industry is also built on the modular model. Typical strategies when it comes to common neurological and psychiatric diseases are to identify one type of receptor in the brain that can be targeted with drugs to solve the whole problem.

For example, SSRIs – which block the reuptake of serotonin in the brain so that more is freely available – are currently used to treat a number of different mental health problems, including depression. But they don’t work for many patients and there may be a placebo effect involved when they do.

Similarly, epilepsy is today widely seen as a single disease and is treated with anticonvulsant drugs, which work by dampening the activity of all neurons. Such drugs don’t work for everyone either. Indeed, it could be that any minute perturbation of the circuits in the brain – arising from one of thousands of different triggers unique to each patient – could push the brain into an epileptic state.

In this way, neuroscience is gradually losing its compass on its purported path towards understanding the brain. It’s absolutely crucial that we get it right. Not only could it be the key to understanding some of the biggest mysteries known to science – such as consciousness – it could also help treat a huge range of debilitating and costly health problems.

Henrik Jörntell, Senior Lecturer in Neuroscience, Lund University

Do 3D films make you dizzy – or is it just your imagination?

The realism of today’s 3D blockbusters can blow audiences away. By using 3D glasses to present different images to the two eyes, stereoscopic 3D technology fools the brain into believing it is viewing a real scene rather than a flat image on a screen. Now 3D televisions enable viewers to experience the effect at home as well.

Yet 3D has not become as popular as some might have hoped. Many people say watching 3D gives them unpleasant side-effects such as headache or nausea. Scientists don’t fully understand why this is. It’s true that badly made 3D effects can cause discomfort. However, makers of 3D content are well aware of the possible issues and work hard to avoid them.

A more fundamental problem may be conflict between different senses. When we watch a film such as Avatar, our visual system may tell us that we are wheeling high in the skies of a distant moon, but other senses tell us that we are sitting motionless in a chair. Of course, 2D films present this kind of conflict as well, but our brains may simply be more used to accepting that 2D content is not “real”.

Some people have suggested that 3D content may cause more serious side effects. For example, Samsung’s safety leaflet links its 3D TV set to a vast range of possible symptoms – not only headache, fatigue, motion sickness and eye strain, but also decreased postural stability, altered vision, dizziness, cramps, convulsions and even loss of awareness. Clearly if 3D TV has such effects, there are important safety implications. But to date, very little work has been done to assess this.

We recently invited 433 volunteers, aged from 4 to 82 years, into our lab to watch the film Toy Story on either a 2D or 3D TV. We used two common types of 3D TV, known as “active” and “passive”. Participants carried out a battery of tests designed to assess their balance and coordination, both before and after viewing. They wore two triaxial accelerometers – small devices to record their body movements – as they walked around a simple obstacle course. To assess eye-hand coordination, participants played a “buzz the wire” game, guiding a hoop along a convoluted wire track without allowing the two to come into contact.

We argued that, if viewing 3D made participants dizzy, they would take longer to complete the obstacle course, and/or the accelerometers would show that their body movements were less stable. If it affected their vision, they would take longer to complete the “buzz the wire” game, and/or make more mistakes.
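To make the prediction about “less stable” body movements concrete, a triaxial accelerometer trace can be reduced to a single sway score. This is only a minimal sketch: the metric and the sample data below are invented for illustration, and the study’s actual analysis is not described here in that detail.

```python
import math

def sway_rms(samples):
    """Root-mean-square deviation of acceleration magnitude from its mean.

    `samples` is a sequence of (x, y, z) accelerometer readings; a larger
    value suggests less stable movement. This is just one simple stability
    metric among many, chosen purely for illustration.
    """
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    mean = sum(mags) / len(mags)
    return math.sqrt(sum((m - mean) ** 2 for m in mags) / len(mags))

steady = [(0.0, 0.0, 9.81)] * 50                          # near-constant gravity vector
wobbly = [(0.0, 0.0, 9.81 + (i % 2)) for i in range(50)]  # alternating 1 m/s^2 jolt

print(sway_rms(steady))  # 0.0
print(sway_rms(wobbly))  # ~0.5
```

A perfectly steady walker scores zero; any wobble pushes the score up, which is the pattern the accelerometer comparison would look for.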

Some people have suggested that adverse effects with 3D reflect underlying visual problems. So we also had our volunteers’ vision thoroughly assessed by eye care professionals before they visited the lab.

Of course, Holly’s nausea had nothing to do with the 1kg of popcorn she’d just eaten.

On our objective tests of balance and coordination, we couldn’t detect any effects of 3D at all. Not surprisingly, people tended to perform a little better the second time round. But it didn’t seem to matter whether they had watched the film in 2D or 3D, or whether the 3D was active or passive. We also couldn’t find any links between age or eyesight and whether people were affected by 3D.

We did find that people who had viewed the 3D movie reported that the depth was more realistic. They also reported more adverse effects, mainly headache and eye strain, but also including dizziness or nausea. However, it’s not clear that the dizziness was really due to 3D.

Craftily, we gave some of our volunteers 3D glasses, making them think they were viewing in 3D, but showed them the film in 2D. These people reported dizziness at about the same rate (3%) as those viewing real 3D. In contrast, people viewing real 3D were much more likely to report headache or eyestrain (around 10%) than people who just thought they were viewing 3D. This suggests that while 3D gives some people a headache, it doesn’t really make people dizzy – people just expect it to.
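Whether a rate like 3% in one group genuinely differs from 10% in another is a standard two-proportion comparison. A minimal sketch, with hypothetical counts, since the article reports percentages rather than subgroup sizes:

```python
import math

def two_proportion_z(k1, n1, k2, n2):
    """z statistic for comparing two independent proportions (pooled standard error)."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 10 of 100 real-3D viewers report headache vs 3 of 100 sham-3D viewers
z = two_proportion_z(10, 100, 3, 100)
print(round(z, 2))  # ~2.01, beyond the 1.96 threshold for p < 0.05 (two-tailed)
```

With counts like these, the headache difference clears the conventional significance threshold while an identical 3% vs 3% dizziness rate (z = 0) would not, which is the shape of the argument the study makes.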

Of course, it’s possible that 3D caused an impairment that was so subtle or transient that our tests failed to detect it. On the other hand, that also implies less cause for concern in everyday life. We also tested only one 3D film, choosing Toy Story as something fun and engaging for all age-groups. Even if computer-generated 3D from the experts at Pixar doesn’t cause dizziness, it remains possible that less carefully controlled 3D content – say, live-action football – could do so.

Nevertheless, given the lack of previous work in this area, our study provides welcome reassurance. Can 3D effects give you a headache? Yes, for some people. Can they make you dizzy? Probably not. Do they make Toy Story more exciting? That depends who’s watching.

The Conversation

Jenny Read is Reader in vision science at Newcastle University.

This article was originally published on The Conversation.
Read the original article.

Video gamers are sexy, or at least they think they are

There’s abundant research – and controversy – on the effects of playing violent video games. But, strangely, there’s precious little looking at why people choose to play violent games at all.

So we, along with our colleagues, decided to investigate this question. What we found not only challenged the stereotype that only men enjoy violent games, it also revealed something else that was rather unexpected: the motivation to play violent video games is closely connected to people’s desire for sex.

Breaking stereotypes

Picture someone playing a violent video game. Chances are, you’ve just imagined a young male shooting or stabbing something. That’s the prevailing stereotype that informs our perception of gamers and violent video games.

This stereotype has been particularly damaging in the technology industry because it has made it difficult for women to be taken seriously in the game development community, and made many of them feel unwelcome online.

Even though the Entertainment Software Association (ESA) has shown that men and women now make up an equal proportion of gamers, many articles and discussions revolve around separating the sexes in what they play.

But this simplistic view of gamers is a perfect example of how stereotypes cloud our perception of psychological differences between the sexes. To explore whether gender really is the best determinant of what we play, we decided to move past standard gender-violence stereotypes.

Video games and sex

We surveyed 500 American adult male and female video game players to try to better understand what people were playing, and why. So we asked them to tell us which five games they were currently playing and how violent they felt the games were.

We did indeed find that men preferred violent games more than women did. But the difference between the sexes was not nearly as large as you might expect. So it turns out that both women and men enjoy violent games, at least among self-declared gamers.

Next we asked them to tell us their views about sex and their interest in having sex, using the Sexual Openness Inventory. This asks how important sex is to people, how likely they are to engage in it, and their preferences for casual sex.

We also asked them how they perceive themselves as a potential romantic partner, a quality called “mate value”. Individuals who say they have a higher mate value basically perceive themselves as a “better catch” to romantic partners.

Not all stereotypes about gamers are true.
Luke Hayfield/Flickr, CC BY

Don’t shoot me, love me

We found that participants’ desire for sex was correlated with their violent video game play. Both men and women who said that they were more interested in sex played more violent video games.

But the most interesting results were found when we looked at mate value. There was no correlation between the amount of violent video games that men played and what they thought of themselves as a partner.

However, there was a strong correlation in women. So the women who played violent video games more thought of themselves as a better catch than those who played them less.

To make sure we were on the right track, we replicated our findings with a second study with another 500 individuals. We found the same results.

We also extended our initial findings by asking participants to rate the extent to which gaming made them feel strong and sexy, and more attractive. We found that women were more motivated to play violent video games because doing so made them feel more attractive and sexy.

So it seems that women were driven to play games because it improved their self-perception as a high quality romantic partner.

It’s all about sex

These findings are a perfect example of where stereotypes don’t describe the beautiful variation we see in our society.

Our results also break another stereotype about gamers: that they are nerdy, basement-dwelling individuals who are interested in technology more than finding a romantic partner. Rather, gamers are often interested in sex. And the gamers who are most interested in sex tend to play the most violent games.

From an evolutionary perspective, this makes some sense. As with other animals, those of our ancestors who successfully competed and secured resources and mates had the most offspring.

People who want more access to a greater variety of partners thus need to be competitive enough to gain access to them. So violent video games might be tapping into some ancient penchant for competitive behaviour for the sake of proving one’s worth as a mate.

We have dubbed our idea the “dominance-practice hypothesis”, as violent video games provide men and women a virtual arena to compete on equal footing. And when women compete, that seems to make them feel pretty good about their ability to be a “good catch”.

So it seems that even in video games, it’s all about sex.

Michael and Tom will be on hand for an author Q&A between 3 to 4pm AEST on Thursday July 2. Post your questions in the comments section below.

The Conversation

Michael Kasumovic is Evolutionary Biologist, ARC Future Fellow at UNSW Australia.
Tom Denson is Associate Professor of Psychology and ARC Future Fellow at UNSW Australia.

This article was originally published on The Conversation.
Read the original article.

It feels instantaneous, but how long does it really take to think a thought?

As inquisitive beings, we are constantly questioning and quantifying the speed of various things. With a fair degree of accuracy, scientists have quantified the speed of light, the speed of sound, the speed at which the earth revolves around the sun, the speed at which hummingbirds beat their wings, the average speed of continental drift….

These values are all well-characterized. But what about the speed of thought? It’s a challenging question that’s not easily answerable – but we can give it a shot.

What’s a thought?
Fergus Macdonald, CC BY-NC

First, some thoughts on thought

To quantify the speed of anything, one needs to identify its beginning and end. For our purposes, a “thought” will be defined as the mental activities engaged from the moment sensory information is received to the moment an action is initiated. This definition necessarily excludes many experiences and processes one might consider to be “thoughts.”

Here, a “thought” includes processes related to perception (determining what is in the environment and where), decision-making (determining what to do) and action-planning (determining how to do it). The distinction between, and independence of, each of these processes is blurry. Further, each of these processes, and perhaps even their sub-components, could be considered “thoughts” on their own. But we have to set our start- and endpoints somewhere to have any hope of tackling the question.

Finally, trying to identify one value for the “speed of thought” is a little like trying to identify one maximum speed for all forms of transportation, from bicycles to rockets. There are many different kinds of thoughts that can vary greatly in timescale. Consider the differences between simple, speedy reactions like the sprinter deciding to run after the crack of the starting pistol (on the order of 150 milliseconds [ms]), and more complex decisions like deciding when to change lanes while driving on a highway or figuring out the appropriate strategy to solve a math problem (on the order of seconds to minutes).

Even looking inside the brain, we can’t see thoughts.
Duke University Photography Jim Wallace, CC BY-NC-ND

Thoughts are invisible, so what should we measure?

Thought is ultimately an internal and very individualized process that’s not readily observable. It relies on interactions across complex networks of neurons distributed throughout the peripheral and central nervous systems. Researchers can use imaging techniques, such as functional magnetic resonance imaging and electroencephalography, to see what areas of the nervous system are active during different thought processes, and how information flows through the nervous system. We’re still a long way from reliably relating these signals to the mental events they represent, though.

Many scientists consider the best proxy measure of the speed or efficiency of thought processes to be reaction time – the time from the onset of a specific signal to the moment an action is initiated. Indeed, researchers interested in assessing how fast information travels through the nervous system have used reaction time since the mid-1800s. This approach makes sense because thoughts are ultimately expressed through overt actions. Reaction time provides an index of how efficiently someone receives and interprets sensory information, decides what to do based on that information, and plans and initiates an action based on that decision.

Neurons do the work of transmitting thoughts.
Bryan Jones, CC BY-NC-ND

Neural factors involved

The time it takes for all thoughts to occur is ultimately shaped by the characteristics of the neurons and the networks involved. Many things influence the speed at which information flows through the system, but three key factors are:

  • Distance – The farther signals need to travel, the longer the reaction time is going to be. Reaction times for movements of the foot are longer than for movements of the hand, in large part because the signals traveling to and from the brain have a longer distance to cover. This principle is readily demonstrated through reflexes (note, however, that reflexes are responses that occur without “thought”, because they do not involve the neurons engaged in conscious thought). The key observation for the present purpose is that the same reflexes evoked in taller individuals tend to have longer response times than in shorter individuals. By way of analogy, if two couriers driving to New York leave at the same time and travel at exactly the same speed, a courier leaving from Washington, DC will always arrive before one leaving from Los Angeles.

  • Neuron characteristics – The width of the neuron is important. Signals are carried more quickly in neurons with larger diameters than those that are narrower – a courier will generally travel faster on wide multi-lane highways than on narrow country roads.

    Nerve signals jump between the exposed gaps in the myelin sheath.

    How much myelination a neuron has is also important. Some nerve cells are wrapped by specialised cells that form an insulating myelin sheath around the neuron. The myelin sheath isn’t completely continuous along a neuron; there are small gaps in which the nerve cell is exposed. Nerve signals effectively jump from exposed section to exposed section instead of traveling the full extent of the neuronal surface. So signals move much faster in neurons that have myelin sheaths than in neurons that don’t. The message will get to New York sooner if it passes from cellphone tower to cellphone tower than if the courier drives the message along each and every inch of the road. In the human context, the signals carried by the large-diameter, myelinated neurons that link the spinal cord to the muscles can travel at speeds ranging from 70-120 meters per second (m/s) (156-270 miles per hour [mph]), while signals traveling along the same paths carried by the small-diameter, unmyelinated fibers of the pain receptors travel at speeds ranging from 0.5-2 m/s (1.1-4.4 mph). That’s quite a difference!

  • Complexity – Increasing the number of neurons involved in a thought means a greater absolute distance the signal needs to travel – which necessarily means more time. The courier from Washington, DC will take less time to get to New York with a direct route than if she travels to Chicago and Boston along the way. Further, more neurons mean more connections. Most neurons are not in physical contact with other neurons. Instead, most signals are passed via neurotransmitter molecules that travel across the small spaces between the nerve cells called synapses. This process takes more time (at least 0.5 ms per synapse) than if the signal was continually passed within the single neuron. The message carried from Washington, DC will take less time to get to New York if one single courier does the whole route than if multiple couriers are involved, stopping and handing over the message several times along the way. In truth, even the “simplest” thoughts involve multiple structures and hundreds of thousands of neurons.
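The three factors above (distance, conduction velocity and synaptic delays) combine into a simple back-of-envelope transit-time estimate. The figures plugged in below are the rough values quoted in this section, not measurements from any particular study:

```python
def transit_time_ms(distance_m, velocity_m_per_s, n_synapses, synapse_delay_ms=0.5):
    """Conduction time along the path plus a fixed delay at each synapse."""
    conduction_ms = distance_m / velocity_m_per_s * 1000
    return conduction_ms + n_synapses * synapse_delay_ms

# ~1 m from spinal cord to leg muscle along fast myelinated fibres (~100 m/s), 2 synapses
fast = transit_time_ms(distance_m=1.0, velocity_m_per_s=100.0, n_synapses=2)

# the same metre along slow unmyelinated pain fibres (~1 m/s)
slow = transit_time_ms(distance_m=1.0, velocity_m_per_s=1.0, n_synapses=2)

print(f"fast myelinated path: {fast:.0f} ms")    # ~11 ms
print(f"slow unmyelinated path: {slow:.0f} ms")  # ~1001 ms
```

The two-orders-of-magnitude gap in conduction velocity dwarfs the per-synapse cost here, though in a real circuit with hundreds of thousands of neurons the synaptic delays add up.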

And they’re off!
Oscar Rethwill, CC BY

How quickly it can happen

It’s amazing to consider that a given thought can be generated and acted on in less than 150 ms. Consider the sprinter at a starting line. The reception and perception of the crack of the starter’s gun, the decision to begin running, issuing of the movement commands, and generating muscle force to start running involves a network that begins in the inner ear and travels through numerous structures of the nervous system before reaching the muscles of the legs. All that can happen in literally half the time of a blink of an eye.

Although the time to initiate a sprint start is extremely short, a variety of factors can influence it. One is the loudness of the auditory “go” signal. Although reaction time tends to decrease as the loudness of the “go” increases, there appears to be a critical point in the range of 120-124 decibels where an additional decrease of approximately 18 ms can occur. That’s because sounds this loud can generate the “startle” response and trigger a pre-planned sprinting response.

Researchers think this triggered response emerges through activation of neural centers in the brain stem. These startle-elicited responses may be quicker because they involve a relatively shorter and less complex neural path – one that does not necessarily require the signal to travel all the way up to the more complex structures of the cerebral cortex. One could debate whether these triggered responses count as “thoughts,” since it is questionable whether a true decision to act was made; but the reaction time differences of these responses illustrate the effect of neural factors such as distance and complexity. Involuntary reflexes, too, involve shorter and simpler circuitry and tend to take less time to execute than voluntary responses.

How well can we gauge our own speed of thought?
William Brawley, CC BY

Perceptions of our thoughts and actions

Considering how quickly they do happen, it’s little wonder we often feel our thoughts and actions are nearly instantaneous. But it turns out we’re also poor judges of when our actions actually occur.

Although we’re aware of our thoughts and the resulting movements, an interesting dissociation has been observed between the time we think we initiate a movement and when that movement actually starts. In studies, researchers ask volunteers to watch a second hand rotate around a clock face and to complete a simple rapid finger or wrist movement, such as a key press, whenever they liked. After the clock hand completed its rotation, participants were asked to identify where the hand was on the clock face when they started their own movement.

Surprisingly, people typically judge the onset of their movement to occur 75-100 ms prior to when it actually began. This difference cannot be accounted for simply by the time it takes for the movement commands to travel from the brain to the arm muscles (which is on the order of 16-25 ms). It’s unclear exactly why this misperception occurs, but it’s generally believed that people base their judgment of movement onset on the time of the decision to act and the prediction of the upcoming movement, instead of on the movement itself. These and other findings raise important questions about the planning and control of action and our sense of agency and control in the world – because our decision to act and our perception of when we act appear to be distinct from when we actually do act.

In sum, although quantifying a single “speed of thought” may never be possible, analyzing the time it takes to plan and complete actions provides important insights into how efficiently the nervous system completes these processes, and how changes associated with movement and cognitive disorders affect the efficiency of these mental activities.

The Conversation

Tim Welsh is Professor of Kinesiology and Physical Education at University of Toronto.

This article was originally published on The Conversation.
Read the original article.

Mathematics, spaghetti alla carbonara and you

I’ve come to believe that mathematics, as an investigative science, as a practical discipline and as a creative art, shares many characteristics with cookery. It’s not just spaghetti alla carbonara, it’s the whole business of inventing dishes and preparing them. It’s an analogy with many parts, and it has consequences.

To introduce myself: I’m a professional mathematician, an amateur cook and an enthusiastic eater. The ideas in this essay are distilled from years of formal reasoning, mad culinary experiments and adventurous meals. In short, I’ve found that:

  1. I do mathematics for much the same reasons that I cook.

  2. I use the same problem-solving methods in math and cooking.

  3. I judge dishes and math papers with many of the same criteria.

Together these observations suggest a picture of mathematics (or a picture of cooking) quite different from the popular view. The analogy is fun and the payoff is liberating.

My reasons

I am motivated in both fields by curiosity and by thrills. I grew up reading Martin Gardner’s Mathematical Games column in Scientific American. It’s hard to describe how exciting these were. I read about logical paradoxes, about hexaflexagons, about rep-tiles, Sprouts, and Dr Matrix. I folded flexagons, I analyzed Sprouts, I teased classmates with paradoxes. It was thrilling.

At the same time I experienced thrills of a different sort. I remember keenly the first time my mother made apple pie. I remember the time my father grilled tuna steak. I remember the first time I tasted a whiskey sour. In all, these experiences made me what I am today: a seeker of thrills, a mathematical and gustatory glutton.

I also play with food and mess with math to satisfy an insistent curiosity.

Where will I bounce?
Jim Henle, CC BY

What happens if I combine Chartreuse and avocado?

Where will I end up if I begin in one corner of this figure and start bouncing off the sides?

What vegetables can I caramelize?

How much of the infinite plane can I cover with different-sized squares?

Squares and squares and squares on an infinite plane.
Jim Henle, CC BY


Many books have been written about mathematical problem-solving. And many, many books have been written about cooking. But there is one single principle that is fundamental to both disciplines. It may be the only essential principle of problem-solving:

Make mistakes.

Make mistakes and learn from them. It’s the go-to method in both fields.

It’s hard teaching this to students. They believe that mathematicians figure things out first and then act. But mathematicians don’t. We jump in and mess up. It’s the best way to see what’s going on.

Suppose you are asked to find a number such that tripling the number is the same as adding 12. If you know algebra, you write

3 x n = n + 12

and solve for n. But let’s say you don’t know algebra. So you jump in. You guess 10. Does that work? Tripling 10 gets you 30, but adding 12 gets you 22.

3 x 10 = 30 10 + 12 = 22

30 doesn’t equal 22. Let’s try again. Guess 12 (after all, that’s a number in the problem). But tripling 12 gets you 36 and adding 12 gets you 24.

3 x 12 = 36 12 + 12 = 24

So 12 is worse! Let’s move in the other direction. Guess 8. Tripling 8 gets you 24. Adding 12 gets you 20.

3 x 8 = 24 8 + 12 = 20

Closer! Maybe your next guess is 6. And if it is, you solved the problem.

3 x 6 = 18 6 + 12 = 18
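The same trial-and-error strategy is easy to sketch as a tiny program. This is not algebra – it is exactly the guessing loop above, nudging the guess in whichever direction shrinks the gap (the function name is made up for the example):

```python
# Guess-and-check: find a number whose triple equals the number plus 12.
# We adjust the guess in the direction that closes the gap between the
# two sides, just as in the worked example.

def solve_by_guessing(guess, max_steps=100):
    for _ in range(max_steps):
        tripled = 3 * guess
        plus_12 = guess + 12
        if tripled == plus_12:
            return guess  # found it
        # Tripling grows faster than adding 12, so if the tripled side
        # is too big, the guess must come down, and vice versa.
        guess += 1 if tripled < plus_12 else -1
    return None  # gave up

print(solve_by_guessing(10))  # starts at the first guess from the text -> 6
```

Starting from 10, the loop walks down through 9, 8, 7 and stops at 6 – the same path of improving guesses the example traces by hand.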

Knead that dough.

Leaping into the unknown is also the best way to learn to cook. Home cooks are often reluctant to try baking bread. They believe you have to know what you’re doing before you start putting ingredients in a bowl. But that belief can prevent you from ever baking your first loaf.

I don’t claim, by the way, that making mistakes is easy. It takes guts (sometimes). It also takes perseverance and hard work. But it doesn’t take a “math brain.”

You can judge a dish or a math problem on its aesthetics.
Chris Baird, CC BY


Simple lines work in food and in math.
Jim Henle, CC BY

Some dishes are wonderful for their simplicity, for their simple, clean taste. Cheesecake, for example.

In the same way, a mathematical object can be attractive because it has a clean, simple structure.

Fiery flavors?
Wes Peck, CC BY-ND

On the other hand, some foods are celebrated for the complexity of their taste. Wine, for example.

In the same way, a mathematical structure can be alluring for its mystery and depth.

“Simplicity” and “complexity” are just two aesthetics that math and gastronomy share. Some others are “elegance,” “playfulness” and “novelty.”

Complexity has appeal in cooking and math.
Jim Henle, CC BY

You can do it

You have the analogy now: a moderately strong similarity between mathematics and cooking. What does that similarity suggest?

Well first of all, I’ve argued that the key to success in math is to make mistakes. Accepting this principle pushes you to accept a really powerful idea. If making mistakes is the key, then everyone can cook. And everyone can do mathematics.

Second, the similarity points out that mathematics has aesthetics. Mathematicians believe this. You should too. You can pick winners (I like that math) and losers (that stuff bores me). That’s what we do. I love logic and geometry. Don’t ask me about statistics.

Most students intuitively get this about history, about literature, about science. But mathematics appears different to them. Math, they fear, is the judge. Math, they think, either likes you or it doesn’t like you.

Send it back to the kitchen if it doesn’t suit you.
US Army Africa, CC BY

But if you don’t like the food a restaurant serves you, you go somewhere else, right?

Now students today do go somewhere else. But many do it because they feel they have no choice; math doesn’t like them. Forget that! Math doesn’t play favorites. If you dump math, it should be because in your judgment, math is not attractive.

The third consequence follows from the first two, and it’s the best of all. If students work hard, if they make mistakes, if they persevere, they will succeed in mathematics. But if students find mathematics unlovable, they won’t stick with it.

The most important goal of any mathematics course is not that the students learn – that’s secondary. The real goal is simple: help the students love mathematics.


Jim Henle is Professor of Mathematics and Statistics at Smith College.

This article was originally published on The Conversation.
Read the original article.

What’s that smell? A controversial theory of olfaction deemed implausible

Humans can discriminate tens of thousands of odors. While we may take our sense of smell for granted, it adds immeasurably to our quality of life: the aroma of freshly brewed coffee; the invigorating smell of an ocean breeze or a field of wildflowers; the fragrance of a lover or the natural smell of a baby. Our olfactory sense also warns us when milk turns rancid, when a baby’s diaper needs changing and when there’s a gas leak. In animals, the sense of smell is essential for detection of predators and other dangers, food sources and mates.

How this amazing sense works to discriminate odors is controversial. The mainstream explanation is chemical. Often referred to as the shape theory of olfaction, it proposes that attractive and repulsive interactions between molecules come into play when an odorant interacts with its receptor in the nose – ultimately triggering perception of the smell. These molecular interactions reflect the chemical features of whatever you’re sniffing: molecular size, shape, and functional groups – combinations of atoms such as hydroxyl (OH) or carbonyl (C=O) that possess special chemical reactivity.

The alternative mechanism is called the vibrational theory of olfaction. It assumes that transfer of an electron occurs when odorants bind with their receptors, and that this electron transfer happens when olfactory receptors detect odorant molecular vibrations. The suggestion that a molecule’s smell is based on its vibration frequencies parallels how the sense of hearing functions. The vibration theory has been promoted by a popular book on the topic.

Through our new research, my colleagues and I are shifting the debate. Based on our experiments, we conclude that the chemical mechanism is the correct one and the vibrational theory of olfaction is implausible. Here’s how we investigated.

Does it pass the sniff test?
David Resz, CC BY-NC

Probing the sense of smell

Studies into both theories of olfaction have used a psychophysical approach: human volunteers sniff odor molecules and describe their perception of the smell.

We wanted to reexamine some earlier work that had been touted as support for the vibrational theory. Luca Turin’s group had looked at musks, the heavy molecules used as base notes in most perfumes. Their experiment hinged on whether people could distinguish by smell what are called isotopomers: molecules with all their regular carbon-hydrogen bonds replaced with carbon-deuterium bonds. Deuterium is just a heavier isotope of hydrogen, due to its extra neutron. Isotopomers are considered identical in structure and functionality. The logic of the experiment was that because the vibrations of bonds to hydrogen and deuterium are very different, isotopomers should smell different. And that’s what Turin’s group found in the case of these musks that had a relatively large number of hydrogens or deuteriums present.

Our new research involved a biophysical, rather than the usual psychophysical, approach. Given the differences in the smell reported for the musk isotopomers, my coworkers and I asked whether olfactory receptors highly responsive to musks could also distinguish musk isotopomers. My coauthors Hanyi Zhuang and Hiroaki Matsunami screened the entire repertoire of human olfactory receptors, looking for those that responded to musks. They introduced DNA encoding each of 330 olfactory receptors into tissue culture cells in the lab. Then they used an elegant technique to measure odorant-binding to the receptors: receptor activation is converted into light emission that is easy to quantify. We identified one receptor, OR5AN1, which was strongly activated by musks.

Chemical structure of muscone, the fragrant musk originally isolated from the Siberian musk deer. In the deuterated isotopomer, all 30 hydrogens are replaced by deuterium.
Николай Усик, CC BY-SA

As in the earlier experiment, we prepared musk isotopomers and other compounds by replacing all hydrogens with deuterium. The musk isotopomers prepared included muscone, the fragrant musk originally isolated from the Siberian musk deer.

When we exposed receptor OR5AN1 to highly purified pairs of musk isotopomers, no difference in the magnitude of light emission was seen, which indicated identical response and therefore identical binding. This was true for nine other olfactory receptors and isotopomers of their substrates we examined. But how can isotopomers that interact identically with their primary receptors be perceived as smelling different? And how does this new observation fit into the debate on the mechanism of olfaction?

A serpentine model of an olfactory receptor. Each of the 307 circles represents an amino acid, identified by its one-letter code.
PNAS February 28, 2012 vol. 109 no. 9 3492-3497, used with permission, Author provided

How we perceive odor molecules

Odor perception begins with inhalation of a volatile odorant. In the nose, the odorant dissolves in the nasal mucus layer surrounding the olfactory receptors. These receptors are found in the olfactory epithelium, a three-square-inch patch of tissue lying on the roof of the nasal cavity behind each nostril. The receptors themselves consist of a chain of amino acids anchored into the plasma cell membrane and traversing it seven times.

Chemistry occurs in the nasal mucus because enzymes capable of modifying the odorant structures are present. These enzymes protect the receptors against injury by toxic odorants. Each receptor can detect several related odorants that pass through the nasal mucus, although with intensity that varies from odorant to odorant. Most scents are composed of multiple different odorants. Each odorant typically activates several olfactory receptors, which in turn indirectly send electric signals to the brain. Each unique odor, through the identity of the different olfactory receptors activated and the intensity of activation in each case, leads to an “odorant pattern” unique to that odor, which the brain perceives as the smell of musk, lilacs, garlic, and so on. This is the basis for our ability to recognize and distinguish tens of thousands of unique odors.
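As a schematic illustration of this combinatorial coding, each odor can be thought of as a pattern of receptor activations, and recognition as matching an observed pattern to the closest stored one. The receptor names and activation numbers below are entirely hypothetical, invented for the example – they are not data from the study:

```python
# Schematic "odorant pattern" lookup. Each odor activates several
# receptors with different intensities; the overall pattern is what the
# brain uses to identify the smell.

# Hypothetical activation levels (0-1) for three imaginary receptors.
odorant_patterns = {
    "muscone": {"OR_A": 0.9, "OR_B": 0.1, "OR_C": 0.0},
    "lilac":   {"OR_A": 0.2, "OR_B": 0.8, "OR_C": 0.3},
    "garlic":  {"OR_A": 0.0, "OR_B": 0.3, "OR_C": 0.9},
}

def identify(pattern, known=odorant_patterns):
    """Return the known odorant whose activation pattern is closest."""
    def distance(p, q):
        # Sum of squared differences across receptors.
        return sum((p[r] - q[r]) ** 2 for r in p)
    return min(known, key=lambda name: distance(pattern, known[name]))

observed = {"OR_A": 0.85, "OR_B": 0.15, "OR_C": 0.05}
print(identify(observed))  # closest stored pattern -> "muscone"
```

The point of the sketch is only that identity comes from the whole pattern: no single receptor "is" the smell of musk, but the combination is unique enough to pick it out.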

How could isotopomers be perceived to have different scents?

Subtle differences occur when hydrogen in a molecule is replaced by deuterium, since carbon-deuterium bonds are shorter and stronger than carbon-hydrogen bonds. As a result, enzymes could transform deuterated versus undeuterated compounds at varying rates. These different rates of reaction are well-known and could account for the differences in perceived smell of isotopomers. Maybe it’s all down to perireceptor effects – the kind of chemical transformation that occurs in the nose before the odorant reaches its receptor. Trace impurities present to a differing extent in isotopomers could also lead to differences in perceived odors.

My coworkers and I found no receptor that discriminates between isotopomers. We therefore argue that the vibrational theory of olfaction is implausible. If the receptors weren’t responding to shape but to electron transfer, we should have been able to observe that in the form of a different receptor response between the pairs of isotopomers.

Odor-sensing, shown here with musk essence R-muscone, involves odorant–receptor interactions, not molecular vibrations.
Proc. Natl. Acad. Sci. USA 2015, 112(21):6519-6520, used with permission, Author provided

Bio-molecules, such as enzymes and other proteins, interact most strongly when they have complementary surfaces and complementary distribution of active groups. Scientists commonly use the “lock and key” metaphor for interactions of complementary bio-molecules, but the metaphor should not be taken literally: bio-molecules are flexible, and molecular size and functional group attractive and repulsive forces are all involved. Olfactory receptors would likely employ a similar mechanism since they are proteins, structurally similar to drug receptors that typically utilize “lock and key” mechanisms.

My coworkers and I have computationally modeled such interactions for olfactory receptors, discovering a role for copper ions. Finally, coauthor Seogjoo Jang examined the theoretical grounds supporting the vibration theory of olfaction and found them unrealistic in a biological milieu.

This is one case where a study’s negative results – not finding something – could have a major impact on mainstream thinking about how we smell.


Eric Block is Carla Rizzo Delray Distinguished Professor of Chemistry at University at Albany, State University of New York.

This article was originally published on The Conversation.
Read the original article.

Shopping mall design could nudge shoplifters into doing the right thing – here’s how

Shoplifting is a serious problem. Although it is often perceived as an “ordinary crime” due to its supposed victimless nature, in fact it costs the UK’s retail industry £335m a year. And part of this cost is passed on to consumers in the form of higher prices.

The way buildings and streets are designed can help reduce shoplifting, and architects, city planners and law enforcement teams have a range of techniques to help them do this. For example, Crime Prevention Through Environmental Design (CPTED) strategies try to maximise opportunities for official surveillance and restrict people’s access to certain areas while directing them to others.

Such techniques appeal to rational thought in potential shoplifters by trying to make the costs or risks of crime outweigh the benefits. But other elements of retail design appeal to unconscious decision making, encouraging you to do things without realising, in order to increase the chances of you making a purchase. We believe the same ideas can be used to deter shoplifters.

A retail environment can be described as “a bundle of cues, messages and suggestions which communicate to shoppers”. This has an ability to manipulate people’s behaviour and make them more likely to buy something.

Impulse purchase.

Have you ever wondered why you have to walk all the way to the far end of a shopping mall to access the next set of stairs or escalators? While dictating the flow of visitors around the shopping centre, it also ensures people are exposed to the maximum number of stores and products, increasing the chance of an impulse buy.

Because “all buildings imply at least some form of social activity”, the arrangement of wall partitions, doors and other features can affect, amplify or curtail social interaction. For instance, a designer can create specific areas such as access lanes where people can come into contact with each other. It is this ability of a retail environment to influence choices that is at the heart of our proposition to tackle high-street crime.

Nudge theory

Nudge theory is the idea that people make most decisions unconsciously and non-rationally, and so can be encouraged to do things without having to be convinced logically.

Under this idea, we believe potential shoplifters can be encouraged to do the right thing using environmental signals that target the non-rational parts of their brains. Nudging provides an interesting antithesis to conventional approaches because it is not dependent on a rational judgement by the criminal (for example, deciding security cameras make a theft too risky).

We believe that nudges can either be developed to target shoplifters specifically or to foster an environment that affects everyone in it by enhancing natural surveillance. For example, we can imagine a store that earmarks a certain amount each year for charitable work and another amount as shoplifting costs. What if the store displayed signs indicating that money saved by reduced shoplifting would be donated to charity?

By presenting this cue we are not threatening prosecution. We are offering a choice that allows a potential criminal to contribute to society by not stealing from the store. In this manner, the tenets of nudging are employed as cues in the environment to present their choice very differently from conventional means.

We are enhancing the benefits of not committing crime as an alternative to enhancing the cost of doing so. Although this approach still relies on some rational thinking on the part of the criminal, it is inspired by nudge theory because it alters the way choices are presented to criminals in order to encourage them to do the right thing.

The more non-rational elements of nudging could also be employed to produce playful environments that encourage natural surveillance. If people want to interact with their space, for example if it includes art installations or technology, they may be encouraged to unconsciously watch their immediate environment. Such playful interactions with goods or other customers in a retail environment, if designed correctly, would present a harder target for criminals at relatively low cost.

We want to encourage a shift in how high street crime is tackled, from punishment to prevention. To do this, we think designers and architects should experiment with nudge theory to produce innovative thinking in this space, augmenting conventional crime prevention methods such as CPTED. We have tried incarceration for centuries and people are still found shoplifting. Perhaps alternative ideas could help reduce crime.


Dhruv Sharma is PhD candidate, HighWire Centre for Doctoral Training at Lancaster University.
Myles Kilgallon Scott is PhD candidate, HighWire Doctoral Training Centre at Lancaster University.

This article was originally published on The Conversation.
Read the original article.

Can we unlearn social biases while we sleep?

Your brain does a lot when you are asleep. It’s when you consolidate memories and integrate the things you’ve learned during the day into your existing knowledge structure. We now have lots of evidence that while you are sleeping, specific memories can be reactivated and thus strengthened.

We wondered whether sleep could play a role in undoing implicit social biases. These are the learned negative associations we make through repeat exposure – things like stereotypes about women not being good at science or biases against black people. Research has shown that training can help people learn to counter biases, lessening our knee-jerk prejudices, many of which can operate without our notice. We know from earlier studies that sound can cue the process of memory consolidation. Can this sleep-based memory trick strengthen newly learned information and in turn help reduce or reverse biases?

How does sleep strengthen memories?

The mechanism that strengthens and stabilizes memories of new information while you sleep is replay. When you learn something, the neurons in your brain start firing to make new connections with each other. Once you hit the sack, those neurons fire again in a similar pattern to when you were awake and learning.

This replay takes memories that are still fresh and malleable and makes them more stable and long-lasting. Some memories can be spontaneously reactivated during sleep, but recent studies have shown that we can directly manipulate which memory gets reactivated and consolidated using sound cues. This is called targeted memory reactivation.

Neurons at work.
Dr Jonathan Clarke/Wellcome Images, CC BY-NC-ND

To do this, researchers have paired unique sound cues with learning episodes, so that there are strong associations between the sound cues and the information learned. Imagine a certain beep being played every time a subject is shown a picture of a face associated with a certain word. After people fall into deep sleep, we can reactivate these memories by replaying those specific beep sound cues. Because the sleeping brain still processes environmental stimuli, such sound cues serve to remind the brain of these memories – and help them become stable and long-lasting.

Prior studies already showed that we can selectively improve memory for the location of objects (such as remembering where objects appeared on a computer screen) or skills (such as playing a melody).

Social biases are learned – like bad habits. We know that habits are well-learned, and can operate without effort, even without our awareness of their influence. Many daily routines are habits: we don’t need to reflect upon them or think twice. Rather, we do these routines automatically. Learning to counter preexisting biases is like learning a new habit, and at the same time, breaking an old, bad habit.

Prior research on prejudice and stereotyping shows that extensive counter-bias training can lessen automatic stereotyping. Building on this bias reduction and sleep-based memory consolidation research, we aimed to test whether people can further process such counter-bias memories during sleep. Can such learning reduce long-lasting stereotypes and social biases? 

Unlearning biases.

Using sleep to counter biases

We recruited 40 participants from Northwestern University. They were all white and 18-30 years old. We started by measuring their baseline implicit social biases using an implicit association test (IAT), which you can take yourself.

An IAT tests the associative strength between a concept and a stereotype, for instance “female” and “math/science,” by measuring how fast the subject presses a button to make associations. The longer it takes someone to connect a female face with physics, for instance, the stronger their implicit bias against associating women with science. Everyone took two versions of the test – one that looked at gender bias and another that looked at racial bias. We ended up with a quantification of each subject’s implicit biases.
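The core scoring idea can be sketched in a few lines. This is a simplification – real IAT scoring (for example the commonly used D-score) involves additional steps such as error penalties and per-block means – and the latencies below are made up for illustration:

```python
# Simplified sketch of how an IAT turns reaction times into a bias score:
# slower responses on stereotype-incongruent pairings, relative to the
# spread of all responses, indicate a stronger implicit association.
from statistics import mean, stdev

def iat_score(congruent_ms, incongruent_ms):
    """Positive score: slower on stereotype-incongruent pairings."""
    pooled_sd = stdev(congruent_ms + incongruent_ms)
    return (mean(incongruent_ms) - mean(congruent_ms)) / pooled_sd

# Hypothetical latencies (ms): faster when pairings match the stereotype.
congruent = [620, 650, 640, 610, 630]    # e.g. male face + "science"
incongruent = [780, 820, 800, 760, 790]  # e.g. female face + "science"

print(round(iat_score(congruent, incongruent), 2))
```

Dividing the mean latency difference by the pooled standard deviation makes scores comparable across people who respond faster or slower overall.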

We then had participants go through counter-stereotype training, which is meant to help reduce preexisting stereotypes. We targeted gender stereotypes (eg, women are not good at science) and racial bias (eg, black people are disliked). Participants were shown pictures of faces paired with words that countered a specific stereotype. Specifically, we showed female faces with words associated with math or science, and black faces paired with pleasant words like cheer, smile, honor.

During the session, we also played sound cues that became associated with these pairs. Whenever the participant made a fast and correct response to counter-bias stimuli pairs – for instance, associating female faces with science words or black faces with good words – they heard a particular sound cue. One sound was for gender biases, another for racial biases.

After the counter-stereotype training, participants took a 90-minute nap. Once they entered deep sleep, we played one of the two sound cues repeatedly without waking them up. Since participants were exposed to both sounds during counter-bias training, but just one during their nap, we were able to draw comparisons between the training that was cued while they slept and the training that was not. That meant we could compare how much the stereotypes targeted by each training were reduced.

No implicit biases here.

Sound cues can help strengthen counter-bias training and reduce stereotypes

After the nap, we tested whether subjects had reduced their level of bias by having them retake the implicit association test. Preexisting stereotypes that were associated with the sound cue replayed during sleep were significantly reduced when the participant woke up. So if a participant heard the sound cue associated with the counter gender-bias training while they slept, when they retook the IAT, they were less likely to use stereotypes about women not being good at science.

We were surprised that this sleep-based intervention was so powerful when participants woke up: the biases were reduced by at least 50% relative to the pre-sleep level. But we were also surprised at how long the effect lasted. At the one-week follow-up test, the sleep-based intervention was still effective: the reduction had stabilized, with bias remaining significantly lower (by approximately 20%) than the baseline level established at the beginning of the experiment.

This is unexpected because a one-time intervention can quickly decay when people return to their normal life. But those during-sleep sound cues helped subjects retain the counter-stereotype training effects. Our finding agrees with the theory that sleep is important for the long-term stabilization of memories.

We can use this to counter other stereotypes and preexisting beliefs

Our society values egalitarianism, yet people may still be influenced by racial or gender biases. Even the best-intentioned of us have preexisting biases, but that doesn’t mean we can’t change. Here we show that biases can be changed, and that the lasting effect of our counter-stereotyping intervention depended on replay during sleep.

We might be able to use this method to reduce other preexisting, yet undesired, thoughts and beliefs. Beyond gender and racial stereotypes, these methods could be used to reduce other biases, such as stigma toward disability, weight, sexuality, religion or political preference.

Because we designed this study thinking of biases as a type of bad habit, it may also have implications for how to break other bad habits, such as smoking. 

To read more about sleep and memory see: Sleep study raises hope for clinical treatment of racism, sexism and other biases


Xiaoqing Hu is Postdoctoral Fellow at University of Texas at Austin.

This article was originally published on The Conversation.
Read the original article.