Category Archives: Brain

Memetic Warfare and the Sixth Domain Part Three


Can an image, sound, video or string of words influence the human mind so strongly that the mind is actually harmed or controlled? Cosmoso takes a look at technology and the theoretical future of psychological warfare with Part Three of an ongoing series.

Click here for Part One.

Click here for Part Two.

A lot of the responses I got to the first two installments talked about religion as weaponized memes. People do fight and kill on behalf of their religions, and memes play a large part in disseminating the messages and information religions have to offer.

The curved bullet meme is a great one. Most of the comments I see associated with this image are about how dumb someone would have to be to believe it would work. Some people have an intuitive understanding of spatial relations; others have enough education in physics or basic gun safety to feel alarm bells going off well before they’d try something this dumb. It’s a pretty dangerous idea to put out there, though, because a percentage of the people the image reaches could try something stupid. Is it a viable memetic weapon? Possibly! I present to you: the curved bullet meme.

Image: “How to curve the path of a bullet”

The dangers here should be obvious. The move starts with “begin trigger-pull with pistol pointed at chest (near heart)”, and anyone taking it seriously beyond that point is Darwin Award material.

Whoever created this image presumably had no intention of anyone actually trying it. So, in order to fall for this pretty obvious trick, someone would have to be pretty dumb. There is another way people fall for tricks, though.

There is more than one way to end up the victim of a mindfuck, and while ignorance plays a part in most of them, ignorance can also be induced. In the case of religion, several giant pieces of information or ways of thinking must be gotten wrong before someone can believe the earth is coming to an end in 2012, or that the creator of the universe wants you to burn in hell for eternity for not following the rules. By trash-talking religion in general, I’ve just made a percentage of readers angry, and that’s the point. Even if you take every other criticism of religion out of the mix, we can all agree that religion puts its believers in the position of becoming upset or outraged by very simple graphics or text. As a non-believer, a lot of the things religious people say sound as silly to me as the curved bullet graphic looks to a well-trained marksman.

To oversimplify it further: religions are elaborate bad advice. You can inoculate yourself against that kind of meme, but the vast majority of people out there cling desperately, violently, to some kind of doctrine that claims to answer one or more of the most unanswerable parts of life. When people feel relief wash over them, they are more easily duped into doing whatever it takes to keep their access to that feeling.

There are tons of non-religious little memes out there that simply mess with anyone who follows bad advice. A prank can seem harmless, but pranks can get pretty destructive. Check out this image from the movie Fight Club:

Image: the “motor oil” prank flyer from Fight Club

Think no one fell for this one? Sure, it’s from a movie, and in the movie it was a mean-spirited prank that maybe a few people fell for. Go ahead and google “fertilize used motor oil”, though, and see how many people are out there asking whether it works. It may blow your mind…

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

Understanding Cognitive Bias Helps Decision Making


in·tu·i·tion
/ˌint(y)o͞oˈiSH(ə)n/
noun
noun: intuition
  1. the ability to understand something immediately, without the need for conscious reasoning.

People tend to trust their own intuition. Has there been much formal study of how reliable intuition actually is?

Brain science itself is a young field, and its terminology has yet to mature into a solid academic lexicon. To further increase your chances of being confused, modern life is rife with distractions, misinformation and addictive escapisms, leaving the vast majority of society with no real idea what the hell is happening.

To illustrate my point, I’m going to do something kind of recursive. I am going to document my mind being changed about a deeply held belief as I explore my own cognitive bias. I am not here to tell you what’s REALLY going on or change your mind about your deeply held beliefs. This is just about methods of problem solving and how cognitive bias can become a positive aspect of critical thought.


Image: “Soft Bike” sculpture by Mashanda Lazarus
http://www.ilovemashanda.com/

I’m advocating what I think is the best set of decision-making skills: Critical Thought. The National Council for Excellence in Critical Thinking defines critical thinking as “the intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action”. (I’m torn between the terms Critical Thinking and Critical Thought, although my complaint is purely aesthetic.)

Ever since taking an introduction to Logic course at Fitchburg State College, I have been convinced that logic is a much more reliable, proven way to make decisions. Putting logic into practice when making decisions is difficult, though. Just as a math problem can be done incorrectly, logic can be misapplied, and some logical conclusions are even counter-intuitive. My favorite example of intuition failing where logic succeeds is chess. Even as I write this I can’t convince myself otherwise: I have regretted every intuitive chess move. It’s statistically impossible that all my intuitive moves were bad, yet logic works so much better in the game that my mind has overcompensated in favor of it. In the microcosm of chess rules, logic really is the better decision-making tool. Often the kernel of a good move jumps out at me as intuition, but it must still be thoroughly vetted with logic before I can confidently call it a good move.

In high school, I was an underachiever. I could pass computer science and physics classes without cracking a book, but the same attempt to coast through math classes left me struggling, because I could not intuitively grasp the increasingly abstract concepts. The part of my mind that controls logic was healthy and functioning, but my distrust of my own intuition was a handicap. I would be taking make-up mathematics courses in the summer while collecting debate team trophies during the school year.


Photograph of Marcel Duchamp and Eve Babitz posing for the photographer Julian Wasser during the Duchamp retrospective at the Pasadena Museum of Art, 1963 © 2000 Succession Marcel Duchamp, ARS, N.Y./ADAGP, Paris.

I’m not just reminiscing; everyone’s decision-making process is a constantly updating algorithm of intuitive and logical reasoning. No one’s process is exactly the same, but we all want to make the best decisions possible. For me it’s easy to rely on logic and ignore even a nagging sense of intuition. Some people trust intuition strongly yet struggle to find the most logical decision; everyone is most comfortable with a specially tailored mix of intuition and logic. People argue on behalf of their particular decisions and the methodology behind them because a different method is useful in each paradigm.

In chess, intuition is necessary but should be used sparingly and tempered with logic. It’s my favorite example because the game can be played without any intuition at all: even modest chess programs can beat the average human, and the strongest engines beat masters. So I’m biased towards logic. Chess is just a game, though, and people are always telling me I should have more faith in intuitive thinking.

“But,” you should be asking, “Isn’t there an example of reliance on intuition as the best way to decide how to proceed?”

At least that’s what I have to ask myself. The best example I found of valuable intuition is the ability to ride a bike. It is almost impossible to learn to ride a bike in one session; it takes several tries over a week or longer to create the neural pathways needed to operate this bio-mechanical device. Samurai trained to feel that their weapon was a part of themselves, an extension of their very arm. In the same way, the mechanical motion of the human body as it drives a bicycle becomes ingrained, literally, in the physical brain. The casual, ubiquitous expression “it’s like riding a bike” idiomatically describes anything that can be mastered at an intermediate level, forgotten for years, and then recalled at near-perfect fidelity when encountered once again.

The Backwards Brain Bicycle – Smarter Every Day episode 133

Destin at Smarter Every Day put together a video that shows the duality of intuitive thinking. It is completely possible to train the human mind with complicated decision-making algorithms that embrace diversification and even contradictory modes of thinking.


After watching this video, I embraced a moment of doubt and realized that there are very positive and useful aspects of intuition that I often don’t acknowledge. In the case of reversed bicycle steering, a skill that seems to work only after it has been made intuitive can be “lost” and regained only with a somewhat cumbersome level of concentration.

The video is, in essence, anecdotal proof that neural pathways can be hacked and that contradictory new skills can be learned. It also shows that a paradigm of behavior can gain a tenacious hold on the mind via intuitive skill. That casts doubt on intuition in one respect, yet without at least some reliance on this intuitive paradigm of behavior we wouldn’t be able to ride a bike at all.

This video forced me to acknowledge the usefulness of ingrained, intuitive behaviors while also reminding me how strong a hold intuition can have over the mind. Paradigms can be temporarily, or perhaps permanently, lost. In the video, Destin has trouble switching back and forth between the two seemingly all-consuming thought systems, but the transition itself can become part of a more complicated thought algorithm, allowing the mind to master and embrace contradictory paradigms by trusting the integrity of the overall algorithm.

Including Confirmation Bias in a greater algorithm.

These paradigms can be turned on and off, just as a worker might get used to driving an automatic transmission car to work, operating a stick-shift truck at the job site, and driving home in the automatic again after the shift.

This ability to turn intuitive paradigms on and off as a controlled feature of a greater logical algorithm requires the mind to acknowledge confirmation bias. I get a feeling of smug satisfaction any time I see evidence supporting my belief that logic makes up the greater framework of a decision-making process. There are just as many people out there who view intuition as the framework of a complex decision-making process, with logical thought merely a contributing part of a superior whole. If my personal bias toward logic over intuition is erroneous in some situations, can I trust the mode of thinking I am in? Using myself as an example: the relief I feel when data confirms what I have already accepted as true is powerful.
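To make that “greater algorithm” talk less hand-wavy, here is a purely illustrative Python toy I sketched for this post (not a real cognitive model; every number and weight is invented): it keeps an intuitive prior in the running score, but deliberately down-weights evidence that merely confirms that prior before the logical tally.

```python
def decide(intuitive_prior, evidence, confirm_discount=0.5):
    """Toy decision rule. `evidence` is a list of (strength, supports_prior)
    pairs with strength in [0, 1]; confirming evidence is discounted to
    counteract confirmation bias."""
    score = intuitive_prior
    for strength, supports_prior in evidence:
        if supports_prior:
            score += confirm_discount * strength  # confirming data counts for less
        else:
            score -= strength                     # opposing data counts in full
    return "act on the hunch" if score > 0 else "reconsider"

# A hunch (+0.6), one confirming observation and one opposing observation:
print(decide(0.6, [(0.4, True), (0.5, False)]))  # act on the hunch (score 0.3)
print(decide(0.6, [(0.4, True), (0.9, False)]))  # reconsider (score -0.1)
```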

Back to that feeling of relief: it must always be noted and kept in check before it overshadows the ability to acknowledge data that opposes the belief. Understanding confirmation bias is the key to adding that next level to the algorithm. In the video example from Smarter Every Day, steering a normal bike is so deeply ingrained in the neural pathways that, when the backwards bike fails to confirm those expectations, the mind fills in the blanks and sends an incorrect set of mechanical instructions to the body. Understanding the dynamics of confirmation bias lets the mind embrace a greater thought system that can move back and forth between those conflicting behavioral paradigms. I’m positing that it should be possible to master both a regular bike and the “backwards bike” and to switch between them in quick succession; the neural pathways behind both behavior paradigms can be trained and made stronger than the video shows.

I believe that with practice, someone could alternate between the two steering mechanisms quickly and without the awkwardness we see in the video, just as my initial confirmation bias, now identified, doesn’t have to dictate my decisions; I might even be more open-minded to an intuitive interpretation leading to the best decision in certain situations.

An inability to acknowledge that one’s own mind might be susceptible to confirmation bias paradoxically makes one more susceptible. Critical thinking is a method of building immunity to this common trap of confidence, and identifying the experience of one’s own confirmation bias is a great way to understand and control this intuitive tendency. Whatever your thoughts on logic and intuition, examining your confirmation biases and accounting for them should lead to better decision-making skills.

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

Spider Venom and the Search for Safer Pain Meds


Some of the most venomous animals on the planet are found down under, and Australian researchers retrieved exciting new data when they took a closer look at spider venom. Chemicals biosynthesized to react strongly with other organisms could inspire new drugs and, eventually, an entire new class of painkillers.

Venom can be defensive, but its function is often to incapacitate or kill prey. University of Queensland academics released their findings in the British Journal of Pharmacology after isolating seven unique peptides, found in certain spider venoms, that can block the molecules that allow pain-sensitive nerve pathways to communicate with the brain. One of the peptides came from a Borneo orange-fringed tarantula, and it possessed the right chemical structure, stability and effectiveness to become a non-opiate painkiller.

15% of all adults are in chronic pain, according to a 2012 study published in the Journal of Pain. Most readers are already aware of the danger of addiction and the waning effectiveness of opiate drugs like morphine, hydrocodone and oxycodone, and the medical community is hungry for a change in available medications. Opiates are all derivatives of, or inspired by, the opium poppy, which has been tried and tested for centuries. Venomous spiders are difficult to study, but the motivation to find new drugs has loosened funding, helped along by promising finds like this one.

“Spider venom acts in a different way to standard painkillers,” said Dr. Jennifer Smith, research officer at the University of Queensland’s Institute for Molecular Bioscience.

While relief from pain might in itself create an addictive reaction, this venom is promising, according to Dr. Smith, because it blocks the channel through which the pain signal would reach the brain in the first place. Opiates merely act on opioid receptors, which are widespread in the brain’s cells and the surrounding nerve tissue.

What’s the mechanism of action for this spider-derived drug? Some people are born with a rare genetic defect that renders them unable to feel pain, and geneticists have identified the human gene responsible, known as SCN9A. Dr. Smith hopes the peptide can shut down the same pain channel in people born without the defect, chemically mimicking that immunity to pain.

There could be other breakthroughs in medicine and chemistry. The Australian project’s findings are exciting, but its researchers have so far documented the venom of only roughly 200 of the 45,000 known species of spider. Of those 200, 40% contained peptides that interact with the signalling of pain channels. The next step will be to test the painkillers on animals.

“We’ve got a massive library of different venoms from different spider species and we’re branching out into other arachnids: scorpions, centipedes and even assassin bugs,” said Dr. Smith.

 

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

The hidden complexity inside your skeleton


Your bones are savvy. They are light yet strong and they repair themselves when they break. What’s more – although you can’t tell – your bones continually renew themselves, replacing old bone with new.

This isn’t unique. Other tissues and cells (most noticeably skin) replace themselves. But bones do it with adaptation, adjusting to meet the body’s mechanical and physiological needs.

How does the skeleton achieve something so remarkable? New imaging technology is revealing a previously under-appreciated dimension of bones: the living cellular network built deep inside them. This living network is composed of the most abundant cell in bone: the amazing osteocyte.

Osteocytes (literally “bone cells”) are buried alive in bone tissue whenever bone is formed. They develop long branch-like dendritic fingers that infiltrate the tissue and reach out to interconnect with one another.

Living inside hard, rock-like bone, osteocytes have been difficult to study. They were considered inactive and uninteresting for a long time. They are now known to sense mechanical strains, orchestrate bone tissue renewal, and regulate calcium levels in the bloodstream.

Almost as complex as the brain

As more researchers investigate these cells and their network, the picture has become more elaborate. Osteocytes are clearly numerous and densely interconnected (see the image below), but putting an actual number on them had never been done – and it’s worth doing.

Numbers in biology help us discover new insights, so much so that researchers have set up a database and handbook of many “bionumbers” across many species, collected from the scientific literature.

For example, the number of synapses in the human neural cortex is estimated at 150 trillion. An MIT-led citizen science project involving 120,000 online gamers, called EyeWire, has already helped researchers understand how the brain sees movement by mapping these connections.

But why should anyone care about the number of osteocytes? Because, as well as controlling bone strength and the release of vital minerals such as calcium and phosphate into the bloodstream, there is now evidence that these cells might influence how your immune system works, how fat you are, how your kidney works, and even male fertility.

So, to get a sense of the size of the osteocyte network, we started to quantify it in the human skeleton. What we found exceeded even our expectations. It turns out that inside your skeleton lives a network that is almost as complex as the neural network of your brain.

Osteocytes and their dendritic fingers form a network within bone
Kevin Mackenzie, University of Aberdeen, Wellcome Images (B0008430), CC BY-NC-ND

How the numbers stack up

Taking recent imaging data (e.g. here and here), we calculated that the human skeleton contains about 42 billion osteocytes. That’s about six times the Earth’s population. In comparison, the human brain contains 86 billion neurons, packed in a volume (around 1.2 litres) comparable with that of the skeleton (about 1.75 litres) – although, of course, the skeleton is more spread out.

When we added together the length of these little cell fingers, imagining them being placed end to end, we found that this network is about 175,000 kilometres long. That’s more than four times the Earth’s circumference, and almost identical to the total length of axons in the brain: 180,000 km.

We based many estimates on simple algebraic manipulations of previously published data. But one essential piece of information could not be estimated easily: the number of connections osteocytes make with their neighbours. A brain without connections can do nothing, so estimating connections in the osteocyte network is important.

Unfortunately, connections between osteocytes are hard to see directly. What is seen instead are the little tunnels through the bone that osteocytes and their fingers live in.

So to measure this proxy tunnel network and the cell network within, we resorted to a mathematical model of dendritic finger branching. Feeding this model with data on the proxy network, we calculated that 23 trillion connections exist in the osteocyte network of the human body.
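As a rough sanity check, the comparisons in this article reduce to a few lines of arithmetic. In the Python sketch below, the world population (about 7.3 billion in 2015) and the Earth’s circumference (40,075 km) are assumed reference values; all other figures are the ones quoted above.

```python
# Back-of-the-envelope checks of the comparisons quoted in this article.
osteocytes = 42e9           # cells in the human skeleton
network_km = 175_000        # osteocyte processes, placed end to end
connections = 23e12         # estimated osteocyte-osteocyte connections

world_population = 7.3e9         # assumed, circa 2015
earth_circumference_km = 40_075  # assumed

print(osteocytes / world_population)        # ~5.8: "about six times Earth's population"
print(network_km / earth_circumference_km)  # ~4.4: "more than four times Earth's circumference"
print(connections / osteocytes)             # ~548 connections per osteocyte, on average
```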

An evolved smart biomaterial

So, by these measures, your skeleton is a lot like your brain, with a similar number of cells interconnected in a similar sized space. But why do our skeletons need such a complex network? We don’t know exactly, but we do know that these cells exchange information, just like neurons do.

The tunnels that osteocytes occupy can still be seen in old bones, including dinosaur fossils. We can use this information to understand how bones have evolved to become the self-detecting and self-regulating biomaterial we own; that’s something that can’t be done with brain fossils.

Osteocytes communicate with each other about where the skeleton is weak and needs to be strengthened, or where there is damage that needs to be fixed. These messages are transmitted to cells on the bone surface that are able to remove damaged bone (osteoclasts) and form new bone (osteoblasts).

We know very little about how these cells communicate. But if we did, we could find better treatments for skeletal disorders like osteoporosis or osteogenesis imperfecta, and find ways to get football players back on the field more quickly (and more safely!) after a fracture.

In the meantime, the next time you stand up, walk around or do weights, think about how the network of osteocytes in your bones is responding to the stresses and strains you are putting it through. And thank your osteocytes for keeping your skeleton strong (and smart) enough to support you.

The Conversation

This article was originally published on The Conversation.
Read the original article.

Flashbulb memories – why do we remember learning about dramatic events so vividly?


It isn’t surprising that many Bostonians have vivid memories of the 2013 Marathon bombing, or that many New Yorkers have very clear memories about where they were and what they were doing on 9/11.

But many individuals who were not onsite for these attacks, or not even in Boston on April 15 2013 or in New York on September 11 2001 also have vivid memories of how they learned about these events. Why would people who were not immediately or directly affected have such a long-lasting sense of knowing exactly where they were and what they were doing when they heard the news?

These recollections are called flashbulb memories. In a flashbulb memory, we recall the experience of learning about an event, not the factual details of the event itself.

There might be an advantage to recalling the elements of important events that happen to us or to those close to us, but there appears to be little benefit to recalling our experience hearing this kind of news. So why does learning about a big event create such vivid memories? And just how accurate are flashbulb memories?

Strong emotions and personal connections

Not all historical events lead to flashbulb memories. An event must capture our individual attention and be identified as something significant before the memory is intensified. In order for us to exhibit this enhanced memory phenomenon, it seems critical that we feel a sense of personal or cultural connection to the event that results in a strong emotional reaction.

John F Kennedy’s funeral, November 24 1963
The US National Archives

Hearing that a loved one has unexpectedly died would likely lead to a flashbulb-like memory. However, psychologists tend to study public events so they can examine a large number of memories referencing the same event. By doing so, investigators can examine the memories of a large group of individuals, often at varying intervals, to see how memories change over time.

Cross-cultural studies of flashbulb memories show that although the types of events and the memories that result are quite similar from person to person, the specific events that lead to these memories vary dramatically.

Martin Luther King, Jr at the Civil Rights March on Washington, DC, August 28 1963.
The US National Archives

For instance, the 1977 study that coined the term “flashbulb memories” showed that although both black and white Americans almost universally recalled flashbulb memories of John F Kennedy’s assassination, black Americans were more than twice as likely to have flashbulb memories for the assassination of Martin Luther King, Jr than were white Americans.

Some theorists have argued that part of the reason that our flashbulb memories are so long-lasting is because having such a vivid memory is “proof” of our membership in a particular social group. It would be a poor patriot who could not remember what he or she was doing on September 11 2001.

Flashbulb memories have vivid details

The first description of flashbulb-type memories in the psychological literature (by F W Colgrove in 1899) is actually of Abraham Lincoln’s assassination and the sample report includes abundant, specific detail:

Everybody looked so sad, and there was such terrible excitement that my father stopped his horse, and leaning from the carriage called: ‘What is it my friends? What has happened?’ ‘Haven’t you heard?’ was their reply–’Lincoln has been assassinated.’ The lines fell from my father’s limp hands, and with tears streaming from his eyes he sat as one bereft of motion.

Although we can remember many events from our lives for decades or longer, it’s the particular ease with which these extremely vivid memories come to mind after lengthy, sometimes lifelong delays, that also makes them remarkable.

Compared to ordinary autobiographical memories, flashbulb memories include richer sensory detail. For example, you may readily be able to picture people and places clearly and to hear the sounds of voices and ambient noises intensely. These memories are also characterized by the presence of “idiosyncratic details” that seem to be irrelevant to the overall scene.

More details, but not necessarily accurate details

Because we can easily recall a lot of details about the event, we believe those details accurately reflect what happened. But it turns out that the durability and the vividness of these memories are actually more reliable than their accuracy. In other words, although we feel like we remember exactly where we were and what we were doing, the evidence suggests that our confidence may be misplaced.

Have you ever disagreed with a spouse or a sibling about what actually happened at an event you both attended? You might realize that our memories are not a perfect reproduction of what occurred in the past. Instead, psychologists describe memories as being reconstructions of the past. Memories are based, in part, on what actually happened (obviously), but are also influenced by our current thoughts and emotions and our reasons for remembering.

You don’t remember events exactly as they happened

All memories tend to lose detail over time and we sometimes confuse details from one event with those from another. This is also true of flashbulb memories. We are just as prone to forgetting and, more interestingly, potentially more vulnerable to mis-remembering, flashbulb memories than other autobiographical memories. Because we frequently think about and talk about our flashbulb memories, we sometimes add details from other events or incorporate details suggested by others. By doing so, we shape our memories into a coherent, interesting story to share.

Media coverage contributes, in part, to this phenomenon. Repeatedly viewing footage that was only available later can sometimes lead to mistakenly remembering that you saw those images at the time of the event. The media can also serve as a cue to think about or talk about these memories, enhancing their accessibility and vividness.

So, although we have a subjective feeling of remembering these events “exactly” as they occurred, this is typically not the case. When people are asked to record their memories, these objective reports for flashbulb memories include errors of omission and commission to the same degree as other autobiographical memories from the same time.

Why, then, do we feel like we remember exactly where we were and what we were doing when we learned of an important event? Because doing so demonstrates to ourselves and to others what we believe to be important.

The Conversation

This article was originally published on The Conversation.
Read the original article.

Why employing autistic people makes good business sense


Microsoft has announced its intention to hire more autistic people – not as a charitable enterprise but because, as corporate vice-president Mary Ellen Smith said: “People with autism bring strengths that we need at Microsoft.” Employing autistic people makes good business sense.

Microsoft is not the only firm to reach this conclusion. More and more companies are beginning to seek employees from the pool of autistic talent. Specialisterne is a consultancy that recruits only autistic individuals. Originally based in Denmark, it now operates in 12 countries worldwide and is currently working with Microsoft.

In recent years Vodafone and German software giant SAP have also launched recruitment drives to find more autistic employees, and to provide better support for them in the workplace. Both companies state that this is due to the competitive edge it gives, with SAP executives reporting increased productivity.

This is testament to the excellent work that many autistic people and their supporters have done to raise awareness of the strengths and abilities associated with autism, as well as the better-known difficulties with social communication and interaction. But what are these strengths?

Highlighting strengths not drawbacks

The three companies above might suggest that benefits come to those working in computing. Indeed Specialisterne has revealed that its autistic consultants find on average 10% more bugs than their non-autistic colleagues when checking software code for errors.

There is growing evidence to back up these observations. Studies of attention and perception among people with autism reveal that those with the condition see the world differently. For example, my colleagues and I have shown that autism is characterised by Increased Perceptual Capacity – the ability to process more information at any given time.

This can be problematic if the extra information results in altered sensitivities – such as finding bright lights painful, or having difficulty focusing on a conversation in a noisy room. But in other situations it can provide an increased ability to absorb and process useful information at an exceptional level of detail.

For example, in tasks that require you to find a target hidden among other elements, autistic people are faster and more accurate. They are also better at noticing both expected and unexpected things in their visual field. Autistic people also show excellent pattern recognition and a superior ability to identify and remember sounds, and are much more likely to have perfect pitch.

This gives them skills to excel as artists, musicians and scientists. Perhaps controversially, these skills are also put to military use: the Israeli Defence Force has a specialist intelligence unit comprised exclusively of autistic analysts, whose skills are used to detect military threats.

Computer hacker Gary McKinnon, who was diagnosed with autism spectrum disorder, and his mother Janice.
Katie Collins/PA

Beyond stereotypes

Is this idea of the “autistic genius” who is more comfortable with computers than people a worrying throwback to old stereotypes? We should be careful not to regress to the Rain Man vision of autism, depicting autistic people as largely unable to function yet with an isolated area of genius. Such savants do exist, but they are rare (an estimated 1-10% of the autistic population), whereas the increased perceptual abilities discussed above are more common. While the latter are less extreme, they demonstrate that alongside the difficulties autistic people often have unique gifts.

Temple Grandin, an autistic author, self-advocate and professor, agrees, arguing that “the autistic brain is good at something and bad at something else”. She urges us to seek out autistic people’s skills rather than fixating on shortcomings. There are many old jokes within the IT industry that the top programmers at leading firms are on the autistic spectrum – hidden away in solitary cubicles, coding for hours on end (Bill Gates, for example, is regularly diagnosed as autistic by the press).

But Grandin reminds us not to pigeonhole all people based on classic “autism skills”. There are many types of autistic brain: not only the pattern thinkers and mathematicians who will excel at programming, but also visual thinkers who will be great graphic designers, artists such as Stephen Wiltshire and photographers, or verbal thinkers who would make excellent stage actors (including Daryl Hannah) or journalists.

Stephen Wiltshire, an autistic artist who draws huge landscapes entirely from memory.
Wallace Woon/EPA

Improving chances

Society’s understanding of this issue is improving, but there’s still much to be done. According to the National Autistic Society’s 2012 survey, only 15% of autistic adults in the UK are in full-time paid employment (compared to 31% for other disabilities), despite 61% of those who are unemployed saying they want to work.

What can we do to improve this? Microsoft’s announcement and others like it helps, but employers need to be better educated about the value autistic employees can bring. Businesses need to know about potential difficulties that autistic employees might experience, the simple adjustments that can accommodate them and the wide range of skills and interests that they can bring to the workplace. Lee Scott MP, the prime minister’s special needs envoy, is developing a scheme that will ask each MP to help find work for young autistic people in businesses in their constituencies.

We also need the education system to be better at equipping autistic people with the skills they need. On his blog, John Elder Robison, an autistic author and engineer, explains how the expectation of college graduation is harmful and believes we need to focus on making sure students, autistic and non-autistic alike, gain the skills they need rather than achieve a particular qualification. This leans towards more vocational programmes that capitalise on individuals’ special interests and abilities, helping shape those into employable skills.

The film X+Y is an example of what happens when talent and ability is identified early and fostered in the right way. Based on the real-life documentary (Beautiful Young Minds) about autistic mathematician Daniel Lightwing, the film tells his story from early childhood through his training to take part in the International Mathematics Olympiad. I won’t spoil the ending, but it’s definitely worth a watch.

We must also take care to value all autistic individuals, irrespective of whether they have a particular ability or not. As the extremely eloquent Ari Ne’eman, autistic campaigner and member of the US National Council on Disabilities, said: “People have worth regardless of whether they have special abilities. If society accepts us only because we can do cool things every so often, we’re not exactly accepted.”

The Conversation

This article was originally published on The Conversation.
Read the original article.

Image of Brain under the influence of LSD


Lysergic acid diethylamide, AKA LSD, is probably the most famous hallucinogen. Despite the anecdotes of scary and beautiful trips, and the new-age rumors of psychotropic medicinal potential, little is known about the actual, physical effects of LSD on the brain. Whatever your stance on the drug, it has been under-researched, and in this age of legalization and a waning, largely ineffectual drug war, it is hard to trust public opinion on any recreational mind-altering substance. Timothy Leary’s 1960s-era writings and studies of the drug were the last true exploration – until now.

Last summer, Carhart-Harris presented his findings after becoming the first UK scientist to legally administer LSD to human volunteers; the Misuse of Drugs Act 1971 outlawed it for any public use, including science. His presentation included a slide showing still-unpublished cross-sectional brain images of a volunteer chilling in an fMRI scanner, tripping on acid. Pro-LSD presentations like this one, a handful from across the worldwide science community, spurred the recent work of British medical researchers, led by Imperial College London’s Professor of Neuropsychopharmacology David Nutt and Dr Robin Carhart-Harris, who are recording MEG and fMRI brain scans as the drug interacts with regular healthy brains. It’s England’s first large-scale study of LSD in fifty years, and the first-ever study of this kind with a scientifically respectable sample size.

The study is being performed by the Beckley Foundation Psychedelic Research Programme, after a crowdfunding campaign on the website Walacea proved extremely successful. The Walacea page says the money will help complete the research study, with results to be published later in 2015. The crowdfunding is an important part of the story, because university science budgets and government money have been slow to cover the costs of something so stigmatized by negative anecdotes.

“Despite the incredible potential of this drug to further our understanding of the brain, political stigma has silenced research. We must not play politics with promising science that has so much potential for good,” said Prof. Nutt (yes, that is his real name).

LSD is in a restricted class of drugs in England, where it is listed as a Schedule 1 substance. There were a lot of legal requirements to meet before the team could get a license to use LSD on test subjects, and they also needed approval from a science ethics committee to administer it to human volunteers. After jumping through all the hoops, the researchers understood why LSD has gone so understudied: the process was expensive, and they often had to convince people they were doing real science before their paperwork would be taken seriously. The entire process has been slow and well monitored as a result.

The relatively sophisticated brain images the study hopes to produce of their subjects tripping on LSD could lead to new treatments for psychological disorders, most likely including obsessive compulsion and depression.

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

How the brain reads music: the evidence for musical dyslexia


Music education in the western world often emphasizes musical literacy, the ability to read musical notation fluently. But this is not always an easy task – even for professional musicians. Which raises the question: Is there such a thing as musical dyslexia?

Dyslexia is a learning disability that occurs when the brain is unable to process written words, even when the person has had proper training in reading. Researchers debate the underlying causes and treatments, but the predominant theory is that people with dyslexia have a problem with phonological processing – the ability to see a symbol (a letter or a phoneme) and relate it to speech sounds. Dyslexia is difficult to diagnose, but it is thought to occur in up to 10% of the population.

In 2000, Neil Gordon, a retired pediatric neurologist, proposed the idea of musical dyslexia (dysmusia), based on growing evidence that the areas of the brain involved in reading music and text differed.

The idea that dyslexia could affect the reading of non-language symbols is not new. For instance, dyscalculia is the difficulty reading and understanding mathematical symbols. Recent research supports dyslexia and dyscalculia as separate conditions with unique causes (dyscalculia is thought to be caused by a deficit in spatial processing in the parietal lobe). If the brain processes words and mathematical symbols differently, why not musical symbols too?

Reading music is a whole brain activity.
Flutist via www.shutterstock.com.

Music’s written system

Western music, like language, has a highly evolved coding system. This allows it to be written down and transmitted from composer to performer. But music, unlike language, uses a spatial arrangement for pitch. The page is divided into staffs of five lines each, and, basically, the higher a symbol is placed on the staff, the higher the pitch.
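To see how literal that spatial code is, here is a toy Python sketch (our own illustration, not from the research discussed below); it assumes a treble clef and unmodified staff positions, which map onto the white notes of C major.

```python
NOTES = ["C", "D", "E", "F", "G", "A", "B"]

def staff_position_to_note(pos: int) -> str:
    """Map a vertical staff position (0 = bottom line of the treble staff,
    counting lines and spaces upward) to a note name: higher means higher pitch."""
    degree = (2 + pos) % 7        # the bottom line is E, index 2 in NOTES
    octave = 4 + (2 + pos) // 7   # that E sits in octave 4 (E4)
    return f"{NOTES[degree]}{octave}"

# The five lines of the treble staff, bottom to top: E4, G4, B4, D5, F5
print([staff_position_to_note(p) for p in range(0, 9, 2)])
```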

Unlike letters in text, pitches can be stacked, indicating simultaneous performance (chords). Music also uses a system of symbols to indicate how pitches should be played. Symbols can indicate duration (rhythm), volume (dynamics) and other performance cues. Music also utilizes written words to indicate both the expressive features of the music and the lyrics in vocal music. Lyrics may be in languages not spoken by the performer.

Due to differences in the physical features of the written systems, it makes sense that the brain would read music and text differently. This appears to be the case – at least to some extent.

Reading music and reading text use different systems in the brain.
Violin and books via www.shutterstock.com.

Text and music reading in the brain

In the brain, reading music is a widespread, multi-modal activity, meaning that many different areas of the brain are involved at the same time. It includes motor, visual, auditory, audiovisual, somatosensory, parietal and frontal areas in both hemispheres and the cerebellum – making music reading truly a whole brain activity. With training, the neural network strengthens. Even reading a single pitch activates this widespread network in musicians. While text and music reading share some networks, they are largely independent. The pattern of activation for reading musical symbols and letters is different across the brain.

Composer Maurice Ravel.
Bibliothèque nationale de France via Wikimedia Commons

Brain damage, especially if it is widespread, as was the case with the composer Maurice Ravel (perhaps best known for Boléro), will likely impair both text and music reading abilities. Ravel had a form of frontotemporal dementia.

However, there have been cases where a more limited brain injury impaired reading of one coding system and spared the other.

Ian McDonald, a neurologist and amateur pianist, documented the loss and recovery of his own ability to read music after a stroke, though his ability to read text was unaffected. Oliver Sacks described the case of a professional pianist who, through a degenerative brain disease (Posterior Cortical Atrophy), first lost her ability to read music while retaining her text reading for many years. In another case, showing the opposite pattern, a musician lost his ability to read text, but retained his ability to read music.

Cases where music and language seem to be differently affected by brain damage have fascinated researchers for centuries. The earliest reported case of someone who was unable to speak, but retained his ability to sing, was in the 1745 article, On a Mute who Can Sing.

More recently, the Russian composer, Vissarion Shebalin, lost his language abilities after a severe stroke, but retained his ability to compose. Maintaining the ability to sing in the absence of language has led to the creation of a therapeutic treatment called Melodic Intonation Therapy that essentially replaces speech with song. This allows the patient to communicate verbally. These cases and many others demonstrate that music and language are to some extent separate neurological processes.

Differences in reading ability can occur even within musical notation. Cases have been reported where musicians have lost their ability to read pitch, but retained their ability to read rhythm, and vice versa. fMRI studies have confirmed that the brain processes pitch (spatial information) and rhythm (symbol recognition) differently.

Musical dyslexia

This research starts to suggest how a specifically musical dyslexia could occur. The deficit may be centered on pitch, on musical symbols, or on both. No conclusive case of musical dyslexia has yet been reported (though Hébert and colleagues have come close), and efforts to determine the effects of dyslexia on reading musical notation have been inconclusive.

Children in western cultures are taught to read text, but not always taught to read music. Even when they are, inabilities to read music are not generally treated as a serious concern. Many gifted musicians are able to function at a professional level purely learning music by ear. Among musicians, there is a wide range of music reading proficiencies. This is especially apparent with sight reading (the first performance of a notated piece). Identifying musical dyslexia could help explain why some musicians read well and others don’t.

The Conversation

This article was originally published on The Conversation.
Read the original article.

Why is it so difficult to think in Higher Dimensions?


Humans can only perceive three-dimensional space, but theoretical math works out just fine when manipulating objects in four or more spatial dimensions. Mathematicians, scientists and philosophers still debate whether higher spatial dimensions actually exist.

It’s hard to imagine higher dimensions. Even one additional spatial dimension is hard to see with your inner mind’s eye. Imagining six, seven or eight spatial dimensions isn’t just hard: no one has ever truly conceptualized hyperspace. That’s what makes the subject compelling, but also what makes it frustrating to talk about. The examples theorists use to help people “visualize” what can’t be seen must work within human limitations, and are thus two- and three-dimensional illustrations of a higher-dimensional concept or object.

“Wait a second,” some of you are wondering, “isn’t TIME the fourth dimension?”
This article is about spatial dimensions only. Personally, I agree with Amrit Sorli and Davide Fiscaletti’s work, which I feel adequately proves that time is NOT a spatial dimension. If you want to debate this issue further, you can read my reasoning in my follow-up piece, Time: fourth dimension or nah?, also available on Cosmoso.net.

One of the most basic exercises in multidimensional theory is to imagine moving in a fourth spatial dimension. The distance between you and everything around you stays the same, yet in some fourth dimension you are moving. Most people can’t truly play this imagination game, because there is nothing in our three spatial dimensions to compare the experience to.

Image: a sphere passing through Flatland

In Flatland, the famous book about spatial dimensions, living two-dimensional beings exist in a universe of only two dimensions. A three-dimensional being such as a sphere would appear to them as a circle whose circumference changes as it moves through a third dimension no one in Flatland has ever conceptualized.
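The geometry behind that shrinking and growing circle is simple: a sphere of radius R whose centre sits a distance d from Flatland’s plane intersects the plane in a circle of radius √(R² − d²). A few lines of Python (my own illustration) trace the circle as the sphere passes through:

```python
import numpy as np

R = 1.0  # radius of the visiting sphere
for d in np.linspace(-1.0, 1.0, 9):     # signed distance of the centre from the plane
    r = np.sqrt(max(R**2 - d**2, 0.0))  # cross-section radius seen by Flatlanders
    print(f"centre offset {d:+.2f} -> circle of radius {r:.2f}")
```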

Humans evolved to notice changes in our three-dimensional environment, inheriting our ancestors’ ability to conceptualize space in three dimensions as a hardwired trait – one that arguably stops us from conceptualizing other aspects of reality that might nonetheless exist. Others see hyperspace as a theoretical construct of mathematics that doesn’t describe anything real, pointing to the lack of evidence for other dimensions.

Tesseracts Predate Computer-assisted Modelling.

A tesseract. Many people in the advanced math classrooms of my generation of high school students struggled to wrap their heads around tesseracts without moving diagrams. If a picture is worth a thousand words, what is an animated gif of a rotating tesseract worth? Maybe we need a new saying.

We are able to conceptualize three dimensions in the abstract when we watch TV, look at a painting or play a video game: any time we look at a screen, we watch a two-dimensional image from a point outside those two dimensions. Having an outside point of view on a three-dimensional space could, in the same way, give us a way to artificially understand a higher spatial dimension. Until that time comes, we are stuck explaining the fourth dimension by demonstrating how it would look on a two-dimensional screen, which we view from a three-dimensional vantage point.
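To make the screen trick concrete, here is a minimal Python sketch (my own, assuming numpy): it builds the 16 vertices and 32 edges of a tesseract, then flattens them to screen coordinates with two successive perspective projections, 4D to 3D and 3D to 2D; the 4D-to-3D step is the same divide-by-depth trick a renderer uses for 3D to 2D.

```python
import itertools
import numpy as np

# The 16 vertices of a tesseract: every combination of +/-1 in 4 coordinates.
vertices = np.array(list(itertools.product([-1.0, 1.0], repeat=4)))

# Two vertices share an edge when they differ in exactly one coordinate.
edges = [(i, j) for i in range(16) for j in range(i + 1, 16)
         if np.sum(vertices[i] != vertices[j]) == 1]

def project(points, distance=3.0):
    """Perspective-project 4D points to 2D by dividing out each extra
    coordinate in turn (4D -> 3D, then 3D -> 2D)."""
    w = distance / (distance - points[:, 3])
    pts3 = points[:, :3] * w[:, None]
    z = distance / (distance - pts3[:, 2])
    return pts3[:, :2] * z[:, None]

pts2d = project(vertices)
print(len(edges), "edges")  # a tesseract has 32 edges
print(pts2d[:4])            # the first few vertices, as screen coordinates
```

Draw the 32 edges between those projected points and you get the familiar cube-within-a-cube picture of a tesseract.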

It’s kind of like imagining “one million”: you can prove it mathematically to yourself, you can count to it and you know how much it is worth, but you can’t truly picture one million of anything. Trying to explain this conceptualization problem with words is tough, because your brain is not equipped to handle it. Humans try to wrap their minds around it and dream up ways to explain hyperspace to each other anyway.

4D Rubik’s Puzzle

A Rubik’s cube is a particularly compelling multi-dimensional teaching tool, because it puts spatial dimensions in the abstract in the first place, and then gives the cube the ability to change the dimensional orientation of a third of its mass. It’s hard enough to wrap your head around a normal, three-dimensional Rubik’s puzzle. By adding another dimension and applying the same principle, one can ALMOST imagine that fourth spatial dimension. Most people can’t solve a three-dimensional Rubik’s puzzle, but if you think you are ready for the fourth dimension, you can download a four-dimensional one and play it on your two-dimensional screen here: Magic Cube 4D

If you don’t think you’re ready to try to solve that puzzle but you want to know more, you can watch this roughly half-hour video about it:

Miegakure

While Miegakure is still under development, it’s set for release in 2015. Interactive games like this can spur collaborative thinking from a larger pool of collaborators – and make game developers tons of money.

If you want something a little less abstract than Rubik’s, check out this prototype for Miegakure, the surreal PlayStation 4 game that lets the user explore a four-dimensional world through three-dimensional spaces that connect to each other through a higher dimension. It’s a great idea, and everyone’s first thought is to wonder how the heck they coded it. Then the idea sinks in: they wrote the code first and played with the visual manifestation as they went. That’s a fitting metaphor for the whole subject, which begins as a concept rather than an observation. The essence of the argument against hyperspace actually existing is the lack of physical evidence. Unlike a ghost story or a religious attempt to explain the supernatural, though, higher dimensions have mathematical evidence that seems to make them possible: logical evidence, as opposed to empirical data. There are ways to observe without using human senses, but it’s difficult to prove an observation of something most humans have trouble even seeing with their mind’s eye, so to speak.

One day we might be able to use technology to increase our understanding of this abstract concept and manipulate an entirely new kind of media. For now we are stuck with two- and three-dimensional visual aids and a mental block put in place by aeons of evolution.

Read more about Hyperspace on Cosmoso.net!
Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

What is going on in your brain when you sleep?


Sleep is profoundly important in our lives, so much so that we spend a considerable proportion of our time engaging in it. Sleep enables the body, including the brain, to recover metabolically, but contemporary research has been moving to focus on the active, rather than recuperative, role that sleep plays in our brain and behaviour.

Sleep is composed of several distinct stages. Two of these, slow-wave (or deep) and REM sleep, reflect very different patterns of brain activity, and have been related to different cognitive processes.

Slow-wave sleep is characterised by synchronised activity of neurons in the neo-cortex, firing at a slow rate of between 0.5 and three times per second. The neo-cortex comprises the majority of the brain’s cerebral cortex and plays a role in memory, thought, language and consciousness. In contrast, during REM sleep, when most of our dreaming happens, neuronal firing is rapid and synchronised at much higher frequencies, between 30 and 80 times per second.

Such patterns of brain activity during REM sleep are reminiscent of those observed during wakefulness, and for this reason REM sleep is often referred to as “paradoxical” sleep.
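Those two frequency bands are concrete enough to play with numerically. Below is a toy numpy sketch (our own illustration, not taken from any study): it labels an EEG-like epoch by comparing its power in the 0.5-3 Hz band against the 30-80 Hz band; the sampling rate and the bare power comparison are assumptions made for the example.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Total spectral power of a 1-D signal (sampled at fs Hz) in [lo, hi] Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return spectrum[(freqs >= lo) & (freqs <= hi)].sum()

def label_epoch(signal, fs=256):
    """Crude label: slow-wave-like if 0.5-3 Hz power dominates 30-80 Hz power."""
    slow = band_power(signal, fs, 0.5, 3.0)
    fast = band_power(signal, fs, 30.0, 80.0)
    return "slow-wave-like" if slow > fast else "REM-like"

# Two synthetic 10-second epochs: a 1.5 Hz slow oscillation and a 45 Hz one.
t = np.arange(0, 10, 1 / 256)
print(label_epoch(np.sin(2 * np.pi * 1.5 * t)))   # slow-wave-like
print(label_epoch(np.sin(2 * np.pi * 45.0 * t)))  # REM-like
```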

Cognitive functions

There is growing evidence that slow-wave sleep is related to the consolidation of memory and is involved in transferring information from the hippocampus, which encodes recent experiences, and forging long-term connections within the neo-cortex. REM sleep has been linked to processes involving abstraction and generalisation of experiences, resulting in creative discovery and improved problem solving.

Though there are substantial similarities between wakefulness and REM sleep, numerous studies have explored differences in the activity of brain regions between these states, with the cingulate cortex, hippocampus and amygdala more active during REM sleep than wakefulness. These regions are particularly interesting to cognitive neuroscientists because they are key areas involved in emotional regulation and emotional memory.

However, which sub-regions are active within these broader cortical and limbic areas – the pathways in the brain that produce these patterns of activation – and the precise function of the activity in these regions during REM sleep is currently under-described.

Cortical activity in rats

A new study published in Science Advances studied the physiology and functionality of REM sleep in a group of rats and provides insight into the cortical activity and the sub-cortical pathways that result in this activity. The level of detail of this study provides a major step forward for our understanding of the effect that REM sleep has on our brain and cognitive behaviour.

Rat sleep.
Tomi Tapio K, CC BY

The authors studied groups of rats who were allowed to sleep, but prevented from entering REM sleep for three days. Six hours before assessment, half of the rats were allowed to sleep normally, and half continued to be deprived of REM sleep. The rats that were permitted to sleep normally then demonstrated raised levels of REM sleep within those six hours. This enabled a comparison of the effect of recent REM sleep between groups. An additional control group of rats were allowed to sleep normally throughout the study.

Gene expression analysis involves tracking the presence of particular mRNAs or proteins that can be identified as the consequences of certain genes operating. The rats that underwent substantial REM sleep before testing were found to show greater expression of several genes associated with synaptic plasticity (how quickly their synapses can adapt to changes in a local environment) and which affect the efficiency of neural transmission in the hippocampus.

In the neo-cortex, the gene expressions related to how well our synapses adapt also increased following REM sleep, but those related to neural transmission were reduced compared with the group that was prevented from REM sleep. So the effects of REM sleep appear to be driven by changes in the way that neurons communicate. This is consistent with the view that REM sleep allows the brain’s memory processing systems to re-balance, which enables effective responses to experiences the next day.

Where in the brain?

Stained neurons from somatosensory cortex in the macaque monkey.
Brainmaps.org, CC BY

In a further study, the same group determined the precise location of where these changes actually occur in the brain. In the neo-cortex, there was a general increase in plasticity throughout several areas, including sensorimotor regions that bring together sensory and motor functions. In the hippocampus, it was generally confined to the dentate gyrus, which is thought to contribute to forming new episodic memories among other things. REM sleep was also associated with reduced neuro-transmission throughout many regions of the neo-cortex, indicating that REM sleep likely results in a general weakening of the connections between synapses, which may enable brain networks to better learn from multiple experiences rather than be affected only by single instances.

The claustrum: consolidating emotion and memory.
Was a bee

The final studies the group conducted determined the source of the cortical changes in plasticity and neuro-transmission during REM sleep. By tracking signal transmission between different brain areas together with chemical lesioning (in which brain areas are temporarily inactivated), they identified two further areas called the claustrum and the supramammillary nucleus as having key roles during REM sleep.

These two areas have been identified as involved in integrating emotion and memory. The claustrum is a very thin layer of neurons that are found underneath the inner neo-cortex. It is known to link to and from very many regions of this part of the brain. As such, the claustrum has been implicated in integrating stimuli from several senses and is involved in linking areas involved in emotional processing and attention.

The supramammillary nucleus, in the hypothalamus, is also known to interconnect with multiple areas of the brain, several of which are associated with emotional processing.

The implications of this work provide converging evidence that REM sleep modulates activation and synaptic processing in areas of the brain that contribute to the processing of emotion. This is also consistent with previously untested accounts that suggest REM sleep is important for encoding memories (but without their emotional content). While the role of dreaming during REM sleep is still yet to be linked to observed effects from neuro-chemicals in the brain, understanding what is happening in our brains when we dream could yet prove to be key to processing of emotion and memory.

The Conversation

This article was originally published on The Conversation.
Read the original article.