Category Archives: Evolution

What we learn from primate personality

Every human is different. Some are outgoing, while others are reserved and shy. Some are focused and diligent, while others are haphazard and unfussed. Some people are curious, others avoid novelty and enjoy their rut.

This is reflected in our personality, which is typically measured across five factors, known as the “Big Five”. These are:

  • Openness – intellectual curiosity and preference for novelty
  • Conscientiousness – the degree of organisation and self-discipline
  • Extraversion – sociability, emotional expression and tendency to seek others’ company
  • Agreeableness – degree of trust or suspicion of others, and tendencies towards helpfulness and altruism, and
  • Neuroticism – emotional stability or volatility.

But did you know that our primate cousins – other apes (chimpanzees, bonobos, orangutans, gorillas and gibbons) and monkeys – also exhibit a similar personality profile? Some are bold, others shy. Some are friendly, others aggressive. Some are curious, while others are conservative.

But they also differ from us in some interesting ways. And it’s in teasing out these differences that we can learn a surprising amount about the way they live, and how they have evolved.

Social influence

Comparative psychologists have long adapted personality tests to measure the personality of other species, including pets, big cats, and our “hairy” primate relatives.

Since nonhuman animals cannot fill out a questionnaire, a human who knows them well – perhaps a caregiver, zookeeper, owner, researcher or park ranger – rates their personality for them.

Chimpanzees, it turns out, are remarkably similar to us in their personality make-up. They have been found to have the same five personality factors that we have. However, they also have a sixth Dominance factor. This includes features such as: independent, confident, fearless, intelligent, bullying and persistent.

Why do chimps have a Dominance factor and we don’t? It appears to be due to the kind of society that chimps live in. Understanding the dominance hierarchy of male chimpanzees – who is powerful and who is not – is a matter of survival and well-being for every chimpanzee in a community.

Other primates also show interesting variations in personality that correspond to their social dynamics.

Do I look conscientious or neurotic?
Rod Waddington/Flickr, CC BY-SA

Macaque machinations

The 22 species of macaque monkeys are the only primates that are as widespread in their distribution as we are. Along with their disparate habitats, they also show wide variation in the structure of their societies, which appears to have influenced the evolution of their personalities.

A team of researchers, led by Mark Adams and Alexander Weiss of Edinburgh University, investigated personality and social structure in six species of macaque and found some interesting variation.

There are four main categories of social style, ranging from Grade 1 “despotic” to Grade 4 “tolerant”, depending on how strict or relaxed their female dominance hierarchies are.

Grade 1 species showed strong nepotism or favouritism towards kin and high ranking monkeys. These species include rhesus macaques, a species commonly used in laboratories and sent into space before humans, and Japanese macaques, which include the famous snow monkeys who soak in hot springs.

At the other end of the spectrum, the Grade 4 species showed more tolerance in social interactions between unrelated females. This includes Tonkean macaques, which are found in Sulawesi and the nearby Togian Islands in Indonesia, and crested macaques, which are critically endangered.

(A wild crested macaque received international attention when he stole a wildlife photographer’s camera and then photographed himself. This could be an example of a “bold” and “curious” personality.)

In the middle of the social tolerance scale are the Grade 2 and 3 species. This includes Assamese macaques, which are sometimes found at high altitudes in Nepal and Tibet, and Barbary macaques, which include the infamous “apes” of Gibraltar (actually monkeys, not apes), who are often overweight and aggressive because tourists overfeed them.

Do I look aggressive to you?
Michelle Bender/Flickr, CC BY-NC-ND

Personality differences between macaque species

Interestingly, the individual species of macaques didn’t all have the same personality factors. The Japanese, Barbary, crested and Tonkean macaques had only four, while the Assamese had five, and rhesus monkeys had six factors.

All of the species exhibited the dimension of Friendliness. This seems to be a personality factor unique to macaques, and is a blend of chimpanzee Agreeableness and human Altruism.

Tonkean macaques also had a Sociability personality factor. Just like chimpanzees and humans, this species of macaque uses affiliative contacts (i.e. friendship) to reinforce bonds. Only crested macaques did not show the personality factor of Openness (i.e. curiosity), usually found in humans and other primates. The factors Dominance and Anxiety were found for rhesus and Japanese macaques.

The old and the new

The study also showed the fascinating connections between personality and social style. Grade 1 despotic species – Japanese and rhesus macaques – were rather similar, and so were Grades 2, 3 and 4, including the more tolerant species such as Assamese, Tonkean and crested macaques.

On the evolutionary scale, the African macaques, such as the Barbary macaque, are “older”, so they represent the “ancestral” social behaviours of macaques.

Barbary macaque personality has four factors:

  • Dominance/Confidence – related to social assertiveness
  • Opportunism – relating to aggression and impulsivity
  • Friendliness – relating to social affiliation, and
  • Openness – relating to curiosity and exploratory behaviour.

Rhesus and Japanese macaques, on the other hand, are “younger” on the evolutionary scale. Therefore, the Dominance and Anxiety factors seen in these species must have evolved later.

Psst. You’re disagreeable.
jinterwas/Flickr, CC BY

Understanding the personality of an individual animal or species can help in animal management and welfare. Rhesus macaques, for example, display an Anxiety personality factor. These monkeys are also most commonly used in bio-medical laboratory research. Knowing that some individuals may be prone to anxiety means that researchers must make extra efforts to alleviate any potential distress.

The findings that some Barbary macaques may be especially socially assertive, aggressive, impulsive, curious and exploratory may also help us convince tourists to keep their distance from these monkeys in Gibraltar to avoid conflicts!

Such studies of animal personality also shed light on our own personality dimensions. Our lack of a Dominance factor suggests that our ancestral environment was perhaps more egalitarian and less characterised by high social stratification, which is also borne out by anthropological and palaeontological studies.

Ultimately, we can learn a lot from our primate cousins, not only about their personalities, but about personality itself – not to mention learning a thing or two about ourselves and the social environment in which we evolved.

The Conversation

Carla Litchfield is Lecturer, School of Psychology, Social Work and Social Policy at University of South Australia.

This article was originally published on The Conversation.
Read the original article.

Understanding Cognitive Bias Helps Decision Making

noun: intuition
  1. the ability to understand something immediately, without the need for conscious reasoning.

People tend to trust their own intuition. Has there been much formal study about the veracity of intuition?

Brain science itself is a young field, and the terminology has yet to mature into a solid academic lexicon. To further increase your chances of being confused, modern life is rife with distractions, misinformation, and addictive escapisms, leaving the vast majority of society having no real idea what the hell is happening.

To illustrate my point, I’m going to do something kind of recursive. I am going to document my mind being changed about a deeply held belief as I explore my own cognitive bias. I am not here to tell you what’s REALLY going on or change your mind about your deeply held beliefs. This is just about methods of problem solving and how cognitive bias can become a positive aspect of critical thought.

Image: “Soft Bike” sculpture by Mashanda Lazarus

I’m advocating what I think is the best set of decision making skills, Critical Thought. The National Council for Excellence in Critical Thinking defines critical thinking as the intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action. (I’m torn between the terms Critical Thinking and Critical Thought, although my complaint is purely aesthetic.)

Ever since taking an introduction to Logic course at Fitchburg State College, I have been convinced that logic is a much more reliable, proven way to make decisions. Putting logic into practice when making decisions is difficult, though. Just as a math problem can be done incorrectly, logical reasoning can go wrong, and some valid logic is even counter-intuitive. My favorite example of intuition failing against logic is chess. Even as I write this I can’t convince myself otherwise: I have regretted every intuitive chess move. It’s statistically improbable that all my intuitive moves have been bad ones, yet logic works so much better in the game that my mind has overcompensated in favor of it. In the microcosm of chess rules, logic really is the better decision-making tool. Often the kernel of a good move jumps out at me as intuition, but it must still be thoroughly vetted with logic before I can confidently say it’s a good move.

In high school, I was an underachiever. I could pass computer science and physics classes without cracking a book. My attempts to coast through math classes, however, left me struggling because I could not intuitively grasp the increasingly abstract concepts. The part of my mind that controls logic was healthy and functioning, but my distrust of my own intuition was a handicap. I would take make-up mathematics courses in the summer while winning debate team trophies during the school year.


Photograph of Marcel Duchamp and Eve Babitz posing for the photographer Julian Wasser during the Duchamp retrospective at the Pasadena Museum of Art, 1963 © 2000 Succession Marcel Duchamp, ARS, N.Y./ADAGP, Paris.

I’m not just reminiscing; everyone’s decision-making process is a constantly updating algorithm of intuitive and logical reasoning. No one’s process is exactly the same, but we all want to make the best decisions possible. For me it’s easy to rely on logic and ignore even a nagging sense of intuition. Some people trust intuition strongly yet struggle to find the most logical decision; everyone is most comfortable with a specially tailored mix of intuition and logic. People argue on behalf of their particular decisions and the methodology behind them because a different method is useful for each paradigm.

In chess, intuition is necessary but should be used sparingly and tempered with logic. It’s my favorite example because the game can be played without any intuition. Even conventional chess engines can beat the average human player, and the strongest programs can beat chess masters. So, I’m biased towards logic. Chess is just a game, though. People are always telling me I should have more faith in intuitive thinking.

“But,” you should be asking, “Isn’t there an example of reliance on intuition as the best way to decide how to proceed?”

At least that’s what I have to ask myself. The best example I found of valuable intuition is the ability to ride a bike. It is almost impossible to learn to ride a bike in one session; it takes several tries over a week or longer to create the neural pathways needed to operate this bio-mechanical device. Samurai trained to feel that their weapon was part of themselves, an extension of their very arm. The mechanical motion of the human body as it drives a bicycle becomes ingrained, literally, in the physical brain. The casual, ubiquitous expression “It’s like riding a bike” idiomatically describes anything that can be easily mastered at an intermediate level, forgotten for years, but recalled at near-perfect fidelity when encountered once again.

The Backwards Brain Bicycle – Smarter Every Day episode 133

Destin at Smarter Every Day put together a video that shows the duality of intuitive thinking. It is entirely possible to train the human mind with complicated decision-making algorithms that embrace diversification and even contradictory modes of thinking.


After watching this video, I embraced a moment of doubt and realized that there are very positive and useful aspects to intuition that I often don’t acknowledge. In this case of reversed bicycle steering, a skill that seems to only work after it has been made intuitive can be “lost” and only regained with a somewhat cumbersome level of concentration.

The video demonstrates the undeniable usefulness of what essentially amounts to anecdotal proof that neural pathways can be hacked, that contradictory new skills can be learned. It also shows that a paradigm of behavior can gain a tenacious hold on the mind via intuitive skill. It casts doubt on intuition in one respect but without at least some reliance on this intuitive paradigm of behavior it seems we wouldn’t be able to ride a bike at all.

This video forced me to acknowledge the usefulness of ingrained, intuitive behaviors while also reminding me of how strong a hold intuition can have over the mind. Paradigms can be temporarily, or perhaps permanently, lost. In the video, Destin has trouble switching back and forth between the two seemingly all-consuming thought systems, but the transition itself can be part of a more complicated thought algorithm, allowing the mind to master and embrace contradictory paradigms by trusting the integrity of the overall algorithm.

Including Confirmation Bias in a greater algorithm.

These paradigms can be turned on and off, just as a worker might get used to driving an automatic-transmission car to work, operating a stick-shift truck at the job site, and driving home in the automatic again after the shift.

This ability to turn intuitive paradigms on and off as a controlled feature of a greater logical algorithm requires the mind to acknowledge confirmation bias. I get a feeling of smug satisfaction that logic comprises the greater framework of a possible decision-making process anytime I see evidence supporting that belief. There are just as many people out there who would view intuition as the framework of a complex decision-making process, with logical thought as merely a contributing part of a superior thought process. If my personal bias for logic over intuition is erroneous in some situations, can I trust the mode of thinking I am in? Using myself as an example, my relief at realizing data confirms what I have already accepted as true is powerful.

That feeling of relief must always be noted and kept in check before it can overshadow the ability to acknowledge data that opposes the belief. Understanding confirmation bias is the key to adding that next level to the algorithm. In the video example from Smarter Every Day, steering a normal bike is so ingrained in the neural pathways that when the backwards bike fails to confirm the expected pattern, the mind fills in the blank and sends an incorrect set of mechanical instructions to the body. Understanding the dynamics of confirmation bias would enable the mind to embrace a greater thought system that can move back and forth between those conflicting behavioral paradigms. I’m positing that it should be possible to master both a regular bike and the “backwards bike” and to switch between them in quick succession. The neural pathways behind both behavior paradigms can be trained and made stronger than the video shows.

I believe that with practice, someone could alternate between the steering mechanisms quickly and without the awkwardness we see in the video, just as my initial confirmation bias, now identified, doesn’t have to dictate my decisions, leaving me more open-minded to an intuitive interpretation leading to the best decision in certain situations.

An inability to acknowledge that one’s own mind might be susceptible to confirmation bias paradoxically makes one more susceptible. Critical thinking is a method of building immunity to this common trap of confidence. Identifying the experience of one’s own confirmation bias is a great way to understand and control this intuitive tendency. No matter what your thoughts are regarding logic and intuition, examining and better managing one’s confirmation biases should lead to better decision-making skills.

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

Spider Silk Continues to Inspire Biotech Advancement

From folklore to children’s stories, it seems humans have always been fascinated with spider silk, the diverse material produced in abundance, at will, from the body of nearly every species of spider. Studying the biomechanics of the spinnerets, and the chemicals that combine to produce various textures of silk at a molecular level, has given scientists a new perspective on efficiency and biosynthesis.

The golden orb-weaver spider (Nephila clavipes) produces so much silk every day that it has become the most studied spider in the world, and it was even included in a trip to the International Space Station in a special terrarium. Golden orb-weaver silk is 30 times thinner than the average human hair, yet if Spider-Man were to produce a line of proportionate thickness from the same material, it would likely hold, maybe even hold the weight of two adult humans (Valigra, 1999).
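The plausibility of that claim comes down to scaling: breaking load grows with cross-sectional area, i.e. with the square of the thread's diameter. Here is a back-of-the-envelope sketch; the ~1 GPa tensile strength for dragline silk and the 70 µm hair diameter are rough illustrative figures, not values from the article.

```python
import math

# Rough, illustrative figures (assumptions, not from the article):
TENSILE_STRENGTH_PA = 1.0e9   # ~1 GPa, a commonly cited dragline-silk ballpark
HAIR_DIAMETER_M = 70e-6       # a typical human hair is ~70 micrometres across

def breaking_load_newtons(diameter_m: float) -> float:
    """Breaking load = tensile strength x cross-sectional area."""
    area = math.pi * (diameter_m / 2) ** 2
    return TENSILE_STRENGTH_PA * area

# A real strand: ~30x thinner than a hair.
strand = breaking_load_newtons(HAIR_DIAMETER_M / 30)

# A Spider-Man-scale line as thick as a climbing rope (~10 mm).
rope = breaking_load_newtons(10e-3)

print(f"single silk strand: {strand * 1000:.1f} mN")   # a few millinewtons
print(f"10 mm silk 'rope': {rope / 9.81:.0f} kg")      # several tonnes
```

A single strand snaps under a fraction of a gram of force, but because load scales with diameter squared, the same material at rope thickness supports thousands of kilograms, far more than two adults.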

It’s hard to find a material as strong as spider silk that still retains its flexibility and elasticity. Maybe impossible. The dragline of the average spider silk is five times more durable than the Kevlar used in bullet-proof vests (Benyus, 2002, p. 132); plus, it’s lighter and breathes better. Kevlar is a petroleum product whose manufacture requires pressurized vats of intensely hot sulfuric acid (Benyus, 2002, p. 135; 2001). Biologically inspired materials could be drastically more efficient in the energy they cost to create. Oil-based synthetic molecules often create dangerous by-products which are hazardous to handle, expensive to store and virtually impossible to dispose of. Spiders create superior materials with very little energy, heat or byproducts (Benyus, 2001). NASA studies found that golden orb-weaver spinneret systems are so efficient that the spiders even recycle silk, eating and re-ingesting it after use.


Electron-microscope imaging shows the variety of textures a single spider can produce from its body.

Spider silk would be so incredibly useful that it might not even be possible to anticipate the range of products it could inspire. Most materials known to man are either elastic or have high tensile strength, but some spider silks fall into a rare category of scoring high in both areas (Benyus, 2001). Spider silk can stretch 40 percent beyond its relaxed length without losing any of its shape when it returns. Even the stretchiest nylon can’t perform that way (Benyus, 2002, p. 132; 2001). DuPont materials scientists compared silk to the steel cables currently used on bridges and standing structures worldwide and found dragline spider silk strong enough to be used as the quick-stop brake system for a jet landing on an aircraft carrier (Valigra, 1999), at a fourth of the thickness of the steel cables.

“spider silk is so strong and resilient that on the human scale, a web resembling a fishing net could catch a passenger plane in flight. If you test our strongest steel wire against comparable diameter silk they would have a similar breaking point. But if confronted with multiple pressures, such as gale-force winds, the silk can stretch as well; something steel cannot do” (Benyus, 2001, 2002).

Spiders evolved the ability to spin webs strong and versatile enough to run across, pull and twist into position, and manipulate with their many legs in order to trap prey, set complicated traps into action and move along without becoming entangled. The elasticity and strength of the web are partly why it is so easy for another species to become ensnared. Researchers who have taken the time to examine it closely have realized, in awe, its potential for application in the spaceflight, industrial, commercial and even fashion industries.

Spider silk also shows incredible tolerance for colder temperatures without becoming brittle or falling apart. Spiders are able to hide underground or near the warm trunk of a tree and return to their outdoor webs later to repair and rebuild what is largely left intact. These cold-tolerant properties lend it superior promise as an advanced material suitable for bridge cables, as well as for lightweight parachute lines and outdoor climbing, military and camping equipment. Scientists have also touted its many potential medical applications, such as sutures and replacement ligaments (Benyus, 2001), its use as a durable substance for fabricating clothing and shoes (made of “natural fibers”), and as a synthetic moldable solid that could yield rust-free panels and hyper-durable car bumpers (Lipkin, 1996).

“If we want to manufacture something that’s at least as good as spider silk, we have to duplicate the processing regime that spiders use,” said Christopher Viney, an early biomimetics proponent (Benyus, 2002, pp. 135-6).

Take a look at the fascinating process by which a spider creates silk and you will find something that more closely resembles human technology than animal biology. Spiders have evolved to create something highly specialized without tools or any special dietary requirements to fuel the synthesis of silk. Spider silk is formed from liquid proteins within the spider’s abdomen. A cocktail of complex chemicals travels through the duct of a narrow gland, and the substance is squeezed out in a very controlled manner through any combination of six nozzles called spinnerets. The protein, collected from eating insects and various vegetable matter, “emerges an insoluble, nearly waterproof, highly ordered fiber” (Benyus, 2001).

Most spiders can produce a few different types of silk. They can make threads for building structures, a strong dragline, or an elastic cable for rappelling and reusing while creating the foundation for a web. They can make a sticky, wet line that clings to itself and most other surfaces for fastening strong structures, making cocoons and trapping prey. There is much to be learned, because all of human scientific knowledge on the subject still comes from a handful of studies covering only about fifteen spider species to date. There are 40,000 spider species, most of which we know almost nothing about. There might be even better silk being made by some species.

“But yes there is probably a tougher, stronger, stiffer fiber being produced right this minute by a spider we know nothing about. A spider whose habitat may be going up in smoke,” said Viney (Benyus, 2002, pp. 138-40).

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

Spider Venom and the Search for Safer Pain Meds

Some of the most venomous animals on the planet are found down under. Australian researchers retrieved exciting new data when taking a closer look at spider venom. Biosynthesized chemicals designed to be highly reactive with other organisms could inspire new drugs and, eventually, an entirely new class of painkillers.

Spider venom can be defensive, but its function is often to incapacitate or kill prey. University of Queensland academics published their findings in The British Journal of Pharmacology after isolating seven unique peptides, found in certain spider venoms, that can block the molecules that allow pain-sensitive nerve pathways to communicate with the brain. One of the peptides originated in the physiology of a Borneo orange-fringed tarantula. That peptide possessed the right chemical structure, combined with the stability and effectiveness, to become a non-opiate painkiller.

15% of all adults are in chronic pain, according to a study published in 2012 in The Journal of Pain. Most readers are already aware of the danger of addiction and the lagging effectiveness of opiate drugs like morphine, hydrocodone and oxycodone. The medical community is hungry for a change in available medications. Opiates are all derivatives of, or inspired by, the opium poppy, which has been tried and tested for centuries. Venomous spiders are difficult to study, but the motivation for new drugs has loosened funding, with the help of promising finds like this one.

“Spider venom acts in a different way to standard painkillers,” said Dr. Jennifer Smith, research officer at the University of Queensland’s Institute for Molecular Bioscience.

While cessation of pain might in itself create an addictive reaction, this venom is promising, according to Dr. Smith, because it blocks the channel through which the pain signal would even reach the brain. Opiates merely act on the widespread opioid receptors in brain cells, deep within and in the surrounding nerve tissue of the brain itself.

What’s the mechanism of action for this spider-derived drug? Some people are born with a rare genetic defect that renders them unable to feel pain, and geneticists have identified the human gene responsible, known as SCN9A. Dr. Smith hopes the peptide will allow the cells of a person without the defect to shut down the pain channel this gene gives rise to, mimicking that natural immunity to pain.

There could be other breakthroughs in medicine and chemistry. The Australian project’s findings are impressive, but those researchers have documented the venom of only roughly 200 of the 45,000 known species of spider. Of those 200, 40% contained peptides that interact with the way pain channels communicate. The next step would be to test the painkillers on animals.

“We’ve got a massive library of different venoms from different spider species and we’re branching out into other arachnids: scorpions, centipedes and even assassin bugs,” said Dr. Smith.


Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

Archaeologists Change Direction in Kenya, Find World’s Oldest Known Tools

Archaeologists surveying the Kenyan Rift Valley quite accidentally discovered what are perhaps the oldest known stone tools in the world. They date back about 3.3 million years, which would make them at least 700,000 years older than what were formerly the oldest known stone tools, discovered in the Ethiopian region of Hadar. So old are these latest discoveries that they predate even the earliest fossilized skulls we have of our own genus, Homo, by about half a million years. This is the exciting part: although we’d like to think that Homo was the first to craft tools, it was actually the work of a more primitive and distant ancestor that we share.

The ancient tools might never have been discovered were it not for a happy accident. Sonia Harmand of Stony Brook University and her team were on their way to a previously uncovered fossil site on the western shore of Lake Turkana one morning in July 2011. The group made a wrong turn and ended up near what was then an uncharted rock formation. The researchers quickly decided it was a promising place to harbor artifacts and began to survey it, and by the afternoon they had discovered what they were looking for. The site has been named Lomekwi 3, and on closer inspection the team managed to uncover dozens of stone tools – including leftover flakes, as well as cores and even anvils – found both at the surface and below the ground. Harmand first described her team’s findings on April 14 in a lecture at the annual meeting of the Paleoanthropology Society in San Francisco.

“The cores and flakes we recovered are clearly knapped and are not the result of accidental or natural rock fracture,” Harmand said. “The Lomekwi 3 knappers were able to deliver sufficient intentional force to detach repeatedly series of adjacent and superposed flakes and then to continue knapping by rotating the cores.” The team determined the age of the tools using a relative dating technique, analyzing the sediments in which they had been found, bracketed between two layers of volcanic ash and a magnetic reversal of known age. Earth’s magnetic field has changed polarity at known points in the past, and identifying these reversals in the rock record helps determine the age of the materials.
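The bracketing logic can be sketched as a simple interpolation: a find that sits between two independently dated marker horizons must fall between their ages, and its depth gives a rough estimate. The depths and ages below are hypothetical illustrations, not the actual Lomekwi stratigraphy.

```python
def interpolate_age(depth_m, upper, lower):
    """Estimate an age by linear interpolation between two dated
    marker horizons (e.g. volcanic ash beds or magnetic reversals).

    upper and lower are (depth_m, age_Ma) pairs; shallower = younger.
    """
    (d0, a0), (d1, a1) = upper, lower
    frac = (depth_m - d0) / (d1 - d0)  # fractional position between the horizons
    return a0 + frac * (a1 - a0)

# Hypothetical stratigraphy: an ash bed at 2 m dated to 3.2 Ma and a
# magnetic-reversal horizon at 6 m dated to 3.4 Ma bracket a find at 4 m.
age = interpolate_age(4.0, (2.0, 3.2), (6.0, 3.4))
print(f"estimated age: {age:.2f} Ma")  # estimated age: 3.30 Ma
```

Real geochronology involves more than linear interpolation (deposition rates vary), but the sketch captures why two dated layers above and below a find constrain its age so tightly.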

Another surprising feature of the Lomekwi 3 tools is that they are unusually large – much larger than the stone tools excavated in Ethiopia that had previously set the record for the oldest known tools, and in fact even larger than the rocks typically used by chimpanzees to crack apart nuts and shells – pointing to a unique transitional period in hominid technology. As described by Harmand, these preliminary observations could indicate that the Lomekwi toolmakers deliberately sought out the biggest, heaviest blocks of very hard raw material from local sources, despite the availability of smaller blocks. They then applied a number of knapping techniques to remove some of the sharper-edged flakes from the cores of the rocks. While chimpanzees – with whom we share a common ancestor, along with several long-extinct primate species – have been known to go on hunts, the exact purpose of the Lomekwi tools is still unclear. Chimps have been known to use spear-like objects, but the size and weight of these tools suggest another purpose.

Animal bones of the same period have been recovered from the site, perhaps conjuring up images of the ape in 2001: A Space Odyssey, who realized he could use bleached tapir bones as a weapon. However, they contain no markings to indicate hominin activity. Evidence from another site dating to the time of the tools does suggest, however, that hominins (the group into which we, H. sapiens, and our extinct relatives all fall) were already butchering animals for food.

Back in 2010, researchers working at the Dikika dig site, also in Ethiopia (where the remains of Australopithecus afarensis, the three-million-year-old species to which Lucy belonged, had once been uncovered), announced that they had unearthed 3.4-million-year-old animal bones bearing distinctive marks: knife scrapings from where hominins had sliced morsels of meat from the bone with their stone tools. The claim didn’t go without a rather heated debate. Some skeptics refuted the discovery, suggesting that the alleged cut marks were actually due to the bones being trampled by the feet of passing animals. Other researchers countered that the distinct marks may have been made by the bites of scavenging crocodiles. Although the latest discovery of tools at the Lomekwi site does not necessarily prove or disprove that hominins made the marks on the Dikika remains, it is certainly sufficient evidence that hominins close enough to be contemporaries of the Dikika hominins did in fact create implements capable of leaving such distinct cut marks.

The identity of the Lomekwi knappers remains unknown. If the manufacture of stone tools is exclusive to the genus Homo, then the evidence suggests that it may have evolved significantly earlier than the fossil record currently indicates. A more likely scenario, however, and the one Harmand endorses, is that Australopithecus or another hominin, Kenyanthropus (remains of which have been found nearby) – both known to have existed some 3.3 million years ago – was responsible for the Lomekwi tools. Whether Kenyanthropus is actually a distinct hominin lineage or just another type of Australopithecus remains a point of contention, however.

Up until this point, the earliest known stone tools were thought to derive from the so-called Oldowan toolmaking tradition. The 20th-century paleontologist Louis Leakey coined the term when he described some of the first primitive tools discovered at Olduvai Gorge back in the 1930s. However, Harmand says these newly discovered tools are sufficiently different from the early Oldowan finds that they deserve a name of their own: the Lomekwian tradition.

James Sullivan
James Sullivan is the assistant editor of Brain World Magazine and a contributor to Truth Is Cool and OMNI Reboot. He can usually be found on TVTropes or RationalWiki when not exploiting life and science stories for another blog article.

New species of prehistoric bird lived 1.8 million years ago

Some 3.5 million years ago, when our ancestors were just beginning to walk upright, among the real rulers of the Earth were carnivorous birds with hooked beaks that stood up to ten feet tall as they stalked their prey in the grasslands – apex hunters able to compete even with the fearsome saber-toothed cats in a time when low sea levels and frozen land bridges were not uncommon. Paleontologists in South America recently uncovered an almost complete skeleton of a new species of these large avian predators, known as “terror birds,” and it is already revealing a great deal about how they hunted.

The fossil was first discovered back in 2010 on the beaches of Mar del Plata, a city on the eastern coast of Argentina not far from the country's fossil-rich region of Patagonia, where many intriguing species of Cretaceous dinosaurs have been found in recent years. Over 90 percent of the creature's bones remain intact, according to the lead researcher on the study, Federico Degrange, who serves as an assistant researcher of vertebrate paleontology at the Centro de Investigaciones en Ciencias de la Tierra and the Universidad Nacional de Córdoba in Argentina.

The scientists have given this particular species of terror bird a new name: Llallawavis scagliai, merging classical Latin with Quechua, a language spoken in the central Andes. “Llallawa” is the Quechua word for “magnificent,” while “avis” is Latin for “bird.” The name honors the Argentine naturalist Galileo Juan Scaglia (1915-1989), a former director of Mar del Plata's science museum. Scaglia's own grandson discovered the bird fossil, and the findings were documented this week in the Journal of Vertebrate Paleontology.

Due to its exceptional preservation, the fossil has been an invaluable resource for studying the terror bird's anatomy in depth, allowing for a fairly accurate reconstruction of the animal. The specimen marks the first time a fossilized terror bird has been found with a complete trachea and complete palate (the roof of its mouth). It's a bit smaller than most species in the terror bird family, and was also among the last to roam the Earth. Even more intriguing are the intricate bones of the animal's inner ears, along with its eye sockets, braincase and skull, which allow scientists to work out a great deal about the flightless bird's sensory capabilities.

A look at the inner ear structure of L. scagliai suggests that the terror bird was capable of picking up low-frequency sounds, meaning it could listen for the low rumble of its prey's footsteps before striking. It may also have used low-frequency noises to communicate with other members of its group, sending messages in pitches that only they could hear.

“That actually tells us quite a bit about what the animals do, simply because low-frequency sounds tend to propagate across the environment with little change in volume,” said Lawrence Witmer, a professor of anatomy at Ohio University, who has collaborated with Degrange before but was not involved in the new study.

“Low-frequency sounds are great for long-[distance] communication, or if you’re a predator, for sensing the movements of prey animals,” Witmer said in an interview with Live Science.

This ability gives L. scagliai some famous company. Other animals that are capable of hearing low-frequency sounds include the greatest prehistoric hunter of all time – Tyrannosaurus rex, as well as modern day elephants and rhinos, and even crocodiles, which are distantly related to modern birds – an ancestry dating back some 230 million years.

Another feature the researchers took notice of was the bird's skull, which they found to be surprisingly more rigid than the skulls of other birds. This may actually have worked to the bird's advantage, according to the scientists, because a rigid skull would have enabled the terror bird to crush the jugular of its prey with its large beak – likely the way in which it brought down its kills.

“Terror birds didn’t have a strong bite force, but they were capable of killing prey just by striking up and down with the beak,” Degrange said.

The near-complete skeleton also indicates that terror birds were considerably more diverse during the Late Pliocene epoch than experts had once thought. The birds enjoyed a long run on this planet. They emerged in the Cenozoic, with the oldest fossils dating back between 52 million and 50 million years, and the last species are estimated to have died out 1.8 million years ago – although some scientists maintain that isolated terror bird populations may have become extinct as recently as 17,000 years ago, coinciding with the arrival of the first humans in South America. Evidence for this is sparse for now, according to Degrange, although other large flightless birds lasted far longer: the moa of New Zealand died out only a few centuries ago, and the cassowary, found primarily in New Guinea and known to attack people, survives today.

While it's an exciting time for the researchers, classification is only the initial stage. Over the next several years they hope to study the terror bird's eye bones, braincase and skull in greater depth, hoping to come to a deeper understanding of how the bird saw and of the other capabilities that allowed it to hunt so successfully through the ancient grasslands.


New Hominid Ancestor Lived 3.7 Million Years Ago

Deep in Ethiopia's Awash Valley about four decades ago, paleontologists came a step closer to understanding the origin of modern humans when they unearthed Australopithecus afarensis – better known by the nickname of its most famous specimen, Lucy – an upright-walking ape that lived about 3.2 million years ago, one of our distant ancestors. Now we know that Lucy's kind wasn't alone, sharing the continent with another ape-like creature that scientists refer to as “Little Foot” – suggesting a rich evolutionary diversity throughout much of Africa at the time.

The date of the fossil is important for another reason: it may give us a closer approximation of when modern humans first began to appear. Australopithecines currently rank high among the species believed to be our direct ancestors, having lived in Africa between 2.9 million and 4.1 million years ago. Our own lineage, Homo, dates back approximately two million years.

Although Australopithecus afarensis thrived in the grasslands of eastern Africa, another australopithecine – nicknamed Little Foot for the diminutive nature of its bones – lived in southern Africa. Discovered about 20 years ago by paleoanthropologist Ronald Clarke of the University of the Witwatersrand in South Africa, Little Foot apparently fell down a narrow shaft in the Sterkfontein Caves, leaving behind a nearly complete skeleton that could yield key insights into human evolution.

So what sort of australopithecine was Little Foot? About five different species are known, along with the related hominin genera Paranthropus and Ardipithecus, which had comparable brain sizes and also walked on two legs. A number of scientists think Little Foot likely belonged to the species Australopithecus africanus, which differed from Lucy's species, Australopithecus afarensis, in having a rounder skull housing a slightly larger brain, and smaller teeth. Clarke and his colleagues, however, have proposed that Little Foot may have been another type of australopithecine entirely: Australopithecus prometheus, characterized by a longer, flatter face and larger cheek teeth than Australopithecus africanus.

“It was impossible to fit Little Foot into the human family tree with any certainty because ever since its discovery, the age of Little Foot has been debated,” said the study's lead author Darryl Granger, a geochronologist at Purdue University in West Lafayette, Indiana. If the researchers can successfully determine when Little Foot lived, they may be able to establish more accurately when Australopithecus first diverged, as well as the region of Africa in which the Homo genus first originated.

While most paleoanthropologists agree that the first hominins, as well as all of the modern human race, have ancestral roots in Africa, it is difficult to pinpoint a precise starting point, since a rapidly changing climate drove them across the continent and gradually towards the Middle East and European mainland.

The evidence gathered by Granger and his colleagues suggests that Little Foot lived at approximately the same time as Lucy, but the fossilized remains recovered so far are not sufficient to determine which species it belonged to.

“The most important implication from dating Little Foot is that we now know that australopithecines were in South Africa early in their evolution,” Granger told Live Science. “This implies an evolutionary connection between South Africa and East Africa prior to the age of Little Foot, and with enough time for the australopithecine species to diverge.”

This raises another issue that many have not considered, however: other australopithecines – and, eventually, humans such as Neanderthals and Homo sapiens – “did not all have to have derived from Australopithecus afarensis,” Clarke told Live Science. “There could well have been many species of Australopithecus extending over a much wider area of Africa.”

The researchers made their first attempt at determining Little Foot's age a little over a decade ago. Their result was an age of around four million years, which, according to Granger, would rank Little Foot among the oldest of the australopithecines. Despite looking far more like apes than humans (our closest living relative, the chimpanzee, quickly comes to mind), australopithecines did show some signs of being human: the fossil record suggests they cared for their children much as modern humans do, and they may have had a sense of aesthetics – some of their dwellings contained stones that resembled faces, collected from nearby valleys.

You might wonder how anyone can date materials discovered within a cave, which can prove a frustrating problem for archaeologists. Besides having to determine whether such materials are from the same era as the remains found with them, there is the problem of water currents and sediments washing eroded material into a cave, something that can easily throw off most types of analysis. Mineral-rich deposits left by water flowing through a cave are known as flowstones. The initial step taken by the researchers was to date this material, which yielded an age of 2.2 million years. “I was disappointed, but I could see nothing wrong with their ages,” Granger recalled.

A more recent study revealed the flaw in this method: the flowstones formed at a different time than the rock layer containing the fossils. For their newest analysis, Granger and colleagues determined the age of the fossils by analyzing aluminum and beryllium isotopes in quartz found in the same layer – a cosmogenic nuclide dating technique that can give dates on mineral materials up to about 30 million years old.
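The logic behind the aluminum/beryllium measurement can be sketched with a toy calculation. Cosmic rays produce ²⁶Al and ¹⁰Be in quartz at a roughly fixed ratio while it sits at the surface; once the quartz is washed deep into a cave, production stops and the faster-decaying ²⁶Al drags the ratio down at a known rate. The following minimal sketch uses textbook half-lives and a commonly quoted surface production ratio; the function name is illustrative, and the actual published analysis (an isochron method over many samples) is far more involved:

```python
import math

# Textbook half-lives of the two cosmogenic nuclides, in millions of years
HALF_LIFE_AL26 = 0.705
HALF_LIFE_BE10 = 1.387
LAMBDA_AL26 = math.log(2) / HALF_LIFE_AL26  # decay constant, per Myr
LAMBDA_BE10 = math.log(2) / HALF_LIFE_BE10

# Commonly quoted 26Al/10Be production ratio in quartz exposed at the surface
SURFACE_RATIO = 6.75

def burial_age(measured_ratio):
    """Millions of years since the quartz was shielded from cosmic rays.

    While exposed, the 26Al/10Be ratio sits near the production ratio;
    after burial both isotopes decay, but 26Al decays faster, so the
    ratio falls exponentially and its decline records the burial time.
    """
    return math.log(SURFACE_RATIO / measured_ratio) / (LAMBDA_AL26 - LAMBDA_BE10)
```

On these assumptions, a measured ratio near 1.15 would correspond to a burial age of roughly 3.7 million years – the order of magnitude at stake in the Little Foot debate.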

The researchers came across something else surprising: the earliest stone tools discovered within the same cave date back only about 2.2 million years. That gives them an age similar to early stone tools found throughout eastern and southern Africa, indicating some degree of interaction between the hominid populations of the two regions. “This implies a connection between South African and East African hominids that occurred soon after the appearance of stone tools,” Granger said.

The researchers hope that their new dating method will be applied at other archaeological sites across the globe. “There should be a thorough study to explore the strengths and weaknesses of the method,” Granger said.

The findings made by Granger, Clarke and their research team were documented and published in the April 2 issue of the journal Nature.


Darwin’s finches highlight the unity of all life

When Charles Darwin visited the Galapagos Islands in October 1835, he and his ship-mates on board HMS Beagle collected specimens of birds, including finches and mockingbirds, from various islands of the archipelago.

At the time, Darwin took little interest in the quaint finches, making only a one-word mention of them in his diary. As painstakingly shown by Frank Sulloway and more recently by John Van Whye, it wasn’t until two years later that the finches sparked Darwin’s interest.

By then he had received feedback from the leading taxonomist of the time, John Gould, that the samples comprised 14 distinct species, none of which had been previously described! Gould also noted that their “principal peculiarity consisted in the bill [i.e. beak] presenting several distinct modifications of form”.

So intrigued was Darwin by this variation in size and shape of beaks that in the second (1845) edition of Journal of Researches he included illustrations of the distinctive variation between species in the size and shape of their beaks. He added a comment that:

Seeing this gradation and diversity of structure in one small, intimately related group of birds, one might really fancy that from an original paucity of birds in this archipelago, one species had been taken and modified for different ends.

The famously varied beak shapes of the Galapagos finches, as illustrated in the second edition of Darwin’s Journal of Researches.

Unfortunately for Darwin, the closer he examined the available evidence on Galapagos finches, the more confusing the picture became. This was partly because the specimens available to him were not sufficiently labelled as to their island of collection.

Presumably, it was his doubt about the available evidence that resulted in Darwin making no mention of Galapagos finches in any edition of Origin of Species.

Why, then, do people now label them as “Darwin’s finches”, and why are these finches now regarded as a classical textbook example of his theory of evolution by natural selection?

Paragons of evolution

Despite not mentioning Galapagos finches, Darwin did make much use of evidence from other Galapagos species (especially mockingbirds) in Origin of Species.

As the influence of Origin of Species spread, so too did the evolutionary fame of the Galapagos Islands. Increasingly, other biologists were drawn into resolving the questions about finches that Darwin had left unanswered.

By the end of the 19th century, Galapagos finches were among the most studied of all birds. By the mid-20th century, there was abundant evidence that Galapagos finches had evolved to fill the range of ecological niches available in the archipelago – a classic example of evolution by adaptive radiation.

Beak size and shape were key attributes in determining adaptation to the different types of food available. In the second half of the 20th century, classic research by Princeton University’s Peter and Rosemary Grant provided evidence of quite strong natural selection on beak size and shape.

Under the hood

New light has also been shed on the evolution of Darwin’s finches in a paper recently published in Nature. In this latest research, the entire genomes of 120 individual birds from all Galapagos species plus two closely related species from other genera were sequenced.

The work was done by a team led by Swedish geneticist Leif Andersson, with major input from Peter and Rosemary Grant, who are still leading experts on the finches.

Comparison of sequence data enabled them to construct a comprehensive evolutionary tree based on variation across the entire finch genome. This has resulted in a revised taxonomy, increasing the number of species to 18.

The most striking feature of the genome-based tree is the evidence for matings between different populations, resulting in the occasional joining of two branches of the tree. This evidence of “horizontal” gene flow is consistent with field data on matings of finches gathered by the Grants.

A comparison of whole-genome sequence between two closely related groups of finches with contrasting beak shape (blunt versus pointed) identified at least 15 regions of chromosomes where the groups differ substantially in sequence.
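As a toy illustration of the kind of scan involved – not the actual method of the Nature paper, which compared variation across many whole genomes – one can slide a window along two aligned consensus sequences and flag regions where they differ at an unusually high fraction of sites. The function name, window size and threshold below are arbitrary illustrative choices:

```python
def windowed_divergence(seq_a, seq_b, window=50, threshold=0.2):
    """Return start positions of windows where two aligned sequences
    differ at more than `threshold` of their sites."""
    hits = []
    for start in range(0, len(seq_a) - window + 1, window):
        a = seq_a[start:start + window]
        b = seq_b[start:start + window]
        # Fraction of mismatching positions within this window
        diff = sum(x != y for x, y in zip(a, b)) / window
        if diff > threshold:
            hits.append(start)
    return hits
```

Run over a blunt-beak and a pointed-beak consensus, a scan of this general shape would surface the handful of strongly diverged chromosomal regions the study reports.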

Unity of life

The most striking difference between the two groups was observed in a chromosomal region containing a regulatory gene called ALX1. This gene encodes a protein that switches other genes on and off by binding to their regulatory sequences.

Like other such genes, ALX1 is crucially involved in embryonic development. Indeed, mutations in ALX1 in humans and mice give rise to abnormal development of the head and face.

It is an extraordinary illustration of the underlying unity of all life on Earth that Leif Andersson and his colleagues have shown that the ALX1 gene also has a major effect on beak shape in finches, and that this gene has been subject to natural selection during the evolution of the Galapagos finches.

If Darwin were alive today, he would be astounded at the power of genomics tools such as those used in generating the results described in this paper. He would also be delighted to see such strong evidence not only in support of evolution but also in support of one of its major forces, natural selection.

The Conversation

This article was originally published on The Conversation.
Read the original article.

Ancient elixirs to fight modern superbacteria

The Middle Ages have long suffered an unfortunate reputation as a dark period of violence and superstition. It was a time when disease ran rampant – particularly the infamous Black Death, which may have wiped out as much as one-third of Europe in the 14th century as it made its way westward along the Silk Road from Asia. Western medicine seemed helpless against the waves of plague – a disease that would again ravage London in the 17th century, and is now attributed to the bacterium Yersinia pestis, which causes three forms of plague.

Medieval physicians didn't know about microbes, but that doesn't mean their medicine was entirely useless. While there may not have been many ancient elixirs effective against plague, one remedy might actually be an effective weapon against a very modern kind of plague: antibiotic-resistant superbacteria.

Researchers at the University of Nottingham in Great Britain successfully replicated a medieval potion and tested it against one strain of bacteria that is notoriously aggressive and prevalent in hospitals: methicillin-resistant Staphylococcus aureus, more commonly known as MRSA. The remedy is over a millennium old, first developed by the Anglo-Saxon populations that occupied Britain in the early Middle Ages.

If the name sounds familiar, that's because it is: Nottingham is the town just a breath away from the legendary Sherwood Forest, and it now harbors an Institute for Medieval Research. There, historians leafed through a 1,000-year-old manuscript known as “Bald's Leechbook,” where they found a remedy for eye infection – perhaps something that Robin Hood's band of merry men would have been prone to, with corneas scratched in an armed skirmish with Nottingham's sheriff.

The infection would typically be treated by an herbalist, who would mix the concoction in a brass vessel: bile from a cow's stomach, along with some freshly picked Allium – a bulb closely related to garlic – gathered from the forest.

Viking studies professor Christina Lee first found the potion and set about translating the recipe from Old English. While herbalists hardly had the same training as today's medical doctors, they must at least have had some method for determining the right treatment for different types of infection. It must have been a bit like working in the dark, too: the germ theory of disease was still centuries away.

To recreate the salve, she turned to chemists working at the university’s Center for Biomolecular Sciences.

It might have seemed like an unusual request, but little did Lee know that it would be a crucial step toward addressing a growing concern. Antibiotics are often specific to one strain of pathogen, and their effectiveness depends on entire generations of bacteria being wiped out. However, bacteria replicate at a rapid pace, producing many generations in just 24 hours. If one bacterial cell develops a tolerance for an antibiotic, it can swiftly pass this along through a process known as horizontal gene transfer, eventually producing a generation of superbacteria. In hospitals, where many antibiotics are administered regularly, the environment for superbacteria is even more inviting.
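The arithmetic behind that rapid pace is simple exponential growth. A minimal sketch, assuming a 20-minute doubling time (a common laboratory figure for fast growers such as E. coli; real doubling times vary widely, and the helper names here are illustrative):

```python
def generations(hours, doubling_minutes=20):
    """Number of doublings that fit into `hours` at a fixed doubling time."""
    return hours * 60 / doubling_minutes

def descendants(hours, doubling_minutes=20):
    """Descendants of a single cell after `hours` of unchecked growth."""
    return 2 ** generations(hours, doubling_minutes)
```

At that rate a single cell runs through 72 generations in a day – ample opportunity for a resistance gene, once acquired, to sweep through a population.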

Lee's investigation might actually have opened the door to a new way of approaching the problem – going after antimicrobial agents found in nature – something that caught the attention of microbiologist Freya Harrison. In her lab, the chemists followed the recipe with precision, yielding four individual batches from fresh ingredients. They even used the medieval methods of preparation, brewing in a vessel made from a brass sheet, into which they poured the distilled water.

They then grew colonies of a Staphylococcus aureus strain that had become resistant to the standard drug methicillin, each in a small piece of collagen, and applied the salve. Its impact was astounding: only roughly one in 1,000 bacterial cells survived.
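Microbiologists usually report kill rates like this as a “log reduction” – the base-10 logarithm of the factor by which the population shrank. A quick sketch (the function name is illustrative):

```python
import math

def log_reduction(initial, survivors):
    """Base-10 log of the kill factor, the standard measure of disinfection."""
    return math.log10(initial / survivors)
```

One survivor per 1,000 cells corresponds to a 3-log reduction.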

Harrison said that she was “absolutely blown away” by the power of this antique concoction, which she had initially suspected would have only a slight antibacterial effect. Some ingredients of the salve – namely copper and bile salts – showed some lethal effect on the bacteria in the lab on their own. Plants in the garlic family are also known to produce chemicals that interfere with the ability of bacteria to damage infected tissue – a property that has made garlic cloves a time-honored cold remedy.

There's something to be said for the whole being greater than the sum of its parts, however. When they combined all their ingredients under medieval conditions, the researchers made some even more exciting discoveries under the microscope. The eye salve acted far more aggressively than the control substance applied to another set of bacteria: it was able to break through the bacteria's sticky protective coating and tear apart mature colonies that had shown little reaction to antibiotic treatments.

So potent was the concoction developed by Harrison's research team that they later diluted the salve to see how small a dose would remain effective. Even where populations of S. aureus survived, communication between bacteria in the colony was disrupted – perhaps the most intriguing aspect of the salve. Without this cross-talk between cells, the genes that promote antibiotic resistance could not be signaled – an important and organic way of attacking bacterial infections.

This new and unlikely coalition between historians – especially in the very specialized branch of medieval medicine – and microbiologists has led to a new program at Nottingham called AncientBiotics, where researchers are seeking funding to further explore this partnership between the sciences and the humanities.

“We know that MRSA-infected wounds are exceptionally difficult to treat in people and in mouse models,” said Kendra Rumbaugh, who performed the testing of Bald’s remedy on MRSA-infected skin wounds in mice. “We have not tested a single antibiotic or experimental therapeutic that is completely effective,” added Rumbaugh, a professor of surgery at Texas Tech University’s School of Medicine. But she said the ancient remedy was at least as effective – “if not better than the conventional antibiotics we used.”


Genome editing poses ethical problems that we cannot ignore

The ability to precisely and accurately change almost any part of any genome, even in complex species such as humans, may soon become a reality through genome editing. But with great power comes great responsibility – and few subjects elicit such heated debates about moral rights and wrongs.

Although genetic engineering techniques have been around for some time, genome editing can achieve this with lower error rates, more simply and cheaply than ever – although the technology is certainly not yet perfect.

Genome editing offers a greater degree of control and precision in how specific DNA sequences are changed. It could be used in basic science, for human health, or improvements to crops. There are a variety of techniques but clustered regularly interspaced short palindromic repeats, or CRISPR, is perhaps the foremost.

CRISPR has prompted recent calls for a genome editing moratorium from a group of concerned US academics. Because it is the easiest technique to set up and so could be quickly and widely adopted, the fear is that it may be put into use far too soon – outstripping our understanding of its safety implications and preventing any opportunity to think about how such powerful tools should be controlled.

The ethics of genetics, revisited

Ethical concerns over genetic modification are not new, particularly when it comes to humans. While we don’t think genome editing gives rise to any completely new ethical concerns, there is more to gene editing than just genetic modification.

First, there is no clear consensus as to whether genome editing is just an incremental step forward, or whether it represents a disruptive technology capable of overthrowing the current orthodoxy. If this is the case – and it’s a very real prospect – then we will need to carefully consider genome editing’s ethical implications, including whether current regulation is adequate.

Second, there are significant ethical concerns over the potential scope and scale of genome editing modifications. As more researchers use CRISPR to achieve more genome changes, the implications shift. Our consideration of a technology that is rarely used and then only in specific cases will differ from one that is widely used and put to all sorts of uses.

Should we reach this tipping point, we will have to revisit the conclusions of the first few decades of the genetic modification debate. Currently modifying plants, some animals, and non-inheritable cells in humans is allowed under strict controls. But modifications that alter the human germ-line are not allowed, with the exception of the recent decision in the UK to allow mitochondrial replacement.

While this may mean weighing up potential benefits, risks and harms, as the potential applications of genome editing are so broad even this sort of assessment isn’t straightforward.

What patterns can genetic surgeons weave?

Use for good and for ill

Genome editing techniques have so far been used to change genomes in individual cells and in entire (non-human) organisms. Benefits have included better targeted gene therapy in animal models of some diseases, such as Duchenne Muscular Dystrophy. It’s also hoped that it will lead to a better understanding of the structure, function and regulation of genes. Genetic modification through genome editing of plants has already created herbicide- and infection-resistant crops.

But more contentious is how genome editing might be used to change traits in humans. While this has been the basis for many works of fiction, in real life our capacity to provide the sort of genetic engineering seen in films and books such as Gattaca and Brave New World has been substantially limited.

Genome editing potentially changes this, presenting us with the very real possibility that any aspect of the human genome could be manipulated as we desire. This could mean eliminating harmful genetic conditions, or enhancing traits deemed advantageous, such as resistance to diseases. But this ability may also open the door to eugenics, where those with access to the technology could select for future generations based on traits considered merely desirable: eye, skin or hair colour, or height.

Permanent edits

The concern prompting the US academics’ call for a moratorium is the potential for altering the human germ-line, making gene alterations inheritable by our children. Gene therapies that produce non-inheritable changes in a person’s genome are ethically accepted, in part because there is no risk for the next generation if things go wrong. However to date only one disease – severe combined immunodeficiency – has been cured by this therapy.

Germ-line alterations pose much greater ethical concerns. A mistake could harm future individuals by placing that mistake in every cell. Of course the flip-side is that, if carried out safely and as intended, germ-line alterations could also provide potentially permanent solutions to genetic diseases. No research is yet considering this in humans, however.

Nevertheless, even if changes to the germ-line turn out to be safe, the underlying ethical concerns of scope and scale that genome editing brings will remain. If a technique can be used widely and efficiently, without careful oversight governing its use, it can readily become a new norm or an expectation. Those unable to access the desired genetic alterations, be they humans with diseases, humans without enhanced genetic characteristics, or farmers without genetically modified animals or crops, may all find themselves gravely and unfairly disadvantaged.

The Conversation

This article was originally published on The Conversation.
Read the original article.