Category Archives: Health

No smoke without fire – the link between smoking and mental health


A recent study suggested a causal association between smoking tobacco and developing psychosis or schizophrenia, building on research about the relationship between the use of substances and the risk of psychosis. While cannabis is one of the usual suspects, a potential link with tobacco will have come as a surprise to many.

The report was based on a review of 61 observational studies and began with the hypothesis that if tobacco smoking played a part in increasing psychosis risk, rather than being used to deal with symptoms that were already there, people would have higher rates of smoking at the start of their illness. It also posited that smokers would have a higher risk of developing psychosis and an earlier onset of symptoms than non-smokers. The researchers found that more than half of people with a first episode of schizophrenia were already smokers, a rate three times higher than that of a control group.

However, one of the limitations of the study, as the authors admit, is that many of the studies in their review did not control for the consumption of substances other than tobacco, such as cannabis. As many people combine tobacco with cannabis when they smoke a joint, the extent to which tobacco is the risk factor is still unclear.

One clear message the research highlighted was the high level of smoking among those with mental health problems and that smoking is not necessarily simply something that alleviates symptoms – the so-called “self-medication hypothesis”.

Almost half of all cigarettes

The figure is stark: 42% of all cigarettes smoked in England are consumed by people with mental health problems. So while the life expectancy of the general population continues to climb, those with a severe mental health problem have their lives cut short by up to 30 years – in part due to smoking.

Since the 1950s, rates of smoking have dramatically reduced in the population while the number of people with psychosis has remained constant. So why has the incidence of psychosis not mirrored the reduction in the overall numbers of smokers? Two factors might explain this. First, institutional neglect has held back the effort and resources devoted to reducing smoking among people with mental health problems – until recently, public health campaigns have ignored this group with justifications such as “surely they have enough to worry about without nagging about smoking” or “it’s one of the few pleasures they have”.

A more sinister role is also played by the tobacco industry, which has not been passive or unaware of one of its most loyal consumer groups: people with mental health problems. The industry has been active in funding research that supports the self-medication hypothesis, pushing the idea that people with psychosis need tobacco to relieve their symptoms, rather than tobacco having any link to those symptoms. The industry has also been a key player in obstructing hospital smoking bans, which it perceives as a threat to tobacco consumption. Worse still, it has marketed cigarettes specifically to people with mental health problems.

Combining substances

Time to quit.
Smoking by Shutterstock

People with psychosis use substances for the same reasons you and I do: to relax or feel less stressed. And the good news is that, counter to many people’s preconceptions, individuals with mental health problems are no different to anyone else in their desire and ability to quit smoking.

This is welcome given the clear links between smoking and physical health. But there are particular issues when it comes to smoking and those with psychosis. For example, smoking affects the medical treatment of psychosis: tobacco is known to interact with clozapine, one of the drugs used to treat the condition. Smoking interferes with the therapeutic action of clozapine and some other anti-psychotics, so smokers can require higher doses of the drug.

Then there’s the cannabis question. People with mental health problems are more likely to use drugs such as cannabis. This is usually combined with tobacco when smoked in a joint. So initiation into cannabis and its continued use contributes to higher rates of tobacco dependence for people with mental health problems.

The relationship between cannabis and psychosis has preoccupied researchers, policy makers and clinicians for decades. Unfortunately, most of the evidence that has influenced and underpinned public health messages about cannabis is either outdated or methodologically flawed.

Since many of the seminal studies on this issue were carried out, there has been a marked change in the type of cannabis that is available. Those studies recruited and investigated users who were exposed to lower-potency varieties of cannabis. Over the past decade, higher-potency forms of cannabis such as “skunk” have become dominant on the streets. The problem is compounded by studies that simply ask whether participants currently use, or have ever used, cannabis. This assumes cannabis is a single type of drug, rather than a range of substances with varying strengths and constituent ingredients. To make matters worse, we rely on proxy measures of cannabis potency drawn from seizures made by the police. Such seizures may not be representative of contemporary cannabis availability.

Many people are exposed to a combination of drugs, whether prescribed, recreational or a mix of both. This raises the potential for interactions, where the effect of one drug alters the effects of another. This raises a further possibility in the smoking, cannabis and psychosis story. Could some people’s psychosis be attributed to an interaction between cannabis and tobacco? Information about drug interactions is scarce and pharmaceutical research has routinely excluded people who use substances from drug trials. This does not reflect reality as many people will combine medication with recreational drug use.

All of these factors serve as a useful reminder of how little we know about the causes of psychosis, the role drugs play and the many vested interests that direct the route we take in trying to understand how we can prevent or treat people who are affected by mental health problems.


Ian Hamilton is Lecturer in Mental Health at University of York.

This article was originally published on The Conversation.
Read the original article.

We all age at different speeds – and scientists have worked out how to calculate it


A study has confirmed what many of us have been saying for years: age is nothing but a number. The researchers developed a method to determine the pace of ageing in individuals by looking at a range of biomarkers – including blood pressure and gum health. The study participants, all aged 38, varied widely in “biological age” and those ageing more quickly also looked older and reported more health problems.

The concept of biological age is often thought of as the proportion of an individual’s ultimate lifespan that has elapsed. In the context of this study, however, its measurement and meaning are slightly different. Examining 954 men and women in the Dunedin Birth Study Cohort, the researchers expressed the biological ages of the participants as years above or below 38, which gave a range from 28 years to 61 years.

The only definition of ageing that really works is based on populations rather than individuals: ageing is an increase in the likelihood of dying with increasing chronological age. That is one reason why this work is significant – it gives an idea of ageing in an individual.

According to population measures, in the absence of any other information, two people aged 65 have an equal risk of dying in the coming year. If one is destined to die from an undiagnosed cancer within two years and the other lives to 95, which individual is older? This is one reason the search for biomarkers of ageing is important, but the authors of this study give yet another reason.

Studying age-related diseases vs studying ageing

Age is the major risk factor for more than 75% of the mortality suffered by those aged over 64 (based on UK ONS 2013 mortality data), including cancers, circulatory and respiratory illness and neurodegeneration. The traditional view is that each of these many conditions has its own particular causes. This view has driven much research – and funding.

However, the view of biogerontologists, who study the biology of ageing, is that there are a few causes of ageing which substantially contribute to all of these age-related conditions. According to this view, if just a fraction of the billions spent on researching individual conditions were spent on finding and treating the basic causes of ageing, the payoffs could be huge, not least in terms of extended productive (tax-paying) lifespan and reduced healthcare costs.

This sort of basic research has been poorly funded in the past, but the logic of, and evidence for, the biogerontologists’ view is beginning to be understood. Treatments to delay the onset of ageing and hence extend healthy lifespan in a majority of the population are likely to be found in the next 20 years.

Ageing algorithm

Such treatments may already be being tested in animal models in a lab somewhere. But because humans are so long-lived, we can’t wait 40 or 50 years to see if they work. To test them in humans we need measures of biological age. To generate their estimates of biological age, the researchers used a previously described algorithm based on seven fairly common biomedical parameters.

They then produced a “pace of ageing” measure based on 18 parameters covering a range of organs and systems and known to change with age. Measures were taken at ages 26, 32 and 38. These included waist-hip ratio, lung and kidney function, blood pressure, cholesterol, even gum health. Study members with higher biological ages also showed a more rapid pace of ageing over the previous 12 years.

Study members with older biological ages and a faster pace of ageing looked older than others and reported more health problems. They also had poorer cognitive function, vascular health, grip strength, balance and motor ability.

Can the wrinkles on your face actually reveal your pace of ageing?
Goodluz/Shutterstock

Pace of ageing was scaled so that the mean was one year of physiological change per chronological year, with a range of 0-3 years of change per chronological year. It is frightening to think that the study member with a biological age of 61 may physiologically age 18 years within the next six chronological years, taking him (most likely a man, as men typically die earlier than women) to near his mean population life expectancy of around 80. According to the model, this 38-year-old person may die within six or seven years.
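To make the arithmetic of that projection concrete, here is a minimal sketch of the extrapolation, assuming a constant pace of ageing. The function and the linear extrapolation are illustrative assumptions, not the study’s published algorithm:

# Illustrative only: a linear extrapolation of biological age, assuming the
# pace of ageing measured in the study stays constant. This is not the
# Dunedin study's method, just the arithmetic behind the projection above.

def projected_biological_age(bio_age_now, pace, years_ahead):
    """`pace` is years of physiological change per chronological year
    (scaled in the study to a mean of 1, with a range of roughly 0 to 3)."""
    return bio_age_now + pace * years_ahead

# The fastest-ageing study member: biological age 61 at chronological age 38,
# ageing at the maximum observed pace of about 3 physiological years per year.
print(projected_biological_age(61, 3, 6))  # 79 - close to a life expectancy of about 80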

Advances in anti-ageing therapies and in estimating biological age raise big questions for society, both at an individual level and in the public and private sectors. We should not be frightened of them, but we should start talking about these changes now, before they arrive.


David Clancy is Lecturer in Biomedical Science at Lancaster University.

This article was originally published on The Conversation.
Read the original article.

Should there be a dress code for doctors?


If you live near a hospital, you’ve probably seen the sight: a young physician in loose blue scrubs, standing in line at the grocery store. You can’t help but wonder if the young physician is lost. After all, it appears that he or she belongs in an emergency room – not the dairy section.

The oversized bottoms, dangling bright orange pajama knot, deep V-neck and beeper ensemble not only look out of place, but lead to a slew of thoughts. Is he coming from or going to a shift? Could her clothes carry some sort of hospital microbe? What detritus has the outfit picked up on public transit or in line at the ATM that will track back to an operating room or patient? Has the American trend toward casual attire gone too far?

Regardless of profession, we all play out the sartorial ritual of considering colors, textures, and garments for work, school and play.

Clothing for doctors is more than just a matter of personal style: it is an emblem of their specialty, training and culture.

Making a good first impression

In some cases, a physician’s attire is functional. A surgeon’s scrubs protect regular clothes from stains and patients from infection.

Sometimes, it’s about creating a good first impression and projecting the more professional, conservative image often associated with medicine.

Go to a doctor’s office, for instance, and you’re more likely to find physicians donning a shirt and tie, or jacket and blouse when interacting with their patients. In almost all of these cases, the emblematic uniform of physicians – the white coat – is present.

This month, about 20,000 newly minted physicians will enter residency programs across America, to begin their professional journeys. Each will care for and influence the lives of countless patients.

And each has been trained to avoid “anchoring bias,” or not to take the first thing they learn about a patient as the most important, lest they reach a biased conclusion or incorrect diagnosis. Yet few doctors or medical students consider the first impression they make on patients. And clothes have a lot to do with that.

In an informal survey in our hospital, only two out of 30 medical students said that they actually thought about their dress when caring for hospitalized patients. Yet, over half of the medical students we spoke to agreed that what they wear is likely to influence patient opinions about their doctors. This illustrates a larger discrepancy between what doctors ought to wear and what they do wear – one that may arise from competing concerns or lack of guidance.

Just like the treatment doctors provide, that guidance should be grounded in evidence. For instance, a special report from infection-prevention experts found little evidence that germs on male doctors’ neckties, long sleeves, or white coats actually spread infections in a nonsurgical setting. So bans on such garments, such as those in place in some countries, may go too far.

Looking sharp.
Doctors via www.shutterstock.com.

Patients really like white coats

We recently published a study reviewing all available evidence regarding patient preferences for physician attire. We examined more than 30 studies that evaluated how patients viewed physicians’ attire.

In 21 of those studies, we found that patients had strong preferences about what physicians wore. And it looks like patients more often prefer for their doctors to wear formal clothing and white lab coats than not. Indeed, in 18 of the studies we reviewed, patients had a preference or positive association with this style of attire.

But as we reviewed these studies, three key themes emerged that suggest important variations in what patients may prefer their doctors to wear. First, studies involving older patients or those from Europe or Asia all reported higher satisfaction when physicians wore formal attire.

Second, in emergency, surgical or intensive care settings, scrubs were not only preferred by patients, but also more often equated with professionalism. This makes sense, as in these more “hands-on,” procedure-oriented settings, formal suits, shirts and ties clearly seem out of place.

Finally, in doctors’ offices and outpatient clinics, scrubs were viewed unfavorably and often resulted in negative impressions.

Thus, from the patient’s perspective, a “one size fits all” approach may not work for doctor attire. Rather, the context in which a patient interacts with their doctor influences what they expect to see.

Given the tension between infection risk and patient preferences, it is not surprising that disagreement about dress code also exists among physicians.

After our study came out, the medical news website MedPage Today reported results from an informal, but still telling, online survey of over 2,000 patients and physicians about the “best approach to dressing for patient encounters.”

About 30% of doctors polled stated that they preferred to wear scrubs, casual attire or had no particular preference when caring for patients. However, more than 60% stated that doctors should wear white coats.

The online comments differed widely, with some physicians defiant in stating that they had “never worn a white coat in 30 years,” while others proclaimed, “priests and judges have their robes, we have our white coats.”

And despite clear patient preferences about what doctors wear while working, only a handful of hospitals – even among the top-ranked hospitals in the nation – offer formal guidance on attire. Many vaguely recommend that clothes be “professional,” without defining what professional means.

Keep your scrubs in the OR.
Surgery via www.shutterstock.com.

A dress code for docs?

How, then, should doctors dress when caring for patients? Clearly, more evidence is needed to guide members of the medical community. So we have launched a large study that aims to better understand what patients prefer when it comes to physician attire.

We plan to survey thousands of patients from the US, Italy, Switzerland and Japan in settings that span outpatient clinics, doctors’ offices and hospitals. Because generational effects and familiarity matter, we will specifically assess how factors such as age or how often a person interacts with the health system shape patient opinions.

While we collect data for this study, what best practices can we recommend in the interim, especially for those 20,000 brand-new residents?

When in doubt, formal attire with long-sleeved shirts and ties for men, and business attire for women, should prevail in nonemergency or nonoperative settings.

This practice should hold true not just for weekdays, but also when physicians are working weekends and after typical business hours. Patients and their expectations remain unchanged, regardless of hour or day.

While scrubs are appropriate for operating or emergency rooms, we suggest changing into more formal attire to visit patients in the hospital or the clinic. Regardless of the occasion, flip-flops, showy jewelry or jeans simply don’t belong in the hospital, just as scrubs do not belong outside the hospital environment. Especially not in the grocery store.


Vineet Chopra is Assistant Professor of Internal Medicine at University of Michigan.
Sanjay Saint is Professor of Internal Medicine at University of Michigan.

This article was originally published on The Conversation.
Read the original article.

How Yersinia pestis evolved its ability to kill millions via pneumonic plague


The mere mention of the plague brings to mind the devastating “Black Death” pandemic that spread across Europe in the 1300s. Mass graves were piled high with the corpses of its millions of victims, while the disease rampaged across Europe for many decades. Yersinia pestis, the bacterium responsible for that plague pandemic, still persists in the environment among rodent and flea populations today, and human outbreaks regularly occur around the world. Most recently, an outbreak of plague was confirmed late last year in Madagascar as well as within a prairie dog colony in Colorado just this June.

The various routes of transfer between hosts of Y. pestis bacteria, which are the cause of bubonic plague in the United States.
CDC, CC BY

Y. pestis can cause three different forms of plague: bubonic, pneumonic and septicemic. Pneumonic plague infects the lungs, causing severe pneumonia. It’s the most serious form of the disease, with fatality rates approaching 100% if untreated, although recovery is possible with antibiotics if caught in time. While increased basic hygiene and developments in modern medicine have greatly reduced the severity of plague outbreaks, the symptoms of pneumonic plague are so similar to that of the flu that misdiagnosis or delays in treatment can have fatal consequences.

Y. pestis is known to have evolved from the relatively mild gut pathogen Yersinia pseudotuberculosis sometime within the last 5,000 to 10,000 years – very recently on an evolutionary timescale. Sometime during this evolution Y. pestis developed new modes of transmission and disease manifestations, which allowed it to adapt to new animals and environments. Rather than simply causing an upset stomach, the bacterium became the killer we know from the Middle Ages.

A mother and son, suspected carriers of the pneumonic plague, share a bed in an Indian hospital.
Kamal Kishore / Reuters

One of our lab’s major research goals is to figure out how Y. pestis developed its ability to specifically cause pneumonic plague. Our research, recently published in Nature Communications, offers new insights into how small genetic changes fundamentally affected the emergence of Y. pestis as a severe respiratory pathogen.

Prior to our study, the consensus in the field had been that pneumonic plague was a secondary byproduct of the invasive disease associated with bubonic plague. As pneumonic plague represents only 5%–10% of current plague infections in humans, the field has presumed that pneumonic plague occurs only once Y. pestis reaches the lungs following systemic infection, as might occur during bubonic plague. While this may be the case now, it may not necessarily represent what occurred in the past, especially as Y. pestis was just emerging from its ancestor Y. pseudotuberculosis.

Plague infection in the lungs. Untreated, death results within a week.
CDC/ Dr Jack Poland, CC BY

First, target the lungs

Therefore, we began our study by asking a relatively simple question: “When did Y. pestis develop the ability to infect the lung and cause pneumonic plague?” Remember, it was only recently, evolutionarily speaking, that it started targeting the lungs rather than the gut. Y. pestis is believed to have emerged as a species 5,000–10,000 years ago, but the first known pandemic of plague in humans didn’t occur until the Justinian Plague that afflicted the Byzantine empire about 1,500 years ago.

Excavation of skeletal remains of victims of the Black Death.
Museum of London, Schuenemann et al PNAS vol. 108 no. 38

A recent discovery helped us investigate. Scientists successfully recovered DNA from Y. pestis from human skeletons in a Black Death mass grave in London, England. The genetic material from the historic site is very similar to DNA isolated from recent modern plague outbreaks. The fact that the DNA from then is similar to the DNA from now indicates that today’s Y. pestis has maintained its devastating disease-causing capability.

To answer the question of how Y. pestis made that crucial leap to targeting the lung and therefore being able to cause pneumonic plague, we used strains of both ancestral and modern Y. pestis in our study. These ancestral strains of Y. pestis, isolated from voles in the Transcaucasian highland, carry characteristics of both modern, pandemic Y. pestis and the relatively benign predecessor species Y. pseudotuberculosis that still exists today.

Thus, these ancestral versions can be considered “intermediate” strains, trapped somewhere between the gut Yersiniae and modern, virulent Y. pestis. Indeed, these “intermediate” lineage ancestral strains are as closely related to Y. pseudotuberculosis as we can get while still technically representing species of Y. pestis. Because of their unique genetic characteristics, these ancestral strains can provide crucial insights into how this bacterium may have adapted to new host environments as it evolved from Y. pseudotuberculosis.

Surprisingly, we found that these ancestral strains were able to cause pneumonic plague in a manner indistinguishable from that of modern Y. pestis in mice – but only if the bacteria carried the gene for a single protein called Pla. Pla is unique to Y. pestis and was acquired very early in the evolution of the species.

Almost all ancestral strains of Y. pestis carry the gene for Pla, but there still exist a few that represent ancestral Y. pestis just prior to acquisition of Pla. We were able to test if these pre-Pla strains were able to cause pneumonic plague – and they did not. But as soon as Y. pestis picked up this gene, the bacteria could cause epidemics of pneumonic plague. No further changes were necessary, even though there are dozens of additional differences between these ancestral strains and modern Y. pestis. So Y. pestis was able to cause pneumonic plague much earlier in its history than had previously been thought – as soon as it acquired this single gene for Pla.

Scanning electron micrograph of Yersinia pestis.
Justin Eddy, Lindsay Gielda, et al, CC BY-ND

Second, increase infectiousness

But that’s not where the story ends. It turns out that all modern pandemic strains of Y. pestis contain a single amino acid mutation in Pla compared to ancestral Y. pestis. This change slightly alters the function of the Pla protein. The mutation, however, plays no role in the ability of any Y. pestis isolates to cause pneumonic plague – ancestral or modern.

Quite surprisingly, this modification allowed Y. pestis to spread deeper into host tissue following a bite from an infected flea or rodent, leading to the development of bubonic plague with its trademark swollen lymph nodes. This suggests that Y. pestis was first a respiratory pathogen before it was able to efficiently cause invasive infections.

This discovery challenges our traditional notion of how plague evolved. Rather than pneumonic plague being a late addition to Y. pestis’s arsenal as commonly believed, its ability to target the lung came before the change that makes it such an infectious pathogen. Our research suggests that the acquisition of Pla and its ability to cause pneumonic plague occurred well before 1,500–5,000 years ago. But the amino acid modification didn’t occur until just prior to 1,500 years ago, allowing Y. pestis to become much more deadly. All strains of Y. pestis from the time of the Justinian Plague and after have the deadly modification of Pla, while strains prior do not.

Physician attire for protection from the Black Death.
Paul Fürst

Our results may explain how, through one small amino acid change, Y. pestis quickly transitioned from causing only localized outbreaks of disease to the pandemic spread seen during the Justinian Plague and the Black Death.

And it raises the ominous possibility that other respiratory pathogens could emerge from similar small genetic changes.


Daniel Zimbler is Postdoctoral Fellow in Bacteriology at Northwestern University.
Wyndham Lathem is Assistant Professor of Microbiology-Immunology at Northwestern University.

This article was originally published on The Conversation.
Read the original article.

When we want something, we think everyone does


We can’t read minds, but that doesn’t stop us from trying to guess what other people are thinking. Will the person in line ahead of us order the last everything bagel? Will that group of people occupy that spot at the bar you’ve been eyeing? We anticipate other people’s intentions and goals because we often assume everyone else wants exactly the same thing we do, even if they don’t. How often do we correctly guess what the person in line ahead of us will do?

This is a well-documented psychological phenomenon known as “goal projection.” Given little to no information about someone else, we will resort to the only knowledge we have access to – our own thoughts – and project this knowledge onto them. Projecting one’s goals can have vast consequences for our behavior, like how we act toward the person onto whom we are projecting. For example, we can become unnecessarily competitive with that person or even act more helpfully toward that person, depending on the situation.

But do we always project our goals onto others? Are we more likely to do it in certain situations and less likely in others? The answer is yes – researchers have found that the more committed you are to reaching your goal, the more likely you will project it onto others.

To find out, researchers randomly approached people at a multiplex movie theater in New York City preparing to buy tickets. These test subjects were asked to identify the movie they came to see and how committed they were to watching that movie. Then researchers pointed out the first person waiting in line to purchase a ticket at the multiplex and asked the test subject which movie they thought the individual was going to see.

The results indicated that the more committed a person was to the goal of watching the movie of his/her choice, the more he/she projected that same goal onto the other movie patron. So if you go into the theatre thinking that you really want to see Mad Max: Fury Road, you’ll assume everyone else wants to as well, and not, say, Jurassic World.

This effect remained even when researchers controlled for how often test subjects attended the movies and for the popularity of the movies playing at the multiplex – two variables that may increase the likelihood of making informed guesses rather than ones driven by goal projection.

Going my way?
Departure/arrival board via www.shutterstock.com.

We think people who are similar to us have the same goals

Researchers also pointed out that people are more prone to project onto someone else when they perceive that person to be similar to them.

In a different study examining commuters at Penn Station in New York City, researchers approached people waiting for the track number of their train to appear. Dozens of trains depart every hour during rush hours from Penn Station. Test subjects were first asked to indicate their destination and their level of goal commitment to get there.

At this point, experimenters singled out another commuter who was waiting close to the subject and was easily observable. Test subjects were asked to indicate how similar they felt that person was to themselves and how likely it was that person was headed to their own destination.

Test subjects with strong goal commitment were more likely to believe the target person would go to the same destination the more that person was perceived to be similar. In other words, viewing someone as more similar to yourself might enhance your goal projection.

Goal attained.
Cashier and shopper via www.shutterstock.com.

We think other people want what we want, until we get what we want

So when we really want something, we assume that people around us do too. But there are times when we don’t project our goals onto others. For example, if you were really focused on getting blueberries to make a pie, you might assume everyone else in the store might also be there to buy blueberries. But once you leave the store, with a box of blueberries in hand, you would no longer project that goal onto others. Now that you’ve got your blueberries, you stop assuming everyone else was after the same thing.

This is precisely what was found in a final study. Researchers identified when people are especially unlikely to project their goal – after they’ve attained it.

At a grocery store, researchers surveyed people before they had gone grocery shopping (yet to attain their goal) and people after they had finished grocery shopping (goal attained). Test subjects were asked to name the main item (eg, blueberries) they came to purchase, or had just purchased (depending on when they were approached), then indicated their goal commitment to purchase that item.

Then researchers chose another shopper who was just about to enter the supermarket at that very moment. Test subjects indicated how similar they felt that new shopper was to themselves, and the probability that the shopper would purchase the same item (blueberries) they purchased or were planning to purchase.

Results showed that people projected their goal onto the other shopper only when their own goal commitment was strong and they viewed the other shopper as similar.

However, when people attained their goal (they had finished shopping), they no longer projected their goal. So we assume other people have the same goals we do, until we achieve those goals ourselves.

Such research could be used to explain why there is so much tension within crowds even before doors open on Black Friday and why people might offer unwarranted and unsolicited advice to others (for instance, offering a tip on how to get to Doctor A’s office when the person actually wants to go to Doctor B).

When there is no other way to make informed guesses about other people’s intentions and goals, people have no choice but to rely on their own internal states – their own intentions and goals – and project them onto others.


Janet Naju Ahn is Postdoctoral Research Scientist, Teachers College at Columbia University.

This article was originally published on The Conversation.
Read the original article.

The science behind sexual orientation


This article is part of a series The Conversation Africa is running on issues related to LGBTI in Africa. You can read the rest of the series here.

People who are attracted to others of the same sex develop their orientation before they are born. This is not a choice. And scientific evidence shows their parents cannot be blamed.

Research providing biological evidence for sexual orientation has been available since the 1980s. The links have been emphasised by new scientific research.

In 2014, researchers confirmed the association between same-sex orientation in men and a specific chromosomal region. This is similar to findings originally published in the 1990s, which, at that time, gave rise to the idea that a “gay gene” must exist. But this argument has never been substantiated, despite the fact that studies have shown that homosexuality is a heritable trait.

Evidence points towards the existence of a complex interaction between genes and environment, which are responsible for the heritable nature of sexual orientation.

These findings are part of a report released by the Academy of Science of South Africa (ASSAf). The report is the outcome of work conducted by a panel put together in 2014 to evaluate all research on the subject of sexual orientation done over the last 50 years.

It did this against the backdrop of a growing number of new laws in Africa which discriminate against people attracted to others of the same sex. The work was conducted in conjunction with the Ugandan Academy of Science.

Existing research

The academy looked at several scientific studies with different focus areas that have all provided converging findings. These include family and twin studies. The studies have shown that homosexuality has both a heritable and an environmental component.

Family studies have shown that homosexual men have more older brothers than heterosexual men. Homosexual men are also more likely to have brothers that are also homosexual. Similarly, family studies show that lesbian women have more lesbian sisters than heterosexual women.

Studies on identical twins are important as identical twins inherit the same genes. This can shed light on a possible genetic cause. Studies on twins have established that homosexuality is more common in identical (monozygotic) twins than in non-identical (dizygotic) twins. This proves that homosexuality can be inherited.

However, the extent of the inheritance between twins was lower than expected. These findings contribute to the notion that although homosexuality can be inherited, this does not occur according to the rules of classical genetics. Rather, it occurs through another mechanism, known as epigenetics.

Epigenetics likely to be an important factor

Epigenetics relates to the influence of environmental factors on genes, either in the uterus or after birth. The field of epigenetics was developed after new methods were found that identify the molecular mechanisms (epi-marks) that mediate the effect of the environment on gene expression.

Epi-marks are usually erased from generation to generation. But under certain circumstances, they may be passed on to the next generation.

Normally all females have two X-chromosomes, one of which is inactive or “switched off” in a random manner. Researchers have observed that in some mothers who have homosexual sons there is an extreme “skewing” of inactivation of these X-chromosomes. The process is no longer random and the same X-chromosome is inactivated in these mothers.

This suggests that a region on the X-chromosome may be implicated in determining sexual orientation. The epigenetics hypothesis suggests that one develops a predisposition to homosexuality by inheriting these epi-marks across generations.

External environmental factors such as medicinal drugs, chemicals, toxic compounds, pesticides and substances such as plasticisers can also have an impact on DNA by creating epi-marks.

These environmental factors can also interfere with a pregnant woman’s hormonal system. This affects the levels of sex hormones in the developing foetus and may influence the activity of these hormones.

Future studies will determine whether these factors may have a direct impact on areas of the developing brain associated with the establishment of sexual orientation.

Looking to evolution

From an evolutionary perspective, same-sex relationships are said to constitute a “Darwinian paradox” because they do not contribute to human reproduction. This argument posits that because same-sex relationships do not contribute to the continuation of the species, they would be selected against.

If this suggestion were correct, same-sex orientations would decrease and disappear with time. Yet non-heterosexual orientations are consistently maintained in most human populations and in the animal kingdom over time.

There also appear to be compensating factors in what is known as the “balancing selection hypothesis”, which accounts for reproduction and survival of the species. In this context, it has been demonstrated that the female relatives of homosexual men have more children on average than women who do not have homosexual relatives.

Future studies

The academy found that a multitude of scientific studies have shown sexual orientation is biologically determined. There is not a single gene or environmental factor that is responsible for this – but rather a set of complex interactions between the two that determines one’s sexual orientation.

However, more evidence is leading investigators to a specific region on the X-chromosome, and possibly a region on another chromosome.

The identification of these chromosomal regions does not imply that homosexuality is a disorder – nor does it imply that there are mutations in the genes in these regions, which still remain to be identified. Rather, for the first time, it suggests that there is a specific region on a chromosome that determines sexual orientation.

Although research has not yet found what the precise mechanisms are that determine sexual orientation – which may be heterosexual, homosexual, bisexual or asexual – the answers are likely to come to the fore through continued research. These findings will be important for the field of genetics and, more importantly, for those attracted to others of the same sex and society as a whole.


This article draws from the ASSAf report.


Michael Sean Pepper is Director of the Institute for Cellular and Molecular Medicine at University of Pretoria.
Beverley Kramer is Assistant Dean: Research and Postgraduate Support in the Faculty of Health Sciences at University of the Witwatersrand.

This article was originally published on The Conversation.
Read the original article.

Young adults don’t understand health insurance basics – and that makes it hard to shop for a plan


The health and success of the Affordable Care Act (ACA) depends on a lot of factors, and enrolling enough “young invincibles” in health insurance is one of them.

Under the ACA, insurers in the individual market have to cover everyone who wants to enroll. Insurers are also restricted in how much they can vary premiums based on age. That means that older people who have higher medical costs (on average) pay premiums lower than what might cover their care, and young people with lower medical costs (on average) pay premiums sometimes above their expected medical costs. So enrolling young people in health insurance helps keep costs stable. In addition, young adults have historically been overrepresented among the uninsured.

And so millions of young adults were targeted for enrollment in the ACA’s health insurance marketplaces during the first open enrollment period in early 2014.

Enrolling in health insurance can be hard; choosing a health insurance plan that provides the amount of coverage you’ll likely need at the right cost is a difficult task. It’s challenging for consumers who have been through the process several times before, and likely even more so for young people who may be selecting from plan options for the first time.

Choosing the right insurance can be tough.
Mike Segar/Reuters

What young adults want in a health insurance plan

I led a research team at the University of Pennsylvania that examined the experience of young people when they enroll in health insurance on HealthCare.gov, the federal insurance marketplace. At the time of our study, Pennsylvania was one of 34 states that did not have a state-run health insurance exchange. If you don’t have employer-sponsored health insurance, or are too old to remain on a parent’s health insurance, in states like Pennsylvania you have the opportunity to go on HealthCare.gov to choose a plan.

We studied 33 highly educated young adults aged 19-30 in Philadelphia during the first year of HealthCare.gov. Some of the people we followed had health insurance at the beginning of the study, but wanted to look at insurance options on HealthCare.gov because they’d heard from friends that they might get better, cheaper coverage on the marketplace. In fact, one of the findings of our study is that young adults were often not only shopping for coverage on HealthCare.gov, but also comparing those plans to options outside the marketplace, like plans offered by schools, employers or their parents’ health insurance.

From January to March of 2014, we observed the young adults as they shopped for insurance plans on HealthCare.gov, asking them to “think aloud” to capture their reactions in real time. We then interviewed participants about their thoughts on health insurance in general and what they saw on HealthCare.gov.

One said:

I just wasn’t able to comprehend all of the things on the Healthcare.gov – I got confused. I’m not a person to give up, not at all – but with the system, I just wanted to quit.

The young adults we followed were looking for an affordable health insurance option. They placed a lot of emphasis on the monthly premium cost and the amount of plan deductible (though see below on their confusion about what deductible actually means).

Most considered a monthly premium of over US$100 unaffordable, yet the least expensive plan without tax credits in Philadelphia was closer to $200 per month. Luckily, several of the participants qualified for tax credits, which brought their premiums as low as $0.13 per month. Others, however, did not qualify for any discounts and chose to remain uninsured, stating that they could not afford any of the options, even though they might have to pay a penalty for not having insurance.

One said:

I will just pay whatever that tax consequence is, $95 or something, right?, because $200 a month right now is way too much. I don’t know how my friends with student loans do it.

Topping the list of coverage benefits they wanted was access to affordable primary and preventative care. One participant said:

I would really like to get a physical to just see where I’m at. I haven’t been to a doctor in a long time, but I wanna see if there’s anything I should be concerned about – blood pressure, cholesterol…

Interestingly, however, many participants in the study did not realize that preventive care was included in all plans at no additional cost under the ACA. Hence, one of the recommendations coming out of this study was that plans should emphasize the availability of no-cost preventive care, like birth control and routine visits, especially in efforts targeting enrollment of young adults.

‘What’s a deductible?’ – young adults aren’t familiar with insurance terms

As one of the young adults was looking at his plan options, he said:

This plan is $20 to see a primary doctor, and this one is 10% coinsurance after deductible – and I just don’t understand that. What is the deductible to see my primary doctor?

It became clear early in the study that one of the biggest challenges the young people faced in choosing a plan was their lack of familiarity with basic health insurance terms like “deductible” or “coinsurance.”

Only half of the participants could correctly define “deductible,” while less than one in five could define “coinsurance.” These concepts are fundamental for anyone who hopes to make an informed health insurance choice, and misunderstanding them can lead to a rude awakening after purchasing and trying to use the insurance. This happened to one participant who said:

Before I signed up for it, I didn’t really know what deductible meant. I thought it was saying it would cover $6,000 worth of stuff, and anything over that, then I would have to pay the rest. But I found out it was the other way around.

Preventative care is included.
stethoscope via www.shutterstock.com.

More support needed for young beneficiaries

While this is a small study that was conducted in a single city and state that uses healthcare.gov, it shows that even the highly educated young people in our study had difficulty making health insurance choices. However, our findings on the confusion over health insurance terms have also been demonstrated in studies of consumers across a variety of demographic groups. Other researchers have also verified, mostly in experimental settings, that people have a hard time making optimal health insurance choices, even after ensuring that they understand basic health insurance concepts or conducting their insurance experiments in a population of MBA students.

Their findings and ours help describe how young adults navigate the insurance selection process, and point to many areas where consumers could be better supported in the health insurance selection process.

In the area of health insurance literacy, tools to help consumers could be as simple as providing pop-up explanations of key terms, like “deductible,” when you hover your cursor over the term on the screen. Other tools might include total cost estimators that do the math for the consumer. This could provide an estimate that takes into account a plan’s deductible, coinsurance, copay and premium amounts, as well as how often that person predicts they’ll use their insurance (such as how many times they visit the doctor and how many medications they take).
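As a rough illustration of what such a total cost estimator might compute, here is a minimal sketch. The function, its simplifications (no copays or out-of-pocket maximum) and the plan figures are assumptions for illustration, not any marketplace’s actual tool:

# A hypothetical, simplified total-cost estimator of the kind described above.
# It ignores copays and out-of-pocket maximums; all numbers are illustrative.

def estimate_annual_cost(monthly_premium, deductible, coinsurance_rate,
                         expected_care_cost):
    """Rough annual cost of a plan: premiums plus expected out-of-pocket spending."""
    premiums = monthly_premium * 12
    # The consumer pays all care costs up to the deductible...
    out_of_pocket = min(expected_care_cost, deductible)
    # ...and then only the coinsurance share of anything beyond it.
    out_of_pocket += max(expected_care_cost - deductible, 0) * coinsurance_rate
    return premiums + out_of_pocket

# Example: a $200/month plan with a $6,000 deductible and 10% coinsurance,
# for someone who expects about $1,500 of care during the year.
print(estimate_annual_cost(200, 6000, 0.10, 1500))  # 3900.0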

We are sharing our findings with those getting HealthCare.gov and the state-run health insurance marketplaces ready for the next open enrollment period in November 2015.


Charlene Wong is Clinical scholar and pediatrician at University of Pennsylvania.

This article was originally published on The Conversation.
Read the original article.

Ancient DNA reveals how Europeans developed light skin and lactose tolerance


Food intolerance is often dismissed as a modern invention and a “first-world problem”. However, a study analysing the genomes of 101 Bronze-Age Eurasians reveals that around 90% were lactose intolerant.

The research also sheds light on how modern Europeans came to look the way they do – and that these various traits may originate in different ancient populations. Blue eyes, it suggests, could come from hunter gatherers in Mesolithic Europe (10,000 to 5,000 BC), while other characteristics arrived later with newcomers from the East.

About 40,000 years ago, after modern humans spread from Africa, one group moved north and came to populate Europe as well as north, west and central Asia. Today their descendants are still there and are recognisable by some very distinctive characteristics. They have light skin, a range of eye and hair colours and nearly all can happily drink milk.

However, exactly when and where these characteristics came together has been anyone’s guess. Until now.

Clash of cultures

Throughout history, there has been a pattern of cultures rising, evolving and being superseded. Greek, Roman and Byzantine cultures each famously had their 15 minutes as top dog. And archaeologists have defined a succession of less familiar cultures that rose and fell before that, during the Bronze Age. So far it has been difficult to work out which of these cultures gave rise to which – and eventually to today’s populations.

The Bronze Age (around 3,000–1,000 BC) was a time of major advances, and whenever one culture developed a particularly advantageous set of technologies, it became able to support a larger population and to dominate its neighbours. The study found that the geographical distribution of genetic variations at the beginning of the Bronze Age looked very different to today’s, but by the end it looked pretty similar, suggesting a level of migration and replacement of peoples not seen in western Eurasia since.

One people that was particularly important in the spread of both early Bronze-Age technologies and genetics was the Yamnaya. With a package of technologies including the horse and the wheel, they exploded out of the Russian and Ukrainian Steppe into Europe, where they met the local Neolithic farmers.

Yamnaya skull
Natalia Shishlina.

By comparing DNA from various Bronze-Age European cultures to that of both Yamnaya and the Neolithic farmers, researchers found that most had a mixture of the two backgrounds. However the proportions varied, with the Corded Ware people of northern Europe having the highest proportion of Yamnaya ancestry.

And it appears that the Yamnaya also moved east. The Afanasievo culture of the Altai-Sayan region in central Asia seemed to be genetically indistinguishable from the Yamnaya, suggesting a colonisation with little or no interbreeding with pre-existing populations.

Mutations traced

So how have traits that were rare or non-existent in our African ancestors come to be so common in western Eurasia?

The DNA of several hunter gatherers living in Europe long before the Bronze Age was also tested. It showed that they probably had a combination of features quite striking to the modern eye: dark skin with blue eyes.

The blue eyes of these people – and of the many modern Europeans who have them – are thanks to a specific mutation near a gene called OCA2. As none of the Yamnaya samples have this mutation, it seems likely that modern Europeans owe this trait to their ancestry from these European hunter gatherers of the Mesolithic (10,000-5,000 BC).

Reconstruction of a Yamnaya person from the Caspian steppe in Russia about 5,000-4,800 BC.
Alexey Nechvaloda

Two mutations responsible for light skin, however, tell quite a different story. Both seem to have been rare in the Mesolithic, but present in a large majority by the Bronze Age (3,000 years later), both in Europe and the steppe. As both areas received a significant influx of Middle Eastern farmers during this time, one might speculate that the mutations arose in the Middle East. They were probably then driven to high levels by natural selection, as they allowed the production of sufficient vitamin D further north despite relatively little sunlight, and/or better suited people to the new diet associated with farming.

Another trait that is nearly universal in modern Europeans (but not around the world) is the ability to digest the lactose in milk into adulthood. As cattle and other livestock have been farmed in western Eurasia since long before the Bronze Age, one might expect such a mutation to have already been widespread by then. However, the study found the mutation in only around 10% of its Bronze Age samples.

Interestingly, the cultures with the most individuals with this mutation were the Yamnaya and their descendants. These results suggest that the mutation may have originated on the steppe and entered Europe with the Yamnaya. A combination of natural selection working on this advantageous trait and the advantageous Yamnaya culture passed down alongside it could then have helped it spread, although this process still had far to go during the Bronze Age.

This significant study has left us with a much more detailed picture of Bronze Age Europeans: they had the light skin and range of eye colours we know today. And although most would have got terrible belly ache from drinking milk, the seeds for future lactose tolerance were sown and growing.


Daniel Zadik is Postdoctoral researcher in genetics at University of Leicester.

This article was originally published on The Conversation.
Read the original article.

The case for putting psychologists in trauma programs


Imagine this: A 17-year-old girl has been shot in an apparent mall robbery. Her parents rush to the hospital only to be told they cannot see their daughter yet – the medical team is actively working to save her life. The unknown is terrifying, the waiting torturous.

When they are finally allowed to enter the trauma center to see their daughter, they see 10-15 professionals, all gowned and gloved with masks on their faces, surrounding their daughter, each attending to a different task such as starting an IV, monitoring her vital signs, preparing to take bedside X-rays. Her parents just want to know if she will be OK. But this is exactly the information the treatment team is not able to provide, at least not yet.

Nearly 2.5 million people a year are hospitalized because of a traumatic injury. When someone is taken to a trauma center or emergency room they enter a chaotic place. They are surrounded by bright lights and blinking machines. Their clothes are cut off. Doctors and nurses wearing masks and sterile clothing crowd around, asking questions and giving them medications and examining their body. This assault to the senses is traumatic on its own, beyond the injuries that brought them to the hospital in the first place. And it not only affects the patient, but their family and friends as well, who are helpless to “make it all better.”

And yet routine psychological care for victims of traumatic injuries and their families is not available at all hospitals. Exact numbers are hard to come by, but we know of fewer than ten trauma programs nationally that routinely employ a psychologist as a member of the core trauma treatment team.

Some trauma centers address the emotional needs of patients through pastoral care or social services, while others only call in a psychologist or psychiatrist for isolated cases such as suicide attempts or a loved one’s death.

The emotional injuries from trauma can be as disabling as the physical injuries, so doesn’t it make sense to have psychologists routinely in trauma programs to look after their patients’ psychological well-being?

Controlled chaos.
Surgeons via www.shutterstock.com.

Traumatic injuries don’t just leave physical scars

Think about that 17-year-old girl and her parents. She needs intense care for her physical injuries. Her parents might need psychological support. And when that girl pulls through, she might need psychological care to cope with her injuries and recovery as well.

Trauma centers are required to have surgeons, emergency medicine physicians, neurosurgeons, cardiovascular surgeons and orthopedists on faculty. But psychologists are not on that list. While trauma centers are mandated to provide psychosocial support programs during the immediate recovery phase, what is not routine is ongoing psychological care for patients and families.

Health care has been slow to acknowledge the invisible wounds trauma creates and the impact this has on emotional and physical recovery. How someone responds to these emotional demands is key to their overall recovery. Studies have shown that trauma survivors are at increased risk for many psychological issues including depression, post-traumatic stress disorder (PTSD), and substance abuse.

Indeed, as researchers on post-traumatic stress disorder have noted:

“We may reduce the costs of trauma, both personal and social, by beginning to address the collateral social and psychological complications of injury with the same intensity as we approach the physical.”

Such psychological wounds, the hidden suffering in trauma, must be recognized and treated. Trauma programs need to treat the whole person. That’s where we come in.

Putting psychologists within the core trauma program

As medical trauma psychologists on faculty for a very busy Level I Trauma Center and trauma program at the University of Florida (UF), we confront this all the time. Every day we help patients and families manage the emotional burden of a traumatic injury and its aftermath. Most have never before experienced a trauma center or an intensive care unit except for what they have seen on television, and none of them planned to be here.

Though some individuals arrive at the hospital seeking emergency care for psychiatric conditions such as schizophrenia or major depressive disorder, these patients represent only a small number of the people we serve. Our primary patient population is people with traumatic physical injuries and their families.

Our hospital was the first in the nation to hire a licensed psychologist as a full faculty member for its in-patient trauma program. Dr Kamela Scott began developing the UF Psychological Services program back in 1995 after the hospital realized that trauma patients weren’t receiving the psychological care that they needed. In most medical trauma systems at the time, the emotional needs of patients and families either were not addressed at all, or were only addressed on an as-needed basis. The idea behind the UF Psychological Services program is to treat psychological care as a routine and integral part of treatment for patients and families after trauma.

Since then it’s grown – adding another full-time faculty psychologist and a post-doctoral fellow. We also train surgery residents on medical ethics and decision making, effective communication with families, managing disruptive patients and families, and conflict resolution.

The trauma team can call us to evaluate a patient, just like they might call a neurosurgeon or cardiovascular specialist. We might be asked to help a patient and family adjust to life after traumatic brain or spinal cord injury. Or it could mean intervention with victims of domestic violence or assault. Or we might provide support to families when the decision is made to end artificial life support for a loved one.

Through this collaborative approach, the entire treatment team works together to help patients recover from devastating injuries, both physically and psychologically, and to adjust to and cope with the overall effects of those injuries.

And we also look after the needs of patients’ families. That might mean addressing their fears and concerns and offering guidance about how to support their loved one’s physical and emotional recovery.

Patients’ families need support too.
Waiting room via www.shutterstock.com.

Care doesn’t end when the patient goes home

We can’t view the impact of trauma in terms of physical injury alone. If we do, we miss an entire aspect of what it means to be human: Emotion.

Victims of motor vehicle crashes can become too anxious to get behind the wheel of a car again, and that might make them withdraw socially. Survivors of physical assault may fear walking the streets or into parking lots, which can make it hard to return to normal daily life. Someone with facial scarring or a loss of physical function due to injury may struggle with self-esteem when they see a different face in the mirror, or may be unable to return to work due to physical disability.

Medical trauma psychologists, like us, serve as psychological case-managers for life after discharge from the hospital. Time and again we have found that our patients simply don’t know what resources are available to help them or how to access those services.

We can connect them to community mental health providers with expertise in the treatments they need, such as for anxiety disorders, depression or substance abuse. We can also link our patients to providers within their health insurance networks, and to support groups close to home.

Psychological follow-up care provides a safety net for patients and families throughout the recovery process to help them reach the ultimate goal of returning to function, physically and emotionally.

The Conversation

Kamela K Scott is Associate Professor and Director, Psychological Services, College of Medicine, at University of Florida.
David Chesire is Associate Professor & Licensed Psychologist, College of Medicine at University of Florida.

This article was originally published on The Conversation.
Read the original article.

Human experiments – the good, the bad, and the ugly


Research involving human subjects is littered with a history of scandal that often shapes people’s views of the ethics of research. Often the earliest cited case is English physician Edward Jenner’s development of the smallpox vaccine in 1796, when he injected an eight-year-old child with pus taken from a cowpox infection and then deliberately exposed him to an infected carrier of smallpox.

Although Jenner’s experiment was, fortunately, successful, exposing a child to a deadly disease in this way would undoubtedly be seen as unacceptable today. Perhaps the most notorious cases of unethical research were revealed during the Nuremberg trials concerning Nazi experiments on concentration camp prisoners. This “research” included involuntary sterilisation, inducing hypothermia, and exposing subjects to diseases such as tuberculosis.

Mustard gas burns.
Balcer

There are also examples of government-run research that took advantage of the vulnerability of the subjects to ensure their participation and which resulted in the subjects experiencing severe harms, such as the Tuskegee Syphilis trials or the UK-run Porton Down chemical experiments in which 11,000 military personnel were exposed to mustard and nerve gas between 1939 and 1989.

Human subjects

Yet, despite the litany of failures to maintain ethical standards in research, these remain the exceptions and a focus on scandals can seriously distort proper discussion about research ethics.

Research involving human subjects is not intrinsically ethically dubious. That is not to say it doesn’t contain ethical challenges, but these concerns can often be met. Nor does it diminish the immense social importance of involving human subjects in experiments and the huge improvement in the quality of lives and number of lives saved through such research.

The most pressing question in research ethics is often not whether we should be doing research at all, but how we can balance or justify exposing individual human subjects to risk for the sake of advancing science.

Sometimes, in the case of therapeutic trials, research subjects potentially stand to benefit should the treatment prove successful (some have argued that this should go even further with the recruitment of the terminally ill for experimental drugs). However, such cases are rare when considered against the time it takes for the results of research to be fully developed. The benefits are therefore often distributed among future populations rather than the individuals taking part in the trial. Matters are made even more complicated in cases where trials are conducted on subjects who are potentially vulnerable or desperate.

Balancing tensions

The crucial feature about research ethics is to understand that in order to carry out ethically justified research, we have to be particularly aware of where the imbalances lie between researchers and their subjects and what might be best done to avoid ethical conflict. Much of the ethical conflict is based in the tension that arises between the researcher’s concerns for the interests of the subject on the one hand and the interests of science, society and future patients on the other.

Rotavirus close up.
Cell culture

Unethical practice can still occur where this hasn’t been properly thought through – especially when it comes to exposing study participants to risk through placebos. In 2014, a trial for an experimental vaccine for rotavirus in India was heavily criticised for giving more than 2,000 children a placebo. In 1997, a US government-funded trial into preventing the spread of HIV from pregnant women to their babies saw some of the women given a placebo instead of a drug already known to be effective at preventing transmission.

The role of the committee

Abuse can also happen because researchers themselves may, consciously or unconsciously, favour the interests of carrying out research over the interests of the subjects involved in the research.

Research ethics committees (RECs) are widely used to assess and review trial designs. These committees are designed to scrutinise with a broad societal view – including both professional and lay perspectives – as to whether the research is ethically acceptable. In many cases, this involves ensuring that many of the standard safeguards, such as proper consent or anonymising data, are in place to protect research subjects, and ensuring that subjects are not exposed to unnecessary risks.

What will the committee think?
Decide by Shutterstock

There will inevitably be cases where research cannot meet the usual ethical safeguards, for example, when the very nature of the research requires that the subjects do not know they are participating in research (as in the case of certain kinds of behavioural study, where knowing that you were the subject of research would change your behaviour patterns and render the research useless).

It then becomes a much more demanding question as to whether the potential benefits of the research are sufficiently great to justify overriding standard practices, and whether there are ever limits to the sorts of risks we are willing to allow human subjects to undertake.

The limits of consent

We tend to deem it much too risky to allow those least able to protect their own interests, such as children, adults with cognitive impairments, or those whose circumstances leave them more open to harm, to participate in research. But it is not unheard of when the research cannot be done on any subject population other than the vulnerable group.

Research into dementia treatments, for instance, or research into child behavioural disorders would each require at least some involvement of vulnerable groups to be effective. For such research to be ethically acceptable, the methodological necessity of using members of these groups as subjects must also go hand-in-hand with a range of safeguards to protect them from harm.

As these subjects are less able to protect their own interests adequately, these safeguards must also be much more stringent and wide-reaching than might be the case for research involving less vulnerable research subjects.

Even in cases of studying particular conditions, such as childhood diseases, research is usually only seen as ethically justified if it imposes no real risk of harm or is likely to have some direct therapeutic benefit. The drawback, some argue, is that this inhibits the development of drugs targeted at specific populations such as children. This has led to changes in the law that make it easier to include children in cancer trials, for example.

When it comes to well-informed, competent adults, some believe that any level of risk is acceptable as long as the subject agrees to it. Others think that the degree of risk needs to be offset by particular gains for the individual taking part – as was the case with the recent use of an experimental Ebola vaccine on healthcare workers exposed to the virus in Africa.

Ultimately, there is no universally accepted position as to how such research should proceed. Laws and codes are far too general for deciding such cases, which is where ethical judgement, committees and argument come in to allow agreement to be reached. These can delay research or draw on the resources available for a trial, but they are essential if we are to maintain a high level of scrutiny in often complex situations and prevent further scandalous cases from arising.

This is the first part of The Conversation’s series On Human Experiments. Please click on the links below to read other articles:

Part Two: How national security gave birth to bioethics

The Conversation

Anthony Wrigley is Senior Lecturer in Ethics at Keele University.

This article was originally published on The Conversation.
Read the original article.