Category Archives: Science

When I grow up, I want to be a researcher…


Jérémy Filet, Université de Lorraine and Lisa Jeanson, Université de Lorraine

“So what’s your PhD topic again?”…

Nowadays, this is the question most commonly asked of early-career researchers, and the answer is becoming more and more complex. While an interdisciplinary approach is favoured in the English-speaking world, the French academic system often keeps doctoral students within methodological limits.

So why maintain such an inflexible, discipline-focused system? How can young researchers make their fields, and the scientific foundation on which they build their work, their own?

Breaking down boundaries: the end of labels

A basic trend: the longstanding boundaries between classic disciplines are breaking down or, at least, blurring, and many academics feel disoriented. One explanation for this radical shift is probably the development of new media. While “traditionalists” try to hold on to their “specialties”, open-minded researchers use new technologies to break down the walls between disciplines. Indeed, since the 1980s, the English-speaking research world has witnessed the birth of new fields of multidisciplinary research – a model from which France and other countries have begun to draw inspiration over the last decade.

For 21st-century PhD students – the first generation of “digital natives” – the web has been a fact of life for as long as they can remember. They tend to refuse labels and, unlike their predecessors, early-career researchers do not want to choose between specialties, methodologies, schools of thought or countries. They want to embrace them all.

And why would they have to choose, anyway? Thanks to the Internet they have access to almost unlimited knowledge through MOOCs, TED talks, online publications – in a nutshell, open sources of knowledge. Many doctoral candidates have graduated with two or three master’s degrees and have already followed several transdisciplinary pathways. They are thus entitled to diversify their experience, and they wish to keep this privilege, and even cultivate it, when writing their theses.

“Y generation” researchers

The training of the current generation of researchers – strewn with pitfalls and migrations – is not so “unusual” anymore. In a sense, their professional lives will be that way as well. For “Y generation” researchers, a certain volatility becomes necessary, if not essential, to fully comprehend new nomadic objects of research. Why not dissect a Latin text in the same way we examine DNA? Could a philosopher learn something from an examination of African tax systems?

At the 2016 Early Career Researchers conference at the University of Lorraine, PhD students discussed their take on interdisciplinarity and its potential benefits. The 2017 edition, taking place on June 16, returns with a new theme: “Which questions for what research? The Humanities at the crossroads of disciplines”.

Asking the right questions

In 2017 we need to discover what kinds of questions are being asked in research. What are the purposes of research? Which questions best correspond to which types of research? What is our take on fundamental research? What is the split between social-science research, applied research and interventionist research? With the multiplication of ground-breaking concepts, should research fields be restructured?

How particular disciplines are mastered is clearly defined by French institutions, such as the National Council of Universities and the competitive exams for secondary-school teaching in the French national education system. We should therefore question how new fields of research become legitimised within a given academic institutional system. Cultural studies, for instance, have often been strongly criticised in France, whereas their popularity within the English-speaking world is easily understandable considering their interdisciplinary nature.

Certain disciplines taught at universities also have their equivalent in the secondary-school system, and many research departments limit their recruitment of lecturers to candidates who have passed the secondary-school exam. Notwithstanding the many differences between teaching in secondary school and conducting research at university, should academics in France continue this historical mode of recruitment? Can research fields be as easily delimited as the disciplinary knowledge one needs to teach in secondary schools? This issue is all the more pressing as new technologies bolster the constant evolution of research questions. Can they enable the Y generation of researchers to free themselves from the ancient methods of “mastering” disciplines and go beyond the more “traditional” fields of research?

Towards enhanced research

While fields of research are increasingly changing, should they all intersect and perfectly match taught disciplines, or could they be much more enriched and flexible? A good example is gender studies, which combines history, psychology, sociology and even medicine. Similarly, shouldn’t we consider research fields within the context and needs of society? It is only logical to question the axiological positioning of the researcher with regard to political militancy or societal debates, especially when their research deals with current affairs.

Moreover, an increasing number of companies and other organisations are now proposing collaborations and partnerships with researchers. Industrial agreements for training through research (for example, the French CIFRE program) establish a partnership between a partner – most often a firm – a research department, and a PhD student. What methodologies can be applied in such collaborations? How do we reconcile the researcher’s methods and the partners’ expectations? We also need to question the uses and the limits of such cooperative efforts. Concisely put: how do we distinguish between disciplines? Should we talk about a disciplinary area, or should we replace that notion with the definition of a research domain? Is the creation of inter- and/or trans-disciplinary research teams always necessary? Are they really beneficial?

Facing the multiplication of such questions, early-career researchers need to develop innovative research practices and find ways to address the position of today’s researchers.

Novel practices, new questions

For its 2017 edition, the organisers of the Early Career Researchers conference invite PhD students of all disciplinary backgrounds to reflect on the following axes:

  • What epistemological and deontological approaches should researchers adopt today?

  • What are the implications of the human factors behind research?

  • Which methodolog(y/ies) for what research: where are the boundaries between disciplines?

  • Inter/transdisciplinarity and the contributions of research and new technologies to society: how can different perspectives be reconciled?

  • How does research interact with its foundations?

These considerations are far from isolated: they are beginning to be tackled at international congresses. These include the 2017 PhD colloquium of the French Society for the Study of English (SAES), to be held June 1-3 in Reims, and “Designations of Disciplines and Their Content: The Paradigm of Studies”, which took place at Paris 13–USPC in January 2017.


It is with this in mind that the early-career researcher conference will be held June 16, 2017, in Metz, France. For more information, visit the conference website.

Jérémy Filet, PhD candidate in 18th-century British civilisation, Université de Lorraine and Lisa Jeanson, PhD candidate in cognitive ergonomics, Université de Lorraine

Japanese space agency’s mission aims to uncover how moons of Mars formed


The Japan Aerospace Exploration Agency (JAXA) has announced a mission to visit the two moons of Mars and return a rock sample to Earth. It’s a plan to uncover both the mystery of the moons’ creation and, perhaps, how life began in our Solar System.

The Solar System’s planets take their names from ancient Greek and Roman mythology. Mars is the god of war, while the red planet’s two moons are named for the deity’s twin sons: Deimos (meaning panic) and Phobos (fear).

Unlike our own Moon, Phobos and Deimos are tiny. Phobos has an average diameter of 22.2km, while Deimos measures an even smaller 13km. Neither moon is on a stable orbit, with Deimos slowly moving away from Mars while Phobos will hit the Martian surface in around 20 million years.

The small size of the two satellites makes their gravity too weak to pull the moons into spheres. Instead, the pair have the irregular, lumpy structure of asteroids. This has led to a major question about their formation: were these moons formed from Mars or are they actually captured asteroids?

Impact or capture?

Our own Moon is thought to have formed when a Mars-sized object hit the early Earth. Material from the collision was flung into the Earth’s orbit to coalesce into our Moon.

A similar event could have produced Phobos and Deimos. The terrestrial planets were subjected to a rain of impacts during the final throes of Solar System formation.

Mars shows possible evidence of one such major impact, as the planet’s northern hemisphere is sunk an average of 5.5km lower than the southern terrain. Debris from this or other impacts could have given birth to the moons.

Alternatively, Phobos and Deimos could be asteroids that were scattered inwards from the asteroid belt by the looming gravitational influence of Jupiter. Snagged by Mars’s gravity, the pair would then have become the planet’s two stolen moons. This mechanism is how Neptune acquired its moon, Triton, which is thought to have once been a Kuiper belt object, like Pluto.

There are compelling arguments for both the #TeamImpact and #TeamCapture scenarios.

The orbits of the two moons are circular and lie in the plane of Mars’s own rotation – an arrangement extremely unlikely to arise from a capture event. On the other hand, observations of the moons suggest they may have a composition similar to that of other asteroids.

Definitive determination of the moons’ composition would act as a fingerprint to distinguish the two models. A collision event should result in moons made from the same rock as Mars. But if the moons were captured, they would have formed in a different part of the Solar System with distinct minerals.

This is where the new mission comes in. JAXA’s Martian Moon eXploration Mission (MMX) is due to launch in September 2024 and arrive at Mars in August 2025. The spacecraft will then spend the next three years exploring the two moons and the environment around the red planet.

During this time, MMX will drop to the surface of Phobos and collect a sample to return to Earth in the summer of 2029.

Collecting a sample from small rocky bodies is a difficult challenge because of their weak gravity. But this is JAXA’s speciality. The space agency previously returned samples from the asteroid Itokawa in 2010. The sequel to that mission, Hayabusa2, is due to arrive at the asteroid Ryugu next year.

International collaborations

The excitement for a Mars moon mission has led to strong international involvement in MMX. On April 10, JAXA president Naoki Okumura met his counterpart from France’s Centre National d’Etudes Spatiales (CNES), Jean-Yves Le Gall.

The meeting cemented a collaboration between the two space agencies. CNES will provide an instrument for MMX as well as contributing expertise on flight dynamics for the tricky encounter with the Martian moons.

The French instrument will combine a high-resolution infrared camera and spectrometer, a technique that analyses the composition of each image pixel. This will allow the rocks of the Martian moons to be investigated down to a few tenths of a metre.

With a pixel size an order of magnitude smaller than that of similar instruments on missions such as NASA’s Mars Reconnaissance Orbiter and ESA’s Mars Express, the spectrometer will also be able to help MMX select the best landing site on Phobos and take the sample.

CNES will also explore the possibility of building a rover to explore the surface of Phobos. A decision will be taken in November this year.

In addition to the collaboration with France, MMX will carry an instrument from NASA. While the CNES spectrometer will examine the type of minerals on the moons, the NASA instrument will pick out individual chemical elements. It does this by analysing the high-energy gamma rays and neutrons produced when cosmic rays from the Sun or more distant sources bombard the moons’ surfaces.

Together, these instruments will reveal the composition of Mars’s mysterious satellites in far greater detail.

Both #TeamCapture and #TeamImpact offer fantastic science. Moons formed from collisional debris would be time capsules preserving conditions on the young Mars. In this early epoch, Mars and the Earth are expected to have been far more similar than they are now. A sample from this time could reveal how a planet becomes habitable.

But moons captured from the asteroid belt would be kin to the meteorites that rained down on the young Earth. These are thought to be the deliverers of our oceans and even our first organic molecules. A sample from Phobos would be a taste of the package that was flung around the early Solar System.

Phobos’s ever-decreasing distance from Mars also means that the top layer of the moon’s soil should be littered with meteorites scattered from the planet. Such a short journey would allow much lower-density material to survive the trip, unlike the Martian meteorites that manage to reach Earth.

This transferred material will also be from all over Mars, rather than the small patch that rovers have examined. And it might result in a more complete picture of Mars, as well as of its moons.

MMX is an exciting mission, promising information about moon formation, about Mars itself, and about how water was delivered around our Sun. As we wait for 2024, are you voting for #TeamImpact or #TeamCapture?

Elizabeth Tasker, Associate Professor, Japan Aerospace Exploration Agency (JAXA)

This article was originally published on The Conversation. Read the original article.

Why Mexican immigrants are healthier than their US-born peers


Anna Waldstein, University of Kent

Supporters of Donald Trump’s wall might have us believe that Mexicans who enter the US illegally carry disease and take advantage of America’s healthcare system. But several large public health surveys suggest that most Mexican immigrants are healthier than the average American citizen. So what can Americans learn about health from their Mexican neighbours?

The “Hispanic health paradox” was first identified in 1980, in the Hispanic health and nutrition examination survey, whose results were compared with those of a parallel survey covering the general American population. Of all Hispanic groups, people from Mexico were in some of the best health relative to the rest of Americans. For example, Mexicans have lower rates of high blood pressure, cardiovascular disease and most cancers than the general US population.

But, by the second or third generation, people of Mexican descent do not seem to have any health advantage over other Americans. This suggests that the paradox depends on cultural factors, such as physical activity, eating habits and family support.

I conducted research for my PhD thesis in “Los Duplex,” one of the first Mexican immigrant neighbourhoods in the city of Athens, in Georgia. I wanted to know if traditional medical practices migrated with people from Mexico to the US.

The World Health Organisation (WHO) defines traditional medicine as “the sum total of the knowledge, skills, and practices based on the theories, beliefs, and experiences indigenous to different cultures, whether explicable or not, used in the maintenance of health as well as in the prevention, diagnosis, improvement or treatment of physical and mental illness”. Traditional medicine is recognised and promoted by the WHO as an important healthcare resource.

At the time of my study, about 75% of the 131 homes in Los Duplex were occupied by Mexican tenants. Most were recent immigrants who worked in a nearby poultry processing plant. As part of my research, I collected family health histories and self-assessments of health, which were generally positive. I also found that Mexicans in Los Duplex approached health and healing by drawing on both traditional and mainstream medicine.

Living well in Los Duplex

Mexican immigrants in Los Duplex cared for and supported each other physically, emotionally and financially. Strong social networks help migrants cross the border and find jobs in the US. They also help spread knowledge of medical resources and traditional practices, which together form a holistic system of healthcare.

The Mexican families in my study had a relatively easy time accessing the mainstream medical system of Athens. But mainstream medicine was seen as a last resort. Immigrants reported that traditional Mexican health practices can often prevent or resolve problems before they require medical attention. Such practices ranged from keeping calm, staying active and maintaining a positive attitude to consuming traditional foods and herbal remedies.

Most Mexican women routinely cooked meals for their families made with beans and corn tortillas (the traditional Mexican staple foods), as well as meat and vegetables. Food was prepared fresh with a variety of seasonings, like onion, garlic, mint, chillies, cumin, and oregano. These add micronutrients and antioxidants, as well as flavour. Although children consumed sweets and fizzy drinks, which could be bought in the neighbourhood, meals were usually served with homemade drinks made from fresh fruit.

Mexican migrants in Los Duplex also used a variety of medicinal plants, as well as other home remedies. They generally used these medicines for the health problems they experienced most frequently in Georgia: respiratory tract infections and digestive disorders. And there is some evidence for these traditional remedies. For example, the herb gordolobo (Gnaphalium spp.), which is used in Los Duplex for coughs and chest congestion, has been shown to have anti-inflammatory and antimicrobial properties that are useful in preventing and treating respiratory disease. And manzanilla (Matricaria recutita), which is used to relieve stomachache and menstrual cramps, has antimicrobial and antispasmodic properties.

Mexican immigrants use a variety of medicinal plants, but also eat well.
13Smile/Shutterstock

Mexican women described herbal remedies as inexpensive, natural and safe. But they were wary of most pharmaceuticals, even though they sometimes used them. Reasons for mistrusting pharmaceuticals related to their side effects. Drugs to counteract the side effects of other drugs were seen as especially problematic.

Too much, too little, just right

I asked people in Los Duplex why they thought Mexican immigrants were healthier than Americans. They attributed this to Americans’ overconsumption of fast foods, as well as the consumption of too many “pastillas” (pills). Most Americans do indeed rely heavily on pharmaceutical drugs, even for relatively minor conditions. Overmedication may in fact be undermining the health of many Americans.

Of course, under-medication is also a problem for uninsured Americans and other groups with limited access to medical care, including some Mexican migrants. For example, undocumented Mexican farm workers earn so little they can lack the means to see a doctor or pay for medicine. This is particularly problematic as these migrants often live in makeshift housing, with limited facilities for cooking or making herbal remedies. For these and other reasons (such as pesticide exposure), Mexican migrant farm workers have worse health than Americans.

My research on Mexican immigrants suggests that both too much and too little mainstream medicine is a potential threat to health. Because of their traditional medical knowledge, the Mexicans of Los Duplex were able to achieve the right balance between complementary and mainstream medicine. Their holistic approach to health and healing provides a valuable lesson for American citizens.

Anna Waldstein, Lecturer in Medical Anthropology, University of Kent

The state of US forests: Six questions answered


Thomas J. Straka, Clemson University

Editor’s note: The first Earth Day, on April 22, 1970, catalyzed a wave of laws to protect the environment and natural resources. Here Thomas Straka, a professor of forest economics and management and former industrial forester, answers questions about the current state of U.S. forests.

1. How forested is the United States? Here’s a surprising fact about our increasingly urbanized nation: About one-third of it is forested. Forests have an enormous impact on our water resources, economy, wildlife, recreational activities and cultural fabric. They also are major economic assets: The forest products industry manufactures more than US$200 billion worth of products yearly, and is one of the top 10 manufacturing employers in 47 states.

2. Who owns U.S. forests? About 58 percent of the nation’s forestland is privately owned, mostly by families and other individuals. The public owns the rest. About three-quarters of the public forestland is owned by the federal government, mostly in national forests, with the rest controlled by states, counties and local governments. Forests in the eastern United States are mostly private; in the West, they are mostly public.

National forests were created to protect our watersheds and timber supply. Much of the water that ends up in rivers, streams and lakes comes from forested watersheds that filter the water naturally as it flows through. Forests also help control soil erosion by slowing the rate at which water enters streams.

Environmental pressure has caused timber production to become less of a priority in national forests. Since 1960, national forests have been managed under a multiple-use policy, which calls for balancing timber yield with other values like wildlife, recreation, soil and water conservation, aesthetics, grazing and wilderness protection.


3. How are forests regulated? The U.S. Forest Service is the largest agency within the Department of Agriculture, with a $6 billion budget and 35,000 employees. It manages 193 million acres of national forests and grasslands – an area equivalent to that of the state of Texas – spread across 44 states and Puerto Rico.

Starting in the 1970s, laws such as the National Environmental Policy Act, the Clean Water Act and the Endangered Species Act created a new regulatory environment for forestry. As an example, listing northern spotted owls in the West and red-cockaded woodpeckers in the South as endangered species had major impacts on timber production because the law requires land managers to identify and protect “critical habitat” for listed species.

Government policies also affect private forests. The federal tax code provides for capital gains treatment for timber, allowing income from timber sales to be taxed at lower rates. Almost all states have a property tax classification program to encourage active forest management. Most of them value forestland based on its current use when they assess it, rather than its potential value if it were developed, as a way to keep trees on land.

4. Are any national forests still in their natural state? Yes. Congress can designate wilderness areas in national forests, national parks and other public lands. Road-building and other development are barred in these areas, but they are open for hiking and camping. There are 37 million acres of wilderness areas in national forests.

Wilderness designation offers a high level of environmental protection, but it also can cause resentment. Barring timber harvesting and mining and restricting recreational activities (for example, prohibiting off-road vehicles) can affect the economies of nearby communities. Debates about wilderness protection are part of a broader, long-standing controversy over federal control of land in the West.

5. What are the most serious stresses on U.S. forests? Climate change, insect infestations and decreased logging in national forests are making wildfires larger and more frequent. The Forest Service currently spends more than half of its budget on controlling wildfires.

Hotshot crews head out to fight the Happy Camp Complex fire in California’s Klamath National Forest in 2014. The fire was started by lightning strikes and burned 134,000 acres.
Kari Greer, US Forest Service/Flickr, CC BY

For family forest owners, parcelization and fragmentation are major issues. Forestlands are being broken into smaller and smaller tracts over time, which can impact forest management, wildlife populations and water quality. One in six family forest owners plans to sell or transfer his or her forestland in the next five years, and smaller forests are likely to result. Owners of smaller tracts are less likely to produce timber or actively manage their forest.

In my home state of South Carolina, the forest industry used to own 2.7 million acres of timberland. Today that figure is 170,000 acres, with most former industry land now controlled by timberland investment groups and real estate investment trusts. They manage it well, but they also tend to buy and sell it on a regular basis, and often chop off parcels that are better suited for development.

6. How may forests fare under the Trump administration? The new administration has a clear utilitarian focus, so I expect it to encourage land use and development. Wilderness areas are also likely to be contentious. If Congress amends major laws such as the Endangered Species Act, it could affect forest management.

Changes in the tax code or in cost-share programs that encourage reforestation and forest management could impact private forests, especially family forests. Investing in rural infrastructure, which was one of Trump’s campaign priorities, could benefit private forests.

Conservation groups are worried that Sonny Perdue, President Trump’s nominee for secretary of agriculture, may increase logging in national forests; he has also questioned mainstream climate science. As governor of Georgia, a major timber state, Perdue supported commercial timber harvesting. At USDA he will choose deputies to oversee the Forest Service.

President Trump’s first budget proposal calls for a 21 percent cut in USDA’s funding, including unspecified cuts to the national forest system, although it also pledges to maintain full funding for wildland firefighting.

Thomas J. Straka, Professor of Forestry and Environmental Conservation (Forest Resource Management and Economics), Clemson University

This article was originally published on The Conversation. Read the original article.

We need to get rid of carbon in the atmosphere, not just reduce emissions


Eelco Rohling, Australian National University

Getting climate change under control is a formidable, multifaceted challenge. Analysis by my colleagues and me suggests that staying within safe warming levels now requires removing carbon dioxide from the atmosphere, as well as reducing greenhouse gas emissions.

The technology to do this is in its infancy and will take years, even decades, to develop, but our analysis suggests that this must be a priority. If pushed, operational large-scale systems should be available by 2050.

We created a simple climate model and looked at the implications of different levels of carbon in the ocean and the atmosphere. This let us make projections about greenhouse warming, and see what we need to do to limit global warming to within 1.5℃ of pre-industrial temperatures – one of the ambitions of the 2015 Paris climate agreement.
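To give a feel for this kind of calculation, here is a minimal back-of-the-envelope sketch – ours, not the model from the paper – linking cumulative carbon emissions to warming via the transient climate response to cumulative emissions (TCRE). The TCRE value is an assumption, taken near the middle of the IPCC’s likely range, and the emission figures are those quoted below.

```python
# Toy warming estimate from cumulative carbon emissions (a sketch, not the
# authors' model). TCRE assumed at 1.65 degC per 1,000 GtC, near the middle
# of the IPCC's likely range (~0.8-2.5 degC per 1,000 GtC). This captures
# CO2-driven warming only; other greenhouse gases add further warming.

CO2_TO_C = 12.0 / 44.0   # molar-mass ratio: tonnes of carbon per tonne of CO2
TCRE = 1.65              # assumed warming (degC) per 1,000 GtC emitted

def warming(cumulative_gtc: float) -> float:
    """Approximate CO2-driven warming (degC) for cumulative emissions in GtC."""
    return TCRE * cumulative_gtc / 1000.0

emitted_c = 1540.0 * CO2_TO_C   # ~420 GtC emitted since the industrial revolution
print(f"Cumulative emissions to date: {emitted_c:.0f} GtC")
print(f"CO2-driven warming so far:    {warming(emitted_c):.2f} degC")

# Stable emissions of ~10 GtC per year from 2020 to 2100 would add ~800 GtC,
# pushing CO2-driven warming alone well past the 1.5 degC Paris target:
print(f"With 800 GtC more:            {warming(emitted_c + 800):.2f} degC")
```

On these rough numbers, continuing at current emission rates overshoots the 1.5℃ target well before 2100 – which is why the scenarios below turn on how much carbon must be removed again.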

To put the problem in perspective, here are some of the key numbers.

Humans have emitted 1,540 billion tonnes of carbon dioxide gas since the industrial revolution. To put it another way, that’s equivalent to burning enough coal to form a square tower 22 metres wide that reaches from Earth to the Moon.

Half of these emissions have remained in the atmosphere, causing CO₂ levels to rise at least 10 times faster than during any known natural increase in Earth’s long history. Most of the other half has dissolved into the ocean, causing acidification with its own detrimental impacts.

Although nature does remove CO₂, for example through growth and burial of plants and algae, we emit it at least 100 times faster than it’s eliminated. We can’t rely on natural mechanisms to handle this problem: people will need to help as well.

What’s the goal?

The Paris climate agreement aims to limit global warming to well below 2℃, and ideally no higher than 1.5℃. (Others say that 1℃ is what we should be really aiming for, although the world is already reaching and breaching this milestone.)

In our research, we considered 1℃ a better safe warming limit because any more would take us into the territory of the Eemian period, 125,000 years ago. For natural reasons, during this era the Earth warmed by a little more than 1℃. Looking back, we can see the catastrophic consequences of global temperatures staying this high over an extended period.

Sea levels during the Eemian period were up to 10 metres higher than present levels. Today, the zone within 10m of sea level is home to 10% of the world’s population, and even a 2m sea-level rise today would displace almost 200 million people.

Clearly, pushing towards an Eemian-like climate is not safe. In fact, with 2016 having been 1.2℃ warmer than the pre-industrial average, and extra warming locked in thanks to heat storage in the oceans, we may already have crossed the 1℃ average threshold. To keep warming below the 1.5℃ goal of the Paris agreement, it’s vital that we remove CO₂ from the atmosphere as well as limiting the amount we put in.

So how much CO₂ do we need to remove to prevent global disaster?

Are you a pessimist or an optimist?

Currently, humanity’s net emissions amount to roughly 37 gigatonnes of CO₂ per year, which represents 10 gigatonnes of carbon burned (a gigatonne is a billion tonnes). We need to reduce this drastically. But even with strong emissions reductions, enough carbon will remain in the atmosphere to cause unsafe warming.

Using these facts, we identified two rough scenarios for the future.

The first scenario is pessimistic. It has CO₂ emissions remaining stable after 2020. To keep warming within safe limits, we then need to remove almost 700 gigatonnes of carbon from the atmosphere and ocean, which freely exchange CO₂. To start, reforestation and improved land use can lock up to 100 gigatonnes away into trees and soils. This leaves a further 600 gigatonnes to be extracted via technological means by 2100.

Technological extraction currently costs at least US$150 per tonne. At this price, over the rest of the century, the cost would add up to US$90 trillion. This is similar in scale to current global military spending, which – if it holds steady at around US$1.6 trillion a year – will add up to roughly US$132 trillion over the same period.

The second scenario is optimistic. It assumes that we reduce emissions by 6% each year starting in 2020. We then still need to remove about 150 gigatonnes of carbon.

As before, reforestation and improved land use can account for 100 gigatonnes, leaving 50 gigatonnes to be technologically extracted by 2100. The cost for that would be US$7.5 trillion by 2100 – only 6% of the global military spend.
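As a quick sanity check – an illustrative calculation of ours, using only figures quoted in the article – the costs of the two scenarios follow directly from the US$150-per-tonne extraction price:

```python
# Check of the two scenarios' extraction costs and the military-spending
# comparison, using the article's own figures.

COST_PER_TONNE = 150.0   # US$ per tonne of carbon extracted

def cost_trillions(gigatonnes: float) -> float:
    """Extraction cost, in trillions of US$, for a given mass in GtC."""
    return gigatonnes * 1e9 * COST_PER_TONNE / 1e12

print(f"Pessimistic scenario (600 GtC): US${cost_trillions(600):.0f} trillion")  # 90
print(f"Optimistic scenario  ( 50 GtC): US${cost_trillions(50):.1f} trillion")   # 7.5

# Global military spending of ~US$1.6 trillion/yr over 2017-2100 (~83 years),
# matching the article's "roughly US$132 trillion":
military_trillions = 1.6 * 83
print(f"Military spending to 2100:      US${military_trillions:.0f} trillion")
print(f"Optimistic cost as a share:     {cost_trillions(50) / military_trillions:.0%}")  # ~6%
```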

Of course, these numbers are a rough guide. But they do illustrate the crossroads at which we find ourselves.

The job to be done

Right now is the time to choose: without action, we’ll be locked into the pessimistic scenario within a decade. Nothing can justify burdening future generations with this enormous cost.

For success in either scenario, we need to do more than develop new technology. We also need new international legal, policy, and ethical frameworks to deal with its widespread use, including the inevitable environmental impacts.

Releasing large amounts of iron or mineral dust into the oceans could remove CO₂ by changing environmental chemistry and ecology. But doing so requires revision of international legal structures that currently forbid such activities.

Similarly, certain minerals can help remove CO₂ by increasing the weathering of rocks and enriching soils. But large-scale mining for such minerals will impact on landscapes and communities, which also requires legal and regulatory revisions.

And finally, direct CO₂ capture from the air relies on industrial-scale installations, with their own environmental and social repercussions.

Without new legal, policy, and ethical frameworks, no significant advances will be possible, no matter how great the technological developments. Progressive nations may forge ahead toward delivering the combined package.

The costs of this are high. But countries that take the lead stand to gain technology, jobs, energy independence, better health, and international gravitas.

Eelco Rohling, Professor of Ocean and Climate Change, Australian National University

This article was originally published on The Conversation. Read the original article.

How English-style drizzle killed the Ice Age’s giants


Alan Cooper, University of Adelaide; Matthew Wooller, University of Alaska Fairbanks, and Tim Rabanus-Wallace, University of Adelaide

Wet weather at the end of the last ice age appears to have helped drive the ecosystems of large grazing animals, such as mammoths and giant sloths, to extinction across vast swathes of Eurasia and the Americas, according to our new research.

The study, published in Nature Ecology and Evolution today, shows that landscapes in many regions became suddenly wetter between 11,000 and 15,000 years ago, turning grasslands into peat bogs and forest, and ushering in the demise of many megafaunal species.

By examining the bone chemistry of megafauna fossils from Eurasia, North America and South America over the time leading up to the extinction, we found that all three continents experienced the same dramatic increase in moisture. This would have rapidly altered the grassland ecosystems that once covered a third of the globe.

The period after the world thawed from the most recent ice age is already very well studied, thanks largely to the tonnes of animal bones preserved in permafrost. The period is a goldmine for researchers – literally, given that many fossils were first found during gold prospecting operations.

Our work at the Australian Centre for Ancient DNA usually concerns genetic material from long-dead organisms. As a result, we have accrued a vast collection of bones from around the world during this period.

But we made our latest discovery by shifting our attention away from DNA and towards the nitrogen atoms preserved in the fossils’ bone collagen.

Lead author Tim Rabanus-Wallace hunts for megafaunal fossils in the Canadian permafrost in 2015.
Julien Soubrier

Chemical signatures

Nitrogen has two stable isotopes (atoms with the same number of protons but differing number of neutrons), called nitrogen-14 and nitrogen-15. Changes in environmental conditions can alter the ratio of these two isotopes in the soil. That, in turn, is reflected in the tissues of growing plants, and ultimately in the bones of the animals that eat those plants. In arid conditions, processes like evaporation preferentially remove the lighter nitrogen-14 from the soil. This contributes to a useful correlation seen in many grassland mammals: less nitrogen-14 in the bones means more moisture in the environment.
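Although the article does not spell out the notation, such bone-collagen measurements are conventionally reported in the standard delta form, as the sample’s ¹⁵N/¹⁴N ratio relative to that of atmospheric nitrogen:

$$\delta^{15}\mathrm{N} = \left(\frac{\left(^{15}\mathrm{N}/^{14}\mathrm{N}\right)_{\mathrm{sample}}}{\left(^{15}\mathrm{N}/^{14}\mathrm{N}\right)_{\mathrm{standard}}} - 1\right) \times 1000$$

expressed in parts per thousand (‰). On this scale, a relative surplus of the lighter nitrogen-14 shows up as a lower δ¹⁵N value – which, in grassland mammals, signals a wetter environment.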

We studied 511 accurately dated bones, from species including bison, horses and llamas, and found that a pronounced spike in moisture occurred between 11,000 and 15,000 years ago, affecting grasslands in Europe, Siberia, North America, and South America.

Alan Cooper inspects ice age bones from the Yukon Palaeontology Program’s collection, Canada, 2015.
Julien Soubrier

At the time of this moisture spike, dramatic changes were occurring on the landscapes. Giant, continent-sized ice sheets were collapsing and retreating, leaving lakes and rivers in their wake. Sea levels were rising, and altered wind and water currents were bringing rains to once-dry continental interiors.

The study shows that a peak in moisture occurred between the time of the ice sheets melting, and the invasion of new vegetation types such as peatlands (data shown from Canada and northern United States).
http://nature.com/articles/doi:10.1038/s41559-017-0125

As a result, forests and peatlands were forming where grass, which specialises in dry environments, once dominated. Grasses are also specially adapted to tolerate grazing – in fact, they depend upon grazers to distribute nutrients and clear dead litter from the ground each season. Forest plants, on the other hand, produce toxic compounds specifically to deter herbivores. For decades, researchers have discussed the idea that the invading forests drove the grassland communities into collapse.

Our new study provides the crime scene’s smoking gun. Not only was moisture affecting the grassland mammals during the forest invasion and the subsequent extinctions, but this was happening right around the globe.

Extinction rethink

This discovery prompts a rethink on some of the key mysteries in the extinction event, such as the curious case of Africa. Many of Africa’s megafauna — elephants, wildebeest, hippopotamus, and so on — escaped the extinction events, and unlike their counterparts on other continents have survived to this day.

It has been argued that this is because African megafauna evolved alongside humans, and were naturally wary of human hunters. However, this argument cannot explain the pronounced phase of extinctions in Europe. Neanderthals had existed there for at least 200,000 years, while anatomically modern humans arrived around 43,000 years ago.

We suggest instead that the moisture-driven extinction hypothesis provides a much better explanation. Africa’s position astride the Equator means that its central forested monsoon belt has always been surrounded by continuous stretches of grassland, which graded into the deserts of the north and south. It was the persistence of these grasslands that allowed the local megafauna to survive relatively intact.

Our study may also offer insights into how current climate change might affect today’s ecosystems.

Understanding how climate changes affected ecosystems in the past is imperative to making informed predictions about how climate changes may influence ecosystems in the future. The consequences of human-induced global warming are often depicted using images of droughts and famines. But our discovery is a reminder that all rapid environmental changes — wet as well as dry — can cause dramatic changes in biological communities and ecosystems.

In this case, warming expressed itself not through parched drought but through centuries of persistent English drizzle, with rain, slush and grey skies. It seems like a rather unpleasant way to go.

Alan Cooper, Director, Australian Centre for Ancient DNA, University of Adelaide; Matthew Wooller, Professor, University of Alaska Fairbanks, and Tim Rabanus-Wallace, PhD candidate, University of Adelaide

This article was originally published on The Conversation. Read the original article.

Make our soil great again


David R. Montgomery, University of Washington

Most of us don’t think much about soil, let alone its health. But as Earth Day approaches, it’s time to recommend some skin care for Mother Nature. Restoring soil fertility is one of humanity’s best options for making progress on three daunting challenges: Feeding everyone, weathering climate change and conserving biodiversity.

Widespread mechanization and adoption of chemical fertilizers and pesticides revolutionized agriculture. But it took a hidden toll on the soil. Farmers around the world have already degraded and abandoned one-third of the world’s cropland. In the United States, our soils have already lost about half of the organic matter content that helped make them fertile.

What is at stake if we don’t reverse this trend? Impoverished trouble spots like Syria, Libya and Iraq are among the societies living with a legacy of degraded soil. And if the world keeps losing productive farmland, it will only make it harder to feed a growing global population.

But it is possible to restore soil fertility, as I learned traveling the world to meet farmers who had adopted regenerative practices on large commercial and small subsistence farms while researching my new book, Growing A Revolution: Bringing Our Soil Back to Life. From Pennsylvania to the Dakotas and from Africa to Latin America, I saw compelling evidence of how a new way of farming can restore health to the soil, and do so remarkably fast.

Workshop on cover crops, weed management and no-till practices at the Stark Ranch in Gainesville, Texas.
Noble Foundation /Flickr, CC BY-NC-ND

These farmers adopted practices that cultivate beneficial soil life. They stopped plowing and minimized ground disturbance. They planted cover crops, especially legumes, as well as commercial crops. And they didn’t just plant the same thing over and over again. Instead they planted a greater diversity of crops in more complex rotations. Combining these techniques cultivates a diversity of beneficial microbial and soil life that enhances nutrient cycling, increases soil organic matter, and improves soil structure and thereby reduces erosive runoff.

Farmers who implemented all three techniques began regenerating fertile soil and after several years ended up with more money in their pocket. Crop yields and soil organic matter increased while their fuel, fertilizer, and pesticide use fell. Their fields consistently had more pollinators — butterflies and bees — than neighboring conventional farms. Using less insecticide and retaining native plants around their fields translated into more predatory species that managed insect pests.

Innovative ranchers likewise showed me methods that left their soil better off. Cows on their farms grazed the way buffalo once did, concentrating in a small area for a short period followed by a long recovery time. This pattern stimulates plants to push sugary substances out of their roots. And this feeds soil life that in return provides the plants with things like growth-promoting hormones and mineral nutrients. Letting cows graze also builds soil organic matter by dispersing manure across the land, rather than concentrating it in feedlot sewage lagoons.



Soil organic matter is the foundation of the soil food web, and the consensus among scientists I talked with was that soil organic matter is the single best indicator of soil health. How much carbon could the world’s farmers and ranchers park underground through soil building practices that incorporate plant residue and stimulate microbial activity? Estimates vary widely, but farmers I visited had more than doubled the carbon content of their soil over a decade or two. If farmers around the world did this, it could help partially offset fossil fuel emissions for decades to come.

Soil restoration will not solve world hunger, stop climate change, or prevent further loss of biodiversity. No single thing can solve these problems. But the innovative farmers I met showed me that adopting the full suite of conservation agriculture practices can provide a better livelihood and significant environmental benefits on conventional and organic farms alike.

Restoring fertility to degraded agricultural soils is one of humanity’s most pressing and under-recognized natural infrastructure projects, and would pay dividends for generations to come. It’s time for a moonshot-like effort to restore the root of all prosperous civilizations: Our soil, the skin of the Earth.

David R. Montgomery, Professor of Earth and Space Sciences, University of Washington

This article was originally published on The Conversation. Read the original article.

Fishing for DNA: Free-floating eDNA identifies presence and abundance of ocean life


Mark Stoeckle, The Rockefeller University

Ocean life is largely hidden from view. Monitoring what lives where is costly – typically requiring big boats, big nets, skilled personnel and plenty of time. An emerging technology using what’s called environmental DNA gets around some of those limitations, providing a quick, affordable way to figure out what’s present beneath the water’s surface.

Fish and other animals shed DNA into the water, in the form of cells, secretions or excreta. About 10 years ago, researchers in Europe first demonstrated that small volumes of pond water contained enough free-floating DNA to detect resident animals.

Researchers have subsequently looked for aquatic eDNA in multiple freshwater systems, and more recently in vastly larger and more complex marine environments. While the principle of aquatic eDNA is well-established, we’re just beginning to explore its potential for detecting fish and their abundance in particular marine settings. The technology promises many practical and scientific applications, from helping set sustainable fish quotas and evaluating protections for endangered species to assessing the impacts of offshore wind farms.

Who’s in the Hudson, when?

In our new study, my colleagues and I tested how well aquatic eDNA could detect fish in the Hudson River estuary surrounding New York City. Although this is the most heavily urbanized estuary in North America, its water quality has improved dramatically over the past decades, and the estuary has partly recovered its role as essential habitat for many fish species. The improved health of local waters is highlighted by the now regular fall appearance of humpback whales feeding on large schools of Atlantic menhaden at the borders of New York harbor, within sight of the Empire State Building.

Preparing to hurl the collecting bucket into the river.
Mark Stoeckle, CC BY-ND

Our study is the first to record the spring migration of ocean fish by conducting DNA tests on water samples. We collected one-liter (about a quart) water samples weekly at two city sites from January to July 2016. Because the Manhattan shoreline is armored and elevated, we tossed a bucket on a rope into the water. Wintertime samples had little or no fish eDNA. Beginning in April there was a steady increase in fish detected, reaching about 10 to 15 species per sample by early summer. The eDNA findings largely matched our existing knowledge of fish movements, hard won from decades of traditional seining surveys.

Our results demonstrate the “Goldilocks” quality of aquatic eDNA – it seems to last just the right amount of time to be useful. If it disappeared too quickly, we wouldn’t be able to detect it. If it lasted for too long, we wouldn’t detect seasonal differences and would likely find DNAs of many freshwater and open ocean species as well as those of local estuary fish. Research suggests DNA decays over hours to days, depending on temperature, currents and so on.
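A simple way to picture this persistence window – our illustration, not a model from the study – is first-order decay with rate constant $k$:

$$C(t) = C_0\,e^{-kt}, \qquad t_{1/2} = \frac{\ln 2}{k}$$

With a half-life $t_{1/2}$ of one day, for instance, less than 1% of a DNA signal would remain after a week: long enough to register which fish are present now, but short enough that one season does not blur into the next.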

Fish identified via eDNA in one day’s sample from New York City’s East River.
New York State Department of Environmental Conservation: alewife (herring species), striped bass, American eel, mummichog; Massachusetts Department of Fish and Game: black sea bass, bluefish, Atlantic silverside; New Jersey Scuba Diving Association: oyster toadfish; Diane Rome Peeples: Atlantic menhaden, Tautog, Bay anchovy; H. Gervais: conger eel, CC BY-ND

Altogether, we obtained eDNAs matching 42 local marine fish species, including most (80 percent) of the locally abundant or common species. Among the species we detected, abundant or common ones turned up more frequently than locally uncommon ones. That the eDNA detections matched traditional observations of locally common fish in terms of abundance is good news for the method – it supports eDNA as an index of fish numbers. We expect we’ll eventually be able to detect all local species – by collecting larger volumes, at additional sites in the estuary and at different depths.

In addition to local marine species, we also found locally rare or absent species in a few samples. Most were fish we eat – Nile tilapia, Atlantic salmon, European sea bass (“branzino”). We speculate these came from wastewater – even though the Hudson is cleaner, sewage contamination persists. If that is how the DNA got into the estuary in this case, then it might be possible to determine if a community is consuming protected species by testing its wastewater. The remaining exotics we found were freshwater species, surprisingly few given the large, daily freshwater inflows into the saltwater estuary from the Hudson watershed.

Filtering the estuary water back in the lab.
Mark Stoeckle, CC BY-ND

Analyzing the naked DNA

Our protocol uses methods and equipment standard in a molecular biology laboratory, and follows the same procedures used to analyze human microbiomes, for example.

eDNA and other debris left on the filter after the estuary water passed through.
Mark Stoeckle, CC BY-ND

After collection, we run water samples through a small-pore-size (0.45-micron) filter that traps suspended material, including cells and cell fragments. We extract DNA from the filter, and amplify it using the polymerase chain reaction (PCR). PCR is like “xeroxing” a particular DNA sequence, producing enough copies so that it can easily be analyzed.

We targeted mitochondrial DNA – the genetic material within the mitochondria, the organelle that generates the cell’s energy. Mitochondrial DNA is present in much higher concentrations than nuclear DNA, and so easier to detect. It also has regions that are the same in all vertebrates, which makes it easier for us to amplify multiple species.

We tagged each amplified sample, pooled the samples and sent them for next-generation sequencing. Rockefeller University scientist and co-author Zachary Charlop-Powers created the bioinformatic pipeline that assesses sequence quality and generates a list of the unique sequences and “read numbers” in each sample. That’s how many times we detected each unique sequence.
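To give a concrete feel for the dereplication step, here is a minimal sketch of how unique sequences and their read numbers could be tallied from a FASTA file of reads. It is not the study’s pipeline – the real one also performs quality assessment – and the filename is a placeholder.

```python
# Minimal sketch of eDNA read dereplication: collapse identical reads and
# count how many times each unique sequence occurs (its "read number").
from collections import Counter

def read_fasta(path):
    """Yield one sequence string per '>' record in a FASTA file."""
    with open(path) as handle:
        parts = []
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                if parts:
                    yield "".join(parts)
                    parts = []
            else:
                parts.append(line.upper())
        if parts:
            yield "".join(parts)

counts = Counter(read_fasta("reads.fasta"))  # "reads.fasta" is a placeholder
for sequence, read_number in counts.most_common(10):
    print(f"{read_number:6d}  {sequence[:60]}")
```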

To identify species, each unique sequence is compared to those in the public database GenBank. Our results are consistent with read number being proportional to fish numbers, but more work is needed on the precise relationship of eDNA and fish abundance. For example, some fish may shed more DNA than others. The effects of fish mortality, water temperature, eggs and larval fish versus adult forms could also be at play.
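The comparison step can be illustrated with Biopython’s interface to NCBI’s online BLAST service – again a sketch of the general approach rather than the study’s actual code; the query sequence is a made-up placeholder, and real runs should batch queries and respect NCBI usage limits.

```python
# Sketch: assign a species to one dereplicated eDNA sequence by BLASTing it
# against GenBank's nucleotide ("nt") database via NCBI's web service.
from Bio.Blast import NCBIWWW, NCBIXML

query = "CACCGCGGTTATACGAGAGGCCCAAGTTGATAGACAACGGCGTAAAGAGTGGTTA"  # placeholder

handle = NCBIWWW.qblast("blastn", "nt", query)  # submit and wait for the result
record = NCBIXML.read(handle)                   # parse the XML report

# Print the best few matches; the top hit's title names the likely species.
for alignment in record.alignments[:3]:
    hsp = alignment.hsps[0]
    identity = 100.0 * hsp.identities / hsp.align_length
    print(f"{identity:5.1f}%  {alignment.title}")
```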

Just like in television crime shows, eDNA identification relies on a comprehensive and accurate database. In a pilot study, we identified local species that were missing from the GenBank database, or had incomplete or mismatched sequences. To improve identifications, we sequenced 31 specimens representing 18 species from scientific collections at Monmouth University, and from bait stores and fish markets. This work was largely done by student researcher and co-author Lyubov Soboleva, a senior at John Bowne High School in New York City. We deposited these new sequences in GenBank, boosting the database’s coverage to about 80 percent of our local species.

Study’s collection sites in Manhattan.
Mark Stoeckle, CC BY-ND

We focused on fish and other vertebrates. Other research groups have applied an aquatic eDNA approach to invertebrates. In principle, the technique could assess the diversity of all animal, plant and microbial life in a particular habitat. In addition to detecting aquatic animals, eDNA reflects terrestrial animals in nearby watersheds. In our study, the commonest wild animal detected in New York City waters was the brown rat, a common urban denizen.

Future studies might employ autonomous vehicles to routinely sample remote and deep sites, helping us to better understand and manage the diversity of ocean life.

Mark Stoeckle, Senior Research Associate in the Program for the Human Environment, The Rockefeller University

This article was originally published on The Conversation. Read the original article.

The world’s five deadliest volcanoes … and why they’re so dangerous


Matthew Blackett, Coventry University

An eruption of Mount Etna recently caught out some BBC journalists who were filming there. The footage was extraordinary and highlighted the hazards volcanoes pose to humans and society.

Since 1600, 278,880 people have been killed by volcanic activity, with many of these deaths attributed to secondary hazards associated with the main eruption. Starvation killed 92,000 following the 1815 Tambora eruption in Indonesia, for example, and a volcanic tsunami killed 36,000 following the 1883 Krakatoa eruption.

Since the 1980s, deaths related to volcanic eruptions have been rather limited, but this is not entirely a result of increased preparedness or investment in hazard management – it is significantly a matter of chance.

Research shows that volcanic activity has seen no let-up since the turn of the 21st century – it just hasn’t been around population centres. Indeed, a number of volcanoes remain poised to blow, posing a major threat to life and livelihood.

Vesuvius, Italy

Known for its 79AD eruption, which destroyed the towns of Pompeii and Herculaneum, Vesuvius is still a significant hazard given that it overshadows the city of Naples and its surrounds, which are home to over 3m people.

It is also known for a particularly intense form of eruption. Plinian eruptions (named after Pliny the Younger, who was the first to describe the 79AD event) are characterised by the ejection of a vast column of gas and ash which extends into the stratosphere, far higher than commercial airliners fly.

Mount Vesuvius looms over Naples.
Shutterstock

Were such an eruption to occur at Vesuvius today, it is likely that much of the population would already have been evacuated, as a precursory swarm of earthquakes would likely herald the imminent eruption. But those who remained would initially be showered with huge pumice rocks too large to be kept aloft by the column of gas.

Then, as the volcano began to run out of energy, the column itself would collapse, causing smaller particles of rock (from fine ash to small boulders) to fall from the sky and back to Earth at high velocity. Asphyxiating clouds of gas and pulverised rock – pyroclastic density currents – would then flood down the slopes of the volcano, annihilating anything in their path. Such gas-ash features have been known to travel tens of kilometres and at terrifying speeds, potentially turning modern Naples into a new Pompeii.

Nyiragongo, Democratic Republic of Congo

This central African volcano has erupted several times over the last few decades and while its eruptions aren’t particularly explosive, it produces a particularly runny – and dangerous – form of lava. Once effused, this lava can rapidly move down the flanks of the volcano and inundate areas with little or no warning.

The fiery heart of Nyiragongo.
Shutterstock

In 2002, the lava lake at the volcano’s summit was breached, resulting in streams of lava hurtling towards the nearby city of Goma at 60km/h, engulfing parts of it to a depth of two metres.

Fortunately, warnings had been issued as the volcano’s unrest has made it the focus of intense research – and over 300,000 people were evacuated in time. Should such an event occur again, we have to hope that the authorities are equally prepared, but this is a politically unstable area and it remains seriously vulnerable.

Popocatepetl, Mexico

“Popo”, as the locals call it, is just 70km south-west of one of the largest cities in the world: Mexico City, home to 20m people. Popo is regularly active, and its most recent bout of activity, in 2016, sent a plume of ash to an altitude of five kilometres.

In recent times, and indeed throughout much of its history, eruptive events at Popo have consisted of similarly isolated ash plumes. But these plumes coat the mountain in a thick blanket of ash which, when mixed with water, can form a dense muddy mixture which has the potential to flow for many kilometres and at relatively high speeds.

Letting off steam: Popocatepetl.
Shutterstock

Such phenomena, known as “lahars”, can be extremely deadly, as exemplified by the Nevado del Ruiz disaster of 1985 when around 26,000 people were killed in the town of Armero, Colombia, by a lahar with a volcanic source that was 60km away.

The Nevado del Ruiz tragedy was the direct result of volcanic activity melting ice at the volcano’s summit, but a large volume of rainfall or snowmelt could feasibly generate a similar lahar on Popo. This could flow down-slope towards nearby settlements with little or no warning.

Krakatoa, Indonesia

Also known as Krakatau, Krakatoa is infamous: 36,000 people were killed by the tsunami triggered by its 1883 eruption, which released more energy than 13,000 Hiroshima atomic bombs. The eruption destroyed the volcanic island completely, but within 50 years a new island had appeared in its place.
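
As a rough check on that energy comparison (the figures below are commonly cited estimates, not numbers given by the author): the 1883 eruption is often put at around 200 megatons of TNT equivalent, while the Hiroshima bomb released roughly 15 kilotons, so

\[
\frac{200\ \text{Mt}}{15\ \text{kt}} = \frac{200{,}000\ \text{kt}}{15\ \text{kt}} \approx 13{,}000 .
\]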

Anak Krakatau erupts in 2011. Shutterstock

The new island is named Anak Krakatau (Child of Krakatoa) and since the 1920s it has been growing in episodic phases, reaching a height of about 300 metres today. Significant new activity commenced in 2007, and further episodes have been noted at the volcano since then, most recently in March 2017.

No one knows for sure whether the spectacular growth of Anak Krakatau means it may one day repeat the catastrophe its “parent” unleashed, but its location between Indonesia’s two most populated islands, Java and Sumatra, means it poses a grave threat to life.

Changbaishan, China

Few have heard of this volcano in a remote part of Asia – and its last eruption was in 1903. However, its history tells a rather scarier story. In around 969AD, the volcano produced one of the largest eruptions of the past 10,000 years, releasing three times more material than Krakatoa did in 1883.

One of the chief hazards is posed by the massive crater lake at its peak (with a volume of about nine cubic kilometres). If breached, this lake could generate lahars that would pose a significant threat to the 100,000 people that live in the vicinity.

Changbaishan: looks peaceful, but … Shutterstock

In the early 2000s, scientists began studying the hitherto under-monitored volcano and determined that its activity was increasing, that its magma chamber’s period of dormancy was coming to an end, and that it could pose a hazard in the coming decades.

Further complicating things is the fact that Changbaishan straddles the border of China and North Korea. Given such a geo-politically sensitive location, the effects of any volcanic activity here would likely be very hard to manage.

Matthew Blackett, Senior Lecturer in Physical Geography and Natural Hazards, Coventry University

This article was originally published on The Conversation. Read the original article.

After 25 years of trying, why aren’t we environmentally sustainable yet?


Michael Howes, Griffith University

In 1992, more than 170 countries came together at the Rio Earth Summit and agreed to pursue sustainable development, protect biological diversity, prevent dangerous interference with climate systems, and conserve forests. But, 25 years later, the natural systems on which humanity relies continue to be degraded.

So why hasn’t the world become much more environmentally sustainable despite decades of international agreements, national policies, state laws and local plans? This is the question that a team of researchers and I have tried to answer in a recent article.

We reviewed 94 studies of how sustainability policies had failed across every continent. These included case studies from both developed and developing countries, and ranged in scope from international to local initiatives.

Consider the following key environmental indicators. Since 1970:

  • Humanity’s ecological footprint has exceeded the Earth’s capacity and has risen to the point where 1.6 planets would be needed to provide resources sustainably.

  • The biodiversity index has fallen by more than 50% as the populations of other species continue to decline.

  • Greenhouse gas emissions that drive climate change have almost doubled while the impacts of climate change are becoming increasingly apparent.

  • The world has lost more than 48% of tropical and sub-tropical forests.

The rate at which these indicators deteriorated was largely unchanged over the two decades either side of the Rio summit. Furthermore, humanity is fast approaching several environmental tipping points. If crossed, these could lead to irreversible changes.

If we allow average global temperatures to rise 2℃ above pre-industrial levels, for example, feedback mechanisms will kick in that lead to runaway climate change. We’re already halfway to this limit (warming to date is roughly 1℃) and could pass it in the next few decades.

What’s going wrong?

So what’s going wrong with sustainability initiatives? We found that three types of failure kept recurring: economic, political and communication.

The economic failures stem from the basic problem that environmentally damaging activities are financially rewarded. A forest is usually worth more money after it’s cut down – which is a particular problem for countries transitioning to a market-based economy.

Political failures happen when governments can’t or won’t implement effective policies. This is often because large extractive industries, like mining, are dominant players in an economy and see themselves as having the most to lose. This occurs in developed and developing countries, but the latter can face extra difficulties enforcing policies once they’re put in place.

Communication failures centre on poor consultation or community involvement in the policy process. Opposition then flourishes, sometimes based on a misunderstanding of the severity of the issue. It can also be fed by mistrust when communities see their concerns being overlooked.

Again, this happens around the world. A good example would be community resistance to changing water allocation systems in rural areas of Australia. In this situation, farmers were so opposed to the government buying back some of their water permits that copies of the policy were burned in the street.

These types of failure are mutually reinforcing. Poor communication of the benefits of sustainable development creates the belief that it always costs jobs and money. Businesses and communities then pressure politicians to avoid or water down environmentally friendly legislation.

Ultimately, this represents a failure to convince people that sustainable development can supply “win-win” scenarios. As a result, decision-makers are stuck in the jobs-versus-environment mindset.

What can we do?

The point of our paper was to discover why policies that promote sustainability have failed in order to improve future efforts. The challenge is immense and there’s a great deal at stake. Based on my previous research into the way economic, social and environmental goals can co-exist, I would go beyond our most recent paper to make the following proposals.

First, governments need to provide financial incentives to switch to eco-efficient production. Politicians need to have the courage to go well beyond current standards. Well-targeted interventions can create both carrot and stick, rewarding eco-friendly behaviour and imposing a cost on unsustainable activities.

Second, governments need to provide a viable transition pathway for industries that are doing the most damage. New environmental tax breaks and grants, for example, could allow businesses to remain profitable while changing their business model.

Finally, leaders from all sectors need to be convinced of both the seriousness of the declining state of the environment and that sustainable development is possible. Promoting positive case studies of successful green businesses would be a start.

There will of course be resistance to these changes. The policy battles will be hard fought, particularly in the current international political climate. We live in a world where the US president is rolling back climate policies while the Australian prime minister attacks renewable energy.

Michael Howes, Associate Professor in Environmental Studies, Griffith University

This article was originally published on The Conversation. Read the original article.