Category Archives: Environment

Don’t panic: the northern lights won’t be turning off anytime soon


Nathan Case, Lancaster University

The northern lights are nature’s very own magnificent light show. They are the mesmerising end result of electrically charged particles from the sun colliding with the Earth’s upper atmosphere. Though more frequently witnessed from the polar regions, the UK and other places on similar latitudes are lucky enough for the aurora borealis to occasionally grace their night sky.

But recent reports now claim the phenomenon may no longer be visible from places such as the UK – instead confined to the North Pole. But is this correct?

The northern lights are driven by activity on the sun and the sun’s activity waxes and wanes over an 11-year period known as a solar cycle. The number of large-scale aurora events, the type that is visible from places such as the UK, tends to follow this cycle. But each solar cycle is different, with maximum and minimum activity varying from cycle to cycle.

Predicting solar activity

Current predictions suggest that we are headed for a period of particularly weak solar cycles, where the solar maximum of each cycle will not result in much solar activity. We call this a grand solar minimum.

The number of sunspots observed on the sun.
Global Warming Art/Wikipedia

Grand solar minimums can last for several decades or even centuries and have occurred throughout history. Although solar output does decline during these periods, it doesn’t mean that we are heading for a new ice age.

A study recently published in Nature has modelled perhaps the most well-known grand solar minimum, called the “Maunder minimum”. This particular grand solar minimum started in 1645 and finally ended 70 years later. During this time only 50 sunspots, structures on the sun that act as a measure of its activity, were observed. This is compared to the 40,000-50,000 that we would expect during a period of “normal” activity lasting that long.
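To put those numbers in perspective, a rough back-of-the-envelope comparison (using only the totals quoted above) shows just how quiet the sun was:

```python
# Rough comparison of Maunder minimum sunspot counts with a "normal"
# 70-year span, using only the totals quoted above.
observed_total = 50                      # sunspots recorded 1645-1715
expected_total = (40_000 + 50_000) / 2   # midpoint of the expected range
years = 70

print(f"Observed: ~{observed_total / years:.1f} sunspots per year")
print(f"Expected: ~{expected_total / years:.0f} sunspots per year")
print(f"Activity was roughly {100 * observed_total / expected_total:.1f}% of normal")
```

In other words, sunspot activity during the Maunder minimum was roughly a thousandth of what a typical 70-year stretch would produce.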

Sunspots (black) visible on the sun.
NASA/SDO/AIA/HMI/Goddard Space Flight Center

The authors of the study found that during the Maunder minimum, the solar wind, which drives the aurora, dramatically weakened. They also illustrate that as the solar wind weakens, so too will the aurora.

If we are in fact heading into a new grand solar minimum, it stands to reason that we might see less of nature’s beautiful spectacle. But does that mean we’ll stop seeing it from the UK altogether as some have suggested?

Lessons from the past

Looking back at historical records of aurora sightings might provide the answer. Fortunately, a study has done just that. The authors analysed auroral observations during two grand solar minimums – including the Maunder minimum. They found that the number of aurora sightings from below 56° magnetic latitude (which is similar to geographic latitude but measured from the magnetic pole rather than the geographic pole) did indeed decrease. But they did not stop altogether.

That value of 56° magnetic latitude is actually quite important as it happens to coincide with the magnetic latitude of the UK (more specifically somewhere close to Lancaster, England).

The aurora captured from Groomsport, Northern Ireland (UK).
Philip McErlean/flickr

So what’s my prediction for the aurora over the next century? If the models are correct and we do head into a grand solar minimum, then solar activity is going to decrease – and remain at very low levels for decades to come. With this decrease in solar activity, aurora sightings from outside the polar regions are going to become rarer. But that doesn’t necessarily mean they’ll stop altogether. It also isn’t certain that we are heading for a grand solar minimum or – even if we are – when it might occur.

So while that elusive light show might get even more elusive, don’t fret just yet: the northern lights aren’t going out anytime soon.

Nathan Case, Senior Research Associate in Space and Planetary Physics, Lancaster University

This article was originally published on The Conversation. Read the original article.

CDC Cancels Climate Change Conference in February


A major, almost overnight shift from Democratic to Republican control of the White House has had chilling effects throughout the country, with climate change work among the areas most affected

Yesterday, Badlands National Park tweeted scientific data supporting climate change, only for the tweets to be taken down hours later and a statement released by the park saying the person who tweeted was a “former employee who was not authorized” to use the Twitter account. This came amid the gag order placed on the Environmental Protection Agency and the complete removal of all mentions of climate change from the White House website.

And today the CDC announced via its mailing list that it is postponing its February conference on climate change, without offering a reason. The American Public Health Association’s executive director, Georges Benjamin, said agency officials decided to preemptively call off the event:

“They ran it up the flagpole and realized that it was so close to the inauguration, the chances of it being canceled were pretty real with the administration that was coming in. Some might argue they should have said, ‘We’re going to do this and make them tell us no.’ But that was the decision they made. We should think of this as a strategic retreat.”

 

Putting CO2 away for good by turning it into stone


We seriously need to do something about CO2 emissions. Besides shifting to renewable energy sources and increasing energy efficiency, we need to start putting some of the CO2 away before it reaches the atmosphere. Perhaps the impacts of human-induced climate change will be so severe that we might even have to capture CO2 from the air and convert it into useful products such as plastic materials or put it someplace safe.

A group of scientists from several European countries and the United States including myself met in the middle, in Iceland, to figure out how CO2 could be put away safely – in the ground. In a recently published study, we demonstrated that two years after injecting CO2 underground at our pilot test site in Iceland, almost all of it has been converted into minerals.

The injection well that pumped waste CO2 and hydrogen sulfide gas from a geothermal well underground.
Martin Stute, Author provided

Mineralization

Iceland is a very green country; almost all of its electricity comes from renewable sources including geothermal energy. Hot water from rocks beneath the surface is converted into steam which drives a turbine to generate electricity. However, geothermal power plants there do emit CO2 (much less than a comparable coal-fired power plant) because the hot steam from deep wells that runs the turbines also contains CO2 and sometimes hydrogen sulfide (H2S). Those gases usually just get released into the air.

Is there another place we could put these gases?

Conventional carbon sequestration deposits CO2 into deep saline aquifers or into depleted oil and natural gas reservoirs. CO2 is pumped under very high pressure into these formations and, since they have already held gases and fluids in place over millions of years, the probability of CO2 leaking out is minuscule, as many studies have shown.

In a place like Iceland with its daily earthquakes cracking the volcanic rocks (basalts), this approach would not work. The CO2 could bubble up through cracks and leak back into the atmosphere.

However, basalt also has a great advantage: it reacts with CO2 and converts it into carbonate minerals. These carbonates form naturally and can be found as white spots in the basalt. The reactions also have been demonstrated in laboratory experiments.
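In simplified terms, the chemistry works roughly as follows. This is an illustrative sketch using calcium as the example cation; magnesium and iron in the basalt react in a similar way, and this is not the full reaction network at the site:

```latex
% Simplified carbon mineralization chemistry (illustrative only)
\begin{align*}
\mathrm{CO_2 + H_2O} &\rightleftharpoons \mathrm{H_2CO_3} \rightleftharpoons \mathrm{H^+ + HCO_3^-} \\
\underbrace{\mathrm{CaSiO_3}}_{\text{Ca silicate in basalt}} + \mathrm{2\,H^+} &\rightarrow \mathrm{Ca^{2+} + SiO_2 + H_2O} \\
\mathrm{Ca^{2+} + HCO_3^-} &\rightarrow \underbrace{\mathrm{CaCO_3}}_{\text{calcite}} + \mathrm{H^+}
\end{align*}
```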

Dissolving CO2 in water

For the first test, we used pure CO2 and pumped it through a pipe into an existing well that tapped an aquifer containing fresh water at about 1,700 feet of depth. Six months later we injected a mixture of CO2 and hydrogen sulfide piped in from the turbines of the power plant. Through a separate pipe we also pumped water into the well.

In the well, we released the CO2 through a sparger – a device for introducing gases into liquids similar to a bubble stone in an aquarium – into water. The CO2 dissolved completely within a couple of minutes in the water because of the high pressure at depth. That mixture then entered the aquifer.
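A rough way to see why depth matters is Henry’s law: the amount of CO2 that water can hold scales roughly with the CO2 pressure. The sketch below uses assumed round numbers (hydrostatic pressure at about 500 metres and a room-temperature Henry’s constant), so it is an order-of-magnitude illustration rather than a site-specific calculation:

```python
# Order-of-magnitude sketch of CO2 solubility at depth using Henry's law.
# All values are assumed round numbers, not measurements from the site.
depth_m = 500                  # roughly 1,700 feet
rho_water = 1000               # kg/m^3
g = 9.81                       # m/s^2

# Hydrostatic pressure at depth, expressed in atmospheres
pressure_atm = 1 + rho_water * g * depth_m / 101_325

k_henry = 0.034                # mol/(L*atm) for CO2 near 25 C (assumed)
molar_mass_co2 = 44            # g/mol
solubility_g_per_L = k_henry * pressure_atm * molar_mass_co2

print(f"Pressure at {depth_m} m: ~{pressure_atm:.0f} atm")
print(f"Approximate CO2 solubility: ~{solubility_g_per_L:.0f} g per litre")
```

That is dozens of times more than the roughly 1.5 grams per litre that water holds at surface pressure, which is why the gas dissolves so quickly once it reaches depth.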

We also added tiny quantities of tracers (gases and dissolved substances) that allow us to differentiate the injected water and CO2 from what’s already in the aquifer. The CO2 dissolved in water was then carried away by the slowly flowing groundwater.

Downstream, we had installed monitoring wells that allowed us to collect samples to figure out what happened to the CO2. Initially, we saw some of the CO2 and tracers coming through. After a few months, though, the tracers kept arriving but very little of the injected CO2 showed up.

Where was it going? Our pump in the monitoring well stopped working periodically, and when we brought it to the surface, we noticed that it was covered by white crystals. We analyzed the crystals and found they contained some of the tracers we had added and, best of all, they turned out to be mostly carbonate minerals! We had turned CO2 into rocks.

The CO2 dissolved in water had reacted with the basalt in the aquifer and more than 95 percent of the CO2 precipitated out as solid carbonate minerals – and it all happened much faster than anticipated, in less than two years.

The fracture in this basalt rock shows the white calcium carbonate crystals that form from the injection of CO2 with water at the test site.
Annette K. Mortensen, CC BY

This is the safest way to put CO2 away. By dissolving it in water, we already prevent CO2 gas from bubbling up toward the surface through cracks in the rocks. Finally, we convert it into stone that cannot move or dissolve under natural conditions.

One downside of this approach is that water needs to be injected alongside the CO2. However, because of the very rapid removal of the CO2 from the water in mineral form, this water could be pumped back out of the ground downstream and reused at the injection site.

Will it work elsewhere?

Ours was a small-scale pilot study, and the question is whether these reactions would continue into the future or whether pores and cracks in the subsurface basalt would eventually clog up and no longer be able to convert CO2 to carbonate.

The Iceland geothermal power plant has increased the amount of gas injected several times in the years since our experiment, using a different nearby injection location. No clogging has been encountered yet, and the plan is to soon inject almost all waste gases into the basalt. This process will also prevent the toxic and corrosive gas hydrogen sulfide from going into the atmosphere; it currently can still be detected at low levels near the power plant because of its characteristic rotten egg smell.

The very reactive rocks found in Iceland are quite common on Earth; about 10 percent of the continents and almost all of the ocean floors are made of basalt. This technology, in other words, is not limited to emissions from geothermal power plants but could also be used for other CO2 sources, such as fossil fuel power plants.

The commercial viability of the process still needs to be established in different locations. Carbon mineralization adds costs to a power plant’s operation, so this, like any form of carbon sequestration, needs an economic incentive to make it feasible.

People like to live near coasts, and many power plants have been built near their customers. Perhaps this technology could be used to put away CO2 emissions in coastal areas in nearby offshore basalt formations. Of course, there would be no shortage of water to co-inject with the CO2.

If we are forced to lower atmospheric CO2 levels in the future because we underestimate the damaging effects of climate change, we could perhaps use wind or solar-powered devices on an ocean platform to capture CO2 from the air and then inject the CO2 into basalt formations underneath.

Carbon mineralization, as demonstrated in Iceland, could be part of the solution of our carbon problem.


Martin Stute, Professor of Environmental Science, Columbia University

This article was originally published on The Conversation. Read the original article.

NASA Satellite Finds Unreported Sources of Toxic Air Pollution


Using a new satellite-based method, scientists at NASA, Environment and Climate Change Canada, and two universities have located 39 unreported and major human-made sources of toxic sulfur dioxide emissions.

A known health hazard and contributor to acid rain, sulfur dioxide (SO2) is one of six air pollutants regulated by the U.S. Environmental Protection Agency. Currently, sulfur dioxide monitoring activities include the use of emission inventories that are derived from ground-based measurements and factors such as fuel usage. The inventories are used to evaluate regulatory policies for air quality improvements and to anticipate future emission scenarios that may occur with economic and population growth.

But, to develop comprehensive and accurate inventories, industries, government agencies and scientists first must know the location of pollution sources.

“We now have an independent measurement of these emission sources that does not rely on what was known or thought known,” said Chris McLinden, an atmospheric scientist with Environment and Climate Change Canada in Toronto and lead author of the study published this week in Nature Geoscience. “When you look at a satellite picture of sulfur dioxide, you end up with it appearing as hotspots – bull’s-eyes, in effect – which makes the estimates of emissions easier.”

The 39 unreported emission sources, found in the analysis of satellite data from 2005 to 2014, are clusters of coal-burning power plants, smelters, oil and gas operations found notably in the Middle East, but also in Mexico and parts of Russia. In addition, reported emissions from known sources in these regions were — in some cases — two to three times lower than satellite-based estimates.

Altogether, the unreported and underreported sources account for about 12 percent of all human-made emissions of sulfur dioxide – a discrepancy that can have a large impact on regional air quality, said McLinden.

The research team also located 75 natural sources of sulfur dioxide — non-erupting volcanoes slowly leaking the toxic gas throughout the year. While not necessarily unknown, many volcanoes are in remote locations and not monitored, so this satellite-based data set is the first to provide regular annual information on these passive volcanic emissions.

“Quantifying the sulfur dioxide bull’s-eyes is a two-step process that would not have been possible without two innovations in working with the satellite data,” said co-author Nickolay Krotkov, an atmospheric scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland.

First was an improvement in the computer processing that transforms raw satellite observations from the Dutch-Finnish Ozone Monitoring Instrument aboard NASA’s Aura spacecraft into precise estimates of sulfur dioxide concentrations. Krotkov and his team now are able to more accurately detect smaller sulfur dioxide concentrations, including those emitted by human-made sources such as oil-related activities and medium-size power plants.

Being able to detect smaller concentrations led to the second innovation. McLinden and his colleagues used a new computer program to more precisely detect sulfur dioxide that had been dispersed and diluted by winds. They then used accurate estimates of wind strength and direction derived from a satellite data-driven model to trace the pollutant back to the location of the source, and also to estimate how much sulfur dioxide was emitted from the smoke stack.
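The study’s own retrieval and wind analysis is considerably more sophisticated, but the basic idea of turning a satellite “bull’s-eye” into an emission rate can be illustrated with a simple mass-balance sketch: integrate the SO2 column along a transect downwind of the source and multiply by the wind speed. The numbers below are hypothetical.

```python
import numpy as np

# Simplified mass-balance sketch (not the study's actual algorithm):
# emission rate ~ (SO2 mass per metre across a downwind transect) x wind speed.

# Hypothetical SO2 column amounts sampled across a transect, in Dobson Units.
columns_DU = np.array([0.1, 0.3, 0.8, 1.5, 0.9, 0.4, 0.1])
pixel_width_m = 15_000            # spacing between samples along the transect
wind_speed_m_s = 5.0              # assumed wind speed at plume height

# 1 DU of SO2 ~ 2.69e20 molecules/m^2; convert to kg of SO2 per m^2.
DU_to_kg_m2 = 2.69e20 * 0.064 / 6.022e23

line_density_kg_m = np.sum(columns_DU) * DU_to_kg_m2 * pixel_width_m
emission_kg_s = line_density_kg_m * wind_speed_m_s

print(f"Estimated source strength: ~{emission_kg_s:.1f} kg/s "
      f"(~{emission_kg_s * 3.15e7 / 1e6:.0f} kilotonnes per year)")
```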

“The unique advantage of satellite data is spatial coverage,” said Bryan Duncan, an atmospheric scientist at Goddard. “This paper is the perfect demonstration of how new and improved satellite datasets, coupled with new and improved data analysis techniques, allow us to identify even smaller pollutant sources and to quantify these emissions over the globe.”

The University of Maryland, College Park, and Dalhousie University in Halifax, Nova Scotia, contributed to this study.

For more information about, and access to, NASA’s air quality data, visit:

http://so2.gsfc.nasa.gov/

NASA uses the vantage point of space to increase our understanding of our home planet, improve lives, and safeguard our future. NASA develops new ways to observe and study Earth’s interconnected natural systems with long-term data records. The agency freely shares this unique knowledge and works with institutions around the world to gain new insights into how our planet is changing.

For more information about NASA Earth science research, visit:

http://www.nasa.gov/earth


Should Florida ‘frack’ its limestone for oil and gas? Two geophysicists weigh in


Florida is on the front lines of a debate over the spread of the controversial drilling technique hydraulic fracturing, or fracking, which raises a crucial question: are the state’s unique geology and hydrology safe for expanded oil and gas drilling?

Over the past several months, a number of counties and cities in Florida have banned fracking over environmental concerns. Earlier this year, the state legislature considered but did not pass a bill to regulate fracking at the state level, which would have superseded local bans.

So far, there has been at least one exploratory well in Florida using fracking, but the practice is not widespread. However, the question of how and whether to allow fracking is likely to come back up again, as early as next year.

How would fracking be done in Florida and what environmental and geologic questions are worth considering? A close look at the particular conditions of the Florida peninsula reveals a number of unresolved areas of concern.

Acid fracturing

In some respects, Florida is an unlikely site for this battle. Florida ranks 31st of the 50 states in energy production. The state currently has two regions with conventional hydrocarbon production – the Sunniland trend in South Florida and the western Panhandle. Hydrocarbons are stored within carbonate rocks, which are composed of limestone and dolostone in South Florida and carbonates and sand in the Panhandle.

A protest in 2014 against a bill that would allow fracking in Florida, where there are worries over the effects on drinking water and the tourism industry.
astronomygal/flickr, CC BY-NC

Potential hydrocarbon reservoir rocks in Florida are distinct from shales – the layers of sedimentary rock in other parts of the U.S. where fracking has led to a drilling boom in natural gas and oil. The rock under Florida generally has a higher permeability, making it easier for liquids to move through it.

A fracas ensued when one company tested fracking in Florida in 2013 before receiving a permit, which resulted in a cease and desist order from the Department of Environmental Protection (DEP).

The company used a technique known as acid fracturing, which is substantially different than what’s more commonly practiced elsewhere in the U.S. In this method, which is suitable only for carbonate reservoirs, acidic water is injected at high pressure into a well to dissolve the rock. Because carbonate rocks are highly soluble, acids can increase pore size and permeability, allowing oil or gas to flow.
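In simplified form, the acid dissolves the calcium carbonate that makes up much of the reservoir rock. The reaction below is illustrative only; the actual fluid chemistry varies by operator and formation:

```latex
% Simplified acid dissolution of carbonate rock (illustrative only)
\[
\underbrace{\mathrm{CaCO_3}}_{\text{limestone}} + \mathrm{2\,H^+} \rightarrow \mathrm{Ca^{2+}} + \mathrm{CO_2\uparrow} + \mathrm{H_2O}
\]
```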

What we know

Elsewhere in the U.S., fracking has gained attention due to its association with two hazards: earthquakes and groundwater contamination.

Fracking is a well stimulation technique that entails injecting a mixture of water, sand and chemicals at high pressure into oil and natural gas wells. The fracking fluid pressure breaks up the rocks hosting the oil and gas, increasing their permeability and allowing the oil, gas, natural brine in the rock (called produced waters) and fracking fluid to migrate quickly to the surface. The oil and gas make their way to market, and the fracking fluids are often recovered and reused.

The practice of fracturing rock with high pressure does not cause earthquakes but disposal of water that comes up after wells have been drilled has been linked to seismic activity in Oklahoma and other places in the U.S.
EPA

However, the briny produced waters can pose a problem. They are too laden with dissolved salts to release on the surface, where they would constitute a major pollutant. So, these brines are generally reinjected into the Earth in very deep wells, called injection wells.

A growing body of research based on high-quality seismic data collected at surface sites around these injection wells clearly shows that voluminous wastewater injection affects seismicity. Earthquakes in Oklahoma and several other pockets of midcontinent U.S. – including two in Oklahoma with magnitudes greater than 5 since 2011 – have been associated with high-volume deep-well injection of wastewater, a byproduct of oil and gas production.

An additional problem associated with oil and gas production is the potential for contamination of drinking water and irrigation aquifers by either fracking fluids or produced wastewaters. Done correctly, production wells can be constructed and cemented to avoid the migration of fluids into the well. However, natural gas and chemicals used in fracking have been found in the aquifers of the Marcellus Shale in Pennsylvania, and poor drilling practices have been blamed for methane entering aquifers. Surface operations associated with drilling may also contribute to contamination.

As a result of the experiences in other states, the possibility of fracking in Florida has met strong opposition in some quarters.

What we still don’t know

What would the environmental impact of fracking be in Florida?

At this point, there are more questions than answers. The specifics of proposed fracking in Florida are complicated by the very different regional geology of the peninsula.

Much of Florida sits atop what is called a karst terrane, a geological formation characterized by a complex, highly permeable and porous carbonate aquifer system. The geology includes an equally complex set of less permeable rock units – called confining units – that are distributed within, around and throughout the aquifer system.

Key unknowns for Florida include:

  • Are there extractable oil and gas reservoirs outside of the currently producing regions of the Panhandle and South Florida? The recent local bans include many regions with no confirmed oil or gas reserves. Shales exist below the carbonate rocks in some locations, but it is unclear if conditions were right for oil/gas to form in those shale formations.

  • Where are the faults that could produce earthquakes in Florida? Wastewater injection occurs in many locations, but earthquakes are much less common. Earthquakes due to wastewater injection require a combination of factors. First, there must be faults that can produce earthquakes and sufficient stresses. Second, there must be fluid pathways within the rock through which injected wastewater can increase the fluid pressure significantly. The locations of basement faults in Florida are poorly known, and although none have been known to generate earthquakes, their ultimate impact on seismicity in the state will depend on knowing their proximity to proposed locations of wastewater injection.

  • Where and how deeply will wastewater be injected? Currently, some of the wastewater from Florida’s oil and gas drilling is injected where it is produced, in zones below drinking water aquifers. In South Florida, the “Boulder Zone” lies above layers from which oil and gas are drawn and below the tapped portions of the Floridan aquifer system. This cavernous zone receives injected wastewater from oil and gas operations and from municipalities with little pressure increase, which may indicate that induced seismicity would not occur even following rapid, high-volume wastewater injection. Many other parts of Florida do not have a permeable zone similar to the Boulder Zone, but the precise distribution of permeable and impermeable zones in the Florida subsurface is poorly known, so safe wastewater disposal is highly uncertain.

A sinkhole in Florida. The state’s unique geology means water moves rapidly, making the state’s aquifers potentially vulnerable to spills and contaminants.
innovationschool/flickr, CC BY-NC

Florida’s geology is significantly different from Oklahoma’s, where there has been the most seismic activity. In Florida, wastewater injection has generally been above oil/gas producing zones, which keeps the injected wastewater farther from deep formations. That could decrease the risk of earthquakes, since earthquake-producing faults occur in these deep formations in locations such as Oklahoma.

On the other hand, Florida’s practice results in wastewater injection closer to drinking water aquifers. The confining units within the Floridan aquifer system (FAS) have been extremely difficult to map and are highly variable in thickness and properties. A comprehensive effort to map these and zones of high permeability – which could be suitable for injecting and storing wastewater – and rapid groundwater flow would be a monumental task requiring full-time work from many geologists and geophysicists for decades. In other words, understanding with certainty how effective Florida’s geology is for storing wastewater from oil and gas drilling and its ultimate effect on aquifers will be a huge undertaking.

Furthermore, Florida cities have generally tapped shallower aquifers until now. However, as these aquifers become overused, deeper brackish to saline portions of the FAS are being considered as a source of freshwater through desalination. These aquifers could also be used to store freshwater during wet periods, to be pumped back out later during dry times (aquifer storage and recovery). Thus, zones of water used for human consumption may approach those where wastewater would be injected.

South Florida’s aquifers also have rapid flow. In a 2003 dye tracer study in the Miami region, dyes reached the Miami Dade County well field in hours rather than the expected days. Not only did the dye turn the water red, it exposed the vulnerability of Florida’s carbonate aquifers to contamination. Contaminants could reach irrigation and drinking water systems rapidly enough to pose economic and health risks before any effective warnings could be issued.

So although there’s been a sharp debate over fracking in Florida, the focus on “fracking” alone risks losing sight of the bigger picture. Florida’s aquifers are potentially vulnerable to injected wastes, contaminant migration through poorly sealed wells and from surface activities, regardless of whether fracking is involved.


Ray Russo, Associate Professor of Geophysics, University of Florida and Elizabeth Screaton, Professor of Geology, University of Florida

This article was originally published on The Conversation. Read the original article.

Recreating forests of the past isn’t enough to fix our wildfire problems


Tiger, tiger, burning bright

In the forests of the night…

-William Blake

There is general agreement that America’s landscapes, certainly its wildlands, are out of whack with their fires. Wildfires are bigger, hotter, more savage and more expensive than in the past.

There is wide agreement, too, that America’s deeper fire problem is not that malignant megafires are crashing into our communities. Instead, it’s that we’ve lost the older benign versions of fire that once washed over and benefited our ecosystems. Surely, the thinking goes, restoring fire’s former regimes would quell the outbursts and bolster forests’ ecological resilience to multiple threats.

But active restoration has proved trickier, more controversial, and more limited than advocates assumed. It works, but not everywhere, and not everyone wants it.

The roots of suppression

For 50 years after the Great Fires of 1910 traumatized the U.S. Forest Service, the country committed to a program of what we might call fire resistance. It sought both to quit lighting fires and to extinguish every fire that did occur before it could grow large and damaging. Then in the 1960s, the fire community reconsidered because the project was self-defeating and had suppressed good fires as well as bad ones. Many biotas were adapted to particular kinds of fires and suffered when those fires vanished or changed character.

By 1978 the federal agencies adopted a program to restore the fire regimes that had prevailed before the ax and hoof of settlement, and the onset of organized fire suppression, had confirmed our new disorder. The project embodied not only the prevailing science but a kind of atonement for the wreckage done. Fire officers would light fires under prescriptions and they would allow natural fires more room to roam.

Following devastating fires in 1910, the U.S. Forest Service put a priority on putting fires out, but later changed policies. Here a ranger locates a distant fire in Montana in 1909.
Forest History Society, CC BY-NC

On restoration as a guiding principle, consensus exists. On its practice, however, confusion and confrontation abound. Why?

Basically, we can’t agree on what those prior conditions were, or by what methods we might recreate them, or for some observers whether the past is in truth prologue to the future.

Consider how varied some of the best studied landscapes are. Tallgrass prairie requires fire. Probably most tallgrass environments burned every three years or so before European settlement. Longleaf pine, once pervasive on the southeastern coastal plains, burned like a savanna, its wiregrass understory carrying flames among the woods nearly annually. Its western counterpart, ponderosa pine, also behaved like a grassland with big trees clumped throughout it. Likely it burned every 3-8 years. These are all surface fires that occasionally torched pockets of woody thickets or trees during drought and high wind.

Lodgepole pine, by contrast, burns in eruptive patches, killing the existing stand and preparing for a mass reseeding in the ash. The patches burned perhaps every 80-120 years. And then there is California chaparral. Forty years ago the best science suggested it burned weakly until the primary species reached 20-25 years, and then more fiercely with each passing year. No fuel like an old fuel.

Politics of wildland fire

Advocates of restoration argued that more good fire would reduce bad fire and improve ecosystem health. Our understanding of past fire patterns would help write the necessary guidelines.

But some prescribed fires escape control (probably a comparable fraction to those that escape initial suppression). Smoke drifts with the wind. Some sites need preburn preparations. And there are always dissenters. All this costs not only money but social and political capital.

Most tallgrass preserves, for instance, are tiny; there is always a butterfly or beetle, with human partisans for its cause, that thrives best in a more varied mixture of fire. This complicates the social politics of actually putting fire on the ground.

The idea that forests need to burn sometimes is well accepted, but each ecosystem is different and there are often opponents to prescribed fires.
edsuom/flickr, CC BY-NC

Longleaf – the “forest that fire made” – displays its greatest biodiversity by having a range of fires across seasons and years. Overall, it’s probably impossible to overburn it, but practice requires guidelines, and that demands social consensus beyond the belief that fire belongs.

Ponderosa forests have generally become overgrown with understories of young trees that can carry fire from the surface to the canopy – a revived fire but not one that allows the ponderosa to survive. This has led to arguments for thinning, a kind of woody weeding, to restore the former structure of the forest, so that it can sustain the right kind of fire.

But removing chainsaws was a major triumph of many environmental groups, who do not wish to see them return as stealth silviculture, and there are outlier researchers who insist that severe fires have always been a part of the scene. Mainstream scientists disagree.

Lodgepole patches have grown more extensive with fire’s removal, which not only feeds larger fires but has encouraged beetle invasions, which further unhinge the structure of fuels and complicate putting fire back in. Since controlled crown fires are at best tricky, and prescribed commercial logging (rather than thinning) is generally unwanted, the options for deliberate restoration are few.

And the chaparral? There are researchers who insist that wind, not fuel, is the driving factor, and argue that fuel mitigation measures, including prescribed fire, will only invite invasive species, destroy native ones, and not make a whit of difference to fire size and intensity. Besides, they say, the strategic issue is urban sprawl, and the fire concern is overall ecological integrity and resilience, not fuel.

A pragmatic hybrid

For several decades restoration has been an informing theme for America’s fire community. It can point to many successes. Florida now burns over two million acres a year under prescription, and the Florida model has propagated throughout the region. A template for southwestern ponderosa pine, loosely known as the Flagstaff model for the site of its demonstration plots, has disseminated throughout many montane forests in the West.

But the Florida model does not work in the chaparral shrublands of Southern California. The Flagstaff model does not work in the pinyon-juniper complex of the eastern Great Basin or the lodgepole of the west-side central Rockies. Each biota needs its own guidelines. Active restoration programs cost money. And prescribed burning becomes more encumbered with restrictions and caveats each year.

The Flagstaff model: before (left) and after (right) treatments by thinning and burning in ponderosa pine. By almost all measures, the treated plot is ecologically healthier and more resilient to fire. The photos are not of the identical scene but representative examples from the same site.
Stephen Pyne, Author provided

Plenty of partisans would prefer we let nature sort out the imbalances, not pretend, with costly hubris, that we know enough or are skilled enough to do the right thing. People caused the problem; removing them altogether is the surest means to set matters right. Less active management, not more, is the way to reconcile past conditions with future wishes.

And for those obsessed with the no-analog future promised by that constellation of global changes lumped under the label Anthropocene, restoration is beside the point. The future will be radically different. We need to prepare for it, not waste scarce resources on recreating a prelapsarian past.

In brief, fire regimes are varied, science frequently conflicted, and restoration intellectually compromised by irony, which adds no cultural value, since we can never truly go back.

The responses to these challenges will vary – as they should. In the American West, however, the cumulative burdens are pushing fire officers away from the former restoration ideal into something akin to a resilience model. They know they need more fire. Their experience tells them they won’t get it waiting for Congress or navigating, project by project, the reviews required by the National Environmental Policy Act.

Instead of attacking the fire problem head on, they are trying to flank it. Of course there are some fires that bolt away from the moment of ignition, or threaten communities, municipal watersheds, or critical biotic assets and must be fought from the first kindling.

But many other fires allow for varied responses. Backing off and burning out – not letting fires roam freely but loose-herding them with selective firefights and burnouts along their perimeter – is a way to get some good fire on the ground.

It’s not restoration as the old order understood it. It’s not a case of science informing and management applying, of rationally getting ahead of the problem. They accept they won’t get ahead of the problem: they have to ride it out. Some patches will burn too severely; some patches won’t burn at all. In a way it’s a pragmatic solution, replacing a goal that we can’t agree on, with a process – returned fire – that we can. The hand is solving what the head cannot.

It now appears that while restoration may be a permanent principle, one widely adopted, it is not a transcendent one. It only has meaning in particular places and practices and, we might add, times. It has to compete with other values like wilderness. What is replacing it is a kind of intellectual and institutional mashup, the paradox of a managed wildfire. It’s a way to improve control by loosening our standards of control. This is not what the new era imagined as it sought to tame the bright-burning tiger, but it offers us a means to ride that tiger into the future.


Stephen Pyne, Regents Professor in the School of Life Sciences, Arizona State University

This article was originally published on The Conversation. Read the original article.

Will taxpayers foot the cleanup bill for bankrupt coal companies?


Coal’s share of the U.S. energy market is rapidly plunging. Low-cost fracking-generated natural gas has overtaken the use of coal at America’s power plants. Impending implementation of the Obama administration’s proposed Clean Power Plan, which would place stringent regulations on coal-fired power plant emissions, has also helped to drive coal production to its lowest level in decades. Government sources predict further decline.

Fifty U.S. coal companies have filed for bankruptcy since 2012. Competition and more stringent environmental regulations played a role in this decline. But, just before coal prices collapsed, speculating top producers borrowed billions to finance unwise acquisitions. Now, unable to pay loan interest and principal, they have sought bankruptcy protection to restructure US$30 billion in debt. The bankrupt companies include Arch Coal, Alpha Natural Resources, Patriot Coal and Jim Walter Resources.

Last month Peabody Energy Corp., the world’s biggest private-sector coal producer, followed suit. Peabody seeks to restructure $8.4 billion in debt. Its market capitalization fell from $20 billion in 2011 to $38 million at the time of bankruptcy.

Amid this turmoil, many observers fear that bankrupt coal companies will be able to shift their huge liabilities for reclamation, or restoring land that has been mined, to taxpayers.

Panoramic image of mountaintop removal mining, West Virginia.
Dennis Dimick/Flickr, CC BY-NC-ND

Congress passed the Surface Mining Control & Reclamation Act, or SMCRA, in 1977 to prevent such a scenario. But, in my view, state and federal coal regulators have failed to ensure that coal companies have enforceable financial guarantees in place, as the law requires.

I have interacted with the coal industry for 40 years, first as a government enforcement lawyer and then litigating issues relating to coal mine reclamation cases on behalf of conservation organizations and coalfield communities. I believe that if the unfunded liabilities of bankrupt coal companies are not covered by new guarantees and additional companies seek bankruptcy protection, there is a real chance that taxpayer-funded billion-dollar bailouts will be necessary to cover their cleanup costs.

Planning for reclamation

SMCRA was designed to prevent bankrupt coal companies from foisting onto taxpayers the costs of restoring thousands of acres of mined land and treating millions of gallons of polluted mine water.

When Congress enacted the law, it identified many of the adverse impacts that occur when mined land is not reclaimed:

…mined lands burden and adversely affect commerce and the public welfare by destroying or diminishing the utility of land for commercial, industrial, residential, recreational, agricultural, and forestry purposes, by causing erosion and landslides, contributing to floods, polluting the water, destroying fish and wildlife habitats, impairing natural beauty, damaging the property of citizens, creating hazards dangerous to life and property, degrading the quality of life in local communities, and by counteracting governmental programs and efforts to conserve soil, water, and other natural resources.

In the decades preceding SMCRA’s enactment, thousands of bankrupt companies abandoned mines without reclaiming them. Many of these sites remain untreated today. According to the U.S. Geological Survey, restoring streams and watersheds across Pennsylvania that were damaged by acidic drainage from mines abandoned before 1977 would cost $5 billion to $15 billion. Similarly, reclaiming mining lands abandoned in West Virginia before SMCRA will cost an estimated $1.3 billion or more.

Impacts of acid mine drainage in Pennsylvania.
U.S. Geological Survey

SMCRA is designed to force a coal company to address and incorporate the cost of reclamation in its business planning. The law mandates that when state or federal regulators issue mining permits, coal companies must provide bonds or other financial guarantees to ensure that if they fail to fully reclaim mines, the state will have money available to do the job.

Most coalfield states administer the federal law through state-law-based regulatory programs overseen by the Department of the Interior. SMCRA offers states several options. They include requiring companies to provide financial guarantees in the form of corporate surety bonds, collateral bonds or self-bonds.

When companies use site-specific surety or collateral bonds, SMCRA requires states to calculate the cost of reclamation before any mining can begin. These studies must consider each mine site’s topography, geology, water resources and revegetation potential.

Strip mining, Powder River Basin, Wyoming.
WildEarth Guardians/Flickr, CC BY-NC-ND

States may also set up an “alternate” to a bonding system that achieves the objectives and purposes of a bonding program. This option has been described by a court as a “collective risk-spreading system that … allows a State to discount the amount of the required site-specific bond to … less than the full cost needed to complete reclamation of the site in the event of forfeiture.”

Surety bonds and collateral bonds are backed by cash, real property assets and financial guarantees from banks and surety companies. If a coal company goes bankrupt, regulators can collect on these bonds and use the money to fully reclaim abandoned mined land. However, state-approved “alternative” reclamation funding systems and self-bonding by coal companies do not provide the same certainty.

For example, both Pennsylvania and West Virginia approved systems in which coal operators paid nonrefundable fees into state funds that would be used to reclaim any bankrupt coal company sites. But neither required site-specific calculations of what reclamation would actually cost. Pennsylvania imposed a per-acre permit fee, and West Virginia required a few cents per-mined-ton reclamation fee.
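A purely hypothetical example (the figures below are invented for illustration and are not actual Pennsylvania or West Virginia numbers) shows how a flat fee can fall far short of real reclamation costs:

```python
# Hypothetical illustration of how a flat permit fee can underfund reclamation.
# All figures are invented for the example, not actual state numbers.
acres_mined = 500
flat_fee_per_acre = 250        # per-acre fee paid into the state fund
actual_cost_per_acre = 5_000   # plausible cost of regrading, revegetation
                               # and water treatment per acre

collected = acres_mined * flat_fee_per_acre
needed = acres_mined * actual_cost_per_acre

print(f"Fund collects:    ${collected:,}")
print(f"Reclamation bill: ${needed:,}")
print(f"Shortfall if the operator walks away: ${needed - collected:,}")
```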

Regulators in these states – enabled by lax federal oversight – failed to ensure that companies set aside enough funds. As a result, these agencies have exposed taxpayers to potentially enormous reclamation liability.

Reclamation IOUs

In 2001 a federal district court found that West Virginia’s federally approved state “alternate” bonding fund was hugely underfunded and could not guarantee reclamation of mines abandoned by bankrupt coal companies as required by SMCRA. The court held that state and federal regulators’ decade-long failure to institute a fully funded bonding system had created

[A] climate of lawlessness, which creates a pervasive impression that continued disregard for federal law and statutory requirements goes unpunished, or possibly unnoticed. Agency warnings have no more effect than a wink and a nod … Financial benefits accrue to the owners and operators who were not required to incur the statutory burden and costs attendant to surface mining …

SMCRA also allows companies to self-bond, if they meet rigorous asset requirements. But a self-bonding corporation’s promise to reclaim is little more than an IOU backed by company assets.

In 2014 federal regulators began, in the Interior Department’s words, “exploring concerns related to the efficacy of self-bonding practices and procedure” used by states. Instead of taking action, they opted to study the issue despite strong indications of financial collapse on the horizon. Now enormous western surface mines and mountaintop removal strip mines in central Appalachia are covered by $3.6 billion in self-bonding obligations, of which $2.4 billion is held by bankrupt Peabody, Arch and Alpha.

Home below a strip mine, Campbell County, Tennessee.
Appalachian Voices/Flickr, CC BY

Companies reorganizing under federal bankruptcy laws will continue to mine and market coal, hoping to shed mountains of debt and eventually emerge from bankruptcy. It remains to be seen whether they will be able to obtain conventional surety bonds after they reorganize, or whether bankruptcy courts will direct the companies to use their remaining assets to partially fulfill their self-bonding obligations.

One thing is clear, however. Against the backdrop of a century of coal company bankruptcies and attendant environmental damage, regulators ignored a looming coal market collapse with a wink and a nod. Properly administered, SMCRA’s reclamation bonding requirements should have required secure financial guarantees collectible upon bankruptcy.

Unfortunately, coal regulators viewed America’s leading coal companies like Wall Street’s mismanaged banks – too big to fail. As a result, American taxpayers may have to pick up an enormous reclamation tab for coal producers.


Patrick McGinley, Professor of Law, West Virginia University

This article was originally published on The Conversation. Read the original article.

Five Solomon Islands Submerged Due To Rising Sea Level


Rising sea levels are something climate change deniers simply cannot ignore anymore as five entire islands in the Pacific are now almost completely submerged.

The idea that islands could be swallowed whole had always seemed a fear for the far future, something for after the ice caps had melted and the atmosphere had become more difficult to breathe. But in the picturesque and often paradise-like Pacific Ocean, we are losing islands much faster than previously thought.

The Solomon Islands, a sovereign nation in the South Pacific, are home to six major islands and around 900 smaller islands. It doesn’t look like the country’s travel industry, known for scuba diving and WWII-era relics, will be affected too much any time soon, but it’s definitely something everyone there is beginning to get concerned about. Residents of these islands had long been saying that their homes were disappearing, and it has finally been confirmed scientifically. CNN reports:

“In the past 20 years, sea levels in the archipelago have risen 7 to 10 mm (.28 to .39 inches) annually, three times the global average. According to the Intergovernmental Panel on Climate Change, global rises will reach 5mm annually in the second half of the century.”
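Compounded over two decades, the gap between those local rates and the global average is substantial. A quick calculation using only the figures quoted above:

```python
# Cumulative sea-level rise implied by the rates quoted above.
years = 20
local_rate_mm_per_yr = (7 + 10) / 2                 # 7-10 mm per year locally
global_rate_mm_per_yr = local_rate_mm_per_yr / 3    # "three times the global average"

print(f"Local rise over {years} years:          ~{local_rate_mm_per_yr * years / 10:.0f} cm")
print(f"Global-average rise over {years} years: ~{global_rate_mm_per_yr * years / 10:.0f} cm")
```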

Among its acclaimed diving destinations are enormous Marovo Lagoon and Iron Bottom Sound, which is littered with dozens of sunken warships. Guadalcanal, a province and one of the archipelago’s largest islands, honors fallen Allied soldiers at its U.S. War Memorial.

Getting more energy from the sun: how to make better solar cells


Global demand for energy is increasing by the hour as developing countries move toward industrialization. Experts estimate that by the year 2050, worldwide demand for electricity may reach 30 terawatts (TW). For perspective, one terawatt is roughly equal to the power of 1.3 billion horses.

Energy from the sun is limitless – the sun provides us 120,000 TW of power at any given instant – and it is free. But today solar energy provides only about one percent of the world’s electricity. The critical challenge is making it less expensive to convert photo-energy into usable electrical energy.
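Those comparisons are easy to sanity-check with rough arithmetic (assuming about 746 watts per horsepower):

```python
# Rough checks on the figures above (assumes ~746 W per mechanical horsepower).
watts_per_horsepower = 746
demand_2050_TW = 30
solar_input_TW = 120_000

horses_per_TW = 1e12 / watts_per_horsepower
print(f"1 TW is roughly {horses_per_TW / 1e9:.1f} billion horsepower")
print(f"The sun delivers about {solar_input_TW / demand_2050_TW:,.0f} times the "
      f"projected 2050 electricity demand at any instant")
```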

To do that, we need to find materials that absorb sunlight and convert it into electricity efficiently. In addition, we want these materials to be abundant, environmentally benign and cost-effective to fabricate into solar devices.

Researchers from around the world are working to develop solar cell technologies that are efficient and affordable. The goal is to bring the installation cost of solar electricity below US$1 per watt, compared to about $3 per watt today.

At Binghamton University’s Center for Autonomous Solar Power (CASP), we are investigating ways to make thin film solar cells using materials that are abundant in nature and nontoxic. We want to develop solar cells that are reliable, highly efficient at converting sunlight to electricity and inexpensive to manufacture. We have identified two materials that have great potential as solar absorbers: pyrite, better known as fool’s gold because of its metallic luster; and copper-zinc-tin-sulfide (CZTS).

Seeking the ideal material

Today’s commercial solar cells are made from one of three materials: silicon, cadmium telluride (CdTe) and copper-indium-gallium-selenide (CIGS). Each has strengths and weaknesses.

Silicon solar cells are highly efficient, converting up to 25 percent of the sunlight that falls on them into electricity, and very durable. However, it is very expensive to process silicon into wafers. And these wafers have to be very thick (about 0.3 millimeters, which is thick for solar cells) to absorb all of the sunlight that falls on them, which further increases costs.

Silicon solar cells – often referred to as first-generation solar cells – are used in the panels that have become familiar sights on rooftops. Our center is studying another type called thin film solar cells, which are the next generation of solar technology. As their name suggests, thin film solar cells are made by putting a thin layer of solar absorbent material over a substrate, such as glass or plastic, which typically can be flexible.

A CASP center fabricated CZTS solar cell on a flexible glass substrate made by Corning.
Tara Dhakal/Binghamton University, Author provided

These solar cells use less material, so they are less expensive than crystalline solar cells made from silicon. It is not possible to coat crystalline silicon on a flexible substrate, so we need a different material to use as a solar absorber.

Although thin film solar technology is improving rapidly, some of the materials in today’s thin film solar cells are scarce or hazardous. For example, the cadmium in CdTe is highly toxic to all living things and is known to cause cancer in humans. CdTe can separate into cadmium and tellurium at high temperatures (for example, in a laboratory or housefire), posing a serious inhalation risk.

We are working with pyrite and CZTS because they are nontoxic and very inexpensive. CZTS costs about 0.005 cents per watt, and pyrite costs a mere 0.000002 cents per watt. They also are among the most abundant materials in the Earth’s crust, and absorb the visible spectrum of sunlight efficiently. These films can be as thin as 1/1000th of a millimeter.

Testing CZTS solar cells under simulated sunlight.
Tara Dhakal/Binghamton University, Author provided

We need to crystallize these materials before we can fabricate them into solar cells. This is done by heating them. CZTS crystallizes at temperatures under 600 degrees Celsius, compared to 1,200 degrees Celsius or higher for silicon, which makes it less expensive to process. It performs much like high-efficiency copper-indium-gallium-selenide (CIGS) solar cells, which are commercially available now, but replaces the indium and gallium in these cells with cheaper and more abundant zinc and tin.

So far, however, CZTS solar cells are relatively inefficient: they convert less than 13 percent of the sunlight that falls upon them to electricity, compared to 20 percent for more expensive CIGS solar cells.

We know that CZTS solar cells have a potential to be 30 percent efficient. The main challenges are 1) synthesizing high-quality CZTS thin film without any traces of impurities, and 2) finding a suitable material for the “buffer” layer underneath it, which helps to collect the electric charges that sunlight creates in the absorber layer. Our lab has produced a CZTS thin film with seven percent efficiency; we hope to approach 15 percent efficiency soon by synthesizing high-quality CZTS layers and finding suitable buffer layers.
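To get a feel for what those efficiency figures mean in practice, here is a rough sketch assuming the standard test irradiance of about 1,000 watts of sunlight per square metre (real-world output varies with location, weather and temperature):

```python
# Rough power output per square metre at the efficiencies discussed above,
# assuming ~1,000 W/m^2 of incident sunlight (standard test conditions).
irradiance_w_per_m2 = 1000
efficiencies = {
    "CZTS (our current lab cells)": 0.07,
    "CZTS (near-term goal)":        0.15,
    "CIGS (commercial)":            0.20,
    "Silicon (best commercial)":    0.25,
}

for name, eff in efficiencies.items():
    print(f"{name:30s} ~{eff * irradiance_w_per_m2:.0f} W per square metre")
```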

Structure of a CZTS solar cell.
Tara Dhakal/Binghamton University, Author provided

Pyrite is another potential absorber that can be synthesized at very low temperatures. Our lab has synthesized pyrite thin films, and now we are working to layer those films into solar cells. This process is challenging because pyrite breaks down easily when it is exposed to heat and moisture. We are researching ways to make it more stable without affecting its solar absorbency and mechanical properties. If we can solve this problem, “fool’s gold” could turn into a smart photovoltaic device.

In a recent study, researchers at Stanford University and the University of California at Berkeley estimated that solar power could provide up to 45 percent of U.S. electricity by 2050. To meet that target, we need to keep driving down the cost of solar power and find ways to make solar cells more sustainably. We believe that abundant, nontoxic materials are key to realizing the potential of solar power.


Tara P. Dhakal, Assistant Professor of Electrical and Computer Engineering, Binghamton University, State University of New York

This article was originally published on The Conversation. Read the original article.

Should fracking decisions be made locally?


The future role of gas in the UK is the subject of significant debate. There is controversy about how much gas we could use and for how long, and whether this will be compatible with statutory climate change targets. As North Sea supplies decline, there are also starkly differing views about whether some of the gas we will need in future should come from domestic shale gas resources.

Despite the number of headlines about shale gas, there has been very little development activity so far. Fracking for shale gas has only been carried out at one site near Blackpool, where operations by Cuadrilla caused minor earthquakes in 2011. This means that it is almost impossible to determine whether significant UK shale gas production would make economic sense. The recent falls in oil and gas prices have added to this uncertainty, but are likely to make commercial viability more challenging.

During the recent 14th licensing round for onshore oil and gas, 159 areas were awarded licenses for development – 75% of these were for unconventional oil and gas extraction, which has sparked local debates in many of the affected areas.

Two planning applications submitted by Cuadrilla for exploration at sites in Lancashire were recently turned down by the local council on the grounds of noise and traffic. One of these was refused against the advice of council officers. An appeal by Cuadrilla is currently underway. Whether it goes in favour of the council or the developer, the case raises broader questions about the role of local democracy and decision-making.

Last August the government announced the introduction of fast-track planning regulations designed to limit the length of local planning processes for unconventional oil and gas operations. Greg Clark, the secretary of state for communities and local government, also said he expects to have the final say over the Lancashire applications.

What is Fracking?

This intention to constrain local planning processes has understandably led to concerns about local democracy. It is not the first time national government has tried to intervene in local decision-making, especially when it comes to the development of new large-scale infrastructures or natural resources.

While national government may emphasise a particular course of action, like the development of shale gas, there is no guarantee that local decision-makers will simply agree. Furthermore, selective limits on local planning risk exacerbating public mistrust. A Sciencewise project on public engagement with shale gas and oil, commissioned by the government, revealed significant unease among participants about decision-making processes.

A waste of energy?

Given that large-scale changes to energy infrastructures are very likely to be required across the UK as the energy system decarbonises, this issue goes well beyond shale gas. Local opposition has also been significant for other energy developments such as wind farms, solar farms, gas storage sites and electricity transmission lines.

The government’s approach to different energy sources appears to be inconsistent – most notably between onshore wind and shale gas. In contrast with the approach for shale, local planners will determine whether new onshore wind projects go ahead or not. Ministers have defended this situation on the grounds that a lot of wind farms are already being deployed, while shale gas is at a very early stage.

Although the government’s regular energy opinion poll no longer asks specific questions about onshore wind, other polls suggest it still has significant public support – as well as being the cheapest low carbon electricity generation technology.

Where should our energy come from?
Pexels

The focus on shale and wind could also be a missed opportunity for a broader conversation about the UK’s sustainable energy transition. This conversation should not be restricted to which technologies or resources should be used, and what they might cost. Previous research from the UK Energy Research Centre suggests that people are also interested in how energy systems can reflect values such as fairness, sustainability and efficiency. A focus on individual sources like shale gas in isolation leaves little space for this broader conversation to be held.


Jim Watson, Research Director, UK Energy Research Centre

This article was originally published on The Conversation. Read the original article.