Category Archives: Energy

Deep sea mining could help develop mass solar energy – is it worth the risk?


Jon Major, University of Liverpool

Scientists have just discovered massive amounts of a rare metal called tellurium, a key element in cutting-edge solar technology. As a solar expert who specialises in exactly this, I should be delighted. But here’s the catch: the deposit is found at the bottom of the sea, in an undisturbed part of the ocean.

People often have an idealised view of solar as the perfect clean energy source. Direct conversion of sunlight to electricity, no emissions, no oil spills or contamination, perfectly clean. This however overlooks the messy reality of how solar panels are produced.

While the energy produced is indeed clean, some of the materials required to generate that power are toxic or rare. In the case of one particular technology, cadmium telluride-based solar cells, the cadmium is toxic and the tellurium is hard to find.

Cadmium telluride is one of the second generation “thin-film” solar cell technologies. It’s far better at absorbing light than silicon, on which most solar power is currently based, and as a result its absorbing layer doesn’t need to be as thick. A layer of cadmium telluride just one thousandth of a millimetre thick will absorb around 90% of the light that hits it. It’s cheap and quick to set up, compared to silicon, and uses less material.
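To see why such a thin layer suffices, here is a minimal back-of-the-envelope sketch using the Beer-Lambert law. The absorption coefficient is an assumed, representative value for cadmium telluride in visible light, not a figure taken from this article.

```python
import math

# Beer-Lambert estimate of light absorbed in a thin cadmium telluride layer.
# alpha is an assumed absorption coefficient (~2.3e4 per cm is a representative
# visible-light value for CdTe; it is not taken from the article).
alpha_per_cm = 2.3e4
thickness_cm = 1e-4      # one thousandth of a millimetre

absorbed_fraction = 1 - math.exp(-alpha_per_cm * thickness_cm)
print(f"Fraction of incident light absorbed: {absorbed_fraction:.0%}")  # ~90%
```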

As a result, it’s the first thin-film technology to effectively make the leap from the research laboratory to mass production. Cadmium telluride solar modules now account for around 5% of global installations and, depending on how you do the sums, can produce lower cost power than silicon solar.

Topaz Solar Farm in California is the world’s fourth largest. It uses cadmium telluride panels.
Sarah Swenty/USFWS, CC BY

But cadmium telluride’s Achilles heel is the tellurium itself, one of the rarest metals in the Earth’s crust. Serious questions must be asked about whether technology based on such a rare metal is worth pursuing on a massive scale.

There has always been a divide in opinion about this. The abundance data for tellurium suggests a real issue, but the counterargument is that no-one has been actively looking for new reserves of the material. After all, platinum and gold are similarly rare but demand for jewellery and catalytic converters (the primary use of platinum) means in practice we are able to find plenty.

The discovery of a massive new tellurium deposit in an underwater mountain in the Atlantic Ocean certainly supports the “it will turn up eventually” theory. And this is a particularly rich ore, according to the British scientists involved in the MarineE-Tech project which found it. While most tellurium is extracted as a by-product of copper mining and so is relatively low yield, their seabed samples contain concentrations 50,000 times higher than on land.

The submerged mountain, ‘Tropic Seamount’, lies off the coast of north-west Africa.
Google Earth

Extracting any of this will be formidably hard and very risky for the environment. The top of the mountain where the tellurium has been discovered is still a kilometre below the waves, and the nearest land is hundreds of miles away.

Even on dry land, mining is never a good thing for the environment. It can uproot communities, decimate forests and leave huge scars on the landscape. It often leads to groundwater contamination, despite whatever safeguards are put in place.

And on the seabed? Given the technical challenges and the pristine ecosystems involved, I think most people can intuitively guess at the type of devastation that deep-sea mining could cause. No wonder it has yet to be implemented anywhere, despite plans off the coast of Papua New Guinea and elsewhere. Indeed, there’s no suggestion that tellurium mining is liable to occur at this latest site any time soon.

Is deep sea mining worth the risk?

However, the mere presence of such resources, along with the wind turbines and electric car batteries that rely on scarce materials or risky industrial processes, raises an interesting question. These are useful low-carbon technologies, but do they also need to be environmentally ethical?

There is often the perception that everyone working in renewable energy is a lovely tree-hugging, sandal-wearing leftie, but this isn’t the case. After all, this is now a huge industry, one that is aiming to eventually supplant fossil fuels, and there are valid concerns over whether such expansion will be accompanied by a softening of regulations.

We know that solar power is ultimately a good thing, but do the ends always justify the means? Or, to put it more starkly: could we tolerate mass production of solar panels if it necessitated mining and drilling on a similar scale to the fossil fuels industry, along with the associated pitfalls?

Tolerable – as long as it’s for solar panels.
Peter Gudella / shutterstock

To my mind the answer is undoubtedly yes, we have little choice. After all, mass solar would still wipe out our carbon emissions, helping curb global warming and the associated apocalypse.

What’s reassuring is that, even as solar becomes a truly mature industry, it has started from a more noble and environmentally sound place. Cadmium telluride modules for example include a cost to cover recycling, while scarce resources such as tellurium can be recovered from panels at the end of their 20-year or more lifespan (compare this with fossil fuels, where the materials that produce the power are irreparably lost in a bright flame and a cloud of carbon).

The impact of mining for solar panels will likely be minimal in comparison to the oil or coal industries, but it will not be zero. As renewable technology becomes more crucial, we perhaps need to start calibrating our expectations to account for this.

At some point mining operations in search of solar or wind materials will cause damage or else some industrial production process will go awry and cause contamination. This may be the Faustian pact we have to accept, as the established alternatives are far worse. Unfortunately nothing is perfect.

Jon Major, Research Fellow, Stephenson Institute for Renewable Energy, University of Liverpool

Someone Holds A Patent On Mind-Control Via TV Screens & Computer Monitors


Yes, there’s someone out there who actually holds a patent on mind control technology. The question is who and to what end?

Patent no. US 6506148 B2 entitled “Nervous system manipulation by electromagnetic fields from monitors” claims to be able to control a human using electromagnetic pulses from TV screens and computer monitors. The abstract alone is enough to raise an eyebrow, to say nothing of the question of who has been using the technology since 2003.

Physiological effects have been observed in a human subject in response to stimulation of the skin with weak electromagnetic fields that are pulsed with certain frequencies near ½ Hz or 2.4 Hz, such as to excite a sensory resonance. Many computer monitors and TV tubes, when displaying pulsed images, emit pulsed electromagnetic fields of sufficient amplitudes to cause such excitation. It is therefore possible to manipulate the nervous system of a subject by pulsing images displayed on a nearby computer monitor or TV set. For the latter, the image pulsing may be imbedded in the program material, or it may be overlaid by modulating a video stream, either as an RF signal or as a video signal. The image displayed on a computer monitor may be pulsed effectively by a simple computer program. For certain monitors, pulsed electromagnetic fields capable of exciting sensory resonances in nearby subjects may be generated even as the displayed images are pulsed with subliminal intensity.

While conspiracy theorists such as Alex Jones have used the existence of this patent and others to make a case for mass mind control enslavement of the human race, the real question we should be asking is how this patent has actually been used. Unfortunately, the inventor, Hendricus G. Loos, is a shadowy scientific researcher with only his published works shedding any light on who he is and what his objectives are.

However, seen in another light, Loos may actually be a key to revealing the types of scientific research Americans’ tax dollars are funding through the Department of Defense. For example, a book describing how to contain plasma, published by the U.S. Air Force in 1958, could have something to do with jet engines, though sci-fi geeks might view it as an indirect attempt at creating the technology to produce plasma rifles.

Max Klaassen
Public enema xenomorphic robot from the dimension Zrgauddon.

Architecture changes to prepare for extreme weather


The human and economic losses resulting from extreme weather events during the last several years vividly demonstrate the US’ historically shortsighted approach to development. The ill-advised, fast-paced construction of human settlements in low-lying, coastal and riverine environments prone to flooding has long been the American way. From Galveston to Hoboken, we have laid out our grids and thrown up our houses with little regard for the consequences.

Galveston, Texas in 1871, ‘but a waif of the ocean,…liable, at any moment, and certain, at no distant day, of being engulfed and submerged by the self-same power that gave it form.’
Camille N. Drie

And the consequences can be devastating. Hurricane Sandy, which hit the East Coast in 2012 just one year after Hurricane Irene, another “100-year” storm, “filled up Hoboken like a bathtub.” The storm’s impact all across the eastern seaboard was staggering: 147 people were killed, 650,000 homes were damaged or destroyed, and 8.5 million residences lost power, some for weeks. In the end, the costs of the storm were pegged at over US$60 billion, making Sandy the second costliest natural disaster in US history after Hurricane Katrina in 2005.

Storms like Sandy are a harbinger of extreme weather events to come as a result of climate change. Without concerted action, the costs, in lives and property, of future weather events will only multiply. It’s time we recognize not only that the climate is changing but that the development patterns that have hardly served us well in the past certainly won’t serve us well in the future. Changing course will require a reassessment of risks as they relate not only to how but also to where we build. In our larger, more densely populated regions and cities, massive storm protection projects are both necessary and economically viable, but in many places we would be much better served to move out of harm’s way.

Torrential rains in May 2015 flooded this Houston, Texas apartment complex.
Reuters Photographer

Climate change means more extreme weather events

It’s beyond dispute that the planet is warming. The year 2014 was the warmest on record, and projections suggest that by 2100, average global temperatures could increase by between 2 and 11 degrees Fahrenheit. And with rising temperatures come rising sea levels. Globally, sea level rose 7 inches during the 20th century, and projections for the 21st century are alarming, with estimates ranging between 1 and 4 feet.

The rise in global temperature and sea level has been accompanied by an increase in flood events and hurricane strength and activity in the Atlantic. Since 1958, intense rainfall events have increased 71% in the Northeast. This May, rainstorms in Texas dumped 35 trillion gallons of water, enough to cover the entire state to a depth of eight inches. Here again, projections don’t bode well for the future.

Along the Atlantic coast, stronger, wetter and more frequent storms will result in ever-increasing levels of damage – especially when combined with bigger storm surges due to rising sea levels, less protection due to the loss of storm-buffering wetlands and more exposure due to increasing development in low-lying areas.

Inland, an increase in extreme precipitation events combined with more floodplain development and greater stormwater runoff over increasingly impervious ground surfaces will lead to more frequent and intense flooding.

Idea: let’s rethink building cities on floodplains.
Department of Environment and Climate Change, NSW, CC BY

Dumb development decisions

The US Congressional Budget Office predicts that the costs of hurricane damage in 2075 will double due to climate change alone and could increase fivefold with additional coastal development. And without significant changes in our land use policies, we will see additional development. Over the past 40 years, there’s been a 60% increase nationally in the number of people living in coastal floodplains. And those floodplains are growing; with each new upstream development, another downstream site is compromised. Over the last 20 years, increased runoff from new development in Houston, also known as the Bayou City, has added 55 square miles to the 100-year floodplain.

It should be clear by now that the rewards reaped from our current development patterns don’t outweigh the risks we face. In the past, we built our cities and settlements, not always wisely or well, with the assumption that the future would be similar to the past.

The evidence is now overwhelming that the future will be nothing like the past. But we continue, in many places, to act as if it will. Believing, perhaps, that if you ignore the science, the projections won’t come to pass, officials in North Carolina, Pennsylvania and Florida required that the term “climate change” be removed from official communications and state websites. Claiming “‘climate change’ is a political agenda which attempts to control every aspect of our lives,” Republican leaders in Texas “reject the use of this natural process to promote more government regulation of the private economy.”

What happens after the floodwaters recede?
Chuck Patch, CC BY-NC

Subsidizing risk

The problem with this logic, however, is that government action often tends to stimulate rather than impede private economic actions that both drive and increase our vulnerability to the impacts of climate change. In the United States, $18.5 billion in federal fossil fuel subsidies not only hamper efforts to reduce greenhouse gas emissions but contribute directly to the expansion of the type of low-density sprawling development that increases the risks we face from extreme weather events, through increased runoff and the destruction of wetlands and open space.

The government further subsidizes risky development through the National Flood Insurance Program (NFIP), which is currently $23 billion in debt due to claims from Hurricanes Katrina and Sandy. By keeping the cost of insurance below real actuarial rates – 20% of its 5 million policies are explicitly subsidized – and by continuing to offer insurance on repetitive loss properties – one single-family home in New Jersey has filed 16 claims for a total of $1.3 million — the NFIP shields property owners from the real risks to which they are exposed. In redistributing the costs of individual choices onto all taxpayers, the NFIP actively encourages development in vulnerable, high-risk areas. Recent attempts to reform the NFIP were thwarted by a coalition of coastal residents and the National Association of Homebuilders.

We continue, in many cases, to let individual short-term interests trump collective long-term security, ignoring what climate science has proven with 95% certainty – odds that any gambler would pray for.

Embracing resilient design

There are some positive signs of changing attitudes and approaches. With six innovative proposals funded in New York and New Jersey, the US Department of Housing and Urban Development’s Rebuild by Design competition signals a willingness, at all levels of government, to invest in progressive, evidence-based resilient design efforts. Recognizing the densely populated region’s massive exposure to the threats posed by extreme weather, HUD is investing over $900 million in infrastructure-scale projects. They include the creation of a multipurpose berm and storm-buffering nature preserve in the Meadowlands of New Jersey and a multi-pronged protection plan including new bulkheads, stormwater pumps and green infrastructure in Hoboken.

Such projects are a wise investment. The Army Corps of Engineers estimates that for every dollar spent on preparing for the anticipated effects of climate change – adapting, in other words – four to five are saved in post-disaster recovery and reconstruction costs. Unfortunately business goes on as usual in many places, where bearing the expense of disaster recovery trumps investing in preparedness.

Consolidating development is one way to build smarter.
‘Resilient Collective Housing’, New Jersey Institute of Technology College of Architecture and Design studio project by Taryn Wefer and Naomi Patel. Instructors: Keith Krumwiede and Martina Decker, CC BY-ND

Smarter choices in where and how we build

Rather than continuing to encourage shortsighted development practices, we should prioritize the development of denser, compact communities. Such communities offer economic, environmental and social benefits that make them inherently more resilient than sprawling low-density developments. With their smaller footprint, such communities have lower infrastructure costs per capita and provide for the preservation, or restoration, of natural habitats and storm-buffering wetlands. They also reduce energy consumption and thus greenhouse gas emissions at both the household and neighborhood level. When properly designed, such developments balance the individual needs of each household with the collective needs of the larger neighborhood, encouraging a sense of mutual respect and responsibility that is critical to the resilience of the community.

We have the means to encourage adaptation of this type. The voluntary buyout of flood-prone properties is particularly effective, from both a cost and resiliency perspective. Instead of continuing to subsidize flood insurance for properties in areas at risk of flooding — an estimated one-third of all claims paid through the NFIP are for repetitive loss properties — public funds should be used to acquire and restore the land to its natural state. A study done following a buyout of properties in Kentucky showed a return of $2.45 for every dollar invested in buyouts.

It no longer makes sense to rebuild in the same places that keep getting hit.
JaxStrong, CC BY

In addition to withdrawing from flood-prone areas, the creation of resilient, compact communities requires identifying and guiding development toward more opportune locations. Besides having no adverse impact on existing floodplains, such locations should accommodate greater density while providing access to jobs, education and recreation through a variety of transportation choices. Unfortunately, current zoning often discourages compact development.

In concert with enacting zoning changes to promote more resilient development, communities can utilize a technique called transfer of development rights (TDR). Most simply, TDR provides for the transferring of development rights from one location to another. Because zoning changes lower the development potential for some property owners while raising it for others, TDR essentially severs the right to develop land from the land itself. In this way, property owners seeking to build in areas where more development is desired would buy development rights from property owners in the area where less development is wanted.

It’s time to wise up

Each year brings more evidence of the human and economic impacts of climate change. It’s time that we stop throwing good money after bad. Rather than spending $25 million on PR campaigns to convince ourselves we’re “stronger than the storm,” we should start making choices that prove we’re smarter. For while we can’t say when the next hurricane with the force of Sandy (or even greater force) will batter the Atlantic Coast or when extreme flooding will hit Texas, we do know that there will be a next time. And we’re still fundamentally unprepared for it. We can’t continue to bet against climate change; we’ll lose in the end.


Keith Krumwiede is Associate Professor of Architecture at New Jersey Institute of Technology.

This article was originally published on The Conversation.
Read the original article.

EPA Clean Power Plan Re-Sparks US climate debate


This summer, for the first time, the nation’s fleet of existing power plants will face limits on carbon dioxide emissions.

Depending on whom you ask, the release of the EPA’s final Clean Power Plan is either an important step in addressing the challenge of climate change, an example of overreach by the federal government or largely insignificant.

Understanding the structure and potential impacts of the Clean Power Plan requires some context.

First, it is difficult to overstate the pace and scale of the transition already under way in the nation’s electric power sector. Natural gas prices, once characterized by significant volatility, are projected to remain low for the foreseeable future due to the rapid expansion of shale gas production via hydraulic fracturing. Electric utilities are facing new limits on emissions of mercury, sulfur dioxide, nitrogen oxides and particulate matter.

The cost of solar energy continues to drop. Wind power is increasing. Future electricity demand is expected to remain relatively flat due in part to improved efficiency of appliances and electronics. Utilities are also retiring a large number of older coal-fired power plants. Together, these factors are driving fundamental changes in the production and consumption of electricity.

Second, the EPA is developing the Clean Power Plan based on its authority under the Clean Air Act. This is the latest in a string of steps following the 2007 US Supreme Court case Massachusetts v EPA that concluded the Clean Air Act applied to greenhouse gas emissions. The specific section of the law at play here – section 111(d) – has rarely been triggered and there are no direct judicial decisions interpreting the statutory language.

This lack of precedent, combined with the broad terms that Congress included in section 111(d), grants the EPA and the states significant flexibility as they assess strategies to reduce CO2 emissions from the electric power sector.

State targets

State-specific emissions targets lie at the heart of the Clean Power Plan. The EPA has proposed calculating the state emission targets based on four criteria: improving efficiency at existing coal-fired power plants; increasing the use of existing natural gas facilities; increasing or maintaining generation from zero-emitting sources (including renewable and nuclear facilities); and increasing energy efficiency.

Individual state targets will differ because the potential for reducing emissions under each category varies from one state to another.



State officials will then have wide latitude to develop their own plans to meet the targets. If states refuse to submit a plan, or if the EPA determines that the submitted plan is inadequate, a federal plan would apply.

The long-term impacts of this new regulation will depend on forthcoming decisions by the EPA, the states and the courts.

The more stringent the emissions limits, for example, the more steps that state regulators and electric utilities must take to comply and the greater the reduction in the nation’s greenhouse gas emissions.

Long-term impacts will also turn on the degree of guidance the EPA provides regarding compliance options. Many states have limited capacity to evaluate the full range of the options on their own. If the EPA reduces administrative and technical hurdles for some choices, there is a good chance that many states will pursue those options.

How states choose to implement the actual targets will determine how utilities respond to the new rule. For example, whether or not states approach compliance on an individual basis or as part of a multistate effort may have a major impact on the overall cost of the program.

With the exception of most of Texas, the nation’s electricity grids span state borders. If neighboring states make different implementation choices, it could affect how companies operate their existing power plants and where they site new facilities.

Courts will also play an important role in determining whether and how the Clean Power Plan moves forward. The EPA has already survived one court challenge to the Clean Power Plan, but more are certain to follow issuance of the final rule.

The EPA initially proposed requiring states to submit their plans within one year, with the possibility of a one- or two-year extension. That timeline will likely change in the final rule. Legal challenges could also potentially delay implementation.

New discussion

While it will take some time to assess the long-term implications of the Clean Power Plan, the regulatory process has already produced a notable result.

By moving forward under its existing legal authority, the EPA has shifted the policy debate from “whether to regulate CO2 emissions” to “how to regulate CO2 emissions.”

This, in turn, has reinvigorated serious conversations among state regulators, utility executives and environmental groups regarding policy options to achieve meaningful environmental benefits in a cost-effective manner.

States have always been at the forefront of efforts to address climate change, and that leadership will continue under the Clean Power Plan. With deliberate planning, this process could provide state regulators with a tool for guiding the electricity sector into the future.


Jonas Monast is Climate and Energy Program Director, Nicholas Institute for Environmental Policy Solutions; Senior Lecturing Fellow, Duke Law School at Duke University.

This article was originally published on The Conversation.
Read the original article.

Smart pool pumps can act as 'batteries' to store renewable energy


Sean Meyn, University of Florida

As more wind and solar energy comes online, the people who run the power grid have a problem: how do they compensate for the variable nature of the sun and wind?

California plans to spend billions of dollars for batteries to even out the flow of power from solar and wind, much the way shock absorbers smooth out bumps on the road. But do they need to? Not at all!

In my research, I’ve found that we can accommodate a grid powered 50% by renewable energy without the use of batteries.

Systems flexible enough to accommodate the ups and downs of solar and wind production can be made by adjusting the power at millions of homes and businesses on a minute-by-minute or even second-by-second basis. This approach requires no new hardware, some control software and a bit of consumer engagement.

Massive balancing act

Already, electric power procured from the wind or sun is leading to large and small “bumps” in the energy fed to the grid.

For example, on a typical week in the Pacific Northwest, power can increase or decrease by more than one gigawatt in an hour. That’s the equivalent of the output from one huge nuclear power plant able to supply a million homes.

Look at the green line. Wind power generation is volatile and not always in sync with the actual demand for power (red line, below the blue).
Bonneville Power Administration

This is an enormous challenge to grid operators in this region. Massive fluctuations in power require equally massive storage devices that can charge when the wind is blowing, and discharge during periods of calm.

Today, the balancing of supply and demand for power is done primarily by adjusting generation rather than by using storage.

Grid operators draw on what are called balancing reserves, obtained from fossil fuel generators or hydro plants when available. These power plants ramp their output up and down in response to a signal from a grid balancing authority. This is just one of many ancillary services required to maintain a reliable grid.

Many states are now scrambling to find new sources of ancillary services, and the federal government is also searching for incentives: Federal Energy Regulatory Commission (FERC) orders 745, 755 and 784 are recent responses by a government agency to create financial incentives for responsive resources to balance the grid.

Are batteries the solution?

Storage is everywhere, but we have to think beyond electricity.

Consider a large office building. Will anyone notice if the fan power is reduced or increased by 10% for 10 or 15 minutes? This has no effect on the comfort of the building’s occupants, but the resulting deviations in power can provide a substantial portion of the needs of the grid. A building can be regarded as a virtual battery because of thermal inertia – a form of thermal storage.

What about for longer time periods? Residential pool pumps (as well as pumps used in irrigation) are a significant load in Florida and California – well over one gigawatt in each state – that can be run at different times of the day.

Turning down, or turning on, many of these provides enough power to smooth out solar and wind, while still cleaning the pool.
Pixabay

Through local intelligence – in the form of a chip on each device or a home computer for many devices – the collection of one million pools in Florida can be harnessed as massive batteries. Through one-way communication, each pool will receive a regulation signal from the grid operator. The pool will change state from on to off based on its own requirements, such as recent cleaning hours, along with the needs of the grid. Just as in the office building, each consumer will be assured of desired service.

Pools are, of course, just one example of a hungry but flexible load.

On-off loads such as water pumps, refrigerators or water heaters require a special kind of intelligence so that they can accurately erase the variability created from renewable generation. Randomization is key to success: To avoid synchronization (we don’t want every pool to switch off at once), the local intelligence includes a specially designed “coin-flip”; each load turns on or off with some probability that depends on its own environment as well as the state of the grid.
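As a rough illustration of that coin-flip idea, the sketch below has each pump decide independently whether to run, with a probability that blends its own cleaning needs with a grid signal. The probability function, the signal convention and the numbers are illustrative assumptions, not the control law from the underlying research.

```python
import random

def switch_probability(grid_signal, hours_cleaned_today, target_hours=8.0):
    """Probability that a pool pump runs during the next interval.

    grid_signal runs from -1.0 (renewable surplus, please consume more)
    to +1.0 (shortage, please consume less). This blend is illustrative only.
    """
    local_need = max(0.0, 1.0 - hours_cleaned_today / target_hours)
    grid_push = 0.5 * (1.0 - grid_signal)       # larger when power is plentiful
    return min(1.0, max(0.0, 0.5 * local_need + 0.5 * grid_push))

# Each pump flips its own biased coin, so the fleet never switches all at once.
grid_signal = -0.4                              # a modest renewable surplus
sample_size = 1000
pumps_running = sum(
    random.random() < switch_probability(grid_signal, random.uniform(0.0, 8.0))
    for _ in range(sample_size)
)
print(f"About {pumps_running / sample_size:.0%} of sampled pumps run this interval.")
```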

It is possible to obtain highly reliable ancillary service to the grid, while maintaining strict bounds on the quality of service delivered by each load. With a smart thermostat, for example, indoor temperature will not deviate by more than one degree if this constraint is desired. Refrigerators will remain cool and reliable, and pools will be free of algae.

Where do we go from here?

We first must respect the amazing robustness of the grid today.

This is the result of ingenious control engineering, much like the automatic control theory that brought the first human to the moon and makes our airplanes so reliable today. We cannot pretend that we can transform the grid without partnering with the control and power engineers who understand the mysterious dynamics of the grid. Instabilities and blackouts occur when we are too aggressive in attempting to balance supply and demand, just as they occur when we are too slow to respond.

We are certain that the engineering challenges will be largely solved in the upcoming years – it is an exciting time for power!

“Intelligent” loads, or devices with controllers, can balance supply and demand of power along with generators and batteries.
Author, Author provided

The next challenge is participation.

Today, about 750,000 homeowners in Florida have signed contracts with the utility Florida Power & Light, allowing it to shut down their pool pumps and water heaters in case of emergencies. How can we expand on these contracts to engage millions of homeowners and commercial building operators to supply the virtual storage needed? Recent FERC rules that offer payments for ancillary services for balancing the grid are a valuable first step in providing incentives.

It is possible that little incentive is required since we are not subjecting consumers to any loss of comfort: it is the pool or fridge that provides flexibility, and not the homeowner.

A sustainable energy future is possible and inexpensive with a bit of intelligence and flexibility from our appliances.


Sean Meyn is Professor of Electrical and Computer Engineering at University of Florida.

This article was originally published on The Conversation.
Read the original article.

When will we have better batteries than lithium-ion for gadgets and electric vehicles?


Many of us would be hard-pressed to spend a day without using a lithium-ion battery, the technology that powers our portable electronics. And with electric vehicles (EVs) and energy storage for the power grid around the corner, their future appears pretty bright.

So bright that the iconic California-based upstart Tesla Motors stated that their newly announced residential Powerwall battery is sold out until mid-2016 and that strong market demand could absorb the entire capacity of their upcoming battery “gigafactory” – 35 gigawatt-hours per year, equivalent to a day’s electrical energy use for 1.2 million US households.
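For a rough sense of that comparison, here is the arithmetic, assuming a typical US household uses about 11,000 kWh of electricity per year (an assumption for this check, not a figure from Tesla).

```python
# Rough check of the gigafactory comparison.
factory_output_kwh = 35e6              # 35 GWh of cells per year, in kWh
household_daily_kwh = 11_000 / 365     # roughly 30 kWh per household per day

households_millions = factory_output_kwh / household_daily_kwh / 1e6
print(f"One year of cell output ~= a day's electricity for {households_millions:.1f} million households")
```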

When released by Sony in the early 1990s, many considered lithium-ion batteries to be a breakthrough in rechargeable batteries: with their high operating voltage and their large energy density, they outclassed the then state-of-the-art nickel metal hydride batteries (NiMH). The adoption of the lithium-ion technology fueled the portable electronic revolution: without lithium-ion, the battery in the latest Samsung Galaxy smartphones would weigh close to four ounces, as opposed to 1.5 ounces, and occupy twice as much volume.

Yet, in recent years lithium-ion batteries have gathered bad press. They offer disappointing battery life for modern portable devices and limited driving range of electric cars, compared to gasoline-powered vehicles. Lithium-ion batteries also have safety concerns, notably the danger of fire.

This situation raises legitimate questions: What is coming next? Will there be breakthroughs that will solve these problems?

Better lithium chemistries

Before we attempt to answer these questions, let’s briefly discuss the inner mechanics of a battery. A battery cell consists of two distinct electrodes separated by an insulating layer, conveniently called a separator, which is soaked in an electrolyte. The two electrodes must have different potentials, or a different electromotive force, and the resulting potential difference defines the cell’s voltage. The electrode with the higher potential is referred to as the positive electrode, the one with the lower potential as the negative electrode.

Next-generation batteries could improve on energy density, allowing for longer run-time on electronics and driving range on EVs.
Author and Wikipedia, Author provided

During discharge, electrons flow through an external wire from the negative electrode to the positive electrode, while charged atoms, or ions, flow internally to maintain a neutral electrical charge. With rechargeable batteries, the process is reversed during charging.

Lithium-ion batteries’ energy density, or the amount of energy stored per weight, has increased steadily by about 5% every year, from 90 watt-hours/kilogram (Wh/kg) to 240 Wh/kg over 20 years, and this trend is forecast to continue. It’s due to incremental refinements in electrodes and electrolyte compositions and architectures, as well as increases in the maximum charge voltage, from 4.2 volts conventionally to 4.4 volts in the latest portable devices.
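A quick sanity check shows how the two numbers fit together: compounding roughly 5% a year for 20 years does take 90 Wh/kg to about 240 Wh/kg.

```python
# Compound-growth check of the energy density trend quoted above.
start_wh_per_kg = 90
annual_growth = 1.05
years = 20

end_wh_per_kg = start_wh_per_kg * annual_growth ** years
print(f"{end_wh_per_kg:.0f} Wh/kg after {years} years")   # about 239 Wh/kg
```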

Picking up the pace of energy density improvements would require breakthroughs on both the electrodes’ materials and the electrolyte fronts. The biggest awaited leap would be to introduce elemental sulfur or air as a positive electrode and use metallic lithium as a negative electrode.

In the labs

Lithium-sulfur batteries could potentially bring a twofold improvement over the energy density of current lithium-ion batteries to about 400 Wh/kg. Lithium-air batteries could bring a tenfold improvement to approximately 3,000 Wh/kg, mainly because using air as an off-board reactant – that is, oxygen in the air rather than an element on a battery electrode – would greatly reduce weight.

A lithium air battery uses oxygen from the air to drive an electrochemical reaction – if only it could be made to work outside the lab.
Na9234/wikimedia, CC BY

Both systems are intensively studied by the research community, but commercial availability has been elusive as labs struggle to develop viable prototypes. During the discharge of the sulfur electrodes, the sulfur can be dissolved in the electrolyte, disconnecting it from the electronic circuit. This reduces the amount of lithium that could be removed from the sulfur during the charge and hurts the overall reversibility of the system.

To make this technology viable, critical milestones must be reached: improve the positive electrode architecture to better retain the active material or develop new electrolytes in which the active material is not soluble.

The lithium-air battery, too, suffers from this difficulty of being repeatedly recharged as a result of problems caused by reactions between the electrolyte and air. Also, with both technologies, protection of the lithium electrode is an issue that needs to be solved.

Savior in sodium?

For all of the aforementioned batteries, lithium is an essential component of the battery. Lithium is a fairly abundant element around the world but unfortunately only at trace levels, which prevents its worldwide commercial extraction. Although it is found in harvestable conditions in a few ores that could be mined, most of the production of lithium comes from brines of high-altitude salt lakes, mostly in the Andes in South America.

Despite this relatively difficult extraction, lithium carbonate can be found at around US$6 per kilogram, and since an electric vehicle battery pack requires only about three kilograms of lithium carbonate, its cost is not a major concern to date.

Where the lithium in your batteries comes from: the high-altitude salt lakes of South America.
Ricampelo/wikimedia, CC BY

The concern here is more about geopolitics: every country seeks energy independence, and replacing oil with lithium batteries as a transportation fuel simply shifts the dependence from the Middle East to South America.

One possible solution would be to replace lithium with the element sodium, which is 2,000 times more abundant.

Electrochemically speaking, sodium is almost comparable to lithium, which makes it an extremely good candidate for batteries. Research on sodium-ion batteries has exploded in recent years, and their performance, once commercialized, could be on par with that of their lithium-ion counterparts.

While sodium-ion batteries might not bring any significant cost or performance advantage over lithium-ion technology, they could offer a path for every country to manufacture its own batteries from readily available resources.

No cure-all

No matter what, all of these emerging technologies are likely to suffer from the same safety concerns as the current lithium-ion cells. The threat comes from the flammable solvent-based electrolyte which makes it possible to operate at voltages above two volts.

Indeed, because water splits into oxygen and hydrogen above two volts, it cannot be used in three volt-class lithium or sodium batteries and has been replaced by expensive flammable carbonate solvents. Alternatives such as solvent-free electrolytes do not provide a good enough conductivity for ions at room temperature to handle high-power applications, such as powering a car, and are therefore not used in commercial cells.

Fortunately, with the current lithium-ion technology, it has been estimated that only one in 40 million cells undergoes a dramatic failure such as a fire. Although the risk cannot be fully suppressed, engineering controls and conservative designs can keep it in check.

In sum, the current lithium-ion batteries offer fairly good performances. Emerging chemistries such as lithium-sulfur or lithium-air have the potential to revolutionize portable energy storage applications, but they are still at the lab research stage with no guarantee of becoming a viable product.

For stationary energy storage applications such as storing wind and solar energy, other types of batteries, including high-temperature sodium-sulfur batteries or the redox flow batteries, might prove more sustainable and cost-effective candidates than lithium-ion batteries, but that could be a story for another article.


Matthieu Dubarry is Assistant Researcher in Electrochemistry and Solid State Science at the Hawaii Natural Energy Institute at University of Hawaii.
Arnaud Devie is Postdoctoral Research Fellow at the Hawaii Natural Energy Institute at University of Hawaii.

This article was originally published on The Conversation.
Read the original article.

Harvesting usable fuel from nuclear waste – and dealing with the last chemical troublemakers


This article is part of The Conversation’s worldwide series on the Future of Nuclear. You can read the rest of the series here.

Nuclear energy provides about 11% of the world’s total electricity today. This power source produces no carbon dioxide during plant operation, meaning it doesn’t contribute to climate change via greenhouse gas emissions. It can provide bulk power to industry and households around the clock, giving it a leg up on the intermittent nature of solar and wind.

It also receives widespread contempt for a variety of reasons – many purely emotional and with little or no scientific grounding. The most pressing legitimate issue is the management of used nuclear fuel, the waste by-product that needs to be removed from the reactor and replaced with fresh fuel to sustain power generation.

Ongoing research is tackling this problem by attempting to figure out how to transform much of what is currently waste into usable fuel.

The nuclear fuel cycle.
Nuclear Regulatory Commission, CC BY

How do reactors generate nuclear waste?

The reaction that produces energy in a nuclear reactor takes place in the nuclei of atoms – hence the name. One atom of uranium-235 (which contains 92 protons and 143 neutrons) absorbs a neutron and splits into two new atoms. This process releases large amounts of energy and, on average, 2.5 new neutrons that can be absorbed by other uranium-235 atoms, propagating a chain reaction. This process is called fission. The two new atoms are called fission products. They contribute to most of the short- to medium-term radioactivity of the fuel upon discharge from the reactor.

Removing part of the core and replacing it with fresh fuel.
IAEA Imagebank, CC BY-SA

Fission is most likely to take place in heavy atoms. Nuclear engineers and nuclear chemists focus on the heaviest elements – that is, the actinides, located at the very bottom of the periodic table. The fission process continues, consuming fuel, until so few burnable (fissile) atoms remain that running the reactor is no longer economical. Then the reactor is temporarily shut down for refueling. A third of the core is removed and replaced with fresh fuel. The remaining two-thirds of the core is shuffled around to optimize the power production. The leftover material, the used fuel, is highly radioactive and physically hot, and must therefore be cooled and shielded for safety reasons.

In a commercial power reactor, brand new unused fuel consists of 3%-5% uranium-235, with the balance being uranium-238. The heavier uranium-238 isotope will not fission but can transform to an even heavier isotope, uranium-239, via a process called neutron capture. Continued neutron capture eventually produces a suite of elements heavier than uranium (so called trans-uranics), some of which will fission and produce power, but some of which will not.

These trans-uranic, actinide elements – including neptunium, plutonium, americium and curium – have one thing in common: they contribute to the long-term radioactivity of the used fuel. After the energy-generating fission reaction, the fission products’ radioactivity decreases rapidly. But because of the other trans-uranic elements in the mix, the material needs to be isolated until deemed safe – on the order of millions of years.

At least 23 feet of water covers the fuel assemblies in the spent fuel pool at the Brunswick Nuclear Power Plant in Southport, North Carolina.
Matt Born/Wilmington Star-News, CC BY

Upon discharge from the reactor, the used fuel contains only about 3%-4% fission products. The rest is uranium and trans-uranics that weren’t part of the fission reaction. Most of the material is the original uranium-238, still perfectly suited to use in new fuel, as are the remaining uranium-235 and plutonium-239 (combined, about 1.5% of the used fuel).

Disposing of this material as waste is like taking one small bite of a sandwich and then throwing the rest in the trash. It’s no surprise then that several countries are recycling nuclear fuel to recover the remaining useful material. Other countries are revisiting these options, at least on a research basis.

Scope of the waste problem

A typical power reactor (1 GWe) produces about 27 metric tons of used fuel each year, in order to generate the electricity needed to power 700,000 homes (assuming an average American home consumes about 11,000 kWh annually and a power plant has an average capacity factor of 85%). For comparison, a coal plant of similar power output will produce 400,000 metric tons of ash.
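The household figure follows directly from the stated assumptions; a quick check:

```python
# Reproduce the ~700,000-home figure from the assumptions given in the text.
plant_capacity_kw = 1_000_000          # 1 GWe of electrical output
capacity_factor = 0.85                 # fraction of the year at full power
household_kwh_per_year = 11_000        # average US household consumption

annual_generation_kwh = plant_capacity_kw * 8760 * capacity_factor
homes_powered = annual_generation_kwh / household_kwh_per_year
print(f"{homes_powered:,.0f} homes")   # roughly 677,000, close to 700,000
```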

Once spent fuel has cooled, it’s loaded into special canisters.
Nuclear Regulatory Commission, CC BY-NC-ND

The world’s nuclear power capacity is on the order of 370 GW, which corresponds to about 10,000 metric tons of used fuel generated each year worldwide. The total amount of used fuel in the world (as of September 2014) is around 270,000 metric tons, of which the US is storing about 70,000 metric tons.

The first round of reprocessing waste

Removing uranium and plutonium from used fuel relies on a chemical process. Reprocessors dissolve the used fuel in acid and treat it with organic solvents to selectively remove the elements of interest and leave the unwanted elements behind. Commercial plants all use more or less the same method, PUREX (Plutonium Uranium Reduction EXtraction).

Originally invented in the US in the late 1940s, over the years PUREX has been adapted slightly to improve its performance. This process doesn’t separate out elements heavier than plutonium. The waste product after the reprocessing still needs to be isolated for what is essentially an eternity.

The benefit, though, is that it can recycle about 97% of the spent fuel, massively decreasing the volume of waste. The bulk of the material can then be made into new reactor fuel containing a mix of uranium and plutonium, so-called mixed oxide or MOX-fuel.

Major reprocessing plants are located in the UK, France and Russia. India has some capacity, and Japan has a reasonably large plant that was recently completed but is currently not used. Global reprocessing capacity of commercial fuel is around 4,000 metric tons per year. To date about 90,000 metric tons of used fuel has been reprocessed, about 30% of the total amount of used fuel produced in commercial reactors.

Some countries that do not have their own reprocessing plants ship material to countries that do, such as France. It’s expensive to invest in reprocessing infrastructure. It can also be a political decision not to do so, as in the US, because the technology can be used to create material for weapons (this was the original use in the 1940s). Of course, all reprocessing plants are under the scrutiny of the International Atomic Energy Agency, and must account for all processed material to ensure that nothing is diverted for potential use in weapons.

IAEA inspectors seal the spent fuel pond at Dukovany Nuclear Power Plant in the Czech Republic.
IAEA Imagebank, CC BY-SA

Dealing with that last 3%

But that level of reprocessing doesn’t completely solve the issue of used nuclear fuel. My research at UC Irvine, as well as that of other labs around the world, focuses on new ways to deal with the last few troublemakers in the used nuclear fuel.

We’re working on how to remove the remaining long-lived trans-uranic actinides with an efficiency high enough that the remaining nuclear waste’s isolation time would be decreased to 1,000 years or less. Maybe this still sounds like a long time, but the world is full of structures that have lasted for more than 1,000 years; we should be confident that we can construct something that will last a millennium. We could also, with reasonable confidence, create signs or informational material to mark the storage that people 1,000 years from now could reliably interpret.

While removing uranium and plutonium is readily done (as via PUREX), the next separation step is a grand challenge for various reasons. One is that many of the remaining fission products behave chemically very similarly to americium and curium. This requires highly specialized chemicals that are often complex and expensive to synthesize. The radioactive nature of the material provides an additional layer of complexity; the radiation is not only hazardous for people but will also break down the chemicals needed for separation and may speed up corrosion and damage the equipment used in these processes.

The research efforts under way focus on developing new chemical reagents that are more stable with regard to radiation, more selective for the elements we are interested in recovering, and easier to make. Because of this, a lot of effort goes to fundamental studies of the chemical interactions between reagents and elements in used fuel. The problem at hand has been described as a chemists’ playground and an engineers’ challenge.

The bottom line is that none of this is science fiction. Getting to a point at which almost all nuclear waste can be repurposed poses a grand challenge, perhaps comparable to putting a man on the moon, but it is not impossible.

This article is part of The Conversation’s worldwide series on the Future of Nuclear. You can read the rest of the series here.


Mikael Nilsson is Associate Professor of Chemical Engineering and Materials Science at University of California, Irvine.

This article was originally published on The Conversation.
Read the original article.

Sorry Nerds, There’s No Warp Drive


It makes for a sensational headline but NASA didn’t even come close to discovering warp technology.

The mechanism behind their fuel-free propulsion has no clear link to warping space-time. In fact, space-time is not proven or understood to exist as a material substance able to warp. It’s all nonsense. So what really happened?

Richard Feynman once said: “The first principle is that you must not fool yourself – and you are the easiest person to fool.”

You should have been suspicious when the story made the rounds on social media. The headlines were claiming NASA had successfully tested something called the EM Drive. The EM Drive is awesome, and it’s real science. It’s a propulsion engine that doesn’t use propellant, which seems to violate the laws of physics by creating a reaction with no initial action.

First, let’s examine the actual finding. NASA has developed a hollow device that can be pumped full of electromagnetic radiation which, trapped inside the chamber, reflects back and forth and generates thrust, causing the device to accelerate in a direction determined by the shape of the chamber. You might have seen the story, or similar reports, over the last year: iterations of the device have been built by Roger Shawyer (the EM Drive), by a Chinese group led by Juan Yang, and by Guido Fetta (the Cannae Drive), all claiming successful thrust. The stories on science news sites claim the acceleration created is caused by the warped space of an Alcubierre drive, the completely fictional “Star Trek” design.

Here are some problems. First off, none of the tests showed results from gradations in power. If this is a viable prototype for an engine, the science behind it hasn’t proven why a tiny acceleration in return for a huge amount of input power is worth any sort of real consideration for space travel. It’s a weak engine with no sign of how it can be scaled.

Secondly, the thrust they created is so small it might just be a mistake in mathematics or caused by an unknown factor, unrelated to warp tech. A true test requires an isolated environment, with atmospheric, gravitational and electromagnetic effects removed from the equation.

Thirdly, good science is reproducible. These tests lack a transparent design, so no one else can verify that this actually works.

Finally, a proper report that can be peer-reviewed and understood has to be produced before such claims are published.

Optimism of this sort, claiming to be able to put people on Mars with a warp engine, is not scientifically valid. This latest group declared they have broken the previously held laws of physics. They assume we can scale up and implement this engine for space propulsion just because of some questionably positive results. They claim to be distorting space; they claim they might be causing light to go faster by approximately 10^-18 m/s. They made these claims without actually proving them, and told the general public, spreading misinformation.

Harold “Sonny” White at NASA has made extraordinary claims about warp drive in the past. He is totally the kind of guy who would jump to warp drive as a conclusion. There is nothing in NASA’s report that shows they’ve created a warp drive. Sorry, Star Trek and Star Wars fans. Most likely this is a public relations move to get America and the world’s science communities more excited about space travel and science education.

Jonathan Howard
Jonathan is a freelance writer living in Brooklyn, NY

Fact Check: has the amount of renewable electricity trebled?


Renewable electricity has nearly trebled under this government.

Ed Davey, Liberal Democrat energy and climate change minister, during an environment debate held by the Daily Politics show.

Amid the climate of mistrust about claims made by politicians that tends to accompany election campaigns, it is reassuring to report that the evidence supports the minister’s statement.

According to official figures on renewable electricity, installed generation capacity in the UK in 2014 was 2.6 times higher than in 2010, while actual generation of renewable electricity was 2.5 times higher. For further detail see the methodology for data collection by the Department of Energy and Climate Change.


CC BY

As well as electricity generated by onshore and offshore wind, solar photovoltaic, hydro, and shoreline wave/tidal energy, these figures include electricity generated by organic material. This includes landfill gas, sewage sludge, waste, animal biomass (poultry litter, meat and bone), wet biomass waste (such as animal manure and slurry), and plant biomass (including straw and energy crops).

Although most of this involves generating electricity by combustion, which yields CO2 as a byproduct in the same way as fossil fuel combustion, these organic materials decompose naturally to produce CO2 and other greenhouse gases anyway, so using them to generate electricity has little effect on net greenhouse gas emissions.

The main growth area was wind, which accounts for about half of Britain’s renewable electricity, followed by photovoltaic and plant biomass.

Provisional data shows that renewables contributed a record 19.2% of electricity generation in 2014.

This increase in renewable electricity can be attributed in part to the Renewables Obligation set up by the previous Labour government in 2002, which obliged electricity supply companies to source an increasing proportion of their electricity from renewable sources. However, the Contracts for Difference feed-in tariff system introduced by the coalition for large-scale energy generation – which established a set price that is high enough to enable investors to make a profit – is probably responsible for increasing the rate of growth. Expert studies have shown that feed-in tariffs are more effective than quota systems in encouraging investment in renewable electricity.

The flaw in the coalition’s strategy is that, while in other countries electricity supply companies are obliged to buy all the electricity generated by renewable sources at the feed-in tariff price (which is higher than the market price), in Britain supply companies buy at the market price and the government pays the difference. As a consequence, expansion of renewable electricity is limited not only by investors’ willingness to invest and planning issues, but also by budgetary constraints.
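To make the mechanism concrete, here is a minimal sketch of how such a top-up payment works under the arrangement described above. The strike price, market price and output are hypothetical numbers chosen purely for illustration.

```python
# Illustrative Contracts for Difference top-up: the generator sells at the
# market price and the scheme pays the difference up to the agreed strike
# price. All figures below are hypothetical.
strike_price_gbp_per_mwh = 90.0        # agreed price that guarantees a return
market_price_gbp_per_mwh = 45.0        # wholesale price actually achieved
annual_output_mwh = 100_000            # output of a hypothetical wind farm

top_up_gbp = (strike_price_gbp_per_mwh - market_price_gbp_per_mwh) * annual_output_mwh
print(f"Difference paid under the scheme: £{top_up_gbp:,.0f}")
```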

With spending cuts the order of the day, we cannot be sure that the current rate of expansion will be sustained.

Verdict

Ed Davey’s claim that renewable electricity has almost trebled during the coalition’s term of office is accurate, but flaws in Britain’s feed-in tariff system mean that further expansion may be limited.

Review

The coalition certainly deserves some credit for its record on creating new sources of renewable electricity, but it is also true that fear of electoral unpopularity over high energy bills has limited what it achieved. The gains could have been greater still if energy companies had been required to purchase higher-tariff renewable energy from distributed sources.

However, when energy for heating and transport (which still depends primarily on fossil fuels) is taken into account, renewables represent only 5% of energy supply in the UK, according to the 2014 Digest of UK Energy Statistics. There is widespread acceptance, including by the UK government, that greenhouse gas emissions need to be reduced to keep global temperatures from rising beyond 2°C.

The Centre for Alternative Technology’s report Zero Carbon Britain suggests that it is helpful to think of the world as having a finite amount of greenhouse gases it can emit while staying within the 2°C threshold. This is known as the emissions budget. The report puts the UK’s share of the global emissions budget (offering a 75% chance of keeping below 2°C) at about 10,000 MtCO2e between now and 2050. At the current emission rate the country will have produced 16,000 MtCO2e by the middle of the century.
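
To get a feel for the gap between those two figures, here is a back-of-envelope sketch that spreads the projected 16,000 MtCO2e evenly over the roughly 35 years to 2050. The constant-rate assumption is a simplification of my own, not something taken from the report.

# Back-of-envelope check on the emissions budget figures quoted above.
# Assumes, purely for illustration, a constant annual emission rate.

budget_mt = 10_000      # UK share of the global budget to 2050, MtCO2e
projected_mt = 16_000   # projected cumulative emissions at current rates, MtCO2e
years_to_2050 = 35      # roughly 2015 to mid-century

annual_rate = projected_mt / years_to_2050    # about 457 MtCO2e a year
years_until_spent = budget_mt / annual_rate   # about 22 years

print(f"Implied annual emissions: {annual_rate:.0f} MtCO2e")
print(f"Budget used up after roughly {years_until_spent:.0f} years")
print(f"Overshoot by 2050: {projected_mt - budget_mt:,} MtCO2e")

On that crude assumption the UK’s share of the budget would be exhausted in a little over two decades, well before 2050, leaving a 6,000 MtCO2e overshoot by mid-century.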

Voters would do well to look past the rhetoric over renewable electricity and ask which party’s policies are most likely to decarbonise all parts of the country’s energy budget, and how quickly they say they can do this. – Erik Bichard

The Conversation is fact checking political statements in the lead-up to the May UK general election. Statements are checked by an academic with expertise in the area. A second academic expert reviews an anonymous copy of the article.

Click here to request a check. Please include the statement you would like us to check, the date it was made, and a link if possible. You can also email [email protected]

The Conversation

This article was originally published on The Conversation.
Read the original article.

No more meters? Make energy a service, not a commodity


Imagine never again receiving an energy bill. Instead, you could pay a flat fee for “comfort”, “cleanliness” or “home entertainment” alongside a premium for more energy-demanding TVs, kettles or fridge-freezers. This isn’t the stuff of science fiction – it’s emerging right now. Recent changes in technology and regulation are enabling the development of new ways to provide electricity and gas.

The energy economy is changing fast. By the 2030s, the power sector will have to be substantially decarbonised if the UK is to meet its emissions targets. This means a lot more renewables, which in turn means more intermittent and variable electricity supply.

Related technological developments, in solar and storage for example, together with wider developments in IT and data, including the roll-out of smart meters, have the potential to transform contemporary systems of provision.

For instance, the capacity to pool and aggregate data about patterns of energy demand may enable the emergence of new business models in which “energy” suppliers also have a role in providing or managing appliances and the forms of heating, lighting, cooling, computing and entertainment that these enable.
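
As a purely hypothetical illustration of the kind of pooling this involves, the snippet below aggregates invented half-hourly smart-meter readings from a handful of households into an average demand figure for each time slot. None of the data or names refer to any real system.

# Hypothetical sketch of pooling smart-meter readings into a demand profile.
# Each reading is (household, half-hour slot 0-47, consumption in kWh);
# all values are invented for illustration.
from collections import defaultdict

readings = [
    ("house_a", 36, 0.9), ("house_a", 37, 1.1),
    ("house_b", 36, 0.4), ("house_b", 37, 0.6),
    ("house_c", 36, 1.3), ("house_c", 37, 1.0),
]

totals = defaultdict(float)
counts = defaultdict(int)

for household, slot, kwh in readings:
    totals[slot] += kwh
    counts[slot] += 1

# Average demand per household in each half-hour slot (an evening peak here).
for slot in sorted(totals):
    print(f"Slot {slot}: {totals[slot] / counts[slot]:.2f} kWh per household")

Pooled profiles like this, built at scale, are what would let a supplier move from selling units of electricity towards managing when and how appliances use it.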

Developments such as these call the very identity of “the provider” into question: as small-scale wind and solar power becomes more common, consumers are increasingly also producers. Even when this is not the case, there are various new “non-traditional” market entrants, some of whom have equally non-traditional ambitions, such as ensuring that energy is more affordable, especially for those on low incomes or the elderly.

An example is the growing interest of public bodies (local authorities, housing associations and the like) in energy generation and supply, with the aim of delivering greater affordability and fairness to consumers. These offerings may also be aimed at local communities or communities of interest, and may bundle basic energy supply with additional services such as energy efficiency measures.

In effect energy supply is being repackaged as a form of service provision.

Thinking of energy as a service

We all pay energy bills and we understand that energy is delivered through wires and pipes into boilers, TVs, kettles and so forth. However, it is not the energy, as such, that consumers value.

In paying energy bills, people are really paying for the services that energy makes possible: for thermal comfort, for entertainment or for a cooked meal. In other words, it is the ability to watch a favourite TV soap (while consuming a favourite TV dinner) and the cosiness of the home that really matters.

‘Entertainment’, not kilowatt-hours.
Al Ibrahim, CC BY-SA

This isn’t just an academic distinction. Whether energy is seen as a commodity or a service is fast becoming a crucial factor in how the sector is organised and regulated.

The rhetoric of consumer “empowerment” in the energy market, and the ambition to provide people with more knowledge about their energy use, only make sense if we think of energy as a uniform commodity.

By contrast, if we see energy as being embedded in a huge variety of different practices – that is, if we think of energy as something that is in a sense part of writing emails, watching TV or making dinner – then demand reduction is not about energy as such but about changing the details of daily life.

Recognising these differences helps us understand otherwise puzzling findings, such as why more and better information about energy use doesn’t automatically translate into energy-saving actions.

Who will provide this service?

The commodity-service distinction is also useful in thinking about how relationships between consumers and providers might be configured now and in the future. If we think of energy not as a commodity but as something that is incorporated in the provision of services we should also think of energy providers as service providers.

We are already seeing the emergence of Energy Service Companies (ESCos) which guarantee a fixed energy bill as long as the company can install efficiency measures in your home or office. Other providers offer multi-utility tariffs, bundling together rent, water and energy into a single bill.
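
To make the bundling idea concrete, here is a minimal, entirely hypothetical sketch of how a single multi-utility bill might be assembled from flat-fee service lines. The service names and prices are invented and do not describe any real tariff.

# Hypothetical "energy-plus" bundle: one flat monthly bill built from
# named service lines. Names and prices are invented for illustration.
from dataclasses import dataclass

@dataclass
class ServiceLine:
    name: str
    monthly_fee: float  # flat fee in pounds per month

def monthly_bill(services):
    # A single bill is simply the sum of the bundled flat fees.
    return sum(line.monthly_fee for line in services)

bundle = [
    ServiceLine("rent", 650.00),
    ServiceLine("water", 30.00),
    ServiceLine("comfort (heating to 20C)", 55.00),
    ServiceLine("home entertainment", 20.00),
]

print(f"Single monthly bill: £{monthly_bill(bundle):.2f}")

The point of the sketch is simply that the customer sees one predictable figure for a set of services, while the provider takes on the job of buying and managing the underlying energy.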

At the moment it is unclear whether these novel forms of “energy-plus” provision are forerunners of arrangements that are set to become the norm, or whether they will remain niche solutions for a few. Whatever happens, these moments of flux remind us that energy and energy services are never set in stone.

The Conversation

This article was originally published on The Conversation.
Read the original article.