Category Archives: Energy

Why is low-carbon energy innovation so slow?


The world needs a lot of energy. Global energy demand is expected to increase by 37% over the next 25 years, according to the International Energy Agency’s World Energy Outlook 2014. Meeting this demand without significantly increasing carbon emissions requires new energy technologies that involve low or no carbon dioxide emissions.

It is widely recognized that the government plays a critical role in advancing innovation and technology development. State and federal policies have driven deployment of solar and wind power as well as development of hydraulic fracturing, which helped spark a boom in oil and gas drilling in the US.

Many argue that we’ll need far more innovation in the future. For example, the American Energy Innovation Council, a group of high-profile business people, has called for a significant increase in government funding for research, development and demonstration (RD&D) of energy technologies. The group, which includes Bill Gates, General Electric CEO Jeffrey Immelt and others, also proposed ways to better deploy federal research funds.

The report raises a question vital for the future: which government policies are most effective at accelerating energy innovation? A study of the economic forces in energy and the environment, some of which differ from those in other industries, can help answer this.

Where markets fail

To start, it is important to understand that there are two fundamental market failures in the RD&D of clean energy technologies. A market failure is a situation where the allocation of resources does not happen efficiently. These market failures lead to underinvestment in energy innovation by the private sector and provide justification for government interventions. Separate policy instruments need to be in place to deal with these different market failures.

When one company invests in research, it often leads to technology and knowledge that benefit society overall. That’s good for advancing energy innovation. But because other companies and users can take advantage of that knowledge, individual companies cannot capture the full value of investing in RD&D.

A car made by a 3-D printer (really) came out of a government program to develop light-weight and energy-efficient vehicles.
US Department of Energy.

As a result, the invisible hand of the market produces too little innovation. The more basic and long-term the research, the worse the problem tends to be. There are also additional market failures in the adoption and diffusion of innovation, as the cost or value of a new technology to one user may depend on how many other users there are.

Second, greenhouse gas emissions constitute an “externality,” or unintended consequence, because the costs of climate change are borne by parties other than the emitters. Hence, current market prices for fossil fuels don’t reflect the full social cost of their consumption. With the “mispricing” of fossil fuels, the invisible hand of the market suppresses demand for alternative energy technologies that are substitutes for fossil fuel technologies.

To overcome the market failure in knowledge creation, the government needs to fund and invest in basic research and provide subsidies and tax incentives for corporate investment in RD&D on energy innovation.

Moreover, public support of costly and risky early-stage deployment and demonstration of alternative energy technologies is well justified. It is important to disseminate information about the operation, maintenance and incremental improvement of new energy technologies that currently might be more expensive and/or less reliable than existing ones. For example, a research program that achieves a better battery for storing energy on the electric grid could benefit all electricity users.

The current US Energy secretary, right, has been an advocate of ARPA-E, a research agency formed to fund breakthrough energy technologies, such as this project to store more natural gas in vehicles.
ARPA-E/Department of Energy

The government also needs to address the other market failure of the “mispricing” of fossil fuels. It may impose a tax on carbon or a “cap and trade” system that alters prices to reflect more adequately the environmental costs. Or it could enact a specific regulatory requirement (such as emission or performance targets) to correct the failure. Such policies would create a higher demand for low-emission technologies and provide incentives for private investment in climate-friendly technologies.

Insights from other industries

In an intriguing book titled Accelerating Energy Innovation: Insights from Multiple Sectors, experts on the history of innovation offered interesting insights about the roles that the US federal government has historically played in catalyzing innovation and technology development in a variety of industries, including agriculture, chemicals, life sciences, computers and semiconductors, and the internet. They found three broad mechanisms through which government policies have served to accelerate innovation:

  1. provision of “substantial, sustained and effectively managed federal funding” for basic research

  2. creation of a growing demand for innovation, either through procurement or through the market, and

  3. encouragement of extensive competition and entry by newly founded firms.

Note that these industries, unlike energy, don’t face a severe market failure of climate and environmental externalities. Nevertheless, the government still played a critical role in stimulating demand, particularly in the early stages of technology development. Thus, it is all the more important to have policies to address the “mispricing” of fossil fuels to increase the demand for and induce private investment in low-emission technologies.

Moreover, history shows the importance of the government’s role in enabling vigorous competition and entry to stimulate innovation. This is particularly relevant for energy innovation. Combating climate change will involve development and deployment of many different technologies in a diverse array of sectors, varying from the electric grid to transportation and other industries. That’s why it is important for the government to foster vigorous competition, rather than pick “winners” and “losers” among technologies and companies.

The Conversation

This article was originally published on The Conversation.
Read the original article.

How the energy grid handles the surge after a solar eclipse


The solar eclipse due to cover much of Europe on March 20 will be the continent’s first for 16 years. Back in 1999, as people stopped staring at the sun and got back on with their day they caused a power surge which still stands as a UK record – greater than anything after a football match or royal wedding.

At the time, solar power made up just 0.1% of all Europe’s electricity produced from renewables. Since then it has increased to at least 5% as countries subsidise renewables to meet EU targets. The installed capacity of solar power in continental Europe is expected to reach 90 GW this year, comparable to 150 coal-fired plants.

Under clear skies, regulators expect some 35 GW of solar energy to fade away with the eclipse before being re-injected into Europe’s electrical system.

This is a big test for solar power. It’s the first time such an event could have a significant impact on those European countries with lots of solar panels, such as Germany (with 44% of continental Europe’s installed capacity), Italy, Spain and France. What’s more, European grid integration means everyone could be affected.

Keeping the grid stable during the eclipse is the main concern for power utilities. The electrical grids of continental Europe are linked together in what’s called the European synchronous area, allowing countries to juggle excess energy between them to meet demand. Grid regulators will have to coordinate across regions to manage the solar drop-off and demand, all in real time.

Europe’s solar superpowers are all part of the same big grid.
Wikimedia, CC BY-SA

According to ENTSOE, Europe’s association of national grid operators, around 50% of the lost power will come from Germany and 21% from Italy. The grid will be losing 0.4 GW/minute at the start of the eclipse and gaining 0.7 GW/minute as the sun returns.

This is a huge amount of electricity generation to come online at once. The European power system will need to adapt in real time, with countries helping each other by providing the necessary reserves of coal, gas and hydropower to keep things running.
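A back-of-the-envelope check, using only the figures quoted above (35 GW fading away, 0.4 GW/minute lost at the start, 0.7 GW/minute regained as the sun returns), gives a sense of the timescales operators must plan for. Treating the ramp rates as constant is a simplification of my own; real ramps vary through the eclipse.

```python
# Rough timescales for the eclipse ramp, assuming (simplistically)
# constant ramp rates throughout. Figures are the ENTSOE numbers
# quoted in the text above.
lost_solar_gw = 35.0        # solar output expected to fade away
ramp_down_gw_per_min = 0.4  # rate of loss at the start of the eclipse
ramp_up_gw_per_min = 0.7    # rate of recovery as the sun returns

# Time to lose / regain the full 35 GW at those constant rates
t_down_min = lost_solar_gw / ramp_down_gw_per_min   # ≈ 87 minutes
t_up_min = lost_solar_gw / ramp_up_gw_per_min       # ≈ 50 minutes

print(f"Ramp down: ~{t_down_min:.0f} min, ramp up: ~{t_up_min:.0f} min")
```

The faster recovery ramp is the harder problem: conventional reserves must back off at 0.7 GW every minute as solar floods back in.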

Electric grids operate on the principle that demand and supply need to be carefully balanced. In continental Europe, grids are connected as one synchronous network. The normal operating frequency is 50 Hz, with two statutory limits: an upper limit of 50.5 Hz and a lower limit of 49.5 Hz. Above the upper limit generators will trip, while below the lower limit demand will be disconnected, meaning power won’t reach end users and blackouts will occur.

Don’t drop below 50, Keanu.
Speed / 20th Century Fox

Think of it like driving a car with a target speed of 50 miles/hr (+/-0.5mph). The accelerator is generation; the forces dragging on the car represent demand. Keeping the frequency at 50 Hz means maintaining secure operation of the system, quality of supply and economical operation.
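The car analogy can be made concrete with a toy model: frequency drifts away from 50 Hz in proportion to the supply-demand imbalance. The sensitivity constant below is an illustrative value I have chosen to make the drift visible, not a real figure for the European grid.

```python
# Toy model of grid frequency: frequency drifts in proportion to the
# supply-demand imbalance. The constant k lumps together system inertia
# and is an illustrative value only (not the real European figure).
def simulate_frequency(gen_gw, demand_gw, minutes, f0=50.0, k=0.01):
    """Step the frequency forward one minute at a time."""
    f = f0
    history = []
    for _ in range(minutes):
        f += (gen_gw - demand_gw) * k   # shortfall pulls frequency down
        history.append(f)
    return history

# Generation running 1 GW short of demand for 20 minutes:
trace = simulate_frequency(gen_gw=349.0, demand_gw=350.0, minutes=20)
print(f"Frequency after 20 min: {trace[-1]:.2f} Hz")  # 49.80 Hz, heading for the 49.5 limit
```

In the real system, governors and reserves react within seconds to arrest such a drift; the sketch only shows why a sustained imbalance is intolerable.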

Solar surging

Despite the impact on solar energy, the eclipse’s most significant effect on the power grid will actually be a result of human behaviour. During the 1999 eclipse in the UK, people stopped working to enjoy the phenomenon, as highlighted in this graph by Solar Power Portal.

UK power demand during the 1999 eclipse (red) compared with the previous day (blue).
Solar Power Portal / National Grid

The 3 GW surge was greater than the 2.8 GW surge after England played Germany in the 1990 world cup semi-final, or the 2.4 GW surge after the 2011 royal wedding.

Something similar will happen this time round. There will be a drop in demand before maximum eclipse, and increased demand after as people come back inside, turn on the TV and fire up the kettle.

In this time the system will have to adapt to the changing load and this could be a challenge. Planning is critical, as you can’t just instantly increase electricity generation. But electricity surges happen from time to time – the 2014 World Cup provides a recent example – and grid coordinators have contingency measures in place to balance supply and demand.

The solar eclipse on Friday is a good test for the future, when even more electricity will be produced from renewables like solar and wind, which are more volatile and dispersed forms of generation. Germany, Italy and the US are among the countries where solar will make up a large proportion of overall power. In such situations, even cloudy weather will cause a big drop in production.

If you want to look at what happened during the eclipse, BM Reports provides near real-time and historic data on the UK’s national grid balancing system – electricity generation, system prices, warnings, frequency, wind forecasts, market activity, and so on. So, while most people will spend Friday morning observing the sun with their pinhole cameras, I’ll be keeping one eye on the grid.

The Conversation

This article was originally published on The Conversation.
Read the original article.

The green billboard


The new Porter School of Environmental Studies at Tel Aviv University is now the first platinum LEED-certified building in Israel, a credit given to the most exemplary green projects worldwide. It stands in a select group of only 17 buildings internationally that have passed the 90-credit mark. Alongside this impressive credential, it enjoys a…

Pumping CO2 underground can help fight climate change. Why is it stuck in second gear?


There are many uncertainties with respect to global climate change, but there is one thing about which I have no doubts: we will not solve climate change by running out of fossil fuels.

Understanding this leads to three possible pathways we can follow to lower greenhouse gas concentrations, and explains why I’ve chosen to focus my research on carbon capture and storage. We can:

  • continue to burn fossil fuels with little or no restrictions. We will blow by atmospheric CO2 concentrations of 450 parts per million (ppm), the level many scientists think is dangerous to exceed, and we will keep on going. We have enough fossil fuels to easily go above 1,000 ppm. The impact will be significant, perhaps catastrophic

  • restrict fossil fuel use so that we leave most of our coal, oil and gas in the ground (see chart below). More than 85% of commercial energy today is supplied by fossil fuels. Given that there is no political will to increase the gasoline tax even by a nickel, do we have the political will to pass policy that will cause hundreds of trillions of dollars of assets to remain in the ground?

  • find a technology that lets us use our fossil fuel reserves without emitting CO2 into the atmosphere. Such a technology exists today. It is called Carbon dioxide Capture and Storage, or CCS for short.

To deploy CCS on the scale required is a monumental task. We need to store billions of tons of CO2 annually. However, this is the level of effort needed to address climate change. Similar efforts will be needed with other climate mitigation technologies, such as renewables, nuclear and efficiency. There is no silver bullet; we need them all.

As of now, however, CCS is used very little, nowhere near the scale required to make a meaningful dent in emissions. Why? The reasons have less to do with technology maturity and more to do with government policies and the commercial incentives they create.

How does CCS work?

A CCS system has three major components:

  • capture, where the CO2 is removed by chemical processes from power plants and other large industrial facilities, such as refineries or cement plants. The captured CO2 is generally compressed to a liquid-like state

  • transport, mainly through pipelines

  • storage, into deep geologic formations at depths greater than 800 meters (2,600 feet).

All the necessary components of a CCS system are in commercial use today somewhere in the economy. To be considered commercial-grade, these different components need to be integrated and scaled up.

How much carbon is in the ground and how burning it would increase CO2 in the atmosphere.
UN IPCC

The basic process for capturing CO2 from gas streams was invented in the 1930s. The first installation on an industrial boiler started up in 1978 in Trona, California. In Saskatchewan, Canada, the first commercial scale operation at a power plant (a coal-burning plant that generates 110 megawatts and emits more than 1 million tons of CO2 per year) started in October 2014 and, its operators say, is “exceeding expectations.”

In the US, there are CO2 pipeline networks with more than 4,000 miles of pipe. These pipelines were built primarily to bring CO2 from naturally occurring wells to oil fields. In a practice known as enhanced oil recovery (EOR), CO2 gas is pumped into existing wells to force the release of more oil. About 50 million tons of CO2 per year are transported this way.

Injection of CO2 and other gases into geological formations has been practiced for many years. As early as 1915, natural gas was stored underground. The first EOR operation injecting CO2 started in 1972. Acid gases (including CO2) have been stored in geologic formations since 1989, primarily in Canada. Several CCS demonstration projects, starting with Norway’s Sleipner in the North Sea in 1996, store CO2 at the million-ton-per-year scale.

Two Decades of Progress

The first published paper referring to what would become CCS was in 1977.

However, it was 15 years later that the field of CCS research achieved critical mass. In 1991, the International Energy Agency launched the Greenhouse Gas R&D Programme, which focuses on CCS. The US established its own research program at the Department of Energy in 1997, investing $1 million a year. That figure has since ballooned to more than $200 million a year today. In 2005, the Intergovernmental Panel on Climate Change (IPCC) released a Special Report on Carbon Dioxide Capture and Storage, confirming CCS’s position as a major climate mitigation option.

A Department of Energy graphic shows the basic idea of carbon capture and storage (or sequestration).
US Department of Energy

By all measures, CCS grew dramatically from 1990 to 2009. One metric is the attendance at the premier international meeting on CCS (see graphic, below). Dozens of demonstration projects were announced. The research and development activities in laboratories and pilot plants exploded. New and improved solvents to capture CO2 from exhaust gases were developed. Processes that integrated power production and CO2 capture were designed, developed and tested. There were field tests for injecting CO2 underground, regulatory frameworks developed and public outreach programs.

The vision was to have 20 large-scale demonstration projects on-line by 2020, at which time CCS could be considered commercial. In 2009, billions of dollars in stimulus money from the US and Europe were appropriated to help fund the vision. We were on the road to commercialization, but then we hit some potholes.

A Lack of Markets

In order for any technology to be commercial, there is a need to establish markets. Unlike other low-carbon technologies such as renewables and nuclear, CCS has only one purpose: to reduce CO2 emissions. Therefore, markets will only be established by climate policy aimed at reducing greenhouse gas emissions to the atmosphere. In 2009, it looked like that policy was imminent. There were cap-and-trade bills in the US Congress. The Copenhagen climate meeting at the end of 2009 was expected to result in a new international protocol.

Then it all fell apart. Climate policy became a partisan issue in the US, blocking any effective policy measures, and Copenhagen failed to achieve a protocol. Markets that would have driven CCS forward have been put off for at least a decade, maybe longer.

In a sign of waning interest in CCS, attendance at the Greenhouse Gas Control Technologies conference has declined in recent years.
Howard Herzog/MIT, Author provided

The participation of industry, such as electric utilities and oil companies, in developing CCS is crucial. However, as markets for the technology became more uncertain and pushed into the future, companies reexamined their commitment to CCS.

And now the support level has dropped significantly, as nearer-term needs have taken priority. Government programs that grew at double-digit rates for many years have flattened out or, in some cases, declined. As a result, fewer than half of the 20 hoped-for demonstrations will be on-line in 2020, and CCS commercialization will be pushed back at least until 2030.

CCS at a crossroads

The latest IPCC Assessment Report on Mitigation mentioned CCS 35 times in the summary for policymakers. The International Energy Agency has repeatedly said CCS is a key technology for mitigating climate change.

However, just as CCS is making great progress in building demonstration plants, developing new and improved technologies, and understanding and managing risks, the funding to carry on these activities at the level needed has begun to shrink. A short-term focus has replaced long-term strategies.

This should worry not just people who consider CCS a critical technology, but everyone who believes that mitigating climate change is a critical priority. We are not making the investments needed to meet a long-term goal of 80% reductions in CO2 emissions by mid-century.

I prefer to have an economy-wide carbon price to create markets for low-carbon technology. Then markets, not advocates, will make decisions about the technology mix. I believe deployment of CCS would be significant under such a policy. However, today we rely on government programs like fuel-efficiency standards for light-duty vehicles and renewable portfolio mandates for utilities to help reduce CO2 emissions.

Economists generally agree that these programs are less effective and more costly than a carbon price for reducing CO2 emissions. If energy policies did focus on a long-term reduction of CO2, we would not see the slowdown in CCS we see today.

The Conversation

This article was originally published on The Conversation.
Read the original article.

New nanomaterials will boost renewable energy


Global energy consumption is accelerating at an alarming rate. There are three main causes: rapid economic expansion, population growth, and increased reliance on energy-based appliances across the world.

Our rising energy demand and the environmental impact of traditional fuels pose serious challenges to human health, energy security, and environmental protection. It has been estimated that the world will need to double its energy supply by 2050 and it is critical that we develop new types of energy to meet this challenge.

Fuel cells usually use expensive platinum electrodes, but a non-metal alternative could be an affordable solution for energy security. Fuel cells generate electricity by oxidizing fuel into water, providing clean and sustainable power.

Hydrogen can be used as the fuel. First, hydrogen is split into its constituent electrons and protons. Then the flow of electrons generates electrical power, before the electrons and protons join with reduced oxygen, forming water as the only by-product.
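The two steps just described are the standard hydrogen fuel cell half-reactions, which can be written out explicitly:

```latex
% Anode: hydrogen is split into protons and electrons
\mathrm{H_2 \;\longrightarrow\; 2H^+ + 2e^-}
% Cathode: oxygen is reduced and combines with the protons and electrons
\mathrm{O_2 + 4H^+ + 4e^- \;\longrightarrow\; 2H_2O}
% Overall reaction: water is the only by-product
\mathrm{2H_2 + O_2 \;\longrightarrow\; 2H_2O}
```

The cathode reaction – the oxygen reduction – is the slow step that requires a catalyst, which is where platinum (or its replacements, discussed below) comes in.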

This technology has high energy conversion efficiency, creates virtually no pollution, and has the potential for large-scale use. However, the vital reaction which generates reduced oxygen in fuel cells requires a catalyst – traditionally a platinum electrode. Unfortunately, the high cost and limited resources have made this precious metal catalyst the primary barrier to mass-market fuel cells.

The high cost of platinum can make electrodes – as well as engagements – prohibitively expensive.
1791 Diamonds, CC BY

Ever since fuel cells using platinum were developed for the Apollo lunar mission in the 1960s, researchers have been developing catalysts made from alloys containing platinum alongside cheaper metals. These alloy catalysts have a lower platinum content, yet commercial mass production still requires large amounts of platinum. To make fuel cells a viable large-scale energy option, we need other efficient, low cost, and stable electrodes.

We previously discovered a new class of low-cost metal-free catalysts based on carbon nanotubes with added nitrogen, which performed better than platinum in basic fuel cells. The improved catalytic performance can be attributed to the electron-accepting ability of the nitrogen atoms, which aids the oxygen reduction reaction. These carbon-based, metal-free catalysts could dramatically reduce the cost of commercialising fuel cell technology. Unfortunately, they are often found to be less effective in acidic conditions – the typical conditions in mainstream fuel cells.

Using carbon composites with a porous structure to increase surface area and nanotubes to enhance conductivity, our latest research demonstrates that our nanomaterials are able to catalyse oxygen reduction as efficiently as the state-of-the-art non-precious metal catalysts – and with greater stability. This first successful attempt at using carbon-based metal-free catalysts in acidic fuel cells could facilitate the commercialisation of affordable and durable fuel cells.

In addition to fuel cells, these new metal-free carbon nanomaterial catalysts are also efficient electrodes for low-cost solar cells, supercapacitors for energy storage, and water splitting systems which generate fuel from water. The widespread use of carbon-based metal-free catalysts will therefore result in better fuel economy, a decrease in harmful emissions, and a reduced reliance on petroleum sources. This could dramatically affect life in the near future.

The Conversation

This article was originally published on The Conversation.
Read the original article.

Why a submerged island is the perfect spot for the world’s biggest wind farm


Planning permission has been given for what could become the world’s largest offshore wind farm on the Dogger Bank, off England’s east coast. If fully constructed, the project will have up to 400 turbines with a total generation capacity of 2.4 GW. That’s enough to power 1.9 million households – more than Manchester and Birmingham combined.
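As a sanity check on those headline numbers, note that 2.4 GW is nameplate capacity, not continuous output. The sketch below infers the capacity factor the figures imply, assuming a typical UK household uses around 4 MWh of electricity per year – an assumption of my own, not a figure from the project.

```python
# Implied capacity factor for the Dogger Bank project, from the headline
# figures above plus one assumption of my own: an average UK household
# uses roughly 4 MWh of electricity per year.
capacity_gw = 2.4
households = 1.9e6
household_mwh_per_year = 4.0  # assumed typical consumption

annual_output_gwh = households * household_mwh_per_year / 1000.0  # 7,600 GWh
max_possible_gwh = capacity_gw * 8760                             # if it ran flat out all year
capacity_factor = annual_output_gwh / max_possible_gwh

print(f"Implied capacity factor: {capacity_factor:.0%}")  # ~36%
```

A mid-30s capacity factor is in the normal range for offshore wind, so the “1.9 million households” claim holds together arithmetically.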

So why now? And why so big? It seems the UK government is essentially taking a punt on the future of offshore wind. Investment in a more expensive renewable technology at an earlier stage means a premium is being paid in the hope it will kick-start a whole industry. This would in turn reduce costs, while generating low-carbon electricity out of sight.

Dogger Bank today.
NASA

Dogger Bank, located more than 80 miles off the Yorkshire coast, is indeed far out of sight. It seems a good location for a wind farm. The region was an island during much of the last ice age and today it has shallow water, seabed conditions well-suited to the foundations of wind turbines and of course strong, consistent wind.

Its development nevertheless raises a number of technical and logistical challenges, notably linked to the influence of the weather on the maritime supply chain. No wind farm has yet been built that far from land. Furthermore, it still needs to secure contracts for government subsidies.

With fewer neighbours to annoy, offshore wind farms are generally less contested than their onshore equivalents. Offshore wind can nevertheless have a detrimental impact on the natural environment, such as disturbance to the seabed from laying cables. But it seems there could be an overall positive effect on marine wildlife, as the “artificial reef effect” helps fish group together.

Offshore is costly – for now

Electricity generated by onshore wind can now compete on cost with electricity generated with conventional sources such as gas. But offshore projects from the last licensing “round 3” issued in 2010 are often located in large zones far from the coast such as Dogger Bank, which pushes up costs.

Doggerland 9,000 years ago, before it was cut adrift.
Max Naylor, CC BY-SA

Nevertheless, prices could easily come down. Indeed, as solar power has recently shown, renewable energy technologies can become dramatically cheaper once they achieve maturity and the technology is scaled up. While long-term projections of future electricity costs are always uncertain, offshore wind could become cheaper through better technology, economies of scale and through optimised designs becoming standardised.

Dogger Bank could be a big part of this – the first of a series of large wind farms which will gain strength in numbers and reduce generating costs.

There are many reasons a country might want to reduce reliance on fossil fuels. Energy generated from oil and gas causes air pollution and environmental degradation, while still requiring subsidies just like renewables. As demonstrated by the late economist Shimon Awerbuch, volatile fossil fuel prices on the international market have a detrimental impact on modern economies. More renewables make sense as a way to mitigate the impact of future international energy crises.

High-low range of projected energy costs in 2030, according to the Committee on Climate Change, an independent advisory body.
The Renewable Energy Review 2011 / Mott McDonald

It looks as though offshore wind is now the preferred path for the UK government to meet renewable energy targets. It hopes offshore will provide 8-10% of the UK’s electricity by 2020. Offshore carries a much lower political cost than its onshore equivalent, with wind turbines anywhere near either homes or areas of natural beauty facing very vocal opposition.


BBC – DECC, 2013

From the 2030s, the cost per unit for this technology can be expected to be cheaper than for so-called “clean” coal or “clean” gas generation, while generating far less carbon emissions and pollution.

But if the UK is to drastically decrease its dependence on fossil fuels and decarbonise its electricity sector, offshore wind cannot be a substitute for the development of other renewable energy sources and for the systematic implementation of large-scale energy savings programmes.

The Conversation

This article was originally published on The Conversation.
Read the original article.

How artificial lagoons can be used to harvest energy from the tides


The search for alternative energy sources in the age of climate change has overlooked tidal energy: a vast and unexploited worldwide resource.

For three decades now, tidal lagoon schemes have been recommended as an economically and environmentally attractive alternative to tidal barrages. More recently, two proposals for tidal lagoons in Swansea Bay, Wales, have emerged, and there have been several reports documenting how such a project could have the potential to harness significant energy resources.

Tidal energy involves constructing a barrage, a dam or some other sort of barrier to harvest power from the height difference between high and low tides.

The power is generated by running the water through turbines, found within the barrier. The technology used is very similar to that found in hydropower schemes; however, unlike rivers, tidal currents run in two directions.

Where a tidal barrage blocks off an entire estuary, a tidal lagoon instead impounds an artificially created area of the sea or estuary. A lagoon doesn’t necessarily have to be connected to the shore – it could even sit out in the ocean.

As the tide goes out the lagoon remains closed, and full. It then opens the flood gates to let the water out until water levels on each side of the lagoon wall are even. When the tide comes in the process is reversed.

How lagoon power works.
George Aggidis, Author provided

It’s tough to estimate exactly how much tidal power can be exploited, but the UK may have close to half of Europe’s total. And few potential sites worldwide are as close to electricity users and the transmission grid as those in the UK.

Why Swansea?

Swansea Bay is located in the Bristol Channel on the South Wales coastline. As part of the Severn Estuary it experiences one of the world’s largest tidal ranges, often reaching 10m.

A tidal lagoon has been mooted in the bay before, back in 2004, but the latest proposals are on a grander scale. The structure shown below would cover 11.5 km2, cost £913m to construct, and would be capable of generating 495 GWh per year – enough energy to power 155,000 homes.
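To see roughly where a figure like 495 GWh per year comes from, the theoretical energy available each time the lagoon empties can be estimated from the potential energy of the impounded water, E = ½ρgAh². The sketch below uses the 11.5 km² area quoted above; the 7 m working range and the number of generation cycles are illustrative assumptions of my own (10 m is a maximum, and real schemes capture only a fraction of the theoretical energy).

```python
# Theoretical energy per generation cycle for a tidal lagoon:
#   E = 0.5 * rho * g * A * h^2  (potential energy of the impounded water)
# Area is from the Swansea Bay proposal above; the working range and
# cycles per year are illustrative assumptions, not project figures.
rho = 1025.0       # seawater density, kg/m^3
g = 9.81           # gravitational acceleration, m/s^2
area_m2 = 11.5e6   # 11.5 km^2 lagoon area
head_m = 7.0       # assumed average working range (the ~10 m is a maximum)

energy_joules = 0.5 * rho * g * area_m2 * head_m**2
energy_gwh = energy_joules / 3.6e12             # ~0.8 GWh per cycle

cycles_per_year = 4 * 365                       # two tides/day, ebb + flood
theoretical_gwh = energy_gwh * cycles_per_year  # ~1,150 GWh/year

print(f"Theoretical: ~{theoretical_gwh:.0f} GWh/yr vs 495 GWh/yr projected")
```

The projected 495 GWh is well under half the idealised figure, which is what you would expect once turbine efficiency and the varying tidal range are accounted for.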

The lagoon will take up a big chunk of Swansea Bay.
Case

Rising tides

The Swansea Bay scheme demonstrates a renewed interest in tidal power, which has many advantages compared to other renewable sources. It is well documented that increasing integration of volatile, unpredictable sources of renewable energy such as wind and solar power jeopardises the stability of the power grid.

In order for the grid to remain stable, the power generated at any instant has to match demand, so it is important that the transmission network contains power sources that are immediately available. While the sun may stop shining and the wind can drop, the tides remain predictable – an obvious advantage for tidal power and a great help for National Grid forecasters.

Worldwide tides.
NASA

Overcoming barriers

Yet improvements are still needed. The upfront costs remain high, and there are some ecological implications. Experience with artificially closed compounds has demonstrated that the costs of managing an artificial tidal basin (as at La Rance in Brittany, or Cardiff Bay in Wales) are high, and such schemes need careful monitoring and planning.

Turbines can become more efficient, perhaps learning from the wind industry about aspects such as varying the speed of turbines. We need to develop better 3D modelling to get a better sense of how the tides ebb and flow, and how turbines perform under turbulence.

But there are important positives that should lead to more tidal power. Re-opening dams and barriers, often built between the 1950s and 1970s, can have great ecological benefits for the water bodies behind them, thanks to the creation of a gradient that is beneficial to aquatic ecology (brackish water) and an increased oxygen content. In such instances, tidal technology can also be used as a tool for water quantity management while generating power.

They can actually improve some ecosystems and have additional societal benefits besides renewable energy such as flood defence, environmental and ecological water quality improvement, fisheries and even tourism functions.

New technologies are being developed that would allow energy to be harvested from new areas, where the difference between high and low tides is measured in centimetres rather than metres. All this makes a tidal lagoon for Swansea Bay seem like a strong investment in the future.

The Conversation

This article was originally published on The Conversation.
Read the original article.