
Fishing for DNA: Free-floating eDNA identifies presence and abundance of ocean life

Mark Stoeckle, The Rockefeller University

Ocean life is largely hidden from view. Monitoring what lives where is costly – typically requiring big boats, big nets, skilled personnel and plenty of time. An emerging technology using what’s called environmental DNA gets around some of those limitations, providing a quick, affordable way to figure out what’s present beneath the water’s surface.

Fish and other animals shed DNA into the water, in the form of cells, secretions or excreta. About 10 years ago, researchers in Europe first demonstrated that small volumes of pond water contained enough free-floating DNA to detect resident animals.

Researchers have subsequently looked for aquatic eDNA in multiple freshwater systems, and more recently in vastly larger and more complex marine environments. While the principle of aquatic eDNA is well-established, we’re just beginning to explore its potential for detecting fish and their abundance in particular marine settings. The technology promises many practical and scientific applications, from helping set sustainable fish quotas and evaluating protections for endangered species to assessing the impacts of offshore wind farms.

Who’s in the Hudson, when?

In our new study, my colleagues and I tested how well aquatic eDNA could detect fish in the Hudson River estuary surrounding New York City. Although this is the most heavily urbanized estuary in North America, water quality has improved dramatically over recent decades, and the estuary has partly recovered its role as essential habitat for many fish species. The improved health of local waters is highlighted by the now regular fall appearance of humpback whales feeding on large schools of Atlantic menhaden at the edges of New York Harbor, within sight of the Empire State Building.

Preparing to hurl the collecting bucket into the river.
Mark Stoeckle, CC BY-ND

Our study is the first to record the spring migration of ocean fish by DNA-testing water samples. We collected one-liter (about a quart) water samples weekly at two city sites from January to July 2016. Because the Manhattan shoreline is armored and elevated, we tossed a bucket on a rope into the water. Wintertime samples had little or no fish eDNA. Beginning in April there was a steady increase in the number of fish species detected, reaching about 10 to 15 species per sample by early summer. The eDNA findings largely matched our existing knowledge of fish movements, hard-won from decades of traditional seining surveys.

Our results demonstrate the “Goldilocks” quality of aquatic eDNA – it seems to last just the right amount of time to be useful. If it disappeared too quickly, we wouldn’t be able to detect it. If it lasted for too long, we wouldn’t detect seasonal differences and would likely find DNAs of many freshwater and open ocean species as well as those of local estuary fish. Research suggests DNA decays over hours to days, depending on temperature, currents and so on.
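That “Goldilocks” window can be pictured with a toy exponential-decay model. The 24-hour half-life below is purely illustrative, not a measured value; real decay rates vary with temperature, currents and microbial activity:

```python
def edna_remaining(initial_copies, half_life_hours, elapsed_hours):
    """Toy exponential-decay model of free-floating eDNA.

    The half-life is a placeholder, not a measured value; real decay
    depends on temperature, currents and microbial activity.
    """
    return initial_copies * 0.5 ** (elapsed_hours / half_life_hours)

# With an assumed 24-hour half-life, half the signal is gone in a day...
one_day = edna_remaining(100, 24, 24)        # 50.0 copies
# ...and over 99 percent is gone within a week: recent enough to flag
# current residents, short enough to resolve seasonal turnover.
one_week = edna_remaining(100, 24, 7 * 24)
```

The point of the sketch is the shape of the curve, not the numbers: any half-life in the hours-to-days range gives a signal that tracks the season rather than the year.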

Fish identified via eDNA in one day’s sample from New York City’s East River.
New York State Department of Environmental Conservation: alewife (herring species), striped bass, American eel, mummichog; Massachusetts Department of Fish and Game: black sea bass, bluefish, Atlantic silverside; New Jersey Scuba Diving Association: oyster toadfish; Diane Rome Peeples: Atlantic menhaden, tautog, bay anchovy; H. Gervais: conger eel. CC BY-ND

Altogether, we obtained eDNA matching 42 local marine fish species, including most (80 percent) of the locally abundant or common species. Moreover, among the species we detected, abundant or common ones turned up in more samples than locally uncommon ones did. That eDNA detections tracked traditional observations of which local fish are common is good news for the method – it supports eDNA as an index of fish numbers. We expect we’ll eventually be able to detect all local species by collecting larger volumes, at additional sites in the estuary and at different depths.

In addition to local marine species, we also found locally rare or absent species in a few samples. Most were fish we eat – Nile tilapia, Atlantic salmon, European sea bass (“branzino”). We speculate these came from wastewater – even though the Hudson is cleaner, sewage contamination persists. If that is how the DNA got into the estuary in this case, then it might be possible to determine if a community is consuming protected species by testing its wastewater. The remaining exotics we found were freshwater species, surprisingly few given the large, daily freshwater inflows into the saltwater estuary from the Hudson watershed.

Filtering the estuary water back in the lab.
Mark Stoeckle, CC BY-ND

Analyzing the naked DNA

Our protocol uses methods and equipment standard in a molecular biology laboratory, and follows the same procedures used to analyze human microbiomes, for example.

eDNA and other debris left on the filter after the estuary water passed through.
Mark Stoeckle, CC BY-ND

After collection, we run water samples through a small-pore (0.45-micron) filter that traps suspended material, including cells and cell fragments. We extract DNA from the filter and amplify it using the polymerase chain reaction (PCR). PCR is like “xeroxing” a particular DNA sequence, producing enough copies that it can easily be analyzed.

We targeted mitochondrial DNA – the genetic material within the mitochondria, the organelle that generates the cell’s energy. Mitochondrial DNA is present in much higher concentrations than nuclear DNA, and so easier to detect. It also has regions that are the same in all vertebrates, which makes it easier for us to amplify multiple species.
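As a rough illustration of why those shared regions matter, here is a minimal sketch (with invented sequences, not real mitochondrial data) of finding alignment positions that are identical across species – the kind of conserved stretch where a single primer pair can bind in many vertebrates:

```python
def conserved_positions(aligned_seqs):
    """Return alignment columns identical across all sequences - a toy
    stand-in for finding the conserved stretches where one primer pair
    can bind in many vertebrate species."""
    return [
        i for i, column in enumerate(zip(*aligned_seqs))
        if len(set(column)) == 1
    ]

# Invented aligned fragments from three hypothetical species.
seqs = ["ACGTAC", "ACGTTC", "ACGAAC"]
shared = conserved_positions(seqs)  # columns 0, 1, 2 and 5 are identical
```

Real primer design also weighs melting temperature and length, but the core idea is the same: anchor the amplification in positions that don’t vary between species.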

We tagged each amplified sample, pooled the samples and sent them for next-generation sequencing. Rockefeller University scientist and co-author Zachary Charlop-Powers created the bioinformatic pipeline that assesses sequence quality and generates a list of the unique sequences and “read numbers” in each sample. That’s how many times we detected each unique sequence.

To identify species, each unique sequence is compared to those in the public database GenBank. Our results are consistent with read number being proportional to fish numbers, but more work is needed on the precise relationship of eDNA and fish abundance. For example, some fish may shed more DNA than others. The effects of fish mortality, water temperature, eggs and larval fish versus adult forms could also be at play.
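The counting step can be sketched in miniature. The sequences and species names below are invented, and a real pipeline (like the one described above) also handles sequencing errors and quality filtering, which this toy version skips:

```python
from collections import Counter

def tally_reads(reads):
    """Collapse raw reads into unique sequences with read counts, mirroring
    the dereplication step of a metabarcoding pipeline (quality filtering
    and error correction are omitted in this toy version)."""
    return Counter(reads)

def assign_species(unique_counts, reference_db):
    """Look up each unique sequence in a reference table (a stand-in for a
    GenBank search) and sum read counts per species."""
    species_counts = Counter()
    for seq, n in unique_counts.items():
        species = reference_db.get(seq)  # None if the sequence is unknown
        if species:
            species_counts[species] += n
    return species_counts

# Invented 5-base 'sequences' and a two-species reference table.
reads = ["ACGTT", "ACGTT", "ACGTT", "GGTCA", "TTTTT"]
db = {"ACGTT": "Atlantic menhaden", "GGTCA": "striped bass"}
counts = assign_species(tally_reads(reads), db)  # TTTTT is dropped: no match
```

The dropped unmatched read also illustrates why database coverage matters: a sequence missing from the reference simply vanishes from the species list.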

Just like in television crime shows, eDNA identification relies on a comprehensive and accurate database. In a pilot study, we identified local species that were missing from the GenBank database, or had incomplete or mismatched sequences. To improve identifications, we sequenced 31 specimens representing 18 species from scientific collections at Monmouth University, and from bait stores and fish markets. This work was largely done by student researcher and co-author Lyubov Soboleva, a senior at John Bowne High School in New York City. We deposited these new sequences in GenBank, boosting the database’s coverage to about 80 percent of our local species.

Study’s collection sites in Manhattan.
Mark Stoeckle, CC BY-ND

We focused on fish and other vertebrates. Other research groups have applied an aquatic eDNA approach to invertebrates. In principle, the technique could assess the diversity of all animal, plant and microbial life in a particular habitat. In addition to detecting aquatic animals, eDNA reflects terrestrial animals in nearby watersheds. In our study, the commonest wild animal detected in New York City waters was the brown rat, a ubiquitous urban denizen.

Future studies might employ autonomous vehicles to routinely sample remote and deep sites, helping us to better understand and manage the diversity of ocean life.

Mark Stoeckle, Senior Research Associate in the Program for the Human Environment, The Rockefeller University

This article was originally published on The Conversation. Read the original article.

In our Wi-Fi world, the internet still depends on undersea cables

Recently a New York Times article on Russian submarine activity near undersea communications cables dredged up Cold War politics and generated widespread recognition of the submerged systems we all depend upon.

Not many people realize that undersea cables transport nearly 100 percent of transoceanic data traffic. These lines are laid on the ocean floor. About as thick as a garden hose, they carry the world’s internet, phone calls and even TV transmissions between continents at close to the speed of light. A single cable can carry tens of terabits of information per second.

While researching my book The Undersea Network, I realized that the cables we all rely on to send everything from email to banking information across the seas remain largely unregulated and undefended. Although they are laid by only a few companies (including the American company SubCom and the French company Alcatel-Lucent) and often funneled along narrow paths, the ocean’s vastness has often provided them protection.

2015 map of 278 in-service and 21 planned undersea cables.

Far from wireless

The fact that we route internet traffic through the ocean – amidst deep sea creatures and hydrothermal vents – runs counter to most people’s imaginings of the internet. Didn’t we develop satellites and Wi-Fi to transmit signals through the air? Haven’t we moved to the cloud? Undersea cable systems sound like a thing of the past.

The reality is that the cloud is actually under the ocean. Even though they might seem behind the times, fiber-optic cables are actually state-of-the-art global communications technologies. Since they use light to encode information and remain unfettered by weather, cables carry data faster and cheaper than satellites. They crisscross the continents too – a message from New York to California also travels by fiber-optic cable. These systems are not going to be replaced by aerial communications anytime soon.

A tangled cable caught by fishermen in New Zealand.

A vulnerable system?

The biggest problem with cable systems is not technological – it’s human. Because they run underground, underwater and between telephone poles, cable systems populate the same spaces we do. As a result, we accidentally break them all the time. Local construction projects dig up terrestrial lines. Boaters drop anchors on cables. And submarines can pinpoint systems under the sea.

Most of the recent media coverage has been dominated by the question of vulnerability. Are global communications networks really at risk of disruption? What would happen if these cables were cut? Do we need to worry about the threat of sabotage from Russian subs or terrorist agents?

The answer is not black and white. Any individual cable is always at risk – but likely far more so from boaters and fishermen than from any saboteur. Historically, the single largest cause of disruption has been people unintentionally dropping anchors and nets. The International Cable Protection Committee has been working for years to prevent such breaks.

An undersea cable lands in Fiji.
Nicole Starosielski, CC BY-ND

As a result, cables today are covered in steel armor and buried beneath the seafloor at their shore-ends, where the human threat is most concentrated. This provides some level of protection. In the deep sea, the ocean’s inaccessibility largely safeguards cables – they need only be covered with a thin polyethylene sheath. It’s not that it’s much more difficult to sever cables in the deep ocean; it’s just that the primary forms of interference are less likely to happen. The sea is so big and the cables so narrow that the probability of running across one isn’t high.

Sabotage has actually been rare in the history of undersea cables. There are certainly occurrences (though none recently), but these are disproportionately publicized. The World War I German raid on the Fanning Island cable station in the Pacific Ocean gets a lot of attention. And there was speculation about sabotage in the cable disruptions outside Alexandria, Egypt in 2008, which cut 70% of the country’s internet, affecting millions. Yet we hear little about the regular faults that occur, on average, about 200 times each year.

Redundancy provides some protection

The fact is it’s incredibly difficult to monitor these lines. Cable companies have been trying to do so for more than a century, since the first telegraph lines were laid in the 1800s. But the ocean is too vast and the lines simply too long. It would be impossible to stop every vessel that came anywhere near critical communications cables. We’d need to create extremely long “no-go” zones across the ocean, which would itself profoundly disrupt the economy.

Fewer than 300 cable systems transport almost all transoceanic traffic around the world. And these often run through narrow pressure points where small disruptions can have massive impacts. Since each cable can carry an extraordinary amount of information, it’s not uncommon for an entire country to rely on only a handful of systems. In many places, it would take only a few cable cuts to take out large swathes of the internet. If the right cables were disrupted at the right time, it could disrupt global internet traffic for weeks or even months.

What protects global information traffic is the redundancy built into the system. Since there is more cable capacity than there is traffic, when there is a break, information is automatically rerouted along other cables. Because there are many systems linking to the United States, and a lot of internet infrastructure is located here, a single cable outage is unlikely to cause any noticeable effect for Americans.

An interactive platform developed by Erik Loyer and the author lets users navigate the transpacific cable network.
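Rerouting after a break can be pictured as a toy graph problem. The three-node “network” below is hypothetical, but it shows the principle: as long as an alternative path exists, traffic still gets through.

```python
from collections import deque

def reachable(links, start):
    """Breadth-first search over an undirected map of cable links,
    returning every landing point still reachable from `start`."""
    graph = {}
    for a, b in links:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Hypothetical three-link network with built-in redundancy.
cables = [("US", "UK"), ("US", "France"), ("UK", "France")]
# Cut the US-UK cable: the UK stays reachable, rerouted via France.
after_cut = [c for c in cables if c != ("US", "UK")]
still_connected = "UK" in reachable(after_cut, "US")
```

Remove the UK-France link too, and the UK drops off the map entirely – which is exactly the pressure-point scenario described above, where a few cuts in the wrong places isolate whole regions.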

Any single cable line has been and will continue to be susceptible to disruption. And the only way around this is to build a more diverse system. But as things are, even though individual companies each look out for their own network, there is no economic incentive or supervisory body to ensure the global system as a whole is resilient. If there’s a vulnerability to worry about, this is it.


Nicole Starosielski, Assistant Professor of Media, Culture and Communication, New York University

This article was originally published on The Conversation. Read the original article.

Astrobiologists Use Biosignature Gases To Search For Aliens

Professor Sara Seager of the Massachusetts Institute of Technology says her team of scientists is looking for biosignatures – gases that could be emitted by alien life forms on habitable extrasolar planets. Many of these gases could be detected remotely by telescopes, but they may turn out to have quite different compositions from those in the atmosphere of our planet.

Prof. Seager and her colleagues explained,

“Thousands of exoplanets are known to orbit nearby stars. Plans for the next generation of space-based and ground-based telescopes are fueling the anticipation that a precious few habitable planets can be identified in the coming decade. Even more highly anticipated is the chance to find signs of life on these habitable planets by way of biosignature gases.”

In their paper, published online in the journal Astrobiology, Seager’s team promotes the concept that “all stable and potentially volatile molecules should initially be considered as viable biosignature gases.” To lay the groundwork for identifying such gases, they conducted a massive search for molecules with six or fewer non-hydrogen atoms, in order to maximize the chances of recognizing biosignature gases.

The scientists created a list of about 14,000 molecules that contain up to six non-hydrogen atoms. About 2,500 of these are CNOPSH compounds – built from carbon (C), nitrogen (N), oxygen (O), phosphorus (P), sulfur (S) and hydrogen (H).
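A hedged sketch of the combinatorics: the snippet below enumerates only the heavy-atom compositions (which elements, in what counts) drawn from the CNOPS set, not full molecular structures. The team’s roughly 14,000 figure counts distinct molecules, which is far larger than this composition count.

```python
from itertools import combinations_with_replacement

# Enumerate heavy-atom compositions of one to six non-hydrogen atoms
# drawn from the CNOPS set. Each composition (e.g. C,C,O) can correspond
# to many distinct molecular structures, which is why the paper's ~14,000
# molecule count dwarfs the 461 compositions listed here.
HEAVY_ATOMS = "CNOPS"
compositions = [
    combo
    for size in range(1, 7)
    for combo in combinations_with_replacement(HEAVY_ATOMS, size)
]
```

Going from compositions to actual candidate molecules requires enumerating bonding arrangements and filtering for chemical stability, which is the hard part of the team’s search.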

This means that instead of the costly and controversial method of netting strange creatures from the bottom of the sea, these scientists have decided to search for thousands of curious, potentially biogenic gas molecules.

To fight Zika, let’s genetically modify mosquitoes – the old-fashioned way

The near panic caused by the rapid spread of the Zika virus has brought new urgency to the question of how best to control mosquitoes that transmit human diseases. Aedes aegypti mosquitoes bite people across the globe, spreading three viral diseases: dengue, chikungunya and Zika. There are no proven effective vaccines, and no specific medications to treat patients who contract these viruses.

Mosquito control is, at present, the only way to limit these diseases. But that’s no easy task. Classical methods of control such as insecticides are falling out of favor – they can have adverse environmental effects as well as increase insecticide resistance in remaining mosquito populations. New mosquito control methods are needed – now.

The time is ripe, therefore, to explore a long-held dream of vector biologists, including me: to use genetics to stop or limit the spread of mosquito-borne diseases. While gene editing technologies have advanced dramatically in the last few decades, it is my belief that we’ve overlooked older, tried-and-true methods that could work just as well on these insects. We can accomplish the goal of producing mosquitoes incapable of transmitting human pathogens using the same kinds of selective breeding techniques people have been using for centuries on other animals and plants.

Technicians from Oxitec inspect genetically modified Aedes aegypti mosquitoes in Campinas, Brazil.
Paulo Whitaker/Reuters

Techniques on the table

One classic strategy for reducing insect populations has been to flood populations with sterile males – usually produced using irradiation. When females in the target population mate with these males, they produce no viable offspring – hopefully crashing population numbers.
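The logic of flooding a population with sterile males can be captured in a toy model (all numbers hypothetical): each female’s chance of a fertile mating falls with the wild-male share of all the males she might encounter.

```python
def next_generation(females, wild_males, sterile_males, offspring_per_female=2.0):
    """Toy sterile-insect-technique model (hypothetical numbers throughout):
    a female's mating is fertile with probability equal to the wild-male
    share of all males in the population."""
    fertile_fraction = wild_males / (wild_males + sterile_males)
    return females * fertile_fraction * offspring_per_female

# Releasing sterile males at four times the wild-male count cuts the
# fertile fraction to 20 percent, so even at two offspring per female
# the next generation shrinks sharply.
offspring = next_generation(females=100, wild_males=100, sterile_males=400)
```

Repeat the release every generation and the population spirals downward – the intended “crash” – whereas without sterile males the same population would double.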

The modern twist on this method has been to generate transgenic males that carry a dominant lethal gene that essentially makes them sterile; offspring sired by these males die late in the larval stage, eliminating future generations. This method has been developed and promoted by the biotech company Oxitec and is currently used in Brazil.

Rather than just killing mosquitoes, a more effective and lasting strategy would be to genetically change them so they can no longer transmit a disease-causing microbe.

The powerful new CRISPR gene editing technique could be used to make transgenes (genetic material from another species) take over a wild population. This method works well in mosquitoes and is potentially a way to “drive” transgenes into populations. CRISPR could help quickly spread a gene that confers resistance to transmission of a virus – what scientists call refractoriness.

But CRISPR has been controversial, especially as applied to human beings, because the transgenes it inserts into an individual can be passed on to its offspring. No doubt using CRISPR to create and release genetically modified mosquitoes into nature would stir up controversy. The U.S. Director of National Intelligence, James Clapper, has gone so far as to dub CRISPR a potential weapon of mass destruction.

But are transgenic technologies necessary to genetically modify mosquito populations?

Examples of successful artificial selection of various traits through the years. In the center is a cartoon of the ‘block’ scientists would like to select for in mosquitoes so they can’t pass on the virus.
Jeff Powell, Author provided

Selective breeding the old-fashioned way

Genetic modification of populations has been going on for centuries with great success. This has occurred for almost all commercially useful plants and animals that people raise for food or other products, such as cotton and wool. Selective breeding can produce immense changes in populations based on naturally occurring variation within the species.

Artificial selection using this natural variation has proven effective over and over again, especially in the agricultural world. By choosing parents with desirable traits (chickens with increased egg production, sheep with softer wool) for several consecutive generations, a “true breeding” strain can be produced that will always have the desired traits. These may look very different from the ancestor – think of all the breeds of dogs derived from an ancestor wolf.
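A minimal simulation of that selection loop, with an invented “refractoriness score” standing in for the real trait (inheritance here is a crude parental-average rule, not actual genetics):

```python
import random

random.seed(0)  # deterministic toy run

def select(pop, keep_fraction=0.5):
    """Truncation selection: keep the top fraction by refractoriness score."""
    ranked = sorted(pop, reverse=True)
    return ranked[: max(2, int(len(ranked) * keep_fraction))]

def breed(parents, mutation_sd=0.02):
    """Each offspring's trait is its parents' average plus small noise,
    a crude stand-in for heritable variation (not real genetics)."""
    return [
        (a + b) / 2 + random.gauss(0, mutation_sd)
        for a, b in zip(parents, reversed(parents))
        for _ in (0, 1)  # two offspring per pairing keeps the census stable
    ]

# Hypothetical starting population: refractoriness scores on a 0-1 scale,
# initially low and variable, as in a wild mosquito population.
pop = [random.random() * 0.3 for _ in range(100)]
initial_mean = sum(pop) / len(pop)

for _ in range(10):  # ten generations of selection
    pop = breed(select(pop))

final_mean = sum(pop) / len(pop)  # selection raises the population average
```

The mechanics are the same whether the trait is egg production in chickens or virus refractoriness in mosquitoes: pick the best, breed them, repeat.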

To date, only limited work of this sort has been done on mosquitoes. But it does show that it’s possible to select for mosquitoes with reduced ability to transmit human pathogens. So rather than introducing transgenes from other species, why not use the genetic variation naturally present in mosquito populations?

Deriving strains of mosquitoes through artificial selection has several advantages over transgenic approaches.

  • All the controversy and potential risks surrounding transgenic organisms (GMOs) are avoided. We’re only talking about increasing the prevalence in the population of the naturally occurring mosquito genes we like.
  • Selected mosquitoes derived directly from the target population would likely be more competitive when released back to their corner of the wild. Because the new refractory strain that can’t transmit the virus carries only genes from the target population, it would be specifically adapted to the local environment. Laboratory manipulations to produce transgenic mosquitoes are known to lower their fitness.
  • By starting with the local mosquito population, scientists could select specifically for refractoriness to the virus strain infecting people at the moment in that locality. For example, there are four different “varieties” of the dengue virus called serotypes. To control the disease, the selected mosquitoes would need to be refractory to the serotype active in that place at that time.
  • It may be possible to select for strains of mosquitoes that are unable to transmit multiple viruses. Because the same Aedes aegypti mosquito species transmits dengue, chikungunya and Zika, people living in places that have this mosquito are simultaneously at risk for all three diseases. While it has not yet been demonstrated, there is no reason to think that careful, well-designed selective breeding couldn’t develop mosquitoes unable to spread all medically relevant viruses.

Fortunately, Ae. aegypti is the easiest mosquito to rear in captivity and has a generation time of about 2.5 weeks. So unlike classical plant and animal breeders dealing with organisms whose generations are measured in years, 10 generations of selection in this mosquito would take only months.
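The back-of-envelope timing, using the 2.5-week generation time noted above:

```python
# Assumed values from the text: ~2.5 weeks per generation, ten generations.
generation_weeks = 2.5
generations = 10
total_weeks = generation_weeks * generations   # 25 weeks
total_months = total_weeks / 4.35              # average weeks per month
# Roughly half a year of rearing covers ten rounds of selection.
```

Compare that with cattle or fruit trees, where the same ten rounds would span decades.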

Researchers are working out mass rearing techniques for Aedes mosquitoes – their generation time is only 2.5 weeks.
IAEA Imagebank, CC BY-NC-ND

This is not to say the approach is without obstacles. Perhaps the most important is that the genes that prevent these insects from transmitting disease may also make individuals weaker or less healthy than the target natural population. Eventually the lab-bred mosquitoes and their offspring could be out-competed and fade from the wild population. We might need to continuously release refractory mosquitoes – that is, the ones that aren’t good at transmitting the disease in question – to overcome selection against the desirable refractory genes.

And mosquito-borne pathogens themselves evolve. Viruses may mutate to evade any genetically modified mosquito’s block. Any plan to genetically modify mosquito populations needs to have contingency plans in place for when viruses or other pathogens evolve. New strains of mosquitoes can be quickly selected to combat the new version of the virus – no costly transgenic techniques necessary.

Today, plant and animal breeders are increasingly using new gene manipulation techniques to further improve economically important species. But this is only after traditional artificial selection has been taken about as far as it can to improve breeds. Many mosquito biologists are proposing to go directly to the newest fancy transgenic methodologies that have never been shown to actually work in natural populations of mosquitoes. They are skipping over a proven, cheaper and less controversial approach that should at least be given a shot.


Jeffrey Powell, Professor, Yale University

This article was originally published on The Conversation. Read the original article.