Most New York restaurants will be closed by 2021 due to coronavirus restrictions
No new economic relief coming for restaurant owners
It is now September and indoor dining is still prohibited in New York State, drastically reducing the workforce and sales in the restaurant industry. New York restaurants remain closed in part because indoor air circulation has been shown to spread the virus rapidly in other countries.
According to a recent study by the New York State Restaurant Association, as many as two-thirds of the state’s remaining restaurants could close permanently.
Upwards of 66 percent of the state’s cafés, bars and restaurants could be gone for good by the end of this year if they don’t receive substantial government help, according to the survey. The survey polled 1,042 of the state’s 50,000-plus eateries and found that 63.6 percent of restaurateurs were “likely” or “somewhat likely” to close their doors permanently within the next four months. More than half of respondents said they would be forced to close before November if they don’t receive some form of financial relief.
Many restaurants received loans through the widely publicized Paycheck Protection Program (PPP) when the pandemic first hit and Congress passed its stimulus package, yet more and more establishments have been shutting down as that money runs out and further financial relief remains stalled in Congress.
Surveys like those from the NYSRA are only projections, but they are also among the best available measures of the pandemic’s toll on the New York restaurant industry right now, given that no state or federal agency reports restaurant closures in real time.
Earlier this year, the NYSRA predicted that as many as 11 percent of the state’s eateries and bars (roughly 5,500 businesses) would shutter by May 2020 due to COVID-19. In reality, however, experts say that number is likely much higher and will only keep climbing without a clear roadmap for indoor dining or substantial government intervention, such as the proposed $120 billion bill to help independent eateries survive this unprecedented economic downturn.
Easter is once again upon us and for many people it is a time when a little more chocolate than usual is consumed. Chocolate gives many of us pleasure mainly because it has physiological effects that make it moreish – if not downright addictive.
Some research studies even claim that certain types of chocolate are a “super food” – something that’s particularly good for us. After all, one of the ingredients of chocolate is cocoa, which is a good source of iron, magnesium, manganese, phosphorous and zinc. But is this really the case?
In dark chocolate – which has a high cocoa level – there is some evidence to show that small amounts may reduce the risk of heart disease. This is because of the presence of flavonoids – a type of plant chemical.
Flavonoids are said to be a powerful antioxidant with anti-inflammatory and immune system benefits. Health benefits include better blood sugar control and better insulin sensitivity – which are both indicators of protection from diabetes.
But despite this evidence, few neutral studies have been done, and work has only ever been done over the short term.
So before we can say for certain whether chocolate is actually a super food, there need to be far longer trials – that are not funded by chocolate manufacturers.
There is also the issue of the other ingredients apart from cocoa – given that your average Easter egg is likely to contain more sugar and saturated fat than plain cocoa.
There’s also the fact that there is little or no nutritional benefit to standard milk chocolate. So the only reason to eat it is because it gives us pleasure.
But whether it’s dark, milk or white, if you only binge on it once a year, the type of chocolate is not going to make much difference. What matters most is the rest of your lifestyle – what your diet’s like over the rest of the week, and how much you move around and exercise.
Maybe instead of worrying about the health benefits of chocolate, we should just see it for what it is – an indulgence or a treat – leaving us to get on with enjoying it occasionally.
With this in mind, we recently conducted an experiment that split people into three groups. The first group consumed a drink which contained calories from sugar only. The second group drank the same beverage but then did some gentle walking. And the third group drank a beverage with the same calories but from protein and a little fat, and not so much sugar.
When we traced everyone’s blood sugar levels over the next two hours, we found that the second and third groups had a much lower spike in blood sugar.
This is a good indicator that gentle exercise after eating or consuming foods which contain a mixture of protein and fat – rather than sugar alone – helps us to maintain steady blood sugar levels.
So maybe rather than worrying about chocolate as an occasional treat you should just enjoy it this Easter – and combine it with a nice spring walk.
Because at the end of the day, Easter is once a year, and your annual chocolate egg is unlikely to make a huge difference to your overall health or weight. So go ahead and enjoy – because that’s what Easter eggs are for. Just take advantage of the bank holiday to go for a walk as well.
Spring is just around the corner, and with it comes another growing season. Eating a diet rich in fruits and vegetables can help lower calorie intake; reduce risks for heart disease, obesity and Type 2 diabetes; and protect against certain cancers.
With all these benefits, why do some consumers choose to avoid produce? Approximately three-quarters of people in the U.S. don’t eat enough fruits and vegetables, according to the 2015 Dietary Guidelines for Americans.
A lot of factors could explain the shortfall, including fear. Media stories about topics such as GMOs and pesticides may convince some consumers that it’s not safe to eat certain fruits and vegetables. There’s no question that negative news about produce can affect consumer choices. One survey found that, among 510 low-income shoppers, those who heard messages about pesticide residues on produce were less likely to purchase any type of fruits and vegetables.
One high-profile report intended to drive consumer choices is the Environmental Working Group (EWG) Dirty Dozen™ report, a listing of fruits and vegetables it claims have the highest levels of pesticide residues. The EWG is an American nonprofit environmental organization that specializes in research and advocacy in a number of areas, including toxic chemicals.
This year’s report, published on March 8, also came paired with the EWG’s 2017 Shopper’s Guide, which promises to “[help] protect your family from pesticides!” Both publications are based on an analysis of more than 36,000 samples of 48 popular fruits and vegetables, taken by the U.S. Department of Agriculture (USDA) in 2016.
But while a list like the Dirty Dozen may attract attention from concerned consumers, it doesn’t use the same rigorous methods for measuring risk that food scientists typically do. A report by the World Health Organization and United Nations found that the Dirty Dozen results in negative consumer perceptions about fruits and vegetables, which goes against dietary advice to eat more of them.
We are not challenging EWG’s right to publish this list, but its failure to provide proper context is a concern. Without that, an informed decision is impossible. As scientists interested in food safety, we believe the most appropriate context would be to compare that list to the actual USDA reports, which are developed under their Pesticide Data Program (PDP). The results suggest there is far less to fear from our produce than some would have you believe.
Looking at the list
To build the annual Dirty Dozen™ list, the EWG says it looks at six measures of pesticide contamination. For each metric, it ranks each food based on its individual USDA test results, then normalizes the scores on a 1 to 100 scale, with 100 being the highest. A food’s final score is the total of these six normalized scores from each metric.
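To make the normalize-then-sum scheme concrete, here is a minimal sketch of how such a score could be computed. The foods, metric values and use of only two metrics are invented for illustration; the EWG's actual inputs come from six metrics derived from USDA test results.

```python
# Illustrative sketch of the EWG-style scoring described above.
# The metric values below are hypothetical; they are NOT real USDA figures.

def normalize(values):
    """Rescale raw metric values onto a 1-100 scale, 100 being highest."""
    lo, hi = min(values.values()), max(values.values())
    return {food: 1 + 99 * (v - lo) / (hi - lo) for food, v in values.items()}

# Two of the six hypothetical contamination metrics, per food
metric_a = {"strawberries": 8.2, "apples": 5.1, "celery": 3.4}
metric_b = {"strawberries": 12.0, "apples": 9.5, "celery": 2.0}

# A food's final score is the sum of its normalized scores across metrics
totals = {}
for metric in (metric_a, metric_b):
    for food, score in normalize(metric).items():
        totals[food] = totals.get(food, 0.0) + score

ranking = sorted(totals, key=totals.get, reverse=True)
print(ranking)  # the highest-scoring food ranks "dirtiest"
```

Note that because each metric is rescaled relative to the other foods tested, the final score reflects relative standing among the sampled foods, not any absolute toxicological threshold — which is precisely the contextual gap discussed below.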
This year, it lists strawberries at #1 (the “dirtiest” of the dirty), apples as #4, peaches as #5 and celery as #9. We did a comparison of the last 10 years of the Shopper’s Guide™ published by EWG. These four commodities were included on every list since 2007.
The USDA produces the most comprehensive pesticide residue database in the country. These data enable the EPA to assess dietary exposure, particularly among commodities popular with infants and children, and to provide guidance to governmental agencies.
Over the 20 years the USDA has tested residues, about 99 percent of crops and commodities have tested below – often significantly below – EPA tolerance levels. The USDA has consistently emphasized that “based on the PDP data, consumers can feel confident about eating a diet that is rich in fresh fruits and vegetables.”
Because the USDA doesn’t test every food every year, the EWG says that it generally uses the most recent sampling period for each food. However, using data that are as much as four years old to put together its annual lists seems more than a little arbitrary. It appears to us that some commodities have been mistakenly targeted over the last decade by the EWG as dangerous.
Importantly, the USDA actually analyzes the pesticide residues on fruits and vegetables. The EWG merely relies on the USDA data and scores risk simply by whether pesticide residues can be measured. By emphasizing fear over facts, it reinforces irrational perceptions.
What’s missing from the Dirty Dozen
The EWG says that their “goal is to show a range of different measures of pesticide contamination, to account for uncertainties in the science.” They claim their approach “best captures the uncertainties about the risks and consequences of pesticide exposure.”
However, the EWG errs by considering any and all pesticides as equally toxic, rather than relating detected pesticide residues to known safety standards.
All pesticides must be registered with the U.S. Environmental Protection Agency (EPA), which evaluates an extensive amount of scientific data.
The EPA assesses risks and benefits of a product’s use; provides label directions to control how products are used; and can suspend or cancel a product’s registration. The EPA also sets pesticide tolerances – that is, maximum permissible residue levels – for each and every pesticide used in or on food. The tolerance for an individual pesticide is tailored to reflect the specific scientific data, including toxicology studies, for that pesticide.
The EPA requires a large battery of studies to measure the effect and safety of a new pesticide. First, it determines the highest dosage at which there is no observable adverse effect. That dosage is then divided by uncertainty factors of up to 1,000 to calculate the allowable daily intake, and by an additional uncertainty factor of up to 10 to calculate the reference dose, or maximum acceptable dose.
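The arithmetic behind those uncertainty factors can be sketched as follows. The NOAEL figure here is hypothetical, chosen only to show the scale of the safety margins; real values are determined per pesticide from toxicology studies.

```python
# Sketch of the reference-dose arithmetic described above.
# The NOAEL value is hypothetical, for illustration only.

noael_mg_per_kg = 10.0       # no-observable-adverse-effect level (hypothetical)
uncertainty_factor = 1000    # combined uncertainty factors, up to 1,000x
extra_factor = 10            # additional uncertainty factor, up to 10x

allowable_daily_intake = noael_mg_per_kg / uncertainty_factor  # mg/kg/day
reference_dose = allowable_daily_intake / extra_factor         # mg/kg/day

print(f"Allowable daily intake: {allowable_daily_intake} mg/kg/day")
print(f"Reference dose:         {reference_dose} mg/kg/day")
```

In this toy example the reference dose ends up 10,000 times smaller than the highest dose showing no observable adverse effect, which illustrates why residues at tiny fractions of the reference dose are so far below any level of concern.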
One paper from 2011 looked at mean exposure to pesticides in each of that year’s Dirty Dozen. All pesticide levels were a tiny fraction of the reference dose. Indeed, the vast majority were less than 0.01 percent of the reference dose.
Finally, according to the Food Quality Protection Act, the EPA must determine that a pesticide poses “a reasonable certainty of no harm” before it can be registered for use on food or feed.
Since analytical instruments are able to pick up increasingly smaller concentrations, many crops have detectable residues yet no significant risk. The USDA reports its data on pesticide residues in parts per million, or ppm. To put that in context, one ppm is roughly equivalent to a single minute in two years.
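The “one minute in two years” comparison checks out arithmetically, as a quick calculation shows:

```python
# Quick check of the ppm analogy: is one minute in two years about 1 ppm?
minutes_in_two_years = 2 * 365 * 24 * 60   # 1,051,200 minutes
fraction = 1 / minutes_in_two_years        # one minute as a fraction of that
ppm = fraction * 1_000_000                 # convert to parts per million
print(f"{ppm:.3f} ppm")                    # close to one part per million
```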
In addition, the overwhelming understanding within the scientific community is that any risk assessment carries some level of uncertainty. The EWG twists this fact around to suggest that uncertainty equals harm. But its approach ignores the fact that uncertainties are a part of risk assessment. It effectively misrepresents the consequences of pesticide exposure.
Despite the claims made by the EWG, the mere presence of pesticide residues cannot constitute a risk. A “risk” is, by definition, dependent on the level of exposure, and the EPA has set specific tolerance levels for each and every pesticide. The challenge lies in accurately communicating risk and, by extension, safety, to consumers.
The real risk
While some groups, such as the EWG, promote organic produce over conventional produce, there are a far greater number of regulatory safeguards in place for use of conventional pesticides. As noted above, the EPA requires an extensive amount of scientific research to support an application for a new conventional pesticide. Organic pesticides are managed by the USDA National Organic Program (NOP). The NOP does not specifically list each allowable natural substance that can be used for organic farming. Rather, it sets the criteria for determining if a substance is natural. The reality is that data on pesticide use in organic farming are limited. Importantly, the NOP is managed by the 15 members of the National Organic Standards Board, of whom only one is listed as a scientist.
Finally, a huge body of peer-reviewed research shows the positive benefits of a diet rich in fruits and vegetables, including conventionally grown produce. Consumers should not avoid fruits and vegetables simply out of fear, or because they cannot afford the often higher cost of organic fruits and vegetables. No fruits or vegetables are nutritious until they’re eaten.
Hannah Rose Park at Michigan State University contributed to this article.
Marijuana-infused foods – often called edibles – are becoming more and more popular in states such as Colorado, where recreational marijuana is sold.
In the first quarter of 2014, the first year recreational sales were allowed in Colorado, edibles made up 30 percent of legal sales. By the third quarter of 2016, that grew to 45 percent.
Edibles come in a variety of forms, from candy and baked goods, to trail mix and even coffee or soda.
As a social and behavioral scientist who studies the prevention of adolescent substance initiation and misuse, the legalization of recreational marijuana has been on my mind a lot lately. The younger people are when they start using substances, the greater the risk for developing subsequent neurocognitive, mental health and further substance-related problems.
Marijuana-infused edibles raise a lot of concerns. Young children may accidentally eat edibles meant to look like candy or other foods, and adolescents may not think edibles are as risky as smoking marijuana.
Concerns about edibles
Edibles are manufactured with varying levels of Tetrahydrocannabinol (THC), the active ingredient in marijuana. They can take the form of almost any type of food, most notably candy and baked goods.
When products are shaped like candy, there is a concern that children will accidentally ingest them. They may mistake these marijuana-infused products for regular food, as they may not be able to read the labels and markings indicating that the products contain THC, or may not understand what the labels actually mean.
A recent retrospective study examined unintentional exposure to marijuana among children who were treated at a children’s hospital and regional poison center in Colorado between 2009 and 2015, the year after it became legal for recreational marijuana to be sold in the state.
Findings indicated a fivefold increase in the number of children under 10 who were exposed to marijuana, from nine cases in 2009 to 47 cases in 2015. The poison center saw an average annual increase of 34 percent in cases, compared with an average increase of 19 percent for the rest of the country.
Edibles, including baked goods, candy and popcorn products, were implicated in over half of the exposures.
Teens may accidentally ingest edibles as well. In these cases, though, the concern is that teens may give edibles to their peers without telling them what is in the product.
Risk perception of edibles is lower
In 2015, only 12.3 percent of high school seniors believed that trying marijuana once or twice was harmful (down from 18.5 percent in 2009). Less than one in three believed smoking marijuana regularly to be harmful, down from 52.4 percent in 2009.
My professional opinion is that marijuana-infused edibles will continue to push down this trend in perceived risk among adolescents, as edibles may appear much less risky than smokable forms. In other words, it is a much shorter leap from not using marijuana at all to eating an edible product to get high, compared to the leap between no use and smoking the drug for the first time through a joint or a bong.
Additionally, edibles make it significantly easier for those who are underage to use marijuana covertly, which may make it more appealing. This could potentially increase frequency of use.
The risk of getting caught using marijuana at home or at school may be lower if an adolescent is eating a cookie or candy infused with THC than smoking the drug. That could make this mode of use appealing for teens who may otherwise abstain out of the fear of repercussions associated with “smelling like pot”.
Recent qualitative research found that some teens did use edibles in school specifically for this reason. Moreover, some of the females reported that they found edibles appealing because using the drug in this manner made them less likely to appear in public as a “marijuana user”.
How much THC is in that cookie?
Dosing is another concern. Colorado and Washington state have set a limit of 10 milligrams per individual serving, while Oregon and Alaska set a limit of five milligrams for an individual serving. In theory, regulations surrounding dosing in specific milligram increments would be helpful in allowing consumers to self-regulate how much they ingest.
The size of the dose is one issue, but how it is administered is another. If a drug is inhaled, it is quickly absorbed into the bloodstream and then goes to the brain. This is the fastest way of administering a drug.
The slowest mode of administration is ingestion, such as through pills or food. The drug enters the bloodstream through the lining of the stomach and small intestine, and then has to travel to the brain. This means it can take about 30-60 minutes to feel the effects. Eating a product with a drug in it may take even longer, because the body has to digest it as food first before the substance gets into the lining of the stomach.
There are also different types of marijuana, specifically sativa and indica. Sativa has a more stimulating, energetic effect, whereas indica has more of a psychoactive and sedating effect. There are also hybrids of the two, if users are interested in a combination of the different effects.
However, research on marijuana has been limited, so not much has been examined with the different types and different strains, especially among adolescents, making it a challenge to understand the full picture.
Policies regarding edibles
Like all marijuana, cannabis-infused edibles are not regulated by the FDA. At the moment, policies and regulations about edibles are being drafted by states that are voting to legalize recreational marijuana.
The states with the longest history of legalized recreational marijuana, Colorado and Washington, have the most extensive policies surrounding edibles. As problems associated with the recreational use of marijuana have surfaced, these policies have – and continue to – change.
Washington has strict and extensive policies around edibles, including explicit language that needs to be put on labels and packaging. The state also bans images on packaging that would appeal to young children, such as cartoons and toys, as well as edible candy that would be most appealing to young people, such as gummy bears and jelly beans.
California, Maine, Massachusetts and Nevada all voted to legalize recreational marijuana in the 2016 election.
California’s Prop. 64, the ballot proposition for recreational marijuana, restricts edibles with a 10 milligram portion size requirement, along with additional requirements that packaging be childproof and, as with other states, not appeal to children.
Maine, Massachusetts and Nevada are drafting legislation to regulate recreational sales of marijuana, including edibles.
Where do we go from here?
As a professional in this field, I understand the desire to offer multiple routes of administration of marijuana, particularly for those who are using it for medical reasons.
However, I believe that edible marijuana, especially in forms that are appealing to young people, is extremely problematic.
Adolescent substance misuse prevention professionals have their work cut out for them, as prevention efforts will have to specifically target marijuana edibles moving forward. Additionally, it is up to policymakers to do everything in their power to make it harder for those who are under 21 to acquire and also consume edible marijuana. This will require a lot of time preparing before the policies go into effect.
One network got me $50 worth of free pasta. Another gave me all the laundry detergent I needed for a year. Yet another opened me up to the awesome world of free pudding cups. Had I tapped into the world’s largest underground food network?
Imagine if you had unfettered access to hundreds of coupons every month, and using each coupon every time you shopped could turn a $200/week grocery bill into a $50/week grocery bill. Would you still pay full price for Gatorade? Vegetables? Tide? Kleenex? What about that kitty litter bill, or the trash bags for your household trash bins?
If you’re one of the millions of people in America having trouble keeping up with the bills, this is the answer to your prayers: coupon networks.
Coupon networks offer a trove of up-to-date, valuable coupons for everyday items you look for every time you go grocery shopping, and they’re growing fast. One network offered me free laundry soap for an entire year, and that wasn’t even the best part. It also offered me coupons amounting to thousands of dollars in savings just for being a free member.
These networks don’t come without their pitfalls, but those are few and far between. The only downside I’ve come across is that the offers last only a short time: usually a week or two, sometimes a month to six months, but typically on the shorter side. That’s why it’s important to check them out sooner rather than later; don’t procrastinate if you want to save a lot of money on your grocery shopping.
Below are a few of the best coupon networks and coupon offers that I’ve found. All of them are free to sign up:
In case you’ve forgotten the section on the food web from high school biology, here’s a quick refresher.
Plants make up the base of every food chain of the food web (also called the food cycle). Plants use available sunlight to convert water from the soil and carbon dioxide from the air into glucose, which gives them the energy they need to live. Unlike plants, animals can’t synthesize their own food. They survive by eating plants or other animals.
Clearly, animals eat plants. What’s not so clear from this picture is that plants also eat animals. They thrive on them, in fact (just Google “fish emulsion”). In my new book, “A Critique of the Moral Defense of Vegetarianism,” I call it the transitivity of eating. And I argue that this means one can’t be a vegetarian.
Chew on this
I’ll pause to let the collective yowls of both biologists and (erstwhile) vegetarians subside.
A transitive property says that if one element in a sequence relates in a certain way to a second element, and the second element relates in the same way to a third, then the first and third elements relate in the same way as well.
Take the well-worn trope “you are what you eat.” Let’s say instead that we are “who” we eat. This makes the claim more personal and also implies that the beings who we make our food aren’t just things.
How our food lives and dies matters. If we are who we eat, our food is who our food eats, too. This means that we are who our food eats in equal measure.
Plants acquire nutrients from the soil, which is composed, among other things, of decayed plant and animal remains. So even those who assume they subsist solely on a plant-based diet actually eat animal remains as well.
This is why it’s impossible to be a vegetarian.
For the record, I’ve been a “vegetarian” for about 20 years and nearly “vegan” for six. I’m not opposed to these eating practices. That isn’t my point. But I do think that many “vegetarians” and “vegans” could stand to pay closer attention to the experiences of the beings who we make our food.
For example, many vegetarians cite the sentience of animals as a reason to abstain from eating them. But there’s good reason to believe that plants are sentient, too. In other words, they’re acutely aware of and responsive to their surroundings, and they respond, in kind, to both pleasant and unpleasant experiences.
I suspect how some biologists may respond: first, plants don’t actually eat since eating involves the ingestion – via chewing and swallowing – of other life forms. Second, while it’s true that plants absorb nutrients from the soil and that these nutrients could have come from animals, they’re strictly inorganic: nitrogen, potassium, phosphorus and trace amounts of other elements. They’re the constituents of recycled minerals, devoid of any vestiges of animality.
As for the first concern, maybe it would help if I said that both plants and animals take in, consume or make use of, rather than using the word “eat.” I guess I’m just not picky about how I conceptualize what eating entails. The point is that plants ingest carbon dioxide, sunlight, water and minerals that are then used to build and sustain their bodies. Plants consume inasmuch as they produce, and they aren’t the least bit particular about the origins of the minerals they acquire.
With respect to the second concern, why should it matter that the nutrients drawn by plants from animals are inorganic? The point is that they once played an essential role in facilitating animals’ lives. Are we who we eat only if we take in organic matter from the beings who become our food? I confess that I don’t understand why this should be. Privileging organic matter strikes me as a biologist’s bias.
Then there’s the argument that mineral recycling cleanses the nutrients of their animality. This is a contentious claim, and I don’t think this is a fact of the matter. It goes to the core of the way we view our relationship with our food. You could say that there are spiritual issues at stake here, not just matters of biochemistry.
Changing how we view our food
Let’s view our relationship with our food in a different way: by taking into account the fact that we’re part of a community of living beings – plant and animal – who inhabit the place that we make our home.
We’re eaters, yes, and we’re also eaten. That’s right, we’re part of the food web, too! And the well-being of each is dependent on the well-being of all.
From this perspective, what the self-proclaimed “farmosopher” Glenn Albrecht calls sumbiotarianism (from the Greek word sumbioun, to live together) has clear advantages.
Sumbioculture is a form of permaculture, or sustainable agriculture. It’s an organic and biodynamic way of farming that’s consistent with the health of entire ecosystems.
Sumbiotarians eat in harmony with their ecosystem. So they embody, literally, the idea that the well-being of our food – hence, our own well-being – is a function of the health of the land.
In order for our needs to be met, the needs and interests of the land must come first. And in areas where it’s prohibitively difficult to acquire the essential fats that we need from pressed oils alone, this may include forms of animal use – for meat, manure and so forth.
Simply put, living sustainably in such an area – whether it’s New England or the Australian Outback – may well entail relying on animals for food, at least in a limited way.
All life is bound together in a complex web of interdependent relationships among individuals, species and entire ecosystems. Each of us borrows, uses and returns nutrients. This cycle is what permits life to continue. Rich, black soil is so fertile because it’s chock full of the composted remains of the dead along with the waste of the living.
Indeed, it’s not uncommon for indigenous peoples to identify veneration of their ancestors and of their ancestral land with the celebration of the life-giving character of the earth. Consider this from cultural ecologist and Indigenous scholar-activist Melissa Nelson:
The bones of our ancestors have become the soil, the soil grows our food, the food nourishes our bodies, and we become one, literally and metaphorically, with our homelands and territories.
You’re welcome to disagree with me, of course. But it’s worth noting that what I propose has conceptual roots that may be as old as humanity itself. It’s probably worth taking some time to digest this.
Regular vitamin D doses can tame inflammation linked to chronic diseases
Adequate time in the sun or a supplement bolsters immune cell function
Vitamin D deficiency can lead to soft bones
VITAMIN D supplements can control inflammation associated with chronic conditions such as obesity, heart disease or diabetes.
This finding is based on a review by Curtin University scientists of 23 immune cell studies.
“We found evidence that vitamin D was able to indirectly quench reactive oxygen species, which are accepted as a major factor in the onset and development of chronic diseases including type 2 diabetes,” Professor Philip Newsholme says.
“In fact, inflammation may contribute to a multitude of diseases,” Prof Newsholme says.
The results showed that, for the average person, getting adequate sun exposure or taking a vitamin D supplement benefits immune cell function.
People who had a good immune cell defence were more likely to have good overall health, according to the review.
Vitamin D deficiency can lead to soft bones or osteoporosis, with symptoms often not evident or ranging from muscle or joint pain, depression and fatigue. It can only be diagnosed via a blood test.
However, this can be avoided by taking the daily supplement or spending time in the sun.
“These kinds of diseases are associated with chronic inflammation and may well benefit from ensuring people have adequate amounts of vitamin D so that they can then suppress any adverse levels of inflammation,” he says.
The researchers examined vitamin D in its active form being injected into human cells, focusing their attention on chronic conditions such as obesity and diabetes.
Prof Newsholme says the findings are exciting because they feed into longer-term studies, in particular those examining vitamin D levels in humans and their impact on metabolism.
“We believe vitamin D is important for regulation of immune cell metabolism and function, therefore may impact and reduce the onset of chronic inflammatory diseases related to ageing such as cardiovascular disease and diabetes,” he says.
He is now part of a clinical trial that takes blood samples from people and examines their vitamin D levels in winter; these participants are due to be tested again in February.
In a study performed by Erin Hanlon, a research associate at the University of Chicago, researchers revealed that lack of sleep induced higher levels of endocannabinoids, brain chemicals that bind to the same receptors as marijuana and regulate our appetite.
As the CDC states, one in three American adults doesn’t get enough sleep, which happens to match roughly the percentage of American adults who are obese. Hanlon was interested in connecting these two problems and found that there may, indeed, be a connection between obesity and inadequate sleep.
The study compared the appetites of adults who got 8.5 hours of sleep with those who got only 4.5 hours. Those with less sleep were more apt to eat unhealthy, sugary junk food the longer they stayed awake, and their endocannabinoid levels were higher than those of participants who got a full night’s sleep.
“We are trying to get out awareness that people need to think of adequate sleep as an important aspect of maintaining good health,” Hanlon told CNN.com.
While there are many studies published regarding the causes of obesity, sleep deprivation hasn’t received the attention Hanlon felt it deserved, so she got the support of researchers from the Université Libre de Bruxelles and the Medical College of Wisconsin. The researchers were able to measure the concentration of a specific endocannabinoid called 2-AG in the blood and found the results surprising.
Whether quaffing artisanal cocktails at hipster bars or knocking back no-name beers on the couch, Americans are drinking more heavily – and binge-drinking more often, too, concludes a major study of alcohol use.
Heavy drinking among Americans rose 17.2 percent between 2005 and 2012, largely due to rising rates among women, according to the study by the Institute for Health Metrics and Evaluation at the University of Washington, published Thursday in the American Journal of Public Health.
The Centers for Disease Control and Prevention defines heavy drinking as exceeding an average of one drink per day during the past month for women and two drinks per day for men. Binge drinking is defined as four or more drinks for women and five or more drinks for men on a single occasion.
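The CDC thresholds above can be expressed as a simple rule-of-thumb classifier. This is only an illustrative sketch; the function and parameter names are our own, not from any CDC tool:

```python
def drinking_category(sex, avg_drinks_per_day, max_drinks_one_occasion):
    """Classify drinking behavior using the CDC thresholds cited above.

    sex: "female" or "male"
    avg_drinks_per_day: average drinks per day over the past month
    max_drinks_one_occasion: most drinks consumed on a single occasion
    """
    heavy_limit = 1 if sex == "female" else 2  # drinks per day, on average
    binge_limit = 4 if sex == "female" else 5  # drinks on a single occasion

    labels = []
    if avg_drinks_per_day > heavy_limit:       # "exceeding" the average
        labels.append("heavy drinking")
    if max_drinks_one_occasion >= binge_limit:  # "four or more" / "five or more"
        labels.append("binge drinking")
    return labels or ["within guidelines"]
```

Note that heavy drinking is defined by *exceeding* the daily average, while binge drinking is met *at* the single-occasion threshold, so the two comparisons differ.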
The increases are driven largely by women’s drinking habits as social norms change, researchers found. In Santa Clara County, Calif., for example, women’s binge drinking rates rose by nearly 36 percent between 2002 and 2012, compared with 23 percent among men.
Nationwide over the course of the decade, the binge-drinking rate among women rose more than seven times faster than the rate among men.
“It seems like women are trying to catch up to the men in binge drinking,” said Ali Mokdad, a lead author of the study. “It’s really, really scary.”
The study is the first to track adult drinking patterns at the county level.
Madison County, Idaho, reported the lowest rate of binge drinking in 2012, at 5.9 percent, while Menominee, Wis., had the highest, at 36 percent. Hancock County, Tenn., had the fewest heavy drinkers (2.4 percent of residents) and Esmeralda County, Nev., recorded the most (22.4 percent).
About 88,600 U.S. deaths were attributed to alcohol in 2010, the researchers note, and the cost of excessive drinking has been estimated at more than $220 billion per year.
The increase in binge drinking doesn’t surprise Terri Fukagawa, clinical director of the New Life Recovery Centers in San Jose, Calif., where 15 of her 24 treatment beds are filled with clients primarily addicted to alcohol. She said she’s seen more people seeking treatment for alcoholism in the past four years.
Still, she noted, “there are a lot of people still out there needing treatment, but they won’t come in unless they have a consequence like losing a job or [getting] a DUI. They think they have control over it.”
Public health experts offer a number of cultural and economic explanations for the increase in drinking.
Social norms have changed – it’s now more acceptable for women to drink the way men traditionally have, said Tom Greenfield, scientific director at the Alcohol Research Group at the Oakland, Calif.-based Public Health Institute.
Young people are more likely to binge drink, and affluent people have the money to drink more. So the influx of wealthy professionals in cities like San Francisco, San Jose and Oakland – many in hard-working, hard-partying tech jobs – may have helped spur significant spikes in drinking rates in the Bay Area and similar communities, experts said.
Taxes on alcohol have not risen along with the Consumer Price Index, so wine, beer and liquor have become cheaper over time in real dollars, Greenfield said.
Alcohol advertising, particularly for hard liquor, has increased in recent years. A Federal Trade Commission study found that companies spent about $3.45 billion to advertise alcoholic beverages in 2011.
Alcohol control policies, such as limits on when and where alcohol can be sold and how long bars can stay open, have weakened in past decades, Greenfield said. That may partly explain rising consumption nationwide, particularly in some states where “blue laws” once prohibited alcohol sales on Sundays or in supermarkets.
To conduct the study, researchers analyzed data on about 3.7 million Americans aged 21 and older from the Behavioral Risk Factor Surveillance System, an ongoing telephone survey of health behaviors conducted by the U.S. Centers for Disease Control and Prevention.
Many people have strong opinions about genetically modified plants, also known as genetically modified organisms or GMOs. But sometimes there’s confusion around what it means to be a GMO. It also may be much more sensible to judge a plant by its specific traits rather than the way it was produced – GMO or not.
This article is not about judging whether GMOs are good or bad, but rather an explanation of how plants with modified genomes are made. (There are non-plant GMOs, but in this article we will only refer to plant GMOs.) First of all, it’s necessary to define what we mean by a GMO. For the purposes of this discussion, I’m defining GMOs as plants whose genetic information (found in their genomes) has been modified by human activity.
Humans have changed the genomes of virtually all the plants in the grocery store
If we think of GMOs as plants that have genomes modified by humans, then quite a lot of the plants sold in any grocery store fit that description. But many of these modifications didn’t occur in the lab. Farmers select plants with superior, desirable traits to cultivate, in a process known as selective breeding. Thousands of years of traditional agricultural breeding have changed plant genomes from those of their original wild ancestors.
Broccoli, for example, is not a naturally occurring plant. It’s been bred from undomesticated Brassica oleracea or ‘wild cabbage’; domesticated varieties of B. oleracea include both broccoli and cauliflower. Broccoli, along with any seedless variety of fruit (including what you think of as bananas), and most of the crops grown on farms today would not exist without human intervention.
However, these aren’t the plants that people typically think of when they think of GMOs. It’s easy to understand how farmers can breed better plants on farms (by choosing to plant seeds from the biggest or best-yielding plants, for example, imposing artificial selection on the crop species) so even though this activity changes plant genomes in ways nature wouldn’t have, most people don’t consider these plants GMOs.
Creating “lab” GMOs
Once plant genes had been studied enough, researchers could turn to backcrossing. This technique involves breeding the offspring back with the parents to try to get a desired, stable combination of parental traits. Genes previously linked to desirable plant traits, such as higher yield or pest-resistance, could be identified and screened for using molecular biology techniques and linkage maps. These maps lay out the relative position of genes along a chromosome, based on how often they are passed along together to offspring. Closer genes tend to travel together.
Researchers used molecular markers – specific, known gene sequences present in the linkage maps – to select individual plants that contained both the new marker gene and the greatest proportion of other favorable genes from the parents. The combinations of genes passed to offspring are always due to random recombination of the parents’ genes. Researchers weren’t able to drive particular combinations themselves; they had to work with what arose naturally. So in this marker-assisted selection approach, a lot of effort and time is spent trying to find plants with the best combinations of genes.
In this system, a laboratory needs to screen the genomes, using molecular biology methods to look for particular gene sequences for desirable traits in the bred offspring. Sometimes a lab even breeds the plants in cases using tissue culture – a way to propagate many plants simultaneously while minimizing the resources needed to grow them.
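The marker-assisted selection process described above can be caricatured in a few lines of code. This is a toy sketch with invented gene names, treating each gene as inherited independently (real breeding programs use linkage maps precisely because genes are *not* independent):

```python
import random

# Hypothetical gene names for illustration only.
FAVORABLE_GENES = ["yield_qtl_1", "yield_qtl_2", "pest_resist_1"]
MARKER_GENE = "marker_A"  # the known sequence screened for in the lab

def make_offspring(parent_a, parent_b, rng):
    # Each gene is inherited at random from one parent. Ignoring linkage
    # here keeps the sketch simple; in reality nearby genes travel together.
    return {gene: rng.choice([parent_a[gene], parent_b[gene]])
            for gene in parent_a}

def select_best(offspring):
    # Step 1: keep only plants that carry the marker gene.
    carriers = [p for p in offspring if p[MARKER_GENE]]
    # Step 2: among carriers, pick the plant with the most favorable alleles.
    return max(carriers,
               key=lambda p: sum(p[g] for g in FAVORABLE_GENES),
               default=None)

rng = random.Random(0)  # fixed seed so the sketch is reproducible
elite = {**{g: True for g in FAVORABLE_GENES}, MARKER_GENE: True}
wild = {**{g: False for g in FAVORABLE_GENES}, MARKER_GENE: False}
offspring = [make_offspring(elite, wild, rng) for _ in range(200)]
best = select_best(offspring)
```

The point of the sketch is the two-stage filter: screening for the marker is cheap, while the “best combination” only turns up by chance among many offspring, which is why the approach takes so much effort and time.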
Inserting non-plant genes into GMOs
In the early 1980s, the plant biotechnology era began with Agrobacterium tumefaciens. This bacterium naturally infects plants and, in the wild, creates tumors by transferring DNA between itself and the plant it has infected. Scientists use this natural property to transfer genes to plant cells from an A. tumefaciens bacterium modified to contain a gene of interest.
For the first time, it was possible to insert specific genes into a plant genome, even genes that do not come from that species – or even from a plant. A. tumefaciens does not affect all plants, however, so researchers went on to develop DNA-transferring methods inspired by this system that would work without it. They include microinjection and “gene guns,” in which the desired DNA was physically injected into the plant or coated onto tiny particles that were literally shot into the nuclei of plant cells.
A recent review summarizes eight new methods for altering genes in plants. These are molecular biology techniques that use different enzymes or nucleic acid molecules (DNA and RNA) to make changes to a plant’s genes. One route is to alter the sequence of a plant’s DNA. Another is to leave the sequence alone but make other epigenetic modifications to the structure of a plant’s DNA. For instance, scientists could add arrangements of atoms called methyl groups to some of the nucleotide building blocks of DNA. These epigenetic modifications, while not altering the order of the DNA or of genes, change how genes can be expressed and thus the observable traits a plant has.
GMO doesn’t mean glyphosate-resistant
Calling a plant a genetically modified organism means only that: its genome has been modified by the activity of humans. But lots of people conflate the idea of a GMO plant with one that’s been created to be resistant to the herbicide glyphosate, also known by the brand name Roundup. It’s true that the most well-known GMO crops currently grown contain a gene that makes them resistant to glyphosate, which allows farmers to spray the chemical to kill weeds while allowing their crop to grow. But that’s just one example of a gene inserted into a plant.
The so-called “fish tomato” contains an antifreeze protein (gene name afa3), found naturally in winter flounder, that increases frost tolerance in the tomato plant. The tomato doesn’t actually contain fish tissue, or even necessarily DNA taken from fish tissue – just DNA of the same sequence present in the fish genome. The Afa3 protein is produced from the afa3 gene in the tomato cells using the same machinery as other tomato proteins.