November 8, 2013
In 1908, over a bowl of seaweed soup, Japanese scientist Kikunae Ikeda asked a question that would change the food industry forever: what gave dashi, a ubiquitous Japanese soup base, its meaty flavor? In Japanese cuisine, dashi, a base made from boiled seaweed and dried fish, was widely used by chefs to add extra oomph to meals, pairing well with other savory but meatless foods like vegetables and soy. For some reason, generally accepted but never explained, dashi made these meatless foods taste meaty, and Ikeda was determined to find out why.
Ikeda began by isolating the main ingredient of dashi: the seaweed Laminaria japonica. He then ran the seaweed through a series of chemical experiments, using evaporation to isolate a specific compound within it. After days of evaporating and treating the seaweed, he saw crystals develop. When he tasted them, he recognized the distinct savory taste that dashi lent to other foods, a taste he deemed umami, from the Japanese umai (delicious). It was a breakthrough that challenged a cornerstone of culinary thinking: instead of four tastes (sweet, salty, bitter and sour), there were now five. A new frontier of taste had been discovered, and Ikeda wasted no time capitalizing on his discovery.
He determined the molecular formula of the crystals: C5H9NO4, the same as glutamic acid, an amino acid designated as non-essential because the human body, along with many other animals and plants, can produce it on its own. In the body, glutamic acid is often found as glutamate, the ionized form of the acid, which has one less hydrogen atom. Glutamate is one of the most abundant excitatory neurotransmitters in the brain, playing a crucial role in memory and learning. The FDA estimates that the average adult consumes 13 grams of it a day from the protein in food. Non-meat foods like tomatoes and Parmesan cheese also have high levels of glutamic acid.
In 1909, Ikeda began mass-producing Ajinomoto (meaning “essence of taste”), an additive born of his invention of the first method for industrially producing glutamate from fermented vegetable proteins. The resulting sodium salt of glutamic acid (the acid bound to a single sodium ion) became famous for its ability to imbue dishes with a meaty flavor, or simply to enhance the natural flavor of food. It was touted as a nutritional wonder, helping bland but nutritious food become delicious. A growing number of Japanese housewives used the product, and by the 1930s, recipes called for Ajinomoto in their directions. The sodium salt of glutamic acid remains prevalent today; anyone who has eaten KFC or Doritos has ingested it. It’s just known by a different name: monosodium glutamate, or MSG.
Few sets of letters can stop conversation in its tracks like MSG, one of the most infamous additives in the food industry. The three little letters carry so much negative weight that they’re often whispered sheepishly or, more often, preceded by the modifier “NO,” which seems to make everyone breathe a collective sigh of relief when they go out to eat. Nobody wants MSG in their food, the protest goes; it causes headaches, stomachaches, dizziness and general malaise. It’s unhealthy and, maybe even worse, unsexy: a crutch for lazy chefs, not an enhancement.
On the other side of the spectrum lies umami: few foodie buzzwords pop off the lips with such entertaining ease. Enterprising young chefs like David Chang (of Momofuku fame) and Adam Fleischman, of the LA-based chain Umami Burger, have built their culinary careers on the fifth taste, revitalizing interest in umami’s meaty depth. It’s difficult to watch the Food Network or Travel Channel or any food-based program without hearing mention of the taste wunderkind, a host or chef cooing over the deep umami flavors of a Portobello mushroom. Where MSG is scary, umami is exciting.
What few people understand is that the hated MSG and the adored umami are chemically related: umami is tasted by the very receptors that MSG targets. At a MAD Symposium in Denmark, a TED-like conference for the food industry, Chang spoke about MSG and umami: “For me, the way that I’m looking at umami, it’s the same way I look at MSG. It’s one and the same.” But if chefs like Chang (neither inept nor lazy when it comes to flavor, as his Michelin stars attest) are down with MSG, why does the additive retain such a bad reputation?
After gaining a foothold in Japanese cooking columns, MSG spread throughout Asia, becoming especially popular in Chinese cooking for enhancing stocks and vegetarian dishes. Americans probably associate MSG most heavily with Chinese restaurants, thanks in large part to the absurdly racist name for MSG sensitivity, “Chinese Restaurant Syndrome.” But MSG’s foray into American cuisine came from more than Chinese dishes; it became popular in the United States during World War II, thanks in large part to the country’s growing military-industrial complex. The military thought it had found in MSG an answer to the flavorless rations allotted to soldiers, and when the war ended and the troops came home, the industrialization of food production came with them. From canned vegetables to frozen dinners, industrially created food was met with wonder in the United States.
That all changed in the 1960s, when trust in industrial food began to wane. In 1962, Rachel Carson published Silent Spring, a manifesto against pesticides that kicked off the environmental movement. As pesticides quickly fell from grace, faith in the industry of yesteryear, in the chemicals and additives born of the war, declined as well. In 1968, MSG’s death knell rang in the form of a letter to the New England Journal of Medicine from Robert Ho Man Kwok, a Chinese-American doctor from Maryland. Kwok claimed that after eating at Chinese restaurants, he often came down with certain unpleasant symptoms, namely “numbness at the back of the neck, gradually radiating to both arms and the back” and “general weakness and palpitation.” After Kwok’s letter ran, the journal received a deluge of letters from other readers, all claiming to suffer from the same affliction, which editors deemed “Chinese Restaurant Syndrome.” Some readers presented the same symptoms as Kwok, but most were extremely varied, ranging from cold sweats to extreme dizziness. In response, the journal offered up MSG as the likely culprit for its readers’ unpleasant symptoms.
Public interest spurred a number of scientific inquiries into the potential dangers of MSG. According to food historian Ian Mosby’s exploration of MSG in “That Won-Ton Soup Headache,” these inquiries went one of two ways: they either sought to prove the harmful short-term effects of MSG (and Chinese Restaurant Syndrome) or they looked for longer-term damage caused by the additive. Initially, researchers had success proving both: mice injected with the additive showed signs of brain lesions, and humans fed 3 grams of MSG per 200 ml of soup presented symptoms congruent with Chinese Restaurant Syndrome. Subsequent studies, however, provided mixed results: some confirmed the findings of brain lesions in animals or symptoms in humans, but others were unable to replicate them. Double-blind studies often showed little correlation between MSG and adverse symptoms. Parties on both sides of the debate slung accusations at each other, with anti-MSG researchers claiming that studies were being funded by MSG producers, and pro-MSG researchers accusing the other side of fear-mongering.
From the FDA to the United Nations to various national governments (Australia, Britain and Japan), the public bodies that have investigated MSG have deemed it a safe food additive. The FDA states on its website:
FDA considers the addition of MSG to foods to be “generally recognized as safe” (GRAS). Although many people identify themselves as sensitive to MSG, in studies with such individuals given MSG or a placebo, scientists have not been able to consistently trigger reactions.
Scientific interest in its deleterious effects seems to be waning: one of the last studies to gain public attention was published in 2011. Its authors claimed to have found a link between MSG and obesity, though those results have been questioned. While the general scientific consensus seems to be that only in large doses, and on an empty stomach, can MSG temporarily affect a small subset of the population, its reputation remains tarnished in the public eye.
MSG’s glutamic cousin umami, on the other hand, suffers no public scorn: in 2010, umami was deemed one of the most delicious food trends to watch. When Adam Fleischman’s Umami Burger (a burger chain devoted to all things umami) opened a New York outpost, the wait for a meaty bite stretched to three hours. In addition to piling natural glutamates onto its burgers to ensure the most umami flavor, Umami Burger tops them with “umami dust,” a blend of dried mushrooms and seaweed, and an umami sauce that includes soy and Marmite. Altogether, an original Umami Burger contains 2,185 mg of glutamate.
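For a sense of scale, here is a minimal back-of-envelope comparison in Python, using only two figures this piece has already cited (the FDA’s 13-gram daily average and the burger’s 2,185 mg). It’s a rough sketch, not a nutritional claim.

```python
# Rough comparison of one Umami Burger's glutamate with the FDA's estimate
# of average daily glutamate intake from food protein (figures from the text).
BURGER_GLUTAMATE_MG = 2185    # glutamate in an original Umami Burger
DAILY_AVERAGE_MG = 13 * 1000  # FDA estimate: ~13 g per day for an average adult

share = BURGER_GLUTAMATE_MG / DAILY_AVERAGE_MG
print(f"One burger is roughly {share:.0%} of an average day's glutamate intake")
```

By that crude measure, a single burger delivers around a sixth of the glutamate the average adult eats in a day anyway.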
“Most people don’t know the connection between umami and MSG. They know about it from the fifth taste, and the fifth taste was always called umami and not MSG,” Fleischman explains. “We didn’t feel that using MSG was creative enough. We wanted to do it ourselves. By doing it ourselves, we could create a flavor that was umami without the stigma of MSG. MSG, whether you like it or not, has been marketed so poorly, it sounds like this horrible thing.”
By harnessing natural glutamates for its burgers, Umami Burger avoids the negative connotations associated with MSG. But the “natural” glutamates in an Umami Burger aren’t chemically any different from the glutamates in MSG.
“The short answer is that there is no difference: glutamate is glutamate is glutamate,” says Richard Amasino, professor of biochemistry at the University of Wisconsin-Madison. “It would be identical unless different things created a different rate of uptake.”
Glutamates that occur naturally in food come intertwined with other chemicals and fiber, which the body is naturally inclined to regulate, explains Amy Cheng Vollmer, professor of biology at Swarthmore College. MSG, however, comes without the natural components of food that help the body regulate glutamate levels. It’s like taking an iron supplement versus obtaining iron from spinach or red meat: the supplement creates an expressway between the iron and your bloodstream that you wouldn’t find in natural iron sources.
“The bottom line here is context is everything,” Vollmer adds.
So does MSG deserve its bad rap? For the small section of the population that shows sensitivity to it, probably. But for the rest of America, maybe it’s time to reconsider exactly what we’re so afraid of when it comes to MSG.
October 21, 2013
It was the second day of autumn term at a small boys’ school in South London in 1979. Without warning, 78 schoolboys and a handful of monitors simultaneously fell ill. Symptoms included vomiting, diarrhea, abdominal pain and, in severe cases, depression of the central nervous system. Several patients were comatose with episodes of convulsive twitching and violent fits of fever. In many patients, there were signs of peripheral circulatory collapse. Within five days of the initial outbreak, all patients recovered in full, though some hallucinated for several days, Mary McMillan and J.C. Thompson report in the Quarterly Journal of Medicine. But what could cause such a sudden and mysterious illness?
Turns out, a bag of potatoes left in storage from the previous summer term.
After careful analysis of the sequence of events, the onset of symptoms was pinpointed to roughly four to 14 hours after the boys had eaten boiled potatoes with a high concentration of solanine, a glycoalkaloid toxin first isolated in 1820 in the berries of a European black nightshade. Nightshade is the term used for over 2,800 species of plants in the scientific family Solanaceae. Eggplants, tomatoes and some berries are common members of the nightshade family, and many of them contain highly toxic alkaloids.
That said, the potato is the most common cause of solanine poisoning in humans. But how do you know when solanine is present in a potato? The tuber turns green.
Though the green color that forms on the skin of a potato is actually chlorophyll, which isn’t toxic at all (it’s the plant’s response to light exposure), its presence signals elevated concentrations of solanine. The nerve toxin is produced in the green parts of the potato (the leaves, the stem and any green spots on the skin). Why does it exist? It’s part of the plant’s defense against insects, disease and other predators.
Eat enough of the green stuff and it can cause vomiting, diarrhea, headaches and paralysis of the central nervous system (as evidenced by the incident above), and in rare cases the poisoning can cause coma, even death. Studies have recorded illnesses caused by a range of 30 to 50 mg of solanine per 100 grams of potato, but symptoms vary depending on the ratio of toxin to body weight and on the individual’s tolerance of the alkaloid. The following cases, recorded in various medical journals, include some of the most severe episodes of solanine poisoning (several of which resulted in death):
1899: After eating cooked potatoes containing 0.24 mg of solanine per gram of potato, 56 German soldiers experienced solanine poisoning. Though all recovered, in a few cases, jaundice and partial paralysis were observed.
1918: In Glasgow, Scotland, 61 people from 18 separate households were sickened at once by a bad batch of potatoes. The following day, a five-year-old boy died of strangulation of the bowel after extreme retching and vomiting. According to “An Investigation of Solanine Poisoning,” published in 1933 by S. G. Willimott, PhD, BSc, the case was investigated by scientists R. W. Harris and T. Cockburn, who concluded in their 1918 article “Alleged Poisoning By Potatoes” that the poisoning resulted from eating potatoes containing five or six times the solanine found in normal potatoes. Willimott cites the occurrence as evidence of the toxin’s prevalence: “A review of the literature reveals the fact that authentic cases of solanine poisoning are not so rare as authorities appear to believe.”
1925: Seven members of a family were poisoned by greened potatoes, and two of them died. According to reports, symptoms included vomiting and extreme exhaustion, but none of the convulsions seen in the London schoolboys. Breathing was rapid and labored until consciousness was lost a few hours before death.
1948: A case of solanine poisoning involving the potato’s nightshade relative, the black nightshade berry, was recorded in the article “A Fatal Case of Solanine Poisoning,” published in the British Medical Journal. On August 13 of that year, a 9-year-old girl with a bad habit of snacking on the berries that grew along the railroad tracks by her house was admitted to the hospital with vomiting, abdominal pain and distressed breathing. She died two days later. An autopsy found hemorrhages in the mucosa of her stomach and the middle section of her small intestine. The stomach contained about a pint of dark brown fluid.
1952: According to the British Medical Journal, solanine poisoning is most common during times of food shortage, when large groups facing starvation have been known to eat older potatoes with a higher concentration of the toxin. In North Korea during the war years of 1952-1953, entire communities were forced to eat rotting potatoes. In one area alone, 382 people were affected, of whom 52 were hospitalized and 22 died. The most severe cases died of heart failure within 24 hours of eating the potatoes. Less severe symptoms included irregular pulses, enlargement of the heart, and bluing of the lips and ears; those who displayed them died within five to 10 days. Authors John Emsley and Peter Fell explain in their book Was It Something You Ate?: Food Intolerance: What Causes It and How to Avoid It: ”In the final stages [of the illness] there was sometimes a state of high excitability with shaking attacks and death was due to respiratory failure.”
1983: Sixty-one of 109 schoolchildren and staff in Alberta, Canada, fell ill within five minutes of eating baked potatoes. Forty-four percent of those affected noted a green tinge and a bitter taste in the potatoes.
Not to worry, though: fatal cases of solanine poisoning are very rare these days. Most commercial varieties of potatoes are screened for solanine, but any potato will build up the toxin to dangerous levels if exposed to light or stored improperly. The highest concentrations of solanine are usually in the peel, just below the surface, and in the sprouted “eyes,” all of which are typically removed in cooking preparation, though researchers caution that boiling dissolves only a little of the alkaloid. Emsley and Fell continue:
Most people can easily cope with the solanine in the average portion of potato and show no symptoms of poisoning because the body can break it down rapidly and excrete the products in the urine. But if the level of solanine is as high as 40 mg per 100 g of potato, symptoms include diarrhea…even coma.
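Those per-weight figures translate into serving-size doses with a little arithmetic. Here is a minimal sketch in Python using only the concentrations quoted above; the 300-gram serving is an assumed figure for illustration, not one from the studies.

```python
# Back-of-envelope solanine doses for one serving of badly greened potatoes.
SERVING_G = 300  # assumed serving size in grams; not from the cited studies

# Concentrations reported to cause illness (mg of solanine per 100 g of potato).
for mg_per_100g in (30, 40, 50):
    dose_mg = mg_per_100g * SERVING_G / 100
    print(f"{mg_per_100g} mg/100 g -> {dose_mg:.0f} mg solanine in a {SERVING_G} g serving")
```

By that rough measure, a single serving of heavily greened potatoes can carry on the order of 100 mg of solanine, which is why the storage advice below matters.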
The best way to prevent solanine poisoning is to store tubers in a cool, dark place and remove the skin before consumption. A general rule for avoiding illnesses like the ones described above? Green and sprouted? Throw it out.
September 9, 2013
Whiskey drinkers know that the moment they swirl a bit of the smoky spirit in their mouths, they’re bound to find a world of flavors: some oak, some smoke, a little vanilla, maybe a slight bite from tannin. Brown liquors, from scotch to bourbon and all the whiskeys in between, are complex spirits that lend themselves to purposeful tasting, creating connoisseurs willing to shell out top dollar for the peatiest scotch or their favorite spicy bourbon. The magic of these complex profiles might be explained by the chemical fingerprints that separate one whiskey from another and change the way each tastes.
It’s an idea that the aptly named Tom Collins, a researcher at the University of California, Davis, is actively pursuing. “I worked on my Ph.D., and it was a project looking at aroma and flavor chemistry in wine [fermented] in oak barrels,” Collins explains, crediting the barrels with sparking his initial interest in the chemistry of spirits. “It sort of seemed a natural extension to look from the chemistry of wine to the chemistry of whiskeys, because the chemistry of oak barrels plays a huge role in what you see in whiskeys of all sorts.”
Collins and researchers at Davis set out to see if they could determine the chemical differences among 60 whiskeys: 38 straight bourbons, 10 ryes, five Tennessee whiskeys and seven other American whiskeys, varying in age from two to 15 years old. What they found was a spectacular testament to the spirit’s complex chemistry: over 4,000 different non-volatile compounds across the samples, results Collins presented today at the 246th National Meeting & Exposition of the American Chemical Society. “It’s very complex,” Collins says of the chemistry. “There are components that are barrel derived, as we would expect, but there are also things that are related to the grains that are used to make the distillates in the first place—so the corn and wheat and rye and things that are fermented to form the distillate. We see some components that appear to be grain related, and there are also likely to be components that are derived from the yeast that are used to do the fermentation.”
Of the thousands of chemical compounds Collins found, there was a fair amount of overlap between the different spirits. But Collins found that each spirit contained unique compounds, or unique concentrations of compounds, that he could use to distinguish a scotch from a bourbon, or a Tennessee whiskey from a bourbon, simply by looking at the liquor’s chemistry. “If you try to make sense of all of the components that are there, it’s essentially overwhelming, but if you filter out the things that are not used in Tennessee whiskeys, or things that are only present in some of the bourbons, you can sort of whittle away down to the things that define what a bourbon is or what a Tennessee whiskey is chemically,” Collins said.
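That filtering idea is easy to picture in code. Below is a toy sketch in Python of the “whittle away” approach; the sample names, compound names and concentrations are invented for illustration and are not Collins’ data or method.

```python
# Toy version of the filtering Collins describes: given which compounds were
# detected in each sample, find the compounds that appear in one style of
# whiskey but never in another. All data below is invented for illustration.
samples = {
    "bourbon_A":   {"compound_1": 0.9, "compound_2": 0.4},
    "bourbon_B":   {"compound_1": 1.1, "compound_2": 0.3},
    "tennessee_A": {"compound_1": 1.0, "compound_3": 0.7},
    "tennessee_B": {"compound_1": 0.8, "compound_3": 0.9},
}

def compounds_in(style):
    """All compounds detected in any sample of the given style."""
    found = set()
    for name, compounds in samples.items():
        if name.startswith(style):
            found.update(compounds)
    return found

# Whittle away the shared chemistry, leaving each style's distinguishing marks.
print("bourbon-only:  ", compounds_in("bourbon") - compounds_in("tennessee"))
print("tennessee-only:", compounds_in("tennessee") - compounds_in("bourbon"))
```

In the real study the same whittling happens over thousands of compounds, and over concentrations rather than simple presence or absence, but the logic is the same: subtract the shared chemistry, and what remains defines the style.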
It might be the perfect answer to that eternal question of novice whiskey drinkers everywhere: what exactly is the difference between a whiskey and a bourbon?
The confusing answer is that bourbon is always whiskey, but not all whiskey is bourbon. That has always been true, both historically and from a regulatory perspective. Historian Michael Veach spoke with Food and Think in June and dispelled the myths that bourbon has its roots in Bourbon County, Kentucky, and that all bourbons must originate there. “People started asking for ‘that whiskey they sell on Bourbon Street,’” Veach says, “which eventually became ‘that bourbon whiskey.’”
The regulatory distinction presents a slight complication: some Tennessee whiskeys, from a regulatory standpoint, actually qualify as bourbons but choose not to market themselves as such (Jack Daniel’s, for example, adamantly markets itself as a Tennessee whiskey, even though it meets the regulatory standards for a bourbon). Natalie Wolchover at Live Science outlines the regulatory standards for bourbon:
While bourbon whiskey has its roots in Kentucky, and continues to be primarily produced there, it is now manufactured in distilleries all over the United States. Manufacturers must meet the following requirements in order to advertise their whiskey product as “bourbon”:
It must be produced in the U.S. from a grain mixture (called “mash”) made up of at least 51 percent corn. It must be distilled to a maximum strength of 160 proof, bottled at a strength of at least 80 proof, and barreled for aging at no more than 125 proof. It must be aged in new, charred oak barrels. To qualify as “straight bourbon,” the spirits must meet the above requirements as well as being aged for at least two years and containing no added coloring, flavoring or other spirits.
Many bourbon whiskey distilleries in Kentucky advertise their use of unique water filtered by the limestone shelf in Bourbon County; while this feature may add to the allure of Kentucky bourbon whiskey, the federal trade regulations do not stipulate about what water must be used.
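Read as rules, those requirements amount to a checklist, and a minimal sketch in Python can encode the quoted standards. The field names and the example spirit below are assumptions for illustration, not an official schema.

```python
# Checklist sketch of the "bourbon" standards quoted above.
# Field names and the example spirit are invented for illustration.
def is_bourbon(spirit):
    return (
        spirit["made_in_us"]
        and spirit["mash_corn_pct"] >= 51        # mash at least 51 percent corn
        and spirit["distilled_proof"] <= 160     # distilled to at most 160 proof
        and spirit["barrel_entry_proof"] <= 125  # barreled at no more than 125 proof
        and spirit["bottled_proof"] >= 80        # bottled at at least 80 proof
        and spirit["new_charred_oak"]            # aged in new, charred oak barrels
    )

def is_straight_bourbon(spirit):
    return (
        is_bourbon(spirit)
        and spirit["age_years"] >= 2  # aged at least two years
        and not spirit["additives"]   # no added coloring, flavoring or other spirits
    )

example = {
    "made_in_us": True, "mash_corn_pct": 70, "distilled_proof": 130,
    "barrel_entry_proof": 110, "bottled_proof": 90, "new_charred_oak": True,
    "age_years": 4, "additives": False,
}
print(is_bourbon(example), is_straight_bourbon(example))  # True True
```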
Collins thinks he might have a more chemically elegant answer to the conundrum. As his team discovered, there are 50 to 100 chemical compounds, such as fatty acids and tannins, that can be used to distinguish a Tennessee whiskey from a bourbon, to such an extent that Collins can tell the difference between them without tasting either. Chemically, it’s often a question of concentration: how much of a plant-derived compound does a spirit have? How much tannin? “There are, in many cases, certain compounds that are only found in one or the other, but more often, there are compounds that are present in both but at different concentrations. Those are the tannins, the fatty acids, and in some cases, terpenes – compounds that are plant-derived.”
These compounds complicate the matter further: certain chemicals are extracted from the wood barrels during the aging process, and so aren’t unique to the distillate itself. As Collins notes, barrels are, after all, made from trees, an unarguably plant-derived material. So how do researchers separate the plant-derived compounds native to the distillate from those that come from the barrel? “Some of the ways we get through that is to look at whiskeys that have been freshly distilled, and haven’t been put in barrels yet, so we can see what’s there in the fresh distillate before we put it in oak, and then we can see what changes between the newly distilled spirit and the spirit that has been aged in barrels for some period of time,” Collins explains. “That helps us to understand what the things are that come from the barrels, versus the things that come from the distillate itself.”
Collins and his team have yet to embark on the next step of their experiments, relating the differences in chemical makeup to potential sensory differences in aroma and flavor, but he feels fairly confident that the two are related. “I think, being a chemist, that the sensory differences arise from the chemistry,” Collins admits. Take, for example, the chemical compounds that arise when the spirit is aged in a charred barrel. “The sensory component that you smell, that you associate with toasted oak, or charred oak, is going to be related to the compounds that are extracted by the whiskey from the wood,” Collins explains.
Understanding the delicate interplay between chemistry and aroma could be a huge help to distillers looking to tweak their whiskey to encapsulate that perfect blend of smoky and spicy. “This could be a tool [distillers] could use to understand if they make a change to their distillation processes, how does that impact the resulting whiskey,” Collins said, noting that the better distillers understand how the process of distillation impacts the final product, the better they can manipulate the process to their advantage. “It’s a tool that can be used by distillers large and small to understand the impact of what they’re doing on the chemistry, and then the sensory.”
It’s research that means that the perfect whiskey–smoky, spicy, or however you want it–might not be so elusive after all.
July 31, 2013
A glass of champagne is often synonymous with toasting some of life’s biggest moments: a big promotion at work, a wedding, the New Year. So, too, is the tickle that revelers feel against their skin when they drink from long-stemmed flutes filled with bubbly.
There’s more to that fizz than a pleasant sensation, though. Inside a freshly poured glass of champagne, or really any sparkling wine, hundreds of bubbles burst every second. Tiny droplets are ejected up to an inch above the surface at a velocity of nearly 10 feet per second, carrying aromatic molecules up to our noses and foreshadowing the flavor to come.
In Uncorked: The Science of Champagne, recently revised and translated into English, physicist Gerard Liger-Belair explains the history, science and art of the wine. His book also features high-speed photography of champagne bubbles in action and stop-motion photography of the exact moment a cork pops (potentially at a speed of 31 miles per hour!). Such technology allows Liger-Belair to pair the sommelier with the scientist. “Champagne making is indeed a three-century-old art, but it can obviously benefit from the latest advances in science,” he says.
Liger-Belair became interested in the science of bubbles while sipping a beer after his finals at Paris University about 20 years ago. The bubbles in champagne, he explains, are vehicles for the wine’s flavor and smell, elements that contribute to our overall sipping experience. They are also integral to the winemaking process, which produces carbon dioxide not once but twice. Stored away in a cool cellar, the champagne, which can be a blend of up to 40 different varietals, ferments slowly in the bottle. When the cork is popped, the carbon dioxide escapes in the form of Liger-Belair’s beloved bubbles. Once poured, bubbles form at several spots on the glass, detach and rise toward the surface, where they burst, emitting a crackling sound and sending a stream of tiny droplets upward.
These bubble-forming hot spots launch about 30 bubbles per second. In beer, that rate is just 10 bubbles a second. But without this phenomenon, known as effervescence, champagne, beer and soda would all be flat.
Once the bubbles reach the top of the flute, the tension of the liquid below becomes too great as it pulls on them. The bubbles pop in a matter of microseconds. When they burst, they release enough energy to create tiny auditory shock waves; the fizzing sound is a chorus of individual bubbles bursting. By the time champagne goes flat, nearly 2 million bubbles have escaped from the glass.
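Those figures are easy to sanity-check against one another. Here is a minimal sketch in Python; the per-site rate and the 2 million total come from the text, while the number of active nucleation sites is an assumed figure, since it varies from glass to glass.

```python
# Rough consistency check on the bubble numbers above.
SITES = 20                 # assumed number of active nucleation sites per flute
BUBBLES_PER_SITE_S = 30    # bubbles per second per site, per Liger-Belair
TOTAL_BUBBLES = 2_000_000  # bubbles released by the time the glass goes flat

rate = SITES * BUBBLES_PER_SITE_S    # bubbles per second, whole glass
minutes = TOTAL_BUBBLES / rate / 60  # time to exhaust the dissolved CO2
print(f"{rate} bubbles/s -> flat in roughly {minutes:.0f} minutes")
```

With 20 active sites, the glass releases about 600 bubbles a second and would take just under an hour to go flat, which squares with everyday experience.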
The collapse of bubbles at the surface is Liger-Belair’s favorite thing about champagne. “Bubbles collapsing close to each other produce unexpected lovely flower-shaped structures, unfortunately completely invisible to the naked eye,” he says. “This is a fantastic example of the beauty hidden right under our nose.”
Europeans, though, once considered the bubbling beverage a product of poor winemaking. In the late 1400s, temperatures in Europe plunged suddenly, freezing many lakes and rivers, including the Thames and the canals of Venice. The monks of the Abbey of Hautvillers in Champagne, where the high altitude made it possible to grow top-quality grapes, were already hard at work creating reds and whites. The cold temporarily halted fermentation, the process by which wine is made. When spring arrived with warmer temperatures, the dormant wines began to ferment again, producing an excess of carbon dioxide inside the bottles and giving the liquid a fizzy quality.
In 1668, the Catholic Church called upon a monk by the name of Dom Pierre Pérignon to finally control the situation. The rebellious wine was so fizzy that bottles kept exploding in the cellar, and Dom Pérignon was tasked with staving off a second round of fermentation.
In time, however, tastes changed, starting with the Royal Court at Versailles. By the end of the 17th century, Dom Pérignon was asked to reverse everything he had been doing and instead make champagne even bubblier. Although historical records show that a British doctor developed a recipe for champagne six years before Pérignon began his work, Pérignon would come to be known as the father of champagne thanks to his blending techniques. The process he developed, known as the French Method, incorporated the weather-induced “oops” moment that first created champagne, and it is how champagne is made today.
So the next time you raise a glass of bubbly, take a second to appreciate its trademark tickle on another level—a molecular one.
July 19, 2013
It’s hard to imagine anything positive coming out of the dreaded hangover, that ultimate punishment doled out by the universe in the form of headaches, nausea and general discomfort. After a night of revelry, the unlucky afflicted often retreat to their beds, nursing aches and pains with rest and water. A brave few, however, have surged ahead, grasping at a mix of science and migraine-induced cravings to create their own remedies for the infamous day-after blues. While some of these inventive cures have failed the test of time (deep-fried canary was a favorite of the Romans, which, thankfully, you won’t find on your nearest diner menu), others have reached a level of mainstream success so great that you might be surprised by their boozy origins.
Brunch: Though currently a popular occasion for weekend gossip and day drinking, this portmanteau meal actually began as a hangover cure. Before English writer Guy Beringer proposed his ingenious combination of breakfast and lunch, weekend feasting was strictly reserved for the early Sunday dinner, where heavy fare like meat and pies was served to the after-church crowd. Instead of forcing this early dinner, Beringer argued that life would be happier for all if a new meal were created, “served around noon, that starts with tea or coffee, marmalade and other breakfast fixtures before moving along to the heavier fare.” By letting people sleep in on Sundays and wake later for a meal, Beringer noted, life would be made easier for “Saturday night carousers.” Beyond the appeal of a substantial meal after a night of debauchery, Beringer pointed to the soothing social interaction brunch brings, reasoning that it helped to “sweep away the cobwebs of the weekend.” Brunch didn’t gain traction with the American crowd, however, until the 1920s and 1930s, when celebrities and socialites began hosting brunch parties in their homes. Brunch gained an even larger following in the ’70s and ’80s, when church attendance dropped nationwide and Americans swapped their religious dedication to breaking bread for the secular tradition of breaking yolks.
The Bloody Mary: Fighting a hangover with more drinking is a cure as old as alcohol itself. Famously referred to as “hair of the dog” (a phrase that actually comes from an old cure for rabies, wherein the afflicted would rub a bit of dog hair into the wound), the hungover have often turned to libations as a way to ease their pain. Perhaps no iteration of this is more famous than the Bloody Mary, ubiquitous on brunch menus (see above). But the drink itself wasn’t created to cause hangovers; it was created to cure them. As bartender Josh Krist explains, the roaring crowd of expats that populated Paris in the 1920s required a drink that could ease the pain of their The Sun Also Rises-esque gallivanting of the night before. In response, Fernand Petiot, bartender at Harry’s New York Bar in Paris, first created the concoction by combining equal parts vodka and tomato juice. As scientific hangover cures go, half of the libation is fairly ingenious: tomato juice contains high amounts of both lycopene and potassium, which help stimulate blood flow and replenish electrolytes (hair of the dog, however, has been debunked as a healthy way to hurdle the hangover slump).
Fernet: Continuing the fine tradition of spirits invented to cure an over-indulgence in spirits (again, see above), Fernet, a famous Italian liquor now used as a post-meal digestive, was actually created to cure hangovers. As the story goes, Italian spice trader Bernardino Branca invented the spirit in 1845, adding myrrh, a traditional hangover cure-all, to a base of grape spirits. He then added a plethora of other flavorings and ingredients, including rhubarb, chamomile, aloe, cardamom, peppermint oil and, get this, opiates. The resulting mix certainly succeeded in perking up drinkers after a night on the town and, in much more extreme cases, patients suffering from cholera.
Eggs Benedict: If we are sensing a trend here, it’s that the world of brunch is very meta (a hangover cure that inspired other hangover cures…like some headache-ridden version of Groundhog Day). We’ve all heard of the greasy breakfast – eggs, bacon, whatever your heaving stomach can handle – as a cure for the hangover, but if you thought eggs Benedict too highbrow to count as the classic “greasy breakfast,” think again: lore surrounding the origin of this famous brunch dish cites one seriously hungover Wall Street worker as the original Benedict. In 1942, The New Yorker published an article claiming the dish had its roots in a man named Lemuel Benedict, known for his eccentric-for-the-time lifestyle choices (like marrying a woman who worked as an opera singer) and heavy partying habits. After one especially raucous night, Lemuel awoke and went to breakfast at the Waldorf Hotel, where he invented his own breakfast sandwich of two poached eggs, bacon, buttered toast and a pitcher of hollandaise sauce. Lemuel’s invention caught the eye of the Waldorf’s famous maître d’hôtel, Oscar, who sampled the sandwich, made some personal alterations (ham swapped for the bacon, an English muffin for the toast), put it on the menu, and sailed peacefully into history, much to the delight of hungover brunch attendees everywhere.
Coca-Cola: Brunch, eggs Benedict, Bloody Marys – these items are already so associated with post-drinking maladies that their place in the history of hangovers might not come as a huge surprise. But the ever-present Coca-Cola of vending machines and corner stores was also the brainchild of someone looking to cure hangovers. Coca-Cola went public in 1886, but the recipe the popular beverage was based on had been popular for years at pharmacist John Pemberton’s Atlanta drugstore and soda fountain. Pemberton mixed caffeine from kola nuts and cocaine from coca leaves into a thick, syrupy base, and his original cola sold widely as a miracle hangover remedy. Soon, the beverage’s enjoyable taste made it popular with the non-drinking crowd, too, and Coca-Cola grew into the famous soda we know today.