November 8, 2013
In 1908, over a bowl of seaweed soup, Japanese scientist Kikunae Ikeda asked a question that would change the food industry forever: what gave dashi, a ubiquitous Japanese soup base, its meaty flavor? Dashi, a broth made from boiled seaweed and dried fish, was widely used by Japanese chefs to add extra oomph to meals, pairing well with other savory but meatless foods like vegetables and soy. It was generally accepted, though never explained, that dashi made these meatless foods meaty–and Ikeda was determined to find out why.
Ikeda began with the main ingredient of dashi: the seaweed Laminaria japonica. He ran the seaweed through a series of chemical experiments, using evaporation to isolate a specific compound within it. After days of evaporating and treating the seaweed, a crystalline form developed. When he tasted the crystals, he recognized the distinct savory taste that dashi lent to other foods, a taste he deemed umami, from the Japanese umai (delicious). It was a breakthrough that challenged a cornerstone of culinary thinking: instead of four tastes–sweet, salty, bitter and sour–there were now five. A new frontier of taste had been discovered, and Ikeda wasted no time capitalizing on his discovery.
He determined the molecular formula of the crystals: C5H9NO4, the same as glutamic acid, an amino acid designated as non-essential because the human body, like a great many other plants and animals, can produce it on its own. In the body, glutamic acid is often found as glutamate, its ionized form, which has one less hydrogen atom. Glutamate is one of the most abundant excitatory neurotransmitters in the brain, playing a crucial role in memory and learning. The FDA estimates that the average adult consumes 13 grams of it a day from the protein in food. Non-meat foods like tomatoes and Parmesan cheese also carry high levels of glutamic acid.
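The relationship between the two is ordinary acid-base chemistry: in the watery environment of the body, glutamic acid sheds a proton from one of its carboxyl groups, leaving the glutamate ion behind. As a worked equation (standard chemistry, not spelled out in the original article):

$$\mathrm{C_5H_9NO_4 \;\rightleftharpoons\; C_5H_8NO_4^{-} + H^{+}}$$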
In 1909, Ikeda began mass-producing Ajinomoto (meaning “essence of taste”), an additive born of his invention of the first method for industrially producing glutamate from fermented vegetable proteins. The resulting sodium salt of glutamic acid (the acid with a single sodium ion in place of one of its hydrogen atoms) became famous for its ability to imbue a meaty flavor into dishes, or simply enhance the natural flavor of food. It was touted as a nutritional wonder, helping bland but nutritious food become delicious. A growing number of Japanese housewives used the product, and by the 1930s, recipes were calling for Ajinomoto in their directions. The sodium salt of glutamic acid remains prevalent today–anyone who has eaten KFC or Doritos has ingested it; it’s just known by a different name: monosodium glutamate, or MSG.
Few letters have the power to stop conversation in its tracks like the three in MSG, one of the most infamous additives in the food industry. They carry so much negative weight that they’re often whispered sheepishly or, more often, pointedly preceded by the modifier “NO,” which seems to make everyone breathe a collective sigh of relief when eating out. Nobody wants MSG in their food–the protest goes–it causes headaches, stomachaches, dizziness and general malaise. It’s unhealthy and, maybe even worse, unsexy, used by lazy chefs as a crutch for flavor, not an enhancement.
On the other side of the spectrum lies umami: few foodie buzzwords pop off the lips with such entertaining ease. Enterprising young chefs like David Chang (of Momofuku fame) and Adam Fleischman, of the LA-based chain Umami Burger, have built their culinary careers on the fifth taste, revitalizing interest in its meaty depth. It’s difficult to watch the Food Network or Travel Channel or any food-based program without hearing mention of the wunderkind taste, a host or chef cooing over the deep umami flavors of a portobello mushroom. Where MSG is scary, umami is exciting.
What few people understand is that the hated MSG and the adored umami are chemically related: umami is tasted by the very receptors that MSG targets. At the MAD Symposium in Denmark, a TED-like conference for the food industry, Chang spoke about MSG and umami: “For me, the way that I’m looking at umami, it’s the same way I look at MSG. It’s one in the same.” But if chefs like Chang (neither inept nor lazy when it comes to flavor, as his Michelin stars attest) are down with MSG, why does the additive retain such a bad reputation?
After gaining a foothold in Japanese cooking columns, MSG spread throughout Asia, becoming especially popular in Chinese cooking for enhancing stocks and vegetarian dishes. Americans probably associate MSG most heavily with Chinese restaurants–thanks in large part to the absurdly racist name for MSG sensitivity, “Chinese Restaurant Syndrome.” But MSG’s foray into American cuisine came from more than Chinese dishes; the additive became popular in the United States during World War II, thanks in large part to the country’s growing military-industrial complex. The military thought it had found in MSG an answer to the flavorless rations allotted to soldiers, and when the war ended, the troops came home and so did the industrialization of food production. From canned vegetables to frozen dinners, industrially created food was met with wonder in the United States.
That all changed in the 1960s, when trust in industrial food began to wane. In 1962, Rachel Carson published Silent Spring, a manifesto against pesticides that kicked off the environmental movement. As pesticides quickly fell from grace, faith in the industry of yesteryear–in the chemicals and additives born of the war–declined as well. In 1968, MSG’s death knell rang in the form of a letter to the New England Journal of Medicine from Robert Ho Man Kwok, a Chinese-American doctor in Maryland. Kwok claimed that after eating at Chinese restaurants, he often came down with unpleasant symptoms, namely “numbness at the back of the neck, gradually radiating to both arms and the back” and “general weakness and palpitation.” After Kwok’s letter ran, the journal received a deluge of letters from other readers claiming to suffer from the same affliction, deemed “Chinese Restaurant Syndrome” by the editors. Some readers reported the same symptoms as Kwok, but most were extremely varied, ranging from cold sweats to extreme dizziness. In response, the journal offered up MSG as the likely culprit for its readers’ unpleasant symptoms.
Public interest spurred a number of scientific inquiries into the potential dangers of MSG. According to food historian Ian Mosby’s exploration of MSG in “That Won-Ton Soup Headache,” these inquiries went one of two ways: they either sought to prove the harmful short-term effects of MSG (and Chinese Restaurant Syndrome) or they looked for longer-term damage caused by the additive. Initially, researchers had success proving both: mice injected with the additive showed signs of brain lesions, and humans fed 3 grams of MSG per 200 ml of soup presented symptoms congruent with Chinese Restaurant Syndrome. Subsequent studies, however, produced mixed results: some confirmed the findings of brain lesions in animals or symptoms in humans, but others were unable to replicate them. Double-blind studies often showed little correlation between MSG and adverse symptoms. Parties on both sides of the debate slung accusations at each other, with anti-MSG researchers claiming that studies were being funded by MSG producers, and pro-MSG researchers accusing the other side of fear-mongering.
From the FDA to the United Nations to various national governments (Australia, Britain and Japan), the public bodies that have investigated MSG have deemed it a safe food additive. The FDA states on its website:
FDA considers the addition of MSG to foods to be “generally recognized as safe” (GRAS). Although many people identify themselves as sensitive to MSG, in studies with such individuals given MSG or a placebo, scientists have not been able to consistently trigger reactions.
Scientific interest in its deleterious effects seems to be waning: one of the last studies to gain public attention was published in 2011. The authors claimed to have found a link between MSG and obesity, though those results have been questioned. While the general scientific consensus seems to be that only in large doses, and on an empty stomach, can MSG temporarily affect a small subset of the population, MSG is still maligned in the public eye.
On the other hand, MSG’s glutamic cousin umami suffers no public scorn: in 2010, umami was deemed one of the most delicious food trends to watch. When Adam Fleischman’s Umami Burger (a burger chain devoted to all things umami) opened a New York outpost, the wait for a meaty bite stretched to three hours. In addition to piling natural glutamates onto its burgers to ensure the most umami flavor possible, Umami Burger enhances each one with “umami dust,” a blend of dried mushrooms and seaweed, and an umami sauce that includes soy and Marmite. Altogether, an original Umami Burger contains 2,185 mg of glutamate.
“Most people don’t know the connection between umami and MSG. They know about it from the fifth taste, and the fifth taste was always called umami and not MSG,” Fleischman explains. “We didn’t feel that using MSG was creative enough. We wanted to do it ourselves. By doing it ourselves, we could create a flavor that was umami without the stigma of MSG. MSG, whether you like it or not, has been marketed so poorly, it sounds like this horrible thing.”
By harnessing natural glutamates for its burgers, Umami Burger avoids the negative connotations associated with MSG. But the “natural” glutamates in an Umami Burger aren’t chemically any different from the glutamates in MSG.
“The short answer is that there is no difference: glutamate is glutamate is glutamate,” says Richard Amasino, professor of biochemistry at the University of Wisconsin-Madison. “It would be identical unless different things created a different rate of uptake.”
Glutamates that occur naturally in food come intertwined with other chemicals and fiber, which the body is naturally inclined to regulate, explains Amy Cheng Vollmer, professor of biology at Swarthmore College. MSG, however, comes without the natural components of food that help the body regulate glutamate levels. It’s like taking an iron supplement versus obtaining iron from spinach or red meat: the supplement creates an expressway between the iron and your bloodstream that you wouldn’t find in natural iron sources.
“The bottom line here is context is everything,” Vollmer adds.
So does MSG deserve its bad rap? For the small section of the population that shows sensitivity to it, probably. But for the rest of America, maybe it’s time to reconsider exactly what we’re so afraid of when it comes to MSG.
November 5, 2013
Highlighted by its distinctive gold-yellow label, a bottle of Veuve Clicquot champagne is hard to ignore. In 2012, it was the second-highest-selling brand of champagne in the world, with 1,474,000 nine-liter cases sold worldwide. But Veuve Clicquot wasn’t always so successful: if it weren’t for the efforts of a cunning 19th-century business mind, the champagne might never have existed. That remarkable mind belonged to the eponymous Widow (veuve in French) Clicquot, one of the world’s first international businesswomen, who brought her wine business back from the brink of destruction and created the modern champagne market in the process.
The Widow Clicquot was born Barbe-Nicole Ponsardin, daughter of an affluent textile industrialist in Reims, France. Born in the years leading up to the French Revolution, Barbe-Nicole had a childhood heavily influenced by the political leanings of her father, Ponce Jean Nicolas Philippe Ponsardin, which switched from monarchist to Jacobin as the tide of the Revolution turned against the monarchy. Through his shrewd politics, Barbe-Nicole’s family escaped the Revolution relatively unscathed, a rarity for an affluent bourgeois family.
Next door to Hôtel Ponsardin, the large family estate where Barbe-Nicole grew up, lived the Clicquot family, under the patriarch Philippe. Philippe Clicquot also ran a successful textile business, making him the chief competitor to Barbe-Nicole’s father. In an attempt to consolidate the power of their two businesses, Mr. Ponsardin and Mr. Clicquot did what any shrewd business owner in the 18th century would have done: they married their children to each other. In 1798, when she was 21 years old, Barbe-Nicole married Francois Clicquot, Philippe Clicquot’s only son–an arranged marriage, in essence, a business deal devised by two industrial leaders in the small town of Reims.
Still, as the two embarked on their life together, a real partnership seemed to grow between them. Francois was a lively young man with large aspirations: instead of taking over his father’s textile business, as his father wanted, Francois was interested in growing the family’s small wine trade. Up to that point, the Clicquot family’s involvement in the wine industry constituted a minor portion of the family business. Philippe sold wine mostly as an afterthought to his large textile operation, adding bottles of still or sparkling white wine to orders only to round them out (once a boat had been commissioned and paid for, Philippe wanted to make sure he was getting his money’s worth). Though sparkling wine had been invented, the Champagne region was more famous for its still white wines, which Philippe would buy from wine producers and export on an as-needed basis. Philippe Clicquot had no intention of expanding his wine business into production, but Francois had a different plan.
Francois announced to his father his intention of expanding the family’s wine business, but was met with disapproval. As France plunged into the Napoleonic Wars, Philippe didn’t see wine as a profitable endeavor. Francois dismissed his father’s concerns and set about learning the wine trade, along with his young wife. While Francois had little knowledge of winemaking, the craft ran in Barbe-Nicole’s family: one of her grandmothers had been part of a winemaking operation generations earlier. Still, the two set out to learn the industry from the ground up together.
Despite their apparent passion for the industry, Philippe Clicquot’s judgment seems to have been correct: their champagne business stalled and looked ready to collapse. In 1805, seven years after their marriage, Francois fell suddenly ill with a fever; 12 days later, he was dead. Rumors swirled around the town that his death had been a suicide born of despair at the failing business, though other accounts attribute it to an infectious fever such as typhoid. Both Barbe-Nicole and Philippe were devastated by Francois’ death, and Philippe announced that by the end of the year, he would shut down the wine business.
Barbe-Nicole had other plans, and approached her father-in-law with a bold proposition.
“Barbe-Nicole goes to her father-in-law and says, ‘I’d like to risk my inheritance, I’d like you to invest the equivalent of an extra million dollars in me running this wine business.’ And he says yes,” explains Tilar Mazzeo, author of The Widow Clicquot. “It’s surprising that he would let a woman who has no business training take this on, and what it speaks to is that Philippe Clicquot was no fool. He understood how very keenly intelligent his daughter-in-law was.”
Keenly intelligent, perhaps, but at that point, Barbe-Nicole had been unsuccessful in selling champagne wine. So Philippe agreed under one condition: Barbe-Nicole would go through an apprenticeship, after which she would be able to run the business herself–if she proved her abilities. She entered into an apprenticeship with the well-known winemaker Alexandre Fourneaux, and for four years tried to make the dying wine business grow. It didn’t work, and at the end of her apprenticeship, the business was just as broke as before. So Barbe-Nicole went to her father-in-law a second time asking for money, and for a second time, Philippe Clicquot invested in his daughter-in-law’s business.
“That’s the time that comes right at the end of the Napoleonic Wars, when she has in her cellars what will become the legendary vintage of 1811, and she’s about ready to go bankrupt,” Mazzeo explains. Facing bankruptcy, Barbe-Nicole took a huge business gamble: she knew that the Russian market, as soon as the Napoleonic Wars ended, would be thirsty for the kind of champagne she was making–an extremely sweet champagne containing nearly 300 grams of sugar per liter (about double that of today’s sweet dessert wines, like a Sauternes). At this moment in champagne history, the market was fairly small–but Russians were early enthusiasts. If she could appeal to their burgeoning desire for champagne and corner that market, Barbe-Nicole believed success would be hers.
There was only one problem: the naval blockades that had crippled commercial shipping during the wars. Barbe-Nicole smuggled the vast majority of her best wine out of France as far as Amsterdam, where it waited for peace to be declared. The moment it was, the shipment made its way to Russia, beating her competitors by weeks. Soon after her champagne debuted in Russia, Tsar Alexander I announced that it was the only kind he would drink. Word of his preference spread throughout the Russian court–essentially ground zero for international marketing.
“She goes from being a very minor player to a name that everyone knows, and everybody wants her champagne,” Mazzeo says. Suddenly, the demand for her champagne increased so much that she was worried she would not be able to fill all the orders. Champagne making, at that time, was an incredibly tedious and wasteful business, and Barbe-Nicole realized that she would need to improve the process if she was going to keep up with the new demand for her product.
Champagne is made by adding sugar and live yeast to bottles of white wine, setting off what is known as secondary fermentation. As the yeast digests the sugar, it creates two byproducts, alcohol and carbon dioxide, the latter giving the wine its bubbles. There’s only one problem: when the yeast consumes all the sugar, it dies, leaving the winemaker with a sparkling bottle of wine–and dead yeast at the bottom. The dead yeast was more than unappetizing–it left the wine looking cloudy and visually unappealing. The first champagne makers dealt with this by pouring the finished product from one bottle to another to rid the wine of its yeast. The process was more than time-consuming and wasteful: it damaged the wine by constantly agitating the bubbles.
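The underlying reaction is the classic alcoholic fermentation of sugar by yeast–standard chemistry, not spelled out in the original: each molecule of glucose is converted into two molecules of ethanol and two of carbon dioxide.

$$\mathrm{C_6H_{12}O_6 \;\longrightarrow\; 2\,C_2H_5OH + 2\,CO_2}$$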
Barbe-Nicole knew there had to be a better way. Instead of transferring the wine from bottle to bottle to rid it of its yeast, she devised a method that kept the wine in the same bottle but consolidated the yeast by gently agitating the wine. The bottles were turned upside down and twisted, causing the yeast to gather in the neck of the bottle. This method, known as riddling, is still used by modern champagne makers.
Barbe-Nicole’s innovation was a revolution: not only did her champagne’s quality improve, she was able to produce it much faster. Her new technique was an extreme annoyance to her competitors, especially Jean-Rémy Moët, who could not replicate her method. It wasn’t an easy secret to keep, since Barbe-Nicole employed a large number of workers in her cellars–but no one betrayed her, a testament to her workers’ loyalty, Mazzeo explains. It would be decades before her competitors caught on to the method of riddling, giving Barbe-Nicole another advantage in the champagne market.
With the production of champagne increasing, Barbe-Nicole set her sights on building a global empire. By the time she died in 1866, Veuve Clicquot was exporting champagne to the far reaches of the world, from Lapland to the United States. Veuve Clicquot helped turn champagne from a beverage enjoyed solely by the upper class into a drink available to almost anyone in the upper middle class–a seemingly small distinction, but one that vastly increased Barbe-Nicole’s market.
“The invention of riddling allows the mass-production of an artisanal and luxury product, just not at the tiny quantities that they were dealing with before,” Mazzeo explains. “Barbe-Nicole begins exporting wine around the world in large quantities and is known as being one of the great businesswomen of her century.”
In spite of the extent of her champagne empire, Barbe-Nicole never left France during her lifetime: it would have been inappropriate for a woman to travel alone during that time. She also never remarried, though there is evidence of mild flirtations with some of her business associates (“She was rumored to have had a penchant for handsome young men working in her company,” Mazzeo explains). Had she remarried, she would almost certainly have had to relinquish control of her business, an unthinkable act for the first modern businesswoman.
From risking her inheritance on a failing business to gambling her champagne against a naval blockade, Barbe-Nicole built her champagne empire on bold decisions, a business model she never regretted. As she wrote in the later years of her life in a letter to a grandchild: “The world is in perpetual motion, and we must invent the things of tomorrow. One must go before others, be determined and exacting, and let your intelligence direct your life. Act with audacity.”
October 11, 2013
For the privileged eaters of the Western world, so much of eating is done routinely: cereal for breakfast, a sandwich for lunch, probably a protein and vegetable for dinner. Sometimes, the act of eating is so second nature that the guidelines dictating how and when we eat are invisible–guidelines such as eating a steak for dinner but not for breakfast, or eating lunch in the middle of the day. Eating wasn’t always dictated by these rules–so why is it now? That’s the question food historian Abigail Carroll set out to answer in her new book, Three Squares: The Invention of the American Meal. Tracing the meal’s history from colonial America to the present day, Carroll explores why we eat cereal for breakfast, how dinner became American and how revisiting the history of our meals can have a positive impact on the future of eating. Carroll spoke with Smithsonian.com about the guidelines that control our dining.
How did the associations between certain meals and certain foods, like cereal for breakfast, form?
You start in the very early colonial era with one meal in the middle of the day—and it’s the hot meal of the day, dinner. Farmers and laborers ate earlier because they were up really early, and the elite were eating later in the day because they could sleep in. Breakfast and supper were kind of like glorified snacks, often leftovers or cornmeal mush, and there was not a lot of emphasis placed on these meals. Dinner, the main meal, at which people did tend to sit down together and eat, was really not the kind of social event that it has become. People did not emphasize manners, they did not emphasize conversation, and if conversation did take place it wasn’t very formal: it was really about eating and refueling. That’s the time where there are very blurry lines between what is and what isn’t a meal, and very blurry lines between what is breakfast, dinner and lunch.
Then, with the Industrial Revolution, everything changed, because people’s work schedules changed drastically. People were moving from the agrarian lifestyle to an urban, factory-driven lifestyle, and weren’t able to go home in the middle of the day. Instead, at the end of the day, they could all come home and have dinner together, so that meal becomes special. And that’s when manners become very important, and protocol and formality. It’s really around then that people start to associate specific foods with certain meals.
Then, with dinner shifting you have the vacuum in the middle of the day that lunch is invented to fill. People are bringing pie for lunch, they’re bringing biscuits, but the sandwich really lends itself to lunch well. So the popularity of the sandwich really does have something to do with the rise of lunch—and especially the rise of children’s lunch, because it’s not messy. You don’t need utensils, you don’t have to clean up—you can stick it in a lunch pail really easily.
Why is it acceptable to eat cereal and eggs and a waffle for breakfast, but not for lunch or dinner? How did breakfast go from being a necessity meal—fueled by leftovers—to a meal with clear guidelines for what is acceptable to eat?
There was a problem during the Industrial Revolution: people were still eating a farmer’s diet, but they were shifting to a more sedentary lifestyle, which caused indigestion. People who were interested in health started looking into that and started coming up with solutions. Sylvester Graham, the reformer who became a preacher of health ideology, advocated for vegetarian food, and whole wheat as kind of a panacea for health problems, which becomes the answer to the question of breakfast. Then, people who ran sanitariums, including John Harvey Kellogg, in the late 1800s and early 1900s, really took that idea and flew with it and invented new ways to eat farinaceous [starchy] foods.
Entrepreneurs—some of whom worked in the sanitariums, like C.W. Post–really build on these ideas and make them a health requirement. Post creates all sorts of crazy testimonials that serve as advertisements for Grape-Nuts, where people’s lives are saved from chronic illness and they’re able to walk again.
Then, there’s also the history of orange juice and milk, with the discovery of vitamins in the 1910s. Milk came to be seen as a super food, and something that would keep you from getting deficiency diseases. It shows up at other meals too, but for much of the 20th century, it’s not a complete meal unless you have milk.
Why is it that, in America, we have maintained the feeling that lunch needs to be a quick meal in the middle of the day?
We still are working a lot—we’re working more hours in the United States than any other industrialized nation. Lunch is the original quick meal; it accommodated changing work schedules.
And dinner has taken on the ideological weight of the meal. Dinner has been the time when we celebrate family, and when we concentrate on having a nice, hot meal, ideally. Because dinner fulfilled that need, there was less of a need for the other meals to. Lunch doesn’t have a lot of cultural work to do; it just has to get us by.
But, if you think about it, it’s not just lunch—it’s breakfast too. We can just pour milk over cereal, or pop some toast in the toaster and walk out the door without even needing a plate or utensils. Breakfast accommodates work. It’s not the meal that shapes work, it’s the work that shapes the meal.
Could you talk about how dinner became a particularly American institution?
Dinner was not initially a strong identifying factor, in terms of nationality, for colonists. At first, they were eating more or less peasant food, porridges brought from England that said more about class than nationality. Then, dinner shifts in the 1700s to become an identifying factor in terms of being English. They’re in this new world, seen as primitive, and so they feel that they have to compensate for that. They inherit the fashions that cross the ocean, like eating a roast with dinner.
In the nineteenth century, the emerging middle class identifies itself through French food and French ways of eating. Things that we take for granted now, like starting a meal with soup or having a salad, were really French concepts. Dessert was largely a French concept, and many of the desserts that we adopted in the 19th century were French desserts. For the Victorian middle class, eating in the French way was a way to imitate the elite.
With the decline of servants in the late 1800s, people just couldn’t keep that up. Then there are the [World] Wars and the Depression, and those require Americans to be frugal. But they don’t just require Americans to be frugal—they give Americans the opportunity to celebrate frugality as patriotic. To eat frugally, to have a Victory Garden and can your own foods is patriotic. The model for dinner is no longer the French multicourse formal meal, but Thanksgiving. Thanksgiving becomes the model for the everyday American dinner. Of course, you don’t eat a whole roast every night, but the idea is that you have “a chicken in every pot,” which was Herbert Hoover’s 1928 campaign slogan. You would have some kind of meat on the table.
Are there any dishes or foods that you would classify as typically, or even exclusively, “American?”
A number of iconic foods—hot dogs and hamburgers, snack food—are hand-held. They’re novelties associated with entertainment. These are the kinds of food you eat at the ballpark, buy at a fair and eventually eat in your home. I think that there is a pattern there of iconic foods being quick and hand-held that speaks to the pace of American life, and also speaks to freedom. You’re free from the injunctions of Victorian manners and having to eat with a fork and knife and hold them properly, sit at the table and sit up straight and have your napkin properly placed. These foods shirk all that. There’s a sense of independence and a celebration of childhood in some of those foods, and we value that informality, the freedom and the fun that is associated with them.
Along those lines, there’s a lot of pushback against those processed foods today, with people wanting to recall old ways of eating, with eating local and fresh. But, how do you think that knowing the kinds of food that we used to eat and the ways that we used to eat, and think about eating, influences the future of American food?
History can play a really central role in thinking about the way that we want to eat in the future. The evolution of the meal is a process, and it continues.
With all of the talk of food and health, I think a really good question to ask is “Can we actually be healthy without eating meals?” And without even, perhaps, eating a family dinner? Studies show that eating together, we always eat better, always.
The family meal is the opportunity to put to work what we’re talking about. If we’re learning about fresh foods and ingredients, the family meal has potential to be another way of instructing our children and ourselves. There’s an interest in renewing the family meal, even reinventing it. We’re not going to be able to revive a Victorian notion of dining; I don’t think we’re interested in it. If we want to spend time together, if we want to invest in our children, if we want to be healthy, the family meal can be a vehicle for that.
October 3, 2013
Movie theater popcorn is a concession stand staple whose scent has spawned marketing ploys and copycat recipes, but movie theaters haven’t always been saturated with the tempting smell of salt and butter. The history of popcorn is vast, and it intersects with movies in the relatively recent past–a symbiosis of taste and place created to save the fledgling movie theater industry from near collapse during the Great Depression.
About 8,000 years ago, maize was domesticated from teosinte, a wild grass that doesn’t look much like the corn we know today. Popcorn–the name now mostly associated with puffed kernels–is actually a strain of corn, characterized by especially starchy kernels with hard hulls, which let internal pressure build when the kernel is heated. It was one of the first variations of maize cultivated in Central America. “Popcorn went north and it went south, but as far as I can see, it really only survived in South America,” says Andrew Smith, author of Popped Culture: A Social History of Popcorn. Eventually, trade and commerce brought the unique kernels northward. “Most likely, North American whalers went to Chile, found varieties of popcorn, picked them up and thought that they were cute, and brought them back to New England in the early 19th century,” Smith explains.
After popcorn made its way to the eastern part of North America, it spread rapidly. Eaters found the act of popping corn wildly entertaining, and by 1848, popcorn, the snack food, was prevalent enough to be included in the Dictionary of Americanisms. Popcorn had literally exploded onto the scene and was available everywhere–especially at entertainment sites like circuses and fairs. In fact, there was really only one entertainment site where the snack was absent: the theaters.
One reason for popcorn’s increasing popularity was its mobility: in 1885, the first steam-powered popcorn maker, invented by Charles Cretors, hit the streets. The machine’s mobility made it perfect for serving patrons at outdoor sporting events, circuses and fairs. Not only was popcorn mobile, but it could be mass-produced without a kitchen, an advantage that another crunchy snack–the potato chip–lacked (the earliest potato chips were made in small batches in kitchens, hardly ideal for mass snack appeal). Another reason for its dominance over other snacks was its appealing aroma when popped, something street vendors used to their advantage. Still, movie theaters wouldn’t allow the popular street snack into their auditoriums.
“Movie theaters wanted nothing to do with popcorn,” Smith says, “because they were trying to duplicate what was done in real theaters. They had beautiful carpets and rugs and didn’t want popcorn being ground into it.” Movie theaters were trying to appeal to a highbrow clientele, and didn’t want to deal with the distracting trash of concessions–or the distracting noise that snacking during a film would create.
When films added sound in 1927, the movie theater industry opened itself up to a much wider clientele, since literacy was no longer required to attend (the title cards used in early silent films had restricted their audience). By 1930, attendance at movie theaters had reached 90 million per week. Such a huge patronage created larger possibilities for profit–especially since sound films could now drown out the noise of snacking–but movie theater owners were still hesitant to bring snacks inside their theaters.
The Great Depression presented an excellent opportunity for both movies and popcorn. Looking for a cheap diversion, audiences flocked to the movies. And at 5 to 10 cents a bag, popcorn was a luxury that most people were able to afford. Popcorn kernels themselves were a cheap investment for purveyors, and a $10 bag could last for years. If those inside the theaters couldn’t see the financial lure of popcorn, enterprising street vendors didn’t miss a beat: they bought their own popping machines and sold popcorn outside the theaters to moviegoers before they entered the theater. As Smith explains, early movie theaters literally had signs hung outside their coatrooms, requesting that patrons check their popcorn with their coats. Popcorn, it seems, was the original clandestine movie snack.
Beyond wanting to maintain appearances, early movie theaters weren’t built to accommodate the first popcorn machines; the theaters lacked proper ventilation. But as more and more customers came to the theater with popcorn in hand, owners couldn’t ignore the financial appeal of selling the snack. So they leased “lobby privileges” to vendors, allowing them to sell popcorn in the lobby of the theater (or, more likely, on a strip of street out front) for a daily fee. Vendors didn’t complain about the arrangement–selling popcorn outside the theater widened their business potential, since they could sell to both moviegoers and passersby.
Eventually, movie theater owners realized that if they cut out the middleman, their profits would skyrocket. For many theaters, the transition to selling snacks helped save them from the crippling Depression. In the mid-1930s, the movie theater business started to go under. “But those that began serving popcorn and other snacks,” Smith explains, “survived.” Take, for example, a Dallas movie theater chain that installed popcorn machines in 80 theaters, but refused to install machines in their five best theaters, which they considered too high class to sell popcorn. In two years, the theaters with popcorn saw their profits soar; the five theaters without popcorn watched their profits go into the red. Eventually, movie theater owners came to understand that concessions were their ticket to higher profits, and installed concession stands in their theaters.
World War II further solidified the marriage between popcorn and the movie theaters. Competing snacks like candy and soda suffered from sugar shortages and, in turn, rationing, as traditional sugar exporters like the Philippines were cut off from the United States.
By 1945, popcorn and the movies were inextricably bound: over half of the popcorn consumed in America was eaten at the movie theaters. Theaters began pushing advertisements for their concessions harder, debuting commercials that played before (and sometimes in the middle of) movies that enticed audiences to check out the snacks in the lobby. Maybe the most famous of these is “Let’s All Go to the Lobby,” a 40-second advertisement that debuted in 1957.
In 2000, the advertisement was selected by the Library of Congress for preservation in the United States National Film Registry due to its cultural and historical value.
But for all their marketing ploys, movie theaters saw their popcorn sales steadily decrease into the 1960s. The culprit was a new technology, the television, which lessened the need to go out to the movies. “The popcorn industry sags in the ’50s as Americans begin to watch more and more television and go less and less to movie theaters,” Smith says.
Popcorn wasn’t widely eaten in homes, mostly because of how difficult it was to make: consumers needed a popper, oil, butter, salt and other ingredients to replicate their favorite movie theater snack at home. To ease this burden, one commercial product, EZ Pop, marketed itself as an all-inclusive popcorn maker–simply move the container over a heat source, and the popcorn pops, completely flavored. After EZ Pop came Jiffy Pop, a famous at-home popcorn product that used the same all-in-one philosophy. By making popcorn an easy-to-make snack, commercial popcorn products gained a foothold in the home. In the 1970s, microwave ovens became increasingly common in homes, creating another boom for popcorn: now families could enjoy it in minutes, simply by pressing a button.
As popcorn re-entered the home, traditional associations of popcorn and movies, or popcorn and entertainment, persisted. Nordmende, a German electronics company, even used popcorn to advertise its microwave, purporting it to be a “sponsor of the midweek movie.”
Nowadays, the popcorn industry attaches itself to our home movie nights in a very direct way, through commercials that tie in with popular films and “movie theater” styles of microwave popcorn that market themselves as replicas of the beloved theater snack.
But the relationship between popcorn and the movies has changed more than the smell of a theater lobby or the at-home movie night: it’s changed the popcorn industry itself. Before the Great Depression, most popcorn sold was of a white corn variety–yellow corn wasn’t widely grown commercially, and cost twice as much as the white variety. Movie vendors, however, preferred yellow corn, which expanded more when it popped (creating more volume for less product) and had a yellowish tint that suggested a coating of butter. People became accustomed to the yellow popcorn and would refuse to buy the white variety at markets, requesting the kind that looked like “the popcorn at the movies.” Today, white popcorn accounts for 10 percent of commercially grown popcorn; yellow popcorn makes up almost all of the rest of the commercial market (with some colored varieties, like blue and black, grown in negligible amounts).
Popcorn is just as economically important to the modern movie theater as it was to movie theaters of old. Patrons often complain about the high prices of movie concessions, but there’s an economic basis for them: popcorn, cheap to make and easy to mark up, is the primary profit maker for movie theaters. Movie theaters make an estimated 85 percent profit on concession sales, and those sales constitute 46 percent of movie theaters’ overall profits.
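To put that 85 percent margin in concrete terms (the $6.00 price here is a hypothetical, not a figure from the article): a tub sold at $6.00 would cost the theater only about

$$(1 - 0.85) \times \$6.00 = \$0.90$$

to produce, leaving roughly $5.10 of every tub as profit.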
And so the history of popcorn and the movies was written in stone–sort of. In recent years, luxury theaters have begun popping up around the country–and they’re reinventing the popcorn-snack model. These theaters offer an old-school approach to the movies, trying to make the experience of attending a movie theater akin to going to a live show (much as the earliest movie theater owners once tried to do). As Hamid Hashemi, the CEO of iPic Theaters, a luxury theater chain with nine locations, says, “Think about going to a live Broadway show—our movie theaters provide that kind of experience. The average time spent in the theater at our theaters is around four hours.” iPic Theaters still provides popcorn to patrons, but its focus is on a more gourmet level of movie theater dining, offering a menu of larger, cooked items like sliders and flatbreads.
Even as the demand for luxury theaters increases, Hashemi doesn’t think popcorn will ever be phased out. “Popcorn is the cheapest thing you can make, and to a lot of people it has that ritualistic experience,” he says, suggesting that for movie theater owners, a cheap snack never loses its golden appeal.
September 9, 2013
Whiskey drinkers know that the moment they swirl a bit of the smoky spirit in their mouths, they’re bound to find a world of flavors: some oak, some smoke, a little vanilla, maybe a slight bite of tannin. Brown liquors–from scotch to bourbon and all the whiskeys in between–are complex spirits that lend themselves to purposeful tasting, creating connoisseurs willing to shell out top dollar for the peatiest scotch or their favorite spicy bourbon. The magic of whiskey–each spirit’s complex profile–might be explained by the chemical fingerprints that separate one from another and change the way each tastes.
It’s an idea that the aptly-named Tom Collins, a researcher at the University of California, Davis, is actively pursuing. “I worked on my Ph.D., and it was a project looking at aroma and flavor chemistry in wine [fermented] in oak barrels,” Collins explains, crediting the barrels with sparking his initial interest in the chemistry of spirits. “It sort of seemed a natural extension to look from the chemistry of wine to the chemistry of whiskeys, because the chemistry of oak barrels play a huge role in what you see in whiskeys of all sorts.”
Collins and researchers at Davis set out to see if they could determine the chemical differences among 60 whiskeys: 38 straight bourbon whiskeys, 10 rye whiskeys, five Tennessee whiskeys and seven other American whiskeys, varying in age from two to 15 years old. What they found was a spectacular testament to the spirit’s complex chemistry–over 4,000 different non-volatile compounds across the samples, results he presented today at the 246th National Meeting & Exposition of the American Chemical Society. “It’s very complex,” Collins says of the chemistry. “There are components that are barrel derived, as we would expect, but there are also things that are related to the grains that are used to make the distillates in the first place—so the corn and wheat and rye and things that are fermented to form the distillate. We see some components that appear to be grain related, and there are also likely to be components that are derived from the yeast that are used to do the fermentation.”
Of the thousands of chemical compounds Collins found, there was a fair amount of overlap between the different spirits. But Collins found that each spirit contained unique compounds, or unique concentrations of compounds, that he could use to distinguish a scotch from a bourbon, or a Tennessee whiskey from a bourbon, simply by looking at the liquor’s chemistry. “If you try to make sense of all of the components that are there, it’s essentially overwhelming, but if you filter out the things that are not used in Tennessee whiskeys, or things that are only present in some of the bourbons, you can sort of whittle away down to the things that define what a bourbon is or what a Tennessee whiskey is chemically,” Collins said.
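The filtering Collins describes is, at heart, a set comparison over compound concentrations. Here is a minimal sketch of the idea in Python–the compound names and numbers are invented for illustration, not data from the Davis study, which ran to thousands of mass-spectrometry features:

```python
# Illustrative sketch only: the compounds and concentrations below are
# hypothetical, not measurements from the Collins study.
SAMPLES = {
    "bourbon_A":   {"vanillin": 4.2, "syringol": 1.1, "compound_x": 0.0},
    "bourbon_B":   {"vanillin": 3.8, "syringol": 0.9, "compound_x": 0.0},
    "tennessee_A": {"vanillin": 2.0, "syringol": 1.0, "compound_x": 2.5},
    "tennessee_B": {"vanillin": 1.9, "syringol": 1.2, "compound_x": 2.7},
}

def marker_compounds(group_a, group_b, fold=1.5):
    """Return compounds whose mean concentration differs between two
    groups by at least `fold`-fold, or that are absent from one group."""
    def mean(group, compound):
        return sum(SAMPLES[s][compound] for s in group) / len(group)

    compounds = next(iter(SAMPLES.values()))
    markers = []
    for c in compounds:
        a, b = mean(group_a, c), mean(group_b, c)
        if (a == 0) != (b == 0):            # present in only one group
            markers.append((c, a, b))
        elif a and b and max(a, b) / min(a, b) >= fold:
            markers.append((c, a, b))       # shared, but at different levels
    return markers

bourbons = ["bourbon_A", "bourbon_B"]
tennessees = ["tennessee_A", "tennessee_B"]
for compound, a, b in marker_compounds(bourbons, tennessees):
    print(f"{compound}: bourbon mean {a:.2f}, Tennessee mean {b:.2f}")
```

The real analysis, of course, works on thousands of compounds at once and must handle measurement noise, but the whittling-down logic is the same: discard what the classes share, keep what distinguishes them.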
It might be the perfect answer to that eternal question of novice whiskey drinkers everywhere: what exactly is the difference between a whiskey and a bourbon?
The confusing answer is that bourbon is always whiskey, but not all whiskey is bourbon. This has always been true from both a historical and a regulatory perspective. Historian Michael Veach spoke with Food and Think in June and dispelled the myths that bourbon has its roots in Bourbon County, Kentucky, and that all bourbons must originate there. “People started asking for ‘that whiskey they sell on Bourbon Street,’” Veach says, “which eventually became ‘that bourbon whiskey.’”
The regulatory distinction presents a slight complication: some Tennessee whiskeys, from a regulatory standpoint, actually qualify as bourbons but choose not to market themselves as such (Jack Daniel’s, for example, adamantly markets itself as a Tennessee whiskey, even though it meets the regulatory standards for bourbon). Natalie Wolchover at Live Science outlines the regulatory standards for bourbon:
While bourbon whiskey has its roots in Kentucky, and continues to be primarily produced there, it is now manufactured in distilleries all over the United States. Manufacturers must meet the following requirements in order to advertise their whiskey product as “bourbon”:
– It must be produced in the U.S. from a grain mixture (called “mash”) made up of at least 51 percent corn.
– It must be distilled to a maximum strength of 160 proof, bottled at a strength of at least 80 proof, and barreled for aging at no more than 125 proof.
– It must be aged in new, charred oak barrels.
– To qualify as “straight bourbon,” the spirits must meet the above requirements and must also be aged for at least two years and contain no added coloring, flavoring or other spirits.
Many bourbon whiskey distilleries in Kentucky advertise their use of unique water filtered by the limestone shelf in Bourbon County; while this feature may add to the allure of Kentucky bourbon whiskey, federal trade regulations do not stipulate what water must be used.
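Those requirements are mechanical enough to restate as a checklist. Below is a minimal sketch in Python of the rules as quoted above–the field names, thresholds and the example bottle are our own illustration, not a legal or regulatory tool:

```python
# Restatement of the bourbon rules quoted above as a simple checklist.
from dataclasses import dataclass

@dataclass
class Whiskey:
    made_in_us: bool
    mash_corn_percent: float    # share of corn in the grain mixture
    distilled_proof: float      # strength at distillation
    barrel_entry_proof: float   # strength entering the barrel
    bottled_proof: float        # strength at bottling
    new_charred_oak: bool       # aged in new, charred oak barrels
    years_aged: float
    additives: bool             # added coloring, flavoring or spirits

def is_bourbon(w: Whiskey) -> bool:
    return (w.made_in_us
            and w.mash_corn_percent >= 51
            and w.distilled_proof <= 160
            and w.barrel_entry_proof <= 125
            and w.bottled_proof >= 80
            and w.new_charred_oak)

def is_straight_bourbon(w: Whiskey) -> bool:
    return is_bourbon(w) and w.years_aged >= 2 and not w.additives

# A hypothetical Tennessee-style whiskey that satisfies every bourbon rule:
bottle = Whiskey(True, 80, 140, 125, 80, True, 4, False)
print(is_bourbon(bottle), is_straight_bourbon(bottle))  # True True
```

The example bottle illustrates the Jack Daniel's point above: a spirit can pass every test in the checklist and still be sold as something other than bourbon.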
Collins thinks he might have a more chemically elegant answer to the conundrum. As his team discovered, there are 50 to 100 chemical compounds, such as fatty acids and tannins, that can be used to distinguish a Tennessee whiskey from a bourbon–to such an extent that Collins can tell the difference between them without tasting either. Chemically, it’s often a question of concentration: how much of a plant-derived compound does a spirit have? How much tannin? “There are, in many cases, certain compounds that are only found in one or the other, but more often, there are compounds that are present in both but at different concentrations. Those are the tannins, the fatty acids, and in some cases, terpenes–compounds that are plant-derived.”
These compounds complicate the matter further: certain chemicals are extracted from the wood barrels during the aging process, and so might not be unique to the distillate itself. As Collins notes, barrels are, after all, made from trees–indisputably plant material. So how do the researchers distinguish the plant-derived compounds in the distillate from those that come from the barrel? “Some of the ways we get through that is to look at whiskeys that have been freshly distilled, and haven’t been put in barrels yet, so we can see what’s there in the fresh distillate before we put it in oak, and then we can see what changes between the newly distilled spirit and the spirit that has been aged in barrels for some period of time,” Collins explains. “That helps us to understand what the things are that come from the barrels, versus the things that come from the distillate itself.”
Collins and his team have yet to embark on the next step of their experiments–relating the differences in chemical makeup to sensory differences in aroma and flavor–but he feels fairly confident that the two are related. “I think–being a chemist–that the sensory differences arise from the chemistry,” Collins admits. Take, for example, the chemical compounds that arise when the spirit is aged in a charred barrel. “The sensory component that you smell, that you associate with toasted oak, or charred oak, is going to be related to the compounds that are extracted by the whiskey from the wood,” Collins explains.
Understanding the delicate interplay between chemistry and aroma could be a huge help to distillers looking to tweak their whiskey to encapsulate that perfect blend of smoky and spicy. “This could be a tool [distillers] could use to understand if they make a change to their distillation processes, how does that impact the resulting whiskey,” Collins said, noting that the better distillers understand how the process of distillation impacts the final product, the better they can manipulate the process to their advantage. “It’s a tool that can be used by distillers large and small to understand the impact of what they’re doing on the chemistry, and then the sensory.”
It’s research that means that the perfect whiskey–smoky, spicy, or however you want it–might not be so elusive after all.