October 30, 2013
In 1971, Walt Disney World had just opened in Orlando, Florida. Led Zeppelin was about to blow our minds, a prison riot had been shut down at Attica, and all across America, kids were pooping pink. Hundreds of mothers hospitalized their children for fecal testing out of fear of internal bleeding. Within that same year, not-so-coincidentally, General Mills released their classic monster cereals Count Chocula and Franken Berry. The latter was colored red using “Food, Drug, and Cosmetic” (FD&C) Red No. 2 and No. 3; Red No. 2, chemically known as amaranth, is a synthetic color named after the natural flower. The synthetic dye can’t be broken down or absorbed by the body.
A 1972 case study, “Benign Red Pigmentation of Stool Resulting from Food Coloring in a New Breakfast Cereal (The Franken Berry Stool),” published in Pediatrics explains the phenomenon later known as “Franken Berry Stool.” A 12-year-old boy was hospitalized for four days after being admitted for possible rectal bleeding. “The stool had no abnormal odor but looked like strawberry ice cream,” the study’s author, Payne, reports. Further questioning of the mother revealed that the child had eaten a bowl of Franken Berry cereal on each of the two days prior to his hospitalization. On the fourth day, the doctors ran a little experiment: They fed the boy four bowls of Franken Berry cereal, and for the next two days, he passed bright pink stools. Other than pink poop, though, there were no symptoms. Payne reports, “Physical examination upon admission revealed [a boy] in no acute distress and with normal vital signs…Physical examination was otherwise unremarkable.”
At the time of the study, the product had only been on the market for a few weeks. The author warns that “physicians should be aware of its potential for producing reddish stools.” Other monster cereals at the time also used dyes that changed the color of stool. Boo Berry, which debuted in December of 1972, for example, used Blue No. 1 (a dye currently banned in Norway, Finland and France), which turns stool green. Apparently, green stool seems less life-threatening than the reddish hue caused by Franken Berry.
But pink poop hasn’t always been the worst side effect of colored confections. Ruth Winter’s A Consumer’s Dictionary of Cosmetic Ingredients details the history of commercial food dyes, including those later used in Franken Berry. At the turn of the 20th century, more than 80 dyes were used to color food with virtually no regulation; the same dyes used to color clothes could also be used to color confections and other edibles.
In 1906, Congress passed the first legislation for food colors, the Pure Food and Drug Act, deeming seven colors suitable for use in food: orange, erythrosine, ponceau 3R, amaranth (the color later used in Franken Berry cereal), indigotin, naphthol yellow, and light green. Several of these have since been delisted upon further study.
More than 20 years later, in 1938, Congress passed the Federal Food, Drug, and Cosmetic Act, which gave these colors numbers instead of chemical names and required every batch to be certified by the Food and Drug Administration. Problems still arose, though: in the fall of 1950, for example, many children became ill after eating an orange Halloween candy containing one to two percent FD&C Orange No. 1.
Red Dye No. 2, the one used in the original Franken Berry cereal, was one of the most widely used color additives at the time, until a 1971 Russian study reported that the dye caused tumors in female rats. After years of research, the FDA found the Russian study deeply flawed (the agency couldn’t even prove that amaranth was the dye tested), yet it still removed the dye from its Generally Recognized As Safe (GRAS) list in 1976. Between public outcry against the dye and the chance that trace amounts could be carcinogenic, the FDA banned a number of other dyes as well. According to the FDA, 47 other countries, including Canada and the United Kingdom, still allow the use of Red Dye No. 2.
That same year, Mars removed their red M&M’s from the candy-color spectrum for nearly a decade, even though Mars didn’t even use Red No. 2; the removal of the red candies was a response to the scare, livescience.com reports:
The red food coloring in question was not actually used in M&M’s chocolate candies, according to mms.com. “However, to avoid consumer confusion, the red candies were pulled from the color mix.”
General Mills did not respond to inquiries about when the Franken Berry ingredients switched to less poop-worrying dyes. These days, the only red colors accepted by the FDA are Red No. 40, which appears in all five of the General Mills monster cereals, and Red No. 3, typically used in candied fruits.
The symptoms of “Franken Berry Stool” were pretty benign compared to more notorious confectionery mishaps in history: the poisoning of more than 200 people in Bradford, England, in 1858 comes to mind, when sweets were accidentally made with arsenic. Let’s be thankful there’s a bit more regulation of food dyes these days.
Another stool scare in cereal history: Smurfberry Crunch Cereal, released in 1982 by Post Foods, turned the poop of those who ate it blue—the ultimate Smurfs experience. Post then changed the formula and re-released the cereal in 1987 as Magic Berries Cereal.
Looking for a sugar high now? You’re safe. When you open your celebratory box of Franken Berry or any of the other monster cereals this Halloween (for the first time, all five monsters are available in stores, thanks to the well-received re-release of Frute Brute (1975-1984) and Fruity Yummy Mummy (1987-1992)), expect a sugar high—without the pink poop aftermath. We tasted all five of the cereals, and Count Chocula is the best by a long shot.
The best part is when the chocolate “sweeties,” as the marshmallows were called in the original commercials in 1971, are all gone: the plain milk becomes chocolate milk. Let’s be real, what child—or “adult”—prefers regular milk to chocolate? I haven’t met this kind of person.
October 28, 2013
When Bill Owens in Hayward, Calif. first brewed a pumpkin beer in the early 1980s, no one else in modern craft brewing history had done such a clever thing. His project, so it is said, was inspired by historical records indicating that George Washington had used squashes—and possibly pumpkins—in experimental homebrews. Buffalo Bill’s Pumpkin Ale became popular over the years and remains so some 30 years after its birth.
But today, that maverick beer stands modestly amid hundreds of others like it. For autumn beers celebrating America’s most iconic squash have become ubiquitous: The summer nears its end, and brewers across the continent get busy in unison adding a blizzard of spices and cooked pumpkin (sometimes fresh, sometimes out of a can) to their tanks of fermenting beer. By October and November, pumpkin brews are as commonplace as jack-o-lanterns, and from a glance at a supermarket beer aisle, one might think that America’s craft brewers had run out of ideas.
Many pumpkin beers taste about the same, brewed with roughly the same flurry of autumn spices–which is fine. Most beers of any given style, after all–whether IPAs, porters or pilsners–have a similar flavor profile. The trouble with pumpkin beers is that they can be hard to handle if too liberally spiced. William Bostwick, beer critic for the Wall Street Journal and author of the forthcoming history of beer and brewing, “The Brewer’s Tale,” notes that the standard potpourri of spices used in pumpkin beer–cinnamon and nutmeg, and usually a few others–can turn “acrid, bitter, and cloying” if they are boiled for too long. Bostwick says he has found the worst of these beers to “taste like allspice soup.”
He points out, too, that pumpkin beers generally don’t taste like pumpkin at all.
“On the whole, these are basically pumpkin pie beers,” Bostwick says. “What you taste is spices. I’m not sure most people even know what pumpkin itself really tastes like.”
Indeed, the flavor of pumpkin is so mild that it can be almost undetectable even in a lightly spiced beer. In Half Moon Bay, California, a town surrounded by pumpkin fields, the local brewery has been making a pumpkin beer every fall for 10 years. But this year, the Half Moon Bay Brewing Company toned down the recipe, from eight pounds of nutmeg, clove, allspice, cinnamon and mace in last year’s 500-gallon batch to just one meager pound for the current release.
“I specifically wanted it to taste like pumpkin, not pie,” brewmaster James Costa says. The beer, available only on draft, is decidedly unspicy—so unspicy that one might entirely fail to notice that the reddish-hued, creamy-topped ale is spiced at all. The pumpkin, meanwhile, is faint, as nature intended this humble squash to be.
Dawn Letner has perhaps never tasted that pumpkin beer. She owns the Chico Home Brew Shop in Chico, Calif., where, during October and November, she frequently sends customers home with pumpkin beer recipes.
For her, most pumpkin beers are almost intolerable.
“I might buy a bottle now and then, but definitely not a 6-pack,” Letner says. “Do you really want to sit and drink more than one of these spicy cinnamon bombs? For me, the answer is no. If I did want to, I’d just make a spiced tea and add a shot of alcohol.”
Sean Lilly Wilson, owner and founder of Fullsteam Brewery in Durham, N.C., makes a wide array of unusual fruit and vegetable beers to celebrate the autumn–but he has chosen not to make a beer featuring the pumpkin.
“There are enough pumpkin beers in the world,” he says, adding that he doesn’t much care for the style. “They’re often so overly spiced that they’ve lost all nuances. Some of the most celebrated pumpkin beers are just too much for me.”
To make pumpkin beers, some brewers use freshly harvested pumpkins, roasted until the starches turn gooey and sweet. Buffalo Bill’s Brewery, for one, has long used the jumbo pumpkins famous for their hippo-like dimensions, if not their flavor. Half Moon Bay Brewing, on the other hand, uses apple-sized sugar pie pumpkins–though Costa concedes that the variety of squash used is probably irrelevant. Other brewers use only pumpkin concentrate, rendered from cooked pumpkins and reduced to a dense, extremely sweet juice and purchased in cans. The pumpkin is added at varying stages of the brewing process, sometimes prior to boiling, other times toward the end of fermentation. Late in the process, too, the spices are added, and another pie-flavored pumpkin beer hits the shelf.
Whether you disdain pumpkin beers, simply tolerate them for a few weeks or wait all summer for them, you must give credit to Buffalo Bill’s Pumpkin Ale. Though the mild, chai-tasting beer receives consistently poor reviews on beer rating forums, it was the original of what has become a wildly popular style, with almost countless examples now on the market. As of this writing, Beer Advocate’s online rating forum included no fewer than 529 pumpkin beers–most, if not all of them, spiced like mulled wine. And at the Great American Beer Festival, an annual fall event in Colorado, pumpkin beers occupy their very own category. Clearly, no matter the scorn felt by some critics, America loves these beers. Geoff Harries, the owner of Buffalo Bill’s since 1994, says demand continues to grow for his pumpkin ale, which is now distributed in 43 states; from October to November, he says, the beer-drinking public goes into a state of “hyper-excitement” over pumpkin beers. Come December, though, the interest peters out.
Even if you aren’t hyper-excited about pumpkin beers, it’s worth exploring the category for the oddball renditions some breweries have introduced:
- Oak Jacked, from Uinta Brewing Company, in Salt Lake City, is a sweet, deep brown ale with more than 10 percent alcohol and is aged in whiskey barrels for a creamy, vanilla-Chardonnay finish.
- New Belgium’s pumpkin beer, named Pumpkick, includes cranberry juice and lemongrass for an unusual, tart and zesty interpretation.
- Elysian Brewing Company, in Seattle, makes a well-liked pumpkin beer, too–a copper-colored imperial style named The Great Pumpkin. This brewery, in fact, has held an annual pumpkin beer festival since 2005. The event’s centerpiece is a jumbo pumpkin filled with beer and tapped like a keg.
But of the many off-center pumpkin beers available, a few stand alone as marvels of beer making. Perhaps most extreme of them all is a boozy ale called Rumpkin, from Avery Brewing Company.
“I’m one of the biggest pumpkin beer fans in the world,” says Adam Avery, the man who created this beer. As the founder of the brewery, Avery has garnered a reputation over the years for making some of the most outlandish, aggressive, almost unapproachable beers in the world. “I would drink pumpkin beers every day if I could, and it seemed weird that I had never made one before. So we thought, ‘Let’s make a pumpkin beer, and let’s make it the granddaddy of them all.’”
And unless we overlooked something grander, Rumpkin is it. The dark, cognac-like beer, which tastes of vanilla, coconut and dark chewy fruits, has been aged in rum barrels and weighs in at 18.6-percent alcohol.
Autumn is the season of abundance, diversity and color–not just pumpkins, pumpkins, pumpkins–and Fullsteam Brewery, at least, seems to recognize this. The small facility, now just three years old, released a persimmon ale this fall named First Frost after the seasonal event which traditionally marks the ripening of the persimmon crop. Wilson, Fullsteam’s owner, is also getting set to brew a fig-chestnut beer, named Fruitcake, and a pawpaw beer, named Pawpaw, while a sweet potato lager, named Carver, is available year round on draft at the brewery.
None of these fall and winter beers are spiced.
“We’re not in the scented candle business,” Wilson quips. “We’re in the craft beer business. We want to let people taste the ingredients we’re using.”
As for those spicy pumpkin beers, Bostwick, for all his skepticism, gets why brewers make them like they do:
“No one wants to buy a pumpkin beer expecting it to taste like pumpkin pie and finding that it tastes like nothing.”
They’d rather, it seems, have it taste like allspice soup.
October 24, 2013
The avocado is a fruit of a different time. The plant hit its evolutionary prime during the Cenozoic era, when megafauna, including mammoths, horses, gomphotheres and giant ground sloths (some of them weighing more than a UPS truck), roamed across North America, from Oregon to the panhandle of Florida. The fruit attracted these very large animals (megafauna by definition weigh at least 100 pounds), which would then eat it whole, travel far distances and defecate, leaving the seed to grow in a new place. That’s the goal of all botanical fruits, really: survival and growth via seed dispersal.
But the great mammals disappeared forever about 13,000 years ago in the Western Hemisphere. Around that time, North America lost 68 percent of its diverse Pleistocene megafauna, and South America lost 80 percent, says Connie Barlow, author of The Ghosts of Evolution: Nonsensical Fruit, Missing Partners, And Other Ecological Anachronisms. But even after this major shift in the land mammal population, the wild avocado still requires the same method of seed dispersal, which makes it something of an evolutionary anachronism.
“After 13,000 years, the avocado is clueless that the great mammals are gone,” Barlow explains. “Without larger mammals like the ground sloth to carry the seed far distances, the avocado seeds would rot where they’ve fallen and must compete with the parent tree for light and growth.”
A fruit with smaller seeds, like a berry, for example, can be consumed whole and dispersed by small mammals, making the chances of fruiting in a new place higher.
After the giant mammals had died out, if an avocado tree was lucky, a jaguar might’ve found the fruit attractive—the cat’s stomach is designed for digesting large hunks of meat, leaving potential for swallowing the avocado whole, though there is no evidence to support this idea. Rodents like squirrels and mice may have also contributed, as they traveled and buried seeds in the ground rather than letting them rot on the surface. Wild avocados appealed to larger animals because the fruit had enough tasty flesh to lure them in and could be eaten in one bite. It had a larger pit and less flesh than today’s avocados, but it served as a quick snack for big mammals like the mammoth. Barlow writes in “Haunting the Wild Avocado,” originally published in Biodiversity:
The identities of the dispersers shifted every few million years, but from an avocado’s perspective, a big mouth is a big mouth and a friendly gut is a friendly gut. The passage of a trifling 13,000 years (since the Pleistocene extinction) is too soon to exhaust the patience of genus Persea. The genes that shape fruits ideal for megafauna retain a powerful memory of an extraordinary mutualistic relationship.
How the avocado still exists in the wild despite this evolutionary handicap remains a puzzle. But once Homo sapiens evolved to the point where it could cultivate the species, the fruit had the chance to thrive anew. Back when the giant beasts roamed the earth, the avocado would’ve been a large seed with a small fleshy area—less attractive to smaller mammals such as ourselves. Through cultivation, humans have bulked up avocados so there is more flesh for us to eat.
The avocado has been a staple food in Mexico, as well as Central and South America, since 500 B.C. Spanish conquistadors learned of the fruit from the Aztecs in the 16th century, but the ahuacatl, as the Aztecs called it, wasn’t grown commercially in the United States until the turn of the 20th century. By 1914, the exotic fruit made an appearance on California soil. Roughly 90 percent of the avocados grown in the United States today come from California, according to NPR. But Barlow is quick to point out the difference between a cultivated avocado and those found naturally.
“The wild varieties of avocados that are still somewhat available have a thin fleshy area around the seed—it wouldn’t necessarily be something that we would recognize as edible,” says Barlow. “When we go to the store and we see an avocado on sale, it’s always a question of will this be one with a tiny seed, or will it be a batch where the seed takes up five-sixths of the space of the fruit?”
Ecologist Dan Janzen conducted groundbreaking research on these and other “anachronistic fruits” and found that the avocado isn’t alone in this regard. His research in the late ’70s in the neotropics—an ecozone that includes both Americas and the entire South American temperate zone—sparked a shift in ecological thinking regarding these evolutionarily stunted fruits. Other examples include papaya, cherimoya, sapote and countless other fleshy fruits of the neotropics. Another surprising “ghost” you may see every day: honey locust pods scattered about your driveway. None of these fruits is considered edible by most native mammals today. Barlow continues:
“In 1977, however, [Janzen] was beginning to suspect that he—along with every other ecologist working with large tropical fruits of the New World—had been wrong in one very big way. They all had failed to see that some fruits are adapted primarily for animals that have been extinct for 13,000 years.”
“We don’t have the liver or the enzyme systems to detoxify our bodies from something like the avocado seed,” Barlow says. “But at the same time, the rhino, which has been around for ages, can eat all kinds of things that are toxic to everyone else.”
A South American folk recipe for rat poison mixes avocado pits with cheese or lard to kill off unwanted rodents. Whether or not humans are supposed to eat avocados from an evolutionary standpoint, America produced 226,450 tons of the fruit and consumed 4.5 pounds per capita in 2011. The avocado, a true “ghost of evolution,” lives on.
More avocado facts to drop at your next party:
- The Aztec word for avocado, ahuacatl, means “testicle.” This is most likely because the avocado, growing in pairs, resembled the body part. After the arrival of Spanish conquistadors, Spanish speakers substituted the form avocado for the Aztec (Nahuatl) word because ahuacatl sounded like the early Spanish word avocado (now abogado), meaning “lawyer.”
- The Spanish-Mexican word “guacamole” was derived from ahuacamolli, meaning “avocado soup or sauce,” made from mashed avocados, chiles, onions and tomatoes.
- For reasons related to the word’s origin, the avocado is also considered an aphrodisiac. According to the book The Aphrodisiac Encyclopaedia, by the time the fruit traveled to Europe, the Sun King (Louis XIV) nicknamed avocados la bonne poire (the good pear) because he believed it restored his lagging libido.
- The Hass variety of avocado was named after a postal employee, Rudolph Hass, who purchased the seedling in 1926 from a California farmer.
- For more information regarding other “ghosts of evolution,” Barlow’s theme song is a great listen.
October 21, 2013
It was the second day of autumn term at a small boys’ school in South London in 1979. Without warning, 78 schoolboys and a handful of monitors simultaneously fell ill. Symptoms included vomiting, diarrhea, abdominal pain and, in severe cases, depression of the central nervous system. Several patients were comatose with episodes of convulsive twitching and violent fits of fever. In many patients, there were signs of peripheral circulatory collapse. Within five days of the initial outbreak, all patients recovered in full, though some hallucinated for several days, Mary McMillan and J.C. Thompson report in the Quarterly Journal of Medicine. But what could cause such a sudden and mysterious illness?
Turns out, a bag of potatoes left in storage from the previous summer term.
After careful analysis of the sequence of events, the onset of symptoms was pinpointed to about four to 14 hours after the boys had eaten boiled potatoes that had a high concentration of the toxin, solanine, a glycoalkaloid that was first isolated in 1820 in the berries of a European black nightshade. Nightshade is the term used to describe over 2,800 species of plants in the scientific family Solanaceae. Eggplants, tomatoes, and some berries are common members of the nightshade family—many of them contain highly toxic alkaloids.
That said, the potato is the most common cause of solanine poisoning in humans. But how do you know when solanine is present in a potato? Look for a tuber that’s turning green.
Though the green color that forms on the skin of a potato is actually chlorophyll, which isn’t toxic at all (it’s the plant’s response to light exposure), the presence of chlorophyll indicates concentrations of solanine. The nerve toxin is produced in the green part of the potato (the leaves, the stem, and any green spots on the skin). The reason it exists? It’s a part of the plant’s defense against insects, disease and other predators.
If you eat enough of the green stuff, it can cause vomiting, diarrhea, headaches and paralysis of the central nervous system (as evidenced by the incident above), and in some rare cases the poisoning can cause coma—even death. Studies have recorded illnesses caused by a range of 30 to 50 mg of solanine per 100 grams of potato, but symptoms vary depending on the ratio of toxin to body weight and the individual’s tolerance of the alkaloid. The following cases, recorded in various medical journals, are among the most severe examples of solanine poisoning (many of which resulted in death):
1899: After eating cooked potatoes containing 0.24 mg of solanine per gram of potato, 56 German soldiers experienced solanine poisoning. Though all recovered, in a few cases, jaundice and partial paralysis were observed.
1918: In Glasgow, Scotland, 61 people from 18 separate households were affected at once by a bad batch of potatoes. The following day, a five-year-old boy died of strangulation of the bowel following extreme retching and vomiting. According to “An Investigation of Solanine Poisoning” by S. G. Willimott, PhD, B.Sc., published in 1933, the case was investigated by scientists R. W. Harris and T. Cockburn, who concluded in their article, “Alleged Poisoning By Potatoes” (1918), that the poisoning was the result of eating potatoes which contained five or six times the amount of solanine found in normal potatoes. Willimott cites this particular occurrence as an example of the toxin’s prevalence: “A review of the literature reveals the fact that authentic cases of solanine poisoning are not so rare as authorities appear to believe.”
1925: Seven members of a family were poisoned by greened potatoes. Two of them died. According to reports, symptoms included vomiting and extreme exhaustion, but no convulsions like those of the schoolboys in London. Breathing was rapid and labored until consciousness was lost a few hours before death.
1948: A case of solanine poisoning involving the potato’s nightshade relative, the berry, was recorded in the article “A Fatal Case of Solanine Poisoning,” published in the British Medical Journal. On August 13 of that year, a 9-year-old girl with a bad habit of snacking on the berries that grew along the railroad tracks by her house was admitted to the hospital with symptoms of vomiting, abdominal pain, and distressed breathing. She died two days later. An autopsy found hemorrhages in the mucosa of the stomach and the middle section of her small intestine. The stomach contained about one pint of dark brown fluid.
1952: According to the British Medical Journal, solanine poisoning is most common during times of food shortage. In the face of starvation, there have been accounts of large groups eating older potatoes with a higher concentration of the toxin. In North Korea during the war years of 1952-1953, entire communities were forced to eat rotting potatoes. In one area alone, 382 people were affected, of whom 52 were hospitalized and 22 died. The most severe cases died of heart failure within 24 hours of potato consumption. Some of the less severe symptoms included irregular pulses, enlargement of the heart, and blueing of the lips and ears. Those who displayed these ailments died within 5 or 10 days. Authors John Emsley and Peter Fell explain in their book Was It Something You Ate?: Food Intolerance: What Causes It and How to Avoid It: “In the final stages [of the illness] there were sometimes a state of high excitability with shaking attacks and death was due to respiratory failure.”
1983: Sixty-one of 109 schoolchildren and staff in Alberta, Canada, fell ill within five minutes of eating baked potatoes. Forty-four percent of those affected noted a green tinge and a bitter taste in the potatoes.
Not to worry, though: fatal cases of solanine poisoning are very rare these days. Most commercial varieties of potatoes are screened for solanine, but any potato will build up the toxin to dangerous levels if exposed to light or stored improperly. Often, the highest concentrations of solanine are in the peel, just below the surface, and in the sprouted “eyes”—parts that are typically removed in cooking preparation—though Warren would argue that even boiling dissolves only a little of the alkaloid. Emsley and Fell continue:
Most people can easily cope with the solanine in the average portion of potato and show no symptoms of poisoning because the body can break it down rapidly and excrete the products in the urine. But if the level of solanine is as high as 40 mg per 100 g of potato, symptoms include diarrhea…even coma.
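To put those numbers in rough perspective, here is a minimal back-of-the-envelope sketch in Python using only the concentrations cited above; the 300 gram serving size (roughly a couple of medium potatoes) is an illustrative assumption, not a figure from any of the studies.

```python
# A rough solanine-intake estimate using only the concentrations cited in this
# article. The 300 g serving size is an illustrative assumption, not a figure
# from any of the studies above.

def solanine_dose_mg(serving_g: float, conc_mg_per_100g: float) -> float:
    """Total solanine (in mg) in a potato serving at a given concentration."""
    return serving_g / 100.0 * conc_mg_per_100g

# 24 mg/100 g matches the 1899 German case (0.24 mg per gram of potato);
# 30-50 mg/100 g is the range the article links to recorded illnesses.
for conc in (24, 30, 40, 50):
    dose = solanine_dose_mg(serving_g=300, conc_mg_per_100g=conc)
    print(f"{conc} mg/100 g in a 300 g serving -> about {dose:.0f} mg of solanine")
```

At the concentrations reported in these historical cases, a single generous serving works out to roughly 70 to 150 mg of the alkaloid, which helps explain why those outbreaks were so severe.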
The best way to prevent solanine poisoning is to store tubers in a cool, dark place and remove the skin before consumption. A general rule for avoiding illnesses like the ones described above? Green and sprouted? Throw it out.
October 11, 2013
For the privileged eaters of the Western world, so much of eating is done routinely: cereal for breakfast, a sandwich for lunch, probably a protein and vegetable for dinner. Sometimes, the act of eating is so second nature that the guidelines that dictate how and when we eat are invisible—guidelines such as eating a steak for dinner but not for breakfast, or eating lunch in the middle of the day. Eating wasn’t always dictated by these rules—so why is it now? That’s the question that food historian Abigail Carroll set out to answer in her new book, Three Squares: The Invention of the American Meal. Tracing the meal’s history from colonial America to the present day, Carroll explores why we eat cereal for breakfast, how dinner became American and how revisiting the history of our meal can have a positive impact on the future of eating. Carroll spoke with Smithsonian.com about the guidelines that control our dining.
How did the associations between certain meals and certain foods, like cereal for breakfast, form?
You start in the very early colonial era with one meal in the middle of the day—and it’s the hot meal of the day, dinner. Farmers and laborers ate earlier because they were up really early, and the elite were eating later in the day because they could sleep in. Breakfast and supper were kind of like glorified snacks, often leftovers or cornmeal mush, and there was not a lot of emphasis placed on these meals. Dinner, the main meal, at which people did tend to sit down together and eat, was really not the kind of social event that it has become. People did not emphasize manners, they did not emphasize conversation, and if conversation did take place it wasn’t very formal: it was really about eating and refueling. That’s the time where there are very blurry lines between what is and what isn’t a meal, and very blurry lines between what is breakfast, dinner and lunch.
Then, with the Industrial Revolution, everything changed, because people’s work schedules changed drastically. People were moving from the agrarian lifestyle to an urban, factory-driven lifestyle, and weren’t able to go home in the middle of the day. Instead, the evening was the one time they could all come home and have dinner together, so that meal becomes special. And that’s when manners become very important, and protocol and formality. It’s really around then that people start to associate specific foods with certain meals.
Then, with dinner shifting, you have a vacuum in the middle of the day that lunch is invented to fill. People are bringing pie for lunch, they’re bringing biscuits, but the sandwich really lends itself well to lunch. So the popularity of the sandwich really does have something to do with the rise of lunch—and especially the rise of children’s lunch, because it’s not messy. You don’t need utensils, you don’t have to clean up—you can stick it in a lunch pail really easily.
Why is it acceptable to eat cereal and eggs and a waffle for breakfast, but not for lunch or dinner? How did breakfast go from being a necessity meal—fueled by leftovers—to a meal with clear guidelines for what is acceptable to eat?
There was a problem during the Industrial Revolution: people were still eating a farmer’s diet, but they were shifting to a more sedentary lifestyle, which caused indigestion. People who were interested in health started looking into that and started coming up with solutions. Sylvester Graham, the reformer who became a preacher of health ideology, advocated vegetarian food and whole wheat as a kind of panacea for health problems, which becomes the answer to the question of breakfast. Then, people who ran sanitariums, including John Harvey Kellogg, in the late 1800s and early 1900s, really took that idea and flew with it and invented new ways to eat farinaceous [starchy] foods.
Entrepreneurs—some of whom worked in the sanitariums, like C. W. Post—really build on these ideas and market them as a health requirement. Post creates all sorts of crazy testimonies that serve as advertisements for Grape-Nuts, where people’s lives are saved from chronic illness and they’re able to walk again.
Then, there’s also the history of orange juice and milk, with the discovery of vitamins in the 1910s. Milk came to be seen as a super food, something that would keep you from getting deficiency diseases. It shows up at other meals too, but for much of the 20th century, breakfast wasn’t a complete meal unless you had milk.
Why is it that, in America, we have maintained the feeling that lunch needs to be a quick meal in the middle of the day?
We still are working a lot—we’re working more hours in the United States than in any other industrialized nation. Lunch is the original quick meal; it accommodated changing work schedules.
And dinner has taken on the ideological weight of the meal. Dinner has been the time when we celebrate family, and when we concentrate on having a nice, hot meal, ideally. Because dinner fulfilled that need, there was less of a need for the other meals to. Lunch doesn’t have a lot of cultural work to do; it just has to get us by.
But, if you think about it, it’s not just lunch—it’s breakfast too. We can just pour milk over cereal, or pop some toast in the toaster and walk out the door without even needing a plate or utensils. Breakfast accommodates work. It’s not the meal that shapes work, it’s the work that shapes the meal.
Could you talk about how dinner became a particularly American institution?
Dinner was not initially a strong identifying factor, in terms of nationality, for colonists. At first, they were eating more or less peasant food, porridges brought from England that said more about class than nationality. Then, dinner shifts in the 1700s to become an identifying factor in terms of being English. They’re in this new world, seen as primitive, and so they feel that they have to compensate for that. They inherit the fashions that cross the ocean, like eating a roast with dinner.
In the nineteenth century, the emerging middle class identifies itself through French food and French ways of eating. Things that we take for granted now, like starting a meal with soup or having a salad, were really French concepts. Dessert was largely a French concept, and many of the desserts that we adopted in the 19th century were French desserts. For the Victorian middle class, eating in the French way was a way to imitate the elite.
With the decline of servants in the late 1800s, people just couldn’t keep that up. Then there are the [World] Wars and the Depression, and those require Americans to be frugal. But they don’t just require Americans to be frugal—they give Americans the opportunity to celebrate frugality as patriotic. To eat frugally, to have a Victory Garden and can your own foods is patriotic. The model for dinner is no longer the French multicourse formal meal, but Thanksgiving. Thanksgiving becomes the model for the everyday American dinner. Of course, you don’t eat a whole roast every night, but the idea is that you have “a chicken in every pot,” which was Herbert Hoover’s 1928 campaign slogan. You would have some kind of meat on the table.
Are there any dishes or foods that you would classify as typically, or even exclusively, “American?”
A number of iconic foods—hot dogs and hamburgers, snack food—are hand-held. They’re novelties associated with entertainment. These are the kinds of food you eat at the ballpark, buy at a fair and eventually eat in your home. I think that there is a pattern there of iconic foods being quick and hand-held that speaks to the pace of American life, and also speaks to freedom. You’re free from the injunctions of Victorian manners and having to eat with a fork and knife and hold them properly, sit at the table and sit up straight and have your napkin properly placed. These foods shirk all that. There’s a sense of independence and a celebration of childhood in some of those foods, and we value that informality, the freedom and the fun that is associated with them.
Along those lines, there’s a lot of pushback against processed foods today, with people wanting to recall old ways of eating, like eating local and fresh. But how do you think that knowing the kinds of food we used to eat, and the ways we used to eat and think about eating, influences the future of American food?
History can play a really central role in thinking about the way that we want to eat in the future. The evolution of the meal is a process, and it continues.
With all of the talk of food and health, I think a really good question to ask is “Can we actually be healthy without eating meals?” And without even, perhaps, eating a family dinner? Studies show that when we eat together, we always eat better. Always.
The family meal is the opportunity to put to work what we’re talking about. If we’re learning about fresh foods and ingredients, the family meal has potential to be another way of instructing our children and ourselves. There’s an interest in renewing the family meal, even reinventing it. We’re not going to be able to revive a Victorian notion of dining; I don’t think we’re interested in it. If we want to spend time together, if we want to invest in our children, if we want to be healthy, the family meal can be a vehicle for that.