December 6, 2013
Next time you are at an all-you-can-eat buffet, imagine the food displays without any covering: there are flies in the coleslaw; the man in front of you leans over the spread, breathing heavily. His nose scrunches up as though he might sneeze at any moment. You cringe, but it’s too late. Mashed potatoes are off the menu tonight.
Johnny Garneau is the reason people like this man can’t sneeze on your food today.
On March 10, 1959, the restaurateur and inventor filed his patent for the “Food Service Table” later known as the “sneeze guard,” meant to protect food on display from bacteria and other germs that may be spread by sneezing. These days, it’s required by law that retail, self-service food bars have one—nary a salad bar shall be left uncovered.
At the time of his invention, he owned and ran a chain of American Style Smorgasbord restaurants in Ohio and Pennsylvania—a set-price, all-you-can-eat buffet model based on the traditional Swedish “smorgasbord,” a celebratory, buffet-style meal served from a laid-out table of food. The first example of a smorgasbord in America appeared at the 1939 New York World’s Fair. Garneau’s “American Style Smorgasbord” restaurant was one of the first of many self-service restaurants that would pop up in the United States in the ’50s.
“Being the germaphobe that he was, he couldn’t stand people going down the Smorgasbords smelling things and having their noses too close to the food,” says Barbara Kelley, one of Garneau’s five children. “He said to his engineers, ‘We have to devise something—I don’t want these people sneezing on the food.’”
When the patent was granted (for a term of 14 years), Garneau installed the guards in each of his restaurants. His daughter Barbara was born the year her father filed for the patent and remembers growing up in the spotless kitchens and dining rooms of her father’s businesses.
“He had that typical entrepreneur mind—he was always thinking of the next great idea,” Kelley says. “These common things we use every day—somebody, somewhere had an idea and had the guts to take it to fruition. My dad was one of them. There wasn’t one thing he thought that he couldn’t make or do.”
At 15, Garneau got a taste of the restaurant business as a “soda jerk” and began forming dreams of his first restaurant, “The Beanery,” which opened in 1949. The six-stool, 20-foot by 15-foot diner served American classics like the hot dog, with curb service. By 1952, he had opened his first American Style Smorgasbord restaurant.
When the smorgasbord style became less trendy, he turned each of his restaurants into steakhouses called the Golden Spike, the first of which opened in 1954. The railroad theme (there was a toy train set up at the bar that delivered your drink) came from Garneau’s interest in Promontory Summit in Utah, the point where the first transcontinental railroad was completed in 1869. At the height of his business, he had six successful restaurants: four in Pittsburgh, one in Clarion, Pennsylvania, where Garneau raised his family, and one in South Florida. Garneau died in May of this year at his home in Florida at age 90.
Garneau’s invention effectively changed the standard for food safety in self-service environments. Though there is no evidence of a direct causal link between Garneau’s patent and later food-safety initiatives, the FDA was regulating the presence of food shields as far back as the early ’60s. “The 1962 Model Food Service Sanitation Ordinance and 1976 Model Food Service Sanitation Ordinance also has very similar language,” David Steigman, a communications representative for the FDA, stated in an email to Smithsonian.com. “Instead of ‘salad bar food guards’, the term ‘counter protective devices’ and ‘salad bar protective devices’ were used in 1962 and 1976 respectively.” The NSF’s Food Service Equipment criteria for the design and construction of “counter guards” go as far back as 1965, and perhaps even earlier.
The most current version of the code, the 2013 Food Code, states under section 3-306.11 that: “FOOD on display shall be protected from contamination by the use of PACKAGING; counter, service line, or salad bar FOOD guards; display cases; or other effective means.”
All 50 states have adopted food codes patterned after one of the versions of the FDA’s model (1993, 1995, 1997, 1999, 2001, 2005, 2009 and, as of last month, 2013), which include requirements for protecting food on display that resemble Garneau’s original design. Though each state’s regulation remains in line with the FDA’s guidelines, it is up to state, local and tribal agencies to regulate and inspect retail food establishments. The degree of coverage and specific dimensions of “food guards” vary. New Jersey, for example, follows the NSF International (formerly the National Sanitation Foundation) Food Guard requirements, which state that a sneeze guard must be positioned 14 inches above the food counter surface and must extend seven inches beyond the edge of the utensil on which food is placed.
According to Elizabeth Dougherty, director of inventor education at the U.S. Patent and Trademark Office, only about 100 patents have been filed in the area of food storage, safety and care—a small number when you consider that there are eight million U.S. patents total. Since Garneau’s patent in 1959, there have been some innovations in the field, with minor changes to the original design.
“The late ’50s does seem to be the era in time when the sneeze guards started to become an object for innovation and invention,” Dougherty says. “Prior to this time, there are very few documented patents in this technology area.”
The saying is that “necessity is the mother of invention.” It took a Midwestern restaurateur to realize that without something to protect them, everyone’s favorite buffet foods were defenseless against the attack of a 40 mph sneeze.
November 27, 2013
Americans consume 5,062,500 gallons of jellied cranberry sauce—Ocean Spray’s official name for the traditional Thanksgiving side dish we know and love, the one that holds the shape of the can it comes in—every holiday season. That’s four million pounds of cranberries—200 berries in each can—that reach a gel-like consistency thanks to pectin, a natural setting agent found in the fruit. If you’re part of the 26 percent of Americans who make homemade sauce during the holidays, consider that only about five percent of America’s total cranberry crop is sold as fresh fruit. Also consider that 100 years ago, cranberries were available fresh for a mere two months out of the year (they are usually harvested from mid-September until around mid-November in North America, making them the perfect Thanksgiving side). In 1912, one savvy businessman devised a way to change the cranberry industry forever.
Marcus L. Urann was a lawyer with big plans. At the turn of the 20th century, he left his legal career to buy a cranberry bog. “I felt I could do something for New England. You know, everything in life is what you do for others,” Urann said in an interview published in the Spokane Daily Chronicle in 1959, decades after his inspired career change. His altruistic motives aside, Urann was a savvy businessman who knew how to work a market. After he set up cooking facilities at a packinghouse in Hanson, Massachusetts, he began to consider ways to extend the berries’ short selling season. Canning them, he knew, would make the berry a year-round product.
“Cranberries are picked during a six-week period,” says Robert Cox, coauthor of Massachusetts Cranberry Culture: A History from Bog to Table. “Before canning technology, the product had to be consumed immediately and the rest of the year there was almost no market. Urann’s canned cranberry sauce and juice are revolutionary innovations because they produced a product with a shelf life of months and months instead of just days.”
Native Americans were the first to cultivate the cranberry in North America, but the berries weren’t marketed and sold commercially until the middle of the 19th century. Revolutionary War veteran Henry Hall is often credited with planting the first-known commercial cranberry bed in Dennis, Massachusetts, in 1816, but Cox says Sir Joseph Banks, one of the most important figures of his time in British science, was harvesting cranberries in Britain a decade earlier from seeds that were sent over from the States—Banks just never marketed them. By the mid-19th century, what we know as the modern cranberry industry was in full swing, and the competition among bog growers was fierce.
The business model worked on a small scale at first: families and members of the community harvested wild cranberries and then sold them locally or to a middleman before retail. As the market expanded to larger cities like Boston, Providence and New York, growers relied on cheap labor from migrant workers. Farmers competed to unload their surpluses fast—what was once a small, local venture became a boom-or-bust business.
What kept the cranberry market from really exploding was a combination of geography and economics. The berries require a very particular environment for a successful crop, and are localized to areas like Massachusetts and Wisconsin. Last year, I investigated where various items on the Thanksgiving menu were grown: “Cranberries are picky when it comes to growing conditions… Because they are traditionally grown in natural wetlands, they need a lot of water. During the long, cold winter months, they also require a period of dormancy which rules out any southern region of the U.S. as an option for cranberry farming.”
Urann’s idea to can and juice cranberries in 1912 created a market that cranberry growers had never seen before. But his business sense went even further.
“He had the savvy, the finances, the connections and the innovative spirit to make change happen. He wasn’t the only one to cook cranberry sauce, he wasn’t the only one to develop new products, but he was the first to come up with the idea,” says Cox. His innovative ideas were helped by a change in how cranberries were harvested.
In the 1930s, techniques transitioned from “dry” to “wet”—a confusing distinction, says Sharon Newcomb, brand communication specialist with Ocean Spray. Cranberries grow on vines and can be harvested either by picking them individually by hand (dry) or by flooding the bog at the time of harvest (wet), as seen in many Ocean Spray commercials. Today about 90 percent of cranberries are picked using wet harvesting techniques. “Cranberries are a hearty plant, they grow in acidic, sandy soil,” Newcomb says. “A lot of people, when they see our commercials think cranberries grow in water.”
The water helps to separate the berry from the vine, and small air pockets in the berries allow them to float to the surface. Rather than taking a week, a harvest could be done in an afternoon. Instead of a team of 20 or 30, bogs now need a team of four or five. After the wet harvesting option was introduced in the mid to late 1900s, growers looked to new methods of using their crop, including canning, freezing, drying and juicing the berries, Cox says.
Urann also helped develop a number of novel cranberry products: the cranberry juice cocktail in 1933, for example, and, six years later, a syrup for mixed drinks. The famous (or infamous) cranberry sauce “log” we know today became available nationwide in 1941.
Urann had tackled the challenge of harvesting a crop prone to glut and seesawing prices, but federal regulations stood in the way of him cornering the market. He had seen other industries fall under scrutiny for violating antitrust laws; in 1890, Congress passed the Sherman Anti-Trust Act, which was followed by additional legislation, including the Clayton Act of 1914 and the Federal Trade Commission Act of 1914.
In 1930, Urann convinced his competitors John C. Makepeace of the A.D. Makepeace Company—the nation’s largest grower at the time—and Elizabeth F. Lee of the New Jersey-based Cranberry Products Company to join forces under the cooperative Cranberry Canners, Inc. His creation, a cooperative that minimized the risks from the crop’s price and volume instability, would have been illegal had attorney John Quarles not found an exemption for agricultural cooperatives in the Capper-Volstead Act of 1922, which gave “associations” making agricultural products limited exemptions from antitrust laws.
After World War II, in 1946, the cooperative became the National Cranberry Association, and by 1957 it had changed its name to Ocean Spray. (Fun fact: Urann at first “borrowed” the Ocean Spray name, along with the image of the breaking wave and cranberry vines, from a fish company in Washington State, from which he later bought the rights.) Later, Urann would tell the Associated Press why he believed the cooperative structure worked: “grower control (which) means ‘self control’ to maintain the lowest possible price to consumers.” In theory, the cooperative would keep the competition among growers at bay. Cox explains:
From the beginning, the relationship between the three [Urann, Makepeace and Lee] was fraught with mistrust, but on the principle that one should keep one’s enemies closer than one’s friends, the cooperative pursued a canned version of the [American Cranberry Exchange] ACE’s fresh strategy, rationalizing production, distribution, quality control, marketing and pricing.
Ocean Spray is still a cooperative of 600 independent growers across the United States who work together to set prices and standards.
We can’t thank Urann in person for his contribution to our yearly cranberry intake (he died in 1963), but we can at least visualize this: if you laid all the cans of sauce consumed in a year end to end, they would stretch 3,385 miles—the length of 67,500 football fields. To those of you ready to crack open your can of jellied cranberry sauce this fall, cheers.
October 30, 2013
In 1971, Walt Disney World had just opened in Orlando, Florida. Led Zeppelin was about to blow our minds, a prison riot had been shut down at Attica, and all across America, kids were pooping pink. Hundreds of mothers took their children to the hospital for fecal testing out of fear of internal bleeding. That same year, not so coincidentally, General Mills released its classic monster cereals Count Chocula and Franken Berry. The latter was colored red using “Food, Drug and Cosmetics” (FD&C) Red No. 2 and No. 3, originally and chemically known as amaranth, a synthetic color named after the natural flower. The synthetic dye can’t be broken down or absorbed by the body.
A 1972 case study, “Benign Red Pigmentation of Stool Resulting from Food Coloring in a New Breakfast Cereal (The Franken Berry Stool),” published in Pediatrics explains the phenomenon later known as “Franken Berry Stool.” A 12-year-old boy was hospitalized for four days after being admitted for possible rectal bleeding. “The stool had no abnormal odor but looked like strawberry ice cream,” the study’s author, Payne, reports. Further questioning of the mother revealed that the child had enjoyed a bowl of Franken Berry cereal two days and one day prior to his hospitalization. On the fourth day, the doctors ran a little experiment: they fed the boy four bowls of Franken Berry cereal, and for the next two days, he passed bright pink stools. Other than pink poop, though, there were no symptoms. “Physical examination upon admission revealed [a boy] in no acute distress and with normal vital signs…Physical examination was otherwise unremarkable,” Payne reports.
At the time of the study, the product had only been on the market for a few weeks. The author warns that “physicians should be aware of its potential for producing reddish stools.” Other monster cereals at the time also used dyes that caused stool to change colors. Booberry, which debuted in December of 1972, for example, uses Blue No. 1 (a dye currently banned in Norway, Finland and France) and turns stool green. Apparently, green stool seems less life-threatening than the reddish hue caused by Franken Berry.
But pink poop wasn’t always the worst side effect from colored confections. Ruth Winters’s A Consumer’s Dictionary of Cosmetic Ingredients details the history of commercial food dyes, including those later used in Franken Berry. At the turn of the 20th century, with virtually no regulation of more than 80 dyes used to color food, the same dyes used for clothes could also be used to color confections and other edibles.
In 1906, Congress passed the first legislation for food colors, the Pure Food and Drug Act, deeming seven colors suitable for use in food: orange, erythrosine, ponceau 3R, amaranth (the color later used in Franken Berry cereal), indigotin, naphthol yellow and light green. Since then, upon further study, several of these choices have been delisted.
More than 20 years later, in 1938, Congress passed the Federal Food, Drug, and Cosmetic Act, which gave these colors numbers instead of chemical names; every batch needed to be certified by the Food and Drug Administration. Still, some problems arose: in the fall of 1950, for example, many children became ill from eating an orange Halloween candy containing one to two percent FD&C Orange No. 1.
Red Dye No. 2, the one used in the original Franken Berry cereal, was one of the most widely used color additives at the time, until a 1971 Russian study reported that the dye caused tumors in female rats. Even though years of research led the FDA to find that the Russian study was extremely flawed (the agency couldn’t even prove that amaranth was one of the dyes used), it removed the dye from its Generally Recognized As Safe (GRAS) list in 1976. Between public outcry against the dye and the chance that trace amounts could be carcinogenic, the FDA banned a number of other dyes as well. According to the FDA, 47 other countries, including Canada and the United Kingdom, still allow the use of Red Dye No. 2.
That same year, Mars removed its red M&M’s from the candy-color spectrum for nearly a decade, even though Mars didn’t use Red No. 2; the removal of the red candies was a response to the scare, livescience.com reports:
The red food coloring in question was not actually used in M&M’s chocolate candies, according to mms.com. “However, to avoid consumer confusion, the red candies were pulled from the color mix.”
General Mills did not respond to inquiries about when the Franken Berry ingredients switched to less poop-worrying dyes. These days, the only red colors accepted by the FDA are Red No. 40, which appears in all five of the General Mills monster cereals, and Red No. 3, typically used in candied fruits.
The symptoms of “Franken Berry Stool” were pretty benign compared with other, more notable confectionery mishaps in history: the poisoning of more than 200 people in Bradford, England, in 1858 comes to mind. The sweets were accidentally made with arsenic. Let’s be thankful there’s a bit more regulation of food dyes these days.
Another stool scare in cereal history: Smurfberry Crunch Cereal, released in 1982 by Post Foods, turned the poop of those who ate it blue—the ultimate Smurfs experience. Post then changed the formula and re-released the cereal in 1987 as Magic Berries Cereal.
Looking for a sugar high now? You’re safe. When you open your celebratory box of Franken Berry or any of the other monster cereals this Halloween (for the first time, all five monsters are available in stores, following the well-received re-releases of Frute Brute [1975-1984] and Fruity Yummy Mummy [1987-1992]), expect a sugar high—without the pink poop aftermath. We tasted all five of the cereals, and Count Chocula is the best by a long shot.
The best part is when the chocolate “sweeties,” as the marshmallows were called in the original commercials in 1971, are all gone: the plain milk becomes chocolate milk. Let’s be real, what child—or “adult”—prefers regular milk to chocolate? I haven’t met this kind of person.
October 21, 2013
It was the second day of autumn term at a small boys’ school in South London in 1979. Without warning, 78 schoolboys and a handful of monitors simultaneously fell ill. Symptoms included vomiting, diarrhea, abdominal pain and, in severe cases, depression of the central nervous system. Several patients were comatose with episodes of convulsive twitching and violent fits of fever. In many patients, there were signs of peripheral circulatory collapse. Within five days of the initial outbreak, all patients recovered in full, though some hallucinated for several days, Mary McMillan and J.C. Thompson report in the Quarterly Journal of Medicine. But what could cause such a sudden and mysterious illness?
Turns out, a bag of potatoes left in storage from the previous summer term.
After careful analysis of the sequence of events, the onset of symptoms was pinpointed to about four to 14 hours after the boys had eaten boiled potatoes with a high concentration of the toxin solanine, a glycoalkaloid first isolated in 1820 from the berries of the European black nightshade. Nightshade is the term used to describe over 2,800 species of plants in the scientific family Solanaceae. Eggplants, tomatoes and some berries are common members of the nightshade family—many of them contain highly toxic alkaloids.
That said, the potato is the most common cause of solanine poisoning in humans. But how do you know when solanine is present in a potato? The tuber turns green.
Though the green color that forms on the skin of a potato is actually chlorophyll, which isn’t toxic at all (it’s the plant’s response to light exposure), its presence indicates concentrations of solanine. The nerve toxin is produced in the green parts of the potato (the leaves, the stem and any green spots on the skin). The reason it exists? It’s part of the plant’s defense against insects, disease and other predators.
If you eat enough of the green stuff, it can cause vomiting, diarrhea, headaches and paralysis of the central nervous system (as evidenced by the incident above), and in some rare cases the poisoning can cause coma—even death. Studies have recorded illnesses caused by a range of 30 to 50 mg of solanine per 100 grams of potato, but symptoms vary depending on the ratio of toxin to body weight and the individual’s tolerance of the alkaloid. The following cases, recorded in various medical journals, include examples of some of the most severe instances of solanine poisoning (many of which resulted in death):
1899: After eating cooked potatoes containing 0.24 mg of solanine per gram of potato, 56 German soldiers experienced solanine poisoning. Though all recovered, in a few cases, jaundice and partial paralysis were observed.
1918: In Glasgow, Scotland, 61 people from 18 separate households were affected at once by a bad batch of potatoes. The following day, a five-year-old boy died of strangulation of the bowel following extreme retching and vomiting. According to “An Investigation of Solanine Poisoning” by S. G. Willimott, PhD, B.Sc., published in 1933, the case was investigated by scientists R. W. Harris and T. Cockburn, who concluded in their article “Alleged Poisoning By Potatoes” (1918) that the poisoning was the result of eating potatoes which contained five or six times the amount of solanine found in normal potatoes. Willimott cites this particular occurrence as an example of the toxin’s prevalence: “A review of the literature reveals the fact that authentic cases of solanine poisoning are not so rare as authorities appear to believe.”
1925: Seven members of a family were poisoned by greened potatoes. Two of them died. According to reports, symptoms included vomiting and extreme exhaustion, but no convulsions like those of the schoolboys in London. Breathing was rapid and labored until consciousness was lost a few hours before death.
1948: A case of solanine poisoning involving the potato’s nightshade relative, the berry, was recorded in the article “A Fatal Case of Solanine Poisoning,” published in the British Medical Journal. On August 13 of that year, a 9-year-old girl with a bad habit of snacking on the berries that grew along the railroad tracks by her house was admitted to the hospital with symptoms of vomiting, abdominal pain and distressed breathing. She died two days later. An autopsy found hemorrhages in the mucosa of the stomach and the middle section of her small intestine. The stomach contained about one pint of dark brown fluid.
1952: According to the British Medical Journal, solanine poisoning is most common during times of food shortage. In the face of starvation, there have been accounts of large groups eating older potatoes with a higher concentration of the toxin. In North Korea during the war years of 1952-1953, entire communities were forced to eat rotting potatoes. In one area alone, 382 people were affected, of whom 52 were hospitalized and 22 died. The most severe cases died of heart failure within 24 hours of potato consumption. Some of the less severe symptoms included irregular pulses, enlargement of the heart, and bluing of the lips and ears. Those who displayed these ailments died within 5 or 10 days. Authors John Emsley and Peter Fell explain in their book Was It Something You Ate?: Food Intolerance: What Causes It and How to Avoid It: “In the final stages [of the illness] there was sometimes a state of high excitability with shaking attacks and death was due to respiratory failure.”
1983: Sixty-one of 109 schoolchildren and staff in Alberta, Canada, fell ill within five minutes of eating baked potatoes. Forty-four percent of those affected noted a green tinge and a bitter taste in the potatoes.
Not to worry, though: fatal cases of solanine poisoning are very rare these days. Most commercial varieties of potatoes are screened for solanine, but any potato will build up the toxin to dangerous levels if exposed to light or stored improperly. Often, the highest concentrations of solanine are in the peel, just below the surface, and in the sprouted “eyes”—things that are typically removed in cooking preparation—though Warren would argue that even boiling potatoes dissolves only a little of the alkaloid. Emsley and Fell continue:
Most people can easily cope with the solanine in the average portion of potato and show no symptoms of poisoning because the body can break it down rapidly and excrete the products in the urine. But if the level of solanine is as high as 40 mg per 100 g of potato, symptoms include diarrhea…even coma.
The best way to prevent solanine poisoning is to store tubers in a cool, dark place and remove the skin before consumption. A general rule for avoiding illnesses like the ones described above? Green and sprouted? Throw it out.
October 11, 2013
For the privileged eaters of the Western world, so much of eating is done routinely: cereal for breakfast, a sandwich for lunch, probably a protein and vegetable for dinner. Sometimes, the act of eating is so second nature that the guidelines that dictate how and when we eat are invisible—guidelines such as eating a steak for dinner but not for breakfast, or eating lunch in the middle of the day. Eating wasn’t always dictated by these rules—so why is it now? That’s the question that food historian Abigail Carroll set out to answer in her new book, Three Squares: The Invention of the American Meal. Tracing the meal’s history from colonial America to the present day, Carroll explores why we eat cereal for breakfast, how dinner became American and how revisiting the history of our meals can have a positive impact on the future of eating. Carroll spoke with Smithsonian.com about the guidelines that control our dining.
How did the associations between certain meals and certain foods, like cereal for breakfast, form?
You start in the very early colonial era with one meal in the middle of the day—and it’s the hot meal of the day, dinner. Farmers and laborers ate earlier because they were up really early, and the elite were eating later in the day because they could sleep in. Breakfast and supper were kind of like glorified snacks, often leftovers or cornmeal mush, and there was not a lot of emphasis placed on these meals. Dinner, the main meal, at which people did tend to sit down together and eat, was really not the kind of social event that it has become. People did not emphasize manners, they did not emphasize conversation, and if conversation did take place it wasn’t very formal: it was really about eating and refueling. That’s the time where there are very blurry lines between what is and what isn’t a meal, and very blurry lines between what is breakfast, dinner and lunch.
Then, with the Industrial Revolution, everything changed, because people’s work schedules changed drastically. People were moving from the agrarian lifestyle to an urban, factory-driven lifestyle, and weren’t able to go home in the middle of the day. Instead, in the evening, they could all come home and have dinner together, so that meal becomes special. And that’s when manners become very important, and protocol and formality. It’s really around then that people start to associate specific foods with certain meals.
Then, with dinner shifting you have the vacuum in the middle of the day that lunch is invented to fill. People are bringing pie for lunch, they’re bringing biscuits, but the sandwich really lends itself to lunch well. So the popularity of the sandwich really does have something to do with the rise of lunch—and especially the rise of children’s lunch, because it’s not messy. You don’t need utensils, you don’t have to clean up—you can stick it in a lunch pail really easily.
Why is it acceptable to eat cereal and eggs and a waffle for breakfast, but not for lunch or dinner? How did breakfast go from being a necessity meal—fueled by leftovers—to a meal with clear guidelines for what is acceptable to eat?
There was a problem during the Industrial Revolution: people were still eating a farmer’s diet, but they were shifting to a more sedentary lifestyle, which caused indigestion. People who were interested in health started looking into that and started coming up with solutions. Sylvester Graham, the reformer who became a preacher of health ideology, advocated for vegetarian food, and whole wheat as kind of a panacea for health problems, which becomes the answer to the question of breakfast. Then, people who ran sanitariums, including John Harvey Kellogg, in the late 1800s and early 1900s, really took that idea and flew with it and invented new ways to eat farinaceous [starchy] foods.
Entrepreneurs—some of whom worked in the sanitariums, like Charles W. Post—really build on these ideas and make them a healthy requirement. He creates all sorts of crazy testimonies that serve as advertisements for Grape-Nuts, where people’s lives are saved from chronic illness and they’re able to walk again.
Then, there’s also the history of orange juice and milk, with the discovery of vitamins in the 1910s. Milk came to be seen as a super food, and something that would keep you from getting deficiency diseases. It shows up at other meals too, but for much of the 20th century, it’s not a complete meal unless you have milk.
Why is it that, in America, we have maintained the feeling that lunch needs to be a quick meal in the middle of the day?
We still are working a lot—we’re working more hours in the United States than any other industrialized nation. Lunch is the original quick meal; it accommodated changing work schedules.
And dinner has taken on the ideological weight of the meal. Dinner has been the time when we celebrate family, and when we concentrate on having a nice, hot meal, ideally. Because dinner fulfilled that need, there was less of a need for the other meals to. Lunch doesn’t have a lot of cultural work to do; it just has to get us by.
But, if you think about it, it’s not just lunch—it’s breakfast too. We can just pour milk over cereal, or pop some toast in the toaster and walk out the door without even needing a plate or utensils. Breakfast accommodates work. It’s not the meal that shapes work, it’s the work that shapes the meal.
Could you talk about how dinner became a particularly American institution?
Dinner was not initially a strong identifying factor, in terms of nationality, for colonists. At first, they were eating more or less peasant food, porridges brought from England that said more about class than nationality. Then, dinner shifts in the 1700s to become an identifying factor in terms of being English. They’re in this new world, seen as primitive, and so they feel that they have to compensate for that. They inherit the fashions that cross the ocean, like eating a roast with dinner.
In the nineteenth century, the emerging middle class identifies itself through French food and French ways of eating. Things that we take for granted now, like starting a meal with soup or having a salad, were really French concepts. Dessert was largely a French concept, and many of the desserts that we adopted in the 19th century were French desserts. For the Victorian middle class, eating in the French way was a way to imitate the elite.
With the decline of servants in the late 1800s, people just couldn’t keep that up. Then there are the [World] Wars and the Depression, and those require Americans to be frugal. But they don’t just require Americans to be frugal—they give Americans the opportunity to celebrate frugality as patriotic. To eat frugally, to have a Victory Garden and can your own foods is patriotic. The model for dinner is no longer the French multicourse formal meal, but Thanksgiving. Thanksgiving becomes the model for the everyday American dinner. Of course, you don’t eat a whole roast every night, but the idea is that you have “a chicken in every pot,” which was Herbert Hoover’s 1928 campaign slogan. You would have some kind of meat on the table.
Are there any dishes or foods that you would classify as typically, or even exclusively, “American?”
A number of iconic foods—hot dogs and hamburgers, snack food—are hand-held. They’re novelties associated with entertainment. These are the kinds of food you eat at the ballpark, buy at a fair and eventually eat in your home. I think that there is a pattern there of iconic foods being quick and hand-held that speaks to the pace of American life, and also speaks to freedom. You’re free from the injunctions of Victorian manners and having to eat with a fork and knife and hold them properly, sit at the table and sit up straight and have your napkin properly placed. These foods shirk all that. There’s a sense of independence and a celebration of childhood in some of those foods, and we value that informality, the freedom and the fun that is associated with them.
Along those lines, there’s a lot of pushback against those processed foods today, with people wanting to recall old ways of eating, with eating local and fresh. But, how do you think that knowing the kinds of food that we used to eat and the ways that we used to eat, and think about eating, influences the future of American food?
History can play a really central role in thinking about the way that we want to eat in the future. The evolution of the meal is a process, and it continues.
With all of the talk of food and health, I think a really good question to ask is “Can we actually be healthy without eating meals?” And without even, perhaps, eating a family dinner? Studies show that eating together, we always eat better, always.
The family meal is the opportunity to put to work what we’re talking about. If we’re learning about fresh foods and ingredients, the family meal has potential to be another way of instructing our children and ourselves. There’s an interest in renewing the family meal, even reinventing it. We’re not going to be able to revive a Victorian notion of dining; I don’t think we’re interested in it. If we want to spend time together, if we want to invest in our children, if we want to be healthy, the family meal can be a vehicle for that.