March 6, 2013
In 1975, a team of Russian archaeologists announced that they’d made a remarkable find: From a cave in the Altai Mountains of Siberia, they’d unearthed a 33,000-year-old fossil skull that resembled a wolf. In 2011, an anatomical analysis suggested that the fossil was a hybrid of a wolf (with its large teeth) and a dog (with its shortened snout), raising the possibility that it was a partly domesticated wolf—in other words, one of the oldest ancestors of the modern dog ever discovered.
At the time, though, DNA analysis was needed to be certain that the fossil came from an ancestor of man’s best friend. A paper published today in the journal PLOS ONE supports that interpretation, indicating that the creature was more closely related to modern dogs than to wolves and forcing scientists to reconsider the dog’s evolutionary family tree.
To come to the finding, a team led by Anna Druzhkova of the Russian Academy of Sciences sequenced mitochondrial DNA taken from one of the skull’s teeth. This type of genetic material comes from an organelle inside each cell called the mitochondrion, which carries its own DNA separate from the cell’s chromosomes. Mitochondrial DNA is inherited directly from the mother without recombination, so it changes only through the gradual accumulation of mutations over generations. Similarities in such DNA collected from various animals help scientists work out the evolutionary relationships between species.
The research team compared their sample of mitochondrial DNA from the ancient skull with samples from 70 different modern breeds of dog, along with 30 different wolf and 4 different coyote DNA samples. Their analysis found that the fossil’s DNA didn’t match any of the other samples perfectly, but most closely resembled the modern dog breeds, sharing the most similarities with Tibetan Mastiffs, Newfoundlands and Siberian Huskies in particular.
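The comparison at the heart of the study can be illustrated with a toy sketch (not the team’s actual pipeline): given aligned DNA fragments, count the positions where two sequences differ and rank the known samples by similarity to the unknown one. All sequence data and sample names below are invented for demonstration.

```python
def pairwise_differences(seq_a: str, seq_b: str) -> int:
    """Count positions where two aligned sequences differ (Hamming distance)."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
    return sum(1 for a, b in zip(seq_a, seq_b) if a != b)

# Hypothetical aligned mitochondrial DNA fragments (made up for illustration)
reference_samples = {
    "modern_dog": "ACGTACGTACGTACGT",
    "gray_wolf":  "ACGTACTTACGAACTT",
    "coyote":     "ACTTACTTACGAACTT",
}
ancient_sample = "ACGTACGTACGAACGT"

# Rank the known samples by similarity to the ancient fragment
ranked = sorted(reference_samples.items(),
                key=lambda kv: pairwise_differences(ancient_sample, kv[1]))
closest_name, closest_seq = ranked[0]
print(closest_name, pairwise_differences(ancient_sample, closest_seq))  # modern_dog 1
```

Real analyses use far longer sequences and statistical tree-building methods, but the principle is the same: fewer differences suggest a closer evolutionary relationship.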
Scientists know that dogs evolved as a result of the domestication of wolves, but the specific time and location of this domestication are still poorly understood—and this discovery further complicates that picture. Most experts agree that dogs predate the invention of agriculture (which happened roughly 10,000 years ago), but some say that domestication may have occurred as long as 100,000 years ago.
This finding, combined with the earlier radiocarbon dating that established the skull’s age, pushes that event back to at least 33,000 years ago. However, dogs may have been domesticated from wolves multiple times, and this Siberian lineage may have gone extinct rather than giving rise to modern dogs. Archaeological evidence indicates that with the onset of the last glacial maximum (around 26,000 years ago), humans in this area of Siberia may have stopped domesticating dogs, perhaps because of food scarcity. In that case, an independent domestication elsewhere may have led to the dogs of today.
On the other hand, domestication in the vicinity of the Altai Mountains, as this finding suggests, may have led to the geographic spread of dogs elsewhere in Asia and Europe even if they died out in Siberia. Many have previously suggested that the first domestication occurred in the Middle East or East Asia, but this skull could force scientists to rethink those theories. The research team notes that finding more ancient dog remains will help put the puzzle together.
January 15, 2013
For years, when museums, textbooks or other outlets attempted to illustrate what a particular ancient human skeleton would have looked like in the flesh, their method was admittedly unscientific—they basically had to make an educated guess.
Now, though, a group of researchers from Poland and the Netherlands has provided a remarkable new option, described in an article they published in the journal Investigative Genetics on Sunday. By adapting DNA analysis methods originally developed for forensic investigations, they’ve been able to determine the hair and eye color of humans who lived as long as 800 years ago.
The team’s method examines 24 locations in the human genome that vary between individuals and play a role in determining hair and eye color. Although this DNA degrades over time, the system is sensitive enough to generate this information from genetic samples—taken either from teeth or bones—that are several centuries old (although the most degraded samples can provide information for eye color only).
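The underlying idea can be sketched with a deliberately simplified toy model (invented here for illustration, not the researchers’ published system): each informative locus contributes weighted evidence toward a color, and the prediction is whichever color accumulates the most evidence from the loci that survived degradation.

```python
# Hypothetical per-locus evidence: locus -> genotype -> {color: weight}.
# All locus names, genotypes, and weights are made up for demonstration.
EYE_COLOR_WEIGHTS = {
    "locus_1": {"AA": {"blue": 2.0, "brown": 0.1},
                "AG": {"blue": 1.0, "brown": 1.0},
                "GG": {"blue": 0.1, "brown": 2.0}},
    "locus_2": {"CC": {"blue": 0.5, "brown": 0.2},
                "CT": {"blue": 0.3, "brown": 0.8},
                "TT": {"blue": 0.1, "brown": 1.5}},
}

def predict_eye_color(genotypes: dict) -> str:
    """Sum evidence across whichever loci yielded usable genotype calls."""
    totals = {"blue": 0.0, "brown": 0.0}
    for locus, genotype in genotypes.items():
        weights = EYE_COLOR_WEIGHTS.get(locus, {}).get(genotype, {})
        for color, weight in weights.items():
            totals[color] += weight
    return max(totals, key=totals.get)

# A degraded sample may yield calls at only some of the loci
print(predict_eye_color({"locus_1": "AA", "locus_2": "CT"}))  # blue
```

The real method examines 24 loci and produces calibrated probabilities rather than a single label, but the structure — combining evidence across whichever markers survive in an old sample — is the same.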
As a proof of concept, the team performed the analysis on several people whose eye and hair color are already known. Among others, they tested the DNA of Władysław Sikorski, a former Prime Minister of Poland who died in a 1943 plane crash, and determined that he had blue eyes and blond hair, a result that matches color photographs.
But the more useful application of the new method is providing new information. “This system can be used to solve historical controversies where colour photographs or other records are missing,” co-author Manfred Kayser, of Erasmus University in Rotterdam, said in a statement.
For example, in the paper, the researchers analyzed the hair and eye color for a female skeleton buried in the crypt of a Benedictine Abbey near Kraków, Poland, sometime between the 12th and 14th centuries. The skeleton had been of interest to archaeologists for some time, since male monks were typically the only people buried in the crypt. The team’s analysis showed that she had brown eyes and dark blond or brown hair.
The team is not sure yet just how old a skeleton has to be for its DNA to be degraded beyond use—the woman buried in the crypt was the oldest individual tested—so it’s conceivable that the method might even work for individuals who’ve been in the ground for more than a millennium. The researchers suggest this sort of analysis could soon become part of a standard anthropological toolkit for evaluating human remains.
January 7, 2013
Around 120 B.C.E., the Relitto del Pozzino, a Roman shipping vessel, sank off the coast of Tuscany. More than two millennia later, in the 1980s and 90s, a team sent by the Archeological Superintendency of Tuscany began to excavate the ruins, hauling up planks of rotting wood.
“It wasn’t an easy task. The wreck is covered by marine plants and their roots. This makes it hard to excavate it,” underwater archaeologist Enrico Ciabatti told Discovery News in 2010. “But our efforts paid off, since we discovered a unique, heterogeneous cargo.”
That cargo, it turned out, included ceramic vessels made to carry wine, glass cups from the Palestine area and lamps from Asia Minor. But in 2004, the archaeologists discovered it also included something even more interesting: the remains of a 2,000-year-old medicine chest.
Although the chest itself—which had presumably belonged to a Roman doctor—was apparently destroyed, researchers found a surgery hook, a mortar, 136 wooden drug vials and several cylindrical tin vessels (called pyxides) all clustered together on the ocean floor. When they x-rayed the pyxides, they saw that one of them had a number of layered objects inside: five circular, relatively flat grey medicinal tablets. Because the vessels had been sealed, the pills had been kept completely dry over the years, providing a tantalizing opportunity for us to find out what exactly the ancient Romans used as medicine.
Now, as revealed today in a paper in the Proceedings of the National Academy of Sciences, a team of Italian chemists has conducted a thorough chemical analysis of the tablets for the first time. Their conclusion? The pills contain a number of zinc compounds, as well as iron oxide, starch, beeswax, pine resin and other plant-derived materials. One of the pills bears the impression of a piece of fabric on one side, indicating it may once have been wrapped in cloth to keep it from crumbling.
Based on their shape and composition, the researchers venture that the tablets may have served as some sort of eye medicine or eyewash. The Latin name for eyewash (collyrium), in fact, comes from the Greek word κολλύρα, which means “small round loaves.”
Although it remains to be seen just how effective this sort of compound would have been as an actual eye treatment, the rare glimpse into Roman-era medicinal practices is fascinating nonetheless. The vast majority of our knowledge of ancient medicine comes from writings—which may vary in accuracy and lack crucial details—so the presence of actual physical evidence is especially exciting.
December 12, 2012
Archaeologists have long known that cheese is an ancient human invention. Wall murals in Egyptian tombs from 2000 BCE depict cheesemaking, and Sumerian tablets written in cuneiform text seem to describe cheese as well. Our distant ancestors, it seems clear, knew about the wonder that is cheese.
Today, though, cheese lovers have cause to celebrate: New evidence indicates that the invention of the utterly delicious and at times stinky product actually came thousands of years earlier. As described in a paper published today in Nature, chemical analysis of prehistoric pottery unearthed from sites in Poland shows that cheesemaking was invented far earlier than previously believed—roughly 7,000 years ago.
A team of researchers from the University of Bristol, Princeton and a group of Polish universities came to the finding by examining an unusual group of artifacts from the Polish sites: clay shards that were pierced with a series of small holes. Struck by their resemblance to modern-day cheese strainers, they chemically tested the material around the holes, and were vindicated to find ancient traces of the kinds of lipids and fatty acids found in dairy products. These ceramics are attributed to what archaeologists call the Linear Pottery culture, and are dated to 5200 to 4900 BCE.
“The presence of milk residues in sieves, which look like modern cheese-strainers, constitutes the earliest direct evidence for cheesemaking,” said lead author Mélanie Salque of the University of Bristol in a statement. “So far, early evidence for cheesemaking were mostly iconographic, that is to say murals showing milk processing, which dates to several millennia later than the cheese strainers.”
Although different cheeses are made by a variety of processes, nearly all start with the separation of milk into liquid whey and solid curds. This is typically accomplished by adding bacteria to the milk, along with rennet (a mix of enzymes produced in animal stomachs), then straining out the liquid from the newly-coagulated curds. These perforated pots, then, seem like they were used to strain out the solids.
The researchers also analyzed other pottery fragments from the site. Several unperforated bowls also had traces of dairy residues, indicating they might have been used to store the curds or whey after separation. They also found remnants of fats from cow carcasses in some of the ceramics, along with beeswax in others, suggesting they were used to cook meat and sealed to store water, respectively. Apart from being capable of making a complex food product like cheese, it seems that these ancient people also created different types of specialized ceramics for different purposes.
The authors of the paper believe this ancient cheesemaking goes a long way toward explaining a mystery: why humans bothered to domesticate cows, goats and sheep thousands of years ago rather than simply hunting their wild ancestors, even though genetic evidence indicates that adults hadn’t yet evolved the ability to digest lactose and thus couldn’t drink milk. Because cheese contains far less lactose than milk, they say, figuring out how to make it would have unlocked milk’s nutritional content and given prehistoric humans an incentive to raise these animals over the long term instead of slaughtering them immediately for their meat. Cheesemaking also let these people preserve that nutrition, since fresh milk spoils much more quickly than cheese.
That leaves one more pressing question—what did this ancient cheese actually taste like? Without abundant access to salt or knowledge of the refined heating and ripening processes that are necessary for the variety of cheese we have today, it’s likely that the first cheeses were pretty bland and liquidy. Like ancient Egyptian cheeses, these were probably comparable in texture and taste to cottage cheese, Salque and colleagues noted.
December 5, 2012
The iconic caveman in popular culture is Fred Flintstone: slow-witted and unskilled. In general, we think of the cave art produced by prehistoric people as crude and imprecise too—a mere glimmer of the artistic mastery that would blossom millennia later, during the Renaissance and beyond.
If this is your impression of prehistoric humans, a new study published today in PLOS ONE by researchers from Eotvos University in Budapest, Hungary, might surprise you. In analyzing dozens of examples of cave art from places such as Lascaux, the group, led by Gabor Horvath, determined that prehistoric artists were actually better at accurately depicting the way four-legged animals walk than artists from the 19th and 20th centuries.
The researchers evaluated the prehistoric artists on the basis of the landmark 1880s finding by British photographer Eadweard Muybridge that horses (and, it was later discovered, most four-legged animals) move their legs in a particular sequence as they walk. The “foot-fall formula,” as it’s called, goes LH-LF-RH-RF, where H means ‘hind,’ F means ‘fore,’ and L and R mean ‘left’ and ‘right,’ respectively. At the time of Muybridge, this was thought to be an entirely novel discovery.
Except, as it turns out, prehistoric people apparently knew it too—and got it right in their drawings the majority of the time. Of the 39 ancient cave paintings depicting four-legged animals in motion that were considered in the study, 21 showed the correct sequence, a success rate of 53.8%. Given the number of possible ways a four-legged animal’s walk can be depicted, the researchers calculate that mere chance would produce a success rate of just 26.7%. The cave artists knew what they were doing.
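As a quick back-of-the-envelope check (not the study’s own statistical analysis), an exact binomial calculation shows how unlikely 21 correct depictions out of 39 would be if the artists had simply been guessing at the reported 26.7% chance rate:

```python
from math import comb

n, k, p_chance = 39, 21, 0.267  # figures reported in the study

success_rate = k / n  # observed success rate, 21/39 = 53.8%

# P(X >= k) under the chance model, where X ~ Binomial(n, p_chance)
p_value = sum(comb(n, i) * p_chance**i * (1 - p_chance)**(n - i)
              for i in range(k, n + 1))

print(f"observed success rate = {success_rate:.1%}")  # 53.8%
print(f"P(at least {k} correct by chance) = {p_value:.2g}")
```

The tail probability comes out far below 1%, so the cave painters’ accuracy is very hard to attribute to luck.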
When the researchers looked at 272 paintings and statues of four-legged animals made during modern times but before Muybridge’s findings in the 1880s, such as a famous horse sketch by Leonardo da Vinci, it turned out that these more recent artists were much worse: They only got the sequence right 16.5% of the time. Remarkably, even the 686 paintings and statues studied that were made more recently than 1887, after scientists knew for sure how four-legged animals walked, still got it right just 42.1% of the time.
Even outside the art world, a sizable share of 20th-century depictions of four-legged animals made specifically for the sake of accuracy got the sequence wrong too, according to references used in the study. Of the 307 renditions analyzed, just 58.9% of depictions in natural history museums were correct, along with 56.9% of those in taxidermy catalogues, 50% of animal toy models and 36.4% of illustrations in animal anatomy textbooks.
Although the amount of art studied in each group varies greatly, the accuracy rate for animal depictions in prehistoric times is noteworthy. How could prehistoric humans possibly be this skilled at depicting animals such as bulls, antelopes and wild horses? For a potential answer, consider the way these ancient artists probably thought about the animals: as prey.
For prehistoric humans, “the observation of animals was not merely a pastime, but a matter of survival,” the study’s authors write. “Compared to artists of latter eras, when people were not as directly connected to nature, the creators of such cave paintings and carvings observed their subjects better and thus they depicted the walk of the animals in a more life-like manner.”