December 5, 2013
We’re in an age when scientists can detect the infinitesimally tiny particles that endow atoms with mass and are probing some of the deepest mysteries of biology, such as how experiences and memories can be passed down through an organism’s genes.
Thus, it may come as a surprise that we still don’t understand the mechanics of a relatively simple natural phenomenon: snow.
The formation of snowflakes—essentially, the idiosyncratic way that water crystallizes when suspended in our atmosphere—is an extremely complex process that still hasn’t been fully described by scientific formulas. “People think that a snowflake is just a frozen raindrop,” says Caltech physics professor Kenneth Libbrecht, who’s spent the past few decades studying the process of snowflake formation. “But that’s sleet, just little ice cubes, and not even close to what a snowflake is.”
Over the course of his research, Libbrecht’s work has grown to encompass both art and science. He’s produced scientific papers as well as hundreds of beautiful photos of natural snowflakes (published in several books and featured on U.S. postage stamps), and he’s devised ingenious ways to grow snowflakes artificially in a lab to study their formation in microscopic detail.
But it all started, he says, with a trip back to his childhood home of North Dakota. “I was visiting my family back there, and I went outside, and there was all this snow on the ground,” he says. “I suddenly thought, ‘Why don’t I understand more about these snowflakes?’”
That led Libbrecht to begin studying the dynamics of snowflake formation in his lab, in between researching more esoteric subjects like tunable diode lasers and the noise released by supernovae. “I realized that a lot about snowflakes is just not very well understood, and that ice is a pretty inexpensive material to work with,” he says.
The formation of even a single snowflake is a complex event at the molecular level. In nature, it begins when a cloud’s water vapor condenses into water droplets. Even at temperatures below freezing, though, most of these droplets typically stay in liquid form, because they need a particle upon which to freeze: either a dust particle or a few water molecules that have arranged themselves into the hexagonal matrix that characterizes ice.
Once droplets begin crystallizing on a central particle, though, the process accelerates rapidly. With a crystal nucleus in place, the supercooled water molecules in the surrounding water droplets readily condense on the crystal, adding to its growth in a geometrically regular way. By the time the large crystal (which we call a snowflake) has left the cloud, Libbrecht estimates that it will have absorbed the water from about 100,000 nearby droplets.
All that might sound simple, but as Libbrecht and other scientists have discovered, slight changes in the circumstances of these crystals—the cloud’s humidity and temperature, for starters—can lead to radically different-looking flakes. To better understand these dynamics, Libbrecht realized, he needed a way to observe the actual growth process of snowflakes. Without a way of embedding himself in a floating cloud, he decided to develop a method for artificially growing snowflakes in his California lab.
“To get an individual crystal growing in such a way that it looks like a snowflake is not easy,” he says. “If you want frost—just a bunch of crystals all growing at once—that’s pretty simple, but individual crystals are trickier.”
Libbrecht’s process, developed over the past few years, is done in a cold chamber and takes about 45 minutes in total. He starts with a completely clean piece of glass, and scatters many microscopic ice crystals onto it. With a microscope, he isolates a particular crystal, then blows slightly warmer humid air onto the glass. The water vapor condenses on the seed crystal, just like in a real cloud, eventually forming a visible snowflake.
Working with this process, Libbrecht has determined the temperature and humidity levels that lead to each particular kind of snowflake. “I call them ‘designer snowflakes,’ because you can change the conditions as you grow them and predict what they’ll look like,” he says. Among other things, he’s found that a snowflake with a thin edge grows faster, causing the edge to sharpen even further, ultimately leading to a relatively large flake. Snowflakes that begin with blunter edges, however, grow more slowly and remain blunt, leading to blocky prisms, rather than elegant plates.
Eventually, when Libbrecht wanted to publish a book on his work, he discovered that most of the available photos of snowflakes, such as those taken by Wilson Bentley in the 1930s, were out of date, however good they were for their time. In response, he began photographing snowflakes himself in high resolution, using specialized equipment and, at times, colored lights to give the clear flakes added color and depth.
As for the age-old question of whether any two snowflakes are alike, the answer, it turns out, is a math problem. If you define a snowflake as a mere ten molecules of water, then it’s possible for two different flakes to be identical at the molecular level. But for a full-size flake, he says, it’s extremely unlikely that you’d find two identical ones occurring naturally, in the same way that the odds of two identical human fingerprints are exceedingly small. “Once you start making things even slightly complicated, the number of possibilities grows astronomically, and the probability of even having two snowflakes that look remotely alike drops to zero,” he says.
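The combinatorial argument can be made concrete with a back-of-envelope calculation. The numbers below are illustrative assumptions chosen for this sketch, not figures from Libbrecht:

```python
# Back-of-envelope combinatorics for snowflake uniqueness.
# Assumption (ours, for illustration): a growing flake has 100 independent
# features, each of which can develop in any of 10 distinguishable ways.
features = 100
variants_per_feature = 10
possible_flakes = variants_per_feature ** features  # 10**100 configurations

# A commonly cited rough estimate of every snowflake that has ever fallen
# on Earth is on the order of 10**34 -- vanishingly small next to the
# space of possible configurations.
flakes_ever_fallen = 10 ** 34

# The ratio shows why a natural repeat is effectively impossible.
print(possible_flakes // flakes_ever_fallen)
```

Even with these deliberately modest assumptions, the possibilities outnumber all snowflakes in Earth’s history by dozens of orders of magnitude, which is the sense in which the probability of a repeat “drops to zero.”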
November 19, 2013
In 2010, photographer Rose-Lynn Fisher published a book of remarkable images that captured the honeybee in an entirely new light. By using powerful scanning electron microscopes, she magnified a bee’s microscopic structures by hundreds or even thousands of times in size, revealing startling, abstract forms that are far too small to see with the naked eye.
Now, as part of a new project called “Topography of Tears,” she’s using microscopes to give us an unexpected view of another familiar subject: dried human tears.
“I started the project about five years ago, during a period of copious tears, amid lots of change and loss—so I had a surplus of raw material,” Fisher says. After the bee project and one in which she’d looked at a fragment of her own hip bone removed during surgery, she’d come to the realization that “everything we see in our lives is just the tip of the iceberg, visually,” she explains. “So I had this moment where I suddenly thought, ‘I wonder what a tear looks like up close?’”
When she caught one of her own tears on a slide, dried it, and then peered at it through a standard light microscope, “It was really interesting. It looked like an aerial view, almost as if I was looking down at a landscape from a plane,” she says. “Eventually, I started wondering—would a tear of grief look any different than a tear of joy? And how would they compare to, say, an onion tear?”
This idle musing ended up launching a multi-year photography project in which Fisher collected, examined and photographed more than 100 tears from both herself and a handful of other volunteers, including a newborn baby.
Scientifically, tears are divided into three different types, based on their origin. Both tears of grief and joy are psychic tears, triggered by extreme emotions, whether positive or negative. Basal tears are released continuously in tiny quantities (on average, 0.75 to 1.1 grams over a 24-hour period) to keep the cornea lubricated. Reflex tears are secreted in response to an irritant, like dust, onion vapors or tear gas.
All tears contain a variety of biological substances (including oils, antibodies and enzymes) suspended in salt water, but as Fisher saw, tears from each of the different categories include distinct molecules as well. Emotional tears, for instance, have been found to contain protein-based hormones including the neurotransmitter leucine enkephalin, a natural painkiller that is released when the body is under stress.
Additionally, because the structures seen under the microscope are largely crystallized salt, the circumstances under which the tear dries can lead to radically dissimilar shapes and formations, so two psychic tears with the exact same chemical makeup can look very different up close. “There are so many variables—there’s the chemistry, the viscosity, the setting, the evaporation rate and the settings of the microscope,” Fisher says.
As Fisher pored over the hundreds of dried tears, she began to see even more ways in which they resembled large-scale landscapes, or as she calls them, “aerial views of emotion terrain.”
“It’s amazing to me how the patterns of nature seem so similar, regardless of scale,” she says. “You can look at patterns of erosion that are etched into earth over thousands of years, and somehow they look very similar to the branched crystalline patterns of a dried tear that took less than a moment to form.”
Closely studying tears for so long has made Fisher think of them as far more than a salty liquid we discharge during difficult moments. “Tears are the medium of our most primal language in moments as unrelenting as death, as basic as hunger and as complex as a rite of passage,” she says. “It’s as though each one of our tears carries a microcosm of the collective human experience, like one drop of an ocean.”
November 7, 2013
Visitors to a recent exhibition could, if they chose, sit down and have their portrait painted. The catch, though, was that it’d be planned and executed entirely by an artificial intelligence program called The Painting Fool.
“I’m interested in the idea that software itself can be creative,” says Simon Colton, the British computer scientist behind the program. “I want to drag software into new territory—by getting it to write music, or compose poems or paint pictures in a creative way.”
The Painting Fool was created in 2001, when Colton, who was then working on a dissertation involving artificial intelligence, became obsessed with using Photoshop to alter his photography. “I realized Photoshop wasn’t doing what I wanted it to do, and I started programming, trying to get the graphics to work how I wanted,” he says. “Eventually, I realized I could bring this computer graphics work into the fold of computational creativity.”
In the years since, his software has created thousands of paintings and graphics, and he’s continually improved the algorithm to come ever-closer to meeting what he sees as seven key criteria for creativity: skill, appreciation, imagination, learning, intentionality, reflection and invention. “Appreciation is what sets the program apart from Photoshop, which has no appreciation of what it’s doing, or what it’s produced, or what materials it’s working with,” Colton says. “In terms of imagination—if the software doesn’t do fun, surprising things, that you wouldn’t have thought of, then it’s not truly creative.”
He and colleagues have developed a number of different applications for the Painting Fool, but for the July exhibition, the program’s approach began with a seemingly unrelated task: reading the newspaper. They want to make the algorithm’s products unpredictable and surprising—hallmarks of creativity—but not merely the result of randomness, so reading the news and analyzing keywords in hundreds of articles is a means of putting the Painting Fool into different moods that inform its work.
At times, reading the news puts the program into such a bad mood that it doesn’t want to paint at all. “I was in a particularly negative mood, because I was reading an article entitled: ‘Aftershocks rock Italy earthquake zone’ in the world section of the Guardian newspaper, which was really sad, because it spoke of ‘terrified residents.’ So, I decided not to paint a portrait,” the Painting Fool wrote in response to one exhibition-goer.
Most of the time, though, the articles put the program into other moods (experimental, reflective or happy) that dictate one of roughly 30 qualities—bright, colorful, vivid, cold, bleary or crazy, among others—that it seeks to convey with a painting. With this in mind, when a subject sits down for a portrait, the Painting Fool starts issuing instructions. “You never feel like you’re using it, you feel like it’s using you, and you’re the model,” Colton says. “It says, ‘Thanks for being my model.’ Then, maybe ‘I want you to smile right now.’”
After taking a photo, the program isolates the subject’s face and places it within one of roughly 1,000 abstract templates, then uses one of an additional 1,000 image filters to manipulate the template and face further, searching for a combination likely to produce a portrait with the quality it originally chose. Finally, it splits the image into segments and fills each of these with a different color and texture, using virtual tools such as pencil, pastel or watercolors.
Afterward, the Painting Fool assesses its product and decides whether it achieved the desired look, comparing it to thousands of other works of art in a database with characteristics commonly associated with the artistic quality that it sought to convey. Like a human, it’s sometimes pleased with its work and sometimes disappointed. “I was in a positive mood. So I wanted to paint a patterned portrait,” it wrote in response to the portrait above. “This is a miserable failure—I’m very unhappy about that. And I’m also annoyed that the portrait is bleached, because that does not suit my mood.”
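The flow described above (a mood derived from news keywords, a template-and-filter search, then rendering and self-assessment) can be sketched in Python. Every name and number here is a hypothetical invention for illustration, not The Painting Fool’s actual code or API:

```python
import random

# Hypothetical sketch of the portrait pipeline the article describes.
# Mood names and qualities are simplified stand-ins for the roughly 30
# qualities the real program chooses among.
QUALITY_BY_MOOD = {"happy": "vivid", "reflective": "bleak",
                   "experimental": "crazy"}

def mood_from_news(keywords):
    """Keyword analysis of news articles sets the program's mood."""
    sad = {"earthquake", "aftershock", "terrified", "war"}
    if sum(word in sad for word in keywords) > len(keywords) / 2:
        return "negative"
    return random.choice(list(QUALITY_BY_MOOD))

def paint_portrait(face, mood, templates, filters):
    """Search templates and filters, then render (heavily simplified)."""
    if mood == "negative":
        return None  # too upset to paint at all
    quality = QUALITY_BY_MOOD[mood]
    template = random.choice(templates)             # ~1,000 abstract templates
    image = random.choice(filters)(template, face)  # ~1,000 image filters
    # The real system then splits the image into segments, fills each with
    # simulated pencil, pastel or watercolor, and self-assesses the result
    # against the intended quality (omitted here).
    return {"image": image, "intended_quality": quality}
```

The key design idea the sketch tries to capture is that the output is shaped by an externally seeded mood rather than by pure randomness, which is what lets the program’s choices be surprising without being arbitrary.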
This sort of intentionality and reflection, Colton says, are crucial elements of creativity. “It’s very easy to say, ‘You wrote the program, you tell it what to do, so it’s really just an extension of you.’ So we tried to get the software to aim to do something on its own, and then realize whether it has or hasn’t achieved it in the end,” he explains.
Colton’s aware that there are lots of people out there who don’t see real creativity in the program—and he sees their criticisms as essential to the Painting Fool’s success. “I’m always looking for people who say to me, ‘I don’t think it’s creative for this reason,’” he says. “That drives me on, and I’ll come back a year later with a few thousand lines of code to begin addressing that issue.”
Like its creator, the Painting Fool can learn and improve, and that is its greatest strength: each time it fails to meet its own expectations, it assesses what went wrong and uses that knowledge in future creative decisions. “It did about 100 portraits, and by the end of the week, it knew, for instance, that pencils are not good for vibrant paintings, but they are good for making bleak and dreary ones,” Colton says. “It reflected, it learned, and by the end, it was doing things that I hadn’t programmed it to do.”
October 30, 2013
With the new book The Photography of Modernist Cuisine, Nathan Myhrvold and his team of photographers want us to see inside the food itself. When they grill, they want to give us a snapshot of ground beef slowly browning from the flame upward. When they can vegetables, they want us to see the boiling water bubbling up, preserving the carrots and mushrooms in tightly sealed jars. They want us to see inside the mechanisms that power an oven, allowing us to roast a ham without building a fire in our home.
To accomplish this goal, they’ve sliced pots, pans and ovens in half, used heat-tolerant adhesives to glue pieces of glass in place, and stitched multiple images together with digital software, among other techniques. There are hundreds of other stunning photos in the enormous 13-pound, large-format tome, but the innovative cutaways stand out, providing unprecedented views into the essential processes that produce the food we eat every day.
Myhrvold, a former executive at Microsoft, continued pursuing his lifelong interests in food and photography after retiring from the company and produced a pair of in-depth cookbooks—the encyclopedic, 2,472-page Modernist Cuisine and the more user-friendly Modernist Cuisine at Home. In them, he sought to convey the complex science that goes on during all sorts of cooking processes. Working with Ryan Matthew Smith, Chris Hoover and other photographers, he embarked on an ambitious project to provide suitably thorough and beautiful photos to illustrate the recipes.
“The first idea I had was to show food from a new perspective, literally: through annotated cutaway views of the interior,” he writes in The Photography of Modernist Cuisine, which features a selection of some of the best photos from the library of more than 200,000 they’ve created over the years, along with thorough explanations of their creative process. “Although I knew it would be tough to pull this off with photography, photos would be much more stunning than illustrations, if we succeeded. It would be a kind of image that few people had ever seen before.”
This photo of broccoli being steamed was the first cutaway they ever created, and was shot in Myhrvold’s garage, as the research kitchen they later used to produce the cookbooks (and provide R&D to food companies) hadn’t been built yet. “We had to solve many technical challenges: how to machine the pot in half, perch the cut broccoli florets in a stable but natural-looking configuration, capture side-on shots of boiling water, and composite all the elements together,” Myhrvold writes. “But the result was so successful, so magical at revealing cooking as it happens, that we went on to make dozens of such cutaways.”
For this image of vegetables being canned, the team had to knit together two different images: cut vegetables, suspended in a gel inside jars that were themselves cut in half; and empty cut jars, glued to a glass plate attached to the pot and surrounded by boiling water. “The result is a new view of a very old cooking method,” Myhrvold writes.
Soon, the group moved on to even more challenging images, like this one of a pork roast atop ashy embers. This too involved digitally stitching together multiple shots—including those of partially-cooked meat, vegetables and broth placed in a halved iron pot, and those of burning embers piled below and above an emptied pot. “From an aesthetic point of view, the result was a pleasant surprise,” Myhrvold writes. “We find the contrasts of color and texture among the pink meat, fiery orange embers, gray ash, boiling broth, and wispy smoke endlessly fascinating.”
By the time they attempted this shot of a pork shoulder in a pressure cooker, cutting the cooker in half was relatively easy for the team at The Cooking Lab. “But it is harder than you might think to get a clean cut right through the middle of a stout pork shoulder,” Myhrvold writes. “We eventually met with success by freezing the shoulder solid and then sawing it in half.”
Not all the book’s images feature food in the kitchen—some, such as this photo of carrots, radishes, potatoes, leeks and beets, show food products in a more natural habitat. “Of the many thousands of photos in our two previous books, this image was the most technically challenging to set up, shoot, and edit,” Myhrvold writes. “But the rare glimpse it provides of what is normally hidden from us in the garden—the circus-like range of colors of those roots and tubers nestled beneath the surface—was worth the effort.”
October 25, 2013
You’ve probably seen a bee fly by hundreds of times in your life, if not thousands. When it arrived, maybe attracted by something you were eating or drinking, you likely shooed it away, or perhaps remained entirely still to avoid provoking a sting.
One thing you probably didn’t do was consider how the bee would look under intense magnification, blown up to 30, 300 or even 3,000 times its original size. But—as photographer Rose-Lynn Fisher has discovered over the past two decades working with powerful scanning electron microscopes (SEMs) to capture images of the insects in remarkable detail—everyday bees feature incredible microscopic structures.
“Once you scratch the surface, you see there’s a whole world down there,” says Fisher, who published her photos in the 2010 book Bee and is having them featured in the new exhibition Beyond Earth Art at Cornell University in January. “Once I started, it became a geographical expedition into the little body of the bee, with higher and higher magnifications that took me deeper and deeper.”
Fisher began creating the images back in 1992. “I was curious to see what something looked like under a scanning electron microscope, and a good friend of mine was a microscopist, and he invited me to bring something to look at,” she says. “I’ve always loved bees, and I had one that I found, so I brought it in to his lab.”
When Fisher first looked at the creature through the device, she was awestruck by the structures that made up its body at scales invisible to the naked eye. One of the first that captured her attention was the bee’s multi-lensed compound eye. “In that first moment, when I saw its eye, I realized that the bees’ eyes are composed of hexagons, which echo the structure of the honeycomb,” she says. “I stood there, just thinking about that, and how there are these geometrical patterns in nature that just keep on repeating themselves.”
Fisher was inspired to continue exploring the body of that bee, and others, continually looking at their microscopic structures and organs in greater and greater detail.
Her creative process started with the obvious: collecting a specimen to examine. “First, I’d find a bee, and look at it through my own regular light microscope to confirm its parts were intact,” she says. “The freshest ones were the best, so sometimes I’d find one walking on the ground that looked like it wouldn’t be around much longer, and I’d bring it home and feed it some honey, to give it something nice for its last meal.” Some of these were rejuvenated by her care, but those that weren’t, and perished, became the subjects of her microscopic exploration.
At her friend’s lab, in off hours, Fisher used a scanning electron microscope model called the JEOL 6100, which can detect objects as small as 40 angstroms (for comparison, a thin human hair is roughly 500,000 angstroms in diameter). Before scanning, she’d carefully coat the bee in an ultra-thin layer of sputtered gold.
This coating, she explains, enhanced the electrical conductivity of the bee’s surfaces, allowing the microscope to detect them in finer resolution. “The SEM uses a very finely focused electron beam that scans across the surface of the prepared sample,” she says. “It’s akin to shining a flashlight across the surface of an object in a dark room, which articulates the form with light. With an SEM, it’s electrons, not light—as it moves across the bee’s surface, it’s converting electrical signals into a viewable image.”
Once the bee specimen was prepared and mounted inside the SEM’s vacuum chamber, Fisher could use the machine to view the insect at different angles and manipulate the magnification to look for interesting images. At times, zooming in on the structures abstracted them beyond recognition, or yielded surprising views she’d never expected to see in a bee.
“For instance, when I looked at the attachment between the wing and the forewing, I saw these hooks,” she says. “When I magnified them 700 times, their structure was amazing. They just looked so industrial.”
Zoom in close enough, she found, and a bee stops looking anything like a bee—its exoskeleton resembles a desert landscape, and its proboscis looks like some piece of futuristic machinery from a sci-fi movie. At times, Fisher says, “you can go in deeper and deeper, and at a certain level, your whole sense of scale gets confounded. It becomes hard to tell whether you’re observing something from very close up, or from very far away.”
For more beautiful bee art, see Sam Droege’s bee portraits shot for the U.S. Geological Survey.