May 3, 2013
It started with hair. Donning a pair of rubber gloves, Heather Dewey-Hagborg collected hairs from a public bathroom at Penn Station and placed them in plastic baggies for safekeeping. Then her search expanded to include other types of forensic evidence. As the artist traverses her usual routes through New York City from her home in Brooklyn, down sidewalks, onto city buses and subway cars, even into art museums, she gathers fingernails, cigarette butts and wads of discarded chewing gum.
“Do you get strange looks?” I ask, in a recent phone conversation. “Sometimes,” says Dewey-Hagborg. “But New Yorkers are pretty used to people doing weird stuff.”
Dewey-Hagborg’s odd habit has a larger purpose. The 30-year-old PhD student, studying electronic arts at Rensselaer Polytechnic Institute in Troy, New York, extracts DNA from each piece of evidence she collects, focusing on specific genomic regions from her samples. She then sequences these regions and enters this data into a computer program, which churns out a model of the face of the person who left the hair, fingernail, cigarette or gum behind.
It gets creepier.
From those facial models, she then produces actual sculptures using a 3D printer. When she shows the series, called “Stranger Visions,” she hangs the life-sized portraits, like life masks, on gallery walls. Oftentimes, beside a portrait is a Victorian-style wooden box with various compartments holding the original sample, data about it and a photograph of where it was found.
Rest assured, the artist has some limits when it comes to what she will pick up from the streets. Though they could be helpful to her process, Dewey-Hagborg refuses to swipe saliva samples and used condoms. She tells me she has had the most success with cigarette butts. “They [smokers] really get their cells into that filter of the cigarette butt,” she says. “There just tends to be more stuff there to actually pull the DNA from.”
Dewey-Hagborg takes me step-by-step through her creative process. Once she collects a sample, she brings it to one of two labs: Genspace, a do-it-yourself biology lab in Brooklyn, or a lab on campus at Rensselaer Polytechnic Institute. (She splits her time between Brooklyn and upstate New York.) Early in the project, the artist took a crash course in molecular biology at Genspace, where she learned about DNA extraction and a technique called polymerase chain reaction (PCR). She uses standard DNA extraction kits that she orders online to analyze the DNA in her samples.
If the sample is a wad of chewing gum, for example, she cuts a little piece off of it, then cuts that little piece into even smaller pieces. She puts the tiny pieces into a tube with chemicals, incubates it, puts it in a centrifuge and repeats, multiple times, until the chemicals successfully extract purified DNA. After that, Dewey-Hagborg runs a polymerase chain reaction on the DNA, amplifying the specific regions of the genome she has targeted. She sends the amplified DNA (from both the mitochondria and the cells’ nuclei) to a lab to be sequenced, and the lab returns sequences of about 400 base pairs of guanine, adenine, thymine and cytosine (G, A, T and C).
Dewey-Hagborg then compares the sequences returned with those found in human genome databases. Based on this comparison, she gathers information about the person’s ancestry, gender, eye color, propensity to be overweight and other traits related to facial morphology, such as the space between one’s eyes. “I have a list of about 40 or 50 different traits that I have either successfully analyzed or I am in the process of working on right now,” she says.
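Dewey-Hagborg hasn’t published her code, but the logic of this lookup step can be sketched in a few lines of Python. In the snippet below, every SNP identifier, genotype and trait mapping is a hypothetical placeholder, a stand-in for whatever panel of 40 or 50 traits the artist actually works from:

```python
# Illustrative sketch of SNP-to-trait inference. All SNP ids, genotypes
# and mappings below are hypothetical placeholders, not the artist's data.
SNP_TABLE = {
    "rs_hypothetical_1": ("eye_color",
                          {"GG": "blue-leaning", "GA": "intermediate",
                           "AA": "brown-leaning"}),
    "rs_hypothetical_2": ("overweight_propensity",
                          {"TT": "elevated", "TC": "average",
                           "CC": "average"}),
}

def infer_traits(genotypes):
    """genotypes: dict of SNP id -> observed genotype, e.g. {'rs_hypothetical_1': 'GA'}."""
    traits = {}
    for snp, (trait, table) in SNP_TABLE.items():
        if snp in genotypes:
            traits[trait] = table.get(genotypes[snp], "unknown")
    return traits

print(infer_traits({"rs_hypothetical_1": "GA", "rs_hypothetical_2": "TT"}))
# -> {'eye_color': 'intermediate', 'overweight_propensity': 'elevated'}
```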
Dewey-Hagborg then enters these parameters into a computer program to create a 3D model of the person’s face. “Ancestry gives you most of the generic picture of what someone is going to tend to look like. Then, the other traits point towards modifications on that kind of generic portrait,” she explains. The artist ultimately sends a file of the 3D model to a 3D printer on the campus of her alma mater, New York University, so that it can be transformed into sculpture.
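The modeling step can be pictured the same way: start from a generic mesh and let each trait nudge it. Here is a minimal sketch of that “generic portrait plus modifications” idea, with a toy four-vertex mesh and a made-up displacement field standing in for the real data:

```python
import numpy as np

# Schematic of "generic portrait plus trait modifications". The base mesh
# and displacement values are toy placeholders; a real face model would
# carry thousands of vertices.
N = 4
base_face = np.zeros((N, 3))            # N vertices, (x, y, z) each
trait_offsets = {
    # hypothetical displacement field that widens the eye spacing
    "wide_eye_spacing": np.array([[0.1, 0.0, 0.0],
                                  [-0.1, 0.0, 0.0],
                                  [0.0, 0.0, 0.0],
                                  [0.0, 0.0, 0.0]]),
}

def apply_traits(base, offsets, weights):
    """Blend weighted trait displacement fields onto the generic face."""
    face = base.copy()
    for trait, weight in weights.items():
        face = face + weight * offsets[trait]
    return face

# The weight reflects how strongly the genotype supports the trait.
model = apply_traits(base_face, trait_offsets, {"wide_eye_spacing": 0.8})
```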
There is, of course, no way of knowing how accurate Dewey-Hagborg’s sculptures are; since the samples are from anonymous individuals, a direct comparison cannot be made. Certainly, there are limitations to what is known about how genes are linked to specific facial features. “We are really just starting to learn about that information,” says Dewey-Hagborg. The artist has no way, for instance, to tell the age of a person based on their DNA. “For right now, the process creates basically a 25-year-old version of the person,” she says.
That said, the “Stranger Visions” project is a startling reminder of advances in both technology and genetics. “It came from this place of noticing that we are leaving genetic material everywhere,” says Dewey-Hagborg. “That, combined with the increasing accessibility to molecular biology and these techniques means that this kind of science fiction future is here now. It is available to us today. The question really is what are we going to do with that?”
Hal Brown, of Delaware’s medical examiner’s office, contacted the artist recently about a cold case. For the past 20 years, he has had the remains of an unidentified woman, and he wondered if the artist might be able to make a portrait of her—another clue that could lead investigators to an answer. Dewey-Hagborg is currently working on a sculpture from a DNA sample Brown provided.
“I have always had a love for detective stories, but never was part of one before. It has been an interesting turn for the art to take,” she says. “It is hard to say just yet where else it will take me.”
Dewey-Hagborg’s work will be on display at Rensselaer Polytechnic Institute on May 12. She is taking part in a policy discussion at the Wilson Center in Washington, D.C. on June 3 and will be giving a talk, with a pop-up exhibit, at Genspace in Brooklyn on June 13. The QF Gallery in East Hampton, Long Island, will be hosting an exhibit from June 29-July 13, as will the New York Public Library from January 7 to April 2, 2014.
Editor’s Note: After getting great feedback from our readers, we clarified how the artist analyzes the DNA from the samples she collects.
May 2, 2013
An artist’s studio is usually a private space, and the hours spent with a paint-dipped brush in hand mostly solitary. So, the final products we gaze at on gallery walls are just the tip of the iceberg when it comes to the makers’ creative processes.
For Nathan Walsh, each of his realist paintings is the culmination of four months of eight- to 10-hour days in the studio. Now, thanks to a new app, we can go back in time and see how his work came to be, stroke by stroke.
Repentir, a free app for smartphones and the iPad, provides a hand-controlled time-lapse of Walsh’s oil painting, Transamerica. It compresses months of sketching and revision into interactive pixels, allowing users to peel back layers of paint and deconstruct Transamerica to its original pencil sketches.
The app, developed by researchers at Newcastle and Northumbria universities in England, uses computer vision algorithms to recognize the painting in photographs taken from various perspectives. When you take a photo of any part of Transamerica (or the entire work), the app replaces your image with those captured in the studio as Walsh painted. Every day for four months, a digital camera set up in his York-based studio snapped a shot of his progress, accumulating roughly 90 images.
Users can view the painting’s layers in two ways. A slider at the bottom lets viewers take the piece from its beginning stages to the final product by swiping from left to right (think “slide to unlock”). They can also use their fingers to rub away at a given spot on the screen, revealing earlier stages in the process.
“Where their fingers have been, we basically remove pixels from the image and add pixels from older layers until they’re rubbed away,” says Jonathan Hook, a research associate at Newcastle who studies human-computer interaction. “It’s like how you add paint to the canvas—we’re doing the opposite.”
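The app’s actual code isn’t public, but Hook’s description suggests a straightforward reconstruction: keep a per-pixel index into the stack of studio photographs, and let rubbing push that index back toward the earliest layer. A sketch of that idea in Python with NumPy:

```python
import numpy as np

# Reconstruction (not the app's actual code) of the rub-to-reveal idea.
# layers: list of H x W x 3 images ordered oldest (pencil sketch) first;
# depth starts at len(layers) - 1 everywhere, i.e. the finished painting.

def rub(depth, cx, cy, radius, step=1):
    """Push pixels within `radius` of (cx, cy) one layer further back."""
    h, w = depth.shape
    ys, xs = np.ogrid[:h, :w]
    mask = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    depth[mask] = np.maximum(depth[mask] - step, 0)
    return depth

def composite(layers, depth):
    """Build the displayed image: each pixel drawn from its own layer."""
    stack = np.stack(layers)            # shape: L x H x W x 3
    ys, xs = np.indices(depth.shape)
    return stack[depth, ys, xs]         # per-pixel fancy indexing
```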
Repentir was unveiled this week at the ACM SIGCHI Conference on Human Factors in Computing Systems in Paris, an annual science, engineering and design gathering. This year’s theme is “changing perspectives.” Transamerica will be on display there until tomorrow, when it moves to the Bernarducci Meisel Gallery, a realist painting collection in New York.
The app relies on a process known as scale-invariant feature matching, a technique also used in augmented reality. Researchers trained the app on a high-resolution image of Transamerica to identify and create markers for certain features. These markers can then be matched against the features found in a user’s photo of the painting, even a photo of just a tiny piece of it.
“If you take a picture of the bottom right-hand corner, it will find the features in the bottom right-hand corner of the image and match them against those same features in the source image,” Hook says. “If there’s at least three or four features matched, you’re able to work out the perspective and the difference in image position on those features.”
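That recipe maps onto standard computer-vision tooling. The sketch below, using OpenCV’s SIFT implementation, is one plausible way to do what Hook describes rather than the app’s actual code; the file names are placeholders. With at least four good matches, the perspective transform (a homography) can be estimated:

```python
import cv2
import numpy as np

# Match a visitor's snapshot against a reference image of the painting.
ref = cv2.imread("transamerica_reference.jpg", cv2.IMREAD_GRAYSCALE)
snap = cv2.imread("visitor_photo.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_ref, des_ref = sift.detectAndCompute(ref, None)
kp_snap, des_snap = sift.detectAndCompute(snap, None)

# Keep only clearly-best matches (Lowe's ratio test).
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des_snap, des_ref, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Four or more matches pin down the perspective of the snapshot.
if len(good) >= 4:
    src = np.float32([kp_snap[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
```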
Ninety images’ worth of layers may not sound like a lot when you factor in today’s smartphone scrolling speeds, but if you’re viewing Transamerica in person, there’s more than enough to explore. The canvas measures roughly 71 by 48 inches. It would take a massive number of screen grabs to rub away the layers of the entire work.
Transamerica is a colorful composite of elements that caught Walsh’s eye during a trip to San Francisco’s Chinatown, the largest Chinese community outside of Asia. Several years ago, Walsh traveled across America, stopping in major cities, including San Francisco, New York and Chicago, sketching and taking photographs of the urban landscapes.
Walsh says he’s often accused of stitching photographs together or touching up in Photoshop because of the realistic look of his paintings. He aims to convey a sense of three-dimensional space in his work. In Transamerica, the juxtaposition of different objects and designs creates almost palpable layers of paint.
“There’s always an assumption that there’s some sort of trickery involved,” Walsh says. “Getting involved in a project like this explains literally how I go about constructing these paintings. It shows all the nuts and bolts of their making.”
Hook says the researchers chose Walsh’s work to expose those “nuts and bolts.” “Lots of people, when they see his paintings, they think he’s cheated, when in reality what Nathan does is just get a pencil and a ruler and draws these really amazing photorealistic pictures from scratch,” he says. “The idea behind the app was to reveal Nathan’s process and show people how much hard work he does.”
In this way, Walsh believes using Repentir in front of the actual work will make the gallery experience more educational for visitors. “For me, the exciting thing is that you’re getting close, as close as you can, to my experience of making the painting,” he says.
While the app is free, Hook believes the tool could lead to a new business model for artists. In the future, app users could purchase a print of a configuration of layers they like best.
February 5, 2013
When Pupa U. P. A. Gilbert, a biophysicist at the University of Wisconsin, Madison, and her colleague Christopher E. Killian saw the scanning electron micrograph that they took of a sea urchin’s tooth, they were dumbstruck, says the journal Science. “I had never seen anything that beautiful,” Gilbert told the publication.
The individual crystals of calcite that form an urchin’s tooth are pointy, interlocking pieces; as the outermost crystals decay, others come to the surface, keeping the tooth sharp. In Photoshop, Gilbert added blues, greens and purples to the black-and-white image to differentiate the crystals. The resulting image calls to mind an eerie landscape in a Tim Burton movie.
Judges of the 2012 International Science & Engineering Visualization Challenge, a competition sponsored by Science and the National Science Foundation, as well as the public who voted online, were equally ecstatic about the SEM image. Enough so, in fact, that they selected the micrograph as the first place and people’s choice winner for the contest’s photography division.
The 10th annual Visualization Challenge received 215 entries across five categories—photography, illustration, posters and graphics, games and apps, and video. The submissions are judged based on visual impact, effective communication and originality.
And…drum roll, please. Here are some of the recently announced winners:
Kai-hung Fung, a radiologist at Pamela Youde Nethersole Eastern Hospital in Hong Kong, captured this image of a clam shell (on the left) and a spiral-shaped sea snail shell (on the right) using a CT scanner. The image won honorable mention in the photography category. The multi-colored lines represent the contours in the shells. Fung told Science that he took into account “two sides of a coin” when making the image. “One side is factual information, while the other side is artistic,” he told the journal.
Viktor Sykora, a biologist at Charles University in Prague, and researchers at the Czech Technical University submitted three minuscule (we’re talking three millimeters in diameter or less) seeds to high-resolution, high-contrast X-ray imaging (on the left) and microscopy (on the right). The above image also won honorable mention in the photography category.
Emmett McQuinn, a hardware engineer at IBM, earned first prize in the illustration category with this “wiring diagram” for a new kind of computer chip, based on the neural pathways in a macaque’s brain.
Maxime Chamberland, a computer science graduate student at the Sherbrooke Connectivity Imaging Lab in Canada, used magnetic resonance imaging (MRI) to capture this ominous image of a brain tumor. (The tumor is the solid red mass in the left side of the brain.) Science calls the image a “road map for neurosurgeons,” in that the red fibers, if severed, could negatively impact the patient’s everyday functions, while the blue fibers are nonthreatening. The image won honorable mention and was the people’s choice winner in the contest’s illustration category.
A team of researchers (Guillermo Marin, Fernando M. Cucchietti, Mariano Vázquez, Carlos Tripiana, Guillaume Houzeaux, Ruth Arís, Pierre Lafortune and Jazmin Aguado-Sierra) at the Barcelona Supercomputing Center produced this first-place and people’s-choice winning video, “Alya Red: A Computational Heart.” The film shows Alya Red, a realistic animation of a beating human heart that the scientists designed using MRI data.
“I was literally blown away,” Michael Reddy, a judge in the contest, told Science. “After the first time I watched the video, I thought, ‘I’ve just changed the way I thought about a heart.’”
Be sure to check out the other videos below, which received honorable mention in the contest:
Fertilization, by Thomas Brown, Stephen Boyd, Ron Collins, Mary Beth Clough, Kelvin Li, Erin Frederikson, Eric Small, Walid Aziz, Hoc Kho, Daniel Brown and Nobles Green, Nucleus Medical Media
Observing the Coral Symbiome Using Laser Scanning Confocal Microscopy, by Christine E. Farrar, Zac H. Forsman, Ruth D. Gates, Jo-Ann C. Leong, and Robert J. Toonen, Hawaii Institute of Marine Biology, University of Hawaii, Manoa
Revealing Invisible Changes in the World, by Michael Rubinstein, Neal Wadhwa, Frédo Durand, William T. Freeman, Hao-Yu Wu, John Guttag, MIT; and Eugene Shih, Quanta Research Cambridge
For winners in the posters and graphics and games and apps categories, see the National Science Foundation’s special report on the International Science & Engineering Visualization Challenge.
January 25, 2013
Art buffs are not the only ones intrigued by Leonardo da Vinci’s Mona Lisa. In a fun experiment in 2005, a group of researchers from the University of Amsterdam analyzed Mona Lisa’s famous smile. They ran a scanned reproduction of the painting through “emotion recognition” software, which concluded that Mona was 83 percent happy, 9 percent disgusted, 6 percent fearful, 2 percent angry, 1 percent neutral, and completely unsurprised.
In 2010, scientists in France used X-ray fluorescence spectrometry on the painting and found that da Vinci applied layers upon thin layers of glazes and paints to achieve the subject’s flawless complexion. Then, this past year, Italian archaeologists exhumed the skeletal remains of Lisa Gherardini, the suspected sitter for the portrait, in Florence, in hopes of identifying, once and for all, the real Mona Lisa.
And now even NASA has taken an interest in da Vinci’s coy lady.
In an experiment in laser communication, scientists at the Next Generation Satellite Laser Ranging (NGSLR) station at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, beamed a digital image of the Mona Lisa to the Lunar Reconnaissance Orbiter (LRO) some 240,000 miles away.
“This is the first time anyone has achieved one-way laser communication at planetary distances,” says David Smith of the Massachusetts Institute of Technology, in a recent press release. Smith is the Lunar Orbiter Laser Altimeter’s principal investigator.
NASA scientists, up until this point, have used radio waves to track and communicate with satellites traveling outside of Earth’s orbit. Going forward, however, they are considering lasers, which can transmit more data at a faster rate than radio signals.
The LRO, a satellite that has been orbiting the moon and mapping its surface since 2009, provided the perfect test case. The spacecraft is the only one currently outside of Earth’s orbit that is capable of receiving lasers; it can be tracked using both lasers and radio.
But of all images to send, why the Mona Lisa?
“We chose the Mona Lisa because it is a familiar image with a lot of subtleties, which helped us to see the effect of transmission errors,” says Xiaoli Sun, a scientist at NASA Goddard and lead author of a recent paper about the project in Optics Express.
The digital image essentially rode “piggyback” on laser pulses that are regularly beamed at LRO to follow its position in space. NASA describes the process in more detail in its press release:
Precise timing was the key to transmitting the image. Sun and colleagues divided the Mona Lisa image into an array of 152 pixels by 200 pixels. Every pixel was converted into a shade of gray, represented by a number between zero and 4,095. Each pixel was transmitted by a laser pulse, with the pulse being fired in one of 4,096 possible time slots during a brief time window allotted for laser tracking. The complete image was transmitted at a data rate of about 300 bits per second.
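This is, in effect, pulse-position modulation: each pixel’s 12-bit gray value (4,096 = 2^12) selects the time slot in which its laser pulse fires. A toy encoder and decoder in Python, with a made-up slot duration since the press release doesn’t specify one:

```python
# Toy pulse-position modulation, per NASA's description. The slot
# duration below is a hypothetical value, not from the source.
SLOTS = 4096
SLOT_WIDTH_S = 1e-6   # assumed slot duration in seconds

def encode_pixel(gray_value, window_start_s):
    """Return the firing time for one pixel's laser pulse."""
    assert 0 <= gray_value < SLOTS
    return window_start_s + gray_value * SLOT_WIDTH_S

def decode_pulse(arrival_s, window_start_s):
    """Recover the gray value from the pulse's observed time slot."""
    return int((arrival_s - window_start_s) / SLOT_WIDTH_S) % SLOTS
```

At 12 bits per pixel, the 152-by-200-pixel image works out to roughly 365,000 bits, so at 300 bits per second the full transmission would have taken on the order of 20 minutes.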
The satellite pieced together the full image and then sent it back via radio waves. The portrait was not transmitted perfectly; natural disturbance of the laser as it passed through the Earth’s atmosphere accounts for the blank pixels in the image, shown above. (At one point while the image was being sent to LRO, an airplane was detected within five degrees of the laser, and the laser was blocked for that time, which explains the vertical white streak running through the image.) Sun and his team cleaned up the image using what’s called Reed-Solomon error-correction coding.
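Reed-Solomon coding appends parity symbols that let the receiver repair a bounded number of corrupted symbols outright. NASA’s actual coding parameters aren’t given in the article, but the principle can be demonstrated with the pure-Python reedsolo package:

```python
from reedsolo import RSCodec

# 10 parity bytes can correct up to 5 corrupted bytes per codeword.
rsc = RSCodec(10)
encoded = rsc.encode(b"MONA LISA")

corrupted = bytearray(encoded)
corrupted[0] ^= 0xFF                   # simulate an atmospheric hit
# reedsolo >= 1.0 returns (message, full codeword, errata positions)
decoded, _, _ = rsc.decode(bytes(corrupted))
assert decoded == b"MONA LISA"
```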
The experiment will likely be the first of many. “This pathfinding achievement sets the stage for the Lunar Laser Communications Demonstration (LLCD), a high data rate laser-communication demonstration that will be a central feature of NASA’s next moon mission, the Lunar Atmosphere and Dust Environment Explorer (LADEE),” says Goddard’s Richard Vondrak, the LRO deputy project scientist, in the press release.
On a simpler note, though, I can’t help but wonder: what would da Vinci think of Mona Lisa’s cosmic journey?
October 17, 2012
If public speaking is an art form, then presidential elections produce some masterpieces—from memorable speeches and powerful sound bites to well-timed digs in the debates.
But Bill Seaver and Nathan Moore, two innovators from Nashville, have taken this idea to the next level. Their new company, Epic Frequency, takes the aural high points in history and actually transforms them into visual showpieces for your home.
According to the company’s Web site, the idea came to the co-founders one evening four years ago when they were recording their monthly podcast on new media and technology. (Seaver is a social marketing consultant, and Moore is a web developer.) It struck the two of them that the jagged waveform of an audio file in recording software is quite eye-catching. Why not select some of the most significant speeches and audio clips in history, they thought, and print the waveforms in bright colors on large black canvases?
Epic Frequency launched just last week with nine initial offerings. The works of art include Neil Armstrong’s “One Small Step,” Ronald Reagan’s “Tear Down This Wall,” Franklin Delano Roosevelt’s “Fear Itself,” John F. Kennedy’s “Ask Not,” Martin Luther King, Jr.’s “I Have a Dream,” Albert Einstein’s explanation of “E=MC2,” President Barack Obama’s 2009 inauguration speech and, not to ignore this election season, both Mitt Romney and President Obama‘s convention speeches.
The criterion for choosing clips, says Moore, is “reflected in our name.” The speeches have to be epic. “Every audio recording that we select must have played an important role in our history. Usually these are moments that truly defined that era. They are glimpses into our past that changed the way we looked at the world,” he says.
On the side of each canvas is a QR code. A viewer can scan the barcode with their cell phone to access a mobile version of the print, with the actual audio overlaid on it. This way, one can listen to the speech and follow along with the peaks and valleys of the waveform. The company will sell 5,000 prints of an iconic speech before retiring that particular design.
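Generating such a code is a one-liner with an off-the-shelf library; the URL below is a placeholder, not a real Epic Frequency address:

```python
import qrcode

# Encode a (hypothetical) URL for the print's mobile audio page.
img = qrcode.make("https://example.com/prints/one-small-step")
img.save("one_small_step_qr.png")
```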
Clearly, Epic Frequency bridges art and history. But, of course, I am especially interested in how the concept combines art and science. What can these works teach people about sound? The artwork itself is data. So, what can viewers gather from it?
“We have continually been fascinated by the information that can be uncovered by analyzing the waveforms,” says Moore. “Since the sound amplitude is represented, volume and cadence become immediately apparent.” Seaver told Wired that he thought President Obama might have a speaking style similar to that of Martin Luther King, Jr., based on a visual comparison of their speeches.
The co-founders have their artistic preferences. They tend to think that the shorter the recording, the more visually appealing it is, because the sounds and silences are more pronounced in the waveforms. When looking at the “One Small Step” print, says Moore, you can almost hear Armstrong’s words, “That’s one small step for [a] man, one giant leap for mankind,” uttered on July 21, 1969.
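Anyone curious about what their own audio looks like can rough out a waveform print in a few lines. A minimal sketch with NumPy and Matplotlib, assuming a 16-bit mono PCM file ("speech.wav" is a placeholder name):

```python
import wave

import matplotlib.pyplot as plt
import numpy as np

# Read raw samples from a 16-bit mono WAV file (placeholder name).
with wave.open("speech.wav", "rb") as wf:
    frames = wf.readframes(wf.getnframes())
samples = np.frombuffer(frames, dtype=np.int16)

# Draw the amplitude trace in a bright color on a black "canvas".
fig, ax = plt.subplots(figsize=(12, 3), facecolor="black")
ax.set_facecolor("black")
ax.plot(samples, color="cyan", linewidth=0.2)
ax.axis("off")
fig.savefig("waveform_print.png", dpi=300, facecolor="black",
            bbox_inches="tight")
```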
Epic Frequency is even venturing into personal histories, creating custom prints based on audio clips that customers provide. “We have definitely had some interesting requests,” says Moore. “Most are romantic or sentimental in nature, such as ‘I Love You,’ wedding vows or a baby’s first cry.”