June 19, 2013
Chances are you’ve heard about the Texas company that recently announced it was able to produce a working handgun on a 3-D printer. After assembling the gun out of printed plastic parts, the firm’s founder, Cody Wilson, took it out to a shooting range and successfully fired some .380 caliber bullets. He calls his creation “The Liberator.”
Chances are you haven’t heard about the 3-D printed working bionic ear made by Princeton and Johns Hopkins scientists. Or the University of Michigan researchers who used a 3-D printer to produce a plastic splint that likely saved the life of a baby with a rare condition that caused his windpipe to collapse. Or the company called The Sugar Lab. It creates amazingly elaborate–and edible–sugar structures on, yes, a printer.
The truth is, almost any business that makes a product is probably weighing how 3-D printing–also known as additive manufacturing–fits into its future. Ford already is using the technology to print cylinder heads, brake rotors and rear axles for test vehicles. In fact, production time for some parts has been shaved by 25 to 40 percent. And engineers at Mattel are using 3-D printers to create parts of virtually every type of toy that it manufactures, from Hot Wheels cars to Barbie dolls.
If you’re still not buying into the notion that 3-D printing is finally, after 30 years, going mainstream, consider this: Last month Staples became the first major U.S. retailer to start selling 3-D printers. And one more tidbit: Amazon just launched an online 3-D printer store.
It’s easy to get carried away with the idea that 3-D printing will change everything, that one day you’ll never have to go to an auto parts store or a toy store or a hardware store since you’ll be able to print out whatever you need. Not so fast. For starters, think about the liability issues that would come with installing car parts you printed at home.
That said, Janine Benyus thinks that 3-D printing presents a rare opportunity to profoundly change how we make things. Benyus is founder of the Biomimicry 3.8 Institute–that’s a reference to the 3.8 billion years life has been adapting on Earth–and she knows as well as anyone how much can be learned from nature. So, asks Benyus, why not take advantage of this moment in technological evolution to see how products can be created to better mimic the natural world? And what would it take to ensure that everything made on a 3-D printer is recyclable?
They’re questions she and other scientists will tackle later this week at the first Biomimicry Global Conference in Boston. During that discussion, Benyus will likely spend some time talking about potato chip bags.
They seem so simple, but as Benyus likes to point out, every bag actually has seven distinct layers, each made of a different material–one for waterproofing, one for excluding oxygen, one for inking, etc. Altogether, a potato chip bag comprises as many as 350 different polymers. By contrast, notes Benyus, a beetle’s shell is made of one material–chitin–but it’s strong, waterproof, allows air to pass through it and can change colors.
The challenge now, she notes, is to get the 3-D printer industry to look to nature for inspiration. Says Benyus:
“Nature works with five polymers. Only five polymers. In the natural world, life builds from the bottom up and it builds in resilience and multiple uses. What would it be like to use only five polymer classes to build everything?”
Benyus’ focus is on rallying experts in her field to design biomimetic digital structures for materials that, when printed, will have the same kind of strength, toughness and flexibility so common in substances in the natural world. And once a product’s life is over, it could be broken down and fed back into the printer to take shape as something new.
“We rarely get opportunities like this. This is our opportunity to get very close to how nature works,” said Benyus. “Are we going to address this? Or are we going to build bigger landfills?”
Here are a few more recent 3-D printer innovations:
- Hold the toner: NASA has contracted with a Texas firm to develop a 3-D printer that can make pizzas in space. The company landed the contract, in part, because it has already built a printer that can print chocolate chips onto a cookie.
- It’s alive!: A San Diego company recently announced that it has used a 3-D printer to create samples of liver cells that function just as they would in a human. The printed cells were able to produce some of the same proteins an actual liver does, and they interacted with each other and with compounds as they would in your body.
- Go print up your room: Designers Benjamin Dillenburger and Michael Hansmeyer are building an entire room out of sandstone shapes created on a printer. The ornate room, which has been described as a “cross between an alien skeletal system and a cathedral on another planet,” will be unveiled next month.
- But why stop there?: A Dutch architectural firm has designed an entire house that will be built of plastic parts made on a printer. The architects plan to have the entire front facade of the house, which will be located on a canal in northern Amsterdam, constructed by the end of the year. The 3-D-printed kitchen, study, storage room and guestroom will be added next year.
- Imagine that: And in Chile, a team of engineers say they’ve developed software that enables objects to be printed in response to a person’s brain waves. In theory, users will be able to create and print 3-D versions of whatever their brains can conjure up. Chilean children will get the first crack at trying it out during a tour of schools later this month.
Video bonus: Janine Benyus talks about her favorite subject–the inspiration of nature.
Video bonus bonus: Listen to this violin for a few bars and you’ll see why some things probably shouldn’t be made on a printer.
More from Smithsonian.com
May 22, 2013
As much time as we spend with our cell phones and laptops and tablets, it’s still pretty much a one-way relationship. We act, they respond. Sure, you can carry on a conversation with Siri on your iPhone, and while she’s quick with a reply, it hardly qualifies as playful banter. You ask questions, she gives answers.
But what if these devices could really read our emotions? What if they could interpret every little gesture, every facial cue so that they can gauge our feelings as well as–maybe better than–our best friends? And then they respond, not with information, but what might pass for empathy.
We’re not there yet, but we’re quickly moving in that direction, driven by a field of science known as affective computing. It’s built around software that can measure, interpret and react to human feelings. This might involve capturing your face on camera and then applying algorithms to every aspect of your expressions to try to make sense of each smirk and chin rub. Or it might involve reading your level of annoyance or pleasure by tracking how fast or with how much force you tap out a text or whether you use emoticons. And if you seem too agitated–or drunk–you could get a message suggesting that you might want to hold off pressing the send icon.
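The typing-based signals described above are easy to picture as code. Here is a toy heuristic in the spirit of affective computing–a hypothetical sketch, not any real product’s algorithm; the feature names and thresholds are invented for illustration:

```python
# Toy mood heuristic built on crude typing signals, the kind the
# article describes: speed, corrections, emoticon use. All thresholds
# are invented for demonstration.

def guess_mood(chars_per_second, backspaces, emoticons):
    """Guess a writer's state from simple typing features."""
    if chars_per_second > 6 and backspaces > 10:
        return "agitated"   # fast, error-prone typing
    if emoticons >= 2:
        return "playful"    # liberal emoticon use
    if chars_per_second < 1:
        return "hesitant"   # slow, deliberate typing
    return "neutral"

def should_warn_before_send(mood):
    """The 'maybe hold off pressing send' check from the article."""
    return mood == "agitated"

mood = guess_mood(chars_per_second=7.5, backspaces=14, emoticons=0)
print(mood)                           # agitated
print(should_warn_before_send(mood))  # True
```

Real systems replace these hand-picked thresholds with models trained on labeled data, but the input signals are much the same.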
Seeing how difficult it is for us humans to make sense of other humans, this notion of programming machines to read our feelings is no small challenge. But it’s picking up speed, as scientists sharpen their focus on teaching devices emotional intelligence.
Every move you make
One of the better examples of how affective computing can work is the approach of a company called, appropriately, Affectiva. It records expressions and then, using proprietary algorithms, scrutinizes facial cues, tapping into a database of almost 300 million frames of elements of human faces. The software has been refined to the point where it can associate various combinations of those elements with different emotions.
When it was developed at M.I.T.’s Media Lab by two scientists, Rosalind Picard and Rana el Kaliouby, the software, known as Affdex, was designed to help autistic children communicate better. But it clearly had loads of potential in the business world, and so M.I.T. spun the project off into a private company. It has since raised $21 million from investors.
So how is Affdex being used? Most often, it’s watching people watching commercials. It records people as they view ads on their computers–don’t worry, you need to opt in for this–and then, based on its database of facial cues, evaluates how the viewers feel about what they’ve seen. And the software doesn’t provide just an overall positive or negative verdict; it breaks down the viewers’ reactions second by second, which enables advertisers to identify, with more precision than ever before, what works in a commercial and what doesn’t.
It also is able to see that while people say one thing, their faces can say another. During an interview with the Huffington Post, el Kaliouby gave the example of the response to an ad for body lotion that aired in India. During the commercial, a husband playfully touches his wife’s exposed stomach. Afterwards, a number of women who had watched it said they found that scene offensive. But, according to el Kaliouby, the videos of the viewers showed that every one of the women responded to the scene with what she called an “enjoyment smile.”
She sees opportunities beyond the world of advertising. Smart TVs could be that much smarter about what kind of programs we like if they’re able to develop a memory bank of our facial expressions. And politicians would be able to get real-time reactions to each line they utter during a debate and be able to adapt their messages on the fly. Plus, says el Kaliouby, there could be health applications. She says it’s possible to read a person’s heart rate with a webcam by analyzing the blood flow in his or her face.
“Imagine having a camera on all the time monitoring your heart rate,” she told the Huffington Post, “so that it can tell you if something’s wrong, if you need to get more fit, or if you’re furrowing your brow all the time and need to relax.”
So what do you think, creepy or cool?
Here are five other ways machines are reacting to human emotions:
- And how was my day?: Researchers at the University of Cambridge have developed an Android mobile app that monitors a person’s behavior throughout the day, using incoming calls and texts, plus social media posts, to track their mood. The app, called “Emotion Sense,” is designed to create a “journey of discovery,” allowing users to have a digital record of the peaks and valleys of their daily lives. The data can be stored and used for therapy sessions.
- And this is me after the third cup of coffee: Then there’s Xpression, another mood-tracking app created by a British company called EI Technologies. Instead of relying on people in therapy to keep diaries of their mood shifts, the app listens for changes in a person’s voice to determine if they are in one of five emotional states: calm, happy, sad, angry or anxious/frightened. It then keeps a list of a person’s moods and when they change. And, if the person desires, this record can automatically be sent to a therapist at the end of every day.
- What if you just hate typing on a phone? : Scientists at Samsung are working on software that will gauge your frame of mind by how you type out your tweets on your smartphone. By analyzing how fast you type, how much the phone shakes, how often you backspace mistakes, and how many emoticons you use, the phone should be able to determine if you’re angry, surprised, happy, sad, fearful, or disgusted. And based on what conclusion it draws, it could include with your tweet the appropriate emoticon to tip off your followers to your state of mind.
- Just don’t invite your friends over to watch: Using a sensor worn on the wrist and a smartphone camera worn around the neck, researchers at M.I.T. have created a “lifelogging” system that collects images and data designed to show a person which events represented their emotional highs and lows. The system, called Inside-Out, includes a bio-sensor in a wristband that tracks heightened emotions through electrical charges in the skin while the smartphone tracks the person’s location and takes several photos a minute. Then, at the end of the day, the user can view their experiences, along with all the sensor data.
- Your brow says you have issues: This probably was inevitable. Researchers at the University of Southern California have created a robotic therapist that not only is programmed to encourage patients with well-timed “Uh-huhs,” but also is expert at interpreting a patient’s every gesture and voice inflection during a therapy session, using motion sensors and voice analysis.
Video bonus: Want to see how bizarre this trend of devices reading human emotions can get? Check out this promotion of Tailly, a mechanical tail that picks up your level of excitement by tracking your heart rate and then wags appropriately.
May 8, 2013
Cell phones are so many things now–computer, map, clock, calculator, camera, shopping device, concierge, and occasionally, a phone. But more than anything, that little device that never leaves your person is one amazingly prolific data engine.
Which is why last October, Verizon Wireless, the largest U.S. carrier with almost 100 million customers, launched a new division called Precision Market Insights. And why, at about the same time, Madrid-based Telefonica, one of the world’s largest mobile network providers, opened its own new business unit, Telefonica Dynamic Insights.
The point of these ventures is to mine, reconstitute and sell the enormous amount of data that phone companies gather about our behavior. Every time we make a mobile call or send a text message–which pings a cell tower–that info is recorded. So, with enough computer power, a company can draw pretty accurate conclusions about how and when people move through a city or a region. Or they can tell where people have come from to attend an event. As part of a recent case study, for example, Verizon was able to say that people with Baltimore area codes outnumbered those with San Francisco area codes by three to one inside the New Orleans Superdome for the Super Bowl in February.
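A figure like the Superdome one comes from exactly this kind of aggregation. As a hypothetical sketch–the records below are invented, and real carrier logs are vastly larger–counting anonymized phones seen at a venue by the home area code attached to each record is a simple tally:

```python
# Sketch of tallying anonymized event attendees by home area code,
# in the spirit of the Superdome example. All data is invented.
from collections import Counter

records = [
    {"phone_id": "a1", "home_area_code": "410"},  # Baltimore
    {"phone_id": "a2", "home_area_code": "410"},
    {"phone_id": "a3", "home_area_code": "410"},
    {"phone_id": "b1", "home_area_code": "415"},  # San Francisco
    {"phone_id": "c1", "home_area_code": "504"},  # New Orleans locals
]

by_origin = Counter(r["home_area_code"] for r in records)
print(by_origin["410"], by_origin["415"])  # 3 1 -> a three-to-one ratio
```

The analytical work is in cleaning and anonymizing the logs; the marketing insight itself is little more than a group-and-count.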
In a world enamored of geolocation, this is digital gold. It’s one thing to know the demographic blend of a community, but being able to find out how many people pass by a business, and where they’re coming from, adds a whole other level of precision to target marketing.
Follow the crowd
But this data has value beyond companies zeroing in on potential customers. It’s being used for social science, even medical research. Recently IBM crunched numbers from 5 million phone users in the Ivory Coast and, by tracking the movements of people through which cell towers they connected to, was able to recommend 65 improvements to bus service in the city of Abidjan.
And computer scientists at the University of Birmingham in England have used cell phone data to fine-tune analysis of how epidemics spread. Again, it’s about analyzing how people move around. Heretofore, much of what scientists knew about the spread of contagious diseases was based on guesswork. But now, thanks to so many pings from so many phones, there’s no need to guess.
It’s important to point out that no actual identities are connected to cell phone data. It all gets anonymized, meaning there shouldn’t be a way to track the data back to real people.
There shouldn’t be.
Leaving a trail
But a study published in Scientific Reports in March found that even anonymized data may not be so anonymous after all. A team of researchers from Louvain University in Belgium, Harvard and M.I.T. found that by using data from 15 months of phone use by 1.5 million people, together with a similar dataset from Foursquare, they could identify about 95 percent of the cell phone users with just four data points, and 50 percent of them with just two. A data point is an individual’s approximate whereabouts at the approximate time they’re using their cell phone.
The reason that only four locations were necessary to identify most people is that we tend to move in consistent patterns. Just as everyone has unique fingerprints, everyone has unique daily travels. While someone wouldn’t necessarily be able to match the path of a mobile phone–known as a mobility trace–to a specific person, we make it much easier through geolocated tweets or location “check-ins,” such as when we use Foursquare.
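The uniqueness argument can be made concrete with a toy experiment. The three mobility traces below are invented, and each holds only four coarse (tower, hour) points–real traces have thousands, which only makes singling someone out easier:

```python
# Toy version of the "Unique in the Crowd" question: how few
# (place, hour) points does it take to single out one trace?
# All traces are invented for illustration.
from itertools import combinations

traces = {
    "user_a": {("tower1", 8), ("tower2", 9), ("tower3", 18), ("tower1", 22)},
    "user_b": {("tower1", 8), ("tower4", 9), ("tower3", 18), ("tower5", 22)},
    "user_c": {("tower2", 8), ("tower2", 9), ("tower3", 18), ("tower1", 22)},
}

def min_points_to_identify(user):
    """Smallest number of observed points matched by this user alone."""
    trace = traces[user]
    for k in range(1, len(trace) + 1):
        for subset in combinations(trace, k):
            matches = [u for u, t in traces.items() if set(subset) <= t]
            if matches == [user]:  # exactly one trace fits
                return k
    return None

# Every single point of user_a is shared with someone else, yet two
# points together already pin user_a down uniquely.
print(min_points_to_identify("user_a"))  # 2
```

The study’s finding is this effect at population scale: because daily routines are so regular, a handful of points almost always lands on exactly one trace.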
“In the 1930s, it was shown that you need 12 points to uniquely identify and characterize a fingerprint,” the study’s lead author, Yves-Alexandre de Montjoye, told the BBC in a recent interview. “What we did here is the exact same thing, but with mobility traces. The way we move and the behavior is so unique that four points are enough to identify 95 percent of the people.”
“We think this data is more available than people think. When you share information, you look around and you feel like there are lots of people around–in a shopping center or a tourist place–so you feel this isn’t sensitive information.”
In other words, you feel anonymous. But are you really? De Montjoye said the point of his team’s research wasn’t to conjure up visions of Big Brother. He thinks there’s much good that can come from mining cell phone data, for businesses, for city planners, for scientists, for doctors. But he thinks it’s important to recognize that today’s technology makes true privacy very hard to keep.
The title of the study? “Unique in the Crowd.”
Here are other recent developments related to mobile phones and their data:
- Every picture tells your story: Scientists at Carnegie Mellon University’s Human Computer Interaction Center say their research of 100 smartphone apps found that about half of them raised privacy concerns. For instance, a photo-sharing app like Instagram provided information that allowed them to easily discover the location of the person who took the photo.
- Cabbies with cameras: In the Mexican city of Tuxtla Gutiérrez, taxi drivers have been provided with GPS-enabled cell phones and encouraged to send messages and photographs about accidents or potholes or broken streetlights.
- Follow that cell: Congress has started looking into the matter of how police use cell phone data to track down suspects. The key issue is whether they should be required to get a warrant first.
- Follow that cell II: Police in Italy have started using a data analysis tool called LogAnalysis that makes it especially easy to visualize the relationships among conspiring suspects based on their phone calls. In one particular case involving a series of robberies, the tool showed a flurry of phone activity among the suspects before and after the heists, but dead silence when the crimes were being committed.
Video bonus: If you’re at all paranoid about how much data can be gleaned from how you use your mobile phone, you may not want to watch this TED talk by Malte Spitz.
March 25, 2013
I committed my first texting heresy a few years ago when my son was away at college. I had asked him about a class he was taking and had needed three, maybe four sentences to express myself.
He responded with bemusement. Or maybe it was disgust. Who could tell?
But his message was clear: If I continued to be so lame as to send texts longer than two sentences–using complete words, no less–he would have little choice but to stop answering.
I was reminded of this less-than-tender father-son moment recently by a post by Nick Bilton for The New York Times’ Bits blog in which he railed against those who send “Thank you” emails, among other digital transgressions.
His contention is that such concise expressions of gratitude, while well-intended, end up being an imposition for recipients who have to open up an email to read a two-word message. Better to leave the sentiment unexpressed–although he does concede that it probably makes sense to indulge old folks, who are much more likely to appreciate the appreciation.
Bilton’s larger point is that as technology changes how we communicate and gather information, we need to adapt what we consider proper etiquette. Why should we continue to leave voice mails, he argues, when a text is much more likely to be answered? And why, he asks, would anyone these days be so rude as to ask for directions?
Not that this is the first time that tech is forcing an etiquette rethink. Bilton harkens back to the early days of the telephone when people truly didn’t know what to say when they picked up a ringing phone. Alexander Graham Bell himself lobbied for “Ahoy,” while Thomas Edison pushed for “Hello.” Edison ruled, of course, although now that our phones tell who’s calling before we have to say a word, the typical greeting has devolved to “Hey” or the catatonically casual “‘S up.”
Sure, some of this is a generational thing–The Independent nailed that in a recent piece on how members of three generations of one family communicate–or not–with each other.
But it’s also about volume. Email never sleeps. For a lot of people, each day can bring a fire hose of digital messages. Imagine if you received 50 to 100 phone calls a day. You can bet you’d be telling people to stop calling.
If the purpose of etiquette is to be considerate of other people, Bilton would contend that that’s the whole idea behind cutting back on emails and voice mails. And he’d have a point.
Me, my phone and I
But then there’s the matter of device isolation. I’m sure you know it well by now–the person who starts texting away during a conversation, or a meal, or even a meeting, which is one of those things bosses tend not to like (not to mention that it probably also means the death of doodling).
It’s hard to put a positive spin on this since it does send a pretty clear message: I’d rather focus my energy on connecting to someone through a device than in person. Maybe it’s just me, but that, I’d say, reeks of rude.
If anything, it’s going to get worse, especially with wearable tech about to go mainstream. Some think this is the year the smart watch could start to become the accessory of choice, which means people will be looking at their wrists a lot more in the future–not so much to check the time, which is rude enough, but more to see who’s sent them emails and texts.
And what about when Google Glass goes on the market later this year? They’re glasses that will enable you to check emails, go on the Web, watch videos, even take pictures, all while feigning eye contact with the people you’re with. And the Google Glass camera raises all kinds of issues. Will wearers have to make pre-date agreements not to take stealth photos, particularly any involving eating or drinking? Is anyone fair game in a Google Glass video?
But beyond questions of privacy and social boorishness, the impact of our obsession with digital devices, especially when it comes to the loss of personal connections, could go much deeper. In a piece in Sunday’s New York Times, Barbara Fredrickson, a psychology professor at the University of North Carolina, cites research suggesting that if you don’t practice connecting face-to-face with others, you can start to lose your biological capacity to do so. As she writes:
“When you share a smile or laugh with someone face to face, a discernible synchrony emerges between you, as your gestures and biochemistries, even your respective neural firings, come to mirror each other. It’s micro-moments like these, in which a wave of good feeling rolls through two brains and bodies at once, that build your capacity to empathize as well as to improve your health.”
Here are other recent developments in how technology is affecting behavior:
- Yeah, but can I text while I meditate?: A course at the University of Washington is focusing on helping students improve their concentration skills by requiring them both to watch videos of themselves multitasking and to do meditation.
- And it really cuts down on shuffleboard injuries: A study at North Carolina State University found that seniors–people 63 years or older–who played video games had higher levels of well-being and “emotional functioning” and lower levels of depression than old folks who didn’t.
- Does loyalty go deeper than latte?: This May Starbucks will break new ground when it allows its loyalty cardholders to earn points by buying Starbucks products in grocery stores.
Video bonus: All kinds of embarrassing things can happen while you’re texting.
Video bonus bonus: More evidence of the obsession that is texting: Here’s a clip of a bride firing off one last message before she says her vows.
March 5, 2013
It’s amazing how putting a lowercase “i” in front of the name of a gadget can make it righteous.
What that means, of course, is that Apple has deemed that particular piece of technology worthy of its attention. And with that comes both market credibility and geeky cool.
So when rumors started swirling a few weeks ago that Apple could unveil an “iWatch” later this year, tech writers around the Web were quick to ponder whether 2013 would become “The Year of the Smartwatch.” Maybe. Maybe not. The iGod has not yet spoken on the subject. At least not officially.
The article that stirred the iWatch clamor was a recent piece by Nick Bilton in the New York Times’ Bits blog. It was high on speculation–Apple isn’t talking–and spiced with juicy questions: Will it come with Siri, the voice of the iPhone? What about Apple’s map software? Will an iWatch enable its wearers to track their steps taken? How about their heartbeats?
But the biggest tease was an allusion to glass. Specifically bendable glass. Imagine a watch face that could curve around your wrist. That sounds light, sleek and yes, geekily cool. That sounds so Apple.
The Wall Street Journal followed up, citing a source saying that Apple has been discussing the design of a smartwatch with its Chinese manufacturing partner. And then Bloomberg chimed in, reporting that Apple has a team of at least 100 people cranking away on a “wristwatch-like device.”
It also quoted Bruce Tognazzini, a tech consultant and former Apple employee: “The iWatch will fill a gaping hole in the Apple ecosystem.”
So game over, right? Whenever Apple rolls out its device, it will define what a smartwatch should be, right?
Not so fast. Believe it or not, it’s already a crowded field, with more than half a dozen smartwatches out in the market. Maybe the best known, at least among gadget geeks, is the Pebble, which made a big splash a year ago, even before it existed. Its inventors made a pitch for investors on Kickstarter, hoping to drum up $100,000. Instead they raised $10 million, and a crowd-funding legend was born. The first Pebbles shipped earlier this year, to generally positive reviews.
Sony came out with its own model last year, sometimes to less than enthusiastic reviews. Others in the game include the MetaWatch Strata, the strangely named I’m Watch, the oddly named Martian Passport, one called Buddy and another called Cookoo. Later this year, a model called The Pine is expected to hit the market.
But, aside from having names that you’d never imagined calling a wristwatch, what do all these products bring to modern life? Obviously, they tell time, but most also connect wirelessly to your smartphone so you can see who’s calling or texting or emailing or posting on your Facebook page without digging into your pocket for your phone. They can show you weather forecasts, sports scores or news headlines. Some have apps that let you control the music on your phone or track how far you’ve run or cycled.
And keep in mind, this is only the first wave. They probably can’t do enough yet to entice most people to shell out a few hundred bucks–they range from $130 for a Cookoo to more than $400 for an I’m Watch. But as more apps are added, they could be used to make mobile payments, navigate with GPS, take photos and shoot videos. A few already can handle phone calls, albeit clunkily. So, the day is fast coming when you’ll be able to talk into your wristwatch without making people nervous.
Some say we’re on the cusp of a wearable tech boom, and that the smartphone, as something we need to actually carry around, will become passé. Others are more dubious, positing that the smartwatch is just another gadget phase we’re going through.
But there’s that bendable glass…
It’s long been said that if you want to succeed, it helps to be smart. Now that applies to products, too. Here are a few recent examples:
- At last, a cure for expiration date anxiety: Researchers at the Eindhoven University of Technology in the Netherlands say they’ve developed packaging with sensors that will be able to tell if the food inside is still edible.
- When bottles share: A Florida entrepreneur thinks the time has come for medicine bottles to get smart. His idea is to put QR codes on bottles that, once scanned, will play a video on your smartphone telling you all you really need to know about the meds inside.
- Let sleeping babies lie: And for anxious young parents who check every 30 seconds to see if their baby is still breathing, students at Brigham Young University are developing something they call the Owlet Baby Monitor. Using a built-in pulse oximeter, the wireless smart sock can track both a sleeping child’s heart and breathing rates.
- Say goodbye to the “You’ll just feel a little pinch” lie: Scientists at Purdue University have created bandages that could make the needle stick obsolete. Powered by a person’s body heat, the adhesive patches would be able to deliver medication without the need for a shot.
- Which is so much cooler than wearing a smart sock: In Japan, Fujitsu has unveiled its “Next Generation Cane.” Yep, it’s a smart cane and it can monitor a person’s vitals. It also comes with GPS so you can always know where Grandma’s taking a stroll.
Video bonus: Want the lowdown on how the Pebble smartwatch works? The Wall Street Journal’s Walt Mossberg lays it out in a video review.