January 7, 2013
Here in Washington we have heard of this thing you call “advance planning,” but we are not yet ready to embrace it. A bit too futuristic.
Still, we can’t help but admire from afar those who attempt to predict what could happen more than a month from now. So I was impressed a few weeks ago when the big thinkers at IBM imagined the world five years hence and identified what they believe will be five areas of innovation that will have the greatest impact on our daily lives.
They’ve been doing this for a few years now, but this time the wonky whizzes followed a theme–the five human senses. Not that they’re saying that by 2018 we’ll all be able to see, hear and smell better, but rather that machines will–that by using quickly evolving sensory and cognitive technologies, computers will accelerate their transformation from data retrieval and processing engines into thinking tools.
See a pattern?
Today, let’s deal with vision. It’s a logical leap to assume that IBM might be referring to Google’s Project Glass. No question that it has redefined the role of glasses, from geeky accessory that helps us see better to a combo smartphone/data-diving device we’ll someday wear on our faces.
But that’s not what the IBMers are talking about. They’re focused on machine vision, specifically pattern recognition, whereby, through repeated exposure to images, computers are able to identify things.
As it turns out, Google happened to be involved in one of last year’s more notable pattern recognition experiments, a project in which a network of 1,000 computers using 16,000 processors was, after examining 10 million images from YouTube videos, able to teach itself what a cat looked like.
What made this particularly impressive is that the computers were able to do so without any human guidance about what to look for. All the learning was done through the machines working together to decide which features of cats merited their attention and which patterns mattered.
And that’s the model for how machines will learn vision. Here’s how John Smith, a senior manager in IBM’s Intelligent Information Management, explains it:
“Let’s say we wanted to teach a computer what a beach looks like. We would start by showing the computer many examples of beach scenes. The computer would turn those pictures into distinct features, such as color distributions, texture patterns, edge information, or motion information in the case of video. Then, the computer would begin to learn how to discriminate beach scenes from other scenes based on these different features. For instance, it would learn that for a beach scene, certain color distributions are typically found, compared to a downtown cityscape.”
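Smith’s beach example can be sketched in a few lines of code. What follows is a toy illustration of the idea, not IBM’s system: images are reduced to a crude color-distribution feature, and a new scene is labeled by whichever class centroid it lands nearest. The tiny 2x2-pixel “images” are invented for demonstration; a real system would train on millions of photos with far richer features.

```python
def color_histogram(pixels):
    """Average R, G, B across the image -- a crude color-distribution feature."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def centroid(features):
    """Mean feature vector for one class of training images."""
    n = len(features)
    return tuple(sum(f[i] for f in features) / n for i in range(3))

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Training "images": beach scenes skew blue and sandy, cityscapes skew gray.
beaches = [[(90, 150, 220), (230, 210, 160), (80, 140, 210), (240, 220, 170)],
           [(70, 130, 200), (220, 200, 150), (100, 160, 230), (235, 215, 165)]]
cities  = [[(120, 120, 120), (90, 90, 95), (130, 130, 135), (60, 60, 65)],
           [(110, 110, 115), (80, 80, 85), (140, 140, 140), (70, 70, 75)]]

beach_center = centroid([color_histogram(img) for img in beaches])
city_center  = centroid([color_histogram(img) for img in cities])

def classify(img):
    """Label a new scene by the nearest class centroid."""
    f = color_histogram(img)
    return "beach" if distance(f, beach_center) < distance(f, city_center) else "city"

new_scene = [(85, 145, 215), (225, 205, 155), (95, 150, 220), (238, 218, 168)]
print(classify(new_scene))  # prints "beach"
```

The point of the sketch is the shape of the pipeline Smith describes: raw pixels become features, features become class boundaries, and discrimination follows from repeated exposure to examples.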
How smart is smart?
Good for them. But face it, identifying a beach is pretty basic stuff for most of us humans. Could we be getting carried away about how much thinking machines will be able to do for us?
Gary Marcus, a psychology professor at New York University, thinks so. Writing recently on The New Yorker’s website, he concludes that while much progress has been made in what’s become known as “deep learning,” machines still have a long way to go before they should be considered truly intelligent.
“Realistically, deep learning is only part of the larger challenge of building intelligent machines. Such techniques lack ways of representing causal relationships (such as between diseases and their symptoms), and are likely to face challenges in acquiring abstract ideas like “sibling” or “identical to.” They have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used.”
The folks at IBM would no doubt acknowledge as much. Machine learning comes in steps, not leaps.
But they believe that within five years, deep learning will have taken enough forward steps that computers will, for instance, start playing a much bigger role in medical diagnosis, that they could actually become better than doctors when it comes to spotting tumors, blood clots or diseased tissue in MRIs, X-rays or CT scans.
And that could make a big difference in our lives.
Seeing is believing
Here are more ways machine vision is having an impact on our lives:
- Putting your best arm forward: Technology developed at the University of Pittsburgh uses pattern recognition to enable paraplegics to control a robotic arm with their brains.
- Your mouth says yes, but your brain says no: Researchers at Stanford found that using pattern recognition algorithms on MRI scans of brains could help them determine if someone actually had lower back pain or if they were faking it.
- When your moles are ready for their close ups: Last year a Romanian startup named SkinVision launched an iPhone app that allows people to take a picture of moles on their skin and then have SkinVision’s recognition software identify any irregularities and point out the risk level–without offering an actual diagnosis. Next step is to make it possible for people to send images of their skin directly to their dermatologist.
- Have I got a deal for you: Now under development is a marketing technology called Facedeals. It works like this: Once a camera at a store entrance recognizes you, you’re sent customized in-store deals on your smart phone. And yes, you’d have to opt in first.
- I’d know that seal anywhere: A computerized photo-ID system that uses pattern recognition is helping British scientists track gray seals, which have unique markings on their coats.
Video bonus: While we’re on the subject of artificial intelligence, here’s a robot swarm playing Beethoven, compliments of scientists at Georgia Tech. Bet you didn’t expect to see that today.
More from Smithsonian.com
December 21, 2012
Earlier this week I rolled out the first half of a list of a dozen of the more innovative ideas of 2012.
We’re not talking Google Glass or invisibility cloaks or other flashes from the future. No, these are less splashy things, yet, in their own ways, no less inspired and probably more likely to become a part of our daily lives. They’re the creations of people joined under a common maxim, namely, “There’s gotta be a better way.”
So, muffled drum roll, please…the Fresh Ideas of 2012, Part 2:
7) While you’re at it, can you pick up a paper and some gum: Yes, suitcases with wheels were a big breakthrough, but Madrid designer Rodrigo Garcia Gonzalez says why stop there? Why should we still have to lug luggage?
So he has invented a new kind of suitcase he calls Hop!. What makes it so special is that it follows you around like the most loyal of pets.
Well, technically it follows your smart phone. The suitcase contains three receivers that communicate, via Bluetooth, with an app on your smart phone and, put simply, it follows that signal. The same controller also directs a dual caterpillar track-type system on the bottom of the suitcase to move it along. If the signal gets lost, the bag locks itself and vibrates its owner’s phone.
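The follow-the-phone behavior above can be sketched as a simple control loop. This is our guess at the logic, not Gonzalez’s actual firmware: assume the receivers on each side of the bag compare Bluetooth signal strength (RSSI, in dBm), steer toward whichever side hears the phone louder, and lock the bag when the signal drops out.

```python
LOST_THRESHOLD = -90  # dBm below which we treat the phone as out of range (illustrative)

def steer(rssi_left, rssi_right):
    """Return a drive command for the caterpillar tracks based on two receivers."""
    if max(rssi_left, rssi_right) < LOST_THRESHOLD:
        return "lock-and-alert"          # bag locks itself, owner's phone vibrates
    if abs(rssi_left - rssi_right) < 3:  # roughly equal signal: phone is dead ahead
        return "forward"
    return "turn-left" if rssi_left > rssi_right else "turn-right"

print(steer(-60, -61))   # roughly equal -> forward
print(steer(-55, -70))   # left much stronger -> turn-left
print(steer(-95, -97))   # signal lost -> lock-and-alert
```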
Of course, there are issues to resolve–think of the security challenges of an airport full of roaming luggage–but Gonzalez deserves props for giving us hope that we’ll one day break loose from our bags.
8) Anticipation was so overrated: It never really made much sense: In a world increasingly geared to instant gratification, we waited for ketchup. It took forever to come out of the bottle, but we seemed willing to live with that.
Not Kripa Varanasi and his team of MIT engineers. They’ve developed a substance called LiquiGlide, which, when coating the inside surface of bottles, helps ketchup and mustard slide right out. Now this may seem a trivial modern indulgence, but, as the LiquiGlide team estimates, roughly a million tons of food could avoid being tossed in the garbage if it wasn’t getting stuck in bottles. So it’s only right that we go with the flow.
9) Which gives new meaning to “All you can eat”: While we’re on the subject of food waste, let’s give it up for WikiCells. These are the edible membranes created by Harvard professor David Edwards and French designer Francois Azambourg that encase food and liquids. In other words, it’s packaging you can eat.
The membranes, meant to mimic the skin of a grape, are made of food particles, such as cheese or dried fruit, and are held together by calcium or magnesium ions. So far, the pair have conjured up a tomato membrane containing gazpacho, an orange one filled with orange juice, and a chocolate version holding hot chocolate. They’ve even created a grape-flavored pouch filled with wine. The goal is to do away with plastic bottles and packaging. Let’s raise our membranes to that.
10) Talk to the glove: Four Ukrainian students have designed gloves that can communicate with a smart phone and, as a result, developed a way for people with speech and hearing disabilities to talk to people who don’t use or understand sign language.
Their invention, which they call Enable Talk, works like this: The gloves are lined with 15 flex sensors in their fingers that can recognize sign language and transmit the message to a smart phone where it’s converted to text. The phone then says the words that the gloves sent.
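The sensor-to-text step can be sketched as template matching: each sign is stored as a profile of flex-sensor readings, and an incoming reading is matched to the nearest stored profile. Everything here is invented for illustration; the real gloves use 15 sensors per pair and trained recognition, not this three-sensor lookup.

```python
# Hypothetical sign templates: three flex-sensor readings per sign, 0 = straight, 1 = bent.
SIGNS = {
    "hello":  (0.9, 0.1, 0.1),
    "thanks": (0.1, 0.9, 0.1),
    "yes":    (0.1, 0.1, 0.9),
}

def recognize(reading):
    """Match one flex-sensor reading to the closest stored sign template."""
    def dist(template):
        return sum((a - b) ** 2 for a, b in zip(reading, template))
    return min(SIGNS, key=lambda s: dist(SIGNS[s]))

def glove_to_text(readings):
    """Convert a sequence of sensor readings into the text the phone would speak."""
    return " ".join(recognize(r) for r in readings)

print(glove_to_text([(0.85, 0.15, 0.1), (0.1, 0.2, 0.95)]))  # prints "hello yes"
```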
11) So now we can stay focused on not changing the oil: If you’re like me, you have no idea when you last checked your tire pressure. It’s a blind faith thing. As long as the tires keep rolling, no need to look for that little gauge you bought many tires ago.
Goodyear understands this, so they’ve gone ahead and invented a tire that takes us out of the equation. It does this by inflating itself. A regulator in the tire senses when the pressure drops below a pre-set point and opens to allow air to flow into the pumping tube. As the tire rolls, deformation flattens the tube, pushing air into the tire cavity. And we’ll no longer have to worry about keeping our tires pumped up. Not that we ever did.
12) No longer will a charger come between you and your phone: A few years ago the 11-year-old daughter of Wake Forest University scientist David Carroll wondered aloud if a cell phone could be charged solely by human body heat. Good question, thought Carroll, and earlier this year he and his team came up with an answer.
They unveiled Power Felt, a fabric that uses nanotechnology to convert heat into electricity. It’s still in the early stages of development but initial results suggest they’re on to something big–an inexpensive material that could use the heat from your car’s engine to run its AC and radio and the sun to power your home’s appliances and yes, your own personal warmth to keep your cell phone alive. Thanks, Dad.
Video bonus: While we’re talking about nifty ideas whose time is about to come, it’s pretty certain that Samsung will come out with a bendable phone next year. As you can see in this video, it passes the hammer test.
More on Smithsonian.com
December 18, 2012
Within the next week or so, the year-end reviews will start rolling out like strips of prize tickets in a games arcade.
Most will revisit events that we’ll all remember, albeit some we’d rather forget. My own list is a little different. I want to look back at ideas that haven’t received a lot of attention, but struck me as being particularly clever and ripe with potential. Chances are you haven’t heard of many of them. But chances also are you will.
Here’s Part 1 of my list of a dozen ideas whose time is about to come:
1) Sadly, it does not say, “You’re getting warmer.”: Are you flummoxed by how often you lose things–your keys, your TV remote, your glasses? Have I got an invention for you. It’s called Stick-N-Find and it works like this.
You attach one of the Bluetooth-powered stickers to whatever object you’re tired of losing, then download the Stick-N-Find smartphone app. The app will tell you how far away you are from the missing item–it has a range of 100 feet–and you can set off a buzzer in the sticker. If you’re in the dark, you can trigger a blinking red light. Where has this been all my life?
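The “how far away” readout can be sketched with a standard log-distance path-loss model: Bluetooth signal strength falls off predictably with distance, so a measured RSSI can be inverted into a rough range estimate. The calibration constants below are illustrative, not Stick-N-Find’s.

```python
RSSI_AT_1M = -59   # assumed signal strength one meter from the sticker, in dBm
PATH_LOSS_N = 2.0  # free-space path-loss exponent (higher indoors, with walls)

def estimate_distance(rssi):
    """Invert the path-loss model: rssi = RSSI_AT_1M - 10 * n * log10(distance)."""
    return 10 ** ((RSSI_AT_1M - rssi) / (10 * PATH_LOSS_N))

print(round(estimate_distance(-59), 1))  # 1.0 meter: signal matches the 1m calibration
print(round(estimate_distance(-79), 1))  # 10.0 meters: 20 dB weaker
```

In practice RSSI is noisy, which is why such apps report coarse “warmer/colder” ranges rather than exact positions–and why the backup buzzer and blinking light earn their keep.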
2) Will it do nails? It may be a while before we see it in action, but Dyson, the British company that makes those high-powered Airblade hand dryers, has filed a patent for a tap that would wash your hands with water, then dry them without you having to move an inch. Put your hands under the tap and sensors release water. Move them slightly so that they’re under two connected tubes and warm, dry air shoots out. You’re wet, you’re dry, you’re outta there.
3) All hail plastic: Using nanotechnology, a team of researchers at Wake Forest University has developed a plastic material that glows like a soft white light when an electric current is run through it. Its inventors say it’s as efficient as an LED light and twice as efficient as a fluorescent bulb. But what makes it so innovative is that because it’s plastic, it can be made into any shape. Imagine a soft glowing ceiling panel replacing those hideous fluorescent lights above your head.
4) And all hail fewer jerks on planes: Gemma Jensen used to be a flight attendant for Virgin Atlantic so she has seen more than her share of airline passengers doing the jerk. I’m talking about that moment during long flights when just as you’re starting to nod off, your head tips forward. End of snooze.
So Jensen has invented the J-Pillow. It’s a step up from the familiar U-shaped pillow that keeps your head from falling from side to side, but can’t stop it from dropping forward. Her pillow comes with a “J-hook” that goes around the neck and under a person’s chin. Doctors seem to like it because it keeps your spine aligned while you’re sleeping on a plane. Which explains why a panel just chose it as Great Britain’s Best Consumer Invention of 2012.
5) Cause that’s how they roll: Two former MIT students have designed a camera that bounces and rolls. Who needs a bouncing camera, you ask? How about firefighters who have to see inside a building or a SWAT team looking for hostages? That’s what Francisco Aguilar and Dave Young had in mind when they invented their ball-shaped device outfitted with six wide-angle cameras packed inside a rubber casing.
The idea is that first responders could toss it into a space they need to survey. Its cameras could snap pictures every second as it rolls, then send them wirelessly to a smartphone where they would be stitched together to provide a 360-degree view.
6) Can I make Kit Kat bars in that thing?: There’s nothing new about 3D printers, but Virginia Tech’s College of Engineering has come up with a novel way to give its students access to the nifty replicating devices. It has set up something it calls DreamVendor, which it has described as a “vending machine with infinite inventory.” It’s a station of four 3D printers where engineering students can load in their designs and wait for the printers to do their magic. It’s free for the students, but it’s not hard to imagine some entrepreneur refining the idea of vending machines that print stuff.
Video bonus: And under the category of an idea whose time is still coming, there’s the LuminAR lamp system invented in MIT’s Media Lab a few years ago. Still being refined, it allows you to screw a LuminAR device–a combo projector/camera/wireless computer–into a standard light socket and turn your desk into an interactive surface. See for yourself.
More from Smithsonian.com
December 14, 2012
There are times when I wonder why so many scientists are spending so much time trying to recreate something as fickle and full of fogginess as the human brain.
But who am I kidding? Those dyspeptic moments inevitably pass, as anyone who’s been following this blog knows. Every few months, it seems, I’m back writing about the latest attempt to build machines that can learn to recognize objects or even develop cognitive skills.
And now there’s Spaun.
Staying on task
Its full name is the Semantic Pointer Architecture Unified Network, but Spaun sounds way more epic. It’s the latest version of a techno brain, the creation of a Canadian research team at the University of Waterloo.
So what makes Spaun different from a mind-bogglingly smart artificial brain like IBM’s Watson? Put simply, Watson is designed to work like a supremely powerful search engine, digging through an enormous amount of data at breakneck speed and using complex algorithms to derive an answer. It doesn’t really care about how the process works; it’s mainly about mastering information retrieval.
But Spaun tries to actually mimic the human brain’s behavior and does so by performing a series of tasks, all different from each other. It’s a computer model that can not only recognize numbers with its virtual eye and remember them, but also can manipulate a robotic arm to write them down.
Spaun’s “brain” is divided into two parts, loosely based on our cerebral cortex and basal ganglia, and its simulated 2.5 million neurons–our brains have 100 billion–are designed to mimic how researchers think those two parts of the brain interact.
Say, for instance, that its “eye” sees a series of numbers. The artificial neurons take that visual data and route it into the cortex where Spaun uses it to perform a number of different tasks, such as counting, copying the figures, or solving number puzzles.
Soon it will be forgetting birthdays
But there’s been an interesting twist to Spaun’s behavior. As Francie Diep wrote in Tech News Daily, it became more human than its creators expected.
Ask it a question and it doesn’t answer immediately. No, it pauses slightly, about as long as a human might. And if you give Spaun a long list of numbers to remember, it has an easier time recalling the ones it received first and last, but struggles a bit to remember the ones in the middle.
“There are some fairly subtle details of human behavior that the model does capture,” says Chris Eliasmith, Spaun’s chief inventor. “It’s definitely not on the same scale. But it gives a flavor of a lot of different things brains can do.”
The fact that Spaun can move from one task to another brings us one step closer to being able to understand how our brains are able to shift so effortlessly from reading a note to memorizing a phone number to telling our hand to open a door.
And that could help scientists equip robots with the ability to be more flexible thinkers, to adjust on the fly. Also, because Spaun operates more like a human brain, researchers could use it to run health experiments that they couldn’t do on humans.
Recently, for instance, Eliasmith ran a test in which he killed off the neurons in a brain model at the same rate that neurons die in people as they age. He wanted to see how the loss of neurons affected the model’s performance on an intelligence test.
One thing Eliasmith hasn’t been able to do is to get Spaun to recognize if it’s doing a good or a bad job. He’s working on it.
Here are a few other recent developments in brain research and artificial intelligence:
- I can’t get this song out of your head: Scientists in Berlin attached electrodes to guitarists playing a duet and found that when the musicians had to closely coordinate their playing, their brain activity became synchronized. But when they weren’t coordinated, when one was leading and the other following, their brain activity was distinctly different.
- One day the brain may actually understand itself: A team of MIT neuroscientists has developed a way to monitor how brain cells coordinate with each other to control specific behaviors, such as telling the body to move. Not only could this help them map brain circuits to see how tasks are carried out, but it also may provide insight into how psychiatric diseases develop.
- Deep thinking is so yesterday: The top prize in a recent competition sponsored by pharmaceutical giant Merck went to a team of researchers from the University of Toronto who used a form of artificial intelligence known as deep learning to help discover molecules that could become new drugs.
- So robots will learn how to stare at smart phones?: To teach robots how to function in social situations, scientists at Carnegie Mellon University are tracking groups of people with head-mounted cameras to see when and where their eyes converge in social settings.
- Unfortunately, they keep trying to hide nuts: By using the deceptive behavior of birds and squirrels as a model, researchers at Georgia Tech have been able to develop robots that can trick each other.
Video bonus: Check out a demo of Spaun in action.
More from Smithsonian.com
November 30, 2012
A little refresher:
Back in late 2005, the guys running a small San Francisco startup named Odeo were feeling desperate. They had planned to make it big in the podcasting business, but Apple had just announced that iTunes would include a podcasting platform built into every iPod.
So the Odeo group started scrambling to come up with a new plan. One of the employees, a guy named Jack Dorsey, came up with the idea of a system where you could send a text message to a number and it would be delivered to all of your friends.
Someone came up with the code name twttr–a takeoff on Flickr–and when they looked up twitter in the dictionary and saw that it meant 1) A short burst of inconsequential information and 2) Chirps from birds, they agreed, Dorsey recalls, that the name “was just perfect.”
Such a tool
This is just to remind all of us that Twitter was born not as a grand vision, but more an act of desperation. And that it was originally meant as nothing more than a cool way to send reports of your status to all of your friends at once.
Which makes it all the more remarkable that these days Twitter is being hailed as everything from a barometer of the nation’s emotional health to a conduit for the flow of linguistic invention to a tool for urban planners to map travel routes.
Oh, and earlier this week, a young mother reportedly named her newborn daughter “Hashtag.”
There are those, of course, who think way too much is being made of Twitter’s capacity for capturing the zeitgeist. But there’s no question that it’s gaining status as an analytical tool. Here are just a few of the ways it’s being taken seriously:
1) It’s not the tweet, it’s emotion: Last month tech giant SGI rolled out something it calls the Global Twitter Heartbeat, a Big Data analysis of 10 percent of the roughly 500 million tweets tapped out every day.
The tool takes geotagged tweets over a period of time and converts them into a “heat map” designed to show the tone and intensity of what’s being said where. Its first big effort was during Superstorm Sandy.
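The heat-map step can be sketched in miniature: bin geotagged tweets into a coarse latitude/longitude grid and average a per-tweet sentiment score within each cell. The word lists and sample tweets below are invented for illustration; SGI’s actual pipeline scores tens of millions of tweets a day with far subtler sentiment analysis.

```python
from collections import defaultdict

# Toy sentiment lexicon -- real systems use much richer scoring.
POSITIVE = {"great", "love", "safe"}
NEGATIVE = {"flooded", "scared", "dark"}

def sentiment(text):
    """Crude score: positive words minus negative words."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def heat_map(tweets, cell_degrees=1.0):
    """Map (lat, lon, text) tweets to {grid cell: average sentiment}."""
    cells = defaultdict(list)
    for lat, lon, text in tweets:
        key = (int(lat // cell_degrees), int(lon // cell_degrees))
        cells[key].append(sentiment(text))
    return {k: sum(v) / len(v) for k, v in cells.items()}

tweets = [
    (40.7, -74.0, "streets flooded and dark"),      # New York area, grim
    (40.8, -73.9, "scared but safe"),               # New York area, mixed
    (34.0, -118.2, "great day love this weather"),  # Los Angeles, sunny
]
print(heat_map(tweets))  # two cells: a negative one near NY, a positive one near LA
```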
2) Pocket of politeness? Or pool of profanity?: The company Vertalab created its own Twitter heat map a few months ago, but that one focused on the use of two particular phrases on Twitter. While many weighed in with a conventional “Good morning,” a surprising number posted a two-word phrase rhyming with “duck flew.”
True to form, the well-mannered tweets tended to bubble up from the South, particularly parts of Texas and Tennessee, while the cursing flowed freely around New York, Toronto and especially Los Angeles.
3) I hear ya, bruh: Researchers at Georgia Tech analyzed 30 million tweets sent around the U.S. between December 2009 and May 2011 and concluded that new words, at least on Twitter, tend to first pop up in cities with large African-American populations, then spread.
One example they gave was “bruh,” a Twitter version of “bro,” that first appeared in several cities in the U.S. Southeast, then leap-frogged to California.
4) The roads most traveled: Data-mapping expert Eric Fischer tracked millions of tweets from around the world and laid them over maps of highways to get a sense of how many people are heading where. He thinks urban planners could use this kind of data to fine-tune existing transportation systems and figure out where new routes are needed.
5) Exit polls are so last century: Go ahead and scoff, but some think Twitter analysis can even help predict an election. Barack Obama’s victory in the recent presidential race didn’t come as a big surprise to the Pew Research Center, which analyzed 2,500 online conversations in the two months leading up to the election. It found that a much higher percentage–58 percent–of the comments about Mitt Romney were negative, while 45 percent of the tweets about Barack Obama were harsh.
At the same time Twitter did its own analysis of which tweets by both campaigns provoked the strongest responses in which states. One key indicator: Obama had a high engagement level in the key swing state of Ohio–determined by retweets and favorites–while Romney had only a moderate engagement level there.
6) When military intelligence is not an oxymoron: Three U.S. Defense Department units are field-testing software called the Dynamic Twitter Network Analysis (DTNA) to see how effective it is at gauging public opinion in political hot spots around the world. The software pulls in data from the public Twitter feed, then sorts it, live, by phrases, keywords or hashtags. The hope is that intelligence officers could use the software to understand people’s moods about a topic, and perhaps prevent, or respond faster to, any future attacks on U.S. embassies.
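The live sorting step described above can be sketched simply: pull tweets from a stream and bucket them under each tracked phrase, keyword or hashtag they mention. DTNA itself is Defense Department software we haven’t seen; the tracked terms and sample tweets below are invented to illustrate the filtering idea.

```python
from collections import defaultdict

# Hypothetical watch list of terms an analyst might track.
TRACKED = {"#protest", "embassy", "election"}

def bucket_stream(stream):
    """Group incoming tweets under each tracked term they mention."""
    buckets = defaultdict(list)
    for tweet in stream:
        words = set(tweet.lower().split())
        for term in TRACKED & words:
            buckets[term].append(tweet)
    return buckets

stream = [
    "Crowds gathering near the embassy #protest",
    "Election results expected tonight",
    "Nothing to see here",
]
buckets = bucket_stream(stream)
print(sorted(buckets))  # prints ['#protest', 'election', 'embassy']
```

A real deployment would add sentiment scoring and geolocation on top of this filter, which is where the mood-gauging comes in.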
7) I’m not a doctor, but I play one on Twitter: Researchers at the University of Pennsylvania were pleasantly surprised to see that people are using Twitter to share information on medical subjects that wouldn’t seem the stuff of tweets, such as cardiac arrest and CPR. Their analysis of a month of tweets found more than 15,000 messages that contained specific and useful information about cardiac arrest and resuscitation.
8) When short stories aren’t short enough: And finally, it is here at last, the first annual Twitter Fiction Festival. Since Wednesday, two dozen authors from five continents have been posting their mini-stories in five different languages. The fare ranges from Iowa writer Jennifer Wilson posting photographs of gravestones, then writing “flash fiction” in response to epitaphs submitted by followers, to French fantasy novelist Fabrice Colin writing a serialized story of five strangers trapped on a bus. Stop by the Fiction Festival website–it will be over before you know it.
Video bonus: Here’s another SGI heat map, this one tracking Obama and Romney-related tweets during election week.
More from Smithsonian.com