October 4, 2013
You have to hand it to Google.
Yes, Google Glass is one nifty technology, but wearing glasses with a little camera attached seems to reek of geek, the kind of gadget that would appeal most to men and women who, as young boys and girls, wanted so much to believe in X-ray glasses.
Yet twice now, Google Glass has managed to crash one of America’s biggest glamor parties—New York’s Fashion Week. Last year, all of the models in designer Diane Von Furstenberg’s show strutted down the runway accessorized by Google. And, a few weeks ago, at this year’s event, anyone who was anyone—top models, fashion editors, reality show judges—was walking around shooting pictures and videos with their clever camera glasses.
Still, if Google Glass is to go mainstream, it needs to move beyond the air kiss crowd and geek buzz. That part of the plan starts tomorrow in Durham, North Carolina, the first stop in what Google says will be a national roadshow. With Google Glass expected to hit the market by early 2014, it’s time to start letting the general public see what all the chatter’s about.
The camera never blinks
So, it’s also time to begin taking a closer look at what it might mean to have a whole lot of people walking around with computers/cameras attached to their heads.
There’s obviously the matter of privacy. Google Glass wearers will have the ability to shoot a steady stream of photos and videos as they go about their daily lives. A group of U.S. congressmen raised the issue with Google earlier this year, as have privacy commissioners from Canada, the European Union, Australia, Israel, Mexico, Switzerland and other countries.
Google’s response is that the camera will not be that surreptitious since it will be voice-activated and a light on the screen will show that it’s on. Google also insists that it won’t allow facial recognition software on Google Glass—critics have raised concerns about someone being able to use facial recognition to track down the identity of a person they’ve captured in photos or videos on the street or in a bar.
Others are worried about so much visual data being captured every day, particularly if Google Glass hits it big. The video and images belong to the owner of the glasses, but who else could get access to them? Google has tried to assuage some of those fears by pointing out that all the files on the device will be able to be deleted remotely in the event that it’s lost or stolen.
Thanks for sharing
Then there’s this. In August, Google was awarded a patent to allow for the use of something known as “pay-per-gaze” advertising. In its application, the company noted that “a head-mounted tracking device”—in other words, Google Glass—could follow where the person wearing it was gazing, and be able to send images of what they saw to a server. Then, any billboards or other real-world ads the person had seen would be identified and Google could charge the advertiser. As noted in the New York Times’ Bits blog, the fee could be adapted based on how long the ad actually held the person’s gaze.
Here’s how Google proposed the idea in its patent: “Pay-per-gaze advertising need not be limited to online advertisements, but rather can be extended to conventional advertisement media including billboards, magazines, newspapers and other forms of conventional print media.”
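The mechanics are easier to picture as a simple billing rule. Here is a minimal sketch, in Python, of how a gaze-duration-weighted charge might be tallied; the rates, ad names and gaze log are invented for illustration, and nothing below comes from the patent beyond the basic idea of charging more the longer an ad holds your eye.

```python
# Illustrative sketch only -- not Google's implementation. The patent describes
# a head-mounted tracker that reports which ads a wearer looked at and for how
# long; this toy example shows how a gaze-duration-weighted charge might be
# tallied. The rate numbers are made up for illustration.

BASE_FEE = 0.01           # hypothetical charge per confirmed "gaze" (dollars)
PER_SECOND_BONUS = 0.005  # hypothetical extra charge per second of attention

def charge_for_gaze(gaze_seconds: float) -> float:
    """Return the advertiser's charge for one gaze event."""
    if gaze_seconds <= 0:
        return 0.0
    return BASE_FEE + PER_SECOND_BONUS * gaze_seconds

# One wearer's day, as (ad identifier, seconds the ad held their gaze)
gaze_log = [("billboard-42", 1.2), ("bus-stop-poster-7", 0.4), ("billboard-42", 3.5)]

bill = {}
for ad_id, seconds in gaze_log:
    bill[ad_id] = bill.get(ad_id, 0.0) + charge_for_gaze(seconds)

for ad_id, amount in bill.items():
    print(f"{ad_id}: ${amount:.3f}")
```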
Since it became public, Google has downplayed the patent—first filed in 2011—saying it has no plans to incorporate the eye-tracking capability into Google Glass any time soon. “We hold patents on a variety of ideas,” the company responded in a statement. “Some of those ideas later mature into real products or services, some don’t. Prospective product announcements should not necessarily be inferred from our patents.”
There are other ways advertising could be integrated into the Google Glass experience. Digital ads could pop up in a person’s glasses based on what they may be looking at. Say you’re walking down the street and suddenly an ad for the restaurant down on the corner shows up on your display screen. That could get real old real fast—but it’s not that improbable. Or maybe you’d see virtual ads—for which advertisers pay Google—which would replace real-world ads that appear in your line of vision.
No doubt, though, Google Glass will provide us with plenty of ethical dilemmas. When, for instance, will you be justified in telling someone to please remove their camera glasses? And will there be places and situations where glasses in the filming position are universally seen as bad form—say, at dinner parties, in public bathrooms or in the midst of messy breakups?
But there’s another aspect of Google Glass—or most wearable tech, for that matter—that’s particularly intriguing. It has to do with the power of real-time feedback to change behavior. Studies have shown that nothing is more effective at getting people to slow down their cars than those digital signs that tell you how fast you’re going. It’s feedback to which you can immediately respond.
So, will a steady stream of data about our personal health and exercise make us take our bad habits a lot more seriously? Sure, you can forget the occasional crack from your partner about your weight gain. But a smart watch reminding you all day, every day? What about prompts from your smart glasses that give you cues when you start spending money recklessly? Or flagging you on behavior patterns that haven’t turned out so well for you in the past? Can all these devices make us better people?
Sean Madden, writing for Gigaom, offered this take: “This is social engineering in its most literal sense, made possible by technology, with all of the promise and paranoia that phrase implies.”
Wear it well
Here are other recent developments on the wearable tech front:
- Remember when all a watch needed to do was tick: Samsung has jumped into the wearable tech business with the release of its Galaxy Gear smart watch, although some critics have suggested that it’s just not smart enough.
- If teeth could talk: Researchers at National Taiwan University have designed a sensor that, when attached to a tooth, can track everything your mouth does during a typical day—how much you chew, how much you talk, how much you drink, even how much you cough.
- How about when you need more deodorant?: A Canadian company is developing a machine-washable T-shirt that can track and analyze your movement, breathing and heart activity.
- Don’t let sleeping dogs lie: Why shouldn’t dogs have their own wearable tech? Whistle is a monitoring device that tells you how much exercise your dog is getting while you’re at work. Or more likely, how much he’s not getting.
Video bonus: Here’s a Google video showing how Glass can keep you from ever getting lost again.
Video bonus bonus: With luck, advertising on Google Glass will never get as bad as it plays out on this video parody.
More on Smithsonian.com
First Arrest Caught on Google Glass
February 19, 2013
Last Friday was, astronomically speaking, one of those days that comes along every 40 years. Actually, a lot less frequently than that. That’s how often, according to NASA estimates, an asteroid the size of the one that flew by Friday gets that close to hitting the Earth–it passed 17,000 miles away. But when you throw in the considerably smaller meteorite that exploded over Russia the same day and injured more than 1,000 people–that’s never happened before–you’re talking about a singular moment in space rock history.
Most of us have moved on, taking comfort in the belief that that’s not happening again any time soon. But there was something sobering about seeing how much damage could be done by a rock about as big as one and a half school buses. Also sobering: if the flyby asteroid, which was three times that size, had been on target to hit our planet, we really couldn’t have done much about it–the giant rock was spotted by a team of amateur astronomers in Spain only a year ago.
All of which prompted two basic questions: “How much warning will we get before a monster asteroid collides with the planet?” and “What’s the plan for stopping it?”
Beware of “city killers”
The good news is that NASA, which really didn’t start tracking near-Earth objects until the mid-1990s, believes it has charted almost 95 percent of the 980 asteroids more than a half-mile wide that are orbiting in our part of the solar system. These are known as “planet-killers,” space rocks so large that if they collided with Earth, it would pretty much end civilization as we know it. None, I’m happy to say, are headed our way.
But move down a bit in size to asteroids roughly between 100 feet and a half mile wide and it’s a very different story. NASA figures it has located only 1 percent of the near-Earth objects that small. They may not sound very menacing, but keep in mind that the rock that missed us Friday was roughly 150 feet wide and it would have had a cataclysmic impact if it had exploded over or landed on a populated area. And the one that did blow apart over Russia and hurt so many people was only 55 feet wide.
Scientists at the University of Hawaii, with NASA funding, are developing a network of telescopes designed to find the smaller ones. It’s called ATLAS, which stands for the ominous-sounding Asteroid Terrestrial-Impact Last Alert System, and its creators say they’ll be able to provide a one-week warning of incoming “city killers”–rocks about 150 feet wide–and three weeks’ notice of “county killers”–ones three times as large.
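For a feel of what a one-week warning means in practice, here is some rough arithmetic. It assumes a typical closing speed of about 17 kilometers per second, which is in the same ballpark as the Russian meteor; real encounter speeds vary widely, so treat the distances as order-of-magnitude only.

```python
# Rough arithmetic only: what does a one-week warning imply about detection
# distance? Assumes a typical near-Earth-object closing speed of about 17 km/s;
# real geometries vary widely.

closing_speed_km_s = 17.0
seconds_per_day = 86_400

for label, days in [("city killer (one week)", 7), ("county killer (three weeks)", 21)]:
    distance_km = closing_speed_km_s * days * seconds_per_day
    moon_distances = distance_km / 384_400  # average Earth-Moon distance in km
    print(f"{label}: ~{distance_km/1e6:.1f} million km, ~{moon_distances:.0f} lunar distances out")
```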
Seek and you shall find
The truth is, though, infrared telescopes surveying from space are better suited for the job, particularly when it comes to spotting asteroids orbiting close to the sun. NASA’s WISE telescope identified 130 near-Earth asteroids, but it’s been shut down for two years. Instead of replacing it, NASA is reviewing proposals for a sensor that could detect asteroids as small as 100 feet wide, while attached to a communications satellite.
But now private groups have started floating their own ideas for finding rocks flying through space. One, called the B612 Foundation after the fantasy asteroid on which the Little Prince lived, has ambitious plans to launch a deep space telescope named Sentinel. From a vantage point as far away as Venus, it should be able to look back at our planet and see the heat signatures of objects that come near the Earth’s orbit.
It’s no small undertaking–the estimated cost is $450 million–but among those driving the project are two former astronauts, Russell Schweickart and Edward Lu, who’s now a Google executive and has been able to stir up interest for the mission in Silicon Valley. Lu sees last week’s double asteroid display as a wakeup call. Sure enough, his group was getting calls all day Friday from people wanting to know when it will have its telescope up. Most likely it won’t be until 2018.
And two companies hoping to make a fortune by mining asteroids will also soon be in the business of tracking them. Planetary Resources, which includes among its investors filmmaker James Cameron, Google execs Larry Page and Eric Schmidt and X-Prize Foundation head Peter Diamandis, plans to launch its own asteroid-charting space telescope late next year. The other, Deep Space Industries, has proposed a kind of sentry line of spacecraft circling the Earth that would evaluate and, if necessary, intercept incoming asteroids.
Taking care of business
Okay, but then what? Can an asteroid moving at 18,000 miles an hour be stopped, or at least steered away?
Forget about the Armageddon approach. Blowing up an asteroid with a nuclear bomb–good for a movie, bad for Planet Earth. The resulting debris shower might do almost as much damage.
Instead, here are five ideas that have been proposed:
1) A shout out to our old friend gravity: This would involve what’s referred to as a “gravity tractor.” Actually, it’s a large spaceship that would be maneuvered as close as possible to the orbiting asteroid. In theory, the gravitational pull of such a large object would be strong enough to change the asteroid’s path. Unfortunately, some scientists say we might need a decade’s notice to pull this off (a rough back-of-envelope calculation after this list shows why).
2) Prepare for ramming speed!: The European Space Agency is working with scientists at Johns Hopkins University on a plan that would involve sending a spacecraft to bump an asteroid off course. Called the Asteroid Impact and Deflection Assessment mission, or AIDA for short, it would actually involve sending up two spacecraft. One would be there to observe and gather data while the other does the ramming. The goal would be to alter the asteroid’s spin and ultimately, its direction.
3) Okay, so there is a nuclear option: But it hopefully wouldn’t involve blowing the asteroid to smithereens. Instead, scientists would prefer to detonate a device close enough that it would change the rock’s orbit. This is always referred to as a last resort.
4) Would you like something in an eggshell? Or perhaps a tasteful pearl white?: Then there’s the white paint strategy. According to this plan, a spacecraft would approach the asteroid and pummel it with white paint balls. The new white coat would more than double the rock’s reflectivity and, over time, that would, in theory, increase solar radiation pressure enough to move it off course. You scoff? This plan, devised by an MIT graduate student, won the 2012 Move an Asteroid Technical Paper Competition sponsored by the United Nations.
5) You knew there had to be lasers in here somewhere: And just in time for last week’s space rock event, two California scientists outlined a strategy in which they would use the sun’s power to create laser beams that could be aimed at an asteroid. They would start small, creating an array in space about the size of the International Space Station. The laser beams it created would be strong enough to push an asteroid on to a different path, say the plan’s inventors. But they wouldn’t stop there. They foresee building out the array until it’s as large as six miles wide. And then it would be able to produce laser beams powerful enough to vaporize an asteroid within a year.
Sure, it sounds like a George Lucas fever dream. But the scientists say it’s eminently feasible. Besides, says one, physicist Philip Lubin of the University of California, Santa Barbara, it’s time to be proactive instead of reactive. As he put it, “Duck and cover is not an option.”
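As for why the gravity tractor in idea number one needs a decade, here is a back-of-envelope calculation with assumed numbers: a 20-tonne spacecraft hovering 200 meters from the asteroid’s center and towing non-stop. This is a sketch of the physics, not a mission design.

```python
# Back-of-envelope only, with assumed numbers: why a gravity tractor needs years.
# Assume a 20-tonne spacecraft hovering 200 m from the asteroid's center, towing
# continuously. The pull is tiny, so only a long lead time adds up to anything.

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
spacecraft_mass = 20_000.0    # kg (assumed)
hover_distance = 200.0        # m, spacecraft to asteroid center (assumed)
years = 10
seconds = years * 3.156e7

accel = G * spacecraft_mass / hover_distance**2      # acceleration given to the asteroid
delta_v = accel * seconds                            # speed change after the tow
drift = 0.5 * accel * seconds**2                     # straight-line displacement

print(f"acceleration: {accel:.2e} m/s^2")
print(f"speed change after {years} years: {delta_v * 100:.2f} cm/s")
print(f"straight-line drift: {drift / 1000:.0f} km (Earth's radius is ~6,371 km)")
```

The straight-line drift actually understates the real effect, because even a centimeter-per-second nudge changes the asteroid’s orbital period, and that mismatch compounds on every loop around the sun. The point stands, though: the pull is minuscule, so the lead time matters far more than the hardware.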
Video bonus: In case you forgot how bad a movie Armageddon was, and that it featured Steve Buscemi as an astronaut, here’s the over-the-top trailer.
Video bonus bonus: Or if you want to stick to the real thing, here’s a collection of videos of Friday’s asteroid flyby.
More from Smithsonian.com
October 18, 2012
A few months ago Google shared with us another challenge it had taken on. It wasn’t as fanciful as a driverless car or as geekily sexy as augmented reality glasses, but in the end, it could be bigger than both. In fact, it likely will make both of them even more dynamic.
What Google did was create a synthetic brain, or at least the part of it that processes visual information. Technically, it built a mechanical version of a neural network, a small army of 16,000 computer processors that, by working together, was actually able to learn.
At the time, most of the attention focused on what all those machines learned, which mainly was how to identify cats on YouTube. That prompted a lot of yucks and cracks about whether the computers wondered why so many of the cats were flushing toilets.
But Google was going down a path that scientists have been exploring for many years, the idea of using computers to mimic the connections and interactions of human brain cells to the point where the machines actually start learning. The difference is that the search behemoth was able to marshal resources and computing power that few companies can.
The face is familiar
For 10 days, non-stop, 1,000 computers–using those 16,000 processors–examined random thumbnail images taken from 10 million different YouTube videos. And because the neural network was so big–it had more than a billion connections–it was able to learn to identify features on its own, without any real human guidance. Through the massive amount of information it absorbed, the network, by recognizing the relationships between data, basically taught itself the concept of a cat.
Impressive. But in the realm of knowledge, is this cause for great jubilation? Well, yes. Because eventually all the machines working together were able to decide which features of cats merited their attention and which patterns mattered, rather than being told by humans which particular shapes to look for. And from the knowledge gained through much repetition, the neural network was able to create its own digital image of a cat’s face.
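To get a feel for what learning features without guidance means, here is a drastically simplified stand-in. Google trained a huge, multi-layer network on YouTube frames; the toy below just runs principal component analysis on synthetic patches. But it makes the same basic point: a recurring visual pattern can be pulled out of unlabeled data with no human pointing at it.

```python
# A toy stand-in for the idea, nowhere near Google's scale or method: Google
# trained a huge multi-layer network on YouTube frames; here, plain principal
# component analysis on tiny synthetic patches shows the core point -- a
# recurring visual pattern can be pulled out of unlabeled data automatically.
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled "images": 6x6 noise patches, half of which contain the same blob.
blob = np.zeros((6, 6)); blob[2:5, 2:5] = 1.0
patches = []
for i in range(2000):
    patch = rng.normal(0.0, 0.1, (6, 6))
    if i % 2 == 0:
        patch += blob              # the recurring pattern, never labeled
    patches.append(patch.ravel())
X = np.array(patches)

# Find the direction of greatest variation in the raw pixels (first principal
# component). Nobody tells the code what a "blob" is.
X_centered = X - X.mean(axis=0)
_, _, components = np.linalg.svd(X_centered, full_matrices=False)
learned_feature = components[0]

# The learned direction lines up with the hidden blob pattern.
alignment = abs(np.corrcoef(learned_feature, blob.ravel())[0, 1])
print(f"correlation between learned feature and the blob: {alignment:.2f}")
```

The gap between this toy and Google’s network is the gap between 36 pixels and a billion connections, which is rather the point.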
That’s a big leap forward for artificial intelligence. It’s also likely to have nice payoffs for Google. One of its researchers who worked on the project, an engineer named Jeff Dean, recently told MIT’s Technology Review that now his group is testing computer models that understand images and text together.
“You give it ‘porpoise’ and it gives you pictures of porpoises,” Dean explained. “If you give it a picture of a porpoise, it gives you ‘porpoise’ as a word.”
So Google’s image search could become far less dependent on accompanying text to identify what’s in a photo. And it’s likely to apply the same approach to refining speech recognition by being able to gather extra clues from video.
No question that the ability to use algorithms to absorb and weave together many streams of data, even different types of data, such as sound and images, will help make Google’s driverless car that much more autonomous. Same with Google glasses.
But now a slice of perspective. For all its progress, Google still has a long way to go to measure up to the real thing. Its massive neural network, the one with a billion connections, is, in terms of neurons and synapses, still a million times smaller than the human brain’s visual cortex.
A matter of intelligence
Here are more recent developments in artificial intelligence:
- A bee, or not a bee: A team of British scientists is attempting to create an accurate model of a honeybee’s brain. By reproducing the key systems that make up a bee’s perception, such as vision and scent, the researchers hope to eventually be able to install the artificial bee brain in a small flying robot.
- But does it take the cover into account?: New software called Booksai is using artificial intelligence to give you book recommendations based on the style, tone, mood and genre of things you already know you like to read.
- Do I always look this good?: Scientists at Yale have programmed a robot that can recognize itself in the mirror. In theory, that should make the robot, named Nico, better able to interact with its environment and humans.
- Lost in space no more: Astronomers in Germany have developed an artificial intelligence algorithm to help them chart and explain the structure and dynamics of the universe with amazing accuracy.
- Walk this way: Scientists at MIT have created a wearable intelligent device that creates a real-time map of where you’ve just walked. It’s designed as a tool to help first responders coordinate disaster search and rescue.
Video bonus: In France–where else?–an inventor has created a robot that not only prunes grape vines, but also has the intelligence to memorize the specific needs of each plant. And now it’s learning to pick grapes.
More from Smithsonian.com
August 3, 2012
For 40 years Landsat satellites have been circling the Earth, taking pictures from roughly 440 miles above us. Each loop lasts about 99 minutes and it takes about 16 days to capture the entire planet. Which means that Landsats have been recording, in 16-day intervals, the ebb and flow of our relationship with the planet since the early 1970s.
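Those numbers hang together, by the way. A quick check with Kepler’s third law, assuming a circular orbit 440 miles up and the textbook value for Earth’s gravitational parameter, lands right on the 99-minute figure.

```python
# Quick sanity check on those numbers using Kepler's third law. Assumed values:
# a roughly circular orbit 440 miles (about 708 km) up and a spherical Earth.
import math

earth_radius_km = 6371.0
mu = 398_600.0                     # Earth's gravitational parameter, km^3/s^2
altitude_km = 440 * 1.609          # 440 miles in kilometers

a = earth_radius_km + altitude_km  # semi-major axis of a circular orbit
period_s = 2 * math.pi * math.sqrt(a**3 / mu)
print(f"orbital period: {period_s / 60:.0f} minutes")   # ~99 minutes
```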
It’s been, as they say in the relationship business, a rough stretch, but for most of it, only scientists have been paying much attention. These were people tracking the explosion of cities or the scarring of rainforests or the melting of glaciers. As for the rest of us, well, we may have been aware that things were changing, and not for the better, but we had little sense of the scale or pace of change.
Now we can see for ourselves, thanks to a joint project of Google, the U.S. Geological Survey and Carnegie Mellon University. Google has already stored 1.5 million Landsat images in its Google Earth Engine and now CMU scientists have refined software that allows many of those images to be watched as zoomable, time-lapse videos.
It’s an experience both fascinating and sobering. Take, for instance, a satellite time-lapse of Las Vegas since 1999. You see the city spreading like kudzu into the desert, while nearby, Lake Mead shrinks a bit more every year. The two aren’t directly related–the lake’s being drained by drought and warm winters upstream on the Colorado River. But if you live anywhere near there, it couldn’t be a comforting juxtaposition.
Or consider a time lapse of the Amazon rainforest during the same period. You watch as farmers’ fields spider out like veins from a road built through the green canopy. And when brown fields take over an area, another road is cut and more fields follow. As Carnegie Mellon scientist Randy Sargent put it, “You can continue to argue about why deforestation has happened, but you no longer will be able to argue whether it happened.”
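The software behind those videos is far more involved than this (Google Earth Engine handles the storage, alignment and zooming), but the core idea, one co-registered scene per year played in sequence, fits in a few lines. The file names below are hypothetical stand-ins for a folder of yearly Las Vegas scenes.

```python
# Not the Earth Engine pipeline -- just a minimal illustration of the underlying
# idea: stack one co-registered scene per year and play them in sequence. The
# file-naming scheme ("vegas_1999.png" ... "vegas_2012.png") is hypothetical.
import glob
import imageio

frames = [imageio.imread(path) for path in sorted(glob.glob("vegas_*.png"))]
imageio.mimsave("vegas_timelapse.gif", frames, duration=0.5)  # half a second per year
print(f"wrote vegas_timelapse.gif with {len(frames)} frames")
```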
Archaeology from space
It turns out that satellite photography isn’t just a powerful tool for tracking recent Earth events; it’s also a way to look deep into the past. A report published earlier this year revealed that archaeologists are able to see traces of now-buried ancient settlements by applying a computer program to satellite photos. This works because human settlements, specifically organic waste and decayed mud bricks, leave behind a unique signature in the soil. Under infrared analysis, it tends to be much denser than the soil around it.
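The published work relies on carefully calibrated multispectral imagery, but the general flavor of the approach, flagging spots that stand well apart from the surrounding soil and handing them to an archaeologist for a closer look, can be sketched with a toy example. Everything below, including the threshold, is invented for illustration.

```python
# A toy of the general approach, not the researchers' actual program: flag
# pixels in an infrared band whose values stand well apart from the local
# background, then hand those spots to a human to inspect. The synthetic
# "scene" below plants two bright anomalies in otherwise uniform terrain.
import numpy as np

rng = np.random.default_rng(1)
scene = rng.normal(100.0, 5.0, (200, 200))   # fake infrared band, arbitrary units
scene[40:44, 60:64] += 30.0                   # buried-settlement-like anomaly
scene[150:153, 20:23] += 25.0                 # another one

z = (scene - scene.mean()) / scene.std()      # how unusual is each pixel?
candidates = np.argwhere(z > 4.0)             # threshold chosen for illustration

print(f"{len(candidates)} candidate pixels flagged")
print("sample locations (row, col):", candidates[:5].tolist())
```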
Using this technique, Harvard archaeologist Jason Ur was able to spot as many as 9,000 potential hidden settlements in a 23,000-square-kilometer area of northeastern Syria alone. “Traditional archaeology goes straight to the biggest features — the palaces or cities — but we tend to ignore the settlements at the other end of the social spectrum,” said Ur. “The people who migrated to cities came from somewhere; we have to put these people back on the map.”
Another scientist using satellite images, Sarah Parcak, of the University of Alabama at Birmingham, actually refers to herself as a “space archaeologist.” Last year she located as many as 17 possible small pyramids buried under the sand in Egypt through a satellite survey. Said Parcak, “It’s an important tool to focus where we’re excavating. It gives us a much bigger perspective on archaeological sites. We have to think bigger and that’s what the satellites allow us to do.”
Here’s a sampling of some of the more memorable images captured by satellite cameras:
- An Olympian effort: In the spirit of the Games, NASA has pulled together aerial views of the 22 cities that have hosted the Summer Olympics since the modern games began in 1896.
- Growth spurts: While we’re peering down at cities, here are 11 more that have seen explosive growth in recent decades, from Chandler, Arizona, which has eight times as many residents as it did in 1980, to the Pearl River Delta in China, which was completely rural in the 1970s and now has a population of more than 36 million.
- Scorched Earth: Only a satellite image can give you a true sense of how much devastation the Waldo Canyon fire caused in Colorado earlier this summer.
- Beetle mania: More ugliness in Colorado: A satellite’s view of the destruction done by the tiny pine bark beetle.
- Breaking away: A series of satellite images captures an ice island twice the size of Manhattan breaking away from the Petermann Glacier in Greenland a few weeks ago.
- Dust never sleeps: This will make your throat go dry: A dust storm bridging the Red Sea.
- Is this place beautiful or what?: And finally…to mark Landsat’s 40th birthday, NASA and the U.S. Geological Survey asked people to vote for the Landsat image that best presented Earth as a work of art. Here are the five top choices.
Video bonus: Check out more stunning Landsat images in this clip about how the Google Earth Engine will make it much easier for people like you and me to follow the Earth’s transformation.
More from Smithsonian.com
April 20, 2012
Remember how excited everyone got a few weeks ago when Google started sharing details about the augmented reality glasses it’s developing? Project Glass, as it’s called, seemed sure to be the next big thing in wearable tech–glasses that work like a smart phone, giving you directions, taking photos, connecting to the Web, pinging you with reminders, buying tickets, and generally acting like a concierge wrapped around your head.
Now that all seems soooo early April.
Because this week the new new thing is a smart watch called the Pebble. Not that smart watches are new–they’ve been around for a few years. But Pebble’s cutting a sharper edge. It’s the first smart watch to be able to communicate wirelessly with both iPhones and Android smartphones. Even more impressive, though, is how Allerta, the company behind it, has used “crowd-funding” to go viral and, in the process, raise way, way more cash than it thought it could.
Nine days ago, Pebble rolled out on Kickstarter, the website that’s usually associated with encouraging the public to invest in creative projects–indie films, music, video games, books. Allerta hoped people would kick in $100,000; as of this morning, it has raised almost $5.5 million. That’s serious money.
Usually a product like the Pebble would go the venture capitalist route. But founder Eric Migicovsky knew that investors can be skittish about throwing money into hardware, and would likely ask a lot of questions about models and market size. So he took his smart watch to the people. He simply made a video showing what the Pebble could do and invited visitors to the Kickstarter site to pre-order the watch at a discount from the $150 it will cost in stores. More than 37,000 people have ponied up so far, and the offer still has almost a month to go. Which means the Pebble, which won’t come out until this fall, already has itself a community of believers.
That’s a sweet enticement to mobile apps developers, who are as critical to the success of a smart watch as they are to smartphones. If they can see that much demand for a product months before it’s available, they don’t need much incentive to jump on board. And that’s what will ensure that Allerta can deliver on its claim that the Pebble will be the first truly customizable smart watch.
How smart can a watch be?
So what is it about the Pebble that makes it so alluring? Start with the fact that it’s compatible with iPhones. That’s huge, since no other smart watch is. But here’s what else it will be able to do. It will allow you to read text messages on Android smartphones and flash caller ID on its screen when a call comes in. You can use it to control music on your phone and to track how far you’ve run or at what pace you’ve ridden your bike. On the golf course, it will be able to tell you how far it is to the hole. Plus, the Pebble is water-resistant, can hold a charge for a week and its e-paper screen is easy to read, even in direct sunlight.
Right, and it tells time. But not on some standard, dull digital display–unless that’s what you choose. Because you’ll be able to customize the watch face to your preference for how you want time to look as it passes.
Does this tech make me look fat?
More experts are saying wearable tech is about to go mainstream. Here are some of the latest developments:
- Turn the beat around: The Mayo Clinic is partnering with Preventice to develop a miniature wearable device that monitors heart and respiratory rates and, through a smartphone, wirelessly sends the data to a doctor’s office. The device, worn under your clothes, is now in clinical trials in the U.S. and Europe.
- Your baby called and he’s wet: For those who just can’t know enough about their baby, there’s now a very special onesie with sensors that track your kid’s vitals and send the data to your PC or phone. The cost of the outfit, software and service? A cool $1,000.
- Fashion statement: Oakley, known for its stylish sunglasses, is working on its own version of augmented reality glasses that could put it in direct competition with Google.
- The all-day workout: Nike is making its mark in the wearable tech biz with its Nike+ FuelBand, a rubber wristband that lets you set your exercise goals in the morning, then tracks steps taken, calories burned or other progress you’ve made. If you hit your goal, the color display turns green.
- May your soles rise up: And this summer, Nike plans to release the Nike+ Basketball and Nike+ Training shoes with pressure sensors in the soles. The sensors will collect information about your movement (how high you jump, how fast you move, how hard you play) and transmit it to your phone.
Video bonus: See the video on Kickstarter that convinced thousands of people to invest in a Pebble smart watch.