October 23, 2013
It is hard to imagine, but there was once a time when cars didn’t come equipped with rearview mirrors. In fact, it wasn’t until 1916 that the simple yet game-changing invention became standard, making it easy for drivers to safely weave through traffic—except, of course, when they’re stuck behind a slow-moving cargo truck on a two-lane road. Too bad there wasn’t also a way to see ahead of these road turtles to determine when it would be safe to pass.
Today, it’s 100 percent possible, and to prove it, researcher Michel Ferreira of the University of Porto in Portugal has developed a technology that seems to magically let you peer through the car you’re trailing. And, for manufacturers, what’s attractive about the computer scientist’s “See-Through System” is that it isn’t an entirely new device but rather a clever adaptation of a nascent technology known as Vehicular Communication Systems, designed to let wirelessly interconnected cars communicate and update each other with traffic information. One example of such systems is the ongoing development of semi-autonomous platooning vehicles, in which a network of cars relays information used to navigate the streets together as a convoy.
The See-Through System makes innovative use of these systems by tapping into the visual data being recorded by the forward-facing webcam that some vehicles have installed on their windshield. When a truck or bus ahead of you has this type of webcam, the system within your own car streams the feed from that camera to a transparent LCD screen built into your windshield. The viewpoint helps you judge distances and see whether any oncoming cars in the other lane are moving into view.
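For the technically curious, here is a minimal sketch of the plumbing such a setup implies; it is not Ferreira’s implementation, and the camera index, address and port are placeholder assumptions. The lead vehicle compresses frames from its forward camera and broadcasts them over the car-to-car link, and the trailing car decodes and displays them.

```python
# A minimal sketch of a see-through video relay (NOT Ferreira's actual system):
# the lead vehicle JPEG-encodes frames from its forward camera and sends them
# over UDP; the trailing vehicle decodes and displays them.
import socket
import cv2
import numpy as np

TRAILING_CAR = ("192.168.1.20", 5005)   # hypothetical address of the car behind

def lead_vehicle():
    """Stream the forward-facing camera to the trailing car."""
    cam = cv2.VideoCapture(0)            # forward-facing windshield camera
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        ok, frame = cam.read()
        if not ok:
            break
        # Compress each frame so it fits in a single UDP datagram.
        ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 40])
        if ok and len(jpeg) < 65000:
            sock.sendto(jpeg.tobytes(), TRAILING_CAR)

def trailing_vehicle():
    """Receive frames and show them on the in-car display."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", TRAILING_CAR[1]))
    while True:
        data, _ = sock.recvfrom(65535)
        frame = cv2.imdecode(np.frombuffer(data, np.uint8), cv2.IMREAD_COLOR)
        if frame is not None:
            cv2.imshow("see-through view", frame)
            if cv2.waitKey(1) == 27:     # Esc to quit
                break
```

A real vehicular network would use a dedicated short-range radio link rather than plain Wi-Fi and UDP, but the relay-and-display idea is the same.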
“These windshield cameras will someday be very common and I predict you’ll eventually find them standard on every car,” Ferreira says. “But besides letting cars simply communicate with each other, it can also be deployed as a form of augmented reality, where you can see through cars.”
The most immediate and obvious concern with relying on transmitted data from some of these more elaborate communication systems, such as GPS, is the possibility of computer error and the real-time lag that’s common even with high-speed data transmission technologies. Rearview mirrors are generally dependable because they reflect changing road conditions accurately and instantly, at the speed of light. Even then, car manufacturers are required to affix a warning that “objects in mirror are closer than they appear.” In an interview with New Scientist, Hannes Kaufmann of the Vienna University of Technology in Austria says that while he thinks “it’s a good idea to support drivers to judge situations better,” these kinds of interventions can be a “two-edged sword.”
“What if the image transmission stutters?” he wonders.
Ferreira has already taken this potential “blind spot” into account and has thoroughly road-tested the system (as evidenced in the video), even going so far as to build in a fail-safe that shuts off the feed if a glitch occurs.
“The big problem that’s been brought up is the latency of the transmission. When you’re driving you don’t want to have a half-second delay, and our tests show that the latency isn’t more than 200 milliseconds, which translates to about 10 meters at high speed and won’t make any difference with safety in real-world driving,” Ferreira argues.
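Ferreira’s figure is easy to check with back-of-the-envelope arithmetic; the snippet below assumes a speed of 180 kilometers per hour, which is roughly what “high speed” would have to mean for the numbers to line up.

```python
# Quick sanity check of Ferreira's figure: at high passing speeds, a
# 200-millisecond lag corresponds to roughly 10 meters of travel.
latency_s = 0.200            # measured transmission delay
speed_kmh = 180              # an assumed high speed, for illustration
speed_ms = speed_kmh / 3.6   # convert to meters per second (50 m/s)
lag_distance = speed_ms * latency_s
print(f"{lag_distance:.0f} m of travel during the delay")  # -> 10 m
```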
Even if you’re not quite ready to trust the professor’s word on it, there’s no need to worry right this minute. Such a system likely won’t be available until car-to-car communication becomes much more ubiquitous.
“What you’re seeing in the video are the raw beginnings of core research,” he says. “This has been possible for a little while now, so the work I’m doing is just so that people can see that it is.”
August 20, 2012
Last week The Voice was back. I’m not referring to the treacly TV show or the latest crooner chased down by TMZ. I’m talking about Julia Child.
In honor of what would have been her 100th birthday, America’s first real TV chef was all over the airwaves. Or at least her voice was, a voice that, on first hearing, sounded like it could set off car alarms, or maybe was a car alarm. But it was all part of the package, a presence as genuine as it was gangly. There was nothing snooty about Julia as she taught Americans French cooking. If you dropped a piece of lamb and you were alone in the kitchen, she once confided to viewers, just pick it up. No one had to know.
So it was no small irony that the day after her birthday, the New Scientist’s website published a piece about how robots, sensors and augmented reality are now being used to train novice chefs. It’s good that Julia never had to hear about this.
Something’s watching you
Nonetheless, this is where cooking is headed, a future where precision and skill in the kitchen will have as much to do with what’s watching as who’s training.
Consider the setup that computer scientist Yu Suzuki and his team have created in a test kitchen at Kyoto Sangyo University. They’ve installed cameras and projectors on the ceiling that project cooking instructions right on the ingredients.
So, let’s say you want to fillet a fish. Once you place it on a chopping board, the camera detects its size and shape and the projector then overlays the equivalent of a virtual dotted line showing you where to make the cut. In a macabre twist, instructive word bubbles appear at the fish’s mouth to ensure that its gutting is done properly.
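For a sense of how such guidance might work under the hood, here is a rough sketch, not Suzuki’s code, using the common OpenCV library: find the fish’s outline in the overhead camera image, then draw a dotted cut line at a fixed fraction of its length for the projector to overlay. The “cut a quarter of the way along” rule and the lighting assumptions are purely illustrative.

```python
# A rough sketch of projection guidance (not Suzuki's system): segment the
# fish from the board, then draw a dotted guide line for the projector.
import cv2
import numpy as np

def cut_guide(frame, cut_fraction=0.25):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Separate the fish from the (assumed darker) chopping board.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return frame
    fish = max(contours, key=cv2.contourArea)          # largest blob = the fish
    x, y, w, h = cv2.boundingRect(fish)
    cut_x = x + int(w * cut_fraction)                  # where to place the guide
    # Draw a dotted vertical line for the projector to cast onto the fish.
    for dot_y in range(y, y + h, 12):
        cv2.circle(frame, (cut_x, dot_y), 2, (0, 0, 255), -1)
    return frame
```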
So far, because the scientists have to program each process manually, Suzuki’s system can teach people only how to prepare fish and peel onions. But he promises that once it’s automated, its repertoire will grow quickly.
Do the right thing
Then there’s Jinna Lei, a robotics Ph.D. student at the University of Washington. She’s also using cameras in the kitchen, specifically Kinect-like depth-sensing cameras capable of recording both the shape and appearance of kitchen objects, which allows the system to track cooking actions, such as whether a certain ingredient has been poured into a bowl.
Eventually, says Lei, the system should be able to alert the cook if he or she makes a mistake. Already, she’s tested it with a cake-baking video and it was able to identify, in seconds, the start and end points of 17 different recipe actions.
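As a toy illustration of that segmentation step (not Lei’s system), here is how start and end points could be recovered once each video frame has been labeled with an action; the labels below are made up.

```python
# Toy illustration: recover the start and end frame of each contiguous action
# from a per-frame label sequence. The labels here are invented examples.
from itertools import groupby

def action_segments(frame_labels):
    """Return (action, start_frame, end_frame) for each run of identical labels."""
    segments, index = [], 0
    for action, run in groupby(frame_labels):
        length = len(list(run))
        segments.append((action, index, index + length - 1))
        index += length
    return segments

labels = ["idle"] * 3 + ["crack egg"] * 5 + ["pour"] * 4 + ["stir"] * 6
print(action_segments(labels))
# [('idle', 0, 2), ('crack egg', 3, 7), ('pour', 8, 11), ('stir', 12, 17)]
```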
Still another chef-teaching technique has been developed by researcher Thomas Ploetz at the University of Newcastle in the U.K. He has installed sensors in kitchen utensils that record when and how novice cooks use them. And since the cooks hear their instructions from a computer in French, they learn both cooking and French.
Now that Julia would have loved.
Here are more recent innovations on the food front:
- Oodles of noodles: A Chinese restaurateur has started mass-producing robots that can tirelessly hand-slice noodles into a pot of boiling water. One robot costs about $2,000 (U.S.); a human doing the same job in China would make about $4,700 a year. (That’s right, $4,700.)
- I, Sushi Master: Meanwhile, in Japan, a new robot is cranking out 2,500 perfect sushi rolls an hour. The machine injects a puff of air into each tofu skin to open it up fully, then a second robotic probe tucks the sushi rice inside the corners.
- The printer needs more meat: A startup in Missouri is promoting the idea that one day hamburgers could be produced on a 3-D printer. The company, Modern Meadow, thinks it will be able to “print” slivers of environmentally friendly, in-vitro meat. I know, it doesn’t sound too tasty, but PayPal co-founder and billionaire Peter Thiel has kicked in about $300,000 to see if it could actually work.
- Can you earn rewards for banning cell phone yakking?: If they can make a game out of running a farm, why not one where you manage a restaurant? So now there’s a game app called Cafeteria Nipponica where you hire staff, create dishes, maybe set up a mobile phone campaign to get customers in the door. And if you really get serious, you can try your hand at running three restaurants at the same time.
- Do we really need to make it easier to buy donuts?: Dunkin’ Donuts has gone the Starbucks route and is now offering a mobile payment app that lets you set up your own donut account where you can pay at the counter by scanning your phone over a barcode. You can even use the app to send donut gift cards to your friends, for which they will either love you or hate you. Probably both.
Video bonus: In case you forgot what a charmer Julia Child could be, watch this 1987 clip where she whips out a blow torch to grill up a burger for David Letterman. And for a bonus bonus, here’s a great new remix of Julia at her snappy best.
August 14, 2012
Chances are you think you already have enough information in your life. Why, oh why, would you want to add more layers?
Yet there’s something intriguing about the concept of augmented reality, the notion of enhancing objects in the real world with virtual sounds and images and additional info. And when Google revealed earlier this year that it was developing glasses that will be part wearable computer, part digital assistant that flashes relevant data right before your eyes, augmented reality (AR) no longer seemed such a digital parlor trick. The geek gods had spoken.
In fact, a recent analysis by the London firm ABI Research concludes that the next big phase of AR, now largely played out on smartphones and tablets, will be through wearable tech. That’s when the technology will become truly functional, when your glasses are able to tell you everything you want to know about the restaurants and stores on the block where you’re walking.
Will Powell, an AR wiz recently interviewed by Slash Gear, concurs:
I think that with the desire for more content and easier simpler devices, using what we are looking at and hearing to tell our digital devices what we want to find is the way forward. Even now we have to get a tablet, phone or laptop out to look something up. Glasses would completely change this because they are potentially always on and are now adding full time to at least one of our fundamental senses.
Scenes from an exhibition
One place, however, where AR is still making its mark on small screens is the museum world. Those who run museums know that the people walking around their buildings are already spending an inordinate amount of time using their phones, whether it’s taking pictures or texting friends or taking pictures to text to friends. So it only makes sense to find ways to turn phones into storytelling tools that can bring the inanimate to life. Or shift time. Or add layers of knowledge. More museums are taking the leap and while the results can sometimes still seem a bit gimmicky, it’s a move in the right direction.
One of the latest examples is an exhibit called “Ultimate Dinosaurs” that opened at the Royal Ontario Museum in Toronto earlier this summer. It uses augmented reality to add flesh to the bones of dinosaurs and lets them move around. In some cases, you can use an app on your smartphone to make beasts pop out of markers around the exhibit, including on the floor; in others, you can use iPads provided by the museum to turn fossils into fleshed-out creatures. And along the walls are animated projections of dinos that are also interactive. With the help of a Kinect 3-D camera, their eyes follow your every move. A bit creepy, but what museum couldn’t use a little thrill?
Instead of reconstituting dinosaurs, the Laguna Beach Art Museum in California is using AR to bring motion to still photos. Dancers frozen in an image start to spin on your smartphone screen; a woman captured under water suddenly swims away. It’s the first phase of images escaping their frames.
The Getty Museum in Los Angeles is taking yet another approach. In an exhibit titled “Life of Art,” it enables visitors to use iPads to explore in much more detail, and even rotate, classic historical objects from its permanent collection: a 17th-century lidded porcelain bowl from Asia, for instance, and an 18th-century French armchair.
But maybe the most engaging twist of AR with an exhibit has been pulled off by the Science Museum in London. An iPhone app turns James May, one of the hosts of the popular BBC show “Top Gear,” into a virtual museum guide. By aiming the camera at a marker near nine of the exhibits in the Making the Modern World Gallery, you conjure up a CGI version of May, spinning tales and reeling off details about steam engines and the first home computers.
What is reality?
Here are other examples of augmented reality pushing envelopes:
- Now that’s point-and-shoot: Researchers at MIT’s Media Lab have developed an AR device they call EyeRing. It’s a tiny camera you wear on your finger; when you take a picture of an object, it transmits the image to a smartphone, which gives you information about what you’ve photographed.
- But does it work on bald?: Meanwhile, the folks at Disney Research have created a technology using reverse electrovibration that projects texture onto smooth surfaces.
- Really interior design: The 2013 edition of the IKEA catalog has its own AR spin. You can use a smartphone app to see inside cabinets and get design ideas not available to those satisfied only with reality.
- But wait, there’s more: The Los Angeles Times used the start of the London Olympics to join print publications dabbling in AR. It rolled out an app that enabled readers to get more material by hovering their phones over Olympics photos in the paper.
- For those who expect more from their chips than crunch: We should all be grateful that we have lived long enough to experience potato chip bags that predict the weather. This month and next, Walkers crisps will come in bags that, once you download the appropriate mobile app, share the weather report for today and tomorrow. There are no plans, as yet, for five-day forecasts.
Video bonus: Here’s a demo video showing how dinosaurs come back to life in a Toronto museum.