September 13, 2012
A new movie premieres in New York today and chances are none of you will ever see it.
It’s a short film titled “DVF Through Glass,” footage that models working for designer Diane von Furstenberg shot during New York’s Fashion Week using the Google glasses they were wearing. (Google prefers to call its augmented reality devices Google Glass, to distinguish them from actual glasses, because they contain no glass. Got that?)
They’re the frames that caused such a stir last spring when Google unveiled them, wearable computers that can shoot videos and photos and tell you where the nearest Starbucks can be found. By wearing them as they strolled down the runway, von Furstenberg’s models became high-tech accessorized. For its part, Google managed to de-geek its invention a tad by putting it on fashion models, not to mention grab some New York media exposure before all the spotlights swung over to Apple’s iPhone 5.
As Spencer Ante pointed out in The Wall Street Journal this week, Google Glass remains a work in progress, with much of its software unfinished. It won’t be available until next year and, at $1,500 a pop, will likely be a novelty bauble for a while.
Still, it’s already the best known of what are being called “appcessories,” wearable devices that work with smartphones. Earlier this week, a potential challenger made its debut: glasses developed by a British firm called The Technology Partnership (TTP). Unlike Google Glass, the TTP device looks like regular glasses and beams an image directly into the wearer’s eye, instead of making him or her shift focus to a tiny screen attached to the frame.
Then there’s the Pebble, a smart watch that tells you the time, but also connects wirelessly with your iPhone or Android phone to show you who’s calling, display text messages, Facebook or email alerts and let you control, from your wrist, what’s playing on your smartphone. Its inventors had hoped to raise $100,000 on Kickstarter, with the goal of selling 1,000 watches. Instead they raised $10 million and already have orders for 85,000 watches–so many that they’ve had to push back the first shipment, which was supposed to start this month.
It’s that kind of response that has a lot of people predicting that wearable computing is the next big wave, the thing that will free us from what’s been called the “black mirror” of our smartphone screens. Your phone may still be the powerful little computer you carry around, but it may never have to leave your pocket.
Or you can do without the phone altogether. London digital art director Dhani Sutanto created an enamel ring with the electronics of a transit card implanted in it. One swipe of his ring and he can ride the London subway.
His goal, he says, is to design “interactions without buttons,” to link physical items–such as a ring–to your virtual identity and preferences.
“Imagine a blind person using an ATM and fumbling with the buttons or touch screen,” Sutanto recently told an interviewer. “If they had wearable technology in the form of a ring, for example, they could approach and just touch it. The ATM would say, ‘Welcome, Mr. Smith. Here’s your £20.’”
Turn me on
Google wasn’t alone in infusing tech in Fashion Week. Microsoft was there, too, presenting a dress that tweeted. Okay, the dress, made of paper, didn’t actually tweet, but the person wearing it could, using a keyboard on its bodice, decorate the bottom of the dress with Twitter banter.
My guess–and hope–is that this won’t catch on and we will never have to live in a world where people wear their tweets on their sleeves. But another breakthrough in wearable tech a few months ago could dramatically change what we expect our clothes to do for us.
Scientists at the University of Exeter in the U.K. have created a substance that can be woven into a fabric to produce the lightest, most transparent and flexible material ever made that conducts electricity. One day, they say, we could be walking around in clothing that carries a charge.
To me, this would not seem a good fashion choice if there’s even a chance of thunder and lightning. But the researchers at Exeter have happier thoughts. They talk of shirts that turn into MP3 players and of charging your phone with your pants.
Which could give new meaning to “wardrobe malfunction.”
Here are other recent developments in wearable tech:
- You’ve got the power: A British professor is trying to produce clothing made with materials capable of generating electricity from either the warmth or movement of the human body.
- If you must talk in public, do it with style: Nothing stylish about walking around wearing a Bluetooth headset. But now, at least for women, there are other options, such as a pendant that works like a headset, but looks like a necklace.
- One device to rule them all: Scientists at Dartmouth are developing a device worn like a bracelet that would authenticate a user’s identity and connect any other medical devices he or she has had implanted or is wearing.
- Mom, is that you?: A device called LUMOback that you wear like a belt around your back vibrates to let you know if you’re slouching.
- News from the front: Adidas now has a sports bra that both tracks your heartbeat and tells you how many calories you’ve burned.
- Are you going to answer your phone or what?: Not quite sure what to make of this one, but Nokia has filed for a patent for a magnetic vibrating tattoo. The idea is that it would work like a silent ringtone, setting off a different vibration depending on who’s calling or if your phone battery is running low.
Video bonus: See how Microsoft’s Kinect is being used to let you try on clothes without having to take any off.
More from Smithsonian.com
August 27, 2012
It was just about a year ago that a handful of Stanford professors began hatching a revolution in college education.
Sebastian Thrun, more widely known as the head of the team behind Google’s driverless car, decided that he and colleague Peter Norvig would start making their popular course in artificial intelligence available online. Free of charge. To anyone in the world. About 160,000 people signed up.
A few weeks later, another Google researcher/Stanford computer scientist, Andrew Ng, followed suit, offering his equally popular course, “Machine Learning,” for free. More than 100,000 people watched his lectures online. As Ng pointed out, it would have taken him 250 years to reach that many students in a conventional Stanford classroom.
The problem, of course, is that Stanford charges students in those conventional classrooms about $40,000 a year. Freebies were not a good business strategy.
By January, Thrun had lined up venture capital money and left Stanford to start Udacity, an independent, online-only education service focusing on science and technology courses. Within a few months, Ng and another Stanford computer scientist, Daphne Koller, had rounded up their own boatload of VC money–a reported $16 million to start with–and went on leave from Stanford to start their own online college operation called Coursera.
Less talk, more questions
But Ng and Koller actually have ratcheted things up another notch. Instead of just distributing its own online courses, Coursera has formed partnerships with some of America’s top universities to help them convert courses for free Internet access. Last month, the startup announced that in addition to its four original partners–Stanford, Princeton, Penn and Michigan–it has added 12 more, ranging from Duke and Johns Hopkins to the University of Toronto and the University of Edinburgh in Scotland.
So what does that mean? For starters, Coursera is spreading what’s becoming the new model for online teaching. No more videos of professors talking non-stop for an hour. Instead, lectures are chopped into much smaller chunks, say 10 minutes long, with students asked a quiz question every few minutes. They need to answer correctly before they can move on with the video.
And having tens of thousands of people taking the course at the same time makes it much easier than you would expect for students working on their own to find and study with like-minded classmates. Ng says that, on average, it takes only 22 minutes for someone to get a question answered in Coursera’s online forums.
The huge size of Internet classes–they’re now known as massive open online courses or MOOCs–also allows for much more comprehensive analysis of how subjects are taught and whether they’re understood. Since the online behavior of students is tracked–where they rewind videos, how they respond to quiz questions, etc.–professors can see where a large number of students may have struggled or given the same wrong answer and then make adjustments. Course material now not only has to be interactive, but also more dynamic. Immutable lectures, delivered as if chiseled on stone tablets, are going the way of chalkboards and elbow patches.
Professors also will be teaching classes far more culturally diverse than any they’ve previously experienced. When Coursera announced a few weeks ago that its enrollment had topped one million in just four months, it also noted that the students who’ve signed up for courses live in 196 different countries. Six out of 10 are outside the U.S.
Can this make money?
Is this really where college is headed? It says something that last spring Harvard and MIT launched their own MOOC partnership called edX, and that over the summer, the University of California at Berkeley joined it. Even if top-line universities aren’t sure what they’ll gain by offering free courses to the world, they don’t want to risk being left behind if this is a template of the future.
Clearly, there remain some very large unanswered questions, starting with how any of these partnerships will make money. One notion is to charge a relatively small fee, say $50, for a student to receive a certified copy of a letter saying he or she has completed a course. In other words, it wouldn’t cost anything to take a class, but you’d have to pay for proof that you finished it.
Another idea Sebastian Thrun has floated is to have MOOCs serve as a new kind of placement service, using what they glean about students to help companies find employees with very specific skills. But, as recruiters from Intel and Dell told Bloomberg Businessweek recently, a certificate for an online course may help someone land a job, but only if they already have a conventional, sit-in-a-classroom four-year degree. Only a very few colleges, including the University of Washington and the University of Helsinki, have agreed to give credit to students who complete MOOC courses.
What about cheating?
No question, plenty of skeptics are dubious about the depth and quality of an online education; they feel the sheer size of the classes precludes any level of one-on-one learning and also invites cheating.
So far only about 25 percent of the people who have enrolled in Coursera courses have actually completed them. And earlier this month The Chronicle of Higher Education reported “dozens” of complaints about plagiarism in essays written for some of the humanities courses Coursera is now offering. (Almost all of the free online courses to date have been in science or technology.)
The accusations actually came from other students, who, in the Coursera system, grade and comment on each other’s essays. In response to the complaints, Coursera reminded students of the honor code they signed when they enrolled. It also is considering using software that can detect plagiarism.
Some professors in the program have suggested that cultural differences could, at least in part, explain why someone would lift whole sections of text from Wikipedia for a course for which they’re not receiving any credit. Eric Rabkin, a University of Michigan English professor who teaches a Coursera class, told the Chronicle that one student who admitted plagiarizing content said he didn’t realize copying and pasting text from another site was inappropriate.
Coursera’s Daphne Koller points out that this comes with the territory of making top college courses available in places where a year ago it would have been inconceivable. She put it this way recently: “This could enable a wave of innovation because amazing talents can be found anywhere. Maybe the next Albert Einstein, or the next Steve Jobs, is living in a remote village in Africa.”
Here are a few other ways technology is changing education:
- Pack light: Another well-financed online initiative called The Minerva Project will be added to the mix by 2014. Its goal is to be the first elite global university. From sophomore year on, students will be encouraged to live in a new country, or at least a new city, every semester.
- That algorithm just doesn’t understand me: Winners of a competition sponsored by the Hewlett Foundation have devised algorithms that can grade essays.
- Today’s assignment is from “Mythbusters”: Big media companies, such as Discovery Communications and the News Corporation, are moving into the digital textbook business in a big way. They see it as a boom market that could become a new source of revenue.
- You tie shoes?: According to an infographic from LearnStuff.com, 1.5 million iPads will be used in classrooms this year. Also, while 70 percent of American children between ages two and five can use a computer mouse, only 11 percent can tie their own shoes.
Video bonus: Want to hear why so many top universities have become enamored of Coursera? Here’s co-founder Daphne Koller, in a recent TED talk, laying out why online courses should be a big part of college education’s future.
August 20, 2012
Last week The Voice was back. I’m not referring to the treacly TV show or the latest crooner chased down by TMZ. I’m talking about Julia Child.
In honor of what would have been her 100th birthday, America’s first real TV chef was all over the airwaves. Or at least her voice was, a voice that, on first hearing, sounded like it could set off car alarms, or maybe was a car alarm. But it was all part of the package, a presence as genuine as it was gangly. There was nothing snooty about Julia as she taught Americans French cooking. If you dropped a piece of lamb and you were alone in the kitchen, she once confided to viewers, just pick it up. No one had to know.
So it was no small irony that the day after her birthday, the New Scientist’s website published a piece about how robots, sensors and augmented reality are now being used to train novice chefs. It’s good that Julia never had to hear about this.
Something’s watching you
Nonetheless, this is where cooking is headed, a future where precision and skill in the kitchen will have as much to do with what’s watching as who’s training.
Consider the setup that computer scientist Yu Suzuki and his team have created in a test kitchen at Kyoto Sangyo University. They’ve installed cameras and projectors on the ceiling that project cooking instructions right on the ingredients.
So, let’s say you want to filet a fish. Once you place it on a chopping board, the camera detects its size and shape and the projector then overlays the equivalent of a virtual dotted line showing you where to make the cut. In a macabre twist, instructive word bubbles appear at the fish’s mouth to ensure that its gutting is done properly.
So far, because the scientists have to program each process manually, Suzuki’s system can teach people only how to prepare fish and peel onions. But he promises that once it’s automated, its repertoire will grow quickly.
Do the right thing
Then there’s Jinna Lei, a robotics Ph.D. student at the University of Washington. She’s also using cameras in the kitchen, specifically Kinect-like depth-sensing cameras capable of recording both the shape and appearance of kitchen objects. That allows the system to track cooking actions, such as whether a certain ingredient has been poured into a bowl.
Eventually, says Lei, the system should be able to alert the cook if he or she makes a mistake. Already, she’s tested it with a cake-baking video and it was able to identify, in seconds, the start and end points of 17 different recipe actions.
Still another chef-teaching technique has been developed by researcher Thomas Ploetz at the University of Newcastle in the U.K. He has installed sensors in kitchen utensils that record when and how novice cooks use them. And since the cooks hear their instructions from a computer in French, they learn both cooking and French.
Now that Julia would have loved.
Here are more recent innovations on the food front:
- Oodles of noodles: A Chinese restaurateur has started mass-producing robots that can tirelessly hand-slice noodles into a pot of boiling water. One robot costs about $2,000 in American dollars; a human doing the same job in China would make about $4,700 a year. (That’s right, $4,700.)
- I, Sushi Master: Meanwhile, in Japan, a new robot is cranking out 2,500 perfect sushi rolls an hour. The machine injects a puff of air into each tofu skin to open it up fully, then a second robotic probe tucks the sushi rice inside the corners.
- The printer needs more meat: A startup in Missouri is promoting the idea that one day hamburgers could be produced on a 3-D printer. The company, Modern Meadow, thinks it will be able to “print” slivers of environmentally-friendly, in-vitro meat. I know, doesn’t sound too tasty, but PayPal co-founder and billionaire Peter Thiel has kicked in about $300,000 to see if it could actually work.
- Can you earn rewards for banning cell phone yakking?: If they can make a game out of running a farm, why not one where you manage a restaurant? So now there’s a game app called Cafeteria Nipponica where you hire staff, create dishes, maybe set up a mobile phone campaign to get customers in the door. And if you really get serious, you can try your hand at running three restaurants at the same time.
- Do we really need to make it easier to buy donuts?: Dunkin’ Donuts has gone the Starbucks route and is now offering a mobile payment app that lets you set up your own donut account where you can pay at the counter by scanning your phone over a barcode. You can even use the app to send donut gift cards to your friends, for which they will either love you or hate you. Probably both.
Video bonus: In case you forgot what a charmer Julia Child could be, watch this 1987 clip where she whips out a blow torch to grill up a burger for David Letterman. And for a bonus bonus, here’s a great new remix of Julia at her snappy best.
August 14, 2012
Chances are you think you already have enough information in your life. Why, oh why, would you want to add more layers?
Yet there’s something intriguing about the concept of augmented reality, the notion of enhancing objects in the real world with virtual sounds and images and additional info. And when Google revealed earlier this year that it was developing glasses that will be part wearable computer, part digital assistant that flashes relevant data right before your eyes, augmented reality (AR) no longer seemed such a digital parlor trick. The geek gods had spoken.
In fact, a recent analysis by the London firm ABI Research concludes that the next big phase of AR–now largely played out on smartphones and tablets–will be through wearable tech. That’s when the technology will become truly functional, when your glasses are able to tell you everything you want to know about the restaurants and stores on the block where you’re walking.
Will Powell, an AR wiz recently interviewed by Slash Gear, concurs:
I think that with the desire for more content and easier, simpler devices, using what we are looking at and hearing to tell our digital devices what we want to find is the way forward. Even now we have to get a tablet, phone or laptop out to look something up. Glasses would completely change this because they are potentially always on and are now adding full time to at least one of our fundamental senses.
Scenes from an exhibition
One place, however, where AR is still making its mark on small screens is the museum world. Those who run museums know that the people walking around their buildings are already spending an inordinate amount of time using their phones, whether it’s taking pictures or texting friends or taking pictures to text to friends. So it only makes sense to find ways to turn phones into storytelling tools that can bring the inanimate to life. Or shift time. Or add layers of knowledge. More museums are taking the leap and while the results can sometimes still seem a bit gimmicky, it’s a move in the right direction.
One of the latest examples is an exhibit called “Ultimate Dinosaurs” that opened at the Royal Ontario Museum in Toronto earlier this summer. It uses augmented reality to add flesh to the bones of dinosaurs and lets them move around. In some cases, you can use an app on your smartphone to make beasts pop out of markers around the exhibit, including on the floor; in others, you can use iPads provided by the museum to turn fossils into fleshed-out creatures. And along the walls are animated projections of dinos that also are interactive. With the help of a Kinect 3-D camera, their eyes follow your every move. A bit creepy, but what museum couldn’t use a little thrill?
Instead of reconstituting dinosaurs, the Laguna Beach Art Museum in California is using AR to bring motion to still photos. Dancers frozen in an image start to spin on your smartphone screen; a woman captured under water suddenly swims away. It’s the first phase of images escaping their frames.
The Getty Museum in Los Angeles is taking yet another approach. In an exhibit titled “Life of Art,” it enables visitors to use iPads to explore in much more detail–and even rotate–classic historical objects from its permanent collection–a 17th century lidded porcelain bowl from Asia, for instance, and an 18th century French armchair.
But maybe the most engaging twist of AR with an exhibit has been pulled off by the Science Museum in London. An iPhone app turns James May, one of the hosts of the popular BBC show “Top Gear,” into a virtual museum guide. By aiming the camera at a marker near nine of the exhibits in the Making the Modern World Gallery, you conjure up a CGI version of May, spinning tales and reeling off details about steam engines and the first home computers.
What is reality?
Here are other examples of augmented reality pushing envelopes:
- Now that’s point-and-shoot: Researchers at MIT’s Media Lab have developed an AR device they call EyeRing. It’s a tiny camera you wear on your finger and when you take a picture of an object, it transmits it to a smartphone that gives you information about what you’ve photographed.
- But does it work on bald?: Meanwhile, the folks at Disney Research have created a technology using reverse electrovibration that projects texture onto smooth surfaces.
- Really interior design: The 2013 edition of the IKEA catalog has its own AR spin. You can use a smartphone app to see inside cabinets and get design ideas not available to those satisfied only with reality.
- But wait, there’s more: The Los Angeles Times used the start of the London Olympics to join print publications dabbling in AR. It rolled out an app that enabled readers to get more material by hovering their phones over Olympics photos in the paper.
- For those who expect more from their chips than crunch: We should all be grateful that we have lived long enough to experience potato chip bags that predict the weather. This month and next, Walkers crisps will come in bags that, once you download the appropriate mobile app, share the weather report for today and tomorrow. There are no plans, as yet, for five-day forecasts.
Video bonus: Here’s a demo video showing how dinosaurs come back to life in a Toronto museum.
June 11, 2012
These are tough times for storytelling.
While they’ve proven that brevity is not always the soul of wit, Twitter and Facebook have transformed what it means to communicate. We now write in quick bursts, sometimes completing thoughts, often not, with the goal always of cutting to the chase. No need for nuance or complexity. No reason for meandering twists to add flavor and depth or slow builds that unfold a story rather than eject it.
What hope in this world is there for the great long narrative, such as Jon Krakauer’s “Into Thin Air” or even more so, John Hersey’s 31,000-word epic, “Hiroshima,” which sold out the August 31, 1946 New Yorker within hours after the issue hit the street?
Actually, there’s a glimmer of hope, maybe even a glow.
Two relatively new companies actually are trying to make a business of saving long-form non-fiction, a quest that might seem to make as much sense as attempting to apply the rules of grammar to texting. Yet both are convinced that a lot of people still like to settle in for a long read of real-life stories.
Have I got a story for you
One, called Byliner, is taking a more traditional approach, albeit with a touch of social networking and personalized recommendations thrown in. The other, The Atavist, is experimenting with multimedia enhancements, adding video, music and other extras, without, hopefully, distracting the reader from the tale being told.
Byliner launched in San Francisco less than two years ago with a goal of collecting in one place the best literary non-fiction and narrative journalism out there. It links out to articles on other magazine sites, but also publishes what it calls Byliner Originals–pieces such as author William T. Vollmann’s “Into the Forbidden Zone,” a 20,000-word narrative about life after last year’s earthquake and tsunami in Japan, and Jon Krakauer’s “Three Cups of Deceit,” which focuses on the myths and deceptions at the heart of Three Cups of Tea, the best-seller by Greg Mortenson. If a story takes off–they sell for $2.99 for download on iPads, Kindles and Nooks–a writer can earn considerably more than he or she could make selling the piece to a magazine.
Because their stories are online, writers can be much more current than in a book, and they can add updates, something rarely done in magazines. Byliner also provides recommendations to visitors based on other stories they’ve read and liked–it’s been dubbed “the Pandora of non-fiction writing.” A few months ago, for “making literary nonfiction and journalism hip,” Byliner made it into the Top Ten of Fast Company’s list of most innovative media companies.
Mixing in maps and timelines
But it’s The Atavist, based in Brooklyn, that’s working closer to the cutting edge. It too champions longer nonfiction, but its iPad and iPhone app also invites readers to veer outside the text where the story can be clarified or strengthened by adding video–a story, for instance, titled “Lifted,” about a bank heist gone bad in Sweden, starts with security video of the robbers in action–or music or sound effects. Timelines, maps and background info on the characters are also available, all flagged with subtle gray arrows, the goal being to allow the narrative to flow with minimal disruption.
The Atavist publishes one major piece a month and each includes a feature through which you can easily toggle between the text and an audio version read by the author. A story for an iPad costs $2.99 and comes with the bells and whistles. Versions for Kindle and Nook, which are only text, cost $1.99.
But the real revenue engine at The Atavist is a custom-designed content management system that makes it fairly simple not just to create and publish multimedia stories, but also to automatically adapt their format to the platforms on which they appear. So the content for an iPhone will be optimized for a smartphone. The same goes for an iPad. And for a Kindle.
That’s potentially a game-changer in the storytelling business and it’s no surprise that the bulk of the Atavist’s revenue comes from licensing its software to other publishers. Later this summer it plans to release a free version to the public that will enable people to start self-publishing their own multimedia books.
And that shiny tool is what makes The Atavist much more than another digital publisher. It undoubtedly was a big reason the company was able to raise $1.5 million in seed money a few weeks ago. And if you still have doubts about the potential of this venture, consider some of its new investors: Eric Schmidt (Google’s executive chairman), Marc Andreessen (one of Netscape’s founders) and a group called the Founders Fund, which is led by the likes of Peter Thiel (a founder of PayPal) and Sean Parker (co-founder of Napster and the first president of Facebook).
Not bad company to be in. Not bad at all.
Here are other recent takes on how and why we tell stories:
- Your life is a lie, actually many lies: A recent book by Jonathan Gottschall, The Storytelling Animal: How Stories Make Us Human, elaborates on the reasons we tell stories, not the least of which is to bring meaning and order to the chaos of life. Also, as Maura Kelly pointed out in a recent review in The Atlantic, we tend to lie a lot to ourselves as we fine-tune the narratives of our lives.
- Here’s my brain’s story and it’s sticking to it: Neuroscientist Michael Gazzaniga talks about how research has found that a part of the left brain always wants to explain actions we’ve taken after they’ve occurred, the purpose of which is to turn behavior into a story that makes everything feel coherent.
- A wag of tales: In a fast-paced TED talk, storyteller Joe Sabia uses an iPad to trace the history of storytelling from the first interactive element–the pop-up book–to the re-versioning of Shakespeare on Facebook.
Video bonus: Here’s a little tutorial on how The Atavist tries to wrap extras through the thread of a narrative.