August 27, 2012
It was just about a year ago that a handful of Stanford professors began hatching a revolution in college education.
Sebastian Thrun, more widely known as the head of the team behind Google’s driverless car, decided that he and colleague Peter Norvig would start making their popular course in artificial intelligence available online. Free of charge. To anyone in the world. About 160,000 people signed up.
A few weeks later, another Google researcher/Stanford computer scientist, Andrew Ng, followed suit, offering his equally popular course, “Machine Learning,” for free. More than 100,000 people watched his lectures online. As Ng pointed out, it would have taken him 250 years to reach that many students in a conventional Stanford classroom.
The problem, of course, is that Stanford charges students in those conventional classrooms about $40,000 a year. Freebies were not a good business strategy.
By January, Thrun had lined up venture capital money and left Stanford to start Udacity, an independent, online-only education service focusing on science and technology courses. Within a few months, Ng and another Stanford computer scientist, Daphne Koller, had rounded up their own boatload of VC money–a reported $16 million to start with–and went on leave from Stanford to start their own online college operation called Coursera.
Less talk, more questions
But Ng and Koller have actually ratcheted things up another notch. Instead of just distributing its own online courses, Coursera has formed partnerships with some of America’s top universities to help them convert courses for free Internet access. Last month, the startup announced that in addition to its four original partners–Stanford, Princeton, Penn and Michigan–it has added 12 more, ranging from Duke and Johns Hopkins to the University of Toronto and the University of Edinburgh in Scotland.
So what does that mean? For starters, Coursera is spreading what’s becoming the new model for online teaching. No more videos of professors talking non-stop for an hour. Instead, lectures are chopped into much smaller chunks, say 10 minutes long, with students asked a quiz question every few minutes. They need to answer correctly before the video will continue.
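The gating idea is simple enough to sketch in a few lines of code. This is a hypothetical illustration of the “answer before you advance” model, not Coursera’s actual player:

```python
# Minimal sketch of "answer before you advance" video gating.
# All names are hypothetical, for illustration only.

def run_lecture(chunks, get_answer):
    """Play each ~10-minute chunk; repeat its quiz until answered correctly."""
    for chunk in chunks:
        # ... play chunk["video"] here ...
        while get_answer(chunk["question"]) != chunk["correct"]:
            pass  # wrong answer: the student reviews and tries again
    return "completed"

chunks = [
    {"video": "intro.mp4", "question": "2 + 2 = ?", "correct": "4"},
    {"video": "part2.mp4", "question": "Capital of France?", "correct": "Paris"},
]

# Simulate a student who gets the first quiz wrong once, then right.
answers = iter(["5", "4", "Paris"])
result = run_lecture(chunks, lambda q: next(answers))
print(result)  # completed
```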
And having tens of thousands of people taking the course at the same time makes it much easier than you would expect for students working on their own to find and study with like-minded classmates. Ng says that, on average, it takes only 22 minutes for someone to get a question answered in Coursera’s online forums.
The huge size of Internet classes–they’re now known as massive open online courses or MOOCs–also allows for much more comprehensive analysis of how subjects are taught and whether they’re understood. Since the online behavior of students is tracked–where they rewind videos, how they respond to quiz questions, etc.–professors can see where a large number of students may have struggled or given the same wrong answer and then make adjustments. Course material now not only has to be interactive, but also more dynamic. Immutable lectures, delivered as if chiseled on stone tablets, are going the way of chalkboards and elbow patches.
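Finding a question where many students gave the same wrong answer is, at bottom, a simple tally over the response logs. A rough sketch with made-up data (not Coursera’s actual analytics):

```python
from collections import Counter

# Hypothetical response log: (question_id, answer_given, was_correct)
responses = [
    ("q3", "B", False), ("q3", "B", False), ("q3", "C", True),
    ("q3", "B", False), ("q7", "A", True), ("q7", "D", False),
]

# For each question, tally the wrong answers students gave.
wrong = {}
for qid, ans, ok in responses:
    if not ok:
        wrong.setdefault(qid, Counter())[ans] += 1

# Flag questions where one wrong answer dominates -- a shared misconception.
for qid, counts in wrong.items():
    answer, n = counts.most_common(1)[0]
    if n >= 3:
        print(f"{qid}: {n} students chose '{answer}' -- revisit this material")
```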
Professors also will be teaching classes far more culturally diverse than any they’ve previously experienced. When Coursera announced a few weeks ago that its enrollment had topped one million in just four months, it also noted that the students who’ve signed up for courses live in 196 different countries. Six out of 10 are outside the U.S.
Can this make money?
Is this really where college is headed? It says something that last spring Harvard and MIT launched their own MOOC partnership called edX, and that over the summer, the University of California at Berkeley joined it. Even if top-line universities aren’t sure what they’ll gain by offering free courses to the world, they don’t want to risk being left behind if this is a template of the future.
Clearly, there remain some very large unanswered questions, starting with how any of these partnerships will make money. One notion is to charge a relatively small fee, say $50, for a student to receive a certified copy of a letter saying he or she has completed a course. In other words, it wouldn’t cost anything to take a class, but you’d have to pay for proof that you finished it.
Another idea Sebastian Thrun has floated is to have MOOCs serve as a new kind of placement service, using what they glean about students to help companies find employees with very specific skills. But, as recruiters from Intel and Dell told Bloomberg Businessweek recently, a certificate for an online course may help someone land a job, but only if they already have a conventional, sit-in-a-classroom four-year degree. Only a very few colleges, including the University of Washington and the University of Helsinki, have agreed to give credit to students who complete MOOC courses.
What about cheating?
No question, plenty of skeptics are dubious about the depth and quality of an online education; they feel the sheer size of the classes precludes any level of one-on-one learning and also invites cheating.
So far only about 25 percent of the people who have enrolled in Coursera courses have actually completed them. And earlier this month The Chronicle of Higher Education reported “dozens” of complaints about plagiarism in essays written for some of the humanities courses Coursera is now offering. (Almost all of the free online courses to date have been in science or technology.)
The accusations actually came from other students, who, in the Coursera system, grade and comment on each other’s essays. In response to the complaints, Coursera reminded students of the honor code they signed when they enrolled. It also is considering using software that can detect plagiarism.
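Plagiarism-detection software of the kind Coursera is considering typically works by comparing overlapping word sequences between a submission and a candidate source. A toy sketch of that n-gram idea (hypothetical code, not any actual product):

```python
def ngrams(text, n=3):
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(submission, source, n=3):
    """Jaccard similarity of word n-grams: 0.0 (no overlap) to 1.0 (identical)."""
    a, b = ngrams(submission, n), ngrams(source, n)
    return len(a & b) / len(a | b) if a | b else 0.0

essay = "the industrial revolution transformed european society in profound ways"
wiki  = "the industrial revolution transformed european society and its economy"

score = overlap(essay, wiki)
print(f"{score:.2f}")  # a high score flags the essay for human review
```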
Some professors in the program have suggested that cultural differences could, at least in part, explain why someone would lift whole sections of text from Wikipedia for a course for which they’re not receiving any credit. Eric Rabkin, a University of Michigan English professor who teaches a Coursera class, told the Chronicle that one student who admitted plagiarizing content said he didn’t realize copying and pasting text from another site was inappropriate.
Coursera’s Daphne Koller would point out that this is the price of making top college courses available in places where a year ago it would have been inconceivable. She put it this way recently: “This could enable a wave of innovation because amazing talents can be found anywhere. Maybe the next Albert Einstein, or the next Steve Jobs, is living in a remote village in Africa.”
Here are a few other ways technology is changing education:
- Pack light: Another well-financed online initiative called The Minerva Project will be added to the mix by 2014. Its goal is to be the first elite global university. From sophomore year on, students will be encouraged to live in a new country, or at least a new city, every semester.
- That algorithm just doesn’t understand me: Winners of a competition sponsored by the Hewlett Foundation have devised algorithms that can grade essays.
- Today’s assignment is from “Mythbusters”: Big media companies, such as Discovery Communications and the News Corporation, are moving into the digital textbook business in a big way. They see it as a boom market that could become a new source of revenue.
- You tie shoes?: According to an infographic from LearnStuff.com, 1.5 million iPads will be used in classrooms this year. Also, while 70 percent of American children between ages two and five can use a computer mouse, only 11 percent can tie their own shoes.
Video bonus: Want to hear why so many top universities have become enamored of Coursera? Here’s co-founder Daphne Koller, in a recent TED talk, laying out why online courses should be a big part of college education’s future.
Also on Smithsonian.com
August 20, 2012
Last week The Voice was back. I’m not referring to the treacly TV show or the latest crooner chased down by TMZ. I’m talking about Julia Child.
In honor of what would have been her 100th birthday, America’s first real TV chef was all over the airwaves. Or at least her voice was, a voice that, on first hearing, sounded like it could set off car alarms, or maybe was a car alarm. But it was all part of the package, a presence as genuine as it was gangly. There was nothing snooty about Julia as she taught Americans French cooking. If you dropped a piece of lamb and you were alone in the kitchen, she once confided to viewers, just pick it up. No one had to know.
So it was no small irony that the day after her birthday, the New Scientist’s website published a piece about how robots, sensors and augmented reality are now being used to train novice chefs. It’s good that Julia never had to hear about this.
Something’s watching you
Nonetheless, this is where cooking is headed, a future where precision and skill in the kitchen will have as much to do with what’s watching as who’s training.
Consider the setup that computer scientist Yu Suzuki and his team have created in a test kitchen at Kyoto Sangyo University. They’ve installed cameras and projectors on the ceiling that project cooking instructions right on the ingredients.
So, let’s say you want to filet a fish. Once you place it on a chopping board, the camera detects its size and shape and the projector then overlays the equivalent of a virtual dotted line showing you where to make the cut. In a macabre twist, instructive word bubbles appear at the fish’s mouth to ensure that its gutting is done properly.
So far, because the scientists have to program each process manually, Suzuki’s system can teach people only how to prepare fish and peel onions. But he promises that once it’s automated, its repertoire will grow quickly.
Do the right thing
Then there’s Jinna Lei, a robotics Ph.D. student at the University of Washington. She’s also using cameras in the kitchen, specifically Kinect-like depth-sensing cameras capable of recording both the shape and appearance of kitchen objects. That allows the system to track cooking actions, such as whether a certain ingredient has been poured into a bowl.
Eventually, says Lei, the system should be able to alert the cook if he or she makes a mistake. Already, she’s tested it with a cake-baking video and it was able to identify, in seconds, the start and end points of 17 different recipe actions.
Still another chef-teaching technique has been developed by researcher Thomas Ploetz at the University of Newcastle in the U.K. He has installed sensors in kitchen utensils that record when and how they’re used by novice cooks. And since the cooks hear their instructions from a computer in French, they learn both cooking and French.
Now that Julia would have loved.
Here are more recent innovations on the food front:
- Oodles of noodles: A Chinese restaurateur has started mass-producing robots that can tirelessly hand-slice noodles into a pot of boiling water. One robot costs about $2,000 in American dollars; a human doing the same job in China would make about $4,700 a year. (That’s right, $4,700.)
- I, Sushi Master: Meanwhile, in Japan, a new robot is cranking out 2,500 perfect sushi rolls an hour. The machine injects a puff of air into each tofu skin to open it up fully, then a second robotic probe tucks the sushi rice inside the corners.
- The printer needs more meat: A startup in Missouri is promoting the idea that one day hamburgers could be produced on a 3-D printer. The company, Modern Meadow, thinks it will be able to “print” slivers of environmentally friendly, in-vitro meat. I know, doesn’t sound too tasty, but PayPal co-founder and billionaire Peter Thiel has kicked in about $300,000 to see if it could actually work.
- Can you earn rewards for banning cell phone yakking?: If they can make a game out of running a farm, why not one where you manage a restaurant? So now there’s a game app called Cafeteria Nipponica where you hire staff, create dishes, maybe set up a mobile phone campaign to get customers in the door. And if you really get serious, you can try your hand at running three restaurants at the same time.
- Do we really need to make it easier to buy donuts?: Dunkin’ Donuts has gone the Starbucks route and is now offering a mobile payment app that lets you set up your own donut account where you can pay at the counter by scanning your phone over a barcode. You can even use the app to send donut gift cards to your friends, for which they will either love you or hate you. Probably both.
Video bonus: In case you forgot what a charmer Julia Child could be, watch this 1987 clip where she whips out a blow torch to grill up a burger for David Letterman. And for a bonus bonus, here’s a great new remix of Julia at her snappy best.
August 14, 2012
Chances are you think you already have enough information in your life. Why, oh why, would you want to add more layers?
Yet there’s something intriguing about the concept of augmented reality, the notion of enhancing objects in the real world with virtual sounds and images and additional info. And when Google revealed earlier this year that it was developing glasses that will be part wearable computer, part digital assistant that flashes relevant data right before your eyes, augmented reality (AR) no longer seemed such a digital parlor trick. The geek gods had spoken.
In fact, a recent analysis by the London firm ABI Research concludes that the next big phase of AR–now largely played out on smartphones and tablets–will be through wearable tech. That’s when the technology will become truly functional, when your glasses are able to tell you everything you want to know about the restaurants and stores on the block where you’re walking.
Will Powell, an AR wiz recently interviewed by Slash Gear, concurs:
I think that with the desire for more content and easier simpler devices, using what we are looking at and hearing to tell our digital devices what we want to find is the way forward. Even now we have to get a tablet, phone or laptop out to look something up. Glasses would completely change this because they are potentially always on and are now adding full time to at least one of our fundamental senses.
Scenes from an exhibition
One place, however, where AR is still making its mark on small screens is the museum world. Those who run museums know that the people walking around their buildings are already spending an inordinate amount of time using their phones, whether it’s taking pictures or texting friends or taking pictures to text to friends. So it only makes sense to find ways to turn phones into storytelling tools that can bring the inanimate to life. Or shift time. Or add layers of knowledge. More museums are taking the leap and while the results can sometimes still seem a bit gimmicky, it’s a move in the right direction.
One of the latest examples is an exhibit called “Ultimate Dinosaurs” that opened at the Royal Ontario Museum in Toronto earlier this summer. It uses augmented reality to add flesh to the bones of dinosaurs and lets them move around. In some cases, you can use an app on your smartphone to make beasts pop out of markers around the exhibit, including on the floor; in others you can use iPads provided by the museum to turn fossils into fleshed-out creatures. And along the walls are animated projections of dinos that also are interactive. With the help of a Kinect 3-D camera, their eyes follow your every move. A bit creepy, but what museum couldn’t use a little thrill?
Instead of reconstituting dinosaurs, the Laguna Beach Art Museum in California is using AR to bring motion to still photos. Dancers frozen in an image start to spin on your smartphone screen; a woman captured under water suddenly swims away. It’s the first phase of images escaping their frames.
The Getty Museum in Los Angeles is taking yet another approach. In an exhibit titled “Life of Art,” it enables visitors to use iPads to explore in much more detail–and even rotate–classic historical objects from its permanent collection–a 17th century lidded porcelain bowl from Asia, for instance, and an 18th century French armchair.
But maybe the most engaging twist of AR with an exhibit has been pulled off by the Science Museum in London. An iPhone app turns James May, one of the hosts of the popular BBC show “Top Gear,” into a virtual museum guide. By aiming the camera at a marker near nine of the exhibits in the Making the Modern World Gallery, you conjure up a CGI version of May, spinning tales and reeling off details about steam engines and the first home computers.
What is reality?
Here are other examples of augmented reality pushing envelopes:
- Now that’s point-and-shoot: Researchers at MIT’s Media Lab have developed an AR device they call EyeRing. It’s a tiny camera you wear on your finger; when you take a picture of an object, the ring transmits the image to a smartphone that gives you information about what you’ve photographed.
- But does it work on bald?: Meanwhile, the folks at Disney Research have created a technology using reverse electrovibration that projects texture onto smooth surfaces.
- Really interior design: The 2013 edition of the IKEA catalog has its own AR spin. You can use a smartphone app to see inside cabinets and get design ideas not available to those satisfied only with reality.
- But wait, there’s more: The Los Angeles Times used the start of the London Olympics to join print publications dabbling in AR. It rolled out an app that enabled readers to get more material by hovering their phones over Olympics photos in the paper.
- For those who expect more from their chips than crunch: We should all be grateful that we have lived long enough to experience potato chip bags that predict the weather. This month and next, Walkers crisps will come in bags that, once you download the appropriate mobile app, share the weather report for today and tomorrow. There are no plans, as yet, for five-day forecasts.
Video bonus: Here’s a demo video showing how dinosaurs come back to life in a Toronto museum.
July 30, 2012
Not long ago, banner ads showing coffins draped with American flags started appearing on websites in Yemen. They had been placed by supporters of Al Qaeda in the Arabian Peninsula. Their message was that Americans were the enemy and Al Qaeda was killing them.
A few days later people working for the U.S. State Department posted banners on the same websites, only this time the coffins were covered with Yemeni flags, photoshopped into the image. The message also had changed. This time it said that most of the people killed by Al Qaeda in the Arabian Peninsula were Yemenis.
For all the attention paid to drone strikes and intelligence coups, the daily grind of counterterrorism is as much a digital parry and thrust, a continuous war of words and ideas played out on websites, chat rooms, forums, blogs and Twitter feeds. Now, experts will tell you, it’s all about the cyber-narrative.
And the State Department, specifically a group within it called the Center for Strategic Counterterrorism Communications, is taking on this role with tools and techniques few could have imagined in the days after 9/11. Among other things, they’re training people to be trolls.
Hit them with your best shot
It’s part of something called Viral Peace. As yet, it’s a small project with a minuscule budget by federal government standards, but this gives you a sense of what’s now in play when it comes to counterterrorism tactics. The man behind it, a former Silicon Valley geek named Shahed Amanullah, believes that impressionable young men and women can be discouraged from becoming terrorists by challenging and undercutting extremists online, which is where they do most of their recruiting.
As he told Wired in a recent interview, Amanullah intends to use “logic, humor, satire, religious arguments, not just to confront them, but to undermine and demoralize them.”
To that end he sent two members of his team to Muslim countries–Indonesia, Singapore, Malaysia, the Philippines, Pakistan–where they met with young adults who had already developed online followings. Better for them to do the trolling instead of people who’d be seen as mouthpieces of the U.S. government.
How effective this guerrilla strategy of ridicule and rebuke will ultimately be is anyone’s guess, although people who monitor extremists online say they generally don’t respond well to being challenged. But it’s clear that the strategy of using the Web to take on terrorists goes all the way to the top of the State Department.
None other than Hillary Clinton was the one who proudly revealed the story of the photoshopped coffins.
Have I got a story for you
Meanwhile, over at the Pentagon, the focus on controlling the narrative has taken an even more intriguing turn. DARPA, the Defense Department agency that funds cutting-edge research, is underwriting a study of what happens in the brain to incite political violence and how reshaping the narrative can help make people less radical.
The concept is called Narrative Networks and it looks at how stories affect the brain and human behavior, with the goal of finding ways to present narratives that help persuade people not to become terrorists.
Critics have already railed that it has all the makings of a new form of mind control, that with the highly sophisticated brain scans available today, a government could get a far better sense of how to refine messaging to make it more effective at changing people’s minds.
One of the researchers on the project, Paul Zak, of Claremont Graduate University in California, studies how listening to stories affects the brain’s release of oxytocin, known as the “love” or “trust” hormone. He says the purpose of the research is to see what kind of messages would help people view the military in the best possible light.
“We’re not in the business of reading people’s minds or implanting thoughts,” says Greg Berns, an Emory University professor also doing brain research for DARPA. “By understanding the biology of what causes people to go to war, we might begin to understand how to mitigate it.”
The fight stuff
Here’s more of the latest research into devices geared to 21st century warfare:
- Inner vision: Veritas Scientific is developing for the Pentagon a helmet it says will help identify enemies. When placed on a person’s head, it would use sensors to read their brain’s reactions to images flashed on the helmet’s visor, such as specs for how to make a bomb.
- Think fast: U.S. soldiers may soon be able to use a new technology called Sentinel, binoculars connected to a computer that would actually speed up the brain’s normal thought-processing so threats can be identified more quickly.
- Shock troops: Next month some U.S. soldiers in Afghanistan will start carrying a small pack called a Soldier Body Unit. Developed by the Georgia Tech Research Institute, it’s equipped with sensors that will measure the force of blasts that soldiers have been exposed to and help doctors know whether a soldier has suffered a concussion.
- That’s what he said: In May DARPA awarded a $7 million contract for the first phase of a project to create software that not only would translate all aspects of a foreign language–including slang, regional dialects and text messaging lingo–but would do it in real time.
- Sound effects: And earlier this month DARPA unveiled a technique for putting out a fire using only sound. By playing a low-frequency bass note through two speakers pointed at the flame, researchers were able to increase air velocity and create a wider and cooler flame that sputtered out.
Video bonus: DARPA’s also been very big on funding robots. Here’s its AlphaDog Robot lugging 400 pounds over rugged terrain.
June 21, 2012
For all the speech lines we hear about jobs these days, rarely does anyone mention robots.
Robots do come up occasionally, but usually only in the “innovation” speeches. This is understandable. If you’re running for office, it’s better to keep the two ideas separate: jobs are good because they’re, well, jobs, and robots are good because they mean progress. But mix the two together and soon enough people will start asking how you’ll be able to create a lot of jobs if these really smart machines are doing more and more of the work.
No, I’m not going all Luddite on you. I’m in awe of machines and the remarkable things they can now do. But that’s the point. We’re not talking about the technology of the past, which clearly made humans more productive and allowed us to move into better-paying jobs requiring more specialized skills.
Now we’re creating machines that are much more than tools. They’re learning to think and adapt, and technologists such as Martin Ford, author of The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future, believe that within five to ten years, machines will be able to surpass the ability of humans to do routine work. As he told The Fiscal Times: “It’s the first time we’ve had this level of technology that allows machines to solve problems on their own, to interact with their environment, to analyze visual images, and to manipulate their environment based on that.”
Do robots know “Kumbaya?”
There are those, of course, who feel that Ford and other techno-downers have the human-robot thing all wrong. Futurist Ray Kurzweil, for one, is convinced that by mid-century, humans and robots will merge in some form. Maybe we’ll be able to live forever in a body of artificial parts. Or our consciousness will live on inside a computer, a kind of humanoid software. Whatever shape it takes, Kurzweil already has a name for it–the singularity.
Kurzweil’s take is that machines are gaining intelligence so quickly that it won’t be that long before they’re considerably more intelligent than humans. And he says we should be encouraged by this, not threatened. Technology will only continue to make our lives better, he contends, in ways we can’t yet imagine.
Five years ago, he likes to point out, who would have thought that hundreds of millions of people around the world would be walking around with devices as powerful as smartphones? Or that almost half a million people could have jobs in the business of making mobile apps?
Still, all of this doesn’t seem to bode well for people who don’t have the skills to play in that world. Earlier this month, Forbes, in an article titled “Is Your Job Robot-Proof?,” noted that, “Today America needs 5 million fewer workers to produce a greater value of goods and services than it did in December 2007 when the recession began.”
And other recent news from the robot front provides more grist for worriers like Ford. Canon just announced that it has begun phasing out human workers in its plants, and that in a few years its cameras will be made solely by robots. Earlier this month DARPA, the R&D arm of the Pentagon, awarded $1.2 million to a Georgia start-up to develop machines that would allow U.S. factories to “produce garments with zero direct labor.” That might allow American clothing factories to actually undercut the costs of cheap labor in China.
Or maybe not. Foxconn, the giant Chinese company known both for manufacturing Apple products and for worker suicides, announced last year that it will create a “robot kingdom” of more than 1 million robots within the next few years.
If you could read my mind
But there’s been at least one recent development that’s more in line with Kurzweil’s vision of robot-human togetherness. Researchers at MIT say they’ve developed an algorithm that will enable robots to work side-by-side with humans. The software apparently will allow robots to learn the preferences of their human partners and anticipate their needs out on the factory floor. And if the machine has to move on to help another worker, it would be able to quickly adapt to him or her.
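One simple way a machine can “learn the preferences” of a human partner is to count which action tends to follow which, then pre-stage for the likeliest next step. A toy sketch of that idea (hypothetical data and names, not the MIT team’s actual algorithm):

```python
from collections import Counter, defaultdict

# Observed sequences of a worker's assembly steps (made-up data).
observed = [
    ["pick_part", "align", "bolt", "inspect"],
    ["pick_part", "align", "bolt", "inspect"],
    ["pick_part", "align", "weld", "inspect"],
]

# Count which action follows which -- a first-order transition table.
transitions = defaultdict(Counter)
for seq in observed:
    for current, nxt in zip(seq, seq[1:]):
        transitions[current][nxt] += 1

def predict_next(action):
    """Return the most frequently observed follow-up action, if any."""
    follow = transitions.get(action)
    return follow.most_common(1)[0][0] if follow else None

# The robot sees the worker aligning a part and stages for the likely next step.
print(predict_next("align"))
```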
Julie Shah, head of the MIT research team, put it this way: “It’s an interesting machine-learning human-factors problem. Using this algorithm, we can significantly improve the robot’s understanding of what the person’s next likely actions are.”
Wonder if they’ll be able to smell fear?
Mo’ better machines
Here are more reports on robots rising:
- Snakes on a vein: Scientists at Carnegie Mellon University in Pittsburgh have created tiny, snake-like robots that, armed with cameras, scissors, forceps and sensors, are helping doctors perform surgery on hearts and cancer tumors.
- The barista will not respond to lame attempts at flirtation: A start-up at the University of Texas has installed a coffee kiosk run by a robot barista in the campus’ academic center. Students can order their drink online or on their phone and receive a text when it’s good to go.
- So much for the career in sushi: Yes, the Japanese have been on to the robots-in-restaurants thing for a while. But now food machine manufacturer Suzumo has developed a sushi-making robot that can crank out 2,500 pieces in an hour.
- Don’t even think about pulling my finger: Researchers at the University of Southern California have given robots a sense of touch–one, in fact, that’s actually more sensitive than a human’s finger. Sensors can even tell where and in which direction forces are applied to a robot’s fingertip.
- That’s nice, but it still doesn’t do windows: Roomba, the king of household robots, is going wireless. iRobot announced earlier this week that its new Roomba 790, which retails for a mere $699, will come with a “wireless command center” that, among other things, will allow you to schedule it to clean your house while you’re not home.
Video bonus: From the land that nailed Robot Cute long ago comes i-SODOG, a robot pup that shakes, dances, responds to voice commands and can be trained through your smartphone. Ignore the background din of little awestruck Japanese kids. This is a toy you’ll want to take to work, to dates, to job interviews. How could they not hire you?