May 8, 2013
Cell phones are so many things now–computer, map, clock, calculator, camera, shopping device, concierge, and occasionally, a phone. But more than anything, that little device that never leaves your person is one amazingly prolific data engine.
Which is why last October, Verizon Wireless, the largest U.S. carrier with almost 100 million customers, launched a new division called Precision Market Insights. And why, at about the same time, Madrid-based Telefonica, one of the world’s largest mobile network providers, opened its own new business unit, Telefonica Dynamic Insights.
The point of these ventures is to mine, reconstitute and sell the enormous amount of data that phone companies gather about our behavior. Every time we make a mobile call or send a text message–which pings a cell tower–that info is recorded. So, with enough computer power, a company can draw pretty accurate conclusions about how and when people move through a city or a region. Or they can tell where people have come from to attend an event. As part of a recent case study, for example, Verizon was able to say that people with Baltimore area codes outnumbered those with San Francisco area codes by three to one inside the New Orleans Superdome for the Super Bowl in February.
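As a rough illustration, here’s a minimal sketch of the kind of aggregation such a case study involves, assuming a hypothetical table of anonymized pings; the field names and data layout are invented for the example, not Verizon’s actual schema:

```python
# Minimal sketch: inferring where event attendees came from, given
# hypothetical anonymized records of (hashed_device_id, home_area_code,
# tower_id, timestamp). Illustrative assumptions throughout.
from collections import Counter

def attendee_origins(pings, event_towers, start, end):
    """Count distinct devices by home area code among pings that hit the
    event's cell towers during the event window."""
    seen = {}  # hashed_device_id -> home_area_code, so each device counts once
    for device_id, area_code, tower_id, ts in pings:
        if tower_id in event_towers and start <= ts <= end:
            seen.setdefault(device_id, area_code)
    return Counter(seen.values())

# Toy example: pings around the Superdome during the game
pings = [
    ("a1", "410", "superdome-3", 1900),  # 410 = Baltimore
    ("b2", "415", "superdome-1", 1905),  # 415 = San Francisco
    ("a1", "410", "superdome-2", 1950),  # same device again, not double-counted
    ("c3", "410", "superdome-1", 2010),
]
print(attendee_origins(pings, {"superdome-1", "superdome-2", "superdome-3"}, 1800, 2200))
# Counter({'410': 2, '415': 1})
```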
In a world enamored of geolocation, this is digital gold. It’s one thing to know the demographic blend of a community, but being able to find out how many people pass by a business and where they’re coming from adds a whole other level of precision to target marketing.
Follow the crowd
But this data has value beyond companies zeroing in on potential customers. It’s being used for social science, even medical research. Recently IBM crunched numbers from 5 million phone users in the Ivory Coast and, by tracking which cell towers people connected to as they moved around, was able to recommend 65 improvements to bus service in the city of Abidjan.
And computer scientists at the University of Birmingham in England have used cell phone data to fine-tune analysis of how epidemics spread. Again, it’s about analyzing how people move around. Until recently, much of what scientists knew about the spread of contagious diseases was based on guesswork. But now, thanks to so many pings from so many phones, there’s no need to guess.
It’s important to point out that no actual identities are connected to cell phone data. It all gets anonymized, meaning there shouldn’t be a way to track the data back to real people.
There shouldn’t be.
Leaving a trail
But a study published in Scientific Reports in March found that even anonymized data may not be so anonymous after all. A team of researchers from Louvain University in Belgium, Harvard and M.I.T. found that by using data from 15 months of phone use by 1.5 million people, together with a similar dataset from Foursquare, they could identify about 95 percent of the cell phone users with just four data points, and 50 percent of them with just two. A data point is an individual’s approximate whereabouts at the approximate time they’re using their cell phone.
The reason that only four locations were necessary to identify most people is that we tend to move in consistent patterns. Just as everyone has unique fingerprints, everyone has unique daily travels. While someone wouldn’t necessarily be able to match the path of a mobile phone–known as a mobility trace–to a specific person, we make it much easier through geolocated tweets or location “check-ins,” such as when we use Foursquare.
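To make the idea concrete, here’s a toy version of that uniqueness test, assuming each person’s trace is simply a set of (tower, hour) points; the three made-up traces below stand in for the study’s 1.5 million:

```python
# Toy version of the "Unique in the Crowd" test: draw a few random points
# from a user's mobility trace and see whether they single that user out.
import random

random.seed(0)  # reproducible sampling

traces = {
    "user_a": {("tower_1", 8), ("tower_9", 12), ("tower_4", 18), ("tower_1", 22)},
    "user_b": {("tower_1", 8), ("tower_2", 12), ("tower_4", 18), ("tower_7", 22)},
    "user_c": {("tower_3", 9), ("tower_9", 12), ("tower_4", 19), ("tower_1", 22)},
}

def is_unique(user, k):
    """Draw k random points from a user's trace and check whether those
    points match exactly one user in the whole dataset."""
    sample = set(random.sample(sorted(traces[user]), k))
    matches = [u for u, trace in traces.items() if sample <= trace]
    return matches == [user]

for k in (2, 4):
    hits = sum(is_unique(u, k) for u in traces)
    print(f"{k} random points uniquely identify {hits} of {len(traces)} users")
```

With only three users, two points sometimes match more than one trace; with realistic traces and millions of users, four points almost always pin down a single person, which is the study’s result.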
“In the 1930s, it was shown that you need 12 points to uniquely identify and characterize a fingerprint,” the study’s lead author, Yves-Alexandre de Montjoye, told the BBC in a recent interview. “What we did here is the exact same thing, but with mobility traces. The way we move and the behavior is so unique that four points are enough to identify 95 percent of the people.”
“We think this data is more available than people think. When you share information, you look around and you feel like there are lots of people around–in a shopping center or a tourist place–so you feel this isn’t sensitive information.”
In other words, you feel anonymous. But are you really? De Montjoye said the point of his team’s research wasn’t to conjure up visions of Big Brother. He thinks there’s much good that can come from mining cell phone data, for businesses, for city planners, for scientists, for doctors. But he thinks it’s important to recognize that today’s technology makes true privacy very hard to keep.
The title of the study? “Unique in the Crowd.”
Here are other recent developments related to mobile phones and their data:
- Every picture tells your story: Scientists at Carnegie Mellon University’s Human-Computer Interaction Institute say their study of 100 smartphone apps found that about half of them raised privacy concerns. For instance, a photo-sharing app like Instagram provided information that allowed them to easily discover the location of the person who took the photo.
- Cabbies with cameras: In the Mexican city of Tuxtla Gutiérrez, taxi drivers have been provided with GPS-enabled cell phones and encouraged to send messages and photographs about accidents or potholes or broken streetlights.
- Follow that cell: Congress has started looking into the matter of how police use cell phone data to track down suspects. The key issue is whether they should be required to get a warrant first.
- Follow that cell II: Police in Italy have started using a data analysis tool called LogAnalysis that makes it especially easy to visualize the relationships among conspiring suspects based on their phone calls. In one particular case involving a series of robberies, the tool showed a flurry of phone activity among the suspects before and after the heists, but dead silence when the crimes were being committed.
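As a rough sketch of what that before-and-after pattern looks like in raw call records (the records, window and numbers below are invented; LogAnalysis itself builds far richer relationship graphs):

```python
# Toy sketch of the burst pattern: bucket call timestamps among a group
# of suspects relative to a known event window.
def call_volume(call_times, event_start, event_end):
    """Count calls before, during and after an event window."""
    buckets = {"before": 0, "during": 0, "after": 0}
    for ts in call_times:
        if ts < event_start:
            buckets["before"] += 1
        elif ts <= event_end:
            buckets["during"] += 1
        else:
            buckets["after"] += 1
    return buckets

# Calls among suspects around a heist running from t=100 to t=160
calls = [40, 55, 90, 95, 170, 175, 180, 210]
print(call_volume(calls, 100, 160))
# {'before': 4, 'during': 0, 'after': 4} -- chatter before and after, silence during
```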
Video bonus: If you’re at all paranoid about how much data can be gleaned from how you use your mobile phone, you may not want to watch this TED talk by Malte Spitz.
April 19, 2013
Question: What’s needed to raise the quality of school teachers in America?
Answer: A bar exam?
So say the head of the country’s most powerful teachers’ union, the governor of New York and the U.S. secretary of education, among others. Their contention is that the only way teachers can truly elevate their profession–and with it the level of public education–is if they follow the lead of doctors, lawyers and engineers and are required to pass a test to prove mastery of their subject matter and how to teach it.
Randi Weingarten, president of the American Federation of Teachers (AFT), first floated the idea last summer at the Aspen Ideas Festival when asked what more could be done in training teachers. Then, late last year, her union put out a report, titled “Raising the Bar,” that pushed the idea further, calling for “a rigorous entry bar for beginning teachers.”
The debate has raged on ever since.
Joining those singing the praises of a tough teacher assessment is Joel Klein, the former chancellor of New York City’s Department of Education. Writing on The Atlantic website, he pointed out that pretty much anyone who graduates from college in America today can become a teacher, and that “job security, not teacher excellence, defines the workforce culture.” He also quoted a sobering statistic from McKinsey: The U.S. gets nearly half of its teachers from the bottom third of its college classes.
And just last weekend, in the New York Times, Jal Mehta, an associate professor at the Harvard Graduate School of Education, wrote that compared to many other fields where quality is maintained by building a body of knowledge and training people in that knowledge, “American education is a failed profession.”
“We let doctors operate, pilots fly and engineers build because their fields have developed effective ways of certifying that they can do these things. Teaching, on the whole, lacks this specialized knowledge base; teachers teach based mostly on what they have picked up from experience and from their colleagues.”
So what exactly do the proponents have in mind? For starters, they think any exam would need to focus both on the prospective teacher’s subject and on teaching more generally, particularly the social and emotional aspects of learning. While states would be able to adapt the guidelines, the intent would be to set national certification standards. And, above all, the process would need to be “rigorous.” They say “rigorous” a lot.
AFT’s proposal also recommends that American universities get much more selective in accepting students into education programs, requiring a minimum 3.0 grade point average plus a score in the top third on college entrance exams. The goal, ultimately, is to make teaching a skill to be mastered, and one that requires serious preparation. Said Weingarten: “It’s time to do away with a common rite of passage into the teaching profession—whereby newly minted teachers are tossed the keys to their classrooms, expected to figure things out, and left to see if they and their students sink or swim.”
Of course, not everyone thinks this is such a good idea. Some critics have suggested that it’s a ploy by the teachers’ union to sound high-minded, while actually aiming to protect its current members–who likely wouldn’t have to take the exam–and to justify a sizable bump in salary. Or that it’s really a swipe at programs like Teach for America, which offers a different route to becoming a teacher.
Still others think that focusing so much on a test score doesn’t make sense for a profession so dependent on interpersonal and motivational skills. Jonathan Kozol, author of numerous books on education, including “Letters to a Young Teacher,” makes the point that no test, no matter how refined, could adequately measure what he thinks is a good teacher’s greatest quality: that he or she loves being with students. The only way you can gauge that, he says, is by watching them teach.
And Jason Richwine and Lindsey Burke, both of the Heritage Foundation, a conservative think tank, argued recently in The Atlantic that having knowledge and being able to impart it are two different things. They wrote:
“A teacher with a doctorate degree, every certification and license available, and 15 years of experience is no more likely to be a high performer than a teacher with a B.A., the minimal certification, and five years of experience.”
In the end, this discussion often ends up in Finland. It’s the Magic Kingdom of Education, the place the experts talk about when they imagine what American teachers could be. Roughly 40 years ago, the Finnish government concluded that the key to the country’s economic future was a first-class public education system. And the key to that was a system that gave teachers the prestige of doctors.
To even be accepted into a Finnish teacher education program, candidates must be at the top of their class, complete exams on pedagogy, be observed often in clinical settings, and pass a challenging interview. Only about 1 in 10 Finnish applicants are accepted to study to be teachers. And while the U.S. has more than 1,200 universities that train teachers, Finland has only eight. In short, teachers need to earn the right to feel special.
So, does the elevated status of teachers there result in better students? Yes, you could say that. In science, in math, in reading, Finnish students rank among the best in the world.
Here are other recent innovations in education:
- Never start by trying to learn Chinese: One of the hot trends in higher education is predictive analysis, which evaluates student data to identify those at risk of dropping out and to flag which course sequences are more likely to keep kids in school and which are more likely to push them toward leaving.
- Even tests can be all about you: A new online portal called Smart Sparrow allows teachers to offer material that’s adapted specifically to a student. For instance, quiz questions can be based on how a student answered the previous question: get it right and the next question is harder; get it wrong and it’s easier (a minimal sketch of that loop follows below this list).
- Do the math: A company called Mango Learning is building a reputation for its mobile apps that teach grade school kids math. They’re interactive games that supposedly can make kids even want to add decimals.
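Here, as promised, is a minimal sketch of that adaptive loop; the difficulty bounds and one-step moves are illustrative assumptions, not Smart Sparrow’s actual model:

```python
# Minimal sketch of adaptive quizzing: step difficulty up after a correct
# answer, down after a wrong one, clamped to a fixed range.
def next_difficulty(current, answered_correctly, max_level=5):
    """Return the difficulty level for the next quiz question,
    clamped to the range 1..max_level."""
    if answered_correctly:
        return min(current + 1, max_level)
    return max(current - 1, 1)

level = 3
for correct in (True, True, False, True):
    level = next_difficulty(level, correct)
    print(f"answered {'right' if correct else 'wrong'} -> next question at level {level}")
```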
Video bonus: The Young Turks online news show offers its take on what makes Finnish education so special.
March 25, 2013
I committed my first texting heresy a few years ago when my son was away at college. I asked him about a class he was taking and needed three, maybe four sentences to express myself.
He responded with bemusement. Or maybe it was disgust. Who could tell?
But his message was clear: If I continued to be so lame as to send texts longer than two sentences–using complete words, no less–he would have little choice but to stop answering.
I was reminded of this less-than-tender father-son moment recently by a post by Nick Bilton for The New York Times’ Bits blog in which he railed against those who send “Thank you” emails, among other digital transgressions.
His contention is that such concise expressions of gratitude, while well-intended, end up being an imposition for recipients who have to open up an email to read a two-word message. Better to leave the sentiment unexpressed–although he does concede that it probably makes sense to indulge old folks, who are much more likely to appreciate the appreciation.
Bilton’s larger point is that as technology changes how we communicate and gather information, we need to adapt what we consider proper etiquette. Why should we continue to leave voice mails, he argues, when a text is much more likely to be answered? And why, he asks, would anyone these days be so rude as to ask for directions?
Not that this is the first time that tech is forcing an etiquette rethink. Bilton harkens back to the early days of the telephone when people truly didn’t know what to say when they picked up a ringing phone. Alexander Graham Bell himself lobbied for “Ahoy,” while Thomas Edison pushed for “Hello.” Edison ruled, of course, although now that our phones tell who’s calling before we have to say a word, the typical greeting has devolved to “Hey” or the catatonically casual “‘S up.”
Sure, some of this is a generational thing–The Independent nailed that in a recent piece on how members of three generations of one family communicate–or not–with each other.
But it’s also about volume. Email never sleeps. For a lot of people, each day can bring a fire hose of digital messages. Imagine if you received 50 to 100 phone calls a day. You can bet you’d be telling people to stop calling.
If the purpose of etiquette is to be considerate of other people, Bilton would contend that that’s the whole idea behind cutting back on emails and voice mails. And he’d have a point.
Me, my phone and I
But then there’s the matter of device isolation. I’m sure you know it well by now–the person who starts texting away during a conversation, or a meal, or even a meeting, which is one of those things bosses tend not to like (not to mention that it probably also means the death of doodling).
It’s hard to put a positive spin on this since it does send a pretty clear message: I’d rather focus my energy on connecting to someone through a device than in person. Maybe it’s just me, but that, I’d say, reeks of rude.
If anything, it’s going to get worse, especially with wearable tech about to go mainstream. Some think this is the year the smart watch could start to become the accessory of choice, which means people will be looking at their wrists a lot more in the future–not so much to check the time, which is rude enough, but more to see who’s sent them emails and texts.
And what about when Google Glass goes on the market later this year? They’re glasses that will enable you to check emails, go on the Web, watch videos, even take pictures, all while feigning eye contact with the people you’re with. And the Google Glass camera raises all kinds of issues. Will wearers have to make pre-date agreements not to take stealth photos, particularly any involving eating or drinking? Is anyone fair game in a Google Glass video?
But beyond questions of privacy and social boorishness, the impact of our obsession with digital devices, especially when it comes to the loss of personal connections, could go much deeper. In a piece in Sunday’s New York Times, Barbara Fredrickson, a psychology professor at the University of North Carolina, cites research suggesting that if you don’t practice connecting face-to-face with others, you can start to lose your biological capacity to do so.
“When you share a smile or laugh with someone face to face, a discernible synchrony emerges between you, as your gestures and biochemistries, even your respective neural firings, come to mirror each other. It’s micro-moments like these, in which a wave of good feeling rolls through two brains and bodies at once, that build your capacity to empathize as well as to improve your health.”
Here are other recent developments in how technology is affecting behavior:
- Yeah, but can I text while I meditate?: A course at the University of Washington is focusing on helping students improve their concentration skills by requiring them both to watch videos of themselves multitasking and to meditate.
- And it really cuts down on shuffleboard injuries: A study at North Carolina State University found that seniors–people 63 years or older–who played video games had higher levels of well-being and “emotional functioning” and lower levels of depression than old folks who didn’t.
- Does loyalty go deeper than latte?: This May Starbucks will break new ground when it allows its loyalty cardholders to earn points by buying Starbucks products in grocery stores.
Video bonus: All kinds of embarrassing things can happen while you’re texting.
Video bonus bonus: More evidence of the obsession that is texting: Here’s a clip of a bride firing off one last message before she says her vows.
March 5, 2013
It’s amazing how putting a lower case “i” in front of the name of a gadget can make it righteous.
What that means, of course, is that Apple has deemed that particular piece of technology worthy of its attention. And with that comes both market credibility and geeky cool.
So when rumors started swirling a few weeks ago that Apple could unveil an “iWatch” later this year, tech writers around the Web were quick to ponder if 2013 will become “The Year of the Smartwatch.” Maybe. Maybe not. The iGod has not yet spoken on the subject. At least not officially.
The article that stirred the iWatch clamor was a recent piece by Nick Bilton in the New York Times’ Bits blog. It was high on speculation–Apple isn’t talking–and spiced with juicy questions: Will it come with Siri, the voice of the iPhone? What about Apple’s map software? Will an iWatch enable its wearers to track the steps they take? How about their heartbeats?
But the biggest tease was an allusion to glass. Specifically bendable glass. Imagine a watch face that could curve around your wrist. That sounds light, sleek and yes, geekily cool. That sounds so Apple.
The Wall Street Journal followed up, citing a source saying that Apple has been discussing the design of a smartwatch with its Chinese manufacturing partner. And then Bloomberg chimed in, reporting that Apple has a team of at least 100 people cranking away on a “wristwatch-like device.”
It also quoted Bruce Tognazzini, a tech consultant and former Apple employee: “The iWatch will fill a gaping hole in the Apple ecosystem.”
So game over, right? Whenever Apple rolls out its device, it will define what a smartwatch should be, right?
Not so fast. Believe it or not, it’s already a crowded field, with more than half a dozen smartwatches out in the market. Maybe the best known, at least among gadget geeks, is the Pebble, which made a big splash a year ago, even before it existed. Its inventors made a pitch for investors on Kickstarter, hoping to drum up $100,000. Instead they raised $10 million, and a crowd-funding legend was born. The first Pebbles shipped earlier this year, to generally positive reviews.
Sony came out with its own model last year, sometimes to less-than-enthusiastic reviews. Others in the game include the MetaWatch Strata, the strangely named I’m Watch, the oddly named Martian Passport, one called Buddy and another called Cookoo. Later this year, a model called The Pine is expected to hit the market.
But, aside from having names you’d never have imagined calling a wristwatch, what do all these products bring to modern life? Obviously, they tell time, but most also connect wirelessly to your smartphone so you can see who’s calling or texting or emailing or posting on your Facebook page without digging into your pocket for your phone. They can show you weather forecasts, sports scores or news headlines. Some have apps that let you control the music on your phone or track how far you’ve run or cycled.
And keep in mind, this is only the first wave. They probably can’t do enough yet to entice most people to shell out a few hundred bucks–they range from $130 for a Cookoo to more than $400 for an I’m Watch. But as more apps are added, they could be used to make mobile payments, navigate with GPS, take photos and shoot videos. A few already can handle phone calls, albeit clunkily. So, the day is fast coming when you’ll be able to talk into your wristwatch without making people nervous.
Some say we’re on the cusp of a wearable tech boom, and that the smartphone, as something we need to actually carry around, will become passé. Others are more dubious, positing that the smartwatch is just another gadget phase we’re going through.
But there’s that bendable glass…
It’s long been said that if you want to succeed, it helps to be smart. Now that applies to products, too.
- At last, a cure for expiration date anxiety: Researchers at the Eindhoven University of Technology in the Netherlands say they’ve developed packaging with sensors that will be able to tell if the food inside is still edible.
- When bottles share: A Florida entrepreneur thinks the time has come for medicine bottles to get smart. His idea is to put QR codes on bottles that, once scanned, will play a video on your smartphone telling you all you really need to know about the meds inside.
- Let sleeping babies lie: And for anxious young parents who check every 30 seconds to see if their baby is still breathing, students at Brigham Young University are developing something they call the Owlet Baby Monitor. Using a built-in pulse oximeter, the wireless smart sock can track both a sleeping child’s heart and breathing rates.
- Say goodbye to the “You’ll just feel a little pinch” lie: Scientists at Purdue University have created bandages that could make the needle stick obsolete. Powered by a person’s body heat, the adhesive patches would be able to deliver medication without the need for a shot.
- Which is so much cooler than wearing a smart sock: In Japan, Fujitsu has unveiled its “Next Generation Cane.” Yep, it’s a smart cane and it can monitor a person’s vitals. It also comes with GPS so you can always know where Grandma’s taking a stroll.
Video bonus: Want the lowdown on how the Pebble smartwatch works? The Wall Street Journal’s Walt Mossberg lays it out in a video review.
February 8, 2013
When John Brennan, President Obama’s choice to be the next head of the CIA, appeared before a Senate committee yesterday, one question overshadowed all others at his confirmation hearing:
How are the decisions made to send killer drones after suspected terrorists?
The how and, for that matter, the why of ordering specific drone strikes remain largely a mystery, but at least one thing is clear–the decisions are being made by humans who, one would hope, wrestle with the thought of sending a deadly missile into an occupied building.
But what if humans weren’t involved? What if one day life-or-death decisions were left up to machines equipped with loads of data, but also a sense of right and wrong?
That’s not so far-fetched. It’s not going to happen any time soon, but there’s no question that as machines become more intelligent and more autonomous, a pivotal part of their transformation will be the ability to learn morality.
In fact, that may not be so far away. Gary Marcus, writing recently in The New Yorker, presented the scenario of one of Google’s driverless cars being forced to make a split-second decision: “Your car is speeding along a bridge at 50 miles per hour when an errant school bus carrying 40 innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all 40 kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.”
And what about robotic weapons or soldiers? Would a drone be able to learn not to fire on a house if it knew innocent civilians were also inside? Could machines be taught to follow the international rules of war?
Ronald Arkin, a computer science professor and robotics expert at Georgia Tech, certainly thinks so. He’s been developing software, referred to as an “ethical governor,” which would make machines capable of deciding when it’s appropriate to fire and when it’s not.
Arkin acknowledges that this could still be decades away, but he believes that robots might one day be both physically and ethically superior to human soldiers, not vulnerable to the emotional trauma of combat or desires for revenge. He doesn’t envision an all-robot army, but one in which machines serve with humans, doing high-risk jobs full of stressful snap decisions, such as clearing buildings.
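As a toy illustration of the concept (not Arkin’s actual software), an ethical governor amounts to a rule layer that vets a proposed action before the weapon may fire; the checks and data fields below are invented for this sketch:

```python
# Toy "ethical governor": every (invented) rule of engagement must pass
# before a proposed strike is permitted.
def governor_permits_fire(target):
    """Return True only if every toy rule of engagement is satisfied."""
    if not target.get("positively_identified", False):
        return False  # require positive identification of a combatant
    if target.get("civilians_present", False):
        return False  # discrimination: hold fire when non-combatants are present
    if target.get("expected_collateral", 0) > target.get("military_value", 0):
        return False  # proportionality: harm must not outweigh military value
    return True

strike = {"positively_identified": True, "civilians_present": True,
          "expected_collateral": 2, "military_value": 5}
print(governor_permits_fire(strike))  # False -- civilians present, so hold fire
```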
Beware of killer robots
But others feel it’s time to squash this type of thinking before it goes too far. Late last year, Human Rights Watch and Harvard Law School’s Human Rights Clinic issued a report, “Losing Humanity: The Case Against Killer Robots,” which, true to its title, called on governments to ban all autonomous weapons because they would “increase the risk of death or injury to civilians during armed conflict.”
At about the same time, a group of Cambridge University professors announced plans to launch what they call the Centre for the Study of Existential Risk. When it opens later this year, it will push for serious scientific research into what could happen if and when machines get smarter than us.
The danger, says Huw Price, one of the Centre’s co-founders, is that one day we could be dealing with “machines that are not malicious, but machines whose interests don’t include us.”
The art of deception
Shades of Skynet, the rogue artificial intelligence system that spawned a cyborg Arnold Schwarzenegger in The Terminator movies. Maybe this will always be the stuff of science fiction.
But consider other research Ronald Arkin is now doing as part of projects funded by the Department of Defense. He and colleagues have been studying how animals deceive one another, with the goal of teaching robots the art of deception.
For instance, they’ve been working on programming robots so that they can, if necessary, feign strength as animals often do. And they’ve been looking at teaching machines to mimic creatures like the eastern gray squirrel. Gray squirrels hide their nuts from other animals, and when rival squirrels or predators appear, they will sometimes visit empty cache sites to throw the competition off the track. Robots programmed to follow a similar strategy have been able to confuse and slow down competitors.
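A minimal sketch of that decoy behavior, with invented site names and sensing, might look like this:

```python
# Toy squirrel-inspired deception: when a competitor is watching, patrol
# fake cache sites instead of the real one.
import random

REAL_CACHE = "site_c"
DECOY_SITES = ["site_a", "site_b", "site_d"]

def next_stop(competitor_nearby):
    """Choose where the robot goes next."""
    if competitor_nearby:
        return random.choice(DECOY_SITES)  # lead the observer astray
    return REAL_CACHE  # safe to tend the real cache

for watching in (True, True, False):
    print(next_stop(watching))
```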
It’s all in the interest, says Arkin, of developing machines that won’t be a threat to humans, but rather an asset, particularly in the ugly chaos of war. The key is to start focusing now on setting guidelines for appropriate robot behavior.
“When you start opening that Pandora’s Box, what should be done with this new capability?” he said in a recent interview. “I believe that there is a potential for non-combatant casualties to be lessened by these intelligent robots, but we do have to be very careful about how they’re used and not just release them into the battlefield without appropriate concern.”
If you believe New Yorker writer Gary Marcus, ethically advanced machines offer great potential beyond the battlefield.
“The thought that haunts me the most is that human ethics themselves are only a work-in-progress. We still confront situations for which we don’t have well-developed codes (e.g., in the case of assisted suicide) and need not look far into the past to find cases where our own codes were dubious, or worse (e.g., laws that permitted slavery and segregation).
What we really want are machines that can go a step further, endowed not only with the soundest codes of ethics that our best contemporary philosophers can devise, but also with the possibility of machines making their own moral progress, bringing them past our own limited early-twenty-first century idea of morality.”
Machines march on
Here are more recent robot developments:
- Hmmmm, ethical and sneaky: Researchers in Australia have developed a robot that can sneak around by moving only when there’s enough background noise to cover up its sound.
- What’s that buzzing sound?: British soldiers in Afghanistan have started using surveillance drones that can fit in the palms of their hands. Called the Black Hornet Nano, the little robot is only four inches long, but has a spy camera and can fly for 30 minutes on a full charge.
- Scratching the surface: NASA is developing a robot called RASSOR that weighs only 100 pounds, but will be able to mine minerals on the moon and other planets. It can move around on rough terrain and even over boulders by propping itself up on its arms.
- Ah, lust: And here’s an early Valentine’s Day story. Scientists at the University of Tokyo used a male moth to drive a robot. Actually, they used its mating movements to direct the device toward an object scented with female moth pheromones.
Video bonus: So you’re just not sure you could operate a 13-foot tall robot? No problem. Here’s a nifty demo that shows you how easy it can be. A happy model even shows you how to operate the “Smile Shot” feature. You smile, it fires BBs. How hard is that?