October 14, 2013
The unassuming rectangular box that you’re seeing can, in some ways, be thought of as a time machine.
Its inventors, Chad Russell and Charles Butkus, conceived of the device as a way for users to surf web pages without being inundated by advertisements, much as people experienced the web in its early days. “The idea started as a casual conversation with a friend about how cluttered the internet had become,” says Russell. “These days not only do you have banner ads, but also video commercials and advertising embedded into your mobile apps. They’re everywhere.”
After testing several hacked “Linux boxes” as prototypes, the duo came up with AdTrap, a mini-computer that connects to both your router and modem, and functions as an advertising firewall. The final product was designed to be entirely hardware-based so that it automatically removes all ads without the need for installed software or configuration. Simply plug it in and the low-powered machine instantly blocks out display ads, app-based ads and even the type of video ads commonly programmed into your favorite YouTube videos. And, it enables users to do this on every one of their devices.
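AdTrap's internals aren't spelled out in the article, but hardware ad blockers of this kind typically work by intercepting DNS lookups from every device on the network and matching them against a blocklist of known ad-serving domains. A minimal sketch of that matching step, with an illustrative blocklist (the domains and function name are not AdTrap's):

```python
# Illustrative blocklist; a real device ships thousands of entries.
AD_DOMAINS = {"ads.example.com", "tracker.example.net", "doubleclick.net"}

def is_blocked(hostname: str, blocklist: set = AD_DOMAINS) -> bool:
    """Return True if hostname or any parent domain is on the blocklist."""
    parts = hostname.lower().split(".")
    # Check "a.b.c.d", then "b.c.d", then "c.d", ... so any subdomain
    # of a blocked domain is blocked as well.
    return any(".".join(parts[i:]) in blocklist for i in range(len(parts)))
```

A filtering box sitting between modem and router would answer blocked lookups with a dead address (such as 0.0.0.0), so the ad never loads on any device behind it — which is why no per-device software is needed.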
“The unique thing about AdTrap is that it is run on a full web server, so it has better ad blocking abilities than just software,” Russell says. “And the ability to prevent video commercials from rolling is a new innovation, which I believe makes it interesting.”
Only a month after launching a funding campaign in November on the crowdfunding site Kickstarter, Russell and his development team at the Palo Alto-based security software startup Bluepoint Security far exceeded their fundraising goal, finishing with $213,392 in seed money.
Since the simple days of text, photos and links, online advertising has become big business, essentially subsidizing much of what exists on the web. Data from the Interactive Advertising Bureau reveals that a record $20 billion was spent on advertising in the first half of this year alone, doubling the amount spent in 2007.
Third-party solutions designed to combat this intrusive trend aren’t anything new. Popular browser plug-ins like AdBlock Plus have been widely available for a few years now, and fundamentally, AdTrap employs many of the same strategies. But the mere fact that users can put in place such comprehensive ad filtering, and do it with such ease, can, in the long run, pose a substantial threat to the main source of revenue for a vast percentage of major publications (not to mention Silicon Valley stalwarts such as Facebook and Google).
As the project has rolled ahead (shipment began in August), Russell has yet to receive a single legal challenge or even stir up any complaints. He isn’t at all surprised since he sees the device as neatly falling into the same category as other widely-accepted means of filtering internet content, such as firewall security systems and parental control software like NetNanny. He also doesn’t think of the project as a means of waging war on advertising.
“We are not against ads,” says Russell. “The main problem with the way a lot of advertisements work nowadays is that they encroach upon people’s privacy by collecting data on their online activity, which many prefer outside parties not to have. Basically, internet users are paying for content by trading in their private information.”
Russell is hardly alone in developing alternatives that would help users protect their privacy. Recently, a team of former Google employees found a way to buck their former employer by releasing Disconnect Search, a free browser plug-in that prevents search engines such as Google, Bing and Yahoo from keeping tabs on your search habits. The uprising against the long arm of marketing has reached a level where Russell says that even advertisers fear broader ramifications for the industry as a whole.
In fact, he mentioned that the company has begun negotiating with a small number of prominent firms to formulate a model that just might work better for all parties involved. For example, a few of the discussions have revolved around a potential opt-in system that gives users the choice to allow for ads from certain parties in exchange for a small payment. The advantage for sellers, he explains, is the potential to receive more individual attention from audiences without having them become annoyed by the sheer barrage of flashing click bait.
Even so, there are still other pressing concerns. What if the technology eventually takes off? Would the internet, as a whole, suffer? Would it lead to sites cutting back on content, or might cash-strapped outlets resort to producing cheaper, lower-quality content?
Russell argues that online publishers need to keep evolving, as they always have. He points out that other media entities, like Pandora, have shifted to giving users a choice between listening to ads and paying for a commercial-free subscription.
“Listen, I wouldn’t like to see every site put up a paywall either,” says Russell. “But when you rely solely on advertising, it’s almost like you’re saying content isn’t worth anything. People should be allowed other means to subsidize content. If you’re against that, it makes me wonder what the value of that content is in the first place.”
October 11, 2013
It’s hard to imagine that technology could be a friend to Obamacare, given the dismal performance of its official website last week. But it turns out that the high-speed crunching of a huge amount of information—aka Big Data—could ensure that one of the principal tenets of health care reform, known as “accountable care,” can become more than a catchy phrase in a policy paper.
U.S. hospitals have begun shifting their way of doing business. It’s long been the case that the payments hospitals received from Medicare largely were based on the tests their doctors ordered and the procedures they performed. So, strangely enough, the sicker a hospital’s patients were, the more money it tended to receive. But the Affordable Care Act is designed to change that, instead providing incentives that reward positive results. And, that seems to be prompting hospitals to move from focusing solely on treating sick people to helping patients take better care of themselves in the outside world. They want their ex-patients to stay ex-patients.
It’s crunch time
Case in point is Mount Sinai Hospital in New York. Not long ago it hired a 30-year-old named Jeff Hammerbacher to try to work wonders with the hospital’s new supercomputer. His previous job was as Facebook’s first data scientist, so you know he knows how much wisdom can be gleaned from mountains of information—if you have computers powerful and fast enough to make sense of it.
So far, the hospital has developed a computer model that crunches all the data it has on past patients—from why they were admitted to how many times they’ve been there to everything that happened during their stays—and from that, it’s able to predict which ones are most likely to return. But instead of just waiting for those patients to come back, Mount Sinai, like more and more hospitals, is turning proactive, reaching out to those frequent patients with follow-up calls to make sure they get to their doctor appointments or avoid the bad habits that end up sending them to the hospital. In one pilot program, Mount Sinai was able to cut re-admissions in half. If you don’t think that hospitals can put a serious dent in health care costs by slashing the number of repeat patients, keep in mind that nationwide, 1 percent of patients accounted for nearly 22 percent of health spending in 2009.
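Mount Sinai hasn't published the model itself, but the basic idea — score each patient's likelihood of returning from features of their history, then proactively follow up with everyone above a threshold — can be sketched like this. The features, weights, and function name below are invented for illustration; a real model would be fit to the hospital's historical patient data.

```python
def readmission_risk(prior_admissions: int, days_in_hospital: int,
                     chronic_conditions: int) -> float:
    """Return a score in [0, 1): higher means more likely to return.

    Weights are hypothetical; a real model learns them from data.
    """
    score = (0.30 * prior_admissions
             + 0.05 * days_in_hospital
             + 0.20 * chronic_conditions)
    return score / (1.0 + score)  # squash the raw score into [0, 1)

# Patients whose score clears some threshold get follow-up calls.
low = readmission_risk(prior_admissions=0, days_in_hospital=2, chronic_conditions=0)
high = readmission_risk(prior_admissions=4, days_in_hospital=10, chronic_conditions=3)
```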
Methodist Health System in Dallas is going down a parallel track. It’s been analyzing patient data from 14,000 patients and 6,000 employees to identify people who are most likely to need expensive health care in the future, and it’s reaching out to help them take preventative measures before they develop costly ailments.
Here are a few other recent findings that have come from hospitals crunching Big Data:
- A health care provider in Southern California using data on the behavior of staff doctors found that one physician was using a certain antibiotic much more often than the rest of the staff—potentially increasing the risk of drug-resistant bacteria.
- At Memorial Care Health System in California, hospital management has begun tracking how doctors there perform on such things as immunizations, mammograms and blood glucose control in diabetic patients. That and other doctor data helped reduce the average patient stay from 4.2 days in 2011 to four days in 2012.
- Use of full-time nurses, rather than contract or temporary ones, coincided with higher patient satisfaction scores, according to Baylor Health Care System.
- Researchers in Ontario are working with IBM on a system to detect subtle changes in the condition of premature babies that could tip off the onset of infection 24 hours before symptoms appear.
- In another case, data analysis was able to determine which doctors were costing the most money by ordering procedures and other treatments. Hospital administrators reviewed the results with the costly doctors and suggested ways they could cut back on duplicate tests and unnecessary procedures.
Ultimately, hospitals hope to get to the point where, based on analysis of all the data of every patient who’s ever walked through their doors, they’ll have a very good idea of the risk facing each new patient who arrives.
To your health
Here’s a smattering of other recent research on hospital treatment:
- With luck, you’ll forget about the ICU: Researchers at Vanderbilt University found that 75 percent of people who spend time in a hospital’s intensive care unit suffer some level of cognitive decline. In some cases, according to the study, they can experience Alzheimer’s-like symptoms for a year or longer after leaving the hospital.
- Still need a reason to stay out of hospitals?: According to a recent report in the Journal of the American Medical Association, treatment of infections people develop in a hospital adds $9.8 billion to America’s health care costs every year. The Centers for Disease Control has estimated that one out of every 20 patients gets an infection while in the hospital. About a third of the cost comes from infections following surgery—they add an average of $20,785 to a patient’s medical bills.
- Here’s another: A study published in a recent issue of the Journal of Patient Safety estimates that as many as 210,000 to 440,000 patients each year who go to the hospital suffer some type of preventable harm that ultimately contributes to their death. If that’s the case, it would make medical errors the third-leading cause of death in America, behind heart disease and cancer.
- Must be the food: After crunching results from 4,655 hospitals, a health care economist from Thomas Jefferson University Hospital in Philadelphia found that the best hospitals, in terms of medical results, generally don’t receive the highest satisfaction rankings from patients. Instead, the top hospitals, which often are bigger and busier, tend to get only lukewarm ratings from people who spend time in them.
- But they found no link between moon cycles and back hair: Believe it or not, researchers at Rhode Island Hospital contend that their analysis showed that cardiac surgery, specifically aortic dissection, is less likely to result in death if performed in the waning of a full moon. They also said that patients who had the surgery during a full moon tended to stay in the hospital for shorter lengths of time.
Video bonus: Here’s another way Big Data is being used to predict human behavior, in this case, what we’re likely to do when we enter a store.
Video bonus bonus: And, in advance of Halloween, a little macabre hospital humor.
More from Smithsonian.com
June 7, 2013
Andrea, the first tropical storm of hurricane season, is churning up the East Coast today, and while it’s not expected to do much more than deliver a heavy drenching, it has kicked off the first wave of storm tracking.
Will it hug the coast or drift inland? Will it dump an inch of rain or three? Will it provide us with our first 2013 image of a TV reporter doing unintended slapstick on a beach?
Already we’ve been told that this could be one nasty season, with a prediction from the National Oceanic and Atmospheric Administration (NOAA) of seven to 11 hurricanes, of which three to six could be major–that’s with winds of 111 mph or higher. And hurricane experts at Colorado State University are pretty confident–they put the likelihood at 72 percent–that at least one of those major hurricanes will make landfall somewhere along the Gulf Coast or the Eastern seaboard. Keep in mind that Sandy was not considered a major hurricane when it swept in over New Jersey last fall.
Hurricane forecasting is much more science than crapshoot these days. Computer models have become amazingly accurate, considering how many variables need to be taken into account–temperature, wind speed, humidity, barometric pressure, topography–from many different locations at different times. All told, there can be hundreds of thousands of factors that need to be weighed. And the task is complicated by the fact that we only have about 60 years of good historical data to plug into the models.
Most of the real-time data that gets fed into the computers comes from dropsonde sensors that are dropped into the storms from big, heavy “hurricane hunters,” planes that are essentially flying laboratories. These are impressive machines. They also are quite expensive. One plane costs about $22 million.
Kamran Mohseni thinks there may be a better way to gather storm data. It’s about thinking small.
Mohseni, an engineering professor at the University of Florida, believes the next generation of hurricane hunters will be drones small enough to almost fit into the palm of your hand, but able to engage fierce hurricanes by riding the wind rather than trying to punch through it. Each drone’s weight–about as much as an iPod Nano–is an asset in his mind. “Our vehicles don’t fight the hurricane,” he says. “We use the hurricane to take us places.”
His take: instead of relying on a few “super-duper” aircraft, why not use hundreds of little drones that, through their sheer numbers, could make the data that much more accurate? Or, as he put it, “You get super duper on an aggregate level.”
Mohseni’s drones, with their sensors, would be launched with commands from a laptop, and then, with the help of mathematical models that predict where the best wind currents can be found, would be able to hitch a ride into the storm. Once there, the drones can be powered up or down as needed, with the goal of taking advantage of the wind’s power to explore the hurricane.
Riding the waves
But Mohseni is not just talking about flying drones. He also has developed underwater vehicles designed to mimic jellyfish as they move through the ocean. He envisions them as a tiny naval fleet working in tandem with a squadron of his flying drones, and that could allow scientists to also gather data from under the sea, which can be particularly difficult to collect.
He realizes, of course, that even though his drones–since they won’t resist the wind–aren’t likely to be blown apart, a lot of them will be lost once they take on a hurricane. But because they’re so small and light, they’re not likely to do much damage if they hit something. And he figures the data gained will be worth the expense.
Each of his drones costs about $250.
Eyes of the storm
Here are other recent developments in weather tech:
- It’s a wind win: The Canadian firm Aeryon Labs has developed an “Unmanned Aerial Vehicle” (UAV) designed to do military reconnaissance in bad weather. It promises that its SkyRanger drone can remain stable in winds of 40 mph, survive gusts of 55 mph and function in temperatures from -22 to 122º Fahrenheit.
- It was a dark and stormy flight: Later this summer NASA will send a pair of large unmanned aircraft loaded with instruments out over the Atlantic to study more closely how hurricanes form and build in intensity. Last fall, the agency used one of these drones, called Global Hawk, but will add another as it expands its focus to wind and rain bands inside hurricanes.
- After all, why shouldn’t clouds be able to get that inner glow: With the goal of seeing how lasers might affect cloud formation, researchers at the Karlsruhe Institute of Technology in Germany found that lasers can actually make a cirrus cloud glow. Unfortunately, lasers aren’t able to do this yet with real clouds; the scientists produced the effect on clouds created in the lab.
- Not to mention, an awesome shield against flying beer: And now, meet the Rainshader, an umbrella that looks more like a motorcycle helmet on a stick. Designed to protect you from rain at sporting events, it promises not to blow inside out, poke people in the eye, or drip on those sitting next to you. And, best of all, because it can be held to sit low on your head, it shouldn’t block anyone else’s view.
Video bonus: Watch Kamran Mohseni’s little hurricane hunters taking flight.
Video bonus bonus: And for old time’s sake, the lighter side of big storms.
More from Smithsonian.com
May 22, 2013
As much time as we spend with our cell phones and laptops and tablets, it’s still pretty much a one-way relationship. We act, they respond. Sure, you can carry on a conversation with Siri on your iPhone, and while she is quick, it hardly qualifies as playful bantering. You ask questions, she gives answers.
But what if these devices could really read our emotions? What if they could interpret every little gesture, every facial cue so that they can gauge our feelings as well as–maybe better than–our best friends? And then they respond, not with information, but with what might pass for empathy.
We’re not there yet, but we’re quickly moving in that direction, driven by a field of science known as affective computing. It’s built around software that can measure, interpret and react to human feelings. This might involve capturing your face on camera and then applying algorithms to every aspect of your expressions to try to make sense of each smirk and chin rub. Or it might involve reading your level of annoyance or pleasure by tracking how fast or with how much force you tap out a text or whether you use emoticons. And if you seem too agitated–or drunk–you could get a message suggesting that you might want to hold off pressing the send icon.
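The typing-signal idea described above — inferring a writer's state from how a message was composed rather than what it says — can be sketched as a simple feature-based heuristic. The features, thresholds, and mood labels below are invented for illustration; a production system would learn them from labeled data.

```python
def guess_mood(chars_per_sec: float, backspaces: int, emoticons: int) -> str:
    """Guess a coarse mood from typing behavior. Thresholds are invented."""
    if emoticons >= 2:
        return "happy"
    if chars_per_sec > 6 and backspaces > 5:
        return "agitated"   # fast, error-prone hammering at the keys
    if chars_per_sec < 1.5:
        return "subdued"
    return "neutral"

# An app could use the guess to intervene before a message goes out:
if guess_mood(chars_per_sec=8.0, backspaces=9, emoticons=0) == "agitated":
    print("You seem worked up. Send anyway?")
```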
Seeing how difficult it is for us humans to make sense of other humans, this notion of programming machines to read our feelings is no small challenge. But it’s picking up speed, as scientists sharpen their focus on teaching devices emotional intelligence.
Every move you make
One of the better examples of how affective computing can work is the approach of a company called, appropriately, Affectiva. It records expressions and then, using proprietary algorithms, scrutinizes facial cues, tapping into a database of almost 300 million frames of elements of human faces. The software has been refined to the point where it can associate various combinations of those elements with different emotions.
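Affdex's actual mapping is proprietary, but the classification step it describes — low-level facial cues (often called "action units") voting in combination for emotions — can be sketched as follows. The cue-to-emotion table here is invented for illustration.

```python
from collections import Counter

# Hypothetical mapping from detected facial cues to the emotions they
# suggest; a real system derives this from millions of labeled frames.
CUE_VOTES = {
    "lip_corner_pull": ["joy"],
    "brow_furrow": ["anger", "confusion"],
    "nose_wrinkle": ["disgust"],
    "inner_brow_raise": ["surprise", "sadness"],
}

def classify(cues):
    """Return the emotion with the most votes from the detected cues."""
    votes = Counter(emotion for cue in cues for emotion in CUE_VOTES.get(cue, []))
    return votes.most_common(1)[0][0] if votes else "neutral"
```

Run frame by frame over a recording, this is also what makes the second-by-second breakdown mentioned below possible: each frame's cues yield a verdict, and the verdicts form a timeline.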
When it was developed at M.I.T.’s Media Lab by two scientists, Rosalind Picard and Rana el Kaliouby, the software, known as Affdex, was designed with the purpose of helping autistic children communicate better. But it clearly had loads of potential in the business world, and so M.I.T. spun the project off into a private company. It has since raised $21 million from investors.
So how is Affdex being used? Most often, it’s watching people watching commercials. It records people as they view ads on their computers–don’t worry, you need to opt in for this–and then, based on its database of facial cues, evaluates how the viewers feel about what they’ve seen. And the software doesn’t provide just an overall positive or negative verdict; it breaks down the viewers’ reactions second by second, which enables advertisers to identify, with more precision than ever before, what works in a commercial and what doesn’t.
It also is able to see that while people say one thing, their faces can say another. During an interview with the Huffington Post, el Kaliouby gave the example of the response to an ad for body lotion that aired in India. During the commercial, a husband playfully touches his wife’s exposed stomach. Afterwards, a number of women who had watched it said they found that scene offensive. But, according to el Kaliouby, the videos of the viewers showed that every one of the women responded to the scene with what she called an “enjoyment smile.”
She sees opportunities beyond the world of advertising. Smart TVs could be that much smarter about what kind of programs we like if they’re able to develop a memory bank of our facial expressions. And politicians would be able to get real-time reactions to each line they utter during a debate and be able to adapt their messages on the fly. Plus, says el Kaliouby, there could be health applications. She says it’s possible to read a person’s heart rate with a webcam by analyzing the blood flow in his or her face.
“Imagine having a camera on all the time monitoring your heart rate,” she told the Huffington Post, “so that it can tell you if something’s wrong, if you need to get more fit, or if you’re furrowing your brow all the time and need to relax.”
So what do you think, creepy or cool?
Here are five other ways machines are reacting to human emotions:
- And how was my day?: Researchers at the University of Cambridge have developed an Android mobile app that monitors a person’s behavior throughout the day, using incoming calls and texts, plus social media posts to track their mood. The app, called “Emotion Sense,” is designed to create a “journey of discovery,” allowing users to have a digital record of the peaks and valleys of their daily lives. The data can be stored and used for therapy sessions.
- And this is me after the third cup of coffee: Then there’s Xpression, another mood-tracking app created by a British company called EI Technologies. Instead of relying on people in therapy to keep diaries of their mood shifts, the app listens for changes in a person’s voice to determine if they are in one of five emotional states: calm, happy, sad, angry or anxious/frightened. It then keeps a list of a person’s moods and when they change. And, if the person desires, this record can automatically be sent to a therapist at the end of every day.
- What if you just hate typing on a phone? : Scientists at Samsung are working on software that will gauge your frame of mind by how you type out your tweets on your smartphone. By analyzing how fast you type, how much the phone shakes, how often you backspace mistakes, and how many emoticons you use, the phone should be able to determine if you’re angry, surprised, happy, sad, fearful, or disgusted. And based on what conclusion it draws, it could include with your tweet the appropriate emoticon to tip off your followers to your state of mind.
- Just don’t invite your friends over to watch: Using a sensor worn on the wrist and a smartphone camera worn around the neck, researchers at M.I.T. have created a “lifelogging” system that collects images and data designed to show a person which events represented their emotional highs and lows. The system, called Inside-Out, includes a bio-sensor in a wristband that tracks heightened emotions through electrical charges in the skin while the smartphone tracks the person’s location and takes several photos a minute. Then, at the end of the day, the user can view their experiences, along with all the sensor data.
- Your brow says you have issues: This probably was inevitable. Researchers at the University of Southern California have created a robotic therapist that not only is programmed to encourage patients with well-timed “Uh-huhs,” but also is expert, using motion sensors and voice analysis, at interpreting a patient’s every gesture and voice inflection during a therapy session.
Video bonus: Want to see how bizarre this trend of devices reading human emotions can get? Check out this promotion of Tailly, a mechanical tail that picks up your level of excitement by tracking your heart rate and then wags appropriately.
More from Smithsonian.com
May 8, 2013
Cell phones are so many things now–computer, map, clock, calculator, camera, shopping device, concierge, and occasionally, a phone. But more than anything, that little device that never leaves your person is one amazingly prolific data engine.
Which is why last October, Verizon Wireless, the largest U.S. carrier with almost 100 million customers, launched a new division called Precision Market Insights. And why, at about the same time, Madrid-based Telefonica, one of the world’s largest mobile network providers, opened its own new business unit, Telefonica Dynamic Insights.
The point of these ventures is to mine, reconstitute and sell the enormous amount of data that phone companies gather about our behavior. Every time we make a mobile call or send a text message–which pings a cell tower–that info is recorded. So, with enough computer power, a company can draw pretty accurate conclusions about how and when people move through a city or a region. Or they can tell where people have come from to attend an event. As part of a recent case study, for example, Verizon was able to say that people with Baltimore area codes outnumbered those with San Francisco area codes by three to one inside the New Orleans Superdome for the Super Bowl in February.
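The Superdome analysis above boils down to a simple aggregation: take the (anonymized) home area codes of every device that pinged towers near the venue and tally where the crowd came from. A sketch, with invented sample data:

```python
from collections import Counter

def origin_breakdown(area_codes):
    """Tally devices seen at a venue by home area code."""
    return Counter(area_codes)

# Invented records: 410 is a Baltimore area code, 415 San Francisco.
crowd = ["410", "410", "410", "415", "212"]
counts = origin_breakdown(crowd)
# In this toy sample, Baltimore outnumbers San Francisco 3 to 1.
```

The real work in such a study is upstream of this step — deciding which tower pings count as "at the venue," deduplicating devices, and inferring each device's home market — but the final product is exactly this kind of count.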
In a world enamored of geolocation, this is digital gold. It’s one thing to know the demographic blend of a community, but to be able to find out how many people pass by a business and where they’re coming from, that adds a whole other level of precision to target marketing.
Follow the crowd
But this data has value beyond companies zeroing in on potential customers. It’s being used for social science, even medical research. Recently IBM crunched numbers from 5 million phone users in the Ivory Coast in Africa and, by tracking movements of people through which cell towers they connected to, it was able to recommend 65 improvements to bus service in the city of Abidjan.
And computer scientists at the University of Birmingham in England have used cell phone data to fine-tune analysis of how epidemics spread. Again, it’s about analyzing how people move around. Heretofore, much of what scientists knew about the spread of contagious diseases was based largely on guesswork. But now, thanks to so many pings from so many phones, there’s no need to guess.
It’s important to point out that no actual identities are connected to cell phone data. It all gets anonymized, meaning there shouldn’t be a way to track the data back to real people.
There shouldn’t be.
Leaving a trail
But a study published in Scientific Reports in March found that even anonymized data may not be so anonymous after all. A team of researchers from Louvain University in Belgium, Harvard and M.I.T. found that by using data from 15 months of phone use by 1.5 million people, together with a similar dataset from Foursquare, they could identify about 95 percent of the cell phone users with just four data points and 50 percent of them with just two data points. A data point is an individual’s approximate whereabouts at the approximate time they’re using their cell phone.
The reason that only four locations were necessary to identify most people is that we tend to move in consistent patterns. Just as everyone has unique fingerprints, everyone has unique daily travels. While someone wouldn’t necessarily be able to match the path of a mobile phone–known as a mobility trace–to a specific person, we make it much easier through geolocated tweets or location “check-ins,” such as when we use Foursquare.
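The uniqueness test behind the study can be sketched directly: treat each person's "mobility trace" as a set of (place, hour) points, and ask how many users' traces contain all of some handful of observed points. If only one trace matches, those points identify that person. The dataset below is invented.

```python
# Invented traces: each user's set of (cell tower, hour-of-day) points.
traces = {
    "u1": {("tower_a", 8), ("tower_b", 12), ("tower_c", 18), ("tower_a", 22)},
    "u2": {("tower_a", 8), ("tower_d", 12), ("tower_c", 19), ("tower_e", 22)},
    "u3": {("tower_b", 9), ("tower_b", 12), ("tower_f", 18), ("tower_a", 22)},
}

def matches(points, traces):
    """Return the users whose trace contains every given (place, hour) point."""
    return [user for user, trace in traces.items() if set(points) <= trace]

# Two points already single out u1 in this toy dataset:
# matches([("tower_a", 8), ("tower_b", 12)], traces) -> ["u1"]
```

The study's finding is that, at the temporal and spatial resolution of tower pings, four such points sufficed to single out 95 percent of 1.5 million real users.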
“In the 1930s, it was shown that you need 12 points to uniquely identify and characterize a fingerprint,” the study’s lead author, Yves-Alexandre de Montjoye, told the BBC in a recent interview. “What we did here is the exact same thing, but with mobility traces. The way we move and the behavior is so unique that four points are enough to identify 95 percent of the people.”
“We think this data is more available than people think. When you share information, you look around and you feel like there are lots of people around–in a shopping center or a tourist place–so you feel this isn’t sensitive information.”
In other words, you feel anonymous. But are you really? De Montjoye said the point of his team’s research wasn’t to conjure up visions of Big Brother. He thinks there’s much good that can come from mining cell phone data, for businesses, for city planners, for scientists, for doctors. But he thinks it’s important to recognize that today’s technology makes true privacy very hard to keep.
The title of the study? “Unique in the Crowd.”
Here are other recent developments related to mobile phones and their data:
- Every picture tells your story: Scientists at Carnegie Mellon University’s Human Computer Interaction Center say their research of 100 smartphone apps found that about half of them raised privacy concerns. For instance, a photo-sharing app like Instagram provided information that allowed them to easily discover the location of the person who took the photo.
- Cabbies with cameras: In the Mexican city of Tuxtla Gutiérrez, taxi drivers have been provided with GPS-enabled cell phones and encouraged to send messages and photographs about accidents or potholes or broken streetlights.
- Follow that cell: Congress has started looking into the matter of how police use cell phone data to track down suspects. The key issue is whether they should be required to get a warrant first.
- Follow that cell II: Police in Italy have started using a data analysis tool called LogAnalysis that makes it especially easy to visualize the relationships among conspiring suspects based on their phone calls. In one particular case involving a series of robberies, the tool showed a flurry of phone activity among the suspects before and after the heists, but dead silence when the crimes were being committed.
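The photo-location leak in the first item above usually comes from GPS coordinates that phones embed in each photo's EXIF metadata. EXIF stores latitude and longitude as (degrees, minutes, seconds) values plus a hemisphere reference; recovering a map-ready decimal coordinate is a small conversion (the sample coordinate below is illustrative):

```python
def dms_to_decimal(degrees: float, minutes: float, seconds: float,
                   ref: str) -> float:
    """Convert EXIF-style degrees/minutes/seconds to signed decimal degrees.

    ref is the hemisphere reference: "N"/"S" for latitude, "E"/"W" for
    longitude; southern and western hemispheres are negative.
    """
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

# e.g. 40° 44' 54.36" N -> roughly 40.7484 (Midtown Manhattan)
lat = dms_to_decimal(40, 44, 54.36, "N")
```

A photo-sharing app that leaves this metadata in uploaded images hands anyone who downloads the file the exact spot where it was taken — which is the privacy concern the Carnegie Mellon researchers flagged.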
Video bonus: If you’re at all paranoid about how much data can be gleaned from how you use your mobile phone, you may not want to watch this TED talk by Malte Spitz.