October 11, 2013
It’s hard to imagine that technology could be a friend to Obamacare, given the dismal performance of its official website last week. But it turns out that the high-speed crunching of a huge amount of information—aka Big Data—could ensure that one of the principal tenets of health care reform, known as “accountable care,” can become more than a catchy phrase in a policy paper.
U.S. hospitals have begun shifting their way of doing business. It’s long been the case that the payments hospitals received from Medicare were based largely on the tests their doctors ordered and the procedures they performed. So, strangely enough, the sicker a hospital’s patients were, the more money it tended to receive. But the Affordable Care Act is designed to change that, instead providing incentives that reward positive results. And that seems to be prompting hospitals to move from focusing solely on treating sick people to helping patients take better care of themselves in the outside world. They want their ex-patients to stay ex-patients.
It’s crunch time
Case in point is Mount Sinai Hospital in New York. Not long ago it hired a 30-year-old named Jeff Hammerbacher to try to work wonders with the hospital’s new supercomputer. His previous job was as Facebook’s first data scientist, so you know he knows how much wisdom can be gleaned from mountains of information—if you have computers powerful and fast enough to make sense of it.
So far, the hospital has developed a computer model that crunches all the data it has on past patients—from why they were admitted to how many times they’ve been there to everything that happened during their stays—and from that, it’s able to predict which ones are most likely to return. But instead of just waiting for those patients to come back, Mount Sinai, like more and more hospitals, is turning proactive, reaching out to those frequent patients with follow-up calls to make sure they get to their doctor appointments or avoid the bad habits that end up sending them to the hospital. In one pilot program, Mount Sinai was able to cut re-admissions in half. If you don’t think that hospitals can put a serious dent in health care costs by slashing the number of repeat patients, keep in mind that nationwide, 1 percent of patients accounted for nearly 22 percent of health spending in 2009.
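A readmission-risk model of the sort described above can be sketched as a simple weighted score over a patient’s admission history. This is a hedged illustration only: the features, weights and threshold below are invented for the example and are not Mount Sinai’s actual model.

```python
def readmission_risk(prior_admissions, last_stay_days, chronic_conditions):
    """Toy risk score in [0, 1] built from a patient's admission history.

    The weights are illustrative; a real model would be fit to
    historical outcomes rather than hand-picked.
    """
    score = (0.15 * prior_admissions
             + 0.05 * last_stay_days
             + 0.20 * chronic_conditions)
    return min(score, 1.0)


def flag_for_followup(patient, threshold=0.5):
    """Mark a patient for proactive follow-up calls if their risk is high."""
    return readmission_risk(**patient) >= threshold


# A frequent patient gets flagged; a first-time, short-stay patient does not.
frequent = {"prior_admissions": 4, "last_stay_days": 6, "chronic_conditions": 2}
first_time = {"prior_admissions": 0, "last_stay_days": 2, "chronic_conditions": 0}
```

In practice a hospital would fit such a model to its own records and validate it against observed readmissions; the point here is only the shape of the pipeline: score every discharged patient, then concentrate outreach on the high scorers.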
Methodist Health System in Dallas is going down a parallel track. It’s been analyzing data from 14,000 patients and 6,000 employees to identify people who are most likely to need expensive health care in the future, and it’s reaching out to help them take preventive measures before they develop costly ailments.
Here are a few other recent findings that have come from hospitals crunching Big Data:
- A health care provider in Southern California, using data on the behavior of staff doctors, found that one physician was using a certain antibiotic much more often than the rest of the staff—potentially increasing the risk of drug-resistant bacteria.
- At MemorialCare Health System in California, hospital management has begun tracking how doctors there perform on such measures as immunizations, mammograms and blood glucose control in diabetic patients. That and other physician data helped reduce the average patient stay from 4.2 days in 2011 to 4.0 days in 2012.
- Use of full-time nurses, rather than contract or temporary ones, coincided with higher patient satisfaction scores, according to Baylor Health Care System.
- Researchers in Ontario are working with IBM on a system to detect subtle changes in the condition of premature babies that could tip off the onset of infection 24 hours before symptoms appear.
- In another case, data analysis was able to determine which doctors were costing the most money by ordering procedures and other treatments. Hospital administrators reviewed the results with the costly doctors and suggested ways they could cut back on duplicate tests and unnecessary procedures.
Ultimately, hospitals hope to get to the point where, based on analysis of all the data of every patient who’s ever walked through their doors, they’ll have a very good idea of the risk facing each new patient who arrives.
To your health
Here’s a smattering of other recent research on hospital treatment:
- With luck, you’ll forget about the ICU: Researchers at Vanderbilt University found that 75 percent of people who spend time in a hospital’s intensive care unit suffer some level of cognitive decline. In some cases, according to the study, they can experience Alzheimer’s-like symptoms for a year or longer after leaving the hospital.
- Still need a reason to stay out of hospitals?: According to a recent report in the Journal of the American Medical Association, treatment of infections people develop in a hospital adds $9.8 billion to America’s health care costs every year. The Centers for Disease Control and Prevention has estimated that one out of every 20 patients gets an infection while in the hospital. About a third of the cost comes from infections following surgery—they add an average of $20,785 to a patient’s medical bills.
- Here’s another: A study published in a recent issue of the Journal of Patient Safety estimates that between 210,000 and 440,000 patients each year who go to the hospital suffer some type of preventable harm that ultimately contributes to their death. If that’s the case, medical errors would be the third-leading cause of death in America, behind heart disease and cancer.
- Must be the food: After crunching results from 4,655 hospitals, a health care economist from Thomas Jefferson University Hospital in Philadelphia found that the best hospitals, in terms of medical results, generally don’t receive the highest satisfaction rankings from patients. Instead, the top hospitals, which often are bigger and busier, tend to get only lukewarm ratings from people who spend time in them.
- But they found no link between moon cycles and back hair: Believe it or not, researchers at Rhode Island Hospital contend that their analysis showed that cardiac surgery, specifically repair of aortic dissection, is less likely to result in death if performed during the waning of a full moon. They also said that patients who had the surgery during a full moon tended to have shorter hospital stays.
Video bonus: Here’s another way Big Data is being used to predict human behavior, in this case, what we’re likely to do when we enter a store.
Video bonus bonus: And, in advance of Halloween, a little macabre hospital humor.
More from Smithsonian.com
October 4, 2013
You have to hand it to Google.
Yes, Google Glass is one nifty technology, but wearing glasses with a little camera attached seems to reek of geek, the kind of gadget that would appeal most to men and women who, as young boys and girls, wanted so much to believe in X-ray glasses.
Yet twice now, Google Glass has managed to crash one of America’s biggest glamor parties—New York’s Fashion Week. Last year, all of the models in designer Diane von Furstenberg’s show strutted down the runway accessorized by Google. And, a few weeks ago, at this year’s event, anyone who was anyone—top models, fashion editors, reality show judges—was walking around shooting pictures and videos with their clever camera glasses.
Still, if Google Glass is to go mainstream, it needs to move beyond the air kiss crowd and geek buzz. That part of the plan starts tomorrow in Durham, North Carolina, the first stop in what Google says will be a national roadshow. With Google Glass expected to hit the market by early 2014, it’s time to start letting the general public see what all the chatter’s about.
The camera never blinks
So, it’s also time to begin taking a closer look at what it might mean to have a whole lot of people walking around with computers/cameras attached to their heads.
There’s obviously the matter of privacy. Google Glass wearers will have the ability to shoot a steady stream of photos and videos as they go about their daily lives. A group of U.S. congressmen raised the issue with Google earlier this year, as have privacy commissioners from Canada, the European Union, Australia, Israel, Mexico, Switzerland and other countries.
Google’s response is that the camera will not be that surreptitious since it will be voice-activated and a light on the screen will show that it’s on. Google also insists that it won’t allow facial recognition software on Google Glass—critics have raised concerns about someone being able to use facial recognition to track down the identity of a person they’ve captured in photos or videos on the street or in a bar.
Others are worried about so much visual data being captured every day, particularly if Google Glass hits it big. The video and images belong to the owner of the glasses, but who else could get access to them? Google has tried to assuage some of those fears by pointing out that all the files on the device will be able to be deleted remotely in the event that it’s lost or stolen.
Thanks for sharing
Then there’s this. In August, Google was awarded a patent to allow for the use of something known as “pay-per-gaze” advertising. In its application, the company noted that “a head-mounted tracking device”—in other words, Google Glass—could follow where the person wearing it was gazing, and be able to send images of what they saw to a server. Then, any billboards or other real-world ads the person had seen would be identified and Google could charge the advertiser. As noted in the New York Times’ Bits blog, the fee could be adapted based on how long the ad actually held the person’s gaze.
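The billing idea in the patent can be sketched in a few lines: a base fee scaled by how long an ad actually held the wearer’s gaze, up to some cap. The rates and cap below are invented for illustration; the patent describes the concept, not these numbers.

```python
def pay_per_gaze_fee(base_fee, gaze_seconds, cap_seconds=10.0):
    """Fee charged to an advertiser, scaled linearly by gaze duration.

    base_fee: maximum charge for a fully held gaze (dollars, hypothetical).
    gaze_seconds: how long the wearer looked at the ad.
    cap_seconds: gaze time at which the full fee is reached.
    """
    # Clamp the measured gaze into [0, cap] so the fee never exceeds base_fee.
    held = min(max(gaze_seconds, 0.0), cap_seconds)
    return base_fee * held / cap_seconds
```

Under this made-up scheme, a five-second glance at a billboard would cost the advertiser half the full fee, and anything past the cap would cost the full amount.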
Here’s how Google proposed the idea in its patent: “Pay-per-gaze advertising need not be limited to online advertisements, but rather can be extended to conventional advertisement media including billboards, magazines, newspapers and other forms of conventional print media.”
Since it became public, Google has downplayed the patent—first filed in 2011—saying it has no plans to incorporate the eye-tracking capability into Google Glass any time soon. “We hold patents on a variety of ideas,” the company responded in a statement. “Some of those ideas later mature into real products or services, some don’t. Prospective product announcements should not necessarily be inferred from our patents.”
There are other ways advertising could be integrated into the Google Glass experience. Digital ads could pop up in a person’s glasses based on what they may be looking at. Say you’re walking down the street and suddenly an ad for the restaurant down on the corner shows up on your display screen. That could get real old real fast—but it’s not that improbable. Or maybe you’d see virtual ads—for which advertisers pay Google—which would replace real-world ads that appear in your line of vision.
No doubt, though, Google Glass will provide us with plenty of ethical dilemmas. When, for instance, will you be justified in telling someone to please remove their camera glasses? And will there be places and situations where glasses in the filming position are universally seen as bad form—say, at dinner parties, or stops at public bathrooms or in the midst of messy breakups?
But there’s another aspect of Google Glass—or most wearable tech, for that matter—that’s particularly intriguing. It has to do with the power of real-time feedback to change behavior. Studies have shown that nothing is more effective at getting people to slow down their cars than those digital signs that tell you how fast you’re going. It’s feedback to which you can immediately respond.
So, will a steady stream of data about our personal health and exercise make us take our bad habits a lot more seriously? Sure, you can forget the occasional crack from your partner about your weight gain. But a smart watch reminding you all day, every day? What about prompts from your smart glasses that give you cues when you start spending money recklessly? Or flagging you on behavior patterns that haven’t turned out so well for you in the past? Can all these devices make us better people?
Sean Madden, writing for Gigaom, offered this take: “This is social engineering in its most literal sense, made possible by technology, with all of the promise and paranoia that phrase implies.”
Wear it well
Here are other recent developments on the wearable tech front:
- Remember when all a watch needed to do was tick: Samsung has jumped into the wearable tech business with the release of its Galaxy Gear smart watch, although some critics have suggested that it’s just not smart enough.
- If teeth could talk: Researchers at National Taiwan University have designed a sensor that when attached to a tooth can track everything your mouth does during a typical day—how much you chew, how much you talk, how much you drink, even how much you cough.
- How about when you need more deodorant?: A Canadian company is developing a machine-washable T-shirt that can track and analyze your movement, breathing and heart activity.
- Don’t let sleeping dogs lie: Why shouldn’t dogs have their own wearable tech? Whistle is a monitoring device that tells you how much exercise your dog is getting while you’re at work. Or more likely, how much he’s not getting.
Video bonus: Here’s a Google video showing how Glass can keep you from ever getting lost again.
Video bonus bonus: With luck, advertising on Google Glass will never get as bad as it plays out on this video parody.
More on Smithsonian.com
First Arrest Caught on Google Glass
September 25, 2013
When I got back from vacation the other day, I returned to a clean desk. Well, not actually clean, but every stack of paper was aligned. Not a sheet was askew.
This lasted about 20 minutes.
But rather than stare forlornly at the paper swirl building before me, this time I gave myself a big “attaboy,” because clearly I was getting my creative on.
When things get messy
That’s right, a messy desk is a sign of an innovative mind at work, not a chaotic one. At least that’s the sage suggestion from a team of researchers at the University of Minnesota.
Here’s how they reached this conclusion. First, they arranged a room to look either particularly tidy or especially messy and haphazard. Then they invited people in for what they were told was a “consumer choice study.” The study participants were shown a menu for fruit smoothies. Actually, there were two versions of the menu. On one, smoothies with a “health boost” of added ingredients were labeled “classic.” On the other menu, those same smoothies were promoted as “new.”
And here’s how it played out: When people were in the tidy room, they picked smoothies with a health boost twice as often if it was labeled classic. Conversely, when they made their smoothie choices while in a messy room, they opted for those described as “new”—again twice as often. In short, they preferred convention while in a clean environment and novelty when immersed in messiness.
Interesting, but it doesn’t feel like this is quite enough to declare that messiness fosters creativity. So the Minnesota researchers, led by Kathleen Vohs, ratcheted up the research. They used the same tidy and messy rooms, only this time, they asked subjects to propose as many different uses for ping pong balls as possible. Then they had a team of independent judges rate the ideas based on the level of creativity.
Suggesting that the balls be used for beer pong wouldn’t have impressed the judges. Recommending that they could be converted into ice cube trays would.
Once again, the messy room worked its magic. As Vohs explained recently in the New York Times, the people who spent their time there offered up five times as many ideas deemed “highly creative.”
Maybe it’s time to aim a fan at the papers on my desk and start thinking deep thoughts.
It’s all about connections
If only it were that simple. Turns out that even the way our brains produce creative thoughts appears to be a lot more complicated than long believed. The conventional wisdom that the right half of our brain handles creative thinking? Way too simple—at least according to a study published last week in the journal Proceedings of the National Academy of Sciences. A team of Dartmouth scientists found that human imagination is much more of a whole brain experience.
That’s what they observed after they hooked 15 participants up to an fMRI scanner and asked them to visualize specific abstract shapes, then told them to imagine combining those shapes into more complex figures. Large networks within the subjects’ brains became active as they conjured up the images. This included areas that deal with visual processing, along with others related to attention and executive processes. All of them worked together to make the imaginary images take shape.
While their findings didn’t provide a clear answer as to why some people are more creative than others, it did allow the scientists to speculate that it may come down to a matter of connections, that in truly creative people, the different brain regions needed to shape imagination are particularly well-connected.
Here’s other recent research on what may help make us creative:
- Still, they should not be encouraged to take apart the air conditioner: Researchers at Vanderbilt University say that a teenager’s ability to figure out how things work may be a better predictor of innovative thinking than more conventional math or verbal skills. The study found that students who did well on the Differential Aptitude Test, which measures the ability to manipulate two- and three-dimensional objects, often proved to be high achievers in math, science and engineering.
- I did it my way…and so should you: A study published by Northwestern University scientists challenges the notion that creative people can be a bit flighty. Instead, their research suggests just the opposite, that people who achieve creative success tend to cling to ideas, sometimes to the point where it keeps them from shifting focus.
- And you scoffed: A British psychologist commissioned by the music streaming service Spotify to determine what type of music benefits which topics of study came to the conclusion that listening to the music of Miley Cyrus can actually boost a person’s creativity.
Video bonus: Singer Annie Lennox offers her take on catching creative ideas and why it’s important to keep our internal critic out of the room at those moments.
Video bonus bonus: You gotta admit that there’s something creative about putting birds on hang gliders.
Also on Smithsonian.com
Working In a Creative Field? Despite What You Think, Coffee Is Not Your Best Friend
August 27, 2013
As we in America head into the Labor Day weekend, let us pause to consider that these days when you refer to an army of workers, you could be talking about people managed by their smartphones. That’s pretty much how it works with an outfit called Gigwalk, which has found a way to build a big temp worker network strictly through an iPhone app–and now it’s become available on Android phones.
Gigwalk’s M.O. is to use its app to quickly mobilize temp workers for projects that cover a lot of territory. Say, for instance, a big company needs pictures of restaurants or current menus for an online guide. Gigwalk puts out the word on its app and people get a chance to make a little extra money, usually at $12 to $15 an hour.
Kudos to Gigwalk for creating a new type of labor market to deal with jobs that otherwise would probably be too complex logistically to get done. It’s been suggested that it could become the “eBay of work.”
So it’s all good, right? Well, mostly. But there’s another aspect of Gigwalk’s model that may give you pause. It scrupulously gathers performance data on each of its temps, with the goal of being able to better match them with future gigs. It tracks how long it takes a person to respond to a job alert on their app–too slow and you ding your rating. It tracks the GPS on a temp’s phone to see how long they spend on a job and takes that into account in measuring his or her productivity. It analyzes customer surveys, naturally, but it also evaluates how much complexity a person can handle on a project before they seek help.
All of this is wrapped into a “mathematical profile” that Gigwalk says makes it more likely that their temps will succeed because it’s easier to assign them to work for which they’re best suited. And the Gigwalk people are quite proud of that. As CEO Bob Bahramipour told Bloomberg BusinessWeek: “We know more about our workers than anyone has ever known about workers.”
Co-founder and CTO Matt Crampton had more to say in a recent interview on Salon:
“Behind the scenes we are watching everybody while they are going about doing their work. We are building these mathematical profiles on top of people, figuring out who is doing good jobs on a variety of gigs. We can figure out what kind of jobs you do well and start routing more complex, higher-paying jobs to you based on the skills we see inside our system. And then we can provide companies with workers with the specific kind of skill sets they need to get work done.”
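A worker profile of the kind Crampton describes can be sketched as a handful of normalized signals blended into one score. Everything below is hypothetical: the signals, weights and five-point survey scale are invented for illustration and are not Gigwalk’s actual formula.

```python
def worker_score(response_minutes, hours_on_job, expected_hours, survey_avg):
    """Blend three illustrative signals into a 0-to-1 profile score.

    response_minutes: how fast the temp answered the job alert.
    hours_on_job / expected_hours: GPS-measured time vs. the job's estimate.
    survey_avg: average customer-survey rating on a 5-point scale.
    """
    # Responding within the hour earns credit; slower responses ding the score.
    speed = max(0.0, 1.0 - response_minutes / 60.0)
    # Finishing at or under the estimated time counts as fully efficient.
    efficiency = min(expected_hours / hours_on_job, 1.0) if hours_on_job else 0.0
    satisfaction = survey_avg / 5.0
    return round(0.3 * speed + 0.3 * efficiency + 0.4 * satisfaction, 2)
```

Under these made-up weights, a temp who answers in 10 minutes, finishes a two-hour job on time and averages 4.5 on surveys scores 0.91; the system would then route the more complex, higher-paying gigs to the high scorers.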
It’s all perfectly logical, a deep bow to meritocracy, and as Crampton notes, if businesses–particularly retailers–find that this approach can consistently provide them with competent, geographically distributed temps, they’ll start to look for other ways to use them. And that could end up creating new kinds of jobs.
But there is the matter of Gigwalk’s worker profiles. Sure, they may be well-intentioned, but what to make of job performance grades largely driven by data and spawned by algorithms? How inviolable will they be? And who ultimately could have access to them?
Maybe I’m being paranoid–writing about algorithms as the engine of 21st century life will do that to you–but are we seeing the first hint of something like career credit ratings? Could you one day moan that your data points have done you wrong?
Matters of privacy
Here are other recent developments in the realm of digital privacy:
- Is it in your DNA to want to let people know what’s in your DNA?: A small startup in Minneapolis named Miinome wants to create what it calls the first “member-controlled human genetic marketplace.” It would electronically store the DNA data of anyone who has had their genome sequenced. Haven’t gone there yet? Well, Miinome would do this for you for free or at low cost. In return, you’d have the option of sharing select aspects of your DNA data–say, gluten intolerance or a genetic disposition to male pattern baldness–with marketers who could target offers to you based on what your genes say about you.
- It’s my life and you’re not welcome to it: According to a recent study by the Pew Research Center, more than half of all U.S. teenagers who use mobile apps have avoided certain ones because they would have to share personal information. And just under half of the teenagers surveyed said they have turned off location-tracking features in apps they’ve downloaded. The researchers said they found that American teenagers tend to think about privacy in the sense of “social privacy” or whether an app is “creepy,” but don’t seem to worry much about personal data being captured through advertising or governmental surveillance, as adults do.
- Thanks for sharing: Medical and health mobile apps are a boom business–an estimated 97,000 different ones now are out in the marketplace. And plenty of people are sharing lots of very personal information through those apps, assuming that it will stay secure and private. Not quite. A recent study found that many health app firms don’t encrypt the data they receive and that fewer than half of the apps analyzed posted privacy policies. Others didn’t disclose that captured data could be made available to third parties.
- Not that I don’t trust you: A “Boyfriend Tracker” app was removed from the Google Play app store in Brazil last week in response to complaints about potential privacy abuses–but not before tens of thousands of Brazilians downloaded it. The app lets users obtain a call history, receive any incoming or outgoing text messages, identify a partner’s location on a map using GPS, and actually turn on the phone to listen in to the surrounding environment.
- Help us help you get pregnant: Earlier this month, an app called Glow was launched with the goal of helping women get pregnant. Technically, it’s a free fertility app, but one that goes way beyond just tracking a woman’s menstrual cycle. It asks users to enter very detailed information about their health and sex lives–including frequency of sex and sexual positions. The more detailed the info is, say Glow’s creators, the more precise the app can be in projecting a woman’s best chance for getting pregnant. Glow is unique in another way, too. Users who sign up for a service called Glow First can get help paying for fertility treatments.
Video bonus: Rick Smolan, author of “The Human Face of Big Data,” weighs in on how much we don’t know about what happens to all the personal info we’re so happy to share.
More from Smithsonian.com
August 20, 2013
It’s the time of year when learning seems remarkably possible. Students are excited, teachers are motivated–let the learnfest begin.
But by next month, it will become clear once again that the teaching/learning routine is a tricky dance, that all kinds of things, both in our heads and in our lives, can knock it off balance.
Fortunately, scientists have kept busy analyzing how and why people learn. Here are 10 examples of recent research into what works and what doesn’t.
1) Flippin’ it old school: The latest thinking has it that the most effective way to get students to learn these days is to flip the old model and instead have students first watch videos or read books, then do projects in the classroom. Au contraire, say researchers at Stanford University. They contend that you need to flip the flip after finding that students are much more likely to understand those videos and books if they first do hands-on exercises in class that tap into their prior knowledge of a subject, say to solve a problem. Only then, the researchers said, are students able to fully grasp more abstract concepts.
2) Such as “three idiot drivers”: Meanwhile, scientists at the University of Missouri found that preschoolers who have a hard time estimating the number of objects in a group were more than twice as likely to struggle with math later in life. Those researchers concluded that it has to do with a child’s inability to learn the concept of how numerals symbolize quantities. They suggest that parents should take advantage of opportunities to show how things in the world can be expressed in numbers.
3) Give that machine a timeout: Researchers at McMaster University in Ontario say that laptops in classrooms distract not only the students using them, but also those sitting nearby. They gave laptops to some students and asked them to perform certain tasks during class. They also asked classmates using only #2 pencils to complete the same tasks. Guess who performed worst: the kids with laptops, plus the people sitting next to them.
- Like clockwork: Young girls need to stick to a regular bedtime if they want to help their brains develop. So says a study from University College London, which found that girls under seven years old who had erratic bedtimes scored lower on IQ tests than girls who went to sleep around the same time every night. Inconsistent bedtimes also affected young boys, but the effect seemed to be temporary. The researchers also determined that when girls went to bed didn’t seem to matter nearly as much as whether they did so at the same time every night.
5) Let’s give them a big mazel tov shout out: One of the keys to learning a second language is the ability to pick up patterns, according to a recent study at Hebrew University. The scientists determined that American students who were better at learning Hebrew also scored particularly high on tests in which they needed to distinguish regularities in the sequence in which they were shown a series of shapes. Being able to spot patterns proved to be a very good predictor of who would have the best grasp of Hebrew after a year of study.
6) Not to mention, they can now sing in Hungarian at parties: It apparently also helps to sing the words of another language. In a study published last month in the journal Memory & Cognition, scientists said that people who sang back phrases they heard in a foreign language were considerably better at learning it than people who simply repeated the phrases in spoken words. In fact, research participants who learned through singing performed twice as well as those who learned by speaking the phrases. The study required English speakers to learn Hungarian, which is a particularly difficult language to master.
7) Brains are just so smart: Another recent study, this one by German scientists, determined that even under stress, humans are able to learn because certain receptors in the brain help us move from conscious to unconscious learning. People in a study who were given drugs to block those receptors had more trouble learning in a stressful situation because their brains couldn’t make the switch.
8) Reading minds: Thanks to researchers at M.I.T., it may soon be possible to diagnose dyslexia in young children before they start trying to read. Using a type of MRI brain scan, the scientists discovered a correlation between the size and organization of a certain region of the brain and a child’s ability to identify and manipulate the sounds of language. By having a biomarker for dyslexia before they try to read, kids may be able to avoid some of the psychological stress they suffer when they struggle to understand written words.
9) Kids who can hand jive are off the charts: Turns out that it may be a good thing for small children to talk with their hands. A study published in the journal Developmental Psychology concluded that preschoolers and kindergartners who naturally gestured to indicate what they were trying to do showed more self control. The gestures seemed to help the kids think things through, according to the researchers, who said the hand movements had a stronger correlation to successful performance than age.
10) Strangely, however, they are unable to hear parents: If you have kids in middle school or older, they’ve no doubt told you countless times how good they are at multitasking, that they can watch a video, text their friends and study for a test without breaking a sweat. But, according to a study published in a recent issue of Computers in Human Behavior, they’re probably not learning much. Not only were researchers surprised at how often kids in the study multitasked–even when they knew someone was watching–but they also found that their learning was spottier and shallower than that of students who gave studying their full attention.
Video bonus: Math was always a lot more fun when Abbott and Costello did it.
Video bonus bonus: Forgive me if you’ve seen or heard Ken Robinson’s lecture on changes in education, but his insights, along with the clever animation illustrating them, make it worth an encore.
More from Smithsonian.com
To Develop Tomorrow’s Engineers, Start Before They Can Tie Their Shoes