April 15, 2013
Last fall, shoppers outside a Macy’s store in Boston were given a chance to test drive a robot. They were invited, compliments of Brigham and Women’s Hospital, to sit at a console and move the machine’s arm the same way surgeons would in an operating room.
And why not? What says cutting-edge medicine more than robotic surgery? Who wouldn’t be impressed with a hospital where robot arms, with all their precision, replace surgeons’ hands?
The surgeons, of course, control the robots on computers where everything is magnified in 3D, but the actual cutting is done by machines. And that means smaller incisions, fewer complications and faster recoveries.
But earlier this year, the Food and Drug Administration (FDA) began surveying doctors who use the operating room robots known as the da Vinci Surgical System. The investigation was sparked by a jump in incidents involving da Vinci robots, up to 500 in 2012.
The California company that makes the da Vinci, Intuitive Surgical, says the spike has to do with a change in how incidents are reported, as opposed to problems with its robots. It’s also true that robot surgery is being done a lot more frequently–almost 370,000 procedures were done in the U.S. last year, which is three and a half times as many as in 2008.
And the procedures are getting more complicated. At first, the robots were used primarily for prostate surgeries, then for hysterectomies. Now they’re removing gall bladders, repairing heart valves, shrinking stomachs during weight loss surgery, even handling organ transplants.
Not surprisingly, the FDA survey has stirred up a swirl of questions about machine medicine. Have hospitals, in their need to justify the expense of a $1.5 million robot, ratcheted up their use unnecessarily? Has Intuitive Surgical placed enough emphasis on doctors getting supervised training on the machines? And how much training is enough?
It’s not an uncommon scenario for technological innovation. A new product gets marketed aggressively to companies–in this case hospitals–and they respond enthusiastically, at least in part because they don’t want to miss out on the next big thing.
But is newer always better? A study published recently in The Journal of the American Medical Association compared outcomes in 264,758 women who had either laparoscopic or robotically assisted hysterectomies at 441 different hospitals between 2007 and 2010. Both methods are minimally invasive.
But the researchers found no overall difference in complication rates between the two methods, and no difference in the rates of blood transfusion. The only big difference between the two is cost–robotic surgery costs one-third more than laparoscopic surgery.
Then there’s the matter of loosening training standards. When the FDA allowed the da Vinci system to be sold back in 2000, it was under a process called “premarket notification.” By claiming that new devices are similar to others already on the market, manufacturers can be exempted from rigorous trials and tough requirements. In this case, Intuitive Surgical was not formally required to offer training programs for surgeons.
The company did tell the FDA that it planned to require a 70-item exam and a three-day training session for doctors. But, as a recent New York Times article noted, Intuitive changed its policy just two years later. Instead it required surgeons to pass a 10-question online quiz and spend only a day in hands-on training.
So ultimately it’s up to the hospitals to set training standards. But in their rush to embrace the future, they can be tempted to avoid being too demanding. In one 2008 case that has resulted in a lawsuit against Intuitive, a patient suffered serious complications, including impotence and incontinence, while having his prostate gland removed. The surgeon, it turned out, had never done robotic surgery without supervision before.
A researcher at Johns Hopkins Hospital, Dr. Martin Makary, who has previously criticized hospitals for overhyping robotic surgery on their websites, has another study coming out soon that suggests that the problems involving da Vinci robots are underreported. “The rapid adoption of robotic surgery,” he contends, “has been done, by and large, without the proper evaluation.”
Dr. David Samadi, Chief of Robotics and Minimally Invasive Surgery at the Mount Sinai School of Medicine in New York, has a different way of looking at robotic surgery: “A good driver in a Lamborghini is going to win NASCAR. But someone who’s not a good driver in a Lamborghini…he’s going to flip the car and maybe kill himself.”
Here are some other ways robots are being used in hospitals:
- Down go the mean old germs: Doctors at Johns Hopkins Hospital in Baltimore have turned to robots to take on the superbugs that threaten to spread dangerous infections among patients. After a hospital room is sealed, the robots spend the next half hour spraying a mist of hydrogen peroxide over every surface. Other hospitals are taking a different approach in dealing with nasty bacteria–they’re using robots that zap germs with beams of ultraviolet light.
- And you’ll be able to see your face in the scalpel: GE is developing a robot that will keep the tools of the operating room sterile and organized. Instead of relying on humans doing this by hand–clearly not the most efficient process–the robot, by recognizing unique coding on each piece of equipment, will be able to sort scalpels from clamps from scissors, sterilize them and then deliver everything to the operating room.
- Bedside manner, without the bedside part: Earlier this year the FDA approved a medical robot called RP-VITA, which was developed by iRobot and InTouch Health. The machine moves around the hospital to rooms of patients identified by the doctor. Once in a room, it connects the doctor to the patient or hospital staff through the robot’s video screen.
- The buddy system: Researchers at Columbia University found that the pain ratings of hospitalized children dropped significantly when they interacted with “therapeutic robot companions.”
Video bonus: When da Vinci is good, it’s very, very good. Here’s a video of a surgeon using one to peel a grape.
Video bonus bonus: Okay, admittedly this has nothing to do with robotic surgery, but it’s the hottest robot video on the Web right now–an impressive, yet somewhat creepy demo of Boston Dynamics’ “Petman” in camo gear.
More from Smithsonian.com
March 1, 2013
So, we’re 42 years into the War on Cancer, and while the enemy remains formidable, our strategy is shifting into yet another phase. We’ve been through the equivalent of hand-to-hand combat (surgery), carpet bombing (radiation) and chemical warfare (chemotherapy).
Now the fight is about stealth. Instead of concentrating on blasting away at cancer cells, or poisoning them, you’re more likely to hear cancer scientists talk about “Trojan horses” or “cloaking strategies” or “tricking” the immune system. All are cell-level ploys hatched through nanomedicine–medical treatment gone very, very small. How small? At that scale, about 5,000 particles lined up side by side would span the width of a human hair.
We are not the enemy
Okay, so we’re in beyond comprehension territory here. But let’s not get hung up on size; let’s focus on deception.
The latest example of microscopic trickery was laid out last week in a paper from researchers at the University of Pennsylvania. One of the most appealing aspects of nanomedicine is that it allows scientists to deliver drugs directly to a tumor instead of flooding the whole body with chemotherapy. Unfortunately, the immune system sees the nanoparticles as invaders and tries to clear them away before they can go to work on the tumor cells.
The trick was to make the “sentry cells” of the body’s immune system think that the drug-delivering nanoparticles were native cells, that they weren’t intruders. The researchers did this by attaching to each nanoparticle a protein that’s present in every cell membrane. And put simply, it sent out a “don’t eat me” message to the body’s guard cells.
The result, at least in mice, is that this technique dramatically improved the success rate of two different kinds of nanoparticles–one that delivered tumor-shrinking drugs and one filled with dye that would help doctors capture images of cancer cells.
Meanwhile, earlier this year, scientists at the Methodist Hospital Research Institute in Houston announced that they had found their own way of letting nanoparticles fool the immune system. They developed a procedure to physically remove the membranes from active white blood cells and drape them over nanoparticles. And that “cloaking strategy” was enough to keep proteins that activate the immune system from doing their job and ordering it to go repel the invaders. The researchers believe it will one day be possible to harvest a patient’s own white blood cells and use them to cloak the nanoparticles, making it that much more likely that they’ll get to their target without being attacked.
As magical as all this can sound, nanomedicine is not without risk. Much more research needs to be done on the long-term impact of nanoparticles inside the body. Could they accumulate in healthy body tissues? And if they do, what effect would that have? Could those tiny particles, now seemingly so full of promise, eventually turn toxic?
Still plenty of questions about nanomedicine, but it’s feeling more like an answer.
Here are six other ways in which thinking small is moving medicine forward:
1) But first, remove all jewelry: At the University of Minnesota, scientists are experimenting with nanoparticles and magnets to fight lung cancer. They’ve developed an aerosol inhalant that a patient can draw into his or her lungs with a few deep breaths, carrying iron oxide nanoparticles to tumors inside the lungs. Then, by waving a magnet outside the body, they can agitate the particles so that they heat up enough to kill cancerous cells around them.
2) A new shell game: A team of engineers at UCLA has developed tiny capsules–about half the size of the smallest bacterium–that are able to carry proteins to cancer cells and stunt the growth of tumors. And the nanoscale shells degrade harmlessly in non-cancerous cells.
3) Gold’s fool: And at Northwestern, researchers say they’ve found a way to use gold nanoparticles to effectively fight lymphoma. They fool the lymphoma cells into thinking they contain high-density lipoprotein (HDL), which the cells need to survive. The gold nanoparticles bind to the cancer cells and starve them of cholesterol.
4) Way better than Krazy Glue: In Germany, scientists have invented a paste made of nanoparticles that they say can make broken bones repair themselves faster. The paste contains two growth-factor genes that enter cells and accelerate bone healing.
5) Alas, it can’t help you find meds you dropped on the floor: While technically not nanomedicine, a small smart pill that tracks if people are taking their medications correctly could soon be on the market. Approved by the FDA last year, the pill contains a tiny sensor that interacts with stomach fluid and sends a signal to a patch on a person’s body. Taken with a real medication, the smart pill transmits information about the other med, particularly when it was ingested, to a smartphone. But it also sends physiological data, including heart rate and activity level.
6) Body heat gone bad: Along the same lines, firemen in Australia have started taking a tiny capsule to protect them from being overcome by heat. Sensors in the pill are able to take their core body temperatures in real time and relay that data to a smart phone. And that has led to changes in firefighters’ work patterns, including the length of time they are exposed to blazes.
Video bonus: Still not clear on nanomedicine? Here’s a TED talk on how it’s being used to fight cancer by Mark Davis, a leading expert on the subject and a chemical engineer at the California Institute of Technology.
January 30, 2013
Admittedly, it’s a little hard to imagine smell scientists, but research published earlier this week has those who study the sense of smell taking sides.
It comes down to how our noses detect odors. The long-standing explanation is that our noses have receptors that respond based on the shapes of odor molecules. Different molecules fit together with different receptors, the thinking goes, and when a match is made, the receptor tips off the brain that our nose has picked up a whiff of coffee or perhaps a very different smell emanating from the bottom of our shoe.
But a conflicting and more exotic theory received a boost in the new study by researchers in Greece. It holds that we can also sense smells through quantum physics, in this case the vibration of odor molecules. As Mark Anderson posits at Scientific American, “Does the nose, in other words, read off the chemical makeup of a mystery odorant—say, a waft of perfume or the aroma of wilted lettuce—by ‘ringing’ it like a bell?”
I know what you’re thinking: What difference does this make as long as I can still smell bacon?
Sniffing out trouble
But actually it does matter, because the more we understand the process of smelling, the more effective we can be at recreating it in machines. In fact, just last month IBM, in its annual “5 in 5” forecast–a list of technologies it believes will hit the mainstream in five years–focused exclusively on the development of the five human senses in machines.
To mimic smelling, tiny sensors would be integrated into smartphones or other mobile devices. Much as a breathalyzer determines alcohol levels, they would gather data from the smell of your breath–detecting chemicals that humans wouldn’t perceive–and send it to a computer in your doctor’s office. The thinking is that eventually this would be a core component of home health care–the ability to “smell” diseases remotely, such as liver or kidney ailments, asthma or diabetes.
Or on a more basic level, as IBM’s Hendrik Hamann put it: “Your phone might know you have a cold before you do.”
IBM is also working with health care organizations to equip patient and operating rooms with sensors that can help address one of the biggest problems hospitals face today–how do you keep them hygienic? Hundreds of sensors will basically sniff for cleanliness, identifying the chemical compounds that create odors, some of which are undetectable by humans. The staff can say they cleaned a room; the sensors will know if and when they did.
Every breath you take
The smell tests might even detect cancer. Last fall, in a study in the Journal of Thoracic Oncology, researchers from Israel and Colorado reported that breath analysis could distinguish between benign and malignant lung tumors with 88 percent accuracy. Plus, the breath test could determine the specific type and stage of the lung cancers.
And at the Cleveland Clinic, Dr. Peter Mazzone, director of the lung cancer program, is testing a sensor array that changes color when a patient’s breath passes over it. In a study of 229 patients, the test, using a machine developed by the California firm Metabolomx, was able to distinguish those with lung cancer with more than 80 percent accuracy.
Meanwhile, Mazzone and his team are collecting as many breath samples as possible from patients, both with and without lung cancer. The goal is to match breath patterns with physical conditions. “My vision,” Mazzone told the Wall Street Journal, “is being able to say, ‘This is a 60-year old with emphysema who smoked for 30 years—what’s the chance of there being cancer there?’ But we have to teach the device what it looks like first.”
Or, perhaps more accurately, what it smells like.
Here are other recent discoveries scientists have made about smell:
- Me, my smell and I: Research in Germany concluded that not only can we identify our own body odor, but that we prefer it. For the study, women were asked to select which of their armpit odors they liked more. They showed a clear preference for the one perfumed with a solution that included elements of their own scent.
- Can robots wear Axe?: The U.S. Navy is looking to use scent-sniffing robots to move 1,000-pound bombs on ships. The idea is that a human would control the lead robot and it would dispense the equivalent of a robot pheromone that a swarm of other robots would follow like army ants.
- I love the smell of gridlock in the morning: When people are anxious, their sense of smell becomes more acute, according to a recent study at the University of Wisconsin-Madison.
- Why your dog can sniff out a chicken leg from a block away: And from the University of Chicago comes research finding that animals are able to focus their sense of smell much as humans focus their eyes. Through their finely honed sniffing techniques, they apparently can bring scents to receptors in different parts of the nose.
- There’s the rub: And finally, a study in the U.K. has found that thanks to a genetic variation, two percent of the population never has underarm body odor. Yet more than three-quarters of them still use deodorant because, well, that’s what people do.
Video bonus: Stuart Firestein, chairman of the biology department at Columbia University, tells you all you want to know about how our nose does its job.
Video bonus bonus: A Chinese airline that checks out the armpit odors of people interviewing to be pilots.
January 17, 2013
Utensil history was made last week and I, for one, took pleasure in seeing that we had finally evolved beyond the spork or, as some of you may know it, the foon.
But sadly, the unveiling of the HapiFork at the Consumer Electronics Show (CES) was not universally greeted with great jubilation, but rather with a fair amount of ridicule.
Produced by a Hong Kong company called HapiLabs, the HapiFork is a curious little thing. It looks like a fork and works like a fork, but it vibrates like a cellphone. And why it buzzes is the reason the media largely responded with one big group eyeroll.
See, the HapiFork is a fork with a simple and noble mission–to get you to stop eating like a pig. It buzzes to remind you to slow down.
It tracks not only the number of bites you’ve taken, but also how much time has passed between them and how long it takes you to finish the meal. The slower you eat, the fewer calories you consume. And because all the data can be stored on your smart phone, you can measure how much less of a chowhound you’ve become.
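For the curious, the fork’s bookkeeping is simple to picture. Here’s a minimal sketch of that kind of per-meal tracking–the class name, the 10-second threshold and the buzz logic are all invented for illustration, not taken from HapiLabs:

```python
from dataclasses import dataclass, field

@dataclass
class MealTracker:
    """Hypothetical sketch of the metrics a device like the HapiFork
    records: bite count, gaps between bites and total meal length."""
    min_interval: float = 10.0          # seconds allowed between bites (invented)
    bite_times: list = field(default_factory=list)

    def record_bite(self, t: float) -> bool:
        """Log a bite at time t (seconds); return True if the fork should buzz."""
        buzz = bool(self.bite_times) and (t - self.bite_times[-1]) < self.min_interval
        self.bite_times.append(t)
        return buzz

    def summary(self) -> dict:
        """Meal stats of the sort the companion phone app might store."""
        gaps = [b - a for a, b in zip(self.bite_times, self.bite_times[1:])]
        return {
            "bites": len(self.bite_times),
            "duration_s": self.bite_times[-1] - self.bite_times[0] if self.bite_times else 0.0,
            "avg_gap_s": sum(gaps) / len(gaps) if gaps else 0.0,
        }
```

A bite logged too soon after the previous one triggers the buzz; everything else just accumulates into the end-of-meal summary.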
But some critics were not enamored of the concept, portraying the HapiFork as the essence of nanny technology, another “smart” gadget enforcer of data-driven moderation. How, the thinking goes, did it come to this, where forks are telling us to shut our pieholes?
The measure of a man
But maybe, given the obesity epidemic in the U.S. and Europe, it’s time to start listening to buzzing silverware. In fact, there are those who believe the current boom in mobile apps and devices that track our health and bad habits could play a big role in helping the U.S. get its outrageous health care costs under control.
A major health trend this year, according to a new report from PricewaterhouseCoopers, will be a shift by employers and insurance companies to encourage employees to be a lot more proactive when it comes to taking care of themselves. That’s in part due to incentives in the Affordable Care Act, but also because today’s technology–whether it’s sensors, WiFi or smart phones–has made it so much easier to track every move we make, every breath we take.
We’ll likely see more companies turn to employee wellness programs focusing on prevention and tapping into all that data that our smart phones and other health gadgets are able to gather about us. Already, start-ups such as the Boston-based Healthrageous are being hired by companies to work closely with their employees with chronic conditions, such as diabetes or hypertension or even sleep disorders. Healthrageous provides both a tracking device–say a blood glucose monitor for diabetics–and a customized plan to help employees reach their personal goals, which could be anything from fitting into pants you last wore 10 years ago to being able to play with your grandkids.
PUSH Wellness, in Chicago, also contracts out an employee wellness program, but with a different spin. It actually pays cash incentives to workers who meet goals that raise their “PUSH” score–a number based on a person’s Body Mass Index (BMI), blood pressure, cholesterol and fitness level. With PUSH, it’s not enough for an employee to exercise; they have to show real measurable results or there’s no pay out.
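To make the “pay for results, not effort” idea concrete, here’s a rough sketch of how a composite score like that could be computed and tied to a payout. PUSH’s actual formula isn’t public, so every threshold and rate below is invented for illustration:

```python
def wellness_score(bmi, systolic_bp, ldl_cholesterol, weekly_exercise_min):
    """Illustrative composite score from the four inputs the article names.
    All cutoffs are hypothetical, loosely based on common clinical ranges."""
    score = 0
    if 18.5 <= bmi < 25:            # healthy BMI band
        score += 25
    if systolic_bp < 120:           # normal blood pressure
        score += 25
    if ldl_cholesterol < 100:       # optimal LDL, mg/dL
        score += 25
    # up to 25 points for activity, maxing out around 150 min/week
    score += min(25, weekly_exercise_min // 6)
    return score

def payout(old_score, new_score, rate_per_point=2.0):
    """Cash incentive only for measurable improvement, per the program's premise."""
    return max(0, new_score - old_score) * rate_per_point
```

The key design point matches the article: exercising alone changes nothing in `payout`; only a measured rise in the score does.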
The big health insurance companies are getting in on the act, too. Last month, Aetna unveiled Passage, a fitness app it developed with Microsoft, that allows people to feel like they’re running or biking in some of the world’s great cities–Rome, New York, or Barcelona, for instance.
Also last month, Cigna announced that it has made available, for free, to the first 20,000 people who download them, four apps bundled together as the “Healthy Living App Pack.” One is designed to track your workouts, another to get you to relax, another to help you sleep. The fourth, Fooducate, is a food nutrition app designed to make you health savvy when you’re food shopping.
When sensors speak
Here are five other health devices that made a splash at CES last week:
- Would your wrist lie to you?: Another health wristband is coming on the market soon. Called Fitbit Flex, it will be able to track your daily activity–steps taken, calories burned–and also how you’ve slept, plus wake you up with a little buzz in the morning. For motivation, a display of four LED lights shows how far along you are in meeting that day’s goal. And at $100, it will be less expensive than the competitors already out there, the Nike FuelBand and Jawbone’s Up.
- Keep running or we’ll play “Gangnam Style”: Or you can let little earbuds do the monitoring work. Coming out this spring are iRiver On headphones equipped with PerformTek Precision Biometrics technology that measures a range of body metrics, including heart rate, distance traveled, steps taken, respiration rate, speed, metabolic rate, energy expenditure, calories burned and recovery time.
- It was so much easier when pills looked like the Flintstones: For those dealing with a daily dose of multiple meds, there’s the uBox. The little box reminds people when it’s time to take their pills with a combination of beeps, blinking lights and smart phone reminders. And if you’ve already taken your meds, the box remains locked until it’s time for another set–the better to keep forgetful seniors from double dosing. It even lets other family members know if grandpa’s missed a med.
- Giving new meaning to “Let me hear your body talk”: Then there’s Metria, a small patch a person wears on their chest that measures heartbeat, skin hydration, breathing, steps taken and sleep patterns. (It records the duration and quality of sleep based on how much you’ve tossed and turned.) Each patch gathers information for seven days and can send it to a phone or tablet anywhere in the world. Metria’s designed primarily for elderly people who live alone, but the U.S. Air Force reportedly may use it to monitor pilots.
- Will walk for prizes: And bringing us back full circle to obesity is the ibitz PowerKey, a pedometer for kids. It doesn’t just track their activity, but rewards them with games, apps, shows and prizes for staying on the move. And yes, parents can check in on their kids’ progress on their own smart phones.
Video bonus: See why Stephen Colbert thinks the HapiFork is “unAmerican.”
January 7, 2013
Here in Washington we have heard of this thing you call “advance planning,” but we are not yet ready to embrace it. A bit too futuristic.
Still, we can’t help but admire from afar those who attempt to predict what could happen more than a month from now. So I was impressed a few weeks ago when the big thinkers at IBM imagined the world five years hence and identified what they believe will be five areas of innovation that will have the greatest impact on our daily lives.
They’ve been doing this for a few years now, but this time the wonky whizzes followed a theme–the five human senses. Not that they’re saying that by 2018, we’ll all be able to see, hear and smell better, but rather that machines will–that by using quickly evolving sensory and cognitive technologies, computers will accelerate their transformation from data retrieval and processing engines to thinking tools.
See a pattern?
Today, let’s deal with vision. It’s a logical leap to assume that IBM might be referring to Google’s Project Glass. No question that it has redefined the role of glasses, from geeky accessory that helps us see better to combo smartphone/data device we’ll someday wear on our faces.
But that’s not what the IBMers are talking about. They’re focused on machine vision, specifically pattern recognition, whereby, through repeated exposure to images, computers are able to identify things.
As it turns out, Google happened to be involved in one of last year’s more notable pattern recognition experiments, a project in which a network of 1,000 computers using 16,000 processors was, after examining 10 million images from YouTube videos, able to teach itself what a cat looked like.
What made this particularly impressive is that the computers were able to do so without any human guidance about what to look for. All the learning was done through the machines working together to decide which features of cats merited their attention and which patterns mattered.
And that’s the model for how machines will learn vision. Here’s how John Smith, a senior manager in IBM’s Intelligent Information Management, explains it:
“Let’s say we wanted to teach a computer what a beach looks like. We would start by showing the computer many examples of beach scenes. The computer would turn those pictures into distinct features, such as color distributions, texture patterns, edge information, or motion information in the case of video. Then, the computer would begin to learn how to discriminate beach scenes from other scenes based on these different features. For instance, it would learn that for a beach scene, certain color distributions are typically found, compared to a downtown cityscape.”
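Smith’s beach example boils down to a classic feature-based pipeline: turn each image into a color-distribution vector, average those vectors per category, then label a new image by whichever category’s average it sits closest to. Here’s a toy sketch of that idea–a coarse color histogram plus a nearest-centroid classifier, far simpler than anything IBM would actually deploy:

```python
def color_histogram(pixels, bins=4):
    """Coarse color-distribution feature: the fraction of pixels falling
    into each quantized RGB bucket (bins**3 buckets total)."""
    hist = [0.0] * (bins ** 3)
    for r, g, b in pixels:
        idx = (r * bins // 256) * bins * bins + (g * bins // 256) * bins + (b * bins // 256)
        hist[idx] += 1
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def train(examples):
    """examples: {label: [pixel_list, ...]} -> mean histogram per category."""
    centroids = {}
    for label, images in examples.items():
        hists = [color_histogram(img) for img in images]
        centroids[label] = [sum(col) / len(hists) for col in zip(*hists)]
    return centroids

def classify(pixels, centroids):
    """Label a new image by the nearest category centroid (squared distance)."""
    h = color_histogram(pixels)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(h, centroids[label]))
```

A beach scene, dominated by sand and sea colors, lands near the “beach” centroid; a gray cityscape lands elsewhere–exactly the discrimination Smith describes, just with one crude feature instead of many.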
How smart is smart?
Good for them. But face it, identifying a beach is pretty basic stuff for most of us humans. Could we be getting carried away about how much thinking machines will be able to do for us?
Gary Marcus, a psychology professor at New York University, thinks so. Writing recently on The New Yorker’s website, he concludes that while much progress has been made in what’s become known as “deep learning,” machines still have a long way to go before they should be considered truly intelligent.
“Realistically, deep learning is only part of the larger challenge of building intelligent machines. Such techniques lack ways of representing causal relationships (such as between diseases and their symptoms), and are likely to face challenges in acquiring abstract ideas like “sibling” or “identical to.” They have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used.”
The folks at IBM would no doubt acknowledge as much. Machine learning comes in steps, not leaps.
But they believe that within five years, deep learning will have taken enough forward steps that computers will, for instance, start playing a much bigger role in medical diagnosis, that they could actually become better than doctors when it comes to spotting tumors, blood clots or diseased tissue in MRIs, X-rays or CT scans.
And that could make a big difference in our lives.
Seeing is believing
Here are more ways machine vision is having an impact on our lives:
- Putting your best arm forward: Technology developed at the University of Pittsburgh uses pattern recognition to enable paraplegics to control a robotic arm with their brains.
- Your mouth says yes, but your brain says no: Researchers at Stanford found that using pattern recognition algorithms on MRI scans of brains could help them determine if someone actually had lower back pain or if they were faking it.
- When your moles are ready for their close ups: Last year a Romanian startup named SkinVision launched an iPhone app that allows people to take a picture of moles on their skin and then have SkinVision’s recognition software identify any irregularities and point out the risk level–without offering an actual diagnosis. Next step is to make it possible for people to send images of their skin directly to their dermatologist.
- Have I got a deal for you: Now under development is a marketing technology called Facedeals. It works like this: Once a camera at a store entrance recognizes you, you’re sent customized in-store deals on your smart phone. And yes, you’d have to opt in first.
- I’d know that seal anywhere: A computerized photo-ID system that uses pattern recognition is helping British scientists track gray seals, which have unique markings on their coats.
Video bonus: While we’re on the subject of artificial intelligence, here’s a robot swarm playing Beethoven, compliments of scientists at Georgia Tech. Bet you didn’t expect to see that today.