November 26, 2013
With the inherent low-brow hokiness of instant spray-on hair and tans, the notion of clothing that you can simply spray on seems destined to occupy a spot at the bottom rung of gimmicky products typically found in the “As Seen On TV” aisle.
But it’s actually premier designer labels like Calvin Klein and specialty boutique shops that inventor Manel Torres envisioned when he conceived and later developed his patented “couture-in-a-can” technology. At these upscale fashion outlets, shoppers would drop in, undress and have a custom-sprayed scarf draped around them in minutes. In this best-case scenario, prices would likely vary depending on whether the shopper wanted to be coated with a $50 pair of Levi’s or $100 Ralph Lauren snug denim. Whatever outfit these style-conscious visitors chose, they’d walk out assured that they won’t run into anyone else who’s accidentally replicated their truly unique look.
Now, ten years after initially hitting upon the possibility, the British fashion designer is mostly busy fielding phone calls from representatives of fashion houses and other potential investors from a wide spectrum of industries. From the earliest failed prototypes to a current version that Torres has deemed “ready for production,” the revolutionary liquid fabric has been showcased on a London runway during the Imperial College London Fashion Show, where it received plenty of attention from the press. Still, the technology has yet to make the leap from showroom novelty to anyone’s actual wardrobe.
“I am always getting tons of emails asking when I will bring a product to the market,” says Torres, who founded Fabrican Ltd to market the concept. “Right now, we need global companies to fund this effort.”
The idea for sprayable garments came to him at a wedding, where he watched attendees playing with silly string. The sight left him wondering if something similar could be done with thread. Torres enrolled in a chemical engineering PhD program at Imperial College London, where he experimented with numerous formulations that would allow common fabrics like cotton, wool and nylon to be compressed and layered using an ejection system such as a spray gun or an aerosol can.
The fashion pioneer eventually settled on a solution composed of short, cross-linked fibers held together by special polymers—all soaked in a safe solvent so that the fabric can be delivered in liquid form. As the mixture is sprayed, the solvent evaporates before it reaches the skin, which keeps the now-solid material from completely affixing to the body; it forms a layer of sturdy, unwoven material with a texture Torres likens to the felt-like chamois leather used to make polishing cloths and towels for drying cars.
The method of spraying, he says, gives designers and consumers immense flexibility to hand-craft a wide range of apparel, such as shirts, coats and undergarments, on the fly. Spraying on multiple layers, for instance, hardens and strengthens the material, and designers can add their aesthetic touch by playing with a diverse range of source fabrics, colors, even scents. Clothing made from the spray-on technology can be washed, re-worn and easily recycled, since the same solvent used to deliver the material can also be used to break it down.
“The wearer can recycle the clothes themselves or perhaps they can take the used clothing into a shop and exchange it for a refill,” Torres explains. “There are many possibilities, but that’s really thinking further ahead.”
Besides being a fashion statement, Torres points out that the material is exceptionally versatile. In fact, Fabrican is currently developing a variation that can be sprayed to cover and protect car seats. It could also have medical value on the battlefield. What if you could, without ever touching a wound, spray on a 100 percent sterile bandage? The company has partnered with military personnel in Britain to test a prototype that functions as a plaster cast for soldiers who become injured while in combat.
“Fashion was our starting point, but we’re now also realizing the technology has so many applications that can benefit other industries,” says Torres. “Fashion owes a lot to science for innovations that make it into clothes you see today, and it’s nice to think this can be our way of giving back.”
February 8, 2013
When John Brennan, President Obama’s choice to be the next head of the CIA, appeared before a Senate committee yesterday, one question overshadowed all others at his confirmation hearing:
How are the decisions made to send killer drones after suspected terrorists?
The how and, for that matter, the why of ordering specific drone strikes remain largely a mystery, but at least one thing is clear–the decisions are being made by humans who, one would hope, wrestle with the thought of sending a deadly missile into an occupied building.
But what if humans weren’t involved? What if one day life-or-death decisions were left up to machines equipped with loads of data, but also a sense of right and wrong?
That’s not so far-fetched. It’s not going to happen any time soon, but there’s no question that as machines become more intelligent and more autonomous, a pivotal part of their transformation will be the ability to learn morality.
In fact, that may not be so far away. Gary Marcus, writing recently in The New Yorker, presented the scenario of one of Google’s driverless cars being forced to make a split-second decision: “Your car is speeding along a bridge at 50 miles per hour when an errant school bus carrying 40 innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all 40 kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.”
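Stripped to its bones, the dilemma Marcus describes is a machine minimizing expected harm across the maneuvers available to it. Here is a toy sketch of that idea–purely illustrative, not code from any real autonomous vehicle, with made-up casualty estimates standing in for what would really be noisy sensor predictions:

```python
# Toy sketch of a harm-minimizing choice among maneuvers.
# The (name, expected_casualties) pairs are hypothetical inputs;
# a real system would estimate these from sensor data, with uncertainty.

def choose_maneuver(options):
    """Pick the maneuver with the lowest expected casualties.

    options: list of (name, expected_casualties) tuples.
    Ties are broken by list order.
    """
    return min(options, key=lambda option: option[1])[0]

# Marcus's bridge scenario: swerving risks the one occupant,
# continuing risks the 40 children on the bus.
options = [("swerve", 1), ("keep_going", 40)]
print(choose_maneuver(options))  # → swerve
```

The hard part, of course, is not the arithmetic but deciding whose harm counts and how much–which is exactly why the prospect unsettles ethicists.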
And what about robotic weapons or soldiers? Would a drone be able to learn not to fire on a house if it knew innocent civilians were also inside? Could machines be taught to follow the international rules of war?
Ronald Arkin, a computer science professor and robotics expert at Georgia Tech, certainly thinks so. He’s been developing software, referred to as an “ethical governor,” which would make machines capable of deciding when it’s appropriate to fire and when it’s not.
Arkin acknowledges that this could still be decades away, but he believes that robots might one day be both physically and ethically superior to human soldiers, not vulnerable to the emotional trauma of combat or desires for revenge. He doesn’t envision an all-robot army, but one in which machines serve with humans, doing high-risk jobs full of stressful snap decisions, such as clearing buildings.
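In spirit, an “ethical governor” acts as a veto layer between a proposed action and its execution. The sketch below is a deliberately simplified illustration of that architecture–the constraint names and situation fields are invented for this example, and Arkin’s actual system is far more sophisticated:

```python
# Toy sketch of an "ethical governor" as a rule-based veto layer.
# Each constraint is a named predicate over the current situation;
# firing is permitted only if every constraint passes.
# Constraint names and situation fields here are hypothetical.

CONSTRAINTS = [
    ("target_confirmed_hostile", lambda s: s["target_hostile"]),
    ("no_civilians_in_blast_radius", lambda s: s["civilians_nearby"] == 0),
    ("force_proportional_to_threat", lambda s: s["force"] <= s["threat_level"]),
]

def permit_fire(situation):
    """Return (allowed, violated) where violated lists failed constraints."""
    violated = [name for name, check in CONSTRAINTS if not check(situation)]
    return (len(violated) == 0, violated)

allowed, why_not = permit_fire(
    {"target_hostile": True, "civilians_nearby": 2, "force": 1, "threat_level": 3}
)
# allowed is False here: civilians are present, so the governor vetoes the shot.
```

The appeal of the design is that the veto is checked every time, without fear or anger; the controversy is over whether any fixed rule set can capture the judgment the laws of war actually demand.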
Beware of killer robots
But others feel it’s time to squash this type of thinking before it goes too far. Late last year, Human Rights Watch and Harvard Law School’s Human Rights Clinic issued a report, “Losing Humanity: The Case Against Killer Robots,” which, true to its title, called on governments to ban all autonomous weapons because they would “increase the risk of death or injury to civilians during armed conflict.”
At about the same time, a group of Cambridge University professors announced plans to launch what they call the Center for the Study of Existential Risk. When it opens later this year, it will push for serious scientific research into what could happen if and when machines get smarter than us.
The danger, says Huw Price, one of the Center’s co-founders, is that one day we could be dealing with “machines that are not malicious, but machines whose interests don’t include us.”
The art of deception
Shades of Skynet, the rogue artificial intelligence system that spawned a cyborg Arnold Schwarzenegger in The Terminator movies. Maybe this will always be the stuff of science fiction.
But consider other research Ronald Arkin is now doing as part of projects funded by the Department of Defense. He and colleagues have been studying how animals deceive one another, with the goal of teaching robots the art of deception.
For instance, they’ve been working on programming robots so that they can, if necessary, feign strength as animals often do. And they’ve been looking at teaching machines to mimic the behavior of creatures like the eastern gray squirrel. Squirrels hide their nuts from other animals, and when other squirrels or predators appear, the gray squirrels will sometimes visit places where they used to hide nuts to throw their competitors off the track. Robots programmed to follow a similar strategy have been able to confuse and slow down competitors.
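The squirrel’s trick is easy to state as a strategy: sometimes visit decoy sites so an observer tracking your movements can’t tell which site matters. Here is a minimal sketch of that patrolling behavior–an illustration of the strategy, not Arkin’s actual code, with site names and the deception rate invented for the example:

```python
import random

# Toy sketch of squirrel-style deceptive patrolling: a robot guarding
# one real cache sometimes visits decoy sites instead, so a competitor
# watching its movements cannot infer the cache's location from visit
# frequency alone. Site names and deception_rate are hypothetical.

def patrol_visit(real_cache, decoys, deception_rate=0.5, rng=random):
    """Return the next site to visit: a random decoy with probability
    deception_rate, otherwise the real cache."""
    if rng.random() < deception_rate:
        return rng.choice(decoys)
    return real_cache

random.seed(0)  # fixed seed so the example is repeatable
visits = [patrol_visit("site_A", ["site_B", "site_C"]) for _ in range(10)]
# `visits` mixes real and decoy sites; an observer sees no single
# obviously-favored location.
```

Raising `deception_rate` makes the robot harder to read but wastes more time on decoys–the same trade-off the squirrel faces.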
It’s all in the interest, says Arkin, of developing machines that won’t be a threat to humans, but rather an asset, particularly in the ugly chaos of war. The key is to start focusing now on setting guidelines for appropriate robot behavior.
“When you start opening that Pandora’s Box, what should be done with this new capability?” he said in a recent interview. “I believe that there is a potential for non-combatant casualties to be lessened by these intelligent robots, but we do have to be very careful about how they’re used and not just release them into the battlefield without appropriate concern.”
If you believe New Yorker writer Gary Marcus, ethically advanced machines offer great potential beyond the battlefield.
“The thought that haunts me the most is that human ethics themselves are only a work-in-progress. We still confront situations for which we don’t have well-developed codes (e.g., in the case of assisted suicide) and need not look far into the past to find cases where our own codes were dubious, or worse (e.g., laws that permitted slavery and segregation).
What we really want are machines that can go a step further, endowed not only with the soundest codes of ethics that our best contemporary philosophers can devise, but also with the possibility of machines making their own moral progress, bringing them past our own limited early-twenty-first century idea of morality.”
Machines march on
Here are more recent robot developments:
- Hmmmm, ethical and sneaky: Researchers in Australia have developed a robot that can sneak around by moving only when there’s enough background noise to cover up its sound.
- What’s that buzzing sound?: British soldiers in Afghanistan have started using surveillance drones that can fit in the palms of their hands. Called the Black Hornet Nano, the little robot is only four inches long, but has a spy camera and can fly for 30 minutes on a full charge.
- Scratching the surface: NASA is developing a robot called RASSOR that weighs only 100 pounds, but will be able to mine minerals on the moon and other planets. It can move around on rough terrain and even over boulders by propping itself up on its arms.
- Ah, lust: And here’s an early Valentine’s Day story. Scientists at the University of Tokyo used a male moth to drive a robot. Actually, they used its mating movements to direct the device toward an object scented with female moth pheromones.
Video bonus: So you’re just not sure you could operate a 13-foot tall robot? No problem. Here’s a nifty demo that shows you how easy it can be. A happy model even shows you how to operate the “Smile Shot” feature. You smile, it fires BBs. How hard is that?
More from Smithsonian.com
October 15, 2012
The International Association of Police Chiefs held its convention in San Diego earlier this month and one of the booths drawing a lot of attention belonged to a California company called AeroVironment, Inc.
It’s in the business of building drones.
One of its models–the Raven–weighs less than five pounds and is the most popular military spy drone in the world. More than 19,000 have been sold. Another of its robot planes–the Switchblade–is seen as the kamikaze drone of the future, one small enough to fit into a soldier’s backpack.
But AeroVironment is zeroing in on a new market–police and fire departments too small to afford their own helicopters, but big enough to have a need for overhead surveillance. So in San Diego, it was showing off yet another model, this one called the Qube.
The camera never blinks
AeroVironment likes to tout the Qube as just what a future-thinking police department needs–a flying machine that fits in the trunk of a cop car–it’s less than five pounds and just three feet long–can climb as high as 500 feet and stay airborne as long as 40 minutes.
Outfitted with high-resolution color and thermal cameras that transmit what they see to a screen on the ground, the Qube is being marketed as a moderately priced surveillance tool ($50,000 and up) for keeping fleeing criminals in sight or being eyes in the sky for SWAT teams dealing with hostage situations or gunmen they can’t see.
A few police departments have already taken the plunge into what are officially known as Unmanned Aerial Vehicles (UAVs)–big cities like Miami, Houston, and Seattle, but also smaller towns, such as North Little Rock, Ark., Ogden, Utah and Gadsden, Ala. Most used Homeland Security grants to buy their drones and they all had to be specially authorized by the Federal Aviation Administration (FAA) to fly them.
So far, they haven’t flown them all that much because the FAA doesn’t yet allow drones to be used in populated areas and near airports, at an altitude above 400 feet, or even beyond the view of the operator. But that’s going to change, with the FAA estimating that by the end of the decade, at least 15,000 drones will be licensed to operate over the U.S.
I spy a pool party
So how is this going to work? What’s to keep all those unmanned aircraft from hitting planes or helicopters or crashing into buildings? And what’s going to prevent them from spying on private citizens or shooting video of pool parties?
The FAA is wrestling with all that now and, given the need to ensure both safe skies and individual privacy, the agency may have a hard time nailing down regulations by August 2014, the deadline Congress set earlier this year with the goal of opening up public airspace to commercial drones in the fall of 2015.
The feds are already behind schedule in selecting six locations in the U.S. where they’ll test drones to see if they can do what their manufacturers say they can do and, more importantly, if they can be kept from flying out of control. Later this month, however, at Fort Sill, Oklahoma, the Department of Homeland Security will start grading different drones on how well they perform when lives are at stake–say, a hostage situation, a hazardous-waste spill or a search-and-rescue mission.
For a technology still largely seen as a deadly, and controversial, weapon for going after suspected terrorists, it couldn’t hurt to be able to show how a drone can help find a lost kid or save an Alzheimer’s patient wandering through the woods.
Not so private eyes
Still, the idea of police departments or government agencies having access to flying cameras makes a lot of people uneasy. This summer, when a rumor started on Twitter that the EPA was using drones to spy on American farmers, it shot through the blogosphere, was repeated on TV, and then in condemning press releases issued by several congressmen–even though it wasn’t true.
As Benjamin Wittes and John Villasenor pointed out in the Washington Post earlier this year, the FAA isn’t a privacy agency. It’s loaded with aviation lawyers. Yet it will be dealing with some very dicey issues, such as how do you define invasion of privacy from public airspace and who can get access to video shot by a drone.
To quote Wittes and Villasenor:
“The potential for abuses on the part of government actors, corporations and even individuals is real — and warrants serious consideration before some set of incidents poisons public attitudes against a field that promises great benefits.”
Judging from a pair of surveys on the subject, the public is already pretty wary. Of those recently surveyed by the Associated Press, about a third said they are “extremely concerned” or “very concerned” about how drones could affect their privacy.
Another national poll, taken this summer by the Monmouth University Polling Institute, found that while 80 percent of the people surveyed like the idea of drones helping with search and rescue missions and 67 percent support using them to track runaway criminals, about 64 percent said they are “very concerned” or “somewhat concerned” about losing their privacy.
And they definitely don’t like the notion of police departments using them to enforce routine laws. Two out of three people surveyed said they hate the idea of drones being used to issue speeding tickets.
When robots fly
Here’s more recent research on flying robots:
- No crash courses: NASA scientists are testing two different computer programs to see if they can help drones sense and then avoid potential mid-air collisions. In theory, an unmanned aircraft would be able to read data about other flying objects and change its speed and heading if it appeared to be on a collision course.
- What goes up doesn’t have to come down: Two recent innovations could dramatically increase the flight time of both giant drones and handheld ones. Lockheed Martin has found a way to recharge its huge Stalker drones wirelessly using lasers, allowing them to stay airborne for as long as 48 hours. And Los Angeles-based Somatis Technologies is working on a process to convert wind pressure and vibrations into energy, which could triple the battery life of hand-launched drones to almost three hours.
- Get your protest souvenir photos here: Russia is stepping up its drone program and will continue to use them to monitor street protests.
- The face is familiar: The Congressional Research Service released a report last month suggesting that law enforcement agencies could, in the near future, outfit drones with facial recognition or biometric software that could “recognize and track individuals based on attributes such as height, age, gender and skin color.”
- Talk to me when it makes honey: Harvard researchers have been working on a tiny–not much larger than a quarter–robotic bee for five years and now it can not only take off on its own power, but it can also pretty much fly where they want it to go.
- Two blinks to get rid of red eye: Chinese scientists have designed quadcopters that can be controlled by human thought and be told to take a photo by the blink of an eye.
Video bonus: This promo video by AeroVironment sure makes it feel like the Qube drone could have its own TV series.
October 5, 2012
Until last week, I don’t think I’d ever heard of the African spiny mouse. I’m guessing I’m probably not alone.
Apparently, they’re nice pets if you prefer an other-side-of-the-glass relationship. No question they’re cute things, only six inches or so long if you count their tails, and they have a rep for sucking down a lot of water. Oh, and you’re not supposed to pick them up by their tails.
Turns out the tail thing–namely that it can come off with great ease–is why this little furball was in the news. It’s also the reason the African spiny mouse could end up playing a big role in the future of medicine.
A study published in the journal Nature reported that not only can the mouse effortlessly lose its tail to escape predators, but it also can have its skin tear off and then grow back. This, however, is more than just some bizarre animal stunt like the lizards that shoot blood from their eyes. Salamanders can replace lost legs, fish can grow new fins, but mammals aren’t supposed to be able to regrow body parts.
Skin off my back
Mammals scar after they tear their skin. But not the spiny mouse. It can lose more than 50 percent of its skin and then grow a near perfect replacement, including new hair. Its ears are even more magical. When scientists drilled holes in them, the mice were able to not only grow more skin, but also new glands, hair follicles and cartilage.
And that’s what really excites researchers in human regenerative medicine, a fast-emerging field built around finding ways to boost the body’s ability to heal itself. As amazingly sophisticated as medicine has become, treatment of most diseases still focuses largely on managing symptoms–insulin shots to keep diabetes in check, medications to ease the strain on a damaged heart.
But regenerative medicine could dramatically change health care by shifting the emphasis to helping damaged tissue or organs repair themselves. Some already see it leading to a potential cure for Type 1 diabetes, as bone marrow stem cells have shown an ability to generate pancreas cells that produce insulin.
Another regenerative medicine procedure, in which a person’s own white blood cells and platelets are injected into an injured muscle or joint, is becoming popular, particularly among professional athletes, as a way of speeding up rehabilitation.
There’s also “spray-on skin,” created from neonatal stem cells. It’s proving to be a more effective and less painful way to treat burns and ulcers than skin grafts. And, at the Wake Forest Baptist Medical School, they’ve gone a step further, developing a process in which skin cells are essentially “printed” on burn wounds.
The wounds of war
That project at Wake Forest and, in fact, much of the cutting-edge research in regenerative medicine in the U.S., is funded through a Defense Department program called AFIRM, short for the Armed Forces Institute of Regenerative Medicine. It was launched in 2008, with the purpose of fast-tracking more innovative and less invasive ways to deal with the horrific burns, shattered limbs and other awful injuries suffered by soldiers in Iraq and Afghanistan.
A case in point is Sgt. Ron Strang, a Marine whose thigh was ripped apart by a roadside bomb in Afghanistan. The gaping wound “healed,” but not really. Without much of a quadriceps muscle, Strang kept falling over.
So doctors at the University of Pittsburgh Medical Center tried something new. They stitched a sheet made from a pig’s bladder into Strang’s leg. That’s known as scaffolding, cell material that scientists now know signals the body to start repairing tissue. Put simply, it tells stem cells to come to the site and develop into muscle cells.
And that’s what they did, so much so that Sgt. Strang can now run on a treadmill. As one of his doctors, Stephen Badylak, told the New York Times: “We’re trying to work with nature rather than fight nature.”
In another AFIRM project geared to help disfigured soldiers, researchers have been able to grow an almost perfectly shaped human ear inside a lab dish–all from cartilage cells taken from inside the person’s nose. If the FDA approves the process, they hope to start attaching lab-grown ears to patients within a year.
Here are other new developments in regenerative medicine:
- Grow your own: Researchers at the University of Pittsburgh Medical Center found that liver cells, thymus tissue and pancreatic cells that produce insulin all can thrive within lymph nodes. And that provides a potential opportunity to grow organ cells in a body instead of needing to do full organ transplants.
- Gut check: A study at the University of Nevada discovered that a type of stem cell found in cord blood has the ability to migrate to the intestine and contribute to the cell population there. And that could lead to a new treatment for inflammatory bowel disease (IBD).
- This guy’s going to need a little more toner: Engineers at the University of California at San Diego have been able to fabricate 3D structures out of soft hydrogels, which makes it easier to imagine creating body parts from tissues produced on a printer.
- Blind luck: This summer, surgeons in California implanted embryonic stem cells, specially grown in a lab, into the eyes of two patients going blind. They were the first of 24 people who will be given the experimental treatment as part of a clinical trial approved by the FDA.
- In your face, Hair Club for Men: Earlier this year a team at the Tokyo University of Science was able to develop fully functioning hair follicles by transplanting human adult stem cells into the skin of bald mice.
Video bonus: See for yourself black human hair growing out of the back of the neck of a bald mouse. Thank goodness it’s for science because it’s not a good look.
July 30, 2012
Not long ago, banner ads showing coffins draped with American flags started appearing on websites in Yemen. They had been placed by supporters of Al Qaeda in the Arabian Peninsula. Their message was that Americans were the enemy and Al Qaeda was killing them.
A few days later people working for the U.S. State Department posted banners on the same websites, only this time the coffins were covered with Yemeni flags, photoshopped into the image. The message also had changed. This time it said that most of the people killed by Al Qaeda in the Arabian Peninsula were Yemenis.
For all the attention paid to drone strikes and intelligence coups, the daily grind of counterterrorism is as much a digital parry and thrust, a continuous war of words and ideas played out on websites, chat rooms, forums, blogs and Twitter feeds. Now, experts will tell you, it’s all about the cyber-narrative.
And the State Department, specifically a group within it called the Center for Strategic Counterterrorism Communications, is taking on this role with tools and techniques few could have imagined in the days after 9/11. Among other things, they’re training people to be trolls.
Hit them with your best shot
It’s part of something called Viral Peace. As yet, it’s a small project with a minuscule budget by federal government standards, but this gives you a sense of what’s now in play when it comes to counterterrorism tactics. The man behind it, a former Silicon Valley geek named Shahed Amanullah, believes that impressionable young men and women can be discouraged from becoming terrorists by challenging and undercutting extremists online, which is where they do most of their recruiting.
As he told Wired in a recent interview, Amanullah intends to use “logic, humor, satire, religious arguments, not just to confront them, but to undermine and demoralize them.”
To that end he sent two members of his team to Muslim countries–Indonesia, Singapore, Malaysia, the Philippines, Pakistan–where they met with young adults who had already developed online followings. Better for them to do the trolling instead of people who’d be seen as mouthpieces of the U.S. government.
How effective this guerrilla strategy of ridicule and rebuke will ultimately be is anyone’s guess, although people who monitor extremists online say they generally don’t respond well to being challenged. But it’s clear that the strategy of using the Web to take on terrorists goes all the way to the top of the State Department.
None other than Hillary Clinton was the one who proudly revealed the story of the photoshopped coffins.
Have I got a story for you
Meanwhile, over at the Pentagon, the focus on controlling the narrative has taken an even more intriguing turn. DARPA, the Defense Department agency that funds cutting-edge research, is underwriting a study of what happens in the brain to incite political violence and how reshaping the narrative can help make people less radical.
The concept is called Narrative Networks and it looks at how stories affect the brain and human behavior, with the goal of finding ways to present narratives that help persuade people not to become terrorists.
Critics have already warned that it has all the makings of a new form of mind control: with the highly sophisticated brain scans available today, a government could get a far better sense of how to refine messaging to make it more effective at changing people’s minds.
One of the researchers on the project, Paul Zak, of Claremont Graduate University in California, studies how listening to stories affects the brain’s release of oxytocin, known as the “love” or “trust” hormone. He says the purpose of the research is to see what kind of messages would help people view the military in the best possible light.
“We’re not in the business of reading people’s minds or implanting thoughts,” says Greg Berns, an Emory University professor also doing brain research for DARPA. “By understanding the biology of what causes people to go to war, we might begin to understand how to mitigate it.”
The fight stuff
Here’s more of the latest research into devices geared to 21st century warfare:
- Inner vision: Veritas Scientific is developing for the Pentagon a helmet it says will help identify enemies. When placed on a person’s head, it would use sensors to read their brain’s reactions to images flashed on the helmet’s visor, such as specs for how to make a bomb.
- Think fast: U.S. soldiers may soon be able to use a new technology called Sentinel, binoculars connected to a computer that would actually speed up the brain’s normal thought-processing so threats can be identified more quickly.
- Shock troops: Next month some U.S. soldiers in Afghanistan will start carrying a small pack called a Soldier Body Unit. Developed by the Georgia Tech Research Institute, it’s equipped with sensors that will measure the force of blasts a soldier has been exposed to and help doctors know whether he or she has suffered a concussion.
- That’s what he said: In May DARPA awarded a $7 million contract for the first phase of a project to create software that not only would translate all aspects of a foreign language–including slang, regional dialects and text-messaging lingo–but would do it in real time.
- Sound effects: And earlier this month DARPA unveiled a technique for putting out a fire using only sound. By playing a low-frequency bass note through two speakers pointed at the flame, researchers were able to increase air velocity and create a wider and cooler flame that sputtered out.
Video bonus: DARPA’s also been very big on funding robots. Here’s its AlphaDog Robot lugging 400 pounds over rugged terrain.