February 8, 2013
When John Brennan, President Obama’s choice to be the next head of the CIA, appeared before a Senate committee yesterday, one question overshadowed all the others at his confirmation hearing:
How are the decisions made to send killer drones after suspected terrorists?
The how and, for that matter, the why of ordering specific drone strikes remain largely a mystery, but at least one thing is clear–the decisions are being made by humans who, one would hope, wrestle with the thought of sending a deadly missile into an occupied building.
But what if humans weren’t involved? What if one day life-or-death decisions were left up to machines equipped with loads of data, but also a sense of right and wrong?
That’s not so far-fetched. It’s not going to happen any time soon, but there’s no question that as machines become more intelligent and more autonomous, a pivotal part of their transformation will be the ability to learn morality.
In fact, that may not be so far away. Gary Marcus, writing recently in The New Yorker, presented the scenario of one of Google’s driverless cars being forced to make a split-second decision: “Your car is speeding along a bridge at 50 miles per hour when an errant school bus carrying 40 innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all 40 kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.”
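Stripped of its moral weight, the computational shape of that call is simple to state: weigh each action by the harm it might cause and pick the least bad option. Here is a deliberately naive sketch of that framing; every probability and harm figure below is invented for illustration, and nothing this crude drives a real car.

```python
# A naive sketch of the bridge dilemma as expected-harm minimization.
# All probabilities and harm counts are made up for illustration.

def expected_harm(outcomes):
    """Sum probability-weighted harm over an action's possible outcomes."""
    return sum(p * harm for p, harm in outcomes)

actions = {
    "swerve":     [(0.3, 1), (0.7, 0)],    # maybe the owner is hurt
    "keep_going": [(0.5, 40), (0.5, 0)],   # maybe all 40 children are hurt
}

best = min(actions, key=lambda a: expected_harm(actions[a]))
print(best)   # "swerve", under these made-up numbers
```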
And what about robotic weapons or soldiers? Would a drone be able to learn not to fire on a house if it knew innocent civilians were also inside? Could machines be taught to follow the international rules of war?
Ronald Arkin, a computer science professor and robotics expert at Georgia Tech, certainly thinks so. He’s been developing software, referred to as an “ethical governor,” which would make machines capable of deciding when it’s appropriate to fire and when it’s not.
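The governor, as Arkin has described it, sits between a weapon’s targeting software and its trigger, vetting each proposed engagement against hard constraints drawn from the laws of war. The sketch below is only a cartoon of that idea, not his software; every rule and field name is hypothetical.

```python
# A cartoon of the "ethical governor" concept: a filter that withholds
# fire unless every hard constraint is satisfied. Rules and field names
# below are hypothetical, not Arkin's actual implementation.

PROHIBITIONS = [
    lambda t: t["civilians_present"],           # never fire near civilians
    lambda t: t["target_type"] == "hospital",   # protected site under the laws of war
    lambda t: t["confidence"] < 0.95,           # target identification too uncertain
]

def governor_permits(target: dict) -> bool:
    """Permit engagement only if no prohibition fires."""
    return not any(rule(target) for rule in PROHIBITIONS)

proposed = {"target_type": "vehicle", "civilians_present": True, "confidence": 0.99}
print(governor_permits(proposed))   # False: civilians present, so hold fire
```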
Arkin acknowledges that this could still be decades away, but he believes that robots might one day be both physically and ethically superior to human soldiers, not vulnerable to the emotional trauma of combat or desires for revenge. He doesn’t envision an all-robot army, but one in which machines serve with humans, doing high-risk jobs full of stressful snap decisions, such as clearing buildings.
Beware of killer robots
But others feel it’s time to squash this type of thinking before it goes too far. Late last year, Human Rights Watch and Harvard Law School’s Human Rights Clinic issued a report, “Losing Humanity: The Case Against Killer Robots,” which, true to its title, called on governments to ban all autonomous weapons because they would “increase the risk of death or injury to civilians during armed conflict.”
At about the same time, a group of Cambridge University professors announced plans to launch what they call the Centre for the Study of Existential Risk. When it opens later this year, it will push for serious scientific research into what could happen if and when machines get smarter than us.
The danger, says Huw Price, one of the Centre’s co-founders, is that one day we could be dealing with “machines that are not malicious, but machines whose interests don’t include us.”
The art of deception
Shades of Skynet, the rogue artificial intelligence system that spawned a cyborg Arnold Schwarzenegger in The Terminator movies. Maybe this will always be the stuff of science fiction.
But consider other research Ronald Arkin is now doing as part of projects funded by the Department of Defense. He and colleagues have been studying how animals deceive one another, with the goal of teaching robots the art of deception.
For instance, they’ve been working on programming robots so that they can, if necessary, feign strength as animals often do. And they’ve been looking at teaching machines to mimic the behavior of creatures like the eastern gray squirrel. Squirrels hide their nuts from other animals, and when other squirrels or predators appear, the gray squirrels will sometimes visit places where they used to hide nuts to throw their competitors off the track. Robots programmed to follow a similar strategy have been able to confuse and slow down competitors.
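As a rough illustration of how simple such a deception policy can be, here is a toy version of the squirrel strategy, not the Georgia Tech code: the robot’s only trick is to patrol empty decoy sites whenever it detects an observer.

```python
import random

# A toy version of the squirrel-inspired deception strategy described
# above (not the Georgia Tech code): while a competitor is watching,
# patrol decoy cache sites; visit the real cache only when unobserved.
# Coordinates are arbitrary.

REAL_CACHE = (5, 5)
DECOYS = [(1, 9), (8, 2), (3, 3)]   # formerly used, now-empty hiding spots

def next_waypoint(competitor_watching: bool) -> tuple:
    """Visit a decoy while observed; go to the real cache otherwise."""
    if competitor_watching:
        return random.choice(DECOYS)   # throw the observer off the track
    return REAL_CACHE

print(next_waypoint(True))    # one of the decoys
print(next_waypoint(False))   # (5, 5)
```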
It’s all in the interest, says Arkin, of developing machines that won’t be a threat to humans, but rather an asset, particularly in the ugly chaos of war. The key is to start focusing now on setting guidelines for appropriate robot behavior.
“When you start opening that Pandora’s box, what should be done with this new capability?” he said in a recent interview. “I believe that there is a potential for non-combatant casualties to be lessened by these intelligent robots, but we do have to be very careful about how they’re used and not just release them into the battlefield without appropriate concern.”
If you believe New Yorker writer Gary Marcus, ethically advanced machines offer great potential beyond the battlefield:
“The thought that haunts me the most is that human ethics themselves are only a work in progress. We still confront situations for which we don’t have well-developed codes (e.g., in the case of assisted suicide) and need not look far into the past to find cases where our own codes were dubious, or worse (e.g., laws that permitted slavery and segregation).
“What we really want are machines that can go a step further, endowed not only with the soundest codes of ethics that our best contemporary philosophers can devise, but also with the possibility of machines making their own moral progress, bringing them past our own limited early-twenty-first-century idea of morality.”
Machines march on
Here are more recent robot developments:
- Hmmmm, ethical and sneaky: Researchers in Australia have developed a robot that can sneak around by moving only when there’s enough background noise to cover up its sound.
- What’s that buzzing sound?: British soldiers in Afghanistan have started using surveillance drones that can fit in the palms of their hands. Called the Black Hornet Nano, the little robot is only four inches long, but has a spy camera and can fly for 30 minutes on a full charge.
- Scratching the surface: NASA is developing a robot called RASSOR that weighs only 100 pounds, but will be able to mine minerals on the moon and other planets. It can move around on rough terrain and even over boulders by propping itself up on its arms.
- Ah, lust: And here’s an early Valentine’s Day story. Scientists at the University of Tokyo used a male moth to drive a robot. Actually, they used its mating movements to direct the device toward an object scented with female moth pheromones.
Video bonus: So you’re just not sure you could operate a 13-foot tall robot? No problem. Here’s a nifty demo that shows you how easy it can be. A happy model even shows you how to operate the “Smile Shot” feature. You smile, it fires BBs. How hard is that?
October 18, 2012
A few months ago Google shared with us another challenge it had taken on. It wasn’t as fanciful as a driverless car or as geekily sexy as augmented reality glasses, but in the end, it could be bigger than both. In fact, it likely will make both of them even more dynamic.
What Google did was create a synthetic brain, or at least the part of it that processes visual information. Technically, it built a mechanical version of a neural network, a small army of 16,000 computer processors that, by working together, was actually able to learn.
At the time, most of the attention focused on what all those machines learned, which mainly was how to identify cats on YouTube. That prompted a lot of yucks and cracks about whether the computers wondered why so many of the cats were flushing toilets.
But Google was going down a path that scientists have been exploring for many years, the idea of using computers to mimic the connections and interactions of human brain cells to the point where the machines actually start learning. The difference is that the search behemoth was able to marshal resources and computing power that few companies can.
The face is familiar
For 10 days, non-stop, 1,000 computers–using those 16,000 processors–examined random thumbnail images taken from 10 million different YouTube videos. And because the neural network was so big–it had more than a billion connections–it was able to learn to identify features on its own, without any real human guidance. Through the massive amount of information it absorbed, the network, by recognizing the relationships between data, basically taught itself the concept of a cat.
Impressive. But in the realm of knowledge, is this cause for great jubilation? Well, yes. Because eventually all the machines working together were able to decide which features of cats merited their attention and which patterns mattered, rather than being told by humans which particular shapes to look for. And from the knowledge gained through much repetition, the neural network was able to create its own digital image of a cat’s face.
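The principle at work here, learning structure from unlabeled data, can be shown at toy scale. The sketch below trains a two-feature linear autoencoder on synthetic four-pixel “thumbnails”; it is a vanishingly small cousin of Google’s billion-connection network, but like it, it discovers the data’s important directions with no human labels.

```python
import numpy as np

# A toy illustration of unsupervised feature learning, nothing like
# Google's system in scale: a tied-weight linear autoencoder learns to
# compress 4-pixel "thumbnails" into 2 features, with no labels at all.

rng = np.random.default_rng(0)
basis = rng.normal(size=(2, 4))          # hidden structure shared by all samples
X = rng.normal(size=(500, 2)) @ basis    # 500 unlabeled 4-pixel "thumbnails"
W = rng.normal(0, 0.1, size=(4, 2))      # encoder: 4 pixels -> 2 features
lr = 0.01

for _ in range(2000):
    E = X @ W @ W.T - X                            # reconstruction error
    W -= lr * (X.T @ E + E.T @ X) @ W / len(X)     # gradient of mean squared error

# if learning worked, images can be rebuilt from just the 2 learned features
print(np.mean((X @ W @ W.T - X) ** 2))   # close to zero, no labels used
```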
That’s a big leap forward for artificial intelligence. It’s also likely to have nice payoffs for Google. One of its researchers who worked on the project, an engineer named Jeff Dean, recently told MIT’s Technology Review that now his group is testing computer models that understand images and text together.
“You give it ‘porpoise’ and it gives you pictures of porpoises,” Dean explained. “If you give it a picture of a porpoise, it gives you ‘porpoise’ as a word.”
So Google’s image search could become far less dependent on accompanying text to identify what’s in a photo. And it’s likely to apply the same approach to refining speech recognition by being able to gather extra clues from video.
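Dean’s description implies a shared embedding space, in which a word and the images that match it land near one another, so retrieval in either direction reduces to nearest-neighbor search. A schematic with made-up two-dimensional vectors:

```python
import numpy as np

# A schematic of the two-way lookup Dean describes, with invented
# vectors: if words and images share one embedding space, retrieval in
# either direction is just nearest-neighbor search.

words = {"porpoise": np.array([0.9, 0.1]), "cat": np.array([0.1, 0.9])}
images = {
    "photo_001.jpg": np.array([0.88, 0.15]),   # a porpoise picture
    "photo_002.jpg": np.array([0.12, 0.85]),   # a cat picture
}

def nearest(query, candidates):
    """Return the key whose vector lies closest to the query."""
    return min(candidates, key=lambda k: np.linalg.norm(candidates[k] - query))

print(nearest(words["porpoise"], images))       # photo_001.jpg
print(nearest(images["photo_002.jpg"], words))  # cat
```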
No question that the ability to use algorithms to absorb and weave together many streams of data, even different types of data, such as sound and images, will help make Google’s driverless car that much more autonomous. Same with Google glasses.
But now a slice of perspective. For all its progress, Google still has a long way to go to measure up to the real thing. Its massive neural network, the one with a billion connections, is, in terms of neurons and synapses, still a million times smaller than the human brain’s visual cortex.
A matter of intelligence
Here are more recent developments in artificial intelligence:
- A bee, or not a bee: A team of British scientists is attempting to create an accurate model of a honeybee’s brain. By reproducing the key systems that make up a bee’s perception, such as vision and scent, the researchers hope to eventually be able to install the artificial bee brain in a small flying robot.
- But does it take the cover into account?: New software called Booksai is using artificial intelligence to give you book recommendations based on the style, tone, mood and genre of things you already know you like to read.
- Do I always look this good?: Scientists at Yale have programmed a robot that can recognize itself in the mirror. In theory, that should make the robot, named Nico, better able to interact with its environment and humans.
- Lost in space no more: Astronomers in Germany have developed an artificial intelligence algorithm to help them chart and explain the structure and dynamics of the universe with amazing accuracy.
- Walk this way: Scientists at MIT have created a wearable intelligent device that creates a real-time map of where you’ve just walked. It’s designed as a tool to help first responders coordinate disaster search and rescue.
Video bonus: In France–where else?–an inventor has created a robot that not only prunes grape vines, but also has the intelligence to memorize the specific needs of each plant. And now it’s learning to pick grapes.
October 15, 2012
The International Association of Chiefs of Police held its convention in San Diego earlier this month, and one of the booths drawing a lot of attention belonged to a California company called AeroVironment, Inc.
It’s in the business of building drones.
One of its models–the Raven–weighs less than five pounds and is the most popular military spy drone in the world. More than 19,000 have been sold. Another of its robot planes–the Switchblade–is seen as the kamikaze drone of the future, one small enough to fit into a soldier’s backpack.
But AeroVironment is zeroing in on a new market–police and fire departments too small to afford their own helicopters, but big enough to have a need for overhead surveillance. So in San Diego, it was showing off yet another model, this one called the Qube.
The camera never blinks
AeroVironment likes to tout the Qube as just what a future-thinking police department needs: a flying machine that fits in the trunk of a patrol car (it weighs less than five pounds and is just three feet long), can climb as high as 500 feet, and stays airborne for as long as 40 minutes.
Outfitted with high-resolution color and thermal cameras that transmit what they see to a screen on the ground, the Qube is being marketed as a moderately priced surveillance tool ($50,000 and up) for keeping fleeing criminals in sight or being eyes in the sky for SWAT teams dealing with hostage situations or gunmen they can’t see.
A few police departments have already taken the plunge into what are officially known as Unmanned Aerial Vehicles (UAVs)–big cities like Miami, Houston, and Seattle, but also smaller towns, such as North Little Rock, Ark., Ogden, Utah, and Gadsden, Ala. Most used Homeland Security grants to buy their drones, and all of them had to be specially authorized by the Federal Aviation Administration (FAA) to fly them.
So far, they haven’t flown them all that much, because the FAA doesn’t yet allow drones to be used in populated areas or near airports, at altitudes above 400 feet, or beyond the view of the operator. But that’s going to change, with the FAA estimating that by the end of the decade, at least 15,000 drones will be licensed to operate over the U.S.
I spy a pool party
So how is this going to work? What’s to keep all those unmanned aircraft from hitting planes or helicopters or crashing into buildings? And what’s going to prevent them from spying on private citizens or shooting video of pool parties?
The FAA is wrestling with all that now and, given the need to ensure both safe skies and individual privacy, the agency may have a hard time nailing down regulations by August 2014, the deadline Congress set earlier this year with the goal of opening up public airspace to commercial drones in the fall of 2015.
The feds are already behind schedule in selecting six locations in the U.S. where they’ll test drones to see if they can do what their manufacturers say they can do and, more importantly, if they can be kept from flying out of control. Later this month, however, at Fort Sill, Oklahoma, the Department of Homeland Security will start grading different drones on how well they perform when lives are at stake, say, in a hostage situation, a hazardous waste spill or a search and rescue mission.
For a technology still largely seen as a deadly, and controversial, weapon for going after suspected terrorists, it couldn’t hurt to be able to show how a drone can help find a lost kid or save an Alzheimer’s patient wandering through the woods.
Not so private eyes
Still, the idea of police departments or government agencies having access to flying cameras makes a lot of people uneasy. This summer, when a rumor started on Twitter that the EPA was using drones to spy on American farmers, it shot through the blogosphere, was repeated on TV, and then in condemning press releases issued by several congressmen–even though it wasn’t true.
As Benjamin Wittes and John Villasenor pointed out in the Washington Post earlier this year, the FAA isn’t a privacy agency. It’s loaded with aviation lawyers. Yet it will be dealing with some very dicey issues, such as how to define invasion of privacy from public airspace and who can get access to video shot by a drone.
To quote Wittes and Villasenor:
“The potential for abuses on the part of government actors, corporations and even individuals is real — and warrants serious consideration before some set of incidents poisons public attitudes against a field that promises great benefits.”
Judging from a pair of surveys on the subject, the public is already pretty wary. Of those recently surveyed by the Associated Press, about a third said they are “extremely concerned” or “very concerned” about how drones could affect their privacy.
Another national poll, taken this summer by the Monmouth University Polling Institute, found that while 80 percent of the people surveyed like the idea of drones helping with search and rescue missions and 67 percent support using them to track runaway criminals, about 64 percent said they are “very concerned” or “somewhat concerned” about losing their privacy.
And they definitely don’t like the notion of police departments using them to enforce routine laws. Two out of three people surveyed said they hate the idea of drones being used to issue speeding tickets.
When robots fly
Here’s more recent research on flying robots:
- No crash courses: NASA scientists are testing two different computer programs to see if they can help drones sense and then avoid potential mid-air collisions. In theory, an unmanned aircraft would be able to read data about other flying objects and change its speed and heading if it appeared to be on a collision course. (A rough sketch of that kind of check follows this list.)
- What goes up doesn’t have to come down: Two recent innovations could dramatically increase the flight time of both giant drones and handheld ones. Lockheed Martin has found a way to recharge its huge Stalker drones wirelessly using lasers, allowing them to stay airborne for as long as 48 hours. And Los Angeles-based Somatis Technologies is working on a process to convert wind pressure and vibrations into energy, which could triple the battery life of hand-launched drones to almost three hours.
- Get your protest souvenir photos here: Russia is stepping up its drone program and will continue to use them to monitor street protests.
- The face is familiar: The Congressional Research Service released a report last month suggesting that law enforcement agencies could, in the near future, outfit drones with facial recognition or biometric software that could “recognize and track individuals based on attributes such as height, age, gender and skin color.”
- Talk to me when it makes honey: Harvard researchers have been working for five years on a tiny robotic bee–not much larger than a quarter–and now it can not only take off under its own power, but also pretty much fly where they want it to go.
- Two blinks to get rid of red eye: Chinese scientists have designed quadcopters that can be controlled by human thought and be told to take a photo by the blink of an eye.
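On the first item in that list: the core geometric test behind any sense-and-avoid scheme can be stated in a few lines. This sketch (my own illustration, not NASA’s software) projects two constant-velocity tracks forward and flags a collision course when the closest point of approach falls inside an assumed safety radius.

```python
import numpy as np

# A minimal sense-and-avoid check, not NASA's software: project two
# constant-velocity tracks forward and flag a collision course if the
# closest point of approach falls inside a hypothetical safety radius.

def closest_approach(p1, v1, p2, v2):
    """Time and distance of closest approach for two straight-line tracks."""
    dp, dv = p2 - p1, v2 - v1
    t = -np.dot(dp, dv) / max(np.dot(dv, dv), 1e-9)   # avoid divide-by-zero
    t = max(t, 0.0)                                   # only look forward in time
    return t, np.linalg.norm(dp + dv * t)

drone = (np.array([0.0, 0.0]), np.array([10.0, 0.0]))      # position (m), velocity (m/s)
plane = (np.array([500.0, 50.0]), np.array([-20.0, -1.0]))

t, dist = closest_approach(*drone, *plane)
if dist < 150.0:   # hypothetical safety radius in meters
    print(f"collision risk in {t:.0f}s, separation {dist:.0f} m: change heading")
```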
Video bonus: This promo video by AeroVironment sure makes it feel like the Qube drone could have its own TV series.
September 25, 2012
About a year ago I wrote about the first meeting of the 100 Year Starship Symposium (100YSS), a conference designed to keep scientists focused on what it will take for humans to be able to travel outside our solar system.
Luckily, they still have about a century to figure it out. NASA and DARPA, the research arm of the Defense Department, are behind the project, and the latter has kicked in $500,000 to start wrestling with the ridiculously difficult challenge of traveling trillions of miles in space by 2100.
Last week, at the second 100YSS meeting, there actually was a bit of progress to note. Along with a discussion of how many pairs of underpants would be required to make such a trip and a rendition of the “Star Trek” theme song by Lt. Uhura herself came a report that warp drive might actually be possible–that it would require far less energy than previously thought for a spaceship to travel several times faster than the speed of light.
Good news, but still a long, long way from making real something we used to see happen on TV every week. It reminded me, though, of the iterative and often methodical process of science, and of how, too often, the focus on innovation is more about the potential of new ideas and technology than about how they actually evolve in the real world.
So here are updates on five innovations I’ve written about in the past year. Some are already making their mark; others remain on a low boil.
1) When robots play nice: Robots work great by themselves, but mix them in with humans and it can get a little dicey. Most robots, while amazingly efficient and powerful, can also be dangerous to people nearby because, to put it simply, they don’t know we’re there.
That’s not the case, however, with a new model designed by Boston-based Rethink Robotics. It’s called Baxter and it’s been given the artificial intelligence to slow its motions when it detects a person approaching. And, to alert humans that it’s aware of their presence, its face turns red.
Next month Rethink will start selling Baxter, which can be trained by humans to do different tasks. The goal is to expand the robot market beyond big factories by providing a model that’s safe and relatively inexpensive–Baxter will cost $22,000, a steal by robot standards.
2) Replicator 2! Coming soon to an office near you!: Much has been written about 3-D printing as the future driver of manufacturing. But Bre Pettis, CEO of Brooklyn-based MakerBot Industries, has always believed in the more personal side of 3-D printers. He thinks they belong in people’s homes right next to their PCs.
Since 2009, the company has sold 13,000 of its MakerBot models. But buyers have largely been hobbyists who ordered their printers online. Now the company is taking things up a notch. Last week Pettis unveiled The Replicator 2, a sleek, stylized and more expensive model, one designed to fit right into the suitably applianced home. Also last week, MakerBot opened its first real store, in Manhattan no less.
Ah, but there’s also a bit of a dark side to giving people the power to print objects at home. Last month, a Wisconsin engineer showed readers of his blog the working gun he made.
3) Every picture tells a story. Or three: When it came on the market early this year, the Lytro camera had some people saying it would do for cameras what the iPhone did for cell phones. It made photos interactive, allowing you to change what’s in focus in an image after the fact. Chicago’s Museum of Science and Industry was impressed enough to include a Lytro in its 2012 Smart Home exhibit.
The Lytro still may transform photography, but not this year. Probably not next year, either. For now at least, most people seem perfectly content with the photos they can take on their smart phones, and they aren’t ready to pay $400 for a camera shaped like a stick of butter that allows them to do something with photos they’re not in the habit of doing.
This summer, Lytro founder Ren Ng stepped down as CEO, a move he said would allow him to focus on the company’s vision and not get bogged down in day-to-day operations. This likely has a lot to do with how quickly Lytro, which raised $50 million in private funding, has grown. It still isn’t able to fill online orders immediately–it won’t share sales figures–but Ng says it has reduced the wait time to about a month.
In case you haven’t seen how Lytro photography works, here’s a sampling.
4) Apple has spoken: A lot of attention has already been paid to the new features of the iPhone 5–its bigger screen, 4G speed, longer battery life. But it’s also worth noting something it doesn’t have–a Near-Field Communication (NFC) chip.
That’s what turns a smart phone into a mobile wallet, enabling it to make payments by waving it at checkout devices in stores. There was much speculation that if Apple gave NFC its blessing, it would push the technology mainstream in the U.S.
But Apple balked, in part because not many stores in the U.S. have been willing to upgrade their checkout systems with NFC devices. Customers haven’t exactly been clamoring for them, and besides, if Apple’s not buying in, why bother, say store owners. (Ah, the vicious circle.)
This is not good news for Isis, a partnership of mobile carriers, including Verizon and AT&T, and credit card companies, such as American Express and Capital One. The day after Apple introduced its new smart phone–minus an NFC chip–Isis announced that it was delaying the launch of its NFC mobile payments service.
5) But who’s going to blow the horn?: Since I first wrote about it in July 2011, Google’s driverless car has received big boosts in Nevada, which last spring became the first state to issue license plates to autonomous vehicles, and California, where last month, in an extremely rare case of bipartisanship, Democrats and Republicans joined forces to overwhelmingly pass a self-driving car law. It directs the state’s Department of Motor Vehicles and the California Highway Patrol to develop safety and performance standards for robotic vehicles.
But Google’s just getting warmed up. It’s following up its success in lobbying officials there by pushing similar legislation in Florida, Arizona, Hawaii and Oklahoma. And this is a concept that’s trending: BMW and Audi are known to be working on their own versions, and no less prestigious an organization than the Institute of Electrical and Electronics Engineers (IEEE) recently predicted that by 2040, 75 percent of the vehicles on the road won’t have human drivers.
Still, it’s not all open road ahead. Automakers have raised questions about their liability if they start selling driverless cars–although Google is quick to point out that its fleet of autonomous Priuses has so far logged 300,000 miles without a single accident. And a consumer watchdog group in California fought the driverless car legislation, raising privacy concerns about how all the data gathered by the vehicles is used. Could you start receiving ads based on where your car drives?
Video bonus: This was probably inevitable. A candidate in Florida has come under fire for his support of driverless cars and now one of his opponent’s campaign ads features an old lady with a walker nearly run down at a stop sign by, you guessed it, a car without a driver. In case you miss the point, the large type next to her asks: “Will Driverless Cars REALLY Slow for Pedestrians?”
September 20, 2012
Usually I walk to work, but earlier this week, after another apocalyptic forecast of torrential rains and head-twisting winds, I fell prey to weather dread and drove in.
In no time, I was reminded of why Washington, D.C. has the worst drivers in the U.S.–Allstate verified it–and also why it’s among the 10 most congested cities in the country. The latest estimate is that drivers here waste an average of 45 hours a year in traffic jams. I don’t know if anyone’s come up with a comparable analysis of how much time the stress of sitting in gridlock takes off your life, but I’m guessing I said goodbye to 15 minutes or so that morning.
The experience revived my interest in the science of traffic flow and how GPS, sensors, and algorithms have made it possible to imagine a day when the commuting madness will end.
Here are some of the ways we just may get there:
1) Follow the wisdom of E. coli: That’s the thinking of two Chinese engineers wrestling with the hideous traffic of Guangzhou, a city of 13 million in southern China. They are advocates of applying “swarm intelligence” to traffic lights in the city, or more specifically, something known as Bacterial Foraging Optimization, an algorithm modeled on how E. coli bacteria forage for food: simple trial-and-error moves that, repeated many times, converge on near-optimal solutions. In this case, the algorithm would be applied to stop lights, adapting them to traffic flow instead of keeping them on a fixed loop. (A bare-bones sketch of this kind of search follows this list.)
2) Failing that, you can still learn a few things from humans: Scientists at the University of Southampton in the U.K. found that real humans are better traffic controllers than computerized systems. So now they’re focusing on developing artificial intelligence for traffic control systems so they can learn from experience as humans do.
3) Or feel the pulse of social chatter: IBM studied traffic jams in three Indian cities over the past year through the social network comments of people stuck in them. The company’s evaluation of tweets, Facebook updates and other social network discussions of people in Mumbai, Bangalore and New Delhi is designed to show how social data can be used to read public attitudes on big urban issues, such as traffic. Among its findings: Drivers in New Delhi talked more about public transportation, weather and the stress of commuting, while those in Bangalore vented about the overall driving experience, construction and parking. And in Mumbai, they tended to rant about accidents and pollution.
4) Twitter intelligence is not an oxymoron: And Twitter is also being used in real time to stay on top of traffic accidents and backups on British highways. A mobile app called Twitraffic analyzes what people are saying on Twitter about traffic and warns you about problems that have popped up. The company behind the app claims it lets people know about accidents an average of seven minutes before the government’s Highways Agency does. It hopes to launch a U.S. version next month.
5) Meanwhile, back in the U.S.: There’s already a pretty impressive mobile app available here for helping you avoid commuting nightmares. It’s called Waze and it not only gives you directions, but it also monitors what other drivers are saying about what’s happening on the streets around you. It’s a traffic report through crowdsourcing, and one that constantly updates with new directions if there’s bad news coming in about the road ahead.
6) Just let the cars work it out: Since last month, about 3,000 vehicles around Ann Arbor, Michigan, have been able to talk to one another. As part of a joint project of the U.S. Department of Transportation and the University of Michigan, the cars and trucks have been adapted to communicate wirelessly and warn each other of potential accidents or backups. For instance, one vehicle could tell another when it’s approaching an intersection or stopping on the road ahead. The Michigan researchers think these wireless systems, if they become a standard feature, could cut accidents by 80 percent.
7) Car Talk was taken: MIT scientists are heading down the same road, developing something they call CarSpeak. It’s a communication system for driverless cars that lets them “see” through the data provided by other cars on the road. That would allow a car to cruise right through an intersection, because it would know no other cars were coming.
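As promised under item 1, here is a bare-bones sketch of the tumble-and-keep search at the heart of foraging-style optimization, applied to a single light’s green-time split. The one-intersection delay model is invented for illustration; the real algorithm coordinates many lights at once.

```python
import random

# A drastically simplified, hypothetical take on item 1's idea: treat a
# light's green-time split as a "bacterium" that tumbles to a nearby
# random setting and keeps the move only if simulated delay improves,
# the basic chemotaxis step behind Bacterial Foraging Optimization.
# The delay model below is invented for illustration.

def delay(green_ns, arrivals_ns=0.7, arrivals_ew=0.3, cycle=60):
    """Toy cost: queue delay grows when a direction's green is too short."""
    green_ew = cycle - green_ns
    return arrivals_ns / green_ns + arrivals_ew / green_ew

green = 30.0                            # start with an even split
for _ in range(200):                    # tumble, keep only improvements
    candidate = min(max(green + random.uniform(-3, 3), 5), 55)
    if delay(candidate) < delay(green):
        green = candidate

print(f"north-south green: {green:.0f}s of a 60s cycle")   # about 36s here
```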
Down the road
Here are a few other developments designed to help us get around:
- Not so mellow yellow: A researcher at Virginia Tech concludes that one of our big problems is yellow lights, because they create what he calls a “dilemma zone” for drivers. He’s developing a system for giving drivers a few seconds’ notice when a light is about to turn yellow.
- We don’t need no stinking stretch limo: The largest buses in the world, 98-foot-long vehicles capable of carrying more than 250 people, will be rolled out in Dresden, Germany next month.
- Nothing makes an old man feel young like driving at night: According to a study at MIT, the most important car feature for drivers over 50 is smart headlights, which adjust the range and intensity of light based on the location of other cars. The idea is to reduce glare and improve visibility at night.
- I’m sleepin’ here: A new study of traffic noise levels in and around Atlanta found that almost 10 percent of the area’s population is exposed to traffic noise at a level described as “annoying.” And more than 2 percent live where traffic noise was described as “highly disturbing to sleep.”
Video bonus: How maddening are phantom traffic jams, you know, when everything slows to a crawl for no apparent reason? Here are two explanations, one from scientists, the other more like what we imagine.