May 24, 2013
Human flight has become boring. Air travel is a testament to man’s ingenuity and imagination. In the words of comedian Louis CK, “you’re sitting in a chair – IN THE SKY.” It’s amazing. And yet, in only 50 years or so, flight, something scholars and inventors have been investigating for centuries, has become a banality. Sometimes, even an inconvenience! And though we may have mastered the skies to the extent that unmanned aerial vehicles can be sent anywhere on the planet, there is still some mystery left to discover. For while drone technology may seem to be the only area where advancements in flight are being made, many researchers today, like Archytas and da Vinci before them, remain fascinated by something that seems much simpler: bird flight, and by the possibility of creating unmanned aerial vehicles of a very different nature.
Take, for example, SmartBird (top image), a project developed in 2011 by Festo, a global leader in automation technology. Inspired by the herring gull and the book Jonathan Livingston Seagull, SmartBird is a robot with articulated wings that function just like their biological inspiration, generating thrust and forward motion. With SmartBird, researchers wanted to decode bird flight to develop a machine that could take off, fly, and land using only its own wing-flapping power. The “mechatronic and cybernetic holistic design” was made possible by using lightweight construction materials and a unique mechanism that allows the wings to twist and torque in a way that approximates real birds. SmartBird is not necessarily the future of aviation, but was created as a proof-of-concept for technology that may one day be used to help create more efficient factory automation and new power generators. However, its natural flight movements and seagull “disguise” seem to imply more tactical uses.
More recently, researchers at the University of Maryland Robotics Center successfully launched a “micro air vehicle” that has been in development for eight years. After many test flights, many crashes, and many adjustments, the Robo Raven, as it is known, took to the skies for the first time after the team made a design breakthrough in April. Their new design features programmable wings that can be controlled independently, like real bird wings, allowing for high-velocity dives, rolls, and other aerial acrobatics. The silver mylar-winged robot is much smaller and much more abstract in appearance than the SmartBird, but its movement is incredibly realistic. So realistic, in fact, that it has even fooled nature – several early models were torn apart by hawks. It’s really quite something to see. The project’s success was also made possible by recent advancements in manufacturing like 3D printing and laser cutting. The Maryland team suggests that one day, the relatively lightweight, cheap, and versatile technology of robot birds could be used for agriculture and environmental monitoring. There are other possibilities as well, including surveillance – Robo Raven has already been outfitted with a POV camera. If these robotic birds become natural enough, the drones of tomorrow could be undetectable to the untrained eye.
But you don’t need drones or robots to survey a city from the skies. New York architects Aranda\Lasch have shown that cyborg pigeons will do just fine.
Aranda\Lasch developed The Brooklyn Pigeon Project as an experimental biological satellite. A flock of trained pigeons, ubiquitous in New York City, was equipped with a small battery, video camera, and microphone, and flown in spiral patterns over Brooklyn. The project is both a documentation of flocking behavior and an attempt to craft a true birds-eye view of the city. The avian cartographers of the Brooklyn Pigeon Project are sensitive to environmental stimuli that their human counterparts can’t observe. Their flight patterns are affected by sound, smells, and their ability to sense the Earth’s magnetic field. The resulting maps differ dramatically from the purely technological “grid” of modern GIS systems to provide a unique perspective on the city that, in the words of the designers, “contrasts directly with the way the city is increasingly recorded and represented today.”
The Brooklyn Pigeon Project has a precedent in the work of a pharmacist, inventor, and amateur photographer by the name of Julius Neubronner who, between 1907 and 1920, developed dozens of miniature cameras designed to be attached to carrier pigeons via tiny leather harnesses. While initially created as little more than a hobby, Neubronner anticipated that his invention would have military uses, and indeed his pigeon photographers were briefly enlisted and deployed to safely take photographs over enemy lines (part of an ongoing effort to militarize animals, as noted in ion’s history of animal soldiers). Although slightly more unwieldy than the BPP cameras, Neubronner’s device is perhaps more ingenious.
It’s exciting to think that the avian world still has much to teach us. We still strive to capture the world as experienced by birds – the way they so elegantly move through the skies, see the ground, and detect the invisible forces that surround us. New research, combined with new manufacturing technologies, is bringing us a little closer to the day when the familiar airplanes and intimidating drones filling our skies will be replaced by autonomous, naturally flying, all-seeing, robotic birds. Despite centuries of investigation, we’ve only just started to unlock the secrets that nature perfected over eons.
May 22, 2013
Our recent post on the history of the cuckoo clock inspired some research into other examples of early, non-timekeeping robot birds. For centuries, birds–pigeons and canaries in particular–have been a popular subject for inventors and engineers experimenting with early mechanical systems and robotics. Take, for example, Bubo, the ancient clockwork owl seen in the 1981 film Clash of the Titans. Bubo was forged by Hephaestus to aid Perseus in his quest and was, of course, purely fictional. There were, however, actual avian automatons in ancient Greece.
The earliest example dates to 350 B.C.E., when the mathematician Archytas of Tarentum, who some credit with inventing the science of mechanics, is said to have created a mechanical wooden dove capable of flapping its wings and flying up to 200 meters, powered by some sort of compressed air or internal steam engine. Archytas’ invention is often cited as the first robot, and, in light of recent technological advancements, perhaps we could even consider it to be the first drone; the very first machine capable of autonomous flight. Very few details are actually known about the ancient mechanical dove, but it seems likely that it was connected to a cable and flew with the help of a pulley and counterweight. This early wind-up bird was chronicled a few hundred years later in the pages of a scientific text by the mathematician Hero of Alexandria.
In his treatise on pneumatics, Hero also outlined his own designs for several different types of artificial birds that could move and sing in response to flowing water that pushed air through small tubes and whistles concealed within his carved birds. From these basic designs, the interest and intrigue surrounding mechanical birds, and automatons in general, only grew as the centuries passed.
It’s well known that Leonardo da Vinci was fascinated by the idea of human flight. He obsessively observed the motion of birds in flight and created dozens of designs for flying machines of all shapes and sizes – from bat-winged gliders to corkscrew helicopters. He dissected and diagrammed bird wings in efforts to unlock the secrets of flight, recording everything in a codex dedicated to flight written in the early 16th century. Around that same time, da Vinci used what he learned to create a mechanical bird for a stage production. The bird was by all accounts a relatively simple thing that flapped its wings via a mechanism activated as it descended a cable. During da Vinci’s day, such high-wire birds were used in Florence as part of the “Scoppio del Carro” tradition, during which a mechanical dove known as the “Columbina” is used to help ignite a cart of fireworks as a way to ring in the Easter holiday. The tradition continues today. In the incredibly entertaining but historically dubious television series “Da Vinci’s Demons,” the titular artist creates a highly elaborate mechanical dove that bears more of a resemblance to Hephaestus’s Bubo than to a simple theatrical prop:
Perhaps the most famous mechanical bird appeared during the 18th century when French inventor Jacques de Vaucanson astounded the public with a duck that could quack, rear up on its legs, bow its neck, flap its wings, drink, eat, and, most impressive, poop. As they say, if it looks like a duck, swims like a duck, and quacks like a duck, then it’s probably a duck – unless it’s a robot, that is. Vaucanson charged a steep fee to witness his famous clockwork canard and the gold-plated duck quickly became the talk of France, even earning the acknowledgment of Voltaire, who wryly commented, “without the shitting duck of Vaucanson, there would be nothing to remind us of the glory of France.”
Vaucanson alleged that his creation used a complex system of artificial bowels filled with chemicals to “digest” the grain, then evacuate it through the duck’s mechanical sphincter (there’s a phrase I never thought I’d write). While it made Vaucanson famous and was surely a hit at parties, the duck’s digestion was a hoax – though still quite impressive. In reality, it used an elaborate mechanical system concealed in the podium wherein grain was collected in one chamber and artificial excrement made of dyed breadcrumbs was released from another. However, the hoax was not revealed for more than 100 years. Long after the digesting duck had been forgotten, it was re-discovered in a pawnshop attic, repaired by a Swiss clockmaker, and eventually fell into the hands of magician Jean-Eugène Robert-Houdin, the man from whom Houdini took his name, before disappearing once again in the late 19th century. Robert-Houdin was also a clockmaker who used his talent to create several of his own elaborate automata.
To perfect his mechanical birds, Robert-Houdin spent his days climbing trees and listening to bird songs, trying to reproduce them on his own. The next step was to create a whistle tuned to a specific birdsong, then figure out a system to play the whistle while animating the bird’s beak and wings in sync with the sound. Robert-Houdin then took his mechanical bird a step further. He created an innovative combination of automata that included both a basic android –more specifically, a mechanical woman– and a mechanical canary. The “woman” cranked a serinette –a type of music box often used by real people to teach real canaries to sing– that played a song the canary would then imperfectly imitate. The process was repeated: the woman cranked the serinette again, but on the second turn, the canary’s imitation improved. The process continued until the canary “learned” the song and could reproduce it perfectly. Robert-Houdin’s automaton not only reproduced a song, but also the apparent learning of a song.
Many other types of automata were built during these centuries, but these early robot birds were displays of technological savvy, reflections of contemporary trends (training canaries was all the rage in 19th-century France), and expressions of man’s efforts to understand and to master the natural world. Our fascination with the mechanics of birds and birdsong continues to this day. In our next post, we’ll look at some of the more recent bird-machine hybrids.
May 8, 2013
Since writing last week’s post about the possible origin of the QWERTY keyboard and the viability of new digital alternatives, I’ve been especially mindful of every keyboard I use. As a footnote of sorts to that post, I’ve noticed that there’s a particularly strange feature on the iPad’s virtual keyboard: a raised bar on the F and J keys. On physical keyboards, these raised indicators allow touch typists to orient their eight fingers on the center row of the keyboard without looking. So why would a flat touchscreen have these raised indicators? One word. Skeuomorphism.
“Skeuomorphism” is a design principle in which an obsolete design element is integrated into a new object –often as a superficial graphic detail– even though it’s no longer functional or necessary. For instance, when the ancient Greeks started building in stone, they imitated the forms of wood construction – including unnecessary wood joints and ornamentation; protruding joists were eventually transformed into dentils. The term is certainly not a neologism (although spell check still refuses to acknowledge it) but its use has become much more widespread with the emergence of touchscreen applications. Digital skeuomorphic elements can help give users a sense of familiarity when dealing with a new technology – like a notepad app that looks like a legal pad, the page-turning animation on a digital book, or the sound of a shutter clicking on digital cameras and mobile phones. Soon these elements may outlive their usefulness or take on a new meaning, but for now these vestigial details work as sensory cues.
Let’s get back to the keyboard. In our previous post, it was suggested that the very nature of “keys” is obsolete for touchscreen devices. A case could be made either way, I think, but a graphic representation of the tactile raised bars is most definitely unnecessary on keys that are never physically touched. In fact, most touchscreen devices do not include these vestigial elements. Cursory Googling reveals that the keyboards on the Kindle, Nook, and Surface all lack any sort of tactile carryover. The iPad appears to be unique in this respect, but is in line with Apple’s initial approach to user interface design for mobile applications. In their iOS Human Interface Guidelines for software developers, the company recommends using visual metaphors to “suggest a usage or experience without enforcing the limitations of the real-world object or action on which they’re based” or adding physicality and realism to a user interface:
Sometimes, the more true to life your app looks and behaves, the easier it is for people to understand how it works and the more they enjoy using it….Think of the objects and scenes you design as opportunities to communicate with users and to express the essence of your app. Don’t feel that you must strive for scrupulous accuracy. Often, an amplified or enhanced portrayal of something can seem more real, and convey more meaning, than a faithful likeness.
Recently, the tide seems to be turning against skeuomorphism. Apple has taken a lot of flak for the skeuomorphic graphics in their mobile software, and after a recent executive shakeup it sounds like many of these elements won’t make it into the next iteration of their operating system. Yet with advances in touchscreen technology, there might actually be a chance that the virtual keyboard will once again require those home row “bumps”. Apple and other companies are researching touchscreens that can provide haptic feedback through the use of vibration, electronic impulses, and screens that can literally change shape to create a textured surface. With these new displays on the horizon, perhaps it’s only a matter of time until the vestigial home key bumps on virtual keyboards have their function returned.
May 3, 2013
What came first: the typist or the keyboard? The answer depends on the keyboard. A recent article in Smithsonian’s news blog, Smart News, described an innovative new keyboard system that proposes a more efficient alternative to the ubiquitous “universal” keyboard best known as QWERTY – named for the first six letters in the top row of keys. The new keyboard, known as KALQ, is designed specifically for thumb-typing on today’s smart phones and tablets. It’s an interesting and by all accounts commercially viable design that got me thinking about the rationale behind the QWERTY keyboard. Unlike KALQ, it couldn’t have been designed to accommodate a specific typing technique because, well, the idea of typing –touch typing, at least– hadn’t been invented yet. It turns out that there is a lot of myth and misinformation surrounding the development of QWERTY, but these various theories all seem to agree that the QWERTY layout was developed along with, and inextricably linked to, early typewriters.
In the 1860s, a politician, printer, newspaper man, and amateur inventor in Milwaukee by the name of Christopher Latham Sholes spent his free time developing various machines to make his businesses more efficient. One such invention was an early typewriter, which he developed with Samuel W. Soulé, James Densmore, and Carlos Glidden, and first patented in 1868. The earliest typewriter keyboard resembled a piano and was built with an alphabetical arrangement of 28 keys. The team surely assumed it would be the most efficient arrangement. After all, anyone who used the keyboard would know immediately where to find each letter; hunting would be reduced, pecking would be increased. Why change things? This is where the origin of QWERTY gets a little foggy.
The popular theory states that Sholes had to redesign the keyboard in response to the mechanical failings of early typewriters, which were slightly different from the models most often seen in thrift stores and flea markets. The type bars connecting the key and the letter plate hung in a circle beneath the paper. If a user quickly typed a succession of letters whose type bars were near each other, the delicate machinery would get jammed. So, it is said, Sholes redesigned the arrangement to separate the most common sequences of letters like “th” or “he”. In theory then, the QWERTY system should maximize the separation of common letter pairings. This theory can be easily questioned for the simple reason that “er” is the fourth most common letter pairing in the English language. However, one of the typewriter prototypes had a slightly different keyboard that was only changed at the last minute. If it had been put into production this article would have been about the QWE.TY keyboard:
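To make that objection concrete, here is a small sketch (my own, not from any of the sources above) that measures how far apart the keys of a few common English digrams sit on the QWERTY letter grid. The digram list and the simple grid-distance metric are illustrative assumptions; real typebar adjacency in the basket differed somewhat from key adjacency on the board.

```python
# The three letter rows of the QWERTY layout, top to bottom.
QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

# Map each letter to a (row, column) grid position; row stagger is ignored.
POS = {ch: (r, c) for r, row in enumerate(QWERTY_ROWS) for c, ch in enumerate(row)}

def key_distance(a, b):
    """Chebyshev (chessboard) distance between two keys on the letter grid."""
    (r1, c1), (r2, c2) = POS[a], POS[b]
    return max(abs(r1 - r2), abs(c1 - c2))

# A few of the most frequent English digrams.
for digram in ["th", "he", "in", "er", "an"]:
    print(digram, key_distance(*digram))

# "er" comes out at distance 1 -- the E and R keys are immediate neighbors,
# which is hard to square with a layout designed to separate common pairs.
```

If separation of frequent pairings were the design goal, a digram as common as “er” landing on adjacent keys is exactly the kind of counterexample the popular theory struggles with.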
By 1873, the typewriter had 43 keys and a decidedly counter-intuitive arrangement of letters that supposedly helped ensure the expensive machines wouldn’t break down. Form follows function and the keyboard trains the typist. That same year, Sholes and his cohorts entered into a manufacturing agreement with gun-maker Remington, a well-equipped company familiar with producing precision machinery and, in the wake of the Civil War, no doubt looking to turn their swords into plowshares. However, right before their machine, dubbed the Sholes & Glidden, went into production, Sholes filed another patent, which included a new keyboard arrangement. Issued in 1878, U.S. Patent No. 207,559 (top image) marked the first documented appearance of the QWERTY layout. The deal with Remington proved to be an enormous success. By 1890, there were more than 100,000 QWERTY-based, Remington-produced typewriters in use across the country. The fate of the keyboard was decided in 1893 when the five largest typewriter manufacturers –Remington, Caligraph, Yost, Densmore, and Smith-Premier– merged to form the Union Typewriter Company and agreed to adopt QWERTY as the de facto standard that we know and love today.
There’s a somewhat related theory that credits Remington’s pre-merger business tactics with the popularization of QWERTY. Remington didn’t just produce typewriters, they also provided training courses – for a small fee, of course. Typists who learned on their proprietary system would have to stay loyal to the brand, so companies that wanted to hire trained typists had to stock their desks with Remington typewriters. It’s a system that still works today, as illustrated by the devout following Apple built through the ecosystem created by iTunes, the iTunes store, and the iPod.
While it can’t be denied that the deal with Remington helped popularize the QWERTY system, its development as a response to mechanical error has been questioned by Kyoto University researchers Koichi Yasuoka and Motoko Yasuoka. In a 2011 paper, the researchers tracked the evolution of the typewriter keyboard alongside a record of its early professional users. They conclude that the mechanics of the typewriter did not influence the keyboard design. Rather, the QWERTY system emerged as a result of how the first typewriters were being used. Early adopters and beta-testers included telegraph operators who needed to quickly transcribe messages. However, the operators found the alphabetical arrangement to be confusing and inefficient for translating Morse code. The Kyoto paper suggests that the typewriter keyboard evolved over several years as a direct result of input provided by these telegraph operators. For example:
“The code represents Z as ‘· · · ·’ which is often confused with the digram SE, more frequently-used than Z. Sometimes Morse receivers in United States cannot determine whether Z or SE is applicable, especially in the first letter(s) of a word, before they receive following letters. Thus S ought to be placed near by both Z and E on the keyboard for Morse receivers to type them quickly (by the same reason C ought to be placed near by IE. But, in fact, C was more often confused with S).”
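The ambiguity the paper describes can be sketched in a few lines. The toy encoding table below follows the quoted passage (American Morse rendered Z as the same dot pattern as S followed by E); it covers only the letters needed for this illustration and is my own simplification, not code from the Kyoto paper.

```python
# Minimal American Morse table, per the quoted passage: Z is "... ." --
# i.e., the signal for S ("...") followed by the signal for E (".").
AMERICAN_MORSE = {"S": "...", "E": ".", "Z": "... ."}

def encode(text):
    """Encode a string, joining letters with a single space (the letter gap)."""
    return " ".join(AMERICAN_MORSE[ch] for ch in text)

print(encode("SE"))  # ... .
print(encode("Z"))   # ... .  -- identical on the wire, hence the receiver's dilemma
```

Because the two encodings are byte-for-byte identical, a receiver transcribing at speed has no way to tell Z from SE until more context arrives – which is exactly why, on this account, S needed to sit near both Z and E on the keyboard.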
In this scenario, the typist came before the keyboard. The Kyoto paper also cites the Morse lineage to further debunk the theory that Sholes wanted to protect his machine from jamming by rearranging the keys with the specific intent to slow down typists:
“The speed of Morse receiver should be equal to the Morse sender, of course. If Sholes really arranged the keyboard to slow down the operator, the operator became unable to catch up the Morse sender. We don’t believe that Sholes had such a nonsense intention during his development of Type-Writer.”
Regardless of how he developed it, Sholes himself wasn’t convinced that QWERTY was the best system. Although he sold his designs to Remington early on, he continued to invent improvements and alternatives to the typewriter for the rest of his life, including several keyboard layouts that he determined to be more efficient, such as the following patent, filed by Sholes in 1889, a year before he died, and issued posthumously:
But the biggest rival ever to challenge QWERTY is the Dvorak Simplified Keyboard, developed by Dr. August Dvorak in the 1930s.
Dvorak users reported faster and more accurate typing, in part because the system dramatically increases the number of words that can be typed using the “home” row of keys where your fingers naturally rest – also known as the keys you type when you’re just trying to fill space. asjdfkal; sdfjkl; asdfjkl; asdfjkl; dkadsf. asdfjklasdfjk. More recent research has debunked any claims that Dvorak is more efficient, but it hardly matters. Even in 1930 it was already too late for a new system to gain a foothold. While Dvorak certainly has its champions, it never gained enough of a following to overthrow King QWERTY. After all, the world learned to type using Remington’s keyboard.
When the first generation of computer keyboards emerged, there was no longer any technical reason to use the system – computers didn’t get jammed. But of course, there’s the minor fact that millions of people had learned to type on QWERTY keyboards. It had become truly ubiquitous in countries that used the Latin alphabet. Not only that, but way back in 1910, the system had been adopted by Teletype, a company that would go on to produce electronic typewriters and computer terminals widely used around the world, thereby ensuring QWERTY’s place as the new technological standard.
When a design depends on a previous innovation too entrenched in the cultural zeitgeist to change, it’s known as a path dependency. And this is why the new KALQ proposal is so interesting. It attempts to break from the tyranny of Christopher Latham Sholes, whose QWERTY system makes even less sense on the virtual keyboards of tablets and smartphones than it does on a computer keyboard. Is the new KALQ system any different? In some ways, the answer is obviously yes. It has been designed around a very specific, very modern behavior – typing with thumbs. As in the telegraph-operator theory of QWERTY, the user is determining the structure of the keyboard. But it could still be argued that the KALQ system, or any similar system that may be developed in the future, is also a product of path dependency. No matter how the letters are arranged, the basic notion of individually separated letters distributed across a grid dates back to Sholes and co. tinkering away in their Milwaukee workshops. But it’s just not necessary on a tablet. If you gave an iPad to someone who had never used a keyboard and told them to develop a writing system, chances are they would eventually invent a faster, more intuitive system. Perhaps a gesture-based system built on shorthand? Or some sort of swipe-to-type system? This is not to say that such a system would be better; it’s merely an observation that our most bleeding-edge communication technology still dates back more than 150 years to some guys tinkering in their workshop. Truly, the more things change, the more they stay the same.
March 19, 2013
It’s nearly impossible to walk around a city or college campus or shopping mall, or really anywhere these days, without seeing at least a few dozen people wearing little earbuds stuffed into their ears, or even huge headphones that look like something a 747 pilot might wear. The ubiquity of modern headphones could perhaps be attributed to the Sony Walkman, which debuted in 1979 and almost immediately became a pop culture icon. As the first affordable, portable music player, the Walkman became such a prominent characteristic of the young urban professional that it was even featured on the cover of The Yuppie Handbook. But of course, the history of headphones dates back further than the 1980s. Like many commercial electronics, modern headphones (and stereo sound) originated, in part, in the military. However, there’s not a singular figure or company who “invented” the headphones, but a few key players who brought them from military bases and switchboards into the home and out to the street.
In the 1890s, a British company called Electrophone created a system allowing its customers to connect into live feeds of performances at theaters and opera houses across London. Subscribers to the service could listen to the performance through a pair of massive earphones that connected below the chin, held by a long rod. The form and craftsmanship of these early headphones make them a sort of remote, audio equivalent of opera glasses. It was revolutionary, and even offered a sort of primitive stereo sound. The earliest headphones, however, had nothing to do with music; they were used by telephone operators and for radio communication in the late 19th century.
Before the Electrophone, French engineer Ernest Mercadier patented a set of in-ear headphones in 1891, as engineer Mark Schubin noted in an excellent article on the history of headphones. Mercadier was awarded U.S. Patent No. 454,138 for “improvements in telephone-receivers…which shall be light enough to be carried while in use on the head of the operator.” After extensive testing and optimization of telephone receivers, Mercadier was able to produce miniature receivers that weighed less than 1 3/4 ounces and were “adapted for insertion into the ear.” His design is an incredible feat of miniaturization and is remarkably similar to contemporary earbud headphones, down to the use of a rubber cover “to lessen the friction against the orifice of the ear…[and] effectually close the ear to external sounds.”
Do telephone headsets go back further than Mercadier’s 1891 patent? Sort of, but they’re almost unrecognizable shoulder harness-like objects that barely meet the definition by today’s standards. So let’s flash forward to the birth of the modern headphones.
In the years leading up to WWI, it wasn’t uncommon for the Navy to receive letters from small businesses and inventors offering up their unique products and skills. In 1910, a particularly memorable letter written in purple ink on blue and pink paper came from Utah native Nathaniel Baldwin, whose missive arrived with a pair of prototype telephone headsets offered for military testing. While the request wasn’t immediately taken seriously, the headphones were eventually tested and found to be a drastic improvement over the model then being used by Naval radio operators. More headsets were requested for testing, and Baldwin obliged at his own expense.
The Navy offered Baldwin suggestions for a few tweaks, which he promptly incorporated into a new design that, while still clunky, was comfortable enough for everyday use. The Navy placed an order for Baldwin’s headphones, only to learn that Baldwin was building them in his kitchen and could only produce 10 at a time. But because they were better than anything else that had been tested, the Navy accepted Baldwin’s limited production capabilities. After producing a few dozen headphones, the head harness was further improved as its design was reduced to only two leather-covered, adjustable wire rods attached at each end to a receiver that supposedly contained a mile of copper wire. The new headset proved to be an immediate success, and the Navy advised Baldwin to patent this new model of headphone. Baldwin, however, refused on the grounds that it was a trivial innovation. In order to increase production, the Navy wanted to move Baldwin out of his Utah kitchen and into a much larger East Coast facility. But Nathaniel Baldwin was a polygamist and couldn’t leave Utah. Another manufacturer, the Wireless Specialty Apparatus Co., got wind of the situation and worked with the inventor to build a factory in Utah and manufacture the headphones. The agreement with Wireless Specialty came with one enormous caveat: the company could never raise the price of headsets sold to the U.S. Navy.
The next big innovation in headphone design came after the Second World War, with the onset of stereophonics and the popular commercialization of the technology. Record label EMI pioneered stereo recordings in 1957, and the first commercial stereo headphones were created a year later by musician and entrepreneur John Koss, founder of the Koss Corporation. Koss heard about a “binaural audio tape” from a friend and was thrilled to hear how it sounded through a pair of military-grade headphones. Determined to bring this sound to the public, Koss developed an entire “private listening system” for enjoying music, the Koss Model 390 phonograph, which included a phonograph, speaker, and headphone jacks all in one small package. The only problem was that there were no commercially available headphones compatible with his new phonograph. They were all made for communications equipment or warplanes. Koss talked with an audio engineer about this and they quickly rigged up a pair of makeshift prototype headphones. “It was a great sound,” Koss remembers. The refined design was built from two vacuum-formed brown plastic cups containing three-inch speakers protected by a perforated, light plastic cover and foam ear pads. These were connected by a bent metal rod, and the Koss SP-3 headphones were born. “Now the whole thing was there,” remembers Koss. Music lovers embraced the stereophonic headphones for their enhanced sound quality, made possible by sending a different signal to each ear to closely approximate the sound of a concert hall. The design was well received when it debuted at a hi-fi trade show in Milwaukee in 1958 and was almost immediately copied by other manufacturers, standardizing the design of headphones around the world for years to come.
An interesting footnote to this story is the suggestion from media theorist Friedrich Kittler that, while Koss may have created the first truly stereo headphones, the first people to actually experience stereophonic sound through headphones were the members of the German Luftwaffe during World War II.
In his book Gramophone, Film, Typewriter, Kittler describes the innovative radar system used by the German Airforce during World War II, which allowed headphone-wearing pilots to reach their destinations and bombers to accurately drop their payloads without ever seeing their targets:
“Radio beams emitted from the coast facing Britain…formed the sides of an ethereal triangle, the apex of which was located precisely above the targeted city. The right transmitter beamed a continuous series of Morse dashes into the pilot’s right headphone, while the left transmitter beamed an equally continuous series of Morse dots–always exactly in between the dashes–into the left headphone. As a result, any deviation from the assigned course results in the most beautiful ping-pong stereophony.”
When the pilots reached their target, the two radio signals merged into one continuous note. As Kittler writes, “Historically, [the German pilot] had become the first consumer of a headphone stereophony that today controls us all.”
The above mentioned designs are only a few of the more prominent developments in the history of personal audio. It’s likely that there are even earlier inventions and it’s certain that there are many, many other individuals who should be thanked for their contributions to the development of the modern headphones that let us shut out the roar of plane engines with music, listen to play-by-play analysis while watching a baseball game in person, and strut down the street to our own personal soundtracks.
Captain Linwood S. Howeth, USN, “The Early Radio Industry and the United States Navy,” History of Communications-Electronics in the United States Navy (1963): 133–152; Peter John Povey and Reg A. J. Earl, Vintage Telephones of the World (London: Peter Peregrinus Ltd., 1988); Friedrich Kittler, Gramophone, Film, Typewriter, trans. Geoffrey Winthrop-Young and Michael Wutz (Stanford, CA: Stanford University Press, 1999); Virginia Heffernan, “Against Headphones,” The New York Times (January 7, 2011); Mark Schubin, “Headphones, History, & Hysteria” (2011), http://www.schubincafe.com/2011/02/11/headphones-history-hysteria/; “Koss History,” http://www.koss.com/en/about/history; Google patents