Sunday, July 31, 2011

The Rise of the Virtual Office

As the definition of the workplace changes, dramatic increases in productivity could be ahead

The idea that the office is a specific place where our professional lives "happen" is becoming less universal, and less important. These days many knowledge workers can be productive anywhere, thanks to smarter, more numerous mobile devices, faster network access, and a growing number of online collaboration tools. Telecommuting is no longer merely something that the phone company is trying to sell you. And wherever "the office" may be, wider and better use of social networks, data analytics, and smart technologies such as voice recognition could be poised to increase productivity dramatically—meaning that both real and virtual offices may have fewer people in them.

But while the physical office is changing, certain connotations of the word "office" are not. Two in particular—"hierarchical organization" and "place for human interaction"—show no sign of becoming less important. Even the most progressive high-tech companies retain many of the organizational trappings of their industrial-age predecessors: full-time managers, org charts, job descriptions, and so on. And since humans remain social animals, conventional gathering places will remain important in business. These spaces—whether they be conventional offices, temporary ones, or conference facilities—must be made conducive to collaboration. They must also become physically healthy places to spend hours in, since sedentary work has emerged as a significant health threat.

As the office expands beyond its conventional boundaries, key challenges must be met, including the privacy and security issues posed by a distributed global workforce of people who work digitally and use multiple devices. New tools like cloud-based office productivity apps must be made not only user-friendly but resistant to attacks and data loss. And workers will need better tools—including improved voice-recognition software, e-mail-organizing technologies, and intelligent agents that help handle complex tasks once reserved for specialists—to streamline work processes, make sense of the overwhelming volumes of data besieging them, and improve productivity.

To date, IT-driven productivity gains within the office have been somewhat modest, at least compared with those seen in manufacturing. In 1989 the U.S. manufacturing sector employed 18 million people; by 2009 that figure had declined to 11.8 million. But though the workforce shrank 34 percent, the value added by U.S. manufacturers—that is, the value of their output minus the cost of raw materials purchased—surged 75 percent, to $1.78 trillion. We've certainly observed white-collar productivity improvement as well, especially since the mid-1990s, but not on the same scale.
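The figures in that paragraph imply a striking gain in output per worker, which the text leaves uncomputed. A quick illustrative check (using only the numbers quoted above):

```python
# Check the manufacturing figures cited above: employment fell from
# 18 million (1989) to 11.8 million (2009) while value added rose 75%.
employment_1989 = 18.0      # million workers
employment_2009 = 11.8      # million workers
value_added_growth = 0.75   # 75 percent increase, to $1.78 trillion

workforce_decline = (employment_1989 - employment_2009) / employment_1989
print(f"Workforce shrank {workforce_decline:.0%}")  # → 34%

# Value added per worker: output grew 1.75x while headcount fell to
# 11.8/18 of its former size, so per-worker output is the ratio.
per_worker_multiplier = (1 + value_added_growth) / (employment_2009 / employment_1989)
print(f"Value added per worker: {per_worker_multiplier:.2f}x the 1989 level")  # → 2.67x
```

In other words, the cited numbers mean each remaining manufacturing worker produced roughly 2.7 times as much value in 2009 as in 1989, the benchmark against which office productivity gains look modest.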

That may soon change. Consider that people already routinely deal with computers rather than office workers when they make an airline reservation, buy products and arrange for delivery, or troubleshoot a problem with a product they own. If a task involves simple and predictable forms of communication without much nuance or emotion, computers can do just fine, leaving humans to handle an ever-dwindling number of exceptions to the usual procedures or questions.

More far-out advances in artificial intelligence could push productivity even further. Voice recognition, speech synthesis, and automatic translation have improved significantly. And we've seen that computers can now accurately understand and reply to questions: IBM's Watson supercomputer beat human competitors at Jeopardy! earlier this year. Skeptics will point out that futurists have been promising an AI-driven revolution in knowledge work for decades. But by now even the skeptics are finding phone numbers with the help of computer-based operators. When the productivity enhancements from these innovations are tallied, I predict that they will be striking.

On top of this, software and social tools can boost the productivity of the remaining human office workers. For example, a customer-service rep who deals with technical questions can work with just one customer at a time on the phone, but can easily handle two or more customers simultaneously if the medium is instant messaging. Whole office-based industries may become vastly more efficient; the legal profession, for one, may be in the early stages of a deep transformation, especially since the prices clients are willing to pay are falling sharply. A new breed of legal outsourcing offers much cheaper ways to accomplish certain tasks: contract lawyers and digital tools scan documents during discovery, for example. Intelligent software will only get better at finding associations in those documents and mining meaning from them.

Source: Technology Review
Author: Andrew McAfee

Recognizing voices depends on language ability

Study finds that for people with dyslexia, it’s much harder to identify who is speaking

Distinguishing between other people's voices may seem like a trivial task. However, if those people are speaking a language you don't understand, it becomes much harder. That's because you rely on individuals' differences in pronunciation to help identify them. If you don't understand the words they are saying, you don't pick up on those differences.

That ability to process the relationship between sounds and their meanings, also known as phonology, is believed to be impaired in people with dyslexia. Therefore, neuroscientists at MIT theorized that people with dyslexia would find it much more difficult to identify speakers of their native language than non-dyslexic people.

In a study appearing in Science on July 29, the researchers found just that. People with dyslexia had a much harder time recognizing voices than non-dyslexics. In fact, they fared just as poorly as they (and non-dyslexics) did when listening to speakers of a foreign language.

The finding bolsters the theory that impaired phonology processing is a critical aspect of dyslexia, and sheds light on how human voice recognition differs from that of other animals, says John Gabrieli, MIT's Grover Hermann Professor of Health Sciences and Technology and Cognitive Neuroscience and senior author of the Science paper.

"Recognizing one person from another, in humans, seems to be very dependent on human language capability," says Gabrieli, who is part of MIT's Department of Brain and Cognitive Sciences and also a principal investigator at the McGovern Institute for Brain Research.

Verbal cues

The lead author of the study, MIT graduate student Tyler Perrachione, earned his undergraduate and master's degrees at Northwestern University, where he was involved in studies showing that it is easier to recognize voices of people speaking your own language.

"Everybody's speech is a little bit different, and that's a big cue to who you are," he says. "When you're listening to somebody talk, it's not just properties of their vocal cords or how sound resonates in their oral cavity that distinguishes them, but also the way they pronounce the words."

After Perrachione arrived at MIT, he and Gabrieli decided to try to link this research with evidence showing that phonological processing is impaired in people with dyslexia. They tested subjects in identifying people speaking their native language (English), then Chinese.

When listening to English, the non-dyslexic subjects were correct nearly 70 percent of the time, but performed at only 50 percent when trying to distinguish Chinese speakers. Dyslexic individuals performed at 50 percent for both English and Chinese speakers.

"It's a beautiful study, in the sense that it's so simple," says Shirley Fecteau, a visiting assistant professor at Harvard Medical School and research chair in cognitive neuroplasticity at Laval University in Quebec. "It really seems like a very clear effect on voice recognition in people with dyslexia."

The finding suggests that people with dyslexia may have even more trouble following a speaker than they realize, Gabrieli says. This adds to the growing evidence that dyslexia is not simply a visual disorder.

"There was a big shift in the 1980s from understanding dyslexia as a visual problem to understanding it as a language problem," Gabrieli says. "Dyslexia may not be one thing. It may be a variety of ways in which you end up struggling to learn to read. But the single best understood one is a weakness in the processing of language sounds."

Friend versus foe

Recognizing other members of one's species by their voices is critical for humans and other social animals. "You want to know who is a friend and who is a foe, you want to know who your partner is," Perrachione says. "If you're cooperating with someone for food, you want to know who that person is."

However, it appears that humans and animals perform that task in different ways. Animals can identify other members of their own species by the sounds they make, but that ability is innate and based on the sounds themselves, rather than the meaning of those sounds.

"We notice individual differences in this learned feature of our communication, which is the words that we use, and that's what really distinguishes human communication from animal communication," Perrachione says.

The researchers believe their work may also offer insight into the performance of computerized voice-recognition systems. Voice-recognition programs with access to dictionary meanings of words might do a better job of understanding different speakers than systems that only identify sounds, Perrachione says.

The researchers are now using functional magnetic resonance imaging (fMRI) to determine which parts of the brain are most active in dyslexics and non-dyslexics as they try to identify voices.

Source: MIT News
Author: Anne Trafton

Wednesday, July 13, 2011

The Future of Translation and Interpretation

Interpretation enables people who speak different languages to understand each other. An interpreter translates spoken words from one language to another, while a translator works with written text. As the world has become more diverse and globalized, the need for translators, interpreters, and related services has risen. Fortunately, the way professionals deliver these services is constantly evolving.

Onsite interpreting is delivered in a number of ways, one of which involves the interpreter translating after a live speaker pauses. The translation proceeds gradually, with the speaker taking breaks during which the interpreter renders what was said for an audience or group. Consecutive interpretation is more effective in certain contexts, though it is often difficult to determine which interpretation method will be best for a given situation. Consecutive interpreters must have the memory skills to accurately summarize portions of a speech after they've been uttered. While consecutive interpretation does not require verbatim translation, it calls for an ability to capture the speakers' most significant messages and ideas in the target language.

One might say that simultaneous interpretation skills are even harder to develop and deploy. Simultaneous interpreters often train by attempting live translation of a TV or radio show. At an event, interpreters work inside a booth with a basic mixer they can control, including an input channel, output channel, volume control and mute button; chairs, microphones and some kind of cooling system are also provided. The best simultaneous interpreters confer with speakers prior to their presentations, and on some occasions have advance access to the document from which the speaker will read. Speakers who are being interpreted try to build pauses into the delivery of their speeches to facilitate translation. When the speaker's words or meaning are not immediately clear, the interpreter has to keep the translation moving forward, not fixating on any particular word or phrase, so that the speech as a whole remains intelligible to those listening.

Telephone interpretation is another form of simultaneous interpretation, employed in an array of situations. Health care providers, government agencies and law enforcement are common users, and it is increasingly adopted by corporations whose customers span broad markets where multiple languages are spoken. Telephone interpretation using Video Remote Interpreting (VRI) or Video Relay Service (VRS) technology is an option suited to the deaf, hard-of-hearing or speech-impaired. Because the parties communicate remotely, interpretation via telephone is the segment of the translation industry with the most room for growth, and the one where demand is anticipated to expand most.

There are 6,909 languages spoken in the world today. While English is being adopted as a common tongue, many people worldwide do not use or do not know English, relying on another language for conferences, speeches and other communications instead. The more obscure a language, the more likely a live or phone interpretation service will be needed. As countries across non-western areas of the world -- where languages other than English and other culturally dominant languages are spoken -- emerge economically, demand for translation services should change and expand in interesting ways.