Tag Archives: Artificial Intelligence

Voice Recognition – Update from the Economist

Excellent comment on the state of the art of voice recognition in the Economist. The entire article is below.

I think it's fair to say, as the article does, that "we're in 1994 for voice." In other words, the internet's core technology was in place in 1994, yet no one really had a clue what it would ultimately mean to society; voice is at that same point today.

My guess is that it will be a game-changer of the first order.

ECHO, SIRI, CORTANA – the beginning of a new era!

Just as the GUI, the mouse, and WINDOWS allowed computers to go mainstream, my instinct is that removing the keyboard as a requirement will take the computer from a daily tool to a second-by-second tool. The Apple Watch, which looks rather benign right now, could easily become the central means of communication. And the hard-to-use keyboard on the iPhone will increasingly become a white elephant – rarely used and quaint.

==================

CREDIT: http://www.economist.com/technology-quarterly/2017-05-01/language

TECHNOLOGY QUARTERLY
FINDING A VOICE

Language: Finding a voice
Computers have got much better at translation, voice recognition and speech synthesis, says Lane Greene. But they still don’t understand the meaning of language.

“I’M SORRY, Dave. I’m afraid I can’t do that.” With chilling calm, HAL 9000, the on-board computer in “2001: A Space Odyssey”, refuses to open the doors to Dave Bowman, an astronaut who had ventured outside the ship. HAL’s decision to turn on his human companion reflected a wave of fear about intelligent computers.

When the film came out in 1968, computers that could have proper conversations with humans seemed nearly as far away as manned flight to Jupiter. Since then, humankind has progressed quite a lot farther with building machines that it can talk to, and that can respond with something resembling natural speech. Even so, communication remains difficult. If “2001” had been made to reflect the state of today’s language technology, the conversation might have gone something like this: “Open the pod bay doors, Hal.” “I’m sorry, Dave. I didn’t understand the question.” “Open the pod bay doors, Hal.” “I have a list of eBay results about pod doors, Dave.”

Creative and truly conversational computers able to handle the unexpected are still far off. Artificial-intelligence (AI) researchers can only laugh when asked about the prospect of an intelligent HAL, Terminator or Rosie (the sassy robot housekeeper in “The Jetsons”). Yet although language technologies are nowhere near ready to replace human beings, except in a few highly routine tasks, they are at last about to become good enough to be taken seriously. They can help people spend more time doing interesting things that only humans can do. After six decades of work, much of it with disappointing outcomes, the past few years have produced results much closer to what early pioneers had hoped for.

Speech recognition has made remarkable advances. Machine translation, too, has gone from terrible to usable for getting the gist of a text, and may soon be good enough to require only modest editing by humans. Computerised personal assistants, such as Apple’s Siri, Amazon’s Alexa, Google Now and Microsoft’s Cortana, can now take a wide variety of questions, structured in many different ways, and return accurate and useful answers in a natural-sounding voice. Alexa can even respond to a request to “tell me a joke”, but only by calling upon a database of corny quips. Computers lack a sense of humour.

When Apple introduced Siri in 2011 it was frustrating to use, so many people gave up. Only around a third of smartphone owners use their personal assistants regularly, even though 95% have tried them at some point, according to Creative Strategies, a consultancy. Many of those discouraged users may not realise how much the assistants have improved.

In 1966 John Pierce was working at Bell Labs, the research arm of America’s telephone monopoly. Having overseen the team that had built the first transistor and the first communications satellite, he enjoyed a sterling reputation, so he was asked to take charge of a report on the state of automatic language processing for the National Academy of Sciences. In the period leading up to this, scholars had been promising automatic translation between languages within a few years.

But the report was scathing. Reviewing almost a decade of work on machine translation and automatic speech recognition, it concluded that the time had come to spend money “hard-headedly toward important, realistic and relatively short-range goals”—another way of saying that language-technology research had overpromised and underdelivered. In 1969 Pierce wrote that both the funders and eager researchers had often fooled themselves, and that “no simple, clear, sure knowledge is gained.” After that, America’s government largely closed the money tap, and research on language technology went into hibernation for two decades.

The story of how it emerged from that hibernation is both salutary and surprisingly workaday, says Mark Liberman. As professor of linguistics at the University of Pennsylvania and head of the Linguistic Data Consortium, a huge trove of texts and recordings of human language, he knows a thing or two about the history of language technology. In the bad old days researchers kept their methods in the dark and described their results in ways that were hard to evaluate. But beginning in the 1980s, Charles Wayne, then at America’s Defence Advanced Research Projects Agency, encouraged them to try another approach: the “common task”.


Step by step
Researchers would agree on a common set of practices, whether they were trying to teach computers speech recognition, speaker identification, sentiment analysis of texts, grammatical breakdown, language identification, handwriting recognition or anything else. They would set out the metrics they were aiming to improve on, share the data sets used to train their software and allow their results to be tested by neutral outsiders. That made the process far more transparent. Funding started up again and language technologies began to improve, though very slowly.

Many early approaches to language technology—and particularly translation—got stuck in a conceptual cul-de-sac: the rules-based approach. In translation, this meant trying to write rules to analyse the text of a sentence in the language of origin, breaking it down into a sort of abstract “interlanguage” and rebuilding it according to the rules of the target language. These approaches showed early promise. But language is riddled with ambiguities and exceptions, so such systems were hugely complicated and easily broke down when tested on sentences beyond the simple set they had been designed for.

Nearly all language technologies began to get a lot better with the application of statistical methods, often called a “brute force” approach. This relies on software scouring vast amounts of data, looking for patterns and learning from precedent. For example, in parsing language (breaking it down into its grammatical components), the software learns from large bodies of text that have already been parsed by humans. It uses what it has learned to make its best guess about a previously unseen text. In machine translation, the software scans millions of words already translated by humans, again looking for patterns. In speech recognition, the software learns from a body of recordings and the transcriptions made by humans.

Thanks to the growing power of processors, falling prices for data storage and, most crucially, the explosion in available data, this approach eventually bore fruit. Mathematical techniques that had been known for decades came into their own, and big companies with access to enormous amounts of data were poised to benefit. People who had been put off by the hilariously inappropriate translations offered by online tools like BabelFish began to have more faith in Google Translate. Apple persuaded millions of iPhone users to talk not only on their phones but to them.

The final advance, which began only about five years ago, came with the advent of deep learning through digital neural networks (DNNs). These are often touted as having qualities similar to those of the human brain: “neurons” are connected in software, and connections can become stronger or weaker in the process of learning.

But Nils Lenke, head of research for Nuance, a language-technology company, explains matter-of-factly that “DNNs are just another kind of mathematical model,” the basis of which had been well understood for decades. What changed was the hardware being used. Almost by chance, DNN researchers discovered that the graphical processing units (GPUs) used to render graphics fluidly in applications like video games were also brilliant at handling neural networks. In computer graphics, basic small shapes move according to fairly simple rules, but there are lots of shapes and many rules, requiring vast numbers of simple calculations. The same GPUs are used to fine-tune the weights assigned to “neurons” in DNNs as they scour data to learn. The technique has already produced big leaps in quality for all kinds of deep learning, including deciphering handwriting, recognising faces and classifying images. Now they are helping to improve all manner of language technologies, often bringing enhancements of up to 30%. That has shifted language technology from usable at a pinch to really rather good. But so far no one has quite worked out what will move it on from merely good to reliably great.

Speech recognition: I hear you
Computers have made huge strides in understanding human speech

WHEN a person speaks, air is forced out through the lungs, making the vocal cords vibrate, which sends out characteristic wave patterns through the air. The features of the sounds depend on the arrangement of the vocal organs, especially the tongue and the lips, and the characteristic nature of the sounds comes from peaks of energy in certain frequencies. Vowels have such peaks, called “formants”, two of which are usually enough to differentiate one vowel from another. For example, the vowel in the English word “fleece” has its first two formants at around 300Hz and 3,000Hz. Consonants have their own characteristic features.
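
As a toy illustration of the formant idea (not how production recognisers work), summing two sine waves at roughly those frequencies produces a crude, vowel-like buzz. The 300Hz and 3,000Hz figures come from the paragraph above; the amplitudes, duration and sample rate are arbitrary choices for the demonstration.

```python
# Crude illustration of formants: a vowel-like tone built from two sine waves
# at roughly the first two formant frequencies of "fleece" (300 Hz and 3,000 Hz).
# Real vowels are resonances shaped by the vocal tract, not pure tones.
import math
import struct
import wave

SAMPLE_RATE = 16000                      # samples per second
DURATION = 0.5                           # seconds of audio
FORMANTS = [(300, 1.0), (3000, 0.4)]     # (frequency in Hz, relative amplitude)

samples = []
for n in range(int(SAMPLE_RATE * DURATION)):
    t = n / SAMPLE_RATE
    value = sum(a * math.sin(2 * math.pi * f * t) for f, a in FORMANTS)
    samples.append(int(value / 1.4 * 32767 * 0.5))   # scale into 16-bit range

with wave.open("vowel_sketch.wav", "w") as out:
    out.setnchannels(1)
    out.setsampwidth(2)                  # 16-bit samples
    out.setframerate(SAMPLE_RATE)
    out.writeframes(b"".join(struct.pack("<h", s) for s in samples))
```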

In principle, it should be easy to turn this stream of sound into transcribed speech. As in other language technologies, machines that recognise speech are trained on data gathered earlier. In this instance, the training data are sound recordings transcribed to text by humans, so that the software has both a sound and a text input. All it has to do is match the two. It gets better and better at working out how to transcribe a given chunk of sound in the same way as humans did in the training data. The traditional matching approach was a statistical technique called a hidden Markov model (HMM), making guesses based on what was done before. More recently speech recognition has also gained from deep learning.
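
A minimal sketch of the HMM idea, with invented probabilities: the hidden states stand for phonemes, each sound "frame" scores against every phoneme, and the Viterbi algorithm picks the most likely phoneme sequence given those scores and the transition probabilities.

```python
# Toy Viterbi decoding over a hidden Markov model: states are phonemes,
# observations are acoustic "frames". All probabilities here are invented;
# real systems learn them from transcribed recordings.
states = ["s", "p", "i", "n"]

# P(next phoneme | current phoneme): heavily favour moving left to right.
trans = {
    "s": {"s": 0.5, "p": 0.5},
    "p": {"p": 0.5, "i": 0.5},
    "i": {"i": 0.5, "n": 0.5},
    "n": {"n": 1.0},
}
start = {"s": 1.0}

# P(frame | phoneme): each frame mostly "sounds like" one phoneme.
frames = [
    {"s": 0.8, "p": 0.1, "i": 0.05, "n": 0.05},
    {"s": 0.1, "p": 0.7, "i": 0.1, "n": 0.1},
    {"s": 0.05, "p": 0.1, "i": 0.8, "n": 0.05},
    {"s": 0.05, "p": 0.05, "i": 0.1, "n": 0.8},
]

def viterbi(frames):
    # best[t][state] = (probability of the best path ending here, previous state)
    best = [{s: (start.get(s, 0.0) * frames[0][s], None) for s in states}]
    for frame in frames[1:]:
        row = {}
        for s in states:
            prob, prev = max(
                (best[-1][p][0] * trans[p].get(s, 0.0) * frame[s], p)
                for p in states
            )
            row[s] = (prob, prev)
        best.append(row)
    # Trace back from the most probable final state.
    state = max(states, key=lambda s: best[-1][s][0])
    path = [state]
    for row in reversed(best[1:]):
        state = row[state][1]
        path.append(state)
    return list(reversed(path))

print(viterbi(frames))   # -> ['s', 'p', 'i', 'n']
```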

English has about 44 “phonemes”, the units that make up the sound system of a language. P and b are different phonemes, because they distinguish words like pat and bat. But in English p with a puff of air, as in “party”, and p without a puff of air, as in “spin”, are not different phonemes, though they are in other languages. If a computer hears the phonemes s, p, i and n back to back, it should be able to recognise the word “spin”.

But the nature of live speech makes this difficult for machines. Sounds are not pronounced individually, one phoneme after the other; they mostly come in a constant stream, and finding the boundaries is not easy. Phonemes also differ according to the context. (Compare the l sound at the beginning of “light” with that at the end of “full”.)

Speakers differ in timbre and pitch of voice, and in accent. Conversation is far less clear than careful dictation. People stop and restart much more often than they realise.
All the same, technology has gradually mitigated many of these problems, so error rates in speech-recognition software have fallen steadily over the years—and then sharply with the introduction of deep learning. Microphones have got better and cheaper. With ubiquitous wireless internet, speech recordings can easily be beamed to computers in the cloud for analysis, and even smartphones now often have computers powerful enough to carry out this task.

Bear arms or bare arms?
Perhaps the most important feature of a speech-recognition system is its set of expectations about what someone is likely to say, or its “language model”. Like other training data, the language models are based on large amounts of real human speech, transcribed into text. When a speech-recognition system “hears” a stream of sound, it makes a number of guesses about what has been said, then calculates the odds that it has found the right one, based on the kinds of words, phrases and clauses it has seen earlier in the training text.

At the level of phonemes, each language has strings that are permitted (in English, a word may begin with str-, for example) or banned (an English word cannot start with tsr-). The same goes for words. Some strings of words are more common than others. For example, “the” is far more likely to be followed by a noun or an adjective than by a verb or an adverb. In making guesses about homophones, the computer will have remembered that in its training data the phrase “the right to bear arms” came up much more often than “the right to bare arms”, and will thus have made the right guess.
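
That language-model guess can be illustrated with a toy counting exercise; the tiny corpus below is invented, and a real model would score whole word sequences against billions of words of text.

```python
# Toy language-model disambiguation of homophones: count how often each
# candidate phrase appears in a (made-up) training corpus and prefer the
# more frequent one.
training_text = """
the right to bear arms is debated
citizens may bear arms under the law
she chose a dress that would bare arms and shoulders
the right to bear arms appears in the constitution
""".split()

def phrase_count(words, phrase):
    n = len(phrase)
    return sum(1 for i in range(len(words) - n + 1) if words[i:i + n] == phrase)

candidates = [["bear", "arms"], ["bare", "arms"]]
scores = {" ".join(c): phrase_count(training_text, c) for c in candidates}
print(scores)                        # {'bear arms': 3, 'bare arms': 1}
print(max(scores, key=scores.get))   # 'bear arms'
```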

Training on a specific speaker greatly cuts down on the software’s guesswork. Just a few minutes of reading training text into software like Dragon Dictate, made by Nuance, produces a big jump in accuracy. For those willing to train the software for longer, the improvement continues to something close to 99% accuracy (meaning that of each hundred words of text, not more than one is wrongly added, omitted or changed). A good microphone and a quiet room help.

Advance knowledge of what kinds of things the speaker might be talking about also increases accuracy. Words like “phlebitis” and “gastrointestinal” are not common in general discourse, and uncommon words are ranked lower in the probability tables the software uses to guess what it has heard. But these words are common in medicine, so creating software trained to look out for such words considerably improves the result. This can be done by feeding the system a large number of documents written by the speaker whose voice is to be recognised; common words and phrases can be extracted to improve the system’s guesses.

As with all other areas of language technology, deep learning has sharply brought down error rates. In October Microsoft announced that its latest speech-recognition system had achieved parity with human transcribers in recognising the speech in the Switchboard Corpus, a collection of thousands of recorded conversations in which participants are talking with a stranger about a randomly chosen subject.

Error rates on the Switchboard Corpus are a widely used benchmark, so claims of quality improvements can be easily compared. Fifteen years ago quality had stalled, with word-error rates of 20-30%. Microsoft’s latest system, which has six neural networks running in parallel, has reached 5.9%, the same as a human transcriber’s. Xuedong Huang, Microsoft’s chief speech scientist, says that he expected it to take two or three years to reach parity with humans. It got there in less than one.
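
The word-error rate used in that benchmark is simply the edit distance between the machine's transcript and a human reference, divided by the length of the reference. A minimal sketch:

```python
# Word-error rate: (substitutions + insertions + deletions) / reference length,
# computed with the standard Levenshtein dynamic programme over words.
def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dist[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution or match
    return dist[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("open the pod bay doors hal",
                      "open the pod bay does hal"))   # 1 error in 6 words = 0.1666...
```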

The improvements in the lab are now being applied to products in the real world. More and more cars are being fitted with voice-activated controls of various kinds; the vocabulary involved is limited (there are only so many things you might want to say to your car), which ensures high accuracy. Microphones—or often arrays of microphones with narrow fields of pick-up—are getting better at identifying the relevant speaker among a group.

Some problems remain. Children and elderly speakers, as well as people moving around in a room, are harder to understand. Background noise remains a big concern; if it is different from that in the training data, the software finds it harder to generalise from what it has learned. So Microsoft, for example, offers businesses a product called CRIS that lets users customise speech-recognition systems for the background noise, special vocabulary and other idiosyncrasies they will encounter in that particular environment. That could be useful anywhere from a noisy factory floor to a care home for the elderly.

But for a computer to know what a human has said is only a beginning. Proper interaction between the two, of the kind that comes up in almost every science-fiction story, calls for machines that can speak back.

Hasta la vista, robot voice
Machines are starting to sound more like humans
“I’LL be back.” “Hasta la vista, baby.” Arnold Schwarzenegger’s Teutonic drone in the “Terminator” films is world-famous. But in this instance film-makers looking into the future were overly pessimistic. Some applications do still feature a monotonous “robot voice”, but that is changing fast.

[Audio examples: a basic and an advanced sample from the OS X speech synthesiser, and a sample from Amazon’s “Polly” synthesiser.]

Creating speech is roughly the inverse of understanding it. Again, it requires a basic model of the structure of speech. What are the sounds in a language, and how do they combine? What words does it have, and how do they combine in sentences? These are well-understood questions, and most systems can now generate sound waves that are a fair approximation of human speech, at least in short bursts.
Heteronyms require special care. How should a computer pronounce a word like “lead”, which can be a present-tense verb or a noun for a heavy metal, pronounced quite differently? Once again a language model can make accurate guesses: “Lead us not into temptation” can be parsed for its syntax, and once the software has worked out that the first word is almost certainly a verb, it can cause it to be pronounced to rhyme with “reed”, not “red”.
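
A toy version of that disambiguation step follows; the single part-of-speech rule below is invented for illustration, whereas a real system would use a statistical tagger or parser trained on annotated text.

```python
# Toy heteronym disambiguation: choose a pronunciation for "lead" from a
# crude part-of-speech guess. The single rule is invented purely to
# illustrate the idea described above.
PRONUNCIATIONS = {"verb": "rhymes with 'reed'",
                  "noun": "rhymes with 'red'"}

def guess_pos(words, index):
    prev = words[index - 1].lower() if index > 0 else ""
    # If "lead" follows a determiner or adjective-ish word, call it a noun;
    # otherwise (e.g. at the start of an imperative), call it a verb.
    if prev in {"the", "a", "an", "of", "heavy", "molten"}:
        return "noun"
    return "verb"

for sentence in ["Lead us not into temptation", "The lead pipe was heavy"]:
    words = sentence.split()
    i = [w.lower() for w in words].index("lead")
    print(sentence, "->", PRONUNCIATIONS[guess_pos(words, i)])
```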

Traditionally, text-to-speech models have been “concatenative”, consisting of very short segments recorded by a human and then strung together as in the acoustic model described above. More recently, “parametric” models have been generating raw audio without the need to record a human voice, which makes these systems more flexible but less natural-sounding.

DeepMind, an artificial-intelligence company bought by Google in 2014, has announced a new way of synthesising speech, again using deep neural networks. The network is trained on recordings of people talking, and on the texts that match what they say. Given a text to reproduce as speech, it churns out a far more fluent and natural-sounding voice than the best concatenative and parametric approaches.

The last step in generating speech is giving it prosody—generally, the modulation of speed, pitch and volume to convey an extra (and critical) channel of meaning. In English, “a German teacher”, with the stress on “teacher”, can teach anything but must be German. But “a German teacher” with the emphasis on “German” is usually a teacher of German (and need not be German). Words like prepositions and conjunctions are not usually stressed. Getting machines to put the stresses in the correct places is about 50% solved, says Mark Liberman of the University of Pennsylvania.

Many applications do not require perfect prosody. A satellite-navigation system giving instructions on where to turn uses just a small number of sentence patterns, and prosody is not important. The same goes for most single-sentence responses given by a virtual assistant on a smartphone.

But prosody matters when someone is telling a story. Pitch, speed and volume can be used to pass quickly over things that are already known, or to build interest and tension for new information. Myriad tiny clues communicate the speaker’s attitude to his subject. The phrase “a German teacher”, with stress on the word “German”, may, in the context of a story, not be a teacher of German, but a teacher being explicitly contrasted with a teacher who happens to be French or British.

Text-to-speech engines are not much good at using context to provide such accentuation, and where they do, it rarely extends beyond a single sentence. When Alexa, the assistant in Amazon’s Echo device, reads a news story, her prosody is jarringly un-humanlike. Talking computers have yet to learn how to make humans want to listen.

Machine translation: Beyond Babel
Computer translations have got strikingly better, but still need human input
IN “STAR TREK” it was a hand-held Universal Translator; in “The Hitchhiker’s Guide to the Galaxy” it was the Babel Fish popped conveniently into the ear. In science fiction, the meeting of distant civilisations generally requires some kind of device to allow them to talk. High-quality automated translation seems even more magical than other kinds of language technology because many humans struggle to speak more than one language, let alone translate from one to another.

The idea has been around since the 1950s, and computerised translation is still known by the quaint moniker “machine translation” (MT). It goes back to the early days of the cold war, when American scientists were trying to get computers to translate from Russian. They were inspired by the code-breaking successes of the second world war, which had led to the development of computers in the first place. To them, a scramble of Cyrillic letters on a page of Russian text was just a coded version of English, and turning it into English was just a question of breaking the code.

Scientists at IBM and Georgetown University were among those who thought that the problem would be cracked quickly. Having programmed just six rules and a vocabulary of 250 words into a computer, they gave a demonstration in New York on January 7th 1954 and proudly produced 60 automated translations, including that of “Mi pyeryedayem mislyi posryedstvom ryechyi,” which came out correctly as “We transmit thoughts by means of speech.” Leon Dostert of Georgetown, the lead scientist, breezily predicted that fully realised MT would be “an accomplished fact” in three to five years.

Instead, after more than a decade of work, the report in 1966 by a committee chaired by John Pierce, mentioned in the introduction to this report, recorded bitter disappointment with the results and urged researchers to focus on narrow, achievable goals such as automated dictionaries. Government-sponsored work on MT went into near-hibernation for two decades. What little was done was carried out by private companies. The most notable of them was Systran, which provided rough translations, mostly to America’s armed forces.
La plume de mon ordinateur
The scientists got bogged down by their rules-based approach. Having done relatively well with their six-rule system, they came to believe that if they programmed in more rules, the system would become more sophisticated and subtle. Instead, it became more likely to produce nonsense. Adding extra rules, in the modern parlance of software developers, did not “scale”.

Besides the difficulty of programming grammar’s many rules and exceptions, some early observers noted a conceptual problem. The meaning of a word often depends not just on its dictionary definition and the grammatical context but the meaning of the rest of the sentence. Yehoshua Bar-Hillel, an Israeli MT pioneer, realised that “the pen is in the box” and “the box is in the pen” would require different translations for “pen”: any pen big enough to hold a box would have to be an animal enclosure, not a writing instrument.

How could machines be taught enough rules to make this kind of distinction? They would have to be provided with some knowledge of the real world, a task far beyond the machines or their programmers at the time.

Two decades later, IBM stumbled on an approach that would revive optimism about MT. Its Candide system was the first serious attempt to use statistical probabilities rather than rules devised by humans for translation. Statistical, “phrase-based” machine translation, like speech recognition, needed training data to learn from. Candide used Canada’s Hansard, which publishes that country’s parliamentary debates in French and English, providing a huge amount of data for that time. The phrase-based approach would ensure that the translation of a word would take the surrounding words properly into account.
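
The flavour of that statistical approach can be sketched with a stripped-down, IBM-Model-1-style word-translation estimator; the three-sentence parallel corpus below is invented, and Candide itself was far more elaborate.

```python
# Stripped-down, IBM-Model-1-style word-translation estimation. Given a toy
# parallel corpus, expectation-maximisation over word co-occurrences learns
# P(foreign word | English word). The corpus is invented for illustration.
from collections import defaultdict

corpus = [
    ("the house".split(),      "la maison".split()),
    ("the car".split(),        "la voiture".split()),
    ("the blue house".split(), "la maison bleue".split()),
]

english = {e for en, _ in corpus for e in en}
foreign = {f for _, fr in corpus for f in fr}

# Start with uniform translation probabilities t[e][f] = P(f | e).
t = {e: {f: 1.0 / len(foreign) for f in foreign} for e in english}

for _ in range(10):                       # a few EM iterations
    counts = defaultdict(lambda: defaultdict(float))
    totals = defaultdict(float)
    for en, fr in corpus:
        for f in fr:
            norm = sum(t[e][f] for e in en)
            for e in en:                  # expected alignment counts
                counts[e][f] += t[e][f] / norm
                totals[e] += t[e][f] / norm
    for e in english:                     # re-estimate probabilities
        for f in foreign:
            t[e][f] = counts[e][f] / totals[e]

for e in sorted(english):                 # most likely translation of each word
    best = max(t[e], key=t[e].get)
    print(f"{e:6s} -> {best}  (p={t[e][best]:.2f})")
```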

But quality did not take a leap until Google, which had set itself the goal of indexing the entire internet, decided to use those data to train its translation engines; in 2007 it switched from a rules-based engine (provided by Systran) to its own statistics-based system. To build it, Google trawled about a trillion web pages, looking for any text that seemed to be a translation of another—for example, pages designed identically but with different words, and perhaps a hint such as the address of one page ending in /en and the other ending in /fr. According to Macduff Hughes, chief engineer on Google Translate, a simple approach using vast amounts of data seemed more promising than a clever one with fewer data.

Training on parallel texts (which linguists call corpora, the plural of corpus) creates a “translation model” that generates not one but a series of possible translations in the target language. The next step is running these possibilities through a monolingual language model in the target language. This is, in effect, a set of expectations about what a well-formed and typical sentence in the target language is likely to be. Single-language models are not too hard to build. (Parallel human-translated corpora are hard to come by; large amounts of monolingual training data are not.) As with the translation model, the language model uses a brute-force statistical approach to learn from the training data, then ranks the outputs from the translation model in order of plausibility.
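
In code, that second step amounts to scoring each candidate with the monolingual model and sorting; both the candidate list and the bigram probabilities below are invented for illustration.

```python
# Re-ranking translation candidates with a monolingual bigram language model.
# A real system would generate candidates from a translation model and learn
# the language model from billions of words of target-language text.
bigram_logprob = {            # log P(second word | first word), made up
    ("the", "house"): -1.0, ("the", "home"): -2.5,
    ("house", "is"): -1.2,  ("home", "is"): -1.4,
    ("is", "blue"): -1.1,
}
UNSEEN = -8.0                 # penalty for bigrams the model has never seen

def lm_score(sentence):
    words = sentence.split()
    return sum(bigram_logprob.get((a, b), UNSEEN)
               for a, b in zip(words, words[1:]))

candidates = ["the house is blue", "the home is blue", "blue the house is"]
for sent in sorted(candidates, key=lm_score, reverse=True):
    print(f"{lm_score(sent):7.1f}  {sent}")
```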

Statistical machine translation rekindled optimism in the field. Internet users quickly discovered that Google Translate was far better than the rules-based online engines they had used before, such as BabelFish. Such systems still make mistakes—sometimes minor, sometimes hilarious, sometimes so serious or so many as to make nonsense of the result. And language pairs like Chinese-English, which are unrelated and structurally quite different, make accurate translation harder than pairs of related languages like English and German. But more often than not, Google Translate and its free online competitors, such as Microsoft’s Bing Translator, offer a usable approximation.

Such systems are set to get better, again with the help of deep learning from digital neural networks. The Association for Computational Linguistics has been holding workshops on MT every summer since 2006. One of the events is a competition between MT engines turned loose on a collection of news text. In August 2016, in Berlin, neural-net-based MT systems were the top performers (out of 102), a first.
Now Google has released its own neural-net-based engine for eight language pairs, closing much of the quality gap between its old system and a human translator.
This is especially true for closely related languages (like the big European ones) with lots of available training data. The results are still distinctly imperfect, but far smoother and more accurate than before. Translations between English and (say) Chinese and Korean are not as good yet, but the neural system has brought a clear improvement here too.

The Coca-Cola factor

Neural-network-based translation actually uses two networks. One is an encoder. Each word of an input sentence is converted into a multidimensional vector (a series of numerical values), and the encoding of each new word takes into account what has happened earlier in the sentence. Marcello Federico of Italy’s Fondazione Bruno Kessler, a private research organisation, uses an intriguing analogy to compare neural-net translation with the phrase-based kind. The latter, he says, is like describing Coca-Cola in terms of sugar, water, caffeine and other ingredients. By contrast, the former encodes features such as liquidness, darkness, sweetness and fizziness.
Once the source sentence is encoded, a decoder network generates a word-for-word translation, once again taking account of the immediately preceding word. This can cause problems when the meaning of words such as pronouns depends on words mentioned much earlier in a long sentence. This problem is mitigated by an “attention model”, which helps maintain focus on other words in the sentence outside the immediate context.
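
A minimal numerical sketch of that attention step, with invented vectors: the decoder's current state is compared with each encoder state, the similarities become weights, and the weighted average of encoder states is the context used to pick the next word.

```python
# Minimal dot-product attention over encoder states, with made-up numbers.
# Each source word has been encoded as a small vector; the decoder's current
# state is compared with every encoder state, the similarities are turned
# into weights with a softmax, and the weighted average becomes the context.
import numpy as np

encoder_states = np.array([   # one (invented) 4-dimensional vector per source word
    [0.9, 0.1, 0.0, 0.2],     # "the"
    [0.1, 0.8, 0.3, 0.0],     # "pod"
    [0.0, 0.7, 0.9, 0.1],     # "bay"
    [0.2, 0.1, 0.8, 0.9],     # "doors"
])
decoder_state = np.array([0.1, 0.2, 0.9, 0.8])   # decoder about to emit "doors"

scores = encoder_states @ decoder_state          # similarity to each source word
weights = np.exp(scores) / np.exp(scores).sum()  # softmax -> attention weights
context = weights @ encoder_states               # weighted average of encoder states

for word, w in zip(["the", "pod", "bay", "doors"], weights):
    print(f"{word:6s} {w:.2f}")
print("context vector:", np.round(context, 2))
```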

Neural-network translation requires heavy-duty computing power, both for the original training of the system and in use. The heart of such a system can be the GPUs that made the deep-learning revolution possible, or specialised hardware like Google’s Tensor Processing Units (TPUs). Smaller translation companies and researchers usually rent this kind of processing power in the cloud. But the data sets used in neural-network training do not need to be as extensive as those for phrase-based systems, which should give smaller outfits a chance to compete with giants like Google.
Fully automated, high-quality machine translation is still a long way off. For now, several problems remain. All current machine translations proceed sentence by sentence. If the translation of such a sentence depends on the meaning of earlier ones, automated systems will make mistakes. Long sentences, despite tricks like the attention model, can be hard to translate. And neural-net-based systems in particular struggle with rare words.

Training data, too, are scarce for many language pairs. They are plentiful between European languages, since the European Union’s institutions churn out vast amounts of material translated by humans between the EU’s 24 official languages. But for smaller languages such resources are thin on the ground. For example, there are few Greek-Urdu parallel texts available on which to train a translation engine. So a system that claims to offer such translation is in fact usually running it through a bridging language, nearly always English. That involves two translations rather than one, multiplying the chance of errors.
Even if machine translation is not yet perfect, technology can already help humans translate much more quickly and accurately. “Translation memories”, software that stores already translated words and segments, first came into use as early as the 1980s. For someone who frequently translates the same kind of material (such as instruction manuals), they serve up the bits that have already been translated, saving lots of duplication and time.

A similar trick is to train MT engines on text dealing with a narrow real-world domain, such as medicine or the law. As software techniques are refined and computers get faster, training becomes easier and quicker. Free software such as Moses, developed with the support of the EU and used by some of its in-house translators, can be trained by anyone with parallel corpora to hand. A specialist in medical translation, for instance, can train the system on medical translations only, which makes them far more accurate.
At the other end of linguistic sophistication, an MT engine can be optimised for the shorter and simpler language people use in speech to spew out rough but near-instantaneous speech-to-speech translations. This is what Microsoft’s Skype Translator does. Its quality is improved by being trained on speech (things like film subtitles and common spoken phrases) rather than the kind of parallel text produced by the European Parliament.

Translation management has also benefited from innovation, with clever software allowing companies quickly to combine the best of MT, translation memory, customisation by the individual translator and so on. Translation-management software aims to cut out the agencies that have been acting as middlemen between clients and an army of freelance translators. Jack Welde, the founder of Smartling, an industry favourite, says that in future translation customers will choose how much human intervention is needed for a translation. A quick automated one will do for low-stakes content with a short life, but the most important content will still require a fully hand-crafted and edited version. Noting that MT has both determined boosters and committed detractors, Mr Welde says he is neither: “If you take a dogmatic stance, you’re not optimised for the needs of the customer.”

Translation software will go on getting better. Not only will engineers keep tweaking their statistical models and neural networks, but users themselves will make improvements to their own systems. For example, a small but much-admired startup, Lilt, uses phrase-based MT as the basis for a translation, but an easy-to-use interface allows the translator to correct and improve the MT system’s output. Every time this is done, the corrections are fed back into the translation engine, which learns and improves in real time. Users can build several different memories—a medical one, a financial one and so on—which will help with future translations in that specialist field.

TAUS, an industry group, recently issued a report on the state of the translation industry saying that “in the past few years the translation industry has burst with new tools, platforms and solutions.” Last year Jaap van der Meer, TAUS’s founder and director, wrote a provocative blogpost entitled “The Future Does Not Need Translators”, arguing that the quality of MT will keep improving, and that for many applications less-than-perfect translation will be good enough.

The “translator” of the future is likely to be more like a quality-control expert, deciding which texts need the most attention to detail and editing the output of MT software. That may be necessary because computers, no matter how sophisticated they have become, cannot yet truly grasp what a text means.

Meaning and machine intelligence: What are you talking about?
Machines cannot conduct proper conversations with humans because they do not understand the world

IN “BLACK MIRROR”, a British science-fiction satire series set in a dystopian near future, a young woman loses her boyfriend in a car accident. A friend offers to help her deal with her grief. The dead man was a keen social-media user, and his archived accounts can be used to recreate his personality. Before long she is messaging with a facsimile, then speaking to one. As the system learns to mimic him ever better, he becomes increasingly real.

This is not quite as bizarre as it sounds. Computers today can already produce an eerie echo of human language if fed with the appropriate material. What they cannot yet do is have true conversations. Truly robust interaction between man and machine would require a broad understanding of the world. In the absence of that, computers are not able to talk about a wide range of topics, follow long conversations or handle surprises.

Machines trained to do a narrow range of tasks, though, can perform surprisingly well. The most obvious examples are the digital assistants created by the technology giants. Users can ask them questions in a variety of natural ways: “What’s the temperature in London?” “How’s the weather outside?” “Is it going to be cold today?” The assistants know a few things about users, such as where they live and who their family are, so they can be personal, too: “How’s my commute looking?” “Text my wife I’ll be home in 15 minutes.”
And they get better with time. Apple’s Siri receives 2bn requests per week, which (after being anonymised) are used for further teaching. For example, Apple says Siri knows every possible way that users ask about a sports score. She also has a delightful answer for children who ask about Father Christmas. Microsoft learned from some of its previous natural-language platforms that about 10% of human interactions were “chitchat”, from “tell me a joke” to “who’s your daddy?”, and used such chat to teach its digital assistant, Cortana.

The writing team for Cortana includes two playwrights, a poet, a screenwriter and a novelist. Google hired writers from Pixar, an animated-film studio, and The Onion, a satirical newspaper, to make its new Google Assistant funnier. No wonder people often thank their digital helpers for a job well done. The assistants’ replies range from “My pleasure, as always” to “You don’t need to thank me.”
Good at grammar

How do natural-language platforms know what people want? They not only recognise the words a person uses, but break down speech for both grammar and meaning. Grammar parsing is relatively advanced; it is the domain of the well-established field of “natural-language processing”. But meaning comes under the heading of “natural-language understanding”, which is far harder.

First, parsing. Most people are not very good at analysing the syntax of sentences, but computers have become quite adept at it, even though most sentences are ambiguous in ways humans are rarely aware of. Take a sign on a public fountain that says, “This is not drinking water.” Humans understand it to mean that the water (“this”) is not a certain kind of water (“drinking water”). But a computer might just as easily parse it to say that “this” (the fountain) is not at present doing something (“drinking water”).

As sentences get longer, the number of grammatically possible but nonsensical options multiplies exponentially. How can a machine parser know which is the right one? It helps for it to know that some combinations of words are more common than others: the phrase “drinking water” is widely used, so parsers trained on large volumes of English will rate those two words as likely to be joined in a noun phrase. And some structures are more common than others: “noun verb noun noun” may be much more common than “noun noun verb noun”. A machine parser can compute the overall probability of all combinations and pick the likeliest.

A “lexicalised” parser might do even better. Take the Groucho Marx joke, “One morning I shot an elephant in my pyjamas. How he got in my pyjamas, I’ll never know.” The first sentence is ambiguous (which makes the joke)—grammatically both “I” and “an elephant” can attach to the prepositional phrase “in my pyjamas”. But a lexicalised parser would recognise that “I [verb phrase] in my pyjamas” is far more common than “elephant in my pyjamas”, and so assign that parse a higher probability.
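
A toy version of that lexicalised preference, with invented counts standing in for statistics gathered from a parsed corpus:

```python
# Toy lexicalised prepositional-phrase attachment. The counts stand in for
# statistics from a large treebank and are invented: they say that people
# are described as being "in pyjamas" far more often than elephants are.
attachment_counts = {
    ("shot", "in", "pyjamas"): 40,      # PP attaches to the verb (the shooter wears them)
    ("elephant", "in", "pyjamas"): 1,   # PP attaches to the noun (the elephant wears them)
}

def preferred_attachment(verb, noun, prep, prep_obj):
    verb_count = attachment_counts.get((verb, prep, prep_obj), 0)
    noun_count = attachment_counts.get((noun, prep, prep_obj), 0)
    return "verb" if verb_count >= noun_count else "noun"

print(preferred_attachment("shot", "elephant", "in", "pyjamas"))   # -> "verb"
```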

But meaning is harder to pin down than syntax. “The boy kicked the ball” and “The ball was kicked by the boy” have the same meaning but a different structure. “Time flies like an arrow” can mean either that time flies in the way that an arrow flies, or that insects called “time flies” are fond of an arrow.

“Who plays Thor in ‘Thor’?” Your correspondent could not remember the beefy Australian who played the eponymous Norse god in the Marvel superhero film. But when he asked his iPhone, Siri came up with an unexpected reply: “I don’t see any movies matching ‘Thor’ playing in Thor, IA, US, today.” Thor, Iowa, with a population of 184, was thousands of miles away, and “Thor”, the film, has been out of cinemas for years. Siri parsed the question perfectly properly, but the reply was absurd, violating the rules of what linguists call pragmatics: the shared knowledge and understanding that people use to make sense of the often messy human language they hear. “Can you reach the salt?” is not a request for information but for salt. Natural-language systems have to be manually programmed to handle such requests as humans expect them, and not literally.

Multiple choice
Shared information is also built up over the course of a conversation, which is why digital assistants can struggle with twists and turns in conversations. Tell an assistant, “I’d like to go to an Italian restaurant with my wife,” and it might suggest a restaurant. But then ask, “is it close to her office?”, and the assistant must grasp the meanings of “it” (the restaurant) and “her” (the wife), which it will find surprisingly tricky. Nuance, the language-technology firm, which provides natural-language platforms to many other companies, is working on a “concierge” that can handle this type of challenge, but it is still a prototype.
Such a concierge must also offer only restaurants that are open. Linking requests to common sense (knowing that no one wants to be sent to a closed restaurant), as well as a knowledge of the real world (knowing which restaurants are closed), is one of the most difficult challenges for language technologies.

Common sense, an old observation goes, is uncommon enough in humans. Programming it into computers is harder still. Fernando Pereira of Google points out why. Automated speech recognition and machine translation have something in common: there are huge stores of data (recordings and transcripts for speech recognition, parallel corpora for translation) that can be used to train machines. But there are no training data for common sense.

Brain scan: Terry Winograd
The Winograd Schema tests computers’ “understanding” of the real world

THE Turing Test was conceived as a way to judge whether true artificial intelligence has been achieved. If a computer can fool humans into thinking it is human, there is no reason, say its fans, to say the machine is not truly intelligent.
Few giants in computing stand with Turing in fame, but one has given his name to a similar challenge: Terry Winograd, a computer scientist at Stanford. In his doctoral dissertation Mr Winograd posed a riddle for computers: “The city councilmen refused the demonstrators a permit because they feared violence. Who feared violence?”

It is a perfect illustration of a well-recognised point: many things that are easy for humans are crushingly difficult for computers. Mr Winograd went into AI research in the 1960s and 1970s and developed an early natural-language program called SHRDLU that could take commands and answer questions about a group of shapes it could manipulate: “Find a block which is taller than the one you are holding and put it into the box.” This work brought a jolt of optimism to the AI crowd, but Mr Winograd later fell out with them, devoting himself not to making machines intelligent but to making them better at helping human beings. (These camps are sharply divided by philosophy and academic pride.) He taught Larry Page at Stanford, and after Mr Page went on to co-found Google, Mr Winograd became a guest researcher at the company, helping to build Gmail.

In 2011 Hector Levesque of the University of Toronto became annoyed by systems that “passed” the Turing Test by joking and avoiding direct answers. He later asked to borrow Mr Winograd’s name and the format of his dissertation’s puzzle to pose a more genuine test of machine “understanding”: the Winograd Schema. The answers to its battery of questions were obvious to humans but would require computers to have some reasoning ability and some knowledge of the real world. The first official Winograd Schema Challenge was held this year, with a $25,000 prize offered by Nuance, the language-software company, for a program that could answer more than 90% of the questions correctly. The best of them got just 58% right.
Though officially retired, Mr Winograd continues writing and researching. One of his students is working on an application for Google Glass, a computer with a display mounted on eyeglasses. The app would help people with autism by reading the facial expressions of conversation partners and giving the wearer information about their emotional state. It would allow him to integrate linguistic and non-linguistic information in a way that people with autism find difficult, as do computers.

Asked to trick some of the latest digital assistants, like Siri and Alexa, he asks them things like “Where can I find a nightclub my Methodist uncle would like?”, which requires knowledge about both nightclubs (which such systems have) and Methodist uncles (which they don’t). When he tried “Where did I leave my glasses?”, one of them came up with a link to a book of that name. None offered the obvious answer: “How would I know?”

Knowledge of the real world is another matter. AI has helped data-rich companies such as America’s West-Coast tech giants organise much of the world’s information into interactive databases such as Google’s Knowledge Graph. Some of the content of that appears in a box to the right of a Google page of search results for a famous figure or thing. It knows that Jacob Bernoulli studied at the University of Basel (as did other people, linked to Bernoulli through this node in the Graph) and wrote “On the Law of Large Numbers” (which it knows is a book).

Organising information this way is not difficult for a company with lots of data and good AI capabilities, but linking information to language is hard. Google touts its assistant’s ability to answer questions like “Who was president when the Rangers won the World Series?” But Mr Pereira concedes that this was the result of explicit training. Another such complex query—“What was the population of London when Samuel Johnson wrote his dictionary?”—would flummox the assistant, even though the Graph knows about things like the historical population of London and the date of Johnson’s dictionary. IBM’s Watson system, which in 2011 beat two human champions at the quiz show “Jeopardy!”, succeeded mainly by calculating huge numbers of potential answers based on key words by probability, not by a human-like understanding of the question.

Making real-world information computable is challenging, but it has inspired some creative approaches. Cortical.io, a Vienna-based startup, took hundreds of Wikipedia articles, cut them into thousands of small snippets of information and ran an “unsupervised” machine-learning algorithm over it that required the computer not to look for anything in particular but to find patterns. These patterns were then represented as a visual “semantic fingerprint” on a grid of 128×128 pixels. Clumps of pixels in similar places represented semantic similarity. This method can be used to disambiguate words with multiple meanings: the fingerprint of “organ” shares features with both “liver” and “piano” (because the word occurs with both in different parts of the training data). This might allow a natural-language system to distinguish between pianos and church organs on one hand, and livers and other internal organs on the other.
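
The comparison itself can be sketched with tiny, made-up fingerprints: represent each word as the set of grid cells it activates and measure similarity by overlap.

```python
# Toy "semantic fingerprint" comparison. Each word is a sparse set of active
# cells on a (much smaller than 128x128) grid; similarity is the overlap.
# The cell numbers are invented, not Cortical.io's actual output.
fingerprints = {
    "organ":  {3, 7, 12, 31, 44, 58, 60},
    "liver":  {3, 7, 31, 59, 61, 77},
    "piano":  {12, 44, 58, 80, 91},
    "carrot": {5, 22, 83},
}

def overlap(word_a, word_b):
    a, b = fingerprints[word_a], fingerprints[word_b]
    return len(a & b) / len(a | b)        # Jaccard similarity

for other in ["liver", "piano", "carrot"]:
    print(f"organ ~ {other}: {overlap('organ', other):.2f}")
```
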
Proper conversation between humans and machines can be seen as a series of linked challenges: speech recognition, speech synthesis, syntactic analysis, semantic analysis, pragmatic understanding, dialogue, common sense and real-world knowledge. Because all the technologies have to work together, the chain as a whole is only as strong as its weakest link, and the first few of these are far better developed than the last few.

The hardest part is linking them together. Scientists do not know how the human brain draws on so many different kinds of knowledge at the same time. Programming a machine to replicate that feat is very much a work in progress.

Looking ahead: For my next trick
Talking machines are the new must-haves
IN “WALL-E”, an animated children’s film set in the future, all humankind lives on a spaceship after the Earth’s environment has been trashed. The humans are whisked around in intelligent hovering chairs; machines take care of their every need, so they are all morbidly obese. Even the ship’s captain is not really in charge; the actual pilot is an intelligent and malevolent talking robot, Auto, and like so many talking machines in science fiction, he eventually makes a grab for power.

Speech is quintessentially human, so it is hard to imagine machines that can truly speak conversationally as humans do without also imagining them to be superintelligent. And if they are superintelligent, with none of humans’ flaws, it is hard to imagine them not wanting to take over, not only for their own good but for that of humanity. Even in a fairly benevolent future like “WALL-E’s”, where the machines are doing all the work, it is easy to see that the lack of anything challenging to do would be harmful to people.
Fortunately, the tasks that talking machines can take off humans’ to-do lists are the sort that many would happily give up. Machines are increasingly able to handle difficult but well-defined jobs. Soon all that their users will have to do is pipe up and ask them, using a naturally phrased voice command. Once upon a time, just one tinkerer in a given family knew how to work the computer or the video recorder. Then graphical interfaces (icons and a mouse) and touchscreens made such technology accessible to everyone. Frank Chen of Andreessen Horowitz, a venture-capital firm, sees natural-language interfaces between humans and machines as just another step in making information and services available to all. Silicon Valley, he says, is enjoying a golden age of AI technologies. Just as in the early 1990s companies were piling online and building websites without quite knowing why, now everyone is going for natural language. Yet, he adds, “we’re in 1994 for voice.”
1995 will soon come. This does not mean that people will communicate with their computers exclusively by talking to them. Websites did not make the telephone obsolete, and mobile devices did not make desktop computers obsolete. In the same way, people will continue to have a choice between voice and text when interacting with their machines.
Not all will choose voice. For example, in Japan yammering into a phone is not done in public, whether the interlocutor is a human or a digital assistant, so usage of Siri is low during business hours but high in the evening and at the weekend. For others, voice-enabled technology is an obvious boon. It allows dyslexic people to write without typing, and the very elderly may find it easier to talk than to type on a tiny keyboard. The very young, some of whom today learn to type before they can write, may soon learn to talk to machines before they can type.
Those with injuries or disabilities that make it hard for them to write will also benefit. Microsoft is justifiably proud of a new device that will allow people with amyotrophic lateral sclerosis (ALS), which immobilises nearly all of the body but leaves the mind working, to speak by using their eyes to pick letters on a screen. The critical part is predictive text, which improves as it gets used to a particular individual. An experienced user will be able to “speak” at around 15 words per minute.
People may even turn to machines for company. Microsoft’s Xiaoice, a chatbot launched in China, learns to come up with the responses that will keep a conversation going longest. Nobody would think it was human, but it does make users open up in surprising ways. Jibo, a new “social robot”, is intended to tell children stories, help far-flung relatives stay in touch and the like.

Another group that may benefit from technology is smaller language communities. Networked computers can encourage a winner-take-all effect: if there is a lot of good software and content in English and Chinese, smaller languages become less valuable online. If they are really tiny, their very survival may be at stake. But Ross Perlin of the Endangered Languages Alliance notes that new software allows researchers to document small languages more quickly than ever. With enough data comes the possibility of developing resources—from speech recognition to interfaces with software—for smaller and smaller languages. The Silicon Valley giants already localise their services in dozens of languages; neural networks and other software allow new versions to be generated faster and more efficiently than ever.

There are two big downsides to the rise in natural-language technologies: the implications for privacy, and the disruption it will bring to many jobs.

Increasingly, devices are always listening. Digital assistants like Alexa, Cortana, Siri and Google Assistant are programmed to wait for a prompt, such as “Hey, Siri” or “OK, Google”, to activate them. But allowing always-on microphones into people’s pockets and homes amounts to a further erosion of traditional expectations of privacy. The same might be said for all the ways in which language software improves by training on a single user’s voice, vocabulary, written documents and habits.

All the big companies’ location-based services—even the accelerometers in phones that detect small movements—are making ever-improving guesses about users’ wants and needs. The moment when a digital assistant surprises a user with “The chemist is nearby—do you want to buy more haemorrhoid cream, Steve?” could be when many may choose to reassess the trade-off between amazing new services and old-fashioned privacy. The tech companies can help by giving users more choice; the latest iPhone will not be activated when it is laid face down on a table. But hackers will inevitably find ways to get at some of these data.

The other big concern is for jobs. To the extent that they are routine, they face being automated away. A good example is customer support. When people contact a company for help, the initial encounter is usually highly scripted. A company employee will verify a customer’s identity and follow a decision-tree. Language technology is now mature enough to take on many of these tasks.

For a long transition period humans will still be needed, but the work they do will become less routine. Nuance, which sells lots of automated online and phone-based help systems, is bullish on voice biometrics (customers identifying themselves by saying “my voice is my password”). Using around 200 parameters for identifying a speaker, it is probably more secure than a fingerprint, says Brett Beranek, a senior manager at the company. It will also eliminate the tedium, for both customers and support workers, of going through multi-step identification procedures with PINs, passwords and security questions. When Barclays, a British bank, offered it to frequent users of customer-support services, 84% signed up within five months.

Digital assistants on personal smartphones can get away with mistakes, but for some business applications the tolerance for error is close to zero, notes Nikita Ivanov. His company, Datalingvo, a Silicon Valley startup, answers questions phrased in natural language about a company’s business data. If a user wants to know which online ads resulted in the most sales in California last month, the software automatically translates his typed question into a database query. But behind the scenes a human working for Datalingvo vets the query to make sure it is correct. This is because the stakes are high: the technology is bound to make mistakes in its early days, and users could make decisions based on bad data.
This process can work the other way round, too: rather than natural-language input producing data, data can produce language. Arria, a company based in London, makes software into which a spreadsheet full of data can be dragged and dropped, to be turned automatically into a written description of the contents, complete with trends. Matt Gould, the company’s chief strategy officer, likes to think that this will free chief financial officers from having to write up the same old routine analyses for the board, giving them time to develop more creative approaches.

Carl Benedikt Frey, an economist at Oxford University, has researched the likely effect of artificial intelligence on the labour market and concluded that the jobs most likely to remain immune include those requiring creativity and skill at complex social interactions. But not every human has those traits. Call centres may need fewer people as more routine work is handled by automated systems, but the trickier inquiries will still go to humans.

Much of this seems familiar. When Google search first became available, it turned up documents in seconds that would have taken a human operator hours, days or years to find. This removed much of the drudgery from being a researcher, librarian or journalist. More recently, young lawyers and paralegals have taken to using e-discovery. These innovations have not destroyed the professions concerned but merely reshaped them.

Machines that relieve drudgery and allow people to do more interesting jobs are a fine thing. In net terms they may even create extra jobs. But any big adjustment is most painful for those least able to adapt. Upheavals brought about by social changes—like the emancipation of women or the globalisation of labour markets—are already hard for some people to bear. When those changes are wrought by machines, they become even harder, and all the more so when those machines seem to behave more and more like humans. People already treat inanimate objects as if they were alive: who has never shouted at a computer in frustration? The more that machines talk, and the more that they seem to understand people, the more their users will be tempted to attribute human traits to them.

That raises questions about what it means to be human. Language is widely seen as humankind’s most distinguishing trait. AI researchers insist that their machines do not think like people, but if they can listen and talk like humans, what does that make them? As humans teach ever more capable machines to use language, the once-obvious line between them will blur.

NYT on Google Brain, Google Translate, and AI Progress

Amazing progress!

New York Times Article on Google and AI Progress

The Great A.I. Awakening
How Google used artificial intelligence to transform Google Translate, one of its more popular services — and how machine learning is poised to reinvent computing itself.
BY GIDEON LEWIS-KRAUS | DEC. 14, 2016

Referenced Here: Google Translate, Google Brain, Jun Rekimoto, Sundar Pichai, Jeff Dean, Andrew Ng, Greg Corrado, Baidu, Project Marvin, Geoffrey Hinton, Max Planck Institute for Biological Cybernetics, Quoc Le, MacDuff Hughes, Apple’s Siri, Facebook’s M, Amazon’s Echo, Alan Turing, GO (the Board Game), convolutional neural network of Yann LeCun, supervised learning, machine learning, deep learning, Mike Schuster, T.P.U.s

Prologue: You Are What You Have Read
Late one Friday night in early November, Jun Rekimoto, a distinguished professor of human-computer interaction at the University of Tokyo, was online preparing for a lecture when he began to notice some peculiar posts rolling in on social media. Apparently Google Translate, the company’s popular machine-translation service, had suddenly and almost immeasurably improved. Rekimoto visited Translate himself and began to experiment with it. He was astonished. He had to go to sleep, but Translate refused to relax its grip on his imagination.
Rekimoto wrote up his initial findings in a blog post. First, he compared a few sentences from two published versions of “The Great Gatsby,” Takashi Nozaki’s 1957 translation and Haruki Murakami’s more recent iteration, with what this new Google Translate was able to produce. Murakami’s translation is written “in very polished Japanese,” Rekimoto explained to me later via email, but the prose is distinctively “Murakami-style.” By contrast, Google’s translation — despite some “small unnaturalness” — reads to him as “more transparent.”
The second half of Rekimoto’s post examined the service in the other direction, from Japanese to English. He dashed off his own Japanese interpretation of the opening to Hemingway’s “The Snows of Kilimanjaro,” then ran that passage back through Google into English. He published this version alongside Hemingway’s original, and proceeded to invite his readers to guess which was the work of a machine.
NO. 1:
Kilimanjaro is a snow-covered mountain 19,710 feet high, and is said to be the highest mountain in Africa. Its western summit is called the Masai “Ngaje Ngai,” the House of God. Close to the western summit there is the dried and frozen carcass of a leopard. No one has explained what the leopard was seeking at that altitude.
NO. 2:
Kilimanjaro is a mountain of 19,710 feet covered with snow and is said to be the highest mountain in Africa. The summit of the west is called “Ngaje Ngai” in Masai, the house of God. Near the top of the west there is a dry and frozen dead body of leopard. No one has ever explained what leopard wanted at that altitude.
Even to a native English speaker, the missing article on the leopard is the only real giveaway that No. 2 was the output of an automaton. Their closeness was a source of wonder to Rekimoto, who was well acquainted with the capabilities of the previous service. Only 24 hours earlier, Google would have translated the same Japanese passage as follows:
Kilimanjaro is 19,710 feet of the mountain covered with snow, and it is said that the highest mountain in Africa. Top of the west, “Ngaje Ngai” in the Maasai language, has been referred to as the house of God. The top close to the west, there is a dry, frozen carcass of a leopard. Whether the leopard had what the demand at that altitude, there is no that nobody explained.
Rekimoto promoted his discovery to his hundred thousand or so followers on Twitter, and over the next few hours thousands of people broadcast their own experiments with the machine-translation service. Some were successful, others meant mostly for comic effect. As dawn broke over Tokyo, Google Translate was the No. 1 trend on Japanese Twitter, just above some cult anime series and the long-awaited new single from a girl-idol supergroup. Everybody wondered: How had Google Translate become so uncannily artful?

Four days later, a couple of hundred journalists, entrepreneurs and advertisers from all over the world gathered in Google’s London engineering office for a special announcement. Guests were greeted with Translate-branded fortune cookies. Their paper slips had a foreign phrase on one side — mine was in Norwegian — and on the other, an invitation to download the Translate app. Tables were set with trays of doughnuts and smoothies, each labeled with a placard that advertised its flavor in German (zitrone), Portuguese (baunilha) or Spanish (manzana). After a while, everyone was ushered into a plush, dark theater.

Sadiq Khan, the mayor of London, stood to make a few opening remarks. A friend, he began, had recently told him he reminded him of Google. “Why, because I know all the answers?” the mayor asked. “No,” the friend replied, “because you’re always trying to finish my sentences.” The crowd tittered politely. Khan concluded by introducing Google’s chief executive, Sundar Pichai, who took the stage.
Pichai was in London in part to inaugurate Google’s new building there, the cornerstone of a new “knowledge quarter” under construction at King’s Cross, and in part to unveil the completion of the initial phase of a company transformation he announced last year. The Google of the future, Pichai had said on several occasions, was going to be “A.I. first.” What that meant in theory was complicated and had invited much speculation. What it meant in practice, with any luck, was that soon the company’s products would no longer represent the fruits of traditional computer programming, exactly, but “machine learning.”
A rarefied department within the company, Google Brain, was founded five years ago on this very principle: that artificial “neural networks” that acquaint themselves with the world via trial and error, as toddlers do, might in turn develop something like human flexibility. This notion is not new — a version of it dates to the earliest stages of modern computing, in the 1940s — but for much of its history most computer scientists saw it as vaguely disreputable, even mystical. Since 2011, though, Google Brain has demonstrated that this approach to artificial intelligence could solve many problems that confounded decades of conventional efforts. Speech recognition didn’t work very well until Brain undertook an effort to revamp it; the application of machine learning made its performance on Google’s mobile platform, Android, almost as good as human transcription. The same was true of image recognition. Less than a year ago, Brain for the first time commenced with the gut renovation of an entire consumer product, and its momentous results were being celebrated tonight.
Translate made its debut in 2006 and since then has become one of Google’s most reliable and popular assets; it serves more than 500 million monthly users in need of 140 billion words per day in a different language. It exists not only as its own stand-alone app but also as an integrated feature within Gmail, Chrome and many other Google offerings, where we take it as a push-button given — a frictionless, natural part of our digital commerce. It was only with the refugee crisis, Pichai explained from the lectern, that the company came to reckon with Translate’s geopolitical importance: On the screen behind him appeared a graph whose steep curve indicated a recent fivefold increase in translations between Arabic and German. (It was also close to Pichai’s own heart. He grew up in India, a land divided by dozens of languages.) The team had been steadily adding new languages and features, but gains in quality over the last four years had slowed considerably.
Until today. As of the previous weekend, Translate had been converted to an A.I.-based system for much of its traffic, not just in the United States but in Europe and Asia as well: The rollout included translations between English and Spanish, French, Portuguese, German, Chinese, Japanese, Korean and Turkish. The rest of Translate’s hundred-odd languages were to come, with the aim of eight per month, by the end of next year. The new incarnation, to the pleasant surprise of Google’s own engineers, had been completed in only nine months. The A.I. system had demonstrated overnight improvements roughly equal to the total gains the old one had accrued over its entire lifetime.
Pichai has an affection for the obscure literary reference; he told me a month earlier, in his office in Mountain View, Calif., that Translate in part exists because not everyone can be like the physicist Robert Oppenheimer, who learned Sanskrit to read the Bhagavad Gita in the original. In London, the slide on the monitors behind him flicked to a Borges quote: “Uno no es lo que es por lo que escribe, sino por lo que ha leído.”
Grinning, Pichai read aloud an awkward English version of the sentence that had been rendered by the old Translate system: “One is not what is for what he writes, but for what he has read.”
To the right of that was a new A.I.-rendered version: “You are not what you write, but what you have read.”
It was a fitting remark: The new Google Translate was run on the first machines that had, in a sense, ever learned to read anything at all.
Google’s decision to reorganize itself around A.I. was the first major manifestation of what has become an industrywide machine-learning delirium. Over the past four years, six companies in particular — Google, Facebook, Apple, Amazon, Microsoft and the Chinese firm Baidu — have touched off an arms race for A.I. talent, particularly within universities. Corporate promises of resources and freedom have thinned out top academic departments. It has become widely known in Silicon Valley that Mark Zuckerberg, chief executive of Facebook, personally oversees, with phone calls and video-chat blandishments, his company’s overtures to the most desirable graduate students. Starting salaries of seven figures are not unheard-of. Attendance at the field’s most important academic conference has nearly quadrupled. What is at stake is not just one more piecemeal innovation but control over what very well could represent an entirely new computational platform: pervasive, ambient artificial intelligence.

The phrase “artificial intelligence” is invoked as if its meaning were self-evident, but it has always been a source of confusion and controversy. Imagine if you went back to the 1970s, stopped someone on the street, pulled out a smartphone and showed her Google Maps. Once you managed to convince her you weren’t some oddly dressed wizard, and that what you withdrew from your pocket wasn’t a black-arts amulet but merely a tiny computer more powerful than that onboard the Apollo spacecraft, Google Maps would almost certainly seem to her a persuasive example of “artificial intelligence.” In a very real sense, it is. It can do things any map-literate human can manage, like get you from your hotel to the airport — though it can do so much more quickly and reliably. It can also do things that humans simply and obviously cannot: It can evaluate the traffic, plan the best route and reorient itself when you take the wrong exit.
Practically nobody today, however, would bestow upon Google Maps the honorific “A.I.,” so sentimental and sparing are we in our use of the word “intelligence.” Artificial intelligence, we believe, must be something that distinguishes HAL from whatever it is a loom or wheelbarrow can do. The minute we can automate a task, we downgrade the relevant skill involved to one of mere mechanism. Today Google Maps seems, in the pejorative sense of the term, robotic: It simply accepts an explicit demand (the need to get from one place to another) and tries to satisfy that demand as efficiently as possible. The goal posts for “artificial intelligence” are thus constantly receding.
When he has an opportunity to make careful distinctions, Pichai differentiates between the current applications of A.I. and the ultimate goal of “artificial general intelligence.” Artificial general intelligence will not involve dutiful adherence to explicit instructions, but instead will demonstrate a facility with the implicit, the interpretive. It will be a general tool, designed for general purposes in a general context. Pichai believes his company’s future depends on something like this. Imagine if you could tell Google Maps, “I’d like to go to the airport, but I need to stop off on the way to buy a present for my nephew.” A more generally intelligent version of that service — a ubiquitous assistant, of the sort that Scarlett Johansson memorably disembodied three years ago in the Spike Jonze film “Her”— would know all sorts of things that, say, a close friend or an earnest intern might know: your nephew’s age, and how much you ordinarily like to spend on gifts for children, and where to find an open store. But a truly intelligent Maps could also conceivably know all sorts of things a close friend wouldn’t, like what has only recently come into fashion among preschoolers in your nephew’s school — or more important, what its users actually want. If an intelligent machine were able to discern some intricate if murky regularity in data about what we have done in the past, it might be able to extrapolate about our subsequent desires, even if we don’t entirely know them ourselves.
The new wave of A.I.-enhanced assistants — Apple’s Siri, Facebook’s M, Amazon’s Echo — are all creatures of machine learning, built with similar intentions. The corporate dreams for machine learning, however, aren’t exhausted by the goal of consumer clairvoyance. A medical-imaging subsidiary of Samsung announced this year that its new ultrasound devices could detect breast cancer. Management consultants are falling all over themselves to prep executives for the widening industrial applications of computers that program themselves. DeepMind, a 2014 Google acquisition, defeated the reigning human grandmaster of the ancient board game Go, despite predictions that such an achievement would take another 10 years.
In a famous 1950 essay, Alan Turing proposed a test for an artificial general intelligence: a computer that could, over the course of five minutes of text exchange, successfully deceive a real human interlocutor. Once a machine can translate fluently between two natural languages, the foundation has been laid for a machine that might one day “understand” human language well enough to engage in plausible conversation. Google Brain’s members, who pushed and helped oversee the Translate project, believe that such a machine would be on its way to serving as a generally intelligent all-encompassing personal digital assistant.

What follows here is the story of how a team of Google researchers and engineers — at first one or two, then three or four, and finally more than a hundred — made considerable progress in that direction. It’s an uncommon story in many ways, not least of all because it defies many of the Silicon Valley stereotypes we’ve grown accustomed to. It does not feature people who think that everything will be unrecognizably different tomorrow or the next day because of some restless tinkerer in his garage. It is neither a story about people who think technology will solve all our problems nor one about people who think technology is ineluctably bound to create apocalyptic new ones. It is not about disruption, at least not in the way that word tends to be used.
It is, in fact, three overlapping stories that converge in Google Translate’s successful metamorphosis to A.I. — a technical story, an institutional story and a story about the evolution of ideas. The technical story is about one team on one product at one company, and the process by which they refined, tested and introduced a brand-new version of an old product in only about a quarter of the time anyone, themselves included, might reasonably have expected. The institutional story is about the employees of a small but influential artificial-intelligence group within that company, and the process by which their intuitive faith in some old, unproven and broadly unpalatable notions about computing upended every other company within a large radius. The story of ideas is about the cognitive scientists, psychologists and wayward engineers who long toiled in obscurity, and the process by which their ostensibly irrational convictions ultimately inspired a paradigm shift in our understanding not only of technology but also, in theory, of consciousness itself.

The first story, the story of Google Translate, takes place in Mountain View over nine months, and it explains the transformation of machine translation. The second story, the story of Google Brain and its many competitors, takes place in Silicon Valley over five years, and it explains the transformation of that entire community. The third story, the story of deep learning, takes place in a variety of far-flung laboratories — in Scotland, Switzerland, Japan and most of all Canada — over seven decades, and it might very well contribute to the revision of our self-image as first and foremost beings who think.
All three are stories about artificial intelligence. The seven-decade story is about what we might conceivably expect or want from it. The five-year story is about what it might do in the near future. The nine-month story is about what it can do right this minute. These three stories are themselves just proof of concept. All of this is only the beginning.

Part I: Learning Machine
1. The Birth of Brain
Jeff Dean, though his title is senior fellow, is the de facto head of Google Brain. Dean is a sinewy, energy-efficient man with a long, narrow face, deep-set eyes and an earnest, soapbox-derby sort of enthusiasm. The son of a medical anthropologist and a public-health epidemiologist, Dean grew up all over the world — Minnesota, Hawaii, Boston, Arkansas, Geneva, Uganda, Somalia, Atlanta — and, while in high school and college, wrote software used by the World Health Organization. He has been with Google since 1999, as employee 25ish, and has had a hand in the core software systems beneath nearly every significant undertaking since then. A beloved artifact of company culture is Jeff Dean Facts, written in the style of the Chuck Norris Facts meme: “Jeff Dean’s PIN is the last four digits of pi.” “When Alexander Graham Bell invented the telephone, he saw a missed call from Jeff Dean.” “Jeff Dean got promoted to Level 11 in a system where the maximum level is 10.” (This last one is, in fact, true.)

One day in early 2011, Dean walked into one of the Google campus’s “microkitchens” — the “Googley” word for the shared break spaces on most floors of the Mountain View complex’s buildings — and ran into Andrew Ng, a young Stanford computer-science professor who was working for the company as a consultant. Ng told him about Project Marvin, an internal effort (named after the celebrated A.I. pioneer Marvin Minsky) he had recently helped establish to experiment with “neural networks,” pliant digital lattices based loosely on the architecture of the brain. Dean himself had worked on a primitive version of the technology as an undergraduate at the University of Minnesota in 1990, during one of the method’s brief windows of mainstream acceptability. Now, over the previous five years, the number of academics working on neural networks had begun to grow again, from a handful to a few dozen. Ng told Dean that Project Marvin, which was being underwritten by Google’s secretive X lab, had already achieved some promising results.
Dean was intrigued enough to lend his “20 percent” — the portion of work hours every Google employee is expected to contribute to programs outside his or her core job — to the project. Pretty soon, he suggested to Ng that they bring in another colleague with a neuroscience background, Greg Corrado. (In graduate school, Corrado was taught briefly about the technology, but strictly as a historical curiosity. “It was good I was paying attention in class that day,” he joked to me.) In late spring they brought in one of Ng’s best graduate students, Quoc Le, as the project’s first intern. By then, a number of the Google engineers had taken to referring to Project Marvin by another name: Google Brain.
Since the term “artificial intelligence” was first coined, at a kind of constitutional convention of the mind at Dartmouth in the summer of 1956, a majority of researchers have long thought the best approach to creating A.I. would be to write a very big, comprehensive program that laid out both the rules of logical reasoning and sufficient knowledge of the world. If you wanted to translate from English to Japanese, for example, you would program into the computer all of the grammatical rules of English, and then the entirety of definitions contained in the Oxford English Dictionary, and then all of the grammatical rules of Japanese, as well as all of the words in the Japanese dictionary, and only after all of that feed it a sentence in a source language and ask it to tabulate a corresponding sentence in the target language. You would give the machine a language map that was, as Borges would have had it, the size of the territory. This perspective is usually called “symbolic A.I.” — because its definition of cognition is based on symbolic logic — or, disparagingly, “good old-fashioned A.I.”
There are two main problems with the old-fashioned approach. The first is that it’s awfully time-consuming on the human end. The second is that it only really works in domains where rules and definitions are very clear: in mathematics, for example, or chess. Translation, however, is an example of a field where this approach fails horribly, because words cannot be reduced to their dictionary definitions, and because languages tend to have as many exceptions as they have rules. More often than not, a system like this is liable to translate “minister of agriculture” as “priest of farming.” Still, for math and chess it worked great, and the proponents of symbolic A.I. took it for granted that no activities signaled “general intelligence” better than math and chess.

There were, however, limits to what this system could do. In the 1980s, a robotics researcher at Carnegie Mellon pointed out that it was easy to get computers to do adult things but nearly impossible to get them to do things a 1-year-old could do, like hold a ball or identify a cat. By the 1990s, despite punishing advancements in computer chess, we still weren’t remotely close to artificial general intelligence.
There has always been another vision for A.I. — a dissenting view — in which the computers would learn from the ground up (from data) rather than from the top down (from rules). This notion dates to the early 1940s, when it occurred to researchers that the best model for flexible automated intelligence was the brain itself. A brain, after all, is just a bunch of widgets, called neurons, that either pass along an electrical charge to their neighbors or don’t. What’s important are less the individual neurons themselves than the manifold connections among them. This structure, in its simplicity, has afforded the brain a wealth of adaptive advantages. The brain can operate in circumstances in which information is poor or missing; it can withstand significant damage without total loss of control; it can store a huge amount of knowledge in a very efficient way; it can isolate distinct patterns but retain the messiness necessary to handle ambiguity.
There was no reason you couldn’t try to mimic this structure in electronic form, and in 1943 it was shown that arrangements of simple artificial neurons could carry out basic logical functions. They could also, at least in theory, learn the way we do. With life experience, depending on a particular person’s trials and errors, the synaptic connections among pairs of neurons get stronger or weaker. An artificial neural network could do something similar, by gradually altering, on a guided trial-and-error basis, the numerical relationships among artificial neurons. It wouldn’t need to be preprogrammed with fixed rules. It would, instead, rewire itself to reflect patterns in the data it absorbed.
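As a rough illustration of that 1943 result, here is a small Python sketch (my own, purely illustrative): fixed-weight threshold "neurons" wired by hand so that they behave like basic logic gates. Nothing is learned here; the weights are simply chosen in advance.

```python
def neuron(inputs, weights, threshold):
    """A 1943-style artificial neuron: fire (1) if the weighted sum of inputs reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def AND(a, b):
    return neuron([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    return neuron([a, b], weights=[1, 1], threshold=1)

def NOT(a):
    return neuron([a], weights=[-1], threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "NOT a:", NOT(a))
```

Learning, as the following paragraphs describe, is what you get when weights like these are adjusted by guided trial and error rather than set by hand.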
This attitude toward artificial intelligence was evolutionary rather than creationist. If you wanted a flexible mechanism, you wanted one that could adapt to its environment. If you wanted something that could adapt, you didn’t want to begin with the indoctrination of the rules of chess. You wanted to begin with very basic abilities — sensory perception and motor control — in the hope that advanced skills would emerge organically. Humans don’t learn to understand language by memorizing dictionaries and grammar books, so why should we possibly expect our computers to do so?
Google Brain was the first major commercial institution to invest in the possibilities embodied by this way of thinking about A.I. Dean, Corrado and Ng began their work as a part-time, collaborative experiment, but they made immediate progress. They took architectural inspiration for their models from recent theoretical outlines — as well as ideas that had been on the shelf since the 1980s and 1990s — and drew upon both the company’s peerless reserves of data and its massive computing infrastructure. They instructed the networks on enormous banks of “labeled” data — speech files with correct transcriptions, for example — and the computers improved their responses to better match reality.
“The portion of evolution in which animals developed eyes was a big development,” Dean told me one day, with customary understatement. We were sitting, as usual, in a whiteboarded meeting room, on which he had drawn a crowded, snaking timeline of Google Brain and its relation to inflection points in the recent history of neural networks. “Now computers have eyes. We can build them around the capabilities that now exist to understand photos. Robots will be drastically transformed. They’ll be able to operate in an unknown environment, on much different problems.” These capacities they were building may have seemed primitive, but their implications were profound.

2. The Unlikely Intern
In its first year or so of existence, Brain’s experiments in the development of a machine with the talents of a 1-year-old had, as Dean said, worked to great effect. Its speech-recognition team swapped out part of their old system for a neural network and encountered, in pretty much one fell swoop, the best quality improvements anyone had seen in 20 years. Their system’s object-recognition abilities improved by an order of magnitude. This was not because Brain’s personnel had generated a sheaf of outrageous new ideas in just a year. It was because Google had finally devoted the resources — in computers and, increasingly, personnel — to fill in outlines that had been around for a long time.
A great preponderance of these extant and neglected notions had been proposed or refined by a peripatetic English polymath named Geoffrey Hinton. In the second year of Brain’s existence, Hinton was recruited to Brain as Andrew Ng left. (Ng now leads the 1,300-person A.I. team at Baidu.) Hinton wanted to leave his post at the University of Toronto for only three months, so for arcane contractual reasons he had to be hired as an intern. At intern training, the orientation leader would say something like, “Type in your LDAP” — a user login — and he would flag a helper to ask, “What’s an LDAP?” All the smart 25-year-olds in attendance, who had only ever known deep learning as the sine qua non of artificial intelligence, snickered: “Who is that old guy? Why doesn’t he get it?”
“At lunchtime,” Hinton said, “someone in the queue yelled: ‘Professor Hinton! I took your course! What are you doing here?’ After that, it was all right.”
A few months later, Hinton and two of his students demonstrated truly astonishing gains in a big image-recognition contest, run by an open-source collective called ImageNet, that asks computers not only to identify a monkey but also to distinguish between spider monkeys and howler monkeys, and among God knows how many different breeds of cat. Google soon approached Hinton and his students with an offer. They accepted. “I thought they were interested in our I.P.,” he said. “Turns out they were interested in us.”
Hinton comes from one of those old British families emblazoned like the Darwins at eccentric angles across the intellectual landscape, where regardless of titular preoccupation a person is expected to make sideline contributions to minor problems in astronomy or fluid dynamics. His great-great-grandfather was George Boole, whose foundational work in symbolic logic underpins the computer; another great-great-grandfather was a celebrated surgeon, his father a venturesome entomologist, his father’s cousin a Los Alamos researcher; the list goes on. He trained at Cambridge and Edinburgh, then taught at Carnegie Mellon before he ended up at Toronto, where he still spends half his time. (His work has long been supported by the largess of the Canadian government.) I visited him in his office at Google there. He has tousled yellowed-pewter hair combed forward in a mature Noel Gallagher style and wore a baggy striped dress shirt that persisted in coming untucked, and oval eyeglasses that slid down to the tip of a prominent nose. He speaks with a driving if shambolic wit, and says things like, “Computers will understand sarcasm before Americans do.”
Hinton had been working on neural networks since his undergraduate days at Cambridge in the late 1960s, and he is seen as the intellectual primogenitor of the contemporary field. For most of that time, whenever he spoke about machine learning, people looked at him as though he were talking about the Ptolemaic spheres or bloodletting by leeches. Neural networks were taken as a disproven folly, largely on the basis of one overhyped project: the Perceptron, an artificial neural network that Frank Rosenblatt, a Cornell psychologist, developed in the late 1950s. The New York Times reported that the machine’s sponsor, the United States Navy, expected it would “be able to walk, talk, see, write, reproduce itself and be conscious of its existence.” It went on to do approximately none of those things. Marvin Minsky, the dean of artificial intelligence in America, had worked on neural networks for his 1954 Princeton thesis, but he’d since grown tired of the inflated claims that Rosenblatt — who was a contemporary at Bronx Science — made for the neural paradigm. (He was also competing for Defense Department funding.) Along with an M.I.T. colleague, Minsky published a book that proved that there were painfully simple problems the Perceptron could never solve.
Minsky’s criticism of the Perceptron extended only to networks of one “layer,” i.e., one layer of artificial neurons between what’s fed to the machine and what you expect from it — and later in life, he expounded ideas very similar to contemporary deep learning. But Hinton already knew at the time that complex tasks could be carried out if you had recourse to multiple layers. The simplest description of a neural network is that it’s a machine that makes classifications or predictions based on its ability to discover patterns in data. With one layer, you could find only simple patterns; with more than one, you could look for patterns of patterns. Take the case of image recognition, which tends to rely on a contraption called a “convolutional neural net.” (These were elaborated in a seminal 1998 paper whose lead author, a Frenchman named Yann LeCun, did his postdoctoral research in Toronto under Hinton and now directs a huge A.I. endeavor at Facebook.) The first layer of the network learns to identify the very basic visual trope of an “edge,” meaning a nothing (an off-pixel) followed by a something (an on-pixel) or vice versa. Each successive layer of the network looks for a pattern in the previous layer. A pattern of edges might be a circle or a rectangle. A pattern of circles or rectangles might be a face. And so on. This more or less parallels the way information is put together in increasingly abstract ways as it travels from the photoreceptors in the retina back and up through the visual cortex. At each conceptual step, detail that isn’t immediately relevant is thrown away. If several edges and circles come together to make a face, you don’t care exactly where the face is found in the visual field; you just care that it’s a face.
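To make the "first layer finds edges" idea concrete, here is a small Python/numpy sketch (my own illustration; a real convolutional network learns its filters from data rather than having them written by hand) that slides a vertical-edge filter over a toy image.

```python
import numpy as np

# A tiny 5x5 "image": a bright square on a dark background (1 = on-pixel, 0 = off-pixel).
image = np.array([
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
], dtype=float)

# A hand-written vertical-edge filter: it responds where an off-pixel meets an on-pixel.
kernel = np.array([
    [-1.0, 0.0, 1.0],
    [-1.0, 0.0, 1.0],
    [-1.0, 0.0, 1.0],
])

def convolve2d(img, k):
    """Valid 2-D convolution (really cross-correlation, as in most deep-learning code)."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

print(convolve2d(image, kernel))
# Strong positive responses mark the left edge of the square, strong negative ones the right edge.
```

Stacking further layers on top of responses like these is what lets a network work its way up from edges to circles to faces.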

A demonstration from 1993 showing an early version of the researcher Yann LeCun’s convolutional neural network, which by the late 1990s was processing 10 to 20 percent of all checks in the United States. A similar technology now drives most state-of-the-art image-recognition systems. Video posted on YouTube by Yann LeCun
The issue with multilayered, “deep” neural networks was that the trial-and-error part got extraordinarily complicated. In a single layer, it’s easy. Imagine that you’re playing with a child. You tell the child, “Pick up the green ball and put it into Box A.” The child picks up a green ball and puts it into Box B. You say, “Try again to put the green ball in Box A.” The child tries Box A. Bravo.
Now imagine you tell the child, “Pick up a green ball, go through the door marked 3 and put the green ball into Box A.” The child takes a red ball, goes through the door marked 2 and puts the red ball into Box B. How do you begin to correct the child? You cannot just repeat your initial instructions, because the child does not know at which point he went wrong. In real life, you might start by holding up the red ball and the green ball and saying, “Red ball, green ball.” The whole point of machine learning, however, is to avoid that kind of explicit mentoring. Hinton and a few others went on to invent a solution (or rather, reinvent an older one) to this layered-error problem, over the halting course of the late 1970s and 1980s, and interest among computer scientists in neural networks was briefly revived. “People got very excited about it,” he said. “But we oversold it.” Computer scientists quickly went back to thinking that people like Hinton were weirdos and mystics.
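The fix Hinton and his colleagues reinvented is now known as backpropagation: push the output error backward through the network so that each layer gets its own share of the correction. Here is a minimal Python/numpy sketch of the idea (my own toy example, not anyone's production code), with the XOR function, a classic problem a single-layer Perceptron cannot solve, standing in for the multi-step errand.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: output 1 only when exactly one input is 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: input -> hidden (2 -> 8) and hidden -> output (8 -> 1).
W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)       # hidden activations
    out = sigmoid(h @ W2 + b2)     # predictions

    # Backward pass: the output error is pushed back so the hidden layer gets its own blame.
    err_out = (out - y) * out * (1 - out)
    err_hid = (err_out @ W2.T) * h * (1 - h)

    # Each layer's weights receive their own correction.
    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_hid
    b1 -= lr * err_hid.sum(axis=0)

print(np.round(out, 2))  # after training, close to [[0], [1], [1], [0]] (exact values depend on the seed)
```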
These ideas remained popular, however, among philosophers and psychologists, who called the approach “connectionism” or “parallel distributed processing.” “This idea,” Hinton told me, “of a few people keeping a torch burning, it’s a nice myth. It was true within artificial intelligence. But within psychology lots of people believed in the approach but just couldn’t do it.” Neither could Hinton, despite the generosity of the Canadian government. “There just wasn’t enough computer power or enough data. People on our side kept saying, ‘Yeah, but if I had a really big one, it would work.’ It wasn’t a very persuasive argument.”

3. A Deep Explanation of Deep Learning
When Pichai said that Google would henceforth be “A.I. first,” he was not just making a claim about his company’s business strategy; he was throwing in his company’s lot with this long-unworkable idea. Pichai’s allocation of resources ensured that people like Dean could ensure that people like Hinton would have, at long last, enough computers and enough data to make a persuasive argument. An average brain has something on the order of 100 billion neurons. Each neuron is connected to up to 10,000 other neurons, which means that the number of synapses is between 100 trillion and 1,000 trillion. For a simple artificial neural network of the sort proposed in the 1940s, even attempting to replicate this was unimaginable. We’re still far from the construction of a network of that size, but Google Brain’s investment allowed for the creation of artificial neural networks comparable to the brains of mice.
To understand why scale is so important, however, you have to start to understand some of the more technical details of what, exactly, machine intelligences are doing with the data they consume. A lot of our ambient fears about A.I. rest on the idea that they’re just vacuuming up knowledge like a sociopathic prodigy in a library, and that an artificial intelligence constructed to make paper clips might someday decide to treat humans like ants or lettuce. This just isn’t how they work. All they’re doing is shuffling information around in search of commonalities — basic patterns, at first, and then more complex ones — and for the moment, at least, the greatest danger is that the information we’re feeding them is biased in the first place.
If that brief explanation seems sufficiently reassuring, the reassured nontechnical reader is invited to skip forward to the next section, which is about cats. If not, then read on. (This section is also, luckily, about cats.)
Imagine you want to program a cat-recognizer on the old symbolic-A.I. model. You stay up for days preloading the machine with an exhaustive, explicit definition of “cat.” You tell it that a cat has four legs and pointy ears and whiskers and a tail, and so on. All this information is stored in a special place in memory called Cat. Now you show it a picture. First, the machine has to separate out the various distinct elements of the image. Then it has to take these elements and apply the rules stored in its memory. If(legs=4) and if(ears=pointy) and if(whiskers=yes) and if(tail=yes) and if(expression=supercilious), then(cat=yes). But what if you showed this cat-recognizer a Scottish Fold, a heart-rending breed with a prized genetic defect that leads to droopy doubled-over ears? Our symbolic A.I. gets to (ears=pointy) and shakes its head solemnly, “Not cat.” It is hyperliteral, or “brittle.” Even the thickest toddler shows much greater inferential acuity.
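The brittleness is easy to see if you write the paragraph's rules down literally. A toy Python version (the feature checks are the article's; the code is my own) might look like this:

```python
def symbolic_cat_recognizer(animal):
    """Old-fashioned symbolic A.I.: an explicit, hand-written definition of 'cat'."""
    return (animal["legs"] == 4
            and animal["ears"] == "pointy"
            and animal["whiskers"]
            and animal["tail"]
            and animal["expression"] == "supercilious")

tabby = {"legs": 4, "ears": "pointy", "whiskers": True, "tail": True, "expression": "supercilious"}
scottish_fold = {"legs": 4, "ears": "droopy", "whiskers": True, "tail": True, "expression": "supercilious"}

print(symbolic_cat_recognizer(tabby))          # True
print(symbolic_cat_recognizer(scottish_fold))  # False: the rules are brittle, so "Not cat."
```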
Now imagine that instead of hard-wiring the machine with a set of rules for classification stored in one location of the computer’s memory, you try the same thing on a neural network. There is no special place that can hold the definition of “cat.” There is just a giant blob of interconnected switches, like forks in a path. On one side of the blob, you present the inputs (the pictures); on the other side, you present the corresponding outputs (the labels). Then you just tell it to work out for itself, via the individual calibration of all of these interconnected switches, whatever path the data should take so that the inputs are mapped to the correct outputs. The training is the process by which a labyrinthine series of elaborate tunnels are excavated through the blob, tunnels that connect any given input to its proper output. The more training data you have, the greater the number and intricacy of the tunnels that can be dug. Once the training is complete, the middle of the blob has enough tunnels that it can make reliable predictions about how to handle data it has never seen before. This is called “supervised learning.”
The reason that the network requires so many neurons and so much data is that it functions, in a way, like a sort of giant machine democracy. Imagine you want to train a computer to differentiate among five different items. Your network is made up of millions and millions of neuronal “voters,” each of whom has been given five different cards: one for cat, one for dog, one for spider monkey, one for spoon and one for defibrillator. You show your electorate a photo and ask, “Is this a cat, a dog, a spider monkey, a spoon or a defibrillator?” All the neurons that voted the same way collect in groups, and the network foreman peers down from above and identifies the majority classification: “A dog?”
You say: “No, maestro, it’s a cat. Try again.”
Now the network foreman goes back to identify which voters threw their weight behind “cat” and which didn’t. The ones that got “cat” right get their votes counted double next time — at least when they’re voting for “cat.” They have to prove independently whether they’re also good at picking out dogs and defibrillators, but one thing that makes a neural network so flexible is that each individual unit can contribute differently to different desired outcomes. What’s important is not the individual vote, exactly, but the pattern of votes. If Joe, Frank and Mary all vote together, it’s a dog; but if Joe, Kate and Jessica vote together, it’s a cat; and if Kate, Jessica and Frank vote together, it’s a defibrillator. The neural network just needs to register enough of a regularly discernible signal somewhere to say, “Odds are, this particular arrangement of pixels represents something these humans keep calling ‘cats.’ ” The more “voters” you have, and the more times you make them vote, the more keenly the network can register even very weak signals. If you have only Joe, Frank and Mary, you can maybe use them only to differentiate among a cat, a dog and a defibrillator. If you have millions of different voters that can associate in billions of different ways, you can learn to classify data with incredible granularity. Your trained voter assembly will be able to look at an unlabeled picture and identify it more or less accurately.
Part of the reason there was so much resistance to these ideas in computer-science departments is that because the output is just a prediction based on patterns of patterns, it’s not going to be perfect, and the machine will never be able to define for you what, exactly, a cat is. It just knows them when it sees them. This wooliness, however, is the point. The neuronal “voters” will recognize a happy cat dozing in the sun and an angry cat glaring out from the shadows of an untidy litter box, as long as they have been exposed to millions of diverse cat scenes. You just need lots and lots of the voters — in order to make sure that some part of your network picks up on even very weak regularities, on Scottish Folds with droopy ears, for example — and enough labeled data to make sure your network has seen the widest possible variance in phenomena.
Because neural networks are probabilistic in nature, however, they are not suitable for all tasks. It’s no great tragedy if they mislabel 1 percent of cats as dogs, or send you to the wrong movie on occasion, but in something like a self-driving car we all want greater assurances. This isn’t the only caveat. Supervised learning is a trial-and-error process based on labeled data. The machines might be doing the learning, but there remains a strong human element in the initial categorization of the inputs. If your data had a picture of a man and a woman in suits that someone had labeled “woman with her boss,” that relationship would be encoded into all future pattern recognition. Labeled data is thus fallible the way that human labelers are fallible. If a machine were asked to identify creditworthy candidates for loans, it might use data like felony convictions, but if felony convictions were unfair in the first place — if they were based on, say, discriminatory drug laws — then the loan recommendations would perforce also be fallible.
Image-recognition networks like our cat-identifier are only one of many varieties of deep learning, but they are disproportionately invoked as teaching examples because each layer does something at least vaguely recognizable to humans — picking out edges first, then circles, then faces. This means there’s a safeguard against error. For instance, an early oddity in Google’s image-recognition software meant that it could not always identify a dumbbell in isolation, even though the team had trained it on an image set that included a lot of exercise categories. A visualization tool showed them the machine had learned not the concept of “dumbbell” but the concept of “dumbbell+arm,” because all the dumbbells in the training set were attached to arms. They threw into the training mix some photos of solo dumbbells. The problem was solved. Not everything is so easy.


4. The Cat Paper
Over the course of its first year or two, Brain’s efforts to cultivate in machines the skills of a 1-year-old were auspicious enough that the team was graduated out of the X lab and into the broader research organization. (The head of Google X once noted that Brain had paid for the entirety of X’s costs.) They still had fewer than 10 people and only a vague sense for what might ultimately come of it all. But even then they were thinking ahead to what ought to happen next. First a human mind learns to recognize a ball and rests easily with the accomplishment for a moment, but sooner or later, it wants to ask for the ball. And then it wades into language.
The first step in that direction was the cat paper, which made Brain famous.
What the cat paper demonstrated was that a neural network with more than a billion “synaptic” connections — a hundred times larger than any publicized neural network to that point, yet still many orders of magnitude smaller than our brains — could observe raw, unlabeled data and pick out for itself a high-order human concept. The Brain researchers had shown the network millions of still frames from YouTube videos, and out of the welter of the pure sensorium the network had isolated a stable pattern any toddler or chipmunk would recognize without a moment’s hesitation as the face of a cat. The machine had not been programmed with the foreknowledge of a cat; it reached directly into the world and seized the idea for itself. (The researchers discovered this with the neural-network equivalent of something like an M.R.I., which showed them that a ghostly cat face caused the artificial neurons to “vote” with the greatest collective enthusiasm.) Most machine learning to that point had been limited by the quantities of labeled data. The cat paper showed that machines could also deal with raw unlabeled data, perhaps even data of which humans had no established foreknowledge. This seemed like a major advance not only in cat-recognition studies but also in overall artificial intelligence.
The lead author on the cat paper was Quoc Le. Le is short and willowy and soft-spoken, with a quick, enigmatic smile and shiny black penny loafers. He grew up outside Hue, Vietnam. His parents were rice farmers, and he did not have electricity at home. His mathematical abilities were obvious from an early age, and he was sent to study at a magnet school for science. In the late 1990s, while still in school, he tried to build a chatbot to talk to. He thought, How hard could this be?
“But actually,” he told me in a whispery deadpan, “it’s very hard.”
He left the rice paddies on a scholarship to a university in Canberra, Australia, where he worked on A.I. tasks like computer vision. The dominant method of the time, which involved feeding the machine definitions for things like edges, felt to him like cheating. Le didn’t know then, or knew only dimly, that there were at least a few dozen computer scientists elsewhere in the world who couldn’t help imagining, as he did, that machines could learn from scratch. In 2006, Le took a position at the Max Planck Institute for Biological Cybernetics in the medieval German university town of Tübingen. In a reading group there, he encountered two new papers by Geoffrey Hinton. People who entered the discipline during the long diaspora all have conversion stories, and when Le read those papers, he felt the scales fall away from his eyes.
“There was a big debate,” he told me. “A very big debate.” We were in a small interior conference room, a narrow, high-ceilinged space outfitted with only a small table and two whiteboards. He looked to the curve he’d drawn on the whiteboard behind him and back again, then softly confided, “I’ve never seen such a big debate.”
He remembers standing up at the reading group and saying, “This is the future.” It was, he said, an “unpopular decision at the time.” A former adviser from Australia, with whom he had stayed close, couldn’t quite understand Le’s decision. “Why are you doing this?” he asked Le in an email.
“I didn’t have a good answer back then,” Le said. “I was just curious. There was a successful paradigm, but to be honest I was just curious about the new paradigm. In 2006, there was very little activity.” He went to join Ng at Stanford and began to pursue Hinton’s ideas. “By the end of 2010, I was pretty convinced something was going to happen.”
What happened, soon afterward, was that Le went to Brain as its first intern, where he carried on with his dissertation work — an extension of which ultimately became the cat paper. On a simple level, Le wanted to see if the computer could be trained to identify on its own the information that was absolutely essential to a given image. He fed the neural network a still he had taken from YouTube. He then told the neural network to throw away some of the information contained in the image, though he didn’t specify what it should or shouldn’t throw away. The machine threw away some of the information, initially at random. Then he said: “Just kidding! Now recreate the initial image you were shown based only on the information you retained.” It was as if he were asking the machine to find a way to “summarize” the image, and then expand back to the original from the summary. If the summary was based on irrelevant data — like the color of the sky rather than the presence of whiskers — the machine couldn’t perform a competent reconstruction. Its reaction would be akin to that of a distant ancestor whose takeaway from his brief exposure to saber-tooth tigers was that they made a restful swooshing sound when they moved. Le’s neural network, unlike that ancestor, got to try again, and again and again and again. Each time it mathematically “chose” to prioritize different pieces of information and performed incrementally better. A neural network, however, was a black box. It divined patterns, but the patterns it identified didn’t always make intuitive sense to a human observer. The same network that hit on our concept of cat also became enthusiastic about a pattern that looked like some sort of furniture-animal compound, like a cross between an ottoman and a goat.
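What Le describes is essentially an autoencoder: the network is forced to squeeze its input through a narrow bottleneck, its "summary," and is scored on how well it can rebuild the original from that summary alone. A minimal Python/numpy sketch of the loop (my own, with made-up data standing in for YouTube stills):

```python
import numpy as np

rng = np.random.default_rng(1)

# Fake "images": 200 samples of 16 pixels, secretly generated from only 3 underlying factors,
# so a good 3-number summary of each sample exists to be discovered.
factors = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 16))
data = factors @ mixing

# The encoder squeezes 16 pixels down to 3 numbers; the decoder tries to rebuild all 16 from those 3.
W_enc = rng.normal(0, 0.1, (16, 3))
W_dec = rng.normal(0, 0.1, (3, 16))

lr = 0.01
for step in range(2000):
    code = data @ W_enc          # the "summary": what the network chose to retain
    recon = code @ W_dec         # the attempted reconstruction of the original
    err = recon - data           # how badly the rebuild failed

    # Push the reconstruction error back through both stages (plain gradient descent).
    grad_dec = code.T @ err / len(data)
    grad_enc = data.T @ (err @ W_dec.T) / len(data)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(np.mean(err ** 2))  # the reconstruction error shrinks as the summary learns to keep what matters
```

If the summary kept the wrong things, the error would stay large; keeping what matters is the only way for the network to score well.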
Le didn’t see himself in those heady cat years as a language guy, but he felt an urge to connect the dots to his early chatbot. After the cat paper, he realized that if you could ask a network to summarize a photo, you could perhaps also ask it to summarize a sentence. This problem preoccupied Le, along with a Brain colleague named Tomas Mikolov, for the next two years.
In that time, the Brain team outgrew several offices around him. For a while they were on a floor they shared with executives. They got an email at one point from the administrator asking that they please stop allowing people to sleep on the couch in front of Larry Page and Sergey Brin’s suite. It unsettled incoming V.I.P.s. They were then allocated part of a research building across the street, where their exchanges in the microkitchen wouldn’t be squandered on polite chitchat with the suits. That interim also saw dedicated attempts on the part of Google’s competitors to catch up. (As Le told me about his close collaboration with Tomas Mikolov, he kept repeating Mikolov’s name over and over, in an incantatory way that sounded poignant. Le had never seemed so solemn. I finally couldn’t help myself and began to ask, “Is he … ?” Le nodded. “At Facebook,” he replied.)

They spent this period trying to come up with neural-network architectures that could accommodate not only simple photo classifications, which were static, but also complex structures that unfolded over time, like language or music. Many of these were first proposed in the 1990s, and Le and his colleagues went back to those long-ignored contributions to see what they could glean. They knew that once you established a facility with basic linguistic prediction, you could then go on to do all sorts of other intelligent things — like predict a suitable reply to an email, for example, or predict the flow of a sensible conversation. You could sidle up to the sort of prowess that would, from the outside at least, look a lot like thinking.

Part II: Language Machine
5. The Linguistic Turn
The hundred or so current members of Brain — it often feels less like a department within a colossal corporate hierarchy than it does a club or a scholastic society or an intergalactic cantina — came in the intervening years to count among the freest and most widely admired employees in the entire Google organization. They are now quartered in a tiered two-story eggshell building, with large windows tinted a menacing charcoal gray, on the leafy northwestern fringe of the company’s main Mountain View campus. Their microkitchen has a foosball table I never saw used; a Rock Band setup I never saw used; and a Go kit I saw used on a few occasions. (I did once see a young Brain research associate introducing his colleagues to ripe jackfruit, carving up the enormous spiky orb like a turkey.)
When I began spending time at Brain’s offices, in June, there were some rows of empty desks, but most of them were labeled with Post-it notes that said things like “Jesse, 6/27.” Now those are all occupied. When I first visited, parking was not an issue. The closest spaces were those reserved for expectant mothers or Teslas, but there was ample space in the rest of the lot. By October, if I showed up later than 9:30, I had to find a spot across the street.
Brain’s growth made Dean slightly nervous about how the company was going to handle the demand. He wanted to avoid what at Google is known as a “success disaster” — a situation in which the company’s capabilities in theory outpaced its ability to implement a product in practice. At a certain point he did some back-of-the-envelope calculations, which he presented to the executives one day in a two-slide presentation.
“If everyone in the future speaks to their Android phone for three minutes a day,” he told them, “this is how many machines we’ll need.” They would need to double or triple their global computational footprint.
“That,” he observed with a little theatrical gulp and widened eyes, “sounded scary. You’d have to” — he hesitated to imagine the consequences — “build new buildings.”
There was, however, another option: just design, mass-produce and install in dispersed data centers a new kind of chip to make everything faster. These chips would be called T.P.U.s, or “tensor processing units,” and their value proposition — counterintuitively — is that they are deliberately less precise than normal chips. Rather than compute 12.246 times 54.392, they will give you the perfunctory answer to 12 times 54. On a mathematical level, rather than a metaphorical one, a neural network is just a structured series of hundreds or thousands or tens of thousands of matrix multiplications carried out in succession, and it’s much more important that these processes be fast than that they be exact. “Normally,” Dean said, “special-purpose hardware is a bad idea. It usually works to speed up one thing. But because of the generality of neural networks, you can leverage this special-purpose hardware for a lot of other things.”
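The trade-off Dean describes is easy to demonstrate: a network's forward pass is just a chain of matrix multiplications, and running the same chain at reduced precision barely changes which answer comes out on top. A small Python/numpy sketch (my own; float16 here is only a stand-in for the T.P.U.'s reduced-precision arithmetic, not a description of the actual hardware):

```python
import numpy as np

rng = np.random.default_rng(2)

# A made-up two-layer network: 256 inputs -> 128 hidden units -> 10 output scores.
x = rng.normal(size=(1, 256))
W1 = rng.normal(0, 0.1, (256, 128))
W2 = rng.normal(0, 0.1, (128, 10))

def forward(x, W1, W2, dtype):
    """Run the same chain of matrix multiplications at a chosen precision."""
    x, W1, W2 = x.astype(dtype), W1.astype(dtype), W2.astype(dtype)
    h = np.maximum(x @ W1, 0)    # ReLU hidden layer
    return h @ W2

exact = forward(x, W1, W2, np.float64)
rough = forward(x, W1, W2, np.float16)

print(np.argmax(exact), np.argmax(rough))                  # usually the same winning index
print(np.max(np.abs(exact - rough.astype(np.float64))))    # the raw scores differ only slightly
```

Speed matters more than the third decimal place, which is why hardware that rounds aggressively can still be trusted with the work.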
Just as the chip-design process was nearly complete, Le and two colleagues finally demonstrated that neural networks might be configured to handle the structure of language. He drew upon an idea, called “word embeddings,” that had been around for more than 10 years. When you summarize images, you can divine a picture of what each stage of the summary looks like — an edge, a circle, etc. When you summarize language in a similar way, you essentially produce multidimensional maps of the distances, based on common usage, between one word and every single other word in the language. The machine is not “analyzing” the data the way that we might, with linguistic rules that identify some of them as nouns and others as verbs. Instead, it is shifting and twisting and warping the words around in the map. In two dimensions, you cannot make this map useful. You want, for example, “cat” to be in the rough vicinity of “dog,” but you also want “cat” to be near “tail” and near “supercilious” and near “meme,” because you want to try to capture all of the different relationships — both strong and weak — that the word “cat” has to other words. It can be related to all these other words simultaneously only if it is related to each of them in a different dimension. You can’t easily make a 160,000-dimensional map, but it turns out you can represent a language pretty well in a mere thousand or so dimensions — in other words, a universe in which each word is designated by a list of a thousand numbers. Le gave me a good-natured hard time for my continual requests for a mental picture of these maps. “Gideon,” he would say, with the blunt regular demurral of Bartleby, “I do not generally like trying to visualize thousand-dimensional vectors in three-dimensional space.”
Still, certain dimensions in the space, it turned out, did seem to represent legible human categories, like gender or relative size. If you took the thousand numbers that meant “king” and literally just subtracted the thousand numbers that meant “queen,” you got the same numerical result as if you subtracted the numbers for “woman” from the numbers for “man.” And if you took the entire space of the English language and the entire space of French, you could, at least in theory, train a network to learn how to take a sentence in one space and propose an equivalent in the other. You just had to give it millions and millions of English sentences as inputs on one side and their desired French outputs on the other, and over time it would recognize the relevant patterns in words the way that an image classifier recognized the relevant patterns in pixels. You could then give it a sentence in English and ask it to predict the best French analogue.
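A toy sketch of that arithmetic, using a handful of hypothetical four-dimensional vectors in place of the real thousand-dimensional ones:

import numpy as np

# Toy embeddings: hypothetical 4-dimensional vectors standing in for the
# ~1,000-dimensional ones described above.
emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.3]),
    "queen": np.array([0.9, 0.1, 0.8, 0.3]),
    "man":   np.array([0.2, 0.8, 0.1, 0.1]),
    "woman": np.array([0.2, 0.1, 0.8, 0.1]),
    "cat":   np.array([0.1, 0.2, 0.2, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "king" - "man" + "woman" should land closer to "queen" than to anything else.
# (In a real embedding space you would exclude the query words themselves.)
target = emb["king"] - emb["man"] + emb["woman"]
print(max(emb, key=lambda w: cosine(emb[w], target)))   # -> queen, in this toy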
The major difference between words and pixels, however, is that all of the pixels in an image are there at once, whereas words appear in a progression over time. You needed a way for the network to “hold in mind” the progression of a chronological sequence — the complete pathway from the first word to the last. In a period of about a week, in September 2014, three papers came out — one by Le and two others by academics in Canada and Germany — that at last provided all the theoretical tools necessary to do this sort of thing. That research allowed for open-ended projects like Brain’s Magenta, an investigation into how machines might generate art and music. It also cleared the way toward an instrumental task like machine translation. Hinton told me he thought at the time that this follow-up work would take at least five more years.
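Those papers describe what is now called a sequence-to-sequence, or encoder-decoder, design. Below is a bare-bones PyTorch sketch of the shape of that idea, assuming PyTorch is available; it is not Google's production model, which adds attention, many stacked layers and much else.

import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    """Bare-bones encoder-decoder: the encoder 'holds in mind' the source
    sentence as its final hidden state; the decoder unrolls from that state."""
    def __init__(self, src_vocab, tgt_vocab, dim=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.decoder = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        _, state = self.encoder(self.src_emb(src_ids))      # summary of the source
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
        return self.out(dec_out)                             # scores for next words

# Hypothetical shapes: a batch of 2 source sentences of 7 tokens, targets of 9.
model = TinySeq2Seq(src_vocab=10000, tgt_vocab=10000)
logits = model(torch.randint(0, 10000, (2, 7)), torch.randint(0, 10000, (2, 9)))
print(logits.shape)   # torch.Size([2, 9, 10000])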
It’s no great tragedy if neural networks mislabel 1 percent of cats as dogs, but in something like a self-driving car we all want greater assurances.

6. The Ambush
Le’s paper showed that neural translation was plausible, but he had used only a relatively small public data set. (Small for Google, that is — it was actually the biggest public data set in the world. A decade of the old Translate had gathered production data that was between a hundred and a thousand times bigger.) More important, Le’s model didn’t work very well for sentences longer than about seven words.
Mike Schuster, who then was a staff research scientist at Brain, picked up the baton. He knew that if Google didn’t find a way to scale these theoretical insights up to a production level, someone else would. The project took him the next two years. “You think,” Schuster says, “to translate something, you just get the data, run the experiments and you’re done, but it doesn’t work like that.”
Schuster is a taut, focused, ageless being with a tanned, piston-shaped head, narrow shoulders, long camo cargo shorts tied below the knee and neon-green Nike Flyknits. He looks as if he woke up in the lotus position, reached for his small, rimless, elliptical glasses, accepted calories in the form of a modest portion of preserved acorn and completed a relaxed desert decathlon on the way to the office; in reality, he told me, it’s only an 18-mile bike ride each way. Schuster grew up in Duisburg, in the former West Germany’s blast-furnace district, and studied electrical engineering before moving to Kyoto to work on early neural networks. In the 1990s, he ran experiments with a neural-networking machine as big as a conference room; it cost millions of dollars and had to be trained for weeks to do something you could now do on your desktop in less than an hour. He published a paper in 1997 that was barely cited for a decade and a half; this year it has been cited around 150 times. He is not humorless, but he does often wear an expression of some asperity, which I took as his signature combination of German restraint and Japanese restraint.
The issues Schuster had to deal with were tangled. For one thing, Le’s code was custom-written, and it wasn’t compatible with the new open-source machine-learning platform Google was then developing, TensorFlow. In the fall of 2015, Dean assigned two other engineers, Yonghui Wu and Zhifeng Chen, to work with Schuster. It took them two months just to replicate Le’s results on the new system. Le was around, but even he couldn’t always make heads or tails of what they had done.
As Schuster put it, “Some of the stuff was not done in full consciousness. They didn’t know themselves why they worked.”
This February, Google’s research organization — the loose division of the company, roughly a thousand employees in all, dedicated to the forward-looking and the unclassifiable — convened their leads at an offsite retreat at the Westin St. Francis, on Union Square, a luxury hotel slightly less splendid than Google’s own San Francisco shop a mile or so to the east. The morning was reserved for rounds of “lightning talks,” quick updates to cover the research waterfront, and the afternoon was idled away in cross-departmental “facilitated discussions.” The hope was that the retreat might provide an occasion for the unpredictable, oblique, Bell Labs-ish exchanges that kept a mature company prolific.
At lunchtime, Corrado and Dean paired up in search of Macduff Hughes, director of Google Translate. Hughes was eating alone, and the two Brain members took positions at either side. As Corrado put it, “We ambushed him.”
“O.K.,” Corrado said to the wary Hughes, holding his breath for effect. “We have something to tell you.”
They told Hughes that 2016 seemed like a good time to consider an overhaul of Google Translate — the code of hundreds of engineers over 10 years — with a neural network. The old system worked the way all machine translation has worked for about 30 years: It sequestered each successive sentence fragment, looked up those words in a large statistically derived vocabulary table, then applied a battery of post-processing rules to affix proper endings and rearrange it all to make sense. The approach is called “phrase-based statistical machine translation,” because by the time the system gets to the next phrase, it doesn’t know what the last one was. This is why Translate’s output sometimes looked like a shaken bag of fridge magnets. Brain’s replacement would, if it came together, read and render entire sentences at one draft. It would capture context — and something akin to meaning.
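A toy illustration of why phrase-based output can read like fridge magnets: each chunk is looked up on its own, with no memory of its neighbors. The phrase table below is invented for the example.

# Toy illustration of phrase-based lookup (hypothetical table, not Google's).
phrase_table = {
    "the pod bay": "la baie des nacelles",
    "doors": "portes",
    "open": "ouvrir",
}

def phrase_based(sentence):
    out, words = [], sentence.split()
    i = 0
    while i < len(words):
        # Greedily take the longest chunk the table knows about.
        for j in range(len(words), i, -1):
            chunk = " ".join(words[i:j])
            if chunk in phrase_table:
                out.append(phrase_table[chunk])
                i = j
                break
        else:
            out.append(words[i])   # unknown word passes through untranslated
            i += 1
    # Each chunk was translated with no knowledge of the others; reordering and
    # agreement have to be patched up by separate rules afterwards.
    return " ".join(out)

print(phrase_based("open the pod bay doors"))
# -> "ouvrir la baie des nacelles portes": each piece fine, the whole scrambled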
The stakes may have seemed low: Translate generates minimal revenue, and it probably always will. For most Anglophone users, even a radical upgrade in the service’s performance would hardly be hailed as anything more than an expected incremental bump. But there was a case to be made that human-quality machine translation is not only a short-term necessity but also a development very likely, in the long term, to prove transformational. In the immediate future, it’s vital to the company’s business strategy. Google estimates that 50 percent of the internet is in English, which perhaps 20 percent of the world’s population speaks. If Google was going to compete in China — where a majority of market share in search-engine traffic belonged to its competitor Baidu — or India, decent machine translation would be an indispensable part of the infrastructure. Baidu itself had published a pathbreaking paper about the possibility of neural machine translation in July 2015.
‘You think to translate something, you just get the data, run the experiments and you’re done, but it doesn’t work like that.’

And in the more distant, speculative future, machine translation was perhaps the first step toward a general computational facility with human language. This would represent a major inflection point — perhaps the major inflection point — in the development of something that felt like true artificial intelligence.
Most people in Silicon Valley were aware of machine learning as a fast-approaching horizon, so Hughes had seen this ambush coming. He remained skeptical. A modest, sturdily built man of early middle age with mussed auburn hair graying at the temples, Hughes is a classic line engineer, the sort of craftsman who wouldn’t have been out of place at a drafting table at 1970s Boeing. His jeans pockets often look burdened with curious tools of ungainly dimension, as if he were porting around measuring tapes or thermocouples, and unlike many of the younger people who work for him, he has a wardrobe unreliant on company gear. He knew that various people in various places at Google and elsewhere had been trying to make neural translation work — not in a lab but at production scale — for years, to little avail.
Hughes listened to their case and, at the end, said cautiously that it sounded to him as if maybe they could pull it off in three years.
Dean thought otherwise. “We can do it by the end of the year, if we put our minds to it.” One reason people liked and admired Dean so much was that he had a long record of successfully putting his mind to it. Another was that he wasn’t at all embarrassed to say sincere things like “if we put our minds to it.”
Hughes was sure the conversion wasn’t going to happen any time soon, but he didn’t personally care to be the reason. “Let’s prepare for 2016,” he went back and told his team. “I’m not going to be the one to say Jeff Dean can’t deliver speed.”
A month later, they were finally able to run a side-by-side experiment to compare Schuster’s new system with Hughes’s old one. Schuster wanted to run it for English-French, but Hughes advised him to try something else. “English-French,” he said, “is so good that the improvement won’t be obvious.”
It was a challenge Schuster couldn’t resist. The benchmark metric to evaluate machine translation is called a BLEU score, which compares a machine translation with an average of many reliable human translations. At the time, the best BLEU scores for English-French were in the high 20s. An improvement of one point was considered very good; an improvement of two was considered outstanding.
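A simplified sketch of what a BLEU-style score measures: overlap of n-grams with a reference translation, discounted for overly short output. Real BLEU is computed over a whole test corpus, usually against several human references; this toy is sentence-level only.

import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu_ish(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of clipped n-gram
    precisions times a brevity penalty, on a 0-100 scale."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        c, r = ngrams(cand, n), ngrams(ref, n)
        overlap = sum(min(c[g], r[g]) for g in c)
        precisions.append(overlap / max(1, sum(c.values())))
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    brevity = min(1.0, math.exp(1 - len(ref) / len(cand)))
    return 100 * brevity * geo_mean

print(bleu_ish("open the pod bay doors please",
               "please open the pod bay doors"))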
The neural system, on the English-French language pair, showed an improvement over the old system of seven points.
Hughes told Schuster’s team they hadn’t had even half as strong an improvement in their own system in the last four years.
To be sure this wasn’t some fluke in the metric, they also turned to their pool of human contractors to do a side-by-side comparison. The user-perception scores, in which sample sentences were graded from zero to six, showed an average improvement of 0.4 — roughly equivalent to the aggregate gains of the old system over its entire lifetime of development.

In mid-March, Hughes sent his team an email. All projects on the old system were to be suspended immediately.
7. Theory Becomes Product
Until then, the neural-translation team had been only three people — Schuster, Wu and Chen — but with Hughes’s support, the broader team began to coalesce. They met under Schuster’s command on Wednesdays at 2 p.m. in a corner room of the Brain building called Quartz Lake. The meeting was generally attended by a rotating cast of more than a dozen people. When Hughes or Corrado were there, they were usually the only native English speakers. The engineers spoke Chinese, Vietnamese, Polish, Russian, Arabic, German and Japanese, though they mostly spoke in their own efficient pidgin and in math. It is not always totally clear, at Google, who is running a meeting, but in Schuster’s case there was no ambiguity.
The steps they needed to take, even then, were not wholly clear. “This story is a lot about uncertainty — uncertainty throughout the whole process,” Schuster told me at one point. “The software, the data, the hardware, the people. It was like” — he extended his long, gracile arms, slightly bent at the elbows, from his narrow shoulders — “swimming in a big sea of mud, and you can only see this far.” He held out his hand eight inches in front of his chest. “There’s a goal somewhere, and maybe it’s there.”
Most of Google’s conference rooms have videochat monitors, which when idle display extremely high-resolution oversaturated public Google+ photos of a sylvan dreamscape or the northern lights or the Reichstag. Schuster gestured toward one of the panels, which showed a crystalline still of the Washington Monument at night.
“The view from outside is that everyone has binoculars and can see ahead so far.”
The theoretical work to get them to this point had already been painstaking and drawn-out, but the attempt to turn it into a viable product — the part that academic scientists might dismiss as “mere” engineering — was no less difficult. For one thing, they needed to make sure that they were training on good data. Google’s billions of words of training “reading” were mostly made up of complete sentences of moderate complexity, like the sort of thing you might find in Hemingway. Some of this is in the public domain: The original Rosetta Stone of statistical machine translation was millions of pages of the complete bilingual records of the Canadian Parliament. Much of it, however, was culled from 10 years of collected data, including human translations that were crowdsourced from enthusiastic respondents. The team had in their storehouse about 97 million unique English “words.” But once they removed the emoticons, and the misspellings, and the redundancies, they had a working vocabulary of only around 160,000.
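A sketch of the sort of cleanup that shrinks a raw token inventory into a working vocabulary; the frequency cutoff and filters below are illustrative guesses, not the team's actual pipeline.

import re
from collections import Counter

def build_vocab(sentences, min_count=5):
    """Toy vocabulary builder: count tokens, keep only frequent, word-like ones."""
    counts = Counter(tok.lower() for s in sentences for tok in s.split())
    wordlike = re.compile(r"^[a-z]+(?:['-][a-z]+)*$")   # drops emoticons, digits, noise
    return {tok for tok, n in counts.items()
            if n >= min_count and wordlike.match(tok)}

corpus = ["Open the pod bay doors , HAL :-)"] * 5 + ["teh doors"]
print(sorted(build_vocab(corpus)))   # 'teh', ':-)' and ',' are filtered out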
Then you had to refocus on what users actually wanted to translate, which frequently had very little to do with reasonable language as it is employed. Many people, Google had found, don’t look to the service to translate full, complex sentences; they translate weird little shards of language. If you wanted the network to be able to handle the stream of user queries, you had to be sure to orient it in that direction. The network was very sensitive to the data it was trained on. As Hughes put it to me at one point: “The neural-translation system is learning everything it can. It’s like a toddler. ‘Oh, Daddy says that word when he’s mad!’ ” He laughed. “You have to be careful.”
More than anything, though, they needed to make sure that the whole thing was fast and reliable enough that their users wouldn’t notice. In February, the translation of a 10-word sentence took 10 seconds. They could never introduce anything that slow. The Translate team began to conduct latency experiments on a small percentage of users, in the form of faked delays, to identify tolerance. They found that a translation that took twice as long, or even five times as long, wouldn’t be registered. An eightfold slowdown would. They didn’t need to make sure this was true across all languages. In the case of a high-traffic language, like French or Chinese, they could countenance virtually no slowdown. For something more obscure, they knew that users wouldn’t be so scared off by a slight delay if they were getting better quality. They just wanted to prevent people from giving up and switching over to some competitor’s service.
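A sketch of how such a faked-delay experiment might be wired up; the bucket sizes, delay multiples and assignment scheme here are all hypothetical.

import random
import time

# Hypothetical faked-delay experiment: a small slice of traffic gets its
# response time stretched by a known multiple, and abandonment is compared.
DELAY_BUCKETS = {"control": 1.0, "2x": 2.0, "5x": 5.0, "8x": 8.0}

def assign_bucket(user_id, experiment_fraction=0.01):
    """Put ~1% of users into a random delay bucket; everyone else is untouched."""
    random.seed(user_id)                     # stable assignment per user
    if random.random() > experiment_fraction:
        return "control"
    return random.choice(["2x", "5x", "8x"])

def serve_translation(user_id, translate_fn, text):
    start = time.time()
    result = translate_fn(text)
    elapsed = time.time() - start
    extra = (DELAY_BUCKETS[assign_bucket(user_id)] - 1.0) * elapsed
    time.sleep(extra)                        # the faked delay
    # log(user_id, bucket, elapsed + extra), then compare abandonment by bucket
    return result

print(serve_translation(42, lambda s: s[::-1], "pod bay doors"))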
Schuster, for his part, admitted he just didn’t know if they ever could make it fast enough. He remembers a conversation in the microkitchen during which he turned to Chen and said, “There must be something we don’t know to make it fast enough, but I don’t know what it could be.”
He did know, though, that they needed more computers — “G.P.U.s,” graphics processors reconfigured for neural networks — for training.
Hughes went to Schuster to ask what he thought. “Should we ask for a thousand G.P.U.s?”
Schuster said, “Why not 2,000?”

In the more distant, speculative future, machine translation was perhaps the first step toward a general computational facility with human language.

Ten days later, they had the additional 2,000 processors.
By April, the original lineup of three had become more than 30 people — some of them, like Le, on the Brain side, and many from Translate. In May, Hughes assigned a kind of provisional owner to each language pair, and they all checked their results into a big shared spreadsheet of performance evaluations. At any given time, at least 20 people were running their own independent weeklong experiments and dealing with whatever unexpected problems came up. One day a model, for no apparent reason, started taking all the numbers it came across in a sentence and discarding them. There were months when it was all touch and go. “People were almost yelling,” Schuster said.
By late spring, the various pieces were coming together. The team introduced something called a “word-piece model,” a “coverage penalty,” “length normalization.” Each part improved the results, Schuster says, by maybe a few percentage points, but in aggregate they had significant effects. Once the model was standardized, it would be only a single multilingual model that would improve over time, rather than the 150 different models that Translate currently used. Still, the paradox — that a tool built to further generalize, via learning machines, the process of automation required such an extraordinary amount of concerted human ingenuity and effort — was not lost on them. So much of what they did was just gut. How many neurons per layer did you use? 1,024 or 512? How many layers? How many sentences did you run through at a time? How long did you train for?
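A sketch of one of those pieces, length normalization: without it, a scoring function prefers short translations, because every extra word multiplies in another probability less than one, so dividing the summed log-probability by a length term levels the field. The exact form and exponent below follow one common recipe; the team's paper describes its own variant.

import math

def length_normalized_score(log_probs, alpha=0.6):
    """Score a candidate by total log-probability divided by a length penalty."""
    lp = ((5 + len(log_probs)) / 6) ** alpha
    return sum(log_probs) / lp

# A short, mediocre candidate vs. a longer, better one (hypothetical numbers).
short = [-0.5, -0.5, -0.5]
long_ = [-0.28, -0.28, -0.28, -0.28, -0.28, -0.28]

print(sum(short), sum(long_))                                    # raw sums favor the short one
print(length_normalized_score(short), length_normalized_score(long_))  # normalized favors the long one

Even a knob like the exponent here was the kind of thing that could be settled only by running experiments and watching the scores.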
“We did hundreds of experiments,” Schuster told me, “until we knew that we could stop the training after one week. You’re always saying: When do we stop? How do I know I’m done? You never know you’re done. The machine-learning mechanism is never perfect. You need to train, and at some point you have to stop. That’s the very painful nature of this whole system. It’s hard for some people. It’s a little bit an art — where you put your brush to make it nice. It comes from just doing it. Some people are better, some worse.”
By May, the Brain team understood that the only way they were ever going to make the system fast enough to implement as a product was if they could run it on T.P.U.s, the special-purpose chips that Dean had called for. As Chen put it: “We did not even know if the code would work. But we did know that without T.P.U.s, it definitely wasn’t going to work.” He remembers going to Dean one on one to plead, “Please reserve something for us.” Dean had reserved them. The T.P.U.s, however, didn’t work right out of the box. Wu spent two months sitting next to someone from the hardware team in an attempt to figure out why. They weren’t just debugging the model; they were debugging the chip. The neural-translation project would be proof of concept for the whole infrastructural investment.
One Wednesday in June, the meeting in Quartz Lake began with murmurs about a Baidu paper that had recently appeared on the discipline’s chief online forum. Schuster brought the room to order. “Yes, Baidu came out with a paper. It feels like someone looking through our shoulder — similar architecture, similar results.” The company’s BLEU scores were essentially what Google achieved in its internal tests in February and March. Le didn’t seem ruffled; his conclusion seemed to be that it was a sign Google was on the right track. “It is very similar to our system,” he said with quiet approval.
The Google team knew that they could have published their results earlier and perhaps beaten their competitors, but as Schuster put it: “Launching is more important than publishing. People say, ‘Oh, I did something first,’ but who cares, in the end?”
This did, however, make it imperative that they get their own service out first and better. Hughes had a fantasy that they wouldn’t even inform their users of the switch. They would just wait and see if social media lit up with suspicions about the vast improvements.
“We don’t want to say it’s a new system yet,” he told me at 5:36 p.m. two days after Labor Day, one minute before they rolled out Chinese-to-English to 10 percent of their users, without telling anyone. “We want to make sure it works. The ideal is that it’s exploding on Twitter: ‘Have you seen how awesome Google Translate got?’ ”
8. A Celebration
The only two reliable measures of time in the seasonless Silicon Valley are the rotations of seasonal fruit in the microkitchens — from the pluots of midsummer to the Asian pears and Fuyu persimmons of early fall — and the zigzag of technological progress. On an almost uncomfortably warm Monday afternoon in late September, the team’s paper was at last released. It had an almost comical 31 authors. The next day, the members of Brain and Translate gathered to throw themselves a little celebratory reception in the Translate microkitchen. The rooms in the Brain building, perhaps in homage to the long winters of their diaspora, are named after Alaskan locales; the Translate building’s theme is Hawaiian.
The Hawaiian microkitchen has a slightly grainy beach photograph on one wall, a small lei-garlanded thatched-hut service counter with a stuffed parrot at the center and ceiling fixtures fitted to resemble paper lanterns. Two sparse histograms of bamboo poles line the sides, like the posts of an ill-defended tropical fort. Beyond the bamboo poles, glass walls and doors open onto rows of identical gray desks on either side. That morning had seen the arrival of new hooded sweatshirts to honor 10 years of Translate, and many team members went over to the party from their desks in their new gear. They were in part celebrating the fact that their decade of collective work was, as of that day, en route to retirement. At another institution, these new hoodies might thus have become a costume of bereavement, but the engineers and computer scientists from both teams all seemed pleased.

‘It was like swimming in a big sea of mud, and you can only see this far.’ Schuster held out his hand eight inches in front of his chest.

Google’s neural translation was at last working. By the time of the party, the company’s Chinese-English test had already processed 18 million queries. One engineer on the Translate team was running around with his phone out, trying to translate entire sentences from Chinese to English using Baidu’s alternative. He crowed with glee to anybody who would listen. “If you put in more than two characters at once, it times out!” (Baidu says this problem has never been reported by users.)
When word began to spread, over the following weeks, that Google had introduced neural translation for Chinese to English, some people speculated that it was because that was the only language pair for which the company had decent results. Everybody at the party knew that the reality of their achievement would be clear in November. By then, however, many of them would be on to other projects.
Hughes cleared his throat and stepped in front of the tiki bar. He wore a faded green polo with a rumpled collar, lightly patterned across the midsection with dark bands of drying sweat. There had been last-minute problems, and then last-last-minute problems, including a very big measurement error in the paper and a weird punctuation-related bug in the system. But everything was resolved — or at least sufficiently resolved for the moment. The guests quieted. Hughes ran efficient and productive meetings, with a low tolerance for maundering or side conversation, but he was given pause by the gravity of the occasion. He acknowledged that he was, perhaps, stretching a metaphor, but it was important to him to underline the fact, he began, that the neural translation project itself represented a “collaboration between groups that spoke different languages.”
Their neural-translation project, he continued, represented a “step function forward” — that is, a discontinuous advance, a vertical leap rather than a smooth curve. The relevant translation had been not just between the two teams but from theory into reality. He raised a plastic demi-flute of expensive-looking Champagne.
“To communication,” he said, “and cooperation!”
The engineers assembled looked around at one another and gave themselves over to little circumspect whoops and applause.
Jeff Dean stood near the center of the microkitchen, his hands in his pockets, shoulders hunched slightly inward, with Corrado and Schuster. Dean saw that there was some diffuse preference that he contribute to the observance of the occasion, and he did so in a characteristically understated manner, with a light, rapid, concise addendum.
What they had shown, Dean said, was that they could do two major things at once: “Do the research and get it in front of, I dunno, half a billion people.”
Everyone laughed, not because it was an exaggeration but because it wasn’t.

Epilogue: Machines Without Ghosts
Perhaps the most famous historic critique of artificial intelligence, or the claims made on its behalf, implicates the question of translation. The Chinese Room argument was proposed in 1980 by the Berkeley philosopher John Searle. In Searle’s thought experiment, a monolingual English speaker sits alone in a cell. An unseen jailer passes him, through a slot in the door, slips of paper marked with Chinese characters. The prisoner has been given a set of tables and rules in English for the composition of replies. He becomes so adept with these instructions that his answers are soon “absolutely indistinguishable from those of Chinese speakers.” Should the unlucky prisoner be said to “understand” Chinese? Searle thought the answer was obviously not. This metaphor for a computer, Searle later wrote, exploded the claim that “the appropriately programmed digital computer with the right inputs and outputs would thereby have a mind in exactly the sense that human beings have minds.”
For the Google Brain team, though, or for nearly everyone else who works in machine learning in Silicon Valley, that view is entirely beside the point. This doesn’t mean they’re just ignoring the philosophical question. It means they have a fundamentally different view of the mind. Unlike Searle, they don’t assume that “consciousness” is some special, numinously glowing mental attribute — what the philosopher Gilbert Ryle called the “ghost in the machine.” They just believe instead that the complex assortment of skills we call “consciousness” has randomly emerged from the coordinated activity of many different simple mechanisms. The implication is that our facility with what we consider the higher registers of thought is no different in kind from what we’re tempted to perceive as the lower registers. Logical reasoning, on this account, is seen as a lucky adaptation; so is the ability to throw and catch a ball. Artificial intelligence is not about building a mind; it’s about the improvement of tools to solve problems. As Corrado said to me on my very first day at Google, “It’s not about what a machine ‘knows’ or ‘understands’ but what it ‘does,’ and — more importantly — what it doesn’t do yet.”
Where you come down on “knowing” versus “doing” has real cultural and social implications. At the party, Schuster came over to me to express his frustration with the paper’s media reception. “Did you see the first press?” he asked me. He paraphrased a headline from that morning, blocking it word by word with his hand as he recited it: GOOGLE SAYS A.I. TRANSLATION IS INDISTINGUISHABLE FROM HUMANS’. Over the final weeks of the paper’s composition, the team had struggled with this; Schuster often repeated that the message of the paper was “It’s much better than it was before, but not as good as humans.” He had hoped it would be clear that their efforts weren’t about replacing people but helping them.
And yet the rise of machine learning makes it more difficult for us to carve out a special place for us. If you believe, with Searle, that there is something special about human “insight,” you can draw a clear line that separates the human from the automated. If you agree with Searle’s antagonists, you can’t. It is understandable why so many people cling fast to the former view. At a 2015 M.I.T. conference about the roots of artificial intelligence, Noam Chomsky was asked what he thought of machine learning. He pooh-poohed the whole enterprise as mere statistical prediction, a glorified weather forecast. Even if neural translation attained perfect functionality, it would reveal nothing profound about the underlying nature of language. It could never tell you if a pronoun took the dative or the accusative case. This kind of prediction makes for a good tool to accomplish our ends, but it doesn’t succeed by the standards of furthering our understanding of why things happen the way they do. A machine can already detect tumors in medical scans better than human radiologists, but the machine can’t tell you what’s causing the cancer.
Then again, can the radiologist?
Medical diagnosis is one field most immediately, and perhaps unpredictably, threatened by machine learning. Radiologists are extensively trained and extremely well paid, and we think of their skill as one of professional insight — the highest register of thought. In the past year alone, researchers have shown not only that neural networks can find tumors in medical images much earlier than their human counterparts but also that machines can even make such diagnoses from the texts of pathology reports. What radiologists do turns out to be something much closer to predictive pattern-matching than logical analysis. They’re not telling you what caused the cancer; they’re just telling you it’s there.

Once you’ve built a robust pattern-matching apparatus for one purpose, it can be tweaked in the service of others. One Translate engineer took a network he put together to judge artwork and used it to drive an autonomous radio-controlled car. A network built to recognize a cat can be turned around and trained on CT scans — and on infinitely more examples than even the best doctor could ever review. A neural network built to translate could work through millions of pages of documents of legal discovery in the tiniest fraction of the time it would take the most expensively credentialed lawyer. The kinds of jobs taken by automatons will no longer be just repetitive tasks that were once — unfairly, it ought to be emphasized — associated with the supposed lower intelligence of the uneducated classes. We’re not only talking about three and a half million truck drivers who may soon lack careers. We’re talking about inventory managers, economists, financial advisers, real estate agents. What Brain did over nine months is just one example of how quickly a small group at a large company can automate a task nobody ever would have associated with machines.
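A minimal sketch of that kind of retargeting, usually called transfer learning: keep the pattern-matching layers of a network trained on everyday photos, and retrain only a small new final layer on the new task. It assumes a recent version of PyTorch and torchvision; the two-class medical example is hypothetical.

import torch
import torch.nn as nn
from torchvision import models

# Start from a network trained to recognize everyday photos (cats included).
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the learned pattern-matching layers...
for p in net.parameters():
    p.requires_grad = False

# ...and bolt on a new final layer for the new task, e.g. two classes of scan.
net.fc = nn.Linear(net.fc.in_features, 2)

# Only the new layer is trained on the new examples, whatever the repurposed
# task supplies -- CT scans, documents rendered as features, and so on.
optimizer = torch.optim.Adam(net.fc.parameters(), lr=1e-3)
print(sum(p.numel() for p in net.parameters() if p.requires_grad))  # just the new head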
The most important thing happening in Silicon Valley right now is not disruption. Rather, it’s institution-building — and the consolidation of power — on a scale and at a pace that are both probably unprecedented in human history. Brain has interns; it has residents; it has “ninja” classes to train people in other departments. Everywhere there are bins of free bike helmets, and free green umbrellas for the two days a year it rains, and little fruit salads, and nap pods, and shared treadmill desks, and massage chairs, and random cartons of high-end pastries, and places for baby-clothes donations, and two-story climbing walls with scheduled instructors, and reading groups and policy talks and variegated support networks. The recipients of these major investments in human cultivation — for they’re far more than perks for proles in some digital salt mine — have at hand the power of complexly coordinated servers distributed across 13 data centers on four continents, data centers that draw enough electricity to light up large cities.

But even enormous institutions like Google will be subject to this wave of automation; once machines can learn from human speech, even the comfortable job of the programmer is threatened. As the party in the tiki bar was winding down, a Translate engineer brought over his laptop to show Hughes something. The screen swirled and pulsed with a vivid, kaleidoscopic animation of brightly colored spheres in long looping orbits that periodically collapsed into nebulae before dispersing once more.
Hughes recognized what it was right away, but I had to look closely before I saw all the names — of people and files. It was an animation of the history of 10 years of changes to the Translate code base, every single buzzing and blooming contribution by every last team member. Hughes reached over gently to skip forward, from 2006 to 2008 to 2015, stopping every once in a while to pause and remember some distant campaign, some ancient triumph or catastrophe that now hurried by to be absorbed elsewhere or to burst on its own. Hughes pointed out how often Jeff Dean’s name expanded here and there in glowing spheres.

Hughes called over Corrado, and they stood transfixed. To break the spell of melancholic nostalgia, Corrado, looking a little wounded, glanced up and said, “So when do we get to delete it?”
“Don’t worry about it,” Hughes said. “The new code base is going to grow. Everything grows.”
Gideon Lewis-Kraus is a writer at large for the magazine and a fellow at New America.

======== Appendix ========

Referenced Here: Google Translate, Google Brain, Jun Rekimoto, Sundar Pichai, Jeff Dean, Andrew Ng, Baidu, Project Marvin, Geoffrey Hinton, Max Planck Institute for Biological Cybernetics, Quoc Le, Macduff Hughes, Marvin Minsky


Deep Learning Update

This TED talk by Jeremy Howard, delivered in Brussels in December 2014, reveals the staggering progress made in the field of deep learning:

Jeremy Howard’s TED Talk on Deep Learning

In this TED talk, he speaks about:

– Amazon and Netflix use machine learning to suggest products that you might like.
– IBM’s Watson beat the two greatest human champions at Jeopardy!
– Google’s car has now driven itself more than a million miles without an accident.
– Geoffrey Hinton’s team beat all the others, in just two weeks, at identifying promising new drugs.
– A deep-learning system learned to recognize a wide variety of German street signs.

He demonstrates:

– that computers can see: an image-recognition application in which 1.5 million pictures of cars are classified, and a human helps the machine learn by “training” it to recognize “front,” “back,” “angle” and so on. He says there are 16,000 dimensions to the analysis. He asks: could a pathologist use this to look for areas of mitosis? Could a radiologist? A second application, built at Stanford, lets a computer look at an image and describe in text, with some success, what the image is about. Humans asked to judge preferred the computer’s description 25 percent of the time; he predicts it will surpass human performance in less than a year.
– that computers can understand: he shows how a Stanford-based approach can read a sentence and gauge its sentiment.
– that computers can search images and match them to text; he points out that this breakthrough came only in the last few months. (Google’s existing image search relies on the text tags attached to an image, and so is not doing this.)
– that computers can listen: a voice-recognition application in which an English speaker has his speech translated in real time into spoken Chinese, in his own voice rather than a synthetic one.
– that computers can write.

He speaks about the exponential growth in machines’ capacity to understand that is under way. He believes that within the next five years, machine-learning performance will exceed human performance.

He does not believe better education will help. He thinks now is the time to begin adjusting our social structures to accommodate this new world.

He also speaks about applications:
– medical diagnostics through analysis of cancer tissue.

From Wikipedia:
Jeremy Howard
Jeremy Howard (born 1973) is an Australian data scientist and entrepreneur.[3] He is the CEO and Founder at Enlitic, an advanced machine learning company in San Francisco, California. Previously, Howard was the President and Chief Scientist at Kaggle, a community and competition platform of over 200,000 data scientists. Howard is the youngest faculty member at Singularity University, where he teaches data science. He is also a Young Global Leader with the World Economic Forum, and spoke at the World Economic Forum Annual Meeting 2014 on “Jobs For The Machines.”[4] Howard advised Khosla Ventures as their Data Strategist, identifying the biggest opportunities for investing in data driven startups and mentoring their portfolio companies to build data-driven businesses. Howard was the founding CEO of two successful Australian startups, FastMail and Optimal Decisions Group. Before that, he spent eight years in management consulting, at McKinsey & Company and AT Kearney.

Kaggle
Howard first became involved with Kaggle, founded in April 2010,[8] after becoming the globally top-ranked participant in data science competitions in both 2010 and 2011. The competitions that Howard won involved tourism forecasting[1] and predicting the success of grant applications.[2] Howard then became the President and Chief Scientist of Kaggle.[9]

Enlitic
In August 2014, Howard founded Enlitic with the mission of leveraging recent advances in machine learning to make medical diagnostics and clinical decision support tools faster, more accurate, and more accessible. Enlitic uses state-of-the-art deep-learning algorithms to diagnose illness and disease.[13] Howard believes that today, machine learning algorithms are actually as good as or better than humans at many things that we think of as being uniquely human capabilities.[14] He projects that the application of deep learning will have the most significant impact on medicine out of any technology during this decade by effectively aggregating data.[15] On October 28, 2014, Howard announced Enlitic’s seed funding round.[16]

History of Computing

http://us.penguingroup.com/static/packages/us/kurzweil/excerpts/timeline/timeline2.htm

TIME LINE
1950
Eckert and Mauchly develop UNIVAC, the first commercially marketed computer. It is used to compile the results of the U.S. census, marking the first time this census is handled by a programmable computer.
1950
In his paper “Computing Machinery and Intelligence,” Alan Turing presents the Turing Test, a means for determining whether a machine is intelligent.
1950
Commercial color television is first broadcast in the United States, and transcontinental black-and-white television is available within the next year.
1950
Claude Elwood Shannon writes “Programming a Computer for Playing Chess,” published in Philosophical Magazine.
1951
Eckert and Mauchly build EDVAC, which is the first computer to use the stored-program concept. The work takes place at the Moore School at the University of Pennsylvania.
1951
Paris is the host to a Cybernetics Congress.
1952
UNIVAC, used by the Columbia Broadcasting System (CBS) television network, successfully predicts the election of Dwight D. Eisenhower as president of the United States.
1952
Pocket-sized transistor radios are introduced.
1952
Nathaniel Rochester designs the 701, IBM’s first production-line electronic digital computer. It is marketed for scientific use.
1953
The chemical structure of the DNA molecule is discovered by James D. Watson and Francis H. C. Crick.
1953
Philosophical Investigations by Ludwig Wittgenstein and Waiting for Godot, a play by Samuel Beckett, are published. Both documents are considered of major importance to modern existentialism.
1953
Marvin Minsky and John McCarthy get summer jobs at Bell Laboratories.
1955
William Shockley’s Semiconductor Laboratory is founded, thereby starting Silicon Valley.
1955
The Remington Rand Corporation and Sperry Gyroscope join forces and become the Sperry-Rand Corporation. For a time, it presents serious competition to IBM.
1955
IBM introduces its first transistor calculator. It uses 2,200 transistors instead of the 1,200 vacuum tubes that would otherwise be required for equivalent computing power.
1955
A U.S. company develops the first design for a robotlike machine to be used in industry.
1955
IPL-II, the first artificial intelligence language, is created by Allen Newell, J. C. Shaw, and Herbert Simon.
1955
The new space program and the U.S. military recognize the importance of having computers with enough power to launch rockets to the moon and missiles through the stratosphere. Both organizations supply major funding for research.
1956
The Logic Theorist, which uses recursive search techniques to solve mathematical problems, is developed by Allen Newell, J. C. Shaw, and Herbert Simon.
1956
John Backus and a team at IBM invent FORTRAN, the first scientific computer-programming language.
1956
Stanislaw Ulam develops MANIAC I, the first computer program to beat a human being in a chess game.
1956
The first commercial watch to run on electric batteries is presented by the Lip company of France.
1956
The term Artificial Intelligence is coined at a computer conference at Dartmouth College.
1957
Kenneth H. Olsen founds Digital Equipment Corporation.
1957
The General Problem Solver, which uses recursive search to solve problems, is developed by Allen Newell, J. C. Shaw, and Herbert Simon.
1957
Noam Chomsky writes Syntactic Structures, in which he seriously considers the computation required for natural-language understanding. This is the first of the many important works that will earn him the title Father of Modern Linguistics.
1958
An integrated circuit is created by Texas Instruments’ Jack St. Clair Kilby.
1958
The Artificial Intelligence Laboratory at the Massachusetts Institute of Technology is founded by John McCarthy and Marvin Minsky.
1958
Allen Newell and Herbert Simon make the prediction that a digital computer will be the world’s chess champion within ten years.
1958
LISP, an early AI language, is developed by John McCarthy.
1958
The Defense Advanced Research Projects Agency, which will fund important computer-science research for years in the future, is established.
1958
Seymour Cray builds the Control Data Corporation 1604, the first fully transistorized supercomputer.
1958-1959
Jack Kilby and Robert Noyce each develop the computer chip independently. The computer chip leads to the development of much cheaper and smaller computers.
1959
Arthur Samuel completes his study in machine learning. The project, a checkers-playing program, performs as well as some of the best players of the time.
1959
Electronic document preparation increases the consumption of paper in the United States. This year, the nation will consume 7 million tons of paper. In 1986, 22 million tons will be used. American businesses alone will use 850 billion pages in 1981, 2.5 trillion pages in 1986, and 4 trillion in 1990.
1959
COBOL, a computer language designed for business use, is developed by Grace Murray Hopper, who was also one of the first programmers of the Mark I.
1959
Xerox introduces the first commercial copier.
1960
Theodore Harold Maiman develops the first laser. It uses a ruby cylinder.
1960
The recently established Defense Department’s Advanced Research Projects Agency substantially increases its funding for computer research.
1960
There are now about six thousand computers in operation in the United States.
1960s
Neural-net machines are quite simple and incorporate a small number of neurons organized in only one or two layers. These models are shown to be limited in their capabilities.
1961
The first time-sharing computer is developed at MIT.
1961
President John F. Kennedy provides the support for space project Apollo and inspiration for important research in computer science when he addresses a joint session of Congress, saying, “I believe we should go to the moon.”
1962
The world’s first industrial robots are marketed by a U.S. company.
1962
Frank Rosenblatt defines the Perceptron in his Principles of Neurodynamics. Rosenblatt first introduced the Perceptron, a simple processing element for neural networks, at a conference in 1959.
1963
The Artificial Intelligence Laboratory at Stanford University is founded by John McCarthy.
1963
The influential Steps Toward Artificial Intelligence by Marvin Minsky is published.
1963
Digital Equipment Corporation announces the PDP-8, which is the first successful minicomputer.
1964
IBM introduces its 360 series, thereby further strengthening its leadership in the computer industry.
1964
Thomas E. Kurtz and John G. Kemeny of Dartmouth College invent BASIC (Beginner’s All-purpose Symbolic Instruction Code).
1964
Daniel Bobrow completes his doctoral work on Student, a natural-language program that can solve high-school-level word problems in algebra.
1964
Gordon Moore’s prediction, made this year, says integrated circuits will double in complexity each year. This will become known as Moore’s Law and prove true (with later revisions) for decades to come.
1964
Marshall McLuhan, via his Understanding Media, foresees the potential for electronic media, especially television, to create a “global village” in which “the medium is the message.”
1965
The Robotics Institute at Carnegie Mellon University, which will become a leading research center for AI, is founded by Raj Reddy.
1965
Hubert Dreyfus presents a set of philosophical arguments against the possibility of artificial intelligence in a RAND corporate memo entitled “Alchemy and Artificial Intelligence.”
1965
Herbert Simon predicts that by 1985 “machines will be capable of doing any work a man can do.”
1966
The Amateur Computer Society, possibly the first personal computer club, is founded by Stephen B. Gray. The Amateur Computer Society Newsletter is one of the first magazines about computers.
1967
The first internal pacemaker is developed by Medtronics. It uses integrated circuits.
1968
Gordon Moore and Robert Noyce found Intel (Integrated Electronics) Corporation.
1968
The idea of a computer that can see, speak, hear, and think sparks imaginations when HAL is presented in the film 2001: A Space Odyssey, by Arthur C. Clarke and Stanley Kubrick.
1969
Marvin Minsky and Seymour Papert present the limitation of single-layer neural nets in their book Perceptrons. The book’s pivotal theorem shows that a Perceptron is unable to determine if a line drawing is fully connected. The book essentially halts funding for neural-net research.
1970
The GNP, on a per capita basis and in constant 1958 dollars, is $3,500, or more than six times as much as a century before.
1970
The floppy disc is introduced for storing data in computers.
c. 1970
Researchers at the Xerox Palo Alto Research Center (PARC) develop the first personal computer, called Alto. PARC’s Alto pioneers the use of bit-mapped graphics, windows, icons, and mouse pointing devices.
1970
Terry Winograd completes his landmark thesis on SHRDLU, a natural-language system that exhibits diverse intelligent behavior in the small world of children’s blocks. SHRDLU is criticized, however, for its lack of generality.
1971
The Intel 4004, the first microprocessor, is introduced by Intel.
1971
The first pocket calculator is introduced. It can add, subtract, multiply, and divide.
1972
Continuing his criticism of the capabilities of AI, Hubert Dreyfus publishes What Computers Can’t Do, in which he argues that symbol manipulation cannot be the basis of human intelligence.
1973
Stanley H. Cohen and Herbert W. Boyer show that DNA strands can be cut, joined, and then reproduced by inserting them into the bacterium Escherichia coli. This work creates the foundation for genetic engineering.
1974
Creative Computing starts publication. It is the first magazine for home computer hobbyists.
1974
The 8-bit 8080, which is the first general-purpose microprocessor, is announced by Intel.
1975
Sales of microcomputers in the United States reach more than five thousand, and the first personal computer, the Altair 8800, is introduced. It has 256 bytes of memory.
1975
BYTE, the first widely distributed computer magazine, is published.
1975
Gordon Moore revises his observation on the doubling rate of transistors on an integrated circuit from twelve months to twenty-four months.
1976
Kurzweil Computer Products introduces the Kurzweil Reading Machine (KRM), the first print-to-speech reading machine for the blind. Based on the first omni-font (any font) optical character recognition (OCR) technology, the KRM scans and reads aloud any printed materials (books, magazines, typed documents).
1976
Stephen G. Wozniak and Steven P. Jobs found Apple Computer Corporation.
1977
The concept of true-to-life robots with convincing human emotions is imaginatively portrayed in the film Star Wars.
1977
For the first time, a telephone company conducts large-scale experiments with fiber optics in a telephone system.
1977
The Apple II, the first personal computer to be sold in assembled form and the first with color graphics capability, is introduced and successfully marketed. (JCR buys first Apple II at KO in 1978.)
1978
Speak & Spell, a computerized learning aid for young children, is introduced by Texas Instruments. This is the first product that electronically duplicates the human vocal tract on a chip.
1979
In a landmark study by nine researchers published in the Journal of the American Medical Association, the performance of the computer program MYCIN is compared with that of doctors in diagnosing ten test cases of meningitis. MYCIN does at least as well as the medical experts. The potential of expert systems in medicine becomes widely recognized.
1979
Dan Bricklin and Bob Frankston establish the personal computer as a serious business tool when they develop VisiCalc, the first electronic spreadsheet.
1980
AI industry revenue is a few million dollars this year.
1980s
As neuron models are becoming potentially more sophisticated, the neural network paradigm begins to make a comeback, and networks with multiple layers are commonly used.
1981
Xerox introduces the Star Computer, thus launching the concept of Desktop Publishing. Apple’s Laserwriter, available in 1985, will further increase the viability of this inexpensive and efficient way for writers and artists to create their own finished documents.
1981
IBM introduces its Personal Computer (PC).
1981
The prototype of the Bubble Jet printer is presented by Canon.
1982
Compact disc players are marketed for the first time.
1982
Mitch Kapor presents Lotus 1-2-3, an enormously popular spreadsheet program.
1983
Fax machines are fast becoming a necessity in the business world.
1983
The Musical Instrument Digital Interface (MIDI) is presented in Los Angeles at the first North American Music Manufacturers show.
1983
Six million personal computers are sold in the United States.
1984
The Apple Macintosh introduces the “desktop metaphor,” pioneered at Xerox, including bit-mapped graphics, icons, and the mouse.
1984
William Gibson uses the term cyberspace in his book Neuromancer.
1984
The Kurzweil 250 (K250) synthesizer, considered to be the first electronic instrument to successfully emulate the sounds of acoustic instruments, is introduced to the market.
1985
Marvin Minsky publishes The Society of Mind, in which he presents a theory of the mind where intelligence is seen to be the result of proper organization of a hierarchy of minds with simple mechanisms at the lowest level of the hierarchy.
1985
MIT’s Media Laboratory is founded by Jerome Weisner and Nicholas Negroponte. The lab is dedicated to researching possible applications and interactions of computer science, sociology, and artificial intelligence in the context of media technology.
1985
There are 116 million jobs in the United States, compared to 12 million in 1870. In the same period, the number of those employed has grown from 31 percent to 48 percent, and the per capita GNP in constant dollars has increased by 600 percent. These trends show no signs of abating.
1986
Electronic keyboards account for 55.2 percent of the American musical keyboard market, up from 9.5 percent in 1980.
1986
Life expectancy is about 74 years in the United States. Only 3 percent of the American workforce is involved in the production of food. Fully 76 percent of American adults have high-school diplomas, and 7.3 million U.S. students are enrolled in college.
1987
NYSE stocks have their greatest single-day loss due, in part, to computerized trading.
1987
Current speech systems can provide any one of the following: a large vocabulary, continuous speech recognition, or speaker independence.
1987
Robotic-vision systems are now a $300 million industry and will grow to $800 million by 1990.
1988
Computer memory today costs only one hundred millionth of what it did in 1950.
1988
Marvin Minsky and Seymour Papert publish a revised edition of Perceptrons in which they discuss recent developments in neural network machinery for intelligence.
1988
In the United States, 4,700,000 microcomputers, 120,000 minicomputers, and 11,500 mainframes are sold this year.
1988
W. Daniel Hillis’s Connection Machine is capable of 65,536 computations at the same time.
1988
Notebook computers are replacing the bigger laptops in popularity.
1989
Intel introduces the 16-megahertz (MHz) 80386SX, 2.5 MIPS microprocessor.
1990
Nautilus, the first CD-ROM magazine, is published.
1990
The development of HyperText Markup Language by researcher Tim Berners-Lee and its release by CERN, the high-energy physics laboratory in Geneva, Switzerland, leads to the conception of the World Wide Web.
1991
Cell phones and e-mail are increasing in popularity as business and personal communication tools.
1992
The first double-speed CD-ROM drive becomes available from NEC.
1992
The first personal digital assistant (PDA), a hand-held computer, is introduced at the Consumer Electronics Show in Chicago. The developer is Apple Computer.
1993
The Pentium 32-bit microprocessor is launched by Intel. This chip has 3.1 million transistors.
1994
The World Wide Web emerges.
1994
America Online now has more than 1 million subscribers.
1994
Scanners and CD-ROMs are becoming widely used.
1994
Digital Equipment Corporation introduces a 300-MHz version of the Alpha AXP processor that executes 1 billion instructions per second.
1996
Compaq Computer and NEC Computer Systems ship hand-held computers running Windows CE.
1996
NEC Electronics ships the R4101 processor for personal digital assistants. It includes a touch-screen interface.
1997
Deep Blue defeats Garry Kasparov, the world chess champion, in a regulation tournament.
1997
Dragon Systems introduces Naturally Speaking, the first continuous-speech dictation software product.
1997
Video phones are being used in business settings.
1997
Face-recognition systems are beginning to be used in payroll check-cashing machines.
1998
The Dictation Division of Lernout & Hauspie Speech Products (formerly Kurzweil Applied Intelligence) introduces Voice Xpress Plus, the first continuous-speech-recognition program with the ability to understand natural-language commands.
1998
Routine business transactions over the phone are beginning to be conducted between a human customer and an automated system that engages in a verbal dialogue with the customer (e.g., United Airlines reservations).
1998
Investment funds are emerging that use evolutionary algorithms and neural nets to make investment decisions (e.g., Advanced Investment Technologies).
1998
The World Wide Web is ubiquitous. It is routine for high-school students and local grocery stores to have web sites.
1998
Automated personalities, which appear as animated faces that speak with realistic mouth movements and facial expressions, are working in laboratories. These personalities respond to the spoken statements and facial expressions of their human users. They are being developed to be used in future user interfaces for products and services, as personalized research and business assistants, and to conduct transactions.
1998
Microvision’s Virtual Retinal Display (VRD) projects images directly onto the user’s retinas. Current units are expensive, but consumer versions are projected for 1999.
1998
“Bluetooth” technology is being developed for “body” local area networks (LANs) and for wireless communication between personal computers and associated peripherals. Wireless communication is being developed for high-bandwidth connection to the Web.
1999
Ray Kurzweil’s The Age of Spiritual Machines: When Computers Exceed Human Intelligence is published, available at your local bookstore!

FORECASTS:

2009
A $1,000 personal computer can perform about a trillion calculations per second.
Personal computers with high-resolution visual displays come in a range of sizes, from those small enough to be embedded in clothing and jewelry up to the size of a thin book.
Cables are disappearing. Communication between components uses short-distance wireless technology. High-speed wireless communication provides access to the Web.
The majority of text is created using continuous speech recognition. Also ubiquitous are language user interfaces (LUIs).
Most routine business transactions (purchases, travel, reservations) take place between a human and a virtual personality. Often, the virtual personality includes an animated visual presence that looks like a human face.
Although traditional classroom organization is still common, intelligent courseware has emerged as a common means of learning.
Pocket-sized reading machines for the blind and visually impaired, “listening machines” (speech-to-text conversion) for the deaf, and computer-controlled orthotic devices for paraplegic individuals result in a growing perception that primary disabilities do not necessarily impart handicaps.
Translating telephones (speech-to-speech language translation) are commonly used for many language pairs.
Accelerating returns from the advance of computer technology have resulted in continued economic expansion. Price deflation, which had been a reality in the computer field during the twentieth century, is now occurring outside the computer field. The reason for this is that virtually all economic sectors are deeply affected by the accelerating improvement in the price performance of computing.
Human musicians routinely jam with cybernetic musicians.
Bioengineered treatments for cancer and heart disease have greatly reduced the mortality from these diseases.
The neo-Luddite movement is growing.
2019
A $1,000 computing device (in 1999 dollars) is now approximately equal to the computational ability of the human brain.
Computers are now largely invisible and are embedded everywhere -- in walls, tables, chairs, desks, clothing, jewelry, and bodies.
Three-dimensional virtual reality displays, embedded in glasses and contact lenses, as well as auditory "lenses," are used routinely as primary interfaces for communication with other persons, computers, the Web, and virtual reality.
Most interaction with computing is through gestures and two-way natural-language spoken communication.
Nanoengineered machines are beginning to be applied to manufacturing and process-control applications.
High-resolution, three-dimensional visual and auditory virtual reality and realistic all-encompassing tactile environments enable people to do virtually anything with anybody, regardless of physical proximity.
Paper books or documents are rarely used and most learning is conducted through intelligent, simulated software-based teachers.
Blind persons routinely use eyeglass-mounted reading-navigation systems. Deaf persons read what other people are saying through their lens displays. Paraplegic and some quadriplegic persons routinely walk and climb stairs through a combination of computer-controlled nerve stimulation and exoskeletal robotic devices.
The vast majority of transactions include a simulated person.
Automated driving systems are now installed in most roads.
People are beginning to have relationships with automated personalities and use them as companions, teachers, caretakers, and lovers.
Virtual artists, with their own reputations, are emerging in all of the arts.
There are widespread reports of computers passing the Turing Test, although these tests do not meet the criteria established by knowledgeable observers.
2029
A $1,000 (in 1999 dollars) unit of computation has the computing capacity of approximately 1,000 human brains.
Permanent or removable implants (similar to contact lenses) for the eyes as well as cochlear implants are now used to provide input and output between the human user and the worldwide computing network.
Direct neural pathways have been perfected for high-bandwidth connection to the human brain. A range of neural implants is becoming available to enhance visual and auditory perception and interpretation, memory, and reasoning.
Automated agents are now learning on their own, and significant knowledge is being created by machines with little or no human intervention.
Computers have read all available human- and machine-generated literature and multimedia material.
There is widespread use of all-encompassing visual, auditory, and tactile communication using direct neural connections, allowing virtual reality to take place without having to be in a "total touch enclosure."
The majority of communication does not involve a human. The majority of communication involving a human is between a human and a machine.
There is almost no human employment in production, agriculture, or transportation. Basic life needs are available for the vast majority of the human race.
There is a growing discussion about the legal rights of computers and what constitutes being "human."
Although computers routinely pass apparently valid forms of the Turing Test, controversy persists about whether or not machine intelligence equals human intelligence in all of its diversity.
Machines claim to be conscious. These claims are largely accepted.
2049
The common use of nanoproduced food, which has the correct nutritional composition and the same taste and texture as organically produced food, means that the availability of food is no longer affected by limited resources, bad crop weather, or spoilage.
Nanobot swarm projections are used to create visual-auditory-tactile projections of people and objects in real reality.
2072
Picoengineering (developing technology at the scale of picometers, or trillionths of a meter) becomes practical.
2099
There is a strong trend toward a merger of human thinking with the world of machine intelligence that the human species initially created.
There is no longer any clear distinction between humans and computers.
Most conscious entities do not have a permanent physical presence.
Machine-based intelligences derived from extended models of human intelligence claim to be human, although their brains are not based on carbon-based cellular processes, but rather electronic and photonic equivalents. Most of these intelligences are not tied to a specific computational processing unit. The number of software-based humans vastly exceeds those still using native neuron-cell-based computation.
Even among those human intelligences still using carbon-based neurons, there is ubiquitous use of neural-implant technology, which provides enormous augmentation of human perceptual and cognitive abilities. Humans who do not utilize such implants are unable to meaningfully participate in dialogues with those who do.
Because most information is published using standard assimilated-knowledge protocols, information can be instantly understood. The goal of education, and of intelligent beings, is discovering new knowledge to learn.
Femtoengineering (engineering at the scale of femtometers, or one-thousandth of a trillionth of a meter) proposals are controversial.
Life expectancy is no longer a viable term in relation to intelligent beings.
Some many millenniums hence . . .
Intelligent beings consider the fate of the Universe.