Category Archives: Influences

Philip Roth Update

I found this chock full of wisdom:

CREDIT: NYT Interview with Philip Roth

In an exclusive interview, the (former) novelist shares his thoughts on Trump, #MeToo and retirement.

With the death of Richard Wilbur in October, Philip Roth became the longest-serving member in the literature department of the American Academy of Arts and Letters, that august Hall of Fame on Audubon Terrace in northern Manhattan, which is to the arts what Cooperstown is to baseball. He’s been a member so long he can recall when the academy included now all-but-forgotten figures like Malcolm Cowley and Glenway Wescott — white-haired luminaries from another era. Just recently Roth joined William Faulkner, Henry James and Jack London as one of very few Americans to be included in the French Pléiade editions (the model for our own Library of America), and the Italian publisher Mondadori is also bringing out his work in its Meridiani series of classic authors. All this late-life eminence — which also includes the Spanish Prince of Asturias Award in 2012 and being named a commander in the Légion d’Honneur of France in 2013 — seems both to gratify and to amuse him. “Just look at this,” he said to me last month, holding up the ornately bound Mondadori volume, as thick as a Bible and comprising titles like “Lamento di Portnoy” and “Zuckerman Scatenato.” “Who reads books like this?”
In 2012, as he approached 80, Roth famously announced that he had retired from writing. (He actually stopped two years earlier.) In the years since, he has spent a certain amount of time setting the record straight. He wrote a lengthy and impassioned letter to Wikipedia, for example, challenging the online encyclopedia’s preposterous contention that he was not a credible witness to his own life. (Eventually, Wikipedia backed down and redid the Roth entry in its entirety.) Roth is also in regular touch with Blake Bailey, whom he appointed as his official biographer and who has already amassed 1,900 pages of notes for a book expected to be half that length. And just recently, he supervised the publication of “Why Write?,” the 10th and last volume in the Library of America edition of his work. A sort of final sweeping up, a polishing of the legacy, it includes a selection of literary essays from the 1960s and ’70s; the full text of “Shop Talk,” his 2001 collection of conversations and interviews with other writers, many of them European; and a section of valedictory essays and addresses, several published here for the first time. Not accidentally, the book ends with the three-word sentence “Here I am” — between hard covers, that is.
But mostly now Roth leads the quiet life of an Upper West Side retiree. (His house in Connecticut, where he used to seclude himself for extended bouts of writing, he now uses only in the summer.) He sees friends, goes to concerts, checks his email, watches old movies on FilmStruck. Not long ago he had a visit from David Simon, the creator of “The Wire,” who is making a six-part mini-series of “The Plot Against America,” and afterward he said he was sure his novel was in good hands. Roth’s health is good, though he has had several surgeries for a recurring back problem, and he seems cheerful and contented. He’s thoughtful but still, when he wants to be, very funny.
I have interviewed Roth on several occasions over the years, and last month I asked if we could talk again. Like a lot of his readers, I wondered what the author of “American Pastoral,” “I Married a Communist” and “The Plot Against America” made of this strange period we are living in now. And I was curious about how he spent his time. Sudoku? Daytime TV? He agreed to be interviewed but only if it could be done via email. He needed to take some time, he said, and think about what he wanted to say.
C.M. In a few months you’ll turn 85. Do you feel like an elder? What has growing old been like?
P.R. Yes, in just a matter of months I’ll depart old age to enter deep old age — easing ever deeper daily into the redoubtable Valley of the Shadow. Right now it is astonishing to find myself still here at the end of each day. Getting into bed at night I smile and think, “I lived another day.” And then it’s astonishing again to awaken eight hours later and to see that it is morning of the next day and that I continue to be here. “I survived another night,” which thought causes me to smile once more. I go to sleep smiling and I wake up smiling. I’m very pleased that I’m still alive. Moreover, when this happens, as it has, week after week and month after month since I began drawing Social Security, it produces the illusion that this thing is just never going to end, though of course I know that it can stop on a dime. It’s something like playing a game, day in and day out, a high-stakes game that for now, even against the odds, I just keep winning. We will see how long my luck holds out.
C.M. Now that you’ve retired as a novelist, do you ever miss writing, or think about un-retiring?
P.R. No, I don’t. That’s because the conditions that prompted me to stop writing fiction seven years ago haven’t changed. As I say in “Why Write?,” by 2010 I had “a strong suspicion that I’d done my best work and anything more would be inferior. I was by this time no longer in possession of the mental vitality or the verbal energy or the physical fitness needed to mount and sustain a large creative attack of any duration on a complex structure as demanding as a novel…. Every talent has its terms — its nature, its scope, its force; also its term, a tenure, a life span…. Not everyone can be fruitful forever.”
C.M. Looking back, how do you recall your 50-plus years as a writer?
P.R. Exhilaration and groaning. Frustration and freedom. Inspiration and uncertainty. Abundance and emptiness. Blazing forth and muddling through. The day-by-day repertoire of oscillating dualities that any talent withstands — and tremendous solitude, too. And the silence: 50 years in a room silent as the bottom of a pool, eking out, when all went well, my minimum daily allowance of usable prose.
C.M. In “Why Write?” you reprint your famous essay “Writing American Fiction,” which argues that American reality is so crazy that it almost outstrips the writer’s imagination. It was 1960 when you said that. What about now? Did you ever foresee an America like the one we live in today?
P.R. No one I know of has foreseen an America like the one we live in today. No one (except perhaps the acidic H. L. Mencken, who famously described American democracy as “the worship of jackals by jackasses”) could have imagined that the 21st-century catastrophe to befall the U.S.A., the most debasing of disasters, would appear not, say, in the terrifying guise of an Orwellian Big Brother but in the ominously ridiculous commedia dell’arte figure of the boastful buffoon. How naïve I was in 1960 to think that I was an American living in preposterous times! How quaint! But then what could I know in 1960 of 1963 or 1968 or 1974 or 2001 or 2016?
C.M. Your 2004 novel, “The Plot Against America,” seems eerily prescient today. When that novel came out, some people saw it as a commentary on the Bush administration, but there were nowhere near as many parallels then as there seem to be now.
P.R. However prescient “The Plot Against America” might seem to you, there is surely one enormous difference between the political circumstances I invent there for the U.S. in 1940 and the political calamity that dismays us so today. It’s the difference in stature between a President Lindbergh and a President Trump. Charles Lindbergh, in life as in my novel, may have been a genuine racist and an anti-Semite and a white supremacist sympathetic to Fascism, but he was also — because of the extraordinary feat of his solo trans-Atlantic flight at the age of 25 — an authentic American hero 13 years before I have him winning the presidency. Lindbergh, historically, was the courageous young pilot who in 1927, for the first time, flew nonstop across the Atlantic, from Long Island to Paris. He did it in 33.5 hours in a single-seat, single-engine monoplane, thus making him a kind of 20th-century Leif Ericson, an aeronautical Magellan, one of the earliest beacons of the age of aviation. Trump, by comparison, is a massive fraud, the evil sum of his deficiencies, devoid of everything but the hollow ideology of a megalomaniac.
C.M. One of your recurrent themes has been male sexual desire — thwarted desire, as often as not — and its many manifestations. What do you make of the moment we seem to be in now, with so many women coming forth and accusing so many highly visible men of sexual harassment and abuse?
P.R. I am, as you indicate, no stranger as a novelist to the erotic furies. Men enveloped by sexual temptation is one of the aspects of men’s lives that I’ve written about in some of my books. Men responsive to the insistent call of sexual pleasure, beset by shameful desires and the undauntedness of obsessive lusts, beguiled even by the lure of the taboo — over the decades, I have imagined a small coterie of unsettled men possessed by just such inflammatory forces they must negotiate and contend with. I’ve tried to be uncompromising in depicting these men each as he is, each as he behaves, aroused, stimulated, hungry in the grip of carnal fervor and facing the array of psychological and ethical quandaries the exigencies of desire present. I haven’t shunned the hard facts in these fictions of why and how and when tumescent men do what they do, even when these have not been in harmony with the portrayal that a masculine public-relations campaign — if there were such a thing — might prefer. I’ve stepped not just inside the male head but into the reality of those urges whose obstinate pressure by its persistence can menace one’s rationality, urges sometimes so intense they may even be experienced as a form of lunacy. Consequently, none of the more extreme conduct I have been reading about in the newspapers lately has astonished me.
C.M. Before you were retired, you were famous for putting in long, long days. Now that you’ve stopped writing, what do you do with all that free time?
P.R. I read — strangely or not so strangely, very little fiction. I spent my whole working life reading fiction, teaching fiction, studying fiction and writing fiction. I thought of little else until about seven years ago. Since then I’ve spent a good part of each day reading history, mainly American history but also modern European history. Reading has taken the place of writing, and constitutes the major part, the stimulus, of my thinking life.
C.M. What have you been reading lately?
P.R. I seem to have veered off course lately and read a heterogeneous collection of books. I’ve read three books by Ta-Nehisi Coates, the most telling from a literary point of view, “The Beautiful Struggle,” his memoir of the boyhood challenge from his father. From reading Coates I learned about Nell Irvin Painter’s provocatively titled compendium “The History of White People.” Painter sent me back to American history, to Edmund Morgan’s “American Slavery, American Freedom,” a big scholarly history of what Morgan calls “the marriage of slavery and freedom” as it existed in early Virginia. Reading Morgan led me circuitously to reading the essays of Teju Cole, though not before my making a major swerve by reading Stephen Greenblatt’s “The Swerve,” about the circumstances of the 15th-century discovery of the manuscript of Lucretius’ subversive “On the Nature of Things.” This led to my tackling some of Lucretius’ long poem, written sometime in the first century B.C.E., in a prose translation by A. E. Stallings. From there I went on to read Greenblatt’s book about “how Shakespeare became Shakespeare,” “Will in the World.” How in the midst of all this I came to read and enjoy Bruce Springsteen’s autobiography, “Born to Run,” I can’t explain other than to say that part of the pleasure of now having so much time at my disposal to read whatever comes my way invites unpremeditated surprises.
Pre-publication copies of books arrive regularly in the mail, and that’s how I discovered Steven Zipperstein’s “Pogrom: Kishinev and the Tilt of History.” Zipperstein pinpoints the moment at the start of the 20th century when the Jewish predicament in Europe turned deadly in a way that foretold the end of everything. “Pogrom” led me to find a recent book of interpretive history, Yuri Slezkine’s “The Jewish Century,” which argues that “the Modern Age is the Jewish Age, and the 20th century, in particular, is the Jewish Century.” I read Isaiah Berlin’s “Personal Impressions,” his essay-portraits of the cast of influential 20th-century figures he’d known or observed. There is a cameo of Virginia Woolf in all her terrifying genius and there are especially gripping pages about the initial evening meeting in badly bombarded Leningrad in 1945 with the magnificent Russian poet Anna Akhmatova, when she was in her 50s, isolated, lonely, despised and persecuted by the Soviet regime. Berlin writes, “Leningrad after the war was for her nothing but a vast cemetery, the graveyard of her friends. … The account of the unrelieved tragedy of her life went far beyond anything which anyone had ever described to me in spoken words.” They spoke until 3 or 4 in the morning. The scene is as moving as anything in Tolstoy.
Just in the past week, I read books by two friends, Edna O’Brien’s wise little biography of James Joyce and an engagingly eccentric autobiography, “Confessions of an Old Jewish Painter,” by one of my dearest dead friends, the great American artist R. B. Kitaj. I have many dear dead friends. A number were novelists. I miss finding their new books in the mail.
Charles McGrath, a former editor of the Book Review, is a contributing writer for The Times. He is the editor of a Library of America collection of John O’Hara stories.

Why Facts Don’t Change Our Minds

CREDIT: New Yorker Article

Why Facts Don’t Change Our Minds
New discoveries about the human mind show the limitations of reason.

By Elizabeth Kolbert

The vaunted human capacity for reason may have more to do with winning arguments than with thinking straight. (Illustration by Gérard DuBois)
In 1975, researchers at Stanford invited a group of undergraduates to take part in a study about suicide. They were presented with pairs of suicide notes. In each pair, one note had been composed by a random individual, the other by a person who had subsequently taken his own life. The students were then asked to distinguish between the genuine notes and the fake ones.

Some students discovered that they had a genius for the task. Out of twenty-five pairs of notes, they correctly identified the real one twenty-four times. Others discovered that they were hopeless. They identified the real note in only ten instances.

As is often the case with psychological studies, the whole setup was a put-on. Though half the notes were indeed genuine—they’d been obtained from the Los Angeles County coroner’s office—the scores were fictitious. The students who’d been told they were almost always right were, on average, no more discerning than those who had been told they were mostly wrong.

In the second phase of the study, the deception was revealed. The students were told that the real point of the experiment was to gauge their responses to thinking they were right or wrong. (This, it turned out, was also a deception.) Finally, the students were asked to estimate how many suicide notes they had actually categorized correctly, and how many they thought an average student would get right. At this point, something curious happened. The students in the high-score group said that they thought they had, in fact, done quite well—significantly better than the average student—even though, as they’d just been told, they had zero grounds for believing this. Conversely, those who’d been assigned to the low-score group said that they thought they had done significantly worse than the average student—a conclusion that was equally unfounded.

“Once formed,” the researchers observed dryly, “impressions are remarkably perseverant.”

A few years later, a new set of Stanford students was recruited for a related study. The students were handed packets of information about a pair of firefighters, Frank K. and George H. Frank’s bio noted that, among other things, he had a baby daughter and he liked to scuba dive. George had a small son and played golf. The packets also included the men’s responses on what the researchers called the Risky-Conservative Choice Test. According to one version of the packet, Frank was a successful firefighter who, on the test, almost always went with the safest option. In the other version, Frank also chose the safest option, but he was a lousy firefighter who’d been put “on report” by his supervisors several times. Once again, midway through the study, the students were informed that they’d been misled, and that the information they’d received was entirely fictitious. The students were then asked to describe their own beliefs. What sort of attitude toward risk did they think a successful firefighter would have? The students who’d received the first packet thought that he would avoid it. The students in the second group thought he’d embrace it.

Even after the evidence “for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs,” the researchers noted. In this case, the failure was “particularly impressive,” since two data points would never have been enough information to generalize from.

The Stanford studies became famous. Coming from a group of academics in the nineteen-seventies, the contention that people can’t think straight was shocking. It isn’t any longer. Thousands of subsequent experiments have confirmed (and elaborated on) this finding. As everyone who’s followed the research—or even occasionally picked up a copy of Psychology Today—knows, any graduate student with a clipboard can demonstrate that reasonable-seeming people are often totally irrational. Rarely has this insight seemed more relevant than it does right now. Still, an essential puzzle remains: How did we come to be this way?

In a new book, “The Enigma of Reason” (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at answering this question. Mercier, who works at a French research institute in Lyon, and Sperber, now based at the Central European University, in Budapest, point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context.

Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.

Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.

The students were asked to respond to two studies. One provided data in support of the deterrence argument, and the other provided data that called it into question. Both studies—you guessed it—were made up, and had been designed to present what were, objectively speaking, equally compelling statistics. The students who had originally supported capital punishment rated the pro-deterrence data highly credible and the anti-deterrence data unconvincing; the students who’d originally opposed capital punishment did the reverse. At the end of the experiment, the students were asked once again about their views. Those who’d started out pro-capital punishment were now even more in favor of it; those who’d opposed it were even more hostile.

If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hypersociability.”

Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.

A recent experiment performed by Mercier and some European colleagues neatly demonstrates this asymmetry. Participants were asked to answer a series of simple reasoning problems. They were then asked to explain their responses, and were given a chance to modify them if they identified mistakes. The majority were satisfied with their original choices; fewer than fifteen per cent changed their minds in step two.

In step three, participants were shown one of the same problems, along with their answer and the answer of another participant, who’d come to a different conclusion. Once again, they were given the chance to change their responses. But a trick had been played: the answers presented to them as someone else’s were actually their own, and vice versa. About half the participants realized what was going on. Among the other half, suddenly people became a lot more critical. Nearly sixty per cent now rejected the responses that they’d earlier been satisfied with.

This lopsidedness, according to Mercier and Sperber, reflects the task that reason evolved to perform, which is to prevent us from getting screwed by the other members of our group. Living in small bands of hunter-gatherers, our ancestors were primarily concerned with their social standing, and with making sure that they weren’t the ones risking their lives on the hunt while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from winning arguments.

Among the many, many issues our forebears didn’t worry about were the deterrent effects of capital punishment and the ideal attributes of a firefighter. Nor did they have to contend with fabricated studies, or fake news, or Twitter. It’s no wonder, then, that today reason often seems to fail us. As Mercier and Sperber write, “This is one of many cases in which the environment changed too quickly for natural selection to catch up.”

Steven Sloman, a professor at Brown, and Philip Fernbach, a professor at the University of Colorado, are also cognitive scientists. They, too, believe sociability is the key to how the human mind functions or, perhaps more pertinently, malfunctions. They begin their book, “The Knowledge Illusion: Why We Never Think Alone” (Riverhead), with a look at toilets.

Virtually everyone in the United States, and indeed throughout the developed world, is familiar with toilets. A typical flush toilet has a ceramic bowl filled with water. When the handle is depressed, or the button pushed, the water—and everything that’s been deposited in it—gets sucked into a pipe and from there into the sewage system. But how does this actually happen?

In a study conducted at Yale, graduate students were asked to rate their understanding of everyday devices, including toilets, zippers, and cylinder locks. They were then asked to write detailed, step-by-step explanations of how the devices work, and to rate their understanding again. Apparently, the effort revealed to the students their own ignorance, because their self-assessments dropped. (Toilets, it turns out, are more complicated than they appear.)

Sloman and Fernbach see this effect, which they call the “illusion of explanatory depth,” just about everywhere. People believe that they know way more than they actually do. What allows us to persist in this belief is other people. In the case of my toilet, someone else designed it so that I can operate it easily. This is something humans are very good at. We’ve been relying on one another’s expertise ever since we figured out how to hunt together, which was probably a key development in our evolutionary history. So well do we collaborate, Sloman and Fernbach argue, that we can hardly tell where our own understanding ends and others’ begins.

“One implication of the naturalness with which we divide cognitive labor,” they write, is that there’s “no sharp boundary between one person’s ideas and knowledge” and “those of other members” of the group.

This borderlessness, or, if you prefer, confusion, is also crucial to what we consider progress. As people invented new tools for new ways of living, they simultaneously created new realms of ignorance; if everyone had insisted on, say, mastering the principles of metalworking before picking up a knife, the Bronze Age wouldn’t have amounted to much. When it comes to new technologies, incomplete understanding is empowering.

Where it gets us into trouble, according to Sloman and Fernbach, is in the political domain. It’s one thing for me to flush a toilet without knowing how it operates, and another for me to favor (or oppose) an immigration ban without knowing what I’m talking about. Sloman and Fernbach cite a survey conducted in 2014, not long after Russia annexed the Ukrainian territory of Crimea. Respondents were asked how they thought the U.S. should react, and also whether they could identify Ukraine on a map. The farther off base they were about the geography, the more likely they were to favor military intervention. (Respondents were so unsure of Ukraine’s location that the median guess was wrong by eighteen hundred miles, roughly the distance from Kiev to Madrid.)

Surveys on many other issues have yielded similarly dismaying results. “As a rule, strong feelings about issues do not emerge from deep understanding,” Sloman and Fernbach write. And here our dependence on other minds reinforces the problem. If your position on, say, the Affordable Care Act is baseless and I rely on it, then my opinion is also baseless. When I talk to Tom and he decides he agrees with me, his opinion is also baseless, but now that the three of us concur we feel that much more smug about our views. If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the Trump Administration.

“This is how a community of knowledge can become dangerous,” Sloman and Fernbach observe. The two have performed their own version of the toilet experiment, substituting public policy for household gadgets. In a study conducted in 2012, they asked people for their stance on questions like: Should there be a single-payer health-care system? Or merit-based pay for teachers? Participants were asked to rate their positions depending on how strongly they agreed or disagreed with the proposals. Next, they were instructed to explain, in as much detail as they could, the impacts of implementing each one. Most people at this point ran into trouble. Asked once again to rate their views, they ratcheted down the intensity, so that they either agreed or disagreed less vehemently.

Sloman and Fernbach see in this result a little candle for a dark world. If we—or our friends or the pundits on CNN—spent less time pontificating and more trying to work through the implications of policy proposals, we’d realize how clueless we are and moderate our views. This, they write, “may be the only form of thinking that will shatter the illusion of explanatory depth and change people’s attitudes.”

One way to look at science is as a system that corrects for people’s natural inclinations. In a well-run laboratory, there’s no room for myside bias; the results have to be reproducible in other laboratories, by researchers who have no motive to confirm them. And this, it could be argued, is why the system has proved so successful. At any given moment, a field may be dominated by squabbles, but, in the end, the methodology prevails. Science moves forward, even as we remain stuck in place.

In “Denying to the Grave: Why We Ignore the Facts That Will Save Us” (Oxford), Jack Gorman, a psychiatrist, and his daughter, Sara Gorman, a public-health specialist, probe the gap between what science tells us and what we tell ourselves. Their concern is with those persistent beliefs which are not just demonstrably false but also potentially deadly, like the conviction that vaccines are hazardous. Of course, what’s hazardous is not being vaccinated; that’s why vaccines were created in the first place. “Immunization is one of the triumphs of modern medicine,” the Gormans note. But no matter how many scientific studies conclude that vaccines are safe, and that there’s no link between immunizations and autism, anti-vaxxers remain unmoved. (They can now count on their side—sort of—Donald Trump, who has said that, although he and his wife had their son, Barron, vaccinated, they refused to do so on the timetable recommended by pediatricians.)

The Gormans, too, argue that ways of thinking that now seem self-destructive must at some point have been adaptive. And they, too, dedicate many pages to confirmation bias, which, they claim, has a physiological component. They cite research suggesting that people experience genuine pleasure—a rush of dopamine—when processing information that supports their beliefs. “It feels good to ‘stick to our guns’ even if we are wrong,” they observe.

The Gormans don’t just want to catalogue the ways we go wrong; they want to correct for them. There must be some way, they maintain, to convince people that vaccines are good for kids, and handguns are dangerous. (Another widespread but statistically insupportable belief they’d like to discredit is that owning a gun makes you safer.) But here they encounter the very problems they have enumerated. Providing people with accurate information doesn’t seem to help; they simply discount it. Appealing to their emotions may work better, but doing so is obviously antithetical to the goal of promoting sound science. “The challenge that remains,” they write toward the end of their book, “is to figure out how to address the tendencies that lead to false scientific belief.”

“The Enigma of Reason,” “The Knowledge Illusion,” and “Denying to the Grave” were all written before the November election. And yet they anticipate Kellyanne Conway and the rise of “alternative facts.” These days, it can feel as if the entire country has been given over to a vast psychological experiment being run either by no one or by Steve Bannon. Rational agents would be able to think their way to a solution. But, on this matter, the literature is not reassuring. ♦

This article appears in the print edition of the February 27, 2017, issue, with the headline “That’s What You Think.”

Elizabeth Kolbert has been a staff writer at The New Yorker since 1999. She won the 2015 Pulitzer Prize for general nonfiction for “The Sixth Extinction: An Unnatural History.”

John C. Reid

Property Rights and Modern Conservatism



In this excellent essay by one of my favorite conservative writers, Will Wilkinson takes Congress to task for its ridiculous botched-job-with-a-botched-process passing of the Tax Cut legislation in 2017.

But I am blogging because of his other points.

In the article, he spells out some tenets of modern conservatism that bear repeating, namely:

– property rights (including Murray Rothbard’s extreme position of absolute property rights)
– economic freedom (“…if we tax you at 100 percent, then you’ve got 0 percent liberty…If we tax you at 50 percent, you are half-slave, half-free”)
– libertarianism (“The key is the libertarian idea, woven into the right’s ideological DNA, that redistribution is the exploitation of the “makers” by the “takers.”)
– legally enforceable rights
– moral traditionalism

Modern conservatism is a “fusion” of these ideas, and their intellectual footing is impressive.

But Will points out where these ideas are flawed. The flaws are most apparent in the idea that the hordes want to use democratic institutions to plunder the wealth of the elites, a notion from the days when communism was public enemy #1. He points out that the opposite is actually true.

“Far from endangering property rights by facilitating redistribution, inclusive democratic institutions limit the “organized banditry” of the elite-dominated state by bringing everyone inside the charmed circle of legally enforced rights.”

Ironically, the new Tax Cut legislation is an example of reverse plunder: the wealthy get the big, permanent gains while the rest are appeased with small cuts that expire.

So, we are very far from the fears of communism. Instead, we are amid a taking by the haves from the have-nots.

====================
Credit: New York Times 12/20/17 Op Ed by Will Wilkinson

The Tax Bill Shows the G.O.P.’s Contempt for Democracy
By WILL WILKINSON
DEC. 20, 2017
The Republican Tax Cuts and Jobs Act is notably generous to corporations, high earners, inheritors of large estates and the owners of private jets. Taken as a whole, the bill will add about $1.4 trillion to the deficit in the next decade and trigger automatic cuts to Medicare and other safety net programs unless Congress steps in to stop them.

To most observers on the left, the Republican tax bill looks like sheer mercenary cupidity. “This is a brazen expression of money power,” Jesse Jackson wrote in The Chicago Tribune, “an example of American plutocracy — a government of the wealthy, by the wealthy, for the wealthy.”

Mr. Jackson is right to worry about the wealthy lording it over the rest of us, but the open contempt for democracy displayed in the Senate’s slapdash rush to pass the tax bill ought to trouble us as much as, if not more than, what’s in it.

In its great haste, the “world’s greatest deliberative body” held no hearings or debate on tax reform. The Senate’s Republicans made sloppy math mistakes, crossed out and rewrote whole sections of the bill by hand at the 11th hour and forced a vote on it before anyone could conceivably read it.

The link between the heedlessly negligent style and anti-redistributive substance of recent Republican lawmaking is easy to overlook. The key is the libertarian idea, woven into the right’s ideological DNA, that redistribution is the exploitation of the “makers” by the “takers.” It immediately follows that democracy, which enables and legitimizes this exploitation, is itself an engine of injustice. As the novelist Ayn Rand put it, under democracy “one’s work, one’s property, one’s mind, and one’s life are at the mercy of any gang that may muster the vote of a majority.”

On the campaign trail in 2015, Senator Rand Paul, Republican of Kentucky, conceded that government is a “necessary evil” requiring some tax revenue. “But if we tax you at 100 percent, then you’ve got 0 percent liberty,” Mr. Paul continued. “If we tax you at 50 percent, you are half-slave, half-free.” The speaker of the House, Paul Ryan, shares Mr. Paul’s sense of the injustice of redistribution. He’s also a big fan of Ayn Rand. “I give out ‘Atlas Shrugged’ as Christmas presents, and I make all my interns read it,” Mr. Ryan has said. If the big-spending, democratic welfare state is really a system of part-time slavery, as Ayn Rand and Senator Paul contend, then beating it back is a moral imperative of the first order.

But the clock is ticking. Looking ahead to a potentially paralyzing presidential scandal, midterm blood bath or both, congressional Republicans are in a mad dash to emancipate us from the welfare state. As they see it, the redistributive upshot of democracy is responsible for the big-government mess they’re trying to bail us out of, so they’re not about to be tender with the niceties of democratic deliberation and regular parliamentary order.

The idea that there is an inherent conflict between democracy and the integrity of property rights is as old as democracy itself. Because the poor vastly outnumber the propertied rich — so the argument goes — if allowed to vote, the poor might gang up at the ballot box to wipe out the wealthy.

In the 20th century, and in particular after World War II, with voting rights and Soviet Communism on the march, the risk that wealthy democracies might redistribute their way to serfdom had never seemed more real. Radical libertarian thinkers like Rand and Murray Rothbard (who would be a muse to both Charles Koch and Ron Paul) responded with a theory of absolute property rights that morally criminalized taxation and narrowed the scope of legitimate government action and democratic discretion nearly to nothing. “What is the State anyway but organized banditry?” Rothbard asked. “What is taxation but theft on a gigantic, unchecked scale?”

Mainstream conservatives, like William F. Buckley, banished radical libertarians to the fringes of the conservative movement to mingle with the other unclubbables. Still, the so-called fusionist synthesis of libertarianism and moral traditionalism became the ideological core of modern conservatism. For hawkish Cold Warriors, libertarianism’s glorification of capitalism and vilification of redistribution was useful for immunizing American political culture against viral socialism. Moral traditionalists, struggling to hold ground against rising mass movements for racial and gender equality, found much to like in libertarianism’s principled skepticism of democracy. “If you analyze it,” Ronald Reagan said, “I believe the very heart and soul of conservatism is libertarianism.”

The hostility to redistributive democracy at the ideological center of the American right has made standard policies of successful modern welfare states, happily embraced by Europe’s conservative parties, seem beyond the moral pale for many Republicans. The outsize stakes seem to justify dubious tactics — bunking down with racists, aggressive gerrymandering, inventing paper-thin pretexts for voting rules that disproportionately hurt Democrats — to prevent majorities from voting themselves a bigger slice of the pie.

But the idea that there is an inherent tension between democracy and the integrity of property rights is wildly misguided. The liberal-democratic state is a relatively recent historical innovation, and our best accounts of the transition from autocracy to democracy point to the role of democratic political inclusion in protecting property rights.

As Daron Acemoglu of M.I.T. and James Robinson of Harvard show in “Why Nations Fail,” ruling elites in pre-democratic states arranged political and economic institutions to extract labor and property from the lower orders. That is to say, the system was set up to make it easy for elites to seize what ought to have been other people’s stuff.

In “Inequality and Democratization,” the political scientists Ben W. Ansell and David J. Samuels show that this demand for political inclusion generally isn’t driven by a desire to use the existing institutions to plunder the elites. It’s driven by a desire to keep the elites from continuing to plunder them.

It’s easy to say that everyone ought to have certain rights. Democracy is how we come to get and protect them. Far from endangering property rights by facilitating redistribution, inclusive democratic institutions limit the “organized banditry” of the elite-dominated state by bringing everyone inside the charmed circle of legally enforced rights.

Democracy is fundamentally about protecting the middle and lower classes from redistribution by establishing the equality of basic rights that makes it possible for everyone to be a capitalist. Democracy doesn’t strangle the golden goose of free enterprise through redistributive taxation; it fattens the goose by releasing the talent, ingenuity and effort of otherwise abused and exploited people.

At a time when America’s faith in democracy is flagging, the Republicans elected to treat the United States Senate, and the citizens it represents, with all the respect college guys accord public restrooms. It’s easier to reverse a bad piece of legislation than the bad reputation of our representative institutions, which is why the way the tax bill was passed is probably worse than what’s in it. Ultimately, it’s the integrity of democratic institutions and the rule of law that gives ordinary people the power to protect themselves against elite exploitation. But the Republican majority is bulldozing through basic democratic norms as though freedom has everything to do with the tax code and democracy just gets in the way.

Will Wilkinson is the vice president for policy at the Niskanen Center.

Jeff Bezos

I found this interesting, about Jeff Bezos:

“I worked with Jeff, heading up hiring for Amazon. So I’ll tell you what it’s like to work directly with Jeff from my personal experience.

Jeff is unlike any other CEO you have ever met. Steve Jobs is probably the closest from a visionary perspective, yet they are very, very different. Jeff isn’t dictatorial or a tyrant, as some have suggested. Some of his directs might be, but not Jeff. Jeff is a visionary who deals with problems at a very high level. He has a vision of where he wants to take the company which is far beyond the view of most others, including the exec leadership team. There are plenty of “aha” moments where you finally get a glimpse of where he is going.

Jeff is probably one of the smartest, if not THE smartest, CEOs of the Fortune 500. He has a brilliance all his own. And he has very high standards for himself personally, which carry over to his team. He expects a lot out of people. Is that being a tyrant? No, not if you want to work hard and grow. If you’re lazy, don’t get anywhere near Jeff. He focuses on those who deliver.

While he may seem like he’s comfortable addressing large crowds or being on TV, he’s actually not. Many people think he’s this crazy extrovert, but that’s not really who he is. He’s a thinker. And a doer. He’s actually somewhat shy and introspective as a person. That’s why you will see lots of different clips of Jeff laughing.

He has this crazy honking laugh that is one of the funniest laughs you will ever hear. It’s roaring loud, and you can hear him through the walls or down the hallway. “Jeff is in the conference room next door.” It’s an infectious laugh that often starts others laughing. Often it’s Jeff laughing at himself or something he said. He often finds things he has said to be very funny after he has said them, which is kinda funny in itself, when you think about it. But sometimes it’s simply Jeff being uncomfortable being in the spotlight. Laughing is his way out.

Jeff highly values the customer, probably more than any CEO I know, large company or small. If there is difficulty in making a decision, Jeff typically (although not always) comes back to doing what is best for the customer. That ends up being the tiebreaker of the tough decisions, which is really important in understanding how he’s wired.

He’s also very concerned with the company culture at Amazon. In one very tough meeting with his directs on a very difficult and contentious issue, Jeff just sat back listening, then we talked in the hallway afterward. Jeff commented on how it was a really tough decision and I agreed, but I said it was a culture decision. The light bulb went on with Jeff and he said, “You’re right, it is a culture decision!” When he framed the decision that way (instead of the financial impact view being presented in the meeting), he looked at it more as an internal customer. Getting the culture right drives a lot of his decision making.

Jeff is laser-focused on talent acquisition and talent development. Few CEOs put as much time into talent as Jeff. He knows that the company is defined by its internal talent rather than its products. But, like most CEOs, he can only directly affect the next two layers (SVP and VP). Beyond that, he delegates to each leader to build and grow their team individually to deliver results. The measurement consistently comes back to delivering results. Jeff is OK with a level of quirkiness in talent that most other CEOs wouldn’t be comfortable assimilating into the company culture. Part of Amazon’s culture is that quirkiness that doesn’t really exist at any other company.

Jeff can get down to the detail level, but rarely does so. He’s smart and has the technical chops to understand; he just doesn’t have the time to do so. He has an extremely competent group of directs surrounding him, people whom you rarely hear about, but these are the people driving the operational implementation of the vision Jeff has laid out for the company. He gives them plenty of opportunity to try new things, make errors, yet still survive. Not many CEOs are OK with failure. Jeff is. He knows that in order to innovate, you have to be accepting of failures along the way to success. And that’s a big difference between Jeff and Steve Jobs. Steve couldn’t and wouldn’t accept failure at any level, while Jeff is actually OK with it, as long as it brings you closer to success.

I loved working with Jeff. Probably my second favorite boss of my entire career.

I guess I can sum it up by saying I often ask “What would Jeff do?” when making a decision on how I run my company (CollegeGrad.com).

P.S. Jeff would really rather be flying rockets into outer space than running Amazon.

Telic and Atelic


Telic activities have goals. Atelic activities have more to do with “being”.

CREDIT: NYT

As Aristotle wrote in his “Metaphysics”: “If you are learning, you have not at the same time learned.” When you care about telic activities, projects such as writing a report, getting married or making dinner, satisfaction is always in the future or the past. It is yet to be achieved and then it is gone. Telic activities are exhaustible; in fact, they aim at their own exhaustion. They thus exhibit a peculiar self-subversion. In valuing and so pursuing these activities, we aim to complete them, and so to expel them from our lives.
Atelic activities, by contrast, do not by nature come to an end and are not incomplete. In defining such activities, we could emphasize their inexhaustibility, the fact that they do not aim at terminal states. But we could also emphasize what Aristotle does: They are fully realized in the present. “At the same time, one is seeing and has seen, is understanding and has understood, is thinking and has thought.” There is nothing you need to do in order to perform an atelic activity except what you are doing right now. If what you care about is reflecting on your life or spending time with family or friends, and that is what you are doing, you are not on the way to achieving your end: You are already there.

Folly of One-Way Loyalty

Maybe, instead of bashing Trump at every turn, we can step back and learn from him.

In this case, John Pitney makes a great point about the folly of one-way loyalty:

“John J. Pitney, a political scientist with sterling conservative credentials, has a blistering piece in Politico explaining Trump’s problem: He thinks loyalty flows only one way. “Trump’s life has been a long trail of betrayals,” Pitney writes. He has dumped wives, friends, mentors, protégés, colleagues, business associates, Trump University students and, more recently, political advisers.

“Loyalty is about strength,” Pitney, a professor at Claremont McKenna, writes. “It is about sticking with a person, a cause, an idea or a country even when it is costly, difficult, or unpopular.”

CREDIT: NYT Op Ed

I Thought I Understood the American Right. Trump Proved Me Wrong.

How to explain Trump? This feature-length article in today’s New York Times Magazine does a great job of pulling together, into one place, the historical strands that made Trump possible.

Including:

– The New Deal put conservatives on their “back foot” and set the stage for an emerging liberal consensus that held for over fifty years.
– The effort by William F. Buckley and the National Review, beginning in 1955, to make conservatism intellectually attractive – and defensible.
– New South talking points that were more palatable than outright racism, like “stable housing values” and “quality local education,” which had enormous appeal to the white American middle class.
– Alan Brinkley arguing, in 1994, that American conservatism “had been something of an orphan in historical scholarship.”
– Reagan himself, who portrayed a certain kind of character: the kindly paterfamilias, a trustworthy and nonthreatening guardian of the white middle-class suburban enclave.
– Harvard’s Lisa McGirr writing, in her 2001 book, about a conservative, largely suburban, “highly educated and thoroughly modern group of men and women” who took on “liberal permissiveness” about matters like rising crime rates and the teaching of sex education in public schools.

Two quotes stick with me. The first summarizes the piece:

“Future historians won’t find all that much of a foundation for Trumpism in the grim essays of William F. Buckley, the scrupulous constitutionalist principles of Barry Goldwater or the bright-eyed optimism of Ronald Reagan. They’ll need instead to study conservative history’s political surrealists and intellectual embarrassments, its con artists and tribunes of white rage. It will not be a pleasant story. But if those historians are to construct new arguments to make sense of Trump, the first step may be to risk being impolite.”

And a second quote about Goldwater:

Richard Hofstadter asked, one month before Barry Goldwater’s defeat for president: “When, in all our history, has anyone with ideas so bizarre, so self-confounding, so remote from the basic American consensus, ever gone so far?”

I find that quote revealing. He correctly anticipated Goldwater’s crushing defeat. One would have thought that the exact same quote could have applied on November 1, 2016, anticipating a crushing defeat for Donald J. Trump. And yet, he prevailed!

We owe it to ourselves to ask: Why? Here is one historian’s take:

===============
CREDIT: Feature Article from New York Times Magazine
===============
I Thought I Understood the American Right. Trump Proved Me Wrong.

A historian of conservatism looks back at how he and his peers failed to anticipate the rise of the president.

BY RICK PERLSTEIN
APRIL 11, 2017

Until Nov. 8, 2016, historians of American politics shared a rough consensus about the rise of modern American conservatism. It told a respectable tale. By the end of World War II, the story goes, conservatives had become a scattered and obscure remnant, vanquished by the New Deal and the apparent reality that, as the critic Lionel Trilling wrote in 1950, liberalism was “not only the dominant but even the sole intellectual tradition.”

Year Zero was 1955, when William F. Buckley Jr. started National Review, the small-circulation magazine whose aim, Buckley explained, was to “articulate a position on world affairs which a conservative candidate can adhere to without fear of intellectual embarrassment or political surrealism.” Buckley excommunicated the John Birch Society, anti-Semites and supporters of the hyperindividualist Ayn Rand, and his cohort fused the diverse schools of conservative thinking — traditionalist philosophers, militant anti-Communists, libertarian economists — into a coherent ideology, one that eventually came to dominate American politics.

I was one of the historians who helped forge this narrative. My first book, “Before the Storm,” was about the rise of Senator Barry Goldwater, the uncompromising National Review favorite whose refusal to exploit the violent backlash against civil rights, and whose bracingly idealistic devotion to the Constitution as he understood it — he called for Social Security to be made “voluntary” — led to his crushing defeat in the 1964 presidential election. Goldwater’s loss, far from dooming the American right, inspired a new generation of conservative activists to redouble their efforts, paving the way for the Reagan revolution. Educated whites in the prosperous metropolises of the New South sublimated the frenetic, violent anxieties that once marked race relations in their region into more palatable policy concerns about “stable housing values” and “quality local education,” backfooting liberals and transforming conservatives into mainstream champions of a set of positions with enormous appeal to the white American middle class.

These were the factors, many historians concluded, that made America a “center right” nation. For better or for worse, politicians seeking to lead either party faced a new reality. Democrats had to honor the public’s distrust of activist government (as Bill Clinton did with his call for the “end of welfare as we know it”). Republicans, for their part, had to play the Buckley role of denouncing the political surrealism of the paranoid fringe (Mitt Romney’s furious backpedaling after joking, “No one’s ever asked to see my birth certificate”).

Then the nation’s pre-eminent birther ran for president. Trump’s campaign was surreal and an intellectual embarrassment, and political experts of all stripes told us he could never become president. That wasn’t how the story was supposed to end. National Review devoted an issue to writing Trump out of the conservative movement; an editor there, Jonah Goldberg, even became a leader of the “Never Trump” crusade. But Trump won — and some conservative intellectuals embraced a man who exploited the same brutish energies that Buckley had supposedly banished.

The professional guardians of America’s past, in short, had made a mistake. We advanced a narrative of the American right that was far too constricted to anticipate the rise of a man like Trump. Historians, of course, are not called upon to be seers. Our professional canons warn us against presentism — we are supposed to weigh the evidence of the past on its own terms — but at the same time, the questions we ask are conditioned by the present. That is, ultimately, what we are called upon to explain. Which poses a question: If Donald Trump is the latest chapter of conservatism’s story, might historians have been telling that story wrong?

American historians’ relationship to conservatism itself has a troubled history. Even after Ronald Reagan’s electoral-college landslide in 1980, we paid little attention to the right: The central narrative of America’s political development was still believed to be the rise of the liberal state. But as Newt Gingrich’s right-wing revolutionaries prepared to take over the House of Representatives in 1994, the scholar Alan Brinkley published an essay called “The Problem of American Conservatism” in The American Historical Review. American conservatism, Brinkley argued, “had been something of an orphan in historical scholarship,” and that was “coming to seem an ever-more-curious omission.” The article inaugurated the boom in scholarship that brought us the story, now widely accepted, of conservatism’s triumphant rise.

That story was in part a rejection of an older story. Until the 1990s, the most influential writer on the subject of the American right was Richard Hofstadter, a colleague of Trilling’s at Columbia University in the postwar years. Hofstadter was the leader of the “consensus” school of historians; the “consensus” being Americans’ supposed agreement upon moderate liberalism as the nation’s natural governing philosophy. He didn’t take the self-identified conservatives of his own time at all seriously. He called them “pseudoconservatives” and described, for instance, followers of the red-baiting Republican senator Joseph McCarthy as cranks who salved their “status anxiety” with conspiracy theories and bizarre panaceas. He named this attitude “the paranoid style in American politics” and, in an article published a month before Barry Goldwater’s presidential defeat, asked, “When, in all our history, has anyone with ideas so bizarre, so archaic, so self-confounding, so remote from the basic American consensus, ever gone so far?”

It was a strangely ahistoric question; many of Goldwater’s ideas hewed closely to a well-established American distrust of statism that goes back all the way to the nation’s founding. It betokened too a certain willful blindness toward the evidence that was already emerging of a popular backlash against liberalism. Reagan’s gubernatorial victory in California two years later, followed by his two landslide presidential wins, made a mockery of Hofstadter. Historians seeking to grasp conservatism’s newly revealed mass appeal would have to take the movement on its own terms.

That was my aim when I took up the subject in the late 1990s — and, even more explicitly, the aim of Lisa McGirr, now of Harvard University, whose 2001 book, “Suburban Warriors: The Origins of the New American Right,” became a cornerstone of the new literature. Instead of pronouncing upon conservatism from on high, as Hofstadter had, McGirr, a social historian, studied it from the ground up, attending respectfully to what activists understood themselves to be doing. What she found was “a highly educated and thoroughly modern group of men and women,” normal participants in the “bureaucratized world of post-World War II America.” They built a “vibrant and remarkable political mobilization,” she wrote, in an effort to address political concerns that would soon be resonating nationwide — for instance, their anguish at “liberal permissiveness” about matters like rising crime rates and the teaching of sex education in public schools.

But if Hofstadter was overly dismissive of how conservatives understood themselves, the new breed of historians at times proved too credulous. McGirr diligently played down the sheer bloodcurdling hysteria of conservatives during the period she was studying — for example, one California senator’s report in 1962 that he had received thousands of letters from constituents concerned about a rumor that Communist Chinese commandos were training in Mexico for an imminent invasion of San Diego. I sometimes made the same mistake. Writing about the movement that led to Goldwater’s 1964 Republican nomination, for instance, it never occurred to me to pay much attention to McCarthyism, even though McCarthy helped Goldwater win his Senate seat in 1952, and Goldwater supported McCarthy to the end. (As did William F. Buckley.) I was writing about the modern conservative movement, the one that led to Reagan, not about the brutish relics of a more gothic, ill-formed and supposedly incoherent reactionary era that preceded it.

A few historians have provocatively followed a different intellectual path, avoiding both the bloodlessness of the new social historians and the psychologizing condescension of the old Hofstadter school. Foremost among them is Leo Ribuffo, a professor at George Washington University. Ribuffo’s surname announces his identity in the Dickensian style: Irascible, brilliant and deeply learned, he is one of the profession’s great rebuffers. He made his reputation with an award-winning 1983 study, “The Old Christian Right: The Protestant Far Right From the Great Depression to the Cold War,” and hasn’t published a proper book since — just a series of coruscating essays that frequently focus on what everyone else is getting wrong. In the 1994 issue of The American Historical Review that featured Alan Brinkley’s “The Problem of American Conservatism,” Ribuffo wrote a response contesting Brinkley’s contention, now commonplace, that Trilling was right about American conservatism’s shallow roots. Ribuffo argued that America’s anti-liberal traditions were far more deeply rooted in the past, and far angrier, than most historians would acknowledge, citing a long list of examples from “regional suspicions of various metropolitan centers and the snobs who lived there” to “white racism institutionalized in slavery and segregation.”

After the election, Ribuffo told me that if he were to write a similar response today, he would call it, “Why Is There So Much Scholarship on ‘Conservatism,’ and Why Has It Left the Historical Profession So Obtuse About Trumpism?” One reason, as Ribuffo argues, is the conceptual error of identifying a discrete “modern conservative movement” in the first place. Another reason, though, is that historians of conservatism, like historians in general, tend to be liberal, and are prone to liberalism’s traditions of politesse. It’s no surprise that we are attracted to polite subjects like “colorblind conservatism” or William F. Buckley.

Our work might have been less obtuse had we shared the instincts of a New York University professor named Kim Phillips-Fein. “Historians who write about the right should find ways to do so with a sense of the dignity of their subjects,” she observed in a 2011 review, “but they should not hesitate to keep an eye out for the bizarre, the unusual, or the unsettling.”

Looking back from that perspective, we can now see a history that is indeed unsettling — but also unsettlingly familiar. Consider, for example, an essay published in 1926 by Hiram Evans, the imperial wizard of the Ku Klux Klan, in the exceedingly mainstream North American Review. His subject was the decline of “Americanism.” Evans claimed to speak for an abused white majority, “the so-called Nordic race,” which, “with all its faults, has given the world almost the whole of modern civilization.” Evans, a former dentist, proposed that his was “a movement of plain people,” and acknowledged that this “lays us open to the charge of being hicks and ‘rubes’ and ‘drivers of secondhand Fords.’ ” But over the course of the last generation, he wrote, these good people “have found themselves increasingly uncomfortable, and finally deeply distressed,” watching a “moral breakdown” that was destroying a once-great nation. First, there was “confusion in thought and opinion, a groping and hesitancy about national affairs and private life alike, in sharp contrast to the clear, straightforward purposes of our earlier years.” Next, they found “the control of much of our industry and commerce taken over by strangers, who stacked the cards of success and prosperity against us,” and ultimately these strangers “came to dominate our government.” The only thing that would make America great again, as it were, was “a return of power into the hands of everyday, not highly cultured, not overly intellectualized, but entirely unspoiled and not de-Americanized average citizens of old stock.”

This “Second Klan” (the first was formed during Reconstruction) scrambles our pre-Trump sense of what right-wing ideology does and does not comprise. (Its doctrines, for example, included support for public education, to weaken Catholic parochial schools.) The Klan also put the predations of the international banking class at the center of its rhetoric. Its worldview resembles, in fact, the right-wing politics of contemporary Europe — a tradition, heretofore judged foreign to American politics, called “herrenvolk republicanism,” that reserved social democracy solely for the white majority. By reaching back to the reactionary traditions of the 1920s, we might better understand the alliance between the “alt-right” figures that emerged as fervent Trump supporters during last year’s election and the ascendant far-right nativist political parties in Europe.

None of this history is hidden. Indeed, in the 1990s, a rich scholarly literature emerged on the 1920s Klan and its extraordinary, and decidedly national, influence. (One hotbed of Klan activity, for example, was Anaheim, Calif. McGirr’s “Suburban Warriors” mentions this but doesn’t discuss it; neither did I in my own account of Orange County conservatism in “Before the Storm.” Again, it just didn’t seem relevant to the subject of the modern conservative movement.) The general belief among historians, however, was that the Klan’s national influence faded in the years after 1925, when Indiana’s grand dragon, D.C. Stephenson, who served as the de facto political boss for the entire state, was convicted of murdering a young woman.

But the Klan remained relevant far beyond the South. In 1936 a group called the Black Legion, active in the industrial Midwest, burst into public consciousness after members assassinated a Works Progress Administration official in Detroit. The group, which considered itself a Klan enforcement arm, dominated the news that year. The F.B.I. estimated its membership at 135,000, including a large number of public officials, possibly including Detroit’s police chief. The Associated Press reported in 1936 that the group was suspected of assassinating as many as 50 people. In 1937, Humphrey Bogart starred in a film about it. In an informal survey, however, I found that many leading historians of the right — including one who wrote an important book covering the 1930s — hadn’t heard of the Black Legion.

Stephen H. Norwood, one of the few historians who did study the Black Legion, also mined another rich seam of neglected history in which far-right vigilantism and outright fascism routinely infiltrated the mainstream of American life. The story begins with Father Charles Coughlin, the Detroit-based “radio priest” who at his peak reached as many as 30 million weekly listeners. In 1938, Coughlin’s magazine, Social Justice, began reprinting “Protocols of the Learned Elders of Zion,” a forged tract about a global Jewish conspiracy first popularized in the United States by Henry Ford. After presenting this fictitious threat, Coughlin’s paper called for action, in the form of a “crusade against the anti-Christian forces of the red revolution” — a call that was answered, in New York and Boston, by a new organization, the Christian Front. Its members were among the most enthusiastic participants in a 1939 pro-Hitler rally that packed Madison Square Garden, where the leader of the German-American Bund spoke in front of an enormous portrait of George Washington flanked by swastikas.

The Bund took a mortal hit that same year — its leader was caught embezzling — but the Christian Front soldiered on. In 1940, a New York chapter was raided by the F.B.I. for plotting to overthrow the government. The organization survived, and throughout World War II carried out what the New York Yiddish paper The Day called “small pogroms” in Boston and New York that left Jews in “mortal fear” of “almost daily” beatings. Victims who complained to authorities, according to news reports, were “insulted and beaten again.” Young Irish-Catholic men inspired by the Christian Front desecrated nearly every synagogue in Washington Heights. The New York Catholic hierarchy, the mayor of Boston and the governor of Massachusetts largely looked the other way.

Why hasn’t the presence of organized mobs with backing in powerful places disturbed historians’ conclusion that the American right was dormant during this period? In fact, the “far right” was never that far from the American mainstream. The historian Richard Steigmann-Gall, writing in the journal Social History, points out that “scholars of American history are by and large in agreement that, in spite of a welter of fringe radical groups on the right in the United States between the wars, fascism never ‘took’ here.” And, unlike in Europe, fascists did not achieve governmental power. Nevertheless, Steigmann-Gall continues, “fascism had a very real presence in the U.S.A., comparable to that on continental Europe.” He cites no less mainstream an organization than the American Legion, whose “National Commander” Alvin Owsley proclaimed in 1922, “the Fascisti are to Italy what the American Legion is to the United States.” A decade later, Chicago named a thoroughfare after the Fascist military leader Italo Balbo. In 2011, Italian-American groups in Chicago protested a movement to rename it.

Anti-Semitism in America declined after World War II. But as Leo Ribuffo points out, the underlying narrative — of a diabolical transnational cabal of aliens plotting to undermine the very foundations of Christian civilization — survived in the anti-Communist diatribes of Joseph McCarthy. The alien narrative continues today in the work of National Review writers like Andrew McCarthy (“How Obama Embraces Islam’s Sharia Agenda”) and Lisa Schiffren (who argued that Obama’s parents could be secret Communists because “for a white woman to marry a black man in 1958, or ’60, there was almost inevitably a connection to explicit Communist politics”). And it found its most potent expression in Donald Trump’s stubborn insistence that Barack Obama was not born in the United States.

Trump’s connection to this alternate right-wing genealogy is not just rhetorical. In 1927, 1,000 hooded Klansmen fought police in Queens in what The Times reported as a “free for all.” One of those arrested at the scene was the president’s father, Fred Trump. (Trump’s role in the melee is unclear; the charge — “refusing to disperse” — was later dropped.) In the 1950s, Woody Guthrie, at the time a resident of the Beach Haven housing complex the elder Trump built near Coney Island, wrote a song about “Old Man Trump” and the “Racial hate/He stirred up/In the bloodpot of human hearts/When he drawed/That color line” in one of his housing developments. In 1973, when Donald Trump was working at Fred’s side, both father and son were named in a federal housing-discrimination suit. The family settled with the Justice Department in the face of evidence that black applicants were told units were not available even as whites were welcomed with open arms.

The 1960s and ’70s New York in which Donald Trump came of age, as much as Klan-ridden Indiana in the 1920s or Barry Goldwater’s Arizona in the 1950s, was at conservatism’s cutting edge, setting the emotional tone for a politics of rage. In 1966, when Trump was 20, Mayor John Lindsay placed civilians on a board to more effectively monitor police abuse. The president of the Patrolmen’s Benevolent Association — responding, “I am sick and tired of giving in to minority groups and their gripes and their shouting” — led a referendum effort to dissolve the board that won 63 percent of the vote. Two years later, fights between supporters and protesters of George Wallace at a Madison Square Garden rally grew so violent that, The New Republic observed, “never again will you read about Berlin in the ’30s without remembering this wild confrontation here of two irrational forces.”

The rest of the country followed New York’s lead. In 1970, after the shooting deaths of four students during antiwar protests at Kent State University in Ohio, a Gallup poll found that 58 percent of Americans blamed the students for their own deaths. (“If they didn’t do what the Guards told them, they should have been mowed down,” one parent of Kent State students told an interviewer.) Days later, hundreds of construction workers from the World Trade Center site beat antiwar protesters at City Hall with their hard hats. (“It was just like Iwo Jima,” an impressed witness remarked.) That year, reports the historian Katherine Scott, 76 percent of Americans “said they did not support the First Amendment right to assemble and dissent from government policies.”

In 1973, the reporter Gail Sheehy joined a group of blue-collar workers watching the Watergate hearings in a bar in Astoria, Queens. “If I was Nixon,” one of them said, “I’d shoot every one of them.” (Who “they” were went unspecified.) This was around the time when New Yorkers were leaping to their feet and cheering during screenings of “Death Wish,” a hit movie about a liberal architect, played by Charles Bronson, who shoots muggers at point-blank range. At an October 2015 rally near Nashville, Donald Trump told his supporters: “I have a license to carry in New York, can you believe that? Nobody knows that. Somebody attacks me, oh, they’re gonna be shocked.” He imitated a cowboy-style quick draw, and an appreciative crowd shouted out the name of Bronson’s then-41-year-old film: “ ‘Death Wish’!”

In 1989, a young white woman was raped in Central Park. Five teenagers, four black and one Latino, confessed to participating in the crime. At the height of the controversy, Donald Trump took out full-page ads in all the major New York daily papers calling for the return of the death penalty. It was later proved the police had essentially tortured the five into their confessions, and they were eventually cleared by DNA evidence. Trump, however, continues to insist upon their guilt. That confidence resonates deeply with what the sociologist Lawrence Rosenthal calls New York’s “hard-hat populism” — an attitude, Rosenthal hypothesizes, that Trump learned working alongside the tradesmen in his father’s real estate empire. But the case itself also resonates deeply with narratives dating back to the first Ku Klux Klan of white womanhood defiled by dark savages. Trump’s public call for the supposed perpetrators’ hides, no matter the proof of guilt or innocence, mimics the rituals of Southern lynchings.

When Trump vowed on the campaign trail to Make America Great Again, he was generally unclear about when exactly it stopped being great. The Vanderbilt University historian Jefferson Cowie tells a story that points to a possible answer. In his book “The Great Exception,” he suggests that what historians considered the main event in 20th century American political development — the rise and consolidation of the “New Deal order” — was in fact an anomaly, made politically possible by a convergence of political factors. One of those was immigration. At the beginning of the 20th century, millions of impoverished immigrants, mostly Catholic and Jewish, entered an overwhelmingly Protestant country. It was only when that demographic transformation was suspended by the 1924 Immigration Act that majorities of Americans proved willing to vote for many liberal policies. In 1965, Congress once more allowed large-scale immigration to the United States — and it is no accident that this date coincides with the increasing conservative backlash against liberalism itself, now that its spoils would be more widely distributed among nonwhites.

The liberalization of immigration law is an obsession of the alt-right. Trump has echoed their rage. “We’ve admitted 59 million immigrants to the United States between 1965 and 2015,” he noted last summer, with rare specificity. “ ‘Come on in, anybody. Just come on in.’ Not anymore.” This was a stark contrast to Reagan, who venerated immigrants, proudly signing a 1986 bill, sponsored by the conservative Republican senator Alan Simpson, that granted many undocumented immigrants citizenship. Shortly before announcing his 1980 presidential run, Reagan even boasted of his wish “to create, literally, a common market situation here in the Americas with an open border between ourselves and Mexico.” But on immigration, at least, it is Trump, not Reagan, who is the apotheosis of the brand of conservatism that now prevails.

A puzzle remains. If Donald Trump was elected as a Marine Le Pen-style — or Hiram Evans-style — herrenvolk republican, what are we to make of the fact that he placed so many bankers and billionaires in his cabinet, and has relentlessly pursued so many 1-percent-friendly policies? More to the point, what are we to make of the fact that his supporters don’t seem to mind?

Here, however, Trump is far from unique. The history of bait-and-switch between conservative electioneering and conservative governance is another rich seam that calls out for fresh scholarly excavation: not of how conservative voters see their leaders, but of the neglected history of how conservative leaders see their voters.

In their 1987 book, “Right Turn,” the political scientists Joel Rogers and Thomas Ferguson presented public-opinion data demonstrating that Reagan’s crusade against activist government, which was widely understood to be the source of his popularity, was not, in fact, particularly popular. For example, when Reagan was re-elected in 1984, only 35 percent of voters favored significant cuts in social programs to reduce the deficit. Much excellent scholarship, well worth revisiting in the age of Trump, suggests an explanation for Reagan’s subsequent success at cutting back social programs in the face of hostile public opinion: It was business leaders, not the general public, who moved to the right, and they became increasingly aggressive and skilled in manipulating the political process behind the scenes.
But another answer hides in plain sight. The often-cynical negotiation between populist electioneering and plutocratic governance on the right has long been not so much a matter of policy as it has been a matter of show business. The media scholar Tim Raphael, in his 2009 book, “The President Electric: Ronald Reagan and the Politics of Performance,” calls the three-minute commercials that interrupted episodes of The General Electric Theater — starring Reagan and his family in their state-of-the-art Pacific Palisades home, outfitted for them by G.E. — television’s first “reality show.” For the California voters who soon made him governor, the ads created a sense of Reagan as a certain kind of character: the kindly paterfamilias, a trustworthy and nonthreatening guardian of the white middle-class suburban enclave. Years later, the producers of “The Apprentice” carefully crafted a Trump character who was the quintessence of steely resolve and all-knowing mastery. American voters noticed. Linda Lucchese, a Trump convention delegate from Illinois who had never previously been involved in politics, told me that she watched “The Apprentice” and decided that Trump would make a perfect president. “All those celebrities,” she told me: “They showed him respect.”

It is a short leap from advertising and reality TV to darker forms of manipulation. Consider the parallels since the 1970s between conservative activism and the traditional techniques of con men. Direct-mail pioneers like Richard Viguerie created hair-on-fire campaign-fund-raising letters about civilization on the verge of collapse. One 1979 pitch warned that “federal and state legislatures are literally flooded with proposed laws that are aimed at total confiscation of firearms from law-abiding citizens.” Another, from the 1990s, warned that “babies are being harvested and sold on the black market by Planned Parenthood clinics.” Recipients of these alarming missives sent checks to battle phony crises, and what they got in return was very real tax cuts for the rich. Note also the more recent connection between Republican politics and “multilevel marketing” operations like Amway (Trump’s education secretary, Betsy DeVos, is the wife of Amway’s former president and the daughter-in-law of its co-founder); and how easily some of these marketing schemes shade into the promotion of dubious miracle cures (Ben Carson, secretary of housing and urban development, with “glyconutrients”; Mike Huckabee shilling for a “solution kit” to “reverse” diabetes; Trump himself taking on a short-lived nutritional-supplements multilevel marketing scheme in 2009). The dubious grifting of Donald Trump, in short, is a part of the structure of conservative history.

Future historians won’t find all that much of a foundation for Trumpism in the grim essays of William F. Buckley, the scrupulous constitutionalist principles of Barry Goldwater or the bright-eyed optimism of Ronald Reagan. They’ll need instead to study conservative history’s political surrealists and intellectual embarrassments, its con artists and tribunes of white rage. It will not be a pleasant story. But if those historians are to construct new arguments to make sense of Trump, the first step may be to risk being impolite.

Editors’ Note: April 16, 2017
An essay on Page 36 this weekend by a historian about how conservatism has changed over the years cites Jonah Goldberg of the National Review as an example of a conservative intellectual who embraced Donald J. Trump following the presidential election. That is a mischaracterization of the views of Mr. Goldberg, who has continued to be critical of Mr. Trump.

Rick Perlstein is the author, most recently, of “The Invisible Bridge: The Fall of Nixon and the Rise of Reagan.”

Media Eco-Systems

CREDIT: ARTICLE FROM COLUMBIA JOURNALISM REVIEW

CJR has done a fine piece of work here! They studied 1.25 million stories, published by 25,000 sources, between April 2015 and Election Day in November 2016.

A few of the most choice insights:

“What we find in our data is a network of mutually-reinforcing hyper-partisan sites that revive what Richard Hofstadter called ‘the paranoid style in American politics,’ combining decontextualized truths, repeated falsehoods, and leaps of logic to create a fundamentally misleading view of the world.”

“Take a look at Ending the Fed, which, according to Buzzfeed’s examination of fake news in November 2016, accounted for five of the top 10 fake stories in the election. In our data, Ending the Fed is indeed prominent by Facebook measures, but not by Twitter shares. In the month before the election, for example, it was one of the three most-shared right-wing sites on Facebook, alongside Breitbart and Truthfeed.”

JCR note: take a look at www.endingthefed.com. I wasn’t even aware of it. Total scum reporting. If this website is even half as powerful as CJR says, we are in a world of hurt. For more on this, see Buzzfeed Commentary on End the Fed

“Use of disinformation by partisan media sources is neither new nor limited to the right wing, but the insulation of the partisan right-wing media from traditional journalistic media sources, and the vehemence of its attacks on journalism in common cause with a similarly outspoken president, is new and distinctive.”

“It is a mistake to dismiss these stories as ‘fake news’; their power stems from a potent mix of verifiable facts (the leaked Podesta emails), familiar repeated falsehoods, paranoid logic, and consistent political orientation within a mutually-reinforcing network of like-minded sites.”

“A remarkable feature of the right-wing media ecosystem is how new it is. Out of all the outlets favored by Trump followers, only the New York Post existed when Ronald Reagan was elected president in 1980. By the election of Bill Clinton in 1992, only the Washington Times, Rush Limbaugh, and arguably Sean Hannity had joined the fray. Alex Jones of Infowars started his first outlet on the radio in 1996. Fox News was not founded until 1996. Breitbart was founded in 2007, and most of the other major nodes in the right-wing media system were created even later.”

And my own reflection is:

I am guilty, as usual, of assuming that revolutions of one time are revolutions for all time. What I mean is … I was so, so impressed with the social media revolution that arguably swept President Obama into the White House. That campaign’s ability to pivot quickly and spread its message through social media was a thing to behold!

What I failed to realize is that the NEXT revolution was following right on its heels! And, sadly, I think the Democratic Party missed it too.

The next revolution was the right wing social media eco-system, a complex fabric of sites that reinforced each other. Rather than spouting “fake news”, as the New York Enquirer did with regularity way back when, these sites specialized in disinformation.

And they came on the scene very recently – led by Breitbart. It is stunning to me how Breitbart nudged Fox News out of the center of the media eco-system in early 2016, and then, as their views increasingly aligned, invited it back into the center alongside itself. This has got to be one of the greatest media coups of all time, orchestrated by Breitbart News, whose leader is now in the White House.

====================STUDY FOLLOWS==================
Study: Breitbart-led right-wing media ecosystem altered broader media agenda
By Yochai Benkler, Robert Faris, Hal Roberts, and Ethan Zuckerman
MARCH 3, 2017

THE 2016 PRESIDENTIAL ELECTION SHOOK the foundations of American politics. Media reports immediately looked for external disruption to explain the unanticipated victory—with theories ranging from Russian hacking to “fake news.”

We have a less exotic, but perhaps more disconcerting explanation: Our own study of over 1.25 million stories published online between April 1, 2015 and Election Day shows that a right-wing media network anchored around Breitbart developed as a distinct and insulated media system, using social media as a backbone to transmit a hyper-partisan perspective to the world. This pro-Trump media sphere appears to have not only successfully set the agenda for the conservative media sphere, but also strongly influenced the broader media agenda, in particular coverage of Hillary Clinton.

While concerns about political and media polarization online are longstanding, our study suggests that polarization was asymmetric. Pro-Clinton audiences were highly attentive to traditional media outlets, which continued to be the most prominent outlets across the public sphere, alongside more left-oriented online sites. But pro-Trump audiences paid the majority of their attention to polarized outlets that have developed recently, many of them only since the 2008 election season.

Attacks on the integrity and professionalism of opposing media were also a central theme of right-wing media. Rather than “fake news” in the sense of wholly fabricated falsities, many of the most-shared stories can more accurately be understood as disinformation: the purposeful construction of true or partly true bits of information into a message that is, at its core, misleading. Over the course of the election, this turned the right-wing media system into an internally coherent, relatively insulated knowledge community, reinforcing the shared worldview of readers and shielding them from journalism that challenged it. The prevalence of such material has created an environment in which the President can tell supporters about events in Sweden that never happened, or a presidential advisor can reference a non-existent “Bowling Green massacre.”

We began to study this ecosystem by looking at the landscape of what sites people share. If a person shares a link from Breitbart, is he or she more likely also to share a link from Fox News or from The New York Times? We analyzed hyperlinking patterns, social media sharing patterns on Facebook and Twitter, and topic and language patterns in the content of the 1.25 million stories, published by 25,000 sources over the course of the election, using Media Cloud, an open-source platform for studying media ecosystems developed by Harvard’s Berkman Klein Center for Internet & Society and MIT’s Center for Civic Media.
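
To make the co-sharing measure concrete, here is a rough Python sketch (mine, not Media Cloud’s actual code; the field names and sample records are invented) of the idea described above: two sites are linked more strongly the more often the same user shares both on the same day, and a site’s node size tracks its total shares.

from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical input: one record per share event (invented fields and data).
shares = [
    {"user": "u1", "site": "breitbart.com", "day": "2016-10-01"},
    {"user": "u1", "site": "foxnews.com", "day": "2016-10-01"},
    {"user": "u2", "site": "nytimes.com", "day": "2016-10-01"},
    {"user": "u2", "site": "washingtonpost.com", "day": "2016-10-01"},
    {"user": "u1", "site": "breitbart.com", "day": "2016-10-02"},
]

# Collect the set of sites each user shared on each day.
sites_by_user_day = defaultdict(set)
for s in shares:
    sites_by_user_day[(s["user"], s["day"])].add(s["site"])

# Edge weight: how many (user, day) pairs shared both sites.
edge_weights = Counter()
for sites in sites_by_user_day.values():
    for a, b in combinations(sorted(sites), 2):
        edge_weights[(a, b)] += 1

# Node size proxy: total shares per site.
node_sizes = Counter(s["site"] for s in shares)

print(edge_weights.most_common(3))
print(node_sizes)
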

When we map media sources this way, we see that Breitbart became the center of a distinct right-wing media ecosystem, surrounded by Fox News, the Daily Caller, the Gateway Pundit, the Washington Examiner, Infowars, Conservative Treehouse, and Truthfeed.

Fig. 1: Media sources shared on Twitter during the election (nodes sized in proportion to Twitter shares).

(Chart not printed here)

Fig. 2: Media sources shared on Twitter during the election (nodes sized in proportion to Facebook shares).

(Chart not printed here) 

The most frequently shared media sources for Twitter users that retweeted either Trump or Clinton.

Notes: In the above clouds, the nodes are sized according to how often they were shared on Twitter (Fig. 1) or Facebook (Fig. 2). The location of nodes is determined by whether two sites were shared by the same Twitter user on the same day, representing the extent to which two sites draw similar audiences. The colors assigned to a site in the map reflect the share of that site’s stories tweeted by users who also retweeted either Clinton or Trump during the election. These colors therefore reflect the attention patterns of audiences, not analysis of content of the sites. Dark blue sites draw attention in ratios of at least 4:1 from Clinton followers; red sites 4:1 Trump followers. Green sites are retweeted more or less equally by followers of each candidate. Light-blue sites draw 3:2 Clinton followers, and pink draw 3:2 Trump followers.
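
A small sketch of the coloring rule those notes describe, assuming per-site counts of tweets from users who also retweeted Clinton or Trump (the function name, and the exact bucket boundaries between the stated 4:1 and 3:2 thresholds, are my own):

def partisanship_color(clinton_shares, trump_shares):
    """Map a site's audience split (tweets from users who also retweeted
    Clinton vs. Trump) to the color buckets described in the notes above.
    The 4:1 and 3:2 thresholds come from the notes; the boundaries in
    between are my own reading of them."""
    total = clinton_shares + trump_shares
    if total == 0:
        return "unclassified"
    clinton_frac = clinton_shares / total
    if clinton_frac >= 4 / 5:   # at least 4:1 Clinton followers
        return "dark blue"
    if clinton_frac >= 3 / 5:   # roughly 3:2 Clinton followers
        return "light blue"
    if clinton_frac > 2 / 5:    # near-even split
        return "green"
    if clinton_frac > 1 / 5:    # roughly 3:2 Trump followers
        return "pink"
    return "red"                # at least 4:1 Trump followers

print(partisanship_color(90, 10))   # dark blue
print(partisanship_color(40, 60))   # pink
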

Our analysis challenges a simple narrative that the internet as a technology is what fragments public discourse and polarizes opinions, by allowing us to inhabit filter bubbles or just read “the daily me.” If technology were the most important driver towards a “post-truth” world, we would expect to see symmetric patterns on the left and the right. Instead, different internal political dynamics in the right and the left led to different patterns in the reception and use of the technology by each wing. While Facebook and Twitter certainly enabled right-wing media to circumvent the gatekeeping power of traditional media, the pattern was not symmetric.

The size of the nodes marking traditional professional media like The New York Times, The Washington Post, and CNN, surrounded by the Hill, ABC, and NBC, tells us that these media drew particularly large audiences. Their color tells us that Clinton followers attended to them more than Trump followers, and their proximity on the map to more quintessentially partisan sites—like Huffington Post, MSNBC, or the Daily Beast—suggests that attention to these more partisan outlets on the left was more tightly interwoven with attention to traditional media. The Breitbart-centered wing, by contrast, is farther from the mainstream set and lacks bridging nodes that draw attention and connect it to that mainstream.

Moreover, the fact that these asymmetric patterns of attention were similar on both Twitter and Facebook suggests that human choices and political campaigning, not one company’s algorithm, were responsible for the patterns we observe. These patterns might be the result of a coordinated campaign, but they could also be an emergent property of decentralized behavior, or some combination of both. Our data to this point cannot distinguish between these alternatives.

Another way of seeing this asymmetry is to graph how much attention is given to sites that draw attention mostly from one side of the partisan divide. There are very few center-right sites: sites that draw many Trump followers, but also a substantial number of Clinton followers. Between the moderately conservative Wall Street Journal, which draws Clinton and Trump supporters in equal shares, and the starkly partisan sites that draw Trump supporters by ratios of 4:1 or more, there are only a handful of sites. Once a threshold of partisan-only attention is reached, the number of sites in the clearly partisan right increases, and indeed exceeds the number of sites in the clearly partisan left. By contrast, starting at The Wall Street Journal and moving left, attention is spread more evenly across a range of sites whose audience reflects a gradually increasing proportion of Clinton followers as opposed to Trump followers. Unlike on the right, on the left there is no dramatic increase in either the number of sites or levels of attention they receive as we move to  more clearly partisan sites.

(Chart not printed here)

Sites by partisan attention and Twitter shares.

(Chart not printed here)

Sites by partisan attention and Facebook shares.
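
One hedged way to assemble a chart like the ones above, assuming hypothetical per-site counts of Clinton-follower shares, Trump-follower shares, and total shares, is to bin each site by the Trump share of its partisan audience and sum attention within each bin:

from collections import defaultdict

# Hypothetical per-site totals: (Clinton-follower shares, Trump-follower shares, total shares).
sites = {
    "wsj.com": (50, 50, 1000),
    "nytimes.com": (80, 20, 6000),
    "huffingtonpost.com": (85, 15, 2500),
    "breitbart.com": (5, 95, 4000),
    "infowars.com": (3, 97, 1500),
}

def partisan_bin(clinton, trump, n_bins=10):
    # Bin by the Trump share of partisan attention: bin 0 is an all-Clinton
    # audience, bin n_bins - 1 is an all-Trump audience.
    frac_trump = trump / (clinton + trump)
    return min(int(frac_trump * n_bins), n_bins - 1)

attention_by_bin = defaultdict(int)
site_count_by_bin = defaultdict(int)
for clinton, trump, total in sites.values():
    b = partisan_bin(clinton, trump)
    attention_by_bin[b] += total
    site_count_by_bin[b] += 1

for b in sorted(attention_by_bin):
    print(f"bin {b}: {site_count_by_bin[b]} site(s), {attention_by_bin[b]} shares")
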
 
The primary explanation of such asymmetric polarization is more likely politics and culture than technology.

A remarkable feature of the right-wing media ecosystem is how new it is. Out of all the outlets favored by Trump followers, only the New York Post existed when Ronald Reagan was elected president in 1980. By the election of Bill Clinton in 1992, only the Washington Times, Rush Limbaugh, and arguably Sean Hannity had joined the fray. Alex Jones of Infowars started his first outlet on the radio in 1996. Fox News was not founded until 1996. Breitbart was founded in 2007, and most of the other major nodes in the right-wing media system were created even later. Outside the right-wing, the map reflects a mixture of high attention to traditional journalistic outlets and dispersed attention to new, online-only, and partisan media.

The pattern of hyper-partisan attack was set during the primary campaign, targeting not only opposing candidates but also media that did not support Trump’s candidacy. In our data, looking at the most widely-shared stories during the primary season and at the monthly maps of media during those months, we see that Jeb Bush, Marco Rubio, and Fox News were the targets of attack.

The first and seventh most highly-tweeted stories from Infowars.com, one of the 10 most influential sites in the right-wing media system.
 
The February map, for example, shows Fox News as a smaller node quite distant from the Breitbart-centered right. It reflects the fact that Fox News received less attention than it did earlier or later in the campaign, and less attention, in particular, from users who also paid attention to the core Breitbart-centered sites and whose attention would have drawn Fox closer to Breitbart. The March map is similar, and only over April and May do Fox’s overall attention and attention from Breitbart followers revive.

This sidelining of Fox News in early 2016 coincided with sustained attacks against it by Breitbart. The top-20 stories in the right-wing media ecology during January included, for example, “Trump Campaign Manager Reveals Fox News Debate Chief Has Daughter Working for Rubio.” More generally, the five most-widely shared stories in which Breitbart refers to Fox are stories aimed to delegitimize Fox as the central arbiter of conservative news, tying it to immigration, terrorism and Muslims, and corruption:
• The Anti-Trump Network: Fox News Money Flows into Open Borders Group;
• NY Times Bombshell Scoop: Fox News Colluded with Rubio to Give Amnesty to Illegal Aliens;
• Google and Fox TV Invite Anti-Trump, Hitler-Citing, Muslim Advocate to Join Next GOP TV-Debate;
• Fox, Google Pick 1994 Illegal Immigrant To Ask Question In Iowa GOP Debate;
• Fox News At Facebook Meeting Is Misdirection: Murdoch and Zuckerberg Are Deeply Connected Over Immigration.

The repeated theme of conspiracy, corruption, and media betrayal is palpable in these highly shared Breitbart headlines linking Fox News, Rubio, and illegal immigration.
 

As the primaries ended, our maps show that attention to Fox revived and was more closely integrated with Breitbart and the remainder of the right-wing media sphere. The primary target of the right-wing media then became all other traditional media. While the prominence of different media sources in the right-wing sphere varies when viewed by shares on Facebook and Twitter, the content and core structure, with Breitbart at the center, is stable across platforms. Infowars, and similarly radical sites Truthfeed and Ending the Fed, gain in prominence in the Facebook map.

(Chart not printed here)

October 2016 by Twitter shares

(Chart not printed here)

October 2016 by Facebook shares

These two maps reveal the same pattern. Even in the highly-charged pre-election month, everyone outside the Breitbart-centered universe forms a tightly interconnected attention network, with major traditional mass media and professional sources at the core. The right, by contrast, forms its own insular sphere.
 
The right-wing media was also able to bring the focus on immigration, Clinton emails, and scandals more generally to the broader media environment. A sentence-level analysis of stories throughout the media environment suggests that Donald Trump’s substantive agenda—heavily focused on immigration and direct attacks on Hillary Clinton—came to dominate public discussions.

Number of sentences in mainstream media that address Trump and Clinton issues and scandals.
 
Coverage of Clinton overwhelmingly focused on emails, followed by the Clinton Foundation and Benghazi. Coverage of Trump included some scandal, but the most prevalent topic of Trump-focused stories was his main substantive agenda item—immigration—and his arguments about jobs and trade also received more attention than his scandals.
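
As a rough illustration of the sentence-level counting described above (not the authors’ actual coding scheme; the keyword lists and sample text are invented), one could tag each sentence that mentions a candidate together with a topic term:

import re
from collections import Counter

# Invented keyword lists; the study's actual coding scheme is far richer.
TOPICS = {
    ("Clinton", "emails"): ["clinton", "email"],
    ("Clinton", "foundation"): ["clinton", "foundation"],
    ("Trump", "immigration"): ["trump", "immigration"],
    ("Trump", "jobs/trade"): ["trump", "trade"],
}

def count_topic_sentences(text):
    """Count sentences that mention every keyword for a (candidate, topic) pair."""
    counts = Counter()
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        lowered = sentence.lower()
        for topic, keywords in TOPICS.items():
            if all(k in lowered for k in keywords):
                counts[topic] += 1
    return counts

sample = ("Clinton faced new questions about her email server. "
          "Trump again made immigration the centerpiece of his speech. "
          "Trade deals, Trump said, had failed American workers.")
print(count_topic_sentences(sample))
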

Proportion of election coverage that discusses immigration for selected media sources.
 
While mainstream media coverage was often critical, it nonetheless revolved around the agenda that the right-wing media sphere set: immigration. Right-wing media, in turn, framed immigration in terms of terror, crime, and Islam, as a review of the Breitbart and other right-wing media stories about immigration most widely shared on social media shows.
Immigration is the key topic around which Trump and Breitbart found common cause; just as Trump made this a focal point for his campaign, Breitbart devoted disproportionate attention to the topic.
 

Top immigration-related stories from right-wing media shared on Twitter or Facebook.
 
What we find in our data is a network of mutually-reinforcing hyper-partisan sites that revive what Richard Hofstadter called “the paranoid style in American politics,” combining decontextualized truths, repeated falsehoods, and leaps of logic to create a fundamentally misleading view of the world. “Fake news,” which implies stories made up out of whole cloth by politically disinterested parties out to make a buck off Facebook advertising dollars, rather than propaganda and disinformation, is not an adequate term. By repetition, variation, and circulation through many associated sites, the network of sites makes its claims familiar to readers, and this fluency with the core narrative gives credence to the incredible.

Take a look at Ending the Fed, which, according to Buzzfeed’s examination of fake news in November 2016, accounted for five of the top 10 fake stories in the election. In our data, Ending the Fed is indeed prominent by Facebook measures, but not by Twitter shares. In the month before the election, for example, it was one of the three most-shared right-wing sites on Facebook, alongside Breitbart and Truthfeed. While Ending the Fed clearly had great success marketing stories on Facebook, our analysis shows nothing distinctive about the site—it is simply part-and-parcel of the Breitbart-centered sphere.

And the false claims perpetuated in Ending the Fed’s most-shared posts are well-established tropes in right-wing media: the leaked Podesta emails, alleged Saudi funding of Clinton’s campaign, and a lack of credibility in media. The most Facebook-shared story by Ending the Fed in October was “IT’S OVER: Hillary’s ISIS Email Just Leaked & It’s Worse Than Anyone Could Have Imagined.” See also Infowars’ “Saudi Arabia has funded 20% of Hillary’s Presidential Campaign, Saudi Crown Prince Claims,” and Breitbart’s “Clinton Cash: Khizr Khan’s Deep Legal, Financial Connections to Saudi Arabia, Hillary’s Clinton Foundation Tie Terror, Immigration, Email Scandals Together.” This mix of claims and facts, linked through paranoid logic, characterizes much of the most shared content linked to Breitbart. It is a mistake to dismiss these stories as “fake news”; their power stems from a potent mix of verifiable facts (the leaked Podesta emails), familiar repeated falsehoods, paranoid logic, and consistent political orientation within a mutually-reinforcing network of like-minded sites.

Use of disinformation by partisan media sources is neither new nor limited to the right wing, but the insulation of the partisan right-wing media from traditional journalistic media sources, and the vehemence of its attacks on journalism in common cause with a similarly outspoken president, is new and distinctive.

Rebuilding a basis on which Americans can form a shared belief about what is going on is a precondition of democracy, and the most important task confronting the press going forward. Our data strongly suggest that most Americans, including those who access news through social networks, continue to pay attention to traditional media that follow professional journalistic practices, and to cross-reference what they read on partisan sites with what they read on mass media sites.

To accomplish this, traditional media needs to reorient, not by developing better viral content and clickbait to compete in the social media environment, but by recognizing that it is operating in a propaganda and disinformation-rich environment. This, not Macedonian teenagers or Facebook, is the real challenge of the coming years. Rising to this challenge could usher in a new golden age for the Fourth Estate.

The election study was funded by the Open Society Foundations U.S. Program. Media Cloud has received funding from The Bill and Melinda Gates Foundation, the Robert Wood Johnson Foundation, the Ford Foundation, and the Open Society Foundations.

Yochai Benkler, Robert Faris, Hal Roberts, and Ethan Zuckerman are the authors. Benkler is a professor at Harvard Law School and co-director of the Berkman Klein Center for Internet and Society at Harvard; Faris is research director at BKC; Roberts is a fellow at BKC and technical lead of Media Cloud; and Zuckerman is director of the MIT Center for Civic Media.

NYT on Google Brain, Google Translate, and AI Progress

Amazing progress!

New York Times Article on Google and AI Progress

The Great A.I. Awakening
How Google used artificial intelligence to transform Google Translate, one of its more popular services — and how machine learning is poised to reinvent computing itself.
BY GIDEON LEWIS-KRAUS | DEC. 14, 2016

Referenced Here: Google Translate, Google Brain, Jun Rekimoto, Sundar Pichai, Jeff Dean, Andrew Ng, Greg Corrado, Baidu, Project Marvin, Geoffrey Hinton, Max Planck Institute for Biological Cybernetics, Quoc Le, MacDuff Hughes, Apple’s Siri, Facebook’s M, Amazon’s Echo, Alan Turing, GO (the Board Game), convolutional neural network of Yann LeCun, supervised learning, machine learning, deep learning, Mike Schuster, T.P.U.s

Prologue: You Are What You Have Read
Late one Friday night in early November, Jun Rekimoto, a distinguished professor of human-computer interaction at the University of Tokyo, was online preparing for a lecture when he began to notice some peculiar posts rolling in on social media. Apparently Google Translate, the company’s popular machine-translation service, had suddenly and almost immeasurably improved. Rekimoto visited Translate himself and began to experiment with it. He was astonished. He had to go to sleep, but Translate refused to relax its grip on his imagination.
Rekimoto wrote up his initial findings in a blog post. First, he compared a few sentences from two published versions of “The Great Gatsby,” Takashi Nozaki’s 1957 translation and Haruki Murakami’s more recent iteration, with what this new Google Translate was able to produce. Murakami’s translation is written “in very polished Japanese,” Rekimoto explained to me later via email, but the prose is distinctively “Murakami-style.” By contrast, Google’s translation — despite some “small unnaturalness” — reads to him as “more transparent.”
The second half of Rekimoto’s post examined the service in the other direction, from Japanese to English. He dashed off his own Japanese interpretation of the opening to Hemingway’s “The Snows of Kilimanjaro,” then ran that passage back through Google into English. He published this version alongside Hemingway’s original, and proceeded to invite his readers to guess which was the work of a machine.
NO. 1:
Kilimanjaro is a snow-covered mountain 19,710 feet high, and is said to be the highest mountain in Africa. Its western summit is called the Masai “Ngaje Ngai,” the House of God. Close to the western summit there is the dried and frozen carcass of a leopard. No one has explained what the leopard was seeking at that altitude.
NO. 2:
Kilimanjaro is a mountain of 19,710 feet covered with snow and is said to be the highest mountain in Africa. The summit of the west is called “Ngaje Ngai” in Masai, the house of God. Near the top of the west there is a dry and frozen dead body of leopard. No one has ever explained what leopard wanted at that altitude.
Even to a native English speaker, the missing article on the leopard is the only real giveaway that No. 2 was the output of an automaton. Their closeness was a source of wonder to Rekimoto, who was well acquainted with the capabilities of the previous service. Only 24 hours earlier, Google would have translated the same Japanese passage as follows:
Kilimanjaro is 19,710 feet of the mountain covered with snow, and it is said that the highest mountain in Africa. Top of the west, “Ngaje Ngai” in the Maasai language, has been referred to as the house of God. The top close to the west, there is a dry, frozen carcass of a leopard. Whether the leopard had what the demand at that altitude, there is no that nobody explained.
Rekimoto promoted his discovery to his hundred thousand or so followers on Twitter, and over the next few hours thousands of people broadcast their own experiments with the machine-translation service. Some were successful, others meant mostly for comic effect. As dawn broke over Tokyo, Google Translate was the No. 1 trend on Japanese Twitter, just above some cult anime series and the long-awaited new single from a girl-idol supergroup. Everybody wondered: How had Google Translate become so uncannily artful?

Four days later, a couple of hundred journalists, entrepreneurs and advertisers from all over the world gathered in Google’s London engineering office for a special announcement. Guests were greeted with Translate-branded fortune cookies. Their paper slips had a foreign phrase on one side — mine was in Norwegian — and on the other, an invitation to download the Translate app. Tables were set with trays of doughnuts and smoothies, each labeled with a placard that advertised its flavor in German (zitrone), Portuguese (baunilha) or Spanish (manzana). After a while, everyone was ushered into a plush, dark theater.

Sadiq Khan, the mayor of London, stood to make a few opening remarks. A friend, he began, had recently told him he reminded him of Google. “Why, because I know all the answers?” the mayor asked. “No,” the friend replied, “because you’re always trying to finish my sentences.” The crowd tittered politely. Khan concluded by introducing Google’s chief executive, Sundar Pichai, who took the stage.
Pichai was in London in part to inaugurate Google’s new building there, the cornerstone of a new “knowledge quarter” under construction at King’s Cross, and in part to unveil the completion of the initial phase of a company transformation he announced last year. The Google of the future, Pichai had said on several occasions, was going to be “A.I. first.” What that meant in theory was complicated and had invited much speculation. What it meant in practice, with any luck, was that soon the company’s products would no longer represent the fruits of traditional computer programming, exactly, but “machine learning.”
A rarefied department within the company, Google Brain, was founded five years ago on this very principle: that artificial “neural networks” that acquaint themselves with the world via trial and error, as toddlers do, might in turn develop something like human flexibility. This notion is not new — a version of it dates to the earliest stages of modern computing, in the 1940s — but for much of its history most computer scientists saw it as vaguely disreputable, even mystical. Since 2011, though, Google Brain has demonstrated that this approach to artificial intelligence could solve many problems that confounded decades of conventional efforts. Speech recognition didn’t work very well until Brain undertook an effort to revamp it; the application of machine learning made its performance on Google’s mobile platform, Android, almost as good as human transcription. The same was true of image recognition. Less than a year ago, Brain for the first time commenced with the gut renovation of an entire consumer product, and its momentous results were being celebrated tonight.
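
As a toy illustration of that trial-and-error idea (nothing like Google’s production systems; just a few lines of NumPy of my own), here is a tiny network that starts with random weights and, by repeatedly guessing and nudging its weights to shrink its error, learns the XOR function:

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # four inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer weights
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(5000):
    # Forward pass: make a guess.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    # Backward pass: measure the error and nudge every weight to reduce it.
    d_output = (output - y) * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * hidden.T @ d_output
    b2 -= learning_rate * d_output.sum(axis=0)
    W1 -= learning_rate * X.T @ d_hidden
    b1 -= learning_rate * d_hidden.sum(axis=0)

print(np.round(output, 2))  # should end up close to [[0], [1], [1], [0]]
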
Translate made its debut in 2006 and since then has become one of Google’s most reliable and popular assets; it serves more than 500 million monthly users in need of 140 billion words per day in a different language. It exists not only as its own stand-alone app but also as an integrated feature within Gmail, Chrome and many other Google offerings, where we take it as a push-button given — a frictionless, natural part of our digital commerce. It was only with the refugee crisis, Pichai explained from the lectern, that the company came to reckon with Translate’s geopolitical importance: On the screen behind him appeared a graph whose steep curve indicated a recent fivefold increase in translations between Arabic and German. (It was also close to Pichai’s own heart. He grew up in India, a land divided by dozens of languages.) The team had been steadily adding new languages and features, but gains in quality over the last four years had slowed considerably.
Until today. As of the previous weekend, Translate had been converted to an A.I.-based system for much of its traffic, not just in the United States but in Europe and Asia as well: The rollout included translations between English and Spanish, French, Portuguese, German, Chinese, Japanese, Korean and Turkish. The rest of Translate’s hundred-odd languages were to come, with the aim of eight per month, by the end of next year. The new incarnation, to the pleasant surprise of Google’s own engineers, had been completed in only nine months. The A.I. system had demonstrated overnight improvements roughly equal to the total gains the old one had accrued over its entire lifetime.
Pichai has an affection for the obscure literary reference; he told me a month earlier, in his office in Mountain View, Calif., that Translate in part exists because not everyone can be like the physicist Robert Oppenheimer, who learned Sanskrit to read the Bhagavad Gita in the original. In London, the slide on the monitors behind him flicked to a Borges quote: “Uno no es lo que es por lo que escribe, sino por lo que ha leído.”
Grinning, Pichai read aloud an awkward English version of the sentence that had been rendered by the old Translate system: “One is not what is for what he writes, but for what he has read.”
To the right of that was a new A.I.-rendered version: “You are not what you write, but what you have read.”
It was a fitting remark: The new Google Translate was run on the first machines that had, in a sense, ever learned to read anything at all.
Google’s decision to reorganize itself around A.I. was the first major manifestation of what has become an industrywide machine-learning delirium. Over the past four years, six companies in particular — Google, Facebook, Apple, Amazon, Microsoft and the Chinese firm Baidu — have touched off an arms race for A.I. talent, particularly within universities. Corporate promises of resources and freedom have thinned out top academic departments. It has become widely known in Silicon Valley that Mark Zuckerberg, chief executive of Facebook, personally oversees, with phone calls and video-chat blandishments, his company’s overtures to the most desirable graduate students. Starting salaries of seven figures are not unheard-of. Attendance at the field’s most important academic conference has nearly quadrupled. What is at stake is not just one more piecemeal innovation but control over what very well could represent an entirely new computational platform: pervasive, ambient artificial intelligence.

The phrase “artificial intelligence” is invoked as if its meaning were self-evident, but it has always been a source of confusion and controversy. Imagine if you went back to the 1970s, stopped someone on the street, pulled out a smartphone and showed her Google Maps. Once you managed to convince her you weren’t some oddly dressed wizard, and that what you withdrew from your pocket wasn’t a black-arts amulet but merely a tiny computer more powerful than the one that guided the Apollo spacecraft, Google Maps would almost certainly seem to her a persuasive example of “artificial intelligence.” In a very real sense, it is. It can do things any map-literate human can manage, like get you from your hotel to the airport — though it can do so much more quickly and reliably. It can also do things that humans simply and obviously cannot: It can evaluate the traffic, plan the best route and reorient itself when you take the wrong exit.
Practically nobody today, however, would bestow upon Google Maps the honorific “A.I.,” so sentimental and sparing are we in our use of the word “intelligence.” Artificial intelligence, we believe, must be something that distinguishes HAL from whatever it is a loom or wheelbarrow can do. The minute we can automate a task, we downgrade the relevant skill involved to one of mere mechanism. Today Google Maps seems, in the pejorative sense of the term, robotic: It simply accepts an explicit demand (the need to get from one place to another) and tries to satisfy that demand as efficiently as possible. The goal posts for “artificial intelligence” are thus constantly receding.
When he has an opportunity to make careful distinctions, Pichai differentiates between the current applications of A.I. and the ultimate goal of “artificial general intelligence.” Artificial general intelligence will not involve dutiful adherence to explicit instructions, but instead will demonstrate a facility with the implicit, the interpretive. It will be a general tool, designed for general purposes in a general context. Pichai believes his company’s future depends on something like this. Imagine if you could tell Google Maps, “I’d like to go to the airport, but I need to stop off on the way to buy a present for my nephew.” A more generally intelligent version of that service — a ubiquitous assistant, of the sort that Scarlett Johansson memorably disembodied three years ago in the Spike Jonze film “Her”— would know all sorts of things that, say, a close friend or an earnest intern might know: your nephew’s age, and how much you ordinarily like to spend on gifts for children, and where to find an open store. But a truly intelligent Maps could also conceivably know all sorts of things a close friend wouldn’t, like what has only recently come into fashion among preschoolers in your nephew’s school — or more important, what its users actually want. If an intelligent machine were able to discern some intricate if murky regularity in data about what we have done in the past, it might be able to extrapolate about our subsequent desires, even if we don’t entirely know them ourselves.
The new wave of A.I.-enhanced assistants — Apple’s Siri, Facebook’s M, Amazon’s Echo — are all creatures of machine learning, built with similar intentions. The corporate dreams for machine learning, however, aren’t exhausted by the goal of consumer clairvoyance. A medical-imaging subsidiary of Samsung announced this year that its new ultrasound devices could detect breast cancer. Management consultants are falling all over themselves to prep executives for the widening industrial applications of computers that program themselves. DeepMind, a 2014 Google acquisition, defeated the reigning human grandmaster of the ancient board game Go, despite predictions that such an achievement would take another 10 years.
In a famous 1950 essay, Alan Turing proposed a test for an artificial general intelligence: a computer that could, over the course of five minutes of text exchange, successfully deceive a real human interlocutor. Once a machine can translate fluently between two natural languages, the foundation has been laid for a machine that might one day “understand” human language well enough to engage in plausible conversation. Google Brain’s members, who pushed and helped oversee the Translate project, believe that such a machine would be on its way to serving as a generally intelligent all-encompassing personal digital assistant.

What follows here is the story of how a team of Google researchers and engineers — at first one or two, then three or four, and finally more than a hundred — made considerable progress in that direction. It’s an uncommon story in many ways, not least of all because it defies many of the Silicon Valley stereotypes we’ve grown accustomed to. It does not feature people who think that everything will be unrecognizably different tomorrow or the next day because of some restless tinkerer in his garage. It is neither a story about people who think technology will solve all our problems nor one about people who think technology is ineluctably bound to create apocalyptic new ones. It is not about disruption, at least not in the way that word tends to be used.
It is, in fact, three overlapping stories that converge in Google Translate’s successful metamorphosis to A.I. — a technical story, an institutional story and a story about the evolution of ideas. The technical story is about one team on one product at one company, and the process by which they refined, tested and introduced a brand-new version of an old product in only about a quarter of the time anyone, themselves included, might reasonably have expected. The institutional story is about the employees of a small but influential artificial-intelligence group within that company, and the process by which their intuitive faith in some old, unproven and broadly unpalatable notions about computing upended every other company within a large radius. The story of ideas is about the cognitive scientists, psychologists and wayward engineers who long toiled in obscurity, and the process by which their ostensibly irrational convictions ultimately inspired a paradigm shift in our understanding not only of technology but also, in theory, of consciousness itself.

The first story, the story of Google Translate, takes place in Mountain View over nine months, and it explains the transformation of machine translation. The second story, the story of Google Brain and its many competitors, takes place in Silicon Valley over five years, and it explains the transformation of that entire community. The third story, the story of deep learning, takes place in a variety of far-flung laboratories — in Scotland, Switzerland, Japan and most of all Canada — over seven decades, and it might very well contribute to the revision of our self-image as first and foremost beings who think.
All three are stories about artificial intelligence. The seven-decade story is about what we might conceivably expect or want from it. The five-year story is about what it might do in the near future. The nine-month story is about what it can do right this minute. These three stories are themselves just proof of concept. All of this is only the beginning.

Part I: Learning Machine
1. The Birth of Brain
Jeff Dean, though his title is senior fellow, is the de facto head of Google Brain. Dean is a sinewy, energy-efficient man with a long, narrow face, deep-set eyes and an earnest, soapbox-derby sort of enthusiasm. The son of a medical anthropologist and a public-health epidemiologist, Dean grew up all over the world — Minnesota, Hawaii, Boston, Arkansas, Geneva, Uganda, Somalia, Atlanta — and, while in high school and college, wrote software used by the World Health Organization. He has been with Google since 1999, as employee 25ish, and has had a hand in the core software systems beneath nearly every significant undertaking since then. A beloved artifact of company culture is Jeff Dean Facts, written in the style of the Chuck Norris Facts meme: “Jeff Dean’s PIN is the last four digits of pi.” “When Alexander Graham Bell invented the telephone, he saw a missed call from Jeff Dean.” “Jeff Dean got promoted to Level 11 in a system where the maximum level is 10.” (This last one is, in fact, true.)

One day in early 2011, Dean walked into one of the Google campus’s “microkitchens” — the “Googley” word for the shared break spaces on most floors of the Mountain View complex’s buildings — and ran into Andrew Ng, a young Stanford computer-science professor who was working for the company as a consultant. Ng told him about Project Marvin, an internal effort (named after the celebrated A.I. pioneer Marvin Minsky) he had recently helped establish to experiment with “neural networks,” pliant digital lattices based loosely on the architecture of the brain. Dean himself had worked on a primitive version of the technology as an undergraduate at the University of Minnesota in 1990, during one of the method’s brief windows of mainstream acceptability. Now, over the previous five years, the number of academics working on neural networks had begun to grow again, from a handful to a few dozen. Ng told Dean that Project Marvin, which was being underwritten by Google’s secretive X lab, had already achieved some promising results.
Dean was intrigued enough to lend his “20 percent” — the portion of work hours every Google employee is expected to contribute to programs outside his or her core job — to the project. Pretty soon, he suggested to Ng that they bring in another colleague with a neuroscience background, Greg Corrado. (In graduate school, Corrado was taught briefly about the technology, but strictly as a historical curiosity. “It was good I was paying attention in class that day,” he joked to me.) In late spring they brought in one of Ng’s best graduate students, Quoc Le, as the project’s first intern. By then, a number of the Google engineers had taken to referring to Project Marvin by another name: Google Brain.
Since the term “artificial intelligence” was first coined, at a kind of constitutional convention of the mind at Dartmouth in the summer of 1956, a majority of researchers have long thought the best approach to creating A.I. would be to write a very big, comprehensive program that laid out both the rules of logical reasoning and sufficient knowledge of the world. If you wanted to translate from English to Japanese, for example, you would program into the computer all of the grammatical rules of English, and then the entirety of definitions contained in the Oxford English Dictionary, and then all of the grammatical rules of Japanese, as well as all of the words in the Japanese dictionary, and only after all of that feed it a sentence in a source language and ask it to tabulate a corresponding sentence in the target language. You would give the machine a language map that was, as Borges would have had it, the size of the territory. This perspective is usually called “symbolic A.I.” — because its definition of cognition is based on symbolic logic — or, disparagingly, “good old-fashioned A.I.”
There are two main problems with the old-fashioned approach. The first is that it’s awfully time-consuming on the human end. The second is that it only really works in domains where rules and definitions are very clear: in mathematics, for example, or chess. Translation, however, is an example of a field where this approach fails horribly, because words cannot be reduced to their dictionary definitions, and because languages tend to have as many exceptions as they have rules. More often than not, a system like this is liable to translate “minister of agriculture” as “priest of farming.” Still, for math and chess it worked great, and the proponents of symbolic A.I. took it for granted that no activities signaled “general intelligence” better than math and chess.

There were, however, limits to what this system could do. In the 1980s, a robotics researcher at Carnegie Mellon pointed out that it was easy to get computers to do adult things but nearly impossible to get them to do things a 1-year-old could do, like hold a ball or identify a cat. By the 1990s, despite punishing advancements in computer chess, we still weren’t remotely close to artificial general intelligence.
There has always been another vision for A.I. — a dissenting view — in which the computers would learn from the ground up (from data) rather than from the top down (from rules). This notion dates to the early 1940s, when it occurred to researchers that the best model for flexible automated intelligence was the brain itself. A brain, after all, is just a bunch of widgets, called neurons, that either pass along an electrical charge to their neighbors or don’t. What’s important are less the individual neurons themselves than the manifold connections among them. This structure, in its simplicity, has afforded the brain a wealth of adaptive advantages. The brain can operate in circumstances in which information is poor or missing; it can withstand significant damage without total loss of control; it can store a huge amount of knowledge in a very efficient way; it can isolate distinct patterns but retain the messiness necessary to handle ambiguity.
There was no reason you couldn’t try to mimic this structure in electronic form, and in 1943 it was shown that arrangements of simple artificial neurons could carry out basic logical functions. They could also, at least in theory, learn the way we do. With life experience, depending on a particular person’s trials and errors, the synaptic connections among pairs of neurons get stronger or weaker. An artificial neural network could do something similar, by gradually altering, on a guided trial-and-error basis, the numerical relationships among artificial neurons. It wouldn’t need to be preprogrammed with fixed rules. It would, instead, rewire itself to reflect patterns in the data it absorbed.
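For readers who want to see that trial-and-error idea in miniature, here is a sketch of a single artificial neuron, with an invented toy dataset, that strengthens or weakens its numerical connections until it has absorbed a pattern; nothing about the pattern is ever written down as a rule:

```python
# A minimal sketch (not anyone's production code) of a single artificial
# "neuron" that adjusts its numerical connections by guided trial and error.
import random

def step(x):
    return 1.0 if x > 0 else 0.0

# Toy labeled data: learn the logical OR of two inputs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = random.uniform(-1, 1)
learning_rate = 0.1

for _ in range(100):                      # repeated trial and error
    for (x1, x2), target in data:
        output = step(weights[0] * x1 + weights[1] * x2 + bias)
        error = target - output           # how wrong was the guess?
        # Nudge each connection in the direction that reduces the error.
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

print([step(weights[0] * x1 + weights[1] * x2 + bias) for (x1, x2), _ in data])
# Prints [0.0, 1.0, 1.0, 1.0]: the OR pattern, never written down as a rule.
```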
This attitude toward artificial intelligence was evolutionary rather than creationist. If you wanted a flexible mechanism, you wanted one that could adapt to its environment. If you wanted something that could adapt, you didn’t want to begin with the indoctrination of the rules of chess. You wanted to begin with very basic abilities — sensory perception and motor control — in the hope that advanced skills would emerge organically. Humans don’t learn to understand language by memorizing dictionaries and grammar books, so why should we possibly expect our computers to do so?
Google Brain was the first major commercial institution to invest in the possibilities embodied by this way of thinking about A.I. Dean, Corrado and Ng began their work as a part-time, collaborative experiment, but they made immediate progress. They took architectural inspiration for their models from recent theoretical outlines — as well as ideas that had been on the shelf since the 1980s and 1990s — and drew upon both the company’s peerless reserves of data and its massive computing infrastructure. They instructed the networks on enormous banks of “labeled” data — speech files with correct transcriptions, for example — and the computers improved their responses to better match reality.
“The portion of evolution in which animals developed eyes was a big development,” Dean told me one day, with customary understatement. We were sitting, as usual, in a whiteboarded meeting room, on which he had drawn a crowded, snaking timeline of Google Brain and its relation to inflection points in the recent history of neural networks. “Now computers have eyes. We can build them around the capabilities that now exist to understand photos. Robots will be drastically transformed. They’ll be able to operate in an unknown environment, on much different problems.” These capacities they were building may have seemed primitive, but their implications were profound.

2. The Unlikely Intern
In its first year or so of existence, Brain’s experiments in the development of a machine with the talents of a 1-year-old had, as Dean said, worked to great effect. Its speech-recognition team swapped out part of their old system for a neural network and encountered, in pretty much one fell swoop, the best quality improvements anyone had seen in 20 years. Their system’s object-recognition abilities improved by an order of magnitude. This was not because Brain’s personnel had generated a sheaf of outrageous new ideas in just a year. It was because Google had finally devoted the resources — in computers and, increasingly, personnel — to fill in outlines that had been around for a long time.
A great preponderance of these extant and neglected notions had been proposed or refined by a peripatetic English polymath named Geoffrey Hinton. In the second year of Brain’s existence, Hinton was recruited to Brain as Andrew Ng left. (Ng now leads the 1,300-person A.I. team at Baidu.) Hinton wanted to leave his post at the University of Toronto for only three months, so for arcane contractual reasons he had to be hired as an intern. At intern training, the orientation leader would say something like, “Type in your LDAP” — a user login — and he would flag a helper to ask, “What’s an LDAP?” All the smart 25-year-olds in attendance, who had only ever known deep learning as the sine qua non of artificial intelligence, snickered: “Who is that old guy? Why doesn’t he get it?”
“At lunchtime,” Hinton said, “someone in the queue yelled: ‘Professor Hinton! I took your course! What are you doing here?’ After that, it was all right.”
A few months later, Hinton and two of his students demonstrated truly astonishing gains in a big image-recognition contest, run by an open-source collective called ImageNet, that asks computers not only to identify a monkey but also to distinguish between spider monkeys and howler monkeys, and among God knows how many different breeds of cat. Google soon approached Hinton and his students with an offer. They accepted. “I thought they were interested in our I.P.,” he said. “Turns out they were interested in us.”
Hinton comes from one of those old British families emblazoned like the Darwins at eccentric angles across the intellectual landscape, where regardless of titular preoccupation a person is expected to make sideline contributions to minor problems in astronomy or fluid dynamics. His great-great-grandfather was George Boole, whose foundational work in symbolic logic underpins the computer; another great-great-grandfather was a celebrated surgeon, his father a venturesome entomologist, his father’s cousin a Los Alamos researcher; the list goes on. He trained at Cambridge and Edinburgh, then taught at Carnegie Mellon before he ended up at Toronto, where he still spends half his time. (His work has long been supported by the largess of the Canadian government.) I visited him in his office at Google there. He has tousled yellowed-pewter hair combed forward in a mature Noel Gallagher style and wore a baggy striped dress shirt that persisted in coming untucked, and oval eyeglasses that slid down to the tip of a prominent nose. He speaks with a driving if shambolic wit, and says things like, “Computers will understand sarcasm before Americans do.”
Hinton had been working on neural networks since his undergraduate days at Cambridge in the late 1960s, and he is seen as the intellectual primogenitor of the contemporary field. For most of that time, whenever he spoke about machine learning, people looked at him as though he were talking about the Ptolemaic spheres or bloodletting by leeches. Neural networks were taken as a disproven folly, largely on the basis of one overhyped project: the Perceptron, an artificial neural network that Frank Rosenblatt, a Cornell psychologist, developed in the late 1950s. The New York Times reported that the machine’s sponsor, the United States Navy, expected it would “be able to walk, talk, see, write, reproduce itself and be conscious of its existence.” It went on to do approximately none of those things. Marvin Minsky, the dean of artificial intelligence in America, had worked on neural networks for his 1954 Princeton thesis, but he’d since grown tired of the inflated claims that Rosenblatt — who was a contemporary at Bronx Science — made for the neural paradigm. (He was also competing for Defense Department funding.) Along with an M.I.T. colleague, Minsky published a book that proved that there were painfully simple problems the Perceptron could never solve.
Minsky’s criticism of the Perceptron extended only to networks of one “layer,” i.e., one layer of artificial neurons between what’s fed to the machine and what you expect from it — and later in life, he expounded ideas very similar to contemporary deep learning. But Hinton already knew at the time that complex tasks could be carried out if you had recourse to multiple layers. The simplest description of a neural network is that it’s a machine that makes classifications or predictions based on its ability to discover patterns in data. With one layer, you could find only simple patterns; with more than one, you could look for patterns of patterns. Take the case of image recognition, which tends to rely on a contraption called a “convolutional neural net.” (These were elaborated in a seminal 1998 paper whose lead author, a Frenchman named Yann LeCun, did his postdoctoral research in Toronto under Hinton and now directs a huge A.I. endeavor at Facebook.) The first layer of the network learns to identify the very basic visual trope of an “edge,” meaning a nothing (an off-pixel) followed by a something (an on-pixel) or vice versa. Each successive layer of the network looks for a pattern in the previous layer. A pattern of edges might be a circle or a rectangle. A pattern of circles or rectangles might be a face. And so on. This more or less parallels the way information is put together in increasingly abstract ways as it travels from the photoreceptors in the retina back and up through the visual cortex. At each conceptual step, detail that isn’t immediately relevant is thrown away. If several edges and circles come together to make a face, you don’t care exactly where the face is found in the visual field; you just care that it’s a face.

A demonstration from 1993 showing an early version of the researcher Yann LeCun’s convolutional neural network, which by the late 1990s was processing 10 to 20 percent of all checks in the United States. A similar technology now drives most state-of-the-art image-recognition systems. Video posted on YouTube by Yann LeCun
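The layer-upon-layer idea can be made concrete with a toy example. The sketch below, with an invented one-dimensional “image” and hand-picked filter values, shows a first layer that responds to edges and a second layer that responds to a pattern of edges; it illustrates the principle rather than any real system:

```python
# A toy sketch of the "patterns of patterns" idea: the first layer looks for
# edges (a dark pixel next to a bright one); the second looks for a pattern
# of edges. The tiny "image" and the filter values are invented.
import numpy as np

row = np.array([0, 0, 1, 1, 1, 0, 0], dtype=float)  # a bright bar on a dark background

edge_filter = np.array([-1.0, 1.0])                  # responds where brightness changes
# np.convolve flips its filter, so reverse it to get a plain sliding comparison.
edges = np.convolve(row, edge_filter[::-1], mode="valid")
print(edges)        # positive at the left edge of the bar, negative at the right edge

# A "second layer": look for the higher-order pattern "an up-edge followed,
# a few pixels later, by a down-edge," which is to say a bar.
bar_filter = np.array([1.0, 0.0, 0.0, -1.0])
bar_response = np.convolve(edges, bar_filter[::-1], mode="valid")
print(bar_response)  # peaks where the whole bar sits, built from the edge pattern below it
```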
The issue with multilayered, “deep” neural networks was that the trial-and-error part got extraordinarily complicated. In a single layer, it’s easy. Imagine that you’re playing with a child. You tell the child, “Pick up the green ball and put it into Box A.” The child picks up a green ball and puts it into Box B. You say, “Try again to put the green ball in Box A.” The child tries Box A. Bravo.
Now imagine you tell the child, “Pick up a green ball, go through the door marked 3 and put the green ball into Box A.” The child takes a red ball, goes through the door marked 2 and puts the red ball into Box B. How do you begin to correct the child? You cannot just repeat your initial instructions, because the child does not know at which point he went wrong. In real life, you might start by holding up the red ball and the green ball and saying, “Red ball, green ball.” The whole point of machine learning, however, is to avoid that kind of explicit mentoring. Hinton and a few others went on to invent a solution (or rather, reinvent an older one) to this layered-error problem, over the halting course of the late 1970s and 1980s, and interest among computer scientists in neural networks was briefly revived. “People got very excited about it,” he said. “But we oversold it.” Computer scientists quickly went back to thinking that people like Hinton were weirdos and mystics.
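The solution they reinvented is now known as backpropagation: run the data forward through the layers, measure the error at the end, then pass the blame backward, layer by layer, nudging every connection a little. A compact sketch, with invented numbers and the classic XOR problem that a single-layer Perceptron cannot solve, might look like this:

```python
# A compact sketch of the "layered-error" fix (backpropagation), not anyone's
# production code. Two layers of artificial neurons learn XOR, the kind of
# painfully simple problem a single layer can never solve.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR labels

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # first layer: simple patterns
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # second layer: patterns of patterns

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    hidden = sigmoid(X @ W1 + b1)             # forward pass, layer by layer
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: apportion blame for the error to each layer in turn.
    output_delta = (output - y) * output * (1 - output)
    hidden_delta = (output_delta @ W2.T) * hidden * (1 - hidden)

    W2 -= 0.5 * hidden.T @ output_delta       # nudge every connection downhill
    b2 -= 0.5 * output_delta.sum(axis=0)
    W1 -= 0.5 * X.T @ hidden_delta
    b1 -= 0.5 * hidden_delta.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2).ravel())
# Typically very close to [0, 1, 1, 0], a mapping no single layer can represent.
```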
These ideas remained popular, however, among philosophers and psychologists, who called it “connectionism” or “parallel distributed processing.” “This idea,” Hinton told me, “of a few people keeping a torch burning, it’s a nice myth. It was true within artificial intelligence. But within psychology lots of people believed in the approach but just couldn’t do it.” Neither could Hinton, despite the generosity of the Canadian government. “There just wasn’t enough computer power or enough data. People on our side kept saying, ‘Yeah, but if I had a really big one, it would work.’ It wasn’t a very persuasive argument.”

3. A Deep Explanation of Deep Learning
When Pichai said that Google would henceforth be “A.I. first,” he was not just making a claim about his company’s business strategy; he was throwing in his company’s lot with this long-unworkable idea. Pichai’s allocation of resources ensured that people like Dean could see to it that people like Hinton would have, at long last, enough computers and enough data to make a persuasive argument. An average brain has something on the order of 100 billion neurons. Each neuron is connected to up to 10,000 other neurons, which means that the number of synapses is between 100 trillion and 1,000 trillion. For a simple artificial neural network of the sort proposed in the 1940s, even attempting to replicate this was unimaginable. We’re still far from the construction of a network of that size, but Google Brain’s investment allowed for the creation of artificial neural networks comparable to the brains of mice.
To understand why scale is so important, however, you have to start to understand some of the more technical details of what, exactly, machine intelligences are doing with the data they consume. A lot of our ambient fears about A.I. rest on the idea that they’re just vacuuming up knowledge like a sociopathic prodigy in a library, and that an artificial intelligence constructed to make paper clips might someday decide to treat humans like ants or lettuce. This just isn’t how they work. All they’re doing is shuffling information around in search of commonalities — basic patterns, at first, and then more complex ones — and for the moment, at least, the greatest danger is that the information we’re feeding them is biased in the first place.
If that brief explanation seems sufficiently reassuring, the reassured nontechnical reader is invited to skip forward to the next section, which is about cats. If not, then read on. (This section is also, luckily, about cats.)
Imagine you want to program a cat-recognizer on the old symbolic-A.I. model. You stay up for days preloading the machine with an exhaustive, explicit definition of “cat.” You tell it that a cat has four legs and pointy ears and whiskers and a tail, and so on. All this information is stored in a special place in memory called Cat. Now you show it a picture. First, the machine has to separate out the various distinct elements of the image. Then it has to take these elements and apply the rules stored in its memory. If(legs=4) and if(ears=pointy) and if(whiskers=yes) and if(tail=yes) and if(expression=supercilious), then(cat=yes). But what if you showed this cat-recognizer a Scottish Fold, a heart-rending breed with a prized genetic defect that leads to droopy doubled-over ears? Our symbolic A.I. gets to (ears=pointy) and shakes its head solemnly, “Not cat.” It is hyperliteral, or “brittle.” Even the thickest toddler shows much greater inferential acuity.
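In code, the brittleness is easy to see. A toy version of that rule-based cat-recognizer, with invented attribute names, fails the moment it meets a cat that breaks a single rule:

```python
# A sketch of the brittle, rule-based cat-recognizer described above.
# The attribute names and animals are invented for illustration.

CAT_RULES = {"legs": 4, "ears": "pointy", "whiskers": True, "tail": True}

def symbolic_is_cat(animal):
    # Every rule must match exactly; one exception and the whole thing fails.
    return all(animal.get(key) == value for key, value in CAT_RULES.items())

tabby = {"legs": 4, "ears": "pointy", "whiskers": True, "tail": True}
scottish_fold = {"legs": 4, "ears": "droopy", "whiskers": True, "tail": True}

print(symbolic_is_cat(tabby))          # True
print(symbolic_is_cat(scottish_fold))  # False. "Not cat," says the machine.
```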
Now imagine that instead of hard-wiring the machine with a set of rules for classification stored in one location of the computer’s memory, you try the same thing on a neural network. There is no special place that can hold the definition of “cat.” There is just a giant blob of interconnected switches, like forks in a path. On one side of the blob, you present the inputs (the pictures); on the other side, you present the corresponding outputs (the labels). Then you just tell it to work out for itself, via the individual calibration of all of these interconnected switches, whatever path the data should take so that the inputs are mapped to the correct outputs. The training is the process by which a labyrinthine series of elaborate tunnels are excavated through the blob, tunnels that connect any given input to its proper output. The more training data you have, the greater the number and intricacy of the tunnels that can be dug. Once the training is complete, the middle of the blob has enough tunnels that it can make reliable predictions about how to handle data it has never seen before. This is called “supervised learning.”
The reason that the network requires so many neurons and so much data is that it functions, in a way, like a sort of giant machine democracy. Imagine you want to train a computer to differentiate among five different items. Your network is made up of millions and millions of neuronal “voters,” each of whom has been given five different cards: one for cat, one for dog, one for spider monkey, one for spoon and one for defibrillator. You show your electorate a photo and ask, “Is this a cat, a dog, a spider monkey, a spoon or a defibrillator?” All the neurons that voted the same way collect in groups, and the network foreman peers down from above and identifies the majority classification: “A dog?”
You say: “No, maestro, it’s a cat. Try again.”
Now the network foreman goes back to identify which voters threw their weight behind “cat” and which didn’t. The ones that got “cat” right get their votes counted double next time — at least when they’re voting for “cat.” They have to prove independently whether they’re also good at picking out dogs and defibrillators, but one thing that makes a neural network so flexible is that each individual unit can contribute differently to different desired outcomes. What’s important is not the individual vote, exactly, but the pattern of votes. If Joe, Frank and Mary all vote together, it’s a dog; but if Joe, Kate and Jessica vote together, it’s a cat; and if Kate, Jessica and Frank vote together, it’s a defibrillator. The neural network just needs to register enough of a regularly discernible signal somewhere to say, “Odds are, this particular arrangement of pixels represents something these humans keep calling ‘cats.’ ” The more “voters” you have, and the more times you make them vote, the more keenly the network can register even very weak signals. If you have only Joe, Frank and Mary, you can maybe use them only to differentiate among a cat, a dog and a defibrillator. If you have millions of different voters that can associate in billions of different ways, you can learn to classify data with incredible granularity. Your trained voter assembly will be able to look at an unlabeled picture and identify it more or less accurately.
Part of the reason there was so much resistance to these ideas in computer-science departments is that because the output is just a prediction based on patterns of patterns, it’s not going to be perfect, and the machine will never be able to define for you what, exactly, a cat is. It just knows them when it sees them. This wooliness, however, is the point. The neuronal “voters” will recognize a happy cat dozing in the sun and an angry cat glaring out from the shadows of an untidy litter box, as long as they have been exposed to millions of diverse cat scenes. You just need lots and lots of the voters — in order to make sure that some part of your network picks up on even very weak regularities, on Scottish Folds with droopy ears, for example — and enough labeled data to make sure your network has seen the widest possible variance in phenomena.
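Taken literally, the voting metaphor looks something like the following sketch, in which an electorate of crude random guessers, with invented features and thresholds, becomes a respectable classifier once reliable voters have their ballots weighted up and unreliable ones weighted down:

```python
# The voting metaphor rendered literally, as a toy weighted electorate.
# The "photos" are just two invented numbers (whisker length, bark volume).
import random

random.seed(1)

def make_example():
    # Cats: long whiskers, quiet. Dogs: short whiskers, loud. Plus noise.
    if random.random() < 0.5:
        return (random.gauss(8, 2), random.gauss(2, 2)), "cat"
    return (random.gauss(2, 2), random.gauss(8, 2)), "dog"

# Each "voter" is a crude rule: compare one random feature to a random threshold.
def make_voter():
    feature = random.randrange(2)
    threshold = random.uniform(0, 10)
    label_if_above = random.choice(["cat", "dog"])
    other = "dog" if label_if_above == "cat" else "cat"
    return lambda x: label_if_above if x[feature] > threshold else other

voters = [make_voter() for _ in range(500)]
weights = [1.0] * len(voters)

for _ in range(300):                          # labeled training examples
    x, truth = make_example()
    for i, voter in enumerate(voters):
        if voter(x) == truth:
            weights[i] *= 1.05                # this ballot counts more next time
        else:
            weights[i] *= 0.95                # this one counts less

def electorate_classify(x):
    tally = {"cat": 0.0, "dog": 0.0}
    for voter, weight in zip(voters, weights):
        tally[voter(x)] += weight
    return max(tally, key=tally.get)

tests = [make_example() for _ in range(200)]
accuracy = sum(electorate_classify(x) == label for x, label in tests) / len(tests)
print(f"weighted electorate accuracy: {accuracy:.0%}")
# Well above what a typical lone voter manages on its own.
```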
Neural networks are probabilistic in nature, however, which means they’re not suitable for all tasks. It’s no great tragedy if they mislabel 1 percent of cats as dogs, or send you to the wrong movie on occasion, but in something like a self-driving car we all want greater assurances. This isn’t the only caveat. Supervised learning is a trial-and-error process based on labeled data. The machines might be doing the learning, but there remains a strong human element in the initial categorization of the inputs. If your data had a picture of a man and a woman in suits that someone had labeled “woman with her boss,” that relationship would be encoded into all future pattern recognition. Labeled data is thus fallible the way that human labelers are fallible. If a machine was asked to identify creditworthy candidates for loans, it might use data like felony convictions, but if felony convictions were unfair in the first place — if they were based on, say, discriminatory drug laws — then the loan recommendations would perforce also be fallible.
Image-recognition networks like our cat-identifier are only one of many varieties of deep learning, but they are disproportionately invoked as teaching examples because each layer does something at least vaguely recognizable to humans — picking out edges first, then circles, then faces. This means there’s a safeguard against error. For instance, an early oddity in Google’s image-recognition software meant that it could not always identify a dumbbell in isolation, even though the team had trained it on an image set that included a lot of exercise categories. A visualization tool showed them the machine had learned not the concept of “dumbbell” but the concept of “dumbbell+arm,” because all the dumbbells in the training set were attached to arms. They threw into the training mix some photos of solo dumbbells. The problem was solved. Not everything is so easy.


4. The Cat Paper
Over the course of its first year or two, Brain’s efforts to cultivate in machines the skills of a 1-year-old were auspicious enough that the team was graduated out of the X lab and into the broader research organization. (The head of Google X once noted that Brain had paid for the entirety of X’s costs.) They still had fewer than 10 people and only a vague sense for what might ultimately come of it all. But even then they were thinking ahead to what ought to happen next. First a human mind learns to recognize a ball and rests easily with the accomplishment for a moment, but sooner or later, it wants to ask for the ball. And then it wades into language.
The first step in that direction was the cat paper, which made Brain famous.
What the cat paper demonstrated was that a neural network with more than a billion “synaptic” connections — a hundred times larger than any publicized neural network to that point, yet still many orders of magnitude smaller than our brains — could observe raw, unlabeled data and pick out for itself a high-order human concept. The Brain researchers had shown the network millions of still frames from YouTube videos, and out of the welter of the pure sensorium the network had isolated a stable pattern any toddler or chipmunk would recognize without a moment’s hesitation as the face of a cat. The machine had not been programmed with the foreknowledge of a cat; it reached directly into the world and seized the idea for itself. (The researchers discovered this with the neural-network equivalent of something like an M.R.I., which showed them that a ghostly cat face caused the artificial neurons to “vote” with the greatest collective enthusiasm.) Most machine learning to that point had been limited by the quantities of labeled data. The cat paper showed that machines could also deal with raw unlabeled data, perhaps even data of which humans had no established foreknowledge. This seemed like a major advance not only in cat-recognition studies but also in overall artificial intelligence.
The lead author on the cat paper was Quoc Le. Le is short and willowy and soft-spoken, with a quick, enigmatic smile and shiny black penny loafers. He grew up outside Hue, Vietnam. His parents were rice farmers, and he did not have electricity at home. His mathematical abilities were obvious from an early age, and he was sent to study at a magnet school for science. In the late 1990s, while still in school, he tried to build a chatbot to talk to. He thought, How hard could this be?
“But actually,” he told me in a whispery deadpan, “it’s very hard.”
He left the rice paddies on a scholarship to a university in Canberra, Australia, where he worked on A.I. tasks like computer vision. The dominant method of the time, which involved feeding the machine definitions for things like edges, felt to him like cheating. Le didn’t know then, or knew only dimly, that there were at least a few dozen computer scientists elsewhere in the world who couldn’t help imagining, as he did, that machines could learn from scratch. In 2006, Le took a position at the Max Planck Institute for Biological Cybernetics in the medieval German university town of Tübingen. In a reading group there, he encountered two new papers by Geoffrey Hinton. People who entered the discipline during the long diaspora all have conversion stories, and when Le read those papers, he felt the scales fall away from his eyes.
“There was a big debate,” he told me. “A very big debate.” We were in a small interior conference room, a narrow, high-ceilinged space outfitted with only a small table and two whiteboards. He looked to the curve he’d drawn on the whiteboard behind him and back again, then softly confided, “I’ve never seen such a big debate.”
He remembers standing up at the reading group and saying, “This is the future.” It was, he said, an “unpopular decision at the time.” A former adviser from Australia, with whom he had stayed close, couldn’t quite understand Le’s decision. “Why are you doing this?” he asked Le in an email.
“I didn’t have a good answer back then,” Le said. “I was just curious. There was a successful paradigm, but to be honest I was just curious about the new paradigm. In 2006, there was very little activity.” He went to join Ng at Stanford and began to pursue Hinton’s ideas. “By the end of 2010, I was pretty convinced something was going to happen.”
What happened, soon afterward, was that Le went to Brain as its first intern, where he carried on with his dissertation work — an extension of which ultimately became the cat paper. On a simple level, Le wanted to see if the computer could be trained to identify on its own the information that was absolutely essential to a given image. He fed the neural network a still he had taken from YouTube. He then told the neural network to throw away some of the information contained in the image, though he didn’t specify what it should or shouldn’t throw away. The machine threw away some of the information, initially at random. Then he said: “Just kidding! Now recreate the initial image you were shown based only on the information you retained.” It was as if he were asking the machine to find a way to “summarize” the image, and then expand back to the original from the summary. If the summary was based on irrelevant data — like the color of the sky rather than the presence of whiskers — the machine couldn’t perform a competent reconstruction. Its reaction would be akin to that of a distant ancestor whose takeaway from his brief exposure to saber-tooth tigers was that they made a restful swooshing sound when they moved. Le’s neural network, unlike that ancestor, got to try again, and again and again and again. Each time it mathematically “chose” to prioritize different pieces of information and performed incrementally better. A neural network, however, was a black box. It divined patterns, but the patterns it identified didn’t always make intuitive sense to a human observer. The same network that hit on our concept of cat also became enthusiastic about a pattern that looked like some sort of furniture-animal compound, like a cross between an ottoman and a goat.
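The exercise Le describes is what practitioners would call an autoencoder: squeeze each input through a narrow bottleneck, then score the network on how faithfully it can rebuild the original. A bare-bones sketch, with an invented toy dataset standing in for YouTube stills, looks like this:

```python
# A sketch of the "summarize, then reconstruct" exercise described above:
# a tiny network must squeeze each input through a narrow bottleneck and is
# scored on how well it can rebuild the original. The data is invented.
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 4 pixels each, but only 2 underlying degrees of freedom.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 4))
images = latent @ mixing

W_summarize = rng.normal(size=(4, 2)) * 0.1   # keep only 2 numbers per image
W_rebuild = rng.normal(size=(2, 4)) * 0.1     # expand the summary back out

def reconstruction_error():
    return float(np.mean((images @ W_summarize @ W_rebuild - images) ** 2))

print("before training:", round(reconstruction_error(), 3))

for _ in range(3000):
    summary = images @ W_summarize            # throw information away
    rebuilt = summary @ W_rebuild             # try to recreate the original
    error = rebuilt - images                  # how bad was the reconstruction?

    # Nudge both halves to make the next reconstruction a little better.
    W_rebuild -= 0.05 * summary.T @ error / len(images)
    W_summarize -= 0.05 * images.T @ (error @ W_rebuild.T) / len(images)

print("after training: ", round(reconstruction_error(), 3))
# Close to zero: the two-number summary captured what mattered.
```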
Le didn’t see himself in those heady cat years as a language guy, but he felt an urge to connect the dots to his early chatbot. After the cat paper, he realized that if you could ask a network to summarize a photo, you could perhaps also ask it to summarize a sentence. This problem preoccupied Le, along with a Brain colleague named Tomas Mikolov, for the next two years.
In that time, the Brain team outgrew several offices around him. For a while they were on a floor they shared with executives. They got an email at one point from the administrator asking that they please stop allowing people to sleep on the couch in front of Larry Page and Sergey Brin’s suite. It unsettled incoming V.I.P.s. They were then allocated part of a research building across the street, where their exchanges in the microkitchen wouldn’t be squandered on polite chitchat with the suits. That interim also saw dedicated attempts on the part of Google’s competitors to catch up. (As Le told me about his close collaboration with Tomas Mikolov, he kept repeating Mikolov’s name over and over, in an incantatory way that sounded poignant. Le had never seemed so solemn. I finally couldn’t help myself and began to ask, “Is he … ?” Le nodded. “At Facebook,” he replied.)

They spent this period trying to come up with neural-network architectures that could accommodate not only simple photo classifications, which were static, but also complex structures that unfolded over time, like language or music. Many of these were first proposed in the 1990s, and Le and his colleagues went back to those long-ignored contributions to see what they could glean. They knew that once you established a facility with basic linguistic prediction, you could then go on to do all sorts of other intelligent things — like predict a suitable reply to an email, for example, or predict the flow of a sensible conversation. You could sidle up to the sort of prowess that would, from the outside at least, look a lot like thinking.

Part II: Language Machine
5. The Linguistic Turn
The hundred or so current members of Brain — it often feels less like a department within a colossal corporate hierarchy than it does a club or a scholastic society or an intergalactic cantina — came in the intervening years to count among the freest and most widely admired employees in the entire Google organization. They are now quartered in a tiered two-story eggshell building, with large windows tinted a menacing charcoal gray, on the leafy northwestern fringe of the company’s main Mountain View campus. Their microkitchen has a foosball table I never saw used; a Rock Band setup I never saw used; and a Go kit I saw used on a few occasions. (I did once see a young Brain research associate introducing his colleagues to ripe jackfruit, carving up the enormous spiky orb like a turkey.)
When I began spending time at Brain’s offices, in June, there were some rows of empty desks, but most of them were labeled with Post-it notes that said things like “Jesse, 6/27.” Now those are all occupied. When I first visited, parking was not an issue. The closest spaces were those reserved for expectant mothers or Teslas, but there was ample space in the rest of the lot. By October, if I showed up later than 9:30, I had to find a spot across the street.
Brain’s growth made Dean slightly nervous about how the company was going to handle the demand. He wanted to avoid what at Google is known as a “success disaster” — a situation in which the company’s capabilities in theory outpaced its ability to implement a product in practice. At a certain point he did some back-of-the-envelope calculations, which he presented to the executives one day in a two-slide presentation.
“If everyone in the future speaks to their Android phone for three minutes a day,” he told them, “this is how many machines we’ll need.” They would need to double or triple their global computational footprint.
“That,” he observed with a little theatrical gulp and widened eyes, “sounded scary. You’d have to” — he hesitated to imagine the consequences — “build new buildings.”
There was, however, another option: just design, mass-produce and install in dispersed data centers a new kind of chip to make everything faster. These chips would be called T.P.U.s, or “tensor processing units,” and their value proposition — counterintuitively — is that they are deliberately less precise than normal chips. Rather than compute 12.246 times 54.392, they will give you the perfunctory answer to 12 times 54. On a mathematical level, rather than a metaphorical one, a neural network is just a structured series of hundreds or thousands or tens of thousands of matrix multiplications carried out in succession, and it’s much more important that these processes be fast than that they be exact. “Normally,” Dean said, “special-purpose hardware is a bad idea. It usually works to speed up one thing. But because of the generality of neural networks, you can leverage this special-purpose hardware for a lot of other things.”
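The arithmetic bargain behind the T.P.U. can be seen in a few lines. In the sketch below, with invented random weights standing in for a trained network, a forward pass is nothing but one matrix multiplication after another, and rounding the numbers to a crude 8-bit-style precision barely changes the answer:

```python
# A sketch of the bargain behind special-purpose A.I. hardware: a network's
# forward pass is a chain of matrix multiplications, and it survives being
# done sloppily. The weights and input here are invented random numbers.
import numpy as np

rng = np.random.default_rng(0)
layers = [rng.normal(size=(256, 256)) / 16 for _ in range(4)]   # a stack of weight matrices
x = rng.normal(size=256)

def forward(vector, quantize=False):
    for W in layers:
        if quantize:
            # Crude 8-bit-style rounding: the spirit of "12 times 54" rather
            # than "12.246 times 54.392."
            scale = np.abs(W).max() / 127
            W = np.round(W / scale) * scale
        vector = np.maximum(W @ vector, 0)    # multiply, then a simple nonlinearity
    return vector

exact = forward(x)
rough = forward(x, quantize=True)
print(float(np.linalg.norm(exact - rough) / np.linalg.norm(exact)))
# Small: the sloppy arithmetic typically shifts the answer by only a few percent.
```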
Just as the chip-design process was nearly complete, Le and two colleagues finally demonstrated that neural networks might be configured to handle the structure of language. He drew upon an idea, called “word embeddings,” that had been around for more than 10 years. When you summarize images, you can divine a picture of what each stage of the summary looks like — an edge, a circle, etc. When you summarize language in a similar way, you essentially produce multidimensional maps of the distances, based on common usage, between one word and every single other word in the language. The machine is not “analyzing” the data the way that we might, with linguistic rules that identify some of them as nouns and others as verbs. Instead, it is shifting and twisting and warping the words around in the map. In two dimensions, you cannot make this map useful. You want, for example, “cat” to be in the rough vicinity of “dog,” but you also want “cat” to be near “tail” and near “supercilious” and near “meme,” because you want to try to capture all of the different relationships — both strong and weak — that the word “cat” has to other words. It can be related to all these other words simultaneously only if it is related to each of them in a different dimension. You can’t easily make a 160,000-dimensional map, but it turns out you can represent a language pretty well in a mere thousand or so dimensions — in other words, a universe in which each word is designated by a list of a thousand numbers. Le gave me a good-natured hard time for my continual requests for a mental picture of these maps. “Gideon,” he would say, with the blunt regular demurral of Bartleby, “I do not generally like trying to visualize thousand-dimensional vectors in three-dimensional space.”
Still, certain dimensions in the space, it turned out, did seem to represent legible human categories, like gender or relative size. If you took the thousand numbers that meant “king” and literally just subtracted the thousand numbers that meant “queen,” you got the same numerical result as if you subtracted the numbers for “woman” from the numbers for “man.” And if you took the entire space of the English language and the entire space of French, you could, at least in theory, train a network to learn how to take a sentence in one space and propose an equivalent in the other. You just had to give it millions and millions of English sentences as inputs on one side and their desired French outputs on the other, and over time it would recognize the relevant patterns in words the way that an image classifier recognized the relevant patterns in pixels. You could then give it a sentence in English and ask it to predict the best French analogue.
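A toy version of that arithmetic, with four hand-invented dimensions standing in for the thousand learned ones, makes the trick visible:

```python
# A toy illustration of the "king minus queen equals man minus woman"
# arithmetic. Real embeddings have around a thousand learned dimensions;
# these four-number vectors are invented by hand to show the mechanics.
import numpy as np

embeddings = {
    #            royalty, gender, size, furriness
    "king":   np.array([0.9,  0.9, 0.7, 0.0]),
    "queen":  np.array([0.9, -0.9, 0.6, 0.0]),
    "man":    np.array([0.1,  0.9, 0.7, 0.0]),
    "woman":  np.array([0.1, -0.9, 0.6, 0.0]),
    "cat":    np.array([0.0,  0.0, 0.2, 0.9]),
    "kitten": np.array([0.0,  0.0, 0.05, 0.9]),
}

def nearest(vector, exclude=()):
    # Which word's vector points in nearly the same direction?
    def similarity(word):
        v = embeddings[word]
        return float(vector @ v / (np.linalg.norm(vector) * np.linalg.norm(v)))
    return max((w for w in embeddings if w not in exclude), key=similarity)

guess = nearest(embeddings["king"] - embeddings["man"] + embeddings["woman"],
                exclude=("king", "man", "woman"))
print(guess)   # queen
```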
The major difference between words and pixels, however, is that all of the pixels in an image are there at once, whereas words appear in a progression over time. You needed a way for the network to “hold in mind” the progression of a chronological sequence — the complete pathway from the first word to the last. In a period of about a week, in September 2014, three papers came out — one by Le and two others by academics in Canada and Germany — that at last provided all the theoretical tools necessary to do this sort of thing. That research allowed for open-ended projects like Brain’s Magenta, an investigation into how machines might generate art and music. It also cleared the way toward an instrumental task like machine translation. Hinton told me he thought at the time that this follow-up work would take at least five more years.
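The standard way to let a network carry a sequence forward in time is a recurrent cell, which reads one word at a time while folding each new word into a running summary. A minimal sketch, with an invented vocabulary and random untrained weights, shows how such a summary can “hold in mind” not just which words appeared but the order in which they appeared:

```python
# A minimal sketch of a recurrent step: a hidden state is carried from word
# to word, absorbing each new word into a running summary of the sequence.
# The vocabulary and weights are invented random numbers, not a trained model.
import numpy as np

rng = np.random.default_rng(0)
vocabulary = ["the", "cat", "sat", "on", "mat"]
embed = {word: rng.normal(size=8) for word in vocabulary}   # toy word vectors

W_input = rng.normal(size=(16, 8)) * 0.3    # how the current word enters
W_state = rng.normal(size=(16, 16)) * 0.3   # how the memory of earlier words persists

def read(sentence):
    state = np.zeros(16)                    # the network's "memory," empty at first
    for word in sentence:
        state = np.tanh(W_input @ embed[word] + W_state @ state)
    return state                            # one vector summarizing the whole sequence

print(np.allclose(read(["the", "cat", "sat"]), read(["the", "sat", "cat"])))
# False: unlike a bag of words, the summary depends on the order of the words.
```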

6. The Ambush
Le’s paper showed that neural translation was plausible, but he had used only a relatively small public data set. (Small for Google, that is — it was actually the biggest public data set in the world. A decade of the old Translate had gathered production data that was between a hundred and a thousand times bigger.) More important, Le’s model didn’t work very well for sentences longer than about seven words.
Mike Schuster, who was then a staff research scientist at Brain, picked up the baton. He knew that if Google didn’t find a way to scale these theoretical insights up to a production level, someone else would. The project took him the next two years. “You think,” Schuster says, “to translate something, you just get the data, run the experiments and you’re done, but it doesn’t work like that.”
Schuster is a taut, focused, ageless being with a tanned, piston-shaped head, narrow shoulders, long camo cargo shorts tied below the knee and neon-green Nike Flyknits. He looks as if he woke up in the lotus position, reached for his small, rimless, elliptical glasses, accepted calories in the form of a modest portion of preserved acorn and completed a relaxed desert decathlon on the way to the office; in reality, he told me, it’s only an 18-mile bike ride each way. Schuster grew up in Duisburg, in the former West Germany’s blast-furnace district, and studied electrical engineering before moving to Kyoto to work on early neural networks. In the 1990s, he ran experiments with a neural-networking machine as big as a conference room; it cost millions of dollars and had to be trained for weeks to do something you could now do on your desktop in less than an hour. He published a paper in 1997 that was barely cited for a decade and a half; this year it has been cited around 150 times. He is not humorless, but he does often wear an expression of some asperity, which I took as his signature combination of German restraint and Japanese restraint.
The issues Schuster had to deal with were tangled. For one thing, Le’s code was custom-written, and it wasn’t compatible with the new open-source machine-learning platform Google was then developing, TensorFlow. In the fall of 2015, Dean assigned two other engineers, Yonghui Wu and Zhifeng Chen, to Schuster. It took them two months just to replicate Le’s results on the new system. Le was around, but even he couldn’t always make heads or tails of what they had done.
As Schuster put it, “Some of the stuff was not done in full consciousness. They didn’t know themselves why they worked.”
This February, Google’s research organization — the loose division of the company, roughly a thousand employees in all, dedicated to the forward-looking and the unclassifiable — convened their leads at an offsite retreat at the Westin St. Francis, on Union Square, a luxury hotel slightly less splendid than Google’s own San Francisco shop a mile or so to the east. The morning was reserved for rounds of “lightning talks,” quick updates to cover the research waterfront, and the afternoon was idled away in cross-departmental “facilitated discussions.” The hope was that the retreat might provide an occasion for the unpredictable, oblique, Bell Labs-ish exchanges that kept a mature company prolific.
At lunchtime, Corrado and Dean paired up in search of Macduff Hughes, director of Google Translate. Hughes was eating alone, and the two Brain members took positions at either side. As Corrado put it, “We ambushed him.”
“O.K.,” Corrado said to the wary Hughes, holding his breath for effect. “We have something to tell you.”
They told Hughes that 2016 seemed like a good time to consider an overhaul of Google Translate — the code of hundreds of engineers over 10 years — with a neural network. The old system worked the way all machine translation has worked for about 30 years: It sequestered each successive sentence fragment, looked up those words in a large statistically derived vocabulary table, then applied a battery of post-processing rules to affix proper endings and rearrange it all to make sense. The approach is called “phrase-based statistical machine translation,” because by the time the system gets to the next phrase, it doesn’t know what the last one was. This is why Translate’s output sometimes looked like a shaken bag of fridge magnets. Brain’s replacement would, if it came together, read and render entire sentences at one draft. It would capture context — and something akin to meaning.
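A cartoon of the old approach, with a tiny invented phrase table, shows where the fridge-magnet quality comes from: each fragment is looked up and rendered on its own, with no memory of the fragment before it:

```python
# A toy cartoon of phrase-based translation: chop the sentence into fragments,
# look each one up in a table, and paste the pieces back together with no
# memory of what came before. The miniature French-English phrase table is
# invented for illustration only.
phrase_table = {
    "le ministre": "the minister",
    "de l'agriculture": "of agriculture",
    "ministre de": "priest of",          # a plausible-but-wrong entry of the kind
    "l'agriculture": "farming",          # that yields "priest of farming"
    "a parlé": "spoke",
    "hier": "yesterday",
}

def phrase_based_translate(fragments):
    # Each fragment is translated in isolation; the system never sees the
    # sentence as a whole, which is why the output can read like fridge magnets.
    return " ".join(phrase_table.get(f, f) for f in fragments)

print(phrase_based_translate(["le ministre", "de l'agriculture", "a parlé", "hier"]))
# the minister of agriculture spoke yesterday
print(phrase_based_translate(["ministre de", "l'agriculture"]))
# priest of farming
```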
The stakes may have seemed low: Translate generates minimal revenue, and it probably always will. For most Anglophone users, even a radical upgrade in the service’s performance would hardly be hailed as anything more than an expected incremental bump. But there was a case to be made that human-quality machine translation is not only a short-term necessity but also a development very likely, in the long term, to prove transformational. In the immediate future, it’s vital to the company’s business strategy. Google estimates that 50 percent of the internet is in English, which perhaps 20 percent of the world’s population speaks. If Google was going to compete in China — where a majority of market share in search-engine traffic belonged to its competitor Baidu — or India, decent machine translation would be an indispensable part of the infrastructure. Baidu itself had published a pathbreaking paper about the possibility of neural machine translation in July 2015.

And in the more distant, speculative future, machine translation was perhaps the first step toward a general computational facility with human language. This would represent a major inflection point — perhaps the major inflection point — in the development of something that felt like true artificial intelligence.
Most people in Silicon Valley were aware of machine learning as a fast-approaching horizon, so Hughes had seen this ambush coming. He remained skeptical. A modest, sturdily built man of early middle age with mussed auburn hair graying at the temples, Hughes is a classic line engineer, the sort of craftsman who wouldn’t have been out of place at a drafting table at 1970s Boeing. His jeans pockets often look burdened with curious tools of ungainly dimension, as if he were porting around measuring tapes or thermocouples, and unlike many of the younger people who work for him, he has a wardrobe unreliant on company gear. He knew that various people in various places at Google and elsewhere had been trying to make neural translation work — not in a lab but at production scale — for years, to little avail.
Hughes listened to their case and, at the end, said cautiously that it sounded to him as if maybe they could pull it off in three years.
Dean thought otherwise. “We can do it by the end of the year, if we put our minds to it.” One reason people liked and admired Dean so much was that he had a long record of successfully putting his mind to it. Another was that he wasn’t at all embarrassed to say sincere things like “if we put our minds to it.”
Hughes was sure the conversion wasn’t going to happen any time soon, but he didn’t personally care to be the reason. “Let’s prepare for 2016,” he went back and told his team. “I’m not going to be the one to say Jeff Dean can’t deliver speed.”
A month later, they were finally able to run a side-by-side experiment to compare Schuster’s new system with Hughes’s old one. Schuster wanted to run it for English-French, but Hughes advised him to try something else. “English-French,” he said, “is so good that the improvement won’t be obvious.”
It was a challenge Schuster couldn’t resist. The benchmark metric to evaluate machine translation is called a BLEU score, which compares a machine translation with an average of many reliable human translations. At the time, the best BLEU scores for English-French were in the high 20s. An improvement of one point was considered very good; an improvement of two was considered outstanding.
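BLEU itself is a simple enough idea that a toy version fits in a few lines. The sketch below is heavily simplified (one reference translation, n-grams of size 1 and 2 only, no smoothing), whereas the real metric averages several n-gram sizes over an entire test set:

```python
# A heavily simplified sketch of the BLEU idea: score a candidate translation
# by how many of its word n-grams also appear in a reference translation, with
# a penalty for candidates that are too short. Not the official formula.
from collections import Counter
from math import exp, log

def ngrams(words, n):
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def toy_bleu(candidate, reference):
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in (1, 2):
        cand_counts, ref_counts = ngrams(cand, n), ngrams(ref, n)
        overlap = sum(min(count, ref_counts[g]) for g, count in cand_counts.items())
        precisions.append(max(overlap, 1e-9) / max(sum(cand_counts.values()), 1))
    brevity = min(1.0, exp(1 - len(ref) / len(cand)))
    return brevity * exp(sum(log(p) for p in precisions) / len(precisions))

reference = "the minister of agriculture spoke yesterday"
print(round(toy_bleu("the minister of agriculture spoke yesterday", reference), 2))  # 1.0
print(round(toy_bleu("priest of farming spoke", reference), 2))  # close to zero
```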
The neural system, on the English-French language pair, showed an improvement over the old system of seven points.
Hughes told Schuster’s team they hadn’t had even half as strong an improvement in their own system in the last four years.
To be sure this wasn’t some fluke in the metric, they also turned to their pool of human contractors to do a side-by-side comparison. The user-perception scores, in which sample sentences were graded from zero to six, showed an average improvement of 0.4 — roughly equivalent to the aggregate gains of the old system over its entire lifetime of development.

In mid-March, Hughes sent his team an email. All projects on the old system were to be suspended immediately.
7. Theory Becomes Product
Until then, the neural-translation team had been only three people — Schuster, Wu and Chen — but with Hughes’s support, the broader team began to coalesce. They met under Schuster’s command on Wednesdays at 2 p.m. in a corner room of the Brain building called Quartz Lake. The meeting was generally attended by a rotating cast of more than a dozen people. When Hughes or Corrado was there, he was usually the only native English speaker. The engineers spoke Chinese, Vietnamese, Polish, Russian, Arabic, German and Japanese, though they mostly spoke in their own efficient pidgin and in math. It is not always totally clear, at Google, who is running a meeting, but in Schuster’s case there was no ambiguity.
The steps they needed to take, even then, were not wholly clear. “This story is a lot about uncertainty — uncertainty throughout the whole process,” Schuster told me at one point. “The software, the data, the hardware, the people. It was like” — he extended his long, gracile arms, slightly bent at the elbows, from his narrow shoulders — “swimming in a big sea of mud, and you can only see this far.” He held out his hand eight inches in front of his chest. “There’s a goal somewhere, and maybe it’s there.”
Most of Google’s conference rooms have videochat monitors, which when idle display extremely high-resolution oversaturated public Google+ photos of a sylvan dreamscape or the northern lights or the Reichstag. Schuster gestured toward one of the panels, which showed a crystalline still of the Washington Monument at night.
“The view from outside is that everyone has binoculars and can see ahead so far.”
The theoretical work to get them to this point had already been painstaking and drawn-out, but the attempt to turn it into a viable product — the part that academic scientists might dismiss as “mere” engineering — was no less difficult. For one thing, they needed to make sure that they were training on good data. Google’s billions of words of training “reading” were mostly made up of complete sentences of moderate complexity, like the sort of thing you might find in Hemingway. Some of this is in the public domain: The original Rosetta Stone of statistical machine translation was millions of pages of the complete bilingual records of the Canadian Parliament. Much of it, however, was culled from 10 years of collected data, including human translations that were crowdsourced from enthusiastic respondents. The team had in their storehouse about 97 million unique English “words.” But once they removed the emoticons, and the misspellings, and the redundancies, they had a working vocabulary of only around 160,000.
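A rough sense of that cleanup step can be given in a few lines of Python; this is only a guess at its shape, not the team’s actual pipeline, and the filtering regex and the 160,000 cutoff are used purely for illustration.

```python
from collections import Counter
import re

# Hypothetical sketch of the data cleanup described above: count the
# tokens in the training corpus, drop emoticon-like strings and obvious
# non-words, and keep only the most frequent forms as the working
# vocabulary. The regex and the default cutoff are illustrative only.
def build_vocabulary(sentences, max_size=160_000):
    word_like = re.compile(r"^[a-z]+(?:[-'][a-z]+)*$")  # crude non-word filter
    counts = Counter(
        token
        for sentence in sentences
        for token in sentence.lower().split()
        if word_like.match(token)
    )
    return {word for word, _ in counts.most_common(max_size)}
```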
Then you had to refocus on what users actually wanted to translate, which frequently had very little to do with reasonable language as it is employed. Many people, Google had found, don’t look to the service to translate full, complex sentences; they translate weird little shards of language. If you wanted the network to be able to handle the stream of user queries, you had to be sure to orient it in that direction. The network was very sensitive to the data it was trained on. As Hughes put it to me at one point: “The neural-translation system is learning everything it can. It’s like a toddler. ‘Oh, Daddy says that word when he’s mad!’ ” He laughed. “You have to be careful.”
More than anything, though, they needed to make sure that the whole thing was fast and reliable enough that their users wouldn’t notice. In February, the translation of a 10-word sentence took 10 seconds. They could never introduce anything that slow. The Translate team began to conduct latency experiments on a small percentage of users, in the form of faked delays, to identify tolerance. They found that a translation that took twice as long, or even five times as long, wouldn’t be registered. An eightfold slowdown would. They didn’t need to make sure this was true across all languages. In the case of a high-traffic language, like French or Chinese, they could countenance virtually no slowdown. For something more obscure, they knew that users wouldn’t be so scared off by a slight delay if they were getting better quality. They just wanted to prevent people from giving up and switching over to some competitor’s service.
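The shape of that latency experiment can be sketched roughly as follows; the sampling fraction, the delay multiplier and the function names are all assumptions for illustration, not details from the Translate team.

```python
import random
import time

# Hypothetical sketch of the "faked delay" experiment described above:
# a small random slice of requests is artificially slowed by a chosen
# multiplier, and abandonment rates for that slice are then compared
# with the untouched control traffic.
EXPERIMENT_FRACTION = 0.01   # e.g. 1 percent of traffic (assumed value)
DELAY_MULTIPLIER = 2.0       # the 2x, 5x or 8x factor under test

def translate_with_latency_probe(text, translate_fn):
    start = time.monotonic()
    result = translate_fn(text)
    elapsed = time.monotonic() - start
    if random.random() < EXPERIMENT_FRACTION:
        # Pad the response so the sampled user experiences a slower service.
        time.sleep(elapsed * (DELAY_MULTIPLIER - 1.0))
    return result
```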
Schuster, for his part, admitted he just didn’t know if they ever could make it fast enough. He remembers a conversation in the microkitchen during which he turned to Chen and said, “There must be something we don’t know to make it fast enough, but I don’t know what it could be.”
He did know, though, that they needed more computers — “G.P.U.s,” graphics processors reconfigured for neural networks — for training.
Hughes went to Schuster to ask what he thought. “Should we ask for a thousand G.P.U.s?”
Schuster said, “Why not 2,000?”

Ten days later, they had the additional 2,000 processors.
By April, the original lineup of three had become more than 30 people — some of them, like Le, on the Brain side, and many from Translate. In May, Hughes assigned a kind of provisional owner to each language pair, and they all checked their results into a big shared spreadsheet of performance evaluations. At any given time, at least 20 people were running their own independent weeklong experiments and dealing with whatever unexpected problems came up. One day a model, for no apparent reason, started taking all the numbers it came across in a sentence and discarding them. There were months when it was all touch and go. “People were almost yelling,” Schuster said.
By late spring, the various pieces were coming together. The team introduced something called a “word-piece model,” a “coverage penalty,” “length normalization.” Each part improved the results, Schuster says, by maybe a few percentage points, but in aggregate they had significant effects. Once the model was standardized, it would be only a single multilingual model that would improve over time, rather than the 150 different models that Translate currently used. Still, the paradox — that a tool built to further generalize, via learning machines, the process of automation required such an extraordinary amount of concerted human ingenuity and effort — was not lost on them. So much of what they did was just gut. How many neurons per layer did you use? 1,024 or 512? How many layers? How many sentences did you run through at a time? How long did you train for?
“We did hundreds of experiments,” Schuster told me, “until we knew that we could stop the training after one week. You’re always saying: When do we stop? How do I know I’m done? You never know you’re done. The machine-learning mechanism is never perfect. You need to train, and at some point you have to stop. That’s the very painful nature of this whole system. It’s hard for some people. It’s a little bit an art — where you put your brush to make it nice. It comes from just doing it. Some people are better, some worse.”
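A sketch of the kind of search Schuster is describing follows; apart from the 512-versus-1,024 neuron question and the one-week training budget mentioned above, every value here is a hypothetical stand-in.

```python
from itertools import product

# Hypothetical sketch of the hyperparameter choices made "by gut":
# the 512/1024 options and the one-week budget come from the text,
# everything else is an illustrative assumption.
search_space = {
    "units_per_layer": [512, 1024],     # how many neurons per layer?
    "num_layers": [4, 8],               # how many layers?
    "batch_size_sentences": [64, 128],  # how many sentences at a time?
    "training_days": [7, 14],           # how long to train?
}

# Each combination is roughly one of the weeklong experiments mentioned
# above: launch a training run, wait, record its BLEU score, repeat.
for values in product(*search_space.values()):
    config = dict(zip(search_space, values))
    print(config)
```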
By May, the Brain team understood that the only way they were ever going to make the system fast enough to implement as a product was if they could run it on T.P.U.s, the special-purpose chips that Dean had called for. As Chen put it: “We did not even know if the code would work. But we did know that without T.P.U.s, it definitely wasn’t going to work.” He remembers going to Dean one on one to plead, “Please reserve something for us.” Dean had reserved them. The T.P.U.s, however, didn’t work right out of the box. Wu spent two months sitting next to someone from the hardware team in an attempt to figure out why. They weren’t just debugging the model; they were debugging the chip. The neural-translation project would be proof of concept for the whole infrastructural investment.
One Wednesday in June, the meeting in Quartz Lake began with murmurs about a Baidu paper that had recently appeared on the discipline’s chief online forum. Schuster brought the room to order. “Yes, Baidu came out with a paper. It feels like someone looking through our shoulder — similar architecture, similar results.” The company’s BLEU scores were essentially what Google achieved in its internal tests in February and March. Le didn’t seem ruffled; his conclusion seemed to be that it was a sign Google was on the right track. “It is very similar to our system,” he said with quiet approval.
The Google team knew that they could have published their results earlier and perhaps beaten their competitors, but as Schuster put it: “Launching is more important than publishing. People say, ‘Oh, I did something first,’ but who cares, in the end?”
This did, however, make it imperative that they get their own service out first and better. Hughes had a fantasy that they wouldn’t even inform their users of the switch. They would just wait and see if social media lit up with suspicions about the vast improvements.
“We don’t want to say it’s a new system yet,” he told me at 5:36 p.m. two days after Labor Day, one minute before they rolled out Chinese-to-English to 10 percent of their users, without telling anyone. “We want to make sure it works. The ideal is that it’s exploding on Twitter: ‘Have you seen how awesome Google Translate got?’ ”
8. A Celebration
The only two reliable measures of time in the seasonless Silicon Valley are the rotations of seasonal fruit in the microkitchens — from the pluots of midsummer to the Asian pears and Fuyu persimmons of early fall — and the zigzag of technological progress. On an almost uncomfortably warm Monday afternoon in late September, the team’s paper was at last released. It had an almost comical 31 authors. The next day, the members of Brain and Translate gathered to throw themselves a little celebratory reception in the Translate microkitchen. The rooms in the Brain building, perhaps in homage to the long winters of their diaspora, are named after Alaskan locales; the Translate building’s theme is Hawaiian.
The Hawaiian microkitchen has a slightly grainy beach photograph on one wall, a small lei-garlanded thatched-hut service counter with a stuffed parrot at the center and ceiling fixtures fitted to resemble paper lanterns. Two sparse histograms of bamboo poles line the sides, like the posts of an ill-defended tropical fort. Beyond the bamboo poles, glass walls and doors open onto rows of identical gray desks on either side. That morning had seen the arrival of new hooded sweatshirts to honor 10 years of Translate, and many team members went over to the party from their desks in their new gear. They were in part celebrating the fact that their decade of collective work was, as of that day, en route to retirement. At another institution, these new hoodies might thus have become a costume of bereavement, but the engineers and computer scientists from both teams all seemed pleased.

Google’s neural translation was at last working. By the time of the party, the company’s Chinese-English test had already processed 18 million queries. One engineer on the Translate team was running around with his phone out, trying to translate entire sentences from Chinese to English using Baidu’s alternative. He crowed with glee to anybody who would listen. “If you put in more than two characters at once, it times out!” (Baidu says this problem has never been reported by users.)
When word began to spread, over the following weeks, that Google had introduced neural translation for Chinese to English, some people speculated that it was because that was the only language pair for which the company had decent results. Everybody at the party knew that the reality of their achievement would be clear in November. By then, however, many of them would be on to other projects.
Hughes cleared his throat and stepped in front of the tiki bar. He wore a faded green polo with a rumpled collar, lightly patterned across the midsection with dark bands of drying sweat. There had been last-minute problems, and then last-last-minute problems, including a very big measurement error in the paper and a weird punctuation-related bug in the system. But everything was resolved — or at least sufficiently resolved for the moment. The guests quieted. Hughes ran efficient and productive meetings, with a low tolerance for maundering or side conversation, but he was given pause by the gravity of the occasion. He began by acknowledging that he was, perhaps, stretching a metaphor, but it was important to him to underline the fact that the neural-translation project itself represented a “collaboration between groups that spoke different languages.”
Their neural-translation project, he continued, represented a “step function forward” — that is, a discontinuous advance, a vertical leap rather than a smooth curve. The relevant translation had been not just between the two teams but from theory into reality. He raised a plastic demi-flute of expensive-looking Champagne.
“To communication,” he said, “and cooperation!”
The engineers assembled looked around at one another and gave themselves over to little circumspect whoops and applause.
Jeff Dean stood near the center of the microkitchen, his hands in his pockets, shoulders hunched slightly inward, with Corrado and Schuster. Dean saw that there was some diffuse preference that he contribute to the observance of the occasion, and he did so in a characteristically understated manner, with a light, rapid, concise addendum.
What they had shown, Dean said, was that they could do two major things at once: “Do the research and get it in front of, I dunno, half a billion people.”
Everyone laughed, not because it was an exaggeration but because it wasn’t.

Epilogue: Machines Without Ghosts
Perhaps the most famous historic critique of artificial intelligence, or the claims made on its behalf, implicates the question of translation. The Chinese Room argument was proposed in 1980 by the Berkeley philosopher John Searle. In Searle’s thought experiment, a monolingual English speaker sits alone in a cell. An unseen jailer passes him, through a slot in the door, slips of paper marked with Chinese characters. The prisoner has been given a set of tables and rules in English for the composition of replies. He becomes so adept with these instructions that his answers are soon “absolutely indistinguishable from those of Chinese speakers.” Should the unlucky prisoner be said to “understand” Chinese? Searle thought the answer was obviously not. This metaphor for a computer, Searle later wrote, exploded the claim that “the appropriately programmed digital computer with the right inputs and outputs would thereby have a mind in exactly the sense that human beings have minds.”
For the Google Brain team, though, or for nearly everyone else who works in machine learning in Silicon Valley, that view is entirely beside the point. This doesn’t mean they’re just ignoring the philosophical question. It means they have a fundamentally different view of the mind. Unlike Searle, they don’t assume that “consciousness” is some special, numinously glowing mental attribute — what the philosopher Gilbert Ryle called the “ghost in the machine.” They just believe instead that the complex assortment of skills we call “consciousness” has randomly emerged from the coordinated activity of many different simple mechanisms. The implication is that our facility with what we consider the higher registers of thought is no different in kind from what we’re tempted to perceive as the lower registers. Logical reasoning, on this account, is seen as a lucky adaptation; so is the ability to throw and catch a ball. Artificial intelligence is not about building a mind; it’s about the improvement of tools to solve problems. As Corrado said to me on my very first day at Google, “It’s not about what a machine ‘knows’ or ‘understands’ but what it ‘does,’ and — more importantly — what it doesn’t do yet.”
Where you come down on “knowing” versus “doing” has real cultural and social implications. At the party, Schuster came over to me to express his frustration with the paper’s media reception. “Did you see the first press?” he asked me. He paraphrased a headline from that morning, blocking it word by word with his hand as he recited it: GOOGLE SAYS A.I. TRANSLATION IS INDISTINGUISHABLE FROM HUMANS’. Over the final weeks of the paper’s composition, the team had struggled with this; Schuster often repeated that the message of the paper was “It’s much better than it was before, but not as good as humans.” He had hoped it would be clear that their efforts weren’t about replacing people but helping them.
And yet the rise of machine learning makes it more difficult for us to carve out a special place for ourselves. If you believe, with Searle, that there is something special about human “insight,” you can draw a clear line that separates the human from the automated. If you agree with Searle’s antagonists, you can’t. It is understandable why so many people cling fast to the former view. At a 2015 M.I.T. conference about the roots of artificial intelligence, Noam Chomsky was asked what he thought of machine learning. He pooh-poohed the whole enterprise as mere statistical prediction, a glorified weather forecast. Even if neural translation attained perfect functionality, it would reveal nothing profound about the underlying nature of language. It could never tell you if a pronoun took the dative or the accusative case. This kind of prediction makes for a good tool to accomplish our ends, but it doesn’t succeed by the standards of furthering our understanding of why things happen the way they do. A machine can already detect tumors in medical scans better than human radiologists, but the machine can’t tell you what’s causing the cancer.
Then again, can the radiologist?
Medical diagnosis is one field most immediately, and perhaps unpredictably, threatened by machine learning. Radiologists are extensively trained and extremely well paid, and we think of their skill as one of professional insight — the highest register of thought. In the past year alone, researchers have shown not only that neural networks can find tumors in medical images much earlier than their human counterparts but also that machines can even make such diagnoses from the texts of pathology reports. What radiologists do turns out to be something much closer to predictive pattern-matching than logical analysis. They’re not telling you what caused the cancer; they’re just telling you it’s there.

Once you’ve built a robust pattern-matching apparatus for one purpose, it can be tweaked in the service of others. One Translate engineer took a network he put together to judge artwork and used it to drive an autonomous radio-controlled car. A network built to recognize a cat can be turned around and trained on CT scans — and on infinitely more examples than even the best doctor could ever review. A neural network built to translate could work through millions of pages of documents of legal discovery in the tiniest fraction of the time it would take the most expensively credentialed lawyer. The kinds of jobs taken by automatons will no longer be just repetitive tasks that were once — unfairly, it ought to be emphasized — associated with the supposed lower intelligence of the uneducated classes. We’re not only talking about three and a half million truck drivers who may soon lack careers. We’re talking about inventory managers, economists, financial advisers, real estate agents. What Brain did over nine months is just one example of how quickly a small group at a large company can automate a task nobody ever would have associated with machines.
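In modern frameworks, that kind of repurposing takes only a few lines. The sketch below uses PyTorch and torchvision, neither of which figures in this story; the two-class medical task and the frozen-feature strategy are illustrative assumptions, not a description of any system mentioned here.

```python
import torch.nn as nn
from torchvision import models

# A minimal sketch of the repurposing described above: take a network
# pretrained on everyday photographs (cats included), freeze its
# feature layers, and retrain only the final layer for a new,
# hypothetical two-class imaging task.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False               # keep the pretrained features

model.fc = nn.Linear(model.fc.in_features, 2)  # e.g. {finding, no finding}
# From here, a standard training loop over labeled scans updates only
# model.fc; the "cat network" becomes a scan classifier.
```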
The most important thing happening in Silicon Valley right now is not disruption. Rather, it’s institution-building — and the consolidation of power — on a scale and at a pace that are both probably unprecedented in human history. Brain has interns; it has residents; it has “ninja” classes to train people in other departments. Everywhere there are bins of free bike helmets, and free green umbrellas for the two days a year it rains, and little fruit salads, and nap pods, and shared treadmill desks, and massage chairs, and random cartons of high-end pastries, and places for baby-clothes donations, and two-story climbing walls with scheduled instructors, and reading groups and policy talks and variegated support networks. The recipients of these major investments in human cultivation — for they’re far more than perks for proles in some digital salt mine — have at hand the power of complexly coordinated servers distributed across 13 data centers on four continents, data centers that draw enough electricity to light up large cities.

But even enormous institutions like Google will be subject to this wave of automation; once machines can learn from human speech, even the comfortable job of the programmer is threatened. As the party in the tiki bar was winding down, a Translate engineer brought over his laptop to show Hughes something. The screen swirled and pulsed with a vivid, kaleidoscopic animation of brightly colored spheres in long looping orbits that periodically collapsed into nebulae before dispersing once more.
Hughes recognized what it was right away, but I had to look closely before I saw all the names — of people and files. It was an animation of the history of 10 years of changes to the Translate code base, every single buzzing and blooming contribution by every last team member. Hughes reached over gently to skip forward, from 2006 to 2008 to 2015, stopping every once in a while to pause and remember some distant campaign, some ancient triumph or catastrophe that now hurried by to be absorbed elsewhere or to burst on its own. Hughes pointed out how often Jeff Dean’s name expanded here and there in glowing spheres.

Hughes called over Corrado, and they stood transfixed. To break the spell of melancholic nostalgia, Corrado, looking a little wounded, glanced up and said, “So when do we get to delete it?”
“Don’t worry about it,” Hughes said. “The new code base is going to grow. Everything grows.”
Gideon Lewis-Kraus is a writer at large for the magazine and a fellow at New America.

======== Appendix ========

Referenced Here: Google Translate, Google Brain, Jun Rekimoto, Sundar Pichai, Jeff Dean, Andrew Ng, Baidu, Project Marvin, Geoffrey Hinton, Max Planck Institute for Biological Cybernetics, Quoc Le, MacDuff Hughes, Marvin Minsky
