Category Archives: Academics

Academics, History, Philosophy, Literature, Music, Drama, Science, Mathematics, Logic, Sociology, Economics, Behavioral Economics, Psychology

Neo.Life

This beta site, Neo.Life (the link goes beyond the splash page), is tracking the “neobiological revolution”. I wholeheartedly agree that some of our best and brightest are on the case. Here they are:

ABOUT
NEO.LIFE
Making Sense of the Neobiological Revolution
NOTE FROM THE EDITOR
Mapping the brain, sequencing the genome, decoding the microbiome, extending life, curing diseases, editing mutations. We live in a time of awe and possibility — and also enormous responsibility. Are you prepared?

EDITORS

FOUNDER

Jane Metcalfe
Founder of Neo.life. Entrepreneur in media (Wired) and food (TCHO). Lover of mountains, horses, roses, and kimchee, though not necessarily in that order.

EDITOR
Brian Bergstein
Story seeker and story teller. Editor at NEO.LIFE. Former executive editor of MIT Technology Review; former technology & media editor at The Associated Press

ART DIRECTOR
Nicholas Vokey
Los Angeles-based graphic designer and animator.

CONSULTANT
Saul Carlin
founder @subcasthq. used to work here.

EDITOR
Rachel Lehmann-Haupt
Editor, www.theartandscienceoffamily.com & NEO.LIFE, author of In Her Own Sweet Time: Egg Freezing and the New Frontiers of Family

Laura Cochrane
“To oppose something is to maintain it.” — Ursula K. Le Guin

WRITERS

Amanda Schaffer
writes for the New Yorker and Neo.life, and is a former medical columnist for Slate. @abschaffer

Mallory Pickett
freelance journalist in Los Angeles

Karen Weintraub
Health/science journalist passionate about human health, cool research, and telling stories.

Anna Nowogrodzki
Science and tech journalist. Writing in Nature, National Geographic, Smithsonian, mental_floss, & others.

Juan Enriquez
Best-selling author, Managing Director of Excel Venture Management.

Christina Farr
Tech and features writer. @Stanford grad.

NEO.LIFE
Making sense of the Neobiological Revolution. Get the email at www.neo.life.

Maria Finn
I’m an author and tell stories across multiple mediums including prose, food, gardens, technology & narrative mapping. www.mariafinn.com Instagram maria_finn1.

Stephanie Pappas
I write about science, technology and the things people do with them.

David Eagleman
Neuroscientist at Stanford, internationally bestselling author of fiction and non-fiction, creator and presenter of PBS’ The Brain.

Kristen V. Brown
Reporter @Gizmodo covering biotech.

Thomas Goetz

David Ewing Duncan
Life science journalist; bestselling author, 9 books; NY Times, Atlantic, Wired, Daily Beast, NPR, ABC News, more; Curator, Arc Fusion www.davidewingduncan.com

Dorothy Santos
writer, editor, curator, and educator based in the San Francisco Bay Area about.me/dorothysantos.com

Dr. Sophie Zaaijer
CEO of PlayDNA, Postdoctoral fellow at the New York Genome Center, Runway postdoc at Cornell Tech.

Andrew Rosenblum
I’m a freelance tech writer based in Oakland, CA. You can find my work at Neo.Life, the MIT Technology Review, Popular Science, and many other places.

Zoe Cormier

Diana Crow
Fledgling science journalist here, hoping to foster discussion about the ways science acts as a catalyst for social change #biology

Ashton Applewhite
Calling for a radical aging movement. Anti-ageism blog+talk+book

Grace Rubenstein
Journalist, editor, media producer. Social/bio science geek. Tweets on health science, journalism, immigration. Spanish speaker & dancing fool.

Science and other sundries.

Esther Dyson
Internet court jEsther — I occupy Esther Dyson. Founder @HICCup_co https://t.co/5dWfUSratQ http://t.co/a1Gmo3FTQv

Jessica Leber
Freelance science and technology journalist and editor, formerly on staff at Fast Company, Vocativ, MIT Technology Review, and ClimateWire.

Jessica Carew Kraft
An anthropologist, artist, and naturalist writing about health, education, and rewilding. Mother to two girls in San Francisco.

Corby Kummer
Senior editor, The Atlantic, five-time James Beard Journalism Award winner, restaurant reviewer for New York, Boston, and Atlanta magazines

K McGowan
Journalist. Reporting on health, medicine, science, other excellent things. T: @mcgowankat

Rob Waters
I’m a journalist living in Berkeley. I write about health, science, social justice and policy. Father of 1. From Detroit.

Yiting Sun
writes for MIT Technology Review and Neo.life from Beijing, and was based in Accra, Ghana, in 2014 and 2015.

Michael Hawley

Richard Sprague
Curious amateur. Years of near-daily microbiome experiments. US CEO of AI healthcare startup http://airdoc.com

Bob Parks ✂
Connoisseur of the slap dash . . . maker . . . runner . . . writer of Outside magazine’s Gear Guy blog . . . freelance writer and reporter.

CREDIT: https://medium.com/neodotlife/review-of-daytwo-microbiome-test-deacd5464cd5

Microbiome Apps Personalize Eating Recommendations

Richard Sprague provides a useful update on the microbiome landscape below. The field is exploding: your gut can be measured, and your gut can influence your health and well-being. Now these gut measurements can offer people a first: personalized nutrition information.

Among the more relevant points:

– Israel’s Weizmann Institute is the global academic leader. Eran Elinav, a physician and immunologist at the Weizmann Institute, is one of its lead investigators (see prior post).
– The older technology for measuring the gut is called “16S” sequencing. It tells you at a high level which kinds of microbes are present. It’s cheap and easy, but 16S can see only broad categories.
– The companies competing to measure your microbiome are uBiome, American Gut, Thryve, DayTwo, and Viome. DayTwo and Viome offer more advanced technology (see below).
– The latest technology is “metagenomic sequencing,” which is better because it is more specific and detailed.
– By combining metagenomic sequencing information with extensive research about how certain species interact with particular foods, machine-learning algorithms can recommend what you should eat (see the sketch after this list).
– DayTwo offers metagenomic sequencing for $299, and combines it with all available research to offer personalized nutrition information.
– DayTwo recently completed a $12 million financing round from, among others, Mayo Clinic, which announced it would be validating the research in the U.S.
– DayTwo draws its academic understanding from Israel’s Weizmann Institute. The app is based on more than five years of highly cited research showing, for example, that while people on average respond similarly to white bread versus whole grain sourdough bread, the differences between individuals can be huge: what’s good for one specific person may be bad for another.
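To make the machine-learning step above concrete, here is a minimal sketch (in Python) of how a microbiome-plus-meal food-scoring model could be wired together. Everything here is hypothetical: the species features, grading thresholds, and synthetic training data are placeholders I chose for illustration, and DayTwo’s actual proprietary model is certainly far more sophisticated.

```python
# Minimal sketch of the idea behind microbiome-based meal scoring.
# NOT DayTwo's model: feature names, thresholds, and data are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Features: relative abundances of a few (hypothetical) gut species
# plus the carbohydrate and fat content of a candidate meal.
n_samples = 500
species_abundance = rng.dirichlet(np.ones(5), size=n_samples)   # 5 species
meal_macros = rng.uniform(0, 60, size=(n_samples, 2))           # carbs, fat (g)
X = np.hstack([species_abundance, meal_macros])

# Synthetic target: post-meal glucose rise, driven mostly by carbs but
# modulated by microbiome composition (a stand-in for real labels).
y = 0.8 * meal_macros[:, 0] - 20 * species_abundance[:, 0] + rng.normal(0, 5, n_samples)

model = GradientBoostingRegressor().fit(X, y)

def grade_meal(abundances, carbs_g, fat_g):
    """Map a predicted glucose rise to a coarse letter grade."""
    features = np.hstack([abundances, [carbs_g, fat_g]]).reshape(1, -1)
    rise = model.predict(features)[0]
    return "A" if rise < 15 else "B" if rise < 30 else "C"

print(grade_meal(species_abundance[0], carbs_g=45, fat_g=10))
```

The point is only the shape of the pipeline: microbial abundances plus meal composition go in, a predicted glycemic response comes out, and that prediction is translated into a food grade.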

CREDIT: Article on Microbiome Advances

When a Double-Chocolate Brownie is Better for You Than Quinoa

A $299 microbiome test from DayTwo turns up some counterintuitive dietary advice.

Why do certain diets work well for some people but not others? Although several genetic tests try to answer that question and might help you craft ideal nutrition plans, your DNA reveals only part of the picture. A new generation of tests from DayTwo and Viome offers diet advice based on a more complete view: they look at your microbiome, the invisible world of bacteria that help you metabolize food, and, unlike your DNA, change constantly throughout your life.
These bugs are involved in the synthesis of vitamins and other compounds in food, and they even play a role in the digestion of gluten. Artificial sweeteners may not contain calories, but they do modify the bacteria in your gut, which may explain why some people continue to gain weight on diet soda. Everyone’s microbiome is different.

So how well do these new tests work?
Basic microbiome tests, long available from uBiome, American Gut, Thryve, and others, based on older “16S” sequencing, can tell you at a high level which kinds of microbes are present. It’s cheap and easy, but 16S can see only broad categories, the bacterial equivalent of, say, canines versus felines. But just as your life might depend on knowing the difference between a wolf and a Chihuahua, your body’s reaction to food often depends on distinctions that can be known only at the species level. The difference between a “good” microbe and a pathogen can be a single DNA base pair.

New tests use more precise “metagenomic” sequencing that can make those distinctions. And by combining that information with extensive research about how those species interact with particular foods, machine-learning algorithms can recommend what you should eat. (Disclosure: I am a former “citizen scientist in residence” at uBiome. But I have no current relationship with any of these companies; I’m just an enthusiast about the microbiome.)

I recently tested myself with DayTwo ($299) to see what it would recommend for me, and I was pleased that the advice was not always the standard “eat more vegetables” that you’ll get from other products claiming to help you eat healthily. DayTwo’s advice is much more specific and often refreshingly counterintuitive. It’s based on more than five years of highly cited research at Israel’s Weizmann Institute, showing, for example, that while people on average respond similarly to white bread versus whole grain sourdough bread, the differences between individuals can be huge: what’s good for one specific person may be bad for another.

In my case, whole grain breads all rate C-. French toast with challah bread: A.

The DayTwo test was pretty straightforward: you collect what comes out of your, ahem, gut, which involves mailing a sample from your time on the toilet. Unlike the other tests, which can analyze the DNA found in just a tiny swab from a stain on a piece of toilet paper, DayTwo requires more like a tablespoon. The extra amount is needed for DayTwo’s more comprehensive metagenomics sequencing.

Since you can get a microbiome test from other companies for under $100, does the additional metagenomic information from DayTwo justify its much higher price? Generally, I found the answer is yes.

About two months after I sent my sample, my iPhone lit up with my results in a handy app that gave me a personalized rating for most common foods, graded from A+ to C-. In my case, whole grain breads all rate C-. Slightly better are pasta and oatmeal, each ranked C+. Even “healthy” quinoa — a favorite of gluten-free diets — was a mere B-. Why? DayTwo’s algorithm can’t say precisely, but among the hundreds of thousands of gut microbe and meal combinations it was trained on, it finds that my microbiome doesn’t work well with these grains. They make my blood sugar rise too high.

So what kinds of bread are good for me? How about a butter croissant (B+) or cheese ravioli (A-)? The ultimate bread winner for me: French toast with challah bread (A). These recommendations are very different from the one-size-fits-all advice from the U.S. Department of Agriculture or the American Diabetes Association.

I was also pleased to learn that a Starbucks double chocolate brownie is an A- for me, while a 100-calorie pack of Snyder’s of Hanover pretzels gets a C-. That might go against general diet advice, but an algorithm determined that the thousands of bacterial species inside me tend to metabolize fatty foods in a way that results in healthier blood sugar levels than what I get from high-carb foods. Of course, that’s advice just for me; your mileage may vary.

Although the research behind DayTwo has been well-reviewed for more than five years, the app is new to the U.S., so the built-in food suggestions often seem skewed toward Middle Eastern eaters, perhaps the Israeli subjects who formed the original research cohort. That might explain why the app’s suggestions for me include lamb souvlaki with yogurt garlic dip for dinner (A+) and lamb kabob and a side of lentils (A) for lunch. They sound delicious, but to many American ears they might not have the ring of “pork ribs” or “ribeye steak,” which have the same A+ rating. Incidentally, DayTwo recently completed a $12 million financing round from, among others, Mayo Clinic, which announced it would be validating the research in the U.S., so I expect the menu to expand with more familiar fare.

Fortunately you’re not limited to the built-in menu choices. The app includes a “build a meal” function that lets you enter combinations of foods from a large database that includes packaged items from Trader Joe’s and Whole Foods.

There is much more to the product, such as a graphical rendering of where my microbiome fits on the spectrum of the rest of the population that eats a particular food. Since the microbiome changes constantly, this will help me see what is different when I do a retest and when I try Viome and other tests.

I’ve had my DayTwo results for only a few weeks, so it’s too soon to know what happens if I take the app’s advice over the long term. Thankfully I’m in good health and reasonably fit, but for now I’ll be eating more strawberries (A+) and blackberries (A-), and fewer apples (B-) and bananas (C+). And overall I’m looking forward to a future where each of us will insist on personalized nutritional information. We all have unique microbiomes, and an app like DayTwo lets us finally eat that way too.

Richard Sprague is a technology executive and quantified-self enthusiast who has worked at Apple, Microsoft, and other tech companies. He is now the U.S. CEO of an AI healthcare startup, Airdoc.

====================APPENDIX: Older Posts about the microbiome =========

Microbiome Update
CREDIT: https://www.wsj.com/articles/how-disrupting-your-guts-rhythm-affects-your-health-1488164400?mod=e2tw. “A healthy community of microbes in the gut maintains regular daily cycles of activities.” (Photo: Weizmann Institute.) By Larry M. Greenberg, updated Feb. 27, 2017. New research is helping to unravel the mystery of how […]

Vibrant Health measures microbiome


Microbiome Update
My last research on this subject was in August 2014. I looked at both microbiomes and proteomics. Today, the New York Times published a very comprehensive update on microbiome research: Link to New York Times Microbiome Article Here is the article itself: = = = = = = = ARTICLE BEGINS HERE = = = […]

Microbiomes
Science is advancing on microbiomes in the gut. The key to food is fiber, and the best fiber is long fiber, like cellulose, uncooked or slightly sautéed (cooking shortens fiber length). The best vegetable, in the view of Jeff Leach, is a leek. Eating Well Article on Microbiome = = = = = […]

Arivale Launches LABS company
“Arivale” Launched and Moving Fast. They launched last month. They have 19 people in the company and a 107-person pilot – but their plans are way more ambitious than that. Moreover: “The founders said they couldn’t envision Arivale launching even two or three years ago.” Read on …. This is an important development: the […]

Precision Wellness at Mt Sinai
Mount Sinai announcement: Mount Sinai to Establish Precision Wellness Center to Advance Personalized Healthcare; Mount Sinai Health System Launches Telehealth Initiatives. Joshua Harris, co-founder of Apollo Global Management, and his wife, Marjorie, have made a $5 million gift to the Icahn School of Medicine at Mount Sinai to establish the Harris Center for Precision Wellness. […]

Proteomics
“Systems biology…is about putting together rather than taking apart, integration rather than reduction. It requires that we develop ways of thinking about integration that are as rigorous as our reductionist programmes, but different….It means changing our philosophy, in the full sense of the term” (Denis Noble).[5] Proteomics From Wikipedia, the free encyclopedia For the journal […]

Quantified Water Movement (QWM)

Think FITBITS for water. The Quantified Water Movement (QWM) is here to stay, with devices that make possible real-time monitoring of water quality in streams, rivers, lakes, and oceans for less than $1,000 per device.

The Stroud Water Research Center in Pennsylvania is leading the way, along with other centers of excellence around the world. Stroud has been leading the way on water for fifty years. It is an elite water-quality research organization, renowned for its globally relevant science and the excellence of its scientists. Find out more at www.stroudcenter.org.

As a part of this global leadership in the study of water quality, Stroud is advancing the applied technologies that comprise the “quantified water movement” – the real-time monitoring of water quality in streams, rivers, lakes and oceans.

QWM is very much like the “quantified self movement” (QSM; see the post on QSM). QSM takes full advantage of low-cost sensor and communication technology to “quantify my self”. In other words, I can dramatically advance my understanding of my own personal well-being in areas like exercise, sleep, blood glucose levels, etc. This movement has already proven that real-time reporting on metrics is possible at a very low cost, and on a one-person-at-a-time scale. Apple Watch and FITBIT are examples of commercial products arising out of QSM.

In the same way, QWM takes full advantage of sensors and communication technology to provide real-time reporting on water quality for a given stream, lake, river, or ocean. While still in a formative stage, QWM uses the well-known advances in sensor, big data, and data mining technology to monitor water quality on a real-time basis. Best of all, this applied technology has now reached an affordable price point.

For less than $1,000 per device, it is now possible to fully monitor any body of water, and to report out the findings in a comprehensive dataset. Many leaders believe that less than $100 is possible very soon.

The applied technology ends up being a simple “data logger” coupled with a simple radio transmitter.

Examples of easy-to-measure metrics are:

1. water depth
2. conductivity (measures saltiness or salinity)
3. dissolved oxygen (supports fish and beneficial bacteria)
4. turbidity (a sign of runoff from erosion; cloudy water actually abrades fish and prevents fish from finding food)

Thanks to Stroud, training now exists that is super simple. For example, in one hour you can learn the capabilities of this low-cost equipment, and the science as to why it is important.

In a two-day training, citizen scientists and civil engineers alike can learn how to program their own data logger, attach sensors to the data logger, and deploy and maintain the equipment in an aquatic environment.
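To give a flavor of what “program your own data logger” involves, here is a minimal Python sketch of a logging loop for the four metrics listed above. The sensor readings are simulated stand-ins; real EnviroDIY-style loggers are typically Arduino-class devices programmed against their own sensor libraries, so treat this purely as an illustration of the pattern (timestamp, read sensors, record, repeat).

```python
# Minimal sketch of a water-quality logging loop (simulated sensors).
# A real device would replace the read_* functions with actual sensor
# drivers and transmit over a radio link instead of writing a CSV file.
import csv
import random
import time
from datetime import datetime, timezone

def read_depth_m():       return round(random.uniform(0.2, 2.0), 2)   # water depth (m)
def read_conductivity():  return round(random.uniform(50, 800), 1)    # salinity proxy (uS/cm)
def read_dissolved_o2():  return round(random.uniform(4.0, 12.0), 2)  # dissolved oxygen (mg/L)
def read_turbidity_ntu(): return round(random.uniform(0, 150), 1)     # turbidity (NTU)

def log_once(path="stream_log.csv"):
    row = [
        datetime.now(timezone.utc).isoformat(),
        read_depth_m(),
        read_conductivity(),
        read_dissolved_o2(),
        read_turbidity_ntu(),
    ]
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(row)
    return row

if __name__ == "__main__":
    for _ in range(3):        # a real logger runs indefinitely
        print(log_once())
        time.sleep(1)         # real deployments sample every few minutes
```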

All of this and more is illuminated at www.enviroDIY.org.

Four Daily Well-Being Workouts

Marty Seligman is a renowned well-being researcher, and writes in today’s NYT about four practices for flourishing:

Identify Signature Strengths: Focus every day on personal strengths exhibited when you were at your best.

Find the Good: Focus every day on asking, “Why did this good thing happen?”

Make a Gratitude Visit: Visit a person you feel gratitude toward.

Respond Constructively: Practice active, constructive responses.

===================

CREDIT: Article Below Can Be Found at This Link

Get Happy: Four Well-Being Workouts

By JULIE SCELFO
APRIL 5, 2017
Relieving stress and anxiety might help you feel better — for a bit. Martin E.P. Seligman, a professor of psychology at the University of Pennsylvania and a pioneer in the field of positive psychology, does not see alleviating negative emotions as a path to happiness.
“Psychology is generally focused on how to relieve depression, anger and worry,” he said. “Freud and Schopenhauer said the most you can ever hope for in life is not to suffer, not to be miserable, and I think that view is empirically false, morally insidious, and a political and educational dead-end.”
“What makes life worth living,” he said, “is much more than the absence of the negative.”

To Dr. Seligman, the most effective long-term strategy for happiness is to actively cultivate well-being.

In his 2011 book, “Flourish: A Visionary New Understanding of Happiness and Well-Being,” he explored how well-being consists not merely of feeling happy (an emotion that can be fleeting) but of experiencing a sense of contentment in the knowledge that your life is flourishing and has meaning beyond your own pleasure.

To cultivate the components of well-being, which include engagement, good relationships, accomplishment and purpose, Dr. Seligman suggests these four exercises based on research at the Penn Positive Psychology Center, which he directs, and at other universities.

Identify Signature Strengths
Write down a story about a time when you were at your best. It doesn’t need to be a life-changing event but should have a clear beginning, middle and end. Reread it every day for a week, and each time ask yourself: “What personal strengths did I display when I was at my best?” Did you show a lot of creativity? Good judgment? Were you kind to other people? Loyal? Brave? Passionate? Forgiving? Honest?

Writing down your answers “puts you in touch with what you’re good at,” Dr. Seligman explained. The next step is to contemplate how to use these strengths to your advantage, intentionally organizing and structuring your life around them.

In a study by Dr. Seligman and colleagues published in American Psychologist, participants looked for an opportunity to deploy one of their signature strengths “in a new and different way” every day for one week.

“A week later, a month later, six months later, people had on average lower rates of depression and higher life satisfaction,” Dr. Seligman said. “Possible mechanisms could be more positive emotions. People like you more, relationships go better, life goes better.”

Find the Good
Set aside 10 minutes before you go to bed each night to write down three things that went really well that day. Next to each event answer the question, “Why did this good thing happen?”
Instead of focusing on life’s lows, which can increase the likelihood of depression, the exercise “turns your attention to the good things in life, so it changes what you attend to,” Dr. Seligman said. “Consciousness is like your tongue: It swirls around in the mouth looking for a cavity, and when it finds it, you focus on it. Imagine if your tongue went looking for a beautiful, healthy tooth. Polish it.”

Make a Gratitude Visit
Think of someone who has been especially kind to you but you have not properly thanked. Write a letter describing what he or she did and how it affected your life, and how you often remember the effort. Then arrange a meeting and read the letter aloud, in person.

“It’s common that when people do the gratitude visit both people weep out of joy,” Dr. Seligman said. Why is the experience so powerful? “It puts you in better touch with other people, with your place in the world.”

Respond Constructively
This exercise was inspired by the work of Shelly Gable, a social psychologist at the University of California, Santa Barbara, who has extensively studied marriages and other close relationships. The next time someone you care about shares good news, give what Dr. Gable calls an “active constructive response.”

That is, instead of saying something passive like, “Oh, that’s nice” or being dismissive, express genuine excitement. Prolong the discussion by, say, encouraging them to tell others or suggesting a celebratory activity.

“Love goes better, commitment increases, and from the literature, even sex gets better after that.”

Julie Scelfo is a former staff writer for The Times who writes often about human behavior.

Our miserable 21st century

Below is dense – but worth it. It is written by a conservative, but an honest one.

It is the best documentation I have found on the thesis that I wrote about last year: that the 21st century economy is a structural mess, and the mess is a non-partisan one!

My basic contention is really simple:

9/11 diverted us from this issue, and then …
we compounded the diversion with two idiotic wars, and then …
we compounded the diversion further with an idiotic, devastating recession, and then …
we started to stabilize, which is why President Obama goes to the head of the class, and then …
we built a three ring circus, and elected a clown as the ringmaster.

While we watch this three-ring circus in Washington, no one is paying attention to this structural problem in the economy, so we are wasting time when we should be tackling this central issue of our time. It’s a really complicated one, and there are no easy answers (sorry, Trump and Bernie Sanders).

PUT YOUR POLITICAL ARTILLERY DOWN AND READ ON …..

=======BEGIN=============

CREDIT: https://www.commentarymagazine.com/articles/our-miserable-21st-century/

Our Miserable 21st Century
From work to income to health to social mobility, the year 2000 marked the beginning of what has become a distressing era for the United States
NICHOLAS N. EBERSTADT / FEB. 15, 2017

On the morning of November 9, 2016, America’s elite—its talking and deciding classes—woke up to a country they did not know. To most privileged and well-educated Americans, especially those living in its bicoastal bastions, the election of Donald Trump had been a thing almost impossible even to imagine. What sort of country would go and elect someone like Trump as president? Certainly not one they were familiar with, or understood anything about.

Whatever else it may or may not have accomplished, the 2016 election was a sort of shock therapy for Americans living within what Charles Murray famously termed “the bubble” (the protective barrier of prosperity and self-selected associations that increasingly shield our best and brightest from contact with the rest of their society). The very fact of Trump’s election served as a truth broadcast about a reality that could no longer be denied: Things out there in America are a whole lot different from what you thought.

Yes, things are very different indeed these days in the “real America” outside the bubble. In fact, things have been going badly wrong in America since the beginning of the 21st century.

It turns out that the year 2000 marks a grim historical milestone of sorts for our nation. For whatever reasons, the Great American Escalator, which had lifted successive generations of Americans to ever higher standards of living and levels of social well-being, broke down around then—and broke down very badly.

The warning lights have been flashing, and the klaxons sounding, for more than a decade and a half. But our pundits and prognosticators and professors and policymakers, ensconced as they generally are deep within the bubble, were for the most part too distant from the distress of the general population to see or hear it. (So much for the vaunted “information era” and “big-data revolution.”) Now that those signals are no longer possible to ignore, it is high time for experts and intellectuals to reacquaint themselves with the country in which they live and to begin the task of describing what has befallen the country in which we have lived since the dawn of the new century.

II
Consider the condition of the American economy. In some circles people still widely believe, as one recent New York Times business-section article cluelessly insisted before the inauguration, that “Mr. Trump will inherit an economy that is fundamentally solid.” But this is patent nonsense. By now it should be painfully obvious that the U.S. economy has been in the grip of deep dysfunction since the dawn of the new century. And in retrospect, it should also be apparent that America’s strange new economic maladies were almost perfectly designed to set the stage for a populist storm.

Ever since 2000, basic indicators have offered oddly inconsistent readings on America’s economic performance and prospects. It is curious and highly uncharacteristic to find such measures so very far out of alignment with one another. We are witnessing an ominous and growing divergence between three trends that should ordinarily move in tandem: wealth, output, and employment. Depending upon which of these three indicators you choose, America looks to be heading up, down, or more or less nowhere.
From the standpoint of wealth creation, the 21st century is off to a roaring start. By this yardstick, it looks as if Americans have never had it so good and as if the future is full of promise. Between early 2000 and late 2016, the estimated net worth of American households and nonprofit institutions more than doubled, from $44 trillion to $90 trillion. (SEE FIGURE 1.)

Although that wealth is not evenly distributed, it is still a fantastic sum of money—an average of over a million dollars for every notional family of four. This upsurge of wealth took place despite the crash of 2008—indeed, private wealth holdings are over $20 trillion higher now than they were at their pre-crash apogee. The value of American real-estate assets is near or at all-time highs, and America’s businesses appear to be thriving. Even before the “Trump rally” of late 2016 and early 2017, U.S. equities markets were hitting new highs—and since stock prices are strongly shaped by expectations of future profits, investors evidently are counting on the continuation of the current happy days for U.S. asset holders for some time to come.

A rather less cheering picture, though, emerges if we look instead at real trends for the macro-economy. Here, performance since the start of the century might charitably be described as mediocre, and prospects today are no better than guarded.

The recovery from the crash of 2008—which unleashed the worst recession since the Great Depression—has been singularly slow and weak. According to the Bureau of Economic Analysis (BEA), it took nearly four years for America’s gross domestic product (GDP) to re-attain its late 2007 level. As of late 2016, total value added to the U.S. economy was just 12 percent higher than in 2007. (SEE FIGURE 2.) The situation is even more sobering if we consider per capita growth. It took America six and a half years—until mid-2014—to get back to its late 2007 per capita production levels. And in late 2016, per capita output was just 4 percent higher than in late 2007—nine years earlier. By this reckoning, the American economy looks to have suffered something close to a lost decade.

But there was clearly trouble brewing in America’s macro-economy well before the 2008 crash, too. Between late 2000 and late 2007, per capita GDP growth averaged less than 1.5 percent per annum. That compares with the nation’s long-term postwar 1948–2000 per capita growth rate of almost 2.3 percent, which in turn can be compared to the “snap back” tempo of 1.1 percent per annum since per capita GDP bottomed out in 2009. Between 2000 and 2016, per capita growth in America has averaged less than 1 percent a year. To state it plainly: With postwar, pre-21st-century rates for the years 2000–2016, per capita GDP in America would be more than 20 percent higher than it is today.
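That “more than 20 percent” figure is simple compound-growth arithmetic. Here is a back-of-the-envelope check, assuming roughly 2.3 percent versus a bit under 1 percent per year over the 16 years from 2000 to 2016 (the second rate is my approximation of the article’s “less than 1 percent a year”):

```python
# Back-of-the-envelope check of the "more than 20 percent higher" claim,
# using the approximate growth rates quoted in the text.
postwar_rate = 0.023   # ~2.3% per capita growth, 1948-2000 average
actual_rate = 0.009    # assumption: "less than 1 percent a year", 2000-2016
years = 16

gap = ((1 + postwar_rate) / (1 + actual_rate)) ** years - 1
print(f"Per capita GDP would be about {gap:.0%} higher")  # roughly 25%
```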

The reasons for America’s newly fitful and halting macroeconomic performance are still a puzzlement to economists and a subject of considerable contention and debate.1 Economists are generally in consensus, however, in one area: They have begun redefining the growth potential of the U.S. economy downwards. The U.S. Congressional Budget Office (CBO), for example, suggests that the “potential growth” rate for the U.S. economy at full employment of factors of production has now dropped below 1.7 percent a year, implying a sustainable long-term annual per capita economic growth rate for America today of well under 1 percent.

Then there is the employment situation. If 21st-century America’s GDP trends have been disappointing, labor-force trends have been utterly dismal. Work rates have fallen off a cliff since the year 2000 and are at their lowest levels in decades. We can see this by looking at the estimates by the Bureau of Labor Statistics (BLS) for the civilian employment rate, the jobs-to-population ratio for adult civilian men and women. (SEE FIGURE 3.) Between early 2000 and late 2016, America’s overall work rate for Americans age 20 and older underwent a drastic decline. It plunged by almost 5 percentage points (from 64.6 to 59.7). Unless you are a labor economist, you may not appreciate just how severe a falloff in employment such numbers attest to. Postwar America never experienced anything comparable.

From peak to trough, the collapse in work rates for U.S. adults between 2008 and 2010 was roughly twice the amplitude of what had previously been the country’s worst postwar recession, back in the early 1980s. In that previous steep recession, it took America five years to re-attain the adult work rates recorded at the start of 1980. This time, the U.S. job market has as yet, in early 2017, scarcely begun to claw its way back up to the work rates of 2007—much less back to the work rates from early 2000.

As may be seen in Figure 3, U.S. adult work rates never recovered entirely from the recession of 2001—much less the crash of ’08. And the work rates being measured here include people who are engaged in any paid employment—any job, at any wage, for any number of hours of work at all.

On Wall Street and in some parts of Washington these days, one hears that America has gotten back to “near full employment.” For Americans outside the bubble, such talk must seem nonsensical. It is true that the oft-cited “civilian unemployment rate” looked pretty good by the end of the Obama era—in December 2016, it was down to 4.7 percent, about the same as it had been back in 1965, at a time of genuine full employment. The problem here is that the unemployment rate only tracks joblessness for those still in the labor force; it takes no account of workforce dropouts. Alas, the exodus out of the workforce has been the big labor-market story for America’s new century. (At this writing, for every unemployed American man between 25 and 55 years of age, there are another three who are neither working nor looking for work.) Thus the “unemployment rate” increasingly looks like an antique index devised for some earlier and increasingly distant war: the economic equivalent of a musket inventory or a cavalry count.

By the criterion of adult work rates, by contrast, employment conditions in America remain remarkably bleak. From late 2009 through early 2014, the country’s work rates more or less flatlined. So far as can be told, this is the only “recovery” in U.S. economic history in which that basic labor-market indicator almost completely failed to respond.

Since 2014, there has finally been a measure of improvement in the work rate—but it would be unwise to exaggerate the dimensions of that turnaround. As of late 2016, the adult work rate in America was still at its lowest level in more than 30 years. To put things another way: If our nation’s work rate today were back up to its start-of-the-century highs, well over 10 million more Americans would currently have paying jobs.

There is no way to sugarcoat these awful numbers. They are not a statistical artifact that can be explained away by population aging, or by increased educational enrollment for adult students, or by any other genuine change in contemporary American society. The plain fact is that 21st-century America has witnessed a dreadful collapse of work.
For an apples-to-apples look at America’s 21st-century jobs problem, we can focus on the 25–54 population—known to labor economists for self-evident reasons as the “prime working age” group. For this key labor-force cohort, work rates in late 2016 were down almost 4 percentage points from their year-2000 highs. That is a jobs gap approaching 5 million for this group alone.

It is not only that work rates for prime-age males have fallen since the year 2000—they have, but the collapse of work for American men is a tale that goes back at least half a century. (I wrote a short book last year about this sad saga.2) What is perhaps more startling is the unexpected and largely unnoticed fall-off in work rates for prime-age women. In the U.S. and all other Western societies, postwar labor markets underwent an epochal transformation. After World War II, work rates for prime women surged, and continued to rise—until the year 2000. Since then, they too have declined. Current work rates for prime-age women are back to where they were a generation ago, in the late 1980s. The 21st-century U.S. economy has been brutal for male and female laborers alike—and the wreckage in the labor market has been sufficiently powerful to cancel, and even reverse, one of our society’s most distinctive postwar trends: the rise of paid work for women outside the household.

In our era of no more than indifferent economic growth, 21st–century America has somehow managed to produce markedly more wealth for its wealthholders even as it provided markedly less work for its workers. And trends for paid hours of work look even worse than the work rates themselves. Between 2000 and 2015, according to the BEA, total paid hours of work in America increased by just 4 percent (as against a 35 percent increase for 1985–2000, the 15-year period immediately preceding this one). Over the 2000–2015 period, however, the adult civilian population rose by almost 18 percent—meaning that paid hours of work per adult civilian have plummeted by a shocking 12 percent thus far in our new American century.
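The 12 percent figure follows directly from the two numbers just cited (total paid hours up 4 percent, adult civilian population up almost 18 percent); a one-line check:

```python
# Paid hours of work per adult civilian, 2000-2015, from the figures in the text.
hours_growth = 1.04        # total paid hours: +4%
population_growth = 1.18   # adult civilian population: +~18%
per_adult_change = hours_growth / population_growth - 1
print(f"Paid hours per adult: {per_adult_change:.0%}")  # about -12%
```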

This is the terrible contradiction of economic life in what we might call America’s Second Gilded Age (2000—). It is a paradox that may help us understand a number of overarching features of our new century. These include the consistent findings that public trust in almost all U.S. institutions has sharply declined since 2000, even as growing majorities hold that America is “heading in the wrong direction.” It provides an immediate answer to why overwhelming majorities of respondents in public-opinion surveys continue to tell pollsters, year after year, that our ever-richer America is still stuck in the middle of a recession. The mounting economic woes of the “little people” may not have been generally recognized by those inside the bubble, or even by many bubble inhabitants who claimed to be economic specialists—but they proved to be potent fuel for the populist fire that raged through American politics in 2016.

III
So general economic conditions for many ordinary Americans—not least of these, Americans who did not fit within the academy’s designated victim classes—have been rather more insecure than those within the comfort of the bubble understood. But the anxiety, dissatisfaction, anger, and despair that range within our borders today are not wholly a reaction to the way our economy is misfiring. On the nonmaterial front, it is likewise clear that many things in our society are going wrong and yet seem beyond our powers to correct.

Some of these gnawing problems are by no means new: A number of them (such as family breakdown) can be traced back at least to the 1960s, while others are arguably as old as modernity itself (anomie and isolation in big anonymous communities, secularization and the decline of faith). But a number have roared down upon us by surprise since the turn of the century—and others have redoubled with fearsome new intensity since roughly the year 2000.

American health conditions seem to have taken a seriously wrong turn in the new century. It is not just that overall health progress has been shockingly slow, despite the trillions we devote to medical services each year. (Which “Cold War babies” among us would have predicted we’d live to see the day when life expectancy in East Germany was higher than in the United States, as is the case today?)

Alas, the problem is not just slowdowns in health progress—there also appears to have been positive retrogression for broad and heretofore seemingly untroubled segments of the national population. A short but electrifying 2015 paper by Anne Case and Nobel Economics Laureate Angus Deaton talked about a mortality trend that had gone almost unnoticed until then: rising death rates for middle-aged U.S. whites. By Case and Deaton’s reckoning, death rates rose slightly over the 1999–2013 period for all non-Hispanic white men and women 45–54 years of age—but they rose sharply for those with high-school degrees or less, and for this less-educated grouping most of the rise in death rates was accounted for by suicides, chronic liver cirrhosis, and poisonings (including drug overdoses).

Though some researchers, for highly technical reasons, suggested that the mortality spike might not have been quite as sharp as Case and Deaton reckoned, there is little doubt that the spike itself has taken place. Health has been deteriorating for a significant swath of white America in our new century, thanks in large part to drug and alcohol abuse. All this sounds a little too close for comfort to the story of modern Russia, with its devastating vodka- and drug-binging health setbacks. Yes: It can happen here, and it has. Welcome to our new America.

In December 2016, the Centers for Disease Control and Prevention (CDC) reported that for the first time in decades, life expectancy at birth in the United States had dropped very slightly (to 78.8 years in 2015, from 78.9 years in 2014). Though the decline was small, it was statistically meaningful—rising death rates were characteristic of males and females alike; of blacks and whites and Latinos together. (Only black women avoided mortality increases—their death levels were stagnant.) A jump in “unintentional injuries” accounted for much of the overall uptick.
It would be unwarranted to place too much portent in a single year’s mortality changes; slight annual drops in U.S. life expectancy have occasionally been registered in the past, too, followed by continued improvements. But given other developments we are witnessing in our new America, we must wonder whether the 2015 decline in life expectancy is just a blip, or the start of a new trend. We will find out soon enough. It cannot be encouraging, though, that the Human Mortality Database, an international consortium of demographers who vet national data to improve comparability between countries, has suggested that health progress in America essentially ceased in 2012—that the U.S. gained on average only about a single day of life expectancy at birth between 2012 and 2014, before the 2015 turndown.

The opioid epidemic of pain pills and heroin that has been ravaging and shortening lives from coast to coast is a new plague for our new century. The terrifying novelty of this particular drug epidemic, of course, is that it has gone (so to speak) “mainstream” this time, effecting breakout from disadvantaged minority communities to Main Street White America. By 2013, according to a 2015 report by the Drug Enforcement Administration, more Americans died from drug overdoses (largely but not wholly opioid abuse) than from either traffic fatalities or guns. The dimensions of the opioid epidemic in the real America are still not fully appreciated within the bubble, where drug use tends to be more carefully limited and recreational. In Dreamland, his harrowing and magisterial account of modern America’s opioid explosion, the journalist Sam Quinones notes in passing that “in one three-month period” just a few years ago, according to the Ohio Department of Health, “fully 11 percent of all Ohioans were prescribed opiates.” And of course many Americans self-medicate with licit or illicit painkillers without doctors’ orders.

In the fall of 2016, Alan Krueger, former chairman of the President’s Council of Economic Advisers, released a study that further refined the picture of the real existing opioid epidemic in America: According to his work, nearly half of all prime working-age male labor-force dropouts—an army now totaling roughly 7 million men—currently take pain medication on a daily basis.

We already knew from other sources (such as BLS “time use” surveys) that the overwhelming majority of the prime-age men in this un-working army generally don’t “do civil society” (charitable work, religious activities, volunteering), or for that matter much in the way of child care or help for others in the home either, despite the abundance of time on their hands. Their routine, instead, typically centers on watching—watching TV, DVDs, Internet, hand-held devices, etc.—and indeed watching for an average of 2,000 hours a year, as if it were a full-time job. But Krueger’s study adds a poignant and immensely sad detail to this portrait of daily life in 21st-century America: In our mind’s eye we can now picture many millions of un-working men in the prime of life, out of work and not looking for jobs, sitting in front of screens—stoned.

But how did so many millions of un-working men, whose incomes are limited, manage en masse to afford a constant supply of pain medication? Oxycontin is not cheap. As Dreamland carefully explains, one main mechanism today has been the welfare state: more specifically, Medicaid, Uncle Sam’s means-tested health-benefits program. Here is how it works (we are with Quinones in Portsmouth, Ohio):

[The Medicaid card] pays for medicine—whatever pills a doctor deems that the insured patient needs. Among those who receive Medicaid cards are people on state welfare or on a federal disability program known as SSI. . . . If you could get a prescription from a willing doctor—and Portsmouth had plenty of them—Medicaid health-insurance cards paid for that prescription every month. For a three-dollar Medicaid co-pay, therefore, addicts got pills priced at thousands of dollars, with the difference paid for by U.S. and state taxpayers. A user could turn around and sell those pills, obtained for that three-dollar co-pay, for as much as ten thousand dollars on the street.

In 21st-century America, “dependence on government” has thus come to take on an entirely new meaning.

You may now wish to ask: What share of prime-working-age men these days are enrolled in Medicaid? According to the Census Bureau’s SIPP survey (Survey of Income and Program Participation), as of 2013, over one-fifth (21 percent) of all civilian men between 25 and 55 years of age were Medicaid beneficiaries. For prime-age people not in the labor force, the share was over half (53 percent). And for un-working Anglos (non-Hispanic white men not in the labor force) of prime working age, the share enrolled in Medicaid was 48 percent.

By the way: Of the entire un-working prime-age male Anglo population in 2013, nearly three-fifths (57 percent) were reportedly collecting disability benefits from one or more government disability programs. Disability checks and means-tested benefits cannot support a lavish lifestyle. But they can offer a permanent alternative to paid employment, and for growing numbers of American men, they do. The rise of these programs has coincided with the death of work for larger and larger numbers of American men not yet of retirement age. We cannot say that these programs caused the death of work for millions upon millions of younger men: What is incontrovertible, however, is that they have financed it—just as Medicaid inadvertently helped finance America’s immense and increasing appetite for opioids in our new century.

It is intriguing to note that America’s nationwide opioid epidemic has not been accompanied by a nationwide crime wave (excepting of course the apparent explosion of illicit heroin use). Just the opposite: As best can be told, national victimization rates for violent crimes and property crimes have both reportedly dropped by about two-thirds over the past two decades.3 The drop in crime over the past generation has done great things for the general quality of life in much of America. There is one complication from this drama, however, that inhabitants of the bubble may not be aware of, even though it is all too well known to a great many residents of the real America. This is the extraordinary expansion of what some have termed America’s “criminal class”—the population sentenced to prison or convicted of felony offenses—in recent decades. This trend did not begin in our century, but it has taken on breathtaking enormity since the year 2000.

Most well-informed readers know that the U.S. currently has a higher share of its populace in jail or prison than almost any other country on earth, that Barack Obama and others talk of our criminal-justice process as “mass incarceration,” and know that well over 2 million men were in prison or jail in recent years.4 But only a tiny fraction of all living Americans ever convicted of a felony is actually incarcerated at this very moment. Quite the contrary: Maybe 90 percent of all sentenced felons today are out of confinement and living more or less among us. The reason: the basic arithmetic of sentencing and incarceration in America today. Correctional release and sentenced community supervision (probation and parole) guarantee a steady annual “flow” of convicted felons back into society to augment the very considerable “stock” of felons and ex-felons already there. And this “stock” is by now truly enormous.

One forthcoming demographic study by Sarah Shannon and five other researchers estimates that the cohort of current and former felons in America very nearly reached 20 million by the year 2010. If its estimates are roughly accurate, and if America’s felon population has continued to grow at more or less the same tempo traced out for the years leading up to 2010, we would expect it to surpass 23 million persons by the end of 2016 at the latest. Very rough calculations might therefore suggest that at this writing, America’s population of non-institutionalized adults with a felony conviction somewhere in their past has almost certainly broken the 20 million mark by the end of 2016. A little more rough arithmetic suggests that about 17 million men in our general population have a felony conviction somewhere in their CV. That works out to one of every eight adult males in America today.

We have to use rough estimates here, rather than precise official numbers, because the government does not collect any data at all on the size or socioeconomic circumstances of this population of 20 million, and never has. Amazing as this may sound and scandalous though it may be, America has, at least to date, effectively banished this huge group—a group roughly twice the total size of our illegal-immigrant population and an adult population larger than that in any state but California—to a near-total and seemingly unending statistical invisibility. Our ex-cons are, so to speak, statistical outcasts who live in a darkness our polity does not care enough to illuminate—beyond the scope or interest of public policy, unless and until they next run afoul of the law.

Thus we cannot describe with any precision or certainty what has become of those who make up our “criminal class” after their (latest) sentencing or release. In the most stylized terms, however, we might guess that their odds in the real America are not all that favorable. And when we consider some of the other trends we have already mentioned—employment, health, addiction, welfare dependence—we can see the emergence of a malign new nationwide undertow, pulling downward against social mobility.
Social mobility has always been the jewel in the crown of the American mythos and ethos. The idea (not without a measure of truth to back it up) was that people in America are free to achieve according to their merit and their grit—unlike in other places, where they are trapped by barriers of class or the misfortune of misrule. Nearly two decades into our new century, there are unmistakable signs that America’s fabled social mobility is in trouble—perhaps even in serious trouble.

Consider the following facts. First, according to the Census Bureau, geographical mobility in America has been on the decline for three decades, and in 2016 the annual movement of households from one location to the next was reportedly at an all-time (postwar) low. Second, as a study by three Federal Reserve economists and a Notre Dame colleague demonstrated last year, “labor market fluidity”—the churning between jobs that among other things allows people to get ahead—has been on the decline in the American labor market for decades, with no sign as yet of a turnaround. Finally, and not least important, a December 2016 report by the “Equal Opportunity Project,” a team led by the formidable Stanford economist Raj Chetty, calculated that the odds of a 30-year-old’s earning more than his parents at the same age was now just 51 percent: down from 86 percent 40 years ago. Other researchers who have examined the same data argue that the odds may not be quite as low as the Chetty team concludes, but agree that the chances of surpassing one’s parents’ real income have been on the downswing and are probably lower now than ever before in postwar America.

Thus the bittersweet reality of life for real Americans in the early 21st century: Even though the American economy still remains the world’s unrivaled engine of wealth generation, those outside the bubble may have less of a shot at the American Dream than has been the case for decades, maybe generations—possibly even since the Great Depression.

IV
The funny thing is, people inside the bubble are forever talking about “economic inequality,” that wonderful seminar construct, and forever virtue-signaling about how personally opposed they are to it. By contrast, “economic insecurity” is akin to a phrase from an unknown language. But if we were somehow to find a “Google Translate” function for communicating from real America into the bubble, an important message might be conveyed:

The abstraction of “inequality” doesn’t matter a lot to ordinary Americans. The reality of economic insecurity does. The Great American Escalator is broken—and it badly needs to be fixed.

With the election of 2016, Americans within the bubble finally learned that the 21st century has gotten off to a very bad start in America. Welcome to the reality. We have a lot of work to do together to turn this around.

1 Some economists suggest the reason has to do with the unusual nature of the Great Recession: that downturns born of major financial crises intrinsically require longer adjustment and correction periods than the more familiar, ordinary business-cycle downturn. Others have proposed theories to explain why the U.S. economy may instead have downshifted to a more tepid tempo in the Bush-Obama era. One such theory holds that the pace of productivity is dropping because the scale of recent technological innovation is unrepeatable. There is also a “secular stagnation” hypothesis, surmising we have entered into an age of very low “natural real interest rates” consonant with significantly reduced demand for investment. What is incontestable is that the 10-year moving average for per capita economic growth is lower for America today than at any time since the Korean War—and that the slowdown in growth commenced in the decade before the 2008 crash. (It is also possible that the anemic status of the U.S. macro-economy is being exaggerated by measurement issues—productivity improvements from information technology, for example, have been oddly elusive in our officially reported national output—but few today would suggest that such concealed gains would totally transform our view of the real economy’s true performance.)
2 Nicholas Eberstadt, Men Without Work: America’s Invisible Crisis (Templeton Press, 2016)
3 This is not to ignore the gruesome exceptions—places like Chicago and Baltimore—or to neglect the risk that crime may make a more general comeback: It is simply to acknowledge one of the bright trends for America in the new century.
4 In 2013, roughly 2.3 million men were behind bars according to the Bureau of Justice Statistics.

One could be forgiven for wondering what Kellyanne Conway, a close adviser to President Trump, was thinking recently when she turned the White House briefing room into the set of the Home Shopping Network. “Go buy Ivanka’s stuff!” she told Fox News viewers during an interview, referring to the clothing and accessories line of the president’s daughter. It’s not clear if her cheerleading led to any spike in sales, but it did lead to calls for an investigation into whether she violated federal ethics rules, and prompted the White House to later state that it had “counseled” Conway about her behavior.

To understand what provoked Conway’s on-air marketing campaign, look no further than the ongoing boycotts targeting all things Trump. This latest manifestation of the passion to impose financial harm to make a political point has taken things in a new and odd direction. Once, boycotts were serious things, requiring serious commitment and real sacrifice. There were boycotts by aggrieved workers, such as the United Farm Workers, against their employers; boycotts by civil-rights activists and religious groups; and boycotts of goods produced by nations like apartheid-era South Africa. Many of these efforts, sustained over years by committed cadres of activists, successfully pressured businesses and governments to change.

Since Trump’s election, the boycott has become less an expression of long-term moral and practical opposition and more an expression of the left’s collective id. As Harvard Business School professor Michael Norton told the Atlantic recently, “Increasingly, the way we express our political opinions is through buying or not buying instead of voting or not voting.” And evidently the way some people express political opinions when someone they don’t like is elected is to launch an endless stream of virtue-signaling boycotts. Democratic politicians ostentatiously boycotted Trump’s inauguration. New Balance sneaker owners vowed to boycott the company and filmed themselves torching their shoes after a company spokesman tweeted praise for Trump. Trump detractors called for a boycott of L.L. Bean after one of its board members was discovered to have (gasp!) given a personal contribution to a pro-Trump PAC.

By their nature, boycotts are a form of proxy warfare, tools wielded by consumers who want to send a message to a corporation or organization about their displeasure with specific practices.

Trump-era boycotts, however, merely seem to be a way to channel an overwhelming yet vague feeling of political frustration. Take the “Grab Your Wallet” campaign, whose mission, described in humblebragging detail on its website, is as follows: “Since its first humble incarnation as a screenshot on October 11, the #GrabYourWallet boycott list has grown as a central resource for understanding how our own consumer purchases have inadvertently supported the political rise of the Trump family.”

So this boycott isn’t against a specific business or industry; it’s a protest against one man and his children, with trickle-down effects for anyone who does business with them. Grab Your Wallet doesn’t just boycott Trump-branded hotels and golf courses; the group targets businesses such as Bed Bath & Beyond, for example, because it carries Ivanka Trump diaper bags. Even QVC and the Carnival Cruise corporation are targeted for boycott because they advertise on Celebrity Apprentice, which supposedly “further enriches Trump.”

Grab Your Wallet has received support from “notable figures” such as “Don Cheadle, Greg Louganis, Lucy Lawless, Roseanne Cash, Neko Case, Joyce Carol Oates, Robert Reich, Pam Grier, and Ben Cohen (of Ben & Jerry’s),” according to the group’s website. This rogues’ gallery of celebrity boycotters has been joined by enthusiastic hashtag activists on Twitter who post remarks such as, “Perhaps fed govt will buy all Ivanka merch & force prisoners & detainees in coming internment camps 2 wear it” and “Forced to #DressLikeaWoman by a sexist boss? #GrabYourWallet and buy a nice FU pantsuit at Trump-free shops.” There’s even a website, dontpaytrump.com, which offers a free plug-in extension for your Web browser. It promises a “simple Trump boycott extension that makes it easy to be a conscious consumer and keep your money out of Trump’s tiny hands.”

Many of the companies targeted for boycott—Bed Bath & Beyond, QVC, TJ Maxx, Amazon—are the kind of retailers that carry moderately priced merchandise that working- and middle-class families can afford. But the list of Grab Your Wallet–approved alternatives for shopping consists of places like Bergdorf’s and Barney’s. These are hardly accessible choices for the TJ Maxx customer. Indeed, there is more than a whiff of quasi-racist elitism in the self-congratulatory tweets posted by Grab Your Wallet supporters, such as this response to news that Nordstrom is no longer planning to carry Ivanka’s shoe line: “Soon we’ll see Ivanka shoes at Dollar Store, next to Jalapeno Windex and off-brand batteries.”

If Grab Your Wallet is really about “flexing of consumer power in favor of a more respectful, inclusive society,” then it has some work to do.

And then there are the conveniently malleable ethics of the anti-Trump boycott brigade. A small number of affordable retailers like Old Navy made the Grab Your Wallet cut for “approved” alternatives for shopping. But just a few years ago, a progressive website described in detail the “living hell of a Bangladeshi sweatshop” that manufactures Old Navy clothing. Evidently progressives can now sleep peacefully at night knowing large corporations like Old Navy profit from young Bangladeshis making 20 cents an hour and working 17-hour days churning out cheap cargo pants—as long as they don’t bear a Trump label.

In truth, it matters little if Ivanka’s fashion business goes bust. It was always just a branding game anyway. The world will go on in the absence of Ivanka-named suede ankle booties. And in some sense the rash of anti-Trump boycotts is just what Trump, who frequently calls for boycotts of media outlets such as Rolling Stone and retailers like Macy’s, deserves.

But the left’s boycott braggadocio might prove short-lived. Nordstrom denied that it dropped Ivanka’s line of apparel and shoes because of pressure from the Grab Your Wallet campaign; it blamed lagging sales. And the boycotters’ tone of moral superiority—like the ridiculous posturing of the anti-Trump left’s self-flattering designation, “the resistance”—won’t endear them to the Trump voters they must convert if they hope to gain ground in the midterm elections.

As for inclusiveness, as one contributor to Psychology Today noted, the typical boycotter, “especially [in] consumer and ecological boycotts,” is a young, well-educated, politically left woman, which somewhat undermines the idea of boycotts as a weapon of the weak and oppressed.

Self-indulgent protests and angry boycotts are no doubt cathartic for their participants (a 2016 study in the Journal of Consumer Affairs cited psychological research that found “by venting their frustrations, consumers can diminish their negative psychological states and, as a result, experience relief”). But such protests are not always ultimately catalytic. As researchers noted in a study published recently at Social Science Research Network, protesters face what they call “the activists’ dilemma,” which occurs when “tactics that raise awareness also tend to reduce popular support.” As the study found, “while extreme tactics may succeed in attracting attention, they typically reduce popular public support for the movement by eroding bystanders’ identification with the movement, ultimately deterring bystanders from supporting the cause or becoming activists themselves.”

The progressive left should be thoughtful about the reality of such protest fatigue. Writing in the Guardian, Jamie Peck recently enthused: “Of course, boycotts alone will not stop Trumpism. Effective resistance to authoritarianism requires more disruptive actions than not buying certain products . . . . But if there’s anything the past few weeks have taught us, it’s that resistance must take as many forms as possible, and it’s possible to call attention to the ravages of neoliberalism while simultaneously allying with any and all takers against the immediate dangers posed by our impetuous orange president.”

Boycotts are supposed to be about accountability. But accountability is a two-way street. The motives and tactics of the boycotters themselves are of the utmost importance. In his book about consumer boycotts, scholar Monroe Friedman advises that successful ones depend on a “rationale” that is “simple, straightforward, and appear[s] legitimate.” Whatever Trump’s flaws (and they are legion), by “going low” with scattershot boycotts, the left undermines its own legitimacy—and its claims to the moral high ground of “resistance” in the process.

========END===============

Mr President

Credit: Washington Post. Article authored by David Maraniss, author of ‘Barack Obama: The Story’.

His journey to become a leader of consequence
How Barack Obama’s understanding of his place in the world, as a mixed-race American with a multicultural upbringing, affected his presidency.
By David Maraniss, author of ‘Barack Obama: The Story’  

When Barack Obama worked as a community organizer amid the bleak industrial decay of Chicago’s far South Side during the 1980s, he tried to follow a mantra of that profession: Dream of the world as you wish it to be, but deal with the world as it is.

The notion of an Obama presidency was beyond imagining in the world as it was then. But, three decades later, it has happened, and a variation of that saying seems appropriate to the moment: Stop comparing Obama with the president you thought he might be, and deal with the one he has been.

Seven-plus years into his White House tenure, Obama is working through the final months before his presidency slips from present to past, from daily headlines to history books. That will happen at noontime on the 20th of January next year, but the talk of his legacy began much earlier and has intensified as he rounds the final corner of his improbable political career.

Of the many ways of looking at Obama’s presidency, the first is to place it in the continuum of his life. The past is prologue for all presidents to one degree or another, even as the job tests them in ways that nothing before could. For Obama, the line connecting his life’s story with the reality of what he has been as the 44th president is consistently evident.

The first connection involves Obama’s particular form of ambition. His political design arrived relatively late. He was no grade school or high school or college leader. Unlike Bill Clinton, he did not have a mother telling everyone that her first-grader would grow up to be president. When Obama was a toddler in Honolulu, his white grandfather boasted that his grandson was a Hawaiian prince, but that was more to explain his skin color than to promote family aspirations.

But once ambition took hold of Obama, it was with an intense sense of mission, sometimes tempered by self-doubt but more often self-assured and sometimes bordering on messianic. At the end of his sophomore year at Occidental College, he started to talk about wanting to change the world. At the end of his time as a community organizer in Chicago, he started to talk about how the only way to change the world was through electoral power. When he was defeated for the one and only time in his career in a race for Congress in 2000, he questioned whether he indeed had been chosen for greatness, as he had thought he was, but soon concluded that he needed another test and began preparing to run for the Senate seat from Illinois that he won in 2004.

That is the sensibility he took into the White House. It was not a careless slip when he said during the 2008 campaign that he wanted to emulate Ronald Reagan and change “the trajectory of America” in ways that recent presidents, including Clinton, had been unable to do. Obama did not just want to be president. His mission was to leave a legacy as a president of consequence, the liberal counter to Reagan. To gauge himself against the highest-ranked presidents, and to learn from their legacies, Obama held private White House sessions with an elite group of American historians.

It is now becoming increasingly possible to argue that he has neared his goal. His decisions were ineffective in stemming the human wave of disaster in Syria, and he has thus far failed to close the detention camp at Guantanamo Bay, Cuba, and to make anything more than marginal changes on two domestic issues of importance to him, immigration and gun control. But from the Affordable Care Act to the legalization of same-sex marriage and the nuclear deal with Iran, from the stimulus package that started the slow recovery from the 2008 recession to the Detroit auto industry bailout, from global warming and renewable energy initiatives to the veto of the Keystone pipeline, from the withdrawal of combat troops from Iraq and Afghanistan and the killing of Osama bin Laden to the opening of relations with Cuba, the liberal achievements have added up, however one judges the policies.

This was done at the same time that he faced criticism from various quarters for seeming aloof, if not arrogant, for not being more effective in his dealings with members of Congress of either party, for not being angry enough when some thought he should be, or for not being an alpha male leader.

A promise of unity
His accomplishments were bracketed by two acts of negation by opponents seeking to minimize his authority: first a vow by Republican leaders to do what it took to render him a one-term president; and then, with 11 months left in his second term, a pledge to deny him the appointment of a nominee for the crucial Supreme Court seat vacated by the death of Antonin Scalia, a conservative icon. Obama’s White House years also saw an effort to delegitimize him personally by shrouding his story in fallacious myth — questioning whether he was a foreigner in our midst, secretly born in Kenya, despite records to the contrary, and insinuating that he was a closet Muslim, again defying established fact. Add to that a raucous new techno-political world of unending instant judgments and a decades-long erosion of economic stability for the working class and middle class that was making an increasingly large segment of the population, of various ideologies, feel left behind, uncertain, angry and divided, and the totality was a national condition that was anything but conducive to the promise of unity that brought Obama into the White House.

To the extent that his campaign rhetoric raised expectations that he could bridge the nation’s growing political divide, Obama owns responsibility for the way his presidency was perceived. His political rise, starting in 2004, when his keynote convention speech propelled him into the national consciousness, was based on his singular ability to tie his personal story as the son of a father from Kenya and mother from small-town Kansas to some transcendent common national purpose. Unity out of diversity, the ideal of the American mosaic that was constantly being tested, generation after generation, part reality, part myth. Even though Obama romanticized his parents’ relationship, which was brief and dysfunctional, his story of commonality was more than a campaign construct; it was deeply rooted in his sense of self.

As a young man, Obama at times felt apart from his high school and college friends of various races and perspectives as he watched them settle into defined niches in culture, outlook and occupation. He told one friend that he felt “large dollops of envy for them” but believed that because of his own life’s story, his mixed-race heritage, his experiences in multicultural Hawaii and exotic Indonesia, his childhood without “a structure or tradition to support me,” he had no choice but to seek the largest possible embrace of the world. “The only way to assuage my feelings of isolation are to absorb all the traditions [and all the] classes, make them mine, me theirs,” he wrote. He carried that notion with him through his political career in Illinois and all the way to the White House, where it was challenged in ways he had never confronted before.

With most politicians, their strengths are their weaknesses, and their weaknesses are their strengths.

With Obama, one way that was apparent was in his coolness. At various times in his presidency, there were calls from all sides for him to be hotter. He was criticized by liberals for not expressing more anger at Republicans who were stifling his agenda, or at Wall Street financiers and mortgage lenders whose wheeler-dealing helped drag the country into recession. He was criticized by conservatives for not being more vociferous in denouncing Islamic terrorists, or belligerent in standing up to Russian President Vladimir Putin.

His coolness as president can best be understood by the sociological forces that shaped him before he reached the White House. There is a saying among native Hawaiians that goes: Cool head, main thing. This was the culture in which Obama reached adolescence on the island of Oahu, and before that during the four years he lived with his mother in Jakarta. Never show too much. Never rush into things. Maintain a personal reserve and live by your own sense of time. This sensibility was heightened when he developed an affection for jazz, the coolest mode of music, as part of his self-tutorial on black society that he undertook while living with white grandparents in a place where there were very few African Americans. As he entered the political world, the predominantly white society made it clear to him the dangers of coming across as an angry black man. As a community organizer, he refined the skill of leading without being overt about it, making the dispossessed citizens he was organizing feel their own sense of empowerment. As a constitutional law professor at the University of Chicago, he developed an affinity for rational thought.

Differing approaches
All of this created a president who was comfortable coolly working in his own way at his own speed, waiting for events to turn his way.

Was he too cool in his dealings with other politicians? One way to consider that question is by comparing him with Clinton. Both came out of geographic isolation, Hawaii and southwest Arkansas, far from the center of power, in states that had never before offered up presidents. Both came out of troubled families defined by fatherlessness and alcoholism. Both at various times felt a sense of abandonment. Obama had the additional quandary of trying to figure out his racial identity. And the two dealt with their largely similar situations in diametrically different ways.

Rather than deal with the problems and contradictions of his life head-on, Clinton became skilled at moving around and past them. He had an insatiable need to be around people for affirmation. As a teenager, he would ask a friend to come over to the house just to watch him do a crossword puzzle. His life became all about survival and reading the room. He kept shoeboxes full of file cards of the names and phone numbers of people who might help him someday. His nature was to always move forward. He would wake up each day and forgive himself and keep going. His motto became “What’s next?” He refined these skills to become a political force of nature, a master of transactional politics. This got him to the White House, and into trouble in the White House, and out of trouble again, in a cycle of loss and recovery.

Obama spent much of his young adulthood, from when he left Hawaii for the mainland and college in 1979 to the time he left Chicago for Harvard Law School nearly a decade later, trying to figure himself out, examining the racial, cultural, personal, sociological and political contradictions that life threw at him. He internalized everything, first withdrawing from the world during a period in New York City and then slowly reentering it as he was finding his identity as a community organizer in Chicago.

Rather than plow forward relentlessly, like Clinton, Obama slowed down. He woke up each day and wrote in his journal, analyzing the world and his place in it. He emerged from that process with a sense of self that helped him rise in politics all the way to the White House, then led him into difficulties in the White House, or at least criticism for the way he operated. His sensibility was that if he could resolve the contradictions of his own life, why couldn’t the rest of the country resolve the larger contradictions of American life? Why couldn’t Congress? The answer from Republicans was that his actions were different from his words, and that while he talked the language of compromise, he did not often act on it. He had built an impressive organization to get elected, but it relied more on the idea of Obama than on a long history of personal contacts. He did not have a figurative equivalent of Clinton’s shoebox full of allies, and he did not share his Democratic predecessor’s profound need to be around people. He was not as interested in the personal side of politics that was so second nature to presidents such as Clinton and Lyndon Johnson.

Politicians of both parties complained that Obama seemed distant. He was not calling them often enough. When he could be schmoozing with members of Congress, cajoling them and making them feel important, he was often back in the residence having dinner with his wife, Michelle, and their two daughters, or out golfing with the same tight group of high school chums and White House subordinates.

Here again, some history provided context. Much of Obama’s early life had been a long search for home, which he finally found with Michelle and their girls, Malia and Sasha. There were times when Obama was an Illinois state senator and living for a few months at a time in a hotel room in Springfield, when Michelle made clear her unhappiness with his political obsession, and the sense of home that he had strived so hard to find was jeopardized. Once he reached the White House, with all the demands on his time, if there was a choice, he was more inclined to be with his family than hang out with politicians. A weakness in one sense, a strength in another, enriching the image of the first-ever black first family.

A complex question
The fact that Obama was the first black president, and that his family was the first African American first family, provides him with an uncontested hold on history. Not long into his presidency, even to mention that seemed beside the point, if not tedious, but it was a prejudice-shattering event when he was elected in 2008, and its magnitude is not likely to diminish. Even as some of the political rhetoric this year longs for a past America, the odds are greater that as the century progresses, no matter what happens in the 2016 election, Obama will be seen as the pioneer who broke an archaic and distant 220-year period of white male dominance.

But what kind of black president has he been?

His life illuminates the complexity of that question. His white mother, who conscientiously taught him black history at an early age but died nearly a decade before her son reached the White House, would have been proud that he broke the racial barrier. But she also inculcated him in the humanist idea of the universality of humankind, a philosophy that her life exemplified as she married a Kenyan and later an Indonesian and worked to help empower women in many of the poorest countries in the world. Obama eventually found his own comfort as a black man with a black family, but his public persona, and his political persona, was more like his mother’s.

At various times during his career, Obama faced criticism from some African Americans that, because he did not grow up in a minority community and received an Ivy League education, he was not “black enough.” That argument was one of the reasons he lost that 2000 congressional race to Bobby L. Rush, a former Black Panther, but fortunes shift and attitudes along with them; there was no more poignant and revealing scene at Obama’s final State of the Union address to Congress than Rep. Rush waiting anxiously at the edge of the aisle and reaching out in the hope of recognition from the passing president.

As president, Obama rarely broke character to show what was inside. He was reluctant to bring race into the political discussion, and never publicly stated what many of his supporters believed: that some of the antagonism toward his presidency was rooted in racism. He wished to be judged by the content of his presidency rather than the color of his skin. One exception came after February 2012, when Trayvon Martin, an unarmed black teenager, was shot and killed in Florida by a gun-toting neighborhood zealot. In July 2013, commenting on the verdict in the case, Obama talked about the common experience of African American men being followed when shopping in a department store, or being passed up by a taxi on the street, or a car door lock clicking as they walked by — all of which he said had happened to him. He said Trayvon Martin could have been his son, and then added, “another way of saying that is: Trayvon Martin could have been me 35 years ago.”

Nearly two years later, in June 2015, Obama hit what might be considered the most powerful emotional note of his presidency, a legacy moment, by finding a universal message in black spiritual expression. Time after time during his two terms, he had performed the difficult task of trying to console the country after another mass shooting, choking up with tears whenever he talked about little children being the victims, as they had been in 2012 at Sandy Hook Elementary School in Newtown, Conn. Now he was delivering the heart-rending message one more time, nearing the end of a eulogy in Charleston, S.C., for the Rev. Clementa Pinckney, one of nine African Americans killed by a young white gunman during a prayer service at Emanuel African Methodist Episcopal Church. It is unlikely that any other president could have done what Barack Obama did that day, when all the separate parts of his life story came together with a national longing for reconciliation as he started to sing, “Amazing grace, how sweet the sound, that saved a wretch like me. . . .”

NYT on Google Brain, Google Translate, and AI Progress

Amazing progress!

New York Times Article on Google and AI Progress

The Great A.I. Awakening
How Google used artificial intelligence to transform Google Translate, one of its more popular services — and how machine learning is poised to reinvent computing itself.
BY GIDEON LEWIS-KRAUS | DEC. 14, 2016

Referenced Here: Google Translate, Google Brain, Jun Rekimoto, Sundar Pichai, Jeff Dean, Andrew Ng, Greg Corrado, Baidu, Project Marvin, Geoffrey Hinton, Max Planck Institute for Biological Cybernetics, Quoc Le, MacDuff Hughes, Apple’s Siri, Facebook’s M, Amazon’s Echo, Alan Turing, GO (the Board Game), convolutional neural network of Yann LeCun, supervised learning, machine learning, deep learning, Mike Schuster, T.P.U.s

Prologue: You Are What You Have Read
Late one Friday night in early November, Jun Rekimoto, a distinguished professor of human-computer interaction at the University of Tokyo, was online preparing for a lecture when he began to notice some peculiar posts rolling in on social media. Apparently Google Translate, the company’s popular machine-translation service, had suddenly and almost immeasurably improved. Rekimoto visited Translate himself and began to experiment with it. He was astonished. He had to go to sleep, but Translate refused to relax its grip on his imagination.
Rekimoto wrote up his initial findings in a blog post. First, he compared a few sentences from two published versions of “The Great Gatsby,” Takashi Nozaki’s 1957 translation and Haruki Murakami’s more recent iteration, with what this new Google Translate was able to produce. Murakami’s translation is written “in very polished Japanese,” Rekimoto explained to me later via email, but the prose is distinctively “Murakami-style.” By contrast, Google’s translation — despite some “small unnaturalness” — reads to him as “more transparent.”
The second half of Rekimoto’s post examined the service in the other direction, from Japanese to English. He dashed off his own Japanese interpretation of the opening to Hemingway’s “The Snows of Kilimanjaro,” then ran that passage back through Google into English. He published this version alongside Hemingway’s original, and proceeded to invite his readers to guess which was the work of a machine.
NO. 1:
Kilimanjaro is a snow-covered mountain 19,710 feet high, and is said to be the highest mountain in Africa. Its western summit is called the Masai “Ngaje Ngai,” the House of God. Close to the western summit there is the dried and frozen carcass of a leopard. No one has explained what the leopard was seeking at that altitude.
NO. 2:
Kilimanjaro is a mountain of 19,710 feet covered with snow and is said to be the highest mountain in Africa. The summit of the west is called “Ngaje Ngai” in Masai, the house of God. Near the top of the west there is a dry and frozen dead body of leopard. No one has ever explained what leopard wanted at that altitude.
Even to a native English speaker, the missing article on the leopard is the only real giveaway that No. 2 was the output of an automaton. Their closeness was a source of wonder to Rekimoto, who was well acquainted with the capabilities of the previous service. Only 24 hours earlier, Google would have translated the same Japanese passage as follows:
Kilimanjaro is 19,710 feet of the mountain covered with snow, and it is said that the highest mountain in Africa. Top of the west, “Ngaje Ngai” in the Maasai language, has been referred to as the house of God. The top close to the west, there is a dry, frozen carcass of a leopard. Whether the leopard had what the demand at that altitude, there is no that nobody explained.
Rekimoto promoted his discovery to his hundred thousand or so followers on Twitter, and over the next few hours thousands of people broadcast their own experiments with the machine-translation service. Some were successful, others meant mostly for comic effect. As dawn broke over Tokyo, Google Translate was the No. 1 trend on Japanese Twitter, just above some cult anime series and the long-awaited new single from a girl-idol supergroup. Everybody wondered: How had Google Translate become so uncannily artful?

Four days later, a couple of hundred journalists, entrepreneurs and advertisers from all over the world gathered in Google’s London engineering office for a special announcement. Guests were greeted with Translate-branded fortune cookies. Their paper slips had a foreign phrase on one side — mine was in Norwegian — and on the other, an invitation to download the Translate app. Tables were set with trays of doughnuts and smoothies, each labeled with a placard that advertised its flavor in German (zitrone), Portuguese (baunilha) or Spanish (manzana). After a while, everyone was ushered into a plush, dark theater.

Sadiq Khan, the mayor of London, stood to make a few opening remarks. A friend, he began, had recently told him he reminded him of Google. “Why, because I know all the answers?” the mayor asked. “No,” the friend replied, “because you’re always trying to finish my sentences.” The crowd tittered politely. Khan concluded by introducing Google’s chief executive, Sundar Pichai, who took the stage.
Pichai was in London in part to inaugurate Google’s new building there, the cornerstone of a new “knowledge quarter” under construction at King’s Cross, and in part to unveil the completion of the initial phase of a company transformation he announced last year. The Google of the future, Pichai had said on several occasions, was going to be “A.I. first.” What that meant in theory was complicated and invited much speculation. What it meant in practice, with any luck, was that soon the company’s products would no longer represent the fruits of traditional computer programming, exactly, but “machine learning.”
A rarefied department within the company, Google Brain, was founded five years ago on this very principle: that artificial “neural networks” that acquaint themselves with the world via trial and error, as toddlers do, might in turn develop something like human flexibility. This notion is not new — a version of it dates to the earliest stages of modern computing, in the 1940s — but for much of its history most computer scientists saw it as vaguely disreputable, even mystical. Since 2011, though, Google Brain has demonstrated that this approach to artificial intelligence could solve many problems that confounded decades of conventional efforts. Speech recognition didn’t work very well until Brain undertook an effort to revamp it; the application of machine learning made its performance on Google’s mobile platform, Android, almost as good as human transcription. The same was true of image recognition. Less than a year ago, Brain for the first time commenced with the gut renovation of an entire consumer product, and its momentous results were being celebrated tonight.
Translate made its debut in 2006 and since then has become one of Google’s most reliable and popular assets; it serves more than 500 million monthly users in need of 140 billion words per day in a different language. It exists not only as its own stand-alone app but also as an integrated feature within Gmail, Chrome and many other Google offerings, where we take it as a push-button given — a frictionless, natural part of our digital commerce. It was only with the refugee crisis, Pichai explained from the lectern, that the company came to reckon with Translate’s geopolitical importance: On the screen behind him appeared a graph whose steep curve indicated a recent fivefold increase in translations between Arabic and German. (It was also close to Pichai’s own heart. He grew up in India, a land divided by dozens of languages.) The team had been steadily adding new languages and features, but gains in quality over the last four years had slowed considerably.
Until today. As of the previous weekend, Translate had been converted to an A.I.-based system for much of its traffic, not just in the United States but in Europe and Asia as well: The rollout included translations between English and Spanish, French, Portuguese, German, Chinese, Japanese, Korean and Turkish. The rest of Translate’s hundred-odd languages were to come, with the aim of eight per month, by the end of next year. The new incarnation, to the pleasant surprise of Google’s own engineers, had been completed in only nine months. The A.I. system had demonstrated overnight improvements roughly equal to the total gains the old one had accrued over its entire lifetime.
Pichai has an affection for the obscure literary reference; he told me a month earlier, in his office in Mountain View, Calif., that Translate in part exists because not everyone can be like the physicist Robert Oppenheimer, who learned Sanskrit to read the Bhagavad Gita in the original. In London, the slide on the monitors behind him flicked to a Borges quote: “Uno no es lo que es por lo que escribe, sino por lo que ha leído.”
Grinning, Pichai read aloud an awkward English version of the sentence that had been rendered by the old Translate system: “One is not what is for what he writes, but for what he has read.”
To the right of that was a new A.I.-rendered version: “You are not what you write, but what you have read.”
It was a fitting remark: The new Google Translate was run on the first machines that had, in a sense, ever learned to read anything at all.
Google’s decision to reorganize itself around A.I. was the first major manifestation of what has become an industrywide machine-learning delirium. Over the past four years, six companies in particular — Google, Facebook, Apple, Amazon, Microsoft and the Chinese firm Baidu — have touched off an arms race for A.I. talent, particularly within universities. Corporate promises of resources and freedom have thinned out top academic departments. It has become widely known in Silicon Valley that Mark Zuckerberg, chief executive of Facebook, personally oversees, with phone calls and video-chat blandishments, his company’s overtures to the most desirable graduate students. Starting salaries of seven figures are not unheard-of. Attendance at the field’s most important academic conference has nearly quadrupled. What is at stake is not just one more piecemeal innovation but control over what very well could represent an entirely new computational platform: pervasive, ambient artificial intelligence.

The phrase “artificial intelligence” is invoked as if its meaning were self-evident, but it has always been a source of confusion and controversy. Imagine if you went back to the 1970s, stopped someone on the street, pulled out a smartphone and showed her Google Maps. Once you managed to convince her you weren’t some oddly dressed wizard, and that what you withdrew from your pocket wasn’t a black-arts amulet but merely a tiny computer more powerful than the one aboard an Apollo spacecraft, Google Maps would almost certainly seem to her a persuasive example of “artificial intelligence.” In a very real sense, it is. It can do things any map-literate human can manage, like get you from your hotel to the airport — though it can do so much more quickly and reliably. It can also do things that humans simply and obviously cannot: It can evaluate the traffic, plan the best route and reorient itself when you take the wrong exit.
Practically nobody today, however, would bestow upon Google Maps the honorific “A.I.,” so sentimental and sparing are we in our use of the word “intelligence.” Artificial intelligence, we believe, must be something that distinguishes HAL from whatever it is a loom or wheelbarrow can do. The minute we can automate a task, we downgrade the relevant skill involved to one of mere mechanism. Today Google Maps seems, in the pejorative sense of the term, robotic: It simply accepts an explicit demand (the need to get from one place to another) and tries to satisfy that demand as efficiently as possible. The goal posts for “artificial intelligence” are thus constantly receding.
When he has an opportunity to make careful distinctions, Pichai differentiates between the current applications of A.I. and the ultimate goal of “artificial general intelligence.” Artificial general intelligence will not involve dutiful adherence to explicit instructions, but instead will demonstrate a facility with the implicit, the interpretive. It will be a general tool, designed for general purposes in a general context. Pichai believes his company’s future depends on something like this. Imagine if you could tell Google Maps, “I’d like to go to the airport, but I need to stop off on the way to buy a present for my nephew.” A more generally intelligent version of that service — a ubiquitous assistant, of the sort that Scarlett Johansson memorably disembodied three years ago in the Spike Jonze film “Her”— would know all sorts of things that, say, a close friend or an earnest intern might know: your nephew’s age, and how much you ordinarily like to spend on gifts for children, and where to find an open store. But a truly intelligent Maps could also conceivably know all sorts of things a close friend wouldn’t, like what has only recently come into fashion among preschoolers in your nephew’s school — or more important, what its users actually want. If an intelligent machine were able to discern some intricate if murky regularity in data about what we have done in the past, it might be able to extrapolate about our subsequent desires, even if we don’t entirely know them ourselves.
The new wave of A.I.-enhanced assistants — Apple’s Siri, Facebook’s M, Amazon’s Echo — are all creatures of machine learning, built with similar intentions. The corporate dreams for machine learning, however, aren’t exhausted by the goal of consumer clairvoyance. A medical-imaging subsidiary of Samsung announced this year that its new ultrasound devices could detect breast cancer. Management consultants are falling all over themselves to prep executives for the widening industrial applications of computers that program themselves. DeepMind, a 2014 Google acquisition, defeated the reigning human grandmaster of the ancient board game Go, despite predictions that such an achievement would take another 10 years.
In a famous 1950 essay, Alan Turing proposed a test for an artificial general intelligence: a computer that could, over the course of five minutes of text exchange, successfully deceive a real human interlocutor. Once a machine can translate fluently between two natural languages, the foundation has been laid for a machine that might one day “understand” human language well enough to engage in plausible conversation. Google Brain’s members, who pushed and helped oversee the Translate project, believe that such a machine would be on its way to serving as a generally intelligent all-encompassing personal digital assistant.

What follows here is the story of how a team of Google researchers and engineers — at first one or two, then three or four, and finally more than a hundred — made considerable progress in that direction. It’s an uncommon story in many ways, not least of all because it defies many of the Silicon Valley stereotypes we’ve grown accustomed to. It does not feature people who think that everything will be unrecognizably different tomorrow or the next day because of some restless tinkerer in his garage. It is neither a story about people who think technology will solve all our problems nor one about people who think technology is ineluctably bound to create apocalyptic new ones. It is not about disruption, at least not in the way that word tends to be used.
It is, in fact, three overlapping stories that converge in Google Translate’s successful metamorphosis to A.I. — a technical story, an institutional story and a story about the evolution of ideas. The technical story is about one team on one product at one company, and the process by which they refined, tested and introduced a brand-new version of an old product in only about a quarter of the time anyone, themselves included, might reasonably have expected. The institutional story is about the employees of a small but influential artificial-intelligence group within that company, and the process by which their intuitive faith in some old, unproven and broadly unpalatable notions about computing upended every other company within a large radius. The story of ideas is about the cognitive scientists, psychologists and wayward engineers who long toiled in obscurity, and the process by which their ostensibly irrational convictions ultimately inspired a paradigm shift in our understanding not only of technology but also, in theory, of consciousness itself.

The first story, the story of Google Translate, takes place in Mountain View over nine months, and it explains the transformation of machine translation. The second story, the story of Google Brain and its many competitors, takes place in Silicon Valley over five years, and it explains the transformation of that entire community. The third story, the story of deep learning, takes place in a variety of far-flung laboratories — in Scotland, Switzerland, Japan and most of all Canada — over seven decades, and it might very well contribute to the revision of our self-image as first and foremost beings who think.
All three are stories about artificial intelligence. The seven-decade story is about what we might conceivably expect or want from it. The five-year story is about what it might do in the near future. The nine-month story is about what it can do right this minute. These three stories are themselves just proof of concept. All of this is only the beginning.

Part I: Learning Machine
1. The Birth of Brain
Jeff Dean, though his title is senior fellow, is the de facto head of Google Brain. Dean is a sinewy, energy-efficient man with a long, narrow face, deep-set eyes and an earnest, soapbox-derby sort of enthusiasm. The son of a medical anthropologist and a public-health epidemiologist, Dean grew up all over the world — Minnesota, Hawaii, Boston, Arkansas, Geneva, Uganda, Somalia, Atlanta — and, while in high school and college, wrote software used by the World Health Organization. He has been with Google since 1999, as employee 25ish, and has had a hand in the core software systems beneath nearly every significant undertaking since then. A beloved artifact of company culture is Jeff Dean Facts, written in the style of the Chuck Norris Facts meme: “Jeff Dean’s PIN is the last four digits of pi.” “When Alexander Graham Bell invented the telephone, he saw a missed call from Jeff Dean.” “Jeff Dean got promoted to Level 11 in a system where the maximum level is 10.” (This last one is, in fact, true.)

One day in early 2011, Dean walked into one of the Google campus’s “microkitchens” — the “Googley” word for the shared break spaces on most floors of the Mountain View complex’s buildings — and ran into Andrew Ng, a young Stanford computer-science professor who was working for the company as a consultant. Ng told him about Project Marvin, an internal effort (named after the celebrated A.I. pioneer Marvin Minsky) he had recently helped establish to experiment with “neural networks,” pliant digital lattices based loosely on the architecture of the brain. Dean himself had worked on a primitive version of the technology as an undergraduate at the University of Minnesota in 1990, during one of the method’s brief windows of mainstream acceptability. Now, over the previous five years, the number of academics working on neural networks had begun to grow again, from a handful to a few dozen. Ng told Dean that Project Marvin, which was being underwritten by Google’s secretive X lab, had already achieved some promising results.
Dean was intrigued enough to lend his “20 percent” — the portion of work hours every Google employee is expected to contribute to programs outside his or her core job — to the project. Pretty soon, he suggested to Ng that they bring in another colleague with a neuroscience background, Greg Corrado. (In graduate school, Corrado was taught briefly about the technology, but strictly as a historical curiosity. “It was good I was paying attention in class that day,” he joked to me.) In late spring they brought in one of Ng’s best graduate students, Quoc Le, as the project’s first intern. By then, a number of the Google engineers had taken to referring to Project Marvin by another name: Google Brain.
Since the term “artificial intelligence” was first coined, at a kind of constitutional convention of the mind at Dartmouth in the summer of 1956, a majority of researchers have long thought the best approach to creating A.I. would be to write a very big, comprehensive program that laid out both the rules of logical reasoning and sufficient knowledge of the world. If you wanted to translate from English to Japanese, for example, you would program into the computer all of the grammatical rules of English, and then the entirety of definitions contained in the Oxford English Dictionary, and then all of the grammatical rules of Japanese, as well as all of the words in the Japanese dictionary, and only after all of that feed it a sentence in a source language and ask it to tabulate a corresponding sentence in the target language. You would give the machine a language map that was, as Borges would have had it, the size of the territory. This perspective is usually called “symbolic A.I.” — because its definition of cognition is based on symbolic logic — or, disparagingly, “good old-fashioned A.I.”
There are two main problems with the old-fashioned approach. The first is that it’s awfully time-consuming on the human end. The second is that it only really works in domains where rules and definitions are very clear: in mathematics, for example, or chess. Translation, however, is an example of a field where this approach fails horribly, because words cannot be reduced to their dictionary definitions, and because languages tend to have as many exceptions as they have rules. More often than not, a system like this is liable to translate “minister of agriculture” as “priest of farming.” Still, for math and chess it worked great, and the proponents of symbolic A.I. took it for granted that no activities signaled “general intelligence” better than math and chess.
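To make that failure mode concrete, here is a minimal sketch in Python of the dictionary-plus-rules idea the passage describes. The three-word lexicon is invented purely for illustration and is not taken from any real system; real symbolic translators used vastly larger rulebooks, but the weakness is the same.

# A toy "symbolic" translator: every word is looked up in a hand-built
# dictionary, one sense per word. The vocabulary here is invented.
lexicon = {
    "minister": "priest",      # only the religious sense was written down
    "of": "of",
    "agriculture": "farming",
}

def rule_based_translate(sentence):
    # Translate word by word; any nuance not encoded as a rule is lost.
    return " ".join(lexicon.get(word, word) for word in sentence.split())

print(rule_based_translate("minister of agriculture"))
# Prints "priest of farming": the failure mentioned above, because the
# political sense of "minister" was never spelled out as a rule.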

There were, however, limits to what this system could do. In the 1980s, a robotics researcher at Carnegie Mellon pointed out that it was easy to get computers to do adult things but nearly impossible to get them to do things a 1-year-old could do, like hold a ball or identify a cat. By the 1990s, despite punishing advancements in computer chess, we still weren’t remotely close to artificial general intelligence.
There has always been another vision for A.I. — a dissenting view — in which the computers would learn from the ground up (from data) rather than from the top down (from rules). This notion dates to the early 1940s, when it occurred to researchers that the best model for flexible automated intelligence was the brain itself. A brain, after all, is just a bunch of widgets, called neurons, that either pass along an electrical charge to their neighbors or don’t. What’s important are less the individual neurons themselves than the manifold connections among them. This structure, in its simplicity, has afforded the brain a wealth of adaptive advantages. The brain can operate in circumstances in which information is poor or missing; it can withstand significant damage without total loss of control; it can store a huge amount of knowledge in a very efficient way; it can isolate distinct patterns but retain the messiness necessary to handle ambiguity.
There was no reason you couldn’t try to mimic this structure in electronic form, and in 1943 it was shown that arrangements of simple artificial neurons could carry out basic logical functions. They could also, at least in theory, learn the way we do. With life experience, depending on a particular person’s trials and errors, the synaptic connections among pairs of neurons get stronger or weaker. An artificial neural network could do something similar, by gradually altering, on a guided trial-and-error basis, the numerical relationships among artificial neurons. It wouldn’t need to be preprogrammed with fixed rules. It would, instead, rewire itself to reflect patterns in the data it absorbed.
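A minimal sketch of that idea, assuming nothing beyond the passage itself: a single artificial neuron whose input connections are nudged after every mistake until it computes the logical AND function. The learning rule is the classic perceptron update, and the numbers are illustrative only.

# One artificial neuron learning AND by guided trial and error.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, bias = 0.0, 0.0, 0.0
rate = 0.1

for _ in range(20):                          # a few passes over the examples
    for (x1, x2), target in examples:
        output = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        error = target - output              # +1, 0, or -1
        w1 += rate * error * x1              # strengthen or weaken each
        w2 += rate * error * x2              # connection in proportion to
        bias += rate * error                 # its share of the mistake

print([1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0 for (x1, x2), _ in examples])
# Prints [0, 0, 0, 1]: the network has rewired itself to reflect the pattern.
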
This attitude toward artificial intelligence was evolutionary rather than creationist. If you wanted a flexible mechanism, you wanted one that could adapt to its environment. If you wanted something that could adapt, you didn’t want to begin with the indoctrination of the rules of chess. You wanted to begin with very basic abilities — sensory perception and motor control — in the hope that advanced skills would emerge organically. Humans don’t learn to understand language by memorizing dictionaries and grammar books, so why should we possibly expect our computers to do so?
Google Brain was the first major commercial institution to invest in the possibilities embodied by this way of thinking about A.I. Dean, Corrado and Ng began their work as a part-time, collaborative experiment, but they made immediate progress. They took architectural inspiration for their models from recent theoretical outlines — as well as ideas that had been on the shelf since the 1980s and 1990s — and drew upon both the company’s peerless reserves of data and its massive computing infrastructure. They instructed the networks on enormous banks of “labeled” data — speech files with correct transcriptions, for example — and the computers improved their responses to better match reality.
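In miniature, that kind of supervised training on labeled data looks something like the sketch below. The one-parameter "network" and the labeled pairs (which secretly follow the rule y = 2x) are invented for illustration; the point is only the loop of predict, compare with the label, and nudge the parameter to better match reality.

labeled_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # (input, correct answer)
weight = 0.0
rate = 0.05

for _ in range(200):
    for x, label in labeled_data:
        prediction = weight * x
        error = prediction - label
        weight -= rate * error * x      # gradient step on the squared error

print(round(weight, 3))                 # approaches 2.0, the rule hidden in the labels
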
“The portion of evolution in which animals developed eyes was a big development,” Dean told me one day, with customary understatement. We were sitting, as usual, in a whiteboarded meeting room, on which he had drawn a crowded, snaking timeline of Google Brain and its relation to inflection points in the recent history of neural networks. “Now computers have eyes. We can build them around the capabilities that now exist to understand photos. Robots will be drastically transformed. They’ll be able to operate in an unknown environment, on much different problems.” These capacities they were building may have seemed primitive, but their implications were profound.

2. The Unlikely Intern
In its first year or so of existence, Brain’s experiments in the development of a machine with the talents of a 1-year-old had, as Dean said, worked to great effect. Its speech-recognition team swapped out part of their old system for a neural network and encountered, in pretty much one fell swoop, the best quality improvements anyone had seen in 20 years. Their system’s object-recognition abilities improved by an order of magnitude. This was not because Brain’s personnel had generated a sheaf of outrageous new ideas in just a year. It was because Google had finally devoted the resources — in computers and, increasingly, personnel — to fill in outlines that had been around for a long time.
A great preponderance of these extant and neglected notions had been proposed or refined by a peripatetic English polymath named Geoffrey Hinton. In the second year of Brain’s existence, Hinton was recruited to Brain as Andrew Ng left. (Ng now leads the 1,300-person A.I. team at Baidu.) Hinton wanted to leave his post at the University of Toronto for only three months, so for arcane contractual reasons he had to be hired as an intern. At intern training, the orientation leader would say something like, “Type in your LDAP” — a user login — and he would raise a hand to ask, “What’s an LDAP?” All the smart 25-year-olds in attendance, who had only ever known deep learning as the sine qua non of artificial intelligence, snickered: “Who is that old guy? Why doesn’t he get it?”
“At lunchtime,” Hinton said, “someone in the queue yelled: ‘Professor Hinton! I took your course! What are you doing here?’ After that, it was all right.”
A few months later, Hinton and two of his students demonstrated truly astonishing gains in a big image-recognition contest, run by an open-source collective called ImageNet, that asks computers not only to identify a monkey but also to distinguish between spider monkeys and howler monkeys, and among God knows how many different breeds of cat. Google soon approached Hinton and his students with an offer. They accepted. “I thought they were interested in our I.P.,” he said. “Turns out they were interested in us.”
Hinton comes from one of those old British families emblazoned like the Darwins at eccentric angles across the intellectual landscape, where regardless of titular preoccupation a person is expected to make sideline contributions to minor problems in astronomy or fluid dynamics. His great-great-grandfather was George Boole, whose foundational work in symbolic logic underpins the computer; another great-great-grandfather was a celebrated surgeon, his father a venturesome entomologist, his father’s cousin a Los Alamos researcher; the list goes on. He trained at Cambridge and Edinburgh, then taught at Carnegie Mellon before he ended up at Toronto, where he still spends half his time. (His work has long been supported by the largess of the Canadian government.) I visited him in his office at Google there. He has tousled yellowed-pewter hair combed forward in a mature Noel Gallagher style and wore a baggy striped dress shirt that persisted in coming untucked, and oval eyeglasses that slid down to the tip of a prominent nose. He speaks with a driving if shambolic wit, and says things like, “Computers will understand sarcasm before Americans do.”
Hinton had been working on neural networks since his undergraduate days at Cambridge in the late 1960s, and he is seen as the intellectual primogenitor of the contemporary field. For most of that time, whenever he spoke about machine learning, people looked at him as though he were talking about the Ptolemaic spheres or bloodletting by leeches. Neural networks were taken as a disproven folly, largely on the basis of one overhyped project: the Perceptron, an artificial neural network that Frank Rosenblatt, a Cornell psychologist, developed in the late 1950s. The New York Times reported that the machine’s sponsor, the United States Navy, expected it would “be able to walk, talk, see, write, reproduce itself and be conscious of its existence.” It went on to do approximately none of those things. Marvin Minsky, the dean of artificial intelligence in America, had worked on neural networks for his 1954 Princeton thesis, but he’d since grown tired of the inflated claims that Rosenblatt — who was a contemporary at Bronx Science — made for the neural paradigm. (He was also competing for Defense Department funding.) Along with an M.I.T. colleague, Minsky published a book that proved that there were painfully simple problems the Perceptron could never solve.
Minsky’s criticism of the Perceptron extended only to networks of one “layer,” i.e., one layer of artificial neurons between what’s fed to the machine and what you expect from it — and later in life, he expounded ideas very similar to contemporary deep learning. But Hinton already knew at the time that complex tasks could be carried out if you had recourse to multiple layers. The simplest description of a neural network is that it’s a machine that makes classifications or predictions based on its ability to discover patterns in data. With one layer, you could find only simple patterns; with more than one, you could look for patterns of patterns. Take the case of image recognition, which tends to rely on a contraption called a “convolutional neural net.” (These were elaborated in a seminal 1998 paper whose lead author, a Frenchman named Yann LeCun, did his postdoctoral research in Toronto under Hinton and now directs a huge A.I. endeavor at Facebook.) The first layer of the network learns to identify the very basic visual trope of an “edge,” meaning a nothing (an off-pixel) followed by a something (an on-pixel) or vice versa. Each successive layer of the network looks for a pattern in the previous layer. A pattern of edges might be a circle or a rectangle. A pattern of circles or rectangles might be a face. And so on. This more or less parallels the way information is put together in increasingly abstract ways as it travels from the photoreceptors in the retina back and up through the visual cortex. At each conceptual step, detail that isn’t immediately relevant is thrown away. If several edges and circles come together to make a face, you don’t care exactly where the face is found in the visual field; you just care that it’s a face.
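For readers who want to see what such a stack of layers looks like in code, here is a minimal sketch, assuming the PyTorch library (the article names no particular software) and arbitrary toy layer sizes: a few convolutional layers feeding one another, edge-detectors at the bottom and a final "cat or not" vote at the top.

# Minimal sketch of a stacked ("deep") convolutional network, assuming PyTorch.
# Layer sizes are arbitrary illustrations, not the system described in the article.
import torch
import torch.nn as nn

cat_recognizer = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # first layer: learns edge-like filters
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # next layer: patterns of edges (circles, corners)
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),  # deeper layer: patterns of patterns (faces)
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 2),                             # final vote: "cat" or "not cat"
)

scores = cat_recognizer(torch.randn(1, 3, 64, 64))  # one fake RGB image, just to show the shapes work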

A demonstration from 1993 showing an early version of the researcher Yann LeCun’s convolutional neural network, which by the late 1990s was processing 10 to 20 percent of all checks in the United States. A similar technology now drives most state-of-the-art image-recognition systems. Video posted on YouTube by Yann LeCun
The issue with multilayered, “deep” neural networks was that the trial-and-error part got extraordinarily complicated. In a single layer, it’s easy. Imagine that you’re playing with a child. You tell the child, “Pick up the green ball and put it into Box A.” The child picks up a green ball and puts it into Box B. You say, “Try again to put the green ball in Box A.” The child tries Box A. Bravo.
Now imagine you tell the child, “Pick up a green ball, go through the door marked 3 and put the green ball into Box A.” The child takes a red ball, goes through the door marked 2 and puts the red ball into Box B. How do you begin to correct the child? You cannot just repeat your initial instructions, because the child does not know at which point he went wrong. In real life, you might start by holding up the red ball and the green ball and saying, “Red ball, green ball.” The whole point of machine learning, however, is to avoid that kind of explicit mentoring. Hinton and a few others went on to invent a solution (or rather, reinvent an older one) to this layered-error problem, over the halting course of the late 1970s and 1980s, and interest among computer scientists in neural networks was briefly revived. “People got very excited about it,” he said. “But we oversold it.” Computer scientists quickly went back to thinking that people like Hinton were weirdos and mystics.
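The fix Hinton helped revive is the technique now known as backpropagation: push the error at the output backward through the layers, so that each switch learns its share of the blame. Below is a toy sketch in NumPy, with invented data and arbitrary sizes, of a two-layer network trained this way.

# Toy backpropagation sketch with NumPy: a two-layer network learns a tiny,
# made-up mapping. Sizes, data and learning rate are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((8, 4))            # 8 fake inputs with 4 features each
y = rng.integers(0, 2, (8, 1))    # 8 fake 0/1 labels

W1 = rng.normal(0, 0.5, (4, 6))   # first layer of "switches"
W2 = rng.normal(0, 0.5, (6, 1))   # second layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    h = sigmoid(X @ W1)           # forward pass, layer 1
    out = sigmoid(h @ W2)         # forward pass, layer 2
    err = out - y                 # how wrong the final answer was
    # Push the error backward: each layer gets its share of the blame.
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_h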
These ideas remained popular, however, among philosophers and psychologists, who called the approach “connectionism” or “parallel distributed processing.” “This idea,” Hinton told me, “of a few people keeping a torch burning, it’s a nice myth. It was true within artificial intelligence. But within psychology lots of people believed in the approach but just couldn’t do it.” Neither could Hinton, despite the generosity of the Canadian government. “There just wasn’t enough computer power or enough data. People on our side kept saying, ‘Yeah, but if I had a really big one, it would work.’ It wasn’t a very persuasive argument.”
‘The portion of evolution in which animals developed eyes was a big development. Now computers have eyes.’

3. A Deep Explanation of Deep Learning
When Pichai said that Google would henceforth be “A.I. first,” he was not just making a claim about his company’s business strategy; he was throwing in his company’s lot with this long-unworkable idea. Pichai’s allocation of resources ensured that people like Dean could ensure that people like Hinton would have, at long last, enough computers and enough data to make a persuasive argument. An average brain has something on the order of 100 billion neurons. Each neuron is connected to up to 10,000 other neurons, which means that the number of synapses is between 100 trillion and 1,000 trillion. For a simple artificial neural network of the sort proposed in the 1940s, even trying to replicate this was unimaginable. We’re still far from the construction of a network of that size, but Google Brain’s investment allowed for the creation of artificial neural networks comparable to the brains of mice.
To understand why scale is so important, however, you have to start to understand some of the more technical details of what, exactly, machine intelligences are doing with the data they consume. A lot of our ambient fears about A.I. rest on the idea that they’re just vacuuming up knowledge like a sociopathic prodigy in a library, and that an artificial intelligence constructed to make paper clips might someday decide to treat humans like ants or lettuce. This just isn’t how they work. All they’re doing is shuffling information around in search of commonalities — basic patterns, at first, and then more complex ones — and for the moment, at least, the greatest danger is that the information we’re feeding them is biased in the first place.
If that brief explanation seems sufficiently reassuring, the reassured nontechnical reader is invited to skip forward to the next section, which is about cats. If not, then read on. (This section is also, luckily, about cats.)
Imagine you want to program a cat-recognizer on the old symbolic-A.I. model. You stay up for days preloading the machine with an exhaustive, explicit definition of “cat.” You tell it that a cat has four legs and pointy ears and whiskers and a tail, and so on. All this information is stored in a special place in memory called Cat. Now you show it a picture. First, the machine has to separate out the various distinct elements of the image. Then it has to take these elements and apply the rules stored in its memory. If(legs=4) and if(ears=pointy) and if(whiskers=yes) and if(tail=yes) and if(expression=supercilious), then(cat=yes). But what if you showed this cat-recognizer a Scottish Fold, a heart-rending breed with a prized genetic defect that leads to droopy doubled-over ears? Our symbolic A.I. gets to (ears=pointy) and shakes its head solemnly, “Not cat.” It is hyperliteral, or “brittle.” Even the thickest toddler shows much greater inferential acuity.
Now imagine that instead of hard-wiring the machine with a set of rules for classification stored in one location of the computer’s memory, you try the same thing on a neural network. There is no special place that can hold the definition of “cat.” There is just a giant blob of interconnected switches, like forks in a path. On one side of the blob, you present the inputs (the pictures); on the other side, you present the corresponding outputs (the labels). Then you just tell it to work out for itself, via the individual calibration of all of these interconnected switches, whatever path the data should take so that the inputs are mapped to the correct outputs. The training is the process by which a labyrinthine series of elaborate tunnels are excavated through the blob, tunnels that connect any given input to its proper output. The more training data you have, the greater the number and intricacy of the tunnels that can be dug. Once the training is complete, the middle of the blob has enough tunnels that it can make reliable predictions about how to handle data it has never seen before. This is called “supervised learning.”
The reason that the network requires so many neurons and so much data is that it functions, in a way, like a sort of giant machine democracy. Imagine you want to train a computer to differentiate among five different items. Your network is made up of millions and millions of neuronal “voters,” each of whom has been given five different cards: one for cat, one for dog, one for spider monkey, one for spoon and one for defibrillator. You show your electorate a photo and ask, “Is this a cat, a dog, a spider monkey, a spoon or a defibrillator?” All the neurons that voted the same way collect in groups, and the network foreman peers down from above and identifies the majority classification: “A dog?”
You say: “No, maestro, it’s a cat. Try again.”
Now the network foreman goes back to identify which voters threw their weight behind “cat” and which didn’t. The ones that got “cat” right get their votes counted double next time — at least when they’re voting for “cat.” They have to prove independently whether they’re also good at picking out dogs and defibrillators, but one thing that makes a neural network so flexible is that each individual unit can contribute differently to different desired outcomes. What’s important is not the individual vote, exactly, but the pattern of votes. If Joe, Frank and Mary all vote together, it’s a dog; but if Joe, Kate and Jessica vote together, it’s a cat; and if Kate, Jessica and Frank vote together, it’s a defibrillator. The neural network just needs to register enough of a regularly discernible signal somewhere to say, “Odds are, this particular arrangement of pixels represents something these humans keep calling ‘cats.’ ” The more “voters” you have, and the more times you make them vote, the more keenly the network can register even very weak signals. If you have only Joe, Frank and Mary, you can maybe use them only to differentiate among a cat, a dog and a defibrillator. If you have millions of different voters that can associate in billions of different ways, you can learn to classify data with incredible granularity. Your trained voter assembly will be able to look at an unlabeled picture and identify it more or less accurately.
Part of the reason there was so much resistance to these ideas in computer-science departments is that because the output is just a prediction based on patterns of patterns, it’s not going to be perfect, and the machine will never be able to define for you what, exactly, a cat is. It just knows them when it sees them. This wooliness, however, is the point. The neuronal “voters” will recognize a happy cat dozing in the sun and an angry cat glaring out from the shadows of an untidy litter box, as long as they have been exposed to millions of diverse cat scenes. You just need lots and lots of the voters — in order to make sure that some part of your network picks up on even very weak regularities, on Scottish Folds with droopy ears, for example — and enough labeled data to make sure your network has seen the widest possible variance in phenomena.
It is important to note, however, that the fact that neural networks are probabilistic in nature means that they’re not suitable for all tasks. It’s no great tragedy if they mislabel 1 percent of cats as dogs, or send you to the wrong movie on occasion, but in something like a self-driving car we all want greater assurances. This isn’t the only caveat. Supervised learning is a trial-and-error process based on labeled data. The machines might be doing the learning, but there remains a strong human element in the initial categorization of the inputs. If your data had a picture of a man and a woman in suits that someone had labeled “woman with her boss,” that relationship would be encoded into all future pattern recognition. Labeled data is thus fallible the way that human labelers are fallible. If a machine was asked to identify creditworthy candidates for loans, it might use data like felony convictions, but if felony convictions were unfair in the first place — if they were based on, say, discriminatory drug laws — then the loan recommendations would perforce also be fallible.
Image-recognition networks like our cat-identifier are only one of many varieties of deep learning, but they are disproportionately invoked as teaching examples because each layer does something at least vaguely recognizable to humans — picking out edges first, then circles, then faces. This means there’s a safeguard against error. For instance, an early oddity in Google’s image-recognition software meant that it could not always identify a dumbbell in isolation, even though the team had trained it on an image set that included a lot of exercise categories. A visualization tool showed them the machine had learned not the concept of “dumbbell” but the concept of “dumbbell+arm,” because all the dumbbells in the training set were attached to arms. They threw into the training mix some photos of solo dumbbells. The problem was solved. Not everything is so easy.

Google Brain’s investment allowed for the creation of artificial neural networks comparable to the brains of mice.

4. The Cat Paper
Over the course of its first year or two, Brain’s efforts to cultivate in machines the skills of a 1-year-old were auspicious enough that the team was graduated out of the X lab and into the broader research organization. (The head of Google X once noted that Brain had paid for the entirety of X’s costs.) They still had fewer than 10 people and only a vague sense for what might ultimately come of it all. But even then they were thinking ahead to what ought to happen next. First a human mind learns to recognize a ball and rests easily with the accomplishment for a moment, but sooner or later, it wants to ask for the ball. And then it wades into language.
The first step in that direction was the cat paper, which made Brain famous.
What the cat paper demonstrated was that a neural network with more than a billion “synaptic” connections — a hundred times larger than any publicized neural network to that point, yet still many orders of magnitude smaller than our brains — could observe raw, unlabeled data and pick out for itself a high-order human concept. The Brain researchers had shown the network millions of still frames from YouTube videos, and out of the welter of the pure sensorium the network had isolated a stable pattern any toddler or chipmunk would recognize without a moment’s hesitation as the face of a cat. The machine had not been programmed with the foreknowledge of a cat; it reached directly into the world and seized the idea for itself. (The researchers discovered this with the neural-network equivalent of something like an M.R.I., which showed them that a ghostly cat face caused the artificial neurons to “vote” with the greatest collective enthusiasm.) Most machine learning to that point had been limited by the quantities of labeled data. The cat paper showed that machines could also deal with raw unlabeled data, perhaps even data of which humans had no established foreknowledge. This seemed like a major advance not only in cat-recognition studies but also in overall artificial intelligence.
The lead author on the cat paper was Quoc Le. Le is short and willowy and soft-spoken, with a quick, enigmatic smile and shiny black penny loafers. He grew up outside Hue, Vietnam. His parents were rice farmers, and he did not have electricity at home. His mathematical abilities were obvious from an early age, and he was sent to study at a magnet school for science. In the late 1990s, while still in school, he tried to build a chatbot to talk to. He thought, How hard could this be?
“But actually,” he told me in a whispery deadpan, “it’s very hard.”
He left the rice paddies on a scholarship to a university in Canberra, Australia, where he worked on A.I. tasks like computer vision. The dominant method of the time, which involved feeding the machine definitions for things like edges, felt to him like cheating. Le didn’t know then, or knew only dimly, that there were at least a few dozen computer scientists elsewhere in the world who couldn’t help imagining, as he did, that machines could learn from scratch. In 2006, Le took a position at the Max Planck Institute for Biological Cybernetics in the medieval German university town of Tübingen. In a reading group there, he encountered two new papers by Geoffrey Hinton. People who entered the discipline during the long diaspora all have conversion stories, and when Le read those papers, he felt the scales fall away from his eyes.
“There was a big debate,” he told me. “A very big debate.” We were in a small interior conference room, a narrow, high-ceilinged space outfitted with only a small table and two whiteboards. He looked to the curve he’d drawn on the whiteboard behind him and back again, then softly confided, “I’ve never seen such a big debate.”
He remembers standing up at the reading group and saying, “This is the future.” It was, he said, an “unpopular decision at the time.” A former adviser from Australia, with whom he had stayed close, couldn’t quite understand Le’s decision. “Why are you doing this?” he asked Le in an email.
“I didn’t have a good answer back then,” Le said. “I was just curious. There was a successful paradigm, but to be honest I was just curious about the new paradigm. In 2006, there was very little activity.” He went to join Ng at Stanford and began to pursue Hinton’s ideas. “By the end of 2010, I was pretty convinced something was going to happen.”
What happened, soon afterward, was that Le went to Brain as its first intern, where he carried on with his dissertation work — an extension of which ultimately became the cat paper. On a simple level, Le wanted to see if the computer could be trained to identify on its own the information that was absolutely essential to a given image. He fed the neural network a still he had taken from YouTube. He then told the neural network to throw away some of the information contained in the image, though he didn’t specify what it should or shouldn’t throw away. The machine threw away some of the information, initially at random. Then he said: “Just kidding! Now recreate the initial image you were shown based only on the information you retained.” It was as if he were asking the machine to find a way to “summarize” the image, and then expand back to the original from the summary. If the summary was based on irrelevant data — like the color of the sky rather than the presence of whiskers — the machine couldn’t perform a competent reconstruction. Its reaction would be akin to that of a distant ancestor whose takeaway from his brief exposure to saber-tooth tigers was that they made a restful swooshing sound when they moved. Le’s neural network, unlike that ancestor, got to try again, and again and again and again. Each time it mathematically “chose” to prioritize different pieces of information and performed incrementally better. A neural network, however, was a black box. It divined patterns, but the patterns it identified didn’t always make intuitive sense to a human observer. The same network that hit on our concept of cat also became enthusiastic about a pattern that looked like some sort of furniture-animal compound, like a cross between an ottoman and a goat.
Le didn’t see himself in those heady cat years as a language guy, but he felt an urge to connect the dots to his early chatbot. After the cat paper, he realized that if you could ask a network to summarize a photo, you could perhaps also ask it to summarize a sentence. This problem preoccupied Le, along with a Brain colleague named Tomas Mikolov, for the next two years.
In that time, the Brain team outgrew several offices around him. For a while they were on a floor they shared with executives. They got an email at one point from the administrator asking that they please stop allowing people to sleep on the couch in front of Larry Page and Sergey Brin’s suite. It unsettled incoming V.I.P.s. They were then allocated part of a research building across the street, where their exchanges in the microkitchen wouldn’t be squandered on polite chitchat with the suits. That interim also saw dedicated attempts on the part of Google’s competitors to catch up. (As Le told me about his close collaboration with Tomas Mikolov, he kept repeating Mikolov’s name over and over, in an incantatory way that sounded poignant. Le had never seemed so solemn. I finally couldn’t help myself and began to ask, “Is he … ?” Le nodded. “At Facebook,” he replied.)

They spent this period trying to come up with neural-network architectures that could accommodate not only simple photo classifications, which were static, but also complex structures that unfolded over time, like language or music. Many of these were first proposed in the 1990s, and Le and his colleagues went back to those long-ignored contributions to see what they could glean. They knew that once you established a facility with basic linguistic prediction, you could then go on to do all sorts of other intelligent things — like predict a suitable reply to an email, for example, or predict the flow of a sensible conversation. You could sidle up to the sort of prowess that would, from the outside at least, look a lot like thinking.

Part II: Language Machine
5. The Linguistic Turn
The hundred or so current members of Brain — it often feels less like a department within a colossal corporate hierarchy than it does a club or a scholastic society or an intergalactic cantina — came in the intervening years to count among the freest and most widely admired employees in the entire Google organization. They are now quartered in a tiered two-story eggshell building, with large windows tinted a menacing charcoal gray, on the leafy northwestern fringe of the company’s main Mountain View campus. Their microkitchen has a foosball table I never saw used; a Rock Band setup I never saw used; and a Go kit I saw used on a few occasions. (I did once see a young Brain research associate introducing his colleagues to ripe jackfruit, carving up the enormous spiky orb like a turkey.)
When I began spending time at Brain’s offices, in June, there were some rows of empty desks, but most of them were labeled with Post-it notes that said things like “Jesse, 6/27.” Now those are all occupied. When I first visited, parking was not an issue. The closest spaces were those reserved for expectant mothers or Teslas, but there was ample space in the rest of the lot. By October, if I showed up later than 9:30, I had to find a spot across the street.
Brain’s growth made Dean slightly nervous about how the company was going to handle the demand. He wanted to avoid what at Google is known as a “success disaster” — a situation in which the company’s capabilities in theory outpaced its ability to implement a product in practice. At a certain point he did some back-of-the-envelope calculations, which he presented to the executives one day in a two-slide presentation.
“If everyone in the future speaks to their Android phone for three minutes a day,” he told them, “this is how many machines we’ll need.” They would need to double or triple their global computational footprint.
“That,” he observed with a little theatrical gulp and widened eyes, “sounded scary. You’d have to” — he hesitated to imagine the consequences — “build new buildings.”
There was, however, another option: just design, mass-produce and install in dispersed data centers a new kind of chip to make everything faster. These chips would be called T.P.U.s, or “tensor processing units,” and their value proposition — counterintuitively — is that they are deliberately less precise than normal chips. Rather than compute 12.246 times 54.392, they will give you the perfunctory answer to 12 times 54. On a mathematical level, rather than a metaphorical one, a neural network is just a structured series of hundreds or thousands or tens of thousands of matrix multiplications carried out in succession, and it’s much more important that these processes be fast than that they be exact. “Normally,” Dean said, “special-purpose hardware is a bad idea. It usually works to speed up one thing. But because of the generality of neural networks, you can leverage this special-purpose hardware for a lot of other things.”
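To see why imprecision is tolerable, here is a NumPy sketch that illustrates the arithmetic, not the actual T.P.U. hardware: a forward pass is just chained matrix multiplications, and running it at half precision barely changes the answer.

# A network's forward pass is a chain of matrix multiplications; lower precision
# barely changes the result. The sizes and weights here are invented.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0, 0.1, (1, 256))
W1 = rng.normal(0, 0.1, (256, 256))
W2 = rng.normal(0, 0.1, (256, 10))

full = np.maximum(x @ W1, 0) @ W2                              # float64: the "12.246 times 54.392" version
w1h, w2h = W1.astype(np.float16), W2.astype(np.float16)
rough = np.maximum(x.astype(np.float16) @ w1h, 0) @ w2h        # float16: the "12 times 54" version

print(np.max(np.abs(full - rough)) / np.max(np.abs(full)))     # the two answers differ by a tiny fraction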
Just as the chip-design process was nearly complete, Le and two colleagues finally demonstrated that neural networks might be configured to handle the structure of language. He drew upon an idea, called “word embeddings,” that had been around for more than 10 years. When you summarize images, you can divine a picture of what each stage of the summary looks like — an edge, a circle, etc. When you summarize language in a similar way, you essentially produce multidimensional maps of the distances, based on common usage, between one word and every single other word in the language. The machine is not “analyzing” the data the way that we might, with linguistic rules that identify some of them as nouns and others as verbs. Instead, it is shifting and twisting and warping the words around in the map. In two dimensions, you cannot make this map useful. You want, for example, “cat” to be in the rough vicinity of “dog,” but you also want “cat” to be near “tail” and near “supercilious” and near “meme,” because you want to try to capture all of the different relationships — both strong and weak — that the word “cat” has to other words. It can be related to all these other words simultaneously only if it is related to each of them in a different dimension. You can’t easily make a 160,000-dimensional map, but it turns out you can represent a language pretty well in a mere thousand or so dimensions — in other words, a universe in which each word is designated by a list of a thousand numbers. Le gave me a good-natured hard time for my continual requests for a mental picture of these maps. “Gideon,” he would say, with the blunt regular demurral of Bartleby, “I do not generally like trying to visualize thousand-dimensional vectors in three-dimensional space.”
Still, certain dimensions in the space, it turned out, did seem to represent legible human categories, like gender or relative size. If you took the thousand numbers that meant “king” and literally just subtracted the thousand numbers that meant “queen,” you got the same numerical result as if you subtracted the numbers for “woman” from the numbers for “man.” And if you took the entire space of the English language and the entire space of French, you could, at least in theory, train a network to learn how to take a sentence in one space and propose an equivalent in the other. You just had to give it millions and millions of English sentences as inputs on one side and their desired French outputs on the other, and over time it would recognize the relevant patterns in words the way that an image classifier recognized the relevant patterns in pixels. You could then give it a sentence in English and ask it to predict the best French analogue.
The major difference between words and pixels, however, is that all of the pixels in an image are there at once, whereas words appear in a progression over time. You needed a way for the network to “hold in mind” the progression of a chronological sequence — the complete pathway from the first word to the last. In a period of about a week, in September 2014, three papers came out — one by Le and two others by academics in Canada and Germany — that at last provided all the theoretical tools necessary to do this sort of thing. That research allowed for open-ended projects like Brain’s Magenta, an investigation into how machines might generate art and music. It also cleared the way toward an instrumental task like machine translation. Hinton told me he thought at the time that this follow-up work would take at least five more years.
It’s no great tragedy if neural networks mislabel 1 percent of cats as dogs, but in something like a self-driving car we all want greater assurances.

6. The Ambush
Le’s paper showed that neural translation was plausible, but he had used only a relatively small public data set. (Small for Google, that is — it was actually the biggest public data set in the world. A decade of the old Translate had gathered production data that was between a hundred and a thousand times bigger.) More important, Le’s model didn’t work very well for sentences longer than about seven words.
Mike Schuster, who then was a staff research scientist at Brain, picked up the baton. He knew that if Google didn’t find a way to scale these theoretical insights up to a production level, someone else would. The project took him the next two years. “You think,” Schuster says, “to translate something, you just get the data, run the experiments and you’re done, but it doesn’t work like that.”
Schuster is a taut, focused, ageless being with a tanned, piston-shaped head, narrow shoulders, long camo cargo shorts tied below the knee and neon-green Nike Flyknits. He looks as if he woke up in the lotus position, reached for his small, rimless, elliptical glasses, accepted calories in the form of a modest portion of preserved acorn and completed a relaxed desert decathlon on the way to the office; in reality, he told me, it’s only an 18-mile bike ride each way. Schuster grew up in Duisburg, in the former West Germany’s blast-furnace district, and studied electrical engineering before moving to Kyoto to work on early neural networks. In the 1990s, he ran experiments with a neural-networking machine as big as a conference room; it cost millions of dollars and had to be trained for weeks to do something you could now do on your desktop in less than an hour. He published a paper in 1997 that was barely cited for a decade and a half; this year it has been cited around 150 times. He is not humorless, but he does often wear an expression of some asperity, which I took as his signature combination of German restraint and Japanese restraint.
The issues Schuster had to deal with were tangled. For one thing, Le’s code was custom-written, and it wasn’t compatible with the new open-source machine-learning platform Google was then developing, TensorFlow. Dean directed to Schuster two other engineers, Yonghui Wu and Zhifeng Chen, in the fall of 2015. It took them two months just to replicate Le’s results on the new system. Le was around, but even he couldn’t always make heads or tails of what they had done.
As Schuster put it, “Some of the stuff was not done in full consciousness. They didn’t know themselves why they worked.”
This February, Google’s research organization — the loose division of the company, roughly a thousand employees in all, dedicated to the forward-looking and the unclassifiable — convened their leads at an offsite retreat at the Westin St. Francis, on Union Square, a luxury hotel slightly less splendid than Google’s own San Francisco shop a mile or so to the east. The morning was reserved for rounds of “lightning talks,” quick updates to cover the research waterfront, and the afternoon was idled away in cross-departmental “facilitated discussions.” The hope was that the retreat might provide an occasion for the unpredictable, oblique, Bell Labs-ish exchanges that kept a mature company prolific.
At lunchtime, Corrado and Dean paired up in search of Macduff Hughes, director of Google Translate. Hughes was eating alone, and the two Brain members took positions at either side. As Corrado put it, “We ambushed him.”
“O.K.,” Corrado said to the wary Hughes, holding his breath for effect. “We have something to tell you.”
They told Hughes that 2016 seemed like a good time to consider an overhaul of Google Translate — the code of hundreds of engineers over 10 years — with a neural network. The old system worked the way all machine translation has worked for about 30 years: It sequestered each successive sentence fragment, looked up those words in a large statistically derived vocabulary table, then applied a battery of post-processing rules to affix proper endings and rearrange it all to make sense. The approach is called “phrase-based statistical machine translation,” because by the time the system gets to the next phrase, it doesn’t know what the last one was. This is why Translate’s output sometimes looked like a shaken bag of fridge magnets. Brain’s replacement would, if it came together, read and render entire sentences at one draft. It would capture context — and something akin to meaning.
The stakes may have seemed low: Translate generates minimal revenue, and it probably always will. For most Anglophone users, even a radical upgrade in the service’s performance would hardly be hailed as anything more than an expected incremental bump. But there was a case to be made that human-quality machine translation is not only a short-term necessity but also a development very likely, in the long term, to prove transformational. In the immediate future, it’s vital to the company’s business strategy. Google estimates that 50 percent of the internet is in English, which perhaps 20 percent of the world’s population speaks. If Google was going to compete in China — where a majority of market share in search-engine traffic belonged to its competitor Baidu — or India, decent machine translation would be an indispensable part of the infrastructure. Baidu itself had published a pathbreaking paper about the possibility of neural machine translation in July 2015.
‘You think to translate something, you just get the data, run the experiments and you’re done, but it doesn’t work like that.’

And in the more distant, speculative future, machine translation was perhaps the first step toward a general computational facility with human language. This would represent a major inflection point — perhaps the major inflection point — in the development of something that felt like true artificial intelligence.
Most people in Silicon Valley were aware of machine learning as a fast-approaching horizon, so Hughes had seen this ambush coming. He remained skeptical. A modest, sturdily built man of early middle age with mussed auburn hair graying at the temples, Hughes is a classic line engineer, the sort of craftsman who wouldn’t have been out of place at a drafting table at 1970s Boeing. His jeans pockets often look burdened with curious tools of ungainly dimension, as if he were porting around measuring tapes or thermocouples, and unlike many of the younger people who work for him, he has a wardrobe unreliant on company gear. He knew that various people in various places at Google and elsewhere had been trying to make neural translation work — not in a lab but at production scale — for years, to little avail.
Hughes listened to their case and, at the end, said cautiously that it sounded to him as if maybe they could pull it off in three years.
Dean thought otherwise. “We can do it by the end of the year, if we put our minds to it.” One reason people liked and admired Dean so much was that he had a long record of successfully putting his mind to it. Another was that he wasn’t at all embarrassed to say sincere things like “if we put our minds to it.”
Hughes was sure the conversion wasn’t going to happen any time soon, but he didn’t personally care to be the reason. “Let’s prepare for 2016,” he went back and told his team. “I’m not going to be the one to say Jeff Dean can’t deliver speed.”
A month later, they were finally able to run a side-by-side experiment to compare Schuster’s new system with Hughes’s old one. Schuster wanted to run it for English-French, but Hughes advised him to try something else. “English-French,” he said, “is so good that the improvement won’t be obvious.”
It was a challenge Schuster couldn’t resist. The benchmark metric to evaluate machine translation is called a BLEU score, which compares a machine translation with an average of many reliable human translations. At the time, the best BLEU scores for English-French were in the high 20s. An improvement of one point was considered very good; an improvement of two was considered outstanding.
The neural system, on the English-French language pair, showed an improvement over the old system of seven points.
Hughes told Schuster’s team they hadn’t had even half as strong an improvement in their own system in the last four years.
To be sure this wasn’t some fluke in the metric, they also turned to their pool of human contractors to do a side-by-side comparison. The user-perception scores, in which sample sentences were graded from zero to six, showed an average improvement of 0.4 — roughly equivalent to the aggregate gains of the old system over its entire lifetime of development.

In mid-March, Hughes sent his team an email. All projects on the old system were to be suspended immediately.
7. Theory Becomes Product
Until then, the neural-translation team had been only three people — Schuster, Wu and Chen — but with Hughes’s support, the broader team began to coalesce. They met under Schuster’s command on Wednesdays at 2 p.m. in a corner room of the Brain building called Quartz Lake. The meeting was generally attended by a rotating cast of more than a dozen people. When Hughes or Corrado were there, they were usually the only native English speakers. The engineers spoke Chinese, Vietnamese, Polish, Russian, Arabic, German and Japanese, though they mostly spoke in their own efficient pidgin and in math. It is not always totally clear, at Google, who is running a meeting, but in Schuster’s case there was no ambiguity.
The steps they needed to take, even then, were not wholly clear. “This story is a lot about uncertainty — uncertainty throughout the whole process,” Schuster told me at one point. “The software, the data, the hardware, the people. It was like” — he extended his long, gracile arms, slightly bent at the elbows, from his narrow shoulders — “swimming in a big sea of mud, and you can only see this far.” He held out his hand eight inches in front of his chest. “There’s a goal somewhere, and maybe it’s there.”
Most of Google’s conference rooms have videochat monitors, which when idle display extremely high-resolution oversaturated public Google+ photos of a sylvan dreamscape or the northern lights or the Reichstag. Schuster gestured toward one of the panels, which showed a crystalline still of the Washington Monument at night.
“The view from outside is that everyone has binoculars and can see ahead so far.”
The theoretical work to get them to this point had already been painstaking and drawn-out, but the attempt to turn it into a viable product — the part that academic scientists might dismiss as “mere” engineering — was no less difficult. For one thing, they needed to make sure that they were training on good data. Google’s billions of words of training “reading” were mostly made up of complete sentences of moderate complexity, like the sort of thing you might find in Hemingway. Some of this is in the public domain: The original Rosetta Stone of statistical machine translation was millions of pages of the complete bilingual records of the Canadian Parliament. Much of it, however, was culled from 10 years of collected data, including human translations that were crowdsourced from enthusiastic respondents. The team had in their storehouse about 97 million unique English “words.” But once they removed the emoticons, and the misspellings, and the redundancies, they had a working vocabulary of only around 160,000.
Then you had to refocus on what users actually wanted to translate, which frequently had very little to do with reasonable language as it is employed. Many people, Google had found, don’t look to the service to translate full, complex sentences; they translate weird little shards of language. If you wanted the network to be able to handle the stream of user queries, you had to be sure to orient it in that direction. The network was very sensitive to the data it was trained on. As Hughes put it to me at one point: “The neural-translation system is learning everything it can. It’s like a toddler. ‘Oh, Daddy says that word when he’s mad!’ ” He laughed. “You have to be careful.”
More than anything, though, they needed to make sure that the whole thing was fast and reliable enough that their users wouldn’t notice. In February, the translation of a 10-word sentence took 10 seconds. They could never introduce anything that slow. The Translate team began to conduct latency experiments on a small percentage of users, in the form of faked delays, to identify tolerance. They found that a translation that took twice as long, or even five times as long, wouldn’t be registered. An eightfold slowdown would. They didn’t need to make sure this was true across all languages. In the case of a high-traffic language, like French or Chinese, they could countenance virtually no slowdown. For something more obscure, they knew that users wouldn’t be so scared off by a slight delay if they were getting better quality. They just wanted to prevent people from giving up and switching over to some competitor’s service.
Schuster, for his part, admitted he just didn’t know if they ever could make it fast enough. He remembers a conversation in the microkitchen during which he turned to Chen and said, “There must be something we don’t know to make it fast enough, but I don’t know what it could be.”
He did know, though, that they needed more computers — “G.P.U.s,” graphics processors reconfigured for neural networks — for training.
Hughes went to Schuster to ask what he thought. “Should we ask for a thousand G.P.U.s?”
Schuster said, “Why not 2,000?”

In the more distant, speculative future, machine translation was perhaps the first step toward a general computational facility with human language.

Ten days later, they had the additional 2,000 processors.
By April, the original lineup of three had become more than 30 people — some of them, like Le, on the Brain side, and many from Translate. In May, Hughes assigned a kind of provisional owner to each language pair, and they all checked their results into a big shared spreadsheet of performance evaluations. At any given time, at least 20 people were running their own independent weeklong experiments and dealing with whatever unexpected problems came up. One day a model, for no apparent reason, started taking all the numbers it came across in a sentence and discarding them. There were months when it was all touch and go. “People were almost yelling,” Schuster said.
By late spring, the various pieces were coming together. The team introduced something called a “word-piece model,” a “coverage penalty,” “length normalization.” Each part improved the results, Schuster says, by maybe a few percentage points, but in aggregate they had significant effects. Once the model was standardized, it would be only a single multilingual model that would improve over time, rather than the 150 different models that Translate currently used. Still, the paradox — that a tool built to further generalize, via learning machines, the process of automation required such an extraordinary amount of concerted human ingenuity and effort — was not lost on them. So much of what they did was just gut. How many neurons per layer did you use? 1,024 or 512? How many layers? How many sentences did you run through at a time? How long did you train for?
“We did hundreds of experiments,” Schuster told me, “until we knew that we could stop the training after one week. You’re always saying: When do we stop? How do I know I’m done? You never know you’re done. The machine-learning mechanism is never perfect. You need to train, and at some point you have to stop. That’s the very painful nature of this whole system. It’s hard for some people. It’s a little bit an art — where you put your brush to make it nice. It comes from just doing it. Some people are better, some worse.”
By May, the Brain team understood that the only way they were ever going to make the system fast enough to implement as a product was if they could run it on T.P.U.s, the special-purpose chips that Dean had called for. As Chen put it: “We did not even know if the code would work. But we did know that without T.P.U.s, it definitely wasn’t going to work.” He remembers going to Dean one on one to plead, “Please reserve something for us.” Dean had reserved them. The T.P.U.s, however, didn’t work right out of the box. Wu spent two months sitting next to someone from the hardware team in an attempt to figure out why. They weren’t just debugging the model; they were debugging the chip. The neural-translation project would be proof of concept for the whole infrastructural investment.
One Wednesday in June, the meeting in Quartz Lake began with murmurs about a Baidu paper that had recently appeared on the discipline’s chief online forum. Schuster brought the room to order. “Yes, Baidu came out with a paper. It feels like someone looking through our shoulder — similar architecture, similar results.” The company’s BLEU scores were essentially what Google achieved in its internal tests in February and March. Le didn’t seem ruffled; his conclusion seemed to be that it was a sign Google was on the right track. “It is very similar to our system,” he said with quiet approval.
The Google team knew that they could have published their results earlier and perhaps beaten their competitors, but as Schuster put it: “Launching is more important than publishing. People say, ‘Oh, I did something first,’ but who cares, in the end?”
This did, however, make it imperative that they get their own service out first and better. Hughes had a fantasy that they wouldn’t even inform their users of the switch. They would just wait and see if social media lit up with suspicions about the vast improvements.
“We don’t want to say it’s a new system yet,” he told me at 5:36 p.m. two days after Labor Day, one minute before they rolled out Chinese-to-English to 10 percent of their users, without telling anyone. “We want to make sure it works. The ideal is that it’s exploding on Twitter: ‘Have you seen how awesome Google Translate got?’ ”
8. A Celebration
The only two reliable measures of time in the seasonless Silicon Valley are the rotations of seasonal fruit in the microkitchens — from the pluots of midsummer to the Asian pears and Fuyu persimmons of early fall — and the zigzag of technological progress. On an almost uncomfortably warm Monday afternoon in late September, the team’s paper was at last released. It had an almost comical 31 authors. The next day, the members of Brain and Translate gathered to throw themselves a little celebratory reception in the Translate microkitchen. The rooms in the Brain building, perhaps in homage to the long winters of their diaspora, are named after Alaskan locales; the Translate building’s theme is Hawaiian.
The Hawaiian microkitchen has a slightly grainy beach photograph on one wall, a small lei-garlanded thatched-hut service counter with a stuffed parrot at the center and ceiling fixtures fitted to resemble paper lanterns. Two sparse histograms of bamboo poles line the sides, like the posts of an ill-defended tropical fort. Beyond the bamboo poles, glass walls and doors open onto rows of identical gray desks on either side. That morning had seen the arrival of new hooded sweatshirts to honor 10 years of Translate, and many team members went over to the party from their desks in their new gear. They were in part celebrating the fact that their decade of collective work was, as of that day, en route to retirement. At another institution, these new hoodies might thus have become a costume of bereavement, but the engineers and computer scientists from both teams all seemed pleased.

‘It was like swimming in a big sea of mud, and you can only see this far.’ Schuster held out his hand eight inches in front of his chest.

Google’s neural translation was at last working. By the time of the party, the company’s Chinese-English test had already processed 18 million queries. One engineer on the Translate team was running around with his phone out, trying to translate entire sentences from Chinese to English using Baidu’s alternative. He crowed with glee to anybody who would listen. “If you put in more than two characters at once, it times out!” (Baidu says this problem has never been reported by users.)
When word began to spread, over the following weeks, that Google had introduced neural translation for Chinese to English, some people speculated that it was because that was the only language pair for which the company had decent results. Everybody at the party knew that the reality of their achievement would be clear in November. By then, however, many of them would be on to other projects.
Hughes cleared his throat and stepped in front of the tiki bar. He wore a faded green polo with a rumpled collar, lightly patterned across the midsection with dark bands of drying sweat. There had been last-minute problems, and then last-last-minute problems, including a very big measurement error in the paper and a weird punctuation-related bug in the system. But everything was resolved — or at least sufficiently resolved for the moment. The guests quieted. Hughes ran efficient and productive meetings, with a low tolerance for maundering or side conversation, but he was given pause by the gravity of the occasion. He acknowledged that he was, perhaps, stretching a metaphor, but it was important to him to underline the fact, he began, that the neural translation project itself represented a “collaboration between groups that spoke different languages.”
Their neural-translation project, he continued, represented a “step function forward” — that is, a discontinuous advance, a vertical leap rather than a smooth curve. The relevant translation had been not just between the two teams but from theory into reality. He raised a plastic demi-flute of expensive-looking Champagne.
“To communication,” he said, “and cooperation!”
The engineers assembled looked around at one another and gave themselves over to little circumspect whoops and applause.
Jeff Dean stood near the center of the microkitchen, his hands in his pockets, shoulders hunched slightly inward, with Corrado and Schuster. Dean saw that there was some diffuse preference that he contribute to the observance of the occasion, and he did so in a characteristically understated manner, with a light, rapid, concise addendum.
What they had shown, Dean said, was that they could do two major things at once: “Do the research and get it in front of, I dunno, half a billion people.”
Everyone laughed, not because it was an exaggeration but because it wasn’t.

Epilogue: Machines Without Ghosts
Perhaps the most famous historic critique of artificial intelligence, or the claims made on its behalf, implicates the question of translation. The Chinese Room argument was proposed in 1980 by the Berkeley philosopher John Searle. In Searle’s thought experiment, a monolingual English speaker sits alone in a cell. An unseen jailer passes him, through a slot in the door, slips of paper marked with Chinese characters. The prisoner has been given a set of tables and rules in English for the composition of replies. He becomes so adept with these instructions that his answers are soon “absolutely indistinguishable from those of Chinese speakers.” Should the unlucky prisoner be said to “understand” Chinese? Searle thought the answer was obviously not. This metaphor for a computer, Searle later wrote, exploded the claim that “the appropriately programmed digital computer with the right inputs and outputs would thereby have a mind in exactly the sense that human beings have minds.”
For the Google Brain team, though, or for nearly everyone else who works in machine learning in Silicon Valley, that view is entirely beside the point. This doesn’t mean they’re just ignoring the philosophical question. It means they have a fundamentally different view of the mind. Unlike Searle, they don’t assume that “consciousness” is some special, numinously glowing mental attribute — what the philosopher Gilbert Ryle called the “ghost in the machine.” They just believe instead that the complex assortment of skills we call “consciousness” has randomly emerged from the coordinated activity of many different simple mechanisms. The implication is that our facility with what we consider the higher registers of thought is no different in kind from what we’re tempted to perceive as the lower registers. Logical reasoning, on this account, is seen as a lucky adaptation; so is the ability to throw and catch a ball. Artificial intelligence is not about building a mind; it’s about the improvement of tools to solve problems. As Corrado said to me on my very first day at Google, “It’s not about what a machine ‘knows’ or ‘understands’ but what it ‘does,’ and — more importantly — what it doesn’t do yet.”
Where you come down on “knowing” versus “doing” has real cultural and social implications. At the party, Schuster came over to me to express his frustration with the paper’s media reception. “Did you see the first press?” he asked me. He paraphrased a headline from that morning, blocking it word by word with his hand as he recited it: GOOGLE SAYS A.I. TRANSLATION IS INDISTINGUISHABLE FROM HUMANS’. Over the final weeks of the paper’s composition, the team had struggled with this; Schuster often repeated that the message of the paper was “It’s much better than it was before, but not as good as humans.” He had hoped it would be clear that their efforts weren’t about replacing people but helping them.
And yet the rise of machine learning makes it more difficult for us to carve out a special place for ourselves. If you believe, with Searle, that there is something special about human “insight,” you can draw a clear line that separates the human from the automated. If you agree with Searle’s antagonists, you can’t. It is understandable why so many people cling fast to the former view. At a 2015 M.I.T. conference about the roots of artificial intelligence, Noam Chomsky was asked what he thought of machine learning. He pooh-poohed the whole enterprise as mere statistical prediction, a glorified weather forecast. Even if neural translation attained perfect functionality, it would reveal nothing profound about the underlying nature of language. It could never tell you if a pronoun took the dative or the accusative case. This kind of prediction makes for a good tool to accomplish our ends, but it doesn’t succeed by the standards of furthering our understanding of why things happen the way they do. A machine can already detect tumors in medical scans better than human radiologists, but the machine can’t tell you what’s causing the cancer.
Then again, can the radiologist?
Medical diagnosis is one field most immediately, and perhaps unpredictably, threatened by machine learning. Radiologists are extensively trained and extremely well paid, and we think of their skill as one of professional insight — the highest register of thought. In the past year alone, researchers have shown not only that neural networks can find tumors in medical images much earlier than their human counterparts but also that machines can even make such diagnoses from the texts of pathology reports. What radiologists do turns out to be something much closer to predictive pattern-matching than logical analysis. They’re not telling you what caused the cancer; they’re just telling you it’s there.

Once you’ve built a robust pattern-matching apparatus for one purpose, it can be tweaked in the service of others. One Translate engineer took a network he put together to judge artwork and used it to drive an autonomous radio-controlled car. A network built to recognize a cat can be turned around and trained on CT scans — and on infinitely more examples than even the best doctor could ever review. A neural network built to translate could work through millions of pages of documents of legal discovery in the tiniest fraction of the time it would take the most expensively credentialed lawyer. The kinds of jobs taken by automatons will no longer be just repetitive tasks that were once — unfairly, it ought to be emphasized — associated with the supposed lower intelligence of the uneducated classes. We’re not only talking about three and a half million truck drivers who may soon lack careers. We’re talking about inventory managers, economists, financial advisers, real estate agents. What Brain did over nine months is just one example of how quickly a small group at a large company can automate a task nobody ever would have associated with machines.
The most important thing happening in Silicon Valley right now is not disruption. Rather, it’s institution-building — and the consolidation of power — on a scale and at a pace that are both probably unprecedented in human history. Brain has interns; it has residents; it has “ninja” classes to train people in other departments. Everywhere there are bins of free bike helmets, and free green umbrellas for the two days a year it rains, and little fruit salads, and nap pods, and shared treadmill desks, and massage chairs, and random cartons of high-end pastries, and places for baby-clothes donations, and two-story climbing walls with scheduled instructors, and reading groups and policy talks and variegated support networks. The recipients of these major investments in human cultivation — for they’re far more than perks for proles in some digital salt mine — have at hand the power of complexly coordinated servers distributed across 13 data centers on four continents, data centers that draw enough electricity to light up large cities.

But even enormous institutions like Google will be subject to this wave of automation; once machines can learn from human speech, even the comfortable job of the programmer is threatened. As the party in the tiki bar was winding down, a Translate engineer brought over his laptop to show Hughes something. The screen swirled and pulsed with a vivid, kaleidoscopic animation of brightly colored spheres in long looping orbits that periodically collapsed into nebulae before dispersing once more.
Hughes recognized what it was right away, but I had to look closely before I saw all the names — of people and files. It was an animation of the history of 10 years of changes to the Translate code base, every single buzzing and blooming contribution by every last team member. Hughes reached over gently to skip forward, from 2006 to 2008 to 2015, stopping every once in a while to pause and remember some distant campaign, some ancient triumph or catastrophe that now hurried by to be absorbed elsewhere or to burst on its own. Hughes pointed out how often Jeff Dean’s name expanded here and there in glowing spheres.

Hughes called over Corrado, and they stood transfixed. To break the spell of melancholic nostalgia, Corrado, looking a little wounded, looked up and said, “So when do we get to delete it?”
“Don’t worry about it,” Hughes said. “The new code base is going to grow. Everything grows.”
Gideon Lewis-Kraus is a writer at large for the magazine and a fellow at New America.

Appendix

Referenced here: Google Translate, Google Brain, Jun Rekimoto, Sundar Pichai, Jeff Dean, Andrew Ng, Baidu, Project Marvin, Geoffrey Hinton, Max Planck Institute for Biological Cybernetics, Quoc Le, MacDuff Hughes.

Related articles:
– What Google Learned From Its Quest to Build the Perfect Team (The Work Issue, Feb. 25, 2016)
– When A.I. Matures, It May Call Jürgen Schmidhuber ‘Dad’ (Nov. 27, 2016)
Arts Education

Steven J. Tepper is the dean of the Herberger Institute for Design and the Arts at Arizona State University, the nation’s largest comprehensive design and arts school at a research university. He was the keynote speaker today at the annual luncheon of the Metropolitan Atlanta Art Fund.

He had some provocative data to share, drawn from SNAAP (the Strategic National Arts Alumni Project).

His context was the explosion of arts not-for-profits – from 300 in the 1950s to over 130,000 today.

Dr. Tepper is convinced that education in the arts is poorly understood, and has data to prove it. Too many people, he says, are skeptical about the careers that are possible from an arts education. In fact, many of the competencies developed in an arts education are precisely what employers in the 21st century are looking for – especially creativity. His conclusions:

– “The MFA is the new MBA.”
– “The ‘copyright industries’ are booming – they are 3X the size of the construction industry.”
– “The 21st century needs ‘design thinking.’”

After the luncheon, I looked him up at ASU. Here is what he has to say – in his own words:

Welcome to the Herberger Institute for Design and the Arts, the largest comprehensive design and arts school in the nation, located within a dynamic 21st-century research university.

With 4,700 students, more than 675 faculty and faculty associates, 135 degrees and a tradition of top-ranked programs, we are committed to redefining the 21st-century design and arts school. Our college is built on a combination of disciplines unlike any other program in the nation, comprising schools of art; arts, media + engineering; design; film, dance and theatre; and music; as well as the ASU Art Museum.

The Institute is dedicated to the following design principles:

Creativity is a core 21st-century competency. Our graduates develop the ability to be generative and enterprising, work collaboratively within and across artistic fields, and generate non-routine solutions to complex problems. With this broad exposure to creative thinking and problem solving, our graduates are well prepared to lead in every arena of our economy, society and culture.

Design and the arts are critical resources for transforming our society. Artists must be embedded in their communities and dedicate their creative energy and talent to building, reimagining and sustaining our world. Design and the arts must be socially relevant and never viewed as extras or as grace notes. The Herberger Institute is committed to placing artists and arts-trained graduates at the center of public life.
The Herberger Institute is committed to enterprise and entrepreneurship. For most college graduates today, the future of work is unpredictable, non-linear and constantly evolving. A recent study found that 47 percent of current occupations will likely not exist in the next few decades. At the Herberger Institute, our faculty, students and graduates are inventing the jobs and the businesses of the future; reimagining how art and culture gets made and distributed; and coming up with new platforms and technology for the exchange of culture and the enrichment of the human experience. The legendary author and expert on city life Jane Jacobs talks about the abundance of “squelchers” — parents, educators, managers and leaders who tend to say no to new ideas. At the Herberger Institute, there are no squelchers. We embrace the cardinal rule of improvisation — always say: “Yes, and…”
Every person, regardless of social background, deserves an equal chance to help tell our nation’s and our world’s stories. Our creative expression defines who we are, what we aspire to and how we hope to live together. At the Herberger Institute, we are committed to projecting all voices – to providing an affordable education to every student who has the talent and the desire to boldly add their creative voice to the world’s evolving story.

Effectiveness requires excellence. We know that our ability to solve problems, build enterprises and create compelling and socially relevant design and art requires high levels of mastery. By being the best in our chosen fields, we can stretch ourselves and our talents to make a difference in the world.

Recently, as part of a weekly installation on campus, a Herberger Institute student hand-lettered the slogan “Here’s to the dreamers and the doers” in chalk on an outdoor blackboard, and we were able to use this for the incoming freshman class t-shirt. Whether you are an architect, designer, artist, performer, filmmaker, media engineer or creative scholar, the Herberger Institute is a place to dream. But unlike the misrepresentation of the artist and scholar as lost in a cloud, our faculty and students “make stuff happen” and leave their well-chiseled mark on the world. Come tour our concert and performance halls, art and design studios, exhibition spaces, dance studios, scene shops, classrooms, clinics and digital culture labs, and you will see the power of dreamers and doers.
If you are reading this message, you are implicated as a potential collaborator. Bring us your talents, your ideas and your passion — we will dream and do great things together.
Enthusiastically yours,

Steven J. Tepper

Dean
Herberger Institute for Design and the Arts
Arizona State University

Corridors

This idea of corridors has occurred to me over the last few months. I know of no references for the way of thinking that I will try to describe here. I am sure these references exist, but I do not know where they are.

Applications of Corridors
Corridors have application in many fields:

– in law, and its sister concept of regulation;
– in design, and its subset applications of architecture, landscape architecture, interior design, and the fine arts, such as drama, art, music, and dance;
– in policy, whether corporate policy or global, national, regional, and local public policy (bodies of legislation, with their accompanying case law and precedent, are a broad variant on this idea);
– in education, when schools ask students to specify a major, to join a department, or to specialize in a field;
– and in careers, when individuals define their own professional corridors, e.g. in engineering, software design, medicine, law, or business.

The Core Idea of Corridors
The core idea is this: productivity is a function of well-designed corridors. Design a corridor that is too narrow, and productivity is stifled. Design a corridor too wide, and productivity suffers from too many permutations and combinations of possibilities.

If a given project is vaguely defined, the project managers’ progress is limited while they search for a path forward that makes sense. Once a clear path forward is found, it leads to progress in leaps and bounds. If no path forward is found among the myriad possibilities, project teams flounder and grow frustrated.
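One toy way to picture this claim — entirely my own illustration, with invented numbers rather than anything measured — is to treat corridor “width” as the number of options kept in play: the value of extra freedom grows slowly, while the cost of sorting through all the combinations grows much faster, so net productivity peaks at some middle width.

import math

# Toy illustration only: the functions and constants are invented to show the
# shape of the claim, not to measure anything real.
def net_productivity(width: int) -> float:
    value = math.log(1 + width)        # diminishing returns from extra freedom
    search_cost = 0.02 * (2 ** width)  # combinatorial cost of too many paths
    return value - search_cost

for w in range(1, 9):
    print(f"corridor width {w}: net productivity {net_productivity(w):+.2f}")

Run as written, the toy peaks at a middle width and falls off on both sides — too narrow stifles, too wide overwhelms — which is all the sketch is meant to convey.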

Corridors in Law
A law is a corridor hammered out by the legislative body. Designed well, a law specifies the corridor within which activity is “legal”. Conversely, it specifies which activity is “illegal”. Along with the idea of illegal come the sanctions applied to those unfortunate enough to be caught doing something illegal.

Corridors in Regulation: the Sister Concept to Corridors in Law
A regulation reflects the desire of a law-making body to avoid making the law itself too narrow (where the language of the law effectively gets into counter-productive micro-management). It reflects the delegation of authority from the law-making body to an agency. The agency is charged with coming up with “regulations” that define the tactics of the law. Done well, regulations always remain within the corridors outlined in the law; they reflect the intention of the law and are an executional element of it. Done poorly, regulations stray beyond the corridors outlined in the law, and can serve to confuse the public and frustrate the law-makers.

An example of Corridors in Law and Regulation: Social Security
FDR is known for making Social Security the law of the land. The US Congress, in adopting Social Security, effectively defined a corridor for aging in the US. From its adoption forward, older citizens who qualify for Social Security are entitled to a “safety net” of income. Because Congress recognized that this entitlement would require dynamic adjustment over time, it authorized the Social Security Administration to publish regulations that would tactically implement, and to adjust over time, the intentions of the law.

Corridors in Design
Creatives focus. The really great ones define corridors for their work. The corridors are broad enough to be highly motivating to the creative – who yearns for freedom of thought and expression. At the same time, they are narrow enough to allow the creative to be highly productive, by applying and reapplying their creative concepts within a relatively narrow scope.

An example of Corridors in Design: Steve Jobs and Apple
Steve Jobs and Apple offer a brilliant example of choosing a corridor for creativity and productivity. Apple defined the personal computer as its corridor – with stunning success. As it achieved preeminence in this field, Apple was able to see a larger corridor, which the world now knows as the iPod, iPad, iPhone and – now – the Apple Watch. Are these new consumer appliances different from a “personal computer” – the corridor of the original vision? I would argue that they are not: they are applications of the personal-computer corridor, brilliantly subsuming appliances from other corridors into the corridor of personal computing.

Corridors in Policy
I mentioned that policy is an area where the notion of properly chosen, well-defined corridors can lead to high productivity. Corporate, global, national, regional, and local policy-makers must constantly struggle to define corridors within which the citizens and institutions in their sphere of influence must operate.

Urban Policy as an Example of Corridors in Policy
Take urban policy as an example. Urban design policies are found in comprehensive plans and zoning ordinances. These plans and regulations reflect policies about where a given city wants to grow. How much growth should be industrial, commercial, and residential? Which geographies are slated for each? Where does mixed use fit? What procedures allow for changes over time?

Corridors in Education
Education is probably the most classic application of the “corridor” idea. It is impossible to know everything, so educators attempt to guide students in narrowing their field of study. An undergraduate education might well define “liberal arts” or “engineering” as a corridor of study. A graduate program might define “public administration” or “mechanical engineering” as a corridor. Unfortunately, however, there are far too many examples of students getting lost in a corridor as large as “liberal arts”. Out of frustration, parents and students alike may well force a narrower corridor. Chosen well, such a narrower corridor, e.g. history, can focus the mind and increase productivity and creativity. At the same time, there are far too many examples of people defining an educational corridor that is too narrow, e.g. automotive mechanics.

Example of Corridors in Education
90%+ of US students follow a corridor path that is well-known. They might, for example, take liberal arts as an undergraduate and major in a science, a social science, a language, or the fine arts. But US students may well have their sights set on graduate school, and so they stay very broad in their undergraduate courses so as not to limit their choices later. A prospective law or medical student does well to stay broad as an undergraduate: the medicine corridor in graduate school naturally expects more science courses, while the law corridor expects high proficiency in writing, communication, and analysis.

Corridors in Careers
What is my career path? Virtually everyone struggles with this question. It is a corridor question, and it brings with it the same perils as other corridor choices. Choose a corridor that is too narrow, e.g. cost accounting, and the person runs a real risk that opportunities will rapidly fall outside the chosen corridor. The result is career confusion, as job choices can seem endless and dead-end jobs are everywhere. At the same time, choose a career corridor that is too broad, e.g. systems design, and the person runs a real risk that no employer trusts that the applicant is qualified for the specific job that is available.

Example of Corridors in Careers
Sales is a reasonably common example of a career corridor filled with endless possibilities, and yet it is very specific in the eyes of an employer. “Show me proof that you can sell,” they might say. And with that proof in hand, they may well not care whether the person has sold one specific widget, software package, product, or service.

Greece made simple

So Greece owes €310 billion to a range of lenders – and note that a further €107 billion was written off by private lenders in 2012, which brings the total Greek debt burden (write-off included) to roughly €417 billion, or almost half a trillion:

[Chart: Greek lenders and amounts lent]

Note that the IMF is a relatively small lender and the “Greek Public Sector” and the EU are large.

The story is a really sad one. Maybe it traces back to 1981, when Greece joined what is now the EU. But arguably the real beginning is 2001, when Greece joined the Eurozone. As the newest member of the Eurozone, the country was fortunate (???) to join just as the economy was picking up steam. The go-go years were 2001-2007, when lenders poured money into this promising new member – almost half a trillion dollars!

Thus it is that Greece was the hardest hit when the recession arrived in late 2007, and it has been paying a steep price for the massive credit splurge of 2001-2007 ever since.

So – – – in summary:

As with so many stories, this one has two sides:

1) a poor country fighting to get resources – to get out of poverty and build a better life for its citizens (don’t believe anyone who starts their story here with “those corrupt Greek politicians”);
2) rich countries who love being bankers – extending their reach, influence, and income while feeling good about trying to help their poor neighbors (don’t believe anyone who starts their story with “those greedy German bankers…”).

OK, OK, so Greek pride got the best of them when they borrowed almost half a trillion dollars! Twelve million people … borrowing half a trillion dollars?!

OK, OK, so we rich people got a little carried away when those nice Greeks kept wanting to borrow more … so what’s another 100 million when everyone is feeling so fine?

A few sources explain this in ways I trust:

– NYT: http://www.nytimes.com/interactive/2015/business/international/greece-debt-crisis-euro.html?_r=0
– Financial Times coverage
– Fortune Magazine Q&A (reproduced below)

Everything to Know About Greece’s Economic Crisis
Geoffrey Smith / Fortune June 29, 2015

How Greece and the eurozone ended up in this mess, and where they go from here

Q. How did we get here?

A. Long story. Greece’s economy was never strong enough to share a currency with Germany’s, but both sides pretended it was, as it satisfied Greek pride and Germany’s ambitions (suffused with war guilt) of building an ‘Ever Closer Union’ in a new, democratic Europe. Reckless lending by French and German banks allowed the Greeks to finance widening budget and current account deficits for six years, but private capital flows dried up sharply after the 2008 crisis, forcing Greece to seek help from Eurozone governments and the International Monetary Fund in 2010.

Q. But all that was 5 years ago. How has Greece not managed to turn the corner since then, when every other Eurozone country that took a bailout has?

A. Greece was the first country to ask for help, and the Eurozone was totally unprepared for it on all levels–political, technological, emotional, whatever. The IMF, too, had no experience of dealing with a country in a monetary union. Consequently, the bailout was badly conceived (a point admitted at the weekend by Dominique Strauss-Kahn, who was head of the IMF at the time), focusing too much on the budget balance and not enough on fixing Greece’s uniquely dysfunctional state apparatus. In a normal recession, government spending can offset the negative effects of private demand contracting, but in this case, the budgetary austerity drove Greece into a vicious spiral. The economy contracted by 25% between 2010 and 2014, fatally weakening Greece’s ability ever to repay its debts.

Q. But didn’t Greece already get a load of debt relief?

A. Yes, €107 billion of it in a 2012 debt restructuring, the biggest in history. But it was only private creditors–i.e., bondholders–who took the hit. The Eurozone and IMF refused to write down their claims (although they did soften the repayment terms), and the new bailout agreement was based on more assumptions (since exposed as too rose-tinted) that Greece could grow itself out of its troubles. The economy continued to shrink in absolute terms and unemployment shot over 25%, forcing an ever bigger burden of taxation onto fewer and fewer shoulders. That created the political environment for this year’s crisis.

Q. You make it sound like this year is different from the previous four…

A. Victory for the radical left-wing Syriza party at elections in January completely changed the political dynamic. Previous governments had come from the political mainstream, and reluctantly played along with rules dictated in Brussels and, indirectly, Berlin. Syriza didn’t have any truck with that. It has campaigned for a 50% write-off of its debts and a relaxation of its budget targets. It has been openly confrontational and reversed key reforms made by the previous governments, despite promising the creditors in February that it wouldn’t. Syriza’s tactics–embodied by Finance Minister Yanis Varoufakis, an economics professor specializing in Game Theory–have been a gamble that the Eurozone would rather make concessions than risk the economic havoc caused by a Greek exit.

Q. That gamble has failed, hasn’t it?

A. As of today, yes. It’s Greece, yet again, which is bearing the burden of everything: the economy had shown signs of bottoming out before Syriza came to power, with business sentiment at its highest in seven years after a very good tourist season in 2014. But the brinkmanship has destroyed confidence, and caused a sharp rise in government arrears and deposit flight, capped now by capital controls and a week-long closure of the banking system. Eurozone financial markets aren’t taking it well, but the prospect of a ‘shock and awe’ intervention by the ECB is keeping the sell-off within limits Monday morning. A real “Grexit” may yet wreak havoc on the Eurozone too, but it’s unlikely that Prime Minister Alexis Tsipras will be around that long to reap the political rewards.

Q. Aren’t the creditors to blame too?

A. For sure, there’s plenty of blame to go round. Most people now recognize that the banks that had lent to Greece pre-crisis should have been forced to take more losses in 2009/2010. Now the Eurozone has effectively swapped the private loans for public ones, any debt write-offs have enormous political costs at home. But governments in Germany and elsewhere have made a rod for their own back by being so stubborn. When Greece defaults, they’re going to lose billions anyway, and the cost of their posturing will become clear to taxpayers who have only been told half the story. They have squandered a host of opportunities to manage that loss in a more orderly way. By failing to accommodate more willing (if still inadequate) Greek governments with debt relief earlier, they prepared the ground for Syriza’s rise.

Q. What happens next?

A. Greece will miss a payment to the IMF Tuesday, and its bailout will expire the same day. The ECB seems likely to ignore the default at least until the planned referendum on Sunday, anxious to avoid responsibility for precipitating the total collapse of the financial system. The creditors are hoping the Greek government will capitulate under the pressure, and be replaced by a new ‘government of national unity’. There’s no sign of that happening yet.

Q. But how long can the current situation go on?

A. The banks are closed until July 7, after the referendum. As long as they still have the lifeline of the ECB’s emergency credit facility (over €85 billion), the banks and the government can continue to operate, albeit in a very restricted fashion. But the government is due to repay €3.5 billion in debts to the ECB on July 20, and if it can’t do that, then the ECB will have to accept that the Greek state is bankrupt, and cancel that credit line. At that point, the banks will be insolvent, and it will only be possible to restore their solvency by re-denominating the rest of their liabilities (i.e. deposits) in a new Greek currency.

Q. How, legally, does Greece leave the Eurozone?

A. Nobody knows. Like Cortes burning his boats after arriving in Mexico, the E.U. deliberately chose not to draft rules for that eventuality when it formed its currency union. There are rules for leaving the E.U., but even Syriza doesn’t want to do that. We will be, as Irish Finance Minister Michael Noonan said at the weekend, “in completely uncharted waters.”

They’ll be damned choppy waters, too.


References:

Harvard analysis of Vacation Days