Category Archives: Academics


Quantified Water Movement (QWM)

Think Fitbit for water. The Quantified Water Movement (QWM) is here to stay, with devices that enable real-time monitoring of water quality in streams, rivers, lakes, and oceans for less than $1,000 per device.

The Stroud Water Research Center in Pennsylvania is leading the way, along with other centers of excellence around the world. Stroud has studied water for fifty years and is an elite water-quality research organization, renowned for its globally relevant science and the excellence of its scientists. Find out more at www.stroudcenter.org.

As part of this global leadership in the study of water quality, Stroud is advancing the applied technologies that make up the “quantified water movement”: the real-time monitoring of water quality in streams, rivers, lakes, and oceans.

QWM is very much like the “quantified self movement” (QSM; see the post on QSM). QSM takes full advantage of low-cost sensor and communication technology to “quantify the self”: in other words, I can dramatically advance my understanding of my own well-being in areas like exercise, sleep, and blood glucose levels. This movement has already proven that real-time reporting on personal metrics is possible at very low cost, one person at a time. The Apple Watch and Fitbit are examples of commercial products arising out of QSM.

In the same way, QWM takes full advantage of sensor and communication technology to provide real-time reporting on water quality for a given stream, lake, river, or ocean. While still in a formative stage, QWM draws on well-known advances in sensor, big-data, and data-mining technology to monitor water quality in real time. Best of all, this applied technology has now reached an affordable price point.

For less than $1,000 per device, it is now possible to fully monitor a body of water and report the findings as a comprehensive dataset. Many leaders in the field believe that a price below $100 per device is possible very soon.

The applied technology ends up being a simple “data logger” coupled with a simple radio transmitter; a minimal sketch of such a logger appears after the list of metrics below.

Examples of easy-to-measure metrics are:

1. water depth
2. conductivity (measures saltiness or salinity)
3. dissolved oxygen (supports fish and beneficial bacteria)
4. turbidity (a sign of runoff from erosion; cloudy water actually abrades fish and prevents them from finding food)
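
To make the “data logger plus radio transmitter” idea concrete, here is a minimal sketch in Arduino-style C++ of a logger that samples the four metrics above once a minute and emits one comma-separated record per reading. The pin assignments, the crude 0–100 scaling, and the one-minute interval are illustrative assumptions rather than the actual Stroud or EnviroDIY design; a real build would apply per-sensor calibration and write each record to an SD card or radio module instead of the serial port.

// Minimal illustrative data-logger sketch (Arduino-style C++).
// Pin assignments, scaling, and the logging interval are hypothetical
// placeholders, not values from Stroud or EnviroDIY hardware.

const int PIN_DEPTH        = A0;   // pressure transducer -> water depth
const int PIN_CONDUCTIVITY = A1;   // conductivity probe  -> salinity proxy
const int PIN_DISSOLVED_O2 = A2;   // dissolved-oxygen sensor
const int PIN_TURBIDITY    = A3;   // turbidity (cloudiness) sensor

const unsigned long LOG_INTERVAL_MS = 60000UL;   // one reading per minute

// Convert a raw 10-bit ADC reading (0-1023) to a 0-100 scale.
// A real deployment would substitute a per-sensor calibration curve.
float toPercent(int raw) {
  return raw * 100.0 / 1023.0;
}

void setup() {
  Serial.begin(9600);   // stands in for the SD card or radio link
}

void loop() {
  unsigned long timestamp = millis();   // milliseconds since power-up

  float depth        = toPercent(analogRead(PIN_DEPTH));
  float conductivity = toPercent(analogRead(PIN_CONDUCTIVITY));
  float dissolvedO2  = toPercent(analogRead(PIN_DISSOLVED_O2));
  float turbidity    = toPercent(analogRead(PIN_TURBIDITY));

  // One comma-separated record per interval; a real logger would write
  // this line to an SD card and/or transmit it over the radio.
  Serial.print(timestamp);     Serial.print(",");
  Serial.print(depth);         Serial.print(",");
  Serial.print(conductivity);  Serial.print(",");
  Serial.print(dissolvedO2);   Serial.print(",");
  Serial.println(turbidity);

  delay(LOG_INTERVAL_MS);
}

The point is only the shape of the logging loop: read each sensor, stamp the readings with a time, emit one record, and sleep until the next interval.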

Thanks to Stroud, the training is now super simple. In one hour, for example, you can learn the capabilities of this low-cost equipment and the science behind why it matters.

In a two-day training, citizen scientists and civil engineers alike can learn how to program their own data logger, attach sensors to it, and deploy and maintain the equipment in an aquatic environment.

All of this and more is illuminated at www.enviroDIY.org.

Primary Care Best Practice

This post is about two important articles related to Primary Care Best Practice: one by Atul Gawande called “Big Med” and the other from Harvard Medical School about physician burnout.

As usual, Atul tells stories. His stories begin with his positive experience at the Cheesecake Factory and with his mother’s knee replacement surgery.

====================
Article by Atul Gawande: “Big Med” and the Cheesecake Factory
====================
JCR NOTES

Article explores the potential for transferring some of the operational excellence of the Cheesecake Factory to aspects of health care.

He finds it tempting to look for 95% standardization and 5% customization.
He sees lessons in rolling out innovations through test kitchens and training that includes how to train others.
He sees heroes in doctors who push to articulate a standard of care, technology, equipment, or pharmaceuticals.

====================
CREDIT: New Yorker Article by Atul Gawande “Big Med”

Annals of Health Care
August 13, 2012 Issue
Big Med
Restaurant chains have managed to combine quality control, cost control, and innovation. Can health care?

By Atul Gawande

Medicine has long resisted the productivity revolutions that transformed other industries. But the new chains aim to change this.

It was Saturday night, and I was at the local Cheesecake Factory with my two teen-age daughters and three of their friends. You may know the chain: a hundred and sixty restaurants with a catalogue-like menu that, when I did a count, listed three hundred and eight dinner items (including the forty-nine on the “Skinnylicious” menu), plus a hundred and twenty-four choices of beverage. It’s a linen-napkin-and-tablecloth sort of place, but with something for everyone. There’s wine and wasabi-crusted ahi tuna, but there’s also buffalo wings and Bud Light. The kids ordered mostly comfort food—pot stickers, mini crab cakes, teriyaki chicken, Hawaiian pizza, pasta carbonara. I got a beet salad with goat cheese, white-bean hummus and warm flatbread, and the miso salmon.

The place is huge, but it’s invariably packed, and you can see why. The typical entrée is under fifteen dollars. The décor is fancy, in an accessible, Disney-cruise-ship sort of way: faux Egyptian columns, earth-tone murals, vaulted ceilings. The waiters are efficient and friendly. They wear all white (crisp white oxford shirt, pants, apron, sneakers) and try to make you feel as if it were a special night out. As for the food—can I say this without losing forever my chance of getting a reservation at Per Se?—it was delicious.
The chain serves more than eighty million people per year. I pictured semi-frozen bags of beet salad shipped from Mexico, buckets of precooked pasta and production-line hummus, fish from a box. And yet nothing smacked of mass production. My beets were crisp and fresh, the hummus creamy, the salmon like butter in my mouth. No doubt everything we ordered was sweeter, fattier, and bigger than it had to be. But the Cheesecake Factory knows its customers. The whole table was happy (with the possible exception of Ethan, aged sixteen, who picked the onions out of his Hawaiian pizza).

I wondered how they pulled it off. I asked one of the Cheesecake Factory line cooks how much of the food was premade. He told me that everything’s pretty much made from scratch—except the cheesecake, which actually is from a cheesecake factory, in Calabasas, California.
I’d come from the hospital that day. In medicine, too, we are trying to deliver a range of services to millions of people at a reasonable cost and with a consistent level of quality. Unlike the Cheesecake Factory, we haven’t figured out how. Our costs are soaring, the service is typically mediocre, and the quality is unreliable. Every clinician has his or her own way of doing things, and the rates of failure and complication (not to mention the costs) for a given service routinely vary by a factor of two or three, even within the same hospital.

It’s easy to mock places like the Cheesecake Factory—restaurants that have brought chain production to complicated sit-down meals. But the “casual dining sector,” as it is known, plays a central role in the ecosystem of eating, providing three-course, fork-and-knife restaurant meals that most people across the country couldn’t previously find or afford. The ideas start out in élite, upscale restaurants in major cities. You could think of them as research restaurants, akin to research hospitals. Some of their enthusiasms—miso salmon, Chianti-braised short ribs, flourless chocolate espresso cake—spread to other high-end restaurants. Then the casual-dining chains reëngineer them for affordable delivery to millions. Does health care need something like this?

Big chains thrive because they provide goods and services of greater variety, better quality, and lower cost than would otherwise be available. Size is the key. It gives them buying power, lets them centralize common functions, and allows them to adopt and diffuse innovations faster than they could if they were a bunch of small, independent operations. Such advantages have made Walmart the most successful retailer on earth. Pizza Hut alone runs one in eight pizza restaurants in the country. The Cheesecake Factory’s major competitor, Darden, owns Olive Garden, LongHorn Steakhouse, Red Lobster, and the Capital Grille; it has more than two thousand restaurants across the country and employs more than a hundred and eighty thousand people. We can bristle at the idea of chains and mass production, with their homogeneity, predictability, and constant genuflection to the value-for-money god. Then you spend a bad night in a “quaint” “one of a kind” bed-and-breakfast that turns out to have a manic, halitoxic innkeeper who can’t keep the hot water running, and it’s right back to the Hyatt.

Medicine, though, had held out against the trend. Physicians were always predominantly self-employed, working alone or in small private-practice groups. American hospitals tended to be community-based. But that’s changing. Hospitals and clinics have been forming into large conglomerates. And physicians—facing escalating demands to lower costs, adopt expensive information technology, and account for performance—have been flocking to join them. According to the Bureau of Labor Statistics, only a quarter of doctors are self-employed—an extraordinary turnabout from a decade ago, when a majority were independent. They’ve decided to become employees, and health systems have become chains.

I’m no exception. I am an employee of an academic, nonprofit health system called Partners HealthCare, which owns the Brigham and Women’s Hospital and the Massachusetts General Hospital, along with seven other hospitals, and is affiliated with dozens of clinics around eastern Massachusetts. Partners has sixty thousand employees, including six thousand doctors. Our competitors include CareGroup, a system of five regional hospitals, and a new for-profit chain called the Steward Health Care System.

Steward was launched in late 2010, when Cerberus—the multibillion-dollar private-investment firm—bought a group of six failing Catholic hospitals in the Boston area for nine hundred million dollars. Many people were shocked that the Catholic Church would allow a corporate takeover of its charity hospitals. But the hospitals, some of which were more than a century old, had been losing money and patients, and Cerberus is one of those firms which specialize in turning around distressed businesses.

Cerberus has owned controlling stakes in Chrysler and GMAC Financing and currently has stakes in Albertsons grocery stores, one of Austria’s largest retail bank chains, and the Freedom Group, which it built into one of the biggest gun-and-ammunition manufacturers in the world. When it looked at the Catholic hospitals, it saw another opportunity to create profit through size and efficiency. In the past year, Steward bought four more Massachusetts hospitals and made an offer to buy six financially troubled hospitals in south Florida. It’s trying to create what some have called the Southwest Airlines of health care—a network of high-quality hospitals that would appeal to a more cost-conscious public.

Steward’s aggressive growth has made local doctors like me nervous. But many health systems, for-profit and not-for-profit, share its goal: large-scale, production-line medicine. The way medical care is organized is changing—because the way we pay for it is changing.
Historically, doctors have been paid for services, not results. In the eighteenth century B.C., Hammurabi’s code instructed that a surgeon be paid ten shekels of silver every time he performed a procedure for a patrician—opening an abscess or treating a cataract with his bronze lancet. It also instructed that if the patient should die or lose an eye, the surgeon’s hands be cut off. Apparently, the Mesopotamian surgeons’ lobby got this results clause dropped. Since then, we’ve generally been paid for what we do, whatever happens. The consequence is the system we have, with plenty of individual transactions—procedures, tests, specialist consultations—and uncertain attention to how the patient ultimately fares.

Health-care reforms—public and private—have sought to reshape that system. This year, my employer’s new contracts with Medicare, BlueCross BlueShield, and others link financial reward to clinical performance. The more the hospital exceeds its cost-reduction and quality-improvement targets, the more money it can keep. If it misses the targets, it will lose tens of millions of dollars. This is a radical shift. Until now, hospitals and medical groups have mainly had a landlord-tenant relationship with doctors. They offered us space and facilities, but what we tenants did behind closed doors was our business. Now it’s their business, too.

The theory the country is about to test is that chains will make us better and more efficient. The question is how. To most of us who work in health care, throwing a bunch of administrators and accountants into the mix seems unlikely to help. Good medicine can’t be reduced to a recipe.

Then again neither can good food: every dish involves attention to detail and individual adjustments that require human judgment. Yet, some chains manage to achieve good, consistent results thousands of times a day across the entire country. I decided to get inside one and find out how they did it.

Dave Luz is the regional manager for the eight Cheesecake Factories in the Boston area. He oversees operations that bring in eighty million dollars in yearly revenue, about as much as a medium-sized hospital. Luz (rhymes with “fuzz”) is forty-seven, and had started out in his twenties waiting tables at a Cheesecake Factory restaurant in Los Angeles. He was writing screenplays, but couldn’t make a living at it. When he and his wife hit thirty and had their second child, they came back east to Boston to be closer to family. He decided to stick with the Cheesecake Factory. Luz rose steadily, and made a nice living. “I wanted to have some business skills,” he said—he started a film-production company on the side—“and there was no other place I knew where you could go in, know nothing, and learn top to bottom how to run a business.”

To show me how a Cheesecake Factory works, he took me into the kitchen of his busiest restaurant, at Prudential Center, a shopping and convention hub. The kitchen design is the same in every restaurant, he explained. It’s laid out like a manufacturing facility, in which raw materials in the back of the plant come together as a finished product that rolls out the front. Along the back wall are the walk-in refrigerators and prep stations, where half a dozen people stood chopping and stirring and mixing. The next zone is where the cooking gets done—two parallel lines of countertop, forty-some feet long and just three shoe-lengths apart, with fifteen people pivoting in place between the stovetops and grills on the hot side and the neatly laid-out bins of fixings (sauces, garnishes, seasonings, and the like) on the cold side. The prep staff stock the pullout drawers beneath the counters with slabs of marinated meat and fish, serving-size baggies of pasta and crabmeat, steaming bowls of brown rice and mashed potatoes. Basically, the prep crew handles the parts, and the cooks do the assembly.

Computer monitors positioned head-high every few feet flashed the orders for a given station. Luz showed me the touch-screen tabs for the recipe for each order and a photo showing the proper presentation. The recipe has the ingredients on the left part of the screen and the steps on the right. A timer counts down to a target time for completion. The background turns from green to yellow as the order nears the target time and to red when it has exceeded it.

I watched Mauricio Gaviria at the broiler station as the lunch crowd began coming in. Mauricio was twenty-nine years old and had worked there eight years. He’d got his start doing simple prep—chopping vegetables—and worked his way up to fry cook, the pasta station, and now the sauté and broiler stations. He bounced in place waiting for the pace to pick up. An order for a “hibachi” steak popped up. He tapped the screen to open the order: medium-rare, no special requests. A ten-minute timer began. He tonged a fat hanger steak soaking in teriyaki sauce onto the broiler and started a nest of sliced onions cooking beside it. While the meat was grilling, other orders arrived: a Kobe burger, a blue-cheese B.L.T. burger, three “old-fashioned” burgers, five veggie burgers, a “farmhouse” burger, and two Thai chicken wraps. Tap, tap, tap. He got each of them grilling.

I brought up the hibachi-steak recipe on the screen. There were instructions to season the steak, sauté the onions, grill some mushrooms, slice the meat, place it on the bed of onions, pile the mushrooms on top, garnish with parsley and sesame seeds, heap a stack of asparagus tempura next to it, shape a tower of mashed potatoes alongside, drop a pat of wasabi butter on top, and serve.

Two things struck me. First, the instructions were precise about the ingredients and the objectives (the steak slices were to be a quarter of an inch thick, the presentation just so), but not about how to get there. The cook has to decide how much to salt and baste, how to sequence the onions and mushrooms and meat so they’re done at the same time, how to swivel from grill to countertop and back, sprinkling a pinch of salt here, flipping a burger there, sending word to the fry cook for the asparagus tempura, all the while keeping an eye on the steak. In producing complicated food, there might be recipes, but there was also a substantial amount of what’s called “tacit knowledge”—knowledge that has not been reduced to instructions.

Second, Mauricio never looked at the instructions anyway. By the time I’d finished reading the steak recipe, he was done with the dish and had plated half a dozen others. “Do you use this recipe screen?” I asked.

“No. I have the recipes right here,” he said, pointing to his baseball-capped head.

He put the steak dish under warming lights, and tapped the screen to signal the servers for pickup. But before the dish was taken away, the kitchen manager stopped to look, and the system started to become clearer. He pulled a clean fork out and poked at the steak. Then he called to Mauricio and the two other cooks manning the grill station.

“Gentlemen,” he said, “this steak is perfect.” It was juicy and pink in the center, he said. “The grill marks are excellent.” The sesame seeds and garnish were ample without being excessive. “But the tower is too tight.” I could see what he meant. The mashed potatoes looked a bit like something a kid at the beach might have molded with a bucket. You don’t want the food to look manufactured, he explained. Mauricio fluffed up the potatoes with a fork.

I watched the kitchen manager for a while. At every Cheesecake Factory restaurant, a kitchen manager is stationed at the counter where the food comes off the line, and he rates the food on a scale of one to ten. A nine is near-perfect. An eight requires one or two corrections before going out to a guest. A seven needs three. A six is unacceptable and has to be redone. This inspection process seemed a tricky task. No one likes to be second-guessed. The kitchen manager prodded gently, being careful to praise as often as he corrected. (“Beautiful. Beautiful!” “The pattern of this pesto glaze is just right.”) But he didn’t hesitate to correct.

“We’re getting sloppy with the plating,” he told the pasta station. He was unhappy with how the fry cooks were slicing the avocado spring rolls. “Gentlemen, a half-inch border on this next time.” He tried to be a coach more than a policeman. “Is this three-quarters of an ounce of Parm-Romano?”

And that seemed to be the spirit in which the line cooks took him and the other managers. The managers had all risen through the ranks. This earned them a certain amount of respect. They in turn seemed respectful of the cooks’ skills and experience. Still, the oversight is tight, and this seemed crucial to the success of the enterprise.

The managers monitored the pace, too—scanning the screens for a station stacking up red flags, indicating orders past the target time, and deciding whether to give the cooks at the station a nudge or an extra pair of hands. They watched for waste—wasted food, wasted time, wasted effort. The formula was Business 101: Use the right amount of goods and labor to deliver what customers want and no more. Anything more is waste, and waste is lost profit.

I spoke to David Gordon, the company’s chief operating officer. He told me that the Cheesecake Factory has worked out a staff-to-customer ratio that keeps everyone busy but not so busy that there’s no slack in the system in the event of a sudden surge of customers. More difficult is the problem of wasted food. Although the company buys in bulk from regional suppliers, groceries are the biggest expense after labor, and the most unpredictable. Everything—the chicken, the beef, the lettuce, the eggs, and all the rest—has a shelf life. If a restaurant were to stock too much, it could end up throwing away hundreds of thousands of dollars’ worth of food. If a restaurant stocks too little, it will have to tell customers that their favorite dish is not available, and they may never come back. Groceries, Gordon said, can kill a restaurant.

The company’s target last year was at least 97.5-per-cent efficiency: the managers aimed at throwing away no more than 2.5 per cent of the groceries they bought, without running out. This seemed to me an absurd target. Achieving it would require knowing in advance almost exactly how many customers would be coming in and what they were going to want, then insuring that the cooks didn’t spill or toss or waste anything. Yet this is precisely what the organization has learned to do. The chain-restaurant industry has produced a field of computer analytics known as “guest forecasting.”

“We have forecasting models based on historical data—the trend of the past six weeks and also the trend of the previous year,” Gordon told me. “The predictability of the business has become astounding.” The company has even learned how to make adjustments for the weather or for scheduled events like playoff games that keep people at home.

A computer program known as Net Chef showed Luz that for this one restaurant food costs accounted for 28.73 per cent of expenses the previous week. It also showed exactly how many chicken breasts were ordered that week ($1,614 worth), the volume sold, the volume on hand, and how much of last week’s order had been wasted (three dollars’ worth). Chain production requires control, and they’d figured out how to achieve it on a mass scale.

As a doctor, I found such control alien—possibly from a hostile planet. We don’t have patient forecasting in my office, push-button waste monitoring, or such stringent, hour-by-hour oversight of the work we do, and we don’t want to. I asked Luz if he had ever thought about the contrast when he went to see a doctor. We were standing amid the bustle of the kitchen, and the look on his face shifted before he answered.
“I have,” he said. His mother was seventy-eight. She had early Alzheimer’s disease, and required a caretaker at home. Getting her adequate medical care was, he said, a constant battle.

Recently, she’d had a fall, apparently after fainting, and was taken to a local emergency room. The doctors ordered a series of tests and scans, and kept her overnight. They never figured out what the problem was. Luz understood that sometimes explanations prove elusive. But the clinicians didn’t seem to be following any coördinated plan of action. The emergency doctor told the family one plan, the admitting internist described another, and the consulting specialist a third. Thousands of dollars had been spent on tests, but nobody ever told Luz the results.

A nurse came at ten the next morning and said that his mother was being discharged. But his mother’s nurse was on break, and the discharge paperwork with her instructions and prescriptions hadn’t been done. So they waited. Then the next person they needed was at lunch. It was as if the clinicians were the customers, and the patients’ job was to serve them. “We didn’t get to go until 6 p.m., with a tired, disabled lady and a long drive home.” Even then she still had to be changed out of her hospital gown and dressed. Luz pressed the call button to ask for help. No answer. He went out to the ward desk.

The aide was on break, the secretary said. “Don’t you dress her yourself at home?” He explained that he didn’t, and made a fuss.

An aide was sent. She was short with him and rough in changing his mother’s clothes. “She was manhandling her,” Luz said. “I felt like, ‘Stop. I’m not one to complain. I respect what you do enormously. But if there were a video camera in here, you’d be on the evening news.’ I sent her out. I had to do everything myself. I’m stuffing my mom’s boob in her bra. It was unbelievable.”

His mother was given instructions to check with her doctor for the results of cultures taken during her stay, for a possible urinary-tract infection. But when Luz tried to follow up, he couldn’t get through to her doctor for days. “Doctors are busy,” he said. “I get it. But come on.” An office assistant finally told him that the results wouldn’t be ready for another week and that she was to see a neurologist. No explanations. No chance to ask questions.

The neurologist, after giving her a two-minute exam, suggested tests that had already been done and wrote a prescription that he admitted was of doubtful benefit. Luz’s family seemed to encounter this kind of disorganization, imprecision, and waste wherever his mother went for help.

“It is unbelievable to me that they would not manage this better,” Luz said. I asked him what he would do if he were the manager of a neurology unit or a cardiology clinic. “I don’t know anything about medicine,” he said. But when I pressed he thought for a moment, and said, “This is pretty obvious. I’m sure you already do it. But I’d study what the best people are doing, figure out how to standardize it, and then bring it to everyone to execute.”

This is not at all the normal way of doing things in medicine. (“You’re scaring me,” he said, when I told him.) But it’s exactly what the new health-care chains are now hoping to do on a mass scale. They want to create Cheesecake Factories for health care. The question is whether the medical counterparts to Mauricio at the broiler station—the clinicians in the operating rooms, in the medical offices, in the intensive-care units—will go along with the plan. Fixing a nice piece of steak is hardly of the same complexity as diagnosing the cause of an elderly patient’s loss of consciousness. Doctors and patients have not had a positive experience with outsiders second-guessing decisions. How will they feel about managers trying to tell them what the “best practices” are?

In March, my mother underwent a total knee replacement, like at least six hundred thousand Americans each year. She’d had a partial knee replacement a decade ago, when arthritis had worn away part of the cartilage, and for a while this served her beautifully. The surgeon warned, however, that the results would be temporary, and about five years ago the pain returned.

She’s originally from Ahmadabad, India, and has spent three decades as a pediatrician, attending to the children of my small Ohio home town. She’s chatty. She can’t go through a grocery checkout line or get pulled over for speeding without learning people’s names and a little bit about them. But she didn’t talk about her mounting pain. I noticed, however, that she had developed a pronounced limp and had become unable to walk even moderate distances. When I asked her about it, she admitted that just getting out of bed in the morning was an ordeal. Her doctor showed me her X-rays. Her partial prosthesis had worn through the bone on the lower surface of her knee. It was time for a total knee replacement.
This past winter, she finally stopped putting it off, and asked me to find her a surgeon. I wanted her to be treated well, in both the technical and the human sense. I wanted a place where everyone and everything—from the clinic secretary to the physical therapists—worked together seamlessly.

My mother planned to come to Boston, where I live, for the surgery so she could stay with me during her recovery. (My father died last year.) Boston has three hospitals in the top rank of orthopedic surgery. But even a doctor doesn’t have much to go on when it comes to making a choice. A place may have a great reputation, but it’s hard to know about actual quality of care.

Unlike some countries, the United States doesn’t have a monitoring system that tracks joint-replacement statistics. Even within an institution, I found, surgeons take strikingly different approaches. They use different makes of artificial joints, different kinds of anesthesia, different regimens for post-surgical pain control and physical therapy.

In the absence of information, I went with my own hospital, the Brigham and Women’s Hospital. Our big-name orthopedic surgeons treat Olympians and professional athletes. Nine of them do knee replacements. Of most interest to me, however, was a surgeon who was not one of the famous names. He has no national recognition. But he has led what is now a decade-long experiment in standardizing joint-replacement surgery.

John Wright is a New Zealander in his late fifties. He’s a tower crane of a man, six feet four inches tall, and so bald he barely seems to have eyebrows. He’s informal in attire—I don’t think I’ve ever seen him in a tie, and he is as apt to do rounds in his zip-up anorak as in his white coat—but he exudes competence.

“Customization should be five per cent, not ninety-five per cent, of what we do,” he told me. A few years ago, he gathered a group of people from every specialty involved—surgery, anesthesia, nursing, physical therapy—to formulate a single default way of doing knee replacements. They examined every detail, arguing their way through their past experiences and whatever evidence they could find. Essentially, they did what Luz considered the obvious thing to do: they studied what the best people were doing, figured out how to standardize it, and then tried to get everyone to follow suit.

They came up with a plan for anesthesia based on research studies—including giving certain pain medications before the patient entered the operating room and using spinal anesthesia plus an injection of local anesthetic to block the main nerve to the knee. They settled on a postoperative regimen, too. The day after a knee replacement, most orthopedic surgeons have their patients use a continuous passive-motion machine, which flexes and extends the knee as they lie in bed. Large-scale studies, though, have suggested that the machines don’t do much good. Sure enough, when the members of Wright’s group examined their own patients, they found that the ones without the machine got out of bed sooner after surgery, used less pain medication, and had more range of motion at discharge. So Wright instructed the hospital to get rid of the machines, and to use the money this saved (ninety thousand dollars a year) to pay for more physical therapy, something that is proven to help patient mobility. Therapy, starting the day after surgery, would increase from once to twice a day, including weekends.

Even more startling, Wright had persuaded the surgeons to accept changes in the operation itself; there was now, for instance, a limit as to which prostheses they could use. Each of our nine knee-replacement surgeons had his preferred type and brand. Knee surgeons are as particular about their implants as professional tennis players are about their racquets. But the hardware is easily the biggest cost of the operation—the average retail price is around eight thousand dollars, and some cost twice that, with no solid evidence of real differences in results.

Knee implants were largely perfected a quarter century ago. By the nineteen-nineties, studies showed that, for some ninety-five per cent of patients, the implants worked magnificently a decade after surgery. Evidence from the Australian registry has shown that not a single new knee or hip prosthesis had a lower failure rate than that of the established prostheses. Indeed, thirty per cent of the new models were likelier to fail. Like others on staff, Wright has advised companies on implant design. He believes that innovation will lead to better implants. In the meantime, however, he has sought to limit the staff to the three lowest-cost knee implants.

These have been hard changes for many people to accept. Wright has tried to figure out how to persuade clinicians to follow the standardized plan. To prevent revolt, he learned, he had to let them deviate at times from the default option. Surgeons could still order a passive-motion machine or a preferred prosthesis. “But I didn’t make it easy,” Wright said. The surgeons had to enter the treatment orders in the computer themselves. To change or add an implant, a surgeon had to show that the performance was superior or the price at least as low.

I asked one of his orthopedic colleagues, a surgeon named John Ready, what he thought about Wright’s efforts. Ready was philosophical. He recognized that the changes were improvements, and liked most of them. But he wasn’t happy when Wright told him that his knee-implant manufacturer wasn’t matching the others’ prices and would have to be dropped.

“It’s not ideal to lose my prosthesis,” Ready said. “I could make the switch. The differences between manufacturers are minor. But there’d be a learning curve.” Each implant has its quirks—how you seat it, what tools you use. “It’s probably a ten-case learning curve for me.” Wright suggested that he explain the situation to the manufacturer’s sales rep. “I’m my rep’s livelihood,” Ready said. “He probably makes five hundred dollars a case from me.” Ready spoke to his rep. The price was dropped.

Wright has become the hospital’s kitchen manager—not always a pleasant role. He told me that about half of the surgeons appreciate what he’s doing. The other half tolerate it at best. One or two have been outright hostile. But he has persevered, because he’s gratified by the results. The surgeons now use a single manufacturer for seventy-five per cent of their implants, giving the hospital bargaining power that has helped slash its knee-implant costs by half. And the start-to-finish standardization has led to vastly better outcomes. The distance patients can walk two days after surgery has increased from fifty-three to eighty-five feet. Nine out of ten could stand, walk, and climb at least a few stairs independently by the time of discharge. The amount of narcotic pain medications they required fell by a third. They could also leave the hospital nearly a full day earlier on average (which saved some two thousand dollars per patient).

My mother was one of the beneficiaries. She had insisted to Dr. Wright that she would need a week in the hospital after the operation and three weeks in a rehabilitation center. That was what she’d required for her previous knee operation, and this one was more extensive.
“We’ll see,” he told her.

The morning after her operation, he came in and told her that he wanted her getting out of bed, standing up, and doing a specific set of exercises he showed her. “He’s pushy, if you want to say it that way,” she told me. The physical therapists and nurses were, too. They were a team, and that was no small matter. I counted sixty-three different people involved in her care. Nineteen were doctors, including the surgeon and chief resident who assisted him, the anesthesiologists, the radiologists who reviewed her imaging scans, and the junior residents who examined her twice a day and adjusted her fluids and medications. Twenty-three were nurses, including her operating-room nurses, her recovery-room nurse, and the many ward nurses on their eight-to-twelve-hour shifts. There were also at least five physical therapists; sixteen patient-care assistants, helping check her vital signs, bathe her, and get her to the bathroom; plus X-ray and EKG technologists, transport workers, nurse practitioners, and physician assistants. I didn’t even count the bioengineers who serviced the equipment used, the pharmacists who dispensed her medications, or the kitchen staff preparing her food while taking into account her dietary limitations. They all had to coördinate their contributions, and they did.

Three days after her operation, she was getting in and out of bed on her own. She was on virtually no narcotic medication. She was starting to climb stairs. Her knee pain was actually less than before her operation. She left the hospital for the rehabilitation center that afternoon.

The biggest complaint that people have about health care is that no one ever takes responsibility for the total experience of care, for the costs, and for the results. My mother experienced what happens in medicine when someone takes charge. Of course, John Wright isn’t alone in trying to design and implement this kind of systematic care, in joint surgery and beyond. The Virginia Mason Medical Center, in Seattle, has done it for knee surgery and cancer care; the Geisinger Health Center, in Pennsylvania, has done it for cardiac surgery and primary care; the University of Michigan Health System standardized how its doctors give blood transfusions to patients, reducing the need for transfusions by thirty-one per cent and expenses by two hundred thousand dollars a month. Yet, unless such programs are ramped up on a nationwide scale, they aren’t going to do much to improve health care for most people or reduce the explosive growth of health-care costs.

In medicine, good ideas still take an appallingly long time to trickle down. Recently, the American Academy of Neurology and the American Headache Society released new guidelines for migraine-headache treatment. They recommended treating severe migraine sufferers—who have more than six attacks a month—with preventive medications and listed several drugs that markedly reduce the occurrence of attacks. The authors noted, however, that previous guidelines going back more than a decade had recommended such remedies, and doctors were still not providing them to more than two-thirds of patients. One study examined how long it took several major discoveries, such as the finding that the use of beta-blockers after a heart attack improves survival, to reach even half of Americans. The answer was, on average, more than fifteen years.

Scaling good ideas has been one of our deepest problems in medicine. Regulation has had its place, but it has proved no more likely to produce great medicine than food inspectors are to produce great food. During the era of managed care, insurance-company reviewers did hardly any better. We’ve been stuck. But do we have to be?

Every six months, the Cheesecake Factory puts out a new menu. This means that everyone who works in its restaurants expects to learn something new twice a year. The March, 2012, Cheesecake Factory menu included thirteen new items. The teaching process is now finely honed: from start to finish, rollout takes just seven weeks.

The ideas for a new dish, or for tweaking an old one, can come from anywhere. One of the Boston prep cooks told me about an idea he once had that ended up in a recipe. David Overton, the founder and C.E.O. of the Cheesecake Factory, spends much of his time sampling a range of cuisines and comes up with many dishes himself. All the ideas, however, go through half a dozen chefs in the company’s test kitchen, in Calabasas. They figure out how to make each recipe reproducible, appealing, and affordable. Then they teach the new recipe to the company’s regional managers and kitchen managers.

Dave Luz, the Boston regional manager, went to California for training this past January with his chief kitchen manager, Tom Schmidt, a chef with fifteen years’ experience. They attended lectures, watched videos, participated in workshops. It sounded like a surgical conference. Where I might be taught a new surgical technique, they were taught the steps involved in preparing a “Santorini farro salad.” But there was a crucial difference. The Cheesecake instructors also trained the attendees how to teach what they were learning. In medicine, we hardly ever think about how to implement what we’ve learned. We learn what we want to, when we want to.

On the first training day, the kitchen managers worked their way through thirteen stations, preparing each new dish, and their performances were evaluated. The following day, they had to teach their regional managers how to prepare each dish—Schmidt taught Luz—and this time the instructors assessed how well the kitchen managers had taught.
The managers returned home to replicate the training session for the general manager and the chief kitchen manager of every restaurant in their region. The training at the Boston Prudential Center restaurant took place on two mornings, before the lunch rush. The first day, the managers taught the kitchen staff the new menu items. There was a lot of poring over the recipes and videos and fussing over the details. The second day, the cooks made the new dishes for the servers. This gave the cooks some practice preparing the food at speed, while allowing the servers to learn the new menu items. The dishes would go live in two weeks. I asked a couple of the line cooks how long it took them to learn to make the new food.

“I know it already,” one said.
“I make it two times, and that’s all I need,” the other said.
Come on, I said. How long before they had it down pat?
“One day,” they insisted. “It’s easy.”

I asked Schmidt how much time he thought the cooks required to master the recipes. They thought a day, I told him. He grinned. “More like a month,” he said.

Even a month would be enviable in medicine, where innovations commonly spread at a glacial pace. The new health-care chains, though, are betting that they can change that, in much the same way that other chains have.
Armin Ernst is responsible for intensive-care-unit operations in Steward’s ten hospitals. The I.C.U.s he oversees serve some eight thousand patients a year. In another era, an I.C.U. manager would have been a facilities expert. He would have spent his time making sure that the equipment, electronics, pharmacy resources, and nurse staffing were up to snuff. He would have regarded the I.C.U. as the doctors’ workshop, and he would have wanted to give them the best possible conditions to do their work as they saw fit.
Ernst, though, is a doctor—a new kind of doctor, whose goal is to help disseminate good ideas. He doesn’t see the I.C.U. as a doctors’ workshop. He sees it as the temporary home of the sickest, most fragile people in the country. Nowhere in health care do we expend more resources. Although fewer than one in four thousand Americans are in intensive care at any given time, they account for four per cent of national health-care costs. Ernst believes that his job is to make sure that everyone is collaborating to provide the most effective and least wasteful care possible.

He looked like a regular doctor to me. Ernst is fifty years old, a native German who received his medical degree at the University of Heidelberg before training in pulmonary and critical-care medicine in the United States. He wears a white hospital coat and talks about drips and ventilator settings, like any other critical-care specialist. But he doesn’t deal with patients: he deals with the people who deal with patients.

Ernst says he’s not telling clinicians what to do. Instead, he’s trying to get clinicians to agree on precise standards of care, and then make sure that they follow through on them. (The word “consensus” comes up a lot.) What I didn’t understand was how he could enforce such standards in ten hospitals across three thousand square miles.

Late one Friday evening, I joined an intensive-care-unit team on night duty. But this team was nowhere near a hospital. We were in a drab one-story building behind a meat-trucking facility outside of Boston, in a back section that Ernst called his I.C.U. command center. It was outfitted with millions of dollars’ worth of technology. Banks of computer screens carried a live feed of cardiac-monitor readings, radiology-imaging scans, and laboratory results from I.C.U. patients throughout Steward’s hospitals. Software monitored the stream and produced yellow and red alerts when it detected patterns that raised concerns. Doctors and nurses manned consoles where they could toggle on high-definition video cameras that allowed them to zoom into any I.C.U. room and talk directly to the staff on the scene or to the patients themselves.

The command center was just a few months old. The team had gone live in only four of the ten hospitals. But in the next several months Ernst’s “tele-I.C.U.” team will have the ability to monitor the care for every patient in every I.C.U. bed in the Steward health-care system.
A doctor, two nurses, and an administrative assistant were on duty in the command center each night I visited. Christina Monti was one of the nurses. A pixie-like thirty-year-old with nine years’ experience as a cardiac intensive-care nurse, she was covering Holy Family Hospital, on the New Hampshire border, and St. Elizabeth’s Medical Center, in Boston’s Brighton neighborhood. When I sat down with her, she was making her rounds, virtually.

First, she checked on the patients she had marked as most critical. She reviewed their most recent laboratory results, clinical notes, and medication changes in the electronic record. Then she made a “visit,” flicking on the two-way camera and audio system. If the patients were able to interact, she would say hello to them in their beds. She asked the staff members whether she could do anything for them. The tele-I.C.U. team provided the staff with extra eyes and ears when needed. If a crashing patient diverts the staff’s attention, the members of the remote team can keep an eye on the other patients. They can handle computer paperwork if a nurse falls behind; they can look up needed clinical information. The hospital staff have an OnStar-like button in every room that they can push to summon the tele-I.C.U. team.

Monti also ran through a series of checks for each patient. She had a reference list of the standards that Ernst had negotiated with the people running the I.C.U.s, and she looked to see if they were being followed. The standards covered basics, from hand hygiene to measures for stomach-ulcer prevention. In every room with a patient on a respirator, for instance, Monti made sure the nurse had propped the head of the bed up at least thirty degrees, which makes pneumonia less likely. She made sure the breathing tube in the patient’s mouth was secure, to reduce the risk of the tube’s falling out or becoming disconnected. She zoomed in on the medication pumps to check that the drips were dosed properly. She was not looking for bad nurses or bad doctors. She was looking for the kinds of misses that even excellent nurses and doctors can make under pressure.
The concept of the remote I.C.U. started with an effort to let specialists in critical-care medicine, who are in short supply, cover not just one but several community hospitals. Two hundred and fifty hospitals from Alaska to Virginia have installed a version of the tele-I.C.U. It produced significant improvements in outcomes and costs—and, some discovered, a means of driving better practices even in hospitals that had specialists on hand.
After five minutes of observation, however, I realized that the remote I.C.U. team wasn’t exactly in command; it was in negotiation. I observed Monti perform a video check on a middle-aged man who had just come out of heart surgery. A soft chime let the people in the room know she was dropping in. The man was unconscious, supported by a respirator and intravenous drips. At his bedside was a nurse hanging a bag of fluid. She seemed to stiffen at the chime’s sound.

“Hi,” Monti said to her. “I’m Chris. Just making my evening rounds. How are you?” The bedside nurse gave the screen only a sidelong glance.
Ernst wasn’t oblivious of the issue. He had taken pains to introduce the command center’s team, spending weeks visiting the units and bringing doctors and nurses out to tour the tele-I.C.U. before a camera was ever turned on. But there was no escaping the fact that these were strangers peering over the staff’s shoulders. The bedside nurse’s chilliness wasn’t hard to understand.

In a single hour, however, Monti had caught a number of problems. She noticed, for example, that a patient’s breathing tube had come loose. Another patient wasn’t getting recommended medication to prevent potentially fatal blood clots. Red alerts flashed on the screen—a patient with an abnormal potassium level that could cause heart-rhythm problems, another with a sudden leap in heart rate.

Monti made sure that the team wasn’t already on the case and that the alerts weren’t false alarms. Checking the computer, she figured out that a doctor had already ordered a potassium infusion for the woman with the low level. Flipping on a camera, she saw that the patient with the high heart rate was just experiencing the stress of being helped out of bed for the first time after surgery. But the unsecured breathing tube and the forgotten blood-clot medication proved to be oversights. Monti raised the concerns with the bedside staff.

Sometimes they resist. “You have got to be careful from patient to patient,” Gerard Hayes, the tele-I.C.U. doctor on duty, explained. “Pushing hard on one has ramifications for how it goes with a lot of patients. You don’t want to sour whole teams on the tele-I.C.U.” Across the country, several hospitals have decommissioned their systems. Clinicians have been known to place a gown over the camera, or even rip the camera out of the wall. Remote monitoring will never be the same as being at the bedside. One nurse called the command center to ask the team not to turn on the video system in her patient’s room: he was delirious and confused, and the sudden appearance of someone talking to him from the television would freak him out.
Still, you could see signs of change. I watched Hayes make his virtual rounds through the I.C.U. at St. Anne’s Hospital, in Fall River, near the Rhode Island border. He didn’t yet know all the members of the hospital staff—this was only his second night in the command center, and when he sees patients in person it’s at a hospital sixty miles north. So, in his dealings with the on-site clinicians, he was feeling his way.

Checking on one patient, he found a few problems. Mr. Karlage, as I’ll call him, was in his mid-fifties, an alcoholic smoker with cirrhosis of the liver, severe emphysema, terrible nutrition, and now a pneumonia that had put him into respiratory failure. The I.C.U. team injected him with antibiotics and sedatives, put a breathing tube down his throat, and forced pure oxygen into his lungs. Over a few hours, he stabilized, and the I.C.U. doctor was able to turn his attention to other patients.

But stabilizing a sick patient is like putting out a house fire. There can be smoldering embers just waiting to reignite. Hayes spotted a few. The ventilator remained set to push breaths at near-maximum pressure, and, given the patient’s severe emphysema, this risked causing a blowout. The oxygen concentration was still cranked up to a hundred per cent, which, over time, can damage the lungs. The team had also started several broad-spectrum antibiotics all at once, and this regimen had to be dialled back if they were to avoid breeding resistant bacteria.

Hayes had to notify the unit doctor. An earlier interaction, however, had not been promising. During a video check on a patient, Hayes had introduced himself and mentioned an issue he’d noticed. The unit doctor stared at him with folded arms, mouth shut tight. Hayes was a former Navy flight surgeon with twenty years’ experience as an I.C.U. doctor and looked to have at least a decade on the St. Anne’s doctor. But the doctor was no greenhorn, either, and gave him the brushoff: “The morning team can deal with that.” Now Hayes needed to call him about Mr. Karlage. He decided to do it by phone.

“Sounds like you’re having a busy night,” Hayes began when he reached the doctor. “Mr. Karlage is really turning around, huh?” Hayes praised the doctor’s work. Then he brought up his three issues, explaining what he thought could be done and why. He spoke like a consultant brought in to help. This went over better. The doctor seemed to accept Hayes’s suggestions.

Unlike a mere consultant, however, Hayes took a few extra steps to make sure his suggestions were carried out. He spoke to the nurse and the respiratory therapist by video and explained the changes needed. To carry out the plan, they needed written orders from the unit doctor. Hayes told them to call him back if they didn’t get the orders soon.

Half an hour later, Hayes called Mr. Karlage’s nurse again. She hadn’t received the orders. For all the millions of dollars of technology spent on the I.C.U. command center, this is where the plug meets the socket. The fundamental question in medicine is: Who is in charge? With the opening of the command center, Steward was trying to change the answer—it gave the remote doctors the authority to issue orders as well. The idea was that they could help when a unit doctor got too busy and fell behind, and that’s what Hayes chose to believe had happened. He entered the orders into the computer. In a conflict, however, the on-site physician has the final say. So Hayes texted the St. Anne’s doctor, informing him of the changes and asking if he’d let him know if he disagreed.

Hayes received no reply. No “thanks” or “got it” or “O.K.” After midnight, though, the unit doctor pressed the video call button and his face flashed onto Hayes’s screen. Hayes braced for a confrontation. Instead, the doctor said, “So I’ve got this other patient and I wanted to get your opinion.”
Hayes suppressed a smile. “Sure,” he said.

When he signed off, he seemed ready to high-five someone. “He called us,” he marvelled. The command center was gaining credibility.
Armin Ernst has big plans for the command center—a rollout of full-scale treatment protocols for patients with severe sepsis, acute respiratory-distress syndrome, and other conditions; strategies to reduce unnecessary costs; perhaps even computer forecasting of patient volume someday. Steward is already extending the command-center concept to in-patient psychiatry. Emergency rooms and surgery may be next. Other health systems are pursuing similar models. The command-center concept provides the possibility of, well, command.

Today, some ninety “super-regional” health-care systems have formed across the country—large, growing chains of clinics, hospitals, and home-care agencies. Most are not-for-profit. Financial analysts expect the successful ones to drive independent medical centers out of existence in much of the country—either by buying them up or by drawing away their patients with better quality and cost control. Some small clinics and stand-alone hospitals will undoubtedly remain successful, perhaps catering to the luxury end of health care the way gourmet restaurants do for food. But analysts expect that most of us will gravitate to the big systems, just as we have moved away from small pharmacies to CVS and Walmart.
Already, there have been startling changes. Cleveland Clinic, for example, opened nine regional hospitals in northeast Ohio, as well as health centers in southern Florida, Toronto, and Las Vegas, and is now going international, with a three-hundred-and-sixty-four-bed hospital in Abu Dhabi scheduled to open next year. It reached an agreement with Lowe’s, the home-improvement chain, guaranteeing a fixed price for cardiac surgery for the company’s employees and dependents. The prospect of getting better care for a lower price persuaded Lowe’s to cover all out-of-pocket costs for its insured workers to go to Cleveland, including co-payments, airfare, transportation, and lodging. Three other companies, including Kohl’s department stores, have made similar deals, and a dozen more, including Boeing, are in negotiations.

Big Medicine is on the way.
Reinventing medical care could produce hundreds of innovations. Some may be as simple as giving patients greater e-mail and online support from their clinicians, which would enable timelier advice and reduce the need for emergency-room visits. Others might involve smartphone apps for coaching the chronically ill in the management of their disease, new methods for getting advice from specialists, sophisticated systems for tracking outcomes and costs, and instant delivery to medical teams of up-to-date care protocols. Innovations could take a system that requires sixty-three clinicians for a knee replacement and knock the number down by half or more. But most significant will be the changes that finally put people like John Wright and Armin Ernst in charge of making care coherent, coördinated, and affordable. Essentially, we’re moving from a Jeffersonian ideal of small guilds and independent craftsmen to a Hamiltonian recognition of the advantages that size and centralized control can bring.

Yet it seems strange to pin our hopes on chains. We have no guarantee that Big Medicine will serve the social good. Whatever the industry, an increase in size and control creates the conditions for monopoly, which could do the opposite of what we want: suppress innovation and drive up costs over time. In the past, certainly, health-care systems that pursued size and market power were better at raising prices than at lowering them.
A new generation of medical leaders and institutions professes to have a different aim. But a lesson of the past century is that government can influence the behavior of big corporations, by requiring transparency about their performance and costs, and by enacting rules and limitations to protect the ordinary citizen. The federal government has broken up monopolies like Standard Oil and A.T. & T.; in some parts of the country, similar concerns could develop in health care.

Mixed feelings about the transformation are unavoidable. There’s not just the worry about what Big Medicine will do; there’s also the worry about how society and government will respond. For the changes to live up to our hopes—lower costs and better care for everyone—liberals will have to accept the growth of Big Medicine, and conservatives will have to accept the growth of strong public oversight.

The vast savings of Big Medicine could be widely shared—or reserved for a few. The clinicians who are trying to reinvent medicine aren’t doing it to make hedge-fund managers and bondholders richer; they want to see that everyone benefits from the savings their work generates—and that won’t be automatic.

Our new models come from industries that have learned to increase the capabilities and efficiency of the human beings who work for them. Yet the same industries have also tended to devalue those employees. The frontline worker, whether he is making cars, solar panels, or wasabi-crusted ahi tuna, now generates unprecedented value but receives little of the wealth he is creating. Can we avoid this as we revolutionize health care?

Those of us who work in the health-care chains will have to contend with new protocols and technology rollouts every six months, supervisors and project managers, and detailed metrics on our performance. Patients won’t just look for the best specialist anymore; they’ll look for the best system. Nurses and doctors will have to get used to delivering care in which our own convenience counts for less and the patients’ experience counts for more. We’ll also have to figure out how to reward people for taking the time and expense to teach the next generations of clinicians. All this will be an enormous upheaval, but it’s long overdue, and many people recognize that. When I asked Christina Monti, the Steward tele-I.C.U. nurse, why she wanted to work in a remote facility tangling with staffers who mostly regarded her with indifference or hostility, she told me, “Because I wanted to be part of the change.”

And we are seeing glimpses of this change. In my mother’s rehabilitation center, miles away from where her surgery was done, the physical therapists adhered to the exercise protocols that Dr. Wright’s knee factory had developed. He didn’t have a video command center, so he came out every other day to check on all the patients and make sure that the staff was following the program. My mother was sure she’d need a month in rehab, but she left in just a week, incurring a fraction of the costs she would have otherwise. She walked out the door using a cane. On her first day at home with me, she climbed two flights of stairs and walked around the block for exercise.

The critical question is how soon that sort of quality and cost control will be available to patients everywhere across the country. We’ve let health-care systems provide us with the equivalent of greasy-spoon fare at four-star prices, and the results have been ruinous. The Cheesecake Factory model represents our best prospect for change. Some will see danger in this. Many will see hope. And that’s probably the way it should be. ♦

======================
Article on Physician Burnout and Best Practice
======================
JCR Notes:

A primary care physician’s work includes vaccinations, screenings, chronic disease prevention and treatment, relationship building, family planning, behavioral health, counseling, and other vital but time-consuming work.

To be in full compliance with the U.S. Preventive Services Task Force recommendations, primary care physicians with average-sized patient populations need to dedicate 7.4 hours per day to preventative care alone. Taken in conjunction with the other primary care services, namely acute and chronic care, the estimated total working hours per primary care physician comes to 21.7 hours per day, or 108.5 hours per week.
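As a quick sanity check on these numbers: the 108.5-hour weekly figure follows from the 21.7-hour daily total only if you assume a five-day week. A minimal sketch of the arithmetic in Python (the 7.4- and 21.7-hour figures come from the article below; the five-day week is my assumption):

# Back-of-the-envelope check of the primary care workload figures
preventive_hours_per_day = 7.4   # USPSTF preventive care alone (article's figure)
total_hours_per_day = 21.7       # preventive plus acute and chronic care (article's figure)
workdays_per_week = 5            # assumed five-day workweek

print(f"Preventive care alone: {preventive_hours_per_day * workdays_per_week:.1f} hours/week")  # 37.0
print(f"All primary care work: {total_hours_per_day * workdays_per_week:.1f} hours/week")       # 108.5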

SCPMG’s “Complete Care” program, which spans 8,500 physicians and 4.4 million members, has four elements:

1. Share accountability:
Share accountability for preventative and chronic care services (e.g., treating people with hypertension or women in need of a mammogram) with high-volume specialties.

2. Delegation:
One fundamental move was to transfer tasks from physicians — not just those in primary care — to non-physicians, so that everyone works at the top of his or her license.

3. Information technology:
SCPMG invested in information technologies that let patients schedule visits from mobile apps, access online personalized health care plans (e.g., customized weight-loss calendars and healthy recipes), and manage complex schedules (e.g., the steps prior to a kidney transplant); a small “outreach team” manages automated reminders and outreach to members.

4. Standardized Care Process (see Atul Gawande, “Big Med”)
The “Proactive Office Encounter” (POE) ensures consistent evidence-based care at every encounter across the organization. At its core, the POE is an agreement of process and delegation of tasks between physicians and their administrative supports.

Glossary:
Medical assistants (MAs)
Licensed vocational nurses (LVNs)

======================

CREDIT: HBR Case Study on SCPMG Primary Care Best Practice

How One California Medical Group Is Decreasing Physician Burnout
Sophia Arabadjis
Erin E. Sullivan
JUNE 07, 2017

Physician burnout is a growing problem for all health care systems in the United States. Burned-out physicians deliver lower quality care, reduce their hours, or stop practicing, reducing access to care around the country. Primary care physicians are particularly vulnerable: They have some of the highest burnout rates of any medical discipline.

As part of our work researching high-performing primary care systems, we discovered a system-wide approach launched by Southern California Permanente Medical Group (SCPMG) in 2004 that unburdens primary care physicians. We believe the program — Complete Care — may be a viable model for other institutions looking to decrease burnout or increase physician satisfaction. (While burnout can easily be measured, institutions often don’t publicly report their own rates and the associated turnover they experience. Consequently, we used physician satisfaction as a proxy for burnout in our research.)

In most health care systems, primary care physicians are the first stop for patients needing care. As a result, their patients’ needs — and their own tasks — vary immensely. A primary care physician’s work includes vaccinations, screenings, chronic disease prevention and treatment, relationship building, family planning, behavioral health, counseling, and other vital but time-consuming work.

Some studies have examined just how much time a primary care physician needs to do all of these tasks, and the results are staggering. To be in full compliance with the U.S. Preventive Services Task Force recommendations, primary care physicians with average-sized patient populations need to dedicate 7.4 hours per day to preventative care alone. Taken in conjunction with the other primary care services, namely acute and chronic care, the estimated total working hours per primary care physician comes to 21.7 hours per day, or 108.5 hours per week. Given such workloads, the high burnout rate is hardly surprising.

While designed with the intent to improve quality of care, SCPMG’s Complete Care program also alleviates some of the identified drivers of physician burnout by following a systematic approach to care delivery. Composed of 8,500 physicians, SCPMG consistently provides the highest quality care to the region’s 4.4 million plan members. And a recent study of SCPMG physician satisfaction suggests that regardless of discipline, physicians feel high levels of satisfaction in three key areas: their compensation, their perceived ability to deliver high-quality care, and their day-to-day professional lives.

Complete Care has four core elements:

Share Accountability with Specialists
A few years ago, SCPMG’s regional medical director of quality and clinical analysis noticed a plateauing effect in some preventative screenings, where screening rates failed to increase beyond a certain percentage. He asked his team to analyze how certain patient populations — for example, women in need of a mammogram — accessed the health care system. As approximately one in eight women will develop invasive breast cancer over the course of their lifetimes, a failure to receive the recommended preventative screening could have serious health repercussions.
What the team found was startling: Over the course of a year, nearly two-thirds of women clinically eligible for a mammogram never set foot in their primary care physician’s office. Instead they showed up in specialty care or urgent care.

While this discovery spurred more research into patient access, the outcome remained the same: To achieve better rates of preventative and chronic care compliance, specialists had to be brought into the fold.
SCPMG slowly started to share accountability for preventative and chronic care services (e.g., treating people with hypertension or women in need of a mammogram) with high-volume specialties. In order to bring the specialists on board, SCPMG identified and enlisted physician champions across the medical group to promote the program throughout the region; carefully timed the rollouts of the program’s different elements so increased demands wouldn’t overwhelm specialists; and crafted incentive programs whose payout was tied to their performance of preventative and chronic-care activities.

This reallocation of traditional primary care responsibilities has allowed SCPMG to achieve a high level of care integration and challenge traditional notions of roles and systems. Its specialists now have to respond to patients’ needs outside their immediate expertise: For example, a podiatrist will inquire whether a diabetic patient has had his or her regular eye examination, and an emergency room doctor will stitch up a cut and give immunizations in the same visit. And the whole system, not just primary care, is responsible for quality metrics related to prevention and chronic care (e.g., the percentage of eligible patients who received a mammogram).

In addition, SCPMG revamped the way it provided care to match how patients accessed and used their system. For example, it began promoting the idea of the comprehensive visit, where patients could see their primary care provider, get blood drawn, and pick up prescribed medications in the same building.

Ultimately, the burden on primary care physicians started to ease. Even more important, SCPMG estimates that Complete Care has saved over 17,000 lives.

Delegate Responsibility
“Right work, right people,” a guiding principle, helped shape the revamping of the organization’s infrastructure. One fundamental move was to transfer tasks from physicians — not just those in primary care — to non-physicians so physicians could spend their time doing tasks only they could do and everyone was working at the top of his or her license. For example, embedded nurse managers of diabetic patients help coordinate care visits, regularly communicate directly with patients about meeting their health goals (such as weekly calls about lower HbA1c levels), and track metrics on diabetic populations across the entire organization. At the same time, dedicated prescribing nurse practitioners work closely with physicians to monitor medication use, which, in the case of blood thinners, is very time-intensive and requires careful titration.

Leverage Technology

SCPMG invested in information technologies that allowed patients to schedule visits from mobile apps, access online personalized health care plans (e.g., customized weight-loss calendars and healthy recipes), and manage complex schedules (e.g., the steps prior to a kidney transplant). It also established a small outreach team (about four people) that uses large automated registries of patients to mail seasonal reminders (e.g., “it’s time for your flu vaccine shot”) and alerts about routine checkups (e.g., “you are due for a mammogram”), and to handle other duties (e.g., coordinating mail-order, at-home fecal tests for colon cancer). In addition, the outreach team manages automated calls and e-mail reminders for the region’s 4.4 million members.

Thanks to this reorganization of responsibilities and use of new technology, traditional primary care tasks such as monitoring blood thinners, managing diabetic care, and tracking patients’ eligibility for cancer screenings have been transferred to other people and processes within the SCPMG system.

Standardize Care Processes
The final element of Complete Care is the kind of process standardization advocated by Atul Gawande in his New Yorker article “Big Med.” Standardizing processes — and in particular, workflows — removes duplicative work, strengthens working relationships, and results in higher-functioning teams, reliable routines, and higher-quality outcomes. In primary care, standardized workflows help create consistent communication between providers and staff, and between providers and patients, which allows physicians to spend more time during visits on patients’ pressing needs.
One such process, the “Proactive Office Encounter” (POE), ensures consistent evidence-based care at every encounter across the organization. At its core, the POE is an agreement of process and delegation of tasks between physicians and their administrative supports. It was originally developed to improve communications between support staff and physicians after SCPMG’s electronic medical record was introduced.
Medical assistants (MAs) and licensed vocational nurses (LVNs) are key players. A series of checklists embedded into the medical record guides their work both before and after the visit. These checklists contain symptoms, actions, and questions that are timely and specific to each patient based on age, disease status, and reason for his or her visit. Prior to the visit, MAs or LVNs contact patients with pre-visit instructions or to schedule necessary lab work. During the visit, they use the same checklists to follow up on pre-visit instructions, take vitals, conduct medication reconciliation, and prep the patient for the provider.

Pop-ups within the medical record indicate a patient’s eligibility for a new screening or regular test based on new literature, prompting the MAs or LVNs to ask patients for additional information. During the visit, physicians have access to the same checklists and data collected by the MAs or LVNs. This enables them to review the work quickly and efficiently and follow up on any flagged issues. After the visit with the physician, patients see an MA or LVN again and receive a summary of topics discussed with the provider and specific instructions or health education resources.

Contemporary physicians face many challenges: an aging population, rising rates of chronic conditions, workforce shortages, technological uncertainty, changing governmental policies, and greater disparities in health outcomes across populations. All of this, it could be argued, disproportionately affects primary care specialties. These factors promise to increase physician burnout unless health care organizations do something to ease the burden. SCPMG’s Complete Care initiative offers a viable blueprint to do just that.

Sophia Arabadjis is a researcher and case writer at the Harvard Medical School Center for Primary Care and a research assistant at the University of Colorado. She has investigated health systems in Europe and the United States.

Erin E. Sullivan is the research director of the Harvard Medical School Center for Primary Care. Her research focuses on high-performing primary care systems.

Four Daily Well-Being Workouts

Marty Seligman, a renowned well-being researcher, writes in today’s NYT about four practices for flourishing:

Identify Signature Strengths: Focus every day on personal strengths exhibited when you were at your best.

Find the Good: Focus every day on the question “Why did this good thing happen?”

Make a Gratitude Visit: Visit a person you feel gratitude toward.

Respond Constructively: Practice active, constructive responses.

===================

CREDIT: Article Below Can Be Found at This Link

Get Happy: Four Well-Being Workouts

By JULIE SCELFO
APRIL 5, 2017
Relieving stress and anxiety might help you feel better — for a bit. Martin E.P. Seligman, a professor of psychology at the University of Pennsylvania and a pioneer in the field of positive psychology, does not see alleviating negative emotions as a path to happiness.
“Psychology is generally focused on how to relieve depression, anger and worry,” he said. “Freud and Schopenhauer said the most you can ever hope for in life is not to suffer, not to be miserable, and I think that view is empirically false, morally insidious, and a political and educational dead-end.”
“What makes life worth living,” he said, “is much more than the absence of the negative.”

To Dr. Seligman, the most effective long-term strategy for happiness is to actively cultivate well-being.

In his 2012 book, “Flourish: A Visionary New Understanding of Happiness and Well-Being,” he explored how well-being consists not merely of feeling happy (an emotion that can be fleeting) but of experiencing a sense of contentment in the knowledge that your life is flourishing and has meaning beyond your own pleasure.

To cultivate the components of well-being, which include engagement, good relationships, accomplishment and purpose, Dr. Seligman suggests these four exercises based on research at the Penn Positive Psychology Center, which he directs, and at other universities.

Identify Signature Strengths
Write down a story about a time when you were at your best. It doesn’t need to be a life-changing event but should have a clear beginning, middle and end. Reread it every day for a week, and each time ask yourself: “What personal strengths did I display when I was at my best?” Did you show a lot of creativity? Good judgment? Were you kind to other people? Loyal? Brave? Passionate? Forgiving? Honest?

Writing down your answers “puts you in touch with what you’re good at,” Dr. Seligman explained. The next step is to contemplate how to use these strengths to your advantage, intentionally organizing and structuring your life around them.

In a study by Dr. Seligman and colleagues published in American Psychologist, participants looked for an opportunity to deploy one of their signature strengths “in a new and different way” every day for one week.

“A week later, a month later, six months later, people had on average lower rates of depression and higher life satisfaction,” Dr. Seligman said. “Possible mechanisms could be more positive emotions. People like you more, relationships go better, life goes better.”

Find the Good
Set aside 10 minutes before you go to bed each night to write down three things that went really well that day. Next to each event answer the question, “Why did this good thing happen?”
Instead of focusing on life’s lows, which can increase the likelihood of depression, the exercise “turns your attention to the good things in life, so it changes what you attend to,” Dr. Seligman said. “Consciousness is like your tongue: It swirls around in the mouth looking for a cavity, and when it finds it, you focus on it. Imagine if your tongue went looking for a beautiful, healthy tooth. Polish it.”

Make a Gratitude Visit
Think of someone who has been especially kind to you but you have not properly thanked. Write a letter describing what he or she did and how it affected your life, and how you often remember the effort. Then arrange a meeting and read the letter aloud, in person.

“It’s common that when people do the gratitude visit both people weep out of joy,” Dr. Seligman said. Why is the experience so powerful? “It puts you in better touch with other people, with your place in the world.”

Respond Constructively
This exercise was inspired by the work of Shelly Gable, a social psychologist at the University of California, Santa Barbara, who has extensively studied marriages and other close relationships. The next time someone you care about shares good news, give what Dr. Gable calls an “active constructive response.”

That is, instead of saying something passive like, “Oh, that’s nice” or being dismissive, express genuine excitement. Prolong the discussion by, say, encouraging them to tell others or suggest a celebratory activity.

“Love goes better, commitment increases, and from the literature, even sex gets better after that.”

Julie Scelfo is a former staff writer for The Times who writes often about human behavior.

Our Miserable 21st Century

Below is dense – but worth it. It is written by a conservative, but an honest one.

It is the best documentation I have found on the thesis that I wrote about last year: that the 21st century economy is a structural mess, and the mess is a non-partisan one!

My basic contention is really simple:

9/11 diverted us from this issue, and then …
we compounded the diversion with two idiotic wars, and then …
we compounded the diversion further with an idiotic, devastating recession, and then …
we started to stabilize, which is why President Obama goes to the head of the class, and then …
we built a three ring circus, and elected a clown as the ringmaster.

While we watch this three-ring circus in Washington, no one is paying attention to this structural problem in the economy … so we are wasting time when we should be tackling this central issue of our time. It’s a really complicated one, and there are no easy answers (sorry, Trump and Bernie Sanders).

PUT YOUR POLITICAL ARTILLERY DOWN AND READ ON …..

=======BEGIN=============

CREDIT: https://www.commentarymagazine.com/articles/our-miserable-21st-century/

Our Miserable 21st Century
From work to income to health to social mobility, the year 2000 marked the beginning of what has become a distressing era for the United States
NICHOLAS N. EBERSTADT / FEB. 15, 2017

On the morning of November 9, 2016, America’s elite—its talking and deciding classes—woke up to a country they did not know. To most privileged and well-educated Americans, especially those living in its bicoastal bastions, the election of Donald Trump had been a thing almost impossible even to imagine. What sort of country would go and elect someone like Trump as president? Certainly not one they were familiar with, or understood anything about.

Whatever else it may or may not have accomplished, the 2016 election was a sort of shock therapy for Americans living within what Charles Murray famously termed “the bubble” (the protective barrier of prosperity and self-selected associations that increasingly shield our best and brightest from contact with the rest of their society). The very fact of Trump’s election served as a truth broadcast about a reality that could no longer be denied: Things out there in America are a whole lot different from what you thought.

Yes, things are very different indeed these days in the “real America” outside the bubble. In fact, things have been going badly wrong in America since the beginning of the 21st century.

It turns out that the year 2000 marks a grim historical milestone of sorts for our nation. For whatever reasons, the Great American Escalator, which had lifted successive generations of Americans to ever higher standards of living and levels of social well-being, broke down around then—and broke down very badly.

The warning lights have been flashing, and the klaxons sounding, for more than a decade and a half. But our pundits and prognosticators and professors and policymakers, ensconced as they generally are deep within the bubble, were for the most part too distant from the distress of the general population to see or hear it. (So much for the vaunted “information era” and “big-data revolution.”) Now that those signals are no longer possible to ignore, it is high time for experts and intellectuals to reacquaint themselves with the country in which they live and to begin the task of describing what has befallen the country in which we have lived since the dawn of the new century.

II
Consider the condition of the American economy. In some circles people still widely believe, as one recent New York Times business-section article cluelessly insisted before the inauguration, that “Mr. Trump will inherit an economy that is fundamentally solid.” But this is patent nonsense. By now it should be painfully obvious that the U.S. economy has been in the grip of deep dysfunction since the dawn of the new century. And in retrospect, it should also be apparent that America’s strange new economic maladies were almost perfectly designed to set the stage for a populist storm.

Ever since 2000, basic indicators have offered oddly inconsistent readings on America’s economic performance and prospects. It is curious and highly uncharacteristic to find such measures so very far out of alignment with one another. We are witnessing an ominous and growing divergence between three trends that should ordinarily move in tandem: wealth, output, and employment. Depending upon which of these three indicators you choose, America looks to be heading up, down, or more or less nowhere.
From the standpoint of wealth creation, the 21st century is off to a roaring start. By this yardstick, it looks as if Americans have never had it so good and as if the future is full of promise. Between early 2000 and late 2016, the estimated net worth of American households and nonprofit institutions more than doubled, from $44 trillion to $90 trillion. (SEE FIGURE 1.)

Although that wealth is not evenly distributed, it is still a fantastic sum of money—an average of over a million dollars for every notional family of four. This upsurge of wealth took place despite the crash of 2008—indeed, private wealth holdings are over $20 trillion higher now than they were at their pre-crash apogee. The value of American real-estate assets is near or at all-time highs, and America’s businesses appear to be thriving. Even before the “Trump rally” of late 2016 and early 2017, U.S. equities markets were hitting new highs—and since stock prices are strongly shaped by expectations of future profits, investors evidently are counting on the continuation of the current happy days for U.S. asset holders for some time to come.
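A rough check of that per-family figure (a sketch only: the $90 trillion net-worth estimate is the article’s; the roughly 323 million 2016 U.S. population is my assumption):

# Rough check of the "over a million dollars per notional family of four" claim
net_worth_total = 90e12   # estimated net worth of U.S. households and nonprofits, late 2016 (article's figure)
population = 323e6        # approximate 2016 U.S. population (my assumption)

per_capita = net_worth_total / population
print(f"Per person: ${per_capita:,.0f}; per notional family of four: ${4 * per_capita:,.0f}")
# roughly $279,000 per person, or about $1.1 million per family of four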

A rather less cheering picture, though, emerges if we look instead at real trends for the macro-economy. Here, performance since the start of the century might charitably be described as mediocre, and prospects today are no better than guarded.

The recovery from the crash of 2008—which unleashed the worst recession since the Great Depression—has been singularly slow and weak. According to the Bureau of Economic Analysis (BEA), it took nearly four years for America’s gross domestic product (GDP) to re-attain its late 2007 level. As of late 2016, total value added to the U.S. economy was just 12 percent higher than in 2007. (SEE FIGURE 2.) The situation is even more sobering if we consider per capita growth. It took America six and a half years—until mid-2014—to get back to its late 2007 per capita production levels. And in late 2016, per capita output was just 4 percent higher than in late 2007—nine years earlier. By this reckoning, the American economy looks to have suffered something close to a lost decade.

But there was clearly trouble brewing in America’s macro-economy well before the 2008 crash, too. Between late 2000 and late 2007, per capita GDP growth averaged less than 1.5 percent per annum. That compares with the nation’s long-term postwar 1948–2000 per capita growth rate of almost 2.3 percent, which in turn can be compared to the “snap back” tempo of 1.1 percent per annum since per capita GDP bottomed out in 2009. Between 2000 and 2016, per capita growth in America has averaged less than 1 percent a year. To state it plainly: With postwar, pre-21st-century rates for the years 2000–2016, per capita GDP in America would be more than 20 percent higher than it is today.
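To see where the “more than 20 percent” claim comes from, compound the two growth rates quoted above over the sixteen years from 2000 to 2016. A minimal sketch (the rates are the article’s; treating “less than 1 percent” as a flat 1 percent keeps the estimate conservative):

# Compounding the quoted per capita growth rates over 2000-2016
years = 16
actual_rate = 0.010    # per capita growth, 2000-2016 (article: "less than 1 percent a year")
postwar_rate = 0.023   # per capita growth, 1948-2000 (article's figure)

gap = (1 + postwar_rate) ** years / (1 + actual_rate) ** years - 1
print(f"Per capita GDP shortfall versus the postwar trend: about {gap:.0%}")  # roughly 23%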

The reasons for America’s newly fitful and halting macroeconomic performance are still a puzzlement to economists and a subject of considerable contention and debate.1 Economists are generally in consensus, however, in one area: They have begun redefining the growth potential of the U.S. economy downwards. The U.S. Congressional Budget Office (CBO), for example, suggests that the “potential growth” rate for the U.S. economy at full employment of factors of production has now dropped below 1.7 percent a year, implying a sustainable long-term annual per capita economic growth rate for America today of well under 1 percent.

Then there is the employment situation. If 21st-century America’s GDP trends have been disappointing, labor-force trends have been utterly dismal. Work rates have fallen off a cliff since the year 2000 and are at their lowest levels in decades. We can see this by looking at the estimates by the Bureau of Labor Statistics (BLS) for the civilian employment rate, the jobs-to-population ratio for adult civilian men and women. (SEE FIGURE 3.) Between early 2000 and late 2016, America’s overall work rate for Americans age 20 and older underwent a drastic decline. It plunged by almost 5 percentage points (from 64.6 to 59.7). Unless you are a labor economist, you may not appreciate just how severe a falloff in employment such numbers attest to. Postwar America never experienced anything comparable.

From peak to trough, the collapse in work rates for U.S. adults between 2008 and 2010 was roughly twice the amplitude of what had previously been the country’s worst postwar recession, back in the early 1980s. In that previous steep recession, it took America five years to re-attain the adult work rates recorded at the start of 1980. This time, the U.S. job market has as yet, in early 2017, scarcely begun to claw its way back up to the work rates of 2007—much less back to the work rates from early 2000.

As may be seen in Figure 3, U.S. adult work rates never recovered entirely from the recession of 2001—much less the crash of ’08. And the work rates being measured here include people who are engaged in any paid employment—any job, at any wage, for any number of hours of work at all.

On Wall Street and in some parts of Washington these days, one hears that America has gotten back to “near full employment.” For Americans outside the bubble, such talk must seem nonsensical. It is true that the oft-cited “civilian unemployment rate” looked pretty good by the end of the Obama era—in December 2016, it was down to 4.7 percent, about the same as it had been back in 1965, at a time of genuine full employment. The problem here is that the unemployment rate only tracks joblessness for those still in the labor force; it takes no account of workforce dropouts. Alas, the exodus out of the workforce has been the big labor-market story for America’s new century. (At this writing, for every unemployed American man between 25 and 55 years of age, there are another three who are neither working nor looking for work.) Thus the “unemployment rate” increasingly looks like an antique index devised for some earlier and increasingly distant war: the economic equivalent of a musket inventory or a cavalry count.

By the criterion of adult work rates, by contrast, employment conditions in America remain remarkably bleak. From late 2009 through early 2014, the country’s work rates more or less flatlined. So far as can be told, this is the only “recovery” in U.S. economic history in which that basic labor-market indicator almost completely failed to respond.

Since 2014, there has finally been a measure of improvement in the work rate—but it would be unwise to exaggerate the dimensions of that turnaround. As of late 2016, the adult work rate in America was still at its lowest level in more than 30 years. To put things another way: If our nation’s work rate today were back up to its start-of-the-century highs, well over 10 million more Americans would currently have paying jobs.

There is no way to sugarcoat these awful numbers. They are not a statistical artifact that can be explained away by population aging, or by increased educational enrollment for adult students, or by any other genuine change in contemporary American society. The plain fact is that 21st-century America has witnessed a dreadful collapse of work.
For an apples-to-apples look at America’s 21st-century jobs problem, we can focus on the 25–54 population—known to labor economists for self-evident reasons as the “prime working age” group. For this key labor-force cohort, work rates in late 2016 were down almost 4 percentage points from their year-2000 highs. That is a jobs gap approaching 5 million for this group alone.
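The “approaching 5 million” jobs gap is simply that percentage-point drop applied to the size of the prime-age cohort. A sketch, assuming roughly 125 million U.S. residents aged 25 to 54 in 2016 (the population estimate is mine, not the article’s):

# Translating the prime-age work-rate decline into a head count
prime_age_population = 125e6   # approximate U.S. residents aged 25-54, 2016 (my assumption)
work_rate_drop = 0.04          # ~4 percentage-point decline from the year-2000 high (article's figure)

print(f"Implied prime-age jobs gap: {prime_age_population * work_rate_drop / 1e6:.0f} million")  # ~5 million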

It is not only that work rates for prime-age males have fallen since the year 2000—they have, but the collapse of work for American men is a tale that goes back at least half a century. (I wrote a short book last year about this sad saga.2) What is perhaps more startling is the unexpected and largely unnoticed fall-off in work rates for prime-age women. In the U.S. and all other Western societies, postwar labor markets underwent an epochal transformation. After World War II, work rates for prime women surged, and continued to rise—until the year 2000. Since then, they too have declined. Current work rates for prime-age women are back to where they were a generation ago, in the late 1980s. The 21st-century U.S. economy has been brutal for male and female laborers alike—and the wreckage in the labor market has been sufficiently powerful to cancel, and even reverse, one of our society’s most distinctive postwar trends: the rise of paid work for women outside the household.

In our era of no more than indifferent economic growth, 21st–century America has somehow managed to produce markedly more wealth for its wealthholders even as it provided markedly less work for its workers. And trends for paid hours of work look even worse than the work rates themselves. Between 2000 and 2015, according to the BEA, total paid hours of work in America increased by just 4 percent (as against a 35 percent increase for 1985–2000, the 15-year period immediately preceding this one). Over the 2000–2015 period, however, the adult civilian population rose by almost 18 percent—meaning that paid hours of work per adult civilian have plummeted by a shocking 12 percent thus far in our new American century.
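The 12 percent decline falls directly out of the two BEA-derived percentages in the paragraph above; no additional data are needed. A quick check:

# Paid hours per adult civilian, 2000-2015, from the growth rates quoted above
hours_growth = 0.04        # growth in total paid hours of work, 2000-2015 (article's figure)
population_growth = 0.18   # growth in the adult civilian population, 2000-2015 (article's figure)

per_adult_change = (1 + hours_growth) / (1 + population_growth) - 1
print(f"Change in paid hours per adult civilian: {per_adult_change:.1%}")  # about -11.9%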

This is the terrible contradiction of economic life in what we might call America’s Second Gilded Age (2000—). It is a paradox that may help us understand a number of overarching features of our new century. These include the consistent findings that public trust in almost all U.S. institutions has sharply declined since 2000, even as growing majorities hold that America is “heading in the wrong direction.” It provides an immediate answer to why overwhelming majorities of respondents in public-opinion surveys continue to tell pollsters, year after year, that our ever-richer America is still stuck in the middle of a recession. The mounting economic woes of the “little people” may not have been generally recognized by those inside the bubble, or even by many bubble inhabitants who claimed to be economic specialists—but they proved to be potent fuel for the populist fire that raged through American politics in 2016.

III
So general economic conditions for many ordinary Americans—not least of these, Americans who did not fit within the academy’s designated victim classes—have been rather more insecure than those within the comfort of the bubble understood. But the anxiety, dissatisfaction, anger, and despair that range within our borders today are not wholly a reaction to the way our economy is misfiring. On the nonmaterial front, it is likewise clear that many things in our society are going wrong and yet seem beyond our powers to correct.

Some of these gnawing problems are by no means new: A number of them (such as family breakdown) can be traced back at least to the 1960s, while others are arguably as old as modernity itself (anomie and isolation in big anonymous communities, secularization and the decline of faith). But a number have roared down upon us by surprise since the turn of the century—and others have redoubled with fearsome new intensity since roughly the year 2000.

American health conditions seem to have taken a seriously wrong turn in the new century. It is not just that overall health progress has been shockingly slow, despite the trillions we devote to medical services each year. (Which “Cold War babies” among us would have predicted we’d live to see the day when life expectancy in East Germany was higher than in the United States, as is the case today?)

Alas, the problem is not just slowdowns in health progress—there also appears to have been positive retrogression for broad and heretofore seemingly untroubled segments of the national population. A short but electrifying 2015 paper by Anne Case and Nobel Economics Laureate Angus Deaton talked about a mortality trend that had gone almost unnoticed until then: rising death rates for middle-aged U.S. whites. By Case and Deaton’s reckoning, death rates rose somewhat over the 1999–2013 period for all non-Hispanic white men and women 45–54 years of age—but they rose sharply for those with high-school degrees or less, and for this less-educated grouping most of the rise in death rates was accounted for by suicides, chronic liver cirrhosis, and poisonings (including drug overdoses).

Though some researchers, for highly technical reasons, suggested that the mortality spike might not have been quite as sharp as Case and Deaton reckoned, there is little doubt that the spike itself has taken place. Health has been deteriorating for a significant swath of white America in our new century, thanks in large part to drug and alcohol abuse. All this sounds a little too close for comfort to the story of modern Russia, with its devastating vodka- and drug-binging health setbacks. Yes: It can happen here, and it has. Welcome to our new America.

In December 2016, the Centers for Disease Control and Prevention (CDC) reported that for the first time in decades, life expectancy at birth in the United States had dropped very slightly (to 78.8 years in 2015, from 78.9 years in 2014). Though the decline was small, it was statistically meaningful—rising death rates were characteristic of males and females alike; of blacks and whites and Latinos together. (Only black women avoided mortality increases—their death levels were stagnant.) A jump in “unintentional injuries” accounted for much of the overall uptick.
It would be unwarranted to place too much portent in a single year’s mortality changes; slight annual drops in U.S. life expectancy have occasionally been registered in the past, too, followed by continued improvements. But given other developments we are witnessing in our new America, we must wonder whether the 2015 decline in life expectancy is just a blip, or the start of a new trend. We will find out soon enough. It cannot be encouraging, though, that the Human Mortality Database, an international consortium of demographers who vet national data to improve comparability between countries, has suggested that health progress in America essentially ceased in 2012—that the U.S. gained on average only about a single day of life expectancy at birth between 2012 and 2014, before the 2015 turndown.

The opioid epidemic of pain pills and heroin that has been ravaging and shortening lives from coast to coast is a new plague for our new century. The terrifying novelty of this particular drug epidemic, of course, is that it has gone (so to speak) “mainstream” this time, effecting breakout from disadvantaged minority communities to Main Street White America. By 2013, according to a 2015 report by the Drug Enforcement Administration, more Americans died from drug overdoses (largely but not wholly opioid abuse) than from either traffic fatalities or guns. The dimensions of the opioid epidemic in the real America are still not fully appreciated within the bubble, where drug use tends to be more carefully limited and recreational. In Dreamland, his harrowing and magisterial account of modern America’s opioid explosion, the journalist Sam Quinones notes in passing that “in one three-month period” just a few years ago, according to the Ohio Department of Health, “fully 11 percent of all Ohioans were prescribed opiates.” And of course many Americans self-medicate with licit or illicit painkillers without doctors’ orders.

In the fall of 2016, Alan Krueger, former chairman of the President’s Council of Economic Advisers, released a study that further refined the picture of the real existing opioid epidemic in America: According to his work, nearly half of all prime working-age male labor-force dropouts—an army now totaling roughly 7 million men—currently take pain medication on a daily basis.

We already knew from other sources (such as BLS “time use” surveys) that the overwhelming majority of the prime-age men in this un-working army generally don’t “do civil society” (charitable work, religious activities, volunteering), or for that matter much in the way of child care or help for others in the home either, despite the abundance of time on their hands. Their routine, instead, typically centers on watching—watching TV, DVDs, Internet, hand-held devices, etc.—and indeed watching for an average of 2,000 hours a year, as if it were a full-time job. But Krueger’s study adds a poignant and immensely sad detail to this portrait of daily life in 21st-century America: In our mind’s eye we can now picture many millions of un-working men in the prime of life, out of work and not looking for jobs, sitting in front of screens—stoned.

But how did so many millions of un-working men, whose incomes are limited, manage en masse to afford a constant supply of pain medication? Oxycontin is not cheap. As Dreamland carefully explains, one main mechanism today has been the welfare state: more specifically, Medicaid, Uncle Sam’s means-tested health-benefits program. Here is how it works (we are with Quinones in Portsmouth, Ohio):

[The Medicaid card] pays for medicine—whatever pills a doctor deems that the insured patient needs. Among those who receive Medicaid cards are people on state welfare or on a federal disability program known as SSI. . . . If you could get a prescription from a willing doctor—and Portsmouth had plenty of them—Medicaid health-insurance cards paid for that prescription every month. For a three-dollar Medicaid co-pay, therefore, addicts got pills priced at thousands of dollars, with the difference paid for by U.S. and state taxpayers. A user could turn around and sell those pills, obtained for that three-dollar co-pay, for as much as ten thousand dollars on the street.

In 21st-century America, “dependence on government” has thus come to take on an entirely new meaning.

You may now wish to ask: What share of prime-working-age men these days are enrolled in Medicaid? According to the Census Bureau’s SIPP survey (Survey of Income and Program Participation), as of 2013, over one-fifth (21 percent) of all civilian men between 25 and 55 years of age were Medicaid beneficiaries. For prime-age people not in the labor force, the share was over half (53 percent). And for un-working Anglos (non-Hispanic white men not in the labor force) of prime working age, the share enrolled in Medicaid was 48 percent.

By the way: Of the entire un-working prime-age male Anglo population in 2013, nearly three-fifths (57 percent) were reportedly collecting disability benefits from one or more government disability programs. Disability checks and means-tested benefits cannot support a lavish lifestyle. But they can offer a permanent alternative to paid employment, and for growing numbers of American men, they do. The rise of these programs has coincided with the death of work for larger and larger numbers of American men not yet of retirement age. We cannot say that these programs caused the death of work for millions upon millions of younger men: What is incontrovertible, however, is that they have financed it—just as Medicaid inadvertently helped finance America’s immense and increasing appetite for opioids in our new century.

It is intriguing to note that America’s nationwide opioid epidemic has not been accompanied by a nationwide crime wave (excepting of course the apparent explosion of illicit heroin use). Just the opposite: As best can be told, national victimization rates for violent crimes and property crimes have both reportedly dropped by about two-thirds over the past two decades.3 The drop in crime over the past generation has done great things for the general quality of life in much of America. There is one complication from this drama, however, that inhabitants of the bubble may not be aware of, even though it is all too well known to a great many residents of the real America. This is the extraordinary expansion of what some have termed America’s “criminal class”—the population sentenced to prison or convicted of felony offenses—in recent decades. This trend did not begin in our century, but it has taken on breathtaking enormity since the year 2000.

Most well-informed readers know that the U.S. currently has a higher share of its populace in jail or prison than almost any other country on earth, that Barack Obama and others talk of our criminal-justice process as “mass incarceration,” and that well over 2 million men were in prison or jail in recent years.4 But only a tiny fraction of all living Americans ever convicted of a felony is actually incarcerated at this very moment. Quite the contrary: Maybe 90 percent of all sentenced felons today are out of confinement and living more or less among us. The reason: the basic arithmetic of sentencing and incarceration in America today. Correctional release and sentenced community supervision (probation and parole) guarantee a steady annual “flow” of convicted felons back into society to augment the very considerable “stock” of felons and ex-felons already there. And this “stock” is by now truly enormous.

One forthcoming demographic study by Sarah Shannon and five other researchers estimates that the cohort of current and former felons in America very nearly reached 20 million by the year 2010. If its estimates are roughly accurate, and if America’s felon population has continued to grow at more or less the same tempo traced out for the years leading up to 2010, we would expect it to surpass 23 million persons by the end of 2016 at the latest. Very rough calculations might therefore suggest that at this writing, America’s population of non-institutionalized adults with a felony conviction somewhere in their past has almost certainly broken the 20 million mark by the end of 2016. A little more rough arithmetic suggests that about 17 million men in our general population have a felony conviction somewhere in their CV. That works out to one of every eight adult males in America today.

We have to use rough estimates here, rather than precise official numbers, because the government does not collect any data at all on the size or socioeconomic circumstances of this population of 20 million, and never has. Amazing as this may sound and scandalous though it may be, America has, at least to date, effectively banished this huge group—a group roughly twice the total size of our illegal-immigrant population and an adult population larger than that in any state but California—to a near-total and seemingly unending statistical invisibility. Our ex-cons are, so to speak, statistical outcasts who live in a darkness our polity does not care enough to illuminate—beyond the scope or interest of public policy, unless and until they next run afoul of the law.

Thus we cannot describe with any precision or certainty what has become of those who make up our “criminal class” after their (latest) sentencing or release. In the most stylized terms, however, we might guess that their odds in the real America are not all that favorable. And when we consider some of the other trends we have already mentioned—employment, health, addiction, welfare dependence—we can see the emergence of a malign new nationwide undertow, pulling downward against social mobility.
Social mobility has always been the jewel in the crown of the American mythos and ethos. The idea (not without a measure of truth to back it up) was that people in America are free to achieve according to their merit and their grit—unlike in other places, where they are trapped by barriers of class or the misfortune of misrule. Nearly two decades into our new century, there are unmistakable signs that America’s fabled social mobility is in trouble—perhaps even in serious trouble.

Consider the following facts. First, according to the Census Bureau, geographical mobility in America has been on the decline for three decades, and in 2016 the annual movement of households from one location to the next was reportedly at an all-time (postwar) low. Second, as a study by three Federal Reserve economists and a Notre Dame colleague demonstrated last year, “labor market fluidity”—the churning between jobs that among other things allows people to get ahead—has been on the decline in the American labor market for decades, with no sign as yet of a turnaround. Finally, and not least important, a December 2016 report by the “Equal Opportunity Project,” a team led by the formidable Stanford economist Raj Chetty, calculated that the odds of a 30-year-old’s earning more than his parents at the same age were now just 51 percent: down from 86 percent 40 years ago. Other researchers who have examined the same data argue that the odds may not be quite as low as the Chetty team concludes, but agree that the chances of surpassing one’s parents’ real income have been on the downswing and are probably lower now than ever before in postwar America.

Thus the bittersweet reality of life for real Americans in the early 21st century: Even though the American economy still remains the world’s unrivaled engine of wealth generation, those outside the bubble may have less of a shot at the American Dream than has been the case for decades, maybe generations—possibly even since the Great Depression.

IV
The funny thing is, people inside the bubble are forever talking about “economic inequality,” that wonderful seminar construct, and forever virtue-signaling about how personally opposed they are to it. By contrast, “economic insecurity” is akin to a phrase from an unknown language. But if we were somehow to find a “Google Translate” function for communicating from real America into the bubble, an important message might be conveyed:

The abstraction of “inequality” doesn’t matter a lot to ordinary Americans. The reality of economic insecurity does. The Great American Escalator is broken—and it badly needs to be fixed.

With the election of 2016, Americans within the bubble finally learned that the 21st century has gotten off to a very bad start in America. Welcome to the reality. We have a lot of work to do together to turn this around.

1 Some economists suggest the reason has to do with the unusual nature of the Great Recession: that downturns born of major financial crises intrinsically require longer adjustment and correction periods than the more familiar, ordinary business-cycle downturn. Others have proposed theories to explain why the U.S. economy may instead have downshifted to a more tepid tempo in the Bush-Obama era. One such theory holds that the pace of productivity is dropping because the scale of recent technological innovation is unrepeatable. There is also a “secular stagnation” hypothesis, surmising we have entered into an age of very low “natural real interest rates” consonant with significantly reduced demand for investment. What is incontestable is that the 10-year moving average for per capita economic growth is lower for America today than at any time since the Korean War—and that the slowdown in growth commenced in the decade before the 2008 crash. (It is also possible that the anemic status of the U.S. macro-economy is being exaggerated by measurement issues—productivity improvements from information technology, for example, have been oddly elusive in our officially reported national output—but few today would suggest that such concealed gains would totally transform our view of the real economy’s true performance.)
2 Nicholas Eberstadt, Men Without Work: America’s Invisible Crisis (Templeton Press, 2016)
3 This is not to ignore the gruesome exceptions—places like Chicago and Baltimore—or to neglect the risk that crime may make a more general comeback: It is simply to acknowledge one of the bright trends for America in the new century.
4 In 2013, roughly 2.3 million men were behind bars according to the Bureau of Justice Statistics.

One could be forgiven for wondering what Kellyanne Conway, a close adviser to President Trump, was thinking recently when she turned the White House briefing room into the set of the Home Shopping Network. “Go buy Ivanka’s stuff!” she told Fox News viewers during an interview, referring to the clothing and accessories line of the president’s daughter. It’s not clear if her cheerleading led to any spike in sales, but it did lead to calls for an investigation into whether she violated federal ethics rules, and prompted the White House to later state that it had “counseled” Conway about her behavior.

To understand what provoked Conway’s on-air marketing campaign, look no further than the ongoing boycotts targeting all things Trump. This latest manifestation of the passion to impose financial harm to make a political point has taken things in a new and odd direction. Once, boycotts were serious things, requiring serious commitment and real sacrifice. There were boycotts by aggrieved workers, such as the United Farm Workers, against their employers; boycotts by civil-rights activists and religious groups; and boycotts of goods produced by nations like apartheid-era South Africa. Many of these efforts, sustained over years by committed cadres of activists, successfully pressured businesses and governments to change.

Since Trump’s election, the boycott has become less an expression of long-term moral and practical opposition and more an expression of the left’s collective id. As Harvard Business School professor Michael Norton told the Atlantic recently, “Increasingly, the way we express our political opinions is through buying or not buying instead of voting or not voting.” And evidently the way some people express political opinions when someone they don’t like is elected is to launch an endless stream of virtue-signaling boycotts. Democratic politicians ostentatiously boycotted Trump’s inauguration. New Balance sneaker owners vowed to boycott the company and filmed themselves torching their shoes after a company spokesman tweeted praise for Trump. Trump detractors called for a boycott of L.L. Bean after one of its board members was discovered to have (gasp!) given a personal contribution to a pro-Trump PAC.

By their nature, boycotts are a form of proxy warfare, tools wielded by consumers who want to send a message to a corporation or organization about their displeasure with specific practices.

Trump-era boycotts, however, merely seem to be a way to channel an overwhelming yet vague feeling of political frustration. Take the “Grab Your Wallet” campaign, whose mission, described in humblebragging detail on its website, is as follows: “Since its first humble incarnation as a screenshot on October 11, the #GrabYourWallet boycott list has grown as a central resource for understanding how our own consumer purchases have inadvertently supported the political rise of the Trump family.”

So this boycott isn’t against a specific business or industry; it’s a protest against one man and his children, with trickle-down effects for anyone who does business with them. Grab Your Wallet doesn’t just boycott Trump-branded hotels and golf courses; the group targets businesses such as Bed Bath & Beyond, for example, because it carries Ivanka Trump diaper bags. Even QVC and the Carnival Cruise corporation are targeted for boycott because they advertise on Celebrity Apprentice, which supposedly “further enriches Trump.”

Grab Your Wallet has received support from “notable figures” such as “Don Cheadle, Greg Louganis, Lucy Lawless, Roseanne Cash, Neko Case, Joyce Carol Oates, Robert Reich, Pam Grier, and Ben Cohen (of Ben & Jerry’s),” according to the group’s website. This rogues gallery of celebrity boycotters has been joined by enthusiastic hashtag activists on Twitter who post remarks such as, “Perhaps fed govt will buy all Ivanka merch & force prisoners & detainees in coming internment camps 2 wear it” and “Forced to #DressLikeaWoman by a sexist boss? #GrabYourWallet and buy a nice FU pantsuit at Trump-free shops.” There’s even a website, dontpaytrump.com, which offers a free plug-in extension for your Web browser. It promises a “simple Trump boycott extension that makes it easy to be a conscious consumer and keep your money out of Trump’s tiny hands.”

Many of the companies targeted for boycott—Bed Bath & Beyond, QVC, TJ Maxx, Amazon—are the kind of retailers that carry moderately priced merchandise that working- and middle-class families can afford. But the list of Grab Your Wallet–approved alternatives for shopping is made up of places like Bergdorf’s and Barney’s. These are hardly accessible choices for the TJ Maxx customer. Indeed, there is more than a whiff of quasi-racist elitism in the self-congratulatory tweets posted by Grab Your Wallet supporters, such as this response to news that Nordstrom is no longer planning to carry Ivanka’s shoe line: “Soon we’ll see Ivanka shoes at Dollar Store, next to Jalapeno Windex and off-brand batteries.”

If Grab Your Wallet is really about “flexing of consumer power in favor of a more respectful, inclusive society,” then it has some work to do.
And then there are the conveniently malleable ethics of the anti-Trump boycott brigade. A small number of affordable retailers like Old Navy made the Grab Your Wallet cut for “approved” alternatives for shopping. But just a few years ago, a progressive website described in detail the “living hell of a Bangladeshi sweatshop” that manufactures Old Navy clothing. Evidently progressives can now sleep peacefully at night knowing large corporations like Old Navy profit from young Bangladeshis making 20 cents an hour and working 17-hour days churning out cheap cargo pants—as long as they don’t bear a Trump label.

In truth, it matters little if Ivanka’s fashion business goes bust. It was always just a branding game anyway. The world will go on in the absence of Ivanka-named suede ankle booties. And in some sense the rash of anti-Trump boycotts is just what Trump, who frequently calls for boycotts of media outlets such as Rolling Stone and retailers like Macy’s, deserves.
But the left’s boycott braggadocio might prove short-lived. Nordstrom denied that it dropped Ivanka’s line of apparel and shoes because of pressure from the Grab Your Wallet campaign; it blamed lagging sales. And the boycotters’ tone of moral superiority—like the ridiculous posturing of the anti-Trump left’s self-flattering designation, “the resistance”—won’t endear them to the Trump voters they must convert if they hope to gain ground in the midterm elections.

As for inclusiveness, as one contributor to Psychology Today noted, the demographic breakdown of the typical boycotter, “especially consumer and ecological boycotts,” is a young, well-educated, politically left woman, undermining somewhat the idea of boycotts as a weapon of the weak and oppressed.

Self-indulgent protests and angry boycotts are no doubt cathartic for their participants (a 2016 study in the Journal of Consumer Affairs cited psychological research that found “by venting their frustrations, consumers can diminish their negative psychological states and, as a result, experience relief”). But such protests are not always ultimately catalytic. As researchers noted in a study published recently at Social Science Research Network, protesters face what they call “the activists’ dilemma,” which occurs when “tactics that raise awareness also tend to reduce popular support.” As the study found, “while extreme tactics may succeed in attracting attention, they typically reduce popular public support for the movement by eroding bystanders’ identification with the movement, ultimately deterring bystanders from supporting the cause or becoming activists themselves.”

The progressive left should be thoughtful about the reality of such protest fatigue. Writing in the Guardian, Jamie Peck recently enthused: “Of course, boycotts alone will not stop Trumpism. Effective resistance to authoritarianism requires more disruptive actions than not buying certain products . . . . But if there’s anything the past few weeks have taught us, it’s that resistance must take as many forms as possible, and it’s possible to call attention to the ravages of neoliberalism while simultaneously allying with any and all takers against the immediate dangers posed by our impetuous orange president.”

Boycotts are supposed to be about accountability. But accountability is a two-way street. The motives and tactics of the boycotters themselves are of the utmost importance. In his book about consumer boycotts, scholar Monroe Friedman advises that successful ones depend on a “rationale” that is “simple, straightforward, and appear[s] legitimate.” Whatever Trump’s flaws (and they are legion), by “going low” with scattershot boycotts, the left undermines its own legitimacy—and its claims to the moral high ground of “resistance” in the process.

========END===============

History of Community Foundations in the US

Below is my essay:

The Double Trust Imperative: A History of Community Foundations in the United States

Background:

I became the Chairman of the Board of the Community Foundation for Greater Atlanta (CFGA) in January 2017. It’s a three-year appointment.

Last year, as Vice Chair, I decided to study the history of these institutions. Because I couldn’t find a good history, I decided I would write a History of Community Foundations in the United States. In addition to researching the subject extensively, I have been discussing the work with other heads of community foundations nationally. Through these discussions, I decided to try to identify the key difference between community foundations and other institutions. I put that difference right in the title: The Double Trust Imperative … because community foundations uniquely build trust in two directions: toward the community and toward donors.

The essay documents how community foundations came to be, and how $82 billion in philanthropic assets came to be housed in these institutions – so that the institutions can invest those assets back into the communities they serve. The impact on any given community? Well, in ATL alone, we have 900+ donors with $900 million+ in assets, and the CFGA gave away $130 million+ last year to non-profits of all shapes and sizes in ATL. The ATL community foundation (CFGA) is the second largest foundation in Georgia, and the 17th largest community foundation in the United States.

Well, the essay was selected as one of the pre-reads for the upcoming Conference for Large Community Foundations in San Diego. Over 200 people will be there from all over the country. These are the Chairs and the CEOs of all the big community foundations – the movers and shakers of the movement (Alicia Phillipp, the CEO of the Community Foundation for Greater Atlanta, and I will attend representing ATL).

They have a tradition of reading the pre-reads (so a lot of movers and shakers will read this).

It’s a pre-read for the second-day session, which is themed “where have community foundations been and where are they headed.”

So there you go. A bit of news about me as a writer, kind of …… not a best seller, but step by step……

UHVDC and China

Credit: Economist Article about UHVDC and China

A greener grid
China’s embrace of a new electricity-transmission technology holds lessons for others
The case for high-voltage direct-current connectors
Jan 14th 2017

YOU cannot negotiate with nature. From the offshore wind farms of the North Sea to the solar panels glittering in the Atacama desert, renewable energy is often generated in places far from the cities and industrial centres that consume it. To boost renewables and drive down carbon-dioxide emissions, a way must be found to send energy over long distances efficiently.

The technology already exists (see article). Most electricity is transmitted today as alternating current (AC), which works well over short and medium distances. But transmission over long distances requires very high voltages, which can be tricky for AC systems. Ultra-high-voltage direct-current (UHVDC) connectors are better suited to such spans. These high-capacity links not only make the grid greener, but also make it more stable by balancing supply. The same UHVDC links that send power from distant hydroelectric plants, say, can be run in reverse when their output is not needed, pumping water back above the turbines.
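To make the efficiency argument concrete, here is a rough back-of-the-envelope sketch (mine, not the Economist’s) in Python: for a fixed amount of delivered power, raising the transmission voltage lowers the current, and resistive losses fall with the square of that current. The power level, line resistance and voltages below are illustrative assumptions, and the calculation ignores AC-specific effects such as reactive power that further favor DC over very long spans.

# Back-of-the-envelope: resistive line loss for a fixed delivered power.
# All figures below are illustrative assumptions, not data from the article.

def loss_fraction(power_w, voltage_v, resistance_ohm):
    """Fraction of transmitted power lost as heat in the conductors (I^2 * R)."""
    current_a = power_w / voltage_v          # I = P / V
    return (current_a ** 2) * resistance_ohm / power_w

POWER = 8e9          # 8 GW of delivered power (assumed, roughly UHVDC scale)
RESISTANCE = 5.0     # ohms of total line resistance (assumed, purely illustrative)

for voltage in (400e3, 800e3, 1100e3):      # a conventional EHV level vs. UHVDC levels
    print(f"{voltage / 1e3:.0f} kV: about {loss_fraction(POWER, voltage, RESISTANCE):.1%} lost")

# Doubling the voltage cuts resistive losses by a factor of four; this sketch
# omits AC-specific effects (reactive power, skin effect) that are part of why
# DC is preferred on very long spans.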

Boosters of UHVDC lines envisage a supergrid capable of moving energy around the planet. That is wildly premature. But one country has grasped the potential of these high-capacity links. State Grid, China’s state-owned electricity utility, is halfway through a plan to spend $88bn on UHVDC lines between 2009 and 2020. It wants 23 lines in operation by 2030.

That China has gone furthest in this direction is no surprise. From railways to cities, China’s appetite for big infrastructure projects is legendary (see article). China’s deepest wells of renewable energy are remote—think of the sun-baked Gobi desert, the windswept plains of Xinjiang and the mountain ranges of Tibet where rivers drop precipitously. Concerns over pollution give the government an additional incentive to locate coal-fired plants away from population centres. But its embrace of the technology holds two big lessons for others. The first is a demonstration effect. China shows that UHVDC lines can be built on a massive scale. The largest, already under construction, will have the capacity to power Greater London almost three times over, and will span more than 3,000km.

The second lesson concerns the co-ordination problems that come with long-distance transmission. UHVDCs are as much about balancing interests as grids. The costs of construction are hefty. Utilities that already sell electricity at high prices are unlikely to welcome competition from suppliers of renewable energy; consumers in renewables-rich areas who buy electricity at low prices may balk at the idea of paying more because power is being exported elsewhere. Reconciling such interests is easier the fewer the utilities involved—and in China, State Grid has a monopoly.

That suggests it will be simpler for some countries than others to follow China’s lead. Developing economies that lack an established electricity infrastructure have an advantage. Solar farms on Africa’s plains and hydroplants on its powerful rivers can use UHVDC lines to get energy to growing cities. India has two lines on the drawing-board, and should have more.

Things are more complicated in the rich world. Europe’s utilities work pretty well together but a cross-border UHVDC grid will require a harmonised regulatory framework. America is the biggest anomaly. It is a continental-sized economy with the wherewithal to finance UHVDCs. It is also horribly fragmented. There are 3,000 utilities, each focused on supplying power to its own customers. Consumers a few states away are not a priority, no matter how much sense it might make to send them electricity. A scheme to connect the three regional grids in America is stuck. The only way that America will create a green national grid will be if the federal government throws its weight behind it.

Live wire
Building a UHVDC network does not solve every energy problem. Security of supply remains an issue, even within national borders: any attacker who wants to disrupt the electricity supply to China’s east coast will soon have a 3,000km-long cable to strike. Other routes to a cleaner grid are possible, such as distributed solar power and battery storage. But to bring about a zero-carbon grid, UHVDC lines will play a role. China has its foot on the gas. Others should follow.
This article appeared in the Leaders section of the print edition under the headline “A greener grid”

Mr President

Credit: Washington Post Article authored by David Maraniss, author of ‘Barack Obama: The Story’

His journey to become a leader of consequence
How Barack Obama’s understanding of his place in the world, as a mixed-race American with a multicultural upbringing, affected his presidency.
By David Maraniss, author of ‘Barack Obama: The Story’  

When Barack Obama worked as a community organizer amid the bleak industrial decay of Chicago’s far South Side during the 1980s, he tried to follow a mantra of that profession: Dream of the world as you wish it to be, but deal with the world as it is.

The notion of an Obama presidency was beyond imagining in the world as it was then. But, three decades later, it has happened, and a variation of that saying seems appropriate to the moment: Stop comparing Obama with the president you thought he might be, and deal with the one he has been.

Seven-plus years into his White House tenure, Obama is working through the final months before his presidency slips from present to past, from daily headlines to history books. That will happen at noontime on the 20th of January next year, but the talk of his legacy began much earlier and has intensified as he rounds the final corner of his improbable political career.

Of the many ways of looking at Obama’s presidency, the first is to place it in the continuum of his life. The past is prologue for all presidents to one degree or another, even as the job tests them in ways that nothing before could. For Obama, the line connecting his life’s story with the reality of what he has been as the 44th president is consistently evident.

The first connection involves Obama’s particular form of ambition. His political design arrived relatively late. He was no grade school or high school or college leader. Unlike Bill Clinton, he did not have a mother telling everyone that her first-grader would grow up to be president. When Obama was a toddler in Honolulu, his white grandfather boasted that his grandson was a Hawaiian prince, but that was more to explain his skin color than to promote family aspirations.
But once ambition took hold of Obama, it was with an intense sense of mission, sometimes tempered by self-doubt but more often self-assured and sometimes bordering on messianic. At the end of his sophomore year at Occidental College, he started to talk about wanting to change the world. At the end of his time as a community organizer in Chicago, he started to talk about how the only way to change the world was through electoral power. When he was defeated for the one and only time in his career in a race for Congress in 2000, he questioned whether he indeed had been chosen for greatness, as he had thought he was, but soon concluded that he needed another test and began preparing to run for the Senate seat from Illinois that he won in 2004.

That is the sensibility he took into the White House. It was not a careless slip when he said during the 2008 campaign that he wanted to emulate Ronald Reagan and change “the trajectory of America” in ways that recent presidents, including Clinton, had been unable to do. Obama did not just want to be president. His mission was to leave a legacy as a president of consequence, the liberal counter to Reagan. To gauge himself against the highest-ranked presidents, and to learn from their legacies, Obama held private White House sessions with an elite group of American historians.

It is now becoming increasingly possible to argue that he has neared his goal. His decisions were ineffective in stemming the human wave of disaster in Syria, and he has thus far failed to close the detention camp at Guantanamo Bay, Cuba, and to make anything more than marginal changes on two domestic issues of importance to him, immigration and gun control. But from the Affordable Care Act to the legalization of same-sex marriage and the nuclear deal with Iran, from the stimulus package that started the slow recovery from the 2008 recession to the Detroit auto industry bailout, from global warming and renewable energy initiatives to the veto of the Keystone pipeline, from the withdrawal of combat troops from Iraq and Afghanistan and the killing of Osama bin Laden to the opening of relations with Cuba, the liberal achievements have added up, however one judges the policies.

This was done at the same time that he faced criticism from various quarters for seeming aloof, if not arrogant, for not being more effective in his dealings with members of Congress of either party, for not being angry enough when some thought he should be, or for not being an alpha male leader.

A promise of unity
His accomplishments were bracketed by two acts of negation by opponents seeking to minimize his authority: first a vow by Republican leaders to do what it took to render him a one-term president; and then, with 11 months left in his second term, a pledge to deny him the appointment of a nominee for the crucial Supreme Court seat vacated by the death of Antonin Scalia, a conservative icon. Obama’s White House years also saw an effort to delegitimize him personally by shrouding his story in fallacious myth — questioning whether he was a foreigner in our midst, secretly born in Kenya, despite records to the contrary, and insinuating that he was a closet Muslim, again defying established fact. Add to that a raucous new techno-political world of unending instant judgments and a decades-long erosion of economic stability for the working class and middle class that was making an increasingly large segment of the population, of various ideologies, feel left behind, uncertain, angry and divided, and the totality was a national condition that was anything but conducive to the promise of unity that brought Obama into the White House.

To the extent that his campaign rhetoric raised expectations that he could bridge the nation’s growing political divide, Obama owns responsibility for the way his presidency was perceived. His political rise, starting in 2004, when his keynote convention speech propelled him into the national consciousness, was based on his singular ability to tie his personal story as the son of a father from Kenya and mother from small-town Kansas to some transcendent common national purpose. Unity out of diversity, the ideal of the American mosaic that was constantly being tested, generation after generation, part reality, part myth. Even though Obama romanticized his parents’ relationship, which was brief and dysfunctional, his story of commonality was more than a campaign construct; it was deeply rooted in his sense of self.

As a young man, Obama at times felt apart from his high school and college friends of various races and perspectives as he watched them settle into defined niches in culture, outlook and occupation. He told one friend that he felt “large dollops of envy for them” but believed that because of his own life’s story, his mixed-race heritage, his experiences in multicultural Hawaii and exotic Indonesia, his childhood without “a structure or tradition to support me,” he had no choice but to seek the largest possible embrace of the world. “The only way to assuage my feelings of isolation are to absorb all the traditions [and all the] classes, make them mine, me theirs,” he wrote. He carried that notion with him through his political career in Illinois and all the way to the White House, where it was challenged in ways he had never confronted before.

With most politicians, their strengths are their weaknesses, and their weaknesses are their strengths.

With Obama, one way that was apparent was in his coolness. At various times in his presidency, there were calls from all sides for him to be hotter. He was criticized by liberals for not expressing more anger at Republicans who were stifling his agenda, or at Wall Street financiers and mortgage lenders whose wheeler-dealing helped drag the country into recession. He was criticized by conservatives for not being more vociferous in denouncing Islamic terrorists, or belligerent in standing up to Russian President Vladimir Putin.

His coolness as president can best be understood by the sociological forces that shaped him before he reached the White House. There is a saying among native Hawaiians that goes: Cool head, main thing. This was the culture in which Obama reached adolescence on the island of Oahu, and before that during the four years he lived with his mother in Jakarta. Never show too much. Never rush into things. Maintain a personal reserve and live by your own sense of time. This sensibility was heightened when he developed an affection for jazz, the coolest mode of music, as part of his self-tutorial on black society that he undertook while living with white grandparents in a place where there were very few African Americans. As he entered the political world, the predominantly white society made it clear to him the dangers of coming across as an angry black man. As a community organizer, he refined the skill of leading without being overt about it, making the dispossessed citizens he was organizing feel their own sense of empowerment. As a constitutional law professor at the University of Chicago, he developed an affinity for rational thought.

Differing approaches
All of this created a president who was comfortable coolly working in his own way at his own speed, waiting for events to turn his way.
Was he too cool in his dealings with other politicians? One way to consider that question is by comparing him with Clinton. Both came out of geographic isolation, Hawaii and southwest Arkansas, far from the center of power, in states that had never before offered up presidents. Both came out of troubled families defined by fatherlessness and alcoholism. Both at various times felt a sense of abandonment. Obama had the additional quandary of trying to figure out his racial identity. And the two dealt with their largely similar situations in diametrically different ways.

Rather than deal with the problems and contradictions of his life head-on, Clinton became skilled at moving around and past them. He had an insatiable need to be around people for affirmation. As a teenager, he would ask a friend to come over to the house just to watch him do a crossword puzzle. His life became all about survival and reading the room. He kept shoeboxes full of file cards of the names and phone numbers of people who might help him someday. His nature was to always move forward. He would wake up each day and forgive himself and keep going. His motto became “What’s next?” He refined these skills to become a political force of nature, a master of transactional politics. This got him to the White House, and into trouble in the White House, and out of trouble again, in a cycle of loss and recovery.

Obama spent much of his young adulthood, from when he left Hawaii for the mainland and college in 1979 to the time he left Chicago for Harvard Law School nearly a decade later, trying to figure himself out, examining the racial, cultural, personal, sociological and political contradictions that life threw at him. He internalized everything, first withdrawing from the world during a period in New York City and then slowly reentering it as he was finding his identity as a community organizer in Chicago.

Rather than plow forward relentlessly, like Clinton, Obama slowed down. He woke up each day and wrote in his journal, analyzing the world and his place in it. He emerged from that process with a sense of self that helped him rise in politics all the way to the White House, then led him into difficulties in the White House, or at least criticism for the way he operated. His sensibility was that if he could resolve the contradictions of his own life, why couldn’t the rest of the country resolve the larger contradictions of American life? Why couldn’t Congress? The answer from Republicans was that his actions were different from his words, and that while he talked the language of compromise, he did not often act on it. He had built an impressive organization to get elected, but it relied more on the idea of Obama than on a long history of personal contacts. He did not have a figurative equivalent of Clinton’s shoebox full of allies, and he did not share his Democratic predecessor’s profound need to be around people. He was not as interested in the personal side of politics that was so second nature to presidents such as Clinton and Lyndon Johnson.

Politicians of both parties complained that Obama seemed distant. He was not calling them often enough. When he could be schmoozing with members of Congress, cajoling them and making them feel important, he was often back in the residence having dinner with his wife, Michelle, and their two daughters, or out golfing with the same tight group of high school chums and White House subordinates.

Here again, some history provided context. Much of Obama’s early life had been a long search for home, which he finally found with Michelle and their girls, Malia and Sasha. There were times when Obama was an Illinois state senator and living for a few months at a time in a hotel room in Springfield, when Michelle made clear her unhappiness with his political obsession, and the sense of home that he had strived so hard to find was jeopardized. Once he reached the White House, with all the demands on his time, if there was a choice, he was more inclined to be with his family than hang out with politicians. A weakness in one sense, a strength in another, enriching the image of the first-ever black first family.

A complex question
The fact that Obama was the first black president, and that his family was the first African American first family, provides him with an uncontested hold on history. Not long into his presidency, even to mention that seemed beside the point, if not tedious, but it was a prejudice-shattering event when he was elected in 2008, and its magnitude is not likely to diminish. Even as some of the political rhetoric this year longs for a past America, the odds are greater that as the century progresses, no matter what happens in the 2016 election, Obama will be seen as the pioneer who broke an archaic and distant 220-year period of white male dominance.

But what kind of black president has he been?

His life illuminates the complexity of that question. His white mother, who conscientiously taught him black history at an early age but died nearly a decade before her son reached the White House, would have been proud that he broke the racial barrier. But she also inculcated him in the humanist idea of the universality of humankind, a philosophy that her life exemplified as she married a Kenyan and later an Indonesian and worked to help empower women in many of the poorest countries in the world. Obama eventually found his own comfort as a black man with a black family, but his public persona, and his political persona, was more like his mother’s.

At various times during his career, Obama faced criticism from some African Americans that, because Obama did not grow up in a minority community and received an Ivy League education, he was not “black enough.” That argument was one of the reasons he lost that 2000 congressional race to Bobby L. Rush, a former Black Panther, but fortunes shift and attitudes along with them; there was no more poignant and revealing scene at Obama’s final State of the Union address to Congress than Rep. Rush waiting anxiously at the edge of the aisle and reaching out in the hope of recognition from the passing president.

As president, Obama rarely broke character to show what was inside. He was reluctant to bring race into the political discussion, and never publicly stated what many of his supporters believed: that some of the antagonism toward his presidency was rooted in racism. He wished to be judged by the content of his presidency rather than the color of his skin. One exception came after February 2012, when Trayvon Martin, an unarmed black teenager, was shot and killed in Florida by a gun-toting neighborhood zealot. In July 2013, commenting on the verdict in the case, Obama talked about the common experience of African American men being followed when shopping in a department store, or being passed up by a taxi on the street, or a car door lock clicking as they walked by — all of which he said had happened to him. He said Trayvon Martin could have been his son, and then added, “another way of saying that is: Trayvon Martin could have been me 35 years ago.”

Nearly two years later, in June 2015, Obama hit what might be considered the most powerful emotional note of his presidency, a legacy moment, by finding a universal message in black spiritual expression. Time after time during his two terms, he had performed the difficult task of trying to console the country after another mass shooting, choking up with tears whenever he talked about little children being the victims, as they had been in 2012 at Sandy Hook Elementary School in Newtown, Conn. Now he was delivering the heart-rending message one more time, nearing the end of a eulogy in Charleston, S.C., for the Rev. Clementa Pinckney, one of nine African Americans killed by a young white gunman during a prayer service at Emanuel African Methodist Episcopal Church. It is unlikely that any other president could have done what Barack Obama did that day, when all the separate parts of his life story came together with a national longing for reconciliation as he started to sing, “Amazing grace, how sweet the sound, that saved a wretch like me. . . .”

NYT on Google Brain, Google Translate, and AI Progress

Amazing progress!

New York Times Article on Google and AI Progress

The Great A.I. Awakening
How Google used artificial intelligence to transform Google Translate, one of its more popular services — and how machine learning is poised to reinvent computing itself.
By Gideon Lewis-Kraus, Dec. 14, 2016

Referenced Here: Google Translate, Google Brain, Jun Rekimoto, Sundar Pichai, Jeff Dean, Andrew Ng, Greg Corrado, Baidu, Project Marvin, Geoffrey Hinton, Max Planck Institute for Biological Cybernetics, Quoc Le, MacDuff Hughes, Apple’s Siri, Facebook’s M, Amazon’s Echo, Alan Turing, GO (the Board Game), convolutional neural network of Yann LeCun, supervised learning, machine learning, deep learning, Mike Schuster, T.P.U.s

Prologue: You Are What You Have Read
Late one Friday night in early November, Jun Rekimoto, a distinguished professor of human-computer interaction at the University of Tokyo, was online preparing for a lecture when he began to notice some peculiar posts rolling in on social media. Apparently Google Translate, the company’s popular machine-translation service, had suddenly and almost immeasurably improved. Rekimoto visited Translate himself and began to experiment with it. He was astonished. He had to go to sleep, but Translate refused to relax its grip on his imagination.
Rekimoto wrote up his initial findings in a blog post. First, he compared a few sentences from two published versions of “The Great Gatsby,” Takashi Nozaki’s 1957 translation and Haruki Murakami’s more recent iteration, with what this new Google Translate was able to produce. Murakami’s translation is written “in very polished Japanese,” Rekimoto explained to me later via email, but the prose is distinctively “Murakami-style.” By contrast, Google’s translation — despite some “small unnaturalness” — reads to him as “more transparent.”
The second half of Rekimoto’s post examined the service in the other direction, from Japanese to English. He dashed off his own Japanese interpretation of the opening to Hemingway’s “The Snows of Kilimanjaro,” then ran that passage back through Google into English. He published this version alongside Hemingway’s original, and proceeded to invite his readers to guess which was the work of a machine.
NO. 1:
Kilimanjaro is a snow-covered mountain 19,710 feet high, and is said to be the highest mountain in Africa. Its western summit is called the Masai “Ngaje Ngai,” the House of God. Close to the western summit there is the dried and frozen carcass of a leopard. No one has explained what the leopard was seeking at that altitude.
NO. 2:
Kilimanjaro is a mountain of 19,710 feet covered with snow and is said to be the highest mountain in Africa. The summit of the west is called “Ngaje Ngai” in Masai, the house of God. Near the top of the west there is a dry and frozen dead body of leopard. No one has ever explained what leopard wanted at that altitude.
Even to a native English speaker, the missing article on the leopard is the only real giveaway that No. 2 was the output of an automaton. Their closeness was a source of wonder to Rekimoto, who was well acquainted with the capabilities of the previous service. Only 24 hours earlier, Google would have translated the same Japanese passage as follows:
Kilimanjaro is 19,710 feet of the mountain covered with snow, and it is said that the highest mountain in Africa. Top of the west, “Ngaje Ngai” in the Maasai language, has been referred to as the house of God. The top close to the west, there is a dry, frozen carcass of a leopard. Whether the leopard had what the demand at that altitude, there is no that nobody explained.
Rekimoto promoted his discovery to his hundred thousand or so followers on Twitter, and over the next few hours thousands of people broadcast their own experiments with the machine-translation service. Some were successful, others meant mostly for comic effect. As dawn broke over Tokyo, Google Translate was the No. 1 trend on Japanese Twitter, just above some cult anime series and the long-awaited new single from a girl-idol supergroup. Everybody wondered: How had Google Translate become so uncannily artful?

Four days later, a couple of hundred journalists, entrepreneurs and advertisers from all over the world gathered in Google’s London engineering office for a special announcement. Guests were greeted with Translate-branded fortune cookies. Their paper slips had a foreign phrase on one side — mine was in Norwegian — and on the other, an invitation to download the Translate app. Tables were set with trays of doughnuts and smoothies, each labeled with a placard that advertised its flavor in German (zitrone), Portuguese (baunilha) or Spanish (manzana). After a while, everyone was ushered into a plush, dark theater.

Sadiq Khan, the mayor of London, stood to make a few opening remarks. A friend, he began, had recently told him he reminded him of Google. “Why, because I know all the answers?” the mayor asked. “No,” the friend replied, “because you’re always trying to finish my sentences.” The crowd tittered politely. Khan concluded by introducing Google’s chief executive, Sundar Pichai, who took the stage.
Pichai was in London in part to inaugurate Google’s new building there, the cornerstone of a new “knowledge quarter” under construction at King’s Cross, and in part to unveil the completion of the initial phase of a company transformation he announced last year. The Google of the future, Pichai had said on several occasions, was going to be “A.I. first.” What that meant in theory was complicated and had welcomed much speculation. What it meant in practice, with any luck, was that soon the company’s products would no longer represent the fruits of traditional computer programming, exactly, but “machine learning.”
A rarefied department within the company, Google Brain, was founded five years ago on this very principle: that artificial “neural networks” that acquaint themselves with the world via trial and error, as toddlers do, might in turn develop something like human flexibility. This notion is not new — a version of it dates to the earliest stages of modern computing, in the 1940s — but for much of its history most computer scientists saw it as vaguely disreputable, even mystical. Since 2011, though, Google Brain has demonstrated that this approach to artificial intelligence could solve many problems that confounded decades of conventional efforts. Speech recognition didn’t work very well until Brain undertook an effort to revamp it; the application of machine learning made its performance on Google’s mobile platform, Android, almost as good as human transcription. The same was true of image recognition. Less than a year ago, Brain for the first time commenced with the gut renovation of an entire consumer product, and its momentous results were being celebrated tonight.
Translate made its debut in 2006 and since then has become one of Google’s most reliable and popular assets; it serves more than 500 million monthly users in need of 140 billion words per day in a different language. It exists not only as its own stand-alone app but also as an integrated feature within Gmail, Chrome and many other Google offerings, where we take it as a push-button given — a frictionless, natural part of our digital commerce. It was only with the refugee crisis, Pichai explained from the lectern, that the company came to reckon with Translate’s geopolitical importance: On the screen behind him appeared a graph whose steep curve indicated a recent fivefold increase in translations between Arabic and German. (It was also close to Pichai’s own heart. He grew up in India, a land divided by dozens of languages.) The team had been steadily adding new languages and features, but gains in quality over the last four years had slowed considerably.
Until today. As of the previous weekend, Translate had been converted to an A.I.-based system for much of its traffic, not just in the United States but in Europe and Asia as well: The rollout included translations between English and Spanish, French, Portuguese, German, Chinese, Japanese, Korean and Turkish. The rest of Translate’s hundred-odd languages were to come, with the aim of eight per month, by the end of next year. The new incarnation, to the pleasant surprise of Google’s own engineers, had been completed in only nine months. The A.I. system had demonstrated overnight improvements roughly equal to the total gains the old one had accrued over its entire lifetime.
Pichai has an affection for the obscure literary reference; he told me a month earlier, in his office in Mountain View, Calif., that Translate in part exists because not everyone can be like the physicist Robert Oppenheimer, who learned Sanskrit to read the Bhagavad Gita in the original. In London, the slide on the monitors behind him flicked to a Borges quote: “Uno no es lo que es por lo que escribe, sino por lo que ha leído.”
Grinning, Pichai read aloud an awkward English version of the sentence that had been rendered by the old Translate system: “One is not what is for what he writes, but for what he has read.”
To the right of that was a new A.I.-rendered version: “You are not what you write, but what you have read.”
It was a fitting remark: The new Google Translate was run on the first machines that had, in a sense, ever learned to read anything at all.
Google’s decision to reorganize itself around A.I. was the first major manifestation of what has become an industrywide machine-learning delirium. Over the past four years, six companies in particular — Google, Facebook, Apple, Amazon, Microsoft and the Chinese firm Baidu — have touched off an arms race for A.I. talent, particularly within universities. Corporate promises of resources and freedom have thinned out top academic departments. It has become widely known in Silicon Valley that Mark Zuckerberg, chief executive of Facebook, personally oversees, with phone calls and video-chat blandishments, his company’s overtures to the most desirable graduate students. Starting salaries of seven figures are not unheard-of. Attendance at the field’s most important academic conference has nearly quadrupled. What is at stake is not just one more piecemeal innovation but control over what very well could represent an entirely new computational platform: pervasive, ambient artificial intelligence.

The phrase “artificial intelligence” is invoked as if its meaning were self-evident, but it has always been a source of confusion and controversy. Imagine if you went back to the 1970s, stopped someone on the street, pulled out a smartphone and showed her Google Maps. Once you managed to convince her you weren’t some oddly dressed wizard, and that what you withdrew from your pocket wasn’t a black-arts amulet but merely a tiny computer more powerful than that onboard the Apollo shuttle, Google Maps would almost certainly seem to her a persuasive example of “artificial intelligence.” In a very real sense, it is. It can do things any map-literate human can manage, like get you from your hotel to the airport — though it can do so much more quickly and reliably. It can also do things that humans simply and obviously cannot: It can evaluate the traffic, plan the best route and reorient itself when you take the wrong exit.
Practically nobody today, however, would bestow upon Google Maps the honorific “A.I.,” so sentimental and sparing are we in our use of the word “intelligence.” Artificial intelligence, we believe, must be something that distinguishes HAL from whatever it is a loom or wheelbarrow can do. The minute we can automate a task, we downgrade the relevant skill involved to one of mere mechanism. Today Google Maps seems, in the pejorative sense of the term, robotic: It simply accepts an explicit demand (the need to get from one place to another) and tries to satisfy that demand as efficiently as possible. The goal posts for “artificial intelligence” are thus constantly receding.
When he has an opportunity to make careful distinctions, Pichai differentiates between the current applications of A.I. and the ultimate goal of “artificial general intelligence.” Artificial general intelligence will not involve dutiful adherence to explicit instructions, but instead will demonstrate a facility with the implicit, the interpretive. It will be a general tool, designed for general purposes in a general context. Pichai believes his company’s future depends on something like this. Imagine if you could tell Google Maps, “I’d like to go to the airport, but I need to stop off on the way to buy a present for my nephew.” A more generally intelligent version of that service — a ubiquitous assistant, of the sort that Scarlett Johansson memorably disembodied three years ago in the Spike Jonze film “Her”— would know all sorts of things that, say, a close friend or an earnest intern might know: your nephew’s age, and how much you ordinarily like to spend on gifts for children, and where to find an open store. But a truly intelligent Maps could also conceivably know all sorts of things a close friend wouldn’t, like what has only recently come into fashion among preschoolers in your nephew’s school — or more important, what its users actually want. If an intelligent machine were able to discern some intricate if murky regularity in data about what we have done in the past, it might be able to extrapolate about our subsequent desires, even if we don’t entirely know them ourselves.
The new wave of A.I.-enhanced assistants — Apple’s Siri, Facebook’s M, Amazon’s Echo — are all creatures of machine learning, built with similar intentions. The corporate dreams for machine learning, however, aren’t exhausted by the goal of consumer clairvoyance. A medical-imaging subsidiary of Samsung announced this year that its new ultrasound devices could detect breast cancer. Management consultants are falling all over themselves to prep executives for the widening industrial applications of computers that program themselves. DeepMind, a 2014 Google acquisition, defeated the reigning human grandmaster of the ancient board game Go, despite predictions that such an achievement would take another 10 years.
In a famous 1950 essay, Alan Turing proposed a test for an artificial general intelligence: a computer that could, over the course of five minutes of text exchange, successfully deceive a real human interlocutor. Once a machine can translate fluently between two natural languages, the foundation has been laid for a machine that might one day “understand” human language well enough to engage in plausible conversation. Google Brain’s members, who pushed and helped oversee the Translate project, believe that such a machine would be on its way to serving as a generally intelligent all-encompassing personal digital assistant.

What follows here is the story of how a team of Google researchers and engineers — at first one or two, then three or four, and finally more than a hundred — made considerable progress in that direction. It’s an uncommon story in many ways, not least of all because it defies many of the Silicon Valley stereotypes we’ve grown accustomed to. It does not feature people who think that everything will be unrecognizably different tomorrow or the next day because of some restless tinkerer in his garage. It is neither a story about people who think technology will solve all our problems nor one about people who think technology is ineluctably bound to create apocalyptic new ones. It is not about disruption, at least not in the way that word tends to be used.
It is, in fact, three overlapping stories that converge in Google Translate’s successful metamorphosis to A.I. — a technical story, an institutional story and a story about the evolution of ideas. The technical story is about one team on one product at one company, and the process by which they refined, tested and introduced a brand-new version of an old product in only about a quarter of the time anyone, themselves included, might reasonably have expected. The institutional story is about the employees of a small but influential artificial-intelligence group within that company, and the process by which their intuitive faith in some old, unproven and broadly unpalatable notions about computing upended every other company within a large radius. The story of ideas is about the cognitive scientists, psychologists and wayward engineers who long toiled in obscurity, and the process by which their ostensibly irrational convictions ultimately inspired a paradigm shift in our understanding not only of technology but also, in theory, of consciousness itself.

The first story, the story of Google Translate, takes place in Mountain View over nine months, and it explains the transformation of machine translation. The second story, the story of Google Brain and its many competitors, takes place in Silicon Valley over five years, and it explains the transformation of that entire community. The third story, the story of deep learning, takes place in a variety of far-flung laboratories — in Scotland, Switzerland, Japan and most of all Canada — over seven decades, and it might very well contribute to the revision of our self-image as first and foremost beings who think.
All three are stories about artificial intelligence. The seven-decade story is about what we might conceivably expect or want from it. The five-year story is about what it might do in the near future. The nine-month story is about what it can do right this minute. These three stories are themselves just proof of concept. All of this is only the beginning.

Part I: Learning Machine
1. The Birth of Brain
Jeff Dean, though his title is senior fellow, is the de facto head of Google Brain. Dean is a sinewy, energy-efficient man with a long, narrow face, deep-set eyes and an earnest, soapbox-derby sort of enthusiasm. The son of a medical anthropologist and a public-health epidemiologist, Dean grew up all over the world — Minnesota, Hawaii, Boston, Arkansas, Geneva, Uganda, Somalia, Atlanta — and, while in high school and college, wrote software used by the World Health Organization. He has been with Google since 1999, as employee 25ish, and has had a hand in the core software systems beneath nearly every significant undertaking since then. A beloved artifact of company culture is Jeff Dean Facts, written in the style of the Chuck Norris Facts meme: “Jeff Dean’s PIN is the last four digits of pi.” “When Alexander Graham Bell invented the telephone, he saw a missed call from Jeff Dean.” “Jeff Dean got promoted to Level 11 in a system where the maximum level is 10.” (This last one is, in fact, true.)

One day in early 2011, Dean walked into one of the Google campus’s “microkitchens” — the “Googley” word for the shared break spaces on most floors of the Mountain View complex’s buildings — and ran into Andrew Ng, a young Stanford computer-science professor who was working for the company as a consultant. Ng told him about Project Marvin, an internal effort (named after the celebrated A.I. pioneer Marvin Minsky) he had recently helped establish to experiment with “neural networks,” pliant digital lattices based loosely on the architecture of the brain. Dean himself had worked on a primitive version of the technology as an undergraduate at the University of Minnesota in 1990, during one of the method’s brief windows of mainstream acceptability. Now, over the previous five years, the number of academics working on neural networks had begun to grow again, from a handful to a few dozen. Ng told Dean that Project Marvin, which was being underwritten by Google’s secretive X lab, had already achieved some promising results.
Dean was intrigued enough to lend his “20 percent” — the portion of work hours every Google employee is expected to contribute to programs outside his or her core job — to the project. Pretty soon, he suggested to Ng that they bring in another colleague with a neuroscience background, Greg Corrado. (In graduate school, Corrado was taught briefly about the technology, but strictly as a historical curiosity. “It was good I was paying attention in class that day,” he joked to me.) In late spring they brought in one of Ng’s best graduate students, Quoc Le, as the project’s first intern. By then, a number of the Google engineers had taken to referring to Project Marvin by another name: Google Brain.
Since the term “artificial intelligence” was first coined, at a kind of constitutional convention of the mind at Dartmouth in the summer of 1956, a majority of researchers have long thought the best approach to creating A.I. would be to write a very big, comprehensive program that laid out both the rules of logical reasoning and sufficient knowledge of the world. If you wanted to translate from English to Japanese, for example, you would program into the computer all of the grammatical rules of English, and then the entirety of definitions contained in the Oxford English Dictionary, and then all of the grammatical rules of Japanese, as well as all of the words in the Japanese dictionary, and only after all of that feed it a sentence in a source language and ask it to tabulate a corresponding sentence in the target language. You would give the machine a language map that was, as Borges would have had it, the size of the territory. This perspective is usually called “symbolic A.I.” — because its definition of cognition is based on symbolic logic — or, disparagingly, “good old-fashioned A.I.”
There are two main problems with the old-fashioned approach. The first is that it’s awfully time-consuming on the human end. The second is that it only really works in domains where rules and definitions are very clear: in mathematics, for example, or chess. Translation, however, is an example of a field where this approach fails horribly, because words cannot be reduced to their dictionary definitions, and because languages tend to have as many exceptions as they have rules. More often than not, a system like this is liable to translate “minister of agriculture” as “priest of farming.” Still, for math and chess it worked great, and the proponents of symbolic A.I. took it for granted that no activities signaled “general intelligence” better than math and chess.
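As a toy illustration of that failure mode, here is a short Python sketch (mine, not anything described in the article): a purely dictionary-driven translator that swaps each word for a single stored sense, with no model of context, and so produces exactly the “priest of farming” kind of error. The dictionary entries are hypothetical.

# Hypothetical one-sense-per-word dictionary; the entries are made up for illustration.
TOY_DICTIONARY = {
    "minister": "priest",       # the religious sense wins; the governmental sense is lost
    "of": "of",
    "agriculture": "farming",
}

def word_by_word_translate(phrase):
    """Replace each word with its single dictionary sense, ignoring all context."""
    return " ".join(TOY_DICTIONARY.get(word, word) for word in phrase.lower().split())

print(word_by_word_translate("minister of agriculture"))   # -> "priest of farming"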

There were, however, limits to what this system could do. In the 1980s, a robotics researcher at Carnegie Mellon pointed out that it was easy to get computers to do adult things but nearly impossible to get them to do things a 1-year-old could do, like hold a ball or identify a cat. By the 1990s, despite punishing advancements in computer chess, we still weren’t remotely close to artificial general intelligence.
There has always been another vision for A.I. — a dissenting view — in which the computers would learn from the ground up (from data) rather than from the top down (from rules). This notion dates to the early 1940s, when it occurred to researchers that the best model for flexible automated intelligence was the brain itself. A brain, after all, is just a bunch of widgets, called neurons, that either pass along an electrical charge to their neighbors or don’t. What’s important are less the individual neurons themselves than the manifold connections among them. This structure, in its simplicity, has afforded the brain a wealth of adaptive advantages. The brain can operate in circumstances in which information is poor or missing; it can withstand significant damage without total loss of control; it can store a huge amount of knowledge in a very efficient way; it can isolate distinct patterns but retain the messiness necessary to handle ambiguity.
There was no reason you couldn’t try to mimic this structure in electronic form, and in 1943 it was shown that arrangements of simple artificial neurons could carry out basic logical functions. They could also, at least in theory, learn the way we do. With life experience, depending on a particular person’s trials and errors, the synaptic connections among pairs of neurons get stronger or weaker. An artificial neural network could do something similar, by gradually altering, on a guided trial-and-error basis, the numerical relationships among artificial neurons. It wouldn’t need to be preprogrammed with fixed rules. It would, instead, rewire itself to reflect patterns in the data it absorbed.
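Here is a minimal sketch of that idea in code (mine, and far simpler than anything Google Brain built): a tiny network whose connection strengths start out random and are repeatedly nudged by gradient descent so that its outputs better match a handful of labeled examples, in this case the XOR of two bits. Everything below is illustrative and assumes only NumPy.

import numpy as np

rng = np.random.default_rng(0)

# Toy labeled data: two input bits and, as the "correct answer", their XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of connections ("synapses") with random initial strengths.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(10000):
    # Forward pass: signals flow through the two layers of artificial neurons.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Compare the outputs with the labels, then work backward to see how much
    # each connection contributed to the error, and strengthen or weaken it.
    output_delta = (output - y) * output * (1 - output)
    hidden_delta = (output_delta @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * hidden.T @ output_delta
    W1 -= learning_rate * X.T @ hidden_delta

print(np.round(output, 2))   # after training, close to the labels [0, 1, 1, 0]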
This attitude toward artificial intelligence was evolutionary rather than creationist. If you wanted a flexible mechanism, you wanted one that could adapt to its environment. If you wanted something that could adapt, you didn’t want to begin with the indoctrination of the rules of chess. You wanted to begin with very basic abilities — sensory perception and motor control — in the hope that advanced skills would emerge organically. Humans don’t learn to understand language by memorizing dictionaries and grammar books, so why should we possibly expect our computers to do so?
Google Brain was the first major commercial institution to invest in the possibilities embodied by this way of thinking about A.I. Dean, Corrado and Ng began their work as a part-time, collaborative experiment, but they made immediate progress. They took architectural inspiration for their models from recent theoretical outlines — as well as ideas that had been on the shelf since the 1980s and 1990s — and drew upon both the company’s peerless reserves of data and its massive computing infrastructure. They instructed the networks on enormous banks of “labeled” data — speech files with correct transcriptions, for example — and the computers improved their responses to better match reality.
“The portion of evolution in which animals developed eyes was a big development,” Dean told me one day, with customary understatement. We were sitting, as usual, in a whiteboarded meeting room, on which he had drawn a crowded, snaking timeline of Google Brain and its relation to inflection points in the recent history of neural networks. “Now computers have eyes. We can build them around the capabilities that now exist to understand photos. Robots will be drastically transformed. They’ll be able to operate in an unknown environment, on much different problems.” These capacities they were building may have seemed primitive, but their implications were profound.

2. The Unlikely Intern
In its first year or so of existence, Brain’s experiments in the development of a machine with the talents of a 1-year-old had, as Dean said, worked to great effect. Its speech-recognition team swapped out part of their old system for a neural network and encountered, in pretty much one fell swoop, the best quality improvements anyone had seen in 20 years. Their system’s object-recognition abilities improved by an order of magnitude. This was not because Brain’s personnel had generated a sheaf of outrageous new ideas in just a year. It was because Google had finally devoted the resources — in computers and, increasingly, personnel — to fill in outlines that had been around for a long time.
A great preponderance of these extant and neglected notions had been proposed or refined by a peripatetic English polymath named Geoffrey Hinton. In the second year of Brain’s existence, Hinton was recruited to Brain as Andrew Ng left. (Ng now leads the 1,300-person A.I. team at Baidu.) Hinton wanted to leave his post at the University of Toronto for only three months, so for arcane contractual reasons he had to be hired as an intern. At intern training, the orientation leader would say something like, “Type in your LDAP” — a user login — and he would flag a helper to ask, “What’s an LDAP?” All the smart 25-year-olds in attendance, who had only ever known deep learning as the sine qua non of artificial intelligence, snickered: “Who is that old guy? Why doesn’t he get it?”
“At lunchtime,” Hinton said, “someone in the queue yelled: ‘Professor Hinton! I took your course! What are you doing here?’ After that, it was all right.”
A few months later, Hinton and two of his students demonstrated truly astonishing gains in a big image-recognition contest, run by an open-source collective called ImageNet, that asks computers not only to identify a monkey but also to distinguish between spider monkeys and howler monkeys, and among God knows how many different breeds of cat. Google soon approached Hinton and his students with an offer. They accepted. “I thought they were interested in our I.P.,” he said. “Turns out they were interested in us.”
Hinton comes from one of those old British families emblazoned like the Darwins at eccentric angles across the intellectual landscape, where regardless of titular preoccupation a person is expected to make sideline contributions to minor problems in astronomy or fluid dynamics. His great-great-grandfather was George Boole, whose foundational work in symbolic logic underpins the computer; another great-great-grandfather was a celebrated surgeon, his father a venturesome entomologist, his father’s cousin a Los Alamos researcher; the list goes on. He trained at Cambridge and Edinburgh, then taught at Carnegie Mellon before he ended up at Toronto, where he still spends half his time. (His work has long been supported by the largess of the Canadian government.) I visited him in his office at Google there. He has tousled yellowed-pewter hair combed forward in a mature Noel Gallagher style and wears a baggy striped dress shirt that persists in coming untucked, and oval eyeglasses that slide down to the tip of a prominent nose. He speaks with a driving if shambolic wit, and says things like, “Computers will understand sarcasm before Americans do.”
Hinton had been working on neural networks since his undergraduate days at Cambridge in the late 1960s, and he is seen as the intellectual primogenitor of the contemporary field. For most of that time, whenever he spoke about machine learning, people looked at him as though he were talking about the Ptolemaic spheres or bloodletting by leeches. Neural networks were taken as a disproven folly, largely on the basis of one overhyped project: the Perceptron, an artificial neural network that Frank Rosenblatt, a Cornell psychologist, developed in the late 1950s. The New York Times reported that the machine’s sponsor, the United States Navy, expected it would “be able to walk, talk, see, write, reproduce itself and be conscious of its existence.” It went on to do approximately none of those things. Marvin Minsky, the dean of artificial intelligence in America, had worked on neural networks for his 1954 Princeton thesis, but he’d since grown tired of the inflated claims that Rosenblatt — who was a contemporary at Bronx Science — made for the neural paradigm. (He was also competing for Defense Department funding.) Along with an M.I.T. colleague, Minsky published a book that proved that there were painfully simple problems the Perceptron could never solve.
Minsky’s criticism of the Perceptron extended only to networks of one “layer,” i.e., one layer of artificial neurons between what’s fed to the machine and what you expect from it — and later in life, he expounded ideas very similar to contemporary deep learning. But Hinton already knew at the time that complex tasks could be carried out if you had recourse to multiple layers. The simplest description of a neural network is that it’s a machine that makes classifications or predictions based on its ability to discover patterns in data. With one layer, you could find only simple patterns; with more than one, you could look for patterns of patterns. Take the case of image recognition, which tends to rely on a contraption called a “convolutional neural net.” (These were elaborated in a seminal 1998 paper whose lead author, a Frenchman named Yann LeCun, did his postdoctoral research in Toronto under Hinton and now directs a huge A.I. endeavor at Facebook.) The first layer of the network learns to identify the very basic visual trope of an “edge,” meaning a nothing (an off-pixel) followed by a something (an on-pixel) or vice versa. Each successive layer of the network looks for a pattern in the previous layer. A pattern of edges might be a circle or a rectangle. A pattern of circles or rectangles might be a face. And so on. This more or less parallels the way information is put together in increasingly abstract ways as it travels from the photoreceptors in the retina back and up through the visual cortex. At each conceptual step, detail that isn’t immediately relevant is thrown away. If several edges and circles come together to make a face, you don’t care exactly where the face is found in the visual field; you just care that it’s a face.
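A tiny sketch of that first-layer “edge” trope, assuming NumPy: a two-number filter that fires where an off-pixel is followed by an on-pixel. Deeper layers would scan these responses for patterns of edges, then patterns of those patterns; this toy shows only the first rung.

```python
import numpy as np

# A single row of "pixels": a run of nothing (0) followed by something (1).
row = np.array([0, 0, 0, 1, 1, 1, 0, 0])

# The simplest edge detector: fires +1 where an off-pixel is followed by an
# on-pixel, and -1 for the reverse.
edge_filter = np.array([-1, 1])
responses = np.correlate(row, edge_filter)   # responses[i] = row[i+1] - row[i]
print(responses)   # [0 0 1 0 0 -1 0]: one rising edge, then one falling edge

# A second layer would look for patterns among these responses (a rising edge
# followed later by a falling edge is a "bar"), a third layer for patterns of
# those patterns, and so on.
```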

A demonstration from 1993 showing an early version of the researcher Yann LeCun’s convolutional neural network, which by the late 1990s was processing 10 to 20 percent of all checks in the United States. A similar technology now drives most state-of-the-art image-recognition systems. Video posted on YouTube by Yann LeCun
The issue with multilayered, “deep” neural networks was that the trial-and-error part got extraordinarily complicated. In a single layer, it’s easy. Imagine that you’re playing with a child. You tell the child, “Pick up the green ball and put it into Box A.” The child picks up a green ball and puts it into Box B. You say, “Try again to put the green ball in Box A.” The child tries Box A. Bravo.
Now imagine you tell the child, “Pick up a green ball, go through the door marked 3 and put the green ball into Box A.” The child takes a red ball, goes through the door marked 2 and puts the red ball into Box B. How do you begin to correct the child? You cannot just repeat your initial instructions, because the child does not know at which point he went wrong. In real life, you might start by holding up the red ball and the green ball and saying, “Red ball, green ball.” The whole point of machine learning, however, is to avoid that kind of explicit mentoring. Hinton and a few others went on to invent a solution (or rather, reinvent an older one) to this layered-error problem, over the halting course of the late 1970s and 1980s, and interest among computer scientists in neural networks was briefly revived. “People got very excited about it,” he said. “But we oversold it.” Computer scientists quickly went back to thinking that people like Hinton were weirdos and mystics.
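The reinvented solution alluded to here is what is now called backpropagation: push the error backward through the layers so that every weight receives its own share of the blame. Below is a compressed NumPy sketch under my own simplifications, a tiny two-layer network learning XOR, the classic task a single-layer Perceptron cannot solve. It is illustrative only, not anyone's production code.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: inputs and the answers a single-layer network provably cannot produce.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # input  -> hidden layer
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # hidden -> output

for step in range(20000):
    # Forward pass: the network's current guesses.
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)

    # Backward pass: each layer receives its share of the blame for the
    # error, so every individual weight knows which way to move.
    delta_out = (out - y) * out * (1 - out)
    delta_hidden = (delta_out @ W2.T) * hidden * (1 - hidden)

    W2 -= hidden.T @ delta_out
    b2 -= delta_out.sum(axis=0)
    W1 -= X.T @ delta_hidden
    b1 -= delta_hidden.sum(axis=0)

print(np.round(out, 2))   # typically close to [[0], [1], [1], [0]]
```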
These ideas remained popular, however, among philosophers and psychologists, who called the approach “connectionism” or “parallel distributed processing.” “This idea,” Hinton told me, “of a few people keeping a torch burning, it’s a nice myth. It was true within artificial intelligence. But within psychology lots of people believed in the approach but just couldn’t do it.” Neither could Hinton, despite the generosity of the Canadian government. “There just wasn’t enough computer power or enough data. People on our side kept saying, ‘Yeah, but if I had a really big one, it would work.’ It wasn’t a very persuasive argument.”

3. A Deep Explanation of Deep Learning
When Pichai said that Google would henceforth be “A.I. first,” he was not just making a claim about his company’s business strategy; he was throwing in his company’s lot with this long-unworkable idea. Pichai’s allocation of resources ensured that people like Dean could ensure that people like Hinton would have, at long last, enough computers and enough data to make a persuasive argument. An average brain has something on the order of 100 billion neurons. Each neuron is connected to up to 10,000 other neurons, which means that the number of synapses is between 100 trillion and 1,000 trillion. For a simple artificial neural network of the sort proposed in the 1940s, even attempting to replicate this was unimaginable. We’re still far from the construction of a network of that size, but Google Brain’s investment allowed for the creation of artificial neural networks comparable to the brains of mice.
To understand why scale is so important, however, you have to start to understand some of the more technical details of what, exactly, machine intelligences are doing with the data they consume. A lot of our ambient fears about A.I. rest on the idea that they’re just vacuuming up knowledge like a sociopathic prodigy in a library, and that an artificial intelligence constructed to make paper clips might someday decide to treat humans like ants or lettuce. This just isn’t how they work. All they’re doing is shuffling information around in search of commonalities — basic patterns, at first, and then more complex ones — and for the moment, at least, the greatest danger is that the information we’re feeding them is biased in the first place.
If that brief explanation seems sufficiently reassuring, the reassured nontechnical reader is invited to skip forward to the next section, which is about cats. If not, then read on. (This section is also, luckily, about cats.)
Imagine you want to program a cat-recognizer on the old symbolic-A.I. model. You stay up for days preloading the machine with an exhaustive, explicit definition of “cat.” You tell it that a cat has four legs and pointy ears and whiskers and a tail, and so on. All this information is stored in a special place in memory called Cat. Now you show it a picture. First, the machine has to separate out the various distinct elements of the image. Then it has to take these elements and apply the rules stored in its memory. If(legs=4) and if(ears=pointy) and if(whiskers=yes) and if(tail=yes) and if(expression=supercilious), then(cat=yes). But what if you showed this cat-recognizer a Scottish Fold, a heart-rending breed with a prized genetic defect that leads to droopy doubled-over ears? Our symbolic A.I. gets to (ears=pointy) and shakes its head solemnly, “Not cat.” It is hyperliteral, or “brittle.” Even the thickest toddler shows much greater inferential acuity.
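The brittleness is easy to render literally. Here is a toy Python sketch of the rule-based recognizer described above, with made-up attribute names, tripping over the Scottish Fold.

```python
def symbolic_cat_recognizer(animal):
    # Every rule must pass; one unusual feature and the whole verdict flips.
    return (animal.get("legs") == 4
            and animal.get("ears") == "pointy"
            and animal.get("whiskers")
            and animal.get("tail")
            and animal.get("expression") == "supercilious")

tabby = {"legs": 4, "ears": "pointy", "whiskers": True,
         "tail": True, "expression": "supercilious"}
scottish_fold = {"legs": 4, "ears": "droopy", "whiskers": True,
                 "tail": True, "expression": "supercilious"}

print(symbolic_cat_recognizer(tabby))          # True
print(symbolic_cat_recognizer(scottish_fold))  # False -- "Not cat."
```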
Now imagine that instead of hard-wiring the machine with a set of rules for classification stored in one location of the computer’s memory, you try the same thing on a neural network. There is no special place that can hold the definition of “cat.” There is just a giant blob of interconnected switches, like forks in a path. On one side of the blob, you present the inputs (the pictures); on the other side, you present the corresponding outputs (the labels). Then you just tell it to work out for itself, via the individual calibration of all of these interconnected switches, whatever path the data should take so that the inputs are mapped to the correct outputs. The training is the process by which a labyrinthine series of elaborate tunnels are excavated through the blob, tunnels that connect any given input to its proper output. The more training data you have, the greater the number and intricacy of the tunnels that can be dug. Once the training is complete, the middle of the blob has enough tunnels that it can make reliable predictions about how to handle data it has never seen before. This is called “supervised learning.”
The reason that the network requires so many neurons and so much data is that it functions, in a way, like a sort of giant machine democracy. Imagine you want to train a computer to differentiate among five different items. Your network is made up of millions and millions of neuronal “voters,” each of whom has been given five different cards: one for cat, one for dog, one for spider monkey, one for spoon and one for defibrillator. You show your electorate a photo and ask, “Is this a cat, a dog, a spider monkey, a spoon or a defibrillator?” All the neurons that voted the same way collect in groups, and the network foreman peers down from above and identifies the majority classification: “A dog?”
You say: “No, maestro, it’s a cat. Try again.”
Now the network foreman goes back to identify which voters threw their weight behind “cat” and which didn’t. The ones that got “cat” right get their votes counted double next time — at least when they’re voting for “cat.” They have to prove independently whether they’re also good at picking out dogs and defibrillators, but one thing that makes a neural network so flexible is that each individual unit can contribute differently to different desired outcomes. What’s important is not the individual vote, exactly, but the pattern of votes. If Joe, Frank and Mary all vote together, it’s a dog; but if Joe, Kate and Jessica vote together, it’s a cat; and if Kate, Jessica and Frank vote together, it’s a defibrillator. The neural network just needs to register enough of a regularly discernible signal somewhere to say, “Odds are, this particular arrangement of pixels represents something these humans keep calling ‘cats.’ ” The more “voters” you have, and the more times you make them vote, the more keenly the network can register even very weak signals. If you have only Joe, Frank and Mary, you can maybe use them only to differentiate among a cat, a dog and a defibrillator. If you have millions of different voters that can associate in billions of different ways, you can learn to classify data with incredible granularity. Your trained voter assembly will be able to look at an unlabeled picture and identify it more or less accurately.
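The “votes counted double” mechanism is close in spirit to a weighted-majority tally. Here is a toy sketch with hypothetical voters and labels showing how a single round of feedback shifts the vote; a real network adjusts millions of such weights across millions of examples, and its “voters” respond to pixels rather than holding fixed opinions.

```python
import random
from collections import Counter

random.seed(0)
LABELS = ["cat", "dog", "spider monkey", "spoon", "defibrillator"]

# 2,000 individually unreliable voters, each with a fixed arbitrary guess
# for one particular photo. A toy that illustrates only the reweighting step.
guesses = [random.choice(LABELS) for _ in range(2000)]
weights = [1.0] * len(guesses)
truth = "cat"

def tally():
    counts = Counter()
    for guess, weight in zip(guesses, weights):
        counts[guess] += weight
    return counts.most_common(3)

print("before feedback:", tally())   # roughly a five-way tie

# Feedback: voters that called "cat" correctly have their votes counted double.
weights = [w * 2 if g == truth else w for g, w in zip(guesses, weights)]

print("after feedback: ", tally())   # "cat" now clearly leads the tally
```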
Part of the reason there was so much resistance to these ideas in computer-science departments is that because the output is just a prediction based on patterns of patterns, it’s not going to be perfect, and the machine will never be able to define for you what, exactly, a cat is. It just knows them when it sees them. This wooliness, however, is the point. The neuronal “voters” will recognize a happy cat dozing in the sun and an angry cat glaring out from the shadows of an untidy litter box, as long as they have been exposed to millions of diverse cat scenes. You just need lots and lots of the voters — in order to make sure that some part of your network picks up on even very weak regularities, on Scottish Folds with droopy ears, for example — and enough labeled data to make sure your network has seen the widest possible variance in phenomena.
It is important to note, however, that the fact that neural networks are probabilistic in nature means that they’re not suitable for all tasks. It’s no great tragedy if they mislabel 1 percent of cats as dogs, or send you to the wrong movie on occasion, but in something like a self-driving car we all want greater assurances. This isn’t the only caveat. Supervised learning is a trial-and-error process based on labeled data. The machines might be doing the learning, but there remains a strong human element in the initial categorization of the inputs. If your data had a picture of a man and a woman in suits that someone had labeled “woman with her boss,” that relationship would be encoded into all future pattern recognition. Labeled data is thus fallible the way that human labelers are fallible. If a machine was asked to identify creditworthy candidates for loans, it might use data like felony convictions, but if felony convictions were unfair in the first place — if they were based on, say, discriminatory drug laws — then the loan recommendations would perforce also be fallible.
Image-recognition networks like our cat-identifier are only one of many varieties of deep learning, but they are disproportionately invoked as teaching examples because each layer does something at least vaguely recognizable to humans — picking out edges first, then circles, then faces. This means there’s a safeguard against error. For instance, an early oddity in Google’s image-recognition software meant that it could not always identify a dumbbell in isolation, even though the team had trained it on an image set that included a lot of exercise categories. A visualization tool showed them the machine had learned not the concept of “dumbbell” but the concept of “dumbbell+arm,” because all the dumbbells in the training set were attached to arms. They threw into the training mix some photos of solo dumbbells. The problem was solved. Not everything is so easy.


4. The Cat Paper
Over the course of its first year or two, Brain’s efforts to cultivate in machines the skills of a 1-year-old were auspicious enough that the team was graduated out of the X lab and into the broader research organization. (The head of Google X once noted that Brain had paid for the entirety of X’s costs.) They still had fewer than 10 people and only a vague sense for what might ultimately come of it all. But even then they were thinking ahead to what ought to happen next. First a human mind learns to recognize a ball and rests easily with the accomplishment for a moment, but sooner or later, it wants to ask for the ball. And then it wades into language.
The first step in that direction was the cat paper, which made Brain famous.
What the cat paper demonstrated was that a neural network with more than a billion “synaptic” connections — a hundred times larger than any publicized neural network to that point, yet still many orders of magnitude smaller than our brains — could observe raw, unlabeled data and pick out for itself a high-order human concept. The Brain researchers had shown the network millions of still frames from YouTube videos, and out of the welter of the pure sensorium the network had isolated a stable pattern any toddler or chipmunk would recognize without a moment’s hesitation as the face of a cat. The machine had not been programmed with the foreknowledge of a cat; it reached directly into the world and seized the idea for itself. (The researchers discovered this with the neural-network equivalent of something like an M.R.I., which showed them that a ghostly cat face caused the artificial neurons to “vote” with the greatest collective enthusiasm.) Most machine learning to that point had been limited by the quantities of labeled data. The cat paper showed that machines could also deal with raw unlabeled data, perhaps even data of which humans had no established foreknowledge. This seemed like a major advance not only in cat-recognition studies but also in overall artificial intelligence.
The lead author on the cat paper was Quoc Le. Le is short and willowy and soft-spoken, with a quick, enigmatic smile and shiny black penny loafers. He grew up outside Hue, Vietnam. His parents were rice farmers, and he did not have electricity at home. His mathematical abilities were obvious from an early age, and he was sent to study at a magnet school for science. In the late 1990s, while still in school, he tried to build a chatbot to talk to. He thought, How hard could this be?
“But actually,” he told me in a whispery deadpan, “it’s very hard.”
He left the rice paddies on a scholarship to a university in Canberra, Australia, where he worked on A.I. tasks like computer vision. The dominant method of the time, which involved feeding the machine definitions for things like edges, felt to him like cheating. Le didn’t know then, or knew only dimly, that there were at least a few dozen computer scientists elsewhere in the world who couldn’t help imagining, as he did, that machines could learn from scratch. In 2006, Le took a position at the Max Planck Institute for Biological Cybernetics in the medieval German university town of Tübingen. In a reading group there, he encountered two new papers by Geoffrey Hinton. People who entered the discipline during the long diaspora all have conversion stories, and when Le read those papers, he felt the scales fall away from his eyes.
“There was a big debate,” he told me. “A very big debate.” We were in a small interior conference room, a narrow, high-ceilinged space outfitted with only a small table and two whiteboards. He looked to the curve he’d drawn on the whiteboard behind him and back again, then softly confided, “I’ve never seen such a big debate.”
He remembers standing up at the reading group and saying, “This is the future.” It was, he said, an “unpopular decision at the time.” A former adviser from Australia, with whom he had stayed close, couldn’t quite understand Le’s decision. “Why are you doing this?” he asked Le in an email.
“I didn’t have a good answer back then,” Le said. “I was just curious. There was a successful paradigm, but to be honest I was just curious about the new paradigm. In 2006, there was very little activity.” He went to join Ng at Stanford and began to pursue Hinton’s ideas. “By the end of 2010, I was pretty convinced something was going to happen.”
What happened, soon afterward, was that Le went to Brain as its first intern, where he carried on with his dissertation work — an extension of which ultimately became the cat paper. On a simple level, Le wanted to see if the computer could be trained to identify on its own the information that was absolutely essential to a given image. He fed the neural network a still he had taken from YouTube. He then told the neural network to throw away some of the information contained in the image, though he didn’t specify what it should or shouldn’t throw away. The machine threw away some of the information, initially at random. Then he said: “Just kidding! Now recreate the initial image you were shown based only on the information you retained.” It was as if he were asking the machine to find a way to “summarize” the image, and then expand back to the original from the summary. If the summary was based on irrelevant data — like the color of the sky rather than the presence of whiskers — the machine couldn’t perform a competent reconstruction. Its reaction would be akin to that of a distant ancestor whose takeaway from his brief exposure to saber-tooth tigers was that they made a restful swooshing sound when they moved. Le’s neural network, unlike that ancestor, got to try again, and again and again and again. Each time it mathematically “chose” to prioritize different pieces of information and performed incrementally better. A neural network, however, was a black box. It divined patterns, but the patterns it identified didn’t always make intuitive sense to a human observer. The same network that hit on our concept of cat also became enthusiastic about a pattern that looked like some sort of furniture-animal compound, like a cross between an ottoman and a goat.
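In modern terms, what Le describes is an autoencoder: squeeze the input through a narrow summary, then reconstruct the original from the summary alone. The following is a compressed NumPy sketch of that idea under my own simplifications; the cat-paper model was vastly larger and more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fake "frames": 100 samples of 64 pixel values that secretly depend on only
# 4 underlying factors, a stand-in for the redundancy in real images.
factors = rng.normal(size=(100, 4))
frames = factors @ rng.normal(size=(4, 64)) / 8.0

# The encoder throws most of the information away (64 numbers become 4);
# the decoder must rebuild each frame from that summary alone.
W_enc = rng.normal(scale=0.1, size=(64, 4))
W_dec = rng.normal(scale=0.1, size=(4, 64))
rate = 0.05

for step in range(2001):
    summary = frames @ W_enc          # keep only 4 numbers per frame
    recon = summary @ W_dec           # try to restore all 64 from the summary
    err = recon - frames
    if step % 500 == 0:
        # The printed reconstruction error shrinks as the summary learns
        # which information actually matters.
        print(step, round(float((err ** 2).mean()), 5))
    grad_dec = summary.T @ err / len(frames)
    grad_enc = frames.T @ (err @ W_dec.T) / len(frames)
    W_dec -= rate * grad_dec          # trial and error, in both maps at once
    W_enc -= rate * grad_enc
```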
Le didn’t see himself in those heady cat years as a language guy, but he felt an urge to connect the dots to his early chatbot. After the cat paper, he realized that if you could ask a network to summarize a photo, you could perhaps also ask it to summarize a sentence. This problem preoccupied Le, along with a Brain colleague named Tomas Mikolov, for the next two years.
In that time, the Brain team outgrew several offices around him. For a while they were on a floor they shared with executives. They got an email at one point from the administrator asking that they please stop allowing people to sleep on the couch in front of Larry Page and Sergey Brin’s suite. It unsettled incoming V.I.P.s. They were then allocated part of a research building across the street, where their exchanges in the microkitchen wouldn’t be squandered on polite chitchat with the suits. That interim also saw dedicated attempts on the part of Google’s competitors to catch up. (As Le told me about his close collaboration with Tomas Mikolov, he kept repeating Mikolov’s name over and over, in an incantatory way that sounded poignant. Le had never seemed so solemn. I finally couldn’t help myself and began to ask, “Is he … ?” Le nodded. “At Facebook,” he replied.)

They spent this period trying to come up with neural-network architectures that could accommodate not only simple photo classifications, which were static, but also complex structures that unfolded over time, like language or music. Many of these were first proposed in the 1990s, and Le and his colleagues went back to those long-ignored contributions to see what they could glean. They knew that once you established a facility with basic linguistic prediction, you could then go on to do all sorts of other intelligent things — like predict a suitable reply to an email, for example, or predict the flow of a sensible conversation. You could sidle up to the sort of prowess that would, from the outside at least, look a lot like thinking.

Part II: Language Machine
5. The Linguistic Turn
The hundred or so current members of Brain — it often feels less like a department within a colossal corporate hierarchy than it does a club or a scholastic society or an intergalactic cantina — came in the intervening years to count among the freest and most widely admired employees in the entire Google organization. They are now quartered in a tiered two-story eggshell building, with large windows tinted a menacing charcoal gray, on the leafy northwestern fringe of the company’s main Mountain View campus. Their microkitchen has a foosball table I never saw used; a Rock Band setup I never saw used; and a Go kit I saw used on a few occasions. (I did once see a young Brain research associate introducing his colleagues to ripe jackfruit, carving up the enormous spiky orb like a turkey.)
When I began spending time at Brain’s offices, in June, there were some rows of empty desks, but most of them were labeled with Post-it notes that said things like “Jesse, 6/27.” Now those are all occupied. When I first visited, parking was not an issue. The closest spaces were those reserved for expectant mothers or Teslas, but there was ample space in the rest of the lot. By October, if I showed up later than 9:30, I had to find a spot across the street.
Brain’s growth made Dean slightly nervous about how the company was going to handle the demand. He wanted to avoid what at Google is known as a “success disaster” — a situation in which the company’s capabilities in theory outpaced its ability to implement a product in practice. At a certain point he did some back-of-the-envelope calculations, which he presented to the executives one day in a two-slide presentation.
“If everyone in the future speaks to their Android phone for three minutes a day,” he told them, “this is how many machines we’ll need.” They would need to double or triple their global computational footprint.
“That,” he observed with a little theatrical gulp and widened eyes, “sounded scary. You’d have to” — he hesitated to imagine the consequences — “build new buildings.”
There was, however, another option: just design, mass-produce and install in dispersed data centers a new kind of chip to make everything faster. These chips would be called T.P.U.s, or “tensor processing units,” and their value proposition — counterintuitively — is that they are deliberately less precise than normal chips. Rather than compute 12.246 times 54.392, they will give you the perfunctory answer to 12 times 54. On a mathematical level, rather than a metaphorical one, a neural network is just a structured series of hundreds or thousands or tens of thousands of matrix multiplications carried out in succession, and it’s much more important that these processes be fast than that they be exact. “Normally,” Dean said, “special-purpose hardware is a bad idea. It usually works to speed up one thing. But because of the generality of neural networks, you can leverage this special-purpose hardware for a lot of other things.”
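One way to see why sloppier arithmetic is tolerable: the forward pass really is just a chain of matrix multiplications, and the final ranking of the outputs usually survives a bit of rounding. Below is a rough Python sketch of that intuition; the rounding here is a crude stand-in of my own, not how a T.P.U. actually quantizes.

```python
import numpy as np

rng = np.random.default_rng(2)

# A forward pass as nothing more than a chain of matrix multiplications.
x = rng.normal(size=(1, 128))        # one input example
W1 = rng.normal(size=(128, 64))      # first layer of weights
W2 = rng.normal(size=(64, 10))       # second layer, ten output "classes"

def forward(x, W1, W2, decimals=None):
    if decimals is not None:
        # Crude stand-in for lower precision: round every number first,
        # in the spirit of computing 12.2 x 54.4 rather than 12.246 x 54.392.
        x, W1, W2 = (np.round(a, decimals) for a in (x, W1, W2))
    return x @ W1 @ W2

exact = forward(x, W1, W2)
rough = forward(x, W1, W2, decimals=1)

rel_err = np.linalg.norm(exact - rough) / np.linalg.norm(exact)
print(f"relative change in the outputs: {rel_err:.1%}")
print("top-scoring class, exact vs. rounded:", int(exact.argmax()), int(rough.argmax()))
```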
Just as the chip-design process was nearly complete, Le and two colleagues finally demonstrated that neural networks might be configured to handle the structure of language. He drew upon an idea, called “word embeddings,” that had been around for more than 10 years. When you summarize images, you can divine a picture of what each stage of the summary looks like — an edge, a circle, etc. When you summarize language in a similar way, you essentially produce multidimensional maps of the distances, based on common usage, between one word and every single other word in the language. The machine is not “analyzing” the data the way that we might, with linguistic rules that identify some of them as nouns and others as verbs. Instead, it is shifting and twisting and warping the words around in the map. In two dimensions, you cannot make this map useful. You want, for example, “cat” to be in the rough vicinity of “dog,” but you also want “cat” to be near “tail” and near “supercilious” and near “meme,” because you want to try to capture all of the different relationships — both strong and weak — that the word “cat” has to other words. It can be related to all these other words simultaneously only if it is related to each of them in a different dimension. You can’t easily make a 160,000-dimensional map, but it turns out you can represent a language pretty well in a mere thousand or so dimensions — in other words, a universe in which each word is designated by a list of a thousand numbers. Le gave me a good-natured hard time for my continual requests for a mental picture of these maps. “Gideon,” he would say, with the blunt regular demurral of Bartleby, “I do not generally like trying to visualize thousand-dimensional vectors in three-dimensional space.”
Still, certain dimensions in the space, it turned out, did seem to represent legible human categories, like gender or relative size. If you took the thousand numbers that meant “king” and literally just subtracted the thousand numbers that meant “queen,” you got the same numerical result as if you subtracted the numbers for “woman” from the numbers for “man.” And if you took the entire space of the English language and the entire space of French, you could, at least in theory, train a network to learn how to take a sentence in one space and propose an equivalent in the other. You just had to give it millions and millions of English sentences as inputs on one side and their desired French outputs on the other, and over time it would recognize the relevant patterns in words the way that an image classifier recognized the relevant patterns in pixels. You could then give it a sentence in English and ask it to predict the best French analogue.
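A minimal sketch of the king-and-queen arithmetic, using tiny hand-made vectors; real embeddings have hundreds or thousands of dimensions and are learned from text rather than written by hand.

```python
import numpy as np

# Hand-made 4-dimensional "embeddings" in which one coordinate loosely tracks
# gender and another royalty -- a cartoon of what a learned space encodes.
emb = {
    "king":  np.array([ 0.9,  0.8, 0.1, 0.3]),
    "queen": np.array([-0.9,  0.8, 0.1, 0.3]),
    "man":   np.array([ 0.9, -0.7, 0.2, 0.1]),
    "woman": np.array([-0.9, -0.7, 0.2, 0.1]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "king" minus "queen" points in the same direction as "man" minus "woman".
print(cosine(emb["king"] - emb["queen"], emb["man"] - emb["woman"]))  # 1.0

# The classic analogy query: king - man + woman lands nearest "queen".
target = emb["king"] - emb["man"] + emb["woman"]
print(max(emb, key=lambda w: cosine(emb[w], target)))  # queen
```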
The major difference between words and pixels, however, is that all of the pixels in an image are there at once, whereas words appear in a progression over time. You needed a way for the network to “hold in mind” the progression of a chronological sequence — the complete pathway from the first word to the last. In a period of about a week, in September 2014, three papers came out — one by Le and two others by academics in Canada and Germany — that at last provided all the theoretical tools necessary to do this sort of thing. That research allowed for open-ended projects like Brain’s Magenta, an investigation into how machines might generate art and music. It also cleared the way toward an instrumental task like machine translation. Hinton told me he thought at the time that this follow-up work would take at least five more years.

6. The Ambush
Le’s paper showed that neural translation was plausible, but he had used only a relatively small public data set. (Small for Google, that is — it was actually the biggest public data set in the world. A decade of the old Translate had gathered production data that was between a hundred and a thousand times bigger.) More important, Le’s model didn’t work very well for sentences longer than about seven words.
Mike Schuster, who then was a staff research scientist at Brain, picked up the baton. He knew that if Google didn’t find a way to scale these theoretical insights up to a production level, someone else would. The project took him the next two years. “You think,” Schuster says, “to translate something, you just get the data, run the experiments and you’re done, but it doesn’t work like that.”
Schuster is a taut, focused, ageless being with a tanned, piston-shaped head, narrow shoulders, long camo cargo shorts tied below the knee and neon-green Nike Flyknits. He looks as if he woke up in the lotus position, reached for his small, rimless, elliptical glasses, accepted calories in the form of a modest portion of preserved acorn and completed a relaxed desert decathlon on the way to the office; in reality, he told me, it’s only an 18-mile bike ride each way. Schuster grew up in Duisburg, in the former West Germany’s blast-furnace district, and studied electrical engineering before moving to Kyoto to work on early neural networks. In the 1990s, he ran experiments with a neural-networking machine as big as a conference room; it cost millions of dollars and had to be trained for weeks to do something you could now do on your desktop in less than an hour. He published a paper in 1997 that was barely cited for a decade and a half; this year it has been cited around 150 times. He is not humorless, but he does often wear an expression of some asperity, which I took as his signature combination of German restraint and Japanese restraint.
The issues Schuster had to deal with were tangled. For one thing, Le’s code was custom-written, and it wasn’t compatible with the new open-source machine-learning platform Google was then developing, TensorFlow. Dean directed to Schuster two other engineers, Yonghui Wu and Zhifeng Chen, in the fall of 2015. It took them two months just to replicate Le’s results on the new system. Le was around, but even he couldn’t always make heads or tails of what they had done.
As Schuster put it, “Some of the stuff was not done in full consciousness. They didn’t know themselves why they worked.”
This February, Google’s research organization — the loose division of the company, roughly a thousand employees in all, dedicated to the forward-looking and the unclassifiable — convened their leads at an offsite retreat at the Westin St. Francis, on Union Square, a luxury hotel slightly less splendid than Google’s own San Francisco shop a mile or so to the east. The morning was reserved for rounds of “lightning talks,” quick updates to cover the research waterfront, and the afternoon was idled away in cross-departmental “facilitated discussions.” The hope was that the retreat might provide an occasion for the unpredictable, oblique, Bell Labs-ish exchanges that kept a mature company prolific.
At lunchtime, Corrado and Dean paired up in search of Macduff Hughes, director of Google Translate. Hughes was eating alone, and the two Brain members took positions at either side. As Corrado put it, “We ambushed him.”
“O.K.,” Corrado said to the wary Hughes, holding his breath for effect. “We have something to tell you.”
They told Hughes that 2016 seemed like a good time to consider an overhaul of Google Translate — the code of hundreds of engineers over 10 years — with a neural network. The old system worked the way all machine translation has worked for about 30 years: It sequestered each successive sentence fragment, looked up those words in a large statistically derived vocabulary table, then applied a battery of post-processing rules to affix proper endings and rearrange it all to make sense. The approach is called “phrase-based statistical machine translation,” because by the time the system gets to the next phrase, it doesn’t know what the last one was. This is why Translate’s output sometimes looked like a shaken bag of fridge magnets. Brain’s replacement would, if it came together, read and render entire sentences at one draft. It would capture context — and something akin to meaning.
The stakes may have seemed low: Translate generates minimal revenue, and it probably always will. For most Anglophone users, even a radical upgrade in the service’s performance would hardly be hailed as anything more than an expected incremental bump. But there was a case to be made that human-quality machine translation is not only a short-term necessity but also a development very likely, in the long term, to prove transformational. In the immediate future, it’s vital to the company’s business strategy. Google estimates that 50 percent of the internet is in English, which perhaps 20 percent of the world’s population speaks. If Google was going to compete in China — where a majority of market share in search-engine traffic belonged to its competitor Baidu — or India, decent machine translation would be an indispensable part of the infrastructure. Baidu itself had published a pathbreaking paper about the possibility of neural machine translation in July 2015.

And in the more distant, speculative future, machine translation was perhaps the first step toward a general computational facility with human language. This would represent a major inflection point — perhaps the major inflection point — in the development of something that felt like true artificial intelligence.
Most people in Silicon Valley were aware of machine learning as a fast-approaching horizon, so Hughes had seen this ambush coming. He remained skeptical. A modest, sturdily built man of early middle age with mussed auburn hair graying at the temples, Hughes is a classic line engineer, the sort of craftsman who wouldn’t have been out of place at a drafting table at 1970s Boeing. His jeans pockets often look burdened with curious tools of ungainly dimension, as if he were porting around measuring tapes or thermocouples, and unlike many of the younger people who work for him, he has a wardrobe unreliant on company gear. He knew that various people in various places at Google and elsewhere had been trying to make neural translation work — not in a lab but at production scale — for years, to little avail.
Hughes listened to their case and, at the end, said cautiously that it sounded to him as if maybe they could pull it off in three years.
Dean thought otherwise. “We can do it by the end of the year, if we put our minds to it.” One reason people liked and admired Dean so much was that he had a long record of successfully putting his mind to it. Another was that he wasn’t at all embarrassed to say sincere things like “if we put our minds to it.”
Hughes was sure the conversion wasn’t going to happen any time soon, but he didn’t personally care to be the reason. “Let’s prepare for 2016,” he went back and told his team. “I’m not going to be the one to say Jeff Dean can’t deliver speed.”
A month later, they were finally able to run a side-by-side experiment to compare Schuster’s new system with Hughes’s old one. Schuster wanted to run it for English-French, but Hughes advised him to try something else. “English-French,” he said, “is so good that the improvement won’t be obvious.”
It was a challenge Schuster couldn’t resist. The benchmark metric to evaluate machine translation is called a BLEU score, which compares a machine translation with an average of many reliable human translations. At the time, the best BLEU scores for English-French were in the high 20s. An improvement of one point was considered very good; an improvement of two was considered outstanding.
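For readers who want to see the metric itself, here is a minimal example of sentence-level BLEU using NLTK's implementation (assuming NLTK is installed); the numbers are purely illustrative and have nothing to do with the production corpus-level scores discussed here.

```python
# pip install nltk
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# One or more trusted human translations...
references = [
    "the cat sat quietly on the old mat".split(),
    "the cat was sitting quietly on the old mat".split(),
]
# ...and two candidate machine translations to score against them.
good = "the cat sat quietly on the mat".split()
bad = "cat the on mat quietly sat old the".split()

smooth = SmoothingFunction().method1   # avoids zero scores on short sentences
print(sentence_bleu(references, good, smoothing_function=smooth))  # relatively high
print(sentence_bleu(references, bad, smoothing_function=smooth))   # much lower
```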
The neural system, on the English-French language pair, showed an improvement over the old system of seven points.
Hughes told Schuster’s team they hadn’t had even half as strong an improvement in their own system in the last four years.
To be sure this wasn’t some fluke in the metric, they also turned to their pool of human contractors to do a side-by-side comparison. The user-perception scores, in which sample sentences were graded from zero to six, showed an average improvement of 0.4 — roughly equivalent to the aggregate gains of the old system over its entire lifetime of development.

In mid-March, Hughes sent his team an email. All projects on the old system were to be suspended immediately.
7. Theory Becomes Product
Until then, the neural-translation team had been only three people — Schuster, Wu and Chen — but with Hughes’s support, the broader team began to coalesce. They met under Schuster’s command on Wednesdays at 2 p.m. in a corner room of the Brain building called Quartz Lake. The meeting was generally attended by a rotating cast of more than a dozen people. When Hughes or Corrado were there, they were usually the only native English speakers. The engineers spoke Chinese, Vietnamese, Polish, Russian, Arabic, German and Japanese, though they mostly spoke in their own efficient pidgin and in math. It is not always totally clear, at Google, who is running a meeting, but in Schuster’s case there was no ambiguity.
The steps they needed to take, even then, were not wholly clear. “This story is a lot about uncertainty — uncertainty throughout the whole process,” Schuster told me at one point. “The software, the data, the hardware, the people. It was like” — he extended his long, gracile arms, slightly bent at the elbows, from his narrow shoulders — “swimming in a big sea of mud, and you can only see this far.” He held out his hand eight inches in front of his chest. “There’s a goal somewhere, and maybe it’s there.”
Most of Google’s conference rooms have videochat monitors, which when idle display extremely high-resolution oversaturated public Google+ photos of a sylvan dreamscape or the northern lights or the Reichstag. Schuster gestured toward one of the panels, which showed a crystalline still of the Washington Monument at night.
“The view from outside is that everyone has binoculars and can see ahead so far.”
The theoretical work to get them to this point had already been painstaking and drawn-out, but the attempt to turn it into a viable product — the part that academic scientists might dismiss as “mere” engineering — was no less difficult. For one thing, they needed to make sure that they were training on good data. Google’s billions of words of training “reading” were mostly made up of complete sentences of moderate complexity, like the sort of thing you might find in Hemingway. Some of this is in the public domain: The original Rosetta Stone of statistical machine translation was millions of pages of the complete bilingual records of the Canadian Parliament. Much of it, however, was culled from 10 years of collected data, including human translations that were crowdsourced from enthusiastic respondents. The team had in their storehouse about 97 million unique English “words.” But once they removed the emoticons, and the misspellings, and the redundancies, they had a working vocabulary of only around 160,000.
Then you had to refocus on what users actually wanted to translate, which frequently had very little to do with reasonable language as it is employed. Many people, Google had found, don’t look to the service to translate full, complex sentences; they translate weird little shards of language. If you wanted the network to be able to handle the stream of user queries, you had to be sure to orient it in that direction. The network was very sensitive to the data it was trained on. As Hughes put it to me at one point: “The neural-translation system is learning everything it can. It’s like a toddler. ‘Oh, Daddy says that word when he’s mad!’ ” He laughed. “You have to be careful.”
More than anything, though, they needed to make sure that the whole thing was fast and reliable enough that their users wouldn’t notice. In February, the translation of a 10-word sentence took 10 seconds. They could never introduce anything that slow. The Translate team began to conduct latency experiments on a small percentage of users, in the form of faked delays, to identify tolerance. They found that a translation that took twice as long, or even five times as long, wouldn’t be registered. An eightfold slowdown would. They didn’t need to make sure this was true across all languages. In the case of a high-traffic language, like French or Chinese, they could countenance virtually no slowdown. For something more obscure, they knew that users wouldn’t be so scared off by a slight delay if they were getting better quality. They just wanted to prevent people from giving up and switching over to some competitor’s service.
Schuster, for his part, admitted he just didn’t know if they ever could make it fast enough. He remembers a conversation in the microkitchen during which he turned to Chen and said, “There must be something we don’t know to make it fast enough, but I don’t know what it could be.”
He did know, though, that they needed more computers — “G.P.U.s,” graphics processors reconfigured for neural networks — for training.
Hughes went to Schuster to ask what he thought. “Should we ask for a thousand G.P.U.s?”
Schuster said, “Why not 2,000?”


Ten days later, they had the additional 2,000 processors.
By April, the original lineup of three had become more than 30 people — some of them, like Le, on the Brain side, and many from Translate. In May, Hughes assigned a kind of provisional owner to each language pair, and they all checked their results into a big shared spreadsheet of performance evaluations. At any given time, at least 20 people were running their own independent weeklong experiments and dealing with whatever unexpected problems came up. One day a model, for no apparent reason, started taking all the numbers it came across in a sentence and discarding them. There were months when it was all touch and go. “People were almost yelling,” Schuster said.
By late spring, the various pieces were coming together. The team introduced something called a “word-piece model,” a “coverage penalty,” “length normalization.” Each part improved the results, Schuster says, by maybe a few percentage points, but in aggregate they had significant effects. Once the model was standardized, it would be only a single multilingual model that would improve over time, rather than the 150 different models that Translate currently used. Still, the paradox — that a tool built to further generalize, via learning machines, the process of automation required such an extraordinary amount of concerted human ingenuity and effort — was not lost on them. So much of what they did was just gut. How many neurons per layer did you use? 1,024 or 512? How many layers? How many sentences did you run through at a time? How long did you train for?
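Of the three ingredients named above, length normalization is the easiest to show in miniature. Because a candidate translation's score is a product of per-word probabilities, raw scores punish every extra word, so candidates are rescored on a per-word basis. Here is a hedged sketch of the generic idea; the published system's exact penalty differs in detail.

```python
import math

# Two candidate translations and their per-word model probabilities. The
# longer candidate is the better one here: each of its words is more
# probable, but its raw product has more factors.
candidates = {
    "short guess": [0.5, 0.5, 0.5],
    "longer, better guess": [0.6, 0.6, 0.6, 0.6, 0.6],
}

def raw_score(word_probs):
    # Sum of log-probabilities, i.e. the log of the product of probabilities.
    return sum(math.log(p) for p in word_probs)

def length_normalized_score(word_probs, alpha=1.0):
    # Dividing by length**alpha lets candidates of different lengths compete
    # per word instead of penalizing every additional word.
    return raw_score(word_probs) / (len(word_probs) ** alpha)

for name, probs in candidates.items():
    print(name,
          "raw:", round(raw_score(probs), 2),
          "normalized:", round(length_normalized_score(probs), 2))
# The raw score favors the short guess; the normalized score favors the longer one.
```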
“We did hundreds of experiments,” Schuster told me, “until we knew that we could stop the training after one week. You’re always saying: When do we stop? How do I know I’m done? You never know you’re done. The machine-learning mechanism is never perfect. You need to train, and at some point you have to stop. That’s the very painful nature of this whole system. It’s hard for some people. It’s a little bit an art — where you put your brush to make it nice. It comes from just doing it. Some people are better, some worse.”
By May, the Brain team understood that the only way they were ever going to make the system fast enough to implement as a product was if they could run it on T.P.U.s, the special-purpose chips that Dean had called for. As Chen put it: “We did not even know if the code would work. But we did know that without T.P.U.s, it definitely wasn’t going to work.” He remembers going to Dean one on one to plead, “Please reserve something for us.” Dean had reserved them. The T.P.U.s, however, didn’t work right out of the box. Wu spent two months sitting next to someone from the hardware team in an attempt to figure out why. They weren’t just debugging the model; they were debugging the chip. The neural-translation project would be proof of concept for the whole infrastructural investment.
One Wednesday in June, the meeting in Quartz Lake began with murmurs about a Baidu paper that had recently appeared on the discipline’s chief online forum. Schuster brought the room to order. “Yes, Baidu came out with a paper. It feels like someone looking through our shoulder — similar architecture, similar results.” The company’s BLEU scores were essentially what Google achieved in its internal tests in February and March. Le didn’t seem ruffled; his conclusion seemed to be that it was a sign Google was on the right track. “It is very similar to our system,” he said with quiet approval.
The Google team knew that they could have published their results earlier and perhaps beaten their competitors, but as Schuster put it: “Launching is more important than publishing. People say, ‘Oh, I did something first,’ but who cares, in the end?”
This did, however, make it imperative that they get their own service out first and better. Hughes had a fantasy that they wouldn’t even inform their users of the switch. They would just wait and see if social media lit up with suspicions about the vast improvements.
“We don’t want to say it’s a new system yet,” he told me at 5:36 p.m. two days after Labor Day, one minute before they rolled out Chinese-to-English to 10 percent of their users, without telling anyone. “We want to make sure it works. The ideal is that it’s exploding on Twitter: ‘Have you seen how awesome Google Translate got?’ ”
8. A Celebration
The only two reliable measures of time in the seasonless Silicon Valley are the rotations of seasonal fruit in the microkitchens — from the pluots of midsummer to the Asian pears and Fuyu persimmons of early fall — and the zigzag of technological progress. On an almost uncomfortably warm Monday afternoon in late September, the team’s paper was at last released. It had an almost comical 31 authors. The next day, the members of Brain and Translate gathered to throw themselves a little celebratory reception in the Translate microkitchen. The rooms in the Brain building, perhaps in homage to the long winters of their diaspora, are named after Alaskan locales; the Translate building’s theme is Hawaiian.
The Hawaiian microkitchen has a slightly grainy beach photograph on one wall, a small lei-garlanded thatched-hut service counter with a stuffed parrot at the center and ceiling fixtures fitted to resemble paper lanterns. Two sparse histograms of bamboo poles line the sides, like the posts of an ill-defended tropical fort. Beyond the bamboo poles, glass walls and doors open onto rows of identical gray desks on either side. That morning had seen the arrival of new hooded sweatshirts to honor 10 years of Translate, and many team members went over to the party from their desks in their new gear. They were in part celebrating the fact that their decade of collective work was, as of that day, en route to retirement. At another institution, these new hoodies might thus have become a costume of bereavement, but the engineers and computer scientists from both teams all seemed pleased.


Google’s neural translation was at last working. By the time of the party, the company’s Chinese-English test had already processed 18 million queries. One engineer on the Translate team was running around with his phone out, trying to translate entire sentences from Chinese to English using Baidu’s alternative. He crowed with glee to anybody who would listen. “If you put in more than two characters at once, it times out!” (Baidu says this problem has never been reported by users.)
When word began to spread, over the following weeks, that Google had introduced neural translation for Chinese to English, some people speculated that it was because that was the only language pair for which the company had decent results. Everybody at the party knew that the reality of their achievement would be clear in November. By then, however, many of them would be on to other projects.
Hughes cleared his throat and stepped in front of the tiki bar. He wore a faded green polo with a rumpled collar, lightly patterned across the midsection with dark bands of drying sweat. There had been last-minute problems, and then last-last-minute problems, including a very big measurement error in the paper and a weird punctuation-related bug in the system. But everything was resolved — or at least sufficiently resolved for the moment. The guests quieted. Hughes ran efficient and productive meetings, with a low tolerance for maundering or side conversation, but he was given pause by the gravity of the occasion. He acknowledged that he was, perhaps, stretching a metaphor, but it was important to him to underline the fact, he began, that the neural translation project itself represented a “collaboration between groups that spoke different languages.”
Their neural-translation project, he continued, represented a “step function forward” — that is, a discontinuous advance, a vertical leap rather than a smooth curve. The relevant translation had been not just between the two teams but from theory into reality. He raised a plastic demi-flute of expensive-looking Champagne.
“To communication,” he said, “and cooperation!”
The engineers assembled looked around at one another and gave themselves over to little circumspect whoops and applause.
Jeff Dean stood near the center of the microkitchen, his hands in his pockets, shoulders hunched slightly inward, with Corrado and Schuster. Dean saw that there was some diffuse preference that he contribute to the observance of the occasion, and he did so in a characteristically understated manner, with a light, rapid, concise addendum.
What they had shown, Dean said, was that they could do two major things at once: “Do the research and get it in front of, I dunno, half a billion people.”
Everyone laughed, not because it was an exaggeration but because it wasn’t.

Epilogue: Machines Without Ghosts
Perhaps the most famous historic critique of artificial intelligence, or the claims made on its behalf, implicates the question of translation. The Chinese Room argument was proposed in 1980 by the Berkeley philosopher John Searle. In Searle’s thought experiment, a monolingual English speaker sits alone in a cell. An unseen jailer passes him, through a slot in the door, slips of paper marked with Chinese characters. The prisoner has been given a set of tables and rules in English for the composition of replies. He becomes so adept with these instructions that his answers are soon “absolutely indistinguishable from those of Chinese speakers.” Should the unlucky prisoner be said to “understand” Chinese? Searle thought the answer was obviously not. This metaphor for a computer, Searle later wrote, exploded the claim that “the appropriately programmed digital computer with the right inputs and outputs would thereby have a mind in exactly the sense that human beings have minds.”
For the Google Brain team, though, or for nearly everyone else who works in machine learning in Silicon Valley, that view is entirely beside the point. This doesn’t mean they’re just ignoring the philosophical question. It means they have a fundamentally different view of the mind. Unlike Searle, they don’t assume that “consciousness” is some special, numinously glowing mental attribute — what the philosopher Gilbert Ryle called the “ghost in the machine.” They believe instead that the complex assortment of skills we call “consciousness” has randomly emerged from the coordinated activity of many different simple mechanisms. The implication is that our facility with what we consider the higher registers of thought is no different in kind from our facility with what we’re tempted to perceive as the lower registers. Logical reasoning, on this account, is a lucky adaptation; so is the ability to throw and catch a ball. Artificial intelligence is not about building a mind; it’s about improving tools to solve problems. As Corrado said to me on my very first day at Google, “It’s not about what a machine ‘knows’ or ‘understands’ but what it ‘does,’ and — more importantly — what it doesn’t do yet.”
Where you come down on “knowing” versus “doing” has real cultural and social implications. At the party, Schuster came over to me to express his frustration with the paper’s media reception. “Did you see the first press?” he asked me. He paraphrased a headline from that morning, blocking it word by word with his hand as he recited it: GOOGLE SAYS A.I. TRANSLATION IS INDISTINGUISHABLE FROM HUMANS’. Over the final weeks of the paper’s composition, the team had struggled with this; Schuster often repeated that the message of the paper was “It’s much better than it was before, but not as good as humans.” He had hoped it would be clear that their efforts weren’t about replacing people but helping them.
And yet the rise of machine learning makes it more difficult for us to carve out a special place for ourselves. If you believe, with Searle, that there is something special about human “insight,” you can draw a clear line that separates the human from the automated. If you agree with Searle’s antagonists, you can’t. It is understandable why so many people cling fast to the former view. At a 2015 M.I.T. conference about the roots of artificial intelligence, Noam Chomsky was asked what he thought of machine learning. He pooh-poohed the whole enterprise as mere statistical prediction, a glorified weather forecast. Even if neural translation attained perfect functionality, it would reveal nothing profound about the underlying nature of language. It could never tell you if a pronoun took the dative or the accusative case. This kind of prediction makes for a good tool to accomplish our ends, but it does nothing to further our understanding of why things happen the way they do. A machine can already detect tumors in medical scans better than human radiologists, but the machine can’t tell you what’s causing the cancer.
Then again, can the radiologist?
Medical diagnosis is one field most immediately, and perhaps unpredictably, threatened by machine learning. Radiologists are extensively trained and extremely well paid, and we think of their skill as one of professional insight — the highest register of thought. In the past year alone, researchers have shown not only that neural networks can find tumors in medical images much earlier than their human counterparts but also that machines can even make such diagnoses from the texts of pathology reports. What radiologists do turns out to be something much closer to predictive pattern-matching than logical analysis. They’re not telling you what caused the cancer; they’re just telling you it’s there.

Once you’ve built a robust pattern-matching apparatus for one purpose, it can be tweaked in the service of others. One Translate engineer took a network he put together to judge artwork and used it to drive an autonomous radio-controlled car. A network built to recognize a cat can be turned around and trained on CT scans — and on infinitely more examples than even the best doctor could ever review. A neural network built to translate could work through millions of pages of documents of legal discovery in the tiniest fraction of the time it would take the most expensively credentialed lawyer. The kinds of jobs taken by automatons will no longer be just repetitive tasks that were once — unfairly, it ought to be emphasized — associated with the supposed lower intelligence of the uneducated classes. We’re not only talking about three and a half million truck drivers who may soon lack careers. We’re talking about inventory managers, economists, financial advisers, real estate agents. What Brain did over nine months is just one example of how quickly a small group at a large company can automate a task nobody ever would have associated with machines.
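
JCR NOTE: The “cat network turned around and trained on CT scans” idea above is what practitioners call transfer learning: keep the learned pattern-matching layers and retrain only a small final layer for the new task. The sketch below is illustrative only and is not Google’s system; the pretrained ResNet, the two-class tumor/no-tumor labels, and the fine_tune helper are assumptions made for the example.

# Illustrative transfer-learning sketch (PyTorch); hypothetical data and labels.
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 pretrained on everyday photos (ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the existing pattern-matching layers.
for param in model.parameters():
    param.requires_grad = False

# Swap in a new classification head for the new, two-class task (tumor / no tumor).
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fine_tune(loader, epochs=3):
    # Train only the new head on a DataLoader of (image, label) batches.
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()

The point of the passage is visible in the sketch: nearly all of the learned machinery is reused unchanged, and only a thin final layer is retrained for the new domain.
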
The most important thing happening in Silicon Valley right now is not disruption. Rather, it’s institution-building — and the consolidation of power — on a scale and at a pace that are both probably unprecedented in human history. Brain has interns; it has residents; it has “ninja” classes to train people in other departments. Everywhere there are bins of free bike helmets, and free green umbrellas for the two days a year it rains, and little fruit salads, and nap pods, and shared treadmill desks, and massage chairs, and random cartons of high-end pastries, and places for baby-clothes donations, and two-story climbing walls with scheduled instructors, and reading groups and policy talks and variegated support networks. The recipients of these major investments in human cultivation — for they’re far more than perks for proles in some digital salt mine — have at hand the power of complexly coordinated servers distributed across 13 data centers on four continents, data centers that draw enough electricity to light up large cities.

But even enormous institutions like Google will be subject to this wave of automation; once machines can learn from human speech, even the comfortable job of the programmer is threatened. As the party in the tiki bar was winding down, a Translate engineer brought over his laptop to show Hughes something. The screen swirled and pulsed with a vivid, kaleidoscopic animation of brightly colored spheres in long looping orbits that periodically collapsed into nebulae before dispersing once more.
Hughes recognized what it was right away, but I had to look closely before I saw all the names — of people and files. It was an animation of the history of 10 years of changes to the Translate code base, every single buzzing and blooming contribution by every last team member. Hughes reached over gently to skip forward, from 2006 to 2008 to 2015, stopping every once in a while to pause and remember some distant campaign, some ancient triumph or catastrophe that now hurried by to be absorbed elsewhere or to burst on its own. Hughes pointed out how often Jeff Dean’s name expanded here and there in glowing spheres.

Hughes called over Corrado, and they stood transfixed. To break the spell of melancholic nostalgia, Corrado, looking a little wounded, glanced up and said, “So when do we get to delete it?”
“Don’t worry about it,” Hughes said. “The new code base is going to grow. Everything grows.”
Gideon Lewis-Kraus is a writer at large for the magazine and a fellow at New America.

======== Appendix ========

Referenced Here: Google Translate, Google Brain, Jun Rekimoto, Sundar Pichai, Jeff Dean, Andrew Ng, Baidu, Project Marvin, Geoffrey Hinton, Max Planck Institute for Biological Cybernetics, Quoc Le, MacDuff Hughes, Marvin Minsky

Dubai and Well-Being


World’s Largest Wellness Village

Jan 25, 2016

World’s largest wellness village to launch in Dubai Healthcare City Phase 2

Dubai: Dubai Healthcare City, a health and wellness destination, today announced the launch of the world’s largest wellness concept in its Phase 2 expansion in Al Jadaf, Dubai.

Strategically located on the waterfront, the WorldCare Wellness Village will occupy an area roughly the size of 16 football fields, and is estimated to be significantly larger in scale and offerings than current wellness properties in Europe and the US.

Tapping into the growing demand from people looking for evidence-based and holistic care, the wellness concept is driven by US-based WorldCare International and developed by the Dubai-based MAG Group. WorldCare is renowned for its online medical consultation service, which digitally connects millions of members worldwide with over 20,000 specialists at world-class medical centers.

The Wellness Village concept contributes to the vision of Dubai Healthcare City to become an internationally recognized location of choice for quality healthcare and wellness services. With DHCC’s Phase 2 expansion, spanning a land area of 22 million square feet, the free zone will drive the global trend of preventative healthcare, taking into account local and regional healthcare demands and demographic changes.

“Increasing access to preventative care is important to improve wellbeing and lower healthcare expenditure in the long term,” said Her Excellency Dr Raja Al Gurg, Vice-Chairperson and Executive Director of Dubai Healthcare City Authority.

“By enabling access to wellness services, we are strengthening the health system and bringing patient-centered care to the forefront. We are confident that Phase 2 will drive wellness tourism together with medical tourism, boosting Dubai’s diversified economy. It will bring together unique wellness concepts and specialized services such as rehabilitation, counseling, sports medicine and elderly care for both residents and visitors.”

The WorldCare Wellness Village will be anchored by a 100,000-square-foot Wellness Center that will focus on the prevention and management of obesity, hypertension, diabetes and other physical conditions.

The Center will provide diagnosis and treatment plans, offering comprehensive two- to six-week medical programs built around patient education and lifestyle change. More than 100 healthcare and allied professionals are expected to work at the Center.
Nasser Menhall, Chief Executive Officer and co-founder of WorldCare International, said, “We are proud to bring to Dubai a diversified wellness capability that will aggregate leading technologies and best practices in wellness programs in an unprecedented manner. Benefiting from economies of scale and our broad medical network, we hope to deliver a unique package of services that will raise the bar and set high standards.”

The Wellness Village, occupying 810,000 square feet of built-up area (gross floor area/GFA) on a 900,000-square-foot plot, is also conceptualized to include customized living spaces such as residential villas and apartments, as well as rental units to support long-term stays for both local and foreign patients.

The eco-friendly living spaces will be designed to serve wellness and rehabilitation needs through features such as zero-gravity therapy pools, personalized spas, and rigorous exercise and diet facilities.

Bader Saeed Hareb, Chief Executive Officer (CEO), Investment Sector, Dubai Healthcare City, said, “We welcome our new wellness partner WorldCare, who brings international systems and healthcare expertise that will strengthen what we already offer within the free zone. Unique concepts like WorldCare are a step in the right direction to ensure long-term sustainability and to develop a health and wellness destination that improves quality of life and sense of community.”

Hareb added, “As projects take shape, there will be a significant impact on the overall health of our communities, giving impetus to more opportunities to develop unique wellness concepts.”
-Ends-

About Dubai Healthcare City (DHCC)
Dubai Healthcare City (DHCC) is a free zone committed to creating a health and wellness destination.

Since its launch in 2002 by His Highness Sheikh Mohammed Bin Rashid Al Maktoum, Vice-President and Prime Minister of the UAE and Ruler of Dubai, the free zone has worked towards its vision to become an internationally recognized location of choice for quality healthcare and an integrated center of excellence for clinical and wellness services, medical education and research.

Located in the heart of Dubai, the world’s largest healthcare free zone comprises two phases. Phase 1, dedicated to healthcare and medical education, occupies 4.1 million square feet in Oud Metha, and Phase 2, which is dedicated to wellness, occupies 22 million square feet in Al Jadaf, overlooking the historic Dubai Creek.

The free zone is governed by the Dubai Healthcare City Authority (DHCA) and regulated by the independent regulatory body, Dubai Healthcare City Authority – Regulation (DHCR), whose quality standards are accredited by the International Society for Quality in Healthcare (ISQua).

DHCC has close to 160 clinical partners, including hospitals, outpatient medical centers and diagnostic laboratories, across 150-plus specialties, with licensed professionals from almost 90 countries, strengthening its medical tourism portfolio. Representing its network of support partners, close to 200 retail and non-clinical facilities serve the free zone.

DHCC is also home to the academic institution Mohammed Bin Rashid University of Medicine and Health Sciences, part of the Mohammed Bin Rashid Academic Medical Center. The free zone’s integrated environment provides leverage for potential partners to set up operations to promote health and wellness.
To learn more, log on to www.dhcc.ae.

About WorldCare International, Inc.
WorldCare’s mission is to improve the quality of health care worldwide by maximizing timely, efficient and strategic access to the best in health care. For over 20 years, WorldCare has empowered members and physicians with the clinical information and resources needed to make more informed medical decisions. WorldCare’s online medical second opinion service does this by digitally connecting millions of members worldwide with specialists at world-class medical centers within the WorldCare Consortium®. These teams of specialists and sub-specialists, with the experience that best matches each member’s needs, review the member’s medical records and diagnostics, confirm the diagnosis, recommend optimal treatments and empower members and their treating physicians with the information and resources needed to make informed medical decisions. WorldCare’s services are available through health plans, employers or insurers. 


For media enquiries, please contact:
Dubai Healthcare City
Carolina D’Souza / Awad Al Atatra
PR & Communications Department
+971 4 391 1999 / +971 4 375 6264
media@dhcc.ae

Arts Education

Steven J. Tepper is the dean of the Herberger Institute for Design and the Arts at Arizona State University, the nation’s largest comprehensive design and arts school at a research university. He was the keynote speaker today at the annual luncheon of the Metropolitan Atlanta Art Fund.

He had some provocative data to share, drawing on SNAAP (the Strategic National Arts Alumni Project).

His context was the explosion of arts not-for-profits – from 300 in the 1950s to over 130,000 today.

Dr. Tepper is convinced that education in the arts is poorly understood, and he has data to prove it. Too many people, he says, are skeptical about the careers an arts education makes possible. In fact, many of the competencies developed in an arts education are precisely what 21st-century employers are looking for – especially creativity. His conclusions:

– “The MFA is the new MBA.”
– “The ‘Copyright Industries’ are booming... they are 3X the size of the construction industry.”
– “The 21st century needs ‘design thinking.’”

After the luncheon, I looked him up at ASU. Here is what he has to say – in his own words:

Welcome to the Herberger Institute for Design and the Arts, the largest comprehensive design and arts school in the nation, located within a dynamic 21st-century research university.

With 4,700 students, more than 675 faculty and faculty associates, 135 degrees and a tradition of top-ranked programs, we are committed to redefining the 21st-century design and arts school. Our college is built on a combination of disciplines unlike any other program in the nation, comprising schools of art; arts, media + engineering; design; film, dance and theatre; and music; as well as the ASU Art Museum.

The Institute is dedicated to the following design principles:

Creativity is a core 21st-century competency. Our graduates develop the ability to be generative and enterprising, work collaboratively within and across artistic fields, and generate non-routine solutions to complex problems. With this broad exposure to creative thinking and problem solving, our graduates are well prepared to lead in every arena of our economy, society and culture.

Design and the arts are critical resources for transforming our society. Artists must be embedded in their communities and dedicate their creative energy and talent to building, reimagining and sustaining our world. Design and the arts must be socially relevant and never viewed as extras or as grace notes. The Herberger Institute is committed to placing artists and arts-trained graduates at the center of public life.
The Herberger Institute is committed to enterprise and entrepreneurship. For most college graduates today, the future of work is unpredictable, non-linear and constantly evolving. A recent study found that 47 percent of current occupations will likely not exist in the next few decades. At the Herberger Institute, our faculty, students and graduates are inventing the jobs and the businesses of the future; reimagining how art and culture gets made and distributed; and coming up with new platforms and technology for the exchange of culture and the enrichment of the human experience. The legendary author and expert on city life Jane Jacobs talks about the abundance of “squelchers” — parents, educators, managers and leaders who tend to say no to new ideas. At the Herberger Institute, there are no squelchers. We embrace the cardinal rule of improvisation — always say: “Yes, and…”
Every person, regardless of social background, deserves an equal chance to help tell our nation’s and our world’s stories. Our creative expression defines who we are, what we aspire to and how we hope to live together. At the Herberger Institute, we are committed to projecting all voices – to providing an affordable education to every student who has the talent and the desire to boldly add their creative voice to the world’s evolving story.

Effectiveness requires excellence. We know that our ability to solve problems, build enterprises and create compelling and socially relevant design and art requires high levels of mastery. By being the best in our chosen fields, we can stretch ourselves and our talents to make a difference in the world.

Recently, as part of a weekly installation on campus, a Herberger Institute student hand-lettered the slogan “Here’s to the dreamers and the doers” in chalk on an outdoor blackboard, and we were able to use this for the incoming freshman class t-shirt. Whether you are an architect, designer, artist, performer, filmmaker, media engineer or creative scholar, the Herberger Institute is a place to dream. But unlike the misrepresentation of the artist and scholar as lost in a cloud, our faculty and students “make stuff happen” and leave their well-chiseled mark on the world. Come tour our concert and performance halls, art and design studios, exhibition spaces, dance studios, scene shops, classrooms, clinics and digital culture labs, and you will see the power of dreamers and doers.
If you are reading this message, you are implicated as a potential collaborator. Bring us your talents, your ideas and your passion — we will dream and do great things together.
Enthusiastically yours,

Steven J. Tepper

Dean
Herberger Institute for Design and the Arts
Arizona State University