Category Archives: Academics

Academics, History, Philosophy, Literature, Music, Drama, Science, Mathematics, Logic, Sociology, Economics, Behavioral Economics, Psychology

Why Facts Don’t Change Our Minds

CREDIT: New Yorker Article

Why Facts Don’t Change Our Minds
New discoveries about the human mind show the limitations of reason.

By Elizabeth Kolbert

The vaunted human capacity for reason may have more to do with winning arguments than with thinking straight.
Illustration by Gérard DuBois
In 1975, researchers at Stanford invited a group of undergraduates to take part in a study about suicide. They were presented with pairs of suicide notes. In each pair, one note had been composed by a random individual, the other by a person who had subsequently taken his own life. The students were then asked to distinguish between the genuine notes and the fake ones.

Some students discovered that they had a genius for the task. Out of twenty-five pairs of notes, they correctly identified the real one twenty-four times. Others discovered that they were hopeless. They identified the real note in only ten instances.

As is often the case with psychological studies, the whole setup was a put-on. Though half the notes were indeed genuine—they’d been obtained from the Los Angeles County coroner’s office—the scores were fictitious. The students who’d been told they were almost always right were, on average, no more discerning than those who had been told they were mostly wrong.

In the second phase of the study, the deception was revealed. The students were told that the real point of the experiment was to gauge their responses to thinking they were right or wrong. (This, it turned out, was also a deception.) Finally, the students were asked to estimate how many suicide notes they had actually categorized correctly, and how many they thought an average student would get right. At this point, something curious happened. The students in the high-score group said that they thought they had, in fact, done quite well—significantly better than the average student—even though, as they’d just been told, they had zero grounds for believing this. Conversely, those who’d been assigned to the low-score group said that they thought they had done significantly worse than the average student—a conclusion that was equally unfounded.

“Once formed,” the researchers observed dryly, “impressions are remarkably perseverant.”

A few years later, a new set of Stanford students was recruited for a related study. The students were handed packets of information about a pair of firefighters, Frank K. and George H. Frank’s bio noted that, among other things, he had a baby daughter and he liked to scuba dive. George had a small son and played golf. The packets also included the men’s responses on what the researchers called the Risky-Conservative Choice Test. According to one version of the packet, Frank was a successful firefighter who, on the test, almost always went with the safest option. In the other version, Frank also chose the safest option, but he was a lousy firefighter who’d been put “on report” by his supervisors several times. Once again, midway through the study, the students were informed that they’d been misled, and that the information they’d received was entirely fictitious. The students were then asked to describe their own beliefs. What sort of attitude toward risk did they think a successful firefighter would have? The students who’d received the first packet thought that he would avoid it. The students in the second group thought he’d embrace it.

Even after the evidence “for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs,” the researchers noted. In this case, the failure was “particularly impressive,” since two data points would never have been enough information to generalize from.

The Stanford studies became famous. Coming from a group of academics in the nineteen-seventies, the contention that people can’t think straight was shocking. It isn’t any longer. Thousands of subsequent experiments have confirmed (and elaborated on) this finding. As everyone who’s followed the research—or even occasionally picked up a copy of Psychology Today—knows, any graduate student with a clipboard can demonstrate that reasonable-seeming people are often totally irrational. Rarely has this insight seemed more relevant than it does right now. Still, an essential puzzle remains: How did we come to be this way?

In a new book, “The Enigma of Reason” (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at answering this question. Mercier, who works at a French research institute in Lyon, and Sperber, now based at the Central European University, in Budapest, point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context.

Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.

Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.

The students were asked to respond to two studies. One provided data in support of the deterrence argument, and the other provided data that called it into question. Both studies—you guessed it—were made up, and had been designed to present what were, objectively speaking, equally compelling statistics. The students who had originally supported capital punishment rated the pro-deterrence data highly credible and the anti-deterrence data unconvincing; the students who’d originally opposed capital punishment did the reverse. At the end of the experiment, the students were asked once again about their views. Those who’d started out pro-capital punishment were now even more in favor of it; those who’d opposed it were even more hostile.

If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hypersociability.”

Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.

A recent experiment performed by Mercier and some European colleagues neatly demonstrates this asymmetry. Participants were asked to answer a series of simple reasoning problems. They were then asked to explain their responses, and were given a chance to modify them if they identified mistakes. The majority were satisfied with their original choices; fewer than fifteen per cent changed their minds in step two.

In step three, participants were shown one of the same problems, along with their answer and the answer of another participant, who’d come to a different conclusion. Once again, they were given the chance to change their responses. But a trick had been played: the answers presented to them as someone else’s were actually their own, and vice versa. About half the participants realized what was going on. Among the other half, suddenly people became a lot more critical. Nearly sixty per cent now rejected the responses that they’d earlier been satisfied with.

This lopsidedness, according to Mercier and Sperber, reflects the task that reason evolved to perform, which is to prevent us from getting screwed by the other members of our group. Living in small bands of hunter-gatherers, our ancestors were primarily concerned with their social standing, and with making sure that they weren’t the ones risking their lives on the hunt while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from winning arguments.

Among the many, many issues our forebears didn’t worry about were the deterrent effects of capital punishment and the ideal attributes of a firefighter. Nor did they have to contend with fabricated studies, or fake news, or Twitter. It’s no wonder, then, that today reason often seems to fail us. As Mercier and Sperber write, “This is one of many cases in which the environment changed too quickly for natural selection to catch up.”

Steven Sloman, a professor at Brown, and Philip Fernbach, a professor at the University of Colorado, are also cognitive scientists. They, too, believe sociability is the key to how the human mind functions or, perhaps more pertinently, malfunctions. They begin their book, “The Knowledge Illusion: Why We Never Think Alone” (Riverhead), with a look at toilets.

Virtually everyone in the United States, and indeed throughout the developed world, is familiar with toilets. A typical flush toilet has a ceramic bowl filled with water. When the handle is depressed, or the button pushed, the water—and everything that’s been deposited in it—gets sucked into a pipe and from there into the sewage system. But how does this actually happen?

In a study conducted at Yale, graduate students were asked to rate their understanding of everyday devices, including toilets, zippers, and cylinder locks. They were then asked to write detailed, step-by-step explanations of how the devices work, and to rate their understanding again. Apparently, the effort revealed to the students their own ignorance, because their self-assessments dropped. (Toilets, it turns out, are more complicated than they appear.)

Sloman and Fernbach see this effect, which they call the “illusion of explanatory depth,” just about everywhere. People believe that they know way more than they actually do. What allows us to persist in this belief is other people. In the case of my toilet, someone else designed it so that I can operate it easily. This is something humans are very good at. We’ve been relying on one another’s expertise ever since we figured out how to hunt together, which was probably a key development in our evolutionary history. So well do we collaborate, Sloman and Fernbach argue, that we can hardly tell where our own understanding ends and others’ begins.

“One implication of the naturalness with which we divide cognitive labor,” they write, is that there’s “no sharp boundary between one person’s ideas and knowledge” and “those of other members” of the group.

This borderlessness, or, if you prefer, confusion, is also crucial to what we consider progress. As people invented new tools for new ways of living, they simultaneously created new realms of ignorance; if everyone had insisted on, say, mastering the principles of metalworking before picking up a knife, the Bronze Age wouldn’t have amounted to much. When it comes to new technologies, incomplete understanding is empowering.

Where it gets us into trouble, according to Sloman and Fernbach, is in the political domain. It’s one thing for me to flush a toilet without knowing how it operates, and another for me to favor (or oppose) an immigration ban without knowing what I’m talking about. Sloman and Fernbach cite a survey conducted in 2014, not long after Russia annexed the Ukrainian territory of Crimea. Respondents were asked how they thought the U.S. should react, and also whether they could identify Ukraine on a map. The farther off base they were about the geography, the more likely they were to favor military intervention. (Respondents were so unsure of Ukraine’s location that the median guess was wrong by eighteen hundred miles, roughly the distance from Kiev to Madrid.)

Surveys on many other issues have yielded similarly dismaying results. “As a rule, strong feelings about issues do not emerge from deep understanding,” Sloman and Fernbach write. And here our dependence on other minds reinforces the problem. If your position on, say, the Affordable Care Act is baseless and I rely on it, then my opinion is also baseless. When I talk to Tom and he decides he agrees with me, his opinion is also baseless, but now that the three of us concur we feel that much more smug about our views. If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the Trump Administration.

“This is how a community of knowledge can become dangerous,” Sloman and Fernbach observe. The two have performed their own version of the toilet experiment, substituting public policy for household gadgets. In a study conducted in 2012, they asked people for their stance on questions like: Should there be a single-payer health-care system? Or merit-based pay for teachers? Participants were asked to rate their positions depending on how strongly they agreed or disagreed with the proposals. Next, they were instructed to explain, in as much detail as they could, the impacts of implementing each one. Most people at this point ran into trouble. Asked once again to rate their views, they ratcheted down the intensity, so that they either agreed or disagreed less vehemently.

Sloman and Fernbach see in this result a little candle for a dark world. If we—or our friends or the pundits on CNN—spent less time pontificating and more trying to work through the implications of policy proposals, we’d realize how clueless we are and moderate our views. This, they write, “may be the only form of thinking that will shatter the illusion of explanatory depth and change people’s attitudes.”

One way to look at science is as a system that corrects for people’s natural inclinations. In a well-run laboratory, there’s no room for myside bias; the results have to be reproducible in other laboratories, by researchers who have no motive to confirm them. And this, it could be argued, is why the system has proved so successful. At any given moment, a field may be dominated by squabbles, but, in the end, the methodology prevails. Science moves forward, even as we remain stuck in place.

In “Denying to the Grave: Why We Ignore the Facts That Will Save Us” (Oxford), Jack Gorman, a psychiatrist, and his daughter, Sara Gorman, a public-health specialist, probe the gap between what science tells us and what we tell ourselves. Their concern is with those persistent beliefs which are not just demonstrably false but also potentially deadly, like the conviction that vaccines are hazardous. Of course, what’s hazardous is not being vaccinated; that’s why vaccines were created in the first place. “Immunization is one of the triumphs of modern medicine,” the Gormans note. But no matter how many scientific studies conclude that vaccines are safe, and that there’s no link between immunizations and autism, anti-vaxxers remain unmoved. (They can now count on their side—sort of—Donald Trump, who has said that, although he and his wife had their son, Barron, vaccinated, they refused to do so on the timetable recommended by pediatricians.)

The Gormans, too, argue that ways of thinking that now seem self-destructive must at some point have been adaptive. And they, too, dedicate many pages to confirmation bias, which, they claim, has a physiological component. They cite research suggesting that people experience genuine pleasure—a rush of dopamine—when processing information that supports their beliefs. “It feels good to ‘stick to our guns’ even if we are wrong,” they observe.

The Gormans don’t just want to catalogue the ways we go wrong; they want to correct for them. There must be some way, they maintain, to convince people that vaccines are good for kids, and handguns are dangerous. (Another widespread but statistically insupportable belief they’d like to discredit is that owning a gun makes you safer.) But here they encounter the very problems they have enumerated. Providing people with accurate information doesn’t seem to help; they simply discount it. Appealing to their emotions may work better, but doing so is obviously antithetical to the goal of promoting sound science. “The challenge that remains,” they write toward the end of their book, “is to figure out how to address the tendencies that lead to false scientific belief.”

“The Enigma of Reason,” “The Knowledge Illusion,” and “Denying to the Grave” were all written before the November election. And yet they anticipate Kellyanne Conway and the rise of “alternative facts.” These days, it can feel as if the entire country has been given over to a vast psychological experiment being run either by no one or by Steve Bannon. Rational agents would be able to think their way to a solution. But, on this matter, the literature is not reassuring. ♦

This article appears in the print edition of the February 27, 2017, issue, with the headline “That’s What You Think.”

Elizabeth Kolbert has been a staff writer at The New Yorker since 1999. She won the 2015 Pulitzer Prize for general nonfiction for “The Sixth Extinction: An Unnatural History.”

John C. Reid

Regulatory State and Redistributive State

Will Wilkinson is a great writer, and spells out here two critical aspects of government:

The regulatory state is the aspect of government that protects the public against abuses of private players, protects property rights, and creates well-defined “corridors” that streamline the flows of capitalism and make it work best. It always gets a bad rap, and shouldn’t. The rap is due to the difficulty of enforcing regulations on so many aspects of life.

The redistributive state is the aspect of government that aims to shift income and wealth from certain players in society to other players. The presumption is always one of fairness, whereby society deems it in the interests of all that certain actors, e.g. veterans or seniors, get preferential distributions of some kind.

He goes on to make a great point: these two states are more independent of one another than might at first be apparent. So it is possible to dislike one and like the other.

Personally, I like both. I think both are critical to a well-oiled society with capitalism and property rights as central tenets. My beef will always be with issues of efficiency and effectiveness.

On redistribution, efficiency experts can answer this question: can we dispense with the monthly paperwork and simply direct deposit funds? Medicare now works this way, and the efficiency gains are remarkable.

And on regulation, efficiency experts can answer this question: can private actors certify their compliance with regulation, and then the public actors simply audit from time to time? Many government programs work this way, to the benefit of all.

On redistribution, effectiveness experts can answer this question: Is the homeless population minimal? Are veterans getting what they need? Are seniors satisfied with how government treats them?

On regulation, effectiveness experts can answer this question: Is the air clean? Is the water clean? Is the mortgage market making good loans that help people buy houses? Are complaints about fraudulent consumer practices low?

CREDIT: VOX Article on Economic Freedom by Will Wilkinson

By Will Wilkinson
Sep 1, 2016

American exceptionalism has been propelled by exceptionally free markets, so it’s tempting to think the United States has a freer economy than Western European countries — particularly those soft-socialist Scandinavian social democracies with punishing tax burdens and lavish, even coddling, welfare states. As late as 2000, the American economy was indeed the freest in the West. But something strange has happened since: Economic freedom in the United States has dropped at an alarming rate.

Meanwhile, a number of big-government welfare states have become at least as robustly capitalist as the United States, and maybe more so. Why? Because big welfare states needed to become better capitalists to afford their socialism. This counterintuitive, even paradoxical dynamic suggests a tantalizing hypothesis: America’s shabby, unpopular safety net is at least partly responsible for capitalism’s flagging fortunes in the Land of the Free. Could it be that Americans aren’t socialist enough to want capitalism to work? It makes more sense than you might think.

America’s falling economic freedom

From 1970 to 2000, the American economy was the freest in the West, lagging behind only Asia’s laissez-faire city-states, Hong Kong and Singapore. The average economic freedom rating of the wealthy developed member countries of the Organization for Economic Cooperation and Development (OECD) has slipped a bit since the turn of the millennium, but not as fast as America’s.

“Nowhere has the reversal of the rising trend in the economic freedom been more evident than in the United States,” write the authors of the Fraser Institute’s 2015 Economic Freedom of the World report, noting that “the decline in economic freedom in the United States has been more than three times greater than the average decline found in the OECD.”

The economic freedom of selected countries, 1999 to 2016. Heritage Foundation 2016 Index of Economic Freedom

The Heritage Foundation and the Canadian Fraser Institute each produce an annual index of economic freedom, scoring the world’s countries on four or five main areas, each of which breaks down into a number of subcomponents. The main rubrics include the size of government and tax burdens; protection of property rights and the soundness of the legal system; monetary stability; openness to global trade; and levels of regulation of business, labor, and capital markets. Scores on these areas and subareas are combined to generate an overall economic freedom score.
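
As a rough illustration of how such an index gets assembled, here is a minimal sketch; the area names, 0-to-10 scale, and equal weighting are illustrative assumptions, not either institute’s published methodology.

```python
# Hypothetical sketch of rolling up subcomponent scores into area scores and
# an overall "economic freedom" score. Area names, the 0-10 scale, and the
# equal weighting are illustrative assumptions only.

from statistics import mean

# Each area is scored from a handful of subcomponents (0-10 scale here).
subcomponent_scores = {
    "size_of_government": [6.8, 7.2, 5.9],
    "legal_system_property_rights": [7.5, 8.0, 6.9],
    "sound_money": [9.1, 8.8, 9.4],
    "freedom_to_trade": [7.9, 8.2],
    "regulation": [7.0, 6.5, 7.3],
}

# Area score = average of its subcomponents; overall score = average of areas.
area_scores = {area: mean(scores) for area, scores in subcomponent_scores.items()}
overall_score = mean(area_scores.values())

for area, score in area_scores.items():
    print(f"{area}: {score:.2f}")
print(f"overall economic freedom score: {overall_score:.2f}")
```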

The rankings reflect right-leaning ideas about what it means for people and economies to be free. Strong labor unions and inequality-reducing redistribution are more likely to hurt than help a country’s score.

So why should you care about some right-wing think tank’s ideologically loaded measure of economic freedom? Because it matters. More economic freedom, so measured, predicts higher rates of economic growth, and higher levels of wealth predict happier, healthier, longer lives. Higher levels of economic freedom are also linked with greater political liberty and civil rights, as well as higher scores on the left-leaning Social Progress Index, which is based on indicators of social justice and human well-being, from nutrition and medical care to tolerance and inclusion.

The authors of the Fraser report estimate that the drop in American economic freedom “could cut the US historic growth rate of 3 percent by half.” The difference between a 1.5 percent and 3 percent growth rate is roughly the difference between the output of the economy tripling rather than octupling in a lifetime. That’s a huge deal.
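
A quick compounding calculation shows where “tripling rather than octupling” comes from, assuming a lifetime of roughly 70 years (my assumption; the report doesn’t specify the horizon):

```python
# Back-of-the-envelope compounding: total growth over ~70 years (an assumed
# "lifetime") at 3 percent versus 1.5 percent annual growth.
years = 70

growth_at_3_percent = 1.03 ** years     # ~7.9x, roughly "octupling"
growth_at_1_5_percent = 1.015 ** years  # ~2.8x, roughly "tripling"

print(f"3.0% for {years} years: {growth_at_3_percent:.1f}x")
print(f"1.5% for {years} years: {growth_at_1_5_percent:.1f}x")
```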

Over the same period, the economic freedom scores of Canada and Denmark have improved a lot. According to conservative and libertarian definitions of economic freedom, Canadians, who enjoy a socialized health care system, now have more economic freedom than Americans, and Danes, who have one of the world’s most generous welfare states, have just as much.

What the hell’s going on?

The redistributive state and the regulatory state are separable

To make headway on this question, it is crucial to clearly distinguish two conceptually and empirically separable aspects of “big government” — the regulatory state and the redistributive state.

The redistributive state moves money around through taxes and transfer programs. The regulatory state places all sorts of restrictions and requirements on economic life — some necessary, some not. Most Democrats and Republicans assume that lots of regulation and lots of redistribution go hand in hand, so it’s easy to miss that you can have one without the other, and that the relationship between the two is uneasy at best. But you can’t really understand the politics behind America’s declining economic freedom if you fail to distinguish between the regulatory and fiscal aspects of the economic policy.

Standard “supply-side” Republican economic policy thinking says that cuts in tax rates and government spending will unleash latent productive potential in the economy, boosting rates of growth. And indeed, when taxes and government spending are very high, cuts produce gains by returning resources to the private sector. But it’s important to see that questions about government control versus private sector control of economic resources are categorically different from questions about the freedom of markets.

Free markets require the presence of good regulation, which defines and protects property rights and facilitates market processes through the consistent application of clear law, and an absence of bad regulation, which interferes with productive economic activity. A government can tax and spend very little — yet still stomp all over markets. Conversely, a government can withdraw lots of money from the economy through taxes, but still totally nail the optimal balance of good and bad regulation.

Whether a country’s market economy is free — open, competitive, and relatively unmolested by government — is more a question of regulation than a question of taxation and redistribution. It’s not primarily about how “big” its government is. Republicans generally do support a less meddlesome regulatory approach, but when they’re in power they tend to be much more persistent about cutting taxes and social welfare spending than they are about reducing economically harmful regulatory frictions.

If you’re as worried about America’s declining economic freedom as I am, this is a serious problem. In recent years, the effect of cutting taxes and spending has been to distribute income upward and leave the least well-off more vulnerable to bad luck, globalization, “disruptive innovation,” and the vagaries of business cycles.
If spending cuts came out of the military’s titanic budget, that would help. But that’s rarely what happens. The least connected constituencies, not the most expensive ones, are the first to get dinged by budget hawks. And further tax cuts are unlikely to boost growth. Lower taxes make government seem cheaper than it really is, which leads voters to ask for more, not less, government spending, driving up the deficit. Increasing the portion of GDP devoted to paying interest on government debt isn’t a growth-enhancing way to return resources to the private sector.

Meanwhile, wages have been flat or declining for millions of Americans for decades. People increasingly believe the economy is “rigged” in favor of the rich. As a sense of economic insecurity mounts, people anxiously cast about for answers.

Easing the grip of the regulatory state is a good answer. But in the United States, its close association with “free market” supply-side efforts to produce growth by slashing the redistributive state has made it an unattractive answer, even with Republican voters. That’s at least part of the reason the GOP wound up nominating a candidate who, in addition to promising not to cut entitlement spending, openly favors protectionist trade policy, giant infrastructure projects, and huge subsidies to domestic manufacturing and energy production. Donald Trump’s economic policy is the worst of all possible worlds.

This is doubly ironic, and doubly depressing, once you recognize that the sort of big redistributive state supply-siders fight is not necessarily the enemy of economic freedom. On the contrary, high levels of social welfare spending can actually drive political demand for growth-promoting reform of the regulatory state. That’s the lesson of Canada and Denmark’s march up those free economy rankings.

The welfare state isn’t a free lunch, but it is a cheap date

Economic theory tells you that big government ought to hurt economic growth. High levels of taxation reduce the incentive to work, and redistribution is a “leaky bucket”: Moving money around always ends up wasting some of it. Moreover, a dollar spent in the private sector generally has a more beneficial effect on the economy than a dollar spent by the government. Add it all up, and big governments that tax heavily and spend freely on social transfers ought to hurt economic growth.

That matters from a moral perspective — a lot. Other things equal, people are better off on just about every measure of well-being when they’re wealthier. Relative economic equality is nice, but it’s not so nice when relatively equal shares mean smaller shares for everyone. Just as small differences in the rate at which you put money into a savings account can lead to vast differences in your account balance 40 years down the road, thanks to the compounding nature of interest, a small reduction in the rate of economic growth can leave a society’s least well-off people much poorer in absolute terms than they might have been.

Here’s the puzzle. As a general rule, when nations grow wealthier, the public demands more and better government services, increasing government spending as a percentage of GDP. (This is known as “Wagner’s law.”) According to standard growth theory, ongoing increase in the size of government ought to exert downward pressure on rates of growth. But we don’t see the expected effect in the data. Long-term national growth trends are amazingly stable.

And when we look at the family of advanced, liberal democratic countries, countries that spend a smaller portion of national income on social transfer programs gain very little in terms of growth relative to countries that spend much more lavishly on social programs. Peter Lindert, an economist at the University of California Davis, calls this the “free lunch paradox.”

Lindert’s label for the puzzle is somewhat misleading, because big expensive welfare states are, obviously, expensive. And they do come at the expense of some growth. Standard economic theory isn’t completely wrong. It’s just that democracies that have embraced generous social spending have found ways to afford it by minimizing and offsetting its anti-growth effects.

If you’re careful with the numbers, you do in fact find a small negative effect of social welfare spending on growth. Still, according to economic theory, lunch ought to be really expensive. And it’s not.

There are three main reasons big welfare states don’t hurt growth as much as you might think. First, as Lindert has emphasized, they tend to have efficient consumption-based tax systems that minimize market distortions.
When you tax something, people tend to avoid it. If you tax income, as the United States does, people work a little less, which means that certain economic gains never materialize, leaving everyone a little poorer. Taxing consumption, as many of our European peers do, is less likely to discourage productive moneymaking, though it does discourage spending. But that’s not so bad. Less consumption means more savings, and savings puts the capital in capitalism, financing the economic activity that creates growth.

There are other advantages, too. Consumption taxes are usually structured as national sales taxes (or VATs, value-added taxes), which are paid in small amounts on a continuous basis, are extremely cheap to collect (and hard to avoid), and are less in-your-face than income taxes, which further mitigates the counterproductively demoralizing aspect of taxation.

Big welfare states are also more likely to tax addictive stuff, which people tend to buy whatever the price, as well as unhealthy and polluting stuff. That harnesses otherwise fiscally self-defeating tax-avoiding behavior to minimize the costs of health care and environmental damage.
Second, some transfer programs have relatively direct pro-growth effects. Workers are most productive in jobs well-matched to their training and experience, for example, and unemployment benefits offer displaced workers time to find a good, productivity-promoting fit. There’s also some evidence that health care benefits that aren’t linked to employment can promote economic risk-taking and entrepreneurship.

Fans of open-handed redistributive programs tend to oversell this kind of upside for growth, but there really is some. Moreover, it makes sense that the countries most devoted to these programs would fine-tune them over time to amplify their positive-sum aspects.

This is why you can’t assume all government spending affects growth in the same way. The composition of spending — as well as cuts to spending — matters. Cuts to efficiency-enhancing spending can hurt growth as much as they help. And they can really hurt if they increase economic anxiety and generate demand for Trump-like economic policy.

Third, there are lots of regulatory state policies that hurt growth by, say, impeding healthy competition or closing off foreign trade, and if you like high levels of redistribution better than you like those policies, you’ll eventually consider getting rid of some of them. If you do get rid of them, your economic freedom score from the Heritage Foundation and the Fraser Institute goes up.
This sort of compensatory economic liberalization is how big welfare states can indirectly promote growth, and more or less explains why countries like Canada, Denmark, and Sweden have become more robustly capitalist over the past several decades. They needed to be better capitalists to afford their socialism. And it works pretty well.

If you bundle together fiscal efficiency, some offsetting pro-growth effects, and compensatory liberalization, you can wind up with a very big government, with very high levels of social welfare spending and very little negative consequences for growth. Call it “big-government laissez-faire.”

The missing political will for genuine pro-growth reform

Enthusiasts for small government have a ready reply. Fine, they’ll say. Big government can work through policies that offset its drag on growth. But why not a less intrusive regulatory state and a smaller redistributive state: small-government laissez-faire. After all, this is the formula in Hong Kong and Singapore, which rank No. 1 and No. 2 in economic freedom. Clearly that’s our best bet for prosperity-promoting economic freedom.

But this argument ignores two things. First, Hong Kong and Singapore are authoritarian technocracies, not liberal democracies, which suggests (though doesn’t prove) that their special recipe requires nondemocratic government to work. When you bring democracy into the picture, the most important political lesson of the Canadian and Danish rise in economic freedom becomes clear: When democratically popular welfare programs become politically nonnegotiable fixed points, they can come to exert intense pressure on fiscal and economic policy to make them sustainable.

Political demand for economic liberalization has to come from somewhere. But there’s generally very little organic, popular democratic appetite for capitalist creative destruction. Constant “disruption” is scary, the way markets generate wealth and well-being is hard to comprehend, and many of us find competitive profit-seeking intuitively objectionable.

It’s not that Danes and Swedes and Canadians ever loved their “neoliberal” market reforms. They fought bitterly about them and have rolled some of them back. But when their big-government welfare states were creaking under their own weight, enough of the public was willing, thanks to the sense of economic security provided by the welfare state, to listen to experts who warned that the redistributive state would become unsustainable without the downsizing of the regulatory state.

A sound and generous system of social insurance offers a certain peace of mind that makes the very real risks of increased economic dynamism seem tolerable to the democratic public, opening up the political possibility of stabilizing a big-government welfare state with growth-promoting economic liberalization.

This sense of baseline economic security is precisely what many millions of Americans lack.

Learning the lesson of Donald Trump

America’s declining economic freedom is a profoundly serious problem. It’s already putting the brakes on dynamism and growth, leaving millions of Americans with a bitter sense of panic about their prospects. They demand answers. But ordinary voters aren’t policy wonks. When gripped by economic anxiety, they turn to demagogues who promise measures that make intuitive economic sense, but which actually make economic problems worse.

We may dodge a Trump presidency this time, but if we fail to fix the feedback loop between declining economic freedom and an increasingly acute sense of economic anxiety, we risk plunging the world’s biggest economy and the linchpin of global stability into a political and economic death spiral. It’s a ridiculous understatement to say that it’s important that this doesn’t happen.

Market-loving Republicans and libertarians need to stare hard at a framed picture of Donald Trump and reflect on the idea that a stale economic agenda focused on cutting taxes and slashing government spending is unlikely to deliver further gains. It is instead likely to continue to backfire by exacerbating economic anxiety and the public’s sense that the system is rigged.

If you gaze at the Donald long enough, his fascist lips will whisper “thank you,” and explain that the close but confusing identification of supply-side fiscal orthodoxy with “free market” economic policy helps authoritarian populists like him — but it hurts the political prospects of regulatory state reforms that would actually make American markets freer.

Will Wilkinson is the vice president for policy at the Niskanen Center.

Property Rights and Modern Conservatism



In this excellent essay by one of my favorite conservative writers, Will Wilkinson takes Congress to task for its ridiculous botched-job-with-a-botched-process of passing tax cut legislation in 2017.

But I am blogging because of his other points.

In the article, he spells out some tenets of modern conservatism that bear repeating, namely:

– property rights (and the Murray Rothbard extreme positions of absolute property rights)
– economic freedom (“…if we tax you at 100 percent, then you’ve got 0 percent liberty…If we tax you at 50 percent, you are half-slave, half-free”)
– libertarianism (“The key is the libertarian idea, woven into the right’s ideological DNA, that redistribution is the exploitation of the “makers” by the “takers.”)
– legally enforceable rights
– moral traditionalism

Modern conservatism is a “fusion” of these ideas. They have an intellectual footing that is impressive.

But Will points out where they are flawed. The flaws are most apparent in the idea that the hordes want to use democratic institutions to plunder the wealth of the elites. This is a notion from the days when communism was public enemy #1. He points out that the opposite is actually the truth.

“Far from endangering property rights by facilitating redistribution, inclusive democratic institutions limit the “organized banditry” of the elite-dominated state by bringing everyone inside the charmed circle of legally enforced rights.”

Ironically, the new Tax Cut legislation is an example of reverse plunder: where the wealthy get the big, permanent gains and the rest get appeased with small cuts that expire.

So, we are very far from the fears of communism. We are instead amidst a taking by the haves from the have-nots.

====================
Credit: New York Times 12/20/17 Op-Ed by Will Wilkinson

Opinion | OP-ED CONTRIBUTOR
The Tax Bill Shows the G.O.P.’s Contempt for Democracy
By WILL WILKINSON
DEC. 20, 2017
The Republican Tax Cuts and Jobs Act is notably generous to corporations, high earners, inheritors of large estates and the owners of private jets. Taken as a whole, the bill will add about $1.4 trillion to the deficit in the next decade and trigger automatic cuts to Medicare and other safety net programs unless Congress steps in to stop them.

To most observers on the left, the Republican tax bill looks like sheer mercenary cupidity. “This is a brazen expression of money power,” Jesse Jackson wrote in The Chicago Tribune, “an example of American plutocracy — a government of the wealthy, by the wealthy, for the wealthy.”

Mr. Jackson is right to worry about the wealthy lording it over the rest of us, but the open contempt for democracy displayed in the Senate’s slapdash rush to pass the tax bill ought to trouble us as much as, if not more than, what’s in it.

In its great haste, the “world’s greatest deliberative body” held no hearings or debate on tax reform. The Senate’s Republicans made sloppy math mistakes, crossed out and rewrote whole sections of the bill by hand at the 11th hour and forced a vote on it before anyone could conceivably read it.

The link between the heedlessly negligent style and anti-redistributive substance of recent Republican lawmaking is easy to overlook. The key is the libertarian idea, woven into the right’s ideological DNA, that redistribution is the exploitation of the “makers” by the “takers.” It immediately follows that democracy, which enables and legitimizes this exploitation, is itself an engine of injustice. As the novelist Ayn Rand put it, under democracy “one’s work, one’s property, one’s mind, and one’s life are at the mercy of any gang that may muster the vote of a majority.”

On the campaign trail in 2015, Senator Rand Paul, Republican of Kentucky, conceded that government is a “necessary evil” requiring some tax revenue. “But if we tax you at 100 percent, then you’ve got 0 percent liberty,” Mr. Paul continued. “If we tax you at 50 percent, you are half-slave, half-free.” The speaker of the House, Paul Ryan, shares Mr. Paul’s sense of the injustice of redistribution. He’s also a big fan of Ayn Rand. “I give out ‘Atlas Shrugged’ as Christmas presents, and I make all my interns read it,” Mr. Ryan has said. If the big-spending, democratic welfare state is really a system of part-time slavery, as Ayn Rand and Senator Paul contend, then beating it back is a moral imperative of the first order.

But the clock is ticking. Looking ahead to a potentially paralyzing presidential scandal, midterm blood bath or both, congressional Republicans are in a mad dash to emancipate us from the welfare state. As they see it, the redistributive upshot of democracy is responsible for the big-government mess they’re trying to bail us out of, so they’re not about to be tender with the niceties of democratic deliberation and regular parliamentary order.

The idea that there is an inherent conflict between democracy and the integrity of property rights is as old as democracy itself. Because the poor vastly outnumber the propertied rich — so the argument goes — if allowed to vote, the poor might gang up at the ballot box to wipe out the wealthy.

In the 20th century, and in particular after World War II, with voting rights and Soviet Communism on the march, the risk that wealthy democracies might redistribute their way to serfdom had never seemed more real. Radical libertarian thinkers like Rand and Murray Rothbard (who would be a muse to both Charles Koch and Ron Paul) responded with a theory of absolute property rights that morally criminalized taxation and narrowed the scope of legitimate government action and democratic discretion nearly to nothing. “What is the State anyway but organized banditry?” Rothbard asked. “What is taxation but theft on a gigantic, unchecked scale?”

Mainstream conservatives, like William F. Buckley, banished radical libertarians to the fringes of the conservative movement to mingle with the other unclubbables. Still, the so-called fusionist synthesis of libertarianism and moral traditionalism became the ideological core of modern conservatism. For hawkish Cold Warriors, libertarianism’s glorification of capitalism and vilification of redistribution was useful for immunizing American political culture against viral socialism. Moral traditionalists, struggling to hold ground against rising mass movements for racial and gender equality, found much to like in libertarianism’s principled skepticism of democracy. “If you analyze it,” Ronald Reagan said, “I believe the very heart and soul of conservatism is libertarianism.”

The hostility to redistributive democracy at the ideological center of the American right has made standard policies of successful modern welfare states, happily embraced by Europe’s conservative parties, seem beyond the moral pale for many Republicans. The outsize stakes seem to justify dubious tactics — bunking down with racists, aggressive gerrymandering, inventing paper-thin pretexts for voting rules that disproportionately hurt Democrats — to prevent majorities from voting themselves a bigger slice of the pie.

But the idea that there is an inherent tension between democracy and the integrity of property rights is wildly misguided. The liberal-democratic state is a relatively recent historical innovation, and our best accounts of the transition from autocracy to democracy point to the role of democratic political inclusion in protecting property rights.

As Daron Acemoglu of M.I.T. and James Robinson of Harvard show in “Why Nations Fail,” ruling elites in pre-democratic states arranged political and economic institutions to extract labor and property from the lower orders. That is to say, the system was set up to make it easy for elites to seize what ought to have been other people’s stuff.

In “Inequality and Democratization,” the political scientists Ben W. Ansell and David J. Samuels show that this demand for political inclusion generally isn’t driven by a desire to use the existing institutions to plunder the elites. It’s driven by a desire to keep the elites from continuing to plunder them.

It’s easy to say that everyone ought to have certain rights. Democracy is how we come to get and protect them. Far from endangering property rights by facilitating redistribution, inclusive democratic institutions limit the “organized banditry” of the elite-dominated state by bringing everyone inside the charmed circle of legally enforced rights.

Democracy is fundamentally about protecting the middle and lower classes from redistribution by establishing the equality of basic rights that makes it possible for everyone to be a capitalist. Democracy doesn’t strangle the golden goose of free enterprise through redistributive taxation; it fattens the goose by releasing the talent, ingenuity and effort of otherwise abused and exploited people.

At a time when America’s faith in democracy is flagging, the Republicans elected to treat the United States Senate, and the citizens it represents, with all the respect college guys accord public restrooms. It’s easier to reverse a bad piece of legislation than the bad reputation of our representative institutions, which is why the way the tax bill was passed is probably worse than what’s in it. Ultimately, it’s the integrity of democratic institutions and the rule of law that gives ordinary people the power to protect themselves against elite exploitation. But the Republican majority is bulldozing through basic democratic norms as though freedom has everything to do with the tax code and democracy just gets in the way.

Will Wilkinson is the vice president for policy at the Niskanen Center.

Neo.Life

This beta site, NeoLife (the link goes beyond the splash page), is tracking the “neobiological revolution.” I wholeheartedly agree that some of our best and brightest are on the case. Here they are:

ABOUT
NEO.LIFE
Making Sense of the Neobiological Revolution
NOTE FROM THE EDITOR
Mapping the brain, sequencing the genome, decoding the microbiome, extending life, curing diseases, editing mutations. We live in a time of awe and possibility — and also enormous responsibility. Are you prepared?

EDITORS

FOUNDER

Jane Metcalfe
Founder of Neo.life. Entrepreneur in media (Wired) and food (TCHO). Lover of mountains, horses, roses, and kimchee, though not necessarily in that order.

EDITOR
Brian Bergstein
Story seeker and story teller. Editor at NEO.LIFE. Former executive editor of MIT Technology Review; former technology & media editor at The Associated Press

ART DIRECTOR
Nicholas Vokey
Los Angeles-based graphic designer and animator.

CONSULTANT
Saul Carlin
founder @subcasthq. used to work here.

EDITOR
Rachel Lehmann-Haupt
Editor, www.theartandscienceoffamily.com & NEO.LIFE, author of In Her Own Sweet Time: Egg Freezing and the New Frontiers of Family

Laura Cochrane
“To oppose something is to maintain it.” — Ursula K. Le Guin

WRITERS

Amanda Schaffer
writes for the New Yorker and Neo.life, and is a former medical columnist for Slate. @abschaffer

Mallory Pickett
freelance journalist in Los Angeles

Karen Weintraub
Health/Science journalist passionate about human health, cool researcher and telling stories.

Anna Nowogrodzki
Science and tech journalist. Writing in Nature, National Geographic, Smithsonian, mental_floss, & others.

Juan Enriquez
Best-selling author, Managing Director of Excel Venture Management.

Christina Farr
Tech and features writer. @Stanford grad.

Maria Finn
I’m an author and tell stories across multiple mediums including prose, food, gardens, technology & narrative mapping. www.mariafinn.com Instagram maria_finn1.

Stephanie Pappas
I write about science, technology and the things people do with them.

David Eagleman
Neuroscientist at Stanford, internationally bestselling author of fiction and non-fiction, creator and presenter of PBS’ The Brain.

Kristen V. Brown
Reporter @Gizmodo covering biotech.

Thomas Goetz

David Ewing Duncan
Life science journalist; bestselling author, 9 books; NY Times, Atlantic, Wired, Daily Beast, NPR, ABC News, more; Curator, Arc Fusion www.davidewingduncan.com

Dorothy Santos
writer, editor, curator, and educator based in the San Francisco Bay Area about.me/dorothysantos.com

Dr. Sophie Zaaijer
CEO of PlayDNA, Postdoctoral fellow at the New York Genome Center, Runway postdoc at Cornell Tech.

Andrew Rosenblum
I’m a freelance tech writer based in Oakland, CA. You can find my work at Neo.Life, the MIT Technology Review, Popular Science, and many other places.

Zoe Cormier

Diana Crow
Fledgling science journalist here, hoping to foster discussion about the ways science acts as a catalyst for social change #biology

Ashton Applewhite
Calling for a radical aging movement. Anti-ageism blog+talk+book

Grace Rubenstein
Journalist, editor, media producer. Social/bio science geek. Tweets on health science, journalism, immigration. Spanish speaker & dancing fool.

Science and other sundries.

Esther Dyson
Internet court jEsther — I occupy Esther Dyson. Founder @HICCup_co https://t.co/5dWfUSratQ http://t.co/a1Gmo3FTQv

Jessica Leber
Freelance science and technology journalist and editor, formerly on staff at Fast Company, Vocativ, MIT Technology Review, and ClimateWire.

Jessica Carew Kraft
An anthropologist, artist, and naturalist writing about health, education, and rewilding. Mother to two girls in San Francisco.

Corby Kummer
Senior editor, The Atlantic, five-time James Beard Journalism Award winner, restaurant reviewer for New York, Boston, and Atlanta magazines

K McGowan
Journalist. Reporting on health, medicine, science, other excellent things. T: @mcgowankat

Rob Waters
I’m a journalist living in Berkeley. I write about health, science, social justice and policy. Father of 1. From Detroit.

Yiting Sun
writes for MIT Technology Review and Neo.life from Beijing, and was based in Accra, Ghana, in 2014 and 2015.

Michael Hawley

Richard Sprague
Curious amateur. Years of near-daily microbiome experiments. US CEO of AI healthcare startup http://airdoc.com

Bob Parks ✂
Connoisseur of the slap dash . . . maker . . . runner . . . writer of Outside magazine’s Gear Guy blog . . . freelance writer and reporter.

CREDIT: https://medium.com/neodotlife/review-of-daytwo-microbiome-test-deacd5464cd5

Microbiome Apps Personalize EAT recommendations

Richard Sprague provides a useful update about the microbiome landscape below. The microbiome field is exploding. Your gut can be measured, and your gut can influence your health and well-being. But now these gut measurements can offer people a first: personalized nutrition information.

Among the more relevant points:

– Israel’s Weizmann Institute is the global leader academically. Eran Elinav, a physician and immunologist at the Weizmann Institute, is one of their lead investigators (see prior post).
– The older technology for measuring the gut is called “16S” sequencing. It tells you at a high level which kinds of microbes are present. It’s cheap and easy, but 16S can see only broad categories.
– The companies competing to measure your microbiome are uBiome, American Gut, Thryve, DayTwo and Viome. DayTwo and Viome offer more advanced technology (see below).
– The latest technology seems to be “metagenomic sequencing”. It is better because it is more specific and detailed.
– By combining “metagenomic sequencing” information with extensive research about how certain species interact with particular foods, machine-learning algorithms can recommend what you should eat.
– DayTwo offers a metagenomic sequencing for $299, and then combines that with all available research to offer personalized nutrition information.
– DayTwo recently completed a $12 million financing round from, among others, Mayo Clinic, which announced it would be validating the research in the U.S.
– DayTwo draws its academic understandings from Israel’s Weizmann Institute. The app is based on more than five years of highly cited research showing, for example, that while people on average respond similarly to white bread versus whole grain sourdough bread, the differences between individuals can be huge: what’s good for one specific person may be bad for another.

CREDIT: Article on Microbiome Advances

When a Double-Chocolate Brownie is Better for You Than Quinoa

A $299 microbiome test from DayTwo turns up some counterintuitive dietary advice.

Why do certain diets work well for some people but not others? Although several genetic tests try to answer that question and might help you craft ideal nutrition plans, your DNA reveals only part of the picture. A new generation of tests from DayTwo and Viome offer diet advice based on a more complete view: they look at your microbiome, the invisible world of bacteria that help you metabolize food, and, unlike your DNA, change constantly throughout your life.
These bugs are involved in the synthesis of vitamins and other compounds in food, and they even play a role in the digestion of gluten. Artificial sweeteners may not contain calories, but they do modify the bacteria in your gut, which may explain why some people continue to gain weight on diet soda. Everyone’s microbiome is different.

So how well do these new tests work?
Basic microbiome tests, long available from uBiome, American Gut, Thryve, and others, based on older “16S” sequencing, can tell you at a high level which kinds of microbes are present. It’s cheap and easy, but 16S can see only broad categories, the bacterial equivalent of, say, canines versus felines. But just as your life might depend on knowing the difference between a wolf and a Chihuahua, your body’s reaction to food often depends on distinctions that can be known only at the species level. The difference between a “good” microbe and a pathogen can be a single DNA base pair.

New tests use more precise “metagenomic” sequencing that can make those distinctions. And by combining that information with extensive research about how those species interact with particular foods, machine-learning algorithms can recommend what you should eat. (Disclosure: I am a former “citizen scientist in residence” at uBiome. But I have no current relationship with any of these companies; I’m just an enthusiast about the microbiome.)

I recently tested myself with DayTwo ($299) to see what it would recommend for me, and I was pleased that the advice was not always the standard “eat more vegetables” that you’ll get from other products claiming to help you eat healthily. DayTwo’s advice is much more specific and often refreshingly counterintuitive. It’s based on more than five years of highly cited research at Israel’s Weizmann Institute, showing, for example, that while people on average respond similarly to white bread versus whole grain sourdough bread, the differences between individuals can be huge: what’s good for one specific person may be bad for another.

In my case, whole grain breads all rate C-. French toast with challah bread: A.

The DayTwo test was pretty straightforward: you collect what comes out of your, ahem, gut, which involves mailing a sample from your time on the toilet. Unlike the other tests, which can analyze the DNA found in just a tiny swab from a stain on a piece of toilet paper, DayTwo requires more like a tablespoon. The extra amount is needed for DayTwo’s more comprehensive metagenomics sequencing.

Since you can get a microbiome test from other companies for under $100, does the additional metagenomic information from DayTwo justify its much higher price? Generally, I found the answer is yes.

About two months after I sent my sample, my iPhone lit up with my results in a handy app that gave me a personalized rating for most common foods, graded from A+ to C-. In my case, whole grain breads all rate C-. Slightly better are pasta and oatmeal, each ranked C+. Even “healthy” quinoa — a favorite of gluten-free diets — was a mere B-. Why? DayTwo’s algorithm can’t say precisely, but among the hundreds of thousands of gut microbe and meal combinations it was trained on, it finds that my microbiome doesn’t work well with these grains. They make my blood sugar rise too high.

So what kinds of bread are good for me? How about a butter croissant (B+) or cheese ravioli (A-)? The ultimate bread winner for me: French toast with challah bread (A). These recommendations are very different from the one-size-fits-all advice from the U.S. Department of Agriculture or the American Diabetes Association.

I was also pleased to learn that a Starbucks double chocolate brownie is an A- for me, while a 100-calorie pack of Snyder’s of Hanover pretzels gets a C-. That might go against general diet advice, but an algorithm determined that the thousands of bacterial species inside me tend to metabolize fatty foods in a way that results in healthier blood sugar levels than what I get from high-carb foods. Of course, that’s advice just for me; your mileage may vary.

Although the research behind DayTwo has been well-reviewed for more than five years, the app is new to the U.S., so the built-in food suggestions often seem skewed toward Middle Eastern eaters, perhaps the Israeli subjects who formed the original research cohort. That might explain why the app’s suggestions for me include lamb souvlaki with yogurt garlic dip for dinner (A+) and lamb kabob and a side of lentils (A) for lunch. They sound delicious, but to many American ears they might not have the ring of “pork ribs” or “ribeye steak,” which have the same A+ rating. Incidentally, DayTwo recently completed a $12 million financing round from, among others, Mayo Clinic, which announced it would be validating the research in the U.S., so I expect the menu to expand with more familiar fare.

Fortunately you’re not limited to the built-in menu choices. The app includes a “build a meal” function that lets you enter combinations of foods from a large database that includes packaged items from Trader Joe’s and Whole Foods.

There is much more to the product, such as a graphical rendering of where my microbiome fits on the spectrum of the rest of the population that eats a particular food. Since the microbiome changes constantly, this will help me see what is different when I do a retest and when I try Viome and other tests.

I’ve had my DayTwo results for only a few weeks, so it’s too soon to know what happens if I take the app’s advice over the long term. Thankfully I’m in good health and reasonably fit, but for now I’ll be eating more strawberries (A+) and blackberries (A-), and fewer apples (B-) and bananas (C+). And overall I’m looking forward to a future where each of us will insist on personalized nutritional information. We all have unique microbiomes, and an app like DayTwo lets us finally eat that way too.

Richard Sprague is a technology executive and quantified-self enthusiast who has worked at Apple, Microsoft, and other tech companies. He is now the U.S. CEO of an AI healthcare startup, Airdoc.

==================== APPENDIX: Older Posts about the Microbiome ====================

Microbiome Update
CREDIT: https://www.wsj.com/articles/how-disrupting-your-guts-rhythm-affects-your-health-1488164400?mod=e2tw “A healthy community of microbes in the gut maintains regular daily cycles of activities.” (Photo: Weizmann Institute) By Larry M. Greenberg, updated Feb. 27, 2017. New research is helping to unravel the mystery of how […]

Vibrant Health measures microbiome


Microbiome Update
My last research on this subject was in August, 2014. I looked at both microbiomes and proteomics. Today, the New York Times published a very comprehensive update on microbiome research: Link to New York Times Microbiome Article Here is the article itself: = = = = = = = ARTICLE BEGINS HERE = = = […]

Microbiomes
Science is advancing on microbiomes in the gut. The key to food is fiber, and the best fiber is long fiber, like cellulose, uncooked or lightly sautéed (cooking shortens fiber length). The best vegetable, in the view of Jeff Leach, is a leek. Eating Well Article on Microbiome = = = = = […]

Arivale Launches LABS company
“Arivale” Launched and Moving Fast. They launched last month. They have 19 people in the Company and a 107 person pilot – but their plans are way more ambitious than that. Moreover: “The founders said they couldn’t envision Arivale launching even two or three years ago.” Read on …. This is an important development: the […]

Precision Wellness at Mt Sinai
Mt Sinai announcement: Mount Sinai to Establish Precision Wellness Center to Advance Personalized Healthcare. Mount Sinai Health System Launches Telehealth Initiatives. Joshua Harris, co-founder of Apollo Global Management, and his wife, Marjorie, have made a $5 million gift to the Icahn School of Medicine at Mount Sinai to establish the Harris Center for Precision Wellness. […]

Proteomics
“Systems biology…is about putting together rather than taking apart, integration rather than reduction. It requires that we develop ways of thinking about integration that are as rigorous as our reductionist programmes, but different….It means changing our philosophy, in the full sense of the term” (Denis Noble).[5] Proteomics From Wikipedia, the free encyclopedia For the journal […]

Quantified Water Movement (QWM)

Think Fitbit for water. The Quantified Water Movement (QWM) is here to stay, with devices that enable real-time monitoring of water quality in streams, rivers, lakes, and oceans for less than $1,000 per device.

The Stroud Water Research Center in Pennsylvania is leading the way, along with other centers of excellence around the world. Stroud has studied water for fifty years and is an elite water-quality research organization, renowned for its globally relevant science and the excellence of its scientists. Find out more at www.stroudcenter.org.

As a part of this global leadership in the study of water quality, Stroud is advancing the applied technologies that comprise the “quantified water movement” – the real-time monitoring of water quality in streams, rivers, lakes and oceans.

QWM is very much like the “quantified self movement” (QSM; see the post on QSM). QSM takes full advantage of low-cost sensor and communication technology to “quantify myself”: in other words, I can dramatically advance my understanding of my own well-being in areas like exercise, sleep, and blood glucose levels. That movement has already proven that real-time reporting on metrics is possible at very low cost, and on a one-person-at-a-time scale. The Apple Watch and Fitbit are examples of commercial products arising out of QSM.

In the same way, QWM takes full advantage of sensors and communication technology to provide real-time reporting on water quality for a given stream, lake, river, or ocean. While still in a formative stage, QWM uses well-known advances in sensor, big-data, and data-mining technology to monitor water quality on a real-time basis. Best of all, this applied technology has now reached an affordable price point.

For less than $1,000 per device, it is now possible to fully monitor any body of water, and to report out the findings in a comprehensive dataset. Many leaders believe that less than $100 is possible very soon.

The applied technology ends up being a simple “data logger” coupled with a simple radio transmitter; a toy sketch of the logging loop follows the list of metrics below.

Examples of easy-to-measure metrics are:

1. water depth
2. conductivity (measures saltiness or salinity)
3. dissolved oxygen (supports fish and beneficial bacteria)
4. turbidity (a sign of runoff from erosion; cloudy water abrades fish and prevents them from finding food)
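
As promised above, here is a minimal, hypothetical sketch of such a logging loop in Python. It is not Stroud or EnviroDIY code; the sensor-reading functions, file name, and interval are invented stand-ins (a real logger runs similar logic as firmware on a low-power board and hands each record to its radio).

# A toy sketch of the data-logger idea: poll four water-quality sensors on a
# fixed interval and append each reading to a CSV log that a radio or cellular
# modem could later transmit. The read_* functions are hypothetical stand-ins.
import csv
import random
import time
from datetime import datetime, timezone

def read_depth_m():        return round(random.uniform(0.2, 2.0), 3)   # water depth (meters)
def read_conductivity():   return round(random.uniform(50, 800), 1)    # salinity proxy (uS/cm)
def read_dissolved_o2():   return round(random.uniform(4, 12), 2)      # dissolved oxygen (mg/L)
def read_turbidity_ntu():  return round(random.uniform(0, 150), 1)     # turbidity (NTU)

LOG_FILE = "stream_log.csv"
INTERVAL_SECONDS = 900   # one reading every 15 minutes on a real deployment

def log_one_reading():
    """Take one reading from each sensor and append it to the CSV log."""
    row = [
        datetime.now(timezone.utc).isoformat(timespec="seconds"),
        read_depth_m(),
        read_conductivity(),
        read_dissolved_o2(),
        read_turbidity_ntu(),
    ]
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow(row)
    return row

if __name__ == "__main__":
    for _ in range(3):                 # a few cycles for demonstration
        print(log_one_reading())
        time.sleep(1)                  # a real logger would sleep INTERVAL_SECONDS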

Thanks to Stroud, training now exists that is super simple. In one hour, for example, you can learn what this low-cost equipment can do and the science behind why it matters.

In a two day training, citizen scientists and civil engineers alike can learn how to program their own data logger, attach sensors to the data logger, and deploy and maintain the equipment in an aquatic environment.

All of this and more is illuminated at www.enviroDIY.org.

Primary Care Best Practice

This post is about two important articles related to Primary Care Best Practice: One by Atul Gawande called “Big Med” and the other from Harvard Medical School about Physician Burnout.

As usual, Atul tells stories. His stories begin with his positive experience at the Cheesecake Factory and with his mother’s knee replacement surgery.

====================
Article by Atul Gawande Big Med and the Cheesecake Factory
====================
JCR NOTES

The article explores the potential for transferring some of the operational excellence of the Cheesecake Factory to aspects of health care.

He finds it tempting to look for 95% standardization and 5% customization.
He sees lessons in rolling out innovations through test kitchens and training that includes how to train others.
He sees heroes in doctors who push to articulate a standard of care, technology, equipment, or pharmaceuticals.

====================
CREDIT: New Yorker Article by Atul Gawande “Big Med”

Annals of Health Care
August 13, 2012 Issue
Big Med
Restaurant chains have managed to combine quality control, cost control, and innovation. Can health care?

By Atul Gawande

Medicine has long resisted the productivity revolutions that transformed other industries. But the new chains aim to change this. Illustration by Harry Campbell

It was Saturday night, and I was at the local Cheesecake Factory with my two teen-age daughters and three of their friends. You may know the chain: a hundred and sixty restaurants with a catalogue-like menu that, when I did a count, listed three hundred and eight dinner items (including the forty-nine on the “Skinnylicious” menu), plus a hundred and twenty-four choices of beverage. It’s a linen-napkin-and-tablecloth sort of place, but with something for everyone. There’s wine and wasabi-crusted ahi tuna, but there’s also buffalo wings and Bud Light. The kids ordered mostly comfort food—pot stickers, mini crab cakes, teriyaki chicken, Hawaiian pizza, pasta carbonara. I got a beet salad with goat cheese, white-bean hummus and warm flatbread, and the miso salmon.

The place is huge, but it’s invariably packed, and you can see why. The typical entrée is under fifteen dollars. The décor is fancy, in an accessible, Disney-cruise-ship sort of way: faux Egyptian columns, earth-tone murals, vaulted ceilings. The waiters are efficient and friendly. They wear all white (crisp white oxford shirt, pants, apron, sneakers) and try to make you feel as if it were a special night out. As for the food—can I say this without losing forever my chance of getting a reservation at Per Se?—it was delicious.
The chain serves more than eighty million people per year. I pictured semi-frozen bags of beet salad shipped from Mexico, buckets of precooked pasta and production-line hummus, fish from a box. And yet nothing smacked of mass production. My beets were crisp and fresh, the hummus creamy, the salmon like butter in my mouth. No doubt everything we ordered was sweeter, fattier, and bigger than it had to be. But the Cheesecake Factory knows its customers. The whole table was happy (with the possible exception of Ethan, aged sixteen, who picked the onions out of his Hawaiian pizza).

I wondered how they pulled it off. I asked one of the Cheesecake Factory line cooks how much of the food was premade. He told me that everything’s pretty much made from scratch—except the cheesecake, which actually is from a cheesecake factory, in Calabasas, California.
I’d come from the hospital that day. In medicine, too, we are trying to deliver a range of services to millions of people at a reasonable cost and with a consistent level of quality. Unlike the Cheesecake Factory, we haven’t figured out how. Our costs are soaring, the service is typically mediocre, and the quality is unreliable. Every clinician has his or her own way of doing things, and the rates of failure and complication (not to mention the costs) for a given service routinely vary by a factor of two or three, even within the same hospital.

It’s easy to mock places like the Cheesecake Factory—restaurants that have brought chain production to complicated sit-down meals. But the “casual dining sector,” as it is known, plays a central role in the ecosystem of eating, providing three-course, fork-and-knife restaurant meals that most people across the country couldn’t previously find or afford. The ideas start out in élite, upscale restaurants in major cities. You could think of them as research restaurants, akin to research hospitals. Some of their enthusiasms—miso salmon, Chianti-braised short ribs, flourless chocolate espresso cake—spread to other high-end restaurants. Then the casual-dining chains reëngineer them for affordable delivery to millions. Does health care need something like this?

Big chains thrive because they provide goods and services of greater variety, better quality, and lower cost than would otherwise be available. Size is the key. It gives them buying power, lets them centralize common functions, and allows them to adopt and diffuse innovations faster than they could if they were a bunch of small, independent operations. Such advantages have made Walmart the most successful retailer on earth. Pizza Hut alone runs one in eight pizza restaurants in the country. The Cheesecake Factory’s major competitor, Darden, owns Olive Garden, LongHorn Steakhouse, Red Lobster, and the Capital Grille; it has more than two thousand restaurants across the country and employs more than a hundred and eighty thousand people. We can bristle at the idea of chains and mass production, with their homogeneity, predictability, and constant genuflection to the value-for-money god. Then you spend a bad night in a “quaint” “one of a kind” bed-and-breakfast that turns out to have a manic, halitoxic innkeeper who can’t keep the hot water running, and it’s right back to the Hyatt.

Medicine, though, had held out against the trend. Physicians were always predominantly self-employed, working alone or in small private-practice groups. American hospitals tended to be community-based. But that’s changing. Hospitals and clinics have been forming into large conglomerates. And physicians—facing escalating demands to lower costs, adopt expensive information technology, and account for performance—have been flocking to join them. According to the Bureau of Labor Statistics, only a quarter of doctors are self-employed—an extraordinary turnabout from a decade ago, when a majority were independent. They’ve decided to become employees, and health systems have become chains.

I’m no exception. I am an employee of an academic, nonprofit health system called Partners HealthCare, which owns the Brigham and Women’s Hospital and the Massachusetts General Hospital, along with seven other hospitals, and is affiliated with dozens of clinics around eastern Massachusetts. Partners has sixty thousand employees, including six thousand doctors. Our competitors include CareGroup, a system of five regional hospitals, and a new for-profit chain called the Steward Health Care System.

Steward was launched in late 2010, when Cerberus—the multibillion-dollar private-investment firm—bought a group of six failing Catholic hospitals in the Boston area for nine hundred million dollars. Many people were shocked that the Catholic Church would allow a corporate takeover of its charity hospitals. But the hospitals, some of which were more than a century old, had been losing money and patients, and Cerberus is one of those firms which specialize in turning around distressed businesses.

Cerberus has owned controlling stakes in Chrysler and GMAC Financing and currently has stakes in Albertsons grocery stores, one of Austria’s largest retail bank chains, and the Freedom Group, which it built into one of the biggest gun-and-ammunition manufacturers in the world. When it looked at the Catholic hospitals, it saw another opportunity to create profit through size and efficiency. In the past year, Steward bought four more Massachusetts hospitals and made an offer to buy six financially troubled hospitals in south Florida. It’s trying to create what some have called the Southwest Airlines of health care—a network of high-quality hospitals that would appeal to a more cost-conscious public.

Steward’s aggressive growth has made local doctors like me nervous. But many health systems, for-profit and not-for-profit, share its goal: large-scale, production-line medicine. The way medical care is organized is changing—because the way we pay for it is changing.
Historically, doctors have been paid for services, not results. In the eighteenth century B.C., Hammurabi’s code instructed that a surgeon be paid ten shekels of silver every time he performed a procedure for a patrician—opening an abscess or treating a cataract with his bronze lancet. It also instructed that if the patient should die or lose an eye, the surgeon’s hands be cut off. Apparently, the Mesopotamian surgeons’ lobby got this results clause dropped. Since then, we’ve generally been paid for what we do, whatever happens. The consequence is the system we have, with plenty of individual transactions—procedures, tests, specialist consultations—and uncertain attention to how the patient ultimately fares.

Health-care reforms—public and private—have sought to reshape that system. This year, my employer’s new contracts with Medicare, BlueCross BlueShield, and others link financial reward to clinical performance. The more the hospital exceeds its cost-reduction and quality-improvement targets, the more money it can keep. If it misses the targets, it will lose tens of millions of dollars. This is a radical shift. Until now, hospitals and medical groups have mainly had a landlord-tenant relationship with doctors. They offered us space and facilities, but what we tenants did behind closed doors was our business. Now it’s their business, too.

The theory the country is about to test is that chains will make us better and more efficient. The question is how. To most of us who work in health care, throwing a bunch of administrators and accountants into the mix seems unlikely to help. Good medicine can’t be reduced to a recipe.

Then again neither can good food: every dish involves attention to detail and individual adjustments that require human judgment. Yet, some chains manage to achieve good, consistent results thousands of times a day across the entire country. I decided to get inside one and find out how they did it.

Dave Luz is the regional manager for the eight Cheesecake Factories in the Boston area. He oversees operations that bring in eighty million dollars in yearly revenue, about as much as a medium-sized hospital. Luz (rhymes with “fuzz”) is forty-seven, and had started out in his twenties waiting tables at a Cheesecake Factory restaurant in Los Angeles. He was writing screenplays, but couldn’t make a living at it. When he and his wife hit thirty and had their second child, they came back east to Boston to be closer to family. He decided to stick with the Cheesecake Factory. Luz rose steadily, and made a nice living. “I wanted to have some business skills,” he said—he started a film-production company on the side—“and there was no other place I knew where you could go in, know nothing, and learn top to bottom how to run a business.”

To show me how a Cheesecake Factory works, he took me into the kitchen of his busiest restaurant, at Prudential Center, a shopping and convention hub. The kitchen design is the same in every restaurant, he explained. It’s laid out like a manufacturing facility, in which raw materials in the back of the plant come together as a finished product that rolls out the front. Along the back wall are the walk-in refrigerators and prep stations, where half a dozen people stood chopping and stirring and mixing. The next zone is where the cooking gets done—two parallel lines of countertop, forty-some feet long and just three shoe-lengths apart, with fifteen people pivoting in place between the stovetops and grills on the hot side and the neatly laid-out bins of fixings (sauces, garnishes, seasonings, and the like) on the cold side. The prep staff stock the pullout drawers beneath the counters with slabs of marinated meat and fish, serving-size baggies of pasta and crabmeat, steaming bowls of brown rice and mashed potatoes. Basically, the prep crew handles the parts, and the cooks do the assembly.

Computer monitors positioned head-high every few feet flashed the orders for a given station. Luz showed me the touch-screen tabs for the recipe for each order and a photo showing the proper presentation. The recipe has the ingredients on the left part of the screen and the steps on the right. A timer counts down to a target time for completion. The background turns from green to yellow as the order nears the target time and to red when it has exceeded it.

I watched Mauricio Gaviria at the broiler station as the lunch crowd began coming in. Mauricio was twenty-nine years old and had worked there eight years. He’d got his start doing simple prep—chopping vegetables—and worked his way up to fry cook, the pasta station, and now the sauté and broiler stations. He bounced in place waiting for the pace to pick up. An order for a “hibachi” steak popped up. He tapped the screen to open the order: medium-rare, no special requests. A ten-minute timer began. He tonged a fat hanger steak soaking in teriyaki sauce onto the broiler and started a nest of sliced onions cooking beside it. While the meat was grilling, other orders arrived: a Kobe burger, a blue-cheese B.L.T. burger, three “old-fashioned” burgers, five veggie burgers, a “farmhouse” burger, and two Thai chicken wraps. Tap, tap, tap. He got each of them grilling.

I brought up the hibachi-steak recipe on the screen. There were instructions to season the steak, sauté the onions, grill some mushrooms, slice the meat, place it on the bed of onions, pile the mushrooms on top, garnish with parsley and sesame seeds, heap a stack of asparagus tempura next to it, shape a tower of mashed potatoes alongside, drop a pat of wasabi butter on top, and serve.

Two things struck me. First, the instructions were precise about the ingredients and the objectives (the steak slices were to be a quarter of an inch thick, the presentation just so), but not about how to get there. The cook has to decide how much to salt and baste, how to sequence the onions and mushrooms and meat so they’re done at the same time, how to swivel from grill to countertop and back, sprinkling a pinch of salt here, flipping a burger there, sending word to the fry cook for the asparagus tempura, all the while keeping an eye on the steak. In producing complicated food, there might be recipes, but there was also a substantial amount of what’s called “tacit knowledge”—knowledge that has not been reduced to instructions.

Second, Mauricio never looked at the instructions anyway. By the time I’d finished reading the steak recipe, he was done with the dish and had plated half a dozen others. “Do you use this recipe screen?” I asked.

“No. I have the recipes right here,” he said, pointing to his baseball-capped head.

He put the steak dish under warming lights, and tapped the screen to signal the servers for pickup. But before the dish was taken away, the kitchen manager stopped to look, and the system started to become clearer. He pulled a clean fork out and poked at the steak. Then he called to Mauricio and the two other cooks manning the grill station.

“Gentlemen,” he said, “this steak is perfect.” It was juicy and pink in the center, he said. “The grill marks are excellent.” The sesame seeds and garnish were ample without being excessive. “But the tower is too tight.” I could see what he meant. The mashed potatoes looked a bit like something a kid at the beach might have molded with a bucket. You don’t want the food to look manufactured, he explained. Mauricio fluffed up the potatoes with a fork.

I watched the kitchen manager for a while. At every Cheesecake Factory restaurant, a kitchen manager is stationed at the counter where the food comes off the line, and he rates the food on a scale of one to ten. A nine is near-perfect. An eight requires one or two corrections before going out to a guest. A seven needs three. A six is unacceptable and has to be redone. This inspection process seemed a tricky task. No one likes to be second-guessed. The kitchen manager prodded gently, being careful to praise as often as he corrected. (“Beautiful. Beautiful!” “The pattern of this pesto glaze is just right.”) But he didn’t hesitate to correct.

“We’re getting sloppy with the plating,” he told the pasta station. He was unhappy with how the fry cooks were slicing the avocado spring rolls. “Gentlemen, a half-inch border on this next time.” He tried to be a coach more than a policeman. “Is this three-quarters of an ounce of Parm-Romano?”

And that seemed to be the spirit in which the line cooks took him and the other managers. The managers had all risen through the ranks. This earned them a certain amount of respect. They in turn seemed respectful of the cooks’ skills and experience. Still, the oversight is tight, and this seemed crucial to the success of the enterprise.

The managers monitored the pace, too—scanning the screens for a station stacking up red flags, indicating orders past the target time, and deciding whether to give the cooks at the station a nudge or an extra pair of hands. They watched for waste—wasted food, wasted time, wasted effort. The formula was Business 101: Use the right amount of goods and labor to deliver what customers want and no more. Anything more is waste, and waste is lost profit.

I spoke to David Gordon, the company’s chief operating officer. He told me that the Cheesecake Factory has worked out a staff-to-customer ratio that keeps everyone busy but not so busy that there’s no slack in the system in the event of a sudden surge of customers. More difficult is the problem of wasted food. Although the company buys in bulk from regional suppliers, groceries are the biggest expense after labor, and the most unpredictable. Everything—the chicken, the beef, the lettuce, the eggs, and all the rest—has a shelf life. If a restaurant were to stock too much, it could end up throwing away hundreds of thousands of dollars’ worth of food. If a restaurant stocks too little, it will have to tell customers that their favorite dish is not available, and they may never come back. Groceries, Gordon said, can kill a restaurant.

The company’s target last year was at least 97.5-per-cent efficiency: the managers aimed at throwing away no more than 2.5 per cent of the groceries they bought, without running out. This seemed to me an absurd target. Achieving it would require knowing in advance almost exactly how many customers would be coming in and what they were going to want, then insuring that the cooks didn’t spill or toss or waste anything. Yet this is precisely what the organization has learned to do. The chain-restaurant industry has produced a field of computer analytics known as “guest forecasting.”

“We have forecasting models based on historical data—the trend of the past six weeks and also the trend of the previous year,” Gordon told me. “The predictability of the business has become astounding.” The company has even learned how to make adjustments for the weather or for scheduled events like playoff games that keep people at home.
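
A toy illustration of the kind of baseline Gordon is describing (not the chain’s actual Net Chef analytics; the blend weight and guest counts are invented) might combine the recent six-week trend with the same weekday a year earlier:

def forecast_guests(last_six_weeks, same_weekday_last_year, weight_recent=0.7):
    """Blend the six-week average for this weekday with last year's count."""
    recent_avg = sum(last_six_weeks) / len(last_six_weeks)
    return weight_recent * recent_avg + (1 - weight_recent) * same_weekday_last_year

# Example: Saturday-night covers for the past six weeks, and a year ago.
print(round(forecast_guests([812, 790, 845, 830, 801, 820], 760)))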

A computer program known as Net Chef showed Luz that for this one restaurant food costs accounted for 28.73 per cent of expenses the previous week. It also showed exactly how many chicken breasts were ordered that week ($1,614 worth), the volume sold, the volume on hand, and how much of last week’s order had been wasted (three dollars’ worth). Chain production requires control, and they’d figured out how to achieve it on a mass scale.

As a doctor, I found such control alien—possibly from a hostile planet. We don’t have patient forecasting in my office, push-button waste monitoring, or such stringent, hour-by-hour oversight of the work we do, and we don’t want to. I asked Luz if he had ever thought about the contrast when he went to see a doctor. We were standing amid the bustle of the kitchen, and the look on his face shifted before he answered.
“I have,” he said. His mother was seventy-eight. She had early Alzheimer’s disease, and required a caretaker at home. Getting her adequate medical care was, he said, a constant battle.

Recently, she’d had a fall, apparently after fainting, and was taken to a local emergency room. The doctors ordered a series of tests and scans, and kept her overnight. They never figured out what the problem was. Luz understood that sometimes explanations prove elusive. But the clinicians didn’t seem to be following any coördinated plan of action. The emergency doctor told the family one plan, the admitting internist described another, and the consulting specialist a third. Thousands of dollars had been spent on tests, but nobody ever told Luz the results.

A nurse came at ten the next morning and said that his mother was being discharged. But his mother’s nurse was on break, and the discharge paperwork with her instructions and prescriptions hadn’t been done. So they waited. Then the next person they needed was at lunch. It was as if the clinicians were the customers, and the patients’ job was to serve them. “We didn’t get to go until 6 p.m., with a tired, disabled lady and a long drive home.” Even then she still had to be changed out of her hospital gown and dressed. Luz pressed the call button to ask for help. No answer. He went out to the ward desk.

The aide was on break, the secretary said. “Don’t you dress her yourself at home?” He explained that he didn’t, and made a fuss.

An aide was sent. She was short with him and rough in changing his mother’s clothes. “She was manhandling her,” Luz said. “I felt like, ‘Stop. I’m not one to complain. I respect what you do enormously. But if there were a video camera in here, you’d be on the evening news.’ I sent her out. I had to do everything myself. I’m stuffing my mom’s boob in her bra. It was unbelievable.”

His mother was given instructions to check with her doctor for the results of cultures taken during her stay, for a possible urinary-tract infection. But when Luz tried to follow up, he couldn’t get through to her doctor for days. “Doctors are busy,” he said. “I get it. But come on.” An office assistant finally told him that the results wouldn’t be ready for another week and that she was to see a neurologist. No explanations. No chance to ask questions.

The neurologist, after giving her a two-minute exam, suggested tests that had already been done and wrote a prescription that he admitted was of doubtful benefit. Luz’s family seemed to encounter this kind of disorganization, imprecision, and waste wherever his mother went for help.

“It is unbelievable to me that they would not manage this better,” Luz said. I asked him what he would do if he were the manager of a neurology unit or a cardiology clinic. “I don’t know anything about medicine,” he said. But when I pressed he thought for a moment, and said, “This is pretty obvious. I’m sure you already do it. But I’d study what the best people are doing, figure out how to standardize it, and then bring it to everyone to execute.”

This is not at all the normal way of doing things in medicine. (“You’re scaring me,” he said, when I told him.) But it’s exactly what the new health-care chains are now hoping to do on a mass scale. They want to create Cheesecake Factories for health care. The question is whether the medical counterparts to Mauricio at the broiler station—the clinicians in the operating rooms, in the medical offices, in the intensive-care units—will go along with the plan. Fixing a nice piece of steak is hardly of the same complexity as diagnosing the cause of an elderly patient’s loss of consciousness. Doctors and patients have not had a positive experience with outsiders second-guessing decisions. How will they feel about managers trying to tell them what the “best practices” are?

In March, my mother underwent a total knee replacement, like at least six hundred thousand Americans each year. She’d had a partial knee replacement a decade ago, when arthritis had worn away part of the cartilage, and for a while this served her beautifully. The surgeon warned, however, that the results would be temporary, and about five years ago the pain returned.

She’s originally from Ahmadabad, India, and has spent three decades as a pediatrician, attending to the children of my small Ohio home town. She’s chatty. She can’t go through a grocery checkout line or get pulled over for speeding without learning people’s names and a little bit about them. But she didn’t talk about her mounting pain. I noticed, however, that she had developed a pronounced limp and had become unable to walk even moderate distances. When I asked her about it, she admitted that just getting out of bed in the morning was an ordeal. Her doctor showed me her X-rays. Her partial prosthesis had worn through the bone on the lower surface of her knee. It was time for a total knee replacement.
This past winter, she finally stopped putting it off, and asked me to find her a surgeon. I wanted her to be treated well, in both the technical and the human sense. I wanted a place where everyone and everything—from the clinic secretary to the physical therapists—worked together seamlessly.

My mother planned to come to Boston, where I live, for the surgery so she could stay with me during her recovery. (My father died last year.) Boston has three hospitals in the top rank of orthopedic surgery. But even a doctor doesn’t have much to go on when it comes to making a choice. A place may have a great reputation, but it’s hard to know about actual quality of care.

Unlike some countries, the United States doesn’t have a monitoring system that tracks joint-replacement statistics. Even within an institution, I found, surgeons take strikingly different approaches. They use different makes of artificial joints, different kinds of anesthesia, different regimens for post-surgical pain control and physical therapy.

In the absence of information, I went with my own hospital, the Brigham and Women’s Hospital. Our big-name orthopedic surgeons treat Olympians and professional athletes. Nine of them do knee replacements. Of most interest to me, however, was a surgeon who was not one of the famous names. He has no national recognition. But he has led what is now a decade-long experiment in standardizing joint-replacement surgery.

John Wright is a New Zealander in his late fifties. He’s a tower crane of a man, six feet four inches tall, and so bald he barely seems to have eyebrows. He’s informal in attire—I don’t think I’ve ever seen him in a tie, and he is as apt to do rounds in his zip-up anorak as in his white coat—but he exudes competence.

“Customization should be five per cent, not ninety-five per cent, of what we do,” he told me. A few years ago, he gathered a group of people from every specialty involved—surgery, anesthesia, nursing, physical therapy—to formulate a single default way of doing knee replacements. They examined every detail, arguing their way through their past experiences and whatever evidence they could find. Essentially, they did what Luz considered the obvious thing to do: they studied what the best people were doing, figured out how to standardize it, and then tried to get everyone to follow suit.

They came up with a plan for anesthesia based on research studies—including giving certain pain medications before the patient entered the operating room and using spinal anesthesia plus an injection of local anesthetic to block the main nerve to the knee. They settled on a postoperative regimen, too. The day after a knee replacement, most orthopedic surgeons have their patients use a continuous passive-motion machine, which flexes and extends the knee as they lie in bed. Large-scale studies, though, have suggested that the machines don’t do much good. Sure enough, when the members of Wright’s group examined their own patients, they found that the ones without the machine got out of bed sooner after surgery, used less pain medication, and had more range of motion at discharge. So Wright instructed the hospital to get rid of the machines, and to use the money this saved (ninety thousand dollars a year) to pay for more physical therapy, something that is proven to help patient mobility. Therapy, starting the day after surgery, would increase from once to twice a day, including weekends.

Even more startling, Wright had persuaded the surgeons to accept changes in the operation itself; there was now, for instance, a limit as to which prostheses they could use. Each of our nine knee-replacement surgeons had his preferred type and brand. Knee surgeons are as particular about their implants as professional tennis players are about their racquets. But the hardware is easily the biggest cost of the operation—the average retail price is around eight thousand dollars, and some cost twice that, with no solid evidence of real differences in results.

Knee implants were largely perfected a quarter century ago. By the nineteen-nineties, studies showed that, for some ninety-five per cent of patients, the implants worked magnificently a decade after surgery. Evidence from the Australian registry has shown that not a single new knee or hip prosthesis had a lower failure rate than that of the established prostheses. Indeed, thirty per cent of the new models were likelier to fail. Like others on staff, Wright has advised companies on implant design. He believes that innovation will lead to better implants. In the meantime, however, he has sought to limit the staff to the three lowest-cost knee implants.

These have been hard changes for many people to accept. Wright has tried to figure out how to persuade clinicians to follow the standardized plan. To prevent revolt, he learned, he had to let them deviate at times from the default option. Surgeons could still order a passive-motion machine or a preferred prosthesis. “But I didn’t make it easy,” Wright said. The surgeons had to enter the treatment orders in the computer themselves. To change or add an implant, a surgeon had to show that the performance was superior or the price at least as low.

I asked one of his orthopedic colleagues, a surgeon named John Ready, what he thought about Wright’s efforts. Ready was philosophical. He recognized that the changes were improvements, and liked most of them. But he wasn’t happy when Wright told him that his knee-implant manufacturer wasn’t matching the others’ prices and would have to be dropped.

“It’s not ideal to lose my prosthesis,” Ready said. “I could make the switch. The differences between manufacturers are minor. But there’d be a learning curve.” Each implant has its quirks—how you seat it, what tools you use. “It’s probably a ten-case learning curve for me.” Wright suggested that he explain the situation to the manufacturer’s sales rep. “I’m my rep’s livelihood,” Ready said. “He probably makes five hundred dollars a case from me.” Ready spoke to his rep. The price was dropped.

Wright has become the hospital’s kitchen manager—not always a pleasant role. He told me that about half of the surgeons appreciate what he’s doing. The other half tolerate it at best. One or two have been outright hostile. But he has persevered, because he’s gratified by the results. The surgeons now use a single manufacturer for seventy-five per cent of their implants, giving the hospital bargaining power that has helped slash its knee-implant costs by half. And the start-to-finish standardization has led to vastly better outcomes. The distance patients can walk two days after surgery has increased from fifty-three to eighty-five feet. Nine out of ten could stand, walk, and climb at least a few stairs independently by the time of discharge. The amount of narcotic pain medications they required fell by a third. They could also leave the hospital nearly a full day earlier on average (which saved some two thousand dollars per patient).

My mother was one of the beneficiaries. She had insisted to Dr. Wright that she would need a week in the hospital after the operation and three weeks in a rehabilitation center. That was what she’d required for her previous knee operation, and this one was more extensive.
“We’ll see,” he told her.

The morning after her operation, he came in and told her that he wanted her getting out of bed, standing up, and doing a specific set of exercises he showed her. “He’s pushy, if you want to say it that way,” she told me. The physical therapists and nurses were, too. They were a team, and that was no small matter. I counted sixty-three different people involved in her care. Nineteen were doctors, including the surgeon and chief resident who assisted him, the anesthesiologists, the radiologists who reviewed her imaging scans, and the junior residents who examined her twice a day and adjusted her fluids and medications. Twenty-three were nurses, including her operating-room nurses, her recovery-room nurse, and the many ward nurses on their eight-to-twelve-hour shifts. There were also at least five physical therapists; sixteen patient-care assistants, helping check her vital signs, bathe her, and get her to the bathroom; plus X-ray and EKG technologists, transport workers, nurse practitioners, and physician assistants. I didn’t even count the bioengineers who serviced the equipment used, the pharmacists who dispensed her medications, or the kitchen staff preparing her food while taking into account her dietary limitations. They all had to coördinate their contributions, and they did.

Three days after her operation, she was getting in and out of bed on her own. She was on virtually no narcotic medication. She was starting to climb stairs. Her knee pain was actually less than before her operation. She left the hospital for the rehabilitation center that afternoon.

The biggest complaint that people have about health care is that no one ever takes responsibility for the total experience of care, for the costs, and for the results. My mother experienced what happens in medicine when someone takes charge. Of course, John Wright isn’t alone in trying to design and implement this kind of systematic care, in joint surgery and beyond. The Virginia Mason Medical Center, in Seattle, has done it for knee surgery and cancer care; the Geisinger Health Center, in Pennsylvania, has done it for cardiac surgery and primary care; the University of Michigan Health System standardized how its doctors give blood transfusions to patients, reducing the need for transfusions by thirty-one per cent and expenses by two hundred thousand dollars a month. Yet, unless such programs are ramped up on a nationwide scale, they aren’t going to do much to improve health care for most people or reduce the explosive growth of health-care costs.

In medicine, good ideas still take an appallingly long time to trickle down. Recently, the American Academy of Neurology and the American Headache Society released new guidelines for migraine-headache treatment. They recommended treating severe migraine sufferers—who have more than six attacks a month—with preventive medications and listed several drugs that markedly reduce the occurrence of attacks. The authors noted, however, that previous guidelines going back more than a decade had recommended such remedies, and doctors were still not providing them to more than two-thirds of patients. One study examined how long it took several major discoveries, such as the finding that the use of beta-blockers after a heart attack improves survival, to reach even half of Americans. The answer was, on average, more than fifteen years.

Scaling good ideas has been one of our deepest problems in medicine. Regulation has had its place, but it has proved no more likely to produce great medicine than food inspectors are to produce great food. During the era of managed care, insurance-company reviewers did hardly any better. We’ve been stuck. But do we have to be?

Every six months, the Cheesecake Factory puts out a new menu. This means that everyone who works in its restaurants expects to learn something new twice a year. The March, 2012, Cheesecake Factory menu included thirteen new items. The teaching process is now finely honed: from start to finish, rollout takes just seven weeks.

The ideas for a new dish, or for tweaking an old one, can come from anywhere. One of the Boston prep cooks told me about an idea he once had that ended up in a recipe. David Overton, the founder and C.E.O. of the Cheesecake Factory, spends much of his time sampling a range of cuisines and comes up with many dishes himself. All the ideas, however, go through half a dozen chefs in the company’s test kitchen, in Calabasas. They figure out how to make each recipe reproducible, appealing, and affordable. Then they teach the new recipe to the company’s regional managers and kitchen managers.

Dave Luz, the Boston regional manager, went to California for training this past January with his chief kitchen manager, Tom Schmidt, a chef with fifteen years’ experience. They attended lectures, watched videos, participated in workshops. It sounded like a surgical conference. Where I might be taught a new surgical technique, they were taught the steps involved in preparing a “Santorini farro salad.” But there was a crucial difference. The Cheesecake instructors also trained the attendees how to teach what they were learning. In medicine, we hardly ever think about how to implement what we’ve learned. We learn what we want to, when we want to.

On the first training day, the kitchen managers worked their way through thirteen stations, preparing each new dish, and their performances were evaluated. The following day, they had to teach their regional managers how to prepare each dish—Schmidt taught Luz—and this time the instructors assessed how well the kitchen managers had taught.
The managers returned home to replicate the training session for the general manager and the chief kitchen manager of every restaurant in their region. The training at the Boston Prudential Center restaurant took place on two mornings, before the lunch rush. The first day, the managers taught the kitchen staff the new menu items. There was a lot of poring over the recipes and videos and fussing over the details. The second day, the cooks made the new dishes for the servers. This gave the cooks some practice preparing the food at speed, while allowing the servers to learn the new menu items. The dishes would go live in two weeks. I asked a couple of the line cooks how long it took them to learn to make the new food.

“I know it already,” one said.
“I make it two times, and that’s all I need,” the other said.
Come on, I said. How long before they had it down pat?
“One day,” they insisted. “It’s easy.”

I asked Schmidt how much time he thought the cooks required to master the recipes. They thought a day, I told him. He grinned. “More like a month,” he said.

Even a month would be enviable in medicine, where innovations commonly spread at a glacial pace. The new health-care chains, though, are betting that they can change that, in much the same way that other chains have.
Armin Ernst is responsible for intensive-care-unit operations in Steward’s ten hospitals. The I.C.U.s he oversees serve some eight thousand patients a year. In another era, an I.C.U. manager would have been a facilities expert. He would have spent his time making sure that the equipment, electronics, pharmacy resources, and nurse staffing were up to snuff. He would have regarded the I.C.U. as the doctors’ workshop, and he would have wanted to give them the best possible conditions to do their work as they saw fit.
Ernst, though, is a doctor—a new kind of doctor, whose goal is to help disseminate good ideas. He doesn’t see the I.C.U. as a doctors’ workshop. He sees it as the temporary home of the sickest, most fragile people in the country. Nowhere in health care do we expend more resources. Although fewer than one in four thousand Americans are in intensive care at any given time, they account for four per cent of national health-care costs. Ernst believes that his job is to make sure that everyone is collaborating to provide the most effective and least wasteful care possible.

He looked like a regular doctor to me. Ernst is fifty years old, a native German who received his medical degree at the University of Heidelberg before training in pulmonary and critical-care medicine in the United States. He wears a white hospital coat and talks about drips and ventilator settings, like any other critical-care specialist. But he doesn’t deal with patients: he deals with the people who deal with patients.

Ernst says he’s not telling clinicians what to do. Instead, he’s trying to get clinicians to agree on precise standards of care, and then make sure that they follow through on them. (The word “consensus” comes up a lot.) What I didn’t understand was how he could enforce such standards in ten hospitals across three thousand square miles.

Late one Friday evening, I joined an intensive-care-unit team on night duty. But this team was nowhere near a hospital. We were in a drab one-story building behind a meat-trucking facility outside of Boston, in a back section that Ernst called his I.C.U. command center. It was outfitted with millions of dollars’ worth of technology. Banks of computer screens carried a live feed of cardiac-monitor readings, radiology-imaging scans, and laboratory results from I.C.U. patients throughout Steward’s hospitals. Software monitored the stream and produced yellow and red alerts when it detected patterns that raised concerns. Doctors and nurses manned consoles where they could toggle on high-definition video cameras that allowed them to zoom into any I.C.U. room and talk directly to the staff on the scene or to the patients themselves.

The command center was just a few months old. The team had gone live in only four of the ten hospitals. But in the next several months Ernst’s “tele-I.C.U.” team will have the ability to monitor the care for every patient in every I.C.U. bed in the Steward health-care system.
A doctor, two nurses, and an administrative assistant were on duty in the command center each night I visited. Christina Monti was one of the nurses. A pixie-like thirty-year-old with nine years’ experience as a cardiac intensive-care nurse, she was covering Holy Family Hospital, on the New Hampshire border, and St. Elizabeth’s Medical Center, in Boston’s Brighton neighborhood. When I sat down with her, she was making her rounds, virtually.

First, she checked on the patients she had marked as most critical. She reviewed their most recent laboratory results, clinical notes, and medication changes in the electronic record. Then she made a “visit,” flicking on the two-way camera and audio system. If the patients were able to interact, she would say hello to them in their beds. She asked the staff members whether she could do anything for them. The tele-I.C.U. team provided the staff with extra eyes and ears when needed. If a crashing patient diverts the staff’s attention, the members of the remote team can keep an eye on the other patients. They can handle computer paperwork if a nurse falls behind; they can look up needed clinical information. The hospital staff have an OnStar-like button in every room that they can push to summon the tele-I.C.U. team.

Monti also ran through a series of checks for each patient. She had a reference list of the standards that Ernst had negotiated with the people running the I.C.U.s, and she looked to see if they were being followed. The standards covered basics, from hand hygiene to measures for stomach-ulcer prevention. In every room with a patient on a respirator, for instance, Monti made sure the nurse had propped the head of the bed up at least thirty degrees, which makes pneumonia less likely. She made sure the breathing tube in the patient’s mouth was secure, to reduce the risk of the tube’s falling out or becoming disconnected. She zoomed in on the medication pumps to check that the drips were dosed properly. She was not looking for bad nurses or bad doctors. She was looking for the kinds of misses that even excellent nurses and doctors can make under pressure.
The concept of the remote I.C.U. started with an effort to let specialists in critical-care medicine, who are in short supply, cover not just one but several community hospitals. Two hundred and fifty hospitals from Alaska to Virginia have installed a version of the tele-I.C.U. It produced significant improvements in outcomes and costs—and, some discovered, a means of driving better practices even in hospitals that had specialists on hand.
After five minutes of observation, however, I realized that the remote I.C.U. team wasn’t exactly in command; it was in negotiation. I observed Monti perform a video check on a middle-aged man who had just come out of heart surgery. A soft chime let the people in the room know she was dropping in. The man was unconscious, supported by a respirator and intravenous drips. At his bedside was a nurse hanging a bag of fluid. She seemed to stiffen at the chime’s sound.

“Hi,” Monti said to her. “I’m Chris. Just making my evening rounds. How are you?” The bedside nurse gave the screen only a sidelong glance.
Ernst wasn’t oblivious of the issue. He had taken pains to introduce the command center’s team, spending weeks visiting the units and bringing doctors and nurses out to tour the tele-I.C.U. before a camera was ever turned on. But there was no escaping the fact that these were strangers peering over the staff’s shoulders. The bedside nurse’s chilliness wasn’t hard to understand.

In a single hour, however, Monti had caught a number of problems. She noticed, for example, that a patient’s breathing tube had come loose. Another patient wasn’t getting recommended medication to prevent potentially fatal blood clots. Red alerts flashed on the screen—a patient with an abnormal potassium level that could cause heart-rhythm problems, another with a sudden leap in heart rate.

Monti made sure that the team wasn’t already on the case and that the alerts weren’t false alarms. Checking the computer, she figured out that a doctor had already ordered a potassium infusion for the woman with the low level. Flipping on a camera, she saw that the patient with the high heart rate was just experiencing the stress of being helped out of bed for the first time after surgery. But the unsecured breathing tube and the forgotten blood-clot medication proved to be oversights. Monti raised the concerns with the bedside staff.

Sometimes they resist. “You have got to be careful from patient to patient,” Gerard Hayes, the tele-I.C.U. doctor on duty, explained. “Pushing hard on one has ramifications for how it goes with a lot of patients. You don’t want to sour whole teams on the tele-I.C.U.” Across the country, several hospitals have decommissioned their systems. Clinicians have been known to place a gown over the camera, or even rip the camera out of the wall. Remote monitoring will never be the same as being at the bedside. One nurse called the command center to ask the team not to turn on the video system in her patient’s room: he was delirious and confused, and the sudden appearance of someone talking to him from the television would freak him out.
Still, you could see signs of change. I watched Hayes make his virtual rounds through the I.C.U. at St. Anne’s Hospital, in Fall River, near the Rhode Island border. He didn’t yet know all the members of the hospital staff—this was only his second night in the command center, and when he sees patients in person it’s at a hospital sixty miles north. So, in his dealings with the on-site clinicians, he was feeling his way.

Checking on one patient, he found a few problems. Mr. Karlage, as I’ll call him, was in his mid-fifties, an alcoholic smoker with cirrhosis of the liver, severe emphysema, terrible nutrition, and now a pneumonia that had put him into respiratory failure. The I.C.U. team injected him with antibiotics and sedatives, put a breathing tube down his throat, and forced pure oxygen into his lungs. Over a few hours, he stabilized, and the I.C.U. doctor was able to turn his attention to other patients.

But stabilizing a sick patient is like putting out a house fire. There can be smoldering embers just waiting to reignite. Hayes spotted a few. The ventilator remained set to push breaths at near-maximum pressure, and, given the patient’s severe emphysema, this risked causing a blowout. The oxygen concentration was still cranked up to a hundred per cent, which, over time, can damage the lungs. The team had also started several broad-spectrum antibiotics all at once, and this regimen had to be dialled back if they were to avoid breeding resistant bacteria.

Hayes had to notify the unit doctor. An earlier interaction, however, had not been promising. During a video check on a patient, Hayes had introduced himself and mentioned an issue he’d noticed. The unit doctor stared at him with folded arms, mouth shut tight. Hayes was a former Navy flight surgeon with twenty years’ experience as an I.C.U. doctor and looked to have at least a decade on the St. Anne’s doctor. But the doctor was no greenhorn, either, and gave him the brushoff: “The morning team can deal with that.” Now Hayes needed to call him about Mr. Karlage. He decided to do it by phone.

“Sounds like you’re having a busy night,” Hayes began when he reached the doctor. “Mr. Karlage is really turning around, huh?” Hayes praised the doctor’s work. Then he brought up his three issues, explaining what he thought could be done and why. He spoke like a consultant brought in to help. This went over better. The doctor seemed to accept Hayes’s suggestions.

Unlike a mere consultant, however, Hayes took a few extra steps to make sure his suggestions were carried out. He spoke to the nurse and the respiratory therapist by video and explained the changes needed. To carry out the plan, they needed written orders from the unit doctor. Hayes told them to call him back if they didn’t get the orders soon.

Half an hour later, Hayes called Mr. Karlage’s nurse again. She hadn’t received the orders. For all the millions of dollars of technology spent on the I.C.U. command center, this is where the plug meets the socket. The fundamental question in medicine is: Who is in charge? With the opening of the command center, Steward was trying to change the answer—it gave the remote doctors the authority to issue orders as well. The idea was that they could help when a unit doctor got too busy and fell behind, and that’s what Hayes chose to believe had happened. He entered the orders into the computer. In a conflict, however, the on-site physician has the final say. So Hayes texted the St. Anne’s doctor, informing him of the changes and asking if he’d let him know if he disagreed.

Hayes received no reply. No “thanks” or “got it” or “O.K.” After midnight, though, the unit doctor pressed the video call button and his face flashed onto Hayes’s screen. Hayes braced for a confrontation. Instead, the doctor said, “So I’ve got this other patient and I wanted to get your opinion.”
Hayes suppressed a smile. “Sure,” he said.

When he signed off, he seemed ready to high-five someone. “He called us,” he marvelled. The command center was gaining credibility.
Armin Ernst has big plans for the command center—a rollout of full-scale treatment protocols for patients with severe sepsis, acute respiratory-distress syndrome, and other conditions; strategies to reduce unnecessary costs; perhaps even computer forecasting of patient volume someday. Steward is already extending the command-center concept to in-patient psychiatry. Emergency rooms and surgery may be next. Other health systems are pursuing similar models. The command-center concept provides the possibility of, well, command.

Today, some ninety “super-regional” health-care systems have formed across the country—large, growing chains of clinics, hospitals, and home-care agencies. Most are not-for-profit. Financial analysts expect the successful ones to drive independent medical centers out of existence in much of the country—either by buying them up or by drawing away their patients with better quality and cost control. Some small clinics and stand-alone hospitals will undoubtedly remain successful, perhaps catering to the luxury end of health care the way gourmet restaurants do for food. But analysts expect that most of us will gravitate to the big systems, just as we have moved away from small pharmacies to CVS and Walmart.
Already, there have been startling changes. Cleveland Clinic, for example, opened nine regional hospitals in northeast Ohio, as well as health centers in southern Florida, Toronto, and Las Vegas, and is now going international, with a three-hundred-and-sixty-four-bed hospital in Abu Dhabi scheduled to open next year. It reached an agreement with Lowe’s, the home-improvement chain, guaranteeing a fixed price for cardiac surgery for the company’s employees and dependents. The prospect of getting better care for a lower price persuaded Lowe’s to cover all out-of-pocket costs for its insured workers to go to Cleveland, including co-payments, airfare, transportation, and lodging. Three other companies, including Kohl’s department stores, have made similar deals, and a dozen more, including Boeing, are in negotiations.

Big Medicine is on the way.
Reinventing medical care could produce hundreds of innovations. Some may be as simple as giving patients greater e-mail and online support from their clinicians, which would enable timelier advice and reduce the need for emergency-room visits. Others might involve smartphone apps for coaching the chronically ill in the management of their disease, new methods for getting advice from specialists, sophisticated systems for tracking outcomes and costs, and instant delivery to medical teams of up-to-date care protocols. Innovations could take a system that requires sixty-three clinicians for a knee replacement and knock the number down by half or more. But most significant will be the changes that finally put people like John Wright and Armin Ernst in charge of making care coherent, coördinated, and affordable. Essentially, we’re moving from a Jeffersonian ideal of small guilds and independent craftsmen to a Hamiltonian recognition of the advantages that size and centralized control can bring.

Yet it seems strange to pin our hopes on chains. We have no guarantee that Big Medicine will serve the social good. Whatever the industry, an increase in size and control creates the conditions for monopoly, which could do the opposite of what we want: suppress innovation and drive up costs over time. In the past, certainly, health-care systems that pursued size and market power were better at raising prices than at lowering them.
A new generation of medical leaders and institutions professes to have a different aim. But a lesson of the past century is that government can influence the behavior of big corporations, by requiring transparency about their performance and costs and by enacting rules and limitations to protect the ordinary citizen. The federal government has broken up monopolies like Standard Oil and A.T. & T.; similar concerns about market power could well develop in health care in some parts of the country.

Mixed feelings about the transformation are unavoidable. There’s not just the worry about what Big Medicine will do; there’s also the worry about how society and government will respond. For the changes to live up to our hopes—lower costs and better care for everyone—liberals will have to accept the growth of Big Medicine, and conservatives will have to accept the growth of strong public oversight.

The vast savings of Big Medicine could be widely shared—or reserved for a few. The clinicians who are trying to reinvent medicine aren’t doing it to make hedge-fund managers and bondholders richer; they want to see that everyone benefits from the savings their work generates—and that won’t be automatic.

Our new models come from industries that have learned to increase the capabilities and efficiency of the human beings who work for them. Yet the same industries have also tended to devalue those employees. The frontline worker, whether he is making cars, solar panels, or wasabi-crusted ahi tuna, now generates unprecedented value but receives little of the wealth he is creating. Can we avoid this as we revolutionize health care?

Those of us who work in the health-care chains will have to contend with new protocols and technology rollouts every six months, supervisors and project managers, and detailed metrics on our performance. Patients won’t just look for the best specialist anymore; they’ll look for the best system. Nurses and doctors will have to get used to delivering care in which our own convenience counts for less and the patients’ experience counts for more. We’ll also have to figure out how to reward people for taking the time and expense to teach the next generations of clinicians. All this will be an enormous upheaval, but it’s long overdue, and many people recognize that. When I asked Christina Monti, the Steward tele-I.C.U. nurse, why she wanted to work in a remote facility tangling with staffers who mostly regarded her with indifference or hostility, she told me, “Because I wanted to be part of the change.”

And we are seeing glimpses of this change. In my mother’s rehabilitation center, miles away from where her surgery was done, the physical therapists adhered to the exercise protocols that Dr. Wright’s knee factory had developed. He didn’t have a video command center, so he came out every other day to check on all the patients and make sure that the staff was following the program. My mother was sure she’d need a month in rehab, but she left in just a week, incurring a fraction of the costs she would have otherwise. She walked out the door using a cane. On her first day at home with me, she climbed two flights of stairs and walked around the block for exercise.

The critical question is how soon that sort of quality and cost control will be available to patients everywhere across the country. We’ve let health-care systems provide us with the equivalent of greasy-spoon fare at four-star prices, and the results have been ruinous. The Cheesecake Factory model represents our best prospect for change. Some will see danger in this. Many will see hope. And that’s probably the way it should be. ♦

======================
Article on Physician Burnout and Best Practice
======================
JCR Notes:

A primary care physician’s work includes vaccinations, screenings, chronic disease prevention and treatment, relationship building, family planning, behavioral health, counseling, and other vital but time-consuming work.

To be in full compliance with the U.S. Preventive Services Task Force recommendations, primary care physicians with average-sized patient populations need to dedicate 7.4 hours per day to preventative care alone. Taken in conjunction with the other primary care services, namely acute and chronic care, the estimated total working hours per primary care physician comes to 21.7 hours per day, or 108.5 hours per week.
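A quick arithmetic check of those workload figures (my own sketch in Python, not from the study; the five-day week is my assumption):

# Rough check of the primary-care workload arithmetic quoted above.
preventive_hours_per_day = 7.4      # preventive care alone (USPSTF compliance)
total_hours_per_day = 21.7          # preventive + acute + chronic care combined
workdays_per_week = 5               # assumption: five-day work week

print(total_hours_per_day * workdays_per_week)   # 108.5 hours per week, as cited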

“Complete Care” across 8,500 physicians and 4.4 million members at SCPMG has four elements:

1. Share accountability:
share accountability for preventative and chronic care services (e.g., treating people with hypertension or women in need of a mammogram) with high-volume specialties.

2. Delegation:
One fundamental move was to transfer tasks from physicians — not just those in primary care — to non-physicians.

3. Information technology
An “outreach team” manages information technologies that allow patients to schedule visits from mobile apps, access online personalized health care plans (e.g., customized weight-loss calendars and healthy recipes), and manage complex schedules (e.g., the steps prior to a kidney transplant).

4. Standardized Care Process (see Atul Gawande Big Med)
“Proactive Office Encounter” (POE), ensures consistent evidence-based care at every encounter across the organization. At its core, the POE is an agreement of process and delegation of tasks between physicians and their administrative supports.

Glossary:
Medical assistants (MAs)
Licensed vocational nurses (LVNs)

======================

CREDIT HBR Case Study on SCPMG Primary Care Best Practice

How One California Medical Group Is Decreasing Physician Burnout
Sophia Arabadjis
Erin E. Sullivan
JUNE 07, 2017

Physician burnout is a growing problem for all health care systems in the United States. Burned-out physicians deliver lower quality care, reduce their hours, or stop practicing, reducing access to care around the country. Primary care physicians are particularly vulnerable: They have some of the highest burnout rates of any medical discipline.

As part of our work researching high-performing primary care systems, we discovered a system-wide approach launched by Southern California Permanente Medical Group (SCPMG) in 2004 that unburdens primary care physicians. We believe the program — Complete Care — may be a viable model for other institutions looking to decrease burnout or increase physician satisfaction. (While burnout can easily be measured, institutions often don’t publicly report their own rates and the associated turnover they experience. Consequently, we used physician satisfaction as a proxy for burnout in our research.)

In most health care systems, primary care physicians are the first stop for patients needing care. As a result, their patients’ needs — and their own tasks — vary immensely. A primary care physician’s work includes vaccinations, screenings, chronic disease prevention and treatment, relationship building, family planning, behavioral health, counseling, and other vital but time-consuming work.

Some studies have examined just how much time a primary care physician needs to do all of these tasks and the results are staggering. To be in full compliance with the U.S. Preventive Services Task Force recommendations, primary care physicians with average-sized patient populations need to dedicate 7.4 hours per day to preventative care alone. Taken in conjunction with the other primary care services, namely acute and chronic care, the estimated total working hours per primary care physician comes to 21.7 hours per day, or 108.5 hours per week. Given such workloads, the high burnout rate is hardly surprising.

While designed with the intent to improve quality of care, SCPMG’s Complete Care program also alleviates some of the identified drivers of physician burnout by following a systematic approach to care delivery. Comprising 8,500 physicians, SCPMG consistently provides the highest quality care to the region’s 4.4 million plan members. And a recent study of SCPMG physician satisfaction suggests that regardless of discipline, physicians feel high levels of satisfaction in three key areas: their compensation, their perceived ability to deliver high-quality care, and their day-to-day professional lives.

Complete Care has four core elements:

Share Accountability with Specialists
A few years ago, SCPMG’s regional medical director of quality and clinical analysis noticed a plateauing effect in some preventative screenings, where screening rates failed to increase beyond a certain percentage. He asked his team to analyze how certain patient populations — for example, women in need of a mammogram — accessed the health care system. As approximately one in eight women will develop invasive breast cancer over the course of their lifetimes, a failure to receive the recommended preventative screening could have serious health repercussions.
What the team found was startling: Over the course of a year, nearly two-thirds of women clinically eligible for a mammogram never set foot in their primary care physician’s office. Instead they showed up in specialty care or urgent care.

While this discovery spurred more research into patient access, the outcome remained the same: To achieve better rates of preventative and chronic care compliance, specialists had to be brought into the fold.
SCPMG slowly started to share accountability for preventative and chronic care services (e.g., treating people with hypertension or women in need of a mammogram) with high-volume specialties. In order to bring the specialists on board, SCPMG identified and enlisted physician champions across the medical group to promote the program throughout the region; carefully timed the rollouts of the program’s different pieces so increased demands wouldn’t overwhelm specialists; and crafted incentive programs whose payouts were tied to specialists’ performance of preventative and chronic-care activities.

This reallocation of traditional primary care responsibilities has allowed SCPMG to achieve a high level of care integration and challenge traditional notions of roles and systems. Its specialists now have to respond to patients’ needs outside their immediate expertise: For example, a podiatrist will inquire whether a diabetic patient has had his or her regular eye examination, and an emergency room doctor will stitch up a cut and give immunizations in the same visit. And the whole system, not just primary care, is responsible for quality metrics related to prevention and chronic care (e.g., the percentage of eligible patients who received a mammogram).

In addition, SCPMG revamped the way it provided care to match how patients accessed and used their system. For example, it began promoting the idea of the comprehensive visit, where patients could see their primary care provider, get blood drawn, and pick up prescribed medications in the same building.

Ultimately, the burden on primary care physicians started to ease. Even more important, SCPMG estimates that Complete Care has saved over 17,000 lives.

Delegate Responsibility
“Right work, right people,” a guiding principle, helped shape the revamping of the organization’s infrastructure. One fundamental move was to transfer tasks from physicians — not just those in primary care — to non-physicians so physicians could spend their time doing tasks only they could do and everyone was working at the top of his or her license. For example, embedded nurse managers of diabetic patients help coordinate care visits, regularly communicate directly with patients about meeting their health goals (such as weekly calls about lowering HbA1c levels), and track metrics on diabetic populations across the entire organization. At the same time, dedicated prescribing nurse practitioners work closely with physicians to monitor medication use, which, in the case of blood thinners, is very time-intensive and requires careful titration.

Leverage Technology

SCPMG invested in information technologies that allowed patients to schedule visits from mobile apps, access online personalized health care plans (e.g., customized weight-loss calendars and healthy recipes), and manage complex schedules (e.g., the steps prior to a kidney transplant). It also established a small outreach team (about four people) that uses large automated registries of patients to mail seasonal reminders (e.g., “it’s time for your flu vaccine shot”) and alerts about routine checkups (e.g., “you are due for a mammogram”), and to handle other duties (e.g., coordinating mail-order, at-home fecal tests for colon cancer). In addition, the outreach team manages automated calls and e-mail reminders for the region’s 4.4 million members.

Thanks to this reorganization of responsibilities and use of new technology, traditional primary care tasks such as monitoring blood thinners, managing diabetic care, and tracking patients’ eligibility for cancer screenings have been transferred to other people and processes within the SCPMG system.

Standardize Care Processes
The final element of Complete Care is the kind of process standardization advocated by Atul Gawande in his New Yorker article “Big Med.” Standardizing processes — and in particular, workflows — removes duplicative work, strengthens working relationships, and results in higher-functioning teams, reliable routines, and higher-quality outcomes. In primary care, standardized workflows help create consistent communications between providers and staff, and between providers and patients, which allows physicians to spend more time during visits on patients’ pressing needs.
One such process, the “Proactive Office Encounter” (POE), ensures consistent evidence-based care at every encounter across the organization. At its core, the POE is an agreement of process and delegation of tasks between physicians and their administrative supports. It was originally developed to improve communications between support staff and physicians after SCPMG’s electronic medical record was introduced.
Medical assistants (MAs) and licensed vocational nurses (LVNs) are key players. A series of checklists embedded into the medical record guide their work both before and after the visit. These checklists contain symptoms, actions, and questions that are timely and specific to each patient based on age, disease status, and reason for his or her visit. Prior to the visit, MAs or LVNs contact patients with pre-visit instructions or to schedule necessary lab work. During the visit, they use the same checklists to follow up on pre-visit instructions, take vitals, conduct medication reconciliation, and prep the patient for the provider.

Pop-ups within the medical record indicate a patient’s eligibility for a new screening or regular test based on new literature, prompting the MAs or LVNs to ask patients for additional information. During the visit, physicians have access to the same checklists and data collected by the MAs or LVNs. This enables them to review the work quickly and efficiently and follow up on any flagged issues. After the visit with the physician, patients see an MA or LVN again and receive a summary of topics discussed with the provider and specific instructions or health education resources.
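A minimal sketch of how such an age- and condition-driven checklist might be represented in software (my own illustration; the field names and rules are entirely hypothetical and do not describe SCPMG’s actual record system):

# Hypothetical illustration of a pre-visit checklist keyed to patient attributes.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Patient:
    age: int
    conditions: List[str] = field(default_factory=list)
    visit_reason: str = ""

def pre_visit_checklist(p: Patient) -> List[str]:
    # Baseline tasks the MA or LVN completes at every visit.
    tasks = ["follow up on pre-visit instructions", "take vitals", "reconcile medications"]
    # Illustrative prompts keyed to age and disease status.
    if p.age >= 50:
        tasks.append("check colon-cancer screening status")
    if "diabetes" in p.conditions:
        tasks.append("confirm annual eye exam and latest HbA1c")
    return tasks

print(pre_visit_checklist(Patient(age=62, conditions=["diabetes"], visit_reason="knee pain")))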

Contemporary physicians face many challenges: an aging population, rising rates of chronic conditions, workforce shortages, technological uncertainty, changing governmental policies, and greater disparities in health outcomes across populations. All of this, it could be argued, disproportionately affects primary care specialties. These factors promise to increase physician burnout unless health care organizations do something to ease physicians’ burden. SCPMG’s Complete Care initiative offers a viable blueprint to do just that.

Sophia Arabadjis is a researcher and case writer at the Harvard Medical School Center for Primary Care and a research assistant at the University of Colorado. She has investigated health systems in Europe and the United States.

Erin E. Sullivan is the research director of the Harvard Medical School Center for Primary Care. Her research focuses on high-performing primary care systems.

Four Daily Well-Being Workouts

Marty Seligman is a renowned well-being researcher, and writes in today’s NYT about four practices for flourishing:

Identify Signature Strengths: Focus every day on personal strengths exhibited when you were at your best.

Find the Good: Focus every day on asking, “Why did this good thing happen?”

Make a Gratitude Visit: Visit a person you feel gratitude toward.

Respond Constructively: Practice active, constructive responses.

===================

CREDIT: New York Times Article

Get Happy: Four Well-Being Workouts

By JULIE SCELFO
APRIL 5, 2017
Relieving stress and anxiety might help you feel better — for a bit. Martin E.P. Seligman, a professor of psychology at the University of Pennsylvania and a pioneer in the field of positive psychology, does not see alleviating negative emotions as a path to happiness.
“Psychology is generally focused on how to relieve depression, anger and worry,” he said. “Freud and Schopenhauer said the most you can ever hope for in life is not to suffer, not to be miserable, and I think that view is empirically false, morally insidious, and a political and educational dead-end.”
“What makes life worth living,” he said, “is much more than the absence of the negative.”

To Dr. Seligman, the most effective long-term strategy for happiness is to actively cultivate well-being.

In his 2012 book, “Flourish: A Visionary New Understanding of Happiness and Well-Being,” he explored how well-being consists not merely of feeling happy (an emotion that can be fleeting) but of experiencing a sense of contentment in the knowledge that your life is flourishing and has meaning beyond your own pleasure.

To cultivate the components of well-being, which include engagement, good relationships, accomplishment and purpose, Dr. Seligman suggests these four exercises based on research at the Penn Positive Psychology Center, which he directs, and at other universities.

Identify Signature Strengths
Write down a story about a time when you were at your best. It doesn’t need to be a life-changing event but should have a clear beginning, middle and end. Reread it every day for a week, and each time ask yourself: “What personal strengths did I display when I was at my best?” Did you show a lot of creativity? Good judgment? Were you kind to other people? Loyal? Brave? Passionate? Forgiving? Honest?

Writing down your answers “puts you in touch with what you’re good at,” Dr. Seligman explained. The next step is to contemplate how to use these strengths to your advantage, intentionally organizing and structuring your life around them.

In a study by Dr. Seligman and colleagues published in American Psychologist, participants looked for an opportunity to deploy one of their signature strengths “in a new and different way” every day for one week.

“A week later, a month later, six months later, people had on average lower rates of depression and higher life satisfaction,” Dr. Seligman said. “Possible mechanisms could be more positive emotions. People like you more, relationships go better, life goes better.”

Find the Good
Set aside 10 minutes before you go to bed each night to write down three things that went really well that day. Next to each event answer the question, “Why did this good thing happen?”
Instead of focusing on life’s lows, which can increase the likelihood of depression, the exercise “turns your attention to the good things in life, so it changes what you attend to,” Dr. Seligman said. “Consciousness is like your tongue: It swirls around in the mouth looking for a cavity, and when it finds it, you focus on it. Imagine if your tongue went looking for a beautiful, healthy tooth. Polish it.”

Make a Gratitude Visit
Think of someone who has been especially kind to you but you have not properly thanked. Write a letter describing what he or she did and how it affected your life, and how you often remember the effort. Then arrange a meeting and read the letter aloud, in person.

“It’s common that when people do the gratitude visit both people weep out of joy,” Dr. Seligman said. Why is the experience so powerful? “It puts you in better touch with other people, with your place in the world.”

Respond Constructively
This exercise was inspired by the work of Shelly Gable, a social psychologist at the University of California, Santa Barbara, who has extensively studied marriages and other close relationships. The next time someone you care about shares good news, give what Dr. Gable calls an “active constructive response.”

That is, instead of saying something passive like, “Oh, that’s nice” or being dismissive, express genuine excitement. Prolong the discussion by, say, encouraging them to tell others or suggesting a celebratory activity.

“Love goes better, commitment increases, and from the literature, even sex gets better after that.”

Julie Scelfo is a former staff writer for The Times who writes often about human behavior.

Our miserable 21st century

Below is dense – but worth it. It is written by a conservative, but an honest one.

It is the best documentation I have found on the thesis that I wrote about last year: that the 21st century economy is a structural mess, and the mess is a non-partisan one!

My basic contention is really simple:

9/11 diverted us from this issue, and then …
we compounded the diversion with two idiotic wars, and then …
we compounded the diversion further with an idiotic, devastating recession, and then …
we started to stabilize, which is why President Obama goes to the head of the class, and then …
we built a three-ring circus, and elected a clown as the ringmaster.

While we watch this three-ring circus in Washington, no one is paying attention to this structural problem in the economy … so we are wasting time when we should be tackling the central issue of our time. It’s a really complicated one, and there are no easy answers (sorry, Trump and Bernie Sanders).

PUT YOUR POLITICAL ARTILLERY DOWN AND READ ON …..

=======BEGIN=============

CREDIT: https://www.commentarymagazine.com/articles/our-miserable-21st-century/

Our Miserable 21st Century
From work to income to health to social mobility, the year 2000 marked the beginning of what has become a distressing era for the United States
NICHOLAS N. EBERSTADT / FEB. 15, 2017

On the morning of November 9, 2016, America’s elite—its talking and deciding classes—woke up to a country they did not know. To most privileged and well-educated Americans, especially those living in its bicoastal bastions, the election of Donald Trump had been a thing almost impossible even to imagine. What sort of country would go and elect someone like Trump as president? Certainly not one they were familiar with, or understood anything about.

Whatever else it may or may not have accomplished, the 2016 election was a sort of shock therapy for Americans living within what Charles Murray famously termed “the bubble” (the protective barrier of prosperity and self-selected associations that increasingly shield our best and brightest from contact with the rest of their society). The very fact of Trump’s election served as a truth broadcast about a reality that could no longer be denied: Things out there in America are a whole lot different from what you thought.

Yes, things are very different indeed these days in the “real America” outside the bubble. In fact, things have been going badly wrong in America since the beginning of the 21st century.

It turns out that the year 2000 marks a grim historical milestone of sorts for our nation. For whatever reasons, the Great American Escalator, which had lifted successive generations of Americans to ever higher standards of living and levels of social well-being, broke down around then—and broke down very badly.

The warning lights have been flashing, and the klaxons sounding, for more than a decade and a half. But our pundits and prognosticators and professors and policymakers, ensconced as they generally are deep within the bubble, were for the most part too distant from the distress of the general population to see or hear it. (So much for the vaunted “information era” and “big-data revolution.”) Now that those signals are no longer possible to ignore, it is high time for experts and intellectuals to reacquaint themselves with the country in which they live and to begin the task of describing what has befallen the country in which we have lived since the dawn of the new century.

II
Consider the condition of the American economy. In some circles people still widely believe, as one recent New York Times business-section article cluelessly insisted before the inauguration, that “Mr. Trump will inherit an economy that is fundamentally solid.” But this is patent nonsense. By now it should be painfully obvious that the U.S. economy has been in the grip of deep dysfunction since the dawn of the new century. And in retrospect, it should also be apparent that America’s strange new economic maladies were almost perfectly designed to set the stage for a populist storm.

Ever since 2000, basic indicators have offered oddly inconsistent readings on America’s economic performance and prospects. It is curious and highly uncharacteristic to find such measures so very far out of alignment with one another. We are witnessing an ominous and growing divergence between three trends that should ordinarily move in tandem: wealth, output, and employment. Depending upon which of these three indicators you choose, America looks to be heading up, down, or more or less nowhere.
From the standpoint of wealth creation, the 21st century is off to a roaring start. By this yardstick, it looks as if Americans have never had it so good and as if the future is full of promise. Between early 2000 and late 2016, the estimated net worth of American households and nonprofit institutions more than doubled, from $44 trillion to $90 trillion. (SEE FIGURE 1.)

Although that wealth is not evenly distributed, it is still a fantastic sum of money—an average of over a million dollars for every notional family of four. This upsurge of wealth took place despite the crash of 2008—indeed, private wealth holdings are over $20 trillion higher now than they were at their pre-crash apogee. The value of American real-estate assets is near or at all-time highs, and America’s businesses appear to be thriving. Even before the “Trump rally” of late 2016 and early 2017, U.S. equities markets were hitting new highs—and since stock prices are strongly shaped by expectations of future profits, investors evidently are counting on the continuation of the current happy days for U.S. asset holders for some time to come.
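As a rough check on the “over a million dollars for every notional family of four” figure (my own sketch; the population number is an approximation, not from the article):

# Back-of-envelope: aggregate net worth per notional family of four.
net_worth = 90e12            # ~$90 trillion, late 2016 (article's figure)
population = 323e6           # ~323 million people (my approximation)
families_of_four = population / 4

print(round(net_worth / families_of_four))   # ~1,110,000 dollars per notional family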

A rather less cheering picture, though, emerges if we look instead at real trends for the macro-economy. Here, performance since the start of the century might charitably be described as mediocre, and prospects today are no better than guarded.

The recovery from the crash of 2008—which unleashed the worst recession since the Great Depression—has been singularly slow and weak. According to the Bureau of Economic Analysis (BEA), it took nearly four years for America’s gross domestic product (GDP) to re-attain its late 2007 level. As of late 2016, total value added to the U.S. economy was just 12 percent higher than in 2007. (SEE FIGURE 2.) The situation is even more sobering if we consider per capita growth. It took America six and a half years—until mid-2014—to get back to its late 2007 per capita production levels. And in late 2016, per capita output was just 4 percent higher than in late 2007—nine years earlier. By this reckoning, the American economy looks to have suffered something close to a lost decade.

But there was clearly trouble brewing in America’s macro-economy well before the 2008 crash, too. Between late 2000 and late 2007, per capita GDP growth averaged less than 1.5 percent per annum. That compares with the nation’s long-term postwar 1948–2000 per capita growth rate of almost 2.3 percent, which in turn can be compared to the “snap back” tempo of 1.1 percent per annum since per capita GDP bottomed out in 2009. Between 2000 and 2016, per capita growth in America has averaged less than 1 percent a year. To state it plainly: With postwar, pre-21st-century rates for the years 2000–2016, per capita GDP in America would be more than 20 percent higher than it is today.
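A compound-growth sketch of that counterfactual (my own arithmetic; the growth rates are the article's figures, with 1 percent standing in for "less than 1 percent"):

# Per capita GDP, 2000-2016: actual ~1%/yr vs. the postwar 2.3%/yr average.
years = 16
actual_rate = 0.01           # "less than 1 percent a year" (upper bound)
postwar_rate = 0.023         # ~2.3 percent a year, 1948-2000 average

gap = (1 + postwar_rate) ** years / (1 + actual_rate) ** years - 1
print(round(gap * 100))      # ~23, i.e. "more than 20 percent higher"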

The reasons for America’s newly fitful and halting macroeconomic performance are still a puzzlement to economists and a subject of considerable contention and debate.1 Economists are generally in consensus, however, in one area: They have begun redefining the growth potential of the U.S. economy downwards. The U.S. Congressional Budget Office (CBO), for example, suggests that the “potential growth” rate for the U.S. economy at full employment of factors of production has now dropped below 1.7 percent a year, implying a sustainable long-term annual per capita economic growth rate for America today of well under 1 percent.

Then there is the employment situation. If 21st-century America’s GDP trends have been disappointing, labor-force trends have been utterly dismal. Work rates have fallen off a cliff since the year 2000 and are at their lowest levels in decades. We can see this by looking at the estimates by the Bureau of Labor Statistics (BLS) for the civilian employment rate, the jobs-to-population ratio for adult civilian men and women. (SEE FIGURE 3.) Between early 2000 and late 2016, America’s overall work rate for Americans age 20 and older underwent a drastic decline. It plunged by almost 5 percentage points (from 64.6 to 59.7). Unless you are a labor economist, you may not appreciate just how severe a falloff in employment such numbers attest to. Postwar America never experienced anything comparable.

From peak to trough, the collapse in work rates for U.S. adults between 2008 and 2010 was roughly twice the amplitude of what had previously been the country’s worst postwar recession, back in the early 1980s. In that previous steep recession, it took America five years to re-attain the adult work rates recorded at the start of 1980. This time, the U.S. job market has as yet, in early 2017, scarcely begun to claw its way back up to the work rates of 2007—much less back to the work rates from early 2000.

As may be seen in Figure 3, U.S. adult work rates never recovered entirely from the recession of 2001—much less the crash of ’08. And the work rates being measured here include people who are engaged in any paid employment—any job, at any wage, for any number of hours of work at all.

On Wall Street and in some parts of Washington these days, one hears that America has gotten back to “near full employment.” For Americans outside the bubble, such talk must seem nonsensical. It is true that the oft-cited “civilian unemployment rate” looked pretty good by the end of the Obama era—in December 2016, it was down to 4.7 percent, about the same as it had been back in 1965, at a time of genuine full employment. The problem here is that the unemployment rate only tracks joblessness for those still in the labor force; it takes no account of workforce dropouts. Alas, the exodus out of the workforce has been the big labor-market story for America’s new century. (At this writing, for every unemployed American man between 25 and 55 years of age, there are another three who are neither working nor looking for work.) Thus the “unemployment rate” increasingly looks like an antique index devised for some earlier and increasingly distant war: the economic equivalent of a musket inventory or a cavalry count.

By the criterion of adult work rates, by contrast, employment conditions in America remain remarkably bleak. From late 2009 through early 2014, the country’s work rates more or less flatlined. So far as can be told, this is the only “recovery” in U.S. economic history in which that basic labor-market indicator almost completely failed to respond.

Since 2014, there has finally been a measure of improvement in the work rate—but it would be unwise to exaggerate the dimensions of that turnaround. As of late 2016, the adult work rate in America was still at its lowest level in more than 30 years. To put things another way: If our nation’s work rate today were back up to its start-of-the-century highs, well over 10 million more Americans would currently have paying jobs.
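A rough translation of those work-rate percentage points into jobs (my own sketch; the adult-population figure is an approximation, not from the article):

# A ~5-point drop in the 20-and-older work rate, expressed as missing jobs.
adult_population = 240e6     # approx. U.S. civilian population age 20+ (my estimate)
rate_2000 = 0.646            # early-2000 work rate (article's figure)
rate_2016 = 0.597            # late-2016 work rate (article's figure)

missing_jobs = (rate_2000 - rate_2016) * adult_population
print(round(missing_jobs / 1e6, 1))   # ~11.8 million, i.e. "well over 10 million" jobs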

There is no way to sugarcoat these awful numbers. They are not a statistical artifact that can be explained away by population aging, or by increased educational enrollment for adult students, or by any other genuine change in contemporary American society. The plain fact is that 21st-century America has witnessed a dreadful collapse of work.
For an apples-to-apples look at America’s 21st-century jobs problem, we can focus on the 25–54 population—known to labor economists for self-evident reasons as the “prime working age” group. For this key labor-force cohort, work rates in late 2016 were down almost 4 percentage points from their year-2000 highs. That is a jobs gap approaching 5 million for this group alone.

It is not only that work rates for prime-age males have fallen since the year 2000—they have, but the collapse of work for American men is a tale that goes back at least half a century. (I wrote a short book last year about this sad saga.2) What is perhaps more startling is the unexpected and largely unnoticed fall-off in work rates for prime-age women. In the U.S. and all other Western societies, postwar labor markets underwent an epochal transformation. After World War II, work rates for prime women surged, and continued to rise—until the year 2000. Since then, they too have declined. Current work rates for prime-age women are back to where they were a generation ago, in the late 1980s. The 21st-century U.S. economy has been brutal for male and female laborers alike—and the wreckage in the labor market has been sufficiently powerful to cancel, and even reverse, one of our society’s most distinctive postwar trends: the rise of paid work for women outside the household.

In our era of no more than indifferent economic growth, 21st–century America has somehow managed to produce markedly more wealth for its wealthholders even as it provided markedly less work for its workers. And trends for paid hours of work look even worse than the work rates themselves. Between 2000 and 2015, according to the BEA, total paid hours of work in America increased by just 4 percent (as against a 35 percent increase for 1985–2000, the 15-year period immediately preceding this one). Over the 2000–2015 period, however, the adult civilian population rose by almost 18 percent—meaning that paid hours of work per adult civilian have plummeted by a shocking 12 percent thus far in our new American century.
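The 12 percent figure follows directly from the two growth rates the article cites (my sketch of the arithmetic):

# Paid hours per adult civilian, 2000-2015: total hours +4%, adult population +18%.
hours_growth = 1.04
population_growth = 1.18

change = hours_growth / population_growth - 1
print(round(change * 100, 1))   # ~ -11.9, i.e. roughly the 12 percent decline cited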

This is the terrible contradiction of economic life in what we might call America’s Second Gilded Age (2000—). It is a paradox that may help us understand a number of overarching features of our new century. These include the consistent findings that public trust in almost all U.S. institutions has sharply declined since 2000, even as growing majorities hold that America is “heading in the wrong direction.” It provides an immediate answer to why overwhelming majorities of respondents in public-opinion surveys continue to tell pollsters, year after year, that our ever-richer America is still stuck in the middle of a recession. The mounting economic woes of the “little people” may not have been generally recognized by those inside the bubble, or even by many bubble inhabitants who claimed to be economic specialists—but they proved to be potent fuel for the populist fire that raged through American politics in 2016.

III
So general economic conditions for many ordinary Americans—not least of these, Americans who did not fit within the academy’s designated victim classes—have been rather more insecure than those within the comfort of the bubble understood. But the anxiety, dissatisfaction, anger, and despair that range within our borders today are not wholly a reaction to the way our economy is misfiring. On the nonmaterial front, it is likewise clear that many things in our society are going wrong and yet seem beyond our powers to correct.

Some of these gnawing problems are by no means new: A number of them (such as family breakdown) can be traced back at least to the 1960s, while others are arguably as old as modernity itself (anomie and isolation in big anonymous communities, secularization and the decline of faith). But a number have roared down upon us by surprise since the turn of the century—and others have redoubled with fearsome new intensity since roughly the year 2000.

American health conditions seem to have taken a seriously wrong turn in the new century. It is not just that overall health progress has been shockingly slow, despite the trillions we devote to medical services each year. (Which “Cold War babies” among us would have predicted we’d live to see the day when life expectancy in East Germany was higher than in the United States, as is the case today?)

Alas, the problem is not just slowdowns in health progress—there also appears to have been positive retrogression for broad and heretofore seemingly untroubled segments of the national population. A short but electrifying 2015 paper by Anne Case and Nobel Economics Laureate Angus Deaton talked about a mortality trend that had gone almost unnoticed until then: rising death rates for middle-aged U.S. whites. By Case and Deaton’s reckoning, death rates rose somewhat over the 1999–2013 period for all non-Hispanic white men and women 45–54 years of age—but they rose sharply for those with high-school degrees or less, and for this less-educated grouping most of the rise in death rates was accounted for by suicides, chronic liver cirrhosis, and poisonings (including drug overdoses).

Though some researchers, for highly technical reasons, suggested that the mortality spike might not have been quite as sharp as Case and Deaton reckoned, there is little doubt that the spike itself has taken place. Health has been deteriorating for a significant swath of white America in our new century, thanks in large part to drug and alcohol abuse. All this sounds a little too close for comfort to the story of modern Russia, with its devastating vodka- and drug-binging health setbacks. Yes: It can happen here, and it has. Welcome to our new America.

In December 2016, the Centers for Disease Control and Prevention (CDC) reported that for the first time in decades, life expectancy at birth in the United States had dropped very slightly (to 78.8 years in 2015, from 78.9 years in 2014). Though the decline was small, it was statistically meaningful—rising death rates were characteristic of males and females alike; of blacks and whites and Latinos together. (Only black women avoided mortality increases—their death levels were stagnant.) A jump in “unintentional injuries” accounted for much of the overall uptick.
It would be unwarranted to place too much portent in a single year’s mortality changes; slight annual drops in U.S. life expectancy have occasionally been registered in the past, too, followed by continued improvements. But given other developments we are witnessing in our new America, we must wonder whether the 2015 decline in life expectancy is just a blip, or the start of a new trend. We will find out soon enough. It cannot be encouraging, though, that the Human Mortality Database, an international consortium of demographers who vet national data to improve comparability between countries, has suggested that health progress in America essentially ceased in 2012—that the U.S. gained on average only about a single day of life expectancy at birth between 2012 and 2014, before the 2015 turndown.

The opioid epidemic of pain pills and heroin that has been ravaging and shortening lives from coast to coast is a new plague for our new century. The terrifying novelty of this particular drug epidemic, of course, is that it has gone (so to speak) “mainstream” this time, effecting breakout from disadvantaged minority communities to Main Street White America. By 2013, according to a 2015 report by the Drug Enforcement Administration, more Americans died from drug overdoses (largely but not wholly opioid abuse) than from either traffic fatalities or guns. The dimensions of the opioid epidemic in the real America are still not fully appreciated within the bubble, where drug use tends to be more carefully limited and recreational. In Dreamland, his harrowing and magisterial account of modern America’s opioid explosion, the journalist Sam Quinones notes in passing that “in one three-month period” just a few years ago, according to the Ohio Department of Health, “fully 11 percent of all Ohioans were prescribed opiates.” And of course many Americans self-medicate with licit or illicit painkillers without doctors’ orders.

In the fall of 2016, Alan Krueger, former chairman of the President’s Council of Economic Advisers, released a study that further refined the picture of the real existing opioid epidemic in America: According to his work, nearly half of all prime working-age male labor-force dropouts—an army now totaling roughly 7 million men—currently take pain medication on a daily basis.

We already knew from other sources (such as BLS “time use” surveys) that the overwhelming majority of the prime-age men in this un-working army generally don’t “do civil society” (charitable work, religious activities, volunteering), or for that matter much in the way of child care or help for others in the home either, despite the abundance of time on their hands. Their routine, instead, typically centers on watching—watching TV, DVDs, Internet, hand-held devices, etc.—and indeed watching for an average of 2,000 hours a year, as if it were a full-time job. But Krueger’s study adds a poignant and immensely sad detail to this portrait of daily life in 21st-century America: In our mind’s eye we can now picture many millions of un-working men in the prime of life, out of work and not looking for jobs, sitting in front of screens—stoned.

But how did so many millions of un-working men, whose incomes are limited, manage en masse to afford a constant supply of pain medication? Oxycontin is not cheap. As Dreamland carefully explains, one main mechanism today has been the welfare state: more specifically, Medicaid, Uncle Sam’s means-tested health-benefits program. Here is how it works (we are with Quinones in Portsmouth, Ohio):

[The Medicaid card] pays for medicine—whatever pills a doctor deems that the insured patient needs. Among those who receive Medicaid cards are people on state welfare or on a federal disability program known as SSI. . . . If you could get a prescription from a willing doctor—and Portsmouth had plenty of them—Medicaid health-insurance cards paid for that prescription every month. For a three-dollar Medicaid co-pay, therefore, addicts got pills priced at thousands of dollars, with the difference paid for by U.S. and state taxpayers. A user could turn around and sell those pills, obtained for that three-dollar co-pay, for as much as ten thousand dollars on the street.

In 21st-century America, “dependence on government” has thus come to take on an entirely new meaning.

You may now wish to ask: What share of prime-working-age men these days are enrolled in Medicaid? According to the Census Bureau’s SIPP survey (Survey of Income and Program Participation), as of 2013, over one-fifth (21 percent) of all civilian men between 25 and 55 years of age were Medicaid beneficiaries. For prime-age people not in the labor force, the share was over half (53 percent). And for un-working Anglos (non-Hispanic white men not in the labor force) of prime working age, the share enrolled in Medicaid was 48 percent.

By the way: Of the entire un-working prime-age male Anglo population in 2013, nearly three-fifths (57 percent) were reportedly collecting disability benefits from one or more government disability programs. Disability checks and means-tested benefits cannot support a lavish lifestyle. But they can offer a permanent alternative to paid employment, and for growing numbers of American men, they do. The rise of these programs has coincided with the death of work for larger and larger numbers of American men not yet of retirement age. We cannot say that these programs caused the death of work for millions upon millions of younger men: What is incontrovertible, however, is that they have financed it—just as Medicaid inadvertently helped finance America’s immense and increasing appetite for opioids in our new century.

It is intriguing to note that America’s nationwide opioid epidemic has not been accompanied by a nationwide crime wave (excepting of course the apparent explosion of illicit heroin use). Just the opposite: As best can be told, national victimization rates for violent crimes and property crimes have both reportedly dropped by about two-thirds over the past two decades.3 The drop in crime over the past generation has done great things for the general quality of life in much of America. There is one complication from this drama, however, that inhabitants of the bubble may not be aware of, even though it is all too well known to a great many residents of the real America. This is the extraordinary expansion of what some have termed America’s “criminal class”—the population sentenced to prison or convicted of felony offenses—in recent decades. This trend did not begin in our century, but it has taken on breathtaking enormity since the year 2000.

Most well-informed readers know that the U.S. currently has a higher share of its populace in jail or prison than almost any other country on earth, that Barack Obama and others talk of our criminal-justice process as “mass incarceration,” and that well over 2 million men were in prison or jail in recent years.4 But only a tiny fraction of all living Americans ever convicted of a felony is actually incarcerated at this very moment. Quite the contrary: Maybe 90 percent of all sentenced felons today are out of confinement and living more or less among us. The reason: the basic arithmetic of sentencing and incarceration in America today. Correctional release and sentenced community supervision (probation and parole) guarantee a steady annual “flow” of convicted felons back into society to augment the very considerable “stock” of felons and ex-felons already there. And this “stock” is by now truly enormous.

One forthcoming demographic study by Sarah Shannon and five other researchers estimates that the cohort of current and former felons in America very nearly reached 20 million by the year 2010. If its estimates are roughly accurate, and if America’s felon population has continued to grow at more or less the same tempo traced out for the years leading up to 2010, we would expect it to surpass 23 million persons by the end of 2016 at the latest. Very rough calculations might therefore suggest that at this writing, America’s population of non-institutionalized adults with a felony conviction somewhere in their past has almost certainly broken the 20 million mark by the end of 2016. A little more rough arithmetic suggests that about 17 million men in our general population have a felony conviction somewhere in their CV. That works out to one of every eight adult males in America today.

We have to use rough estimates here, rather than precise official numbers, because the government does not collect any data at all on the size or socioeconomic circumstances of this population of 20 million, and never has. Amazing as this may sound and scandalous though it may be, America has, at least to date, effectively banished this huge group—a group roughly twice the total size of our illegal-immigrant population and an adult population larger than that in any state but California—to a near-total and seemingly unending statistical invisibility. Our ex-cons are, so to speak, statistical outcasts who live in a darkness our polity does not care enough to illuminate—beyond the scope or interest of public policy, unless and until they next run afoul of the law.

Thus we cannot describe with any precision or certainty what has become of those who make up our “criminal class” after their (latest) sentencing or release. In the most stylized terms, however, we might guess that their odds in the real America are not all that favorable. And when we consider some of the other trends we have already mentioned—employment, health, addiction, welfare dependence—we can see the emergence of a malign new nationwide undertow, pulling downward against social mobility.

Social mobility has always been the jewel in the crown of the American mythos and ethos. The idea (not without a measure of truth to back it up) was that people in America are free to achieve according to their merit and their grit—unlike in other places, where they are trapped by barriers of class or the misfortune of misrule. Nearly two decades into our new century, there are unmistakable signs that America’s fabled social mobility is in trouble—perhaps even in serious trouble.

Consider the following facts. First, according to the Census Bureau, geographical mobility in America has been on the decline for three decades, and in 2016 the annual movement of households from one location to the next was reportedly at an all-time (postwar) low. Second, as a study by three Federal Reserve economists and a Notre Dame colleague demonstrated last year, “labor market fluidity”—the churning between jobs that among other things allows people to get ahead—has been on the decline in the American labor market for decades, with no sign as yet of a turnaround. Finally, and not least important, a December 2016 report by the “Equality of Opportunity Project,” a team led by the formidable Stanford economist Raj Chetty, calculated that the odds of a 30-year-old’s earning more than his parents at the same age were now just 51 percent, down from 86 percent 40 years ago. Other researchers who have examined the same data argue that the odds may not be quite as low as the Chetty team concludes, but agree that the chances of surpassing one’s parents’ real income have been on the downswing and are probably lower now than ever before in postwar America.

Thus the bittersweet reality of life for real Americans in the early 21st century: Even though the American economy still remains the world’s unrivaled engine of wealth generation, those outside the bubble may have less of a shot at the American Dream than has been the case for decades, maybe generations—possibly even since the Great Depression.

IV
The funny thing is, people inside the bubble are forever talking about “economic inequality,” that wonderful seminar construct, and forever virtue-signaling about how personally opposed they are to it. By contrast, “economic insecurity” is akin to a phrase from an unknown language. But if we were somehow to find a “Google Translate” function for communicating from real America into the bubble, an important message might be conveyed:

The abstraction of “inequality” doesn’t matter a lot to ordinary Americans. The reality of economic insecurity does. The Great American Escalator is broken—and it badly needs to be fixed.

With the election of 2016, Americans within the bubble finally learned that the 21st century has gotten off to a very bad start in America. Welcome to reality. We have a lot of work to do together to turn this around.

1 Some economists suggest the reason has to do with the unusual nature of the Great Recession: that downturns born of major financial crises intrinsically require longer adjustment and correction periods than the more familiar, ordinary business-cycle downturn. Others have proposed theories to explain why the U.S. economy may instead have downshifted to a more tepid tempo in the Bush-Obama era. One such theory holds that the pace of productivity is dropping because the scale of recent technological innovation is unrepeatable. There is also a “secular stagnation” hypothesis, surmising we have entered into an age of very low “natural real interest rates” consonant with significantly reduced demand for investment. What is incontestable is that the 10-year moving average for per capita economic growth is lower for America today than at any time since the Korean War—and that the slowdown in growth commenced in the decade before the 2008 crash. (It is also possible that the anemic status of the U.S. macro-economy is being exaggerated by measurement issues—productivity improvements from information technology, for example, have been oddly elusive in our officially reported national output—but few today would suggest that such concealed gains would totally transform our view of the real economy’s true performance.)
2 Nicholas Eberstadt, Men Without Work: America’s Invisible Crisis (Templeton Press, 2016)
3 This is not to ignore the gruesome exceptions—places like Chicago and Baltimore—or to neglect the risk that crime may make a more general comeback: It is simply to acknowledge one of the bright trends for America in the new century.
4 In 2013, roughly 2.3 million men were behind bars according to the Bureau of Justice Statistics.

One could be forgiven for wondering what Kellyanne Conway, a close adviser to President Trump, was thinking recently when she turned the White House briefing room into the set of the Home Shopping Network. “Go buy Ivanka’s stuff!” she told Fox News viewers during an interview, referring to the clothing and accessories line of the president’s daughter. It’s not clear if her cheerleading led to any spike in sales, but it did lead to calls for an investigation into whether she violated federal ethics rules, and prompted the White House to later state that it had “counseled” Conway about her behavior.

To understand what provoked Conway’s on-air marketing campaign, look no further than the ongoing boycotts targeting all things Trump. This latest manifestation of the passion to impose financial harm to make a political point has taken things in a new and odd direction. Once, boycotts were serious things, requiring serious commitment and real sacrifice. There were boycotts by aggrieved workers, such as the United Farm Workers, against their employers; boycotts by civil-rights activists and religious groups; and boycotts of goods produced by nations like apartheid-era South Africa. Many of these efforts, sustained over years by committed cadres of activists, successfully pressured businesses and governments to change.

Since Trump’s election, the boycott has become less an expression of long-term moral and practical opposition and more an expression of the left’s collective id. As Harvard Business School professor Michael Norton told the Atlantic recently, “Increasingly, the way we express our political opinions is through buying or not buying instead of voting or not voting.” And evidently the way some people express political opinions when someone they don’t like is elected is to launch an endless stream of virtue-signaling boycotts. Democratic politicians ostentatiously boycotted Trump’s inauguration. New Balance sneaker owners vowed to boycott the company and filmed themselves torching their shoes after a company spokesman tweeted praise for Trump. Trump detractors called for a boycott of L.L. Bean after one of its board members was discovered to have (gasp!) given a personal contribution to a pro-Trump PAC.

By their nature, boycotts are a form of proxy warfare, tools wielded by consumers who want to send a message to a corporation or organization about their displeasure with specific practices.

Trump-era boycotts, however, merely seem to be a way to channel an overwhelming yet vague feeling of political frustration. Take the “Grab Your Wallet” campaign, whose mission, described in humblebragging detail on its website, is as follows: “Since its first humble incarnation as a screenshot on October 11, the #GrabYourWallet boycott list has grown as a central resource for understanding how our own consumer purchases have inadvertently supported the political rise of the Trump family.”

So this boycott isn’t against a specific business or industry; it’s a protest against one man and his children, with trickle-down effects for anyone who does business with them. Grab Your Wallet doesn’t just boycott Trump-branded hotels and golf courses; the group targets businesses such as Bed Bath & Beyond, for example, because it carries Ivanka Trump diaper bags. Even QVC and the Carnival Cruise corporation are targeted for boycott because they advertise on Celebrity Apprentice, which supposedly “further enriches Trump.”

Grab Your Wallet has received support from “notable figures” such as “Don Cheadle, Greg Louganis, Lucy Lawless, Roseanne Cash, Neko Case, Joyce Carol Oates, Robert Reich, Pam Grier, and Ben Cohen (of Ben & Jerry’s),” according to the group’s website. This rogues’ gallery of celebrity boycotters has been joined by enthusiastic hashtag activists on Twitter who post remarks such as, “Perhaps fed govt will buy all Ivanka merch & force prisoners & detainees in coming internment camps 2 wear it” and “Forced to #DressLikeaWoman by a sexist boss? #GrabYourWallet and buy a nice FU pantsuit at Trump-free shops.” There’s even a website, dontpaytrump.com, which offers a free plug-in extension for your Web browser. It promises a “simple Trump boycott extension that makes it easy to be a conscious consumer and keep your money out of Trump’s tiny hands.”

Many of the companies targeted for boycott—Bed Bath & Beyond, QVC, TJ Maxx, Amazon—are the kind of retailers that carry moderately priced merchandise that working- and middle-class families can afford. But the list of Grab Your Wallet–approved alternatives for shopping consists of places like Bergdorf’s and Barney’s. These are hardly accessible choices for the TJ Maxx customer. Indeed, there is more than a whiff of quasi-racist elitism in the self-congratulatory tweets posted by Grab Your Wallet supporters, such as this response to news that Nordstrom is no longer planning to carry Ivanka’s shoe line: “Soon we’ll see Ivanka shoes at Dollar Store, next to Jalapeno Windex and off-brand batteries.”

If Grab Your Wallet is really about “flexing of consumer power in favor of a more respectful, inclusive society,” then it has some work to do.

And then there are the conveniently malleable ethics of the anti-Trump boycott brigade. A small number of affordable retailers like Old Navy made the Grab Your Wallet cut for “approved” alternatives for shopping. But just a few years ago, a progressive website described in detail the “living hell of a Bangladeshi sweatshop” that manufactures Old Navy clothing. Evidently progressives can now sleep peacefully at night knowing large corporations like Old Navy profit from young Bangladeshis making 20 cents an hour and working 17-hour days churning out cheap cargo pants—as long as they don’t bear a Trump label.

In truth, it matters little if Ivanka’s fashion business goes bust. It was always just a branding game anyway. The world will go on in the absence of Ivanka-named suede ankle booties. And in some sense the rash of anti-Trump boycotts is just what Trump, who frequently calls for boycotts of media outlets such as Rolling Stone and retailers like Macy’s, deserves.

But the left’s boycott braggadocio might prove short-lived. Nordstrom denied that it dropped Ivanka’s line of apparel and shoes because of pressure from the Grab Your Wallet campaign; it blamed lagging sales. And the boycotters’ tone of moral superiority—like the ridiculous posturing of the anti-Trump left’s self-flattering designation, “the resistance”—won’t endear them to the Trump voters they must convert if they hope to gain ground in the midterm elections.

As for inclusiveness, as one contributor to Psychology Today noted, the typical boycotter, “especially [in] consumer and ecological boycotts,” is a young, well-educated, politically left woman, a profile that somewhat undermines the idea of boycotts as a weapon of the weak and oppressed.

Self-indulgent protests and angry boycotts are no doubt cathartic for their participants (a 2016 study in the Journal of Consumer Affairs cited psychological research that found “by venting their frustrations, consumers can diminish their negative psychological states and, as a result, experience relief”). But such protests are not always ultimately catalytic. As researchers noted in a study posted recently on the Social Science Research Network, protesters face what they call “the activists’ dilemma,” which occurs when “tactics that raise awareness also tend to reduce popular support.” As the study found, “while extreme tactics may succeed in attracting attention, they typically reduce popular public support for the movement by eroding bystanders’ identification with the movement, ultimately deterring bystanders from supporting the cause or becoming activists themselves.”

The progressive left should be thoughtful about the reality of such protest fatigue. Writing in the Guardian, Jamie Peck recently enthused: “Of course, boycotts alone will not stop Trumpism. Effective resistance to authoritarianism requires more disruptive actions than not buying certain products . . . . But if there’s anything the past few weeks have taught us, it’s that resistance must take as many forms as possible, and it’s possible to call attention to the ravages of neoliberalism while simultaneously allying with any and all takers against the immediate dangers posed by our impetuous orange president.”

Boycotts are supposed to be about accountability. But accountability is a two-way street. The motives and tactics of the boycotters themselves are of the utmost importance. In his book about consumer boycotts, scholar Monroe Friedman advises that successful ones depend on a “rationale” that is “simple, straightforward, and appear[s] legitimate.” Whatever Trump’s flaws (and they are legion), by “going low” with scattershot boycotts, the left undermines its own legitimacy—and its claims to the moral high ground of “resistance” in the process.

========END===============

UHVDC and China

Credit: Economist Article about UHVDC and China

A greener grid
China’s embrace of a new electricity-transmission technology holds lessons for others
The case for high-voltage direct-current connectors
Jan 14th 2017

YOU cannot negotiate with nature. From the offshore wind farms of the North Sea to the solar panels glittering in the Atacama desert, renewable energy is often generated in places far from the cities and industrial centres that consume it. To boost renewables and drive down carbon-dioxide emissions, a way must be found to send energy over long distances efficiently.

The technology already exists (see article). Most electricity is transmitted today as alternating current (AC), which works well over short and medium distances. But sending power efficiently over long distances requires very high voltages, and at those voltages and distances AC becomes lossy and hard to manage. Ultra-high-voltage direct-current (UHVDC) connectors are better suited to such spans. These high-capacity links not only make the grid greener, but also make it more stable by balancing supply. The same UHVDC links that send power from distant hydroelectric plants, say, can be run in reverse when their output is not needed, pumping water back above the turbines.
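
To see why very high voltages matter over such spans, here is a minimal sketch of the resistive-loss arithmetic in Python. The power level, line length, and resistance per kilometre are illustrative assumptions rather than figures from the article, and the model ignores converter losses and treats the line as a single ideal DC conductor:

# For a fixed power transfer, the current falls as the voltage rises (I = P / V),
# so resistive loss falls with the square of the voltage (P_loss = I^2 * R).
def loss_fraction(power_w, voltage_v, ohm_per_km, length_km):
    """Fraction of the transmitted power dissipated as heat in the line."""
    current_a = power_w / voltage_v
    resistance_ohm = ohm_per_km * length_km
    return current_a ** 2 * resistance_ohm / power_w

# Hypothetical 6 GW transfer over a 3,000 km line at 0.002 ohm per km (all assumed):
for kilovolts in (500, 800, 1100):
    share = loss_fraction(6e9, kilovolts * 1e3, 0.002, 3000)
    print(f"{kilovolts:>5} kV: {share:.1%} of the power lost to resistance")

Because losses scale with the inverse square of the voltage, tripling the voltage cuts the resistive loss by roughly a factor of nine, which is the basic reason very long lines are pushed to ever higher voltages.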

Boosters of UHVDC lines envisage a supergrid capable of moving energy around the planet. That is wildly premature. But one country has grasped the potential of these high-capacity links. State Grid, China’s state-owned electricity utility, is halfway through a plan to spend $88bn on UHVDC lines between 2009 and 2020. It wants 23 lines in operation by 2030.

That China has gone furthest in this direction is no surprise. From railways to cities, China’s appetite for big infrastructure projects is legendary (see article). China’s deepest wells of renewable energy are remote—think of the sun-baked Gobi desert, the windswept plains of Xinjiang and the mountain ranges of Tibet where rivers drop precipitously. Concerns over pollution give the government an additional incentive to locate coal-fired plants away from population centres. But its embrace of the technology holds two big lessons for others. The first is a demonstration effect. China shows that UHVDC lines can be built on a massive scale. The largest, already under construction, will have the capacity to power Greater London almost three times over, and will span more than 3,000km.

The second lesson concerns the co-ordination problems that come with long-distance transmission. UHVDCs are as much about balancing interests as grids. The costs of construction are hefty. Utilities that already sell electricity at high prices are unlikely to welcome competition from suppliers of renewable energy; consumers in renewables-rich areas who buy electricity at low prices may balk at the idea of paying more because power is being exported elsewhere. Reconciling such interests is easier the fewer the utilities involved—and in China, State Grid has a monopoly.

That suggests it will be simpler for some countries than others to follow China’s lead. Developing economies that lack an established electricity infrastructure have an advantage. Solar farms on Africa’s plains and hydroplants on its powerful rivers can use UHVDC lines to get energy to growing cities. India has two lines on the drawing-board, and should have more.

Things are more complicated in the rich world. Europe’s utilities work pretty well together but a cross-border UHVDC grid will require a harmonised regulatory framework. America is the biggest anomaly. It is a continental-sized economy with the wherewithal to finance UHVDCs. It is also horribly fragmented. There are 3,000 utilities, each focused on supplying power to its own customers. Consumers a few states away are not a priority, no matter how much sense it might make to send them electricity. A scheme to connect the three regional grids in America is stuck. The only way that America will create a green national grid will be if the federal government throws its weight behind it.

Live wire
Building a UHVDC network does not solve every energy problem. Security of supply remains an issue, even within national borders: any attacker who wants to disrupt the electricity supply to China’s east coast will soon have a 3,000km-long cable to strike. Other routes to a cleaner grid are possible, such as distributed solar power and battery storage. But to bring about a zero-carbon grid, UHVDC lines will play a role. China has its foot on the gas. Others should follow.
This article appeared in the Leaders section of the print edition under the headline “A greener grid”