Scourge of Opioids

CREDIT: https://www.nationalaffairs.com/publications/detail/taking-on-the-scourge-of-opioids

Taking On the Scourge of Opioids

Sally Satel

Summer 2017

On March 1, 2017, Maryland governor Larry Hogan declared a state of emergency. Heroin and fentanyl, a powerful synthetic opioid, had killed 1,468 Maryland residents in the first nine months of 2016, up 62% from the same period in 2015. Speaking at a command center of the Maryland Emergency Management Agency near Baltimore, the governor announced additional funding to strengthen law enforcement, prevention, and treatment services. “The reality is that this threat is rapidly escalating,” Hogan said.

And it is escalating across the country. Florida governor Rick Scott followed Hogan’s lead in May, declaring a public-health emergency after requests for help from local officials across the state. Arizona governor Doug Ducey did the same in June. In Ohio, some coroners have run out of space for the bodies of overdose victims and have to use a mobile, refrigerated morgue. In West Virginia, state burial funds have been exhausted burying overdose victims. Opioid orphans are lucky if their grandparents can raise them; if not, they are at the mercy of foster-care systems that are now overflowing with the children of addicted parents.

An estimated 2.5 million Americans abuse or are addicted to opioids — a class of highly addictive drugs that includes Percocet, Vicodin, OxyContin, and heroin. Most experts believe this is an undercount, and all agree that the casualty rate is unprecedented. At peak years in an earlier heroin epidemic, from 1973 to 1975, there were 1.5 fatalities per 100,000 Americans. In 2015, the rate was 10.4 per 100,000. In West Virginia, ground zero of the crisis, it was over 36 per 100,000. In raw numbers, more than 33,000 individuals died in 2015 — nearly equal to the number of deaths from car crashes and double the number of gun homicides. Meanwhile, the opioid-related fatalities continue to mount, having quadrupled since 1999.

The roots of the crisis can be traced to the early 1990s when physicians began to prescribe opioid painkillers more liberally. In parallel, overdose deaths from painkillers rose until about 2011. Since then, heroin and synthetic opioids have briskly driven opioid-overdose deaths; they now account for over two-thirds of victims. Synthetic opioids, such as fentanyl, are made mainly in China, shipped to Mexico, and trafficked here. Their menace cannot be overstated.

Fentanyl is 50 times as potent as heroin and can kill instantly. People have been found dead with needles dangling from their arms, the syringe barrels still partly full of fentanyl-containing liquid. One fentanyl analog, carfentanil, is a big-game tranquilizer that’s a staggering 5,000 times more powerful than heroin. This spring, “Gray Death,” a combination of heroin, fentanyl, carfentanil, and other synthetics, has pushed the bounds of lethal chemistry even further. The death rate from synthetics has increased by more than 72% over the space of a single year, from 2014 to 2015. They have transformed an already terrible problem into a true public-health emergency.

The nation has weathered drug epidemics before, but the current affliction — a new plague for a new century, in the words of Nicholas Eberstadt — is different. Today, the addicted are not inner-city minorities, though big cities are increasingly reporting problems. Instead, they are overwhelmingly white and rural, though middle- and upper-class individuals are also affected. The jarring visual of the crisis is not an urban “gang banger” but an overdosed mom slumped in the front seat of her car in a Walmart parking lot, toddler in the back.

It’s almost impossible to survey this devastating tableau and not wonder why the nation’s response has been so slow in coming. Jonathan Caulkins, a drug-policy expert at Carnegie Mellon, offers two theories. One is geography. The prescription-opioid wave crashed down earliest in fly-over states, particularly small cities and rural areas, such as West Virginia and Kentucky, without nationally important media markets. Earlier opioid (heroin) epidemics raged in urban centers, such as New York, Baltimore, Chicago, and Los Angeles.

The second of Caulkins’s plausible explanations is the absence of violence that roiled inner cities in the early 1970s, when President Richard Nixon called drug abuse “public enemy number one.” Dealers do not engage in shooting wars or other gang-related activity. As purveyors of heroin established themselves in the U.S., Mexican bosses deliberately avoided inner cities where heroin markets were dominated by violent gangs. Thanks to a “drive-through” business model perfected by traffickers and executed by discreet runners — farm boys from western Mexico looking to make quick money — heroin can be summoned via text message or cell phone and delivered, like pizza, to homes or handed off in car-to-car transactions. Sources of painkillers are low profile as well. Typically pills are obtained (or stolen) from friends or relatives, physicians, or dealers. The “dark web,” too, is a conduit for synthetics.

It’s hard to miss, too, that this time around, the drug crisis is viewed differently. Heroin users today are widely seen as suffering from an illness. And because that illness has a pale complexion, many have asked, “Where was the compassion for black people?” A racial element cannot be denied, but there are other forces at play, namely that Americans are drug-war weary and law enforcement has incarceration fatigue. It also didn’t help that, in the 1970s, officers were only loosely woven into the fabric of the inner-city minority neighborhoods that were hardest hit. Today, in the small towns where so much of the epidemic plays out, the crisis is personal. Police chiefs, officers, and local authorities will likely have at least one relative, friend, or neighbor with an opioid problem.

If there is reason for optimism in the midst of this crisis, it is that national and local politicians and even police are placing emphasis on treatment over punishment. And, without question, the nation needs considerably more funding for treatment; Congress must step up. Yet the much-touted promise of treatment — and particularly of anti-addiction medications — as a panacea has already been proven wrong. Perhaps “we can’t arrest our way out of the problem,” as officials like to say, but nor are we treating our way out of it. This is because many users reject treatment, and, if they accept it, too many drop out. Engaging drug users in treatment has turned out to be one of the biggest challenges of the epidemic — and one that needs serious attention.

The near-term forecast for this American Carnage, as journalist Christopher Caldwell calls it, is grim. What can be done?

ROOTS OF A CRISIS

In the early 1990s, campaigns for improved treatment of pain gained ground. Analgesia for pain associated with cancer and terminal illness was relatively well accepted, but doctors were leery of medicating chronic conditions, such as joint pain, back pain, and neurological conditions, lest patients become addicted. Then in 1995 the American Pain Society recommended that pain be assessed as the “fifth vital sign” along with the standard four (blood pressure, temperature, pulse, and respiratory rate). In 2001 the influential Joint Commission on Accreditation of Healthcare Organizations established standards for pain management. These standards did not mention opioids, per se, but were interpreted by many physicians as encouraging their use.

These developments had a gradual but dramatic effect on the culture of American medicine. Soon, clinicians were giving an entire month’s worth of Percocet or Lortab to patients with only minor injuries or post-surgical pain that required only a few days of opioid analgesia. Compounding the matter, pharmaceutical companies engaged in aggressive marketing to physicians.

The culture of medical practice contributed as well. Faced with draconian time pressures, a doctor who suspected that his patient was taking too many painkillers rarely had time to talk with him about it. Other time-consuming pain treatments, such as physical therapy or behavioral strategies, were, and remain, less likely to be covered by insurers. Abbreviated visits meant shortcuts, like a quick refill that may not have been warranted, while the need for addiction treatment was overlooked. In addition, clinicians were, and still are, held hostage to ubiquitous “patient-satisfaction surveys.” A poor grade mattered because Medicare and Medicaid rely on these assessments to help determine the amount of reimbursement for care. Clearly, too many incentives pushed toward prescribing painkillers, even when it went against a doctor’s better judgment.

The chief risk of liberal prescribing was not so much that the patient would become addicted — though it happens occasionally — but rather that excess medication fed the rivers of pills that were coursing through many neighborhoods. And as more painkillers began circulating, almost all of them prescribed by physicians, more opportunities arose for non-patients to obtain them, abuse them, and die. OxyContin formed a particularly notorious tributary. Available since 1996, this slow-release form of oxycodone was designed to last up to 12 hours (about six to eight hours longer than immediate-release preparations of oxycodone, such as Percocet). A sustained blood level was meant to be a therapeutic advantage for patients with unremitting pain. To achieve long action, each OxyContin tablet was loaded with a large amount of oxycodone.

Packing a large dose into a single pill presented a major unintended consequence. When it was crushed and snorted or dissolved in water and injected, OxyContin gave a clean, predictable, and enjoyable high. By 2000, reports of abuse of OxyContin began to surface in the Rust Belt — a region rife with injured coal miners who were readily prescribed OxyContin, or, as it came to be called, “hillbilly heroin.” Ohio along with Florida became the “pill mill” capitals of the nation. These mills were advertised as “pain clinics,” but were really cash-only businesses set up to sell painkillers in high volume. The mills employed shady physicians who were licensed to prescribe but knew they weren’t treating authentic patients.

Around 2010 to 2011, law enforcement began cracking down on pill mills. In 2010, OxyContin’s maker, Purdue Pharma, reformulated the pill to make it much harder to crush. In parallel, physicians began to re-examine their prescribing practices and to consider non-opioid options for chronic-pain management. More states created prescription registries so that pharmacists and doctors could detect patients who “doctor shopped” for painkillers and even forged prescriptions. (Today, all states except Missouri have such a registry.) Last year, the American Medical Association recommended that pain be removed as a “fifth vital sign” in professional medical standards.

Controlling the sources of prescription pills was completely rational. Sadly, however, it helped set the stage for a new dimension of the opioid epidemic: heroin and synthetic opioids. Heroin — cheaper and more abundant than painkillers — had flowed into the western U.S. since at least the 1990s, but trafficking east of the Mississippi and into the Rust Belt reportedly began to accelerate around the mid-2000s, a transformative episode in the history of domestic drug problems detailed in Sam Quinones’s superb book Dreamland.

The timing was darkly auspicious. As prescription painkillers became harder to get and more expensive, thanks to alterations of the OxyContin tablet, to law-enforcement efforts, and to growing physician enlightenment, a pool of individuals already primed by their experience with prescription opioids moved on to low-cost, relatively pure, and accessible heroin. Indeed, between 2008 and 2010, about three-fourths of people who had used heroin in the past year reported non-medical use of painkillers — likely obtained outside the health-care system — before initiating heroin use.

The progression from pills to heroin was abetted by the nature of addiction itself. As users became increasingly tolerant to painkillers, they needed larger quantities of opioids or more efficient ways to use them in order to achieve the same effect. Moving from oral consumption to injection allowed this. Once a person is already injecting pills, moving to heroin, despite its stigma, doesn’t seem that big a step. The march to heroin is not inexorable, of course. Yet in economically and socially depleted environments where drug use is normalized, heroin is abundant, and treatment is scarce, widespread addiction seems almost inevitable.

The last five years or so have witnessed a massive influx of powder heroin to major cities such as New York, Detroit, and Chicago. From there, traffickers direct shipments to other urban areas, and these supplies are, in turn, distributed further to rural and suburban areas. It is the powdered form of heroin that is laced with synthetics, such as fentanyl. Most victims of synthetic opioids don’t even know they are taking them. Drug traffickers mix the fentanyl with heroin or press it into pill form that they sell as OxyContin.

Yet, there are reports of addicts now knowingly seeking fentanyl as their tolerance to heroin has grown. Whereas heroin requires poppies, which take time to cultivate, synthetics can be made in a lab, so the supply chain can be downsized. And because the synthetics are so strong, small volumes can be trafficked more efficiently and more profitably. What’s more, laboratories can easily stay one step ahead of the Drug Enforcement Administration by modifying fentanyl into analogs that are more potent, less detectable, or both. Synthetics are also far more deadly: In some regions of the country, roughly two-thirds of deaths from opioids can now be traced to heroin, including heroin that medical examiners either suspect or are certain was laced with fentanyl.

THE BASICS

Terminology is important in discussions about drug use. A 2016 Surgeon General report on addiction, “Facing Addiction in America,” defines “misuse” of a substance as consumption that “causes harm to the user and/or to those around them.” Elsewhere, however, the term has been used to refer to consumption for a purpose not consistent with medical or legal guidelines. Thus, misuse would apply equally to the person who takes an extra pill now and then from his own prescribed supply of Percocet to reduce stress and to the person who buys it from a dealer and gets high several times a week. The term “abuse” refers to a consistent pattern of use causing harm, but “misuse,” with its protean definitions, has unhelpfully taken its place in many discussions of the current crisis. In the Surgeon General report, the clinical term “substance use disorder” refers to functionally significant impairment caused by substance use. Finally, “addiction,” while not considered a clinical term, denotes a severe form of substance-use disorder — in other words, compulsive use of a substance with difficulty stopping despite negative consequences.

Much of the conventional wisdom surrounding the opioid crisis holds that virtually anyone is at risk for opioid abuse or addiction — say, the average dental patient who receives some Vicodin for a root canal. This is inaccurate, but unsurprising. Exaggerating risk is a common strategy in public-health messaging: The idea is to garner attention and funding by democratizing affliction and universalizing vulnerability. But this kind of glossing is misleading at best, counterproductive at worst. To prevent and ameliorate problems, we need to know who is truly at risk to target resources where they are most needed.

In truth, the vast majority of people prescribed medication for pain do not misuse it, even those given high doses. A new study in the Annals of Surgery, for example, found that almost three-fourths of all opioid painkillers prescribed by surgeons for five common outpatient procedures go unused. In 2014, 81 million people received at least one prescription for an opioid pain reliever, according to a study in the American Journal of Preventive Medicine; yet during the same year, the National Survey on Drug Use and Health reported that only 1.9 million people, approximately 2%, met the criteria for prescription pain-reliever abuse or dependence (a technical term denoting addiction). Those who abuse their prescription opioids are patients who have been prescribed them for over six months and tend to suffer from concomitant psychiatric conditions, usually a mood or anxiety disorder, or have had prior problems with alcohol or drugs.

Notably, the majority of people who develop problems with painkillers are not individuals for whom they have been legitimately prescribed — nor are opioids the first drug they have misused. Such non-patients procure their pills from friends or family, often helping themselves to the amply stocked medicine chests of unsuspecting relatives suffering from cancer or chronic pain. They may scam doctors, forge prescriptions, or doctor shop. The heaviest users are apt to rely on dealers. Some of these individuals make the transition to heroin, but it is a small fraction. (Still, the death toll is striking given the lethality of synthetic opioids.) One study from the Substance Abuse and Mental Health Services Administration found that less than 5% of pill misusers had moved to heroin within five years of first beginning misuse. These painkiller-to-heroin migrators, according to analyses by the Centers for Disease Control and Prevention, also tend to be frequent users of multiple substances, such as benzodiazepines, alcohol, and cocaine. The transition from these other substances to heroin may represent a natural progression for such individuals.

Thus, factors beyond physical pain are most responsible for making individuals vulnerable to problems with opioids. Princeton economists Anne Case and Angus Deaton paint a dreary portrait of the social determinants of addiction in their work on premature demise across the nation. Beginning in the late 1990s, deaths due to alcoholism-related liver disease, suicide, and opioid overdoses began to climb nationwide. These “deaths of despair,” as Case and Deaton call them, strike less-educated whites, both men and women, between the ages of 45 and 54. While the life expectancy of men and women with a college degree continues to grow, it is actually decreasing for their less-educated counterparts. The problems start with poor job opportunities for those without college degrees. Absent employment, people come unmoored. Families unravel, domestic violence escalates, marriages dissolve, parents are alienated from their children, and their children from them.

Opioids are a salve for these communal wounds. Work by Alex Hollingsworth and colleagues found that residents of locales most severely pummeled by the economic downturn were more susceptible to opioids. As county unemployment rates increased by one percentage point, the opioid death rate (per 100,000) rose by almost 4%, and the emergency-room visit rate for opioid overdoses (per 100,000) increased by 7%. It’s no coincidence that many of the states won by Donald Trump — West Virginia, Kentucky, and Ohio, for example — had the highest rates of fatal drug overdoses in 2015.

Of all prime-working-age male labor-force dropouts, nearly half — roughly 7 million men — take pain medication on a daily basis. “In our mind’s eye,” writes Nicholas Eberstadt in a recent issue of Commentary, “we can now picture many millions of un-working men in the prime of life, out of work and not looking for jobs, sitting in front of screens — stoned.” Medicaid, it turns out, financed many of those stoned hours. Of the entire non-working prime-age white male population in 2013, notes Eberstadt, 57% were reportedly collecting disability benefits from one or more government disability programs. Medicaid enabled them to see a doctor and fill their prescriptions for a fraction of the street value: A single 10-milligram Percocet could go for $5 to $10, the co-pay for an entire bottle.

When it comes to beleaguered communities, one has to wonder how much can be done for people whose reserves of optimism and purposefulness have run so low. The challenge is formidable, to be sure, but breaking the cycle of self-destruction through treatment is a critical first step.

TREATMENT OPTIONS

Perhaps surprisingly, the majority of people who become addicted to any drug, including heroin, quit on their own. But for those who cannot stop using by themselves, treatment is critical, and individuals with multiple overdoses and relapses typically need professional help. Experts recommend at least one year of counseling or anti-addiction medication, and often both. General consensus holds that a standard week of “detoxification” is basically useless, if not dangerous — not only is the person extremely likely to resume use, he is at special risk because he will have lost his tolerance and may easily overdose.

Nor is a standard 28-day stay in a residential facility particularly helpful as a sole intervention. In residential settings many patients acquire a false sense of security about their ability to resist drugs. They are, after all, insulated from the stresses and conditioned cues that routinely provoke drug cravings at home and in other familiar environments. This is why residential care must be followed by supervised transition to treatment in an outpatient setting: Users must continue to learn how to cope without drugs in the social and physical milieus they inhabit every day.

Fortunately, medical professionals are armed with a number of good anti-addiction medications to help patients addicted to opioids. The classic treatment is methadone, first introduced as a maintenance therapy in the 1960s. A newer medication approved by the FDA in 2002 for the treatment of opioid addiction is buprenorphine, or “bupe.” It comes, most popularly, as a strip that dissolves under the tongue. The suggested length of treatment with bupe is a minimum of one or two years. Like methadone, bupe is an opioid. Thus, it can prevent withdrawal, blunt cravings, and produce euphoria. Unlike methadone, however, bupe’s chemical structure makes it much less dangerous if taken in excess, thereby prompting Congress to enact a law, the Drug Addiction Treatment Act of 2000, which allows physicians to prescribe it from their offices. Methadone, by contrast, can only be administered in clinics tightly regulated by the Drug Enforcement Administration and the Substance Abuse and Mental Health Services Administration. (I work in such a clinic.)

In addition to methadone or buprenorphine, which have abuse potential of their own, there is extended-release naltrexone. Administered as a monthly injection, naltrexone is an opioid blocker. A person who is “blocked” normally experiences no effect upon taking an opioid drug. Because naltrexone has no abuse potential (hence no street value), it is favored by the criminal-justice system. Jails and prisons are increasingly offering inmates an injection of naltrexone; one dose is given five weeks before release and another during the week of release, with plans for ongoing treatment as an outpatient. Such protection is warranted given the increased risk for death, particularly from drug-related causes, in the early post-release period. For example, one study of inmates released from the Washington State Department of Corrections found a 10-fold greater risk of overdose death within the first two weeks after discharge compared with non-incarcerated state residents of the same age, sex, and race.

Why Facts Don’t Change Our Minds

CREDIT: New Yorker Article

Why Facts Don’t Change Our Minds
New discoveries about the human mind show the limitations of reason.

By Elizabeth Kolbert

The vaunted human capacity for reason may have more to do with winning arguments than with thinking straight. (Illustration by Gérard DuBois)

In 1975, researchers at Stanford invited a group of undergraduates to take part in a study about suicide. They were presented with pairs of suicide notes. In each pair, one note had been composed by a random individual, the other by a person who had subsequently taken his own life. The students were then asked to distinguish between the genuine notes and the fake ones.

Some students discovered that they had a genius for the task. Out of twenty-five pairs of notes, they correctly identified the real one twenty-four times. Others discovered that they were hopeless. They identified the real note in only ten instances.

As is often the case with psychological studies, the whole setup was a put-on. Though half the notes were indeed genuine—they’d been obtained from the Los Angeles County coroner’s office—the scores were fictitious. The students who’d been told they were almost always right were, on average, no more discerning than those who had been told they were mostly wrong.

In the second phase of the study, the deception was revealed. The students were told that the real point of the experiment was to gauge their responses to thinking they were right or wrong. (This, it turned out, was also a deception.) Finally, the students were asked to estimate how many suicide notes they had actually categorized correctly, and how many they thought an average student would get right. At this point, something curious happened. The students in the high-score group said that they thought they had, in fact, done quite well—significantly better than the average student—even though, as they’d just been told, they had zero grounds for believing this. Conversely, those who’d been assigned to the low-score group said that they thought they had done significantly worse than the average student—a conclusion that was equally unfounded.

“Once formed,” the researchers observed dryly, “impressions are remarkably perseverant.”

A few years later, a new set of Stanford students was recruited for a related study. The students were handed packets of information about a pair of firefighters, Frank K. and George H. Frank’s bio noted that, among other things, he had a baby daughter and he liked to scuba dive. George had a small son and played golf. The packets also included the men’s responses on what the researchers called the Risky-Conservative Choice Test. According to one version of the packet, Frank was a successful firefighter who, on the test, almost always went with the safest option. In the other version, Frank also chose the safest option, but he was a lousy firefighter who’d been put “on report” by his supervisors several times. Once again, midway through the study, the students were informed that they’d been misled, and that the information they’d received was entirely fictitious. The students were then asked to describe their own beliefs. What sort of attitude toward risk did they think a successful firefighter would have? The students who’d received the first packet thought that he would avoid it. The students in the second group thought he’d embrace it.

Even after the evidence “for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs,” the researchers noted. In this case, the failure was “particularly impressive,” since two data points would never have been enough information to generalize from.

The Stanford studies became famous. Coming from a group of academics in the nineteen-seventies, the contention that people can’t think straight was shocking. It isn’t any longer. Thousands of subsequent experiments have confirmed (and elaborated on) this finding. As everyone who’s followed the research—or even occasionally picked up a copy of Psychology Today—knows, any graduate student with a clipboard can demonstrate that reasonable-seeming people are often totally irrational. Rarely has this insight seemed more relevant than it does right now. Still, an essential puzzle remains: How did we come to be this way?

In a new book, “The Enigma of Reason” (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at answering this question. Mercier, who works at a French research institute in Lyon, and Sperber, now based at the Central European University, in Budapest, point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context.

Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.

Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.

The students were asked to respond to two studies. One provided data in support of the deterrence argument, and the other provided data that called it into question. Both studies—you guessed it—were made up, and had been designed to present what were, objectively speaking, equally compelling statistics. The students who had originally supported capital punishment rated the pro-deterrence data highly credible and the anti-deterrence data unconvincing; the students who’d originally opposed capital punishment did the reverse. At the end of the experiment, the students were asked once again about their views. Those who’d started out pro-capital punishment were now even more in favor of it; those who’d opposed it were even more hostile.

If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hypersociability.”

Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.

A recent experiment performed by Mercier and some European colleagues neatly demonstrates this asymmetry. Participants were asked to answer a series of simple reasoning problems. They were then asked to explain their responses, and were given a chance to modify them if they identified mistakes. The majority were satisfied with their original choices; fewer than fifteen per cent changed their minds in step two.

In step three, participants were shown one of the same problems, along with their answer and the answer of another participant, who’d come to a different conclusion. Once again, they were given the chance to change their responses. But a trick had been played: the answers presented to them as someone else’s were actually their own, and vice versa. About half the participants realized what was going on. Among the other half, suddenly people became a lot more critical. Nearly sixty per cent now rejected the responses that they’d earlier been satisfied with.

This lopsidedness, according to Mercier and Sperber, reflects the task that reason evolved to perform, which is to prevent us from getting screwed by the other members of our group. Living in small bands of hunter-gatherers, our ancestors were primarily concerned with their social standing, and with making sure that they weren’t the ones risking their lives on the hunt while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from winning arguments.

Among the many, many issues our forebears didn’t worry about were the deterrent effects of capital punishment and the ideal attributes of a firefighter. Nor did they have to contend with fabricated studies, or fake news, or Twitter. It’s no wonder, then, that today reason often seems to fail us. As Mercier and Sperber write, “This is one of many cases in which the environment changed too quickly for natural selection to catch up.”

Steven Sloman, a professor at Brown, and Philip Fernbach, a professor at the University of Colorado, are also cognitive scientists. They, too, believe sociability is the key to how the human mind functions or, perhaps more pertinently, malfunctions. They begin their book, “The Knowledge Illusion: Why We Never Think Alone” (Riverhead), with a look at toilets.

Virtually everyone in the United States, and indeed throughout the developed world, is familiar with toilets. A typical flush toilet has a ceramic bowl filled with water. When the handle is depressed, or the button pushed, the water—and everything that’s been deposited in it—gets sucked into a pipe and from there into the sewage system. But how does this actually happen?

In a study conducted at Yale, graduate students were asked to rate their understanding of everyday devices, including toilets, zippers, and cylinder locks. They were then asked to write detailed, step-by-step explanations of how the devices work, and to rate their understanding again. Apparently, the effort revealed to the students their own ignorance, because their self-assessments dropped. (Toilets, it turns out, are more complicated than they appear.)

Sloman and Fernbach see this effect, which they call the “illusion of explanatory depth,” just about everywhere. People believe that they know way more than they actually do. What allows us to persist in this belief is other people. In the case of my toilet, someone else designed it so that I can operate it easily. This is something humans are very good at. We’ve been relying on one another’s expertise ever since we figured out how to hunt together, which was probably a key development in our evolutionary history. So well do we collaborate, Sloman and Fernbach argue, that we can hardly tell where our own understanding ends and others’ begins.

“One implication of the naturalness with which we divide cognitive labor,” they write, is that there’s “no sharp boundary between one person’s ideas and knowledge” and “those of other members” of the group.

This borderlessness, or, if you prefer, confusion, is also crucial to what we consider progress. As people invented new tools for new ways of living, they simultaneously created new realms of ignorance; if everyone had insisted on, say, mastering the principles of metalworking before picking up a knife, the Bronze Age wouldn’t have amounted to much. When it comes to new technologies, incomplete understanding is empowering.

Where it gets us into trouble, according to Sloman and Fernbach, is in the political domain. It’s one thing for me to flush a toilet without knowing how it operates, and another for me to favor (or oppose) an immigration ban without knowing what I’m talking about. Sloman and Fernbach cite a survey conducted in 2014, not long after Russia annexed the Ukrainian territory of Crimea. Respondents were asked how they thought the U.S. should react, and also whether they could identify Ukraine on a map. The farther off base they were about the geography, the more likely they were to favor military intervention. (Respondents were so unsure of Ukraine’s location that the median guess was wrong by eighteen hundred miles, roughly the distance from Kiev to Madrid.)

Surveys on many other issues have yielded similarly dismaying results. “As a rule, strong feelings about issues do not emerge from deep understanding,” Sloman and Fernbach write. And here our dependence on other minds reinforces the problem. If your position on, say, the Affordable Care Act is baseless and I rely on it, then my opinion is also baseless. When I talk to Tom and he decides he agrees with me, his opinion is also baseless, but now that the three of us concur we feel that much more smug about our views. If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the Trump Administration.

“This is how a community of knowledge can become dangerous,” Sloman and Fernbach observe. The two have performed their own version of the toilet experiment, substituting public policy for household gadgets. In a study conducted in 2012, they asked people for their stance on questions like: Should there be a single-payer health-care system? Or merit-based pay for teachers? Participants were asked to rate their positions depending on how strongly they agreed or disagreed with the proposals. Next, they were instructed to explain, in as much detail as they could, the impacts of implementing each one. Most people at this point ran into trouble. Asked once again to rate their views, they ratcheted down the intensity, so that they either agreed or disagreed less vehemently.

Sloman and Fernbach see in this result a little candle for a dark world. If we—or our friends or the pundits on CNN—spent less time pontificating and more trying to work through the implications of policy proposals, we’d realize how clueless we are and moderate our views. This, they write, “may be the only form of thinking that will shatter the illusion of explanatory depth and change people’s attitudes.”

One way to look at science is as a system that corrects for people’s natural inclinations. In a well-run laboratory, there’s no room for myside bias; the results have to be reproducible in other laboratories, by researchers who have no motive to confirm them. And this, it could be argued, is why the system has proved so successful. At any given moment, a field may be dominated by squabbles, but, in the end, the methodology prevails. Science moves forward, even as we remain stuck in place.

In “Denying to the Grave: Why We Ignore the Facts That Will Save Us” (Oxford), Jack Gorman, a psychiatrist, and his daughter, Sara Gorman, a public-health specialist, probe the gap between what science tells us and what we tell ourselves. Their concern is with those persistent beliefs which are not just demonstrably false but also potentially deadly, like the conviction that vaccines are hazardous. Of course, what’s hazardous is not being vaccinated; that’s why vaccines were created in the first place. “Immunization is one of the triumphs of modern medicine,” the Gormans note. But no matter how many scientific studies conclude that vaccines are safe, and that there’s no link between immunizations and autism, anti-vaxxers remain unmoved. (They can now count on their side—sort of—Donald Trump, who has said that, although he and his wife had their son, Barron, vaccinated, they refused to do so on the timetable recommended by pediatricians.)

The Gormans, too, argue that ways of thinking that now seem self-destructive must at some point have been adaptive. And they, too, dedicate many pages to confirmation bias, which, they claim, has a physiological component. They cite research suggesting that people experience genuine pleasure—a rush of dopamine—when processing information that supports their beliefs. “It feels good to ‘stick to our guns’ even if we are wrong,” they observe.

The Gormans don’t just want to catalogue the ways we go wrong; they want to correct for them. There must be some way, they maintain, to convince people that vaccines are good for kids, and handguns are dangerous. (Another widespread but statistically insupportable belief they’d like to discredit is that owning a gun makes you safer.) But here they encounter the very problems they have enumerated. Providing people with accurate information doesn’t seem to help; they simply discount it. Appealing to their emotions may work better, but doing so is obviously antithetical to the goal of promoting sound science. “The challenge that remains,” they write toward the end of their book, “is to figure out how to address the tendencies that lead to false scientific belief.”

“The Enigma of Reason,” “The Knowledge Illusion,” and “Denying to the Grave” were all written before the November election. And yet they anticipate Kellyanne Conway and the rise of “alternative facts.” These days, it can feel as if the entire country has been given over to a vast psychological experiment being run either by no one or by Steve Bannon. Rational agents would be able to think their way to a solution. But, on this matter, the literature is not reassuring. ♦

This article appears in the print edition of the February 27, 2017, issue, with the headline “That’s What You Think.”

Elizabeth Kolbert has been a staff writer at The New Yorker since 1999. She won the 2015 Pulitzer Prize for general nonfiction for “The Sixth Extinction: An Unnatural History.”

John C. Reid

Regulatory State and Redistributive State

Will Wilkinson is a great writer, and spells out here two critical aspects of government:

The regulatory state is the aspect of government that protects the public against abuses of private players, protects property rights, and creates well-defined “corridors” that streamline the flows of capitalism and make it work best. It always gets a bad rap, and shouldn’t. The rap is due to the difficulty of enforcing regulations on so many aspects of life.

The redistributive state is the aspect of government that shifts income and wealth from certain players in society to other players. The presumption is always one of fairness, whereby society deems it in the interests of all that certain actors, e.g. veterans or seniors, get preferential distributions of some kind.

He goes on to make a great point. These two states are more independent of one another than might at first be apparent. So it is possible to dislike one and like the other.

Personally, I like both. I think both are critical to a well-oiled society with capitalism and property rights as central tenets. My beef will always be with issues of efficiency and effectiveness.

On redistribution, efficiency experts can answer this question: can we dispense with the monthly paperwork and simply direct deposit funds? Medicare now works this way, and the efficiency gains are remarkable.

And on regulation, efficiency experts can answer this question: can private actors certify their compliance with regulation, and then the public actors simply audit from time to time? Many government programs work this way, to the benefit of all.

On redistribution, effectiveness experts can answer this question: Is the homeless population minimal? Are veterans getting what they need? Are seniors satisfied with how government treats them?

On regulation, effectiveness experts can answer this question: Is the air clean? Is the water clean? Is the mortgage market making good loans that help people buy houses? Are complaints about fraudulent consumer practices low?

CREDIT: VOX Article on Economic Freedom by Will Wilkinson

By Will Wilkinson
Sep 1, 2016

American exceptionalism has been propelled by exceptionally free markets, so it’s tempting to think the United States has a freer economy than Western European countries — particularly those soft-socialist Scandinavian social democracies with punishing tax burdens and lavish, even coddling, welfare states. As late as 2000, the American economy was indeed the freest in the West. But something strange has happened since: Economic freedom in the United States has dropped at an alarming rate.

Meanwhile, a number of big-government welfare states have become at least as robustly capitalist as the United States, and maybe more so. Why? Because big welfare states needed to become better capitalists to afford their socialism. This counterintuitive, even paradoxical dynamic suggests a tantalizing hypothesis: America’s shabby, unpopular safety net is at least partly responsible for capitalism’s flagging fortunes in the Land of the Free. Could it be that Americans aren’t socialist enough to want capitalism to work? It makes more sense than you might think.

America’s falling economic freedom

From 1970 to 2000, the American economy was the freest in the West, lagging behind only Asia’s laissez-faire city-states, Hong Kong and Singapore. The average economic freedom rating of the wealthy developed member countries of the Organization for Economic Cooperation and Development (OECD) has slipped a bit since the turn of the millennium, but not as fast as America’s.

“Nowhere has the reversal of the rising trend in the economic freedom been more evident than in the United States,” write the authors of the Fraser Institute’s 2015 Economic Freedom of the World report, noting that “the decline in economic freedom in the United States has been more than three times greater than the average decline found in the OECD.”

The economic freedom of selected countries, 1999 to 2016. Heritage Foundation 2016 Index of Economic Freedom

The Heritage Foundation and the Canadian Fraser Institute each produce an annual index of economic freedom, scoring the world’s countries on four or five main areas, each of which breaks down into a number of subcomponents. The main rubrics include the size of government and tax burdens; protection of property rights and the soundness of the legal system; monetary stability; openness to global trade; and levels of regulation of business, labor, and capital markets. Scores on these areas and subareas are combined to generate an overall economic freedom score.
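
As a rough illustration of how that aggregation works, here is a simplified sketch with made-up subcomponent scores and equal weighting assumed throughout; the real Heritage and Fraser indices use their own components and weighting schemes:

```python
# Simplified sketch of index aggregation: subcomponents average into
# area scores, and area scores average into an overall score.
# All numbers below are illustrative, not actual index data.
from statistics import mean

areas = {
    "size_of_government": [7.2, 6.8, 8.1],       # subcomponent scores, 0-10
    "legal_system_property_rights": [7.9, 7.5],
    "sound_money": [9.4, 9.1, 9.8],
    "freedom_to_trade": [8.0, 7.6],
    "regulation": [7.1, 6.5, 7.8],
}

area_scores = {name: mean(subs) for name, subs in areas.items()}
overall = mean(area_scores.values())
print(round(overall, 2))
```

A country's ranking then falls out of sorting these overall scores, which is why changes in any one area (say, regulation) can move a country up or down the table.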

The rankings reflect right-leaning ideas about what it means for people and economies to be free. Strong labor unions and inequality-reducing redistribution are more likely to hurt than help a country’s score.

So why should you care about some right-wing think tank’s ideologically loaded measure of economic freedom? Because it matters. More economic freedom, so measured, predicts higher rates of economic growth, and higher levels of wealth predict happier, healthier, longer lives. Higher levels of economic freedom are also linked with greater political liberty and civil rights, as well as higher scores on the left-leaning Social Progress Index, which is based on indicators of social justice and human well-being, from nutrition and medical care to tolerance and inclusion.

The authors of the Fraser report estimate that the drop in American economic freedom “could cut the US historic growth rate of 3 percent by half.” The difference between a 1.5 percent and 3 percent growth rate is roughly the difference between the output of the economy tripling rather than octupling in a lifetime. That’s a huge deal.
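
The tripling-versus-octupling claim is just compound growth. A minimal check, assuming a "lifetime" of roughly 70 years (a figure not stated in the article):

```python
# Compound growth: output after n years at annual rate r is (1 + r) ** n.
def growth_multiple(rate: float, years: int) -> float:
    """Return the factor by which output grows at `rate` over `years`."""
    return (1 + rate) ** years

years = 70  # rough "lifetime"; an assumption for illustration
print(round(growth_multiple(0.03, years), 1))   # ~7.9x: roughly octupling
print(round(growth_multiple(0.015, years), 1))  # ~2.8x: roughly tripling
```

So halving the growth rate does not halve the eventual payoff; it cuts the lifetime multiple by far more, which is why the authors call the difference a huge deal.
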

Over the same period, the economic freedom scores of Canada and Denmark have improved a lot. According to conservative and libertarian definitions of economic freedom, Canadians, who enjoy a socialized health care system, now have more economic freedom than Americans, and Danes, who have one of the world’s most generous welfare states, have just as much.

What the hell’s going on?

The redistributive state and the regulatory state are separable

To make headway on this question, it is crucial to clearly distinguish two conceptually and empirically separable aspects of “big government” — the regulatory state and the redistributive state.

The redistributive state moves money around through taxes and transfer programs. The regulatory state places all sorts of restrictions and requirements on economic life — some necessary, some not. Most Democrats and Republicans assume that lots of regulation and lots of redistribution go hand in hand, so it’s easy to miss that you can have one without the other, and that the relationship between the two is uneasy at best. But you can’t really understand the politics behind America’s declining economic freedom if you fail to distinguish between the regulatory and fiscal aspects of economic policy.

Standard “supply-side” Republican economic policy thinking says that cuts in tax rates and government spending will unleash latent productive potential in the economy, boosting rates of growth. And indeed, when taxes and government spending are very high, cuts produce gains by returning resources to the private sector. But it’s important to see that questions about government control versus private sector control of economic resources are categorically different from questions about the freedom of markets.

Free markets require the presence of good regulation, which defines and protects property rights and facilitates market processes through the consistent application of clear law, and an absence of bad regulation, which interferes with productive economic activity. A government can tax and spend very little — yet still stomp all over markets. Conversely, a government can withdraw lots of money from the economy through taxes, but still totally nail the optimal balance of good and bad regulation.

Whether a country’s market economy is free — open, competitive, and relatively unmolested by government — is more a question of regulation than a question of taxation and redistribution. It’s not primarily about how “big” its government is. Republicans generally do support a less meddlesome regulatory approach, but when they’re in power they tend to be much more persistent about cutting taxes and social welfare spending than they are about reducing economically harmful regulatory frictions.

If you’re as worried about America’s declining economic freedom as I am, this is a serious problem. In recent years, the effect of cutting taxes and spending has been to distribute income upward and leave the least well-off more vulnerable to bad luck, globalization, “disruptive innovation,” and the vagaries of business cycles.

If spending cuts came out of the military’s titanic budget, that would help. But that’s rarely what happens. The least connected constituencies, not the most expensive ones, are the first to get dinged by budget hawks. And further tax cuts are unlikely to boost growth. Lower taxes make government seem cheaper than it really is, which leads voters to ask for more, not less, government spending, driving up the deficit. Increasing the portion of GDP devoted to paying interest on government debt isn’t a growth-enhancing way to return resources to the private sector.

Meanwhile, wages have been flat or declining for millions of Americans for decades. People increasingly believe the economy is “rigged” in favor of the rich. As a sense of economic insecurity mounts, people anxiously cast about for answers.

Easing the grip of the regulatory state is a good answer. But in the United States, its close association with “free market” supply-side efforts to produce growth by slashing the redistributive state has made it an unattractive answer, even with Republican voters. That’s at least part of the reason the GOP wound up nominating a candidate who, in addition to promising not to cut entitlement spending, openly favors protectionist trade policy, giant infrastructure projects, and huge subsidies to domestic manufacturing and energy production. Donald Trump’s economic policy is the worst of all possible worlds.

This is doubly ironic, and doubly depressing, once you recognize that the sort of big redistributive state supply-siders fight is not necessarily the enemy of economic freedom. On the contrary, high levels of social welfare spending can actually drive political demand for growth-promoting reform of the regulatory state. That’s the lesson of Canada and Denmark’s march up those free economy rankings.

The welfare state isn’t a free lunch, but it is a cheap date

Economic theory tells you that big government ought to hurt economic growth. High levels of taxation reduce the incentive to work, and redistribution is a “leaky bucket”: Moving money around always ends up wasting some of it. Moreover, a dollar spent in the private sector generally has a more beneficial effect on the economy than a dollar spent by the government. Add it all up, and big governments that tax heavily and spend freely on social transfers ought to hurt economic growth.

That matters from a moral perspective — a lot. Other things equal, people are better off on just about every measure of well-being when they’re wealthier. Relative economic equality is nice, but it’s not so nice when relatively equal shares mean smaller shares for everyone. Just as small differences in the rate at which you put money into a savings account can lead to vast differences in your account balance 40 years down the road, thanks to the compounding nature of interest, a small reduction in the rate of economic growth can leave a society’s least well-off people much poorer in absolute terms than they might have been.
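
The savings-account analogy can be made concrete with the standard future-value-of-an-annuity formula; the deposit amounts and the 5 percent rate below are illustrative assumptions, not figures from the article:

```python
# Future value of a stream of equal end-of-year deposits compounding
# at a fixed annual rate: FV = d * ((1 + r)**n - 1) / r.
def future_value(annual_deposit: float, rate: float, years: int) -> float:
    """Balance after `years` of end-of-year deposits compounding at `rate`."""
    return annual_deposit * (((1 + rate) ** years - 1) / rate)

# Depositing $3,000 vs. $4,000 a year at 5% for 40 years:
print(round(future_value(3000, 0.05, 40)))  # ≈ $362,000
print(round(future_value(4000, 0.05, 40)))  # ≈ $483,000
```

The same compounding logic applies to national output: a seemingly small gap in the annual rate widens into a very large gap in the absolute level decades later.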

Here’s the puzzle. As a general rule, when nations grow wealthier, the public demands more and better government services, increasing government spending as a percentage of GDP. (This is known as “Wagner’s law.”) According to standard growth theory, an ongoing increase in the size of government ought to exert downward pressure on rates of growth. But we don’t see the expected effect in the data. Long-term national growth trends are amazingly stable.

And when we look at the family of advanced, liberal democratic countries, countries that spend a smaller portion of national income on social transfer programs gain very little in terms of growth relative to countries that spend much more lavishly on social programs. Peter Lindert, an economist at the University of California Davis, calls this the “free lunch paradox.”

Lindert’s label for the puzzle is somewhat misleading, because big expensive welfare states are, obviously, expensive. And they do come at the expense of some growth. Standard economic theory isn’t completely wrong. It’s just that democracies that have embraced generous social spending have found ways to afford it by minimizing and offsetting its anti-growth effects.

If you’re careful with the numbers, you do in fact find a small negative effect of social welfare spending on growth. Still, according to economic theory, lunch ought to be really expensive. And it’s not.

There are three main reasons big welfare states don’t hurt growth as much as you might think. First, as Lindert has emphasized, they tend to have efficient consumption-based tax systems that minimize market distortions.

When you tax something, people tend to avoid it. If you tax income, as the United States does, people work a little less, which means that certain economic gains never materialize, leaving everyone a little poorer. Taxing consumption, as many of our European peers do, is less likely to discourage productive moneymaking, though it does discourage spending. But that’s not so bad. Less consumption means more savings, and savings puts the capital in capitalism, financing the economic activity that creates growth.

There are other advantages, too. Consumption taxes are usually structured as national sales taxes (or VATs, value-added taxes), which are paid in small amounts on a continuous basis, are extremely cheap to collect (and hard to avoid), while being less in-your-face than income taxes, which further mitigates the counterproductively demoralizing aspect of taxation.

Big welfare states are also more likely to tax addictive stuff, which people tend to buy whatever the price, as well as unhealthy and polluting stuff. That harnesses otherwise fiscally self-defeating tax-avoiding behavior to minimize the costs of health care and environmental damage.

Second, some transfer programs have relatively direct pro-growth effects. Workers are most productive in jobs well-matched to their training and experience, for example, and unemployment benefits offer displaced workers time to find a good, productivity-promoting fit. There’s also some evidence that health care benefits that aren’t linked to employment can promote economic risk-taking and entrepreneurship.

Fans of open-handed redistributive programs tend to oversell this kind of upside for growth, but there really is some. Moreover, it makes sense that the countries most devoted to these programs would fine-tune them over time to amplify their positive-sum aspects.

This is why you can’t assume all government spending affects growth in the same way. The composition of spending — as well as cuts to spending — matters. Cuts to efficiency-enhancing spending can hurt growth as much as they help. And they can really hurt if they increase economic anxiety and generate demand for Trump-like economic policy.

Third, there are lots of regulatory state policies that hurt growth by, say, impeding healthy competition or closing off foreign trade, and if you like high levels of redistribution better than you like those policies, you’ll eventually consider getting rid of some of them. If you do get rid of them, your economic freedom score from the Heritage Foundation and the Fraser Institute goes up.

This sort of compensatory economic liberalization is how big welfare states can indirectly promote growth, and more or less explains why countries like Canada, Denmark, and Sweden have become more robustly capitalist over the past several decades. They needed to be better capitalists to afford their socialism. And it works pretty well.

If you bundle together fiscal efficiency, some offsetting pro-growth effects, and compensatory liberalization, you can wind up with a very big government, with very high levels of social welfare spending and very few negative consequences for growth. Call it “big-government laissez-faire.”

The missing political will for genuine pro-growth reform

Enthusiasts for small government have a ready reply. Fine, they’ll say. Big government can work through policies that offset its drag on growth. But why not a less intrusive regulatory state and a smaller redistributive state: small-government laissez-faire. After all, this is the formula in Hong Kong and Singapore, which rank No. 1 and No. 2 in economic freedom. Clearly that’s our best bet for prosperity-promoting economic freedom.

But this argument ignores two things. First, Hong Kong and Singapore are authoritarian technocracies, not liberal democracies, which suggests (though doesn’t prove) that their special recipe requires nondemocratic government to work. When you bring democracy into the picture, the most important political lesson of the Canadian and Danish rise in economic freedom becomes clear: When democratically popular welfare programs become politically nonnegotiable fixed points, they can come to exert intense pressure on fiscal and economic policy to make them sustainable.

Political demand for economic liberalization has to come from somewhere. But there’s generally very little organic, popular democratic appetite for capitalist creative destruction. Constant “disruption” is scary, the way markets generate wealth and well-being is hard to comprehend, and many of us find competitive profit-seeking intuitively objectionable.

It’s not that Danes and Swedes and Canadians ever loved their “neoliberal” market reforms. They fought bitterly about them and have rolled some of them back. But when their big-government welfare states were creaking under their own weight, enough of the public was willing, thanks to the sense of economic security provided by the welfare state, to listen to experts who warned that the redistributive state would become unsustainable without the downsizing of the regulatory state.

A sound and generous system of social insurance offers a certain peace of mind that makes the very real risks of increased economic dynamism seem tolerable to the democratic public, opening up the political possibility of stabilizing a big-government welfare state with growth-promoting economic liberalization.

This sense of baseline economic security is precisely what many millions of Americans lack.

Learning the lesson of Donald Trump

America’s declining economic freedom is a profoundly serious problem. It’s already putting the brakes on dynamism and growth, leaving millions of Americans with a bitter sense of panic about their prospects. They demand answers. But ordinary voters aren’t policy wonks. When gripped by economic anxiety, they turn to demagogues who promise measures that make intuitive economic sense, but which actually make economic problems worse.

We may dodge a Trump presidency this time, but if we fail to fix the feedback loop between declining economic freedom and an increasingly acute sense of economic anxiety, we risk plunging the world’s biggest economy and the linchpin of global stability into a political and economic death spiral. It’s a ridiculous understatement to say that it’s important that this doesn’t happen.

Market-loving Republicans and libertarians need to stare hard at a framed picture of Donald Trump and reflect on the idea that a stale economic agenda focused on cutting taxes and slashing government spending is unlikely to deliver further gains. It is instead likely to continue to backfire by exacerbating economic anxiety and the public’s sense that the system is rigged.

If you gaze at the Donald long enough, his fascist lips will whisper “thank you,” and explain that the close but confusing identification of supply-side fiscal orthodoxy with “free market” economic policy helps authoritarian populists like him — but it hurts the political prospects of regulatory state reforms that would actually make American markets freer.

Will Wilkinson is the vice president for policy at the Niskanen Center.

Property Rights and Modern Conservatism



In this excellent essay by one of my favorite conservative writers, Will Wilkinson takes Congress to task for their botched-job-with-a-botched-process passing of Tax Cut legislation in 2017.

But I am blogging because of his other points.

In the article, he spells out some tenets of modern conservatism that bear repeating, namely:

– property rights (and the Murray Rothbard extreme positions of absolute property rights)
– economic freedom (“…if we tax you at 100 percent, then you’ve got 0 percent liberty…If we tax you at 50 percent, you are half-slave, half-free”)
– libertarianism (“The key is the libertarian idea, woven into the right’s ideological DNA, that redistribution is the exploitation of the “makers” by the “takers.”)
– legally enforceable rights
– moral traditionalism

Modern conservatism is a “fusion” of these ideas. They have an intellectual footing that is impressive.

But Will points out where they are flawed. The flaws are most apparent in the idea that the hordes want to use democratic institutions to plunder the wealth of the elites. This is a notion from the days when communism was public enemy #1. He points out that the opposite is actually the truth.

“Far from endangering property rights by facilitating redistribution, inclusive democratic institutions limit the “organized banditry” of the elite-dominated state by bringing everyone inside the charmed circle of legally enforced rights.”

Ironically, the new Tax Cut legislation is an example of reverse plunder: where the wealthy get the big, permanent gains and the rest get appeased with small cuts that expire.

So, we are very far from the fears of communism. We are instead amidst a taking by the haves from the have-nots.

====================
Credit: New York Times 12/20/17 Op-Ed by Will Wilkinson

Opinion | OP-ED CONTRIBUTOR
The Tax Bill Shows the G.O.P.’s Contempt for Democracy
By WILL WILKINSON
DEC. 20, 2017
The Republican Tax Cuts and Jobs Act is notably generous to corporations, high earners, inheritors of large estates and the owners of private jets. Taken as a whole, the bill will add about $1.4 trillion to the deficit in the next decade and trigger automatic cuts to Medicare and other safety net programs unless Congress steps in to stop them.

To most observers on the left, the Republican tax bill looks like sheer mercenary cupidity. “This is a brazen expression of money power,” Jesse Jackson wrote in The Chicago Tribune, “an example of American plutocracy — a government of the wealthy, by the wealthy, for the wealthy.”

Mr. Jackson is right to worry about the wealthy lording it over the rest of us, but the open contempt for democracy displayed in the Senate’s slapdash rush to pass the tax bill ought to trouble us as much as, if not more than, what’s in it.

In its great haste, the “world’s greatest deliberative body” held no hearings or debate on tax reform. The Senate’s Republicans made sloppy math mistakes, crossed out and rewrote whole sections of the bill by hand at the 11th hour and forced a vote on it before anyone could conceivably read it.

The link between the heedlessly negligent style and anti-redistributive substance of recent Republican lawmaking is easy to overlook. The key is the libertarian idea, woven into the right’s ideological DNA, that redistribution is the exploitation of the “makers” by the “takers.” It immediately follows that democracy, which enables and legitimizes this exploitation, is itself an engine of injustice. As the novelist Ayn Rand put it, under democracy “one’s work, one’s property, one’s mind, and one’s life are at the mercy of any gang that may muster the vote of a majority.”

On the campaign trail in 2015, Senator Rand Paul, Republican of Kentucky, conceded that government is a “necessary evil” requiring some tax revenue. “But if we tax you at 100 percent, then you’ve got 0 percent liberty,” Mr. Paul continued. “If we tax you at 50 percent, you are half-slave, half-free.” The speaker of the House, Paul Ryan, shares Mr. Paul’s sense of the injustice of redistribution. He’s also a big fan of Ayn Rand. “I give out ‘Atlas Shrugged’ as Christmas presents, and I make all my interns read it,” Mr. Ryan has said. If the big-spending, democratic welfare state is really a system of part-time slavery, as Ayn Rand and Senator Paul contend, then beating it back is a moral imperative of the first order.

But the clock is ticking. Looking ahead to a potentially paralyzing presidential scandal, midterm blood bath or both, congressional Republicans are in a mad dash to emancipate us from the welfare state. As they see it, the redistributive upshot of democracy is responsible for the big-government mess they’re trying to bail us out of, so they’re not about to be tender with the niceties of democratic deliberation and regular parliamentary order.

The idea that there is an inherent conflict between democracy and the integrity of property rights is as old as democracy itself. Because the poor vastly outnumber the propertied rich — so the argument goes — if allowed to vote, the poor might gang up at the ballot box to wipe out the wealthy.

In the 20th century, and in particular after World War II, with voting rights and Soviet Communism on the march, the risk that wealthy democracies might redistribute their way to serfdom had never seemed more real. Radical libertarian thinkers like Rand and Murray Rothbard (who would be a muse to both Charles Koch and Ron Paul) responded with a theory of absolute property rights that morally criminalized taxation and narrowed the scope of legitimate government action and democratic discretion nearly to nothing. “What is the State anyway but organized banditry?” Rothbard asked. “What is taxation but theft on a gigantic, unchecked scale?”

Mainstream conservatives, like William F. Buckley, banished radical libertarians to the fringes of the conservative movement to mingle with the other unclubbables. Still, the so-called fusionist synthesis of libertarianism and moral traditionalism became the ideological core of modern conservatism. For hawkish Cold Warriors, libertarianism’s glorification of capitalism and vilification of redistribution was useful for immunizing American political culture against viral socialism. Moral traditionalists, struggling to hold ground against rising mass movements for racial and gender equality, found much to like in libertarianism’s principled skepticism of democracy. “If you analyze it,” Ronald Reagan said, “I believe the very heart and soul of conservatism is libertarianism.”

The hostility to redistributive democracy at the ideological center of the American right has made standard policies of successful modern welfare states, happily embraced by Europe’s conservative parties, seem beyond the moral pale for many Republicans. The outsize stakes seem to justify dubious tactics — bunking down with racists, aggressive gerrymandering, inventing paper-thin pretexts for voting rules that disproportionately hurt Democrats — to prevent majorities from voting themselves a bigger slice of the pie.

But the idea that there is an inherent tension between democracy and the integrity of property rights is wildly misguided. The liberal-democratic state is a relatively recent historical innovation, and our best accounts of the transition from autocracy to democracy point to the role of democratic political inclusion in protecting property rights.

As Daron Acemoglu of M.I.T. and James Robinson of Harvard show in “Why Nations Fail,” ruling elites in pre-democratic states arranged political and economic institutions to extract labor and property from the lower orders. That is to say, the system was set up to make it easy for elites to seize what ought to have been other people’s stuff.

In “Inequality and Democratization,” the political scientists Ben W. Ansell and David J. Samuels show that this demand for political inclusion generally isn’t driven by a desire to use the existing institutions to plunder the elites. It’s driven by a desire to keep the elites from continuing to plunder them.

It’s easy to say that everyone ought to have certain rights. Democracy is how we come to get and protect them. Far from endangering property rights by facilitating redistribution, inclusive democratic institutions limit the “organized banditry” of the elite-dominated state by bringing everyone inside the charmed circle of legally enforced rights.

Democracy is fundamentally about protecting the middle and lower classes from redistribution by establishing the equality of basic rights that makes it possible for everyone to be a capitalist. Democracy doesn’t strangle the golden goose of free enterprise through redistributive taxation; it fattens the goose by releasing the talent, ingenuity and effort of otherwise abused and exploited people.

At a time when America’s faith in democracy is flagging, the Republicans elected to treat the United States Senate, and the citizens it represents, with all the respect college guys accord public restrooms. It’s easier to reverse a bad piece of legislation than the bad reputation of our representative institutions, which is why the way the tax bill was passed is probably worse than what’s in it. Ultimately, it’s the integrity of democratic institutions and the rule of law that gives ordinary people the power to protect themselves against elite exploitation. But the Republican majority is bulldozing through basic democratic norms as though freedom has everything to do with the tax code and democracy just gets in the way.

Will Wilkinson is the vice president for policy at the Niskanen Center.

Neo.Life

This beta site, Neo.Life (the link goes beyond the splash page), is tracking the "neobiological revolution." I wholeheartedly agree that some of our best and brightest are on the case. Here they are:

ABOUT
NEO.LIFE
Making Sense of the Neobiological Revolution
NOTE FROM THE EDITOR
Mapping the brain, sequencing the genome, decoding the microbiome, extending life, curing diseases, editing mutations. We live in a time of awe and possibility — and also enormous responsibility. Are you prepared?

EDITORS

FOUNDER

Jane Metcalfe
Founder of Neo.life. Entrepreneur in media (Wired) and food (TCHO). Lover of mountains, horses, roses, and kimchee, though not necessarily in that order.

EDITOR
Brian Bergstein
Story seeker and story teller. Editor at NEO.LIFE. Former executive editor of MIT Technology Review; former technology & media editor at The Associated Press

ART DIRECTOR
Nicholas Vokey
Los Angeles-based graphic designer and animator.

CONSULTANT
Saul Carlin
founder @subcasthq. used to work here.

EDITOR
Rachel Lehmann-Haupt
Editor, www.theartandscienceoffamily.com & NEO.LIFE, author of In Her Own Sweet Time: Egg Freezing and the New Frontiers of Family

Laura Cochrane
“To oppose something is to maintain it.” — Ursula K. Le Guin

WRITERS

Amanda Schaffer
writes for the New Yorker and Neo.life, and is a former medical columnist for Slate. @abschaffer

Mallory Pickett
freelance journalist in Los Angeles

Karen Weintraub
Health/Science journalist passionate about human health, cool researcher and telling stories.

Anna Nowogrodzki
Science and tech journalist. Writing in Nature, National Geographic, Smithsonian, mental_floss, & others.

Juan Enriquez
Best-selling author, Managing Director of Excel Venture Management.

Christina Farr
Tech and features writer. @Stanford grad.

NEO.LIFE
Making sense of the Neobiological Revolution. Get the email at www.neo.life.

Maria Finn
I’m an author and tell stories across multiple mediums including prose, food, gardens, technology & narrative mapping. www.mariafinn.com Instagram maria_finn1.

Stephanie Pappas
I write about science, technology and the things people do with them.

David Eagleman
Neuroscientist at Stanford, internationally bestselling author of fiction and non-fiction, creator and presenter of PBS’ The Brain.

Kristen V. Brown
Reporter @Gizmodo covering biotech.

Thomas Goetz

David Ewing Duncan
Life science journalist; bestselling author, 9 books; NY Times, Atlantic, Wired, Daily Beast, NPR, ABC News, more; Curator, Arc Fusion www.davidewingduncan.com

Dorothy Santos
writer, editor, curator, and educator based in the San Francisco Bay Area about.me/dorothysantos.com

Dr. Sophie Zaaijer
CEO of PlayDNA, Postdoctoral fellow at the New York Genome Center, Runway postdoc at Cornell Tech.

Andrew Rosenblum
I’m a freelance tech writer based in Oakland, CA. You can find my work at Neo.Life, the MIT Technology Review, Popular Science, and many other places.

Zoe Cormier

Diana Crow
Fledgling science journalist here, hoping to foster discussion about the ways science acts as a catalyst for social change #biology

Ashton Applewhite
Calling for a radical aging movement. Anti-ageism blog+talk+book

Grace Rubenstein
Journalist, editor, media producer. Social/bio science geek. Tweets on health science, journalism, immigration. Spanish speaker & dancing fool.

Science and other sundries.

Esther Dyson
Internet court jEsther — I occupy Esther Dyson. Founder @HICCup_co https://t.co/5dWfUSratQ http://t.co/a1Gmo3FTQv

Jessica Leber
Freelance science and technology journalist and editor, formerly on staff at Fast Company, Vocativ, MIT Technology Review, and ClimateWire.

Jessica Carew Kraft
An anthropologist, artist, and naturalist writing about health, education, and rewilding. Mother to two girls in San Francisco.

Corby Kummer
Senior editor, The Atlantic, five-time James Beard Journalism Award winner, restaurant reviewer for New York, Boston, and Atlanta magazines

K McGowan
Journalist. Reporting on health, medicine, science, other excellent things. T: @mcgowankat

Rob Waters
I’m a journalist living in Berkeley. I write about health, science, social justice and policy. Father of 1. From Detroit.

Yiting Sun
writes for MIT Technology Review and Neo.life from Beijing, and was based in Accra, Ghana, in 2014 and 2015.

Michael Hawley

Richard Sprague
Curious amateur. Years of near-daily microbiome experiments. US CEO of AI healthcare startup http://airdoc.com

Bob Parks ✂
Connoisseur of the slap dash . . . maker . . . runner . . . writer of Outside magazine’s Gear Guy blog . . . freelance writer and reporter.

CREDIT: https://medium.com/neodotlife/review-of-daytwo-microbiome-test-deacd5464cd5

Microbiome Apps Personalize Eating Recommendations

Richard Sprague provides a useful update on the microbiome landscape below. The microbiome field is exploding. Your gut can be measured, and your gut can influence your health and well-being. Now these gut measurements can offer people a first: personalized nutrition information.

Among the more relevant points:

– Israel's Weizmann Institute is the global leader academically. Eran Elinav, a physician and immunologist at the Weizmann Institute, is one of its lead investigators (see prior post).
– The older technology for measuring the gut is called "16S" sequencing. It tells you at a high level which kinds of microbes are present. It's cheap and easy, but 16S can see only broad categories.
– The companies competing to measure your microbiome include uBiome, American Gut, Thryve, DayTwo, and Viome. DayTwo and Viome offer more advanced technology (see below).
– The latest technology is "metagenomic sequencing," which is more specific and detailed.
– By combining “metagenomic sequencing” information with extensive research about how certain species interact with particular foods, machine-learning algorithms can recommend what you should eat.
– DayTwo offers a metagenomic sequencing for $299, and then combines that with all available research to offer personalized nutrition information.
– DayTwo recently completed a $12 million financing round from, among others, Mayo Clinic, which announced it would be validating the research in the U.S.
– DayTwo draws its academic understanding from Israel's Weizmann Institute. The app is based on more than five years of highly cited research showing, for example, that while people on average respond similarly to white bread versus whole grain sourdough bread, the differences between individuals can be huge: what's good for one specific person may be bad for another.

CREDIT: Article on Microbiome Advances

When a Double-Chocolate Brownie is Better for You Than Quinoa

A $299 microbiome test from DayTwo turns up some counterintuitive dietary advice.

Why do certain diets work well for some people but not others? Although several genetic tests try to answer that question and might help you craft ideal nutrition plans, your DNA reveals only part of the picture. A new generation of tests from DayTwo and Viome offer diet advice based on a more complete view: they look at your microbiome, the invisible world of bacteria that help you metabolize food, and, unlike your DNA, change constantly throughout your life.
These bugs are involved in the synthesis of vitamins and other compounds in food, and they even play a role in the digestion of gluten. Artificial sweeteners may not contain calories, but they do modify the bacteria in your gut, which may explain why some people continue to gain weight on diet soda. Everyone’s microbiome is different.

So how well do these new tests work?
Basic microbiome tests, long available from uBiome, American Gut, Thryve, and others, based on older “16S” sequencing, can tell you at a high level which kinds of microbes are present. It’s cheap and easy, but 16S can see only broad categories, the bacterial equivalent of, say, canines versus felines. But just as your life might depend on knowing the difference between a wolf and a Chihuahua, your body’s reaction to food often depends on distinctions that can be known only at the species level. The difference between a “good” microbe and a pathogen can be a single DNA base pair.

New tests use more precise “metagenomic” sequencing that can make those distinctions. And by combining that information with extensive research about how those species interact with particular foods, machine-learning algorithms can recommend what you should eat. (Disclosure: I am a former “citizen scientist in residence” at uBiome. But I have no current relationship with any of these companies; I’m just an enthusiast about the microbiome.)

I recently tested myself with DayTwo ($299) to see what it would recommend for me, and I was pleased that the advice was not always the standard “eat more vegetables” that you’ll get from other products claiming to help you eat healthily. DayTwo’s advice is much more specific and often refreshingly counterintuitive. It’s based on more than five years of highly cited research at Israel’s Weizmann Institute, showing, for example, that while people on average respond similarly to white bread versus whole grain sourdough bread, the differences between individuals can be huge: what’s good for one specific person may be bad for another.

In my case, whole grain breads all rate C-. French toast with challah bread: A.

The DayTwo test was pretty straightforward: you collect what comes out of your, ahem, gut, which involves mailing a sample from your time on the toilet. Unlike the other tests, which can analyze the DNA found in just a tiny swab from a stain on a piece of toilet paper, DayTwo requires more like a tablespoon. The extra amount is needed for DayTwo’s more comprehensive metagenomics sequencing.

Since you can get a microbiome test from other companies for under $100, does the additional metagenomic information from DayTwo justify its much higher price? Generally, I found the answer is yes.

About two months after I sent my sample, my iPhone lit up with my results in a handy app that gave me a personalized rating for most common foods, graded from A+ to C-. In my case, whole grain breads all rate C-. Slightly better are pasta and oatmeal, each ranked C+. Even “healthy” quinoa — a favorite of gluten-free diets — was a mere B-. Why? DayTwo’s algorithm can’t say precisely, but among the hundreds of thousands of gut microbe and meal combinations it was trained on, it finds that my microbiome doesn’t work well with these grains. They make my blood sugar rise too high.
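The logic described here, a model predicting a personal blood-sugar response to each food and then mapping that prediction onto an A+ through C- scale, can be illustrated with a small sketch. To be clear, this is only my toy illustration: DayTwo's actual model is proprietary machine learning, and the function name, thresholds, and grade bands below are all invented.

```python
def grade_food(predicted_glucose_rise_mgdl: float) -> str:
    """Map a predicted post-meal blood-glucose rise to a letter grade.

    A lower predicted rise earns a better grade, mimicking the A+ .. C-
    scale described in the article. The cutoffs are arbitrary placeholders,
    not DayTwo's real values.
    """
    bands = [
        (10, "A+"), (15, "A"), (20, "A-"),
        (25, "B+"), (30, "B"), (35, "B-"),
        (40, "C+"), (50, "C"),
    ]
    for cutoff, grade in bands:
        if predicted_glucose_rise_mgdl < cutoff:
            return grade
    return "C-"

# Hypothetical personalized predictions for one person:
print(grade_food(12))   # a food predicted to raise glucose only mildly -> "A"
print(grade_food(55))   # a food predicted to spike glucose -> "C-"
```

The point of the personalization is that the *input* to a function like this differs per person: the same whole grain bread might yield a predicted rise of 12 mg/dL for one microbiome and 55 mg/dL for another.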

So what kinds of bread are good for me? How about a butter croissant (B+) or cheese ravioli (A-)? The ultimate bread winner for me: French toast with challah bread (A). These recommendations are very different from the one-size-fits-all advice from the U.S. Department of Agriculture or the American Diabetes Association.

I was also pleased to learn that a Starbucks double chocolate brownie is an A- for me, while a 100-calorie pack of Snyder’s of Hanover pretzels gets a C-. That might go against general diet advice, but an algorithm determined that the thousands of bacterial species inside me tend to metabolize fatty foods in a way that results in healthier blood sugar levels than what I get from high-carb foods. Of course, that’s advice just for me; your mileage may vary.

Although the research behind DayTwo has been well-reviewed for more than five years, the app is new to the U.S., so the built-in food suggestions often seem skewed toward Middle Eastern eaters, perhaps the Israeli subjects who formed the original research cohort. That might explain why the app’s suggestions for me include lamb souvlaki with yogurt garlic dip for dinner (A+) and lamb kabob and a side of lentils (A) for lunch. They sound delicious, but to many American ears they might not have the ring of “pork ribs” or “ribeye steak,” which have the same A+ rating. Incidentally, DayTwo recently completed a $12 million financing round from, among others, Mayo Clinic, which announced it would be validating the research in the U.S., so I expect the menu to expand with more familiar fare.

Fortunately you’re not limited to the built-in menu choices. The app includes a “build a meal” function that lets you enter combinations of foods from a large database that includes packaged items from Trader Joe’s and Whole Foods.

There is much more to the product, such as a graphical rendering of where my microbiome fits on the spectrum of the rest of the population that eats a particular food. Since the microbiome changes constantly, this will help me see what is different when I do a retest and when I try Viome and other tests.

I’ve had my DayTwo results for only a few weeks, so it’s too soon to know what happens if I take the app’s advice over the long term. Thankfully I’m in good health and reasonably fit, but for now I’ll be eating more strawberries (A+) and blackberries (A-), and fewer apples (B-) and bananas (C+). And overall I’m looking forward to a future where each of us will insist on personalized nutritional information. We all have unique microbiomes, and an app like DayTwo lets us finally eat that way too.

Richard Sprague is a technology executive and quantified-self enthusiast who has worked at Apple, Microsoft, and other tech companies. He is now the U.S. CEO of an AI healthcare startup, Airdoc.

==================== APPENDIX: Older Posts about the Microbiome ====================

Microbiome Update
CREDIT: https://www.wsj.com/articles/how-disrupting-your-guts-rhythm-affects-your-health-1488164400?mod=e2tw A healthy community of microbes in the gut maintains regular daily cycles of activities. By LARRY M. GREENBERG Updated Feb. 27, 2017 3:33 p.m. ET New research is helping to unravel the mystery of how […]

Vibrant Health measures microbiome

Home

Microbiome Update
My last research on this subject was in August, 2014. I looked at both microbiomes and proteomics. Today, the New York Times published a very comprehensive update on microbiome research: Link to New York Times Microbiome Article Here is the article itself: = = = = = = = ARTICLE BEGINS HERE = = = […]

Microbiomes
Science is advancing on microbiomes in the gut. The key to food is fiber, and the key to best fiber is long fibers, like cellulose, uncooked or slightly sauteed (cooking shortens fiber length). The best vegetable, in the view of Jeff Leach, is a leek. Eating Well Article on Microbiome = = = = = […]

Arivale Launches LABS company
“Arivale” Launched and Moving Fast. They launched last month. They have 19 people in the Company and a 107 person pilot – but their plans are way more ambitious than that. Moreover: “The founders said they couldn’t envision Arivale launching even two or three years ago.” Read on …. This is an important development: the […]

Precision Wellness at Mt Sinai
Mt Sinai announcement Mount Sinai to Establish Precision Wellness Center to Advance Personalized Healthcare Mount Sinai Health System Launches Telehealth Initiatives Joshua Harris, co-Founder of Apollo Global Management, and his wife, Marjorie, have made a $5 million gift to the Icahn School of Medicine at Mount Sinai to establish the Harris Center for Precision Wellness. […]

Proteomics
“Systems biology…is about putting together rather than taking apart, integration rather than reduction. It requires that we develop ways of thinking about integration that are as rigorous as our reductionist programmes, but different….It means changing our philosophy, in the full sense of the term” (Denis Noble).[5] Proteomics From Wikipedia, the free encyclopedia For the journal […]

GPEE State of Education in GA

On November 9 the Georgia Partnership for Excellence in Education held a Critical Issues Forum on public perception polling and our new policy framework: EdQuest Georgia.

Michael Gilligan (Vice President, Strategic Initiatives) and Hans Voss (Senior Associate) from Achieve presented polling data on public perceptions of education. You can find that data and presentation here: http://www.gpee.org/fileadmin/files/PDFs/Achieve_Georgia_report_deck_for_morning_presentation_110617.pdf

In addition to the public perception polling data, our Director of Policy & Research, Dr. Dana Rickman, presented EdQuest Georgia, the Partnership's new research that identifies seven core areas that states and countries are using to achieve great success with their public education systems. You can find her presentation here: Rickman Presentation

If you were not able to attend and would like to watch the Forum in its entirety, you can view it on our YouTube page here: YouTube Link

For those of you who were able to make it to the Forum, we hope you found it to be informative. Thank you for your participation! Please share these links with anyone who is interested in improving public education in Georgia.

Marketing Automation Software

Pardot believes they are perfecting lead generation through supplemental technologies.

See Pardot Website

Salesforce.com believes in the idea enough to buy them.

At a high level, this is about accepting that most leads are not "hot." Pardot offers a way to nurture all warm leads and to monitor whether they are moving toward cold or hot. If a lead turns hot, Pardot nurtures the relationship in the way most relevant to that lead. They call this smarter lead generation and effortless email marketing.
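The nurture-and-monitor idea boils down to scoring each lead's activity and classifying it as cold, warm, or hot. Here is a minimal sketch of that pattern; Pardot's real scoring rules are configurable and proprietary, and every point value, threshold, and name below is my own invention for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Lead:
    email: str
    score: int = 0
    activities: list = field(default_factory=list)

# Invented point values for common prospect activities.
ACTIVITY_POINTS = {
    "email_open": 1,
    "link_click": 3,
    "pricing_page_visit": 10,
    "demo_request": 25,
}

def record_activity(lead: Lead, activity: str) -> None:
    """Log an activity and bump the lead's score accordingly."""
    lead.activities.append(activity)
    lead.score += ACTIVITY_POINTS.get(activity, 0)

def temperature(lead: Lead) -> str:
    """Classify a lead as cold, warm, or hot by score threshold."""
    if lead.score >= 30:
        return "hot"    # hand off to sales
    if lead.score >= 10:
        return "warm"   # keep nurturing with relevant email
    return "cold"

lead = Lead("prospect@example.com")
record_activity(lead, "email_open")
record_activity(lead, "pricing_page_visit")
print(temperature(lead))   # "warm" (score 11)
record_activity(lead, "demo_request")
print(temperature(lead))   # "hot" (score 36)
```

The design point is simply that nurturing is a loop: every interaction adjusts the score, and the classification (not a one-time label) decides what email the lead gets next.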

My friend Andrew M is a Salesforce expert and swears by them.