
Philip Roth Update

I found this chock full of wisdom:

CREDIT: NYT Interview with Philip Roth

In an exclusive interview, the (former) novelist shares his thoughts on Trump, #MeToo and retirement.

With the death of Richard Wilbur in October, Philip Roth became the longest-serving member in the literature department of the American Academy of Arts and Letters, that august Hall of Fame on Audubon Terrace in northern Manhattan, which is to the arts what Cooperstown is to baseball. He’s been a member so long he can recall when the academy included now all-but-forgotten figures like Malcolm Cowley and Glenway Wescott — white-haired luminaries from another era. Just recently Roth joined William Faulkner, Henry James and Jack London as one of very few Americans to be included in the French Pleiades editions (the model for our own Library of America), and the Italian publisher Mondadori is also bringing out his work in its Meridiani series of classic authors. All this late-life eminence — which also includes the Spanish Prince of Asturias Award in 2012 and being named a commander in the Légion d’Honneur of France in 2013 — seems both to gratify and to amuse him. “Just look at this,” he said to me last month, holding up the ornately bound Mondadori volume, as thick as a Bible and comprising titles like “Lamento di Portnoy” and “Zuckerman Scatenato.” “Who reads books like this?”
In 2012, as he approached 80, Roth famously announced that he had retired from writing. (He actually stopped two years earlier.) In the years since, he has spent a certain amount of time setting the record straight. He wrote a lengthy and impassioned letter to Wikipedia, for example, challenging the online encyclopedia’s preposterous contention that he was not a credible witness to his own life. (Eventually, Wikipedia backed down and redid the Roth entry in its entirety.) Roth is also in regular touch with Blake Bailey, whom he appointed as his official biographer and who has already amassed 1,900 pages of notes for a book expected to be half that length. And just recently, he supervised the publication of “Why Write?,” the 10th and last volume in the Library of America edition of his work. A sort of final sweeping up, a polishing of the legacy, it includes a selection of literary essays from the 1960s and ’70s; the full text of “Shop Talk,” his 2001 collection of conversations and interviews with other writers, many of them European; and a section of valedictory essays and addresses, several published here for the first time. Not accidentally, the book ends with the three-word sentence “Here I am” — between hard covers, that is.
But mostly now Roth leads the quiet life of an Upper West Side retiree. (His house in Connecticut, where he used to seclude himself for extended bouts of writing, he now uses only in the summer.) He sees friends, goes to concerts, checks his email, watches old movies on FilmStruck. Not long ago he had a visit from David Simon, the creator of “The Wire,” who is making a six-part mini-series of “The Plot Against America,” and afterward he said he was sure his novel was in good hands. Roth’s health is good, though he has had several surgeries for a recurring back problem, and he seems cheerful and contented. He’s thoughtful but still, when he wants to be, very funny.
I have interviewed Roth on several occasions over the years, and last month I asked if we could talk again. Like a lot of his readers, I wondered what the author of “American Pastoral,” “I Married a Communist” and “The Plot Against America” made of this strange period we are living in now. And I was curious about how he spent his time. Sudoku? Daytime TV? He agreed to be interviewed but only if it could be done via email. He needed to take some time, he said, and think about what he wanted to say.
C.M. In a few months you’ll turn 85. Do you feel like an elder? What has growing old been like?
P.R. Yes, in just a matter of months I’ll depart old age to enter deep old age — easing ever deeper daily into the redoubtable Valley of the Shadow. Right now it is astonishing to find myself still here at the end of each day. Getting into bed at night I smile and think, “I lived another day.” And then it’s astonishing again to awaken eight hours later and to see that it is morning of the next day and that I continue to be here. “I survived another night,” which thought causes me to smile once more. I go to sleep smiling and I wake up smiling. I’m very pleased that I’m still alive. Moreover, when this happens, as it has, week after week and month after month since I began drawing Social Security, it produces the illusion that this thing is just never going to end, though of course I know that it can stop on a dime. It’s something like playing a game, day in and day out, a high-stakes game that for now, even against the odds, I just keep winning. We will see how long my luck holds out.
C.M. Now that you’ve retired as a novelist, do you ever miss writing, or think about un-retiring?
P.R. No, I don’t. That’s because the conditions that prompted me to stop writing fiction seven years ago haven’t changed. As I say in “Why Write?,” by 2010 I had “a strong suspicion that I’d done my best work and anything more would be inferior. I was by this time no longer in possession of the mental vitality or the verbal energy or the physical fitness needed to mount and sustain a large creative attack of any duration on a complex structure as demanding as a novel…. Every talent has its terms — its nature, its scope, its force; also its term, a tenure, a life span…. Not everyone can be fruitful forever.”
C.M. Looking back, how do you recall your 50-plus years as a writer?
P.R. Exhilaration and groaning. Frustration and freedom. Inspiration and uncertainty. Abundance and emptiness. Blazing forth and muddling through. The day-by-day repertoire of oscillating dualities that any talent withstands — and tremendous solitude, too. And the silence: 50 years in a room silent as the bottom of a pool, eking out, when all went well, my minimum daily allowance of usable prose.
C.M. In “Why Write?” you reprint your famous essay “Writing American Fiction,” which argues that American reality is so crazy that it almost outstrips the writer’s imagination. It was 1960 when you said that. What about now? Did you ever foresee an America like the one we live in today?
P.R. No one I know of has foreseen an America like the one we live in today. No one (except perhaps the acidic H. L. Mencken, who famously described American democracy as “the worship of jackals by jackasses”) could have imagined that the 21st-century catastrophe to befall the U.S.A., the most debasing of disasters, would appear not, say, in the terrifying guise of an Orwellian Big Brother but in the ominously ridiculous commedia dell’arte figure of the boastful buffoon. How naïve I was in 1960 to think that I was an American living in preposterous times! How quaint! But then what could I know in 1960 of 1963 or 1968 or 1974 or 2001 or 2016?
C.M. Your 2004 novel, “The Plot Against America,” seems eerily prescient today. When that novel came out, some people saw it as a commentary on the Bush administration, but there were nowhere near as many parallels then as there seem to be now.
P.R. However prescient “The Plot Against America” might seem to you, there is surely one enormous difference between the political circumstances I invent there for the U.S. in 1940 and the political calamity that dismays us so today. It’s the difference in stature between a President Lindbergh and a President Trump. Charles Lindbergh, in life as in my novel, may have been a genuine racist and an anti-Semite and a white supremacist sympathetic to Fascism, but he was also — because of the extraordinary feat of his solo trans-Atlantic flight at the age of 25 — an authentic American hero 13 years before I have him winning the presidency. Lindbergh, historically, was the courageous young pilot who in 1927, for the first time, flew nonstop across the Atlantic, from Long Island to Paris. He did it in 33.5 hours in a single-seat, single-engine monoplane, thus making him a kind of 20th-century Leif Ericson, an aeronautical Magellan, one of the earliest beacons of the age of aviation. Trump, by comparison, is a massive fraud, the evil sum of his deficiencies, devoid of everything but the hollow ideology of a megalomaniac.
C.M. One of your recurrent themes has been male sexual desire — thwarted desire, as often as not — and its many manifestations. What do you make of the moment we seem to be in now, with so many women coming forth and accusing so many highly visible men of sexual harassment and abuse?
P.R. I am, as you indicate, no stranger as a novelist to the erotic furies. Men enveloped by sexual temptation is one of the aspects of men’s lives that I’ve written about in some of my books. Men responsive to the insistent call of sexual pleasure, beset by shameful desires and the undauntedness of obsessive lusts, beguiled even by the lure of the taboo — over the decades, I have imagined a small coterie of unsettled men possessed by just such inflammatory forces they must negotiate and contend with. I’ve tried to be uncompromising in depicting these men each as he is, each as he behaves, aroused, stimulated, hungry in the grip of carnal fervor and facing the array of psychological and ethical quandaries the exigencies of desire present. I haven’t shunned the hard facts in these fictions of why and how and when tumescent men do what they do, even when these have not been in harmony with the portrayal that a masculine public-relations campaign — if there were such a thing — might prefer. I’ve stepped not just inside the male head but into the reality of those urges whose obstinate pressure by its persistence can menace one’s rationality, urges sometimes so intense they may even be experienced as a form of lunacy. Consequently, none of the more extreme conduct I have been reading about in the newspapers lately has astonished me.
C.M. Before you were retired, you were famous for putting in long, long days. Now that you’ve stopped writing, what do you do with all that free time?
P.R. I read — strangely or not so strangely, very little fiction. I spent my whole working life reading fiction, teaching fiction, studying fiction and writing fiction. I thought of little else until about seven years ago. Since then I’ve spent a good part of each day reading history, mainly American history but also modern European history. Reading has taken the place of writing, and constitutes the major part, the stimulus, of my thinking life.
C.M. What have you been reading lately?
P.R. I seem to have veered off course lately and read a heterogeneous collection of books. I’ve read three books by Ta-Nehisi Coates, the most telling from a literary point of view, “The Beautiful Struggle,” his memoir of the boyhood challenge from his father. From reading Coates I learned about Nell Irvin Painter’s provocatively titled compendium “The History of White People.” Painter sent me back to American history, to Edmund Morgan’s “American Slavery, American Freedom,” a big scholarly history of what Morgan calls “the marriage of slavery and freedom” as it existed in early Virginia. Reading Morgan led me circuitously to reading the essays of Teju Cole, though not before my making a major swerve by reading Stephen Greenblatt’s “The Swerve,” about the circumstances of the 15th-century discovery of the manuscript of Lucretius’ subversive “On the Nature of Things.” This led to my tackling some of Lucretius’ long poem, written sometime in the first century B.C.E., in a prose translation by A. E. Stallings. From there I went on to read Greenblatt’s book about “how Shakespeare became Shakespeare,” “Will in the World.” How in the midst of all this I came to read and enjoy Bruce Springsteen’s autobiography, “Born to Run,” I can’t explain other than to say that part of the pleasure of now having so much time at my disposal to read whatever comes my way invites unpremeditated surprises.
Pre-publication copies of books arrive regularly in the mail, and that’s how I discovered Steven Zipperstein’s “Pogrom: Kishinev and the Tilt of History.” Zipperstein pinpoints the moment at the start of the 20th century when the Jewish predicament in Europe turned deadly in a way that foretold the end of everything. “Pogrom” led me to find a recent book of interpretive history, Yuri Slezkine’s “The Jewish Century,” which argues that “the Modern Age is the Jewish Age, and the 20th century, in particular, is the Jewish Century.” I read Isaiah Berlin’s “Personal Impressions,” his essay-portraits of the cast of influential 20th-century figures he’d known or observed. There is a cameo of Virginia Woolf in all her terrifying genius and there are especially gripping pages about the initial evening meeting in badly bombarded Leningrad in 1945 with the magnificent Russian poet Anna Akhmatova, when she was in her 50s, isolated, lonely, despised and persecuted by the Soviet regime. Berlin writes, “Leningrad after the war was for her nothing but a vast cemetery, the graveyard of her friends. … The account of the unrelieved tragedy of her life went far beyond anything which anyone had ever described to me in spoken words.” They spoke until 3 or 4 in the morning. The scene is as moving as anything in Tolstoy.
Just in the past week, I read books by two friends, Edna O’Brien’s wise little biography of James Joyce and an engagingly eccentric autobiography, “Confessions of an Old Jewish Painter,” by one of my dearest dead friends, the great American artist R. B. Kitaj. I have many dear dead friends. A number were novelists. I miss finding their new books in the mail.
Charles McGrath, a former editor of the Book Review, is a contributing writer for The Times. He is the editor of a Library of America collection of John O’Hara stories.

Tribute to Global Progress

Debbie Downers: attention!

The point of this post: global progress on the fronts that really count has been amazing.

There are many sources. But my favorite is Nick Kristof’s column “Why 2017 Was the Best Year in Human History”. The column was the most emailed column of the week. I now see why. It is reprinted below.

“The most important thing happening right now is not a Trump tweet, but children’s lives saved and major gains in health, education and human welfare.”

Let me step back for a minute.

Fareed Zakaria, in his 2008 book The Post-American World, first raised my awareness of global progress. He began to get my head screwed on correctly.

Don’t get me wrong. I have lived in this fishbowl of global progress my entire life. I have been keenly aware of its major events, such as:

The Industrial Revolution
The Triumph of Democracy
The victories of WWI and WWII
The fall of the Berlin Wall
The rise of global institutions, e.g. the UN, the WTO, the WHO, the World Bank
The rise of the computing revolution
The rise of the internet
The advent of iPhones
The conquest of infectious disease

But Fareed’s take on world events was spectacular in its optimism. He reminded readers that wars can be massive or small, like skirmishes; that peace can be the norm or war can be the norm; that human suffering can be widespread or isolated; and, most of all, he pointed out that the last fifty years have been, on the whole, spectacularly peaceful, wealth-creating, and wellbeing-creating.

I am just like everyone else, though. I need a reminder.

The reminder came to me in Nick Kristof’s column this Sunday.

My favorites:

As recently as the 1960s, a majority of humans:

were illiterate. Now fewer than 15 percent are illiterate;
lived in extreme poverty. Now fewer than 10 percent do.

“In another 15 years, illiteracy and extreme poverty will be mostly gone. After thousands of generations, they are pretty much disappearing on our watch.”

“Just since 1990, the lives of more than 100 million children have been saved by vaccinations, diarrhea treatment, breast-feeding promotion and other simple steps.”

The column is reprinted below, and the data supporting it is linked in the credits.

=================================

CREDIT: https://ourworldindata.org

CREDIT: https://ourworldindata.org/happiness-and-life-satisfaction/

CREDIT: https://www.nytimes.com/2018/01/06/opinion/sunday/2017-progress-illiteracy-poverty.html?smid=nytcore-ipad-share&smprod=nytcore-ipad

Why 2017 Was the Best Year in Human History

We all know that the world is going to hell. Given the rising risk of nuclear war with North Korea, the paralysis in Congress, warfare in Yemen and Syria, atrocities in Myanmar and a president who may be going cuckoo, you might think 2017 was the worst year ever.

But you’d be wrong. In fact, 2017 was probably the very best year in the long history of humanity.

A smaller share of the world’s people were hungry, impoverished or illiterate than at any time before. A smaller proportion of children died than ever before. The proportion disfigured by leprosy, blinded by diseases like trachoma or suffering from other ailments also fell.

We need some perspective as we watch the circus in Washington, hands over our mouths in horror. We journalists focus on bad news — we cover planes that crash, not those that take off — but the backdrop of global progress may be the most important development in our lifetime.

Every day, the number of people around the world living in extreme poverty (less than about $2 a day) goes down by 217,000, according to calculations by Max Roser, an Oxford University economist who runs a website called Our World in Data. Every day, 325,000 more people gain access to electricity. And 300,000 more gain access to clean drinking water.

Readers often assume that because I cover war, poverty and human rights abuses, I must be gloomy, an Eeyore with a pen. But I’m actually upbeat, because I’ve witnessed transformational change.

As recently as the 1960s, a majority of humans had always been illiterate and lived in extreme poverty. Now fewer than 15 percent are illiterate, and fewer than 10 percent live in extreme poverty. In another 15 years, illiteracy and extreme poverty will be mostly gone. After thousands of generations, they are pretty much disappearing on our watch.

Just since 1990, the lives of more than 100 million children have been saved by vaccinations, diarrhea treatment, breast-feeding promotion and other simple steps.

Steven Pinker, the Harvard psychology professor, explores the gains in a terrific book due out next month, “Enlightenment Now,” in which he recounts the progress across a broad array of metrics, from health to wars, the environment to happiness, equal rights to quality of life. “Intellectuals hate progress,” he writes, referring to the reluctance to acknowledge gains, and I know it feels uncomfortable to highlight progress at a time of global threats. But this pessimism is counterproductive and simply empowers the forces of backwardness.

President Trump rode this gloom to the White House. The idea “Make America Great Again” professes a nostalgia for a lost Eden. But really? If that was, say, the 1950s, the U.S. also had segregation, polio and bans on interracial marriage, gay sex and birth control. Most of the world lived under dictatorships, two-thirds of parents had a child die before age 5, and it was a time of nuclear standoffs, of pea soup smog, of frequent wars, of stifling limits on women and of the worst famine in history.

What moment in history would you prefer to live in?
F. Scott Fitzgerald said the test of a first-rate intelligence is the ability to hold two contradictory thoughts at the same time. I suggest these: The world is registering important progress, but it also faces mortal threats. The first belief should empower us to act on the second.

Granted, this column may feel weird to you. Those of us in the columny gig are always bemoaning this or that, and now I’m saying that life is great? That’s because most of the time, quite rightly, we focus on things going wrong. But it’s also important to step back periodically. Professor Roser notes that there was never a headline saying, “The Industrial Revolution Is Happening,” even though that was the most important news of the last 250 years.

I had a visit the other day from Sultana, a young Afghan woman from the Taliban heartland. She had been forced to drop out of elementary school. But her home had internet, so she taught herself English, then algebra and calculus with the help of the Khan Academy, Coursera and EdX websites. Without leaving her house, she moved on to physics and string theory, wrestled with Kant and read The New York Times on the side, and began emailing a distinguished American astrophysicist, Lawrence M. Krauss.

I wrote about Sultana in 2016, and with the help of Professor Krauss and my readers, she is now studying at Arizona State University, taking graduate classes. She’s a reminder of the aphorism that talent is universal, but opportunity is not. The meaning of global progress is that such talent increasingly can flourish.

So, sure, the world is a dangerous mess; I worry in particular about the risk of a war with North Korea. But I also believe in stepping back once a year or so to take note of genuine progress — just as, a year ago, I wrote that 2016 had been the best year in the history of the world, and a year from now I hope to offer similar good news about 2018. The most important thing happening right now is not a Trump tweet, but children’s lives saved and major gains in health, education and human welfare.

Every other day this year, I promise to tear my hair and weep and scream in outrage at all the things going wrong. But today, let’s not miss what’s going right.

A version of this op-ed appears in print on January 7, 2018, on Page SR9 of the New York edition with the headline: Why 2017 Was the Best Year in History

Thought Recognition and BCIs

The Economist kicked off 2018 with a bold prediction: “Brain-computer interfaces may change what it means to be human.”

In its lead article, the magazine suggests that BCIs (brain-computer interfaces) like the BrainGate system are leading the way into a new world: one where mind control works.

I feel like I did in 1979 when I first heard about the Apple II. The whole world was mainframe computing and time-sharing on those monsters, and yet two guys in a garage blew a massive hole through this paradigm, turned it on its head, and invented personal computing.

Think about it: personal computing has been evolving and constantly improving for almost forty years now!

Back then, I could see the future vaguely, in very partial outlines, without much practical effect, but with intense curiosity.

Another example is voice recognition. I still remember being introduced to the subject, way back in …. 1970? I got all excited about it, until I realized …. it sucked! And it wasn’t going to get much better anytime soon. But I remember saying to myself: I can’t be fooled by the first versions of voice recognition. I can’t lull myself to sleep. I need to watch this space because it will evolve and improve over time.

If you think about it, versions 1 through 10 of any technology always suck. The history of speech recognition in the 1950s and 1960s is, well, pathetic.

IBM’s Shoebox was demonstrated at the 1962 World’s Fair.
DARPA got involved in the early 1970s, funding Carnegie Mellon’s Harpy system – a major advance.
Threshold Technology, the first commercial speech recognition company, was founded in that era to commercialize early speech recognition.
And now we have Siri.

And sure enough, after almost 40 years of trying, voice recognition is getting really, really good. Can we see a time within the next 10 years when voice recognition replaces most keyboard applications?

I think so.

And so it is with this subject. We are at the very, very beginning, when it all sounds vague, with partial outlines, without much practical effect, and yet ….. it fills me with intense curiosity.

What could the next fifty years bring?

Is it possible that we will be able to think something, and have that something (a thought? a prescribed action? an essay?) become physical?

Read on…..

===============================

CREDIT: Economist Article on The Next Frontier

TECHNOLOGIES are often billed as transformative. For William Kochevar, the term is justified. Mr Kochevar is paralysed below the shoulders after a cycling accident, yet has managed to feed himself by his own hand. This remarkable feat is partly thanks to electrodes, implanted in his right arm, which stimulate muscles. But the real magic lies higher up. Mr Kochevar can control his arm using the power of thought. His intention to move is reflected in neural activity in his motor cortex; these signals are detected by implants in his brain and processed into commands to activate the electrodes in his arms.
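
The signal chain described above (intention, motor-cortex activity, implanted sensors, decoded commands, electrical stimulation) can be sketched as a simple control loop. The toy Python below is purely illustrative: it is not BrainGate’s actual software, and the linear decoder, the channel count, and the function names are assumptions made for the sketch.

# Illustrative sketch only: a toy version of the record -> decode -> stimulate
# loop the article describes. It is NOT BrainGate's actual software; the class
# name, the linear decoder, and the channel count are assumptions made for
# illustration.
import numpy as np

class ToyIntracorticalDecoder:
    """Maps a window of motor-cortex firing rates to a 2-D arm velocity."""
    def __init__(self, n_channels: int):
        rng = np.random.default_rng(0)
        # In a real system these weights would be fit during a calibration
        # session; here they are random placeholders.
        self.weights = rng.normal(size=(2, n_channels))

    def decode(self, firing_rates: np.ndarray) -> np.ndarray:
        # Linear readout: the intended (vx, vy) velocity of the arm.
        return self.weights @ firing_rates

def stimulate_arm(velocity: np.ndarray) -> None:
    # Placeholder for the functional electrical stimulation step that
    # would drive the implanted arm electrodes.
    print(f"stimulate electrodes for velocity {velocity.round(2)}")

decoder = ToyIntracorticalDecoder(n_channels=96)   # hypothetical channel count
for _ in range(3):                                 # three simulated time windows
    rates = np.random.poisson(lam=5.0, size=96)    # fake spike counts per channel
    stimulate_arm(decoder.decode(rates))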

An ability to decode thought in this way may sound like science fiction. But brain-computer interfaces (BCIs) like the BrainGate system used by Mr Kochevar provide evidence that mind-control can work. Researchers are able to tell what words and images people have heard and seen from neural activity alone. Information can also be encoded and used to stimulate the brain. Over 300,000 people have cochlear implants, which help them to hear by converting sound into electrical signals and sending them into the brain. Scientists have “injected” data into monkeys’ heads, instructing them to perform actions via electrical pulses.

As our Technology Quarterly in this issue explains, the pace of research into BCIs and the scale of its ambition are increasing. Both America’s armed forces and Silicon Valley are starting to focus on the brain. Facebook dreams of thought-to-text typing. Kernel, a startup, has $100m to spend on neurotechnology. Elon Musk has formed a firm called Neuralink; he thinks that, if humanity is to survive the advent of artificial intelligence, it needs an upgrade. Entrepreneurs envisage a world in which people can communicate telepathically, with each other and with machines, or acquire superhuman abilities, such as hearing at very high frequencies.

These powers, if they ever materialise, are decades away. But well before then, BCIs could open the door to remarkable new applications. Imagine stimulating the visual cortex to help the blind, forging new neural connections in stroke victims or monitoring the brain for signs of depression. By turning the firing of neurons into a resource to be harnessed, BCIs may change the idea of what it means to be human.

That thinking feeling
Sceptics scoff. Taking medical BCIs out of the lab into clinical practice has proved very difficult. The BrainGate system used by Mr Kochevar was developed more than ten years ago, but only a handful of people have tried it out. Turning implants into consumer products is even harder to imagine. The path to the mainstream is blocked by three formidable barriers—technological, scientific and commercial.

Start with technology. Non-invasive techniques like an electroencephalogram (EEG) struggle to pick up high-resolution brain signals through intervening layers of skin, bone and membrane. Some advances are being made—on EEG caps that can be used to play virtual-reality games or control industrial robots using thought alone. But for the time being at least, the most ambitious applications require implants that can interact directly with neurons. And existing devices have lots of drawbacks. They involve wires that pass through the skull; they provoke immune responses; they communicate with only a few hundred of the 85bn neurons in the human brain. But that could soon change. Helped by advances in miniaturisation and increased computing power, efforts are under way to make safe, wireless implants that can communicate with hundreds of thousands of neurons. Some of these interpret the brain’s electrical signals; others experiment with light, magnetism and ultrasound.

Clear the technological barrier, and another one looms. The brain is still a foreign country. Scientists know little about how exactly it works, especially when it comes to complex functions like memory formation. Research is more advanced in animals, but experiments on humans are hard. Yet, even today, some parts of the brain, like the motor cortex, are better understood. Nor is complete knowledge always needed. Machine learning can recognise patterns of neural activity; the brain itself gets the hang of controlling BCIs with extraordinary ease. And neurotechnology will reveal more of the brain’s secrets.

Like a hole in the head
The third obstacle comprises the practical barriers to commercialisation. It takes time, money and expertise to get medical devices approved. And consumer applications will take off only if they perform a function people find useful. Some of the applications for brain-computer interfaces are unnecessary—a good voice-assistant is a simpler way to type without fingers than a brain implant, for example. The idea of consumers clamouring for craniotomies also seems far-fetched. Yet brain implants are already an established treatment for some conditions. Around 150,000 people receive deep-brain stimulation via electrodes to help them control Parkinson’s disease. Elective surgery can become routine, as laser-eye procedures show.

All of which suggests that a route to the future imagined by the neurotech pioneers is arduous but achievable. When human ingenuity is applied to a problem, however hard, it is unwise to bet against it. Within a few years, improved technologies may be opening up new channels of communications with the brain. Many of the first applications hold out unambiguous promise—of movement and senses restored. But as uses move to the augmentation of abilities, whether for military purposes or among consumers, a host of concerns will arise. Privacy is an obvious one: the refuge of an inner voice may disappear. Security is another: if a brain can be reached on the internet, it can also be hacked. Inequality is a third: access to superhuman cognitive abilities could be beyond all except a self-perpetuating elite. Ethicists are already starting to grapple with questions of identity and agency that arise when a machine is in the neural loop.

These questions are not urgent. But the bigger story is that neither are they the realm of pure fantasy. Technology changes the way people live. Beneath the skull lies the next frontier.

This article appeared in the Leaders section of the print edition under the headline “The next frontier”

================== REFERENCE: History of Speech Recognition =====

CREDIT:

PC WORLD ARTICLE ON HISTORY OF SPEECH RECOGNITION

Speech Recognition Through the Decades: How We Ended Up With Siri

By Melanie Pinola
PCWorld | NOV 2, 2011 6:00 PM PT

Looking back on the development of speech recognition technology is like watching a child grow up, progressing from the baby-talk level of recognizing single syllables, to building a vocabulary of thousands of words, to answering questions with quick, witty replies, as Apple’s supersmart virtual assistant Siri does.

Listening to Siri, with its slightly snarky sense of humor, made us wonder how far speech recognition has come over the years. Here’s a look at the developments in past decades that have made it possible for people to control devices using only their voice.

1950s and 1960s: Baby Talk
The first speech recognition systems could understand only digits. (Given the complexity of human language, it makes sense that inventors and engineers first focused on numbers.) Bell Laboratories designed in 1952 the “Audrey” system, which recognized digits spoken by a single voice. Ten years later, IBM demonstrated at the 1962 World’s Fair its “Shoebox” machine, which could understand 16 words spoken in English.

Labs in the United States, Japan, England, and the Soviet Union developed other hardware dedicated to recognizing spoken sounds, expanding speech recognition technology to support four vowels and nine consonants.
They may not sound like much, but these first efforts were an impressive start, especially when you consider how primitive computers themselves were at the time.

1970s: Speech Recognition Takes Off

Speech recognition technology made major strides in the 1970s, thanks to interest and funding from the U.S. Department of Defense. The DoD’s DARPA Speech Understanding Research (SUR) program, from 1971 to 1976, was one of the largest of its kind in the history of speech recognition, and among other things it was responsible for Carnegie Mellon’s “Harpy” speech-understanding system. Harpy could understand 1011 words, approximately the vocabulary of an average three-year-old.

Harpy was significant because it introduced a more efficient search approach, called beam search, to “prove the finite-state network of possible sentences,” according to Readings in Speech Recognition by Alex Waibel and Kai-Fu Lee. (The story of speech recognition is very much tied to advances in search methodology and technology, as Google’s entrance into speech recognition on mobile devices proved just a few years ago.)
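
Beam search is easier to see in a tiny example. The Python sketch below runs a beam search over a made-up “finite-state network of possible sentences”; it illustrates the idea Harpy introduced, not Harpy’s actual code, and the network, the scores, and the beam width are all invented.

# Minimal sketch of beam search over a toy "finite-state network of possible
# sentences" -- the idea Harpy introduced, not its actual implementation.
# The network, the probabilities, and the beam width are all made up.
import math

# Each state maps to a list of (word, next_state, log_probability) transitions.
NETWORK = {
    "START": [("<s>", "S1", math.log(1.0))],
    "S1":    [("show", "S2", math.log(0.6)), ("call", "S3", math.log(0.4))],
    "S2":    [("me", "END", math.log(1.0))],
    "S3":    [("home", "END", math.log(1.0))],
}

def beam_search(beam_width: int = 2):
    # Each hypothesis: (cumulative log score, current state, words so far).
    beam = [(0.0, "START", [])]
    finished = []
    while beam:
        candidates = []
        for score, state, words in beam:
            for word, nxt, logp in NETWORK.get(state, []):
                hyp = (score + logp, nxt, words + [word])
                (finished if nxt == "END" else candidates).append(hyp)
        # Prune: keep only the best `beam_width` partial hypotheses.
        beam = sorted(candidates, reverse=True)[:beam_width]
    return max(finished) if finished else None

print(beam_search())   # best-scoring path through the toy sentence network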

The ’70s also marked a few other important milestones in speech recognition technology, including the founding of the first commercial speech recognition company, Threshold Technology, as well as Bell Laboratories’ introduction of a system that could interpret multiple people’s voices.

1980s: Speech Recognition Turns Toward Prediction
Over the next decade, thanks to new approaches to understanding what people say, speech recognition vocabulary jumped from about a few hundred words to several thousand words, and had the potential to recognize an unlimited number of words. One major reason was a new statistical method known as the hidden Markov model.
Rather than simply using templates for words and looking for sound patterns, HMM considered the probability of unknown sounds’ being words. This foundation would be in place for the next two decades (see Automatic Speech Recognition—A Brief History of the Technology Development by B.H. Juang and Lawrence R. Rabiner).
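
The hidden Markov model idea can be made concrete with a minimal sketch: rather than matching a stored template, the recognizer picks the hidden state sequence that makes the observed sounds most probable (the Viterbi computation). Everything below, including the states, the transition and emission probabilities, and the observation symbols, is invented for illustration and is far simpler than any real recognizer.

# Toy illustration of the hidden-Markov-model idea: choose the hidden state
# sequence (here, crude "phone" classes) that makes the observed sounds most
# probable. All probabilities and symbols are invented.
STATES = ["vowel", "consonant"]
START  = {"vowel": 0.5, "consonant": 0.5}
TRANS  = {"vowel": {"vowel": 0.3, "consonant": 0.7},
          "consonant": {"vowel": 0.7, "consonant": 0.3}}
# Probability of each observed acoustic symbol given the hidden state.
EMIT   = {"vowel": {"a": 0.6, "t": 0.1, "s": 0.3},
          "consonant": {"a": 0.1, "t": 0.5, "s": 0.4}}

def viterbi(observations):
    # best[state] = (probability, path) of the most likely path ending in state.
    best = {s: (START[s] * EMIT[s][observations[0]], [s]) for s in STATES}
    for obs in observations[1:]:
        best = {s: max((best[p][0] * TRANS[p][s] * EMIT[s][obs], best[p][1] + [s])
                       for p in STATES)
                for s in STATES}
    return max(best.values())

prob, path = viterbi(["t", "a", "s"])
print(path, prob)   # most likely hidden-state sequence for the observed sounds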

Equipped with this expanded vocabulary, speech recognition started to work its way into commercial applications for business and specialized industry (for instance, medical use). It even entered the home, in the form of Worlds of Wonder’s Julie doll (1987), which children could train to respond to their voice. (“Finally, the doll that understands you.”)

However, whether speech recognition software at the time could recognize 1000 words, as the 1985 Kurzweil text-to-speech program did, or whether it could support a 5000-word vocabulary, as IBM’s system did, a significant hurdle remained: These programs took discrete dictation, so you had … to … pause … after … each … and … every … word.


Fiber’s Role in Diet

In this post, I discuss the role of the microbiome and the role of fiber in supporting a healthy microbiome. The health of a microbiome is reflected in the number and diversity of the bacteria found within it.

If I had to summarize, I would say this: new research strongly confirms that high-fiber diets are healthy diets. Because of this finding, eat 20 to 200 grams of fiber daily by eating nuts, berries, whole grains, beans and vegetables.

The Role of the Microbiome
Bacteria in the gut – the “microbiome” – have been the subject of intense research interest over the last decade.

We now know that a healthy microbiome is essential to health and wellbeing.

On a scientific level, we now know that a healthy biome is one with billions of bacteria, of many kinds.

And specifically, we now know that a healthy biome has a layer of mucus along the walls of the intestine.

“The gut is coated with a layer of mucus, atop which sits a carpet of hundreds of species of bacteria, part of the human microbiome.”

If that mucus layer is thick, it is healthy. If it is thin, it is unhealthy (thin mucus layers have been linked to chronic inflammation). (“Their intestines got smaller, and its mucus layer thinner. As a result, bacteria wound up much closer to the intestinal wall, and that encroachment triggered an immune reaction.”)

The Role of Fiber in Supporting a Healthy Microbiome
“Fiber” refers to roughage from fruits, vegetables, and beans that is hard to digest. If fiber is hard to digest, why is it so universally hailed as “good for you”?

That’s the subject of two newly-reported experiments.

The answer seems to lie in bacteria in the gut – the “microbiome”. Much has been written about their beneficial role in the body. But now it seems that some bacteria in the gut have an additional role: they digest fiber that human enzymes cannot digest.

So some bacteria thrive in the gut because of the fiber they eat. And, in an important natural chain, there are apparently other bacteria in the gut that thrive on the waste of the fiber-eating bacteria. An ecosystem of bacteria, all tracing back to fiber!

This speaks to one of the most-discussed subjects in science today: how and why is one microbiome populated with relatively few bacteria and bacterial types, while another is much more diverse, with many more bacteria and many more types?

One study, covered below, reviews data from the Hadza, a Tanzanian hunter-gatherer group that sustains itself on high-fiber foods. The results, published in Science, show that an ultra-high-fiber diet is associated with very high bacterial counts and diversity.

Other findings suggest that fiber is the food of many bacteria types. Because of this, a diverse, healthy bacterial microbiome is dependent on a fiber-rich diet. (“On a low-fiber diet, they found, the population crashed, shrinking tenfold.”)

Indeed, it may well be true that many types of fibers support many types of bacteria.

Proof of this?

Researchers, including Dr. Gewirtz at Georgia State, found that more fiber seems to be better:

Bad: high-fat, low-fiber (“On a low-fiber diet, they found, the population crashed, shrinking tenfold.” “Many common species became rare, and rare species became common.”)

Good: modest fiber
Better: high-dose fiber (“Despite a high-fat diet, the mice had healthy populations of bacteria in their guts, their intestines were closer to normal, and they put on less weight.”)

Best: high dose of fiber-feeding bacteria
(“Once bacteria are done harvesting the energy in dietary fiber, they cast off the fragments as waste. That waste — in the form of short-chain fatty acids — is absorbed by intestinal cells, which use it as fuel.”)

(“Research suggests that when bacteria break dietary fiber down into short-chain fatty acids, some of them pass into the bloodstream and travel to other organs, where they act as signals to quiet down the immune system.”)

===========================
This article documents fiber-rich foods:

CREDIT: http://www.todaysdietitian.com/newarchives/063008p28.shtml

In recognition of fiber’s benefits, Today’s Dietitian looks at some of the best ways to boost fiber intake, from whole to fortified foods, using data from the USDA National Nutrient Database for Standard Reference.

Top Fiber-Rich Foods
1. Get on the Bran Wagon (Oat bran, All-bran cereal, fiber-one chewy bars, etc)
One simple way to increase fiber intake is to power up on bran. Bran from many grains is very rich in dietary fiber. Oat bran is high in soluble fiber, which has been shown to lower blood cholesterol levels. Wheat, corn, and rice bran are high in insoluble fiber, which helps prevent constipation. Bran can be sprinkled into your favorite foods, from hot cereal and pancakes to muffins and cookies. Many popular high-fiber cereals and bars are also packed with bran.

2. Take a Trip to Bean Town (Limas, Pintos, Lentils, etc)
Beans really are the magical fruit. They are one of the most naturally rich sources of fiber, as well as protein, lysine, vitamins, and minerals, in the plant kingdom. It’s no wonder so many indigenous diets include a bean or two in the mix. Some people experience intestinal gas and discomfort associated with bean intake, so they may be better off slowly introducing beans into their diet. Encourage a variety of beans as an animal protein replacement in stews, side dishes, salads, soups, casseroles, and dips.

3. Go Berry Picking (especially blackberries and raspberries)
Jewel-like berries are in the spotlight due to their antioxidant power, but let’s not forget about their fiber bonus. Berries happen to yield one of the best fiber-per-calorie bargains on the planet. Since berries are packed with tiny seeds, their fiber content is typically higher than that of many fruits. Clients can enjoy berries year-round by making the most of local berries in the summer and eating frozen, preserved, and dried berries during the other seasons. Berries make great toppings for breakfast cereal, yogurt, salads, and desserts.

4. Wholesome Whole Grains (especially barley, oats, brown rice, rye wafers)
One of the easiest ways to up fiber intake is to focus on whole grains. A grain in nature is essentially the entire seed of the plant made up of the bran, germ, and endosperm. Refining the grain removes the germ and the bran; thus, fiber, protein, and other key nutrients are lost. The Whole Grains Council recognizes a variety of grains and defines whole grains or foods made from them as containing “all the essential parts and naturally-occurring nutrients of the entire grain seed. If the grain has been processed, the food product should deliver approximately the same rich balance of nutrients that are found in the original grain seed.” Have clients choose different whole grains as features in side dishes, pilafs, salads, breads, crackers, snacks, and desserts.

5. Sweet Peas (especially frozen green peas, black eyed peas)
Peas, from fresh green peas to dried peas, are naturally chock full of fiber. In fact, food technologists have been studying pea fiber as a functional food ingredient. Clients can make the most of peas by using fresh or frozen green peas and dried peas in soups, stews, side dishes, casseroles, salads, and dips.

6. Green, the Color of Fiber (Spinach, etc)
Deep green, leafy vegetables are notoriously rich in beta-carotene, vitamins, and minerals, but their fiber content isn’t too shabby either. There are more than 1,000 species of plants with edible leaves, many with similar nutritional attributes, including high-fiber content. While many leafy greens are fabulous tossed in salads, sautéing them in olive oil, garlic, lemon, and herbs brings out a rich flavor.

7. Squirrel Away Nuts and Seeds (especially flaxseed and sesame seed)
Go nuts to pack a fiber punch. One ounce of nuts and seeds can provide a hearty contribution to the day’s fiber recommendation, along with a bonus of healthy fats, protein, and phytochemicals. Sprinkling a handful of nuts or seeds over breakfast cereals, yogurt, salads, and desserts is a tasty way to do fiber.

8. Play Squash (especially acorn squash)
Dishing up squash, from summer to winter squash, all year is another way that clients can ratchet up their fiber intake. These nutritious gems are part of the gourd family and contribute a variety of flavors, textures, and colors, as well as fiber, vitamins, minerals, and carotenoids, to the dinner plate. Squash can be turned into soups, stews, side dishes, casseroles, salads, and crudités. Brush squash with olive oil and grill it in the summertime for a healthy, flavorful accompaniment to grilled meats.

9. Brassica or Bust (broccoli, cauliflower, kale, cabbage, and Brussels sprouts)
Brassica vegetables have been studied for their cancer-protective effects associated with high levels of glucosinolates. But these brassy beauties, including broccoli, cauliflower, kale, cabbage, and Brussels sprouts, are also full of fiber. They can be enjoyed in stir-fries, casseroles, soups, and salads and steamed as a side dish.

10. Hot Potatoes
The humble spud, the top vegetable crop in the world, is plump with fiber. Since potatoes are so popular in America, they’re an easy way to help pump up people’s fiber potential. Why stop at Russets? There are numerous potatoes that can provide a rainbow of colors, nutrients, and flavors, and remind clients to eat the skins to reap the greatest fiber rewards. Try adding cooked potatoes with skins to salads, stews, soups, side dishes, stir-fries, and casseroles or simply enjoy baked potatoes more often.

11. Everyday Fruit Basket (especially pears and oranges)
Look no further than everyday fruits to realize your full fiber potential. Many are naturally packed with fiber, as well as other important vitamins and minerals. Maybe the doctor was right when he advised an apple a day, but he could have added pears, oranges, and bananas to the prescription as well. When between fruit seasons, clients can rely on dried fruits to further fortify their diet. Encourage including fruit at breakfast each morning instead of juice; mixing dried fruits into cereals, yogurts, and salads; and reaching for the fruit bowl at snack time. It’s a healthy habit all the way around.

12. Exotic Destinations (especially avocado)
Some of the plants with the highest fiber content in the world may be slightly out of your clients’ comfort zone and, for that matter, time zone. A rainbow of indigenous fruits and vegetables used in cultural food traditions around the globe are very high in fiber. Entice clients to introduce a few new plant foods into their diets to push up the flavor, as well as their fiber, quotient.

13. Fiber Fortification Power
More foods, from juice to yogurt, are including fiber fortification in their ingredient lineup. Such foods may help busy people achieve their fiber goals. As consumer interest in foods with functional benefits, such as digestive health and cardiovascular protection, continues to grow, expect to see an even greater supply of food products promoting fiber content on supermarket shelves.

===========================

This article documents the newly-reported experiments:

CREDIT: NYT Article on Fiber Science

Fiber Is Good for You. Now We Know Why

By Carl Zimmer
Jan. 1, 2018
A diet of fiber-rich foods, such as fruits and vegetables, reduces the risk of developing diabetes, heart disease and arthritis. Indeed, the evidence for fiber’s benefits extends beyond any particular ailment: Eating more fiber seems to lower people’s mortality rate, whatever the cause.

That’s why experts are always saying how good dietary fiber is for us. But while the benefits are clear, it’s not so clear why fiber is so great. “It’s an easy question to ask and a hard one to really answer,” said Fredrik Bäckhed, a biologist at the University of Gothenburg in Sweden.

He and other scientists are running experiments that are yielding some important new clues about fiber’s role in human health. Their research indicates that fiber doesn’t deliver many of its benefits directly to our bodies.

Instead, the fiber we eat feeds billions of bacteria in our guts. Keeping them happy means our intestines and immune systems remain in good working order.

In order to digest food, we need to bathe it in enzymes that break down its molecules. Those molecular fragments then pass through the gut wall and are absorbed in our intestines.
But our bodies make a limited range of enzymes, so that we cannot break down many of the tough compounds in plants. The term “dietary fiber” refers to those indigestible molecules.

But they are indigestible only to us. The gut is coated with a layer of mucus, atop which sits a carpet of hundreds of species of bacteria, part of the human microbiome. Some of these microbes carry the enzymes needed to break down various kinds of dietary fiber.

The ability of these bacteria to survive on fiber we can’t digest ourselves has led many experts to wonder if the microbes are somehow involved in the benefits of the fruits-and-vegetables diet. Two detailed studies published recently in the journal Cell Host and Microbe provide compelling evidence that the answer is yes.

In one experiment, Andrew T. Gewirtz of Georgia State University and his colleagues put mice on a low-fiber, high-fat diet. By examining fragments of bacterial DNA in the animals’ feces, the scientists were able to estimate the size of the gut bacterial population in each mouse.

On a low-fiber diet, they found, the population crashed, shrinking tenfold.

Dr. Bäckhed and his colleagues carried out a similar experiment, surveying the microbiome in mice as they were switched from fiber-rich food to a low-fiber diet. “It’s basically what you’d get at McDonald’s,” Dr. Bäckhed said. “A lot of lard, a lot of sugar, and twenty percent protein.”

The scientists focused on the diversity of species that make up the mouse’s gut microbiome. Shifting the animals to a low-fiber diet had a dramatic effect, they found: Many common species became rare, and rare species became common.

Along with changes to the microbiome, both teams also observed rapid changes to the mice themselves. Their intestines got smaller, and its mucus layer thinner. As a result, bacteria wound up much closer to the intestinal wall, and that encroachment triggered an immune reaction.

After a few days on the low-fiber diet, mouse intestines developed chronic inflammation. After a few weeks, Dr. Gewirtz’s team observed that the mice began to change in other ways, putting on fat, for example, and developing higher blood sugar levels.

Dr. Bäckhed and his colleagues also fed another group of rodents the high-fat menu, along with a modest dose of a type of fiber called inulin. The mucus layer in their guts was healthier than in mice that didn’t get fiber, the scientists found, and intestinal bacteria were kept at a safer distance from their intestinal wall.

Dr. Gewirtz and his colleagues gave inulin to their mice as well, but at a much higher dose. The improvements were even more dramatic: Despite a high-fat diet, the mice had healthy populations of bacteria in their guts, their intestines were closer to normal, and they put on less weight.

Dr. Bäckhed and his colleagues ran one more interesting experiment: They spiked water given to mice on a high-fat diet with a species of fiber-feeding bacteria. The addition changed the mice for the better: Even on a high-fat diet, they produced more mucus in their guts, creating a healthy barrier to keep bacteria from the intestinal walls.

One way that fiber benefits health is by giving us, indirectly, another source of food, Dr. Gewirtz said. Once bacteria are done harvesting the energy in dietary fiber, they cast off the fragments as waste. That waste — in the form of short-chain fatty acids — is absorbed by intestinal cells, which use it as fuel.

But the gut’s microbes do more than just make energy. They also send messages. Intestinal cells rely on chemical signals from the bacteria to work properly, Dr. Gewirtz said. The cells respond to the signals by multiplying and making a healthy supply of mucus. They also release bacteria-killing molecules.
By generating these responses, gut bacteria help maintain a peaceful coexistence with the immune system. They rest atop the gut’s mucus layer at a safe distance from the intestinal wall. Any bacteria that wind up too close get wiped out by antimicrobial poisons.

While some species of gut bacteria feed directly on dietary fiber, they probably support other species that feed on their waste. A number of species in this ecosystem — all of it built on fiber — may be talking to our guts.

Going on a low-fiber diet disturbs this peaceful relationship, the new studies suggest. The species that depend on dietary fiber starve, as do the other species that depend on them. Some species may switch to feeding on the host’s own mucus.

With less fuel, intestinal cells grow more slowly. And without a steady stream of chemical signals from bacteria, the cells slow their production of mucus and bacteria-killing poisons.
As a result, bacteria edge closer to the intestinal wall, and the immune system kicks into high gear.

“The gut is always precariously balanced between trying to contain these organisms and not to overreact,” said Eric C. Martens, a microbiologist at the University of Michigan who was not involved in the new studies. “It could be a tipping point between health and disease.”

Inflammation can help fight infections, but if it becomes chronic, it can harm our bodies. Among other things, chronic inflammation may interfere with how the body uses the calories in food, storing more of it as fat rather than burning it for energy.

Justin L. Sonnenburg, a biologist at Stanford University who was not involved in the new studies, said that a low-fiber diet can cause low-level inflammation not only in the gut, but throughout the body.

His research suggests that when bacteria break dietary fiber down into short-chain fatty acids, some of them pass into the bloodstream and travel to other organs, where they act as signals to quiet down the immune system.

“You can modulate what’s happening in your lung based on what you’re feeding your microbiome in your gut,” Dr. Sonnenburg said.
Hannah D. Holscher, a nutrition scientist at the University of Illinois who was not involved in the new studies, said that the results on mice need to be put to the test in humans. But it’s much harder to run such studies on people.

In her own lab, Dr. Holscher acts as a round-the-clock personal chef. She and her colleagues provide volunteers with all their meals for two weeks. She can then give some of her volunteers an extra source of fiber — such as walnuts — and look for changes in both their microbiome and their levels of inflammation.

Dr. Holscher and other researchers hope that they will learn enough about how fiber influences the microbiome to use it as a way to treat disorders. Lowering inflammation with fiber may also help in the treatment of immune disorders such as inflammatory bowel disease.

Fiber may also help reverse obesity. Last month in the American Journal of Clinical Nutrition, Dr. Holscher and her colleagues reviewed a number of trials in which fiber was used to treat obesity. They found that fiber supplements helped obese people to lose about five pounds, on average.
But for those who want to stay healthy, simply adding one kind of fiber to a typical Western diet won’t be a panacea. Giving mice inulin in the new studies only partly restored them to health.

That’s probably because we depend on a number of different kinds of dietary fiber we get from plants. It’s possible that each type of fiber feeds a particular set of bacteria, which send their own important signals to our bodies.

“It points to the boring thing that we all know but no one does,” Dr. Bäckhed said. “If you eat more green veggies and less fries and sweets, you’ll probably be better off in the long term.”

=====================

CREDIT: https://www.npr.org/sections/goatsandsoda/2017/08/24/545631521/is-the-secret-to-a-healthier-microbiome-hidden-in-the-hadza-diet

Is The Secret To A Healthier Microbiome Hidden In The Hadza Diet?

August 24, 2017, 6:11 PM ET
Heard on All Things Considered

MICHAELEEN DOUCLEFF

The words “endangered species” often conjure up images of big exotic creatures. Think elephants, leopards and polar bears.

But there’s another type of extinction that may be occurring, right now, inside our bodies.

Yes, I’m talking about the microbiome — that collection of bacteria in our intestines that influences everything from metabolism and the immune system to moods and behavior.

For the past few years, scientists around the world have been accumulating evidence that the Western lifestyle is altering our microbiome. Some species of bacteria are even disappearing to undetectable levels.

“Over time we are losing valuable members of our community,” says Justin Sonnenburg, a microbiologist at Stanford University, who has been studying the microbiome for more than a decade.

Now Sonnenburg and his team have evidence for why this microbial die-off is happening — and hints about what we can possibly do to reverse it.

The study, published Thursday in the journal Science, focuses on a group of hunter-gatherers in Tanzania, called Hadza.
Their diet consists almost entirely of food they find in the forest, including wild berries, fiber-rich tubers, honey and wild meat. They basically eat no processed food — or even food that comes from farms.
“They are a very special group of people,” Sonnenburg says. “There are only about 2,200 left and really only about 200 that exclusively adhere to hunting and gathering.”

Sonnenburg and his colleagues analyzed 350 stool samples from Hadza people taken over the course of about a year. They then compared the bacteria found in Hadza with those found in 17 other cultures around the world, including other hunter-gatherer communities in Venezuela and Peru and subsistence farmers in Malawi and Cameroon.

The trend was clear: The further away people’s diets are from a Western diet, the greater the variety of microbes they tend to have in their guts. And that includes bacteria that are missing from American guts.

“So whether it’s people in Africa, Papua New Guinea or South America, communities that live a traditional lifestyle have common gut microbes — ones that we all lack in the industrialized world,” Sonnenburg says.

In a way, the Western diet — low in fiber and high in refined sugars — is basically wiping out species of bacteria from our intestines.

That’s the conclusion Sonnenburg and his team reached after analyzing the Hadza microbiome at one stage of the yearlong study. But when they checked several months later, they uncovered a surprising twist: The composition of the microbiome fluctuated over time, depending on the season and what people were eating. And at one point, the composition started to look surprisingly similar to that of Westerners’ microbiome.

During the dry season, Hadza eat a lot more meat — kind of like Westerners do. And their microbiome shifted as their diet changed. Some of the bacterial species that had been prevalent disappeared to undetectable levels, similar to what’s been observed in Westerners’ guts.

But then in the wet season — when Hadza eat more berries and honey — these missing microbes returned, although the researchers are not really sure what’s in these foods that brings the microbes back.

“I think this finding is really exciting,” says Lawrence David, who studies the microbiome at Duke University. “It suggests the shifts in the microbiome seen in industrialized nations might not be permanent — that they might be reversible by changes in people’s diets.

“The finding supports the idea that the microbiome is plastic, depending on diet,” David adds.

Now the big question is: What’s the key dietary change that could bring the missing microbes back?

David thinks it could be cutting down on fat. “At a high level, it sounds like that,” he says, “because what changed in the Hadza’s diet was whether or not they were hunting versus foraging for berries or honey.”

But Sonnenburg is placing his bets on another dietary component: fiber — which is a vital food for the microbiome.

“We’re beginning to realize that people who eat more dietary fiber are actually feeding their gut microbiome,” Sonnenburg says.

Hadza consume a huge amount of fiber because throughout the year, they eat fiber-rich tubers and fruit from baobab trees. These staples give them about 100 to 150 grams of fiber each day. That’s equivalent to the fiber in 50 bowls of Cheerios — and 10 times more than many Americans eat.
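The arithmetic behind those comparisons is easy to check with the article's own figures. A minimal sketch, assuming the rough per-bowl and typical-American values implied by the text (not precise nutritional data):

```python
# Rough check of the fiber comparison, using the figures quoted above.
# The per-bowl and typical-American values are assumptions implied by the article.
hadza_fiber_g_per_day = 150          # upper end of the quoted 100-150 g range
cheerios_fiber_g_per_bowl = 3        # implied: 150 g spread over ~50 bowls
typical_american_g_per_day = 15      # implied: about one-tenth of the Hadza intake

bowls_equivalent = hadza_fiber_g_per_day / cheerios_fiber_g_per_bowl
ratio_to_american = hadza_fiber_g_per_day / typical_american_g_per_day

print(f"{bowls_equivalent:.0f} bowls of Cheerios")            # ~50
print(f"{ratio_to_american:.0f}x a typical American intake")  # ~10
```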

“Over the past few years, we’ve come to realize how important this gut community is for our health, and yet we’re eating a low-fiber diet that totally neglects them,” he says. “So we’re essentially starving our microbial selves.”

The Dying Algorithm

CREDIT: NYT Article on the Dying Algorithm

This Cat Sensed Death. What if Computers Could, Too?
By Siddhartha Mukherjee
Jan. 3, 2018

Of the many small humiliations heaped on a young oncologist in his final year of fellowship, perhaps this one carried the oddest bite: A 2-year-old black-and-white cat named Oscar was apparently better than most doctors at predicting when a terminally ill patient was about to die. The story appeared, astonishingly, in The New England Journal of Medicine in the summer of 2007. Adopted as a kitten by the medical staff, Oscar reigned over one floor of the Steere House nursing home in Rhode Island. When the cat would sniff the air, crane his neck and curl up next to a man or woman, it was a sure sign of impending demise. The doctors would call the families to come in for their last visit. Over the course of several years, the cat had curled up next to 50 patients. Every one of them died shortly thereafter.
No one knows how the cat acquired his formidable death-sniffing skills. Perhaps Oscar’s nose learned to detect some unique whiff of death — chemicals released by dying cells, say. Perhaps there were other inscrutable signs. I didn’t quite believe it at first, but Oscar’s acumen was corroborated by other physicians who witnessed the prophetic cat in action. As the author of the article wrote: “No one dies on the third floor unless Oscar pays a visit and stays awhile.”
The story carried a particular resonance for me that summer, for I had been treating S., a 32-year-old plumber with esophageal cancer. He had responded well to chemotherapy and radiation, and we had surgically resected his esophagus, leaving no detectable trace of malignancy in his body. One afternoon, a few weeks after his treatment had been completed, I cautiously broached the topic of end-of-life care. We were going for a cure, of course, I told S., but there was always the small possibility of a relapse. He had a young wife and two children, and a mother who had brought him weekly to the chemo suite. Perhaps, I suggested, he might have a frank conversation with his family about his goals?

But S. demurred. He was regaining strength week by week. The conversation was bound to be “a bummah,” as he put it in his distinct Boston accent. His spirits were up. The cancer was out. Why rain on his celebration? I agreed reluctantly; it was unlikely that the cancer would return.

When the relapse appeared, it was a full-on deluge. Two months after he left the hospital, S. returned to see me with sprays of metastasis in his liver, his lungs and, unusually, in his bones. The pain from these lesions was so terrifying that only the highest doses of painkilling drugs would treat it, and S. spent the last weeks of his life in a state bordering on coma, unable to register the presence of his family around his bed. His mother pleaded with me at first to give him more chemo, then accused me of misleading the family about S.’s prognosis. I held my tongue in shame: Doctors, I knew, have an abysmal track record of predicting which of our patients are going to die. Death is our ultimate black box.

In a survey led by researchers at University College London of over 12,000 prognoses of the life span of terminally ill patients, the hits and misses were wide-ranging. Some doctors predicted deaths accurately. Others underestimated death by nearly three months; yet others overestimated it by an equal magnitude. Even within oncology, there were subcultures of the worst offenders: In one story, likely apocryphal, a leukemia doctor was found instilling chemotherapy into the veins of a man whose I.C.U. monitor said that his heart had long since stopped.

But what if an algorithm could predict death? In late 2016 a graduate student named Anand Avati at Stanford’s computer-science department, along with a small team from the medical school, tried to “teach” an algorithm to identify patients who were very likely to die within a defined time window. “The palliative-care team at the hospital had a challenge,” Avati told me. “How could we find patients who are within three to 12 months of dying?” This window was “the sweet spot of palliative care.” A lead time longer than 12 months can strain limited resources unnecessarily, providing too much, too soon; in contrast, if death came less than three months after the prediction, there would be no real preparatory time for dying — too little, too late. Identifying patients in the narrow, optimal time period, Avati knew, would allow doctors to use medical interventions more appropriately and more humanely. And if the algorithm worked, palliative-care teams would be relieved from having to manually scour charts, hunting for those most likely to benefit.

Avati and his team identified about 200,000 patients who could be studied. The patients had all sorts of illnesses — cancer, neurological diseases, heart and kidney failure. The team’s key insight was to use the hospital’s medical records as a proxy time machine. Say a man died in January 2017. What if you scrolled time back to the “sweet spot of palliative care” — the window between January and October 2016 when care would have been most effective? But to find that spot for a given patient, Avati knew, you’d presumably need to collect and analyze medical information before that window. Could you gather information about this man during this prewindow period that would enable a doctor to predict a demise in that three-to-12-month section of time? And what kinds of inputs might teach such an algorithm to make predictions?
Avati drew on medical information that had already been coded by doctors in the hospital: a patient’s diagnosis, the number of scans ordered, the number of days spent in the hospital, the kinds of procedures done, the medical prescriptions written. The information was admittedly limited — no questionnaires, no conversations, no sniffing of chemicals — but it was objective, and standardized across patients.

These inputs were fed into a so-called deep neural network — a kind of software architecture thus named because it’s thought to loosely mimic the way the brain’s neurons are organized. The task of the algorithm was to adjust the weights and strengths of each piece of information in order to generate a probability score that a given patient would die within three to 12 months.
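The article doesn't publish the Stanford team's code or architecture, but the general shape of such a system can be sketched. Below is a minimal, hypothetical illustration in PyTorch: a small feed-forward network that maps coded record features (diagnosis codes, scan counts, hospital days, and so on) to a single probability of death within the three-to-12-month window. The feature count, layer sizes, and training details are assumptions for illustration, not the actual model.

```python
import torch
import torch.nn as nn

# Hypothetical length of the per-patient feature vector: counts of diagnosis
# codes, number of scans, days hospitalized, procedures, prescriptions, etc.
N_FEATURES = 512

# A small feed-forward ("deep") network that outputs a probability of death
# within the 3-to-12-month window.
model = nn.Sequential(
    nn.Linear(N_FEATURES, 256),
    nn.ReLU(),
    nn.Linear(256, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
    nn.Sigmoid(),   # squash the output to a probability between 0 and 1
)

loss_fn = nn.BCELoss()  # binary cross-entropy against the 0/1 outcome label
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(features: torch.Tensor, died_in_window: torch.Tensor) -> float:
    """One gradient step on a batch of (features, 0/1 label) pairs."""
    optimizer.zero_grad()
    prob = model(features).squeeze(1)      # shape: (batch,)
    loss = loss_fn(prob, died_in_window)   # labels are 0.0 or 1.0 floats
    loss.backward()
    optimizer.step()
    return loss.item()
```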

The “dying algorithm,” as we might call it, digested and absorbed information from nearly 160,000 patients to train itself. Once it had ingested all the data, Avati’s team tested it on the remaining 40,000 patients. The algorithm performed surprisingly well. The false-alarm rate was low: Nine out of 10 patients predicted to die within three to 12 months did die within that window. And 95 percent of patients assigned low probabilities by the program survived longer than 12 months. (The data used by this algorithm can be vastly refined in the future. Lab values, scan results, a doctor’s note or a patient’s own assessment can be added to the mix, enhancing the predictive power.)
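To make the two reported figures concrete: the "nine out of 10" number corresponds to positive predictive value (of patients flagged as likely to die within the window, the share who did), and the 95 percent figure to negative predictive value (of patients given low scores, the share who survived past 12 months). A minimal sketch of how those would be computed on a held-out test set, assuming simple 0/1 lists of flags and outcomes:

```python
def predictive_values(flagged, died_in_window):
    """Positive and negative predictive value from parallel 0/1 sequences.

    flagged[i]        -- 1 if the model scored patient i above the alert threshold
    died_in_window[i] -- 1 if patient i actually died within 3 to 12 months
    """
    tp = sum(1 for f, d in zip(flagged, died_in_window) if f and d)
    fp = sum(1 for f, d in zip(flagged, died_in_window) if f and not d)
    tn = sum(1 for f, d in zip(flagged, died_in_window) if not f and not d)
    fn = sum(1 for f, d in zip(flagged, died_in_window) if not f and d)
    ppv = tp / (tp + fp)   # the article's "nine out of 10" figure, ~0.9
    npv = tn / (tn + fn)   # the article's 95 percent figure, ~0.95
    return ppv, npv
```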

So what, exactly, did the algorithm “learn” about the process of dying? And what, in turn, can it teach oncologists? Here is the strange rub of such a deep learning system: It learns, but it cannot tell us why it has learned; it assigns probabilities, but it cannot easily express the reasoning behind the assignment. Like a child who learns to ride a bicycle by trial and error and, asked to articulate the rules that enable bicycle riding, simply shrugs her shoulders and sails away, the algorithm looks vacantly at us when we ask, “Why?” It is, like death, another black box.

Still, when you pry the box open to look at individual cases, you see expected and unexpected patterns. One man assigned a score of 0.946 died within a few months, as predicted. He had had bladder and prostate cancer, had undergone 21 scans, had been hospitalized for 60 days — all of which had been picked up by the algorithm as signs of impending death. But a surprising amount of weight was seemingly put on the fact that scans were made of his spine and that a catheter had been used in his spinal cord — features that I and my colleagues might not have recognized as predictors of dying (an M.R.I. of the spinal cord, I later realized, was most likely signaling cancer in the nervous system — a deadly site for metastasis).
It’s hard for me to read about the “dying algorithm” without thinking about my patient S. If a more sophisticated version of such an algorithm had been available, would I have used it in his case? Absolutely. Might that have enabled the end-of-life conversation S. never had with his family? Yes. But I cannot shake some inherent discomfort with the thought that an algorithm might understand patterns of mortality better than most humans. And why, I kept asking myself, would such a program seem so much more acceptable if it had come wrapped in a black-and-white fur box that, rather than emitting probabilistic outputs, curled up next to us with retracted claws?

Siddhartha Mukherjee is the author of “The Emperor of All Maladies: A Biography of Cancer” and, more recently, “The Gene: An Intimate History.”

Scourge of Opioids

CREDIT: https://www.nationalaffairs.com/publications/detail/taking-on-the-scourge-of-opioids

Taking On the Scourge of Opioids

Sally Satel

Summer 2017

On March 1, 2017, Maryland governor Larry Hogan declared a state of emergency. Heroin and fentanyl, a powerful synthetic opioid, had killed 1,468 Maryland residents in the first nine months of 2016, up 62% from the same period in 2015. Speaking at a command center of the Maryland Emergency Management Agency near Baltimore, the governor announced additional funding to strengthen law enforcement, prevention, and treatment services. “The reality is that this threat is rapidly escalating,” Hogan said.

And it is escalating across the country. Florida governor Rick Scott followed Hogan’s lead in May, declaring a public-health emergency after requests for help from local officials across the state. Arizona governor Doug Ducey did the same in June. In Ohio, some coroners have run out of space for the bodies of overdose victims and have to use a mobile, refrigerated morgue. In West Virginia, state burial funds have been exhausted burying overdose victims. Opioid orphans are lucky if their grandparents can raise them; if not, they are at the mercy of foster-care systems that are now overflowing with the children of addicted parents.

An estimated 2.5 million Americans abuse or are addicted to opioids — a class of highly addictive drugs that includes Percocet, Vicodin, OxyContin, and heroin. Most experts believe this is an undercount, and all agree that the casualty rate is unprecedented. At peak years in an earlier heroin epidemic, from 1973 to 1975, there were 1.5 fatalities per 100,000 Americans. In 2015, the rate was 10.4 per 100,000. In West Virginia, ground zero of the crisis, it was over 36 per 100,000. In raw numbers, more than 33,000 individuals died in 2015 — nearly equal to the number of deaths from car crashes and double the number of gun homicides. Meanwhile, the opioid-related fatalities continue to mount, having quadrupled since 1999.

The roots of the crisis can be traced to the early 1990s when physicians began to prescribe opioid painkillers more liberally. In parallel, overdose deaths from painkillers rose until about 2011. Since then, heroin and synthetic opioids have briskly driven opioid-overdose deaths; they now account for over two-thirds of victims. Synthetic opioids, such as fentanyl, are made mainly in China, shipped to Mexico, and trafficked here. Their menace cannot be overstated.

Fentanyl is 50 times as potent as heroin and can kill instantly. People have been found dead with needles dangling from their arms, the syringe barrels still partly full of fentanyl-containing liquid. One fentanyl analog, carfentanil, is a big-game tranquilizer that’s a staggering 5,000 times more powerful than heroin. This spring, “Gray Death,” a combination of heroin, fentanyl, carfentanil, and other synthetics, has pushed the bounds of lethal chemistry even further. The death rate from synthetics has increased by more than 72% over the space of a single year, from 2014 to 2015. They have transformed an already terrible problem into a true public-health emergency.

The nation has weathered drug epidemics before, but the current affliction — a new plague for a new century, in the words of Nicholas Eberstadt — is different. Today, the addicted are not inner-city minorities, though big cities are increasingly reporting problems. Instead, they are overwhelmingly white and rural, though middle- and upper-class individuals are also affected. The jarring visual of the crisis is not an urban “gang banger” but an overdosed mom slumped in the front seat of her car in a Walmart parking lot, toddler in the back.

It’s almost impossible to survey this devastating tableau and not wonder why the nation’s response has been so slow in coming. Jonathan Caulkins, a drug-policy expert at Carnegie Mellon, offers two theories. One is geography. The prescription-opioid wave crashed down earliest in fly-over states, particularly small cities and rural areas, such as West Virginia and Kentucky, without nationally important media markets. Earlier opioid (heroin) epidemics raged in urban centers, such as New York, Baltimore, Chicago, and Los Angeles.

The second of Caulkins’s plausible explanations is the absence of violence that roiled inner cities in the early 1970s, when President Richard Nixon called drug abuse “public enemy number one.” Dealers do not engage in shooting wars or other gang-related activity. As purveyors of heroin established themselves in the U.S., Mexican bosses deliberately avoided inner cities where heroin markets were dominated by violent gangs. Thanks to a “drive-through” business model perfected by traffickers and executed by discreet runners — farm boys from western Mexico looking to make quick money — heroin can be summoned via text message or cell phone and delivered, like pizza, to homes or handed off in car-to-car transactions. Sources of painkillers are low profile as well. Typically pills are obtained (or stolen) from friends or relatives, physicians, or dealers. The “dark web,” too, is a conduit for synthetics.

It’s hard to miss, too, that this time around, the drug crisis is viewed differently. Heroin users today are widely seen as suffering from an illness. And because that illness has a pale complexion, many have asked, “Where was the compassion for black people?” A racial element cannot be denied, but there are other forces at play, namely that Americans are drug-war weary and law enforcement has incarceration fatigue. It also didn’t help that, in the 1970s, officers were only loosely woven into the fabric of the inner-city minority neighborhoods that were hardest hit. Today, in the small towns where so much of the epidemic plays out, the crisis is personal. Police chiefs, officers, and local authorities will likely have at least one relative, friend, or neighbor with an opioid problem.

If there is reason for optimism in the midst of this crisis, it is that national and local politicians and even police are placing emphasis on treatment over punishment. And, without question, the nation needs considerably more funding for treatment; Congress must step up. Yet the much-touted promise of treatment — and particularly of anti-addiction medications — as a panacea has already been proven wrong. Perhaps “we can’t arrest our way out of the problem,” as officials like to say, but nor are we treating our way out of it. This is because many users reject treatment, and, if they accept it, too many drop out. Engaging drug users in treatment has turned out to be one of the biggest challenges of the epidemic — and one that needs serious attention.

The near-term forecast for this American Carnage, as journalist Christopher Caldwell calls it, is grim. What can be done?

ROOTS OF A CRISIS

In the early 1990s, campaigns for improved treatment of pain gained ground. Analgesia for pain associated with cancer and terminal illness was relatively well accepted, but doctors were leery of medicating chronic conditions, such as joint pain, back pain, and neurological conditions, lest patients become addicted. Then in 1995 the American Pain Society recommended that pain be assessed as the “fifth vital sign” along with the standard four (blood pressure, temperature, pulse, and respiratory rate). In 2001 the influential Joint Commission on Accreditation of Healthcare Organizations established standards for pain management. These standards did not mention opioids, per se, but were interpreted by many physicians as encouraging their use.

These developments had a gradual but dramatic effect on the culture of American medicine. Soon, clinicians were giving an entire month’s worth of Percocet or Lortab to patients with only minor injuries or post-surgical pain that required only a few days of opioid analgesia. Compounding the matter, pharmaceutical companies engaged in aggressive marketing to physicians.

The culture of medical practice contributed as well. Faced with draconian time pressures, a doctor who suspected that his patient was taking too many painkillers rarely had time to talk with him about it. Other time-consuming pain treatments, such as physical therapy or behavioral strategies, were, and remain, less likely to be covered by insurers. Abbreviated visits meant shortcuts, like a quick refill that may not have been warranted, while the need for addiction treatment was overlooked. In addition, clinicians were, and still are, held hostage to ubiquitous “patient-satisfaction surveys.” A poor grade mattered because Medicare and Medicaid rely on these assessments to help determine the amount of reimbursement for care. Clearly, too many incentives pushed toward prescribing painkillers, even when it went against a doctor’s better judgment.

The chief risk of liberal prescribing was not so much that the patient would become addicted — though it happens occasionally — but rather that excess medication fed the rivers of pills that were coursing through many neighborhoods. And as more painkillers began circulating, almost all of them prescribed by physicians, more opportunities arose for non-patients to obtain them, abuse them, and die. OxyContin formed a particularly notorious tributary. Available since 1996, this slow-release form of oxycodone was designed to last up to 12 hours (about six to eight hours longer than immediate-release preparations of oxycodone, such as Percocet). A sustained blood level was meant to be a therapeutic advantage for patients with unremitting pain. To achieve long action, each OxyContin tablet was loaded with a large amount of oxycodone.

Packing a large dose into a single pill presented a major unintended consequence. When it was crushed and snorted or dissolved in water and injected, OxyContin gave a clean, predictable, and enjoyable high. By 2000, reports of abuse of OxyContin began to surface in the Rust Belt — a region rife with injured coal miners who were readily prescribed OxyContin, or, as it came to be called, “hillbilly heroin.” Ohio along with Florida became the “pill mill” capitals of the nation. These mills were advertised as “pain clinics,” but were really cash-only businesses set up to sell painkillers in high volume. The mills employed shady physicians who were licensed to prescribe but knew they weren’t treating authentic patients.

Around 2010 to 2011, law enforcement began cracking down on pill mills. In 2010, OxyContin’s maker, Purdue Pharma, reformulated the pill to make it much harder to crush. In parallel, physicians began to re-examine their prescribing practices and to consider non-opioid options for chronic-pain management. More states created prescription registries so that pharmacists and doctors could detect patients who “doctor shopped” for painkillers and even forged prescriptions. (Today, all states except Missouri have such a registry.) Last year, the American Medical Association recommended that pain be removed as a “fifth vital sign” in professional medical standards.

Controlling the sources of prescription pills was completely rational. Sadly, however, it helped set the stage for a new dimension of the opioid epidemic: heroin and synthetic opioids. Heroin — cheaper and more abundant than painkillers — had flowed into the western U.S. since at least the 1990s, but trafficking east of the Mississippi and into the Rust Belt reportedly began to accelerate around the mid-2000s, a transformative episode in the history of domestic drug problems detailed in Sam Quinones’s superb book Dreamland.

The timing was darkly auspicious. As prescription painkillers became harder to get and more expensive, thanks to alterations of the OxyContin tablet, to law-enforcement efforts, and to growing physician enlightenment, a pool of individuals already primed by their experience with prescription opioids moved on to low-cost, relatively pure, and accessible heroin. Indeed, between 2008 and 2010, about three-fourths of people who had used heroin in the past year reported non-medical use of painkillers — likely obtained outside the health-care system — before initiating heroin use.

The progression from pills to heroin was abetted by the nature of addiction itself. As users became increasingly tolerant to painkillers, they needed larger quantities of opioids or more efficient ways to use them in order to achieve the same effect. Moving from oral consumption to injection allowed this. Once a person is already injecting pills, moving to heroin, despite its stigma, doesn’t seem that big a step. The march to heroin is not inexorable, of course. Yet in economically and socially depleted environments where drug use is normalized, heroin is abundant, and treatment is scarce, widespread addiction seems almost inevitable.

The last five years or so have witnessed a massive influx of powder heroin to major cities such as New York, Detroit, and Chicago. From there, traffickers direct shipments to other urban areas, and these supplies are, in turn, distributed further to rural and suburban areas. It is the powdered form of heroin that is laced with synthetics, such as fentanyl. Most victims of synthetic opioids don’t even know they are taking them. Drug traffickers mix the fentanyl with heroin or press it into pill form that they sell as OxyContin.

Yet, there are reports of addicts now knowingly seeking fentanyl as their tolerance to heroin has grown. Whereas heroin requires poppies, which take time to cultivate, synthetics can be made in a lab, so the supply chain can be downsized. And because the synthetics are so strong, small volumes can be trafficked more efficiently and more profitably. What’s more, laboratories can easily stay one step ahead of the Drug Enforcement Administration by modifying fentanyl into analogs that are more potent, less detectable, or both. Synthetics are also far more deadly: In some regions of the country, roughly two-thirds of deaths from opioids can now be traced to heroin, including heroin that medical examiners either suspect or are certain was laced with fentanyl.

THE BASICS

Terminology is important in discussions about drug use. A 2016 Surgeon General report on addiction, “Facing Addiction in America,” defines “misuse” of a substance as consumption that “causes harm to the user and/or to those around them.” Elsewhere, however, the term has been used to refer to consumption for a purpose not consistent with medical or legal guidelines. Thus, misuse would apply equally to the person who takes an extra pill now and then from his own prescribed supply of Percocet to reduce stress as well as to the person who buys it from a dealer and gets high several times a week. The term “abuse” refers to a consistent pattern of use causing harm, but “misuse,” with its protean definitions, has unhelpfully taken its place in many discussions of the current crisis. In the Surgeon General report, the clinical term “substance use disorder” refers to functionally significant impairment caused by substance use. Finally, “addiction,” while not considered a clinical term, denotes a severe form of substance-use disorder — in other words, compulsive use of a substance with difficulty stopping despite negative consequences.

Much of the conventional wisdom surrounding the opioid crisis holds that virtually anyone is at risk for opioid abuse or addiction — say, the average dental patient who receives some Vicodin for a root canal. This is inaccurate, but unsurprising. Exaggerating risk is a common strategy in public-health messaging: The idea is to garner attention and funding by democratizing affliction and universalizing vulnerability. But this kind of glossing is misleading at best, counterproductive at worst. To prevent and ameliorate problems, we need to know who is truly at risk to target resources where they are most needed.

In truth, the vast majority of people prescribed medication for pain do not misuse it, even those given high doses. A new study in the Annals of Surgery, for example, found that almost three-fourths of all opioid painkillers prescribed by surgeons for five common outpatient procedures go unused. In 2014, 81 million people received at least one prescription for an opioid pain reliever, according to a study in the American Journal of Preventive Medicine; yet during the same year, the National Survey on Drug Use and Health reported that only 1.9 million people, approximately 2%, met the criteria for prescription pain-reliever abuse or dependence (a technical term denoting addiction). Those who abuse their prescription opioids are patients who have been prescribed them for over six months and tend to suffer from concomitant psychiatric conditions, usually a mood or anxiety disorder, or have had prior problems with alcohol or drugs.

Notably, the majority of people who develop problems with painkillers are not individuals for whom they have been legitimately prescribed — nor are opioids the first drug they have misused. Such non-patients procure their pills from friends or family, often helping themselves to the amply stocked medicine chests of unsuspecting relatives suffering from cancer or chronic pain. They may scam doctors, forge prescriptions, or doctor shop. The heaviest users are apt to rely on dealers. Some of these individuals make the transition to heroin, but it is a small fraction. (Still, the death toll is striking given the lethality of synthetic opioids.) One study from the Substance Abuse and Mental Health Services Administration found that less than 5% of pill misusers had moved to heroin within five years of first beginning misuse. These painkiller-to-heroin migrators, according to analyses by the Centers for Disease Control and Prevention, also tend to be frequent users of multiple substances, such as benzodiazepines, alcohol, and cocaine. The transition from these other substances to heroin may represent a natural progression for such individuals.

Thus, factors beyond physical pain are most responsible for making individuals vulnerable to problems with opioids. Princeton economists Anne Case and Angus Deaton paint a dreary portrait of the social determinants of addiction in their work on premature demise across the nation. Beginning in the late 1990s, deaths due to alcoholism-related liver disease, suicide, and opioid overdoses began to climb nationwide. These “deaths of despair,” as Case and Deaton call them, strike less-educated whites, both men and women, between the ages of 45 and 54. While the life expectancy of men and women with a college degree continues to grow, it is actually decreasing for their less-educated counterparts. The problems start with poor job opportunities for those without college degrees. Absent employment, people come unmoored. Families unravel, domestic violence escalates, marriages dissolve, parents are alienated from their children, and their children from them.

Opioids are a salve for these communal wounds. Work by Alex Hollingsworth and colleagues found that residents of locales most severely pummeled by the economic downturn were more susceptible to opioids. As county unemployment rates increased by one percentage point, the opioid death rate (per 100,000) rose by almost 4%, and the emergency-room visit rate for opioid overdoses (per 100,000) increased by 7%. It’s no coincidence that many of the states won by Donald Trump — West Virginia, Kentucky, and Ohio, for example — had the highest rates of fatal drug overdoses in 2015.

Of all prime-working-age male labor-force dropouts, nearly half — roughly 7 million men — take pain medication on a daily basis. “In our mind’s eye,” writes Nicholas Eberstadt in a recent issue of Commentary, “we can now picture many millions of un-working men in the prime of life, out of work and not looking for jobs, sitting in front of screens — stoned.” Medicaid, it turns out, financed many of those stoned hours. Of the entire non-working prime-age white male population in 2013, notes Eberstadt, 57% were reportedly collecting disability benefits from one or more government disability programs. Medicaid enabled them to see a doctor and fill their prescriptions for a fraction of the street value: A single 10-milligram Percocet could go for $5 to $10, the co-pay for an entire bottle.

When it comes to beleaguered communities, one has to wonder how much can be done for people whose reserves of optimism and purposefulness have run so low. The challenge is formidable, to be sure, but breaking the cycle of self-destruction through treatment is a critical first step.

TREATMENT OPTIONS

Perhaps surprisingly, the majority of people who become addicted to any drug, including heroin, quit on their own. But for those who cannot stop using by themselves, treatment is critical, and individuals with multiple overdoses and relapses typically need professional help. Experts recommend at least one year of counseling or anti-addiction medication, and often both. General consensus holds that a standard week of “detoxification” is basically useless, if not dangerous — not only is the person extremely likely to resume use, he is at special risk because he will have lost his tolerance and may easily overdose.

Nor is a standard 28-day stay in a residential facility particularly helpful as a sole intervention. In residential settings many patients acquire a false sense of security about their ability to resist drugs. They are, after all, insulated from the stresses and conditioned cues that routinely provoke drug cravings at home and in other familiar environments. This is why residential care must be followed by supervised transition to treatment in an outpatient setting: Users must continue to learn how to cope without drugs in the social and physical milieus they inhabit every day.

Fortunately, medical professionals are armed with a number of good anti-addiction medications to help patients addicted to opioids. The classic treatment is methadone, first introduced as a maintenance therapy in the 1960s. A newer medication approved by the FDA in 2002 for the treatment of opioid addiction is buprenorphine, or “bupe.” It comes, most popularly, as a strip that dissolves under the tongue. The suggested length of treatment with bupe is a minimum of one or two years. Like methadone, bupe is an opioid. Thus, it can prevent withdrawal, blunt cravings, and produce euphoria. Unlike methadone, however, bupe’s chemical structure makes it much less dangerous if taken in excess, thereby prompting Congress to enact a law, the Drug Addiction Treatment Act of 2000, which allows physicians to prescribe it from their offices. Methadone, by contrast, can only be administered in clinics tightly regulated by the Drug Enforcement Administration and the Substance Abuse and Mental Health Services Administration. (I work in such a clinic.)

In addition to methadone or buprenorphine, which have abuse potential of their own, there is extended-release naltrexone. Administered as a monthly injection, naltrexone is an opioid blocker. A person who is “blocked” normally experiences no effect upon taking an opioid drug. Because naltrexone has no abuse potential (hence no street value), it is favored by the criminal-justice system. Jails and prisons are increasingly offering inmates an injection of naltrexone; one dose is given at five weeks before release and another during the week of release with plans for ongoing treatment as an outpatient. Such protection is warranted given the increased risk for death, particularly from drug-related causes, in the early post-release period. For example, one study of inmates released from the Washington State Department of Corrections found a 10-fold greater risk of overdose death within the first two weeks after discharge compared with non-incarcerated state residents of the same age, sex, and race.

Why Facts Don’t Change Our Minds

CREDIT: New Yorker Article

Why Facts Don’t Change Our Minds
New discoveries about the human mind show the limitations of reason.

By Elizabeth Kolbert

The vaunted human capacity for reason may have more to do with winning arguments than with thinking straight. (Illustration by Gérard DuBois)
In 1975, researchers at Stanford invited a group of undergraduates to take part in a study about suicide. They were presented with pairs of suicide notes. In each pair, one note had been composed by a random individual, the other by a person who had subsequently taken his own life. The students were then asked to distinguish between the genuine notes and the fake ones.

Some students discovered that they had a genius for the task. Out of twenty-five pairs of notes, they correctly identified the real one twenty-four times. Others discovered that they were hopeless. They identified the real note in only ten instances.

As is often the case with psychological studies, the whole setup was a put-on. Though half the notes were indeed genuine—they’d been obtained from the Los Angeles County coroner’s office—the scores were fictitious. The students who’d been told they were almost always right were, on average, no more discerning than those who had been told they were mostly wrong.

In the second phase of the study, the deception was revealed. The students were told that the real point of the experiment was to gauge their responses to thinking they were right or wrong. (This, it turned out, was also a deception.) Finally, the students were asked to estimate how many suicide notes they had actually categorized correctly, and how many they thought an average student would get right. At this point, something curious happened. The students in the high-score group said that they thought they had, in fact, done quite well—significantly better than the average student—even though, as they’d just been told, they had zero grounds for believing this. Conversely, those who’d been assigned to the low-score group said that they thought they had done significantly worse than the average student—a conclusion that was equally unfounded.

“Once formed,” the researchers observed dryly, “impressions are remarkably perseverant.”

A few years later, a new set of Stanford students was recruited for a related study. The students were handed packets of information about a pair of firefighters, Frank K. and George H. Frank’s bio noted that, among other things, he had a baby daughter and he liked to scuba dive. George had a small son and played golf. The packets also included the men’s responses on what the researchers called the Risky-Conservative Choice Test. According to one version of the packet, Frank was a successful firefighter who, on the test, almost always went with the safest option. In the other version, Frank also chose the safest option, but he was a lousy firefighter who’d been put “on report” by his supervisors several times. Once again, midway through the study, the students were informed that they’d been misled, and that the information they’d received was entirely fictitious. The students were then asked to describe their own beliefs. What sort of attitude toward risk did they think a successful firefighter would have? The students who’d received the first packet thought that he would avoid it. The students in the second group thought he’d embrace it.

Even after the evidence “for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs,” the researchers noted. In this case, the failure was “particularly impressive,” since two data points would never have been enough information to generalize from.

The Stanford studies became famous. Coming from a group of academics in the nineteen-seventies, the contention that people can’t think straight was shocking. It isn’t any longer. Thousands of subsequent experiments have confirmed (and elaborated on) this finding. As everyone who’s followed the research—or even occasionally picked up a copy of Psychology Today—knows, any graduate student with a clipboard can demonstrate that reasonable-seeming people are often totally irrational. Rarely has this insight seemed more relevant than it does right now. Still, an essential puzzle remains: How did we come to be this way?

In a new book, “The Enigma of Reason” (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at answering this question. Mercier, who works at a French research institute in Lyon, and Sperber, now based at the Central European University, in Budapest, point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context.

Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.

Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.

The students were asked to respond to two studies. One provided data in support of the deterrence argument, and the other provided data that called it into question. Both studies—you guessed it—were made up, and had been designed to present what were, objectively speaking, equally compelling statistics. The students who had originally supported capital punishment rated the pro-deterrence data highly credible and the anti-deterrence data unconvincing; the students who’d originally opposed capital punishment did the reverse. At the end of the experiment, the students were asked once again about their views. Those who’d started out pro-capital punishment were now even more in favor of it; those who’d opposed it were even more hostile.

If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hypersociability.”

Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.

A recent experiment performed by Mercier and some European colleagues neatly demonstrates this asymmetry. Participants were asked to answer a series of simple reasoning problems. They were then asked to explain their responses, and were given a chance to modify them if they identified mistakes. The majority were satisfied with their original choices; fewer than fifteen per cent changed their minds in step two.

In step three, participants were shown one of the same problems, along with their answer and the answer of another participant, who’d come to a different conclusion. Once again, they were given the chance to change their responses. But a trick had been played: the answers presented to them as someone else’s were actually their own, and vice versa. About half the participants realized what was going on. Among the other half, suddenly people became a lot more critical. Nearly sixty per cent now rejected the responses that they’d earlier been satisfied with.

This lopsidedness, according to Mercier and Sperber, reflects the task that reason evolved to perform, which is to prevent us from getting screwed by the other members of our group. Living in small bands of hunter-gatherers, our ancestors were primarily concerned with their social standing, and with making sure that they weren’t the ones risking their lives on the hunt while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from winning arguments.

Among the many, many issues our forebears didn’t worry about were the deterrent effects of capital punishment and the ideal attributes of a firefighter. Nor did they have to contend with fabricated studies, or fake news, or Twitter. It’s no wonder, then, that today reason often seems to fail us. As Mercier and Sperber write, “This is one of many cases in which the environment changed too quickly for natural selection to catch up.”

Steven Sloman, a professor at Brown, and Philip Fernbach, a professor at the University of Colorado, are also cognitive scientists. They, too, believe sociability is the key to how the human mind functions or, perhaps more pertinently, malfunctions. They begin their book, “The Knowledge Illusion: Why We Never Think Alone” (Riverhead), with a look at toilets.

Virtually everyone in the United States, and indeed throughout the developed world, is familiar with toilets. A typical flush toilet has a ceramic bowl filled with water. When the handle is depressed, or the button pushed, the water—and everything that’s been deposited in it—gets sucked into a pipe and from there into the sewage system. But how does this actually happen?

In a study conducted at Yale, graduate students were asked to rate their understanding of everyday devices, including toilets, zippers, and cylinder locks. They were then asked to write detailed, step-by-step explanations of how the devices work, and to rate their understanding again. Apparently, the effort revealed to the students their own ignorance, because their self-assessments dropped. (Toilets, it turns out, are more complicated than they appear.)

Sloman and Fernbach see this effect, which they call the “illusion of explanatory depth,” just about everywhere. People believe that they know way more than they actually do. What allows us to persist in this belief is other people. In the case of my toilet, someone else designed it so that I can operate it easily. This is something humans are very good at. We’ve been relying on one another’s expertise ever since we figured out how to hunt together, which was probably a key development in our evolutionary history. So well do we collaborate, Sloman and Fernbach argue, that we can hardly tell where our own understanding ends and others’ begins.

“One implication of the naturalness with which we divide cognitive labor,” they write, is that there’s “no sharp boundary between one person’s ideas and knowledge” and “those of other members” of the group.

This borderlessness, or, if you prefer, confusion, is also crucial to what we consider progress. As people invented new tools for new ways of living, they simultaneously created new realms of ignorance; if everyone had insisted on, say, mastering the principles of metalworking before picking up a knife, the Bronze Age wouldn’t have amounted to much. When it comes to new technologies, incomplete understanding is empowering.

Where it gets us into trouble, according to Sloman and Fernbach, is in the political domain. It’s one thing for me to flush a toilet without knowing how it operates, and another for me to favor (or oppose) an immigration ban without knowing what I’m talking about. Sloman and Fernbach cite a survey conducted in 2014, not long after Russia annexed the Ukrainian territory of Crimea. Respondents were asked how they thought the U.S. should react, and also whether they could identify Ukraine on a map. The farther off base they were about the geography, the more likely they were to favor military intervention. (Respondents were so unsure of Ukraine’s location that the median guess was wrong by eighteen hundred miles, roughly the distance from Kiev to Madrid.)

Surveys on many other issues have yielded similarly dismaying results. “As a rule, strong feelings about issues do not emerge from deep understanding,” Sloman and Fernbach write. And here our dependence on other minds reinforces the problem. If your position on, say, the Affordable Care Act is baseless and I rely on it, then my opinion is also baseless. When I talk to Tom and he decides he agrees with me, his opinion is also baseless, but now that the three of us concur we feel that much more smug about our views. If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the Trump Administration.

“This is how a community of knowledge can become dangerous,” Sloman and Fernbach observe. The two have performed their own version of the toilet experiment, substituting public policy for household gadgets. In a study conducted in 2012, they asked people for their stance on questions like: Should there be a single-payer health-care system? Or merit-based pay for teachers? Participants were asked to rate their positions depending on how strongly they agreed or disagreed with the proposals. Next, they were instructed to explain, in as much detail as they could, the impacts of implementing each one. Most people at this point ran into trouble. Asked once again to rate their views, they ratcheted down the intensity, so that they either agreed or disagreed less vehemently.

Sloman and Fernbach see in this result a little candle for a dark world. If we—or our friends or the pundits on CNN—spent less time pontificating and more trying to work through the implications of policy proposals, we’d realize how clueless we are and moderate our views. This, they write, “may be the only form of thinking that will shatter the illusion of explanatory depth and change people’s attitudes.”

One way to look at science is as a system that corrects for people’s natural inclinations. In a well-run laboratory, there’s no room for myside bias; the results have to be reproducible in other laboratories, by researchers who have no motive to confirm them. And this, it could be argued, is why the system has proved so successful. At any given moment, a field may be dominated by squabbles, but, in the end, the methodology prevails. Science moves forward, even as we remain stuck in place.

In “Denying to the Grave: Why We Ignore the Facts That Will Save Us” (Oxford), Jack Gorman, a psychiatrist, and his daughter, Sara Gorman, a public-health specialist, probe the gap between what science tells us and what we tell ourselves. Their concern is with those persistent beliefs which are not just demonstrably false but also potentially deadly, like the conviction that vaccines are hazardous. Of course, what’s hazardous is not being vaccinated; that’s why vaccines were created in the first place. “Immunization is one of the triumphs of modern medicine,” the Gormans note. But no matter how many scientific studies conclude that vaccines are safe, and that there’s no link between immunizations and autism, anti-vaxxers remain unmoved. (They can now count on their side—sort of—Donald Trump, who has said that, although he and his wife had their son, Barron, vaccinated, they refused to do so on the timetable recommended by pediatricians.)

The Gormans, too, argue that ways of thinking that now seem self-destructive must at some point have been adaptive. And they, too, dedicate many pages to confirmation bias, which, they claim, has a physiological component. They cite research suggesting that people experience genuine pleasure—a rush of dopamine—when processing information that supports their beliefs. “It feels good to ‘stick to our guns’ even if we are wrong,” they observe.

The Gormans don’t just want to catalogue the ways we go wrong; they want to correct for them. There must be some way, they maintain, to convince people that vaccines are good for kids, and handguns are dangerous. (Another widespread but statistically insupportable belief they’d like to discredit is that owning a gun makes you safer.) But here they encounter the very problems they have enumerated. Providing people with accurate information doesn’t seem to help; they simply discount it. Appealing to their emotions may work better, but doing so is obviously antithetical to the goal of promoting sound science. “The challenge that remains,” they write toward the end of their book, “is to figure out how to address the tendencies that lead to false scientific belief.”

“The Enigma of Reason,” “The Knowledge Illusion,” and “Denying to the Grave” were all written before the November election. And yet they anticipate Kellyanne Conway and the rise of “alternative facts.” These days, it can feel as if the entire country has been given over to a vast psychological experiment being run either by no one or by Steve Bannon. Rational agents would be able to think their way to a solution. But, on this matter, the literature is not reassuring. ♦

This article appears in the print edition of the February 27, 2017, issue, with the headline “That’s What You Think.”

Elizabeth Kolbert has been a staff writer at The New Yorker since 1999. She won the 2015 Pulitzer Prize for general nonfiction for “The Sixth Extinction: An Unnatural History.”

John C. Reid

Regulatory State and Redistributive State

Will Wilkinson is a great writer, and spells out here two critical aspects of government:

The regulatory state is the aspect of government that protects the public against abuses of private players, protects property rights, and creates well-defined “corridors” that streamline the flows of capitalism and make it work best. It always gets a bad rap, and shouldn’t. The rap is due to the difficulty of enforcing regulations on so many aspects of life.

The redistributive state is the aspect of government that aims to shift income and wealth from certain players in society to other players. The presumption is always one of fairness, whereby society deems it in the interests of all that certain actors, e.g. veterans or seniors, get preferential distributions of some kind.

He goes on to make a great point. These two states are more independent of one another than might at first be apparent. So it is possible to dislike one and like the other.

Personally, I like both. I think both are critical to a well-oiled society with capitalism and property rights as central tenets. My beef will always be with issues of efficiency and effectiveness.

On redistribution, efficiency experts can answer this question: can we dispense with the monthly paperwork and simply direct deposit funds? Medicare now works this way, and the efficiency gains are remarkable.

And on regulation, efficiency experts can answer this question: can private actors certify their compliance with regulation, and then the public actors simply audit from time to time? Many government programs work this way, to the benefit of all.

On redistribution, effectiveness experts can answer this question: Is the homeless population minimal? Are veterans getting what they need? Are seniors satisfied with how government treats them?

On regulation, effectiveness experts can answer this question: Is the air clean? Is the water clean? Is the mortgage market making good loans that help people buy houses? Are complaints about fraudulent consumer practices low?

CREDIT: VOX Article on Economic Freedom by Will Wilkinson

By Will Wilkinson
Sep 1, 2016

American exceptionalism has been propelled by exceptionally free markets, so it’s tempting to think the United States has a freer economy than Western European countries — particularly those soft-socialist Scandinavian social democracies with punishing tax burdens and lavish, even coddling, welfare states. As late as 2000, the American economy was indeed the freest in the West. But something strange has happened since: Economic freedom in the United States has dropped at an alarming rate.

Meanwhile, a number of big-government welfare states have become at least as robustly capitalist as the United States, and maybe more so. Why? Because big welfare states needed to become better capitalists to afford their socialism. This counterintuitive, even paradoxical dynamic suggests a tantalizing hypothesis: America’s shabby, unpopular safety net is at least partly responsible for capitalism’s flagging fortunes in the Land of the Free. Could it be that Americans aren’t socialist enough to want capitalism to work? It makes more sense than you might think.

America’s falling economic freedom

From 1970 to 2000, the American economy was the freest in the West, lagging behind only Asia’s laissez-faire city-states, Hong Kong and Singapore. The average economic freedom rating of the wealthy developed member countries of the Organization for Economic Cooperation and Development (OECD) has slipped a bit since the turn of the millennium, but not as fast as America’s.

“Nowhere has the reversal of the rising trend in the economic freedom been more evident than in the United States,” write the authors of the Fraser Institute’s 2015 Economic Freedom of the World report, noting that “the decline in economic freedom in the United States has been more than three times greater than the average decline found in the OECD.”

The economic freedom of selected countries, 1999 to 2016. Heritage Foundation 2016 Index of Economic Freedom

The Heritage Foundation and the Canadian Fraser Institute each produce an annual index of economic freedom, scoring the world’s countries on four or five main areas, each of which breaks down into a number of subcomponents. The main rubrics include the size of government and tax burdens; protection of property rights and the soundness of the legal system; monetary stability; openness to global trade; and levels of regulation of business, labor, and capital markets. Scores on these areas and subareas are combined to generate an overall economic freedom score.
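
To make that aggregation concrete, here is a minimal sketch in Python. The area names follow the rubrics listed above, but the equal weighting and the sample numbers are my own assumptions for illustration, not the actual methodology of either index.

```python
# Purely illustrative: a toy aggregation of area scores into one overall
# economic freedom score. The equal weighting and the example numbers are
# assumptions for this sketch, not either index's actual methodology.

AREAS = [
    "size_of_government",
    "legal_system_and_property_rights",
    "sound_money",
    "freedom_to_trade_internationally",
    "regulation",
]

def overall_score(area_scores: dict) -> float:
    """Average 0-10 area scores into a single overall score."""
    return sum(area_scores[a] for a in AREAS) / len(AREAS)

# A hypothetical country: strong on sound money and trade, weaker on
# government size and regulation.
example = {
    "size_of_government": 6.5,
    "legal_system_and_property_rights": 7.8,
    "sound_money": 9.2,
    "freedom_to_trade_internationally": 8.1,
    "regulation": 7.0,
}

print(round(overall_score(example), 2))  # 7.72
```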

The rankings reflect right-leaning ideas about what it means for people and economies to be free. Strong labor unions and inequality-reducing redistribution are more likely to hurt than help a country’s score.

So why should you care about some right-wing think tank’s ideologically loaded measure of economic freedom? Because it matters. More economic freedom, so measured, predicts higher rates of economic growth, and higher levels of wealth predict happier, healthier, longer lives. Higher levels of economic freedom are also linked with greater political liberty and civil rights, as well as higher scores on the left-leaning Social Progress Index, which is based on indicators of social justice and human well-being, from nutrition and medical care to tolerance and inclusion.

The authors of the Fraser report estimate that the drop in American economic freedom “could cut the US historic growth rate of 3 percent by half.” The difference between a 1.5 percent and 3 percent growth rate is roughly the difference between the output of the economy tripling rather than octupling in a lifetime. That’s a huge deal.
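
As a rough check on that arithmetic, the sketch below (Python) compounds the two growth rates over an assumed 72-year lifetime; the exact horizon is my assumption, but it shows why 1.5 percent roughly triples output while 3 percent roughly octuples it.

```python
# Rough check of the tripling-vs-octupling claim. The 72-year "lifetime"
# horizon is an assumption for illustration.

def growth_multiple(annual_rate: float, years: int = 72) -> float:
    """Total output multiple after compounding annual_rate for years."""
    return (1 + annual_rate) ** years

print(round(growth_multiple(0.015), 1))  # ~2.9x: output roughly triples
print(round(growth_multiple(0.03), 1))   # ~8.4x: output roughly octuples
```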

Over the same period, the economic freedom scores of Canada and Denmark have improved a lot. According to conservative and libertarian definitions of economic freedom, Canadians, who enjoy a socialized health care system, now have more economic freedom than Americans, and Danes, who have one of the world’s most generous welfare states, have just as much.

What the hell’s going on?

The redistributive state and the regulatory state are separable

To make headway on this question, it is crucial to clearly distinguish two conceptually and empirically separable aspects of “big government” — the regulatory state and the redistributive state.

The redistributive state moves money around through taxes and transfer programs. The regulatory state places all sorts of restrictions and requirements on economic life — some necessary, some not. Most Democrats and Republicans assume that lots of regulation and lots of redistribution go hand in hand, so it’s easy to miss that you can have one without the other, and that the relationship between the two is uneasy at best. But you can’t really understand the politics behind America’s declining economic freedom if you fail to distinguish between the regulatory and fiscal aspects of economic policy.

Standard “supply-side” Republican economic policy thinking says that cuts in tax rates and government spending will unleash latent productive potential in the economy, boosting rates of growth. And indeed, when taxes and government spending are very high, cuts produce gains by returning resources to the private sector. But it’s important to see that questions about government control versus private sector control of economic resources are categorically different from questions about the freedom of markets.

Free markets require the presence of good regulation, which defines and protects property rights and facilitates market processes through the consistent application of clear law, and an absence of bad regulation, which interferes with productive economic activity. A government can tax and spend very little — yet still stomp all over markets. Conversely, a government can withdraw lots of money from the economy through taxes, but still totally nail the optimal balance of good and bad regulation.

Whether a country’s market economy is free — open, competitive, and relatively unmolested by government — is more a question of regulation than a question of taxation and redistribution. It’s not primarily about how “big” its government is. Republicans generally do support a less meddlesome regulatory approach, but when they’re in power they tend to be much more persistent about cutting taxes and social welfare spending than they are about reducing economically harmful regulatory frictions.

If you’re as worried about America’s declining economic freedom as I am, this is a serious problem. In recent years, the effect of cutting taxes and spending has been to distribute income upward and leave the least well-off more vulnerable to bad luck, globalization, “disruptive innovation,” and the vagaries of business cycles.
If spending cuts came out of the military’s titanic budget, that would help. But that’s rarely what happens. The least connected constituencies, not the most expensive ones, are the first to get dinged by budget hawks. And further tax cuts are unlikely to boost growth. Lower taxes make government seem cheaper than it really is, which leads voters to ask for more, not less, government spending, driving up the deficit. Increasing the portion of GDP devoted to paying interest on government debt isn’t a growth-enhancing way to return resources to the private sector.

Meanwhile, wages have been flat or declining for millions of Americans for decades. People increasingly believe the economy is “rigged” in favor of the rich. As a sense of economic insecurity mounts, people anxiously cast about for answers.

Easing the grip of the regulatory state is a good answer. But in the United States, its close association with “free market” supply-side efforts to produce growth by slashing the redistributive state has made it an unattractive answer, even with Republican voters. That’s at least part of the reason the GOP wound up nominating a candidate who, in addition to promising not to cut entitlement spending, openly favors protectionist trade policy, giant infrastructure projects, and huge subsidies to domestic manufacturing and energy production. Donald Trump’s economic policy is the worst of all possible worlds.

This is doubly ironic, and doubly depressing, once you recognize that the sort of big redistributive state supply-siders fight is not necessarily the enemy of economic freedom. On the contrary, high levels of social welfare spending can actually drive political demand for growth-promoting reform of the regulatory state. That’s the lesson of Canada and Denmark’s march up those free economy rankings.

The welfare state isn’t a free lunch, but it is a cheap date

Economic theory tells you that big government ought to hurt economic growth. High levels of taxation reduce the incentive to work, and redistribution is a “leaky bucket”: Moving money around always ends up wasting some of it. Moreover, a dollar spent in the private sector generally has a more beneficial effect on the economy than a dollar spent by the government. Add it all up, and big governments that tax heavily and spend freely on social transfers ought to hurt economic growth.

That matters from a moral perspective — a lot. Other things equal, people are better off on just about every measure of well-being when they’re wealthier. Relative economic equality is nice, but it’s not so nice when relatively equal shares mean smaller shares for everyone. Just as small differences in the rate at which you put money into a savings account can lead to vast differences in your account balance 40 years down the road, thanks to the compounding nature of interest, a small reduction in the rate of economic growth can leave a society’s least well-off people much poorer in absolute terms than they might have been.

Here’s the puzzle. As a general rule, when nations grow wealthier, the public demands more and better government services, increasing government spending as a percentage of GDP. (This is known as “Wagner’s law.”) According to standard growth theory, ongoing increase in the size of government ought to exert downward pressure on rates of growth. But we don’t see the expected effect in the data. Long-term national growth trends are amazingly stable.

And when we look at the family of advanced, liberal democratic countries, countries that spend a smaller portion of national income on social transfer programs gain very little in terms of growth relative to countries that spend much more lavishly on social programs. Peter Lindert, an economist at the University of California Davis, calls this the “free lunch paradox.”

Lindert’s label for the puzzle is somewhat misleading, because big expensive welfare states are, obviously, expensive. And they do come at the expense of some growth. Standard economic theory isn’t completely wrong. It’s just that democracies that have embraced generous social spending have found ways to afford it by minimizing and offsetting its anti-growth effects.

If you’re careful with the numbers, you do in fact find a small negative effect of social welfare spending on growth. Still, according to economic theory, lunch ought to be really expensive. And it’s not.

There are three main reasons big welfare states don’t hurt growth as much as you might think. First, as Lindert has emphasized, they tend to have efficient consumption-based tax systems that minimize market distortions.
When you tax something, people tend to avoid it. If you tax income, as the United States does, people work a little less, which means that certain economic gains never materialize, leaving everyone a little poorer. Taxing consumption, as many of our European peers do, is less likely to discourage productive moneymaking, though it does discourage spending. But that’s not so bad. Less consumption means more savings, and savings puts the capital in capitalism, financing the economic activity that creates growth.

There are other advantages, too. Consumption taxes are usually structured as national sales taxes (or VATs, value-added taxes), which are paid in small amounts on a continuous basis, are extremely cheap to collect (and hard to avoid), while being less in-your-face than income taxes, which further mitigates the counterproductively demoralizing aspect of taxation.

Big welfare states are also more likely to tax addictive stuff, which people tend to buy whatever the price, as well as unhealthy and polluting stuff. That harnesses otherwise fiscally self-defeating tax-avoiding behavior to minimize the costs of health care and environmental damage.
Second, some transfer programs have relatively direct pro-growth effects. Workers are most productive in jobs well-matched to their training and experience, for example, and unemployment benefits offer displaced workers time to find a good, productivity-promoting fit. There’s also some evidence that health care benefits that aren’t linked to employment can promote economic risk-taking and entrepreneurship.

Fans of open-handed redistributive programs tend to oversell this kind of upside for growth, but there really is some. Moreover, it makes sense that the countries most devoted to these programs would fine-tune them over time to amplify their positive-sum aspects.

This is why you can’t assume all government spending affects growth in the same way. The composition of spending — as well as cuts to spending — matters. Cuts to efficiency-enhancing spending can hurt growth as much as they help. And they can really hurt if they increase economic anxiety and generate demand for Trump-like economic policy.

Third, there are lots of regulatory state policies that hurt growth by, say, impeding healthy competition or closing off foreign trade, and if you like high levels of redistribution better than you like those policies, you’ll eventually consider getting rid of some of them. If you do get rid of them, your economic freedom score from the Heritage Foundation and the Fraser Institute goes up.
This sort of compensatory economic liberalization is how big welfare states can indirectly promote growth, and more or less explains why countries like Canada, Denmark, and Sweden have become more robustly capitalist over the past several decades. They needed to be better capitalists to afford their socialism. And it works pretty well.

If you bundle together fiscal efficiency, some offsetting pro-growth effects, and compensatory liberalization, you can wind up with a very big government, with very high levels of social welfare spending and very little negative consequences for growth. Call it “big-government laissez-faire.”

The missing political will for genuine pro-growth reform

Enthusiasts for small government have a ready reply. Fine, they’ll say. Big government can work through policies that offset its drag on growth. But why not a less intrusive regulatory state and a smaller redistributive state: small-government laissez-faire? After all, this is the formula in Hong Kong and Singapore, which rank No. 1 and No. 2 in economic freedom. Clearly that’s our best bet for prosperity-promoting economic freedom.

But this argument ignores two things. First, Hong Kong and Singapore are authoritarian technocracies, not liberal democracies, which suggests (though doesn’t prove) that their special recipe requires nondemocratic government to work. When you bring democracy into the picture, the most important political lesson of the Canadian and Danish rise in economic freedom becomes clear: When democratically popular welfare programs become politically nonnegotiable fixed points, they can come to exert intense pressure on fiscal and economic policy to make them sustainable.

Political demand for economic liberalization has to come from somewhere. But there’s generally very little organic, popular democratic appetite for capitalist creative destruction. Constant “disruption” is scary, the way markets generate wealth and well-being is hard to comprehend, and many of us find competitive profit-seeking intuitively objectionable.

It’s not that Danes and Swedes and Canadians ever loved their “neoliberal” market reforms. They fought bitterly about them and have rolled some of them back. But when their big-government welfare states were creaking under their own weight, enough of the public was willing, thanks to the sense of economic security provided by the welfare state, to listen to experts who warned that the redistributive state would become unsustainable without the downsizing of the regulatory state.

A sound and generous system of social insurance offers a certain peace of mind that makes the very real risks of increased economic dynamism seem tolerable to the democratic public, opening up the political possibility of stabilizing a big-government welfare state with growth-promoting economic liberalization.

This sense of baseline economic security is precisely what many millions of Americans lack.

Learning the lesson of Donald Trump

America’s declining economic freedom is a profoundly serious problem. It’s already putting the brakes on dynamism and growth, leaving millions of Americans with a bitter sense of panic about their prospects. They demand answers. But ordinary voters aren’t policy wonks. When gripped by economic anxiety, they turn to demagogues who promise measures that make intuitive economic sense, but which actually make economic problems worse.

We may dodge a Trump presidency this time, but if we fail to fix the feedback loop between declining economic freedom and an increasingly acute sense of economic anxiety, we risk plunging the world’s biggest economy and the linchpin of global stability into a political and economic death spiral. It’s a ridiculous understatement to say that it’s important that this doesn’t happen.

Market-loving Republicans and libertarians need to stare hard at a framed picture of Donald Trump and reflect on the idea that a stale economic agenda focused on cutting taxes and slashing government spending is unlikely to deliver further gains. It is instead likely to continue to backfire by exacerbating economic anxiety and the public’s sense that the system is rigged.

If you gaze at the Donald long enough, his fascist lips will whisper “thank you,” and explain that the close but confusing identification of supply-side fiscal orthodoxy with “free market” economic policy helps authoritarian populists like him — but it hurts the political prospects of regulatory state reforms that would actually make American markets freer.

Will Wilkinson is the vice president for policy at the Niskanen Center.

Property Rights and Modern Conservatism



In this excellent essay, Will Wilkinson, one of my favorite conservative writers, takes Congress to task for its ridiculous botched-job-with-a-botched-process passage of the Tax Cut legislation in 2017.

But I am blogging because of his other points.

In the article, he spells out some tenets of modern conservatism that bear repeating, namely:

– property rights (and the Murray Rothbard extreme positions of absolute property rights)
– economic freedom (“…if we tax you at 100 percent, then you’ve got 0 percent liberty…If we tax you at 50 percent, you are half-slave, half-free”)
– libertarianism (“The key is the libertarian idea, woven into the right’s ideological DNA, that redistribution is the exploitation of the “makers” by the “takers.”)
– legally enforceable rights
– moral traditionalism

Modern conservatism is a “fusion” of these ideas. Together they have an impressive intellectual footing.

But Will points out where they are flawed. The flaws are most apparent in the idea that the hordes want to use democratic institutions to plunder the wealth of the elites. This is a notion from the days when communism was public enemy #1. He points out that the opposite is actually true.

“Far from endangering property rights by facilitating redistribution, inclusive democratic institutions limit the “organized banditry” of the elite-dominated state by bringing everyone inside the charmed circle of legally enforced rights.”

Ironically, the new Tax Cut legislation is an example of reverse plunder: where the wealthy get the big, permanent gains and the rest get appeased with small cuts that expire.

So, we are very far from the fears of communism. Instead, we are amid a taking by the haves from the have-nots.

====================
Credit: New York Times 12/20/17 Op-Ed by Will Wilkinson

Opinion | OP-ED CONTRIBUTOR
The Tax Bill Shows the G.O.P.’s Contempt for Democracy
By WILL WILKINSON
DEC. 20, 2017
The Republican Tax Cuts and Jobs Act is notably generous to corporations, high earners, inheritors of large estates and the owners of private jets. Taken as a whole, the bill will add about $1.4 trillion to the deficit in the next decade and trigger automatic cuts to Medicare and other safety net programs unless Congress steps in to stop them.

To most observers on the left, the Republican tax bill looks like sheer mercenary cupidity. “This is a brazen expression of money power,” Jesse Jackson wrote in The Chicago Tribune, “an example of American plutocracy — a government of the wealthy, by the wealthy, for the wealthy.”

Mr. Jackson is right to worry about the wealthy lording it over the rest of us, but the open contempt for democracy displayed in the Senate’s slapdash rush to pass the tax bill ought to trouble us as much as, if not more than, what’s in it.

In its great haste, the “world’s greatest deliberative body” held no hearings or debate on tax reform. The Senate’s Republicans made sloppy math mistakes, crossed out and rewrote whole sections of the bill by hand at the 11th hour and forced a vote on it before anyone could conceivably read it.

The link between the heedlessly negligent style and anti-redistributive substance of recent Republican lawmaking is easy to overlook. The key is the libertarian idea, woven into the right’s ideological DNA, that redistribution is the exploitation of the “makers” by the “takers.” It immediately follows that democracy, which enables and legitimizes this exploitation, is itself an engine of injustice. As the novelist Ayn Rand put it, under democracy “one’s work, one’s property, one’s mind, and one’s life are at the mercy of any gang that may muster the vote of a majority.”

On the campaign trail in 2015, Senator Rand Paul, Republican of Kentucky, conceded that government is a “necessary evil” requiring some tax revenue. “But if we tax you at 100 percent, then you’ve got 0 percent liberty,” Mr. Paul continued. “If we tax you at 50 percent, you are half-slave, half-free.” The speaker of the House, Paul Ryan, shares Mr. Paul’s sense of the injustice of redistribution. He’s also a big fan of Ayn Rand. “I give out ‘Atlas Shrugged’ as Christmas presents, and I make all my interns read it,” Mr. Ryan has said. If the big-spending, democratic welfare state is really a system of part-time slavery, as Ayn Rand and Senator Paul contend, then beating it back is a moral imperative of the first order.

But the clock is ticking. Looking ahead to a potentially paralyzing presidential scandal, midterm blood bath or both, congressional Republicans are in a mad dash to emancipate us from the welfare state. As they see it, the redistributive upshot of democracy is responsible for the big-government mess they’re trying to bail us out of, so they’re not about to be tender with the niceties of democratic deliberation and regular parliamentary order.

The idea that there is an inherent conflict between democracy and the integrity of property rights is as old as democracy itself. Because the poor vastly outnumber the propertied rich — so the argument goes — if allowed to vote, the poor might gang up at the ballot box to wipe out the wealthy.

In the 20th century, and in particular after World War II, with voting rights and Soviet Communism on the march, the risk that wealthy democracies might redistribute their way to serfdom had never seemed more real. Radical libertarian thinkers like Rand and Murray Rothbard (who would be a muse to both Charles Koch and Ron Paul) responded with a theory of absolute property rights that morally criminalized taxation and narrowed the scope of legitimate government action and democratic discretion nearly to nothing. “What is the State anyway but organized banditry?” Rothbard asked. “What is taxation but theft on a gigantic, unchecked scale?”

Mainstream conservatives, like William F. Buckley, banished radical libertarians to the fringes of the conservative movement to mingle with the other unclubbables. Still, the so-called fusionist synthesis of libertarianism and moral traditionalism became the ideological core of modern conservatism. For hawkish Cold Warriors, libertarianism’s glorification of capitalism and vilification of redistribution was useful for immunizing American political culture against viral socialism. Moral traditionalists, struggling to hold ground against rising mass movements for racial and gender equality, found much to like in libertarianism’s principled skepticism of democracy. “If you analyze it,” Ronald Reagan said, “I believe the very heart and soul of conservatism is libertarianism.”

The hostility to redistributive democracy at the ideological center of the American right has made standard policies of successful modern welfare states, happily embraced by Europe’s conservative parties, seem beyond the moral pale for many Republicans. The outsize stakes seem to justify dubious tactics — bunking down with racists, aggressive gerrymandering, inventing paper-thin pretexts for voting rules that disproportionately hurt Democrats — to prevent majorities from voting themselves a bigger slice of the pie.

But the idea that there is an inherent tension between democracy and the integrity of property rights is wildly misguided. The liberal-democratic state is a relatively recent historical innovation, and our best accounts of the transition from autocracy to democracy point to the role of democratic political inclusion in protecting property rights.

As Daron Acemoglu of M.I.T. and James Robinson of Harvard show in “Why Nations Fail,” ruling elites in pre-democratic states arranged political and economic institutions to extract labor and property from the lower orders. That is to say, the system was set up to make it easy for elites to seize what ought to have been other people’s stuff.

In “Inequality and Democratization,” the political scientists Ben W. Ansell and David J. Samuels show that the demand for political inclusion generally isn’t driven by a desire to use the existing institutions to plunder the elites. It’s driven by a desire to keep the elites from continuing to plunder them.

It’s easy to say that everyone ought to have certain rights. Democracy is how we come to get and protect them. Far from endangering property rights by facilitating redistribution, inclusive democratic institutions limit the “organized banditry” of the elite-dominated state by bringing everyone inside the charmed circle of legally enforced rights.

Democracy is fundamentally about protecting the middle and lower classes from redistribution by establishing the equality of basic rights that makes it possible for everyone to be a capitalist. Democracy doesn’t strangle the golden goose of free enterprise through redistributive taxation; it fattens the goose by releasing the talent, ingenuity and effort of otherwise abused and exploited people.

At a time when America’s faith in democracy is flagging, the Republicans elected to treat the United States Senate, and the citizens it represents, with all the respect college guys accord public restrooms. It’s easier to reverse a bad piece of legislation than the bad reputation of our representative institutions, which is why the way the tax bill was passed is probably worse than what’s in it. Ultimately, it’s the integrity of democratic institutions and the rule of law that gives ordinary people the power to protect themselves against elite exploitation. But the Republican majority is bulldozing through basic democratic norms as though freedom has everything to do with the tax code and democracy just gets in the way.

Will Wilkinson is the vice president for policy at the Niskanen Center.

Neo.Life

This beta site, NeoLife (the link goes beyond the splash page), is tracking the “neobiological revolution”. I wholeheartedly agree that some of our best and brightest are on the case. Here they are:

ABOUT
NEO.LIFE
Making Sense of the Neobiological Revolution
NOTE FROM THE EDITOR
Mapping the brain, sequencing the genome, decoding the microbiome, extending life, curing diseases, editing mutations. We live in a time of awe and possibility — and also enormous responsibility. Are you prepared?

EDITORS

FOUNDER

Jane Metcalfe
Founder of Neo.life. Entrepreneur in media (Wired) and food (TCHO). Lover of mountains, horses, roses, and kimchee, though not necessarily in that order.

EDITOR
Brian Bergstein
Story seeker and story teller. Editor at NEO.LIFE. Former executive editor of MIT Technology Review; former technology & media editor at The Associated Press

ART DIRECTOR
Nicholas Vokey
Los Angeles-based graphic designer and animator.

CONSULTANT
Saul Carlin
founder @subcasthq. used to work here.

EDITOR
Rachel Lehmann-Haupt
Editor, www.theartandscienceoffamily.com & NEO.LIFE, author of In Her Own Sweet Time: Egg Freezing and the New Frontiers of Family

Laura Cochrane
“To oppose something is to maintain it.” — Ursula K. Le Guin

WRITERS

Amanda Schaffer
writes for the New Yorker and Neo.life, and is a former medical columnist for Slate. @abschaffer

Mallory Pickett
freelance journalist in Los Angeles

Karen Weintraub
Health/Science journalist passionate about human health, cool researcher and telling stories.

Anna Nowogrodzki
Science and tech journalist. Writing in Nature, National Geographic, Smithsonian, mental_floss, & others.

Juan Enriquez
Best-selling author, Managing Director of Excel Venture Management.

Christina Farr
Tech and features writer. @Stanford grad.

NEO.LIFE
Making sense of the Neobiological Revolution. Get the email at www.neo.life.

Maria Finn
I’m an author and tell stories across multiple mediums including prose, food, gardens, technology & narrative mapping. www.mariafinn.com Instagram maria_finn1.

Stephanie Pappas
I write about science, technology and the things people do with them.

David Eagleman
Neuroscientist at Stanford, internationally bestselling author of fiction and non-fiction, creator and presenter of PBS’ The Brain.

Kristen V. Brown
Reporter @Gizmodo covering biotech.

Thomas Goetz

David Ewing Duncan
Life science journalist; bestselling author, 9 books; NY Times, Atlantic, Wired, Daily Beast, NPR, ABC News, more; Curator, Arc Fusion www.davidewingduncan.com

Dorothy Santos
writer, editor, curator, and educator based in the San Francisco Bay Area about.me/dorothysantos.com

Dr. Sophie Zaaijer
CEO of PlayDNA, Postdoctoral fellow at the New York Genome Center, Runway postdoc at Cornell Tech.

Andrew Rosenblum
I’m a freelance tech writer based in Oakland, CA. You can find my work at Neo.Life, the MIT Technology Review, Popular Science, and many other places.

Zoe Cormier

Diana Crow
Fledgling science journalist here, hoping to foster discussion about the ways science acts as a catalyst for social change #biology

Ashton Applewhite
Calling for a radical aging movement. Anti-ageism blog+talk+book

Grace Rubenstein
Journalist, editor, media producer. Social/bio science geek. Tweets on health science, journalism, immigration. Spanish speaker & dancing fool.

Science and other sundries.

Esther Dyson
Internet court jEsther — I occupy Esther Dyson. Founder @HICCup_co https://t.co/5dWfUSratQ http://t.co/a1Gmo3FTQv

Jessica Leber
Freelance science and technology journalist and editor, formerly on staff at Fast Company, Vocativ, MIT Technology Review, and ClimateWire.

Jessica Carew Kraft
An anthropologist, artist, and naturalist writing about health, education, and rewilding. Mother to two girls in San Francisco.

Corby Kummer
Senior editor, The Atlantic, five-time James Beard Journalism Award winner, restaurant reviewer for New York, Boston, and Atlanta magazines

K McGowan
Journalist. Reporting on health, medicine, science, other excellent things. T: @mcgowankat

Rob Waters
I’m a journalist living in Berkeley. I write about health, science, social justice and policy. Father of 1. From Detroit.

Yiting Sun
writes for MIT Technology Review and Neo.life from Beijing, and was based in Accra, Ghana, in 2014 and 2015.

Michael Hawley

Richard Sprague
Curious amateur. Years of near-daily microbiome experiments. US CEO of AI healthcare startup http://airdoc.com

Bob Parks ✂
Connoisseur of the slap dash . . . maker . . . runner . . . writer of Outside magazine’s Gear Guy blog . . . freelance writer and reporter.

CREDIT: https://medium.com/neodotlife/review-of-daytwo-microbiome-test-deacd5464cd5