Author Archives: reidcurtis

Prevention Revisited

The essay below argues for the quality-of-life benefits of prevention. But its conclusions about whether prevention saves money are depressing.

Still, I want to consider it. If prevention doesn’t save money, that goes against every intuition I have ever had on the subject.

The source of this essay is worth considering. As the appendix below shows, Dr. Aaron Carroll just published a book arguing that bad foods are not so bad – in moderation. This is a conclusion I happen to agree with; I agree with “all things in moderation”.
 
For example, a primary conclusion is that insuring people makes them more, rather than less, likely to use the emergency room. But this conclusion is about insurance, not prevention, and speaks to people’s need for convenient access to health care.

A second example: anti-smoking. The essay’s conclusion is outrageous: it says that society will pay more because people who stop smoking will live longer! So, if society wishes to reduce costs, a mass euthanasia program at, say, age 67 would really do the trick!
 
I publish but do not endorse…

================

CREDIT: Essay in The New York Times

THE NEW HEALTH CARE

Preventive Care Saves Money? Sorry, It’s Too Good to Be True

Contrary to conventional wisdom, it tends to cost money, but it improves quality of life at a very reasonable price.

By Aaron E. Carroll
Jan. 29, 2018

The idea that spending more on preventive care will reduce overall health care spending is widely believed and often promoted as a reason to support reform. It’s thought that too many people with chronic illnesses wait until they are truly ill before seeking care, often in emergency rooms, where it costs more. It should follow then that treating diseases earlier, or screening for them before they become more serious, would wind up saving money in the long run.

Unfortunately, almost none of this is true.

Let’s begin with emergency rooms, which many people believed would get less use after passage of the Affordable Care Act. The opposite occurred. It’s not just the A.C.A. The Oregon Medicaid Health Insurance experiment, which randomly chose some uninsured people to get Medicaid before the A.C.A. went into effect, also found that insurance led to increased use of emergency medicine. Massachusetts saw the same effect after it introduced a program to increase the number of insured residents.

Emergency room care is not free, after all. People didn’t always choose it because they couldn’t afford to go to a doctor’s office. They often went there because it was more convenient. When we decreased the cost for people to use that care, many used it more.

Wellness programs, based on the idea that we can save money on health care by giving people incentives to be healthy, don’t actually work this way. As my colleague Austin Frakt and I have found from reviewing the research in detail, these programs don’t decrease costs — at least not without being discriminatory.

Accountable care organizations rely on the premise that improving outpatient and preventive care, perhaps with improved management and coordination of services for those with chronic conditions, will save money. But a recent study in Health Affairs showed that care coordination and management initiatives in the outpatient setting haven’t been drivers of savings in the Medicare Shared Savings Program.

There’s little reason to believe that even more preventive care in general is going to save a fortune. A study published in Health Affairs in 2010 looked at 20 proven preventive services, all of them recommended by the United States Preventive Services Task Force. These included immunizations, counseling, and screening for disease. Researchers modeled what would happen if up to 90 percent of these services were used, which is much higher than we currently see.

They found that this probably would have saved about $3.7 billion in 2006. That might sound like a lot, until you realize that this was about 0.2 percent of personal health care spending that year. It’s a pittance — and that was with almost complete compliance with recommendations.
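The “pittance” claim is easy to sanity-check. The essay gives the $3.7 billion savings figure but not total spending, so the denominator below is an assumed round figure of roughly $1.8 trillion for 2006 personal health care spending:

```python
# Rough sanity check on the essay's "about 0.2 percent" figure.
# Assumption (not from the essay): U.S. personal health care spending
# in 2006 was roughly $1.8 trillion.
savings = 3.7e9          # modeled savings from near-universal preventive care
total_spending = 1.8e12  # assumed 2006 personal health care spending
share = savings / total_spending
print(f"{share:.2%}")
```

Even with a somewhat different denominator, the share stays in the same two-tenths-of-a-percent range the essay describes.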

One reason for this is that all prevention is not the same. The task force doesn’t model costs in its calculations; it models effectiveness and the balance of benefits and harms. When something works, and its positive effects outweigh its adverse ones, a recommendation is made.

This doesn’t mean it saves money.

In 2009, as part of the Robert Wood Johnson Foundation’s Synthesis Project, Sarah Goodell, Joshua Cohen and Peter Neumann exhaustively explored the evidence. They examined more than 500 peer-reviewed studies that looked at primary (stopping something from happening in the first place) or secondary (stopping something from getting worse) prevention. Of all the interventions they looked at, only two were truly cost-saving: childhood immunizations (a no-brainer) and the counseling of adults on the use of low-dose aspirin. An additional 15 preventive services were cost-effective, meaning that they cost less than $50,000 to $100,000 per quality-adjusted life-year gained.
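The distinction between “cost-saving” and “cost-effective” is just arithmetic on net cost per quality-adjusted life-year (QALY). A minimal sketch, with entirely made-up numbers for illustration:

```python
# "Cost-saving" means the intervention's net cost is negative;
# "cost-effective" (by the thresholds the essay cites) means it costs
# less than $50,000-$100,000 per QALY gained.
def cost_per_qaly(intervention_cost, downstream_savings, qalys_gained):
    """Net cost per quality-adjusted life-year gained."""
    net_cost = intervention_cost - downstream_savings
    return net_cost / qalys_gained

# Hypothetical screening program: costs $12M, avoids $4M in later
# treatment, and gains 250 QALYs across the screened population.
ratio = cost_per_qaly(12_000_000, 4_000_000, 250)
print(ratio)  # 32000.0 -> under $50,000/QALY: cost-effective, not cost-saving
```

Only when downstream savings exceed the intervention’s cost (as with childhood immunizations) does the ratio go negative and the program actually save money.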

But all of these analyses looked within the health care system only. If we really want to know whether prevention saves money, maybe we should take a wider perspective. Does spending on prevention save the country money over all?

A recent report from the Congressional Budget Office in the New England Journal of Medicine suggests the answer is no. The budget office modeled how a policy to reduce smoking through higher cigarette taxes might affect federal spending. It found that such a tax would cause many people to quit smoking — the desired result. In the short term, less smoking would lead to decreased spending because of reductions in health care spending for those who had smoked.

But in the long run, all of those people living longer would lead to increases in spending in many programs, including health care. The more people who quit smoking, the higher the deficit — even with the increased revenue from taxing cigarettes.

But money doesn’t have to be saved to make something worthwhile. Prevention improves outcomes. It makes people healthier. It improves quality of life. It often does so for a very reasonable price.

There are many good arguments for increasing our focus on prevention. Almost all have to do with improving quality, though, not reducing spending. We would do well to admit that and move forward.

Sometimes good things cost money.

Aaron E. Carroll is a professor of pediatrics at Indiana University School of Medicine who blogs on health research and policy at The Incidental Economist and makes videos at Healthcare Triage. He is the author of The Bad Food Bible: How and Why to Eat Sinfully.

====================APPENDIX================

CREDIT: https://www.npr.org/sections/thesalt/2017/11/19/564879018/the-bad-food-bible-says-your-eating-might-not-be-so-sinful-after-all

The Bad Food Bible
How and Why to Eat Sinfully
by Aaron Carroll, M.D., and Nina Teicholz
Hardcover, 272 pages

There are some surprises in your book, like milk isn’t as nutritious as some might think?

This is one of those where, if you just look at nature, we’re the only animal that consumes milk outside of the infant period. Now there’s no need for it. Part of that is politics, and the fact that the United States got involved in promoting dairy and the whole dairy industry. But there’s really no good evidence outside of the childhood period that milk is necessary. One of the things that I tried to state in the book, and this is true of all beverages with calories, you should treat them like you treat alcohol. I mean, what else are you going to do with a good chocolate chip cookie? Of course you need a glass of milk with that. That’s like dessert — it’s something you should have because you want it, not because you need it.

Raw eggs often get a bad reputation, particularly when it comes to cookie dough. How bad are they, really?

The raw egg is another one where of course there is a risk. But you have to weigh that against joy again. The truth of the matter is that if you committed to eating raw eggs in cookie dough once a week every week for the rest of your life, you’d almost never come into contact with salmonella. If you did, you’d almost never get sick. If you got sick, you’d almost never notice. Even if you noticed, it would almost never result in something serious. The chance of you actually getting seriously ill is infinitesimal. … The joy of doing those kinds of things with your kids or enjoying the process of baking is much more satisfying and will lead to greater increases in quality of life than the infinitesimal risk that you’re hurting your health in some way.

So, it sounds like there’s a lot of misinformation surrounding what food is bad for us. What’s your eating advice then?

So I think you know, in general, one thing you can do is limit your heavily processed food as much as possible. Nature intended you to get the appley goodness from an apple, not from apple juice. But the more we can do to smile, to cook for ourselves, to know where our food is coming from, to be mindful of it, the better. But we shouldn’t be so panicked and fearful and constantly believing that if we don’t do what we’ve heard from the latest expert, that we’re going to get sick and die. That is just not true.

Of course, we are staring down the barrel of Thanksgiving, which for many of us can be a moment that produces a lot of anxiety, especially food anxiety nowadays. It just feels like it’s all so fraught. I’m evil if I eat meat. I’m bad if I like Diet Coke. Food is loaded.

It’s also really important to remember: it’s one day a year! Your health and your eating habits are not established by one day a year. It’s perfectly fine to enjoy yourself and to live! You need to weigh — in all your health decisions — the benefits and the harms. And too often we only focus on the latter. And included in benefits are joy, and quality of life and happiness. There are times when it’s a perfectly rational decision to allow yourself to be happy and to enjoy yourself. I’m not sort of giving a license for people to eat whatever they want, anytime they want. Yes, the Diet Coke, the pie, these are all processed foods. So you should think about how much you’re eating them in relation to everything else. But on the other hand, a piece of pie on Thanksgiving is not going to erase everything else you’ve done the rest of the year. Thanksgiving is easily my favorite holiday and it’s not just because of the food, but also because of the meal and the fact that you get to enjoy it with family and friends.

I’ve got to ask you, what are you having for Thanksgiving?

As much as I can cram into my body on that day. But, I love turkey, really well-done turkey. I love mashed potatoes, and stuffing and gravy, and I think pie is the greatest dessert that exists, so I’m sure I’ll be having too much of that as well.

Producer Adelina Lancianese contributed to this report.

Homeostasis

One of the smartest guys in the room, Antonio Damasio, gives his views about neuroscience and its relationship to pain, pleasure, and feelings. He points out that they all play a giant role in one of life’s most important concepts: homeostasis.

CREDIT: http://nautil.us/issue/56/perspective/antonio-damasio-tells-us-why-pain-is-necessary

Antonio Damasio Tells Us Why Pain Is Necessary
The neuroscientist explains why feelings evolved.

BY KEVIN BERGER
JANUARY 18, 2018

Following Oliver Sacks, Antonio Damasio may be the neuroscientist whose popular books have done the most to inform readers about the biological machinery in our heads, how it generates thoughts and emotions, creates a self to cling to, and a sense of transcendence to escape by. But since he published Descartes’ Error in 1994, Damasio has been concerned that a central thesis in his books, that brains don’t define us, has been muted by research that states how much they do. To Damasio’s dismay, the view of the human brain as a computer, the command center of the body, has become lodged in popular culture.

In his new book, The Strange Order of Things, Damasio, a professor of neuroscience and the director of the Brain and Creativity Institute at the University of Southern California, mounts his boldest argument yet for the egalitarian role of the brain. In “Why Your Biology Runs on Feelings,” another article in this chapter of Nautilus, drawn from his new book, Damasio tells us “mind and brain influence the body proper just as much as the body proper can influence the brain and the mind. They are merely two aspects of the very same being.”

BEYOND SCIENCE: Antonio Damasio, director of the Brain and Creativity Institute at USC, sings the glories of the arts in his new book, The Strange Order of Things: “The sciences alone cannot illuminate the entirety of human experience without the light that comes from art and humanities.”

The Strange Order of Things offers a sharp and uncommon focus on feelings, on how their biological evolution fueled our prosperity as a species, spurred science and medicine, religion and art. “When I look back on Descartes’ Error, it was completely timid compared to what I’m saying now,” Damasio says. He knows his new book may rile believers in the brain as emperor of all. “I was entirely open with my ideas,” he says. “If people don’t like it, they don’t like it. They can criticize it, of course, which is fair, but I want to tell them, because it’s so interesting, this is why you have feelings.”

In this interview with Nautilus, Damasio, in high spirits, explains why feelings deserve a starring role in human culture, what the real problem with consciousness studies is, and why Shakespeare is the finest cognitive scientist of them all.

One thing I like about The Strange Order of Things is it counters the idea that we are just our brains.

Oh, that idea is absolutely wrong.

Not long ago I was watching a PBS series on the brain, in which host and neurologist David Eagleman, referring to our brain, declares, “What we feel, what matters to us, our beliefs and our hopes, everything we are happens in here.”

That’s not the whole story. Of course, we couldn’t have minds with all of their enormous complexity without nervous systems. That goes without saying. But minds are not the result of nervous systems alone. The statement you quote reminds me of Francis Crick, someone whom I admired immensely and was a great friend. Francis was quite opposed to my views on this issue. We would have huge discussions because he was the one who said that everything you are, your thoughts, your feelings, your mental this and that, are nothing but your neurons. This is a big mistake, in my view, because we are mentally and behaviorally far more than our neurons. We cannot have feelings arising from neurons alone. The nervous systems are in constant interaction and cooperation with the rest of the organism. The reason why nervous systems exist in the first place is to assist the rest of the organism. That fact is constantly missed.

The concept of “homeostasis” is critical in your new book. What is homeostasis?

It’s the fundamental property of life that governs everything that living cells do, whether they’re living cells alone, or living cells as part of a tissue or an organ, or a complex system such as ourselves. Most of the time, when people hear the word homeostasis, they think of balance, they think of equilibrium. That is incorrect because if we ever were in “equilibrium,” we would be dead. Thermodynamically, equilibrium means zero thermal differences and death. Equilibrium is the last thing that nature aims for.

What we must have is efficient functioning of a variety of components of an organism. We procure energy so that the organism can be perpetuated, but then we do something very important and almost always missed, which is hoard energy. We need to maintain positive energy balances, something that goes beyond what we need right now because that’s what ensures the future. What’s so beautiful about homeostasis is that it’s not just about sustaining life at the moment, but about having a sort of guarantee that it will continue into the future. Without those positive energy balances, we court death.

What’s a good example of homeostasis?

If you are at the edge of your energy reserves and you’re sick with the flu, you can easily tip over and die. That’s one of the reasons why there’s fat accumulation in our bodies. We need to maintain the possibility of meeting the extra needs that come from stress, in the broad sense of the term. I poetically describe this as a desire for permanence, but it’s not just poetic. I believe it’s reality.

You write homeostasis is maintained in complex creatures like us through a constant interplay of pleasure and pain. Are you giving a biological basis to Freud’s pleasure principle—life is governed by a drive for pleasure and avoidance of pain?

Yes, to a great extent. What’s so interesting is that for most of the existence of life on earth, all organisms have had this effective, automated machinery that operates for the purpose of maintenance and continuation of life. I like to call the organisms that only have that form of regulation, “living automata.” They can fight. They can cooperate. They can segregate. But there’s no evidence that they know that they’re doing so. There’s no evidence of anything we might call a mind. Obviously we have more than automatic regulation. We can control regulation in part, if we wish to. How did that come about?

Very late in the game of life there’s the appearance of nervous systems. Now you have the possibility of mapping the inside and outside world. When you map the inside world, guess what you get? You get feelings. Of necessity, the machinery of life is either in a state of reasonable efficiency or in a state of inefficiency, which is most often the case. Organisms with nervous systems can image these states. And when you start having imagery, you start having minds. Now you begin to have the possibility of responding in a way that you could call “knowledgeable.” That happens when organisms make images. A bad internal state would have been imaged as the first pains, the first malaises, the first sufferings. Now the organism has the possibility of knowingly avoiding whatever caused the pain, or preferring a place or a thing or another animal that causes the opposite of that, which is well-being and pleasure.

Why would feelings have evolved?

Feelings triumphed in evolution because they were so helpful to the organisms that first had them. It’s important to understand that nervous systems serve the organism and not the other way around. We do not have brains controlling the entire operation. Brains adjust controls. They are the servants of a living organism. Brains triumphed because they provided something useful: coordination. Once organisms got to the point of being so complex that they had an endocrine system, immune system, circulation, and central metabolism, they needed a device to coordinate all that activity. They needed to have something that would simultaneously act on point A and point Z, across the entire organism, so that the parts would not be working at cross purposes. That’s what nervous systems first achieve: making things run smoothly.

Now, in the process of doing that, over millions of years, we have developed nervous systems that do plenty of other things that do not necessarily result in coordination of the organism’s interior, but happen to be very good at coordinating the internal world in relation to the outside world. This is what the higher reaches of our nervous system, namely the cerebral cortex, do. It gives us the possibilities of perceiving, of memorizing, of reasoning over the knowledge that we memorize, of manipulating all of that and even translating it into language. That is all very beautiful, and it is also homeostatic, in the sense that all of it is convenient to maintain life. If it were not, it would just have been discarded by evolution.

How does your thesis square with the hard problem of consciousness, how the physical tissue in our heads produces immaterial sensations?

Some philosophers of mind will say, “Well, we face this gigantic problem. How does consciousness emerge out of these nerve cells?” Well, it doesn’t. You’re not dealing with the brain alone. You have to think in terms of the whole organism. And you have to think in evolutionary terms.

The critical problem of consciousness is subjectivity. You need to have a “subject.” You can call it an I or a self. Not only are you aware right now that you are listening to my words, which are in the panorama of your consciousness, but you are aware of being alive, you realize that you’re there, you’re ticking. We are so distracted by what is going on around us that we forget sometimes that we are, A-R-E in capitals. But actually you are watching what you are, and so you need to have a mechanism in the brain that allows you to fabricate that part of the mind that is the watcher.
You do that with a number of devices that have to do, for example, with mapping the movements of your eyes, the position of your head, and the musculature of your body. This allows you to literally construct images of yourself making images. And you also have a layer of consciousness that is made by your perception of the outside world; and another layer that is made of appreciating the feelings that are being generated inside of you. Once you have this stack of processes, you have a fighting chance of creating consciousness.

Why do you object to comparing the brain to a computer?

In the early days of neuroscience, one of our mentors was Warren McCulloch. He was a gigantic figure of neuroscience, one of the originators of what is today computational neuroscience. When you go back to the ’40s and ’50s, you find this amazing discovery that neurons can be either active or inactive, in a way that can be described mathematically as zeroes and ones. Combine that with Alan Turing and you get this idea that the brain is like a computer and that it produces minds using that same simple method.

That has been a very useful idea. And true enough, it explains a good part of the complex operations that our brains produce, such as language. Those operations require a lot of precision and are carried out by the cerebral cortex, with enormous detail, and probably in a basic computational mode. All the great successes of artificial intelligence used this idea and have been concerned with high-level reasoning. That is why A.I. has been so successful with games such as chess or Go. They use large memories and powerful reasoning.

Are you saying neural codes or algorithms don’t blend with living systems?

Well, they match very well with things that are high on the scale of the mental operations and behaviors, such as those we require for our conversation. But they don’t match well with the basic systems that organize life, that regulate, for example, the degree of mental energy and excitation, or with how you emote and feel. The reason is that the operations of the nervous system responsible for such regulation rely less on synaptic signaling, the kind that can be described in terms of zeroes and ones, and far more on non-synaptic messaging, which lends itself less to a rigid all-or-none operation.

Perhaps more importantly, computers are machines invented by us, made of durable materials. None of those materials has the vulnerability of the cells in our body, all of which are at risk of defective homeostasis, disease, and death. In fact, computers lack most of the characteristics that are key to a living system. A living system is maintained in operation, against all odds, thanks to a complicated mechanism that can fall apart as a result of minimal amounts of malfunction. We are extremely vulnerable creatures. People often forget that. That is one of the reasons why our culture, or Western cultures in general, are a bit too calm and complacent about the threats to our lives. I think we are becoming less sensitive to the idea that life is what dictates what we should do or not do with ourselves and with others.

What is love for?

To protect, to cause flourishing, to give and receive pleasure, to procreate, to soothe. Endless great uses, as you can see.

How do emotions such as anger or sadness serve homeostasis?

At individual levels, both anger and sadness are protective. Anger lets your adversary know that you mean business and that there may be costs to attacking you. These days anger is an expression of sociopolitical conflicts. It is overused and has largely become ineffectual. Sadness is a prelude to mental hibernation. It lets you retreat and lick your wounds. It lets you plan a strategy of response to the cause of the wounds.

You say feelings spurred the creation of cultures. How so?

Before I started The Strange Order of Things, I was asking friends and colleagues how they thought cultures had begun. Invariably what people said was, “Oh, we’re so smart. We’re so intellectually powerful. We have all this reasoning ability. On top of it all, we have language—and there you are.” To which I say, “Fine, that’s true. How would you invent anything if you were stupid?” You would not. But the issue is to recognize the motive behind what you do. Why is it that you did it in the first place? Why did Moses come down from the mountain with Ten Commandments? Well, the Ten Commandments are representative of homeostasis because they tell you not to kill, not to steal, not to lie, not to do a lot of bad things. It sounds trivial but it’s not. We fail to think about motivation and so we do not factor it into the process of invention. We do not factor in the motives behind science or technology or governance or religion.

And there’s one more thing: The importance of feeling is that it makes you critically aware of what you are doing in moral terms. It forces you to look back and realize that what people were doing historically, at the outset, at the moment of invention of a cultural instrument or a cultural practice, was an attempt to reduce the amount of suffering and to maximize the amount of wellbeing not only for the inventor, but for the community around them. One person alone can invent a painting or a musical composition, but it is not meant for that person alone. And you do not invent a moral system or a government system alone or for yourself alone. It requires a society, a community.

The assertion that intellect is governed by feelings can sound New Age-y. It seems to undermine the powers of reason. How should we understand reason if it’s always motivated by subjective feelings?

Subjective simply means that it has a personal point of view, that it pertains to the self. It is compatible with “objective” facts and with truth. It is not about relativism. The fact that feelings motivate the use of knowledge and reason does not make the knowledge and the reason any less truthful or valid. Feelings are simply a call to action.

If humans formed societies and cultures to avoid suffering and pain, why do we have violence and wars?

Your question is very important. Take developments of political systems. On the face of it, when you look at Marxist ideas, you say, “This is obviously homeostatic.” What Marx and others were trying to do in the 19th century is confront and modify a social arrangement that was not equitable, that had some people suffering too much and some profiting too much. So having a system that produced equality made a lot of sense. In a way that is something that biological systems have been trying to do, quite naturally, for a long time. And when the natural systems do not succeed at improved regulation, guess what? They are weeded out by evolution because they promote illness.

Biological evolution, through genetic selection, eliminates those mechanisms. At the cultural level something comparable occurs. Seen in retrospect, Marxism as applied in Russia resulted in one of the worst tragedies of humankind. But Russian communism was ultimately weeded out by cultural selection. It took around 70 years to do it, but cultural selection did operate in a homeostatic way. It led to the fall of the Berlin Wall and the Soviet empire. It was a homeostatic correction achieved by social means.

The same reasoning applies to religions. For example, we can claim that religions have been one of the great causes of violence throughout history. But you certainly can’t blame Christ for that violence. He preached compassion, and the pardoning of enemies, and love. It does not follow that good recommendations can be implemented correctly and always produce good results. These facts in no way deny the homeostatic intent of religions.

You write, “The increasing knowledge of biology from molecules to systems reinforces the humanist project.” How so?

This knowledge gives us a broader picture of who we are and where we are in the history of life on earth. We had modest beginnings, and we have incorporated an incredible amount of living wisdom that comes from as far down as bacteria. There are characteristics of our personal and cultural behavior that can be found in single-cell organisms or in social insects. They clearly do not have the kind of highly developed brains that we have. In some cases, they don’t have any brain at all. But by analyzing this strange order of developments we are confronted with the spectacle of life processes that are complex and rich in spite of their apparent modesty, so complex and rich that they can deliver the high level of behaviors that we normally, quite pretentiously, attribute only to our great human smarts. We should be far more humble. That’s one of my main messages. In general, connecting cultures to the life process makes apparent a link that we have ignored for far too long.


What would you be if you weren’t a scientist?

When I was an adolescent, I often thought that I might become a philosopher or perhaps a playwright or filmmaker. That’s because I so admired what philosophers and storytellers had found about the human mind. Today when people ask me, “Who’s your most admired cognitive scientist?” I say Shakespeare. He knew it all and knew it with enormous precision. He didn’t have the nice fMRI scanner and electrophysiology techniques we have in our Institute. But he knew human beings. Watch a good performance of Hamlet, King Lear, or Othello. All of our psychology is there, richly analyzed, ready for us to experience and appreciate.

Platforms for “on-demand”

Tom Goodwin makes a great point:

-Uber, the world’s largest taxi company, owns no vehicles.
-Facebook, the most popular media owner, creates no content.
-Alibaba, the most valuable retailer, has no inventory.
-Airbnb, the largest accommodation provider, owns no real estate.

What do these observations have in common?

All of these companies are platforms for the new “on-demand” economy.

The very nature of the on-demand economy is that a buyer must, on a real-time basis, be able to identify a seller.

Uber is the most obvious example. Uber brilliantly identifies, for the buyer of taxi services, a seller with a willing car and driver.

Facebook helps billions of writers find audiences for their content.

Airbnb identifies, for the buyer, a homeowner willing to rent their house.
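The common thread can be sketched in a few lines of Python. This is a toy illustration of the matching role described above – the Platform class, its first-available matching rule, and every name in it are my own invention, not any company’s actual system:

```python
from dataclasses import dataclass, field

@dataclass
class Platform:
    """Toy on-demand marketplace: owns no inventory, only matches buyers to sellers."""
    name: str
    sellers: list = field(default_factory=list)  # sellers currently available

    def offer(self, seller):
        """A seller announces availability to the platform."""
        self.sellers.append(seller)

    def request(self, buyer_need):
        """A buyer asks, in real time, for a seller who can meet the need.

        Real platforms rank candidates by distance, price, rating, etc.;
        this sketch simply takes the first available match.
        """
        for s in self.sellers:
            if s["provides"] == buyer_need:
                self.sellers.remove(s)  # seller is now engaged, not available
                return s
        return None  # no seller available right now

rides = Platform("toy-ride-hailing")
rides.offer({"provides": "ride", "driver": "Ana"})
match = rides.request("ride")
print(match["driver"])  # Ana
```

The point of the sketch is what is missing: the platform holds no cars, rooms, or inventory of its own – only the ability to match a buyer to a willing seller at the moment of demand.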

“On Demand”

Note: This post is a continuation of prior posts on complex, adaptive systems. This post focuses on the virtual workplace, the virtual retailer, the virtual employer, and their myriad manifestations in today’s world. These particular complex, adaptive systems will have the ability to rapidly expand or contract based on demand. And this is the point of this post: to explore the notion of “on demand”.

“On Demand”
It’s so obvious … but, then again, it’s not so obvious: “on demand” is the drumbeat of daily life. But the 21st century is putting the notion of “on demand” on steroids!

What is “on demand”?
I want a glass of wine, right now. I either pour myself one, buy one, or ask someone else to pour me one. “On demand”.

I need a hotel room, right now. Hotels inventory rooms. I rent one. “On demand”.

I need a haircut, right now. Barbers are open for business. I visit one. They are not busy, so they take me. I buy the haircut. “On demand”.

Note that “on demand” wine needs an open bottle of wine to be available; the hotel room requires a hotel; the haircut requires a barber open for business.

In the 21st century, it seems clear to me that “on demand” will morph into smaller, more flexible slices. Consumers and companies will be able to purchase these slices when they want them, for as long as they want them.

It’s happening at lightning speed! There are so many examples. You can find them everywhere, in:

On Demand Transportation

The point: in the 20th century, you had to rent a car, bike, or ride for a day from a business location; now you can rent one for exactly as long as you want it, from a street location.

Uber revolutionized the taxi business when they broke the paradigm and said, “Effective immediately, any car with a driver can pick up a passenger and get paid to take them somewhere.” From a passenger’s POV, the result is revolutionary: I can get anywhere I want, anytime I want, by simply alerting a central intelligence online that I need a ride from x to y at z time.

Every major city now rents bikes. Grab a bike at one stand and leave it at another stand. Take the bike from x to y at z time.

ZipCars are on-demand cars.

On Demand Work

The point: in the 20th century, you either had a job or you didn’t; “temporary help” agencies filled any gaps when the job-holder was unable to work. Now you can have a job for a half hour of your choosing.

LiveOps revolutionized call center management by organizing workers to be available when the client wants the worker, for as long as the client wants the worker. They keep workers trained and at the ready, so they can deploy them virtually as needed.

On Demand Work Space

The point: in the 20th century, you worked someplace, and employers employed workers in workplaces. Today workplaces are built for flexibility, so employers can use a workplace when they need it, how they need it, and for as long as they need it.

Metro Studios and others are replacing the Hollywood “studio” with a flexible studio. Studios in the past built spaces for their filming needs. Metro Studios works with any client that will rent their massive spaces – for as long as the client needs the space, and not longer. Note that the “Studio” is a big box, easily repurposed to a warehouse or distribution center or big box store if demand shifts.

“Co-working” is exploding, and has revolutionized office work. A co-working space can be sized up or down as demand requires, for as long as demand requires. Co-working can suit the individual virtual worker, who can come in as they wish and stay as long as they wish. But, importantly, more and more companies are using co-working facilities in order to have flex space that suits them.

Self-storage is exploding, giving companies and consumers the ability to get storage space when they want it, for as long as they want it.

On Demand Housing

The point: hotels, long-term rentals, and short-term rentals will have their place in tomorrow’s economy, but ordinary people with extra space in their houses will make places available when, where, and for however long they are needed.

Airbnb and VRBO have revolutionized the way we access temporary housing. Go online, check out who’s offering what, and when, and then make your selection.

On Demand Entertainment

The point: entertainment was made available at a certain time, at a certain place (a concert venue, a movie theater, a movie channel, or a TV channel). No longer. Increasingly, consumers will get what they want, when they want it, for as long as they want it.

Netflix revolutionized on-demand movies by letting consumers get what they want when they want it. They started with online movie rentals that required physical shipping of DVDs, but rapidly moved to online downloads and streaming. Amazon is chasing them with amazing speed.

Cable companies are perfecting “on demand” movies. Select the movie you want and when you want to view it (including immediately), and press “play”.

Amazon is perfecting “on demand” books. Select the book you want, and how you want to read it, and press “buy”. Download and start reading right now. No shipping. No library schlepping.

On Demand Tools

The point: in the 20th century, the norm was “if you want a tool, buy it and put it in a safe place until you need it.” The norm is changing to “when you need a tool, order it up when you need it, for as long as you need it.”

Home Depot and Lowe’s both have lucrative side businesses that allow businesses and consumers to rent the exact tool they need for as long as they need it.

On Demand Medicine

The point: in the 20th century, when you needed a doctor, you would call the office and make an appointment. If it was urgent, you would beg for the appointment to be sooner rather than later. We are not yet at an inflection point, but the trends seem clear enough: if you need a doctor, you can get a doctor – when you need it, and through the medium that makes the most sense to you.

CVS and Walgreens are both perfecting the mini-clinic. Modeled after the convenience store revolution of the 1960’s, mini-clinics are inside the store and require only a sign-up sheet. If the doctor is available, they will see you.

Telemedicine is taking full advantage of Skype and other two-way video conference platforms. In the best case, a patient’s blood work, vital signs, and medical history can be online while the patient is online, so the doctor can have as much context as possible. And in the future, when the doctor also has a genetic history, a myriad of risks that cannot currently be assessed will be known.

Other examples of “On Demand”

On Demand Meals

Fast food showed the way with drive-throughs; then Domino’s showed the way to pizza “on demand” – when you want it, how you want it. Yesterday, this was delivered to my house: a spaghetti made out of squash, in a coconut curry, with a fresh salad and lasagna for the kids. It costs a bit more, but it’s so worth it!

On Demand Internet

There are no good examples at the present time, but isn’t it plausible that the average consumer could summon ultra-high-speed internet “on demand”? The consumer is just fine most of the day with low-speed internet, for emails and searches. But if they want to watch a movie, and want to avoid slow downloads or breaks in streaming quality, then they are happy to pay for “express lane service”.

On Demand Inventory

This is old news, but it further illustrates the mega trend: procurement can now demand that contracted and scheduled materials and components arrive when and as needed, minimizing inventory carrying costs.

On-demand Event Space
There has always been demand for highly flexible event space. This is the world of clubs, hotels, and the like, where it is usual to build a big box that can be outfitted to a client’s needs. Today, though, that business has been professionalized by companies like Convene, which specialize in it.

========================== APPENDIX ==========================
References:

Co-Working

Virtual Workplace and Virtual Retailer

Co-Working – Update

On-Demand Work Articles and Commentary
The New York Times article below refers to a mega-trend: on-demand work. The author refers to it as a “tectonic shift in how jobs are structured.”

The focus of the article is Liveops, but this is only illustrative of this larger trend. https://www.liveops.com. Their competitor is https://workingsolutions.com .

On their front page, LiveOps says: “It’s a highly skilled workforce of virtual agents who flex to meet customer needs.”

On-demand work is exploding in customer service call centers and sales.

Some points I found interesting:

Roughly 3,000,000 Americans find work this way – as independent contractors working on a virtual basis.

The move to outsource call centers to India, which took off after the 2001 recession, has apparently reversed. Today, the focus is on quality, and so the new trend is toward employing American workers, on a contract basis and on a virtual basis.

They are only paid while on the phone. This is roughly 75% of the time they “commit” (“commits” are made in half-hour blocks).

Top performers get the first call. “Performance-Based Routing, so the top-performing agents on your programs get more calls. By aligning our agents’ incentives with your goals, each agent who answers the call will be invested in your business objectives. What’s more, you won’t be paying call centers for idle time.”

Clients hire Liveops. In this article, TruStage Insurance is the client.

Liveops CEO is Greg Hanover. Their competitor, Working Solutions, was founded in 1996. LiveOps says they have 20,000 agents, which they refer to as “Liveops Nation”.

“We hand-pick our agents for their great phone voices and warm and friendly personalities.”

“Scalable and flexible contact center outsourcing – leveraging an on-demand distributed network.”
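One of the points above – agents are paid only while on the phone, roughly 75% of their half-hour “commits” – implies an effective rate per committed hour well below the talk-time rate. A back-of-the-envelope sketch; the per-minute rate here is hypothetical, and only the half-hour blocks and the ~75% figure come from the notes above:

```python
def effective_weekly_pay(commits, rate_per_talk_minute, talk_share=0.75):
    """Estimate weekly pay for an agent paid only while on a call.

    commits: half-hour blocks scheduled for the week
    rate_per_talk_minute: pay per minute of talk time (hypothetical number)
    talk_share: fraction of committed time actually spent on calls (~75%)
    """
    committed_hours = commits * 0.5
    talk_minutes = committed_hours * 60 * talk_share
    return committed_hours, talk_minutes * rate_per_talk_minute

# 40 commits = 20 committed hours; $0.25 per talk-minute is a made-up rate.
hours, pay = effective_weekly_pay(40, 0.25)
print(hours, pay, pay / hours)  # 20.0 225.0 11.25
```

In other words, under these made-up numbers an agent committing 20 hours earns $225, or $11.25 per committed hour – 75% of the $15 they would earn if every committed minute were billable.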

=====================
CREDIT: https://www.nytimes.com/2017/11/11/business/economy/call-center-gig-workers.html?smid=nytcore-ipad-share&smprod=nytcore-ipad
=====================

Plugging Into the Gig Economy, From Home With a Headset

A company called Liveops has become the Uber of call centers by doting on its agents. But is the work liberating, or dehumanizing?
By NOAM SCHEIBER | NOV. 11, 2017

DURHAM, N.C. — The gathering in a private dining room at a Mexican restaurant had the fervent energy of a megachurch service, or maybe an above-average “Oprah” episode — a mix of revival-style confession and extravagant empathy. There were souls to be won.

“By the end of the day, Kelly’s going to be an agent,” the group’s square-jawed leader said. “Kelly went through the process a while ago, then life happens, now she’s back. Her commitment to me that she made earlier, she looked me right in the eyes and told me she’s going to be an agent.”

Paradise, for these pilgrims, lies at one end of a phone line.

The company behind this spectacle, Liveops, had invited several dozen freelance call-center agents to a so-called road show. Some of them may have answered your customer-service calls to Home Depot or AAA. All were among the more than 100,000 agents who work as independent contractors through on-demand platforms like the one Liveops operates, which uses big data, algorithms and gamelike techniques to match its agents to clients. What Uber is to cars, Liveops is to call centers.

The agents are part of a tectonic shift in how jobs are structured. More companies are pushing work onto freelancers, temps, contractors and franchisees in the quest for an ever more nimble profit-making machine. It is one reason a job category seemingly headed offshore forever — customer service representatives — has been thriving in call centers and home offices across the United States, supporting roughly three million workers.

While critics of the arrangement cite rising insecurity, some of Liveops’ star agents — like Emmett Jones in Chicago, who knows of his rivals primarily as numbers on a leader board — say the opportunity has been transformative.

The earnest gratitude of the agents assembled here, not far from the Raleigh-Durham Airport, affirmed that. To them, Liveops is a sustaining force, a way to earn a living while being present at home. A few had driven hours to attend. Many brought friends and family members who were considering joining “Liveops Nation,” too.

There were icebreakers (“Liveops Nation Bingo”). Gift-card raffles (“$150?” the chief executive quipped. “Who approved these things?”). Free enchiladas. Everyone was invited to schmooze.

“John, I heard your story about how you got to us is pretty great,” said the master of ceremonies, an impossibly sunny woman named Tara. “Would you mind telling all these people?”

When the mic came to John, a former insurance claims adjuster with a gray beard and several earrings, there was a sense of imminent revelation.

“I was working in another glass box over near here for six years,” he began. “I reached the point where it was either jump off the roof or walk out the front door.” The other agents laughed knowingly.

He continued: “My commute now is I walk down the hall, close the bedroom door behind me.” More laughter.

Then John’s voice softened: “This is good, this is good. I get paid for when I’m working, instead of souring when you get paid for 40 hours and work some more. So, I’m here.”

“Awesome,” Tara said, applause drowning her out. “I feel like John’s story mimics a lot of what we hear from people.”

According to Greg Hanover, a longtime Liveops official who became chief executive this summer, the company’s goal is to make agents feel as if they’re part of a movement, not just earning a wage.

“Where we want to be with this is what Mary Kay has done, multilevel marketing companies,” Mr. Hanover said, referring to the cosmetics distributor and its independent sales force. “The direction we need to head in for the community within Liveops Nation is that the agents are so happy, so satisfied with the purpose and meaning there, that they’re telling their story.”

It’s an ambition that feels almost radical compared with Uber, whose best-known exercise in worker outreach is a video of its former chief executive berating a driver. It was heartening to discover that on-demand work could be both financially viable and emotionally fulfilling.

That is, until I began to speak with Mr. Jones and some of his Liveops competitors. The more you talk with them, the more you detect a kind of Darwinian struggle behind the facade of community and self-actualization. You start to wonder: Is there really such a thing as a righteous gig-economy job, even if the company is as apparently well intentioned as Liveops? Or is there something about the nature of gig work that’s inescapably dehumanizing?

Just the Right Tone
Mr. Jones, who lives in Chicago, was the top-rated Liveops agent for an insurer called TruStage for much of this year.

An AT&T technician for decades, he decided that he needed to be at home not long after his wife was diagnosed with vertigo in 2008. “I can’t work and be worried about how she’s doing,” he said.

A few years later, when his daughter told him of a friend who worked with Liveops, he was eager to sign up — but refused to send in his required voice test until it was close to perfect. “I must have did the voice test four or five times,” he said. “I wanted to make sure I gave the right tone that they were looking for.”

As a Liveops agent, Mr. Jones sells life policies to callers, often those who have just seen a television commercial for TruStage insurance. He estimates that he works roughly 40 hours each week, beginning around 8 most mornings, and that he makes about $20 an hour. He is such a valued worker that TruStage invited him to its headquarters earlier this year for a two-day visit by an elite group of agents, in which executives pumped them for insights about how to increase sales.

Roughly two decades ago, Liveops and its competitors typically connected callers to psychic hotlines, and in some cases less reputable services. Such businesses had frequent spikes in call volume, making it helpful to have an on-demand work force that could be abruptly ramped up.

“The only thing people were interested in was the abandonment rate” — that is, the number of people who would hang up in frustration from being kept on hold — said Kim Houlne, the chief executive of a Liveops rival called Working Solutions, which she founded in 1996.

The call center industry took a hit during the 2001 recession, when cost consciousness unleashed a wave of outsourcing to India. But within 10 years, many companies decided that the practice, known as offshoring, had been oversold. The savings on wages were often wiped out by lost business from enraged customers, who preferred to communicate with native English speakers.

“People don’t feel comfortable,” Ms. Houlne said, alluding to the overseas agents.

By the early part of this decade, quality was in fashion. The enormous amounts of data that companies like Liveops and Working Solutions collect allowed them to connect callers to the best possible agent with remarkable precision, while allowing big clients to avoid the overhead of a physical call center and full-time workers.

Today, in addition to sales calls, Liveops agents handle calls from people trying to file insurance claims, those in need of roadside assistance, even those with medical or financial issues relating to prescription drugs. The agents must obtain a certification before they can handle such calls, which sometimes takes weeks of online coursework.

Liveops goes to great lengths to attend to its agents’ needs, addressing technical-support issues, even answering agents’ emails to the chief executive within 24 hours.

Mr. Jones, like many of his fellow agents, thinks of himself as helping others in need. He said that many families will gather around a table after a loved one has died to discuss the burial. If the deceased relative had no insurance, he said, “A lot of times that table is going to clear.” If, on the other hand, he had even $2,000 in life insurance — the minimum that TruStage sells — “the family members are more inclined to say, ‘He did what he could, let me see what I could do to help out.’ You end up with $5,000 to $6,000. You can do a decent burial rather than none at all.”

Still, there is undeniably a brass-tacks quality to the work. Shortly after we hung up, I turned my attention to an assignment due that afternoon, only to receive more calls from Mr. Jones’s number. When I finally answered, he apologized for interrupting me, then came to the point. “I have a question for you,” he said. “Do you have life insurance?”

‘Where the Price Point Is’
Like Uber, Liveops expends considerable effort calculating demand for its agents. For example, if an auto insurance company is running a commercial on ESPN, Liveops will ask the company’s media buyer — that is, the intermediary that placed the ad — to predict how many calls such an ad is likely to generate. Liveops will adjust that prediction, using its own data showing how many calls similar ads have produced from similar audiences during a comparable time of year.

And like Uber, Liveops focuses on “utilization” — in the Liveops case, the percentage of working agents actually on a call. Depending on the client, Liveops strives for rates of 65 percent to 75 percent. Lower than that and the agents, who make money only when they’re on a call, will complain that they’re not busy enough. Significantly higher and the system is vulnerable to a sudden increase in demand that could tie up the phone lines and keep callers waiting.

Liveops asks agents to schedule themselves in half-hour blocks, known as “commits,” for the upcoming week. If the company expects demand to be higher than the number of commits, it sends agents a message urging them to sign up. (Uber does something similar, except without formal scheduling.) Sometimes it will even offer financial incentives, like a bump in the rate earned for each minute they’re on a call, or a raffle-type scheme in which people accumulate tickets for the giveaway of an iPad or a cruise.

Again like Uber, Liveops relentlessly tests the effectiveness of these tools. Referring to financial incentives, Jon Brown, the Liveops senior director of client services, said, “We’ve zeroed in on exactly what we need for an agent to go from 10 to 15 commits, from 15 commits to 20 commits. We know where the price point is, what drives behavior.”

And then there are the performance metrics. Liveops agents are rated according to what are called key performance indicators, which, depending on the customer, can include the number of sales they make, their success at upselling customers, and whether a caller would recommend the service based on their interaction.

Liveops makes clear that its agents’ ability to earn more money is closely tied to performance. “You’ve heard the term meritocracy?” said a Liveops official named Aimee Matolka at the North Carolina event. “When a call comes in, it routes in to that best agent. Yes, our router is that smart. You guys want to be that agent, I know you do. Otherwise you wouldn’t be here.”

Liveops allows the agents to track their rankings obsessively through internal leader boards. (Liveops officials say that while the pressures of the job can preoccupy agents, it is up to them how much time to invest.)

“I lost the No. 1 spot, now I’m No. 2,” Mr. Jones said in early August, acknowledging that he checks his ranking frequently. “I thought about researching to find out who it is — you always want to know who’s the competition — but I said leave it.”

He added: “I’m a competitive person. We just toggle back and forth. If they see me jump back in, they work harder. They want that spot back.”

‘This Is My Phone Call’
My flight to Bangor, Me., was due after 9 p.m., and apparently sensing my unease with the North Country, the firefighter seated next to me asked if I had far to drive when we landed. “About three hours north,” I confessed. “Watch out for moose,” he said. I assured him I’d driven around deer before. He stopped me short: If you hit a deer, you’ll kill them, he said. If you hit a moose, they’ll kill you.

I found Troy Carter, the agent who had recently surpassed Emmett Jones, at his home in Fort Fairfield the next day, wearing jeans, a button-down short-sleeve shirt, and a New England Patriots hat. There were no shoes on his feet, only white socks.

Like Mr. Jones, Mr. Carter said Liveops had been a blessing, allowing him to earn a living in a part of Maine so remote that my cellphone carrier welcomed me to Canada shortly after I pulled into his driveway.

When I told Mr. Carter that I had been in touch with his top competitor, he quickly pulled up the latest monthly rankings of Liveops agents selling TruStage insurance. He pointed out that while Mr. Jones, whom he recognized only by his identification number, 141806, had more sales — 87 to his 82 — he had far fewer paid sales (those charged at the time of purchase rather than by invoice).

“The real thing is the paid application rate — they want it around 95 percent,” Mr. Carter said. “He has 87 sales, but only 65 percent paid, compared to my 94 percent.” This, he explained, was why he enjoyed the right to call himself the top agent for the month.

Mr. Carter is what you might call a serial entrepreneur. He once started an art supply website that folded within a few months, and a penny auction site called Bid Tree that foundered for lack of a marketing budget.

He sees Liveops, on which he spends 40 to 50 hours per week, as of a piece with these entrepreneurial efforts. In fact, it is something of a family business. His wife, Lori, handles incoming calls while he’s busy with customers. “I’m a housewife/secretary/receptionist,” she said. Even Mr. Carter’s 9-year-old son, Logan, plays a role. “At nighttime, he says the last part of his prayer based on how many sales I did today,” Mr. Carter said. “If it was a lot of sales, he’ll pray, ‘Dear Lord, help my dad get the same amount of sales tomorrow.’”

Though Liveops agents work from a script, Mr. Carter, like Mr. Jones, adds his own flourishes. Before asking a caller’s gender, as he is required to do, he will say, “Now I already know the answer to this question, but please confirm if you’re male or female.” Upon receiving the answer, he will pause momentarily before saying, “I told you I already knew the answer,” and break into a laugh.

He might make this identical joke, with identical timing, dozens of times in a workday. “It’s like a comedian has a little pause before a joke,” he told me. “It relaxes them right off.”

Even with these touches, results can vary widely. Two days earlier, Mr. Carter had made seven sales, only a few shy of his record. The day I turned up, he managed only one. He said some callers had the impression they could receive $25,000 of insurance for $9.95 per month — the commercial mentions both figures — and begged off when Mr. Carter told them otherwise.

Mr. Carter has done research on how to comport himself, including watching an instructional YouTube video by the former stockbroker who was the subject of the movie “The Wolf of Wall Street.” He believes the key is to come off as the alpha presence. “The one that asks the most questions is the one in control,” he said. “If they ask me questions — ‘How are you doing?’ — I’ll come back, ‘The question is how are you doing?’ This is my phone call, as much as I can make it.”

But on this day he repeatedly ran up against the limits of his powers. Even those who remained interested after 10 or 15 minutes of painstaking back-and-forth often demurred when Mr. Carter asked them for payment information. “This one guy was outside in a wheelchair,” Mr. Carter said of a caller who couldn’t produce his credit card. “He didn’t want to go in and get it. I said, ‘I’m fine waiting,’ but I can’t push him.”

These setbacks only seem to make Mr. Carter focus more. At one point, he made a swiping motion between his face and his headset with his index finger and middle finger. “They recommend that you keep the microphone two fingers away,” he said. “I’m always doing that — checking that it’s two fingers. I’ll do that for the rest of my life.”

It seemed, all in all, like a grueling way to make the slightly more than $30,000 that Mr. Carter estimates he takes in before taxes. “The good thing is he can take hours off,” Lori told me. “But then he can lose his spot. It’s always a fight for the top.”

I was reminded of the Alec Baldwin monologue from the movie “Glengarry Glen Ross,” except that the prize for having the most sales wouldn’t be a Cadillac, it would be a set of steak knives, because the Liveops analytics team had calculated that agents would give nearly as much effort for a prize worth a small fraction of the cost.

Of course, unlike the salesmen in that movie, the Liveops agents can’t really be fired — the third prize — because they weren’t employees to begin with.

A while later, Mr. Carter described a recent initiative in which agents were promised a bonus if 95 percent of their collective sales were paid up front. “I knew it wasn’t going to work as soon as they said it,” he told me, because a handful of agents with low paid rates could ruin everyone else’s chances.

“They did do a pullover sweatshirt for the top two,” he added, brightening. “I was second, so that’s coming.”

A version of this article appears in print on November 12, 2017, on Page BU1 of the New York edition with the headline: Paradise at the End of a Phone Line.

Philip Roth Update

I found this chock full of wisdom:

CREDIT: NYT Interview with Philip Roth

In an exclusive interview, the (former) novelist shares his thoughts on Trump, #MeToo and retirement.

With the death of Richard Wilbur in October, Philip Roth became the longest-serving member in the literature department of the American Academy of Arts and Letters, that august Hall of Fame on Audubon Terrace in northern Manhattan, which is to the arts what Cooperstown is to baseball. He’s been a member so long he can recall when the academy included now all-but-forgotten figures like Malcolm Cowley and Glenway Wescott — white-haired luminaries from another era. Just recently Roth joined William Faulkner, Henry James and Jack London as one of very few Americans to be included in the French Pleiades editions (the model for our own Library of America), and the Italian publisher Mondadori is also bringing out his work in its Meridiani series of classic authors. All this late-life eminence — which also includes the Spanish Prince of Asturias Award in 2012 and being named a commander in the Légion d’Honneur of France in 2013 — seems both to gratify and to amuse him. “Just look at this,” he said to me last month, holding up the ornately bound Mondadori volume, as thick as a Bible and comprising titles like “Lamento di Portnoy” and “Zuckerman Scatenato.” “Who reads books like this?”

In 2012, as he approached 80, Roth famously announced that he had retired from writing. (He actually stopped two years earlier.) In the years since, he has spent a certain amount of time setting the record straight. He wrote a lengthy and impassioned letter to Wikipedia, for example, challenging the online encyclopedia’s preposterous contention that he was not a credible witness to his own life. (Eventually, Wikipedia backed down and redid the Roth entry in its entirety.) Roth is also in regular touch with Blake Bailey, whom he appointed as his official biographer and who has already amassed 1,900 pages of notes for a book expected to be half that length. And just recently, he supervised the publication of “Why Write?,” the 10th and last volume in the Library of America edition of his work. A sort of final sweeping up, a polishing of the legacy, it includes a selection of literary essays from the 1960s and ’70s; the full text of “Shop Talk,” his 2001 collection of conversations and interviews with other writers, many of them European; and a section of valedictory essays and addresses, several published here for the first time. Not accidentally, the book ends with the three-word sentence “Here I am” — between hard covers, that is.

But mostly now Roth leads the quiet life of an Upper West Side retiree. (His house in Connecticut, where he used to seclude himself for extended bouts of writing, he now uses only in the summer.) He sees friends, goes to concerts, checks his email, watches old movies on FilmStruck. Not long ago he had a visit from David Simon, the creator of “The Wire,” who is making a six-part mini-series of “The Plot Against America,” and afterward he said he was sure his novel was in good hands. Roth’s health is good, though he has had several surgeries for a recurring back problem, and he seems cheerful and contented. He’s thoughtful but still, when he wants to be, very funny.

I have interviewed Roth on several occasions over the years, and last month I asked if we could talk again. Like a lot of his readers, I wondered what the author of “American Pastoral,” “I Married a Communist” and “The Plot Against America” made of this strange period we are living in now. And I was curious about how he spent his time. Sudoku? Daytime TV? He agreed to be interviewed but only if it could be done via email. He needed to take some time, he said, and think about what he wanted to say.

C.M. In a few months you’ll turn 85. Do you feel like an elder? What has growing old been like?
P.R. Yes, in just a matter of months I’ll depart old age to enter deep old age — easing ever deeper daily into the redoubtable Valley of the Shadow. Right now it is astonishing to find myself still here at the end of each day. Getting into bed at night I smile and think, “I lived another day.” And then it’s astonishing again to awaken eight hours later and to see that it is morning of the next day and that I continue to be here. “I survived another night,” which thought causes me to smile once more. I go to sleep smiling and I wake up smiling. I’m very pleased that I’m still alive. Moreover, when this happens, as it has, week after week and month after month since I began drawing Social Security, it produces the illusion that this thing is just never going to end, though of course I know that it can stop on a dime. It’s something like playing a game, day in and day out, a high-stakes game that for now, even against the odds, I just keep winning. We will see how long my luck holds out.
C.M. Now that you’ve retired as a novelist, do you ever miss writing, or think about un-retiring?
P.R. No, I don’t. That’s because the conditions that prompted me to stop writing fiction seven years ago haven’t changed. As I say in “Why Write?,” by 2010 I had “a strong suspicion that I’d done my best work and anything more would be inferior. I was by this time no longer in possession of the mental vitality or the verbal energy or the physical fitness needed to mount and sustain a large creative attack of any duration on a complex structure as demanding as a novel…. Every talent has its terms — its nature, its scope, its force; also its term, a tenure, a life span…. Not everyone can be fruitful forever.”
C.M. Looking back, how do you recall your 50-plus years as a writer?
P.R. Exhilaration and groaning. Frustration and freedom. Inspiration and uncertainty. Abundance and emptiness. Blazing forth and muddling through. The day-by-day repertoire of oscillating dualities that any talent withstands — and tremendous solitude, too. And the silence: 50 years in a room silent as the bottom of a pool, eking out, when all went well, my minimum daily allowance of usable prose.
C.M. In “Why Write?” you reprint your famous essay “Writing American Fiction,” which argues that American reality is so crazy that it almost outstrips the writer’s imagination. It was 1960 when you said that. What about now? Did you ever foresee an America like the one we live in today?
P.R. No one I know of has foreseen an America like the one we live in today. No one (except perhaps the acidic H. L. Mencken, who famously described American democracy as “the worship of jackals by jackasses”) could have imagined that the 21st-century catastrophe to befall the U.S.A., the most debasing of disasters, would appear not, say, in the terrifying guise of an Orwellian Big Brother but in the ominously ridiculous commedia dell’arte figure of the boastful buffoon. How naïve I was in 1960 to think that I was an American living in preposterous times! How quaint! But then what could I know in 1960 of 1963 or 1968 or 1974 or 2001 or 2016?
C.M. Your 2004 novel, “The Plot Against America,” seems eerily prescient today. When that novel came out, some people saw it as a commentary on the Bush administration, but there were nowhere near as many parallels then as there seem to be now.
P.R. However prescient “The Plot Against America” might seem to you, there is surely one enormous difference between the political circumstances I invent there for the U.S. in 1940 and the political calamity that dismays us so today. It’s the difference in stature between a President Lindbergh and a President Trump. Charles Lindbergh, in life as in my novel, may have been a genuine racist and an anti-Semite and a white supremacist sympathetic to Fascism, but he was also — because of the extraordinary feat of his solo trans-Atlantic flight at the age of 25 — an authentic American hero 13 years before I have him winning the presidency. Lindbergh, historically, was the courageous young pilot who in 1927, for the first time, flew nonstop across the Atlantic, from Long Island to Paris. He did it in 33.5 hours in a single-seat, single-engine monoplane, thus making him a kind of 20th-century Leif Ericson, an aeronautical Magellan, one of the earliest beacons of the age of aviation. Trump, by comparison, is a massive fraud, the evil sum of his deficiencies, devoid of everything but the hollow ideology of a megalomaniac.
C.M. One of your recurrent themes has been male sexual desire — thwarted desire, as often as not — and its many manifestations. What do you make of the moment we seem to be in now, with so many women coming forth and accusing so many highly visible men of sexual harassment and abuse?
P.R. I am, as you indicate, no stranger as a novelist to the erotic furies. Men enveloped by sexual temptation is one of the aspects of men’s lives that I’ve written about in some of my books. Men responsive to the insistent call of sexual pleasure, beset by shameful desires and the undauntedness of obsessive lusts, beguiled even by the lure of the taboo — over the decades, I have imagined a small coterie of unsettled men possessed by just such inflammatory forces they must negotiate and contend with. I’ve tried to be uncompromising in depicting these men each as he is, each as he behaves, aroused, stimulated, hungry in the grip of carnal fervor and facing the array of psychological and ethical quandaries the exigencies of desire present. I haven’t shunned the hard facts in these fictions of why and how and when tumescent men do what they do, even when these have not been in harmony with the portrayal that a masculine public-relations campaign — if there were such a thing — might prefer. I’ve stepped not just inside the male head but into the reality of those urges whose obstinate pressure by its persistence can menace one’s rationality, urges sometimes so intense they may even be experienced as a form of lunacy. Consequently, none of the more extreme conduct I have been reading about in the newspapers lately has astonished me.
C.M. Before you were retired, you were famous for putting in long, long days. Now that you’ve stopped writing, what do you do with all that free time?
P.R. I read — strangely or not so strangely, very little fiction. I spent my whole working life reading fiction, teaching fiction, studying fiction and writing fiction. I thought of little else until about seven years ago. Since then I’ve spent a good part of each day reading history, mainly American history but also modern European history. Reading has taken the place of writing, and constitutes the major part, the stimulus, of my thinking life.
C.M. What have you been reading lately?
P.R. I seem to have veered off course lately and read a heterogeneous collection of books. I’ve read three books by Ta-Nehisi Coates, the most telling from a literary point of view, “The Beautiful Struggle,” his memoir of the boyhood challenge from his father. From reading Coates I learned about Nell Irvin Painter’s provocatively titled compendium “The History of White People.” Painter sent me back to American history, to Edmund Morgan’s “American Slavery, American Freedom,” a big scholarly history of what Morgan calls “the marriage of slavery and freedom” as it existed in early Virginia. Reading Morgan led me circuitously to reading the essays of Teju Cole, though not before my making a major swerve by reading Stephen Greenblatt’s “The Swerve,” about the circumstances of the 15th-century discovery of the manuscript of Lucretius’ subversive “On the Nature of Things.” This led to my tackling some of Lucretius’ long poem, written sometime in the first century B.C.E., in a prose translation by A. E. Stallings. From there I went on to read Greenblatt’s book about “how Shakespeare became Shakespeare,” “Will in the World.” How in the midst of all this I came to read and enjoy Bruce Springsteen’s autobiography, “Born to Run,” I can’t explain other than to say that part of the pleasure of now having so much time at my disposal to read whatever comes my way invites unpremeditated surprises.
Pre-publication copies of books arrive regularly in the mail, and that’s how I discovered Steven Zipperstein’s “Pogrom: Kishinev and the Tilt of History.” Zipperstein pinpoints the moment at the start of the 20th century when the Jewish predicament in Europe turned deadly in a way that foretold the end of everything. “Pogrom” led me to find a recent book of interpretive history, Yuri Slezkine’s “The Jewish Century,” which argues that “the Modern Age is the Jewish Age, and the 20th century, in particular, is the Jewish Century.” I read Isaiah Berlin’s “Personal Impressions,” his essay-portraits of the cast of influential 20th-century figures he’d known or observed. There is a cameo of Virginia Woolf in all her terrifying genius and there are especially gripping pages about the initial evening meeting in badly bombarded Leningrad in 1945 with the magnificent Russian poet Anna Akhmatova, when she was in her 50s, isolated, lonely, despised and persecuted by the Soviet regime. Berlin writes, “Leningrad after the war was for her nothing but a vast cemetery, the graveyard of her friends. … The account of the unrelieved tragedy of her life went far beyond anything which anyone had ever described to me in spoken words.” They spoke until 3 or 4 in the morning. The scene is as moving as anything in Tolstoy.
Just in the past week, I read books by two friends, Edna O’Brien’s wise little biography of James Joyce and an engagingly eccentric autobiography, “Confessions of an Old Jewish Painter,” by one of my dearest dead friends, the great American artist R. B. Kitaj. I have many dear dead friends. A number were novelists. I miss finding their new books in the mail.
Charles McGrath, a former editor of the Book Review, is a contributing writer for The Times. He is the editor of a Library of America collection of John O’Hara stories.

Tribute to Global Progress

Debbie Downers: attention!

The point of this post: global progress on the fronts that really count has been amazing.

There are many sources. But my favorite is Nick Kristof’s column “Why 2017 Was the Best Year in Human History”, the most emailed piece of the week. I now see why. It is reprinted below.

“The most important thing happening right now is not a Trump tweet, but children’s lives saved and major gains in health, education and human welfare.”

Let me step back for a minute.

Fareed Zakaria, in his 2008 book The Post-American World, first raised my awareness of global progress. He began to get my head screwed on correctly.

Don’t get me wrong. I have lived in this fishbowl of global progress my entire life. I have been keenly aware of its major events, such as:

The Industrial Revolution
The Triumph of Democracy
The victories of WWI and WWII
The fall of the Berlin Wall
The rise of global institutions, e.g. the UN, the WTO, the WHO, the World Bank
The rise of the computing revolution
The rise of the internet
The advent of iPhones
The conquest of infectious disease

But Fareed’s take on world events was spectacular in its optimism. He reminded readers that wars can be massive or small, like skirmishes; that peace can be the norm or war can be the norm; that human suffering can be widespread or isolated; and, most of all, he pointed out that the last fifty years have been, on the whole, spectacularly peaceful, wealth-creating, and wellbeing-creating.

I am just like everyone else, though. I need a reminder.

The reminder came to me in Nick Kristof’s column this Sunday.

My favorites:

As recently as the 1960s, a majority of humans:

were illiterate. Now fewer than 15 percent are illiterate;
lived in extreme poverty. Now fewer than 10 percent do.

“In another 15 years, illiteracy and extreme poverty will be mostly gone. After thousands of generations, they are pretty much disappearing on our watch.”

“Just since 1990, the lives of more than 100 million children have been saved by vaccinations, diarrhea treatment, breast-feeding promotion and other simple steps.”

The column is below, and the data supporting it is attached.

=================================

CREDIT: https://ourworldindata.org

CREDIT: https://ourworldindata.org/happiness-and-life-satisfaction/

CREDIT: https://www.nytimes.com/2018/01/06/opinion/sunday/2017-progress-illiteracy-poverty.html?smid=nytcore-ipad-share&smprod=nytcore-ipad

Why 2017 Was the Best Year in Human History

We all know that the world is going to hell. Given the rising risk of nuclear war with North Korea, the paralysis in Congress, warfare in Yemen and Syria, atrocities in Myanmar and a president who may be going cuckoo, you might think 2017 was the worst year ever.

But you’d be wrong. In fact, 2017 was probably the very best year in the long history of humanity.

A smaller share of the world’s people were hungry, impoverished or illiterate than at any time before. A smaller proportion of children died than ever before. The proportion disfigured by leprosy, blinded by diseases like trachoma or suffering from other ailments also fell.

We need some perspective as we watch the circus in Washington, hands over our mouths in horror. We journalists focus on bad news — we cover planes that crash, not those that take off — but the backdrop of global progress may be the most important development in our lifetime.

Every day, the number of people around the world living in extreme poverty (less than about $2 a day) goes down by 217,000, according to calculations by Max Roser, an Oxford University economist who runs a website called Our World in Data. Every day, 325,000 more people gain access to electricity. And 300,000 more gain access to clean drinking water.

Readers often assume that because I cover war, poverty and human rights abuses, I must be gloomy, an Eeyore with a pen. But I’m actually upbeat, because I’ve witnessed transformational change.

As recently as the 1960s, a majority of humans had always been illiterate and lived in extreme poverty. Now fewer than 15 percent are illiterate, and fewer than 10 percent live in extreme poverty. In another 15 years, illiteracy and extreme poverty will be mostly gone. After thousands of generations, they are pretty much disappearing on our watch.

Just since 1990, the lives of more than 100 million children have been saved by vaccinations, diarrhea treatment, breast-feeding promotion and other simple steps.

Steven Pinker, the Harvard psychology professor, explores the gains in a terrific book due out next month, “Enlightenment Now,” in which he recounts the progress across a broad array of metrics, from health to wars, the environment to happiness, equal rights to quality of life. “Intellectuals hate progress,” he writes, referring to the reluctance to acknowledge gains, and I know it feels uncomfortable to highlight progress at a time of global threats. But this pessimism is counterproductive and simply empowers the forces of backwardness.

President Trump rode this gloom to the White House. The idea “Make America Great Again” professes a nostalgia for a lost Eden. But really? If that was, say, the 1950s, the U.S. also had segregation, polio and bans on interracial marriage, gay sex and birth control. Most of the world lived under dictatorships, two-thirds of parents had a child die before age 5, and it was a time of nuclear standoffs, of pea soup smog, of frequent wars, of stifling limits on women and of the worst famine in history.

What moment in history would you prefer to live in?
F. Scott Fitzgerald said the test of a first-rate intelligence is the ability to hold two contradictory thoughts at the same time. I suggest these: The world is registering important progress, but it also faces mortal threats. The first belief should empower us to act on the second.

Granted, this column may feel weird to you. Those of us in the columny gig are always bemoaning this or that, and now I’m saying that life is great? That’s because most of the time, quite rightly, we focus on things going wrong. But it’s also important to step back periodically. Professor Roser notes that there was never a headline saying, “The Industrial Revolution Is Happening,” even though that was the most important news of the last 250 years.

I had a visit the other day from Sultana, a young Afghan woman from the Taliban heartland. She had been forced to drop out of elementary school. But her home had internet, so she taught herself English, then algebra and calculus with the help of the Khan Academy, Coursera and EdX websites. Without leaving her house, she moved on to physics and string theory, wrestled with Kant and read The New York Times on the side, and began emailing a distinguished American astrophysicist, Lawrence M. Krauss.

I wrote about Sultana in 2016, and with the help of Professor Krauss and my readers, she is now studying at Arizona State University, taking graduate classes. She’s a reminder of the aphorism that talent is universal, but opportunity is not. The meaning of global progress is that such talent increasingly can flourish.

So, sure, the world is a dangerous mess; I worry in particular about the risk of a war with North Korea. But I also believe in stepping back once a year or so to take note of genuine progress — just as, a year ago, I wrote that 2016 had been the best year in the history of the world, and a year from now I hope to offer similar good news about 2018. The most important thing happening right now is not a Trump tweet, but children’s lives saved and major gains in health, education and human welfare.

Every other day this year, I promise to tear my hair and weep and scream in outrage at all the things going wrong. But today, let’s not miss what’s going right.

A version of this op-ed appears in print on January 7, 2018, on Page SR9 of the New York edition with the headline: Why 2017 Was the Best Year in History

Thought Recognition and BCIs

The Economist kicked off their 2018 year with a bold prediction: “Brain-computer interfaces may change what it means to be human.”

In their lead article, they suggest that BCIs (brain-computer interfaces) like the BrainGate system are leading the way into a new world: where mind control works.

I feel like I did in 1979 when I first heard about the Apple II. The whole world was mainframe computing and time-sharing of those monsters, and yet two guys in a garage blew a massive hole through this paradigm, turned it on its head, and invented personal computing.

Think about it: personal computing has been evolving and constantly improving for almost forty years!

Back then, I could see the future vaguely, in very partial outlines, without much practical effect, but with intense curiosity.

Another example is voice recognition. I still remember being introduced to the subject, way back in …. 1970? I got all excited about it, until I realized …. it sucked! And it wasn’t going to get much better anytime soon. But I remember saying to myself: I can’t be fooled by the first versions of voice recognition. I can’t lull myself to sleep. I need to watch this space because it will evolve and improve over time.

If you think about it, technology version 1-10 always sucks. The history of speech recognition in the 1950’s and 1960’s is, well, pathetic.

IBM’s SHOEBOX was introduced at the 1962 World’s Fair.
DARPA got involved in the 1970’s, and then partnered with Carnegie Mellon on HARPY – a major advance.
Threshold Technology was formed then, to commercialize primitive speech recognition.
And now we have SIRI.

And sure enough, after almost 40 years of trying, voice recognition is getting really, really good. Can we see a time within the next 10 years when voice recognition replaces most keyboard applications?

I think so.

And so it is with this subject. We are at the very, very beginning, when it all sounds vague, with partial outlines, without much practical effect, and yet ….. it fills me with intense curiosity.

What could the next fifty years bring?

Is it possible that we will be able to think something, and have that something (a thought? a prescribed action? an essay?) become physical?

Read on…..

===============================

CREDIT: Economist Article on The Next Frontier

TECHNOLOGIES are often billed as transformative. For William Kochevar, the term is justified. Mr Kochevar is paralysed below the shoulders after a cycling accident, yet has managed to feed himself by his own hand. This remarkable feat is partly thanks to electrodes, implanted in his right arm, which stimulate muscles. But the real magic lies higher up. Mr Kochevar can control his arm using the power of thought. His intention to move is reflected in neural activity in his motor cortex; these signals are detected by implants in his brain and processed into commands to activate the electrodes in his arms.

An ability to decode thought in this way may sound like science fiction. But brain-computer interfaces (BCIs) like the BrainGate system used by Mr Kochevar provide evidence that mind-control can work. Researchers are able to tell what words and images people have heard and seen from neural activity alone. Information can also be encoded and used to stimulate the brain. Over 300,000 people have cochlear implants, which help them to hear by converting sound into electrical signals and sending them into the brain. Scientists have “injected” data into monkeys’ heads, instructing them to perform actions via electrical pulses.

As our Technology Quarterly in this issue explains, the pace of research into BCIs and the scale of its ambition are increasing. Both America’s armed forces and Silicon Valley are starting to focus on the brain. Facebook dreams of thought-to-text typing. Kernel, a startup, has $100m to spend on neurotechnology. Elon Musk has formed a firm called Neuralink; he thinks that, if humanity is to survive the advent of artificial intelligence, it needs an upgrade. Entrepreneurs envisage a world in which people can communicate telepathically, with each other and with machines, or acquire superhuman abilities, such as hearing at very high frequencies.

These powers, if they ever materialise, are decades away. But well before then, BCIs could open the door to remarkable new applications. Imagine stimulating the visual cortex to help the blind, forging new neural connections in stroke victims or monitoring the brain for signs of depression. By turning the firing of neurons into a resource to be harnessed, BCIs may change the idea of what it means to be human.

That thinking feeling
Sceptics scoff. Taking medical BCIs out of the lab into clinical practice has proved very difficult. The BrainGate system used by Mr Kochevar was developed more than ten years ago, but only a handful of people have tried it out. Turning implants into consumer products is even harder to imagine. The path to the mainstream is blocked by three formidable barriers—technological, scientific and commercial.

Start with technology. Non-invasive techniques like an electroencephalogram (EEG) struggle to pick up high-resolution brain signals through intervening layers of skin, bone and membrane. Some advances are being made—on EEG caps that can be used to play virtual-reality games or control industrial robots using thought alone. But for the time being at least, the most ambitious applications require implants that can interact directly with neurons. And existing devices have lots of drawbacks. They involve wires that pass through the skull; they provoke immune responses; they communicate with only a few hundred of the 85bn neurons in the human brain. But that could soon change. Helped by advances in miniaturisation and increased computing power, efforts are under way to make safe, wireless implants that can communicate with hundreds of thousands of neurons. Some of these interpret the brain’s electrical signals; others experiment with light, magnetism and ultrasound.

Clear the technological barrier, and another one looms. The brain is still a foreign country. Scientists know little about how exactly it works, especially when it comes to complex functions like memory formation. Research is more advanced in animals, but experiments on humans are hard. Yet, even today, some parts of the brain, like the motor cortex, are better understood. Nor is complete knowledge always needed. Machine learning can recognise patterns of neural activity; the brain itself gets the hang of controlling BCIs with extraordinary ease. And neurotechnology will reveal more of the brain’s secrets.

Like a hole in the head
The third obstacle comprises the practical barriers to commercialisation. It takes time, money and expertise to get medical devices approved. And consumer applications will take off only if they perform a function people find useful. Some of the applications for brain-computer interfaces are unnecessary—a good voice-assistant is a simpler way to type without fingers than a brain implant, for example. The idea of consumers clamouring for craniotomies also seems far-fetched. Yet brain implants are already an established treatment for some conditions. Around 150,000 people receive deep-brain stimulation via electrodes to help them control Parkinson’s disease. Elective surgery can become routine, as laser-eye procedures show.

All of which suggests that a route to the future imagined by the neurotech pioneers is arduous but achievable. When human ingenuity is applied to a problem, however hard, it is unwise to bet against it. Within a few years, improved technologies may be opening up new channels of communications with the brain. Many of the first applications hold out unambiguous promise—of movement and senses restored. But as uses move to the augmentation of abilities, whether for military purposes or among consumers, a host of concerns will arise. Privacy is an obvious one: the refuge of an inner voice may disappear. Security is another: if a brain can be reached on the internet, it can also be hacked. Inequality is a third: access to superhuman cognitive abilities could be beyond all except a self-perpetuating elite. Ethicists are already starting to grapple with questions of identity and agency that arise when a machine is in the neural loop.

These questions are not urgent. But the bigger story is that neither are they the realm of pure fantasy. Technology changes the way people live. Beneath the skull lies the next frontier.

This article appeared in the Leaders section of the print edition under the headline “The next frontier”

================== REFERENCE: History of Speech Recognition =====

CREDIT:

PC WORLD ARTICLE ON HISTORY OF SPEECH RECOGNITION

Speech Recognition Through the Decades: How We Ended Up With Siri

By Melanie Pinola
PCWorld | NOV 2, 2011 6:00 PM PT

Looking back on the development of speech recognition technology is like watching a child grow up, progressing from the baby-talk level of recognizing single syllables, to building a vocabulary of thousands of words, to answering questions with quick, witty replies, as Apple’s supersmart virtual assistant Siri does.

Listening to Siri, with its slightly snarky sense of humor, made us wonder how far speech recognition has come over the years. Here’s a look at the developments in past decades that have made it possible for people to control devices using only their voice.

1950s and 1960s: Baby Talk
The first speech recognition systems could understand only digits. (Given the complexity of human language, it makes sense that inventors and engineers first focused on numbers.) Bell Laboratories designed in 1952 the “Audrey” system, which recognized digits spoken by a single voice. Ten years later, IBM demonstrated at the 1962 World’s Fair its “Shoebox” machine, which could understand 16 words spoken in English.

Labs in the United States, Japan, England, and the Soviet Union developed other hardware dedicated to recognizing spoken sounds, expanding speech recognition technology to support four vowels and nine consonants.
They may not sound like much, but these first efforts were an impressive start, especially when you consider how primitive computers themselves were at the time.

1970s: Speech Recognition Takes Off

Speech recognition technology made major strides in the 1970s, thanks to interest and funding from the U.S. Department of Defense. The DoD’s DARPA Speech Understanding Research (SUR) program, from 1971 to 1976, was one of the largest of its kind in the history of speech recognition, and among other things it was responsible for Carnegie Mellon’s “Harpy” speech-understanding system. Harpy could understand 1011 words, approximately the vocabulary of an average three-year-old.

Harpy was significant because it introduced a more efficient search approach, called beam search, to “prune the finite-state network of possible sentences,” according to Readings in Speech Recognition by Alex Waibel and Kai-Fu Lee. (The story of speech recognition is very much tied to advances in search methodology and technology, as Google’s entrance into speech recognition on mobile devices proved just a few years ago.)
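
Harpy’s core idea – keep only the best few partial sentences at each step instead of exploring every path – can be sketched in a few lines of Python. Everything below (the toy word lattice, the scores, the function name) is invented for illustration; Harpy’s actual finite-state network and scoring were far richer.

```python
# Toy beam search over a word lattice: at each step, extend every
# surviving hypothesis with each candidate word, score it, and keep
# only the `beam_width` best hypotheses.
def beam_search(steps, beam_width=2):
    # steps: list of dicts mapping candidate word -> log-probability
    beams = [([], 0.0)]  # (hypothesis so far, cumulative score)
    for candidates in steps:
        expanded = [
            (hyp + [word], score + logp)
            for hyp, score in beams
            for word, logp in candidates.items()
        ]
        # prune: keep only the best `beam_width` hypotheses
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams[0][0]  # best-scoring sentence

# Invented two-step lattice with made-up log-probabilities.
steps = [
    {"recognize": -0.2, "wreck a nice": -0.4},
    {"speech": -0.1, "beach": -0.9},
]
print(beam_search(steps))  # -> ['recognize', 'speech']
```

The pruning is what made Harpy tractable: the number of live hypotheses stays constant instead of growing exponentially with sentence length.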

The ’70s also marked a few other important milestones in speech recognition technology, including the founding of the first commercial speech recognition company, Threshold Technology, as well as Bell Laboratories’ introduction of a system that could interpret multiple people’s voices.

1980s: Speech Recognition Turns Toward Prediction
Over the next decade, thanks to new approaches to understanding what people say, speech recognition vocabulary jumped from a few hundred words to several thousand words, and had the potential to recognize an unlimited number of words. One major reason was a new statistical method known as the hidden Markov model.
Rather than simply using templates for words and looking for sound patterns, HMM considered the probability of unknown sounds’ being words. This foundation would be in place for the next two decades (see Automatic Speech Recognition—A Brief History of the Technology Development by B.H. Juang and Lawrence R. Rabiner).

Equipped with this expanded vocabulary, speech recognition started to work its way into commercial applications for business and specialized industry (for instance, medical use). It even entered the home, in the form of Worlds of Wonder’s Julie doll (1987), which children could train to respond to their voice. (“Finally, the doll that understands you.”)

However, whether speech recognition software at the time could recognize 1000 words, as the 1985 Kurzweil text-to-speech program did, or whether it could support a 5000-word vocabulary, as IBM’s system did, a significant hurdle remained: These programs took discrete dictation, so you had … to … pause … after … each … and … every … word.


Fiber’s Role in Diet

In this post, I discuss the role of the microbiome and the role of fiber in supporting a healthy microbiome. A healthy microbiome is related to the amount and diversity of the bacteria found within it.

If I had to summarize, I would say this: new research strongly confirms that high-fiber diets are healthy diets. Because of this finding, eat 20-200 grams of fiber daily from nuts, berries, whole grains, beans and vegetables.

The Role of the Microbiome
Bacteria in the gut – the “microbiome” – have been the subject of intense research interest over the last decade.

We now know that a healthy microbiome is essential to health and wellbeing.

On a scientific level, we now know that a healthy biome is one with billions of bacteria, of many kinds.

And specifically, we now know that a healthy biome has a layer of mucus along the walls of the intestine.

“The gut is coated with a layer of mucus, atop which sits a carpet of hundreds of species of bacteria, part of the human microbiome.”

If that mucus layer is thick, it is healthy. If it is thin, it is unhealthy (thin mucus layers have been linked to chronic inflammation). (“Their intestines got smaller, and its mucus layer thinner. As a result, bacteria wound up much closer to the intestinal wall, and that encroachment triggered an immune reaction.”)

The Role of Fiber in Supporting a Healthy Microbiome
“Fiber” refers to the roughage in fruits, vegetables, and beans that is hard to digest. If fiber is hard to digest, why is it so universally hailed as “good for you”?

That’s the subject of two newly-reported experiments.

The answer seems to lie in bacteria in the gut – the “microbiome”. Much has been written about their beneficial role in the body. But now it seems that some bacteria in the gut have an additional role: they digest fiber that human enzymes cannot digest.

So some bacteria thrive in the gut because of the fiber they eat. And, in an important natural chain, some bacteria apparently thrive on the waste of the bacteria that eat fiber. An ecosystem of bacteria, all tracing back to fiber!

This speaks to one of the most-discussed questions in science today: why is one microbiome populated with relatively few bacteria, in both number and type, while another is much more diverse, with many more bacteria and bacterial types?

One study, shown below, reports on data from tribes in Tanzania that sustain themselves on high-fiber foods. The results, reported in Science, clearly show that an ultra-high-fiber diet results in ultra-high bacteria counts and diversity.

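Diversity of this kind is typically quantified with an ecological measure such as the Shannon index; the Science paper has its own analysis, so the index and the abundance numbers below are only an illustrative sketch:

```python
import math

def shannon_diversity(counts):
    """Shannon index H = -sum(p_i * ln p_i): higher means more diverse."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Invented abundance profiles: an even, species-rich gut community vs.
# one dominated by a few species (numbers illustrative, not from the study).
traditional = [25, 25, 25, 25]   # four species, evenly represented
western = [85, 10, 4, 1]         # same species count, heavily skewed

# The evenly distributed community scores higher on the index.
print(shannon_diversity(traditional) > shannon_diversity(western))
```

Note the index rewards evenness as well as species count: a gut where a few species crowd out the rest scores low even if the rare species have not yet disappeared entirely.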
Other findings suggest that fiber is the food of many bacteria types; a diverse, healthy microbiome therefore depends on a fiber-rich diet. (“On a low-fiber diet, they found, the population crashed, shrinking tenfold.”)

Indeed, it may well be true that many types of fibers support many types of bacteria.

Proof of this?

Researchers, including Dr. Gewirtz at Georgia State, showed that more fiber seems to be better:

Bad: high-fat, low-fiber (“On a low-fiber diet, they found, the population crashed, shrinking tenfold.” “Many common species became rare, and rare species became common.”)

Good: modest fiber
Better: high-dose fiber (“Despite a high-fat diet, the mice had healthy populations of bacteria in their guts, their intestines were closer to normal, and they put on less weight.”)

Best: high dose of fiber-feeding bacteria
(“Once bacteria are done harvesting the energy in dietary fiber, they cast off the fragments as waste. That waste — in the form of short-chain fatty acids — is absorbed by intestinal cells, which use it as fuel.”)

(“Research suggests that when bacteria break dietary fiber down into short-chain fatty acids, some of them pass into the bloodstream and travel to other organs, where they act as signals to quiet down the immune system.”)

===========================
This article documents rich-in-fiber foods:

CREDIT: http://www.todaysdietitian.com/newarchives/063008p28.shtml

In recognition of fiber’s benefits, Today’s Dietitian looks at some of the best ways to boost fiber intake, from whole to fortified foods, using data from the USDA National Nutrient Database for Standard Reference.

Top Fiber-Rich Foods
1. Get on the Bran Wagon (oat bran, All-Bran cereal, Fiber One chewy bars, etc.)
One simple way to increase fiber intake is to power up on bran. Bran from many grains is very rich in dietary fiber. Oat bran is high in soluble fiber, which has been shown to lower blood cholesterol levels. Wheat, corn, and rice bran are high in insoluble fiber, which helps prevent constipation. Bran can be sprinkled into your favorite foods, from hot cereal and pancakes to muffins and cookies. Many popular high-fiber cereals and bars are also packed with bran.

2. Take a Trip to Bean Town (Limas, Pintos, Lentils, etc)
Beans really are the magical fruit. They are one of the most naturally rich sources of fiber, as well as protein, lysine, vitamins, and minerals, in the plant kingdom. It’s no wonder so many indigenous diets include a bean or two in the mix. Some people experience intestinal gas and discomfort associated with bean intake, so they may be better off slowly introducing beans into their diet. Encourage a variety of beans as an animal protein replacement in stews, side dishes, salads, soups, casseroles, and dips.

3. Go Berry Picking (especially blackberries and raspberries)
Jewel-like berries are in the spotlight due to their antioxidant power, but let’s not forget about their fiber bonus. Berries happen to yield one of the best fiber-per-calorie bargains on the planet. Since berries are packed with tiny seeds, their fiber content is typically higher than that of many fruits. Clients can enjoy berries year-round by making the most of local berries in the summer and eating frozen, preserved, and dried berries during the other seasons. Berries make great toppings for breakfast cereal, yogurt, salads, and desserts.

4. Wholesome Whole Grains (especially barley, oats, brown rice, rye wafers)
One of the easiest ways to up fiber intake is to focus on whole grains. A grain in nature is essentially the entire seed of the plant made up of the bran, germ, and endosperm. Refining the grain removes the germ and the bran; thus, fiber, protein, and other key nutrients are lost. The Whole Grains Council recognizes a variety of grains and defines whole grains or foods made from them as containing “all the essential parts and naturally-occurring nutrients of the entire grain seed. If the grain has been processed, the food product should deliver approximately the same rich balance of nutrients that are found in the original grain seed.” Have clients choose different whole grains as features in side dishes, pilafs, salads, breads, crackers, snacks, and desserts.

5. Sweet Peas (especially frozen green peas, black eyed peas)
Peas, from fresh green peas to dried peas, are naturally chock full of fiber. In fact, food technologists have been studying pea fiber as a functional food ingredient. Clients can make the most of peas by using fresh or frozen green peas and dried peas in soups, stews, side dishes, casseroles, salads, and dips.

6. Green, the Color of Fiber (Spinach, etc)
Deep green, leafy vegetables are notoriously rich in beta-carotene, vitamins, and minerals, but their fiber content isn’t too shabby either. There are more than 1,000 species of plants with edible leaves, many with similar nutritional attributes, including high-fiber content. While many leafy greens are fabulous tossed in salads, sautéing them in olive oil, garlic, lemon, and herbs brings out a rich flavor.

7. Squirrel Away Nuts and Seeds (especially flaxseed and sesame seed)
Go nuts to pack a fiber punch. One ounce of nuts and seeds can provide a hearty contribution to the day’s fiber recommendation, along with a bonus of healthy fats, protein, and phytochemicals. Sprinkling a handful of nuts or seeds over breakfast cereals, yogurt, salads, and desserts is a tasty way to do fiber.

8. Play Squash (especially acorn squash)
Dishing up squash, from summer to winter squash, all year is another way that clients can ratchet up their fiber intake. These nutritious gems are part of the gourd family and contribute a variety of flavors, textures, and colors, as well as fiber, vitamins, minerals, and carotenoids, to the dinner plate. Squash can be turned into soups, stews, side dishes, casseroles, salads, and crudités. Brush squash with olive oil and grill it in the summertime for a healthy, flavorful accompaniment to grilled meats.

9. Brassica or Bust (broccoli, cauliflower, kale, cabbage, and Brussels sprouts)
Brassica vegetables have been studied for their cancer-protective effects associated with high levels of glucosinolates. But these brassy beauties, including broccoli, cauliflower, kale, cabbage, and Brussels sprouts, are also full of fiber. They can be enjoyed in stir-fries, casseroles, soups, and salads and steamed as a side dish.

10. Hot Potatoes
The humble spud, the top vegetable crop in the world, is plump with fiber. Since potatoes are so popular in America, they’re an easy way to help pump up people’s fiber potential. Why stop at Russets? There are numerous potatoes that can provide a rainbow of colors, nutrients, and flavors, and remind clients to eat the skins to reap the greatest fiber rewards. Try adding cooked potatoes with skins to salads, stews, soups, side dishes, stir-fries, and casseroles or simply enjoy baked potatoes more often.

11. Everyday Fruit Basket (especially pears and oranges)
Look no further than everyday fruits to realize your full fiber potential. Many are naturally packed with fiber, as well as other important vitamins and minerals. Maybe the doctor was right when he advised an apple a day, but he could have added pears, oranges, and bananas to the prescription as well. When between fruit seasons, clients can rely on dried fruits to further fortify their diet. Encourage including fruit at breakfast each morning instead of juice; mixing dried fruits into cereals, yogurts, and salads; and reaching for the fruit bowl at snack time. It’s a healthy habit all the way around.

12. Exotic Destinations (especially avocado)
Some of the plants with the highest fiber content in the world may be slightly out of your clients’ comfort zone and, for that matter, time zone. A rainbow of indigenous fruits and vegetables used in cultural food traditions around the globe are very high in fiber. Entice clients to introduce a few new plant foods into their diets to push up the flavor, as well as their fiber, quotient.

13. Fiber Fortification Power
More foods, from juice to yogurt, are including fiber fortification in their ingredient lineup. Such foods may help busy people achieve their fiber goals. As consumer interest in foods with functional benefits, such as digestive health and cardiovascular protection, continues to grow, expect to see an even greater supply of food products promoting fiber content on supermarket shelves.

===========================

This article documents the newly-reported experiments:

CREDIT: NYT Article on Fiber Science

Fiber Is Good for You. Now We Know Why

By Carl Zimmer
Jan. 1, 2018
A diet of fiber-rich foods, such as fruits and vegetables, reduces the risk of developing diabetes, heart disease and arthritis. Indeed, the evidence for fiber’s benefits extends beyond any particular ailment: Eating more fiber seems to lower people’s mortality rate, whatever the cause.

That’s why experts are always saying how good dietary fiber is for us. But while the benefits are clear, it’s not so clear why fiber is so great. “It’s an easy question to ask and a hard one to really answer,” said Fredrik Bäckhed, a biologist at the University of Gothenburg in Sweden.

He and other scientists are running experiments that are yielding some important new clues about fiber’s role in human health. Their research indicates that fiber doesn’t deliver many of its benefits directly to our bodies.

Instead, the fiber we eat feeds billions of bacteria in our guts. Keeping them happy means our intestines and immune systems remain in good working order.

In order to digest food, we need to bathe it in enzymes that break down its molecules. Those molecular fragments then pass through the gut wall and are absorbed in our intestines.
But our bodies make a limited range of enzymes, so that we cannot break down many of the tough compounds in plants. The term “dietary fiber” refers to those indigestible molecules.

But they are indigestible only to us. The gut is coated with a layer of mucus, atop which sits a carpet of hundreds of species of bacteria, part of the human microbiome. Some of these microbes carry the enzymes needed to break down various kinds of dietary fiber.

The ability of these bacteria to survive on fiber we can’t digest ourselves has led many experts to wonder if the microbes are somehow involved in the benefits of the fruits-and-vegetables diet. Two detailed studies published recently in the journal Cell Host and Microbe provide compelling evidence that the answer is yes.

In one experiment, Andrew T. Gewirtz of Georgia State University and his colleagues put mice on a low-fiber, high-fat diet. By examining fragments of bacterial DNA in the animals’ feces, the scientists were able to estimate the size of the gut bacterial population in each mouse.

On a low-fiber diet, they found, the population crashed, shrinking tenfold.

Dr. Bäckhed and his colleagues carried out a similar experiment, surveying the microbiome in mice as they were switched from fiber-rich food to a low-fiber diet. “It’s basically what you’d get at McDonald’s,” Dr. Bäckhed said. “A lot of lard, a lot of sugar, and twenty percent protein.”

The scientists focused on the diversity of species that make up the mouse’s gut microbiome. Shifting the animals to a low-fiber diet had a dramatic effect, they found: Many common species became rare, and rare species became common.

Along with changes to the microbiome, both teams also observed rapid changes to the mice themselves. Their intestines got smaller, and its mucus layer thinner. As a result, bacteria wound up much closer to the intestinal wall, and that encroachment triggered an immune reaction.

After a few days on the low-fiber diet, mouse intestines developed chronic inflammation. After a few weeks, Dr. Gewirtz’s team observed that the mice began to change in other ways, putting on fat, for example, and developing higher blood sugar levels.

Dr. Bäckhed and his colleagues also fed another group of rodents the high-fat menu, along with a modest dose of a type of fiber called inulin. The mucus layer in their guts was healthier than in mice that didn’t get fiber, the scientists found, and intestinal bacteria were kept at a safer distance from their intestinal wall.

Dr. Gewirtz and his colleagues gave inulin to their mice as well, but at a much higher dose. The improvements were even more dramatic: Despite a high-fat diet, the mice had healthy populations of bacteria in their guts, their intestines were closer to normal, and they put on less weight.

Dr. Bäckhed and his colleagues ran one more interesting experiment: They spiked water given to mice on a high-fat diet with a species of fiber-feeding bacteria. The addition changed the mice for the better: Even on a high-fat diet, they produced more mucus in their guts, creating a healthy barrier to keep bacteria from the intestinal walls.

One way that fiber benefits health is by giving us, indirectly, another source of food, Dr. Gewirtz said. Once bacteria are done harvesting the energy in dietary fiber, they cast off the fragments as waste. That waste — in the form of short-chain fatty acids — is absorbed by intestinal cells, which use it as fuel.

But the gut’s microbes do more than just make energy. They also send messages. Intestinal cells rely on chemical signals from the bacteria to work properly, Dr. Gewirtz said. The cells respond to the signals by multiplying and making a healthy supply of mucus. They also release bacteria-killing molecules.
By generating these responses, gut bacteria help maintain a peaceful coexistence with the immune system. They rest atop the gut’s mucus layer at a safe distance from the intestinal wall. Any bacteria that wind up too close get wiped out by antimicrobial poisons.

While some species of gut bacteria feed directly on dietary fiber, they probably support other species that feed on their waste. A number of species in this ecosystem — all of it built on fiber — may be talking to our guts.

Going on a low-fiber diet disturbs this peaceful relationship, the new studies suggest. The species that depend on dietary fiber starve, as do the other species that depend on them. Some species may switch to feeding on the host’s own mucus.

With less fuel, intestinal cells grow more slowly. And without a steady stream of chemical signals from bacteria, the cells slow their production of mucus and bacteria-killing poisons.
As a result, bacteria edge closer to the intestinal wall, and the immune system kicks into high gear.

“The gut is always precariously balanced between trying to contain these organisms and not to overreact,” said Eric C. Martens, a microbiologist at the University of Michigan who was not involved in the new studies. “It could be a tipping point between health and disease.”

Inflammation can help fight infections, but if it becomes chronic, it can harm our bodies. Among other things, chronic inflammation may interfere with how the body uses the calories in food, storing more of it as fat rather than burning it for energy.

Justin L. Sonnenburg, a biologist at Stanford University who was not involved in the new studies, said that a low-fiber diet can cause low-level inflammation not only in the gut, but throughout the body.

His research suggests that when bacteria break dietary fiber down into short-chain fatty acids, some of them pass into the bloodstream and travel to other organs, where they act as signals to quiet down the immune system.

“You can modulate what’s happening in your lung based on what you’re feeding your microbiome in your gut,” Dr. Sonnenburg said.
Hannah D. Holscher, a nutrition scientist at the University of Illinois who was not involved in the new studies, said that the results on mice need to be put to the test in humans. But it’s much harder to run such studies on people.

In her own lab, Dr. Holscher acts as a round-the-clock personal chef. She and her colleagues provide volunteers with all their meals for two weeks. She can then give some of her volunteers an extra source of fiber — such as walnuts — and look for changes in both their microbiome and their levels of inflammation.

Dr. Holscher and other researchers hope that they will learn enough about how fiber influences the microbiome to use it as a way to treat disorders. Lowering inflammation with fiber may also help in the treatment of immune disorders such as inflammatory bowel disease.

Fiber may also help reverse obesity. Last month in the American Journal of Clinical Nutrition, Dr. Holscher and her colleagues reviewed a number of trials in which fiber was used to treat obesity. They found that fiber supplements helped obese people to lose about five pounds, on average.
But for those who want to stay healthy, simply adding one kind of fiber to a typical Western diet won’t be a panacea. Giving mice inulin in the new studies only partly restored them to health.

That’s probably because we depend on a number of different kinds of dietary fiber we get from plants. It’s possible that each type of fiber feeds a particular set of bacteria, which send their own important signals to our bodies.

“It points to the boring thing that we all know but no one does,” Dr. Bäckhed said. “If you eat more green veggies and less fries and sweets, you’ll probably be better off in the long term.”

=====================

CREDIT: https://www.npr.org/sections/goatsandsoda/2017/08/24/545631521/is-the-secret-to-a-healthier-microbiome-hidden-in-the-hadza-diet

Is The Secret To A Healthier Microbiome Hidden In The Hadza Diet?

August 24, 2017, 6:11 PM ET
Heard on All Things Considered

MICHAELEEN DOUCLEFF


The words “endangered species” often conjure up images of big exotic creatures. Think elephants, leopards and polar bears.

But there’s another type of extinction that may be occurring, right now, inside our bodies.

Yes, I’m talking about the microbiome — that collection of bacteria in our intestines that influences everything from metabolism and the immune system to moods and behavior.

For the past few years, scientists around the world have been accumulating evidence that the Western lifestyle is altering our microbiome. Some species of bacteria are even disappearing to undetectable levels.

“Over time we are losing valuable members of our community,” says Justin Sonnenburg, a microbiologist at Stanford University, who has been studying the microbiome for more than a decade.

Now Sonnenburg and his team have evidence for why this microbial die-off is happening — and hints about what we can possibly do to reverse it.

The study, published Thursday in the journal Science, focuses on a group of hunter-gatherers in Tanzania, called Hadza.

Their diet consists almost entirely of food they find in the forest, including wild berries, fiber-rich tubers, honey and wild meat. They basically eat no processed food — or even food that comes from farms.

“They are a very special group of people,” Sonnenburg says. “There are only about 2,200 left and really only about 200 that exclusively adhere to hunting and gathering.”

Sonnenburg and his colleagues analyzed 350 stool samples from Hadza people taken over the course of about a year. They then compared the bacteria found in Hadza with those found in 17 other cultures around the world, including other hunter-gatherer communities in Venezuela and Peru and subsistence farmers in Malawi and Cameroon.

The trend was clear: The further away people’s diets are from a Western diet, the greater the variety of microbes they tend to have in their guts. And that includes bacteria that are missing from American guts.

“So whether it’s people in Africa, Papua New Guinea or South America, communities that live a traditional lifestyle have common gut microbes — ones that we all lack in the industrialized world,” Sonnenburg says.

In a way, the Western diet — low in fiber and high in refined sugars — is basically wiping out species of bacteria from our intestines.

That’s the conclusion Sonnenburg and his team reached after analyzing the Hadza microbiome at one stage of the yearlong study. But when they checked several months later, they uncovered a surprising twist: The composition of the microbiome fluctuated over time, depending on the season and what people were eating. And at one point, the composition started to look surprisingly similar to that of Westerners’ microbiome.

During the dry season, Hadza eat a lot more meat — kind of like Westerners do. And their microbiome shifted as their diet changed. Some of the bacterial species that had been prevalent disappeared to undetectable levels, similar to what’s been observed in Westerners’ guts.

But then in the wet season — when Hadza eat more berries and honey — these missing microbes returned, although the researchers are not really sure what’s in these foods that brings the microbes back.

“I think this finding is really exciting,” says Lawrence David, who studies the microbiome at Duke University. “It suggests the shifts in the microbiome seen in industrialized nations might not be permanent — that they might be reversible by changes in people’s diets.

“The finding supports the idea that the microbiome is plastic, depending on diet,” David adds.

Now the big question is: What’s the key dietary change that could bring the missing microbes back?

David thinks it could be cutting down on fat. “At a high level, it sounds like that,” he says, “because what changed in the Hadza’s diet was whether or not they were hunting versus foraging for berries or honey.”

But Sonnenburg is placing his bets on another dietary component: fiber — which is a vital food for the microbiome. “We’re beginning to realize that people who eat more dietary fiber are actually feeding their gut microbiome,” Sonnenburg says.

Hadza consume a huge amount of fiber because throughout the year, they eat fiber-rich tubers and fruit from baobab trees. These staples give them about 100 to 150 grams of fiber each day. That’s equivalent to the fiber in 50 bowls of Cheerios — and 10 times more than many Americans eat.

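The Cheerios comparison is simple arithmetic, assuming roughly 3 grams of fiber per bowl of Cheerios and a typical American intake of about 15 grams a day (both round figures supplied here for illustration, not stated in the article):

```python
# Rough sanity check of the fiber comparison; all figures approximate.
hadza_fiber_g = 150          # upper end of the reported Hadza daily range
cheerios_fiber_per_bowl = 3  # approximate grams of fiber per serving
american_daily_fiber = 15    # rough typical U.S. daily intake, grams

bowls = hadza_fiber_g / cheerios_fiber_per_bowl   # about 50 bowls
multiple = hadza_fiber_g / american_daily_fiber   # about 10 times U.S. intake
print(bowls, multiple)
```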
“Over the past few years, we’ve come to realize how important this gut community is for our health, and yet we’re eating a low-fiber diet that totally neglects them,” he says. “So we’re essentially starving our microbial selves.”

The Dying Algorithm

CREDIT: NYT Article on the Dying Algorithm

This Cat Sensed Death. What if Computers Could, Too?
By Siddhartha Mukherjee
Jan. 3, 2018

Of the many small humiliations heaped on a young oncologist in his final year of fellowship, perhaps this one carried the oddest bite: A 2-year-old black-and-white cat named Oscar was apparently better than most doctors at predicting when a terminally ill patient was about to die. The story appeared, astonishingly, in The New England Journal of Medicine in the summer of 2007. Adopted as a kitten by the medical staff, Oscar reigned over one floor of the Steere House nursing home in Rhode Island. When the cat would sniff the air, crane his neck and curl up next to a man or woman, it was a sure sign of impending demise. The doctors would call the families to come in for their last visit. Over the course of several years, the cat had curled up next to 50 patients. Every one of them died shortly thereafter.
No one knows how the cat acquired his formidable death-sniffing skills. Perhaps Oscar’s nose learned to detect some unique whiff of death — chemicals released by dying cells, say. Perhaps there were other inscrutable signs. I didn’t quite believe it at first, but Oscar’s acumen was corroborated by other physicians who witnessed the prophetic cat in action. As the author of the article wrote: “No one dies on the third floor unless Oscar pays a visit and stays awhile.”
The story carried a particular resonance for me that summer, for I had been treating S., a 32-year-old plumber with esophageal cancer. He had responded well to chemotherapy and radiation, and we had surgically resected his esophagus, leaving no detectable trace of malignancy in his body. One afternoon, a few weeks after his treatment had been completed, I cautiously broached the topic of end-of-life care. We were going for a cure, of course, I told S., but there was always the small possibility of a relapse. He had a young wife and two children, and a mother who had brought him weekly to the chemo suite. Perhaps, I suggested, he might have a frank conversation with his family about his goals?

But S. demurred. He was regaining strength week by week. The conversation was bound to be “a bummah,” as he put it in his distinct Boston accent. His spirits were up. The cancer was out. Why rain on his celebration? I agreed reluctantly; it was unlikely that the cancer would return.

When the relapse appeared, it was a full-on deluge. Two months after he left the hospital, S. returned to see me with sprays of metastasis in his liver, his lungs and, unusually, in his bones. The pain from these lesions was so terrifying that only the highest doses of painkilling drugs would treat it, and S. spent the last weeks of his life in a state bordering on coma, unable to register the presence of his family around his bed. His mother pleaded with me at first to give him more chemo, then accused me of misleading the family about S.’s prognosis. I held my tongue in shame: Doctors, I knew, have an abysmal track record of predicting which of our patients are going to die. Death is our ultimate black box.

In a survey led by researchers at University College London of over 12,000 prognoses of the life span of terminally ill patients, the hits and misses were wide-ranging. Some doctors predicted deaths accurately. Others underestimated death by nearly three months; yet others overestimated it by an equal magnitude. Even within oncology, there were subcultures of the worst offenders: In one story, likely apocryphal, a leukemia doctor was found instilling chemotherapy into the veins of a man whose I.C.U. monitor said that his heart had long since stopped.

But what if an algorithm could predict death? In late 2016 a graduate student named Anand Avati at Stanford’s computer-science department, along with a small team from the medical school, tried to “teach” an algorithm to identify patients who were very likely to die within a defined time window. “The palliative-care team at the hospital had a challenge,” Avati told me. “How could we find patients who are within three to 12 months of dying?” This window was “the sweet spot of palliative care.” A lead time longer than 12 months can strain limited resources unnecessarily, providing too much, too soon; in contrast, if death came less than three months after the prediction, there would be no real preparatory time for dying — too little, too late. Identifying patients in the narrow, optimal time period, Avati knew, would allow doctors to use medical interventions more appropriately and more humanely. And if the algorithm worked, palliative-care teams would be relieved from having to manually scour charts, hunting for those most likely to benefit.

Avati and his team identified about 200,000 patients who could be studied. The patients had all sorts of illnesses — cancer, neurological diseases, heart and kidney failure. The team’s key insight was to use the hospital’s medical records as a proxy time machine. Say a man died in January 2017. What if you scrolled time back to the “sweet spot of palliative care” — the window between January and October 2016 when care would have been most effective? But to find that spot for a given patient, Avati knew, you’d presumably need to collect and analyze medical information before that window. Could you gather information about this man during this prewindow period that would enable a doctor to predict a demise in that three-to-12-month section of time? And what kinds of inputs might teach such an algorithm to make predictions?
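
The “proxy time machine” amounts to a labeling rule: roll back from each recorded death and mark record snapshots falling three to twelve months before it as positive examples. A minimal sketch with invented dates and a simplified rule (the actual pipeline is not described at this level of detail in the article):

```python
from datetime import date, timedelta

def label_snapshot(snapshot_date, death_date):
    """Label a patient-record snapshot: True if the patient died within
    the 3-to-12-month palliative-care 'sweet spot' after that date."""
    if death_date is None:
        return False  # patient still alive: negative example
    lead = death_date - snapshot_date
    return timedelta(days=90) <= lead <= timedelta(days=365)

# A patient who died in January 2017, viewed from three snapshot dates:
death = date(2017, 1, 15)
print(label_snapshot(date(2016, 3, 1), death))    # about 10 months out: in window
print(label_snapshot(date(2016, 12, 20), death))  # under 3 months: too late
print(label_snapshot(date(2015, 6, 1), death))    # over 12 months: too soon
```

Features computed from the record up to each snapshot date, paired with these labels, are what a model can then be trained on.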
Avati drew on medical information that had already been coded by doctors in the hospital: a patient’s diagnosis, the number of scans ordered, the number of days spent in the hospital, the kinds of procedures done, the medical prescriptions written. The information was admittedly limited — no questionnaires, no conversations, no sniffing of chemicals — but it was objective, and standardized across patients.

These inputs were fed into a so-called deep neural network — a kind of software architecture thus named because it’s thought to loosely mimic the way the brain’s neurons are organized. The task of the algorithm was to adjust the weights and strengths of each piece of information in order to generate a probability score that a given patient would die within three to 12 months.

The “dying algorithm,” as we might call it, digested and absorbed information from nearly 160,000 patients to train itself. Once it had ingested all the data, Avati’s team tested it on the remaining 40,000 patients. The algorithm performed surprisingly well. The false-alarm rate was low: Nine out of 10 patients predicted to die within three to 12 months did die within that window. And 95 percent of patients assigned low probabilities by the program survived longer than 12 months. (The data used by this algorithm can be vastly refined in the future. Lab values, scan results, a doctor’s note or a patient’s own assessment can be added to the mix, enhancing the predictive power.)

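The two reported figures map onto standard classifier metrics: precision among the patients flagged as likely to die, and what amounts to a negative predictive value among those assigned low probabilities. The confusion counts below are invented to reproduce the reported rates, not taken from the study:

```python
# Invented counts chosen only to match the article's reported rates.
predicted_die_and_died = 90      # flagged high-risk patients who died in window
predicted_die_total = 100        # all patients flagged high-risk
predicted_live_and_lived = 950   # low-probability patients surviving >12 months
predicted_live_total = 1000      # all patients assigned low probabilities

precision = predicted_die_and_died / predicted_die_total          # "9 out of 10"
npv = predicted_live_and_lived / predicted_live_total             # "95 percent"
print(precision, npv)
```

Note that neither figure alone says how many dying patients the algorithm missed; that would require the recall, which the article does not quote.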
So what, exactly, did the algorithm “learn” about the process of dying? And what, in turn, can it teach oncologists? Here is the strange rub of such a deep learning system: It learns, but it cannot tell us why it has learned; it assigns probabilities, but it cannot easily express the reasoning behind the assignment. Like a child who learns to ride a bicycle by trial and error and, asked to articulate the rules that enable bicycle riding, simply shrugs her shoulders and sails away, the algorithm looks vacantly at us when we ask, “Why?” It is, like death, another black box.

Still, when you pry the box open to look at individual cases, you see expected and unexpected patterns. One man assigned a score of 0.946 died within a few months, as predicted. He had had bladder and prostate cancer, had undergone 21 scans, had been hospitalized for 60 days — all of which had been picked up by the algorithm as signs of impending death. But a surprising amount of weight was seemingly put on the fact that scans were made of his spine and that a catheter had been used in his spinal cord — features that I and my colleagues might not have recognized as predictors of dying (an M.R.I. of the spinal cord, I later realized, was most likely signaling cancer in the nervous system — a deadly site for metastasis).

It’s hard for me to read about the “dying algorithm” without thinking about my patient S. If a more sophisticated version of such an algorithm had been available, would I have used it in his case? Absolutely. Might that have enabled the end-of-life conversation S. never had with his family? Yes. But I cannot shake some inherent discomfort with the thought that an algorithm might understand patterns of mortality better than most humans. And why, I kept asking myself, would such a program seem so much more acceptable if it had come wrapped in a black-and-white fur box that, rather than emitting probabilistic outputs, curled up next to us with retracted claws?

Siddhartha Mukherjee is the author of “The Emperor of All Maladies: A Biography of Cancer” and, more recently, “The Gene: An Intimate History.”

Scourge of Opioids

CREDIT: https://www.nationalaffairs.com/publications/detail/taking-on-the-scourge-of-opioids

Taking On the Scourge of Opioids

Sally Satel

Summer 2017

On March 1, 2017, Maryland governor Larry Hogan declared a state of emergency. Heroin and fentanyl, a powerful synthetic opioid, had killed 1,468 Maryland residents in the first nine months of 2016, up 62% from the same period in 2015. Speaking at a command center of the Maryland Emergency Management Agency near Baltimore, the governor announced additional funding to strengthen law enforcement, prevention, and treatment services. “The reality is that this threat is rapidly escalating,” Hogan said.

And it is escalating across the country. Florida governor Rick Scott followed Hogan’s lead in May, declaring a public-health emergency after requests for help from local officials across the state. Arizona governor Doug Ducey did the same in June. In Ohio, some coroners have run out of space for the bodies of overdose victims and have to use a mobile, refrigerated morgue. In West Virginia, state burial funds have been exhausted burying overdose victims. Opioid orphans are lucky if their grandparents can raise them; if not, they are at the mercy of foster-care systems that are now overflowing with the children of addicted parents.

An estimated 2.5 million Americans abuse or are addicted to opioids — a class of highly addictive drugs that includes Percocet, Vicodin, OxyContin, and heroin. Most experts believe this is an undercount, and all agree that the casualty rate is unprecedented. At peak years in an earlier heroin epidemic, from 1973 to 1975, there were 1.5 fatalities per 100,000 Americans. In 2015, the rate was 10.4 per 100,000. In West Virginia, ground zero of the crisis, it was over 36 per 100,000. In raw numbers, more than 33,000 individuals died in 2015 — nearly equal to the number of deaths from car crashes and double the number of gun homicides. Meanwhile, the opioid-related fatalities continue to mount, having quadrupled since 1999.

The roots of the crisis can be traced to the early 1990s when physicians began to prescribe opioid painkillers more liberally. In parallel, overdose deaths from painkillers rose until about 2011. Since then, heroin and synthetic opioids have briskly driven opioid-overdose deaths; they now account for over two-thirds of victims. Synthetic opioids, such as fentanyl, are made mainly in China, shipped to Mexico, and trafficked here. Their menace cannot be overstated.

Fentanyl is 50 times as potent as heroin and can kill instantly. People have been found dead with needles dangling from their arms, the syringe barrels still partly full of fentanyl-containing liquid. One fentanyl analog, carfentanil, is a big-game tranquilizer that’s a staggering 5,000 times more powerful than heroin. This spring, “Gray Death,” a combination of heroin, fentanyl, carfentanil, and other synthetics, has pushed the bounds of lethal chemistry even further. The death rate from synthetics has increased by more than 72% over the space of a single year, from 2014 to 2015. They have transformed an already terrible problem into a true public-health emergency.

The nation has weathered drug epidemics before, but the current affliction — a new plague for a new century, in the words of Nicholas Eberstadt — is different. Today, the addicted are not inner-city minorities, though big cities are increasingly reporting problems. Instead, they are overwhelmingly white and rural, though middle- and upper-class individuals are also affected. The jarring visual of the crisis is not an urban “gang banger” but an overdosed mom slumped in the front seat of her car in a Walmart parking lot, toddler in the back.

It’s almost impossible to survey this devastating tableau and not wonder why the nation’s response has been so slow in coming. Jonathan Caulkins, a drug-policy expert at Carnegie Mellon, offers two theories. One is geography. The prescription-opioid wave crashed down earliest in fly-over states, particularly small cities and rural areas, such as West Virginia and Kentucky, without nationally important media markets. Earlier opioid (heroin) epidemics raged in urban centers, such as New York, Baltimore, Chicago, and Los Angeles.

The second of Caulkins’s plausible explanations is the absence of violence that roiled inner cities in the early 1970s, when President Richard Nixon called drug abuse “public enemy number one.” Dealers do not engage in shooting wars or other gang-related activity. As purveyors of heroin established themselves in the U.S., Mexican bosses deliberately avoided inner cities where heroin markets were dominated by violent gangs. Thanks to a “drive-through” business model perfected by traffickers and executed by discreet runners — farm boys from western Mexico looking to make quick money — heroin can be summoned via text message or cell phone and delivered, like pizza, to homes or handed off in car-to-car transactions. Sources of painkillers are low profile as well. Typically pills are obtained (or stolen) from friends or relatives, physicians, or dealers. The “dark web,” too, is a conduit for synthetics.

It’s hard to miss, too, that this time around, the drug crisis is viewed differently. Heroin users today are widely seen as suffering from an illness. And because that illness has a pale complexion, many have asked, “Where was the compassion for black people?” A racial element cannot be denied, but there are other forces at play, namely that Americans are drug-war weary and law enforcement has incarceration fatigue. It also didn’t help that, in the 1970s, officers were only loosely woven into the fabric of the inner-city minority neighborhoods that were hardest hit. Today, in the small towns where so much of the epidemic plays out, the crisis is personal. Police chiefs, officers, and local authorities will likely have at least one relative, friend, or neighbor with an opioid problem.

If there is reason for optimism in the midst of this crisis, it is that national and local politicians and even police are placing emphasis on treatment over punishment. And, without question, the nation needs considerably more funding for treatment; Congress must step up. Yet the much-touted promise of treatment — and particularly of anti-addiction medications — as a panacea has already been proven wrong. Perhaps “we can’t arrest our way out of the problem,” as officials like to say, but nor are we treating our way out of it. This is because many users reject treatment, and, if they accept it, too many drop out. Engaging drug users in treatment has turned out to be one of the biggest challenges of the epidemic — and one that needs serious attention.

The near-term forecast for this American Carnage, as journalist Christopher Caldwell calls it, is grim. What can be done?

ROOTS OF A CRISIS

In the early 1990s, campaigns for improved treatment of pain gained ground. Analgesia for pain associated with cancer and terminal illness was relatively well accepted, but doctors were leery of medicating chronic conditions, such as joint pain, back pain, and neurological conditions, lest patients become addicted. Then in 1995 the American Pain Society recommended that pain be assessed as the “fifth vital sign” along with the standard four (blood pressure, temperature, pulse, and respiratory rate). In 2001 the influential Joint Commission on Accreditation of Healthcare Organizations established standards for pain management. These standards did not mention opioids, per se, but were interpreted by many physicians as encouraging their use.

These developments had a gradual but dramatic effect on the culture of American medicine. Soon, clinicians were giving an entire month’s worth of Percocet or Lortab to patients with only minor injuries or post-surgical pain that required only a few days of opioid analgesia. Compounding the matter, pharmaceutical companies engaged in aggressive marketing to physicians.

The culture of medical practice contributed as well. Faced with draconian time pressures, a doctor who suspected that his patient was taking too many painkillers rarely had time to talk with him about it. Other time-consuming pain treatments, such as physical therapy or behavioral strategies, were, and remain, less likely to be covered by insurers. Abbreviated visits meant shortcuts, like a quick refill that may not have been warranted, while the need for addiction treatment was overlooked. In addition, clinicians were, and still are, held hostage to ubiquitous “patient-satisfaction surveys.” A poor grade mattered because Medicare and Medicaid rely on these assessments to help determine the amount of reimbursement for care. Clearly, too many incentives pushed toward prescribing painkillers, even when it went against a doctor’s better judgment.

The chief risk of liberal prescribing was not so much that the patient would become addicted — though it happens occasionally — but rather that excess medication fed the rivers of pills that were coursing through many neighborhoods. And as more painkillers began circulating, almost all of them prescribed by physicians, more opportunities arose for non-patients to obtain them, abuse them, and die. OxyContin formed a particularly notorious tributary. Available since 1996, this slow-release form of oxycodone was designed to last up to 12 hours (about six to eight hours longer than immediate-release preparations of oxycodone, such as Percocet). A sustained blood level was meant to be a therapeutic advantage for patients with unremitting pain. To achieve long action, each OxyContin tablet was loaded with a large amount of oxycodone.

Packing a large dose into a single pill presented a major unintended consequence. When it was crushed and snorted or dissolved in water and injected, OxyContin gave a clean, predictable, and enjoyable high. By 2000, reports of abuse of OxyContin began to surface in the Rust Belt — a region rife with injured coal miners who were readily prescribed OxyContin, or, as it came to be called, “hillbilly heroin.” Ohio, along with Florida, became the “pill mill” capitals of the nation. These mills were advertised as “pain clinics,” but were really cash-only businesses set up to sell painkillers in high volume. The mills employed shady physicians who were licensed to prescribe but knew they weren’t treating authentic patients.

Around 2010 to 2011, law enforcement began cracking down on pill mills. In 2010, OxyContin’s maker, Purdue Pharma, reformulated the pill to make it much harder to crush. In parallel, physicians began to re-examine their prescribing practices and to consider non-opioid options for chronic-pain management. More states created prescription registries so that pharmacists and doctors could detect patients who “doctor shopped” for painkillers and even forged prescriptions. (Today, all states except Missouri have such a registry.) Last year, the American Medical Association recommended that pain be removed as a “fifth vital sign” in professional medical standards.

Controlling the sources of prescription pills was completely rational. Sadly, however, it helped set the stage for a new dimension of the opioid epidemic: heroin and synthetic opioids. Heroin — cheaper and more abundant than painkillers — had flowed into the western U.S. since at least the 1990s, but trafficking east of the Mississippi and into the Rust Belt reportedly began to accelerate around the mid-2000s, a transformative episode in the history of domestic drug problems detailed in Sam Quinones’s superb book Dreamland.

The timing was darkly auspicious. As prescription painkillers became harder to get and more expensive, thanks to alterations of the OxyContin tablet, to law-enforcement efforts, and to growing physician enlightenment, a pool of individuals already primed by their experience with prescription opioids moved on to low-cost, relatively pure, and accessible heroin. Indeed, between 2008 and 2010, about three-fourths of people who had used heroin in the past year reported non-medical use of painkillers — likely obtained outside the health-care system — before initiating heroin use.

The progression from pills to heroin was abetted by the nature of addiction itself. As users became increasingly tolerant to painkillers, they needed larger quantities of opioids or more efficient ways to use them in order to achieve the same effect. Moving from oral consumption to injection allowed this. Once a person is already injecting pills, moving to heroin, despite its stigma, doesn’t seem that big a step. The march to heroin is not inexorable, of course. Yet in economically and socially depleted environments where drug use is normalized, heroin is abundant, and treatment is scarce, widespread addiction seems almost inevitable.

The last five years or so have witnessed a massive influx of powder heroin to major cities such as New York, Detroit, and Chicago. From there, traffickers direct shipments to other urban areas, and these supplies are, in turn, distributed further to rural and suburban areas. It is the powdered form of heroin that is laced with synthetics, such as fentanyl. Most victims of synthetic opioids don’t even know they are taking them. Drug traffickers mix the fentanyl with heroin or press it into pill form that they sell as OxyContin.

Yet, there are reports of addicts now knowingly seeking fentanyl as their tolerance to heroin has grown. Whereas heroin requires poppies, which take time to cultivate, synthetics can be made in a lab, so the supply chain can be downsized. And because the synthetics are so strong, small volumes can be trafficked more efficiently and more profitably. What’s more, laboratories can easily stay one step ahead of the Drug Enforcement Administration by modifying fentanyl into analogs that are more potent, less detectable, or both. Synthetics are also far more deadly: In some regions of the country, roughly two-thirds of deaths from opioids can now be traced to heroin, including heroin that medical examiners either suspect or are certain was laced with fentanyl.

THE BASICS

Terminology is important in discussions about drug use. A 2016 Surgeon General report on addiction, “Facing Addiction in America,” defines “misuse” of a substance as consumption that “causes harm to the user and/or to those around them.” Elsewhere, however, the term has been used to refer to consumption for a purpose not consistent with medical or legal guidelines. Thus, misuse would apply equally to the person who takes an extra pill now and then from his own prescribed supply of Percocet to reduce stress as well as to the person who buys it from a dealer and gets high several times a week. The term “abuse” refers to a consistent pattern of use causing harm, but “misuse,” with its protean definitions, has unhelpfully taken its place in many discussions of the current crisis. In the Surgeon General report, the clinical term “substance use disorder” refers to functionally significant impairment caused by substance use. Finally, “addiction,” while not considered a clinical term, denotes a severe form of substance-use disorder — in other words, compulsive use of a substance with difficulty stopping despite negative consequences.

Much of the conventional wisdom surrounding the opioid crisis holds that virtually anyone is at risk for opioid abuse or addiction — say, the average dental patient who receives some Vicodin for a root canal. This is inaccurate, but unsurprising. Exaggerating risk is a common strategy in public-health messaging: The idea is to garner attention and funding by democratizing affliction and universalizing vulnerability. But this kind of glossing is misleading at best, counterproductive at worst. To prevent and ameliorate problems, we need to know who is truly at risk to target resources where they are most needed.

In truth, the vast majority of people prescribed medication for pain do not misuse it, even those given high doses. A new study in the Annals of Surgery, for example, found that almost three-fourths of all opioid painkillers prescribed by surgeons for five common outpatient procedures go unused. In 2014, 81 million people received at least one prescription for an opioid pain reliever, according to a study in the American Journal of Preventive Medicine; yet during the same year, the National Survey on Drug Use and Health reported that only 1.9 million people, approximately 2%, met the criteria for prescription pain-reliever abuse or dependence (a technical term denoting addiction). Those who abuse their prescription opioids are patients who have been prescribed them for over six months and tend to suffer from concomitant psychiatric conditions, usually a mood or anxiety disorder, or have had prior problems with alcohol or drugs.

Notably, the majority of people who develop problems with painkillers are not individuals for whom they have been legitimately prescribed — nor are opioids the first drug they have misused. Such non-patients procure their pills from friends or family, often helping themselves to the amply stocked medicine chests of unsuspecting relatives suffering from cancer or chronic pain. They may scam doctors, forge prescriptions, or doctor shop. The heaviest users are apt to rely on dealers. Some of these individuals make the transition to heroin, but it is a small fraction. (Still, the death toll is striking given the lethality of synthetic opioids.) One study from the Substance Abuse and Mental Health Services Administration found that less than 5% of pill misusers had moved to heroin within five years of first beginning misuse. These painkiller-to-heroin migrators, according to analyses by the Centers for Disease Control and Prevention, also tend to be frequent users of multiple substances, such as benzodiazepines, alcohol, and cocaine. The transition from these other substances to heroin may represent a natural progression for such individuals.

Thus, factors beyond physical pain are most responsible for making individuals vulnerable to problems with opioids. Princeton economists Anne Case and Angus Deaton paint a dreary portrait of the social determinants of addiction in their work on premature demise across the nation. Beginning in the late 1990s, deaths due to alcoholism-related liver disease, suicide, and opioid overdoses began to climb nationwide. These “deaths of despair,” as Case and Deaton call them, strike less-educated whites, both men and women, between the ages of 45 and 54. While the life expectancy of men and women with a college degree continues to grow, it is actually decreasing for their less-educated counterparts. The problems start with poor job opportunities for those without college degrees. Absent employment, people come unmoored. Families unravel, domestic violence escalates, marriages dissolve, parents are alienated from their children, and their children from them.

Opioids are a salve for these communal wounds. Work by Alex Hollingsworth and colleagues found that residents of locales most severely pummeled by the economic downturn were more susceptible to opioids. As county unemployment rates increased by one percentage point, the opioid death rate (per 100,000) rose by almost 4%, and the emergency-room visit rate for opioid overdoses (per 100,000) increased by 7%. It’s no coincidence that many of the states won by Donald Trump — West Virginia, Kentucky, and Ohio, for example — had the highest rates of fatal drug overdoses in 2015.

Of all prime-working-age male labor-force dropouts, nearly half — roughly 7 million men — take pain medication on a daily basis. “In our mind’s eye,” writes Nicholas Eberstadt in a recent issue of Commentary, “we can now picture many millions of un-working men in the prime of life, out of work and not looking for jobs, sitting in front of screens — stoned.” Medicaid, it turns out, financed many of those stoned hours. Of the entire non-working prime-age white male population in 2013, notes Eberstadt, 57% were reportedly collecting disability benefits from one or more government disability programs. Medicaid enabled them to see a doctor and fill their prescriptions for a fraction of the street value: A single 10-milligram Percocet could go for $5 to $10, the co-pay for an entire bottle.

When it comes to beleaguered communities, one has to wonder how much can be done for people whose reserves of optimism and purposefulness have run so low. The challenge is formidable, to be sure, but breaking the cycle of self-destruction through treatment is a critical first step.

TREATMENT OPTIONS

Perhaps surprisingly, the majority of people who become addicted to any drug, including heroin, quit on their own. But for those who cannot stop using by themselves, treatment is critical, and individuals with multiple overdoses and relapses typically need professional help. Experts recommend at least one year of counseling or anti-addiction medication, and often both. General consensus holds that a standard week of “detoxification” is basically useless, if not dangerous — not only is the person extremely likely to resume use, he is at special risk because he will have lost his tolerance and may easily overdose.

Nor is a standard 28-day stay in a residential facility particularly helpful as a sole intervention. In residential settings many patients acquire a false sense of security about their ability to resist drugs. They are, after all, insulated from the stresses and conditioned cues that routinely provoke drug cravings at home and in other familiar environments. This is why residential care must be followed by supervised transition to treatment in an outpatient setting: Users must continue to learn how to cope without drugs in the social and physical milieus they inhabit every day.

Fortunately, medical professionals are armed with a number of good anti-addiction medications to help patients addicted to opioids. The classic treatment is methadone, first introduced as a maintenance therapy in the 1960s. A newer medication approved by the FDA in 2002 for the treatment of opioid addiction is buprenorphine, or “bupe.” It comes, most popularly, as a strip that dissolves under the tongue. The suggested length of treatment with bupe is a minimum of one or two years. Like methadone, bupe is an opioid. Thus, it can prevent withdrawal, blunt cravings, and produce euphoria. Unlike methadone, however, bupe’s chemical structure makes it much less dangerous if taken in excess, thereby prompting Congress to enact a law, the Drug Addiction Treatment Act of 2000, which allows physicians to prescribe it from their offices. Methadone, by contrast, can only be administered in clinics tightly regulated by the Drug Enforcement Administration and the Substance Abuse and Mental Health Services Administration. (I work in such a clinic.)

In addition to methadone or buprenorphine, which have abuse potential of their own, there is extended-release naltrexone. Administered as a monthly injection, naltrexone is an opioid blocker. A person who is “blocked” normally experiences no effect upon taking an opioid drug. Because naltrexone has no abuse potential (hence no street value), it is favored by the criminal-justice system. Jails and prisons are increasingly offering inmates an injection of naltrexone; one dose is given five weeks before release and another during the week of release, with plans for ongoing treatment as an outpatient. Such protection is warranted given the increased risk for death, particularly from drug-related causes, in the early post-release period. For example, one study of inmates released from the Washington State Department of Corrections found a 10-fold greater risk of overdose death within the first two weeks after discharge compared with non-incarcerated state residents of the same age, sex, and race.