Category Archives: Well-Being – Community

Well-being, Community Well-Being, Architecture, Landscape Architecture, Interior Design, Real Estate Economics, Real Estate Development, Real Estate Investing, Design, Construction, Construction Materials, Construction Techniques, Zoning, Building Codes, Public Policy, Christopher Alexander, Adaptive Physical Systems Design

Co-Housing

Not to be confused with co-working, co-ops, or condos, co-housing is its own cultural phenomenon.

This is a major article in the NYT describing the decades-old phenomenon of co-housing and updating it with present-day facts:

Co-housing

20211024

The most recent manifestation of the communalist impulse is the postvaccine nostalgia for the pandemic pod. People are now telling reporters that they miss the camaraderie of those pared-down social networks, as well as the frequent physical company of the same group of friends, the “transformative power of proximity,” as the psychologist Susan Pinker calls it.

I was late to find out about co-housing, a species of intentional community that dates back 30 years, in the United States, anyway. (It emerged in Denmark in the 1970s.) Forced to characterize co-housing in a phrase, you might say “living together, separately.” Those living together have built a community based on, well, belief in community. But they live separately, in that they own their homes, condo-style.

Co-housing sounds confusingly similar to co-living but has a whole different vibe. Co-housers aren’t transient. They have a much stickier idea of social affiliation, and they’re not about to rent a bedroom in some random complex. To draw even finer distinctions: Co-housing communities are not communes. Residents do not give up financial privacy any more than they give up domestic privacy. They have their own bank accounts and commute to ordinary jobs. If you were lucky enough to grow up on a friendly cul-de-sac, you’re in range of the idea, except that you don’t have to worry about your child being hit by a car as she plays in the street. A core principle of co-housing is that cars should be parked on a community’s periphery.

This, I thought, was an idea with promise. Co-living accommodates precarity; co-housing seeks stability. Podding is a byproduct of the collapse of society; co-housing builds society.

Out of the 165 co-housing communities around the country, Eastern Village interested me because it’s urban and vertical, while the majority are suburban or at least suburbanish. I wondered whether co-housing could survive the claustrophobia of city living and the resulting need for personal space. My cheeks still get hot with embarrassment when I remember a remark in an elevator: It was a few years after my son was born, and I’d moved back to Manhattan, hoping to find the something I missed in the suburbs. “You’re not from around here, are you?” a man said, after I tried to start a conversation. Oh, right, I thought. People crammed into a box don’t want to talk to a chirpy lady they might have to edge away from. I never did get to know the other families in the building.

There are other, better-known urban co-housing communities around the country, but Eastern Village has the virtue of not being exemplary. For one thing, it was built from the top down rather than the bottom up. Model co-housing tends to be grass-roots: First the group meets to explore its wants and needs, then it finds an architect who designs a community just right for them, and finally it builds. From the time a group of would-be co-housers forms to the time it moves in, two to five years can pass. The idea for Eastern Village, on the other hand, came from a developer. He undertook the daunting task of retrofitting the building, then asked someone better versed in co-housing to go out, put together a group and teach participants how to live together.

The process still took two and a half years, but it struck me as a more replicable model. If co-housing didn’t have to be handcrafted, I thought, maybe it could be scaled up. And this seems the moment to think about how.

Americans may be about to experience three once-in-a-lifetime opportunities to reconsider how they house themselves. The first is the two big spending bills working their way through Congress. If they pass, they could provide billions of dollars to alleviate homelessness and increase affordable housing. The second opportunity proceeds from the shift to working from home: Record numbers of office buildings stand empty and ready for the refurbishing, and they won’t all be refilled.

The third force that could push us to change our way of life is a heightened awareness of isolation. In a 2020 survey by the Harvard Graduate School of Education, one-third of Americans described themselves as seriously lonely — up from one-fifth before the Covid pandemic. Loneliness is now understood as a public health crisis, ranking as high among risk factors for mortality as heavy smoking, drinking and obesity.

Contrary to what one might think, the loneliest people in America aren’t the elderly. They’re young adults (close to two-thirds of them, according to the Harvard survey) and mothers of small children (about half). This makes sense: Young people tend to lead migratory lives, leading to weak social ties. Mothers have their children, although almost a quarter of them are raising those children without a partner; the United States has the highest rate in the world of children living with only one parent. With or without a partner, a mother may still have a hard time finding a fulfilling social life, since paid work and unpaid maternal labor take up so much of her time.

The pandemic lockdown exposed women’s solitude, in particular, as a function not just of time but also of space. Afraid to go out into the public domain, all caregivers — the newly full-time ones as well as those who had already put care at the center of their lives — became painfully aware that the private domain can be a very lonely and demanding place.

Under the circumstances, co-housing has the potential, if nothing else, to furnish ideas of how to build for community. After all, you’d never get away with snubbing people in the elevator at Eastern Village.

If there is an adage that informs life in co-housing, it’s treat thy neighbor as thy family. Thy extended family, that is, assuming it’s a happy one. And what do happy families do? For one thing, they share stuff. As Rabbi Kimelman-Block led me through what felt like a labyrinth, he opened several overstuffed “sharing closets.” One was full of expensive, space-hogging items like travel cribs and skis. Another was for things being given away.

What else do families do? Well, chores, preferably cheerfully and collaboratively. And indeed, co-housers are expected to sign up for maintenance and cleanup days. Families also look out for one another. In co-housing that means, among other things, helping keep an eye on all the children. Many communities pay for formal day care. Most important, co-housers eat together. Breaking bread is probably the most effective bonding ritual society has ever come up with, and co-housers take turns cooking for and serving meals to other members. Some communities offer meals as often as six times a week. (Attendance is never mandatory.)

Most co-housing communities are anchored by a large, shared kitchen. It forms the heart of the common house, which may also offer pools, carpentry workshops, dance studios or meeting rooms — you name it, some community has it. In Eastern Village, common spaces have been cleverly tucked around the complex. Wending our way from basement to roof, Rabbi Kimelman-Block and I went through a dining room, a room for table tennis and foosball, a living room with a fireplace and fat leather chairs, a children’s playroom, a lamp-lit quiet room, a game room, a laundry room, an exercise room, a small lending library. The kitchen, though, is a problem. It’s not set up to cook communitywide dinners, in part because the fire marshal insisted that it install a crushingly expensive commercial range, and it went instead with a “warm-up kitchen,” as architect and developer Don Tucker calls it. So Eastern Village is more or less stuck with potluck.

But then again, as my mother liked to say, the perfect is the enemy of the good. We have to make do if we want to make change.

Today, the detached single-family house — the lonesome cowboy model of domestic architecture — dominates the American landscape so thoroughly that it feels as if it were inevitable. As of 2019, there were about 100 million single-family homes in the United States (including mobile and prefab homes), compared to about 40 million multifamily ones. But it didn’t have to turn out this way. Although the home on the farm had been the American ideal since Thomas Jefferson popularized pastoralism, as the country urbanized after the Civil War, many visionaries saw opportunities for a less atomized, more female-friendly lifestyle.

The landscape designer Frederick Law Olmsted, for one, imagined Emerald City-like metropolises with public laundries, bakeries and kitchens, taking some of the burden off housewives. Amenities like sewers, gutters and sidewalks would make streets more appealing for women. Women’s rights activists such as Charlotte Perkins Gilman and a now-forgotten feminist named Melusina Fay Peirce envisioned Eastern Village-like cooperatives in apartment complexes, complete with communal laundries, sewing rooms, kitchens and dining rooms. Peirce called it “cooperative housekeeping” and thought women should make money at it.

During the early part of the 20th century, however, those reveries retreated into science fiction novels. Many forces converged to rob them of reality, not least the Red Scare, when politicians developed an allergy to anything that seemed to have a flavor of socialism or feminism. Along with builders, they began to promote the single-family dream house, with its Harry Homeowner and his happy housewife.

Today, roughly three-quarters of the residential land in metro areas is set aside for such houses and yards. Hub-and-spoke roads and commuter railways have grown up around them. Elaborate exclusionary zoning codes were written to protect them from the taint of commerce and industry — as well as to keep white, wealthy neighborhoods away from Black and poorer ones. The distance between home and everything else imposed by these laws is the reason most Americans need to drive to shop or work.

Back when the majority of breadwinners were male and made the journey downtown unburdened by domestic concerns, a long commute wasn’t a big logistical challenge. Today, mothers are also making those commutes, but they still have domestic burdens. Working from home improves the situation only if child care is available.

Co-housing arose, in part, as a solution to the work-life problem. In 1969, Hildur Jackson — just one among many co-housing pioneers, but an eloquent one — was living in a house in Copenhagen, a law school graduate unsure whether she should stay home with her two little boys or embark on a law career. “There was no apparent third option,” she wrote in a remembrance. Then she read an article titled “Children Need 100 Parents.”

Ms. Jackson decided to start a six-family community on an old farm in a Copenhagen suburb. The families built homes around two giant lawns, which were used largely for games, particularly soccer. The barn was turned into a common house, and three Icelandic horses were bought for the stables. “We chose to have no borders between our gardens,” she wrote. “We raised chickens, tended a large common vegetable garden and had fruit trees and berry bushes.” Days were set aside for community maintenance. When her husband traveled on business, which he did often, “I never felt isolated,” she wrote. When she had her third child, she had 11 other parents to help.

Co-housing (called “living communities” in Denmark) soon spread throughout Scandinavia and to the Netherlands and Germany; communities are now found all over Europe, as well as in Canada, Australia and New Zealand. In the 1980s, the architects Charles Durrett and Kathryn McCamant, who were married and business partners at the time, began importing co-housing to the United States. (Between the two of them, they have built or been consultants on many of the co-housing communities in the country.) The two got involved in the movement because they wanted children but their lives seemed too hectic: “We would come home from work exhausted and hungry, only to find the refrigerator empty,” Mr. Durrett has written. So they went to Denmark to study another way to build for parenting.

Co-housing is the nonthreatening heir of America’s far more radical communitarian past. And during my many years of self-education, I discovered that communitarianism has often had a feminist face.

Early socialists avowed an egalitarianism so radical that it included housewives. Nineteenth-century progressives, male as well as female, understood wives’ solitary and unremunerated duties as central to their oppression. Socialists set up model villages and touted them as a way to inspire workers to abandon cities, factories and industrial bosses. But they also promised to enfranchise women and free them from the shackles of domestic drudgery.

Robert Owen, the most famous British socialist of his day, and his French counterpart, Charles Fourier, envisioned the collectivization of women’s work in communal kitchens, dining rooms and nurseries, although they seemed to think this would require the construction of vast, ornate (and unrealistic) palaces. Owen’s and Fourier’s followers, known as Cooperators, established close to 50 socialist communities in rural areas in the Northeastern and Midwestern United States in the 1820s to 1840s. The leaders, who were almost always men, rarely put theory into practice when it came to women. As Carol A. Kolmerten, a historian and the author of “Women in Utopia,” a study of American Owenite communities, wrote, it fell to female Cooperators to prepare the food, wash the clothes and teach the little ones. Or, if the women toiled in fields and workshops, they would still cook and clean in the evenings. Wives who had arrived full of hope left, taking their husbands with them.

Male obtuseness was not the main reason these settlements failed. Other realities proved more damaging. Some settlements couldn’t generate enough cash to pay off the loans that paid for the land. Life in the wilderness wasn’t palatial; it involved log cabins and mosquitoes. Refugees from cities didn’t know how to farm. Class differences among members reasserted themselves, leading to factionalism. But the alienation of one-half of the population (the “woman problem,” Owen came to call it) didn’t help.

On the other hand, secular socialists accounted for only a small fraction of America’s intentional communities. Millenarian Christians — Shakers, Mormons, the Oneida Community and Anabaptist offshoots like the Amish and the Hutterites — built many more, and theirs tended to last longer, as Lawrence Foster writes in “Women, Family and Utopia.” Perhaps that’s because when their leaders broke down the walls of nuclear families to create communal ones, they did so to strengthen their members’ attachment to God and commitment to building his kingdom on earth.

What is remarkable about some of these religious communes is the degree to which they defied the gender norms of their day, in some cases going further than the socialists. The Shakers weren’t feminist in a way contemporary Americans would recognize. They didn’t question the gendered division of labor: Women worked in the kitchens and did the weaving, while men did the farm labor. But women’s work wasn’t seen as inferior to men’s. Both helped sustain the community; therefore both were equal in God’s eyes. More important, Shaker leaders were as likely to be female as male.

In the Oneida Community, a sect that eschewed what its leader called the gloominess of “the little man-and-wife circle” and replaced it with nonmonogamy, women were able to participate without restriction in every aspect of life — religious, economic and social.

Collectivizing domestic labor gave groups incentives to come up with labor-saving household devices. The Shakers patented a water-powered washing machine that cleaned clothes by churning them, an improvement on previous devices. Oneidans may or may not have invented the lazy susan (the point is debated); in any case, they used it to reduce the labor required to serve food in a communal dining hall. With the same goal in mind, they came up with, among other things, an industrial potato peeler and a mop wringer.

These old-time religious communes hold lessons for us moderns. “From a feminist viewpoint the major achievement of most communitarian experiments was ending the isolation of the housewife,” wrote Dolores Hayden in her classic study of feminist communalism, “The Grand Domestic Revolution.” “A second achievement was the division and specialization of household labor.”

After the tour, Rabbi Kimelman-Block roped in whoever was around to talk to me. We gathered on Eastern Village’s xeriscaped roof, its communal green space. Most people brought drinks. I ate Ethiopian takeout. Professions ranged from Realtor to social-justice activist. Eastern Village has 110 residents, 30 of them college age or younger. The ones I met were mostly middle-aged, though one couple bought in when they were in their 70s.

Parenting was the leading answer to my question about why they’d chosen co-housing: Kids aren’t stuck in their apartments; they can run downstairs. Neighbors’ kids or older members were almost always around to babysit, and for a while, there was a somewhat more formal day care arrangement. Adults benefit from the ad hoc interaction, too. Instead of planning dinner or drinks weeks in advance, on any Wednesday or Saturday, a sociable soul can find a neighbor to share a snack or a beer with.

One unexpected comment came from Adrienne Torrey, a curly-haired middle-aged woman with a relaxed manner. “Co-housing attracts a lot of introverts,” she said. That hadn’t occurred to me, but inclined to introversion myself, I immediately saw the logic. Who needs a community more than those who have a hard time spontaneously cobbling one together? Or — my next thought — than new parents stranded by their change of circumstance? By contrast, as soon as you show up in co-housing, you are swept into a round robin of meals and festivities and cleanup days.

The most controversial topic that evening was meetings. Almost all co-housing communities make big decisions by consensus. One member complained that arriving at unanimity is cumbersome and unnecessary. The rest disagreed. However long consensus takes, everyone feels heard and learns the art of compromise. That, I’m told, may be the most important key to successful group living.

If co-housing offers solutions for so many of the problems from which America’s mothers suffer, if we are now uniquely positioned to put at least some of its lessons into effect — thanks to the pandemic’s unintentional consciousness-raising and the possibility that Congress will pass the Biden administration’s plans to rebuild the economy — what’s stopping us?

During one of my several conversations with Charles Durrett, I asked what he would identify as the biggest obstacle to building co-housing in the United States. “Our culture,” he said promptly. “We tend to think of ourselves as independent pioneers. We’re not a cooperative kind of culture.” But he grew up in a tight-knit neighborhood, he said, and his neighbors “played a huge role in my well-being.”

But planning departments, regional as well as municipal, don’t help. Typical American zoning laws frown on multifamily complexes unless they’ve been exiled to poorer parts of town. Even accessory dwelling units, such as mother-in-law apartments, are unpopular, lest they be rented to “undesirables.” Those are the most notorious restrictions; they’re not the only ones Mr. Durrett has had to fight as he tried to build co-housing.

City planning laws simply don’t envision communities focused on residents’ helping one another and keeping children safe. One city demanded two-car driveways for each unit, a waste of space and money in a community that keeps cars far from houses. When a town insisted that to accommodate the number of people in a proposed community, it would have to pay for a $1 million fire truck, Mr. Durrett asked the officials what the fire department’s most common call was. “Pick up and put back,” they told him, meaning putting seniors who have fallen out of their beds back into them. “We can do that for ourselves,” he said. Finding people who can put other people back in bed is precisely what co-housing is good at.

The other challenge, of course, is that not all people want to share their lives. People have to be willing to sacrifice time (all those meetings, the grounds maintenance) and the luxury of self-absorption (the small talk expected from those on their way to the mailroom). Co-housing may consume emotional energy that would otherwise go to keeping other social circles — work colleagues, college buddies, fellow parents at our children’s schools — spinning in the air. “Living in co-housing is not easy,” said Ann Zabaldo, the person hired by Eastern Village’s developer to recruit and educate its future occupants about the art of co-housing. But, she added, “it is so much richer, like drinking deeply from the well.”

Communal living by itself will never solve any one major social problem, be it loneliness or sexism or anything else. Although much more communal architecture can (and should) be built, you can’t mass-produce community. People have to be able to see the benefits before they’ll make the necessary commitments.

But life is changing in ways that may make collaborative coexistence more attractive. Rents are on the rise. People are getting used to the sharing economy. And then there’s that bottom-line truth exposed by the pandemic: Take away child care, and women stop working for pay and don’t start again, like the nearly two million of them who have dropped out of the labor force since February 2020. Something must be done.

In the past few years, states and cities around the country have started reconsidering single-family zoning or dared to vote to put an end to it. Last month, Gov. Gavin Newsom of California signed into law bills to limit single-family zoning and permit construction of buildings with up to 10 units near public transit.

A wholesale revision of zoning codes could lead to a new built environment, one that would nudge us toward a new mind-set. We should build co-housing on a large scale. But even if we don’t, we could start reshaping the contours of our hyperindividualist and antimaternalist landscapes so as to encourage solidarity and fellow feeling rather than aloofness: Co-housing communities are centered on their greenswards; we need more parks. Co-housing puts people before cars; towns and cities should do the same. Co-housers live together, meaning they are around in case of need; the least inspiration we can take from that is to make our housing stock more varied, less focused on the nuclear family, so that members of extended families and groups of friends can be there for one another, too.

If this sounds not unlike the best-designed urban neighborhoods in America, well, maybe it’s not. But the pandemic has sparked a flight from cities and a demand for more suburban housing, and the boom in the market right now is in exurbia — low-density, lower-cost suburbs on the outer edges of metropolitan areas. As these neighborhoods are built, in all likelihood old design habits will prevail. But there’s no harm in imagining, and fighting for, a land-use philosophy focused on making life more pleasant for parents and children — and for the introvert in all of us.

In the 19 years since I had my first child, I have spent a lot of time thinking about how my life might have been different if I’d known about Hildur Jackson’s “third option.” What if there had been tens of thousands of co-housing communities in America instead of a couple hundred? Maybe I would have moved into one rather than back to unfriendly Manhattan.

If I had to single out one feature of cooperative living I find particularly attractive, it would be regular, spontaneous contact with people of all ages. I had my children later in life, and my parents weren’t healthy enough to spend as much time with their grandchildren as all of us wanted, and then, as happens, they died. I’m nostalgic for an intergenerational experience I never had.

A few weeks ago, I watched my teenage daughter spend an entire meal talking conspiratorially to two of my best friends. How often do American teenagers open up to their parents’ friends? What would it have been like for her to be able to do that throughout her childhood with surrogate aunts and uncles and grandparents? The three of them sat just out of earshot, making it hard for me to eavesdrop, which I’m sure was the point. But the sight of them gossiping made me think that maybe, despite the blank suburban streets and the chilly city elevators and my never quite figuring out where we should live, I’d done something right.

Judith Shulevitz (@JudithShulevitz) is a cultural critic and the author of “The Sabbath World: Glimpses of a Different Order of Time.” She still lives in New York City.

Tribute to Global Progress

Debbie Downers: attention!

The point of this post: global progress on the fronts that really count has been amazing.

There are many sources. But my favorite is Nick Kristof’s column “Why 2017 Was the Best Year in Human History”. The column was the most emailed column of the week. I now see why. It is reprinted below.

“The most important thing happening right now is not a Trump tweet, but children’s lives saved and major gains in health, education and human welfare.”

Let me step back for a minute.

Fareed Zakaria, in his 2008 book The Post-American World, first raised my awareness about global progress. He began to get my head screwed on correctly.

Don’t get me wrong. I have lived in this fishbowl of global progress my entire life. I have been keenly aware of its major events, such as:

The Industrial Revolution
The Triumph of Democracy
The victories of WWI and WWII
The fall of the Berlin Wall
The rise of global institutions, e.g. the UN, the WTO, the WHO, the World Bank
The rise of the computing revolution
The rise of the internet
The advent of iPhones
The conquest of infectious disease

But Fareed’s take on world events was spectacular in its optimism. He reminded readers that wars can be massive or small, like skirmishes; that peace can be the norm or war can be the norm; that human suffering can be widespread or isolated; and, most of all, he pointed out that the last fifty years have been, on the whole, spectacularly peaceful, wealth-creating, and well-being-creating.

I am just like everyone else, though. I need a reminder.

The reminder came to me in Nick Kristof’s column this Sunday.

My favorites:

As recently as the 1960s, a majority of humans:

were illiterate. Now fewer than 15 percent are illiterate;
lived in extreme poverty. Now fewer than 10 percent do.

“In another 15 years, illiteracy and extreme poverty will be mostly gone. After thousands of generations, they are pretty much disappearing on our watch.”

“Just since 1990, the lives of more than 100 million children have been saved by vaccinations, diarrhea treatment, breast-feeding promotion and other simple steps.”

The writing is below, and the data supporting the writing is attached.

=================================

CREDIT: https://ourworldindata.org

CREDIT: https://ourworldindata.org/happiness-and-life-satisfaction/

CREDIT: https://www.nytimes.com/2018/01/06/opinion/sunday/2017-progress-illiteracy-poverty.html?smid=nytcore-ipad-share&smprod=nytcore-ipad

Why 2017 Was the Best Year in Human History

We all know that the world is going to hell. Given the rising risk of nuclear war with North Korea, the paralysis in Congress, warfare in Yemen and Syria, atrocities in Myanmar and a president who may be going cuckoo, you might think 2017 was the worst year ever.

But you’d be wrong. In fact, 2017 was probably the very best year in the long history of humanity.

A smaller share of the world’s people were hungry, impoverished or illiterate than at any time before. A smaller proportion of children died than ever before. The proportion disfigured by leprosy, blinded by diseases like trachoma or suffering from other ailments also fell.

We need some perspective as we watch the circus in Washington, hands over our mouths in horror. We journalists focus on bad news — we cover planes that crash, not those that take off — but the backdrop of global progress may be the most important development in our lifetime.

Every day, the number of people around the world living in extreme poverty (less than about $2 a day) goes down by 217,000, according to calculations by Max Roser, an Oxford University economist who runs a website called Our World in Data. Every day, 325,000 more people gain access to electricity. And 300,000 more gain access to clean drinking water.
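Those daily figures compound quickly. A minimal back-of-the-envelope sketch in Python (the annualized totals are my own extrapolation from the per-day numbers quoted above, not figures from Roser or Kristof):

```python
# Annualize the per-day improvements quoted from Our World in Data.
# Multiplying by 365 is a rough extrapolation that assumes the daily
# pace holds for a full year.
DAYS_PER_YEAR = 365

daily_gains = {
    "leaving extreme poverty": 217_000,
    "gaining access to electricity": 325_000,
    "gaining clean drinking water": 300_000,
}

for label, per_day in daily_gains.items():
    print(f"{label}: {per_day:,}/day -> {per_day * DAYS_PER_YEAR:,}/year")
# leaving extreme poverty: 217,000/day -> 79,205,000/year
# gaining access to electricity: 325,000/day -> 118,625,000/year
# gaining clean drinking water: 300,000/day -> 109,500,000/year
```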

Readers often assume that because I cover war, poverty and human rights abuses, I must be gloomy, an Eeyore with a pen. But I’m actually upbeat, because I’ve witnessed transformational change.

As recently as the 1960s, a majority of humans had always been illiterate and lived in extreme poverty. Now fewer than 15 percent are illiterate, and fewer than 10 percent live in extreme poverty. In another 15 years, illiteracy and extreme poverty will be mostly gone. After thousands of generations, they are pretty much disappearing on our watch.

Just since 1990, the lives of more than 100 million children have been saved by vaccinations, diarrhea treatment, breast-feeding promotion and other simple steps.

Steven Pinker, the Harvard psychology professor, explores the gains in a terrific book due out next month, “Enlightenment Now,” in which he recounts the progress across a broad array of metrics, from health to wars, the environment to happiness, equal rights to quality of life. “Intellectuals hate progress,” he writes, referring to the reluctance to acknowledge gains, and I know it feels uncomfortable to highlight progress at a time of global threats. But this pessimism is counterproductive and simply empowers the forces of backwardness.

President Trump rode this gloom to the White House. The idea “Make America Great Again” professes a nostalgia for a lost Eden. But really? If that was, say, the 1950s, the U.S. also had segregation, polio and bans on interracial marriage, gay sex and birth control. Most of the world lived under dictatorships, two-thirds of parents had a child die before age 5, and it was a time of nuclear standoffs, of pea soup smog, of frequent wars, of stifling limits on women and of the worst famine in history.

What moment in history would you prefer to live in?

F. Scott Fitzgerald said the test of a first-rate intelligence is the ability to hold two contradictory thoughts at the same time. I suggest these: The world is registering important progress, but it also faces mortal threats. The first belief should empower us to act on the second.

Granted, this column may feel weird to you. Those of us in the columny gig are always bemoaning this or that, and now I’m saying that life is great? That’s because most of the time, quite rightly, we focus on things going wrong. But it’s also important to step back periodically. Professor Roser notes that there was never a headline saying, “The Industrial Revolution Is Happening,” even though that was the most important news of the last 250 years.

I had a visit the other day from Sultana, a young Afghan woman from the Taliban heartland. She had been forced to drop out of elementary school. But her home had internet, so she taught herself English, then algebra and calculus with the help of the Khan Academy, Coursera and EdX websites. Without leaving her house, she moved on to physics and string theory, wrestled with Kant and read The New York Times on the side, and began emailing a distinguished American astrophysicist, Lawrence M. Krauss.

I wrote about Sultana in 2016, and with the help of Professor Krauss and my readers, she is now studying at Arizona State University, taking graduate classes. She’s a reminder of the aphorism that talent is universal, but opportunity is not. The meaning of global progress is that such talent increasingly can flourish.

So, sure, the world is a dangerous mess; I worry in particular about the risk of a war with North Korea. But I also believe in stepping back once a year or so to take note of genuine progress — just as, a year ago, I wrote that 2016 had been the best year in the history of the world, and a year from now I hope to offer similar good news about 2018. The most important thing happening right now is not a Trump tweet, but children’s lives saved and major gains in health, education and human welfare.

Every other day this year, I promise to tear my hair and weep and scream in outrage at all the things going wrong. But today, let’s not miss what’s going right.

A version of this op-ed appears in print on January 7, 2018, on Page SR9 of the New York edition with the headline: Why 2017 Was the Best Year in History

Scourge of Opioids

CREDIT: https://www.nationalaffairs.com/publications/detail/taking-on-the-scourge-of-opioids

Taking On the Scourge of Opioids

Sally Satel

Summer 2017

On March 1, 2017, Maryland governor Larry Hogan declared a state of emergency. Heroin and fentanyl, a powerful synthetic opioid, had killed 1,468 Maryland residents in the first nine months of 2016, up 62% from the same period in 2015. Speaking at a command center of the Maryland Emergency Management Agency near Baltimore, the governor announced additional funding to strengthen law enforcement, prevention, and treatment services. “The reality is that this threat is rapidly escalating,” Hogan said.

And it is escalating across the country. Florida governor Rick Scott followed Hogan’s lead in May, declaring a public-health emergency after requests for help from local officials across the state. Arizona governor Doug Ducey did the same in June. In Ohio, some coroners have run out of space for the bodies of overdose victims and have to use a mobile, refrigerated morgue. In West Virginia, state burial funds have been exhausted burying overdose victims. Opioid orphans are lucky if their grandparents can raise them; if not, they are at the mercy of foster-care systems that are now overflowing with the children of addicted parents.

An estimated 2.5 million Americans abuse or are addicted to opioids — a class of highly addictive drugs that includes Percocet, Vicodin, OxyContin, and heroin. Most experts believe this is an undercount, and all agree that the casualty rate is unprecedented. At peak years in an earlier heroin epidemic, from 1973 to 1975, there were 1.5 fatalities per 100,000 Americans. In 2015, the rate was 10.4 per 100,000. In West Virginia, ground zero of the crisis, it was over 36 per 100,000. In raw numbers, more than 33,000 individuals died in 2015 — nearly equal to the number of deaths from car crashes and double the number of gun homicides. Meanwhile, the opioid-related fatalities continue to mount, having quadrupled since 1999.
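Those rate and count figures are mutually consistent, which is worth checking. A quick sketch (the 2015 U.S. population of roughly 321 million is my assumption for the check, not a number from the article):

```python
# Cross-check the cited 2015 figures: a mortality rate per 100,000
# against the raw death count. The population figure is an assumption.
US_POPULATION_2015 = 321_000_000

def deaths_from_rate(rate_per_100k: float, population: int) -> float:
    """Convert a per-100,000 mortality rate into an absolute count."""
    return rate_per_100k * population / 100_000

print(f"{deaths_from_rate(10.4, US_POPULATION_2015):,.0f}")
# ~33,384, consistent with "more than 33,000 individuals died in 2015"
```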

The roots of the crisis can be traced to the early 1990s when physicians began to prescribe opioid painkillers more liberally. In parallel, overdose deaths from painkillers rose until about 2011. Since then, heroin and synthetic opioids have briskly driven opioid-overdose deaths; they now account for over two-thirds of victims. Synthetic opioids, such as fentanyl, are made mainly in China, shipped to Mexico, and trafficked here. Their menace cannot be overstated.

Fentanyl is 50 times as potent as heroin and can kill instantly. People have been found dead with needles dangling from their arms, the syringe barrels still partly full of fentanyl-containing liquid. One fentanyl analog, carfentanil, is a big-game tranquilizer that’s a staggering 5,000 times more powerful than heroin. This spring, “Gray Death,” a combination of heroin, fentanyl, carfentanil, and other synthetics, has pushed the bounds of lethal chemistry even further. The death rate from synthetics has increased by more than 72% over the space of a single year, from 2014 to 2015. They have transformed an already terrible problem into a true public-health emergency.

The nation has weathered drug epidemics before, but the current affliction — a new plague for a new century, in the words of Nicholas Eberstadt — is different. Today, the addicted are not inner-city minorities, though big cities are increasingly reporting problems. Instead, they are overwhelmingly white and rural, though middle- and upper-class individuals are also affected. The jarring visual of the crisis is not an urban “gang banger” but an overdosed mom slumped in the front seat of her car in a Walmart parking lot, toddler in the back.

It’s almost impossible to survey this devastating tableau and not wonder why the nation’s response has been so slow in coming. Jonathan Caulkins, a drug-policy expert at Carnegie Mellon, offers two theories. One is geography. The prescription-opioid wave crashed down earliest in fly-over states, particularly small cities and rural areas, such as West Virginia and Kentucky, without nationally important media markets. Earlier opioid (heroin) epidemics raged in urban centers, such as New York, Baltimore, Chicago, and Los Angeles.

The second of Caulkins’s plausible explanations is the absence of violence that roiled inner cities in the early 1970s, when President Richard Nixon called drug abuse “public enemy number one.” Dealers do not engage in shooting wars or other gang-related activity. As purveyors of heroin established themselves in the U.S., Mexican bosses deliberately avoided inner cities where heroin markets were dominated by violent gangs. Thanks to a “drive-through” business model perfected by traffickers and executed by discreet runners — farm boys from western Mexico looking to make quick money — heroin can be summoned via text message or cell phone and delivered, like pizza, to homes or handed off in car-to-car transactions. Sources of painkillers are low profile as well. Typically pills are obtained (or stolen) from friends or relatives, physicians, or dealers. The “dark web,” too, is a conduit for synthetics.

It’s hard to miss, too, that this time around, the drug crisis is viewed differently. Heroin users today are widely seen as suffering from an illness. And because that illness has a pale complexion, many have asked, “Where was the compassion for black people?” A racial element cannot be denied, but there are other forces at play, namely that Americans are drug-war weary and law enforcement has incarceration fatigue. It also didn’t help that, in the 1970s, officers were only loosely woven into the fabric of the inner-city minority neighborhoods that were hardest hit. Today, in the small towns where so much of the epidemic plays out, the crisis is personal. Police chiefs, officers, and local authorities will likely have at least one relative, friend, or neighbor with an opioid problem.

If there is reason for optimism in the midst of this crisis, it is that national and local politicians and even police are placing emphasis on treatment over punishment. And, without question, the nation needs considerably more funding for treatment; Congress must step up. Yet the much-touted promise of treatment — and particularly of anti-addiction medications — as a panacea has already been proven wrong. Perhaps “we can’t arrest our way out of the problem,” as officials like to say, but nor are we treating our way out of it. This is because many users reject treatment, and, if they accept it, too many drop out. Engaging drug users in treatment has turned out to be one of the biggest challenges of the epidemic — and one that needs serious attention.

The near-term forecast for this American Carnage, as journalist Christopher Caldwell calls it, is grim. What can be done?

ROOTS OF A CRISIS

In the early 1990s, campaigns for improved treatment of pain gained ground. Analgesia for pain associated with cancer and terminal illness was relatively well accepted, but doctors were leery of medicating chronic conditions, such as joint pain, back pain, and neurological conditions, lest patients become addicted. Then in 1995 the American Pain Society recommended that pain be assessed as the “fifth vital sign” along with the standard four (blood pressure, temperature, pulse, and respiratory rate). In 2001 the influential Joint Commission on Accreditation of Healthcare Organizations established standards for pain management. These standards did not mention opioids, per se, but were interpreted by many physicians as encouraging their use.

These developments had a gradual but dramatic effect on the culture of American medicine. Soon, clinicians were giving an entire month’s worth of Percocet or Lortab to patients with only minor injuries or post-surgical pain that required only a few days of opioid analgesia. Compounding the matter, pharmaceutical companies engaged in aggressive marketing to physicians.

The culture of medical practice contributed as well. Faced with draconian time pressures, a doctor who suspected that his patient was taking too many painkillers rarely had time to talk with him about it. Other time-consuming pain treatments, such as physical therapy or behavioral strategies, were, and remain, less likely to be covered by insurers. Abbreviated visits meant shortcuts, like a quick refill that may not have been warranted, while the need for addiction treatment was overlooked. In addition, clinicians were, and still are, held hostage to ubiquitous “patient-satisfaction surveys.” A poor grade mattered because Medicare and Medicaid rely on these assessments to help determine the amount of reimbursement for care. Clearly, too many incentives pushed toward prescribing painkillers, even when it went against a doctor’s better judgment.

The chief risk of liberal prescribing was not so much that the patient would become addicted — though it happens occasionally — but rather that excess medication fed the rivers of pills that were coursing through many neighborhoods. And as more painkillers began circulating, almost all of them prescribed by physicians, more opportunities arose for non-patients to obtain them, abuse them, and die. OxyContin formed a particularly notorious tributary. Available since 1996, this slow-release form of oxycodone was designed to last up to 12 hours (about six to eight hours longer than immediate-release preparations of oxycodone, such as Percocet). A sustained blood level was meant to be a therapeutic advantage for patients with unremitting pain. To achieve long action, each OxyContin tablet was loaded with a large amount of oxycodone.

Packing a large dose into a single pill presented a major unintended consequence. When it was crushed and snorted or dissolved in water and injected, OxyContin gave a clean, predictable, and enjoyable high. By 2000, reports of abuse of OxyContin began to surface in the Rust Belt — a region rife with injured coal miners who were readily prescribed OxyContin, or, as it came to be called, “hillbilly heroin.” Ohio along with Florida became the “pill mill” capitals of the nation. These mills were advertised as “pain clinics,” but were really cash-only businesses set up to sell painkillers in high volume. The mills employed shady physicians who were licensed to prescribe but knew they weren’t treating authentic patients.

Around 2010 to 2011, law enforcement began cracking down on pill mills. In 2010, OxyContin’s maker, Purdue Pharma, reformulated the pill to make it much harder to crush. In parallel, physicians began to re-examine their prescribing practices and to consider non-opioid options for chronic-pain management. More states created prescription registries so that pharmacists and doctors could detect patients who “doctor shopped” for painkillers and even forged prescriptions. (Today, all states except Missouri have such a registry.) Last year, the American Medical Association recommended that pain be removed as a “fifth vital sign” in professional medical standards.

Controlling the sources of prescription pills was completely rational. Sadly, however, it helped set the stage for a new dimension of the opioid epidemic: heroin and synthetic opioids. Heroin — cheaper and more abundant than painkillers — had flowed into the western U.S. since at least the 1990s, but trafficking east of the Mississippi and into the Rust Belt reportedly began to accelerate around the mid-2000s, a transformative episode in the history of domestic drug problems detailed in Sam Quinones’s superb book Dreamland.

The timing was darkly auspicious. As prescription painkillers became harder to get and more expensive, thanks to alterations of the OxyContin tablet, to law-enforcement efforts, and to growing physician enlightenment, a pool of individuals already primed by their experience with prescription opioids moved on to low-cost, relatively pure, and accessible heroin. Indeed, between 2008 and 2010, about three-fourths of people who had used heroin in the past year reported non-medical use of painkillers — likely obtained outside the health-care system — before initiating heroin use.

The progression from pills to heroin was abetted by the nature of addiction itself. As users became increasingly tolerant to painkillers, they needed larger quantities of opioids or more efficient ways to use them in order to achieve the same effect. Moving from oral consumption to injection allowed this. Once a person is already injecting pills, moving to heroin, despite its stigma, doesn’t seem that big a step. The march to heroin is not inexorable, of course. Yet in economically and socially depleted environments where drug use is normalized, heroin is abundant, and treatment is scarce, widespread addiction seems almost inevitable.

The last five years or so have witnessed a massive influx of powder heroin to major cities such as New York, Detroit, and Chicago. From there, traffickers direct shipments to other urban areas, and these supplies are, in turn, distributed further to rural and suburban areas. It is the powdered form of heroin that is laced with synthetics, such as fentanyl. Most victims of synthetic opioids don’t even know they are taking them. Drug traffickers mix the fentanyl with heroin or press it into pill form that they sell as OxyContin.

Yet, there are reports of addicts now knowingly seeking fentanyl as their tolerance to heroin has grown. Whereas heroin requires poppies, which take time to cultivate, synthetics can be made in a lab, so the supply chain can be downsized. And because the synthetics are so strong, small volumes can be trafficked more efficiently and more profitably. What’s more, laboratories can easily stay one step ahead of the Drug Enforcement Administration by modifying fentanyl into analogs that are more potent, less detectable, or both. Synthetics are also far more deadly: In some regions of the country, roughly two-thirds of deaths from opioids can now be traced to heroin, including heroin that medical examiners either suspect or are certain was laced with fentanyl.

THE BASICS

Terminology is important in discussions about drug use. A 2016 Surgeon General report on addiction, “Facing Addiction in America,” defines “misuse” of a substance as consumption that “causes harm to the user and/or to those around them.” Elsewhere, however, the term has been used to refer to consumption for a purpose not consistent with medical or legal guidelines. Thus, misuse would apply equally to the person who takes an extra pill now and then from his own prescribed supply of Percocet to reduce stress as well as to the person who buys it from a dealer and gets high several times a week. The term “abuse” refers to a consistent pattern of use causing harm, but “misuse,” with its protean definitions, has unhelpfully taken its place in many discussions of the current crisis. In the Surgeon General report, the clinical term “substance use disorder” refers to functionally significant impairment caused by substance use. Finally, “addiction,” while not considered a clinical term, denotes a severe form of substance-use disorder — in other words, compulsive use of a substance with difficulty stopping despite negative consequences.

Much of the conventional wisdom surrounding the opioid crisis holds that virtually anyone is at risk for opioid abuse or addiction — say, the average dental patient who receives some Vicodin for a root canal. This is inaccurate, but unsurprising. Exaggerating risk is a common strategy in public-health messaging: The idea is to garner attention and funding by democratizing affliction and universalizing vulnerability. But this kind of glossing is misleading at best, counterproductive at worst. To prevent and ameliorate problems, we need to know who is truly at risk to target resources where they are most needed.

In truth, the vast majority of people prescribed medication for pain do not misuse it, even those given high doses. A new study in the Annals of Surgery, for example, found that almost three-fourths of all opioid painkillers prescribed by surgeons for five common outpatient procedures go unused. In 2014, 81 million people received at least one prescription for an opioid pain reliever, according to a study in the American Journal of Preventive Medicine; yet during the same year, the National Survey on Drug Use and Health reported that only 1.9 million people, approximately 2%, met the criteria for prescription pain-reliever abuse or dependence (a technical term denoting addiction). Those who abuse their prescription opioids are patients who have been prescribed them for over six months and tend to suffer from concomitant psychiatric conditions, usually a mood or anxiety disorder, or have had prior problems with alcohol or drugs.
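The arithmetic behind that “approximately 2%” is easy to reproduce. A sketch that simply restates the article’s own comparison of the two survey figures:

```python
# Reproduce the quoted share: people meeting criteria for prescription
# pain-reliever abuse or dependence, as a fraction of people who received
# at least one opioid prescription in 2014. Treating the first group as a
# share of the second mirrors the article's own comparison.
received_prescription = 81_000_000
abuse_or_dependence = 1_900_000

print(f"{abuse_or_dependence / received_prescription:.1%}")
# 2.3%, i.e. "approximately 2%"
```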

Notably, the majority of people who develop problems with painkillers are not individuals for whom they have been legitimately prescribed — nor are opioids the first drug they have misused. Such non-patients procure their pills from friends or family, often helping themselves to the amply stocked medicine chests of unsuspecting relatives suffering from cancer or chronic pain. They may scam doctors, forge prescriptions, or doctor shop. The heaviest users are apt to rely on dealers. Some of these individuals make the transition to heroin, but it is a small fraction. (Still, the death toll is striking given the lethality of synthetic opioids.) One study from the Substance Abuse and Mental Health Services Administration found that less than 5% of pill misusers had moved to heroin within five years of first beginning misuse. These painkiller-to-heroin migrators, according to analyses by the Centers for Disease Control and Prevention, also tend to be frequent users of multiple substances, such as benzodiazepines, alcohol, and cocaine. The transition from these other substances to heroin may represent a natural progression for such individuals.

Thus, factors beyond physical pain are most responsible for making individuals vulnerable to problems with opioids. Princeton economists Anne Case and Angus Deaton paint a dreary portrait of the social determinants of addiction in their work on premature demise across the nation. Beginning in the late 1990s, deaths due to alcoholism-related liver disease, suicide, and opioid overdoses began to climb nationwide. These “deaths of despair,” as Case and Deaton call them, strike less-educated whites, both men and women, between the ages of 45 and 54. While the life expectancy of men and women with a college degree continues to grow, it is actually decreasing for their less-educated counterparts. The problems start with poor job opportunities for those without college degrees. Absent employment, people come unmoored. Families unravel, domestic violence escalates, marriages dissolve, parents are alienated from their children, and their children from them.

Opioids are a salve for these communal wounds. Work by Alex Hollingsworth and colleagues found that residents of locales most severely pummeled by the economic downturn were more susceptible to opioids. As county unemployment rates increased by one percentage point, the opioid death rate (per 100,000) rose by almost 4%, and the emergency-room visit rate for opioid overdoses (per 100,000) increased by 7%. It’s no coincidence that many of the states won by Donald Trump — West Virginia, Kentucky, and Ohio, for example — had the highest rates of fatal drug overdoses in 2015.
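To make the Hollingsworth et al. estimates concrete, here is an illustrative sketch. The baseline rates and the size of the unemployment shock are hypothetical, and applying the quoted percentage effects multiplicatively per point of unemployment is my own simplifying assumption, not the paper's specification:

```python
# Illustrative use of the quoted associations: each 1-point rise in county
# unemployment ~ +4% in the opioid death rate and +7% in overdose ER visits.
# Baselines and the shock are hypothetical; compounding per point is a
# simplifying assumption.
def project(base_rate_per_100k: float, pct_effect: float,
            unemployment_pts: float) -> float:
    return base_rate_per_100k * (1 + pct_effect) ** unemployment_pts

base_deaths, base_er = 10.0, 50.0   # hypothetical per-100k baselines
shock_pts = 2.0                     # hypothetical 2-point unemployment rise

print(f"deaths/100k: {project(base_deaths, 0.04, shock_pts):.1f}")   # 10.8
print(f"ER visits/100k: {project(base_er, 0.07, shock_pts):.1f}")    # 57.2
```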

Of all prime-working-age male labor-force dropouts — an army totaling roughly 7 million men — nearly half take pain medication on a daily basis. “In our mind’s eye,” writes Nicholas Eberstadt in a recent issue of Commentary, “we can now picture many millions of un-working men in the prime of life, out of work and not looking for jobs, sitting in front of screens — stoned.” Medicaid, it turns out, financed many of those stoned hours. Of the entire non-working prime-age white male population in 2013, notes Eberstadt, 57% were reportedly collecting disability benefits from one or more government disability programs. Medicaid enabled them to see a doctor and fill their prescriptions for a fraction of the street value: A single 10-milligram Percocet could go for $5 to $10, the co-pay for an entire bottle.

When it comes to beleaguered communities, one has to wonder how much can be done for people whose reserves of optimism and purposefulness have run so low. The challenge is formidable, to be sure, but breaking the cycle of self-destruction through treatment is a critical first step.

TREATMENT OPTIONS

Perhaps surprisingly, the majority of people who become addicted to any drug, including heroin, quit on their own. But for those who cannot stop using by themselves, treatment is critical, and individuals with multiple overdoses and relapses typically need professional help. Experts recommend at least one year of counseling or anti-addiction medication, and often both. General consensus holds that a standard week of “detoxification” is basically useless, if not dangerous — not only is the person extremely likely to resume use, he is at special risk because he will have lost his tolerance and may easily overdose.

Nor is a standard 28-day stay in a residential facility particularly helpful as a sole intervention. In residential settings many patients acquire a false sense of security about their ability to resist drugs. They are, after all, insulated from the stresses and conditioned cues that routinely provoke drug cravings at home and in other familiar environments. This is why residential care must be followed by supervised transition to treatment in an outpatient setting: Users must continue to learn how to cope without drugs in the social and physical milieus they inhabit every day.

Fortunately, medical professionals are armed with a number of good anti-addiction medications to help patients addicted to opioids. The classic treatment is methadone, first introduced as a maintenance therapy in the 1960s. A newer medication approved by the FDA in 2002 for the treatment of opioid addiction is buprenorphine, or “bupe.” It comes, most popularly, as a strip that dissolves under the tongue. The suggested length of treatment with bupe is a minimum of one or two years. Like methadone, bupe is an opioid. Thus, it can prevent withdrawal, blunt cravings, and produce euphoria. Unlike methadone, however, bupe’s chemical structure makes it much less dangerous if taken in excess, thereby prompting Congress to enact a law, the Drug Addiction Treatment Act of 2000, which allows physicians to prescribe it from their offices. Methadone, by contrast, can only be administered in clinics tightly regulated by the Drug Enforcement Administration and the Substance Abuse and Mental Health Services Administration. (I work in such a clinic.)

In addition to methadone or buprenorphine, which have abuse potential of their own, there is extended-release naltrexone. Administered as a monthly injection, naltrexone is an opioid blocker. A person who is “blocked” normally experiences no effect upon taking an opioid drug. Because naltrexone has no abuse potential (hence no street value), it is favored by the criminal-justice system. Jails and prisons are increasingly offering inmates an injection of naltrexone; one dose is given five weeks before release and another during the week of release, with plans for ongoing treatment as an outpatient. Such protection is warranted given the increased risk for death, particularly from drug-related causes, in the early post-release period. For example, one study of inmates released from the Washington State Department of Corrections found a 10-fold greater risk of overdose death within the first two weeks after discharge compared with non-incarcerated state residents of the same age, sex, and race.

Regulatory State and Redistributive State

Will Wilkinson is a great writer, and spells out here two critical aspects of government:

The regulatory state is the aspect of government that protects the public against abuses of private players, protects property rights, and creates well-defined “corridors” that streamline the flows of capitalism and make it work best. It always gets a bad rap, and shouldn’t. The rap is due to the difficulty of enforcing regulations on so many aspects of life.

The redistributive state is the aspect of government that shifts income and wealth from certain players in society to other players. The presumption is always one of fairness, whereby society deems it in the interests of all that certain actors, e.g. veterans or seniors, get preferential distributions of some kind.

He goes on to make a great point. These two states are more independent of one another than might at first be apparent. So it is possible to dislike one and like another.

Personally, I like both. I think both are critical to a well-oiled society with capitalism and property rights as central tenets. My beef will always be with issues of efficiency and effectiveness.

On redistribution, efficiency experts can answer this question: can we dispense with the monthly paperwork and simply direct-deposit funds? Medicare now works this way, and the efficiency gains are remarkable.

And on regulation, efficiency experts can answer this question: can private actors certify their compliance with regulation, and then the public actors simply audit from time to time? Many government programs work this way, to the benefit of all.

On redistribution, effectiveness experts can answer this question: Is the homeless population minimal? Are veterans getting what they need? Are seniors satisfied with how government treats them?

On regulation, effectiveness experts can answer this question: Is the air clean? Is the water clean? Is the mortgage market making good loans that help people buy houses? Are complaints about fraudulent consumer practices low?

CREDIT: VOX Article on Economic Freedom by Will Wilkinson

By Will Wilkinson
Sep 1, 2016

American exceptionalism has been propelled by exceptionally free markets, so it’s tempting to think the United States has a freer economy than Western European countries — particularly those soft-socialist Scandinavian social democracies with punishing tax burdens and lavish, even coddling, welfare states. As late as 2000, the American economy was indeed the freest in the West. But something strange has happened since: Economic freedom in the United States has dropped at an alarming rate.

Meanwhile, a number of big-government welfare states have become at least as robustly capitalist as the United States, and maybe more so. Why? Because big welfare states needed to become better capitalists to afford their socialism. This counterintuitive, even paradoxical dynamic suggests a tantalizing hypothesis: America’s shabby, unpopular safety net is at least partly responsible for capitalism’s flagging fortunes in the Land of the Free. Could it be that Americans aren’t socialist enough to want capitalism to work? It makes more sense than you might think.

America’s falling economic freedom

From 1970 to 2000, the American economy was the freest in the West, lagging behind only Asia’s laissez-faire city-states, Hong Kong and Singapore. The average economic freedom rating of the wealthy developed member countries of the Organization for Economic Cooperation and Development (OECD) has slipped a bit since the turn of the millennium, but not as fast as America’s.
“Nowhere has the reversal of the rising trend in the economic freedom been more evident than in the United States,” write the authors of the Fraser Institute’s 2015 Economic Freedom of the World report, noting that “the decline in economic freedom in the United States has been more than three times greater than the average decline found in the OECD.”

[Figure: The economic freedom of selected countries, 1999 to 2016. Source: Heritage Foundation 2016 Index of Economic Freedom]

The Heritage Foundation and the Canadian Fraser Institute each produce an annual index of economic freedom, scoring the world’s countries on four or five main areas, each of which breaks down into a number of subcomponents. The main rubrics include the size of government and tax burdens; protection of property rights and the soundness of the legal system; monetary stability; openness to global trade; and levels of regulation of business, labor, and capital markets. Scores on these areas and subareas are combined to generate an overall economic freedom score.
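To make the mechanics concrete, here is a minimal Python sketch of how a composite score of this kind can be computed. The area names, the values, and the equal weighting are illustrative assumptions; the actual Heritage and Fraser methodologies define their own subcomponents and weights.

def overall_score(area_scores):
    """Average 0-10 area scores into a single overall score (simple, unweighted)."""
    return sum(area_scores.values()) / len(area_scores)

# Hypothetical area scores for one country
areas = {
    "size_of_government": 7.1,
    "legal_system_and_property_rights": 7.4,
    "sound_money": 9.4,
    "freedom_to_trade": 7.6,
    "regulation": 8.0,
}

print(round(overall_score(areas), 2))  # 7.9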

The rankings reflect right-leaning ideas about what it means for people and economies to be free. Strong labor unions and inequality-reducing redistribution are more likely to hurt than help a country’s score.

So why should you care about some right-wing think tank’s ideologically loaded measure of economic freedom? Because it matters. More economic freedom, so measured, predicts higher rates of economic growth, and higher levels of wealth predict happier, healthier, longer lives. Higher levels of economic freedom are also linked with greater political liberty and civil rights, as well as higher scores on the left-leaning Social Progress Index, which is based on indicators of social justice and human well-being, from nutrition and medical care to tolerance and inclusion.

The authors of the Fraser report estimate that the drop in American economic freedom “could cut the US historic growth rate of 3 percent by half.” The difference between a 1.5 percent and 3 percent growth rate is roughly the difference between the output of the economy tripling rather than octupling in a lifetime. That’s a huge deal.
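The tripling-versus-octupling claim is easy to verify with compound growth; the 72-year “lifetime” horizon in this sketch is my assumption, not the report’s.

# Compound 1.5% vs. 3% annual growth over an assumed 72-year lifetime.
for rate in (0.015, 0.03):
    factor = (1 + rate) ** 72
    print(f"{rate:.1%} for 72 years -> output multiplies by {factor:.1f}")
# 1.5% -> about 2.9x (roughly tripling); 3.0% -> about 8.4x (roughly octupling)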
Over the same period, the economic freedom scores of Canada and Denmark have improved a lot. According to conservative and libertarian definitions of economic freedom, Canadians, who enjoy a socialized health care system, now have more economic freedom than Americans, and Danes, who have one of the world’s most generous welfare states, have just as much.
What the hell’s going on?

The redistributive state and the regulatory state are separable

To make headway on this question, it is crucial to clearly distinguish two conceptually and empirically separable aspects of “big government” — the regulatory state and the redistributive state.

The redistributive state moves money around through taxes and transfer programs. The regulatory state places all sorts of restrictions and requirements on economic life — some necessary, some not. Most Democrats and Republicans assume that lots of regulation and lots of redistribution go hand in hand, so it’s easy to miss that you can have one without the other, and that the relationship between the two is uneasy at best. But you can’t really understand the politics behind America’s declining economic freedom if you fail to distinguish between the regulatory and fiscal aspects of economic policy.

Standard “supply-side” Republican economic policy thinking says that cuts in tax rates and government spending will unleash latent productive potential in the economy, boosting rates of growth. And indeed, when taxes and government spending are very high, cuts produce gains by returning resources to the private sector. But it’s important to see that questions about government control versus private sector control of economic resources are categorically different from questions about the freedom of markets.

Free markets require the presence of good regulation, which defines and protects property rights and facilitates market processes through the consistent application of clear law, and an absence of bad regulation, which interferes with productive economic activity. A government can tax and spend very little — yet still stomp all over markets. Conversely, a government can withdraw lots of money from the economy through taxes, but still totally nail the optimal balance of good and bad regulation.

Whether a country’s market economy is free — open, competitive, and relatively unmolested by government — is more a question of regulation than a question of taxation and redistribution. It’s not primarily about how “big” its government is. Republicans generally do support a less meddlesome regulatory approach, but when they’re in power they tend to be much more persistent about cutting taxes and social welfare spending than they are about reducing economically harmful regulatory frictions.

If you’re as worried about America’s declining economic freedom as I am, this is a serious problem. In recent years, the effect of cutting taxes and spending has been to distribute income upward and leave the least well-off more vulnerable to bad luck, globalization, “disruptive innovation,” and the vagaries of business cycles.
If spending cuts came out of the military’s titanic budget, that would help. But that’s rarely what happens. The least connected constituencies, not the most expensive ones, are the first to get dinged by budget hawks. And further tax cuts are unlikely to boost growth. Lower taxes make government seem cheaper than it really is, which leads voters to ask for more, not less, government spending, driving up the deficit. Increasing the portion of GDP devoted to paying interest on government debt isn’t a growth-enhancing way to return resources to the private sector.

Meanwhile, wages have been flat or declining for millions of Americans for decades. People increasingly believe the economy is “rigged” in favor of the rich. As a sense of economic insecurity mounts, people anxiously cast about for answers.

Easing the grip of the regulatory state is a good answer. But in the United States, its close association with “free market” supply-side efforts to produce growth by slashing the redistributive state has made it an unattractive answer, even with Republican voters. That’s at least part of the reason the GOP wound up nominating a candidate who, in addition to promising not to cut entitlement spending, openly favors protectionist trade policy, giant infrastructure projects, and huge subsidies to domestic manufacturing and energy production. Donald Trump’s economic policy is the worst of all possible worlds.

This is doubly ironic, and doubly depressing, once you recognize that the sort of big redistributive state supply-siders fight is not necessarily the enemy of economic freedom. On the contrary, high levels of social welfare spending can actually drive political demand for growth-promoting reform of the regulatory state. That’s the lesson of Canada and Denmark’s march up those free economy rankings.

The welfare state isn’t a free lunch, but it is a cheap date

Economic theory tells you that big government ought to hurt economic growth. High levels of taxation reduce the incentive to work, and redistribution is a “leaky bucket”: Moving money around always ends up wasting some of it. Moreover, a dollar spent in the private sector generally has a more beneficial effect on the economy than a dollar spent by the government. Add it all up, and big governments that tax heavily and spend freely on social transfers ought to hurt economic growth.

That matters from a moral perspective — a lot. Other things equal, people are better off on just about every measure of well-being when they’re wealthier. Relative economic equality is nice, but it’s not so nice when relatively equal shares mean smaller shares for everyone. Just as small differences in the rate at which you put money into a savings account can lead to vast differences in your account balance 40 years down the road, thanks to the compounding nature of interest, a small reduction in the rate of economic growth can leave a society’s least well-off people much poorer in absolute terms than they might have been.

Here’s the puzzle. As a general rule, when nations grow wealthier, the public demands more and better government services, increasing government spending as a percentage of GDP. (This is known as “Wagner’s law.”) According to standard growth theory, ongoing increase in the size of government ought to exert downward pressure on rates of growth. But we don’t see the expected effect in the data. Long-term national growth trends are amazingly stable.

And when we look at the family of advanced, liberal democratic countries, countries that spend a smaller portion of national income on social transfer programs gain very little in terms of growth relative to countries that spend much more lavishly on social programs. Peter Lindert, an economist at the University of California Davis, calls this the “free lunch paradox.”

Lindert’s label for the puzzle is somewhat misleading, because big expensive welfare states are, obviously, expensive. And they do come at the expense of some growth. Standard economic theory isn’t completely wrong. It’s just that democracies that have embraced generous social spending have found ways to afford it by minimizing and offsetting its anti-growth effects.

If you’re careful with the numbers, you do in fact find a small negative effect of social welfare spending on growth. Still, according to economic theory, lunch ought to be really expensive. And it’s not.

There are three main reasons big welfare states don’t hurt growth as much as you might think. First, as Lindert has emphasized, they tend to have efficient consumption-based tax systems that minimize market distortions.
When you tax something, people tend to avoid it. If you tax income, as the United States does, people work a little less, which means that certain economic gains never materialize, leaving everyone a little poorer. Taxing consumption, as many of our European peers do, is less likely to discourage productive moneymaking, though it does discourage spending. But that’s not so bad. Less consumption means more savings, and savings puts the capital in capitalism, financing the economic activity that creates growth.

There are other advantages, too. Consumption taxes are usually structured as national sales taxes (or VATs, value-added taxes), which are paid in small amounts on a continuous basis, are extremely cheap to collect (and hard to avoid), while being less in-your-face than income taxes, which further mitigates the counterproductively demoralizing aspect of taxation.
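A small sketch shows why a VAT is collected “in small amounts on a continuous basis” and is hard to avoid: each firm remits tax only on the value it adds, yet the remittances sum to the rate times the final price. The supply chain, the prices, and the 20 percent rate here are all hypothetical.

RATE = 0.20  # hypothetical VAT rate
# (firm, price paid for inputs, price it sells for)
chain = [("farmer", 0, 100), ("miller", 100, 250), ("baker", 250, 400)]

total_vat = 0.0
for firm, bought_for, sold_for in chain:
    vat_due = (sold_for - bought_for) * RATE  # tax only on value added at this stage
    total_vat += vat_due
    print(f"{firm} remits {vat_due:.2f}")
print(f"total VAT = {total_vat:.2f}")  # 80.00, i.e. 20% of the final 400 price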

Big welfare states are also more likely to tax addictive stuff, which people tend to buy whatever the price, as well as unhealthy and polluting stuff. That harnesses otherwise fiscally self-defeating tax-avoiding behavior to minimize the costs of health care and environmental damage.
Second, some transfer programs have relatively direct pro-growth effects. Workers are most productive in jobs well-matched to their training and experience, for example, and unemployment benefits offer displaced workers time to find a good, productivity-promoting fit. There’s also some evidence that health care benefits that aren’t linked to employment can promote economic risk-taking and entrepreneurship.

Fans of open-handed redistributive programs tend to oversell this kind of upside for growth, but there really is some. Moreover, it makes sense that the countries most devoted to these programs would fine-tune them over time to amplify their positive-sum aspects.

This is why you can’t assume all government spending affects growth in the same way. The composition of spending — as well as cuts to spending — matters. Cuts to efficiency-enhancing spending can hurt growth as much as they help. And they can really hurt if they increase economic anxiety and generate demand for Trump-like economic policy.

Third, there are lots of regulatory state policies that hurt growth by, say, impeding healthy competition or closing off foreign trade, and if you like high levels of redistribution better than you like those policies, you’ll eventually consider getting rid of some of them. If you do get rid of them, your economic freedom score from the Heritage Foundation and the Fraser Institute goes up.
This sort of compensatory economic liberalization is how big welfare states can indirectly promote growth, and more or less explains why countries like Canada, Denmark, and Sweden have become more robustly capitalist over the past several decades. They needed to be better capitalists to afford their socialism. And it works pretty well.

If you bundle together fiscal efficiency, some offsetting pro-growth effects, and compensatory liberalization, you can wind up with a very big government, with very high levels of social welfare spending and very little negative consequences for growth. Call it “big-government laissez-faire.”

The missing political will for genuine pro-growth reform

Enthusiasts for small government have a ready reply. Fine, they’ll say. Big government can work through policies that offset its drag on growth. But why not a less intrusive regulatory state and a smaller redistributive state: small-government laissez-faire? After all, this is the formula in Hong Kong and Singapore, which rank No. 1 and No. 2 in economic freedom. Clearly that’s our best bet for prosperity-promoting economic freedom.

But this argument ignores two things. First, Hong Kong and Singapore are authoritarian technocracies, not liberal democracies, which suggests (though doesn’t prove) that their special recipe requires nondemocratic government to work. When you bring democracy into the picture, the most important political lesson of the Canadian and Danish rise in economic freedom becomes clear: When democratically popular welfare programs become politically nonnegotiable fixed points, they can come to exert intense pressure on fiscal and economic policy to make them sustainable.

Political demand for economic liberalization has to come from somewhere. But there’s generally very little organic, popular democratic appetite for capitalist creative destruction. Constant “disruption” is scary, the way markets generate wealth and well-being is hard to comprehend, and many of us find competitive profit-seeking intuitively objectionable.

It’s not that Danes and Swedes and Canadians ever loved their “neoliberal” market reforms. They fought bitterly about them and have rolled some of them back. But when their big-government welfare states were creaking under their own weight, enough of the public was willing, thanks to the sense of economic security provided by the welfare state, to listen to experts who warned that the redistributive state would become unsustainable without the downsizing of the regulatory state.

A sound and generous system of social insurance offers a certain peace of mind that makes the very real risks of increased economic dynamism seem tolerable to the democratic public, opening up the political possibility of stabilizing a big-government welfare state with growth-promoting economic liberalization.

This sense of baseline economic security is precisely what many millions of Americans lack.

Learning the lesson of Donald Trump
America’s declining economic freedom is a profoundly serious problem. It’s already putting the brakes on dynamism and growth, leaving millions of Americans with a bitter sense of panic about their prospects. They demand answers. But ordinary voters aren’t policy wonks. When gripped by economic anxiety, they turn to demagogues who promise measures that make intuitive economic sense, but which actually make economic problems worse.

We may dodge a Trump presidency this time, but if we fail to fix the feedback loop between declining economic freedom and an increasingly acute sense of economic anxiety, we risk plunging the world’s biggest economy and the linchpin of global stability into a political and economic death spiral. It’s a ridiculous understatement to say that it’s important that this doesn’t happen.

Market-loving Republicans and libertarians need to stare hard at a framed picture of Donald Trump and reflect on the idea that a stale economic agenda focused on cutting taxes and slashing government spending is unlikely to deliver further gains. It is instead likely to continue to backfire by exacerbating economic anxiety and the public’s sense that the system is rigged.

If you gaze at the Donald long enough, his fascist lips will whisper “thank you,” and explain that the close but confusing identification of supply-side fiscal orthodoxy with “free market” economic policy helps authoritarian populists like him — but it hurts the political prospects of regulatory state reforms that would actually make American markets freer.

Will Wilkinson is the vice president for policy at the Niskanen Center.

Property Rights and Modern Conservatism



In this excellent essay by one of my favorite conservative writers, Will Wilkinson takes Congress to task for their ridiculous botched-job-with-a-botched-process of passing Tax Cut legislation in 2017.

But I am blogging because of his other points.

In the article, he spells out some tenets of modern conservatism that bear repeating, namely:

– property rights (and the Murray Rothbard extreme positions of absolute property rights)
– economic freedom (“…if we tax you at 100 percent, then you’ve got 0 percent liberty…If we tax you at 50 percent, you are half-slave, half-free”)
– libertarianism (“The key is the libertarian idea, woven into the right’s ideological DNA, that redistribution is the exploitation of the “makers” by the “takers.”)
– legally enforceable rights
– moral traditionalism

Modern conservatism is a “fusion” of these ideas. They have an intellectual footing that is impressive.

But Will points out where they are flawed. The flaws are most apparent in the idea that the hordes want to use democratic institutions to plunder the wealth of the elites. This is a notion from the days when communism was public enemy #1. He points out that the opposite is actually the truth.

“Far from endangering property rights by facilitating redistribution, inclusive democratic institutions limit the “organized banditry” of the elite-dominated state by bringing everyone inside the charmed circle of legally enforced rights.”

Ironically, the new Tax Cut legislation is an example of reverse plunder: where the wealthy get the big, permanent gains and the rest get appeased with small cuts that expire.

So, we are very far from the fears of communism. We are instead amidst a taking by the haves from the have-nots.

====================
Credit: New York Times 12/20/17 Op-Ed by Will Wilkinson

The Tax Bill Shows the G.O.P.’s Contempt for Democracy
By WILL WILKINSON
DEC. 20, 2017
The Republican Tax Cuts and Jobs Act is notably generous to corporations, high earners, inheritors of large estates and the owners of private jets. Taken as a whole, the bill will add about $1.4 trillion to the deficit in the next decade and trigger automatic cuts to Medicare and other safety net programs unless Congress steps in to stop them.

To most observers on the left, the Republican tax bill looks like sheer mercenary cupidity. “This is a brazen expression of money power,” Jesse Jackson wrote in The Chicago Tribune, “an example of American plutocracy — a government of the wealthy, by the wealthy, for the wealthy.”

Mr. Jackson is right to worry about the wealthy lording it over the rest of us, but the open contempt for democracy displayed in the Senate’s slapdash rush to pass the tax bill ought to trouble us as much as, if not more than, what’s in it.

In its great haste, the “world’s greatest deliberative body” held no hearings or debate on tax reform. The Senate’s Republicans made sloppy math mistakes, crossed out and rewrote whole sections of the bill by hand at the 11th hour and forced a vote on it before anyone could conceivably read it.

The link between the heedlessly negligent style and anti-redistributive substance of recent Republican lawmaking is easy to overlook. The key is the libertarian idea, woven into the right’s ideological DNA, that redistribution is the exploitation of the “makers” by the “takers.” It immediately follows that democracy, which enables and legitimizes this exploitation, is itself an engine of injustice. As the novelist Ayn Rand put it, under democracy “one’s work, one’s property, one’s mind, and one’s life are at the mercy of any gang that may muster the vote of a majority.”

On the campaign trail in 2015, Senator Rand Paul, Republican of Kentucky, conceded that government is a “necessary evil” requiring some tax revenue. “But if we tax you at 100 percent, then you’ve got 0 percent liberty,” Mr. Paul continued. “If we tax you at 50 percent, you are half-slave, half-free.” The speaker of the House, Paul Ryan, shares Mr. Paul’s sense of the injustice of redistribution. He’s also a big fan of Ayn Rand. “I give out ‘Atlas Shrugged’ as Christmas presents, and I make all my interns read it,” Mr. Ryan has said. If the big-spending, democratic welfare state is really a system of part-time slavery, as Ayn Rand and Senator Paul contend, then beating it back is a moral imperative of the first order.

But the clock is ticking. Looking ahead to a potentially paralyzing presidential scandal, midterm blood bath or both, congressional Republicans are in a mad dash to emancipate us from the welfare state. As they see it, the redistributive upshot of democracy is responsible for the big-government mess they’re trying to bail us out of, so they’re not about to be tender with the niceties of democratic deliberation and regular parliamentary order.

The idea that there is an inherent conflict between democracy and the integrity of property rights is as old as democracy itself. Because the poor vastly outnumber the propertied rich — so the argument goes — if allowed to vote, the poor might gang up at the ballot box to wipe out the wealthy.

In the 20th century, and in particular after World War II, with voting rights and Soviet Communism on the march, the risk that wealthy democracies might redistribute their way to serfdom had never seemed more real. Radical libertarian thinkers like Rand and Murray Rothbard (who would be a muse to both Charles Koch and Ron Paul) responded with a theory of absolute property rights that morally criminalized taxation and narrowed the scope of legitimate government action and democratic discretion nearly to nothing. “What is the State anyway but organized banditry?” Rothbard asked. “What is taxation but theft on a gigantic, unchecked scale?”

Mainstream conservatives, like William F. Buckley, banished radical libertarians to the fringes of the conservative movement to mingle with the other unclubbables. Still, the so-called fusionist synthesis of libertarianism and moral traditionalism became the ideological core of modern conservatism. For hawkish Cold Warriors, libertarianism’s glorification of capitalism and vilification of redistribution was useful for immunizing American political culture against viral socialism. Moral traditionalists, struggling to hold ground against rising mass movements for racial and gender equality, found much to like in libertarianism’s principled skepticism of democracy. “If you analyze it,” Ronald Reagan said, “I believe the very heart and soul of conservatism is libertarianism.”

The hostility to redistributive democracy at the ideological center of the American right has made standard policies of successful modern welfare states, happily embraced by Europe’s conservative parties, seem beyond the moral pale for many Republicans. The outsize stakes seem to justify dubious tactics — bunking down with racists, aggressive gerrymandering, inventing paper-thin pretexts for voting rules that disproportionately hurt Democrats — to prevent majorities from voting themselves a bigger slice of the pie.

But the idea that there is an inherent tension between democracy and the integrity of property rights is wildly misguided. The liberal-democratic state is a relatively recent historical innovation, and our best accounts of the transition from autocracy to democracy point to the role of democratic political inclusion in protecting property rights.

As Daron Acemoglu of M.I.T. and James Robinson of Harvard show in “Why Nations Fail,” ruling elites in pre-democratic states arranged political and economic institutions to extract labor and property from the lower orders. That is to say, the system was set up to make it easy for elites to seize what ought to have been other people’s stuff.

In “Inequality and Democratization,” the political scientists Ben W. Ansell and David J. Samuels show that the demand for political inclusion generally isn’t driven by a desire to use the existing institutions to plunder the elites. It’s driven by a desire to keep the elites from continuing to plunder them.

It’s easy to say that everyone ought to have certain rights. Democracy is how we come to get and protect them. Far from endangering property rights by facilitating redistribution, inclusive democratic institutions limit the “organized banditry” of the elite-dominated state by bringing everyone inside the charmed circle of legally enforced rights.

Democracy is fundamentally about protecting the middle and lower classes from redistribution by establishing the equality of basic rights that makes it possible for everyone to be a capitalist. Democracy doesn’t strangle the golden goose of free enterprise through redistributive taxation; it fattens the goose by releasing the talent, ingenuity and effort of otherwise abused and exploited people.

At a time when America’s faith in democracy is flagging, the Republicans elected to treat the United States Senate, and the citizens it represents, with all the respect college guys accord public restrooms. It’s easier to reverse a bad piece of legislation than the bad reputation of our representative institutions, which is why the way the tax bill was passed is probably worse than what’s in it. Ultimately, it’s the integrity of democratic institutions and the rule of law that gives ordinary people the power to protect themselves against elite exploitation. But the Republican majority is bulldozing through basic democratic norms as though freedom has everything to do with the tax code and democracy just gets in the way.

Will Wilkinson is the vice president for policy at the Niskanen Center.

World’s biggest battery installation

JAMESTOWN, Australia—Tesla Inc. Chief Executive Elon Musk may have overpromised on production of the company’s latest electric car, but he is delivering on his audacious Australian battery bet.

An enormous Tesla-built battery system—storing electricity from a new wind farm and capable of supplying 30,000 homes for more than an hour—will be powered up over the coming days, the government of South Australia state said Thursday. Final tests are set to be followed by a street party that Mr. Musk, founder of both Tesla and rocket maker Space Exploration Technologies Corp., or SpaceX, was expected to attend.

Success would fulfill the risky pledge Mr. Musk made in March, to deliver a working system in “100 days from contract signature or it is free.” He was answering a Twitter challenge from Australian IT billionaire and environmentalist Mike Cannon-Brookes to help fix electricity problems in South Australia—which relies heavily on renewable energy—after crippling summer blackouts left 1.7 million people without power, some for weeks.

Mr. Cannon-Brookes then brokered talks between Mr. Musk and Australian Prime Minister Malcolm Turnbull, who has faced criticism from climate groups for winding back renewable-energy policies in favor of coal. South Australia notwithstanding, the country’s per-person greenhouse emissions are among the world’s highest.

South Australia’s government has yet to say how much the battery will cost taxpayers, although renewable-energy experts estimate it at US$50 million. Tesla says the system’s 100-megawatt capacity makes it the world’s largest, tripling the previous record array at Mira Loma in Ontario, Calif., also built by Tesla and U.S. power company Edison.

Senior concierge services

“Elder concierge”, or senior concierge services, are blossoming as baby boomers age:

CREDIT: New York Times Article on Senior Concierge Services

https://www.forbes.com/sites/robertpearl/2017/06/22/concierge-medicine/amp/

The concierges help their customers complete the relatively mundane activities of everyday life, a way for the semi- and fully retired to continue to work.

Facts of note:
“Around 10,000 people turn 65 every day in the United States, and by 2030, there will be 72 million people over 65 nationwide.
Some 43 million people already provide care to family members — either their own parents or children — according to AARP, and half of them are “sandwich generation” women, ages 40 to 60. All told, they contribute an estimated $470 billion a year in unpaid assistance.”
“Elder concierges charge by the hour, anywhere from $30 to $70, or in blocks of time, according to Katharine Giovanni, the director of the International Concierge & Lifestyle Management Network.”

Organizations of note:

“One start-up, AgeWell, employs able-bodied older people to assist less able people of the same age, figuring the two will find a social connection that benefits overall health. The company was founded by Mitch Besser, a doctor whose previous work involved putting H.I.V.-positive women together in mentoring relationships. AgeWell employees come from the same communities as their clients, some of whom are out of reach of medical professionals until an emergency.”

“The National Aging in Place Council, a trade group, is developing a social worker training program with Stony Brook University. It wants to have a dedicated set of social workers at the council, funded by donations, who are able to field calls from seniors and their caretakers, and make referrals to local service providers. The council already works with volunteers and small businesses in 25 cities to make referrals for things like home repair and remodeling, daily money management and legal issues.”

“The Village to Village Network has small businesses and volunteers working on a similar idea: providing older residents and their family or caretakers with referrals to vetted local services. In the Village to Village Network model, residents pay an annual fee, from about $400 to $700 for individuals and more for households. The organization so far has 25,000 members in 190 member-run communities across the United States, and is forming similar groups overseas as well.”

=========== ARTICLE IS BELOW ============

Baby Boomers Look to Senior Concierge Services to Raise Income
By LIZ MOYER MAY 19, 2017

In her 40 years as a photographer in the Denver area, Jill Kaplan did not think she would need her social work degree.
But when it became harder to make a living as a professional photographer, she joined a growing army of part-time workers across the country who help older people living independently, completing household tasks and providing companionship.
Elder concierge, as the industry is known, is a way for the semi- and fully retired to continue to work, and, from a business standpoint, the opportunities look as if they will keep growing. Around 10,000 people turn 65 every day in the United States, and by 2030, there will be 72 million people over 65 nationwide.
Some 43 million people already provide care to family members — either their own parents or children — according to AARP, and half of them are “sandwich generation” women, ages 40 to 60. All told, they contribute an estimated $470 billion a year in unpaid assistance.

Seven years ago, Ms. Kaplan, 63, made the leap, signing up with Denver-based Elder Concierge Services. She makes $25 to $40 an hour for a few days a week of work. She could be driving older clients to doctor’s appointments, playing cards or just acting as an extra set of eyes and ears for family members who aren’t able to be around but worry about their older relatives being isolated and alone. Many baby boomers themselves are attracted to the work because they feel an affinity for the client base.
“It’s very satisfying,” she said of the work, which supplements her photography income. Like others in search of additional money, she could have become an Uber driver but said this offered her a chance to do something “more meaningful.”
“We see a lot of women,” Ms. Kaplan said, “who had raised their families and cared for their parents out there looking for a purpose.”

Concierges are not necessarily social workers by background, and there isn’t a formal licensing program. They carry out tasks or help their customers complete the relatively mundane activities of everyday life, and just need to be able to handle the sometimes physical aspects of the job, like pushing a wheelchair.
Medical care is left to medical professionals. Instead, concierges help out around the house, get their client to appointments, join them for recreation, and run small errands.
While precise statistics are not available for the elder concierge industry, other on-demand industries have flourished, and baby boomers are a fast-growing worker population.
Nancy LeaMond, the AARP’s executive vice president and chief advocacy officer, said: “Everyone assumed the on-demand economy was a millennial thing. But it is really a boomer thing.”
Ms. LeaMond noted that while people like the extra cash, they also appreciate the “extra engagement.”
A variety of companies has sprung up, each fulfilling a different niche in the elder concierge economy.
In some areas, elder concierges charge by the hour, anywhere from $30 to $70, or in blocks of time, according to Katharine Giovanni, the director of the International Concierge & Lifestyle Management Network. Those considering going into the business should have liability insurance, Ms. Giovanni said.

One start-up, AgeWell, employs able-bodied older people to assist less able people of the same age, figuring the two will find a social connection that benefits overall health.
The company was founded by Mitch Besser, a doctor whose previous work involved putting H.I.V.-positive women together in mentoring relationships. AgeWell employees come from the same communities as their clients, some of whom are out of reach of medical professionals until an emergency.
The goal is to provide consistent monitoring to reduce or eliminate full-blown crises. AgeWell began in South Africa but recently got a grant to start a peer-to-peer companionship and wellness program in New York.
Elsewhere, in San Francisco, Justin Lin operates Envoy, a network of stay-at-home parents and part-time workers who accept jobs like grocery delivery, light housework and other tasks that don’t require medical training. Each Envoy employee is matched to a customer, who pays $18 to $20 an hour for the service, on top of a $19 monthly fee.
The inspiration for the company came from Mr. Lin’s work on a start-up called Mamapedia, an online parental wisdom-sharing forum, where he noticed a lot of people talking about the need for family care workers. He decided to start Envoy two years ago, after his own mother died of cancer, leaving him and his father to care for a disabled brother.
The typical Envoy employee works a few hours a week, so it won’t replace the earnings from a full-time job. But it nevertheless involves more interpersonal contact than simply standing behind a store counter.
“It’s not going to pay the rent,” Mr. Lin said. “They want to be flexible but also make a difference.”

Katleen Bouchard, 69, signed up with Envoy three years ago, after retiring from an advertising career. She gets $20 an hour working a handful of hours a week with older clients in her rural community in Sonoma County, Calif. She sees it as a chance to be civic-minded. “It’s very easy to help and be of service,” Ms. Bouchard said.
Companies like AgeWell and Envoy are part of the growing on-demand economy, where flexibility and entrepreneurship have combined to create a new class of workers, said Mary Furlong, a Silicon Valley consultant who specializes in the job market for baby boomers. At the same time, many retirees — as well as those on the cusp of retirement — worry that market volatility may hit their savings.
The extra income from the job, Ms. Furlong said, could help cover unexpected expenses. “You don’t know what the shocks are going to be that interrupt your plan,” she added.
Other organizations are looking to help direct older residents to vetted local service providers.
The National Aging in Place Council, a trade group, is developing a social worker training program with Stony Brook University. It wants to have a dedicated set of social workers at the council, funded by donations, who are able to field calls from seniors and their caretakers, and make referrals to local service providers.
The council already works with volunteers and small businesses in 25 cities to make referrals for things like home repair and remodeling, daily money management and legal issues.
Another group, the Village to Village Network, has small businesses and volunteers working on a similar idea: providing older residents and their family or caretakers with referrals to vetted local services.
In the Village to Village Network model, residents pay an annual fee, from about $400 to $700 for individuals and more for households. The organization so far has 25,000 members in 190 member-run communities across the United States, and is forming similar groups overseas as well.
“We feel like we are creating a new occupation,” said Marty Bell, the National Aging in Place Council’s executive director. “It’s really needed.”

Smuggling, Capitalism and the Law of Unintended Consequences

To me, this article seems to be about the border wall with Mexico, but it instead is about 1) the law of unintended consequences, and 2) the nature of capitalism.

The law of unintended consequences should never be underestimated; nor should the ability of capitalism to bring out the creativity of entrepreneurs and organizations when there is big money to be made.

A few notes:

“But rather than stopping smuggling, the barriers have just pushed it farther into the desert, deeper into the ground, into more sophisticated secret compartments in cars and into the drug cartels’ hands.”

“A majority of Americans now favor marijuana legalization, which is hitting the pockets of Mexican smugglers and will do so even more when California starts issuing licenses to sell recreational cannabis next year.”

The price of smuggling any given drug rises in proportion to the difficulty of smuggling it.

52 legal crossings
Nogales (Mexico) and Nogales (US), with dense homes right up to the border

Tricks:
Coyotes (human smugglers; once independent, now working for the cartels)
Donkeys (“burros,” the people who actually carry the drugs)
“Clavos” (secret compartments in cars, of ever-growing sophistication)
Trains (a principal means of smuggling)
“Trampolines” (gigantic catapults that hurl the drugs over any wall)
Tunnels (224 discovered between 1990 and 2016)

===============
CREDIT: New York Times Article: mexican-drug-smugglers-to-trump-thanks!

Mexican Drug Smugglers to Trump: Thanks!

Ioan Grillo
MAY 5, 2017

NOGALES, Mexico — Crouched in the spiky terrain near this border city, a veteran smuggler known as Flaco points to the steel border fence and describes how he has taken drugs and people into the United States for more than three decades. His smuggling techniques include everything from throwing drugs over in gigantic catapults to hiding them in the engine cars of freight trains to making side tunnels off the cross-border sewage system.

When asked whether the border wall promised by President Trump will stop smugglers, he smiles. “This is never going to stop, neither the narco trafficking nor the illegals,” he says. “There will be more tunnels. More holes. If it doesn’t go over, it will go under.”

What will change? The fees that criminal networks charge to transport people and contraband across the border. Every time the wall goes up, so do smuggling profits.

The first time Flaco took people over the line was in 1984, when he was 15; he showed them a hole torn in a wire fence on the edge of Nogales for a tip of 50 cents. Today, many migrants pay smugglers as much as $5,000 to head north without papers, trekking for days through the Sonoran Desert. Most of that money goes to drug cartels that have taken over the profitable business.

“From 50 cents to $5,000,” Flaco says. “As the prices went up, the mafia, which is the Sinaloa cartel, took over everything here, drugs and people smuggling.” Sinaloa dominates Nogales and other parts of northwest Mexico, while rivals, including the Juarez, Gulf and Zetas cartels, control other sections of the border. Flaco finished a five-year prison sentence here for drug trafficking in 2009 and has continued to smuggle since.

His comments underline a problem that has frustrated successive American governments and is likely to haunt President Trump, even if the wall becomes more than a rallying cry and he finally gets the billions of dollars needed to fund it. Strengthening defenses does not stop smuggling. It only makes it more expensive, which inadvertently gives more money to criminal networks.

The cartels have taken advantage of this to build a multibillion-dollar industry, and they protect it with brutal violence that destabilizes Mexico and forces thousands of Mexicans to head north seeking asylum.

Stretching almost 2,000 miles from the Pacific Ocean to the Gulf of Mexico, the border has proved treacherous to block. It traverses a sparsely populated desert, patches of soft earth that are easy to tunnel through, and the mammoth Rio Grande, which floods its banks, making fencing difficult.

And it contains 52 legal crossing points, where millions of people, cars, trucks and trains enter the United States every week.

President Trump’s idea of a wall is not new. Chunks of walls, fencing and anti-car spikes have been erected periodically, particularly in 1990 and 2006. On April 30, Congress reached a deal to fund the federal budget through September that failed to approve any money for extending the barriers as President Trump has promised. However, it did allocate several hundred million dollars for repairing existing infrastructure, and the White House has said it will use this to replace some fencing with a more solid wall.

But rather than stopping smuggling, the barriers have just pushed it: farther into the desert, deeper into the ground, into more sophisticated secret compartments in cars and into the drug cartels’ hands.
It is particularly concerning how cartels have taken over the human smuggling business. Known as coyotes, these smugglers used to work independently, or in small groups. Now they have to work for the cartel, which takes a huge cut of the profits, Flaco says. If migrants try to cross the border without paying, they risk getting beaten or murdered.

The number of people detained without papers on the southern border has dropped markedly in the first months of the Trump administration, with fewer than 17,000 apprehended in March, the lowest since 2000. But this has nothing to do with the yet-to-be-built new wall. The president’s anti-immigrant rhetoric could be a deterrent — signaling that tweets can have a bigger effect than bricks. However, this may not last, and there is no sign of drug seizures going down.

Flaco grew up in a Nogales slum called Buenos Aires, which has produced generations of smugglers. The residents refer to the people who carry over backpacks full of drugs as burros, or donkeys. “When I first heard about this, I thought they used real donkeys to carry the marijuana,” Flaco says. “Then I realized, we were the donkeys.”

He was paid $500 for his first trip as a donkey when he was in high school, encouraging him to drop out for what seemed like easy money.
The fences haven’t stopped the burros, who use either ropes or their bare hands to scale them. This was captured in extraordinary footage from a Mexican TV crew, showing smugglers climbing into California. But solid walls offer no solution, as they can also be scaled and they make it harder for border patrol agents to spot what smugglers are up to on the Mexican side.

Flaco quickly graduated to building secret compartments in cars. Called clavos, they are fixed into gas tanks, on dashboards, on roofs. The cars, known by customs agents as trap cars, then drive right through the ports of entry. In fact, while most marijuana is caught in the desert, harder drugs such as heroin are far more likely to go over the bridge.
When customs agents learned to look for the switches that opened the secret compartments, smugglers figured out how to do without them. Some new trap cars can be opened only with complex procedures, such as when the driver is in the seat, all doors are closed, the defroster is turned on and a special card is swiped.

Equally sophisticated engineering goes into the tunnels that turn the border into a block of Swiss cheese. Between 1990 and 2016, 224 tunnels were discovered, some with air vents, rails and electric lights. While the drug lord Joaquin Guzman, known as El Chapo, became infamous for using them, Flaco says they are as old as the border itself and began as natural underground rivers.

Tunnels are particularly popular in Nogales, where Mexican federal agents regularly seize houses near the border for having them. Flaco even shows me a filled-in passage that started inside a graveyard tomb. “It’s because Nogales is one of the few border towns that is urbanized right up to the line,” explains Mayor David Cuauhtémoc Galindo. “There are houses that are on both sides of the border at a very short distance,” making it easy to tunnel from one to the other.

Nogales is also connected to its neighbor across the border in Arizona, also called Nogales, by a common drainage system. It cannot be blocked, because the ground slopes downward from Mexico to the United States. Police officers took me into the drainage system and showed me several smuggling tunnels that had been burrowed off it. They had been filled in with concrete, but the officers warned that smugglers could be lurking around to make new ones and that I should hit the ground if we ran into any.

Back above ground, catapults are one of the most spectacular smuggling methods. “We call them trampolines,” Flaco says. “They have a spring that is like a tripod, and two or three people operate them.” Border patrol agents captured one that had been attached to the fence near the city of Douglas, Ariz., in February and showed photos of what looked like a medieval siege weapon.

Freight trains also cross the border, on their way from southern Mexico up to Canada. While agents inspect them, it’s impossible to search all the carriages, which are packed with cargo from cars to canned chilies. Flaco says the train workers are often paid off by the smugglers. He was once caught with a load of marijuana on a train in Arizona, but he managed to persuade police that he was a train worker and did only a month in jail.
While marijuana does less harm, the smugglers also bring heroin, crack cocaine and crystal meth to America, which kill many. Calls to wage war on drugs can be emotionally appealing. The way President Trump linked his promises of a wall to drug problems in rural America was most likely a factor in his victory.

But four decades after Richard Nixon declared a “war on drugs,” despite trillions of dollars spent on agents, soldiers and barriers, drugs are still easy to buy all across America.

President Trump has taken power at a turning point in the drug policy debate. A majority of Americans now favor marijuana legalization, which is hitting the pockets of Mexican smugglers and will do so even more when California starts issuing licenses to sell recreational cannabis next year. President Trump has also called for more treatment for drug addicts. He would be wise to make that, and not the wall, a cornerstone of his drug policy.

Reducing the finances of drug cartels could reduce some of the violence, and the number of people fleeing north to escape it. But to really tackle the issue of human smuggling, the United States must provide a path to papers for the millions of undocumented workers already in the country, and then make sure businesses hire only workers with papers in the future. So long as illegal immigrants can make a living in the United States, smugglers will make a fortune leading them there.

Stopping the demand for the smugglers’ services actually hits them in their pockets. Otherwise, they will just keep getting richer as the bricks get higher.

Ioan Grillo is the author of “Gangster Warlords: Drug Dollars, Killing Fields and the New Politics of Latin America” and a contributing opinion writer.

High costs of health care

I found this NYT story to be scary and illuminating. God save this country. Frankenstein lives… they are called CPT codes… and CPT consultants, CPT courses, CPT mavens, and AMA licensing of CPT (the AMA’s biggest source of revenue).

New York Times Article on High Costs of Health Care

Hospitals have learned to manipulate medical codes — often resulting in mind-boggling bills.

Our miserable 21st century

Below is dense – but worth it. It is written by a conservative, but an honest one.

It is the best documentation I have found on the thesis that I wrote about last year: that the 21st century economy is a structural mess, and the mess is a non-partisan one!

My basic contention is really simple:

9/11 diverted us from this issue, and then …
we compounded the diversion with two idiotic wars, and then …
we compounded the diversion further with an idiotic, devastating recession, and then …
we started to stabilize, which is why President Obama goes to the head of the class, and then …
we built a three-ring circus, and elected a clown as the ringmaster.

While we watch this three-ring circus in Washington, no one is paying attention to this structural problem in the economy… so we are wasting time when we should be tackling this central issue of our time. It’s a really complicated one, and there are no easy answers (sorry, Trump and Bernie Sanders).

PUT YOUR POLITICAL ARTILLERY DOWN AND READ ON …..

=======BEGIN=============

CREDIT: https://www.commentarymagazine.com/articles/our-miserable-21st-century/

Our Miserable 21st Century
From work to income to health to social mobility, the year 2000 marked the beginning of what has become a distressing era for the United States
NICHOLAS N. EBERSTADT / FEB. 15, 2017

On the morning of November 9, 2016, America’s elite—its talking and deciding classes—woke up to a country they did not know. To most privileged and well-educated Americans, especially those living in its bicoastal bastions, the election of Donald Trump had been a thing almost impossible even to imagine. What sort of country would go and elect someone like Trump as president? Certainly not one they were familiar with, or understood anything about.

Whatever else it may or may not have accomplished, the 2016 election was a sort of shock therapy for Americans living within what Charles Murray famously termed “the bubble” (the protective barrier of prosperity and self-selected associations that increasingly shield our best and brightest from contact with the rest of their society). The very fact of Trump’s election served as a truth broadcast about a reality that could no longer be denied: Things out there in America are a whole lot different from what you thought.

Yes, things are very different indeed these days in the “real America” outside the bubble. In fact, things have been going badly wrong in America since the beginning of the 21st century.

It turns out that the year 2000 marks a grim historical milestone of sorts for our nation. For whatever reasons, the Great American Escalator, which had lifted successive generations of Americans to ever higher standards of living and levels of social well-being, broke down around then—and broke down very badly.

The warning lights have been flashing, and the klaxons sounding, for more than a decade and a half. But our pundits and prognosticators and professors and policymakers, ensconced as they generally are deep within the bubble, were for the most part too distant from the distress of the general population to see or hear it. (So much for the vaunted “information era” and “big-data revolution.”) Now that those signals are no longer possible to ignore, it is high time for experts and intellectuals to reacquaint themselves with the country in which they live and to begin the task of describing what has befallen the country in which we have lived since the dawn of the new century.

II
Consider the condition of the American economy. In some circles it is still widely believed, as one recent New York Times business-section article cluelessly insisted before the inauguration, that “Mr. Trump will inherit an economy that is fundamentally solid.” But this is patent nonsense. By now it should be painfully obvious that the U.S. economy has been in the grip of deep dysfunction since the dawn of the new century. And in retrospect, it should also be apparent that America’s strange new economic maladies were almost perfectly designed to set the stage for a populist storm.

Ever since 2000, basic indicators have offered oddly inconsistent readings on America’s economic performance and prospects. It is curious and highly uncharacteristic to find such measures so very far out of alignment with one another. We are witnessing an ominous and growing divergence between three trends that should ordinarily move in tandem: wealth, output, and employment. Depending upon which of these three indicators you choose, America looks to be heading up, down, or more or less nowhere.

From the standpoint of wealth creation, the 21st century is off to a roaring start. By this yardstick, it looks as if Americans have never had it so good and as if the future is full of promise. Between early 2000 and late 2016, the estimated net worth of American households and nonprofit institutions more than doubled, from $44 trillion to $90 trillion. (SEE FIGURE 1.)

Although that wealth is not evenly distributed, it is still a fantastic sum of money—an average of over a million dollars for every notional family of four. This upsurge of wealth took place despite the crash of 2008—indeed, private wealth holdings are over $20 trillion higher now than they were at their pre-crash apogee. The value of American real-estate assets is near or at all-time highs, and America’s businesses appear to be thriving. Even before the “Trump rally” of late 2016 and early 2017, U.S. equities markets were hitting new highs—and since stock prices are strongly shaped by expectations of future profits, investors evidently are counting on the continuation of the current happy days for U.S. asset holders for some time to come.

A rather less cheering picture, though, emerges if we look instead at real trends for the macro-economy. Here, performance since the start of the century might charitably be described as mediocre, and prospects today are no better than guarded.

The recovery from the crash of 2008—which unleashed the worst recession since the Great Depression—has been singularly slow and weak. According to the Bureau of Economic Analysis (BEA), it took nearly four years for America’s gross domestic product (GDP) to re-attain its late 2007 level. As of late 2016, total value added to the U.S. economy was just 12 percent higher than in 2007. (SEE FIGURE 2.) The situation is even more sobering if we consider per capita growth. It took America six and a half years—until mid-2014—to get back to its late 2007 per capita production levels. And in late 2016, per capita output was just 4 percent higher than in late 2007—nine years earlier. By this reckoning, the American economy looks to have suffered something close to a lost decade.

But there was clearly trouble brewing in America’s macro-economy well before the 2008 crash, too. Between late 2000 and late 2007, per capita GDP growth averaged less than 1.5 percent per annum. That compares with the nation’s long-term postwar (1948–2000) per capita growth rate of almost 2.3 percent, which in turn can be compared to the “snap back” tempo of 1.1 percent per annum since per capita GDP bottomed out in 2009. Between 2000 and 2016, per capita growth in America has averaged less than 1 percent a year. To state it plainly: Had per capita GDP kept growing at the nation’s postwar, pre-21st-century rate over the years 2000–2016, it would be more than 20 percent higher than it is today.
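
A quick sanity check on that “more than 20 percent” claim (my note, not the author’s): compounding the two growth rates he quotes over the 16 years from 2000 to 2016 gives roughly the stated gap.

```python
# Counterfactual per capita GDP, compounding 2000-2016 at the postwar rate
# versus the actual rate (both rates as quoted in the article).
postwar_rate = 0.023   # 1948-2000 average per capita growth (quoted above)
actual_rate = 0.010    # 2000-2016 average; the article says "less than 1 percent"
years = 16

gap = (1 + postwar_rate) ** years / (1 + actual_rate) ** years - 1
print(f"Counterfactual per capita GDP gap: {gap:.1%}")  # ~22.7% -- "more than 20 percent"
```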

The reasons for America’s newly fitful and halting macroeconomic performance are still a puzzlement to economists and a subject of considerable contention and debate.[1] Economists are generally in consensus, however, in one area: They have begun redefining the growth potential of the U.S. economy downwards. The U.S. Congressional Budget Office (CBO), for example, suggests that the “potential growth” rate for the U.S. economy at full employment of factors of production has now dropped below 1.7 percent a year, implying a sustainable long-term annual per capita economic growth rate for America today of well under 1 percent.

Then there is the employment situation. If 21st-century America’s GDP trends have been disappointing, labor-force trends have been utterly dismal. Work rates have fallen off a cliff since the year 2000 and are at their lowest levels in decades. We can see this by looking at the estimates by the Bureau of Labor Statistics (BLS) for the civilian employment rate, the jobs-to-population ratio for adult civilian men and women. (SEE FIGURE 3.) Between early 2000 and late 2016, the overall work rate for Americans age 20 and older underwent a drastic decline, plunging by almost 5 percentage points (from 64.6 percent to 59.7 percent). Unless you are a labor economist, you may not appreciate just how severe a falloff in employment such numbers attest to. Postwar America never experienced anything comparable.

From peak to trough, the collapse in work rates for U.S. adults between 2008 and 2010 was roughly twice the amplitude of what had previously been the country’s worst postwar recession, back in the early 1980s. In that previous steep recession, it took America five years to re-attain the adult work rates recorded at the start of 1980. This time, the U.S. job market has as yet, in early 2017, scarcely begun to claw its way back up to the work rates of 2007—much less back to the work rates from early 2000.

As may be seen in Figure 3, U.S. adult work rates never recovered entirely from the recession of 2001—much less the crash of ’08. And the work rates being measured here include people who are engaged in any paid employment—any job, at any wage, for any number of hours of work at all.

On Wall Street and in some parts of Washington these days, one hears that America has gotten back to “near full employment.” For Americans outside the bubble, such talk must seem nonsensical. It is true that the oft-cited “civilian unemployment rate” looked pretty good by the end of the Obama era—in December 2016, it was down to 4.7 percent, about the same as it had been back in 1965, at a time of genuine full employment. The problem here is that the unemployment rate only tracks joblessness for those still in the labor force; it takes no account of workforce dropouts. Alas, the exodus out of the workforce has been the big labor-market story for America’s new century. (At this writing, for every unemployed American man between 25 and 55 years of age, there are another three who are neither working nor looking for work.) Thus the “unemployment rate” increasingly looks like an antique index devised for some earlier and increasingly distant war: the economic equivalent of a musket inventory or a cavalry count.
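
The divergence between these two measures is easy to see with a toy example (my illustration, with invented numbers): workforce dropouts simply vanish from the unemployment rate’s denominator, while the work rate still counts them.

```python
# Toy numbers showing how the unemployment rate can look like "full employment"
# while the work rate tells a much bleaker story (all figures invented).
population = 100.0   # adult civilian population
employed = 60.0
unemployed = 3.0     # jobless AND actively looking for work
dropouts = population - employed - unemployed  # neither working nor looking

unemployment_rate = unemployed / (employed + unemployed)  # dropouts excluded
work_rate = employed / population                         # dropouts included
print(f"Unemployment rate: {unemployment_rate:.1%}")  # 4.8%
print(f"Work rate:         {work_rate:.1%}")          # 60.0%
```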

By the criterion of adult work rates, by contrast, employment conditions in America remain remarkably bleak. From late 2009 through early 2014, the country’s work rates more or less flatlined. So far as can be told, this is the only “recovery” in U.S. economic history in which that basic labor-market indicator almost completely failed to respond.

Since 2014, there has finally been a measure of improvement in the work rate—but it would be unwise to exaggerate the dimensions of that turnaround. As of late 2016, the adult work rate in America was still at its lowest level in more than 30 years. To put things another way: If our nation’s work rate today were back up to its start-of-the-century highs, well over 10 million more Americans would currently have paying jobs.
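
Here is the back-of-the-envelope arithmetic behind a figure like “well over 10 million” (my sketch; the two work rates are the BLS numbers quoted above, and the 240 million adult civilian population is my assumed round figure):

```python
# Jobs gap implied by the fall in the adult work rate since early 2000.
work_rate_2000 = 0.646    # early-2000 work rate, age 20+ (quoted above)
work_rate_2016 = 0.597    # late-2016 work rate, age 20+ (quoted above)
adult_population = 240e6  # assumed 20-and-older civilian population, late 2016

jobs_gap = (work_rate_2000 - work_rate_2016) * adult_population
print(f"Implied jobs gap: {jobs_gap / 1e6:.1f} million")  # ~11.8 million
```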

There is no way to sugarcoat these awful numbers. They are not a statistical artifact that can be explained away by population aging, or by increased educational enrollment for adult students, or by any other genuine change in contemporary American society. The plain fact is that 21st-century America has witnessed a dreadful collapse of work.

For an apples-to-apples look at America’s 21st-century jobs problem, we can focus on the 25–54 population—known to labor economists for self-evident reasons as the “prime working age” group. For this key labor-force cohort, work rates in late 2016 were down almost 4 percentage points from their year-2000 highs. That is a jobs gap approaching 5 million for this group alone.

It is not only that work rates for prime-age males have fallen since the year 2000—they have, but the collapse of work for American men is a tale that goes back at least half a century. (I wrote a short book last year about this sad saga.[2]) What is perhaps more startling is the unexpected and largely unnoticed fall-off in work rates for prime-age women. In the U.S. and all other Western societies, postwar labor markets underwent an epochal transformation. After World War II, work rates for prime women surged, and continued to rise—until the year 2000. Since then, they too have declined. Current work rates for prime-age women are back to where they were a generation ago, in the late 1980s. The 21st-century U.S. economy has been brutal for male and female laborers alike—and the wreckage in the labor market has been sufficiently powerful to cancel, and even reverse, one of our society’s most distinctive postwar trends: the rise of paid work for women outside the household.

In our era of no more than indifferent economic growth, 21st-century America has somehow managed to produce markedly more wealth for its wealthholders even as it provided markedly less work for its workers. And trends for paid hours of work look even worse than the work rates themselves. Between 2000 and 2015, according to the BEA, total paid hours of work in America increased by just 4 percent (as against a 35 percent increase for 1985–2000, the 15-year period immediately preceding this one). Over the 2000–2015 period, however, the adult civilian population rose by almost 18 percent—meaning that paid hours of work per adult civilian have plummeted by a shocking 12 percent thus far in our new American century.
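
The shocking 12 percent follows directly from the two growth figures just cited; a one-line check (my note, using the article’s rounded numbers):

```python
# Change in paid hours of work per adult civilian, 2000-2015 (figures above).
hours_growth = 1.04        # total paid hours: +4 percent
population_growth = 1.18   # adult civilian population: +18 percent

per_adult_change = hours_growth / population_growth - 1
print(f"Paid hours per adult: {per_adult_change:+.1%}")  # about -12 percent
```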

This is the terrible contradiction of economic life in what we might call America’s Second Gilded Age (2000—). It is a paradox that may help us understand a number of overarching features of our new century. These include the consistent findings that public trust in almost all U.S. institutions has sharply declined since 2000, even as growing majorities hold that America is “heading in the wrong direction.” It provides an immediate answer to why overwhelming majorities of respondents in public-opinion surveys continue to tell pollsters, year after year, that our ever-richer America is still stuck in the middle of a recession. The mounting economic woes of the “little people” may not have been generally recognized by those inside the bubble, or even by many bubble inhabitants who claimed to be economic specialists—but they proved to be potent fuel for the populist fire that raged through American politics in 2016.

III
So general economic conditions for many ordinary Americans—not least of these, Americans who did not fit within the academy’s designated victim classes—have been rather more insecure than those within the comfort of the bubble understood. But the anxiety, dissatisfaction, anger, and despair that rage within our borders today are not wholly a reaction to the way our economy is misfiring. On the nonmaterial front, it is likewise clear that many things in our society are going wrong and yet seem beyond our powers to correct.

Some of these gnawing problems are by no means new: A number of them (such as family breakdown) can be traced back at least to the 1960s, while others are arguably as old as modernity itself (anomie and isolation in big anonymous communities, secularization and the decline of faith). But a number have roared down upon us by surprise since the turn of the century—and others have redoubled with fearsome new intensity since roughly the year 2000.

American health conditions seem to have taken a seriously wrong turn in the new century. It is not just that overall health progress has been shockingly slow, despite the trillions we devote to medical services each year. (Which “Cold War babies” among us would have predicted we’d live to see the day when life expectancy in East Germany was higher than in the United States, as is the case today?)

Alas, the problem is not just slowdowns in health progress—there also appears to have been positive retrogression for broad and heretofore seemingly untroubled segments of the national population. A short but electrifying 2015 paper by Anne Case and Nobel Economics Laureate Angus Deaton talked about a mortality trend that had gone almost unnoticed until then: rising death rates for middle-aged U.S. whites. By Case and Deaton’s reckoning, death rates rose slightly over the 1999–2013 period for all non-Hispanic white men and women 45–54 years of age—but they rose sharply for those with high-school degrees or less, and for this less-educated grouping most of the rise in death rates was accounted for by suicides, chronic liver cirrhosis, and poisonings (including drug overdoses).

Though some researchers, for highly technical reasons, suggested that the mortality spike might not have been quite as sharp as Case and Deaton reckoned, there is little doubt that the spike itself has taken place. Health has been deteriorating for a significant swath of white America in our new century, thanks in large part to drug and alcohol abuse. All this sounds a little too close for comfort to the story of modern Russia, with its devastating vodka- and drug-binging health setbacks. Yes: It can happen here, and it has. Welcome to our new America.

In December 2016, the Centers for Disease Control and Prevention (CDC) reported that for the first time in decades, life expectancy at birth in the United States had dropped very slightly (to 78.8 years in 2015, from 78.9 years in 2014). Though the decline was small, it was statistically meaningful—rising death rates were characteristic of males and females alike; of blacks and whites and Latinos together. (Only black women avoided mortality increases—their death levels were stagnant.) A jump in “unintentional injuries” accounted for much of the overall uptick.

It would be unwarranted to place too much portent in a single year’s mortality changes; slight annual drops in U.S. life expectancy have occasionally been registered in the past, too, followed by continued improvements. But given other developments we are witnessing in our new America, we must wonder whether the 2015 decline in life expectancy is just a blip, or the start of a new trend. We will find out soon enough. It cannot be encouraging, though, that the Human Mortality Database, an international consortium of demographers who vet national data to improve comparability between countries, has suggested that health progress in America essentially ceased in 2012—that the U.S. gained on average only about a single day of life expectancy at birth between 2012 and 2014, before the 2015 turndown.

The opioid epidemic of pain pills and heroin that has been ravaging and shortening lives from coast to coast is a new plague for our new century. The terrifying novelty of this particular drug epidemic, of course, is that it has gone (so to speak) “mainstream” this time, effecting a breakout from disadvantaged minority communities to Main Street White America. By 2013, according to a 2015 report by the Drug Enforcement Administration, more Americans died from drug overdoses (largely but not wholly opioid abuse) than from either traffic fatalities or guns. The dimensions of the opioid epidemic in the real America are still not fully appreciated within the bubble, where drug use tends to be more carefully limited and recreational. In Dreamland, his harrowing and magisterial account of modern America’s opioid explosion, the journalist Sam Quinones notes in passing that “in one three-month period” just a few years ago, according to the Ohio Department of Health, “fully 11 percent of all Ohioans were prescribed opiates.” And of course many Americans self-medicate with licit or illicit painkillers without doctors’ orders.

In the fall of 2016, Alan Krueger, former chairman of the President’s Council of Economic Advisers, released a study that further refined the picture of the real existing opioid epidemic in America: According to his work, nearly half of all prime working-age male labor-force dropouts—an army now totaling roughly 7 million men—currently take pain medication on a daily basis.

We already knew from other sources (such as BLS “time use” surveys) that the overwhelming majority of the prime-age men in this un-working army generally don’t “do civil society” (charitable work, religious activities, volunteering), or for that matter much in the way of child care or help for others in the home either, despite the abundance of time on their hands. Their routine, instead, typically centers on watching—watching TV, DVDs, Internet, hand-held devices, etc.—and indeed watching for an average of 2,000 hours a year, as if it were a full-time job. But Krueger’s study adds a poignant and immensely sad detail to this portrait of daily life in 21st-century America: In our mind’s eye we can now picture many millions of un-working men in the prime of life, out of work and not looking for jobs, sitting in front of screens—stoned.

But how did so many millions of un-working men, whose incomes are limited, manage en masse to afford a constant supply of pain medication? Oxycontin is not cheap. As Dreamland carefully explains, one main mechanism today has been the welfare state: more specifically, Medicaid, Uncle Sam’s means-tested health-benefits program. Here is how it works (we are with Quinones in Portsmouth, Ohio):

[The Medicaid card] pays for medicine—whatever pills a doctor deems that the insured patient needs. Among those who receive Medicaid cards are people on state welfare or on a federal disability program known as SSI. . . . If you could get a prescription from a willing doctor—and Portsmouth had plenty of them—Medicaid health-insurance cards paid for that prescription every month. For a three-dollar Medicaid co-pay, therefore, addicts got pills priced at thousands of dollars, with the difference paid for by U.S. and state taxpayers. A user could turn around and sell those pills, obtained for that three-dollar co-pay, for as much as ten thousand dollars on the street.

In 21st-century America, “dependence on government” has thus come to take on an entirely new meaning.

You may now wish to ask: What share of prime-working-age men these days are enrolled in Medicaid? According to the Census Bureau’s SIPP survey (Survey of Income and Program Participation), as of 2013, over one-fifth (21 percent) of all civilian men between 25 and 55 years of age were Medicaid beneficiaries. For prime-age people not in the labor force, the share was over half (53 percent). And for un-working Anglos (non-Hispanic white men not in the labor force) of prime working age, the share enrolled in Medicaid was 48 percent.

By the way: Of the entire un-working prime-age male Anglo population in 2013, nearly three-fifths (57 percent) were reportedly collecting disability benefits from one or more government disability programs. Disability checks and means-tested benefits cannot support a lavish lifestyle. But they can offer a permanent alternative to paid employment, and for growing numbers of American men, they do. The rise of these programs has coincided with the death of work for larger and larger numbers of American men not yet of retirement age. We cannot say that these programs caused the death of work for millions upon millions of younger men: What is incontrovertible, however, is that they have financed it—just as Medicaid inadvertently helped finance America’s immense and increasing appetite for opioids in our new century.

It is intriguing to note that America’s nationwide opioid epidemic has not been accompanied by a nationwide crime wave (excepting of course the apparent explosion of illicit heroin use). Just the opposite: As best can be told, national victimization rates for violent crimes and property crimes have both reportedly dropped by about two-thirds over the past two decades.[3] The drop in crime over the past generation has done great things for the general quality of life in much of America. There is one complication from this drama, however, that inhabitants of the bubble may not be aware of, even though it is all too well known to a great many residents of the real America. This is the extraordinary expansion of what some have termed America’s “criminal class”—the population sentenced to prison or convicted of felony offenses—in recent decades. This trend did not begin in our century, but it has taken on breathtaking enormity since the year 2000.

Most well-informed readers know that the U.S. currently has a higher share of its populace in jail or prison than almost any other country on earth, that Barack Obama and others talk of our criminal-justice process as “mass incarceration,” and that well over 2 million men were in prison or jail in recent years.[4] But only a tiny fraction of all living Americans ever convicted of a felony is actually incarcerated at this very moment. Quite the contrary: Maybe 90 percent of all sentenced felons today are out of confinement and living more or less among us. The reason: the basic arithmetic of sentencing and incarceration in America today. Correctional release and sentenced community supervision (probation and parole) guarantee a steady annual “flow” of convicted felons back into society to augment the very considerable “stock” of felons and ex-felons already there. And this “stock” is by now truly enormous.

One forthcoming demographic study by Sarah Shannon and five other researchers estimates that the cohort of current and former felons in America very nearly reached 20 million by the year 2010. If its estimates are roughly accurate, and if America’s felon population has continued to grow at more or less the same tempo traced out for the years leading up to 2010, we would expect it to surpass 23 million persons by the end of 2016 at the latest. Very rough calculations might therefore suggest that at this writing, America’s population of non-institutionalized adults with a felony conviction somewhere in their past has almost certainly broken the 20 million mark. A little more rough arithmetic suggests that about 17 million men in our general population have a felony conviction somewhere in their CV. That works out to one of every eight adult males in America today.
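
The stock-and-flow arithmetic here can be reproduced in a few lines (my sketch; the 23 million projection is from the text, and the roughly 2.3 million behind bars is footnote [4]):

```python
# Rough reconstruction of the felon "stock" arithmetic in the passage above.
ever_convicted = 23e6   # projected current-and-former felons, year-end 2016
incarcerated = 2.3e6    # roughly those behind bars, per footnote [4]

outside = ever_convicted - incarcerated
print(f"Out of confinement: {outside / 1e6:.0f} million "
      f"({outside / ever_convicted:.0%})")  # ~21 million, i.e. ~90 percent
```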

We have to use rough estimates here, rather than precise official numbers, because the government does not collect any data at all on the size or socioeconomic circumstances of this population of 20 million, and never has. Amazing as this may sound and scandalous though it may be, America has, at least to date, effectively banished this huge group—a group roughly twice the total size of our illegal-immigrant population and an adult population larger than that in any state but California—to a near-total and seemingly unending statistical invisibility. Our ex-cons are, so to speak, statistical outcasts who live in a darkness our polity does not care enough to illuminate—beyond the scope or interest of public policy, unless and until they next run afoul of the law.

Thus we cannot describe with any precision or certainty what has become of those who make up our “criminal class” after their (latest) sentencing or release. In the most stylized terms, however, we might guess that their odds in the real America are not all that favorable. And when we consider some of the other trends we have already mentioned—employment, health, addiction, welfare dependence—we can see the emergence of a malign new nationwide undertow, pulling downward against social mobility.

Social mobility has always been the jewel in the crown of the American mythos and ethos. The idea (not without a measure of truth to back it up) was that people in America are free to achieve according to their merit and their grit—unlike in other places, where they are trapped by barriers of class or the misfortune of misrule. Nearly two decades into our new century, there are unmistakable signs that America’s fabled social mobility is in trouble—perhaps even in serious trouble.

Consider the following facts. First, according to the Census Bureau, geographical mobility in America has been on the decline for three decades, and in 2016 the annual movement of households from one location to the next was reportedly at an all-time (postwar) low. Second, as a study by three Federal Reserve economists and a Notre Dame colleague demonstrated last year, “labor market fluidity”—the churning between jobs that among other things allows people to get ahead—has been on the decline in the American labor market for decades, with no sign as yet of a turnaround. Finally, and not least important, a December 2016 report by the “Equality of Opportunity Project,” a team led by the formidable Stanford economist Raj Chetty, calculated that the odds of a 30-year-old’s earning more than his parents at the same age were now just 51 percent: down from 86 percent 40 years ago. Other researchers who have examined the same data argue that the odds may not be quite as low as the Chetty team concludes, but agree that the chances of surpassing one’s parents’ real income have been on the downswing and are probably lower now than ever before in postwar America.

Thus the bittersweet reality of life for real Americans in the early 21st century: Even though the American economy still remains the world’s unrivaled engine of wealth generation, those outside the bubble may have less of a shot at the American Dream than has been the case for decades, maybe generations—possibly even since the Great Depression.

IV
The funny thing is, people inside the bubble are forever talking about “economic inequality,” that wonderful seminar construct, and forever virtue-signaling about how personally opposed they are to it. By contrast, “economic insecurity” is akin to a phrase from an unknown language. But if we were somehow to find a “Google Translate” function for communicating from real America into the bubble, an important message might be conveyed:

The abstraction of “inequality” doesn’t matter a lot to ordinary Americans. The reality of economic insecurity does. The Great American Escalator is broken—and it badly needs to be fixed.

With the election of 2016, Americans within the bubble finally learned that the 21st century has gotten off to a very bad start in America. Welcome to the reality. We have a lot of work to do together to turn this around.

[1] Some economists suggest the reason has to do with the unusual nature of the Great Recession: that downturns born of major financial crises intrinsically require longer adjustment and correction periods than the more familiar, ordinary business-cycle downturn. Others have proposed theories to explain why the U.S. economy may instead have downshifted to a more tepid tempo in the Bush-Obama era. One such theory holds that the pace of productivity is dropping because the scale of recent technological innovation is unrepeatable. There is also a “secular stagnation” hypothesis, surmising we have entered into an age of very low “natural real interest rates” consonant with significantly reduced demand for investment. What is incontestable is that the 10-year moving average for per capita economic growth is lower for America today than at any time since the Korean War—and that the slowdown in growth commenced in the decade before the 2008 crash. (It is also possible that the anemic status of the U.S. macro-economy is being exaggerated by measurement issues—productivity improvements from information technology, for example, have been oddly elusive in our officially reported national output—but few today would suggest that such concealed gains would totally transform our view of the real economy’s true performance.)
[2] Nicholas Eberstadt, Men Without Work: America’s Invisible Crisis (Templeton Press, 2016).
[3] This is not to ignore the gruesome exceptions—places like Chicago and Baltimore—or to neglect the risk that crime may make a more general comeback: It is simply to acknowledge one of the bright trends for America in the new century.
[4] In 2013, roughly 2.3 million men were behind bars according to the Bureau of Justice Statistics.

One could be forgiven for wondering what Kellyanne Conway, a close adviser to President Trump, was thinking recently when she turned the White House briefing room into the set of the Home Shopping Network. “Go buy Ivanka’s stuff!” she told Fox News viewers during an interview, referring to the clothing and accessories line of the president’s daughter. It’s not clear if her cheerleading led to any spike in sales, but it did lead to calls for an investigation into whether she violated federal ethics rules, and prompted the White House to later state that it had “counseled” Conway about her behavior.

To understand what provoked Conway’s on-air marketing campaign, look no further than the ongoing boycotts targeting all things Trump. This latest manifestation of the passion to impose financial harm to make a political point has taken things in a new and odd direction. Once, boycotts were serious things, requiring serious commitment and real sacrifice. There were boycotts by aggrieved workers, such as the United Farm Workers, against their employers; boycotts by civil-rights activists and religious groups; and boycotts of goods produced by nations like apartheid-era South Africa. Many of these efforts, sustained over years by committed cadres of activists, successfully pressured businesses and governments to change.

Since Trump’s election, the boycott has become less an expression of long-term moral and practical opposition and more an expression of the left’s collective id. As Harvard Business School professor Michael Norton told the Atlantic recently, “Increasingly, the way we express our political opinions is through buying or not buying instead of voting or not voting.” And evidently the way some people express political opinions when someone they don’t like is elected is to launch an endless stream of virtue-signaling boycotts. Democratic politicians ostentatiously boycotted Trump’s inauguration. New Balance sneaker owners vowed to boycott the company and filmed themselves torching their shoes after a company spokesman tweeted praise for Trump. Trump detractors called for a boycott of L.L. Bean after one of its board members was discovered to have (gasp!) given a personal contribution to a pro-Trump PAC.

By their nature, boycotts are a form of proxy warfare, tools wielded by consumers who want to send a message to a corporation or organization about their displeasure with specific practices.

Trump-era boycotts, however, merely seem to be a way to channel an overwhelming yet vague feeling of political frustration. Take the “Grab Your Wallet” campaign, whose mission, described in humblebragging detail on its website, is as follows: “Since its first humble incarnation as a screenshot on October 11, the #GrabYourWallet boycott list has grown as a central resource for understanding how our own consumer purchases have inadvertently supported the political rise of the Trump family.”

So this boycott isn’t against a specific business or industry; it’s a protest against one man and his children, with trickle-down effects for anyone who does business with them. Grab Your Wallet doesn’t just boycott Trump-branded hotels and golf courses; the group also targets businesses such as Bed Bath & Beyond because it carries Ivanka Trump diaper bags. Even QVC and the Carnival Cruise corporation are targeted for boycott because they advertise on Celebrity Apprentice, which supposedly “further enriches Trump.”

Grab Your Wallet has received support from “notable figures” such as “Don Cheadle, Greg Louganis, Lucy Lawless, Roseanne Cash, Neko Case, Joyce Carol Oates, Robert Reich, Pam Grier, and Ben Cohen (of Ben & Jerry’s),” according to the group’s website. This rogues gallery of celebrity boycotters has been joined by enthusiastic hashtag activists on Twitter who post remarks such as, “Perhaps fed govt will buy all Ivanka merch & force prisoners & detainees in coming internment camps 2 wear it” and “Forced to #DressLikeaWoman by a sexist boss? #GrabYourWallet and buy a nice FU pantsuit at Trump-free shops.” There’s even a website, dontpaytrump.com, which offers a free plug-in extension for your Web browser. It promises a “simple Trump boycott extension that makes it easy to be a conscious consumer and keep your money out of Trump’s tiny hands.”

Many of the companies targeted for boycott—Bed Bath & Beyond, QVC, TJ Maxx, Amazon—are the kind of retailers that carry moderately priced merchandise that working- and middle-class families can afford. But the list of Grab Your Wallet–approved alternatives for shopping is made up of places like Bergdorf’s and Barney’s. These are hardly accessible choices for the TJ Maxx customer. Indeed, there is more than a whiff of quasi-racist elitism in the self-congratulatory tweets posted by Grab Your Wallet supporters, such as this response to news that Nordstrom is no longer planning to carry Ivanka’s shoe line: “Soon we’ll see Ivanka shoes at Dollar Store, next to Jalapeno Windex and off-brand batteries.”

If Grab Your Wallet is really about “flexing of consumer power in favor of a more respectful, inclusive society,” then it has some work to do.

And then there are the conveniently malleable ethics of the anti-Trump boycott brigade. A small number of affordable retailers like Old Navy made the Grab Your Wallet cut for “approved” alternatives for shopping. But just a few years ago, a progressive website described in detail the “living hell of a Bangladeshi sweatshop” that manufactures Old Navy clothing. Evidently progressives can now sleep peacefully at night knowing large corporations like Old Navy profit from young Bangladeshis making 20 cents an hour and working 17-hour days churning out cheap cargo pants—as long as they don’t bear a Trump label.

In truth, it matters little if Ivanka’s fashion business goes bust. It was always just a branding game anyway. The world will go on in the absence of Ivanka-named suede ankle booties. And in some sense the rash of anti-Trump boycotts is just what Trump, who frequently calls for boycotts of media outlets such as Rolling Stone and retailers like Macy’s, deserves.

But the left’s boycott braggadocio might prove short-lived. Nordstrom denied that it dropped Ivanka’s line of apparel and shoes because of pressure from the Grab Your Wallet campaign; it blamed lagging sales. And the boycotters’ tone of moral superiority—like the ridiculous posturing of the anti-Trump left’s self-flattering designation, “the resistance”—won’t endear them to the Trump voters they must convert if they hope to gain ground in the midterm elections.

As for inclusiveness, as one contributor to Psychology Today noted, the demographic breakdown of the typical boycotter, “especially consumer and ecological boycotts,” is a young, well-educated, politically left woman, undermining somewhat the idea of boycotts as a weapon of the weak and oppressed.

Self-indulgent protests and angry boycotts are no doubt cathartic for their participants (a 2016 study in the Journal of Consumer Affairs cited psychological research that found “by venting their frustrations, consumers can diminish their negative psychological states and, as a result, experience relief”). But such protests are not always ultimately catalytic. As researchers noted in a study published recently at Social Science Research Network, protesters face what they call “the activists’ dilemma,” which occurs when “tactics that raise awareness also tend to reduce popular support.” As the study found, “while extreme tactics may succeed in attracting attention, they typically reduce popular public support for the movement by eroding bystanders’ identification with the movement, ultimately deterring bystanders from supporting the cause or becoming activists themselves.”

The progressive left should be thoughtful about the reality of such protest fatigue. Writing in the Guardian, Jamie Peck recently enthused: “Of course, boycotts alone will not stop Trumpism. Effective resistance to authoritarianism requires more disruptive actions than not buying certain products . . . . But if there’s anything the past few weeks have taught us, it’s that resistance must take as many forms as possible, and it’s possible to call attention to the ravages of neoliberalism while simultaneously allying with any and all takers against the immediate dangers posed by our impetuous orange president.”

Boycotts are supposed to be about accountability. But accountability is a two-way street. The motives and tactics of the boycotters themselves are of the utmost importance. In his book about consumer boycotts, scholar Monroe Friedman advises that successful ones depend on a “rationale” that is “simple, straightforward, and appear[s] legitimate.” Whatever Trump’s flaws (and they are legion), by “going low” with scattershot boycotts, the left undermines its own legitimacy—and its claims to the moral high ground of “resistance” in the process.

========END===============