
DeathEd

I recently did a post on “elderhood” (http://johncreid.com/2017/02/elderhood/). This post is a follow-up to that.

CREDIT:
https://www.nytimes.com/2017/02/18/opinion/sunday/first-sex-ed-then-death-ed.html?emc=edit_th_20170219&nl=todaysheadlines&nlid=44049881

First, Sex Ed. Then Death Ed.

By JESSICA NUTIK ZITTER • FEB. 18, 2017

FIVE years ago, I taught sex education to my daughter Tessa’s class. Last week, I taught death education to my daughter Sasha’s class. In both cases, I didn’t really want to delegate the task. I wanted my daughters and the other children in the class to know about all of the tricky situations that might await them. I didn’t want anyone mincing words or using euphemisms. Also, there was no one else to do it. And in the case of death ed, no curriculum to do it with.

When Tessa heard I’d be teaching sex ed to her fellow seventh graders, she was mortified. My husband suggested she wear a paper bag over her head, whereupon she rolled her eyes and walked away. When the day arrived, she slunk to the back of the room, sat down at a desk and lowered her head behind her backpack.

As I started in, 13 girls watched me with trepidation. I knew I needed to bring in the words they were dreading right away, so that we could move on to the important stuff. “Penis and vagina,” I said, and there were nervous giggles. A pencil dropped to the floor. With the pressure released, I moved on to talking about contraception, saying no, saying yes, pregnancy, sexually transmitted diseases, even roofies. By the end of the hour, hands were held urgently in the air, and my daughter’s head had emerged from behind her backpack.

Sexual education programming was promoted by the National Education Association as far back as 1892 as a necessary part of a national education curriculum. As information spread and birth control became increasingly available, unwanted pregnancies dropped, and rates of S.T.D.s plummeted. In this case, knowledge really is power.

I believe that this is true of death, too.

I am a doctor who practices both critical and palliative care medicine at a hospital in Oakland, Calif. I love to use my high-tech tools to save lives in the intensive-care unit. But I am also witness to the profound suffering those very same tools can inflict on patients who are approaching the end of life. Too many of our patients die in overmedicalized conditions, where treatments and technologies are used by default, even when they are unlikely to help. Many patients have I.C.U. stays in the days before death that often involve breathing machines, feeding tubes and liquid calories running through those tubes into the stomach. The use of arm restraints to prevent accidental dislodgment of the various tubes and catheters is common.

Many of the patients I have cared for at the end of their lives had no idea they were dying, despite raging illness and repeated hospital admissions. The reasons for this are complex and varied — among them poor physician training in breaking bad news and a collective hope that our technologies will somehow ultimately triumph against death. By the time patients are approaching the end, they are often too weak or disabled to express their preferences, if those preferences were ever considered at all. Patients aren’t getting what they say they want. For example, 80 percent of Americans would prefer to die at home, but only 20 percent achieve that wish.

Many of us would choose to die in a planned, comfortable way, surrounded by those we love. But you can’t plan for a good death if you don’t know you’re dying. We need to learn how to make a place for death in our lives and we also need to learn how to plan for it. In most cases, the suffering could have been avoided, or at least mitigated, by some education on death and our medical system. The fact is that when patients are prepared, they die better. When they have done the work of considering their own goals and values, and have documented those preferences, they make different choices. What people want when it comes to end-of-life care is almost never as much as what we give them.

I am a passionate advocate for educating teenagers to be responsible about their sexuality. And I believe it is past time for us to educate them also about death, an equally important stage of life, and one for which the consequences of poor preparedness are as bad, arguably worse. Ideally this education would come early, well before it’s likely to be needed.

I propose that we teach death ed in all of our high schools. I see this curriculum as a civic responsibility. I understand that might sound radical, but bear with me. Why should death be considered more taboo than sex? Both are a natural part of life. We may think death is too scary for kids to talk about, but I believe the consequences of a bad death are far scarier. A death ed program would aim to normalize this passage of life and encourage students to prepare for it, whenever it might come — for them, or for their families.

Every year in my I.C.U. I see dozens of young people at the bedsides of dying relatives. If we started to teach death ed in high school, a student visiting a dying grandparent might draw from the curriculum to ask a question that could shift the entire conversation. She might ask about a palliative care consultation, for example, or share important information about the patient’s preferences that she elicited during her course. High school, when students are getting their drivers’ licenses and considering organ donation, is the perfect time for this. Where else do we have the attention of our entire society?

Last week, my colleague Dawn Gross and I taught our first death ed program in my daughter’s ninth-grade class at the Head-Royce School, a private, progressive (and brave) school in Oakland. In the classroom, we had some uncomfortable terms to get out of the way early on, just as I did in sex ed — death, cancer, dementia. We showed the teenagers clips of unrealistic rescues on the TV show “Grey’s Anatomy,” and then we debunked them. We described the realities of life in the I.C.U. without mincing words — the effects of a life prolonged on machines, the arm restraints, the isolation. Everyone was with us, a little tentative, but rapt.

And then we presented the material another way. We taught them how to play “Go Wish,” a card game designed to ease families into these difficult conversations in an entertaining way. We asked students to identify their most important preferences and values, both in life and as death might approach. We discussed strategies for communicating these preferences to a health care team and to their own families.

We were delighted by their response. It didn’t take them long to jump in. They talked openly about their own preferences around death. One teenager told another that she wanted to make sure she wasn’t a burden to her family. A third said he was looking forward to playing “Go Wish” with his grandfather, who recently had a health scare.

Dawn and I walked out with huge smiles on our faces. No one had fainted. No one had run out of the class screaming. The health teacher told us she was amazed by their level of engagement. It is my hope that this is only the first step toward generating wide public literacy about this phase of life, which will eventually affect us all. The sooner we start talking about it, the better.

Our miserable 21st century

The piece below is dense – but worth it. It is written by a conservative, but an honest one.

It is the best documentation I have found on the thesis that I wrote about last year: that the 21st century economy is a structural mess, and the mess is a non-partisan one!

My basic contention is really simple:

9/11 diverted us from this issue, and then …
we compounded the diversion with two idiotic wars, and then …
we compounded the diversion further with an idiotic, devastating recession, and then …
we started to stabilize, which is why President Obama goes to the head of the class, and then …
we built a three ring circus, and elected a clown as the ringmaster.

While we watch this three-ring circus in Washington, no one is paying attention to this structural problem in the economy … so we are wasting time when we should be tackling this central issue of our time. It’s a really complicated one, and there are no easy answers (sorry, Trump and Bernie Sanders).

PUT YOUR POLITICAL ARTILLERY DOWN AND READ ON …..

=======BEGIN=============

CREDIT: https://www.commentarymagazine.com/articles/our-miserable-21st-century/

Our Miserable 21st Century
From work to income to health to social mobility, the year 2000 marked the beginning of what has become a distressing era for the United States
NICHOLAS N. EBERSTADT / FEB. 15, 2017

On the morning of November 9, 2016, America’s elite—its talking and deciding classes—woke up to a country they did not know. To most privileged and well-educated Americans, especially those living in its bicoastal bastions, the election of Donald Trump had been a thing almost impossible even to imagine. What sort of country would go and elect someone like Trump as president? Certainly not one they were familiar with, or understood anything about.

Whatever else it may or may not have accomplished, the 2016 election was a sort of shock therapy for Americans living within what Charles Murray famously termed “the bubble” (the protective barrier of prosperity and self-selected associations that increasingly shield our best and brightest from contact with the rest of their society). The very fact of Trump’s election served as a truth broadcast about a reality that could no longer be denied: Things out there in America are a whole lot different from what you thought.

Yes, things are very different indeed these days in the “real America” outside the bubble. In fact, things have been going badly wrong in America since the beginning of the 21st century.

It turns out that the year 2000 marks a grim historical milestone of sorts for our nation. For whatever reasons, the Great American Escalator, which had lifted successive generations of Americans to ever higher standards of living and levels of social well-being, broke down around then—and broke down very badly.

The warning lights have been flashing, and the klaxons sounding, for more than a decade and a half. But our pundits and prognosticators and professors and policymakers, ensconced as they generally are deep within the bubble, were for the most part too distant from the distress of the general population to see or hear it. (So much for the vaunted “information era” and “big-data revolution.”) Now that those signals are no longer possible to ignore, it is high time for experts and intellectuals to reacquaint themselves with the country in which they live and to begin the task of describing what has befallen the country in which we have lived since the dawn of the new century.

II
Consider the condition of the American economy. In some circles it is still widely believed, as one recent New York Times business-section article cluelessly insisted before the inauguration, that “Mr. Trump will inherit an economy that is fundamentally solid.” But this is patent nonsense. By now it should be painfully obvious that the U.S. economy has been in the grip of deep dysfunction since the dawn of the new century. And in retrospect, it should also be apparent that America’s strange new economic maladies were almost perfectly designed to set the stage for a populist storm.

Ever since 2000, basic indicators have offered oddly inconsistent readings on America’s economic performance and prospects. It is curious and highly uncharacteristic to find such measures so very far out of alignment with one another. We are witnessing an ominous and growing divergence between three trends that should ordinarily move in tandem: wealth, output, and employment. Depending upon which of these three indicators you choose, America looks to be heading up, down, or more or less nowhere.

From the standpoint of wealth creation, the 21st century is off to a roaring start. By this yardstick, it looks as if Americans have never had it so good and as if the future is full of promise. Between early 2000 and late 2016, the estimated net worth of American households and nonprofit institutions more than doubled, from $44 trillion to $90 trillion. (SEE FIGURE 1.)

Although that wealth is not evenly distributed, it is still a fantastic sum of money—an average of over a million dollars for every notional family of four. This upsurge of wealth took place despite the crash of 2008—indeed, private wealth holdings are over $20 trillion higher now than they were at their pre-crash apogee. The value of American real-estate assets is near or at all-time highs, and America’s businesses appear to be thriving. Even before the “Trump rally” of late 2016 and early 2017, U.S. equities markets were hitting new highs—and since stock prices are strongly shaped by expectations of future profits, investors evidently are counting on the continuation of the current happy days for U.S. asset holders for some time to come.

A rather less cheering picture, though, emerges if we look instead at real trends for the macro-economy. Here, performance since the start of the century might charitably be described as mediocre, and prospects today are no better than guarded.

The recovery from the crash of 2008—which unleashed the worst recession since the Great Depression—has been singularly slow and weak. According to the Bureau of Economic Analysis (BEA), it took nearly four years for America’s gross domestic product (GDP) to re-attain its late 2007 level. As of late 2016, total value added to the U.S. economy was just 12 percent higher than in 2007. (SEE FIGURE 2.) The situation is even more sobering if we consider per capita growth. It took America six and a half years—until mid-2014—to get back to its late 2007 per capita production levels. And in late 2016, per capita output was just 4 percent higher than in late 2007—nine years earlier. By this reckoning, the American economy looks to have suffered something close to a lost decade.

But there was clearly trouble brewing in America’s macro-economy well before the 2008 crash, too. Between late 2000 and late 2007, per capita GDP growth averaged less than 1.5 percent per annum. That compares with the nation’s long-term postwar 1948–2000 per capita growth rate of almost 2.3 percent, which in turn can be compared to the “snap back” tempo of 1.1 percent per annum since per capita GDP bottomed out in 2009. Between 2000 and 2016, per capita growth in America has averaged less than 1 percent a year. To state it plainly: With postwar, pre-21st-century rates for the years 2000–2016, per capita GDP in America would be more than 20 percent higher than it is today.
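The “more than 20 percent higher” claim follows from compounding the two growth rates the author cites; a quick back-of-the-envelope check (a sketch using the article’s stated rates, rounded to 2.3 percent and 1 percent):

```python
# Counterfactual per capita GDP gap for 2000-2016, compounding the article's rates.
years = 16                # 2000 through 2016
postwar_rate = 0.023      # ~2.3% per annum average, 1948-2000
actual_rate = 0.01        # "less than 1 percent a year", 2000-2016

counterfactual = (1 + postwar_rate) ** years
actual = (1 + actual_rate) ** years
gap = counterfactual / actual - 1

print(f"Per capita GDP would be about {gap:.0%} higher")  # roughly 23%
```

The exact figure depends on the precise rates used, but anything near these values lands comfortably above the 20 percent the author claims.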

The reasons for America’s newly fitful and halting macroeconomic performance are still a puzzlement to economists and a subject of considerable contention and debate.[1] Economists are generally in consensus, however, in one area: They have begun redefining the growth potential of the U.S. economy downwards. The U.S. Congressional Budget Office (CBO), for example, suggests that the “potential growth” rate for the U.S. economy at full employment of factors of production has now dropped below 1.7 percent a year, implying a sustainable long-term annual per capita economic growth rate for America today of well under 1 percent.

Then there is the employment situation. If 21st-century America’s GDP trends have been disappointing, labor-force trends have been utterly dismal. Work rates have fallen off a cliff since the year 2000 and are at their lowest levels in decades. We can see this by looking at the estimates by the Bureau of Labor Statistics (BLS) for the civilian employment rate, the jobs-to-population ratio for adult civilian men and women. (SEE FIGURE 3.) Between early 2000 and late 2016, America’s overall work rate for Americans age 20 and older underwent a drastic decline. It plunged by almost 5 percentage points (from 64.6 to 59.7). Unless you are a labor economist, you may not appreciate just how severe a falloff in employment such numbers attest to. Postwar America never experienced anything comparable.

From peak to trough, the collapse in work rates for U.S. adults between 2008 and 2010 was roughly twice the amplitude of what had previously been the country’s worst postwar recession, back in the early 1980s. In that previous steep recession, it took America five years to re-attain the adult work rates recorded at the start of 1980. This time, the U.S. job market has as yet, in early 2017, scarcely begun to claw its way back up to the work rates of 2007—much less back to the work rates from early 2000.

As may be seen in Figure 3, U.S. adult work rates never recovered entirely from the recession of 2001—much less the crash of ’08. And the work rates being measured here include people who are engaged in any paid employment—any job, at any wage, for any number of hours of work at all.

On Wall Street and in some parts of Washington these days, one hears that America has gotten back to “near full employment.” For Americans outside the bubble, such talk must seem nonsensical. It is true that the oft-cited “civilian unemployment rate” looked pretty good by the end of the Obama era—in December 2016, it was down to 4.7 percent, about the same as it had been back in 1965, at a time of genuine full employment. The problem here is that the unemployment rate only tracks joblessness for those still in the labor force; it takes no account of workforce dropouts. Alas, the exodus out of the workforce has been the big labor-market story for America’s new century. (At this writing, for every unemployed American man between 25 and 55 years of age, there are another three who are neither working nor looking for work.) Thus the “unemployment rate” increasingly looks like an antique index devised for some earlier and increasingly distant war: the economic equivalent of a musket inventory or a cavalry count.

By the criterion of adult work rates, by contrast, employment conditions in America remain remarkably bleak. From late 2009 through early 2014, the country’s work rates more or less flatlined. So far as can be told, this is the only “recovery” in U.S. economic history in which that basic labor-market indicator almost completely failed to respond.

Since 2014, there has finally been a measure of improvement in the work rate—but it would be unwise to exaggerate the dimensions of that turnaround. As of late 2016, the adult work rate in America was still at its lowest level in more than 30 years. To put things another way: If our nation’s work rate today were back up to its start-of-the-century highs, well over 10 million more Americans would currently have paying jobs.

There is no way to sugarcoat these awful numbers. They are not a statistical artifact that can be explained away by population aging, or by increased educational enrollment for adult students, or by any other genuine change in contemporary American society. The plain fact is that 21st-century America has witnessed a dreadful collapse of work.

For an apples-to-apples look at America’s 21st-century jobs problem, we can focus on the 25–54 population—known to labor economists for self-evident reasons as the “prime working age” group. For this key labor-force cohort, work rates in late 2016 were down almost 4 percentage points from their year-2000 highs. That is a jobs gap approaching 5 million for this group alone.
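The 5-million figure is just the percentage-point drop applied to the size of the cohort. A sketch of that arithmetic — note that the ~125 million prime-age population is my assumption for illustration, not a number given in the article:

```python
# Rough jobs-gap arithmetic for the prime working-age (25-54) cohort.
prime_age_population = 125_000_000  # assumed size of the U.S. 25-54 cohort, ~2016
work_rate_drop = 0.04               # "almost 4 percentage points" below the 2000 high

jobs_gap = prime_age_population * work_rate_drop
print(f"Jobs gap: about {jobs_gap / 1e6:.0f} million")  # about 5 million
```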

It is not only that work rates for prime-age males have fallen since the year 2000—they have, but the collapse of work for American men is a tale that goes back at least half a century. (I wrote a short book last year about this sad saga.[2]) What is perhaps more startling is the unexpected and largely unnoticed fall-off in work rates for prime-age women. In the U.S. and all other Western societies, postwar labor markets underwent an epochal transformation. After World War II, work rates for prime women surged, and continued to rise—until the year 2000. Since then, they too have declined. Current work rates for prime-age women are back to where they were a generation ago, in the late 1980s. The 21st-century U.S. economy has been brutal for male and female laborers alike—and the wreckage in the labor market has been sufficiently powerful to cancel, and even reverse, one of our society’s most distinctive postwar trends: the rise of paid work for women outside the household.

In our era of no more than indifferent economic growth, 21st–century America has somehow managed to produce markedly more wealth for its wealthholders even as it provided markedly less work for its workers. And trends for paid hours of work look even worse than the work rates themselves. Between 2000 and 2015, according to the BEA, total paid hours of work in America increased by just 4 percent (as against a 35 percent increase for 1985–2000, the 15-year period immediately preceding this one). Over the 2000–2015 period, however, the adult civilian population rose by almost 18 percent—meaning that paid hours of work per adult civilian have plummeted by a shocking 12 percent thus far in our new American century.
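The “shocking 12 percent” decline in paid hours per adult follows directly from the two growth figures in the paragraph above:

```python
# Paid hours per adult civilian, 2000-2015, from the article's two growth figures.
hours_growth = 1.04        # total paid hours up 4% (BEA)
population_growth = 1.18   # adult civilian population up almost 18%

per_adult_change = hours_growth / population_growth - 1
print(f"Paid hours per adult civilian: {per_adult_change:.0%}")  # about -12%
```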

This is the terrible contradiction of economic life in what we might call America’s Second Gilded Age (2000—). It is a paradox that may help us understand a number of overarching features of our new century. These include the consistent findings that public trust in almost all U.S. institutions has sharply declined since 2000, even as growing majorities hold that America is “heading in the wrong direction.” It provides an immediate answer to why overwhelming majorities of respondents in public-opinion surveys continue to tell pollsters, year after year, that our ever-richer America is still stuck in the middle of a recession. The mounting economic woes of the “little people” may not have been generally recognized by those inside the bubble, or even by many bubble inhabitants who claimed to be economic specialists—but they proved to be potent fuel for the populist fire that raged through American politics in 2016.

III
So general economic conditions for many ordinary Americans—not least of these, Americans who did not fit within the academy’s designated victim classes—have been rather more insecure than those within the comfort of the bubble understood. But the anxiety, dissatisfaction, anger, and despair that range within our borders today are not wholly a reaction to the way our economy is misfiring. On the nonmaterial front, it is likewise clear that many things in our society are going wrong and yet seem beyond our powers to correct.

Some of these gnawing problems are by no means new: A number of them (such as family breakdown) can be traced back at least to the 1960s, while others are arguably as old as modernity itself (anomie and isolation in big anonymous communities, secularization and the decline of faith). But a number have roared down upon us by surprise since the turn of the century—and others have redoubled with fearsome new intensity since roughly the year 2000.

American health conditions seem to have taken a seriously wrong turn in the new century. It is not just that overall health progress has been shockingly slow, despite the trillions we devote to medical services each year. (Which “Cold War babies” among us would have predicted we’d live to see the day when life expectancy in East Germany was higher than in the United States, as is the case today?)

Alas, the problem is not just slowdowns in health progress—there also appears to have been positive retrogression for broad and heretofore seemingly untroubled segments of the national population. A short but electrifying 2015 paper by Anne Case and Nobel Economics Laureate Angus Deaton talked about a mortality trend that had gone almost unnoticed until then: rising death rates for middle-aged U.S. whites. By Case and Deaton’s reckoning, death rates rose slightly over the 1999–2013 period for all non-Hispanic white men and women 45–54 years of age—but they rose sharply for those with high-school degrees or less, and for this less-educated grouping most of the rise in death rates was accounted for by suicides, chronic liver cirrhosis, and poisonings (including drug overdoses).

Though some researchers, for highly technical reasons, suggested that the mortality spike might not have been quite as sharp as Case and Deaton reckoned, there is little doubt that the spike itself has taken place. Health has been deteriorating for a significant swath of white America in our new century, thanks in large part to drug and alcohol abuse. All this sounds a little too close for comfort to the story of modern Russia, with its devastating vodka- and drug-binging health setbacks. Yes: It can happen here, and it has. Welcome to our new America.

In December 2016, the Centers for Disease Control and Prevention (CDC) reported that for the first time in decades, life expectancy at birth in the United States had dropped very slightly (to 78.8 years in 2015, from 78.9 years in 2014). Though the decline was small, it was statistically meaningful—rising death rates were characteristic of males and females alike; of blacks and whites and Latinos together. (Only black women avoided mortality increases—their death levels were stagnant.) A jump in “unintentional injuries” accounted for much of the overall uptick.

It would be unwarranted to place too much portent in a single year’s mortality changes; slight annual drops in U.S. life expectancy have occasionally been registered in the past, too, followed by continued improvements. But given other developments we are witnessing in our new America, we must wonder whether the 2015 decline in life expectancy is just a blip, or the start of a new trend. We will find out soon enough. It cannot be encouraging, though, that the Human Mortality Database, an international consortium of demographers who vet national data to improve comparability between countries, has suggested that health progress in America essentially ceased in 2012—that the U.S. gained on average only about a single day of life expectancy at birth between 2012 and 2014, before the 2015 turndown.

The opioid epidemic of pain pills and heroin that has been ravaging and shortening lives from coast to coast is a new plague for our new century. The terrifying novelty of this particular drug epidemic, of course, is that it has gone (so to speak) “mainstream” this time, effecting a breakout from disadvantaged minority communities to Main Street White America. By 2013, according to a 2015 report by the Drug Enforcement Administration, more Americans died from drug overdoses (largely but not wholly opioid abuse) than from either traffic fatalities or guns. The dimensions of the opioid epidemic in the real America are still not fully appreciated within the bubble, where drug use tends to be more carefully limited and recreational. In Dreamland, his harrowing and magisterial account of modern America’s opioid explosion, the journalist Sam Quinones notes in passing that “in one three-month period” just a few years ago, according to the Ohio Department of Health, “fully 11 percent of all Ohioans were prescribed opiates.” And of course many Americans self-medicate with licit or illicit painkillers without doctors’ orders.

In the fall of 2016, Alan Krueger, former chairman of the President’s Council of Economic Advisers, released a study that further refined the picture of the real existing opioid epidemic in America: According to his work, nearly half of all prime working-age male labor-force dropouts—an army now totaling roughly 7 million men—currently take pain medication on a daily basis.

We already knew from other sources (such as BLS “time use” surveys) that the overwhelming majority of the prime-age men in this un-working army generally don’t “do civil society” (charitable work, religious activities, volunteering), or for that matter much in the way of child care or help for others in the home either, despite the abundance of time on their hands. Their routine, instead, typically centers on watching—watching TV, DVDs, Internet, hand-held devices, etc.—and indeed watching for an average of 2,000 hours a year, as if it were a full-time job. But Krueger’s study adds a poignant and immensely sad detail to this portrait of daily life in 21st-century America: In our mind’s eye we can now picture many millions of un-working men in the prime of life, out of work and not looking for jobs, sitting in front of screens—stoned.

But how did so many millions of un-working men, whose incomes are limited, manage en masse to afford a constant supply of pain medication? Oxycontin is not cheap. As Dreamland carefully explains, one main mechanism today has been the welfare state: more specifically, Medicaid, Uncle Sam’s means-tested health-benefits program. Here is how it works (we are with Quinones in Portsmouth, Ohio):

[The Medicaid card] pays for medicine—whatever pills a doctor deems that the insured patient needs. Among those who receive Medicaid cards are people on state welfare or on a federal disability program known as SSI. . . . If you could get a prescription from a willing doctor—and Portsmouth had plenty of them—Medicaid health-insurance cards paid for that prescription every month. For a three-dollar Medicaid co-pay, therefore, addicts got pills priced at thousands of dollars, with the difference paid for by U.S. and state taxpayers. A user could turn around and sell those pills, obtained for that three-dollar co-pay, for as much as ten thousand dollars on the street.

In 21st-century America, “dependence on government” has thus come to take on an entirely new meaning.

You may now wish to ask: What share of prime-working-age men these days are enrolled in Medicaid? According to the Census Bureau’s SIPP survey (Survey of Income and Program Participation), as of 2013, over one-fifth (21 percent) of all civilian men between 25 and 55 years of age were Medicaid beneficiaries. For prime-age people not in the labor force, the share was over half (53 percent). And for un-working Anglos (non-Hispanic white men not in the labor force) of prime working age, the share enrolled in Medicaid was 48 percent.

By the way: Of the entire un-working prime-age male Anglo population in 2013, nearly three-fifths (57 percent) were reportedly collecting disability benefits from one or more government disability programs. Disability checks and means-tested benefits cannot support a lavish lifestyle. But they can offer a permanent alternative to paid employment, and for growing numbers of American men, they do. The rise of these programs has coincided with the death of work for larger and larger numbers of American men not yet of retirement age. We cannot say that these programs caused the death of work for millions upon millions of younger men: What is incontrovertible, however, is that they have financed it—just as Medicaid inadvertently helped finance America’s immense and increasing appetite for opioids in our new century.

It is intriguing to note that America’s nationwide opioid epidemic has not been accompanied by a nationwide crime wave (excepting of course the apparent explosion of illicit heroin use). Just the opposite: As best can be told, national victimization rates for violent crimes and property crimes have both reportedly dropped by about two-thirds over the past two decades.[3] The drop in crime over the past generation has done great things for the general quality of life in much of America. There is one complication from this drama, however, that inhabitants of the bubble may not be aware of, even though it is all too well known to a great many residents of the real America. This is the extraordinary expansion of what some have termed America’s “criminal class”—the population sentenced to prison or convicted of felony offenses—in recent decades. This trend did not begin in our century, but it has taken on breathtaking enormity since the year 2000.

Most well-informed readers know that the U.S. currently has a higher share of its populace in jail or prison than almost any other country on earth, that Barack Obama and others talk of our criminal-justice process as “mass incarceration,” and that well over 2 million men were in prison or jail in recent years.4 But only a tiny fraction of all living Americans ever convicted of a felony is actually incarcerated at this very moment. Quite the contrary: Maybe 90 percent of all sentenced felons today are out of confinement and living more or less among us. The reason: the basic arithmetic of sentencing and incarceration in America today. Correctional release and sentenced community supervision (probation and parole) guarantee a steady annual “flow” of convicted felons back into society to augment the very considerable “stock” of felons and ex-felons already there. And this “stock” is by now truly enormous.

One forthcoming demographic study by Sarah Shannon and five other researchers estimates that the cohort of current and former felons in America very nearly reached 20 million by the year 2010. If its estimates are roughly accurate, and if America’s felon population has continued to grow at more or less the same tempo traced out for the years leading up to 2010, we would expect it to surpass 23 million persons by the end of 2016 at the latest. Very rough calculations might therefore suggest that at this writing, America’s population of non-institutionalized adults with a felony conviction somewhere in their past has almost certainly broken the 23 million mark. A little more rough arithmetic suggests that about 17 million men in our general population have a felony conviction somewhere in their CV. That works out to one of every eight adult males in America today.
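The rough arithmetic above can be sketched in a few lines. This is only a back-of-the-envelope consistency check: the figures come from the text itself, and the annual growth tempo is an assumed number chosen to connect the quoted 2010 and 2016 endpoints, not a figure from the study.

```python
# Back-of-the-envelope check of the felon-population figures quoted above.
# The annual growth tempo is a hypothetical assumption, chosen only to
# connect the two endpoints given in the text.

felons_2010 = 20.0e6                   # Shannon et al. estimate, ~2010
annual_growth = 0.5e6                  # assumed net additions per year
felons_2016 = felons_2010 + 6 * annual_growth
print(felons_2016 / 1e6)               # 23.0 -> "surpass 23 million" by end of 2016

male_felons = 17.0e6                   # text's rough estimate for men
implied_adult_males = male_felons * 8  # "one of every eight adult males"
print(implied_adult_males / 1e6)       # 136.0 million implied adult-male base
```

The second calculation simply shows the adult-male population implied by the text’s one-in-eight claim, given its 17 million estimate.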

We have to use rough estimates here, rather than precise official numbers, because the government does not collect any data at all on the size or socioeconomic circumstances of this population of 20 million, and never has. Amazing as this may sound and scandalous though it may be, America has, at least to date, effectively banished this huge group—a group roughly twice the total size of our illegal-immigrant population and an adult population larger than that in any state but California—to a near-total and seemingly unending statistical invisibility. Our ex-cons are, so to speak, statistical outcasts who live in a darkness our polity does not care enough to illuminate—beyond the scope or interest of public policy, unless and until they next run afoul of the law.

Thus we cannot describe with any precision or certainty what has become of those who make up our “criminal class” after their (latest) sentencing or release. In the most stylized terms, however, we might guess that their odds in the real America are not all that favorable. And when we consider some of the other trends we have already mentioned—employment, health, addiction, welfare dependence—we can see the emergence of a malign new nationwide undertow, pulling downward against social mobility.

Social mobility has always been the jewel in the crown of the American mythos and ethos. The idea (not without a measure of truth to back it up) was that people in America are free to achieve according to their merit and their grit—unlike in other places, where they are trapped by barriers of class or the misfortune of misrule. Nearly two decades into our new century, there are unmistakable signs that America’s fabled social mobility is in trouble—perhaps even in serious trouble.

Consider the following facts. First, according to the Census Bureau, geographical mobility in America has been on the decline for three decades, and in 2016 the annual movement of households from one location to the next was reportedly at an all-time (postwar) low. Second, as a study by three Federal Reserve economists and a Notre Dame colleague demonstrated last year, “labor market fluidity”—the churning between jobs that among other things allows people to get ahead—has been on the decline in the American labor market for decades, with no sign as yet of a turnaround. Finally, and not least important, a December 2016 report by the “Equality of Opportunity Project,” a team led by the formidable Stanford economist Raj Chetty, calculated that the odds of a 30-year-old’s earning more than his parents at the same age were now just 51 percent: down from 86 percent 40 years ago. Other researchers who have examined the same data argue that the odds may not be quite as low as the Chetty team concludes, but agree that the chances of surpassing one’s parents’ real income have been on the downswing and are probably lower now than ever before in postwar America.

Thus the bittersweet reality of life for real Americans in the early 21st century: Even though the American economy still remains the world’s unrivaled engine of wealth generation, those outside the bubble may have less of a shot at the American Dream than has been the case for decades, maybe generations—possibly even since the Great Depression.

IV
The funny thing is, people inside the bubble are forever talking about “economic inequality,” that wonderful seminar construct, and forever virtue-signaling about how personally opposed they are to it. By contrast, “economic insecurity” is akin to a phrase from an unknown language. But if we were somehow to find a “Google Translate” function for communicating from real America into the bubble, an important message might be conveyed:

The abstraction of “inequality” doesn’t matter a lot to ordinary Americans. The reality of economic insecurity does. The Great American Escalator is broken—and it badly needs to be fixed.

With the election of 2016, Americans within the bubble finally learned that the 21st century has gotten off to a very bad start in America. Welcome to the reality. We have a lot of work to do together to turn this around.

1 Some economists suggest the reason has to do with the unusual nature of the Great Recession: that downturns born of major financial crises intrinsically require longer adjustment and correction periods than the more familiar, ordinary business-cycle downturn. Others have proposed theories to explain why the U.S. economy may instead have downshifted to a more tepid tempo in the Bush-Obama era. One such theory holds that the pace of productivity is dropping because the scale of recent technological innovation is unrepeatable. There is also a “secular stagnation” hypothesis, surmising we have entered into an age of very low “natural real interest rates” consonant with significantly reduced demand for investment. What is incontestable is that the 10-year moving average for per capita economic growth is lower for America today than at any time since the Korean War—and that the slowdown in growth commenced in the decade before the 2008 crash. (It is also possible that the anemic status of the U.S. macro-economy is being exaggerated by measurement issues—productivity improvements from information technology, for example, have been oddly elusive in our officially reported national output—but few today would suggest that such concealed gains would totally transform our view of the real economy’s true performance.)
2 Nicholas Eberstadt, Men Without Work: America’s Invisible Crisis (Templeton Press, 2016)
3 This is not to ignore the gruesome exceptions—places like Chicago and Baltimore—or to neglect the risk that crime may make a more general comeback: It is simply to acknowledge one of the bright trends for America in the new century.
4 In 2013, roughly 2.3 million men were behind bars, according to the Bureau of Justice Statistics.

One could be forgiven for wondering what Kellyanne Conway, a close adviser to President Trump, was thinking recently when she turned the White House briefing room into the set of the Home Shopping Network. “Go buy Ivanka’s stuff!” she told Fox News viewers during an interview, referring to the clothing and accessories line of the president’s daughter. It’s not clear if her cheerleading led to any spike in sales, but it did lead to calls for an investigation into whether she violated federal ethics rules, and prompted the White House to later state that it had “counseled” Conway about her behavior.

To understand what provoked Conway’s on-air marketing campaign, look no further than the ongoing boycotts targeting all things Trump. This latest manifestation of the passion to impose financial harm to make a political point has taken things in a new and odd direction. Once, boycotts were serious things, requiring serious commitment and real sacrifice. There were boycotts by aggrieved workers, such as the United Farm Workers, against their employers; boycotts by civil-rights activists and religious groups; and boycotts of goods produced by nations like apartheid-era South Africa. Many of these efforts, sustained over years by committed cadres of activists, successfully pressured businesses and governments to change.

Since Trump’s election, the boycott has become less an expression of long-term moral and practical opposition and more an expression of the left’s collective id. As Harvard Business School professor Michael Norton told the Atlantic recently, “Increasingly, the way we express our political opinions is through buying or not buying instead of voting or not voting.” And evidently the way some people express political opinions when someone they don’t like is elected is to launch an endless stream of virtue-signaling boycotts. Democratic politicians ostentatiously boycotted Trump’s inauguration. New Balance sneaker owners vowed to boycott the company and filmed themselves torching their shoes after a company spokesman tweeted praise for Trump. Trump detractors called for a boycott of L.L. Bean after one of its board members was discovered to have (gasp!) given a personal contribution to a pro-Trump PAC.

By their nature, boycotts are a form of proxy warfare, tools wielded by consumers who want to send a message to a corporation or organization about their displeasure with specific practices.

Trump-era boycotts, however, merely seem to be a way to channel an overwhelming yet vague feeling of political frustration. Take the “Grab Your Wallet” campaign, whose mission, described in humblebragging detail on its website, is as follows: “Since its first humble incarnation as a screenshot on October 11, the #GrabYourWallet boycott list has grown as a central resource for understanding how our own consumer purchases have inadvertently supported the political rise of the Trump family.”

So this boycott isn’t against a specific business or industry; it’s a protest against one man and his children, with trickle-down effects for anyone who does business with them. Grab Your Wallet doesn’t just boycott Trump-branded hotels and golf courses; the group targets businesses such as Bed Bath & Beyond, for example, because it carries Ivanka Trump diaper bags. Even QVC and the Carnival Cruise corporation are targeted for boycott because they advertise on Celebrity Apprentice, which supposedly “further enriches Trump.”

Grab Your Wallet has received support from “notable figures” such as “Don Cheadle, Greg Louganis, Lucy Lawless, Roseanne Cash, Neko Case, Joyce Carol Oates, Robert Reich, Pam Grier, and Ben Cohen (of Ben & Jerry’s),” according to the group’s website. This rogues’ gallery of celebrity boycotters has been joined by enthusiastic hashtag activists on Twitter who post remarks such as, “Perhaps fed govt will buy all Ivanka merch & force prisoners & detainees in coming internment camps 2 wear it” and “Forced to #DressLikeaWoman by a sexist boss? #GrabYourWallet and buy a nice FU pantsuit at Trump-free shops.” There’s even a website, dontpaytrump.com, which offers a free plug-in extension for your Web browser. It promises a “simple Trump boycott extension that makes it easy to be a conscious consumer and keep your money out of Trump’s tiny hands.”

Many of the companies targeted for boycott—Bed Bath & Beyond, QVC, TJ Maxx, Amazon—are the kind of retailers that carry moderately priced merchandise that working- and middle-class families can afford. But the list of Grab Your Wallet–approved alternatives for shopping is made up of places like Bergdorf’s and Barney’s. These are hardly accessible choices for the TJ Maxx customer. Indeed, there is more than a whiff of quasi-racist elitism in the self-congratulatory tweets posted by Grab Your Wallet supporters, such as this response to news that Nordstrom is no longer planning to carry Ivanka’s shoe line: “Soon we’ll see Ivanka shoes at Dollar Store, next to Jalapeno Windex and off-brand batteries.”

If Grab Your Wallet is really about “flexing of consumer power in favor of a more respectful, inclusive society,” then it has some work to do.

And then there are the conveniently malleable ethics of the anti-Trump boycott brigade. A small number of affordable retailers like Old Navy made the Grab Your Wallet cut for “approved” alternatives for shopping. But just a few years ago, a progressive website described in detail the “living hell of a Bangladeshi sweatshop” that manufactures Old Navy clothing. Evidently progressives can now sleep peacefully at night knowing large corporations like Old Navy profit from young Bangladeshis making 20 cents an hour and working 17-hour days churning out cheap cargo pants—as long as they don’t bear a Trump label.

In truth, it matters little if Ivanka’s fashion business goes bust. It was always just a branding game anyway. The world will go on in the absence of Ivanka-named suede ankle booties. And in some sense the rash of anti-Trump boycotts is just what Trump, who frequently calls for boycotts of media outlets such as Rolling Stone and retailers like Macy’s, deserves.

But the left’s boycott braggadocio might prove short-lived. Nordstrom denied that it dropped Ivanka’s line of apparel and shoes because of pressure from the Grab Your Wallet campaign; it blamed lagging sales. And the boycotters’ tone of moral superiority—like the ridiculous posturing of the anti-Trump left’s self-flattering designation, “the resistance”—won’t endear them to the Trump voters they must convert if they hope to gain ground in the midterm elections.

As for inclusiveness, as one contributor to Psychology Today noted, the demographic breakdown of the typical boycotter, “especially consumer and ecological boycotts,” is a young, well-educated, politically left woman, undermining somewhat the idea of boycotts as a weapon of the weak and oppressed.

Self-indulgent protests and angry boycotts are no doubt cathartic for their participants (a 2016 study in the Journal of Consumer Affairs cited psychological research that found “by venting their frustrations, consumers can diminish their negative psychological states and, as a result, experience relief”). But such protests are not always ultimately catalytic. As researchers noted in a study published recently at Social Science Research Network, protesters face what they call “the activists’ dilemma,” which occurs when “tactics that raise awareness also tend to reduce popular support.” As the study found, “while extreme tactics may succeed in attracting attention, they typically reduce popular public support for the movement by eroding bystanders’ identification with the movement, ultimately deterring bystanders from supporting the cause or becoming activists themselves.”

The progressive left should be thoughtful about the reality of such protest fatigue. Writing in the Guardian, Jamie Peck recently enthused: “Of course, boycotts alone will not stop Trumpism. Effective resistance to authoritarianism requires more disruptive actions than not buying certain products . . . . But if there’s anything the past few weeks have taught us, it’s that resistance must take as many forms as possible, and it’s possible to call attention to the ravages of neoliberalism while simultaneously allying with any and all takers against the immediate dangers posed by our impetuous orange president.”

Boycotts are supposed to be about accountability. But accountability is a two-way street. The motives and tactics of the boycotters themselves are of the utmost importance. In his book about consumer boycotts, scholar Monroe Friedman advises that successful ones depend on a “rationale” that is “simple, straightforward, and appear[s] legitimate.” Whatever Trump’s flaws (and they are legion), by “going low” with scattershot boycotts, the left undermines its own legitimacy—and its claims to the moral high ground of “resistance” in the process.

========END===============

Elderhood

Such an irony! The third and arguably the most advanced stage of human development is the stage that gets the least attention. The next big wave is when baby boomers insist that ELDERHOOD matters …. a lot!

CREDIT: Elderhood Website

Childhood: First developmental stage
Adulthood: Second developmental stage
Elderhood: Third Developmental Stage

CREDIT: http://www.anti-aging-articles.com/Aging-Stereotype.html
http://www.anti-aging-articles.com/Elderhood.html

Key points to remember:

Elderhood is the third stage of human existence, which is a continuum of progressive growth.
Elderhood is not about loss. It is about gaining freedom, and – along with that freedom – opportunities for learning, joy, service and other experiences that significantly increase well-being.
Elderhood is more like a crowning achievement, where you have lived long enough, experienced enough, and accumulated enough that you no longer have to work (unless you want to), and can devote your time and energy in ways that are closer to what you want, than to what the world demands.

Aging Stereotype

Many books about life after age 60 emphasize ‘how to hold onto’ adulthood. Aging is viewed as a decline, a series of losses – until you reach the ultimate loss – the loss of life itself. This view of life after age 60 is an Aging Stereotype – and it is mistaken.

Current psychology in much of the developed world seems to favor adulthood as the peak of human development. (This is not surprising since most practicing psychologists and gerontologists are adults.)

Understanding the Aging Stereotype

There was a time when children were seen as ‘little adults’. But after Jean Piaget suggested that childhood is a special phase of human life, that it is not just ‘adulthood writ small’ and that children are a worthy object of scientific study, the field of child psychology burgeoned.

Pick up any college catalog and you will see many courses about childhood and its various ‘stages’. Basic to all these courses is the idea that childhood and adulthood are very different and that moving to adulthood is the goal of human beings.

Childhood is a long preparation for adulthood, and providing a child with ‘good childhood experiences’ will allow that child to mature into a productive and happy adult. And if an adult is not productive and happy, the ’cause’ is usually assumed to be negative childhood experiences.

Adulthood in much of the developed world is THE goal. Adulthood is a most important life stage. Adults make the economy work, they give birth to and rear children. They create laws, lead the nation and are generally viewed as the ‘movers and doers’ of society. There is nothing in adult psychology that mirrors the notion found in child psychology that this particular stage of human development is a preparation for a new and even more important stage . . .  Elderhood.
Instead of viewing Elderhood as the ‘crowning achievement’ of life, we are offered an Aging Stereotype – the stage after adulthood as a time when life is in decline.

This is The Aging Stereotype.
(Note that the stage of human development that follows adulthood is called ‘aging’ – as though going from age 17 to age 29 is NOT aging.) The Aging Stereotype describes our last stage of development as a long, slow decline – a period of losses and grieving the bygone adult powers and prestige.
Such are the fundamental beliefs of the Aging Stereotype. Those who hold this view spend much energy trying to figure out how people can ‘hold onto’ adult characteristics and skills. At the same time they refuse Elders any of the power and prestige of adulthood – even if elders retain many of their adult skills.

The Aging Stereotype is at best a false view of human development and at worst it is degrading to those in the last third of their lives.

A different view
There is an alternative to this Aging Stereotype. This view holds that there are at least 3 big stages of human development: childhood, adulthood and elderhood –

AND that the goal of human existence is progressive growth through these three stages. Why? So that at the end of one’s life you are ready to move on because you have lived a full and complete human life. You have been a child, an adult and an elder.

Yes, Elderhood is the third stage of human development. And using this name removes any Aging Stereotype. (It also removes the current bias toward the primacy of adulthood.)

But I have yet to see a course on Elderhood listed in any college catalog. Oh, there are courses in gerontology and aging. And most of these concentrate on ‘inevitable decline’ and ‘how we can help these people’.

Now of course there are losses as you move to elderhood. Just as there were losses when you moved from childhood to adulthood. But somehow in the current state of things only the losses of adult tasks and powers are seen as losses that count. No one speaks of the losses children face as they move into adulthood – only those adults face when moving into Elderhood.

Perhaps this skewing of psychology towards the primacy of adulthood is caused by the fact that most academic studies are done by adults, and adults have not yet lived as elders. They have no experiential knowledge of their own elderhood and so they judge everything out of their own bias concerning the centrality and importance of adulthood.

To understand the unconscious bias towards their own adulthood, consider what would happen to views of Adulthood if most of what we knew about it was written by children. Since children have no personal experience of being an adult, they, too, would concentrate on what a person loses in becoming an adult.

Ask children what it is like to be an adult and, after they say a few good things related to power or prestige, the children are likely to talk about things that are lost in becoming an adult. Depending on their age, children say that in Adulthood:
• You can not play all day.
• You have to go to work all the time.
• You need to think and worry about money.
• No one tucks you in each night… or takes care of you when you are sick.
• You can not play in Little League (or Pop Warner or…).
• You have lots of RESPONSIBILITY.
• The list goes on…
Notice that children know that adults have power, but when it gets to the nitty gritty, children can not help but notice the things they would lose by being an adult. For children, adulthood is about losses. Why? Because children have not experienced and do not understand the special joys of adulthood. So, too, adults who have not experienced the positive aspects of Elderhood do not value Elderhood. They emphasize the losses.

Children can not value what they do not know. If children were to write the textbooks about adulthood, these texts might be full of advice as to how to hold onto some of the joys and experience of childhood. Just as adults who DO write the texts about Elderhood give advice about how to ‘hold onto’ aspects of adulthood but offer NO INSIGHT into any of the special joys or tasks of Elderhood. So it is that we have the Aging Stereotype of ‘inevitable decline’.

To summarize:
Adults have NOT experienced the joys of Elderhood. Once they get past a few cliches, they tend to focus on the losses that they, as adults, will experience. Adults do not perceive the positive values of elderhood because they have no clue as to what those values are, and just as children do not imagine that there are a host of special joys to being an adult, so, too, adults do not imagine that there are special joys of Elderhood. It never occurs to them to ask about them, and even if they were to ask, most adults have no psychological context to appreciate the answers.
We who are elders need to read the works of other elders and we need to rethink the whole view of Aging. That is what this web site is about. I hope you will read more… and come back and read again.
============
Elderhood – what comes after adulthood

Elderhood is one of the 3 main developmental stages of human life:
• Childhood is the first stage. Traditionally this first stage has been subdivided into major groupings: infancy, early childhood, middle childhood and adolescence.
There are many, many studies of childhood. You can go to most libraries and find that there are shelves and shelves of books about childhood, what it means, what to expect from a child in each of the ‘growth periods’, how adults can help children grow into happy, responsible adults, etc.
Childhood is the most researched of the three major life stages. In fact there have been so many studies that many childhood specialists now subdivide this period of life into more stages than just the four listed above.

• Adulthood is the second major stage of human life. There are fewer studies about adulthood as a stage of human development but there have been some. If you look into the field, you will find that many researchers now divide this period into: young adulthood and full adulthood.

• Elderhood is the third major stage of human development but almost no one writes about it. Oh, you will find many studies about ‘aging’ and its problems. And you will find an increasing number of research papers and books about ‘old age’ and its problems. But ‘aging’ and ‘old age’ are chronological stages. They are not really developmental stages of life. 
The third great developmental stage of human life can not be identified solely by age any more than adulthood is identified solely by age. (There are ever so many 19-year-olds who have moved into adulthood just as there are 40-year-olds who never really left the adolescent stage.)
Being an elder is a specific social role – one that requires new forms of maturity. It requires approaches to life that go beyond those of one’s adult years. And it requires taking on new tasks in and for one’s family/community.

So, what do we know about this third stage of human development?
First, we know that not everyone attains it. Just because you have reached a certain age does not mean that you have become an Elder. Some people cling to adulthood so long that they never quite make it into this next stage of human development.

One of the tragedies of the developed world is that most people are so focused on adulthood – adult tasks, adult powers and responsibility – that they give scant thought or attention to the next stage of human development. In fact, there are some adults who do not realize that there IS a next stage in human maturity.

Some researchers believe that one of the reasons why many adults fear their loss of adult powers and adult privilege is that they have no notion that there is a ‘next developmental stage’ beyond adulthood.

They do not realize that the next stage of human life is one of growth, that new types of power accompany it or that this next stage has special joys that are unknown to most adults. (Just as many of the special joys of adulthood are unknown to children.)

To read more about this stage of human development click on:
• The Aging Stereotype
• Elderhood Characteristics: Freedom
• More Characteristics Elderhood Development

Good reads
• If I Live to Be 100

• The Blue Zones – life where people live the longest

=================

Issues related to elderhood

The luxury of time: Having the time to get your house in order

Well-being? How do I want to approach my well-being? What choices do I need to make? Set up MARVELS for yourself:

– M- Medications (including over the counter and supplements)
– A – Activity
– R – Resilience of Mindfulness
– V – Vital Sign monitoring
– E – Eating and Drinking Choices, including nutrition
– L – Labs monitoring, including blood, saliva, stool, genomic, etc
– S – Sleep

Re-establishing your identity – creating new business cards, email addresses, websites, blogs, twitter accounts – all of which help position you as you choose. Time for a new wardrobe? Haircut? Style? Primary residence? Second home? Downsize? Simplify?
Scrapbooking – sorting and clarifying connections and meaning in photo collections, awards and recognized achievements, old letters and memories, etc.
Re-establishing contact – with long-lost friends and relatives.
Choosing what’s right for you now:
– Insurance – including health, long-term disability, Medicare, etc.
– Estate planning
– Wills – does my will need to be written? Updated?
– Philanthropy – how to approach it (donor-advised fund?)
– Service – what options are available to volunteer?
– Learning – do I want to continue learning? At what rate? Through formal or informal means?
– Service workers – housekeeping, landscaping, pool maintenance, etc.
– Health workers – primary care, dentist, specialists, nurses, etc.

History of Community Foundations in the US

I have an announcement to make.

I have been published! Well, kind of ……

Take a look!

The Double Trust Imperative: A History of Community Foundations in the United States

Background:

I became the Chairman of the Board of the Community Foundation for Greater Atlanta (CFGA) in January, 2017. It’s a three-year appointment.

Last year, as Vice Chair, I decided to study the history of these institutions. Because I couldn’t find a good history, I decided I would write a History of Community Foundations in the United States. In addition to researching the subject extensively, I have been discussing the work with other heads of community foundations nationally. Through these discussions, I decided to try to identify the key difference between community foundations and other institutions. I put that difference right in the title: The Double Trust Imperative…..because community foundations uniquely build trust in two directions: the community and donors.

The essay documents how community foundations came to be, and how $82 billion in philanthropic assets came to be housed in these institutions – so that the institutions can invest those assets back into the communities they serve. The impact on any given community? Well, in ATL alone, we have 900+ donors with $900 million+ in assets… and the CFGA gave away $130 million+ last year to non-profits of all shapes and sizes. The ATL community foundation (CFGA) is the second largest foundation in Georgia, and the 17th largest community foundation in the United States.

Well, the essay was selected as one of the pre-reads for the upcoming Conference for Large Community Foundations in San Diego. Over 200 people will be there from all over the country. These are the Chairs and CEOs of all the big community foundations – the movers and shakers of the movement. (Alicia Phillipp, the CEO of the Community Foundation for Greater Atlanta, and I will attend representing ATL.)

They have a tradition of reading the pre-reads (so a lot of movers and shakers will read this).

It’s a pre-read for the second-day session – which is themed “where have community foundations been and where are they headed.”

So there you go. A bit of news about me as a writer, kind of …… not a best seller, but step by step……

NEST Smart Home Update

I have been tracking Google’s NEST for a while now. It’s the best example I know of a learning system for the home. The latest is… it is still the best!

================
CREDIT: http://thewirecutter.com/reviews/the-best-thermostat/

We spent more than a month trying five popular smart thermostats—testing the hardware, their accompanying mobile apps, and their integrations with various smart-home systems—and the third-generation Nest remains our pick. Five years after the Nest’s debut, a handful of bona fide competitors approach it in style and functionality, but the Nest Learning Thermostat remains the leader. It’s still the easiest, most intuitive thermostat we tested, offering the best combination of style and substance.

Last Updated: November 10, 2016
We’ve added our review of Ecobee’s new Ecobee3 Lite, and we’ve updated our thoughts on HomeKit integration following the launch of Apple’s Home app. We’ve also included details on Nest’s new Eco setting and color options, a brief look at the upcoming Lyric T5, and a clarification regarding the use of a C wire for the Emerson Sensi.

The Nest works well on its own or integrated with other smart-home products. Its software and apps are solid and elegant, too, and it does a really good job of keeping your home at a comfortable temperature with little to no input from you. Plus, if you want to change the temperature yourself, you can easily do so from your smartphone or computer, or with your voice via Google or an Amazon Echo. All of that means never having to get up from a cozy spot on the couch to mess with the thermostat. While the competition is catching up, none of the other devices we tested could match the Nest’s smarts. The expansion of the Works with Nest smart-home ecosystem and the introduction of Home/Away Assist have kept the Nest in the lead by fine-tuning those smart capabilities. The recent hardware update merely added a larger screen and a choice of clock interfaces, but the ongoing software improvements (which apply to all three generations of the product) have helped keep the Nest in its position as the frontrunner in this category without leaving its early adopters out in the cold.

Runner-up

Ecobee3
Not as sleek or intuitive as the Nest, but it supports Apple’s HomeKit and uses stand-alone remote sensors to register temperature in different parts of a house, making it an option for large homes with weak HVAC systems.
The Ecobee3’s support for remote sensors makes it appealing if your thermostat isn’t in the best part of your house to measure the temperature. If you have a large, multistory house with a single-zone HVAC system, you can have big temperature differences between rooms. With Ecobee3’s add-on sensors (you get one with the unit and can add up to 32 more), the thermostat uses the sensors’ occupancy detectors to match the target temperature in occupied rooms, rather than just wherever the thermostat is installed. However, it doesn’t have the level of intelligence of the Nest, or that model’s retro cool look (which even the Honeywell Lyric takes a good stab at). Its black, rounded-rectangle design and touchscreen interface have a more modern feel; it looks a bit like someone mounted a smartphone app on your wall.

Ecobee3 Lite
Ecobee’s new Lite model is a great budget option. It doesn’t have any occupancy sensors or remote temperature sensors, but it would work well for a smaller home invested in the Apple ecosystem.
For a cheaper smart thermostat with most of the important features of the more expensive models, we suggest the Ecobee3 Lite. This budget version of the Ecobee3 lacks the remote sensors and occupancy sensors of its predecessor but retains the programming and scheduling features, and like the main Ecobee3, it works with a variety of smart-home systems, including HomeKit, Alexa, SmartThings, Wink, and IFTTT. However, the lack of an occupancy sensor means you’ll have to manually revert it to its prescheduled state anytime you use Alexa, Siri, or any other integration to change its temperature.

Table of contents
Why a smart thermostat?
Smart-home integration
Who this is for
The C-wire conundrum
Multizone systems
How we picked and tested
Our pick
Who else likes our pick
Flaws but not deal breakers
Potential privacy issues
The next best thing (for larger homes)
Budget pick
The competition
What to look forward to
Wrapping it up

Why a smart thermostat?
A smart thermostat isn’t just convenient: Used wisely, it can save energy (and money), and it offers the potential for some cool integrations. If you upgrade to any smart thermostat after years with a basic one, the first and most life-changing difference will be the ability to control it remotely, from your phone, on your tablet, or with your voice. No more getting up in the middle of the night to turn up the AC. No dashing back into the house to lower the heat before you go on errands (or vacation). No coming home to a sweltering apartment—you just fire up the AC when you’re 10 minutes away, or even better, have your thermostat turn itself on in anticipation of your arrival.
Technically, thermostats have been “smart” since the first time a manufacturer realized that such devices could be more than a mercury thermometer and a metal dial. For years, the Home Depots of the world were full of plastic rectangles that owed a lot to the digital clock: They’d let you dial in ideal heating and cooling temperatures, and maybe even set different temperatures for certain times of the day and particular days of the week.
The thermostat landscape changed with the introduction of the Nest in 2011 by Nest Labs, a company led by Tony Fadell, generally credited as one of the major forces behind Apple’s iPod. (Google acquired Nest Labs in 2014; Fadell has since moved on to an advisory position at Alphabet, Google’s parent company.) The original Nest was a stylish metal-and-glass Wi-Fi–enabled device, with a bright color screen and integrated smartphone apps—in other words, a device that combined style and functionality in a way never before seen in the category.
The Nest got a lot of publicity, especially when you consider that it’s a thermostat. Within a few months, Nest Labs was slapped with a patent suit by Honeywell, maker of numerous competing thermostats.
But once the Nest was out there, it was hard to deny that the thermostat world had needed a kick in the pants. And five years later, not only have the traditional plastic beige rectangles gained Wi-Fi features and smartphone apps, but other companies have also entered the high-feature, high-design thermostat market, including the upstart Ecobee and the old standards Honeywell, Emerson, and Carrier.
The fact is, a cheap plastic thermostat with basic time programming—the kind people have had for two decades—will do a pretty good job of keeping your house at the right temperature without wasting a lot of money, so long as you put in the effort to program it and remember to shut it off. But that’s the thing: Most people don’t.
“The majority of people who have a programmable thermostat don’t program it, or maybe they program it once and never update it when things change,” said Bronson Shavitz, a Chicago-area contractor who has installed and serviced hundreds of heating and cooling systems over the years.

Smart thermostats spend time doing the thinking that most people just don’t do, turning themselves off when nobody’s home, targeting temperatures only in occupied rooms, and learning your household schedule through observation. Plus, with their sleek chassis and integrated smartphone apps, these thermostats are fun to use.

Nest Labs claims that a learning thermostat (well, its learning thermostat) saves enough energy to pay for itself in as little as two years.
Since the introduction of the Nest, energy companies have begun offering rebates and incentives for their customers to switch to a smart thermostat, and some have even developed their own devices and apps and now offer them for free or at a greatly reduced price to encourage customers to switch. Clearly, these devices provide a larger benefit than simple convenience. Because they can do a better job of scheduling the heating and cooling of your house than you can, they save money and energy.

Smart-home integration
Among the useful features of smart thermostats is the ability to work as part of a larger smart-home system and to keep developing even after you’ve purchased one. For example, many of the thermostats we tested now integrate with the Amazon Echo, a Wi-Fi–connected speaker that can control many smart-home devices. You can speak commands to Alexa, Echo’s personal assistant, to adjust your climate control. This function came to the thermostats via a software update, so a smart thermostat purchased last year has the same functionality as one bought yesterday.
These over-the-air software updates, while sometimes known to cause issues, are a key feature of smart devices. Shelling out $250 for a thermostat that has the potential to become better as it sits on your wall helps cushion some of the sticker shock. The Nest earns particularly high marks in this area, because whether you bought one in 2011 or 2016, you get the same advanced learning algorithms and smart integrations.
Additionally, all of the thermostats we tested work with one or more smart-home hubs such as SmartThings and Wink, or within a Web-enabled ecosystem like Amazon’s Alexa or IFTTT (If This Then That). The Nest also has its own developer program, Works with Nest, which integrates the company’s thermostat and other products directly with a long and growing list of devices including smart lights, appliances, locks, cars, shades, and garage door openers. This means you can add your thermostat to different smart scenarios and have it react to other actions in your home: It could set itself to Away mode and lock your Kevo smart door lock when you leave your house, for instance, or it could turn up the heat when your Chamberlain MyQ garage door opener activates. These ecosystems are continually growing, meaning the interactions your thermostat is capable of are growing as well (sometimes with the purchase of additional hardware).
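To make the "react to other actions in your home" idea concrete, here is a minimal sketch of what a Works with Nest REST call looked like: setting a thermostat's target temperature over HTTPS. The device ID and OAuth token are placeholders, and the endpoint and field names follow the Works with Nest API's general shape as an assumption; check the developer documentation before relying on them.

```python
# Hypothetical sketch of a Works with Nest REST call (Fahrenheit target temp).
# DEVICE_ID and TOKEN are placeholders; the endpoint and field names are
# assumptions based on the Works with Nest API's documented shape.
import json
import urllib.request

API_BASE = "https://developer-api.nest.com"
DEVICE_ID = "your-thermostat-id"   # placeholder
TOKEN = "your-oauth-token"         # placeholder

def build_set_temp_request(device_id, target_f):
    """Construct the (url, body) pair for setting target_temperature_f."""
    url = f"{API_BASE}/devices/thermostats/{device_id}"
    body = json.dumps({"target_temperature_f": target_f})
    return url, body

url, body = build_set_temp_request(DEVICE_ID, 68)
print(url)
print(body)

# To actually send it (requires a valid OAuth token):
# req = urllib.request.Request(url, data=body.encode(), method="PUT",
#                              headers={"Authorization": f"Bearer {TOKEN}",
#                                       "Content-Type": "application/json"})
# urllib.request.urlopen(req)
```

A smart-home hub or IFTTT recipe is doing essentially this kind of call on your behalf whenever a scene triggers the thermostat.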
With the release of the Home app for HomeKit, Apple’s smart-home unification plans have taken a bigger step toward fruition. While the devices are still limited (a hardware update is required for compatibility), you can now create scenes (linking devices together) and control them from outside the home on an iPad; previously you had to use a third-generation Apple TV. This change increases the number of people who will see HomeKit as a viable smart-home option. Even without an iPad permanently residing in your home, you can still talk to and operate HomeKit products using Siri on your iPhone or iPad while you are at home. The system works in the same way Alexa does, and it’s actually a little more pleasant to use than shouting across the room.
The Ecobee3, Ecobee3 Lite, and Honeywell Lyric (released January 2016) are all HomeKit compatible and can communicate with other HomeKit devices to create scenes such as “I’m Home,” triggering your thermostat to set to your desired temp and your HomeKit-compatible lights to come on.
Google now offers its own voice-activated speaker similar to Amazon’s Echo, the Google Home. The Home, which integrates with Nest as well as IFTTT, SmartThings, and Philips Hue, allows you to control your Nest thermostat via voice.

Who this is for
Get a smart thermostat if you’re interested in saving more energy and exerting more control over your home environment. If you like the prospect of turning on your heater on your way home from work, or having your home’s temperature adjust intelligently, a smart thermostat will suit you. And, well, these devices just look cooler than those plastic rectangles of old.
If you already have a smart thermostat, such as a first- or second-generation Nest, you don’t need to upgrade. And if you have a big, complex home-automation system that includes a thermostat, you may prefer the interoperability of your current setup to the intelligence and elegance of a Nest or similar thermostat.
If you don’t care much about slick design and attractive user interfaces, you can find cheaper thermostats (available from companies such as Honeywell) that offer Wi-Fi connectivity and some degree of scheduling flexibility. The hardware is dull and interfaces pedestrian, but they’ll do the job and save you a few bucks.
The devices we looked at are designed to be attached to existing heating and cooling systems. Most manufacturers now offer Wi-Fi thermostats of their own, and while they’re generally not as stylish as the models we looked at, they have the advantage of being designed specifically for that manufacturer’s equipment. That offers some serious benefits, including access to special features and a deep understanding of how specific equipment behaves that a more general thermostat can’t have.

The C-wire conundrum
One major caveat with all smart thermostats is the need for a C wire, or “common wire,” which supplies AC power from your furnace to connected devices such as thermostats. Smart thermostats are essentially small computers that require power to operate—even more so if you want to keep their screens illuminated all the time. If your heating and cooling system is equipped with a C wire, you won’t have any concerns about power. The problem is, common wires are not very common in houses.
In the absence of a C wire, both the Nest and the Honeywell Lyric can charge themselves by stealing power from other wires, but that can cause serious side effects, according to contractor Bronson Shavitz. He told us that old-school furnaces are generally resilient enough to provide power for devices such as the Nest and the Lyric, but that the high-tech circuit boards on newer models can be more prone to failure when they’re under stress from the tricks the Nest and Lyric use to charge themselves without a common wire.
Installing a C wire requires hiring an electrician and will add about $150 to your costs. The Ecobee3 includes an entire wiring kit to add a C wire if you don’t have one (for the previous version of this guide, reviewer Jason Snell spent about two hours rewiring his heater to accommodate the wiring kit). The Emerson Sensi is the only thermostat we tested that claims not to need a C wire, but it too draws power from whichever system is not currently in use (for example, the heating system if you’re using the AC). This means that if you have a heat- or air-only system, you will need a C wire.
Note: If the power handling is not correct, the damage to your system can be significant. The expense of replacing a furnace or AC board, plus the cost of professional installation, will probably outweigh the convenience or energy savings of a smart thermostat. Nest addresses the power requirements of its thermostat, including whether a common wire is necessary, in detail on its website, so if you’re unsure whether your system is suited for it, check Nest’s support pages for C-wire information, system compatibility questions, and solutions to wiring problems.

Multizone systems
If you have more than one zone in your HVAC system, you will need to purchase a separate smart thermostat for each zone. Currently, while all of the smart thermostats we tested are compatible with multizone systems, none can control more than one zone. Even though the Ecobee3 supports remote sensors, those feed only a single thermostat—so if you want more zones, you’ll still need separate thermostats, with their own sensors. However, the Ecobee3 is the only thermostat we tested that allows you to put more than one thermostat into a group so that you can program them to act identically, if you choose.

How we picked and tested

We put these five smart thermostats through their paces to bring you our top pick. Photo: Michael Hession
By eliminating proprietary and basic Wi-Fi–enabled thermostats, we ended up with six finalists: the third-generation Nest, Ecobee’s Ecobee3 and Ecobee3 Lite, Honeywell’s second-generation Lyric, Emerson’s Sensi Wi-Fi thermostat, and Carrier’s Cor. We installed each model ourselves and ran them for three to 10 days in routine operation. We did our testing in a 2,200-square-foot, two-story South Carolina home, running a two-zone HVAC system with an electric heat pump and forced air.
For each thermostat, our testing considered ease of installation and setup, ease of adjusting the temperature, processes for setting a schedule and using smartphone app features, multizone control capabilities, and smart-home interoperability.

Gallup reports on 2016 Well-being

Gallup/Healthways 2016 Report

This report, part of the Gallup-Healthways State of American Well-Being series, examines well-being across the nation, including how well-being varies by state and which states lead and lag across the five elements of well-being. The five elements include:
• Purpose: liking what you do each day and being motivated to achieve your goals
• Social: having supportive relationships and love in your life
• Financial: managing your economic life to reduce stress and increase security
• Community: liking where you live, feeling safe and having pride in your community
• Physical: having good health and enough energy to get things done daily

In 2016, the national Well-Being Index score reached 62.1, showing statistically significant gains from 2014 and 2015. Also in 2016, Americans’ life evaluation reached its highest point since 2008, when Gallup and Healthways began measurement. Now 55.4% of American adults are “thriving”, compared to 48.9% in 2008. Other positive trends include historically low smoking rates (now at 18.0%, down from 21.1% in 2008); historically high exercise rates as measured by those who report they exercised for 30 minutes or more, three or more days in the last week; and the highest scores recorded on healthcare access measures, with the greatest number of Americans covered by health insurance and visiting the dentist. Americans are also reporting the lowest rates of healthcare insecurity since 2008, as measured by not being able to afford healthcare once in the last 12 months.

All national well-being trends are not positive, however; chronic diseases such as obesity (28.4%), diabetes (11.6%), and depression (17.8%) are now at their highest points since 2008. The percentage of Americans who report eating healthy all day during the previous day is also at a nine-year low.

Georgia ranked 29th – middle of the pack. Georgia improved – up from 35th.

Georgia showed a wide variation in sub-components. Georgia scored in the highest quintile on Social Rank (“having supportive relationships”), second quintile on Purpose Rank (“liking what you do each day”), third quintile on Physical (“having good health”), fourth quintile on Community (“liking where you live”), and fifth quintile on Financial (“managing your economic life”).

Tesla and Energy Storage

CREDIT: Guardian Story on Tesla and Energy Storage

Tesla moves beyond electric cars with new California battery farm

From the road, the close to 400 white industrial boxes packed into 1.5 acres of barren land in Ontario, California, a little more than 40 miles from downtown Los Angeles, look like standard electrical equipment. They’re surrounded by a metal fence, stand on concrete pads and sit under long electrical lines.

But take a closer look and you’ll notice the bright red coloring and gray logo of electric car company Tesla on the sides. And inside the boxes are thousands of battery cells – the same ones that are used in Tesla’s electric cars – made by the company in its massive $5bn Tesla Gigafactory outside of Reno, Nevada.

This spot, located at the Mira Loma substation of Southern California Edison, hosts the biggest battery farm Tesla has built for a power company. Southern California Edison will use the battery farm, which has been operating since December and is one of the biggest in the world, to store energy and meet spikes in demand – like on hot summer afternoons when buildings start to crank up the air conditioning.

Tesla’s project has a capacity of 20 megawatts and is designed to discharge 80-megawatt hours of electricity in four-hour periods. It contains enough batteries to run about 1,000 Tesla cars, and the equivalent energy to supply power to 15,000 homes for four hours. The company declined to disclose the project’s cost.
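The article's figures hang together on a quick back-of-the-envelope check; the implied average household load below is derived from the stated numbers, not something the article reports.

```python
# Sanity check of the article's figures: 20 MW capacity, 80 MWh storage,
# 15,000 homes for four hours. The per-home load is derived, not stated.
power_mw = 20        # discharge capacity, megawatts
energy_mwh = 80      # total stored energy, megawatt-hours
homes = 15_000

hours = energy_mwh / power_mw                       # duration at full power
kw_per_home = energy_mwh * 1000 / (homes * hours)   # implied average load

print(hours)                   # 4.0, matching the stated four-hour window
print(round(kw_per_home, 2))   # 1.33 kW per home, a plausible average load
```

So the three numbers are mutually consistent: discharging 80 MWh at 20 MW takes exactly four hours, at roughly 1.3 kW per home.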

The project marks an important point in Tesla’s strategy to expand beyond the electric car business. Developing battery packs is a core expertise for the company, which is designing packs for homes, businesses and utilities. It markets them partly as a way to store solar electricity for use after sundown, a pitch that works well for states with a booming solar energy market such as California.

Battery systems built for power companies can serve more than one purpose. A utility can avoid blackouts by charging them up when its natural gas power plants, or solar and wind farms, produce more electricity than needed, and draw from them when the power plants aren’t able to keep up with demand.

Edison and other California utilities hired Tesla and a few other battery farm builders after an important natural gas reservoir near Los Angeles, called Aliso Canyon, closed following a huge leak and massive environmental disaster in late 2015. The leak forced thousands of people in nearby neighborhoods to evacuate. It also left utilities worried about how they’d meet the peak electricity demands of coming summers if they weren’t able to dip into the natural gas storage whenever they needed fuel to produce power. They couldn’t always get natural gas shipments from other suppliers quickly enough to meet a sharp rise in electricity consumption.

As a result, the California Public Utilities Commission approved 100 megawatts of energy storage projects for both Southern California Edison and San Diego Gas & Electric. The commission also asked for the projects to be built quickly, before the end of 2016.

Other energy storage projects that have been built since include a 37.5-megawatt project in San Diego County by AES Energy Storage, which used lithium-ion batteries from Samsung. AES has completed the project, which is going through the commissioning phase. AES also plans to build a 100-megawatt project for Southern California Edison in Long Beach in 2020.
Even before the Aliso Canyon disaster, the commission had already recognized the benefit of using energy storage to manage supply and demand and expected it to become an important component in the state’s plan to replace fossil fuel energy with renewables. The commission, which requires the state’s three big utilities to add more wind and solar energy to their supplies over time, also set a statewide energy storage target of 1,325 megawatts by 2020.
Surrounded by rows of batteries at a ribbon-cutting ceremony at the project on Monday, Southern California Edison’s CEO Kevin Payne said the Tesla project is important because “it validates that energy storage can be part of the energy mix now” and is “a great reminder of how fast technology is changing the electric power industry”.

This latest crop of energy storage projects uses a new generation of lithium-ion batteries. Historically, batteries were too expensive for energy storage, but their prices have dropped dramatically in recent years, thanks to their mass production by companies such as Panasonic, Tesla and Samsung.
Companies that buy lithium-ion batteries have been reporting drops in prices of 70% over the past two years. Tesla has said it plans to lower its battery prices by 30% by expanding production inside its Gigafactory.
At the event on Monday, Tesla’s co-founder and chief technology officer JB Straubel said: “Storage has been missing on the grid since it was invented.”

Tesla is counting on the energy storage market as an important source of revenue and built its giant factory with that in mind.
The company believes its expertise in engineering and building electric cars sets it apart from other battery farm developers. Tesla has been developing battery packs for a decade and has improved the technology that manages the batteries’ temperatures, which can rise high enough to pose a fire risk.
Overheating is a well known problem for lithium-ion batteries, which require insulating materials and software to keep them running cool. A battery farm built next to a wind farm in Hawaii by a now-bankrupt company caught fire in 2012 and temporarily put a dampener on the energy storage market.
Tesla has been building another battery farm on the Hawaiian island of Kauai, and has projects in Connecticut, North Carolina, New Zealand and the UK.
The company is looking for opportunities to build battery farms outside of California, including the East Coast and countries such as Germany, Australia and Japan. Tesla co-founder and CEO Elon Musk has said in the past that the company’s energy storage business could one day be bigger than its car business.