
Why Facts Don’t Change Our Minds

CREDIT:
New Yorker Article

New discoveries about the human mind show the limitations of reason.

By Elizabeth Kolbert

The vaunted human capacity for reason may have more to do with winning arguments than with thinking straight. Illustration by Gérard DuBois
In 1975, researchers at Stanford invited a group of undergraduates to take part in a study about suicide. They were presented with pairs of suicide notes. In each pair, one note had been composed by a random individual, the other by a person who had subsequently taken his own life. The students were then asked to distinguish between the genuine notes and the fake ones.

Some students discovered that they had a genius for the task. Out of twenty-five pairs of notes, they correctly identified the real one twenty-four times. Others discovered that they were hopeless. They identified the real note in only ten instances.

As is often the case with psychological studies, the whole setup was a put-on. Though half the notes were indeed genuine—they’d been obtained from the Los Angeles County coroner’s office—the scores were fictitious. The students who’d been told they were almost always right were, on average, no more discerning than those who had been told they were mostly wrong.

In the second phase of the study, the deception was revealed. The students were told that the real point of the experiment was to gauge their responses to thinking they were right or wrong. (This, it turned out, was also a deception.) Finally, the students were asked to estimate how many suicide notes they had actually categorized correctly, and how many they thought an average student would get right. At this point, something curious happened. The students in the high-score group said that they thought they had, in fact, done quite well—significantly better than the average student—even though, as they’d just been told, they had zero grounds for believing this. Conversely, those who’d been assigned to the low-score group said that they thought they had done significantly worse than the average student—a conclusion that was equally unfounded.

“Once formed,” the researchers observed dryly, “impressions are remarkably perseverant.”

A few years later, a new set of Stanford students was recruited for a related study. The students were handed packets of information about a pair of firefighters, Frank K. and George H. Frank’s bio noted that, among other things, he had a baby daughter and he liked to scuba dive. George had a small son and played golf. The packets also included the men’s responses on what the researchers called the Risky-Conservative Choice Test. According to one version of the packet, Frank was a successful firefighter who, on the test, almost always went with the safest option. In the other version, Frank also chose the safest option, but he was a lousy firefighter who’d been put “on report” by his supervisors several times. Once again, midway through the study, the students were informed that they’d been misled, and that the information they’d received was entirely fictitious. The students were then asked to describe their own beliefs. What sort of attitude toward risk did they think a successful firefighter would have? The students who’d received the first packet thought that he would avoid it. The students in the second group thought he’d embrace it.

Even after the evidence “for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs,” the researchers noted. In this case, the failure was “particularly impressive,” since two data points would never have been enough information to generalize from.

The Stanford studies became famous. Coming from a group of academics in the nineteen-seventies, the contention that people can’t think straight was shocking. It isn’t any longer. Thousands of subsequent experiments have confirmed (and elaborated on) this finding. As everyone who’s followed the research—or even occasionally picked up a copy of Psychology Today—knows, any graduate student with a clipboard can demonstrate that reasonable-seeming people are often totally irrational. Rarely has this insight seemed more relevant than it does right now. Still, an essential puzzle remains: How did we come to be this way?

In a new book, “The Enigma of Reason” (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at answering this question. Mercier, who works at a French research institute in Lyon, and Sperber, now based at the Central European University, in Budapest, point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context.

Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.

Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.

The students were asked to respond to two studies. One provided data in support of the deterrence argument, and the other provided data that called it into question. Both studies—you guessed it—were made up, and had been designed to present what were, objectively speaking, equally compelling statistics. The students who had originally supported capital punishment rated the pro-deterrence data highly credible and the anti-deterrence data unconvincing; the students who’d originally opposed capital punishment did the reverse. At the end of the experiment, the students were asked once again about their views. Those who’d started out pro-capital punishment were now even more in favor of it; those who’d opposed it were even more hostile.

If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hypersociability.”

Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.

A recent experiment performed by Mercier and some European colleagues neatly demonstrates this asymmetry. Participants were asked to answer a series of simple reasoning problems. They were then asked to explain their responses, and were given a chance to modify them if they identified mistakes. The majority were satisfied with their original choices; fewer than fifteen per cent changed their minds in step two.

In step three, participants were shown one of the same problems, along with their answer and the answer of another participant, who’d come to a different conclusion. Once again, they were given the chance to change their responses. But a trick had been played: the answers presented to them as someone else’s were actually their own, and vice versa. About half the participants realized what was going on. Among the other half, suddenly people became a lot more critical. Nearly sixty per cent now rejected the responses that they’d earlier been satisfied with.

This lopsidedness, according to Mercier and Sperber, reflects the task that reason evolved to perform, which is to prevent us from getting screwed by the other members of our group. Living in small bands of hunter-gatherers, our ancestors were primarily concerned with their social standing, and with making sure that they weren’t the ones risking their lives on the hunt while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from winning arguments.

Among the many, many issues our forebears didn’t worry about were the deterrent effects of capital punishment and the ideal attributes of a firefighter. Nor did they have to contend with fabricated studies, or fake news, or Twitter. It’s no wonder, then, that today reason often seems to fail us. As Mercier and Sperber write, “This is one of many cases in which the environment changed too quickly for natural selection to catch up.”

Steven Sloman, a professor at Brown, and Philip Fernbach, a professor at the University of Colorado, are also cognitive scientists. They, too, believe sociability is the key to how the human mind functions or, perhaps more pertinently, malfunctions. They begin their book, “The Knowledge Illusion: Why We Never Think Alone” (Riverhead), with a look at toilets.

Virtually everyone in the United States, and indeed throughout the developed world, is familiar with toilets. A typical flush toilet has a ceramic bowl filled with water. When the handle is depressed, or the button pushed, the water—and everything that’s been deposited in it—gets sucked into a pipe and from there into the sewage system. But how does this actually happen?

In a study conducted at Yale, graduate students were asked to rate their understanding of everyday devices, including toilets, zippers, and cylinder locks. They were then asked to write detailed, step-by-step explanations of how the devices work, and to rate their understanding again. Apparently, the effort revealed to the students their own ignorance, because their self-assessments dropped. (Toilets, it turns out, are more complicated than they appear.)

Sloman and Fernbach see this effect, which they call the “illusion of explanatory depth,” just about everywhere. People believe that they know way more than they actually do. What allows us to persist in this belief is other people. In the case of my toilet, someone else designed it so that I can operate it easily. This is something humans are very good at. We’ve been relying on one another’s expertise ever since we figured out how to hunt together, which was probably a key development in our evolutionary history. So well do we collaborate, Sloman and Fernbach argue, that we can hardly tell where our own understanding ends and others’ begins.

“One implication of the naturalness with which we divide cognitive labor,” they write, is that there’s “no sharp boundary between one person’s ideas and knowledge” and “those of other members” of the group.

This borderlessness, or, if you prefer, confusion, is also crucial to what we consider progress. As people invented new tools for new ways of living, they simultaneously created new realms of ignorance; if everyone had insisted on, say, mastering the principles of metalworking before picking up a knife, the Bronze Age wouldn’t have amounted to much. When it comes to new technologies, incomplete understanding is empowering.

Where it gets us into trouble, according to Sloman and Fernbach, is in the political domain. It’s one thing for me to flush a toilet without knowing how it operates, and another for me to favor (or oppose) an immigration ban without knowing what I’m talking about. Sloman and Fernbach cite a survey conducted in 2014, not long after Russia annexed the Ukrainian territory of Crimea. Respondents were asked how they thought the U.S. should react, and also whether they could identify Ukraine on a map. The farther off base they were about the geography, the more likely they were to favor military intervention. (Respondents were so unsure of Ukraine’s location that the median guess was wrong by eighteen hundred miles, roughly the distance from Kiev to Madrid.)
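The geography claim is easy to check arithmetically. A minimal great-circle sketch, using approximate coordinates for Kiev and Madrid (the coordinates are my assumption, not from the survey), lands near the eighteen-hundred-mile figure:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3959 * asin(sqrt(a))  # 3959 mi is roughly Earth's mean radius

# Approximate city coordinates (assumed for illustration):
kiev = (50.45, 30.52)
madrid = (40.42, -3.70)
distance = haversine_miles(*kiev, *madrid)  # roughly 1,800 miles
```

So a median guess off by eighteen hundred miles really is about the span from Kiev to Madrid.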

Surveys on many other issues have yielded similarly dismaying results. “As a rule, strong feelings about issues do not emerge from deep understanding,” Sloman and Fernbach write. And here our dependence on other minds reinforces the problem. If your position on, say, the Affordable Care Act is baseless and I rely on it, then my opinion is also baseless. When I talk to Tom and he decides he agrees with me, his opinion is also baseless, but now that the three of us concur we feel that much more smug about our views. If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the Trump Administration.

“This is how a community of knowledge can become dangerous,” Sloman and Fernbach observe. The two have performed their own version of the toilet experiment, substituting public policy for household gadgets. In a study conducted in 2012, they asked people for their stance on questions like: Should there be a single-payer health-care system? Or merit-based pay for teachers? Participants were asked to rate their positions depending on how strongly they agreed or disagreed with the proposals. Next, they were instructed to explain, in as much detail as they could, the impacts of implementing each one. Most people at this point ran into trouble. Asked once again to rate their views, they ratcheted down the intensity, so that they either agreed or disagreed less vehemently.

Sloman and Fernbach see in this result a little candle for a dark world. If we—or our friends or the pundits on CNN—spent less time pontificating and more trying to work through the implications of policy proposals, we’d realize how clueless we are and moderate our views. This, they write, “may be the only form of thinking that will shatter the illusion of explanatory depth and change people’s attitudes.”

One way to look at science is as a system that corrects for people’s natural inclinations. In a well-run laboratory, there’s no room for myside bias; the results have to be reproducible in other laboratories, by researchers who have no motive to confirm them. And this, it could be argued, is why the system has proved so successful. At any given moment, a field may be dominated by squabbles, but, in the end, the methodology prevails. Science moves forward, even as we remain stuck in place.

In “Denying to the Grave: Why We Ignore the Facts That Will Save Us” (Oxford), Jack Gorman, a psychiatrist, and his daughter, Sara Gorman, a public-health specialist, probe the gap between what science tells us and what we tell ourselves. Their concern is with those persistent beliefs which are not just demonstrably false but also potentially deadly, like the conviction that vaccines are hazardous. Of course, what’s hazardous is not being vaccinated; that’s why vaccines were created in the first place. “Immunization is one of the triumphs of modern medicine,” the Gormans note. But no matter how many scientific studies conclude that vaccines are safe, and that there’s no link between immunizations and autism, anti-vaxxers remain unmoved. (They can now count on their side—sort of—Donald Trump, who has said that, although he and his wife had their son, Barron, vaccinated, they refused to do so on the timetable recommended by pediatricians.)

The Gormans, too, argue that ways of thinking that now seem self-destructive must at some point have been adaptive. And they, too, dedicate many pages to confirmation bias, which, they claim, has a physiological component. They cite research suggesting that people experience genuine pleasure—a rush of dopamine—when processing information that supports their beliefs. “It feels good to ‘stick to our guns’ even if we are wrong,” they observe.

The Gormans don’t just want to catalogue the ways we go wrong; they want to correct for them. There must be some way, they maintain, to convince people that vaccines are good for kids, and handguns are dangerous. (Another widespread but statistically insupportable belief they’d like to discredit is that owning a gun makes you safer.) But here they encounter the very problems they have enumerated. Providing people with accurate information doesn’t seem to help; they simply discount it. Appealing to their emotions may work better, but doing so is obviously antithetical to the goal of promoting sound science. “The challenge that remains,” they write toward the end of their book, “is to figure out how to address the tendencies that lead to false scientific belief.”

“The Enigma of Reason,” “The Knowledge Illusion,” and “Denying to the Grave” were all written before the November election. And yet they anticipate Kellyanne Conway and the rise of “alternative facts.” These days, it can feel as if the entire country has been given over to a vast psychological experiment being run either by no one or by Steve Bannon. Rational agents would be able to think their way to a solution. But, on this matter, the literature is not reassuring. ♦

This article appears in the print edition of the February 27, 2017, issue, with the headline “That’s What You Think.”

Elizabeth Kolbert has been a staff writer at The New Yorker since 1999. She won the 2015 Pulitzer Prize for general nonfiction for “The Sixth Extinction: An Unnatural History.”

John C. Reid

Regulatory State and Redistributive State

Will Wilkinson is a great writer, and spells out here two critical aspects of government:

The regulatory state is the aspect of government that protects the public against abuses of private players, protects property rights, and creates well-defined “corridors” that streamline the flows of capitalism and make it work best. It always gets a bad rap, and shouldn’t. The rap is due to the difficulty of enforcing regulations on so many aspects of life.

The redistributive state is the aspect of government that shifts income and wealth from certain players in society to other players. The presumption is always one of fairness, whereby society deems it in the interests of all that certain actors, e.g. veterans or seniors, get preferential distributions of some kind.

He goes on to make a great point: these two states are more independent of one another than might at first be apparent, so it is possible to dislike one and like the other.

Personally, I like both. I think both are critical to a well-oiled society with capitalism and property rights as central tenets. My beef will always be with issues of efficiency and effectiveness.

On redistribution, efficiency experts can answer this question: can we dispense with the monthly paperwork and simply direct deposit funds? Medicare now works this way, and the efficiency gains are remarkable.

And on regulation, efficiency experts can answer this question: can private actors certify their compliance with regulation, and then the public actors simply audit from time to time? Many government programs work this way, to the benefit of all.

On redistribution, effectiveness experts can answer this question: Is the homeless population minimal? Are veterans getting what they need? Are seniors satisfied with how government treats them?

On regulation, effectiveness experts can answer this question: Is the air clean? Is the water clean? Is the mortgage market making good loans that help people buy houses? Are complaints about fraudulent consumer practices low?

CREDIT: VOX Article on Economic Freedom by Will Wilkinson

By Will Wilkinson
Sep 1, 2016

American exceptionalism has been propelled by exceptionally free markets, so it’s tempting to think the United States has a freer economy than Western European countries — particularly those soft-socialist Scandinavian social democracies with punishing tax burdens and lavish, even coddling, welfare states. As late as 2000, the American economy was indeed the freest in the West. But something strange has happened since: Economic freedom in the United States has dropped at an alarming rate.

Meanwhile, a number of big-government welfare states have become at least as robustly capitalist as the United States, and maybe more so. Why? Because big welfare states needed to become better capitalists to afford their socialism. This counterintuitive, even paradoxical dynamic suggests a tantalizing hypothesis: America’s shabby, unpopular safety net is at least partly responsible for capitalism’s flagging fortunes in the Land of the Free. Could it be that Americans aren’t socialist enough to want capitalism to work? It makes more sense than you might think.

America’s falling economic freedom

From 1970 to 2000, the American economy was the freest in the West, lagging behind only Asia’s laissez-faire city-states, Hong Kong and Singapore. The average economic freedom rating of the wealthy developed member countries of the Organization for Economic Cooperation and Development (OECD) has slipped a bit since the turn of the millennium, but not as fast as America’s.

“Nowhere has the reversal of the rising trend in the economic freedom been more evident than in the United States,” write the authors of the Fraser Institute’s 2015 Economic Freedom of the World report, noting that “the decline in economic freedom in the United States has been more than three times greater than the average decline found in the OECD.”

The economic freedom of selected countries, 1999 to 2016. Heritage Foundation 2016 Index of Economic Freedom

The Heritage Foundation and the Canadian Fraser Institute each produce an annual index of economic freedom, scoring the world’s countries on four or five main areas, each of which breaks down into a number of subcomponents. The main rubrics include the size of government and tax burdens; protection of property rights and the soundness of the legal system; monetary stability; openness to global trade; and levels of regulation of business, labor, and capital markets. Scores on these areas and subareas are combined to generate an overall economic freedom score.

The rankings reflect right-leaning ideas about what it means for people and economies to be free. Strong labor unions and inequality-reducing redistribution are more likely to hurt than help a country’s score.

So why should you care about some right-wing think tank’s ideologically loaded measure of economic freedom? Because it matters. More economic freedom, so measured, predicts higher rates of economic growth, and higher levels of wealth predict happier, healthier, longer lives. Higher levels of economic freedom are also linked with greater political liberty and civil rights, as well as higher scores on the left-leaning Social Progress Index, which is based on indicators of social justice and human well-being, from nutrition and medical care to tolerance and inclusion.

The authors of the Fraser report estimate that the drop in American economic freedom “could cut the US historic growth rate of 3 percent by half.” The difference between a 1.5 percent and 3 percent growth rate is roughly the difference between the output of the economy tripling rather than octupling in a lifetime. That’s a huge deal.
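The tripling-versus-octupling figure is straightforward compound growth. A minimal sketch, assuming a 75-year “lifetime” horizon (the article does not specify one):

```python
def growth_multiple(rate, years=75):
    """Factor by which annual compounding multiplies total output."""
    return (1 + rate) ** years

low = growth_multiple(0.015)   # ~3.1x: output roughly triples
high = growth_multiple(0.03)   # ~9.2x: output roughly octuples
```

At 3 percent a year the economy grows roughly ninefold over 75 years; at 1.5 percent, roughly threefold — in line with the article’s “tripling rather than octupling.”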

Over the same period, the economic freedom scores of Canada and Denmark have improved a lot. According to conservative and libertarian definitions of economic freedom, Canadians, who enjoy a socialized health care system, now have more economic freedom than Americans, and Danes, who have one of the world’s most generous welfare states, have just as much.

What the hell’s going on?

The redistributive state and the regulatory state are separable

To make headway on this question, it is crucial to clearly distinguish two conceptually and empirically separable aspects of “big government” — the regulatory state and the redistributive state.

The redistributive state moves money around through taxes and transfer programs. The regulatory state places all sorts of restrictions and requirements on economic life — some necessary, some not. Most Democrats and Republicans assume that lots of regulation and lots of redistribution go hand in hand, so it’s easy to miss that you can have one without the other, and that the relationship between the two is uneasy at best. But you can’t really understand the politics behind America’s declining economic freedom if you fail to distinguish between the regulatory and fiscal aspects of economic policy.

Standard “supply-side” Republican economic policy thinking says that cuts in tax rates and government spending will unleash latent productive potential in the economy, boosting rates of growth. And indeed, when taxes and government spending are very high, cuts produce gains by returning resources to the private sector. But it’s important to see that questions about government control versus private sector control of economic resources are categorically different from questions about the freedom of markets.

Free markets require the presence of good regulation, which defines and protects property rights and facilitates market processes through the consistent application of clear law, and an absence of bad regulation, which interferes with productive economic activity. A government can tax and spend very little — yet still stomp all over markets. Conversely, a government can withdraw lots of money from the economy through taxes, but still totally nail the optimal balance of good and bad regulation.

Whether a country’s market economy is free — open, competitive, and relatively unmolested by government — is more a question of regulation than a question of taxation and redistribution. It’s not primarily about how “big” its government is. Republicans generally do support a less meddlesome regulatory approach, but when they’re in power they tend to be much more persistent about cutting taxes and social welfare spending than they are about reducing economically harmful regulatory frictions.

If you’re as worried about America’s declining economic freedom as I am, this is a serious problem. In recent years, the effect of cutting taxes and spending has been to distribute income upward and leave the least well-off more vulnerable to bad luck, globalization, “disruptive innovation,” and the vagaries of business cycles.

If spending cuts came out of the military’s titanic budget, that would help. But that’s rarely what happens. The least connected constituencies, not the most expensive ones, are the first to get dinged by budget hawks. And further tax cuts are unlikely to boost growth. Lower taxes make government seem cheaper than it really is, which leads voters to ask for more, not less, government spending, driving up the deficit. Increasing the portion of GDP devoted to paying interest on government debt isn’t a growth-enhancing way to return resources to the private sector.

Meanwhile, wages have been flat or declining for millions of Americans for decades. People increasingly believe the economy is “rigged” in favor of the rich. As a sense of economic insecurity mounts, people anxiously cast about for answers.

Easing the grip of the regulatory state is a good answer. But in the United States, its close association with “free market” supply-side efforts to produce growth by slashing the redistributive state has made it an unattractive answer, even with Republican voters. That’s at least part of the reason the GOP wound up nominating a candidate who, in addition to promising not to cut entitlement spending, openly favors protectionist trade policy, giant infrastructure projects, and huge subsidies to domestic manufacturing and energy production. Donald Trump’s economic policy is the worst of all possible worlds.

This is doubly ironic, and doubly depressing, once you recognize that the sort of big redistributive state supply-siders fight is not necessarily the enemy of economic freedom. On the contrary, high levels of social welfare spending can actually drive political demand for growth-promoting reform of the regulatory state. That’s the lesson of Canada and Denmark’s march up those free economy rankings.

The welfare state isn’t a free lunch, but it is a cheap date

Economic theory tells you that big government ought to hurt economic growth. High levels of taxation reduce the incentive to work, and redistribution is a “leaky bucket”: Moving money around always ends up wasting some of it. Moreover, a dollar spent in the private sector generally has a more beneficial effect on the economy than a dollar spent by the government. Add it all up, and big governments that tax heavily and spend freely on social transfers ought to hurt economic growth.

That matters from a moral perspective — a lot. Other things equal, people are better off on just about every measure of well-being when they’re wealthier. Relative economic equality is nice, but it’s not so nice when relatively equal shares mean smaller shares for everyone. Just as small differences in the rate at which you put money into a savings account can lead to vast differences in your account balance 40 years down the road, thanks to the compounding nature of interest, a small reduction in the rate of economic growth can leave a society’s least well-off people much poorer in absolute terms than they might have been.

Here’s the puzzle. As a general rule, when nations grow wealthier, the public demands more and better government services, increasing government spending as a percentage of GDP. (This is known as “Wagner’s law.”) According to standard growth theory, ongoing increase in the size of government ought to exert downward pressure on rates of growth. But we don’t see the expected effect in the data. Long-term national growth trends are amazingly stable.

And when we look at the family of advanced, liberal democratic countries, countries that spend a smaller portion of national income on social transfer programs gain very little in terms of growth relative to countries that spend much more lavishly on social programs. Peter Lindert, an economist at the University of California Davis, calls this the “free lunch paradox.”

Lindert’s label for the puzzle is somewhat misleading, because big expensive welfare states are, obviously, expensive. And they do come at the expense of some growth. Standard economic theory isn’t completely wrong. It’s just that democracies that have embraced generous social spending have found ways to afford it by minimizing and offsetting its anti-growth effects.

If you’re careful with the numbers, you do in fact find a small negative effect of social welfare spending on growth. Still, according to economic theory, lunch ought to be really expensive. And it’s not.

There are three main reasons big welfare states don’t hurt growth as much as you might think. First, as Lindert has emphasized, they tend to have efficient consumption-based tax systems that minimize market distortions.
When you tax something, people tend to avoid it. If you tax income, as the United States does, people work a little less, which means that certain economic gains never materialize, leaving everyone a little poorer. Taxing consumption, as many of our European peers do, is less likely to discourage productive moneymaking, though it does discourage spending. But that’s not so bad. Less consumption means more savings, and savings puts the capital in capitalism, financing the economic activity that creates growth.

There are other advantages, too. Consumption taxes are usually structured as national sales taxes or value-added taxes (VATs), paid in small amounts on a continuous basis. They are extremely cheap to collect, hard to avoid, and less in-your-face than income taxes, which further mitigates the counterproductively demoralizing aspect of taxation.

Big welfare states are also more likely to tax addictive stuff, which people tend to buy whatever the price, as well as unhealthy and polluting stuff. That harnesses otherwise fiscally self-defeating tax-avoiding behavior to minimize the costs of health care and environmental damage.
Second, some transfer programs have relatively direct pro-growth effects. Workers are most productive in jobs well-matched to their training and experience, for example, and unemployment benefits offer displaced workers time to find a good, productivity-promoting fit. There’s also some evidence that health care benefits that aren’t linked to employment can promote economic risk-taking and entrepreneurship.

Fans of open-handed redistributive programs tend to oversell this kind of upside for growth, but there really is some. Moreover, it makes sense that the countries most devoted to these programs would fine-tune them over time to amplify their positive-sum aspects.

This is why you can’t assume all government spending affects growth in the same way. The composition of spending — as well as cuts to spending — matters. Cuts to efficiency-enhancing spending can hurt growth as much as they help. And they can really hurt if they increase economic anxiety and generate demand for Trump-like economic policy.

Third, there are lots of regulatory state policies that hurt growth by, say, impeding healthy competition or closing off foreign trade, and if you like high levels of redistribution better than you like those policies, you’ll eventually consider getting rid of some of them. If you do get rid of them, your economic freedom score from the Heritage Foundation and the Fraser Institute goes up.
This sort of compensatory economic liberalization is how big welfare states can indirectly promote growth, and more or less explains why countries like Canada, Denmark, and Sweden have become more robustly capitalist over the past several decades. They needed to be better capitalists to afford their socialism. And it works pretty well.

If you bundle together fiscal efficiency, some offsetting pro-growth effects, and compensatory liberalization, you can wind up with a very big government, with very high levels of social welfare spending and very few negative consequences for growth. Call it “big-government laissez-faire.”

The missing political will for genuine pro-growth reform

Enthusiasts for small government have a ready reply. Fine, they’ll say. Big government can work through policies that offset its drag on growth. But why not a less intrusive regulatory state and a smaller redistributive state: small-government laissez-faire. After all, this is the formula in Hong Kong and Singapore, which rank No. 1 and No. 2 in economic freedom. Clearly that’s our best bet for prosperity-promoting economic freedom.

But this argument ignores two things. First, Hong Kong and Singapore are authoritarian technocracies, not liberal democracies, which suggests (though doesn’t prove) that their special recipe requires nondemocratic government to work. When you bring democracy into the picture, the most important political lesson of the Canadian and Danish rise in economic freedom becomes clear: When democratically popular welfare programs become politically nonnegotiable fixed points, they can come to exert intense pressure on fiscal and economic policy to make them sustainable.

Political demand for economic liberalization has to come from somewhere. But there’s generally very little organic, popular democratic appetite for capitalist creative destruction. Constant “disruption” is scary, the way markets generate wealth and well-being is hard to comprehend, and many of us find competitive profit-seeking intuitively objectionable.

It’s not that Danes and Swedes and Canadians ever loved their “neoliberal” market reforms. They fought bitterly about them and have rolled some of them back. But when their big-government welfare states were creaking under their own weight, enough of the public was willing, thanks to the sense of economic security provided by the welfare state, to listen to experts who warned that the redistributive state would become unsustainable without the downsizing of the regulatory state.

A sound and generous system of social insurance offers a certain peace of mind that makes the very real risks of increased economic dynamism seem tolerable to the democratic public, opening up the political possibility of stabilizing a big-government welfare state with growth-promoting economic liberalization.

This sense of baseline economic security is precisely what many millions of Americans lack.

Learning the lesson of Donald Trump
America’s declining economic freedom is a profoundly serious problem. It’s already putting the brakes on dynamism and growth, leaving millions of Americans with a bitter sense of panic about their prospects. They demand answers. But ordinary voters aren’t policy wonks. When gripped by economic anxiety, they turn to demagogues who promise measures that make intuitive economic sense, but which actually make economic problems worse.

We may dodge a Trump presidency this time, but if we fail to fix the feedback loop between declining economic freedom and an increasingly acute sense of economic anxiety, we risk plunging the world’s biggest economy and the linchpin of global stability into a political and economic death spiral. It’s a ridiculous understatement to say that it’s important that this doesn’t happen.

Market-loving Republicans and libertarians need to stare hard at a framed picture of Donald Trump and reflect on the idea that a stale economic agenda focused on cutting taxes and slashing government spending is unlikely to deliver further gains. It is instead likely to continue to backfire by exacerbating economic anxiety and the public’s sense that the system is rigged.

If you gaze at the Donald long enough, his fascist lips will whisper “thank you,” and explain that the close but confusing identification of supply-side fiscal orthodoxy with “free market” economic policy helps authoritarian populists like him — but it hurts the political prospects of regulatory state reforms that would actually make American markets freer.

Will Wilkinson is the vice president for policy at the Niskanen Center.

Property Rights and Modern Conservatism



In this excellent essay by one of my favorite conservative writers, Will Wilkinson takes Congress to task for its ridiculous botched-job-with-a-botched-process passing of tax cut legislation in 2017.

But I am blogging because of his other points.

In the article, he spells out some tenets of modern conservatism that bear repeating, namely:

– property rights (and the Murray Rothbard extreme positions of absolute property rights)
– economic freedom (“…if we tax you at 100 percent, then you’ve got 0 percent liberty…If we tax you at 50 percent, you are half-slave, half-free”)
– libertarianism (“The key is the libertarian idea, woven into the right’s ideological DNA, that redistribution is the exploitation of the ‘makers’ by the ‘takers.’”)
– legally enforceable rights
– moral traditionalism

Modern conservatism is a “fusion” of these ideas, and together they rest on an impressive intellectual footing.

But Will points out where they are flawed. The flaws are most apparent in the idea that the hordes want to use democratic institutions to plunder the wealth of the elites. This is a notion from the days when communism was public enemy #1. He points out that the opposite is actually true.

“Far from endangering property rights by facilitating redistribution, inclusive democratic institutions limit the “organized banditry” of the elite-dominated state by bringing everyone inside the charmed circle of legally enforced rights.”

Ironically, the new Tax Cut legislation is an example of reverse plunder: where the wealthy get the big, permanent gains and the rest get appeased with small cuts that expire.

So we are very far from the fears of communism. We are instead amidst a taking by the haves from the have-nots.

====================
Credit: New York Times 12/20/17 Op-Ed by Will Wilkinson

Opinion | OP-ED CONTRIBUTOR
The Tax Bill Shows the G.O.P.’s Contempt for Democracy
By WILL WILKINSON
DEC. 20, 2017
The Republican Tax Cuts and Jobs Act is notably generous to corporations, high earners, inheritors of large estates and the owners of private jets. Taken as a whole, the bill will add about $1.4 trillion to the deficit in the next decade and trigger automatic cuts to Medicare and other safety net programs unless Congress steps in to stop them.

To most observers on the left, the Republican tax bill looks like sheer mercenary cupidity. “This is a brazen expression of money power,” Jesse Jackson wrote in The Chicago Tribune, “an example of American plutocracy — a government of the wealthy, by the wealthy, for the wealthy.”

Mr. Jackson is right to worry about the wealthy lording it over the rest of us, but the open contempt for democracy displayed in the Senate’s slapdash rush to pass the tax bill ought to trouble us as much as, if not more than, what’s in it.

In its great haste, the “world’s greatest deliberative body” held no hearings or debate on tax reform. The Senate’s Republicans made sloppy math mistakes, crossed out and rewrote whole sections of the bill by hand at the 11th hour and forced a vote on it before anyone could conceivably read it.

The link between the heedlessly negligent style and anti-redistributive substance of recent Republican lawmaking is easy to overlook. The key is the libertarian idea, woven into the right’s ideological DNA, that redistribution is the exploitation of the “makers” by the “takers.” It immediately follows that democracy, which enables and legitimizes this exploitation, is itself an engine of injustice. As the novelist Ayn Rand put it, under democracy “one’s work, one’s property, one’s mind, and one’s life are at the mercy of any gang that may muster the vote of a majority.”

On the campaign trail in 2015, Senator Rand Paul, Republican of Kentucky, conceded that government is a “necessary evil” requiring some tax revenue. “But if we tax you at 100 percent, then you’ve got 0 percent liberty,” Mr. Paul continued. “If we tax you at 50 percent, you are half-slave, half-free.” The speaker of the House, Paul Ryan, shares Mr. Paul’s sense of the injustice of redistribution. He’s also a big fan of Ayn Rand. “I give out ‘Atlas Shrugged’ as Christmas presents, and I make all my interns read it,” Mr. Ryan has said. If the big-spending, democratic welfare state is really a system of part-time slavery, as Ayn Rand and Senator Paul contend, then beating it back is a moral imperative of the first order.

But the clock is ticking. Looking ahead to a potentially paralyzing presidential scandal, midterm blood bath or both, congressional Republicans are in a mad dash to emancipate us from the welfare state. As they see it, the redistributive upshot of democracy is responsible for the big-government mess they’re trying to bail us out of, so they’re not about to be tender with the niceties of democratic deliberation and regular parliamentary order.

The idea that there is an inherent conflict between democracy and the integrity of property rights is as old as democracy itself. Because the poor vastly outnumber the propertied rich — so the argument goes — if allowed to vote, the poor might gang up at the ballot box to wipe out the wealthy.

In the 20th century, and in particular after World War II, with voting rights and Soviet Communism on the march, the risk that wealthy democracies might redistribute their way to serfdom had never seemed more real. Radical libertarian thinkers like Rand and Murray Rothbard (who would be a muse to both Charles Koch and Ron Paul) responded with a theory of absolute property rights that morally criminalized taxation and narrowed the scope of legitimate government action and democratic discretion nearly to nothing. “What is the State anyway but organized banditry?” Rothbard asked. “What is taxation but theft on a gigantic, unchecked scale?”

Mainstream conservatives, like William F. Buckley, banished radical libertarians to the fringes of the conservative movement to mingle with the other unclubbables. Still, the so-called fusionist synthesis of libertarianism and moral traditionalism became the ideological core of modern conservatism. For hawkish Cold Warriors, libertarianism’s glorification of capitalism and vilification of redistribution was useful for immunizing American political culture against viral socialism. Moral traditionalists, struggling to hold ground against rising mass movements for racial and gender equality, found much to like in libertarianism’s principled skepticism of democracy. “If you analyze it,” Ronald Reagan said, “I believe the very heart and soul of conservatism is libertarianism.”

The hostility to redistributive democracy at the ideological center of the American right has made standard policies of successful modern welfare states, happily embraced by Europe’s conservative parties, seem beyond the moral pale for many Republicans. The outsize stakes seem to justify dubious tactics — bunking down with racists, aggressive gerrymandering, inventing paper-thin pretexts for voting rules that disproportionately hurt Democrats — to prevent majorities from voting themselves a bigger slice of the pie.

But the idea that there is an inherent tension between democracy and the integrity of property rights is wildly misguided. The liberal-democratic state is a relatively recent historical innovation, and our best accounts of the transition from autocracy to democracy point to the role of democratic political inclusion in protecting property rights.

As Daron Acemoglu of M.I.T. and James Robinson of Harvard show in “Why Nations Fail,” ruling elites in pre-democratic states arranged political and economic institutions to extract labor and property from the lower orders. That is to say, the system was set up to make it easy for elites to seize what ought to have been other people’s stuff.

In “Inequality and Democratization,” the political scientists Ben W. Ansell and David J. Samuels show that this demand for political inclusion generally isn’t driven by a desire to use the existing institutions to plunder the elites. It’s driven by a desire to keep the elites from continuing to plunder them.

It’s easy to say that everyone ought to have certain rights. Democracy is how we come to get and protect them. Far from endangering property rights by facilitating redistribution, inclusive democratic institutions limit the “organized banditry” of the elite-dominated state by bringing everyone inside the charmed circle of legally enforced rights.

Democracy is fundamentally about protecting the middle and lower classes from redistribution by establishing the equality of basic rights that makes it possible for everyone to be a capitalist. Democracy doesn’t strangle the golden goose of free enterprise through redistributive taxation; it fattens the goose by releasing the talent, ingenuity and effort of otherwise abused and exploited people.

At a time when America’s faith in democracy is flagging, the Republicans elected to treat the United States Senate, and the citizens it represents, with all the respect college guys accord public restrooms. It’s easier to reverse a bad piece of legislation than the bad reputation of our representative institutions, which is why the way the tax bill was passed is probably worse than what’s in it. Ultimately, it’s the integrity of democratic institutions and the rule of law that gives ordinary people the power to protect themselves against elite exploitation. But the Republican majority is bulldozing through basic democratic norms as though freedom has everything to do with the tax code and democracy just gets in the way.

Will Wilkinson is the vice president for policy at the Niskanen Center.

Dianne Dillon-Ridgley

Karen first met Dianne through the Women’s Network for a Sustainable Future (WNSF).

As I got to know Dianne more, I realized that there were many stories: facets of her experience and interests that make her life very complex, but also very interesting.

I came to realize that she believes that her myriad interests are really one interest: justice.

If I were to try to summarize her interests, I might do it this way:

Sustainability (including Energy, Environment, Environmental Health)
Civil Rights
Women’s Rights

Her story includes many close relatives who were part of the Thurgood Marshall precedent cases that led up to Brown v. Board of Education. That ruling, in 1954, overturned Plessy v. Ferguson (1896), which held that segregation was legal so long as facilities were “separate but equal”. The court ruled that segregation violated the Fourteenth Amendment (“no State shall … deny to any person … the equal protection of the laws”).

Her organizational affiliations:

Interface
Howard University (alumna)
Women’s Network for a Sustainable Future (WNSF)
Green Mountain Energy
Auburn University
River Network
Center for International Environmental Law
National Wildlife Federation
University of Indiana (School for Public Environmental Administration)
Zero Population Growth

Her full biography is below:
Ms. Dianne Dillon-Ridgley serves as an Adjunct Lecturer of University of Indiana School for Public Environmental Administration. Since 1997, Ms. Dillon-Ridgley has represented the World Young Women’s Christian Association at U.N. headquarters. From 1995 to 1998, Ms. Dillon-Ridgley served as a Senior Policy Analyst of the Women’s Environment and Development Organization and from 1998 to 1999, Ms. Dillon-Ridgley served as an Executive Director of that organization. From 1994 to 1997, Ms. Dillon-Ridgley served as a National President of Zero Population Growth, the nation’s largest grassroots organization concerned with rapid population growth and the environment. In 1998, Ms. Dillon-Ridgley was elected to the Global Water Partnership (Stockholm) and in 1999 appointed to the Oxford University Commission on Sustainable Consumption (UK). Ms. Dillon-Ridgley serves as the Chairman of the Environmental Advisory Board of Green Mountain Energy Company. Ms. Dillon-Ridgley was appointed by President Clinton to the President’s U.S. Council on Sustainable Development in 1994 and served as Co-Chair of the Council’s International and Population/Consumption Task Forces until the Council’s dissolution in June 1999. Ms. Dillon-Ridgley serves as a trustee of River Network, the Center for International Environmental Law, the Natural Step-US and Population Connection. She serves as Director of National Wildlife Federation, Inc. She also serves as a trustee of the International Board of Auburn University’s School of Human Sciences and also serves as a Member of the Editorial Advisory Board for Aspen Law and Business’ Fair Housing and Fair Lending Publications. Ms. Dillon-Ridgley also serves on the Boards of five nonprofit organizations and one private company. Ms. Dillon-Ridgley served as Director of Interface Inc. from February 1997 until May 12, 2014. Ms. Dillon-Ridgley served as a Director of Green Mountain Energy Company since July 1999. From 1998 to 1999, Ms. Dillon-Ridgley served as an Interim Executive Director of the Women’s Environment and Development Organization, an international women’s advocacy network for environmental, economic and sustainability issues. Ms. Dillon-Ridgley completed her undergraduate work at Howard University and is state-certified by the Iowa Mediation Service as a mediator specializing in agricultural mediation and public policy negotiation.

===============Notes on Brown vs (Topeka) Board of Education (1954) =====

CREDIT: https://en.wikipedia.org/wiki/Brown_v._Board_of_Education

Brown v. Board of Education
(Oliver Brown, et al. v. Board of Education of Topeka, et al.)

Supreme Court of the United States
Argued December 9, 1952
Reargued December 8, 1953
Decided May 17, 1954

Citations
347 U.S. 483 (more)
74 S. Ct. 686; 98 L. Ed. 873; 1954 U.S. LEXIS 2094; 53 Ohio Op. 326; 38 A.L.R.2d 1180

Prior history
Judgment for defendants, 98 F. Supp. 797 (D. Kan. 1951)
Subsequent history
Judgment on relief, 349 U.S. 294 (1955) (Brown II); on remand, 139 F. Supp. 468 (D. Kan. 1955); motion to intervene granted, 84 F.R.D. 383 (D. Kan. 1979); judgment for defendants, 671 F. Supp. 1290 (D. Kan. 1987); reversed, 892 F.2d 851 (10th Cir. 1989); vacated, 503 U.S. 978 (1992) (Brown III); judgment reinstated, 978 F.2d 585 (10th Cir. 1992); judgment for defendants, 56 F. Supp. 2d 1212 (D. Kan. 1999)

Holding
Segregation of students in public schools violates the Equal Protection Clause of the Fourteenth Amendment, because separate facilities are inherently unequal. District Court of Kansas reversed. NOTE: The Fourteenth Amendment says “no State shall … deny to any person … the equal protection of the laws”.

Court membership
Chief Justice
Earl Warren
Associate Justices
Hugo Black · Stanley F. Reed
Felix Frankfurter · William O. Douglas
Robert H. Jackson · Harold H. Burton
Tom C. Clark · Sherman Minton

Case opinions
Majority
Warren, joined by a unanimous Court
Laws applied
U.S. Const. amend. XIV

This case overturned a previous ruling or rulings
Plessy v. Ferguson (1896)
Cumming v. Richmond County Board of Education (1899)
Berea College v. Kentucky (1908)

Educational segregation in the US prior to Brown
Brown v. Board of Education of Topeka, 347 U.S. 483 (1954), was a landmark United States Supreme Court case in which the Court declared state laws establishing separate public schools for black and white students to be unconstitutional. The decision overturned the Plessy v. Ferguson decision of 1896, which allowed state-sponsored segregation, insofar as it applied to public education. Handed down on May 17, 1954, the Warren Court’s unanimous (9–0) decision stated that “separate educational facilities are inherently unequal.” As a result, de jure racial segregation was ruled a violation of the Equal Protection Clause of the Fourteenth Amendment of the United States Constitution. This ruling paved the way for integration and was a major victory of the Civil Rights Movement,[1] and a model for many future impact litigation cases.[2] However, the decision’s fourteen pages did not spell out any sort of method for ending racial segregation in schools, and the Court’s second decision in Brown II, 349 U.S. 294 (1955) only ordered states to desegregate “with all deliberate speed”.

Background
For much of the sixty years preceding the Brown case, race relations in the United States had been dominated by racial segregation. This policy had been endorsed in 1896 by the United States Supreme Court case of Plessy v. Ferguson, which held that as long as the separate facilities for the separate races were equal, segregation did not violate the Fourteenth Amendment (“no State shall … deny to any person … the equal protection of the laws”).

The plaintiffs in Brown asserted that this system of racial separation, while masquerading as providing separate but equal treatment of both white and black Americans, instead perpetuated inferior accommodations, services, and treatment for black Americans. Racial segregation in education varied widely from the 17 states that required racial segregation to the 16 in which it was prohibited. Brown was influenced by UNESCO’s 1950 Statement, signed by a wide variety of internationally renowned scholars, titled The Race Question.[3] This declaration denounced previous attempts at scientifically justifying racism as well as morally condemning racism. Another work that the Supreme Court cited was Gunnar Myrdal’s An American Dilemma: The Negro Problem and Modern Democracy (1944).[4] Myrdal had been a signatory of the UNESCO declaration. The research performed by the educational psychologists Kenneth B. Clark and Mamie Phipps Clark also influenced the Court’s decision.[5] The Clarks’ “doll test” studies presented substantial arguments to the Supreme Court about how segregation affected black school children’s mental status.[6]

The United States and the Soviet Union were both at the height of the Cold War during this time, and U.S. officials, including Supreme Court Justices, were highly aware of the harm that segregation and racism played on America’s international image. When Justice William O. Douglas traveled to India in 1950, the first question he was asked was, “Why does America tolerate the lynching of Negroes?” Douglas later wrote that he had learned from his travels that “the attitude of the United States toward its colored minorities is a powerful factor in our relations with India.” Chief Justice Earl Warren, nominated to the Supreme Court by President Eisenhower, echoed Douglas’s concerns in a 1954 speech to the American Bar Association, proclaiming that “Our American system like all others is on trial both at home and abroad, … the extent to which we maintain the spirit of our constitution with its Bill of Rights, will in the long run do more to make it both secure and the object of adulation than the number of hydrogen bombs we stockpile.”[7][8]

In 1951, a class action suit was filed against the Board of Education of the City of Topeka, Kansas in the United States District Court for the District of Kansas. The plaintiffs were thirteen Topeka parents on behalf of their 20 children.[9]

The suit called for the school district to reverse its policy of racial segregation. The Topeka Board of Education operated separate elementary schools under an 1879 Kansas law, which permitted (but did not require) districts to maintain separate elementary school facilities for black and white students in 12 communities with populations over 15,000. The plaintiffs had been recruited by the leadership of the Topeka NAACP. Notable among the Topeka NAACP leaders were the chairman McKinley Burnett; Charles Scott, one of three serving as legal counsel for the chapter; and Lucinda Todd.

The named plaintiff, Oliver L. Brown, was a parent, a welder in the shops of the Santa Fe Railroad, an assistant pastor at his local church, and an African American.[10] He was convinced to join the lawsuit by Scott, a childhood friend. Brown’s daughter Linda, a third grader, had to walk six blocks to her school bus stop to ride to Monroe Elementary, her segregated black school one mile (1.6 km) away, while Sumner Elementary, a white school, was seven blocks from her house.[11][12]

As directed by the NAACP leadership, the parents each attempted to enroll their children in the closest neighborhood school in the fall of 1951. They were each refused enrollment and directed to the segregated schools. Linda Brown Thompson later recalled the experience in a 2004 PBS documentary:

… well. like I say, we lived in an integrated neighborhood and I had all of these playmates of different nationalities. And so when I found out that day that I might be able to go to their school, I was just thrilled, you know. And I remember walking over to Sumner school with my dad that day and going up the steps of the school and the school looked so big to a smaller child. And I remember going inside and my dad spoke with someone and then he went into the inner office with the principal and they left me out … to sit outside with the secretary. And while he was in the inner office, I could hear voices and hear his voice raised, you know, as the conversation went on. And then he immediately came out of the office, took me by the hand and we walked home from the school. I just couldn’t understand what was happening because I was so sure that I was going to go to school with Mona and Guinevere, Wanda, and all of my playmates.[13]

The case “Oliver Brown et al. v. The Board of Education of Topeka, Kansas” was named after Oliver Brown as a legal strategy to have a man at the head of the roster. The lawyers, and the National Chapter of the NAACP, also felt that having Mr. Brown at the head of the roster would be better received by the U.S. Supreme Court Justices. The 13 plaintiffs were: Oliver Brown, Darlene Brown, Lena Carper, Sadie Emmanuel, Marguerite Emerson, Shirley Fleming, Zelma Henderson, Shirley Hodison, Maude Lawton, Alma Lewis, Iona Richardson, and Lucinda Todd.[14][15] The last surviving plaintiff, Zelma Henderson, died in Topeka, on May 20, 2008, at age 88.[16][17]

The District Court ruled in favor of the Board of Education, citing the U.S. Supreme Court precedent set in Plessy v. Ferguson, 163 U.S. 537 (1896), which had upheld a state law requiring “separate but equal” segregated facilities for blacks and whites in railway cars.[18] The three-judge District Court panel found that segregation in public education has a detrimental effect on negro children, but denied relief on the ground that the negro and white schools in Topeka were substantially equal with respect to buildings, transportation, curricula, and educational qualifications of teachers.[19]

Supreme Court review

The case of Brown v. Board of Education as heard before the Supreme Court combined five cases: Brown itself, Briggs v. Elliott (filed in South Carolina), Davis v. County School Board of Prince Edward County (filed in Virginia), Gebhart v. Belton (filed in Delaware), and Bolling v. Sharpe (filed in Washington D.C.).

All were NAACP-sponsored cases. The Davis case, the only case of the five originating from a student protest, began when 16-year-old Barbara Rose Johns organized and led a 450-student walkout of Moton High School.[20] The Gebhart case was the only one where a trial court, affirmed by the Delaware Supreme Court, found that discrimination was unlawful; in all the other cases the plaintiffs had lost as the original courts had found discrimination to be lawful.

The Kansas case was unique among the group in that there was no contention of gross inferiority of the segregated schools’ physical plant, curriculum, or staff. The district court found substantial equality as to all such factors. The lower court, in its opinion, noted that, in Topeka, “the physical facilities, the curricula, courses of study, qualification and quality of teachers, as well as other educational facilities in the two sets of schools [were] comparable.”[21] The lower court observed that “colored children in many instances are required to travel much greater distances than they would be required to travel could they attend a white school” but also noted that the school district “transports colored children to and from school free of charge” and that “[n]o such service [was] provided to white children.”[21]

In the Delaware case the district court judge in Gebhart ordered that the black students be admitted to the white high school due to the substantial harm of segregation and the differences that made the separate schools unequal.

The NAACP’s chief counsel, Thurgood Marshall—who was later appointed to the U.S. Supreme Court in 1967—argued the case before the Supreme Court for the plaintiffs. Assistant attorney general Paul Wilson—later distinguished emeritus professor of law at the University of Kansas—conducted the state’s ambivalent defense in his first appellate argument.
In December 1952, the Justice Department filed a friend of the court brief in the case. The brief was unusual in its heavy emphasis on foreign-policy considerations of the Truman administration in a case ostensibly about domestic issues. Of the seven pages covering “the interest of the United States,” five focused on the way school segregation hurt the United States in the Cold War competition for the friendship and allegiance of non-white peoples in countries then gaining independence from colonial rule. Attorney General James P. McGranery noted that

The existence of discrimination against minority groups in the United States has an adverse effect upon our relations with other countries. Racial discrimination furnishes grist for the Communist propaganda mills.[22]

The brief also quoted a letter by Secretary of State Dean Acheson lamenting that
The United States is under constant attack in the foreign press, over the foreign radio, and in such international bodies as the United Nations because of various practices of discrimination in this country.[23]

British barrister and parliamentarian Anthony Lester has written that “Although the Court’s opinion in Brown made no reference to these considerations of foreign policy, there is no doubt that they significantly influenced the decision.”[23]

Unanimous opinion and consensus building

The members of the U.S. Supreme Court that on May 17, 1954, ruled unanimously that racial segregation in public schools is unconstitutional.

In spring 1953, the Court heard the case but was unable to decide the issue and asked to rehear the case in fall 1953, with special attention to whether the Fourteenth Amendment’s Equal Protection Clause prohibited the operation of separate public schools for whites and blacks.[24]

The Court reargued the case at the behest of Associate Justice Felix Frankfurter, who used reargument as a stalling tactic to allow the Court to gather a consensus around a Brown opinion that would outlaw segregation. The justices who supported desegregation spent much effort convincing those who initially intended to dissent to join a unanimous opinion. Although the legal effect would have been the same for a majority decision, it was felt that a dissent could be used by segregation supporters as a legitimizing counter-argument.

Conference notes and draft decisions illustrate the division of opinions before the decision was issued.[25] Justices Douglas, Black, Burton, and Minton were predisposed to overturn Plessy.[25] Fred M. Vinson noted that Congress had not issued desegregation legislation; Stanley F. Reed discussed incomplete cultural assimilation and states’ rights and was inclined to the view that segregation worked to the benefit of the African-American community; Tom C. Clark wrote that “we had led the states on to think segregation is OK and we should let them work it out.”[25] Felix Frankfurter and Robert H. Jackson disapproved of segregation, but were also opposed to judicial activism and expressed concerns about the proposed decision’s enforceability.[25] Chief Justice Vinson had been a key stumbling block. After Vinson died in September 1953, President Dwight D. Eisenhower appointed Earl Warren as Chief Justice.[25] Warren had supported the integration of Mexican-American students in California school systems following Mendez v. Westminster.[26] However, Eisenhower invited Earl Warren to a White House dinner, where the president told him: “These [southern whites] are not bad people. All they are concerned about is to see that their sweet little girls are not required to sit in school alongside some big overgrown Negroes.” Nevertheless, the Justice Department sided with the African American plaintiffs.[27][28][29]

In his reading of the unanimous decision, Justice Warren noted the adverse psychological effects that segregated schools had on African American children.[30]

While all but one justice personally rejected segregation, the judicial-restraint faction questioned whether the Constitution gave the Court the power to order its end. The activist faction believed the Fourteenth Amendment did give the necessary authority and pushed to go ahead. Warren, who held only a recess appointment, kept silent until the Senate confirmed his appointment.

Warren convened a meeting of the justices, and presented to them the simple argument that the only reason to sustain segregation was an honest belief in the inferiority of Negroes. Warren further submitted that the Court must overrule Plessy to maintain its legitimacy as an institution of liberty, and it must do so unanimously to avoid massive Southern resistance. He began to build a unanimous opinion.

Although most justices were immediately convinced, Warren spent some time after this famous speech convincing everyone to sign onto the opinion. Justices Jackson and Reed finally decided to drop their dissent. The final decision was unanimous. Warren drafted the basic opinion and kept circulating and revising it until he had an opinion endorsed by all the members of the Court.[31] Reed was the last holdout and reportedly cried during the reading of the opinion.[32]

Holding

Reporters who observed the court holding were surprised by two facts. First, the decision was unanimous; prior to the ruling, there had been reports that the justices were sharply divided and might not be able to agree. Second, Justice Robert H. Jackson, who had suffered a mild heart attack and was not expected to return to the bench until early June 1954, was in attendance. “Perhaps to emphasize the unanimity of the court, perhaps from a desire to be present when the history-making verdict was announced, Justice Jackson was in his accustomed seat when the court convened.”[33] Reporters also noted that Dean Acheson, the former secretary of state who had related the case to foreign-policy considerations, and Herbert Brownell, the current attorney general, were in the courtroom.[34]

The key holding of the Court was that, even if segregated black and white schools were of equal quality in facilities and teachers, segregation in itself was harmful to black students and unconstitutional. The Court found that segregation by its very nature imposed a significant psychological and social disadvantage on black children, drawing on research conducted by Kenneth Clark assisted by June Shagaloff. This aspect was vital because the question was not whether the schools were “equal”, which under Plessy they nominally should have been, but whether the doctrine of separation itself was constitutional. The justices answered with a strong “no”:

[D]oes segregation of children in public schools solely on the basis of race, even though the physical facilities and other “tangible” factors may be equal, deprive the children of the minority group of equal educational opportunities? We believe that it does. …
“Segregation of white and colored children in public schools has a detrimental effect upon the colored children. The effect is greater when it has the sanction of the law, for the policy of separating the races is usually interpreted as denoting the inferiority of the negro group. A sense of inferiority affects the motivation of a child to learn. Segregation with the sanction of law, therefore, has a tendency to [retard] the educational and mental development of negro children and to deprive them of some of the benefits they would receive in a racial[ly] integrated school system.” …

We conclude that, in the field of public education, the doctrine of “separate but equal” has no place. Separate educational facilities are inherently unequal. Therefore, we hold that the plaintiffs and others similarly situated for whom the actions have been brought are, by reason of the segregation complained of, deprived of the equal protection of the laws guaranteed by the Fourteenth Amendment.
Local outcomes

Judgment in the Supreme Court decision for Brown et al. v. Board of Education of Topeka et al.

The Topeka junior high schools had been integrated since 1941. Topeka High School was integrated from its inception in 1871 and its sports teams from 1949 on.[35] The Kansas law permitting segregated schools allowed them only “below the high school level”.[36]
Soon after the district court decision, election outcomes and the political climate in Topeka changed. The Board of Education of Topeka began to end segregation in the Topeka elementary schools in August 1953, integrating two attendance districts. All the Topeka elementary schools were changed to neighborhood attendance centers in January 1956, although existing students were allowed to continue attending their prior assigned schools at their option.[37][38][39] Plaintiff Zelma Henderson, in a 2004 interview, recalled that no demonstrations or tumult accompanied desegregation in Topeka’s schools:
“They accepted it,” she said. “It wasn’t too long until they integrated the teachers and principals.”[40]

The Topeka Public Schools administration building is named in honor of McKinley Burnett, NAACP chapter president who organized the case.[citation needed]

Monroe Elementary was designated a U.S. National Historic Site unit of the National Park Service on October 26, 1992.

Social implications
Not everyone accepted the Brown v. Board of Education decision. In Virginia, Senator Harry F. Byrd Sr. organized the Massive Resistance movement, which included closing schools rather than desegregating them.[41] See, for example, The Southern Manifesto. For more implications of the Brown decision, see Desegregation.

Deep South
Texas Attorney General John Ben Shepperd organized a campaign to generate legal obstacles to implementation of desegregation.[42]

In 1957, Arkansas Governor Orval Faubus called out his state’s National Guard to block black students’ entry to Little Rock Central High School. President Dwight Eisenhower responded by deploying elements of the 101st Airborne Division from Fort Campbell, Kentucky, to Arkansas and by federalizing Arkansas’s National Guard.[43]

Also in 1957, Florida’s response was mixed. Its legislature passed an Interposition Resolution denouncing the decision and declaring it null and void. But Florida Governor LeRoy Collins, though joining in the protest against the court decision, refused to sign it, arguing that the attempt to overturn the ruling must be done by legal methods.
In Mississippi, fear of violence prevented any plaintiff from bringing a school desegregation suit for the next nine years.[44] When Medgar Evers sued to desegregate the schools of Jackson, Mississippi, in 1963, White Citizens’ Council member Byron De La Beckwith murdered him.[45] Two subsequent trials resulted in hung juries. Beckwith was not convicted of the murder until 1994.[46]

In 1963, Alabama Gov. George Wallace personally blocked the door to Foster Auditorium at the University of Alabama to prevent the enrollment of two black students. This became the infamous Stand in the Schoolhouse Door[47] where Wallace personally backed his “segregation now, segregation tomorrow, segregation forever” policy that he had stated in his 1963 inaugural address.[48] He moved aside only when confronted by General Henry Graham of the Alabama National Guard, who was ordered by President John F. Kennedy to intervene.
Upland South

In North Carolina, the strategy was often to accept Brown nominally while resisting it tacitly. On May 18, 1954, the Greensboro, North Carolina school board declared that it would abide by the Brown ruling. This was the result of the initiative of D. E. Hudgins Jr., a former Rhodes Scholar and prominent attorney, who chaired the school board. It made Greensboro the first, and for years the only, city in the South to announce its intent to comply. However, others in the city resisted integration, putting up legal obstacles to the actual implementation of school desegregation for years afterward, and in 1969 the federal government found the city was not in compliance with the 1964 Civil Rights Act. Transition to a fully integrated school system did not begin until 1971, after numerous local lawsuits and both nonviolent and violent demonstrations. Historians have noted the irony that Greensboro, which had heralded itself as such a progressive city, was one of the last holdouts for school desegregation.[49][50]
In Moberly, Missouri, the schools were desegregated, as ordered. However, after 1955, the African-American teachers from the local “negro school” were not retained; this was ascribed to poor performance. They appealed their dismissal in Naomi Brooks et al., Appellants, v. School District of City of Moberly, Missouri, Etc., et al.; but it was upheld, and SCOTUS declined to hear a further appeal.[51][52]

North
Many Northern cities also had de facto segregation policies, which resulted in a vast gulf in educational resources between black and white communities. In Harlem, New York, for example, not a single new school had been built since the turn of the century, nor did a single nursery school exist, even as the Second Great Migration caused overcrowding of existing schools. Existing schools tended to be dilapidated and staffed with inexperienced teachers. Northern officials were in denial of the segregation, but Brown helped stimulate activism among African-American parents like Mae Mallory who, with support of the NAACP, initiated a successful lawsuit against the city and State of New York on Brown’s principles. Mallory and thousands of other parents bolstered the pressure of the lawsuit with a school boycott in 1959. During the boycott, some of the first freedom schools of the period were established. The city responded to the campaign by permitting more open transfers to high-quality, historically-white schools. (New York’s African-American community, and Northern desegregation activists generally, now found themselves contending with the problem of white flight, however.)[53][54]

The intellectual roots of Plessy v. Ferguson, the landmark 1896 United States Supreme Court decision upholding the constitutionality of racial segregation under the doctrine of “separate but equal”, were, in part, tied to the scientific racism of the era.[55][56] However, the popular support for the decision was more likely a result of the racist beliefs held by many whites at the time.[57] In deciding Brown v. Board of Education, the Supreme Court rejected the ideas of scientific racists about the need for segregation, especially in schools. The Court buttressed its holding by citing (in footnote 11) social science research about the harms to black children caused by segregated schools.

Both scholarly and popular ideas of hereditarianism played an important role in the attack and backlash that followed the Brown decision.[57] The Mankind Quarterly was founded in 1960, in part in response to the Brown decision.[58][59]
Legal criticism and praise

U.S. circuit judges Robert A. Katzmann, Damon J. Keith, and Sonia Sotomayor at a 2004 exhibit on the Fourteenth Amendment, Thurgood Marshall, and Brown v. Board of Education
William Rehnquist wrote a memo titled “A Random Thought on the Segregation Cases” when he was a law clerk for Justice Robert H. Jackson in 1952, during early deliberations that led to the Brown v. Board of Education decision. In his memo, Rehnquist argued: “I realize that it is an unpopular and unhumanitarian position, for which I have been excoriated by ‘liberal’ colleagues but I think Plessy v. Ferguson was right and should be reaffirmed.” Rehnquist continued, “To the argument . . . that a majority may not deprive a minority of its constitutional right, the answer must be made that while this is sound in theory, in the long run it is the majority who will determine what the constitutional rights of the minorities are.”[60] Rehnquist also argued for Plessy with other law clerks.[61]
However, during his 1971 confirmation hearings, Rehnquist said, “I believe that the memorandum was prepared by me as a statement of Justice Jackson’s tentative views for his own use.” Justice Jackson had initially planned to join a dissent in Brown.[62] Later, at his 1986 hearings for the slot of Chief Justice, Rehnquist put further distance between himself and the 1952 memo: “The bald statement that Plessy was right and should be reaffirmed, was not an accurate reflection of my own views at the time.”[63] In any event, while serving on the Supreme Court, Rehnquist made no effort to reverse or undermine the Brown decision, and frequently relied upon it as precedent.[64]

Chief Justice Warren’s reasoning was broadly criticized by contemporary legal academics: Judge Learned Hand decried that the Supreme Court had “assumed the role of a third legislative chamber”,[65] and Herbert Wechsler found Brown impossible to justify on the basis of neutral principles.[66]

Some aspects of the Brown decision are still debated. Notably, Supreme Court Justice Clarence Thomas, himself an African American, wrote in Missouri v. Jenkins (1995) that at the very least, Brown I has been misunderstood by the courts.

Brown I did not say that “racially isolated” schools were inherently inferior; the harm that it identified was tied purely to de jure segregation, not de facto segregation. Indeed, Brown I itself did not need to rely upon any psychological or social-science research in order to announce the simple, yet fundamental truth that the Government cannot discriminate among its citizens on the basis of race. …

Segregation was not unconstitutional because it might have caused psychological feelings of inferiority. Public school systems that separated blacks and provided them with superior educational resources—making blacks “feel” superior to whites sent to lesser schools—would violate the Fourteenth Amendment, whether or not the white students felt stigmatized, just as do school systems in which the positions of the races are reversed. Psychological injury or benefit is irrelevant …

Given that desegregation has not produced the predicted leaps forward in black educational achievement, there is no reason to think that black students cannot learn as well when surrounded by members of their own race as when they are in an integrated environment. (…) Because of their “distinctive histories and traditions,” black schools can function as the center and symbol of black communities, and provide examples of independent black leadership, success, and achievement.[67]

Some constitutional originalists, notably Raoul Berger in his influential 1977 book Government by Judiciary, make the case that Brown cannot be defended by reference to the original understanding of the Fourteenth Amendment. They support this reading by noting that the Civil Rights Act of 1875 did not ban segregated schools and that the same Congress that passed the Fourteenth Amendment also voted to segregate schools in the District of Columbia. Other originalists, including Michael W. McConnell, a federal judge on the United States Court of Appeals for the Tenth Circuit, argue in “Originalism and the Desegregation Decisions” that the Radical Reconstructionists who spearheaded the Fourteenth Amendment were in favor of desegregated southern schools.[68] Evidence supporting this interpretation has come from archived congressional records showing that proposals for federal legislation to enforce school integration were debated in Congress within a few years of the amendment’s ratification.[69]

The case has also attracted some criticism from more liberal authors, including some who say that Chief Justice Warren’s reliance on psychological criteria to find a harm against segregated blacks was unnecessary. For example, Drew S. Days has written:[70] “we have developed criteria for evaluating the constitutionality of racial classifications that do not depend upon findings of psychic harm or social science evidence. They are based rather on the principle that ‘distinctions between citizens solely because of their ancestry are by their very nature odious to a free people whose institutions are founded upon the doctrine of equality,’ Hirabayashi v. United States, 320 U.S. 81 (1943). . . .”

In his book The Tempting of America (page 82), Robert Bork endorsed the Brown decision as follows:
By 1954, when Brown came up for decision, it had been apparent for some time that segregation rarely if ever produced equality. Quite aside from any question of psychology, the physical facilities provided for blacks were not as good as those provided for whites. That had been demonstrated in a long series of cases … The Court’s realistic choice, therefore, was either to abandon the quest for equality by allowing segregation or to forbid segregation in order to achieve equality. There was no third choice. Either choice would violate one aspect of the original understanding, but there was no possibility of avoiding that. Since equality and segregation were mutually inconsistent, though the ratifiers did not understand that, both could not be honored. When that is seen, it is obvious the Court must choose equality and prohibit state-imposed segregation. The purpose that brought the fourteenth amendment into being was equality before the law, and equality, not separation, was written into the law.

In June 1987, Philip Elman, a civil rights attorney who had served as an associate in the Solicitor General’s office during Harry Truman’s term, claimed that he and Associate Justice Felix Frankfurter were mostly responsible for the Supreme Court’s decision, and stated that the NAACP’s arguments did not present strong evidence.[71] Elman has been criticized for offering a self-aggrandizing history of the case, omitting important facts, and denigrating the work of civil rights attorneys who had laid the groundwork for the decision over many decades.[72] However, Frankfurter was also known as one of the court’s most outspoken advocates of judicial restraint, the philosophy of basing court rulings on existing law rather than personal or political considerations.[73][74]

Public officials in the United States today are nearly unanimous in lauding the ruling. In May 2004, on the fiftieth anniversary of the ruling, President George W. Bush spoke at the opening of the Brown v. Board of Education National Historic Site, calling Brown “a decision that changed America for the better, and forever.”[75] Most Senators and Representatives issued press releases hailing the ruling.

In an article in Townhall, Thomas Sowell argued that “When Chief Justice Earl Warren declared in the landmark 1954 case of Brown v. Board of Education that racially separate schools were ‘inherently unequal,’ Dunbar High School was a living refutation of that assumption. And it was within walking distance of the Supreme Court.”[76]

Brown II

In 1955, the Supreme Court considered arguments by the schools requesting relief concerning the task of desegregation. In their decision, which became known as “Brown II”,[77] the court delegated the task of carrying out school desegregation to district courts with orders that desegregation occur “with all deliberate speed,” a phrase traceable to Francis Thompson’s poem “The Hound of Heaven”.[78]

Supporters of the earlier decision were displeased with this decision. The language “all deliberate speed” was seen by critics as too ambiguous to ensure reasonable haste for compliance with the court’s instruction. Many Southern states and school districts interpreted “Brown II” as legal justification for resisting, delaying, and avoiding significant integration for years—and in some cases for a decade or more—using such tactics as closing down school systems, using state money to finance segregated “private” schools, and “token” integration where a few carefully selected black children were admitted to former white-only schools but the vast majority remained in underfunded, unequal black schools.[79]

For example, based on “Brown II,” the U.S. District Court ruled that Prince Edward County, Virginia, did not have to desegregate immediately. When faced with a court order to finally begin desegregation in 1959, the county board of supervisors stopped appropriating money for public schools, which remained closed for five years, from 1959 to 1964.

White students in the county were given assistance to attend white-only “private academies” taught by teachers formerly employed by the public school system, while black students had no education at all unless they moved out of the county. The public schools reopened only after the Supreme Court ruled in Griffin v. County School Board of Prince Edward County that “…the time for mere ‘deliberate speed’ has run out,” and that the county must provide a public school system for all children regardless of race.[80]

Brown III

In 1978, Topeka attorneys Richard Jones, Joseph Johnson, and Charles Scott Jr. (son of the original Brown team member), with assistance from the American Civil Liberties Union, persuaded Linda Brown Smith—who now had her own children in Topeka schools—to be a plaintiff in reopening Brown. They were concerned that the Topeka Public Schools’ policy of “open enrollment” had led to, and would lead to, further segregation. They also believed that, with a choice of open enrollment, white parents would shift their children to “preferred” schools, creating both predominantly African American and predominantly European American schools within the district. The district court reopened the Brown case after a 25-year hiatus but denied the plaintiffs’ request, finding the schools “unitary”. In 1989, a three-judge panel of the Tenth Circuit found, on a 2–1 vote, that the vestiges of segregation remained with respect to student and staff assignment. In 1993, the Supreme Court denied the appellant school district’s request for certiorari and returned the case to District Court Judge Richard Rodgers for implementation of the Tenth Circuit’s mandate.

After a 1994 plan was approved and a bond issue passed, additional elementary magnet schools were opened and district attendance plans redrawn, which resulted in the Topeka schools meeting court standards of racial balance by 1998. Unified status was eventually granted to Topeka Unified School District No. 501 on July 27, 1999. One of the new magnet schools is named after the Scott family attorneys for their role in the Brown case and civil rights.[81]

Related cases
• Plessy v. Ferguson, 163 U.S. 537 (1896)—separate but equal for public facilities
• Cumming v. Richmond County Board of Education 175 U.S. 528 (1899)—sanctioned de jure segregation of races
• Lum v. Rice, 275 U.S. 78 (1927)—separate schools for Chinese pupils from white schoolchildren
• Powell v. Alabama, 287 U.S. 45 (1932)—access to counsel
• Missouri ex rel. Gaines v. Canada, 305 U.S. 337 (1938)—states that provide a school to white students must provide in-state education to black students as well
• Smith v. Allwright, 321 U.S. 649 (1944)—outlawed the exclusion of non-white voters from primary elections
• Hedgepeth and Williams v. Board of Education (1944)—prohibited racial segregation in New Jersey schools
• Mendez v. Westminster, 64 F. Supp. 544 (1946)—prohibits segregating Mexican American children in California
• Sipuel v. Board of Regents of Univ. of Okla., 332 U.S. 631 (1948)—access to taxpayer state funded law schools
• Shelley v. Kraemer, 334 U.S. 1 (1948)—restrictive covenants
• Sweatt v. Painter, 339 U.S. 629 (1950)—segregated law schools in Texas
• McLaurin v. Oklahoma State Regents, 339 U.S. 637 (1950)—prohibits segregation in a public institution of higher learning
• Hernandez v. Texas, 347 U.S. 475 (1954)—the Fourteenth Amendment protects those beyond the racial classes of white or Negro.
• Briggs v. Elliott, 347 U.S. 483 (1952) Brown Case #1—Summerton, South Carolina.
• Davis v. County School Board of Prince Edward County, 103 F. Supp. 337 (1952) Brown Case #2—Prince Edward County, Virginia.
• Gebhart v. Belton, 33 Del. Ch. 144 (1952) Brown Case #3—Claymont, Delaware
• Bolling v. Sharpe, 347 U.S. 497 (1954) Brown companion case—dealt with the constitutionality of segregation in the District of Columbia, which—as a federal district, not a state—is not subject to the Fourteenth Amendment.
• Browder v. Gayle, 142 F. Supp. 707 (1956) – Montgomery, Alabama bus segregation is unconstitutional under the Fourteenth Amendment protections for equal treatment.
• NAACP v. Alabama, 357 U.S. 449 (1958)—privacy of NAACP membership lists, and free association of members
• Cooper v. Aaron, 358 U.S. 1 (1958) – Federal court enforcement of desegregation
• Boynton v. Virginia, 364 U.S. 454 (1960) – outlawed racial segregation in public transportation
• Heart of Atlanta Motel v. United States, 379 U.S. 241 (1964)—held constitutional the Civil Rights Act of 1964, which banned racial discrimination in public places, particularly in public accommodations even in private property.
• Loving v. Virginia, 388 U.S. 1 (1967) – banned anti-miscegenation laws (race-based restrictions on marriage).
• Alexander v. Holmes County Board of Education, 396 U.S. 1218 (1969) – changed Brown’s requirement of desegregation “with all deliberate speed” to one of “desegregation now”
• Swann v. Charlotte-Mecklenburg Board of Education, 402 U.S. 1 (1971) – established bussing as a solution
• Guey Heung Lee v. Johnson, 404 U.S. 1215 (1971)—“Brown v. Board of Education was not written for blacks alone”; desegregation of Asian schools over the opposition of parents of Asian students
• Milliken v. Bradley, 418 U.S. 717 (1974) – rejected bussing across school district lines.
• Parents Involved in Community Schools v. Seattle School District No. 1,[82] 551 U.S. 701, 127 S. Ct. 2738 (2007)—rejected using race as the sole determining factor for assigning students to schools.[83]
• List of United States Supreme Court Cases
* See Case citation for an explanation of these numbers.
See also
• African-American Civil Rights Movement (1896–1954)
• Little Rock Nine
• Rubey Mosley Hulen, federal judge who made a similar ruling in an earlier case
• Timeline of the African American Civil Rights Movement
• Ruby Bridges, the first black child to attend an all-white elementary school in the South

Sorting Out Charlottesville

This post is about VICE footage of Charlottesville

I just watched – all the way through. Wow wow wow.

So here’s my take.

I am beyond anger. All these scenes and words and stridency underscore the hot mess we are in. And I am so pissed at POTUS … spitting mad.

But here’s the thing: Trump did not create this. He took the lid off it, and asked everyone to look inside. It’s a Petri dish, and he is a fungus that grew out of it.

The Petri dish is given a beneficial ecology by Fox, so many fungal variations can thrive. Trump just rose up to be the fungus-in-chief.

Fox is wrong that there are only 100,000 racists in the country. I think it’s 50-100 million. Trump has made it clear that, with a blind ballot, 30% of the country covertly or overtly has these views.

Maybe they are not as strident. Maybe they are too clever to use these disgusting words (duh, it’s not that smart to talk about that beautiful girl Ivanka being with that piece of sh*t Jew husband). Maybe they whisper to people of like mind. Maybe they are a silent class – and love it when someone speaks up on their behalf.

I’m pretty sure that 50-100 million in the US are glad that someone is speaking out about all “this”.

What “this”? The “this” began, seems to me, when a rough, tough white Texan, LBJ, used his swagger and savvy to push through the civil rights legislation we know today.

Then came the hot mess that brought us ten+ million illegal immigrants (as Congress failed to figure something out). That is part of “this”.

Then came affirmative action.

Then came black mayors and electeds.

Then came gay marriage. I’ll bet most of the 30% don’t even know someone that is gay.

Then came all the others – like transgender folks that want to pee.

You get the point. The liberal class has been out there for fifty years “perfecting the union” by extending equality to every group they can think of.

Every time this happens, the class I am talking about says “hey, what am I, chopped liver?”. They get pissed but they have no place to put their anger.

Now, mind you, I LOVE “perfecting the union”. I love Obama’s take on this. But I assumed that progress was possible because this silent class would stand down. WRONG.

The silent class is mega-pissed. Republicans figured this out, and have gotten better and better at speaking to this massive crowd of disaffecteds. But they are covert and clever, not overt. They have perfected the “dog whistle”, where only the disaffected can hear “I am totally with you!”.

As you know, I thought the HRC campaign was abominable. Gotta say, I still thought she would win. But what I am saying above is the best articulation yet of WHY she lost; and WHY we have an idiot president; and WHY we have Charlottesville.

I wish I had made all this up. I did not. The best version of this position can be found here:

http://www.npr.org/2017/08/15/543730312/the-once-and-future-liberal-looks-at-shortfalls-of-american-liberalism

Columbia Professor Mark Lilla has a VERY controversial point. But I think it’s correct. And I think Democrats are sunk unless they stop their nonsense, and start speaking about the whole, more than the parts. The quilt of America is fine. Going to the mat for transgender people’s bathrooms is not. We now know that as “OVER-REACH” – or, to mix metaphors, “A BRIDGE TOO FAR”.

Lilla says that when you speak about the parts, you inevitably leave someone out. I think 50-100 million have heard the electeds speak on all “this”, and they say: I feel left out.

Compounding this problem is that most media types come from diverse cities, and think like I do. They LOVE perfecting the union, and want their readers to know it. And that makes the 50-100 million even MORE pissed off and left out.

We have a hot mess. But maybe, maybe, maybe it’s better that someone, even a complete fool, ripped the cover off the petrie dish. So we can address the hot mess. Before it’s too late.

Folly of One-Way Loyalty

Maybe, instead of bashing Trump at every turn, we can step back and learn from him.

In this case, John Pitney makes a great point about the folly of one-way loyalty:

“John J. Pitney, a political scientist with sterling conservative credentials, has a blistering piece in Politico explaining Trump’s problem: He thinks loyalty flows only one way. “Trump’s life has been a long trail of betrayals,” Pitney writes. He has dumped wives, friends, mentors, protégés, colleagues, business associates, Trump University students and, more recently, political advisers.

“Loyalty is about strength,” Pitney, a professor at Claremont McKenna, writes. “It is about sticking with a person, a cause, an idea or a country even when it is costly, difficult, or unpopular.”

CREDIT: NYT Op Ed

20th Century History on Health Care and Insurance

A historian’s take on health care and insurance in the US:

Key points:

Health care in the US is primarily driven by an “insurance company model”.
There actually was a “medical marketplace” in early 20th century.
One of the best in that marketplace was a “prepaid physician group” with profit sharing for docs.
Truman proposed universal health care.
A.M.A. fought government intervention.
A.M.A. decided that the best way to keep the government out of their industry was to design a private sector model: the insurance company model.
In the insurance company model, insurance companies would pay physicians using fee-for-service compensation.
Thus, physicians became allied with insurance companies – both striving to keep government out of health care. Fee for service was their chosen model.
The model worked to expand coverage: from 25% of the population in 1945 to about 80 percent in 1965.
Elderly did not get covered as well. Congress stepped in with Medicare in 1965.
Because of rising prices, insurers gradually took over. “To constrain rising prices, insurers gradually introduced cost containment procedures and incrementally claimed supervisory authority over doctors. Soon they were reviewing their medical work, standardizing treatment blueprints tied to reimbursements and shaping the practice of medicine.”
Innovation is lacking. Concierge medicine experiments show some promise, like Atlas in Wichita.
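The incentive contrast in the bullets above can be sketched with a toy calculation. All numbers and function names here are made up for illustration: under fee-for-service, each extra procedure adds revenue, while in the old prepaid groups a physician earned a base salary plus a share of group profits, so unnecessary care ate into the physician’s own pay.

```python
# Toy sketch of the two payment models, with entirely made-up numbers.

def fee_for_service_income(procedures, fee=100):
    # Revenue grows linearly with volume: more procedures, more pay.
    return procedures * fee

def prepaid_group_income(members, monthly_fee, costs,
                         base_salary=4000, profit_share=0.10):
    # Base salary plus a share of group profit (membership revenue minus costs).
    revenue = members * monthly_fee
    profit = max(revenue - costs, 0)
    return base_salary + profit_share * profit

# Fee-for-service: 20 extra procedures raise income by $2,000.
print(fee_for_service_income(120) - fee_for_service_income(100))  # 2000

# Prepaid group: revenue is fixed by membership (500 members x $20),
# so $2,000 of unnecessary care lowers the physician's own profit share.
print(prepaid_group_income(500, 20, costs=6000))  # 4400.0
print(prepaid_group_income(500, 20, costs=8000))  # 4200.0
```

Crude as it is, the sketch shows why the prepaid groups neither rationed care (losing members loses revenue) nor oversupplied it (extra costs shrink the profit pool).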

=============
JCR comments:
It’s always easier looking backward. If only 25% of the population has health insurance, it seems eminently sensible that driving that number up to, say, 80% would be a high-priority goal.

That’s what America did: it adopted a high-priority goal to increase health insurance coverage from 25% to 80%. Its chosen method was a fee-for-service reimbursement model – the “insurance model”. We put insurance companies in the driver’s seat, and we encouraged them to work with employers and physicians groups.

They were the middle man:

Insurers made sure that their employer clients had the benefits they needed to attract employees, at a cost that was practical.
Insurers also made sure that their physician partners supplied the services that they needed, at prices that were practical.

So, with the insurer-as-middle-man-model, we achieved our goal of enrolling 80%, up from 25%. 80% of the American population had health insurance in 1965.

So – what’s wrong with that?

It’s mostly very good. But…

Looking backward, it is now obvious what was wrong: the remaining 20%. These are the unemployed, or the seniors, or the ones whose health conditions make their costs truly exorbitant.

While America was getting the 80% “squared away”, the 20% were left to fend for themselves. They overran emergency rooms; they took beds in charity hospitals; they died.

In 1965, we adopted Medicare and Medicaid. Medicare addressed the 20% who were seniors.
Medicaid addressed the 20% who were poor, such as:

Low-income families
Pregnant women
People of all ages with disabilities
People who need long-term care

Most of this happened over time, not in 1965. State offerings vary.

In 1997, we adopted CHIP for children. This addressed the 20% who were kids. 11 million kids got coverage. They were from families with too much income to qualify for Medicaid.

In 2003, we adopted the MMA, the Medicare Prescription Drug, Improvement, and Modernization Act of 2003. Under the MMA, Medicare-approved private health plans were offered (“Medicare Advantage Plans”), and an optional prescription drug benefit was added (“Part D”).

In 2010, the Affordable Care Act was adopted.

So, the key question for today is: why is our health care system such a mess? Read on:

=============
CREDIT: NYT https://www.nytimes.com/2017/06/19/opinion/health-insurance-american-medical-association.html?emc=edit_th_20170619&nl=todaysheadlines&nlid=44049881&_r=0

The Opinion Pages | OP-ED CONTRIBUTOR
How Did Health Care Get to Be Such a Mess?
By CHRISTY FORD CHAPIN | JUNE 19, 2017
The problem with American health care is not the care. It’s the insurance.
Both parties have stumbled to enact comprehensive health care reform because they insist on patching up a rickety, malfunctioning model. The insurance company model drives up prices and fragments care. Rather than rejecting this jerry-built structure, the Democrats’ Obamacare legislation simply added a cracked support beam or two. The Republican bill will knock those out to focus on spackling other dilapidated parts of the system.

An alternative structure can be found in the early decades of the 20th century, when the medical marketplace offered a variety of models. Unions, businesses, consumer cooperatives and ethnic and African-American mutual aid societies had diverse ways of organizing and paying for medical care.

Physicians established a particularly elegant model: the prepaid doctor group. Unlike today’s physician practices, these groups usually staffed a variety of specialists, including general practitioners, surgeons and obstetricians. Patients received integrated care in one location, with group physicians from across specialties meeting regularly to review treatment options for their chronically ill or hard-to-treat patients.

Individuals and families paid a monthly fee, not to an insurance company but directly to the physician group. This system held down costs. Physicians typically earned a base salary plus a percentage of the group’s quarterly profits, so they lacked incentive to either ration care, which would lose them paying patients, or provide unnecessary care.

This contrasts with current examples of such financing arrangements. Where physicians earn a preset salary — for example, in Kaiser Permanente plans or in the British National Health Service — patients frequently complain about rationed or delayed care. When physicians are paid on a fee-for-service basis, for every service or procedure they provide — as they are under the insurance company model — then care is oversupplied. In these systems, costs escalate quickly.

Unfortunately, the leaders of the American Medical Association saw early health care models — union welfare funds, prepaid physician groups — as a threat. A.M.A. members sat on state licensing boards, so they could revoke the licenses of physicians who joined these “alternative” plans. A.M.A. officials likewise saw to it that recalcitrant physicians had their hospital admitting privileges rescinded.

The A.M.A. was also busy working to prevent government intervention in the medical field. Persistent federal efforts to reform health care began during the 1930s. After World War II, President Harry Truman proposed a universal health care system, and archival evidence suggests that policy makers hoped to build the program around prepaid physician groups.

A.M.A. officials decided that the best way to keep the government out of their industry was to design a private sector model: the insurance company model.

In this system, insurance companies would pay physicians using fee-for-service compensation. Insurers would pay for services even though they lacked the ability to control their supply. Moreover, the A.M.A. forbade insurers from supervising physician work and from financing multispecialty practices, which they feared might develop into medical corporations.

With the insurance company model, the A.M.A. could fight off Truman’s plan for universal care and, over the next decade, oppose more moderate reforms offered during the Eisenhower years.

Through each legislative battle, physicians and their new allies, insurers, argued that federal health care funding was unnecessary because they were expanding insurance coverage. Indeed, because of the perceived threat of reform, insurers weathered rapidly rising medical costs and unfavorable financial conditions to expand coverage from about a quarter of the population in 1945 to about 80 percent in 1965.

But private interests failed to cover a sufficient number of the elderly. Consequently, Congress stepped in to create Medicare in 1965. The private health care sector had far more capacity to manage a large, complex program than did the government, so Medicare was designed around the insurance company model. Insurers, moreover, were tasked with helping administer the program, acting as intermediaries between the government and service providers.

With Medicare, the demand for health services increased and medical costs became a national crisis. To constrain rising prices, insurers gradually introduced cost containment procedures and incrementally claimed supervisory authority over doctors. Soon they were reviewing their medical work, standardizing treatment blueprints tied to reimbursements and shaping the practice of medicine.

It’s easy to see the challenge of real reform: To actually bring down costs, legislators must roll back regulations to allow market innovation outside the insurance company model.

In some places, doctors are already trying their hand at practices similar to prepaid physician groups, as in concierge medicine experiments like the Atlas MD plan, a physician cooperative in Wichita, Kan. These plans must be able to skirt state insurance regulations and other laws, such as those prohibiting physicians from owning their own diagnostic facilities.

Both Democrats and Republicans could learn from this lost history of health care innovation.

Christy Ford Chapin is an associate professor of history at the University of Maryland, Baltimore County, a visiting scholar at Johns Hopkins University and the author of “Ensuring America’s Health: The Public Creation of the Corporate Health Care System.”

=============
Brian Hudes Comment:

I saw this one, as well. The author lost credibility for me. Ironically, what the author clearly doesn't realize is that she is making an argument for the Kaiser Permanente model. However, she unfairly and without any data makes the following claim:

“Where physicians earn a preset salary — for example, in Kaiser Permanente plans or in the British National Health Service — patients frequently complain about rationed or delayed care”

Here’s a more balanced and comprehensive assessment supported by third party research:

Health Care Members Speak

==========


Smuggling, Capitalism and the Law of Unintended Consequences

On its face, this article is about the border wall with Mexico, but it is really about 1) the law of unintended consequences, and 2) the nature of capitalism.

The law of unintended consequences should never be underestimated; nor should the ability of capitalism to bring out the creativity of entrepreneurs and organizations when there is big money to be made.

A few notes:

“But rather than stopping smuggling, the barriers have just pushed it farther into the desert, deeper into the ground, into more sophisticated secret compartments in cars and into the drug cartels’ hands.”

“A majority of Americans now favor marijuana legalization, which is hitting the pockets of Mexican smugglers and will do so even more when California starts issuing licenses to sell recreational cannabis next year.”

The price of smuggling any given drug rises in proportion to the difficulty of smuggling it.

52 legal crossings
Nogales (Mexico) and Nogales (US) and the dense homes on border

Tricks:
Coyotes (the human smugglers, once independent, now controlled by the cartels)
Donkeys (the people who actually carry the drugs)
"Clavos" (secret compartments in cars, of ever-growing sophistication)
Trains (a principal means of smuggling)
"Trampolines" (gigantic catapults that hurl the drugs over any wall)
Tunnels (224 discovered between 1990 and 2016)

===============
CREDIT: New York Times Article: mexican-drug-smugglers-to-trump-thanks!

Mexican Drug Smugglers to Trump: Thanks!

Ioan Grillo
MAY 5, 2017

NOGALES, Mexico — Crouched in the spiky terrain near this border city, a veteran smuggler known as Flaco points to the steel border fence and describes how he has taken drugs and people into the United States for more than three decades. His smuggling techniques include everything from throwing drugs over in gigantic catapults to hiding them in the engine cars of freight trains to making side tunnels off the cross-border sewage system.

When asked whether the border wall promised by President Trump will stop smugglers, he smiles. “This is never going to stop, neither the narco trafficking nor the illegals,” he says. “There will be more tunnels. More holes. If it doesn’t go over, it will go under.”

What will change? The fees that criminal networks charge to transport people and contraband across the border. Every time the wall goes up, so do smuggling profits.

The first time Flaco took people over the line was in 1984, when he was 15; he showed them a hole torn in a wire fence on the edge of Nogales for a tip of 50 cents. Today, many migrants pay smugglers as much as $5,000 to head north without papers, trekking for days through the Sonoran Desert. Most of that money goes to drug cartels that have taken over the profitable business.

“From 50 cents to $5,000,” Flaco says. “As the prices went up, the mafia, which is the Sinaloa cartel, took over everything here, drugs and people smuggling.” Sinaloa dominates Nogales and other parts of northwest Mexico, while rivals, including the Juarez, Gulf and Zetas cartels, control other sections of the border. Flaco finished a five-year prison sentence here for drug trafficking in 2009 and has continued to smuggle since.

His comments underline a problem that has frustrated successive American governments and is likely to haunt President Trump, even if the wall becomes more than a rallying cry and he finally gets the billions of dollars needed to fund it. Strengthening defenses does not stop smuggling. It only makes it more expensive, which inadvertently gives more money to criminal networks.

The cartels have taken advantage of this to build a multibillion industry, and they protect it with brutal violence that destabilizes Mexico and forces thousands of Mexicans to head north seeking asylum.

Stretching almost 2,000 miles from the Pacific Ocean to the Gulf of Mexico, the border has proved treacherous to block. It traverses a sparsely populated desert, patches of soft earth that are easy to tunnel through, and the mammoth Rio Grande, which floods its banks, making fencing difficult.

And it contains 52 legal crossing points, where millions of people, cars, trucks and trains enter the United States every week.

President Trump’s idea of a wall is not new. Chunks of walls, fencing and anti-car spikes have been erected periodically, particularly in 1990 and 2006. On April 30, Congress reached a deal to fund the federal budget through September that failed to approve any money for extending the barriers as President Trump has promised. However, it did allocate several hundred million dollars for repairing existing infrastructure, and the White House has said it will use this to replace some fencing with a more solid wall.

But rather than stopping smuggling, the barriers have just pushed it: farther into the desert, deeper into the ground, into more sophisticated secret compartments in cars and into the drug cartels’ hands.

It is particularly concerning how cartels have taken over the human smuggling business. Known as coyotes, these smugglers used to work independently, or in small groups. Now they have to work for the cartel, which takes a huge cut of the profits, Flaco says. If migrants try to cross the border without paying, they risk getting beaten or murdered.

The number of people detained without papers on the southern border has dropped markedly in the first months of the Trump administration, with fewer than 17,000 apprehended in March, the lowest since 2000. But this has nothing to do with the yet-to-be-built new wall. The president’s anti-immigrant rhetoric could be a deterrent — signaling that tweets can have a bigger effect than bricks. However, this may not last, and there is no sign of drug seizures going down.

Flaco grew up in a Nogales slum called Buenos Aires, which has produced generations of smugglers. The residents refer to the people who carry over backpacks full of drugs as burros, or donkeys. “When I first heard about this, I thought they used real donkeys to carry the marijuana,” Flaco says. “Then I realized, we were the donkeys.”

He was paid $500 for his first trip as a donkey when he was in high school, encouraging him to drop out for what seemed like easy money.

The fences haven’t stopped the burros, who use either ropes or their bare hands to scale them. This was captured in extraordinary footage from a Mexican TV crew, showing smugglers climbing into California. But solid walls offer no solution, as they can also be scaled and they make it harder for border patrol agents to spot what smugglers are up to on the Mexican side.

Flaco quickly graduated to building secret compartments in cars. Called clavos, they are fixed into gas tanks, on dashboards, on roofs. The cars, known by customs agents as trap cars, then drive right through the ports of entry. In fact, while most marijuana is caught in the desert, harder drugs such as heroin are far more likely to go over the bridge.

When customs agents learned to look for the switches that opened the secret compartments, smugglers figured out how to do without them. Some new trap cars can be opened only with complex procedures, such as when the driver is in the seat, all doors are closed, the defroster is turned on and a special card is swiped.

Equally sophisticated engineering goes into the tunnels that turn the border into a block of Swiss cheese. Between 1990 and 2016, 224 tunnels were discovered, some with air vents, rails and electric lights. While the drug lord Joaquin Guzman, known as El Chapo, became infamous for using them, Flaco says they are as old as the border itself and began as natural underground rivers.

Tunnels are particularly popular in Nogales, where Mexican federal agents regularly seize houses near the border for having them. Flaco even shows me a filled-in passage that started inside a graveyard tomb. “It’s because Nogales is one of the few border towns that is urbanized right up to the line,” explains Mayor David Cuauhtémoc Galindo. “There are houses that are on both sides of the border at a very short distance,” making it easy to tunnel from one to the other.

Nogales is also connected to its neighbor across the border in Arizona, also called Nogales, by a common drainage system. It cannot be blocked, because the ground slopes downward from Mexico to the United States. Police officers took me into the drainage system and showed me several smuggling tunnels that had been burrowed off it. They had been filled in with concrete, but the officers warned that smugglers could be lurking around to make new ones and that I should hit the ground if we ran into any.

Back above ground, catapults are one of the most spectacular smuggling methods. “We call them trampolines,” Flaco says. “They have a spring that is like a tripod, and two or three people operate them.” Border patrol agents captured one that had been attached to the fence near the city of Douglas, Ariz., in February and showed photos of what looked like a medieval siege weapon.

Freight trains also cross the border, on their way from southern Mexico up to Canada. While agents inspect them, it’s impossible to search all the carriages, which are packed with cargo from cars to canned chilies. Flaco says the train workers are often paid off by the smugglers. He was once caught with a load of marijuana on a train in Arizona, but he managed to persuade police that he was a train worker and did only a month in jail.

While marijuana does less harm, the smugglers also bring heroin, crack cocaine and crystal meth to America, which kill many. Calls to wage war on drugs can be emotionally appealing. The way President Trump linked his promises of a wall to drug problems in rural America was most likely a factor in his victory.

But four decades after Richard Nixon declared a “war on drugs,” despite trillions of dollars spent on agents, soldiers and barriers, drugs are still easy to buy all across America.

President Trump has taken power at a turning point in the drug policy debate. A majority of Americans now favor marijuana legalization, which is hitting the pockets of Mexican smugglers and will do so even more when California starts issuing licenses to sell recreational cannabis next year. President Trump has also called for more treatment for drug addicts. He would be wise to make that, and not the wall, a cornerstone of his drug policy.

Reducing the finances of drug cartels could reduce some of the violence, and the number of people fleeing north to escape it. But to really tackle the issue of human smuggling, the United States must provide a path to papers for the millions of undocumented workers already in the country, and then make sure businesses hire only workers with papers in the future. So long as illegal immigrants can make a living in the United States, smugglers will make a fortune leading them there.

Stopping the demand for the smugglers’ services actually hits them in their pockets. Otherwise, they will just keep getting richer as the bricks get higher.

Ioan Grillo is the author of “Gangster Warlords: Drug Dollars, Killing Fields and the New Politics of Latin America” and a contributing opinion writer.

I Thought I Understood the American Right. Trump Proved Me Wrong.

How to explain Trump? This feature-length article in today’s New York Times Magazine does a great job of pulling together, into one place, the historical strands that made Trump possible.

Including:

- The New Deal put conservatives on their "back foot" and set the stage for an emerging liberal consensus that held for over fifty years.
- The effort by William F. Buckley and the National Review, beginning in 1955, to make conservatism intellectually attractive and defensible.
- New South talking points, instead of outright racism, that were more palatable, like "stable housing values" and "quality local education." These had enormous appeal to the white American middle class.
- Alan Brinkley arguing, in 1994, that American conservatism "had been something of an orphan in historical scholarship."
- Reagan himself, who portrayed a certain kind of character: the kindly paterfamilias, a trustworthy and nonthreatening guardian of the white middle-class suburban enclave.
- Harvard's Lisa McGirr writing in her 2001 book that conservatives were a largely suburban, "highly educated and thoroughly modern group of men and women" who took on "liberal permissiveness" about matters like rising crime rates and the teaching of sex education in public schools.

Two quotes stick with me. The first summarizes the piece:

“Future historians won’t find all that much of a foundation for Trumpism in the grim essays of William F. Buckley, the scrupulous constitutionalist principles of Barry Goldwater or the bright-eyed optimism of Ronald Reagan. They’ll need instead to study conservative history’s political surrealists and intellectual embarrassments, its con artists and tribunes of white rage. It will not be a pleasant story. But if those historians are to construct new arguments to make sense of Trump, the first step may be to risk being impolite.”

And a second quote about Goldwater:

Richard Hofstadter asked, one month before Barry Goldwater's defeat for president: "When, in all our history, has anyone with ideas so bizarre, so archaic, so self-confounding, so remote from the basic American consensus, ever gone so far?"

I find that quote revealing. He correctly called Goldwater's crushing defeat. One would have thought that the exact same question could have been asked on November 8, 2016, anticipating a crushing defeat for Donald J. Trump. And yet, he prevailed!

We owe it to ourselves to ask: Why? Here is one historian’s take:

===============
CREDIT: Feature Article from New York Times Magazine
===============
I Thought I Understood the American Right. Trump Proved Me Wrong.

A historian of conservatism looks back at how he and his peers failed to anticipate the rise of the president.

BY RICK PERLSTEIN
APRIL 11, 2017

Until Nov. 8, 2016, historians of American politics shared a rough consensus about the rise of modern American conservatism. It told a respectable tale. By the end of World War II, the story goes, conservatives had become a scattered and obscure remnant, vanquished by the New Deal and the apparent reality that, as the critic Lionel Trilling wrote in 1950, liberalism was “not only the dominant but even the sole intellectual tradition.”

Year Zero was 1955, when William F. Buckley Jr. started National Review, the small-circulation magazine whose aim, Buckley explained, was to “articulate a position on world affairs which a conservative candidate can adhere to without fear of intellectual embarrassment or political surrealism.” Buckley excommunicated the John Birch Society, anti-Semites and supporters of the hyperindividualist Ayn Rand, and his cohort fused the diverse schools of conservative thinking — traditionalist philosophers, militant anti-Communists, libertarian economists — into a coherent ideology, one that eventually came to dominate American politics.

I was one of the historians who helped forge this narrative. My first book, “Before the Storm,” was about the rise of Senator Barry Goldwater, the uncompromising National Review favorite whose refusal to exploit the violent backlash against civil rights, and whose bracingly idealistic devotion to the Constitution as he understood it — he called for Social Security to be made “voluntary” — led to his crushing defeat in the 1964 presidential election. Goldwater’s loss, far from dooming the American right, inspired a new generation of conservative activists to redouble their efforts, paving the way for the Reagan revolution. Educated whites in the prosperous metropolises of the New South sublimated the frenetic, violent anxieties that once marked race relations in their region into more palatable policy concerns about “stable housing values” and “quality local education,” backfooting liberals and transforming conservatives into mainstream champions of a set of positions with enormous appeal to the white American middle class.

These were the factors, many historians concluded, that made America a “center right” nation. For better or for worse, politicians seeking to lead either party faced a new reality. Democrats had to honor the public’s distrust of activist government (as Bill Clinton did with his call for the “end of welfare as we know it”). Republicans, for their part, had to play the Buckley role of denouncing the political surrealism of the paranoid fringe (Mitt Romney’s furious backpedaling after joking, “No one’s ever asked to see my birth certificate”).

Then the nation’s pre-eminent birther ran for president. Trump’s campaign was surreal and an intellectual embarrassment, and political experts of all stripes told us he could never become president. That wasn’t how the story was supposed to end. National Review devoted an issue to writing Trump out of the conservative movement; an editor there, Jonah Goldberg, even became a leader of the “Never Trump” crusade. But Trump won — and some conservative intellectuals embraced a man who exploited the same brutish energies that Buckley had supposedly banished.

The professional guardians of America’s past, in short, had made a mistake. We advanced a narrative of the American right that was far too constricted to anticipate the rise of a man like Trump. Historians, of course, are not called upon to be seers. Our professional canons warn us against presentism — we are supposed to weigh the evidence of the past on its own terms — but at the same time, the questions we ask are conditioned by the present. That is, ultimately, what we are called upon to explain. Which poses a question: If Donald Trump is the latest chapter of conservatism’s story, might historians have been telling that story wrong?

American historians’ relationship to conservatism itself has a troubled history. Even after Ronald Reagan’s electoral-college landslide in 1980, we paid little attention to the right: The central narrative of America’s political development was still believed to be the rise of the liberal state. But as Newt Gingrich’s right-wing revolutionaries prepared to take over the House of Representatives in 1994, the scholar Alan Brinkley published an essay called “The Problem of American Conservatism” in The American Historical Review. American conservatism, Brinkley argued, “had been something of an orphan in historical scholarship,” and that was “coming to seem an ever-more-curious omission.” The article inaugurated the boom in scholarship that brought us the story, now widely accepted, of conservatism’s triumphant rise.

That story was in part a rejection of an older story. Until the 1990s, the most influential writer on the subject of the American right was Richard Hofstadter, a colleague of Trilling’s at Columbia University in the postwar years. Hofstadter was the leader of the “consensus” school of historians; the “consensus” being Americans’ supposed agreement upon moderate liberalism as the nation’s natural governing philosophy. He didn’t take the self-identified conservatives of his own time at all seriously. He called them “pseudoconservatives” and described, for instance, followers of the red-baiting Republican senator Joseph McCarthy as cranks who salved their “status anxiety” with conspiracy theories and bizarre panaceas. He named this attitude “the paranoid style in American politics” and, in an article published a month before Barry Goldwater’s presidential defeat, asked, “When, in all our history, has anyone with ideas so bizarre, so archaic, so self-confounding, so remote from the basic American consensus, ever gone so far?”

It was a strangely ahistoric question; many of Goldwater’s ideas hewed closely to a well-established American distrust of statism that goes back all the way to the nation’s founding. It betokened too a certain willful blindness toward the evidence that was already emerging of a popular backlash against liberalism. Reagan’s gubernatorial victory in California two years later, followed by his two landslide presidential wins, made a mockery of Hofstadter. Historians seeking to grasp conservatism’s newly revealed mass appeal would have to take the movement on its own terms.

That was my aim when I took up the subject in the late 1990s — and, even more explicitly, the aim of Lisa McGirr, now of Harvard University, whose 2001 book, “Suburban Warriors: The Origins of the New American Right,” became a cornerstone of the new literature. Instead of pronouncing upon conservatism from on high, as Hofstadter had, McGirr, a social historian, studied it from the ground up, attending respectfully to what activists understood themselves to be doing. What she found was “a highly educated and thoroughly modern group of men and women,” normal participants in the “bureaucratized world of post-World War II America.” They built a “vibrant and remarkable political mobilization,” she wrote, in an effort to address political concerns that would soon be resonating nationwide — for instance, their anguish at “liberal permissiveness” about matters like rising crime rates and the teaching of sex education in public schools.

But if Hofstadter was overly dismissive of how conservatives understood themselves, the new breed of historians at times proved too credulous. McGirr diligently played down the sheer bloodcurdling hysteria of conservatives during the period she was studying — for example, one California senator’s report in 1962 that he had received thousands of letters from constituents concerned about a rumor that Communist Chinese commandos were training in Mexico for an imminent invasion of San Diego. I sometimes made the same mistake. Writing about the movement that led to Goldwater’s 1964 Republican nomination, for instance, it never occurred to me to pay much attention to McCarthyism, even though McCarthy helped Goldwater win his Senate seat in 1952, and Goldwater supported McCarthy to the end. (As did William F. Buckley.) I was writing about the modern conservative movement, the one that led to Reagan, not about the brutish relics of a more gothic, ill-formed and supposedly incoherent reactionary era that preceded it.

A few historians have provocatively followed a different intellectual path, avoiding both the bloodlessness of the new social historians and the psychologizing condescension of the old Hofstadter school. Foremost among them is Leo Ribuffo, a professor at George Washington University. Ribuffo’s surname announces his identity in the Dickensian style: Irascible, brilliant and deeply learned, he is one of the profession’s great rebuffers. He made his reputation with an award-winning 1983 study, “The Old Christian Right: The Protestant Far Right From the Great Depression to the Cold War,” and hasn’t published a proper book since — just a series of coruscating essays that frequently focus on what everyone else is getting wrong. In the 1994 issue of The American Historical Review that featured Alan Brinkley’s “The Problem of American Conservatism,” Ribuffo wrote a response contesting Brinkley’s contention, now commonplace, that Trilling was right about American conservatism’s shallow roots. Ribuffo argued that America’s anti-liberal traditions were far more deeply rooted in the past, and far angrier, than most historians would acknowledge, citing a long list of examples from “regional suspicions of various metropolitan centers and the snobs who lived there” to “white racism institutionalized in slavery and segregation.”

After the election, Ribuffo told me that if he were to write a similar response today, he would call it, “Why Is There So Much Scholarship on ‘Conservatism,’ and Why Has It Left the Historical Profession So Obtuse About Trumpism?” One reason, as Ribuffo argues, is the conceptual error of identifying a discrete “modern conservative movement” in the first place. Another reason, though, is that historians of conservatism, like historians in general, tend to be liberal, and are prone to liberalism’s traditions of politesse. It’s no surprise that we are attracted to polite subjects like “colorblind conservatism” or William F. Buckley.

Our work might have been less obtuse had we shared the instincts of a New York University professor named Kim Phillips-Fein. “Historians who write about the right should find ways to do so with a sense of the dignity of their subjects,” she observed in a 2011 review, “but they should not hesitate to keep an eye out for the bizarre, the unusual, or the unsettling.”

Looking back from that perspective, we can now see a history that is indeed unsettling — but also unsettlingly familiar. Consider, for example, an essay published in 1926 by Hiram Evans, the imperial wizard of the Ku Klux Klan, in the exceedingly mainstream North American Review. His subject was the decline of “Americanism.” Evans claimed to speak for an abused white majority, “the so-called Nordic race,” which, “with all its faults, has given the world almost the whole of modern civilization.” Evans, a former dentist, proposed that his was “a movement of plain people,” and acknowledged that this “lays us open to the charge of being hicks and ‘rubes’ and ‘drivers of secondhand Fords.’ ” But over the course of the last generation, he wrote, these good people “have found themselves increasingly uncomfortable, and finally deeply distressed,” watching a “moral breakdown” that was destroying a once-great nation. First, there was “confusion in thought and opinion, a groping and hesitancy about national affairs and private life alike, in sharp contrast to the clear, straightforward purposes of our earlier years.” Next, they found “the control of much of our industry and commerce taken over by strangers, who stacked the cards of success and prosperity against us,” and ultimately these strangers “came to dominate our government.” The only thing that would make America great again, as it were, was “a return of power into the hands of everyday, not highly cultured, not overly intellectualized, but entirely unspoiled and not de-Americanized average citizens of old stock.”

This “Second Klan” (the first was formed during Reconstruction) scrambles our pre-Trump sense of what right-wing ideology does and does not comprise. (Its doctrines, for example, included support for public education, to weaken Catholic parochial schools.) The Klan also put the predations of the international banking class at the center of its rhetoric. Its worldview resembles, in fact, the right-wing politics of contemporary Europe — a tradition, heretofore judged foreign to American politics, called “herrenvolk republicanism,” that reserved social democracy solely for the white majority. By reaching back to the reactionary traditions of the 1920s, we might better understand the alliance between the “alt-right” figures that emerged as fervent Trump supporters during last year’s election and the ascendant far-right nativist political parties in Europe.

None of this history is hidden. Indeed, in the 1990s, a rich scholarly literature emerged on the 1920s Klan and its extraordinary, and decidedly national, influence. (One hotbed of Klan activity, for example, was Anaheim, Calif. McGirr’s “Suburban Warriors” mentions this but doesn’t discuss it; neither did I in my own account of Orange County conservatism in “Before the Storm.” Again, it just didn’t seem relevant to the subject of the modern conservative movement.) The general belief among historians, however, was that the Klan’s national influence faded in the years after 1925, when Indiana’s grand dragon, D.C. Stephenson, who served as the de facto political boss for the entire state, was convicted of murdering a young woman.

But the Klan remained relevant far beyond the South. In 1936 a group called the Black Legion, active in the industrial Midwest, burst into public consciousness after members assassinated a Works Progress Administration official in Detroit. The group, which considered itself a Klan enforcement arm, dominated the news that year. The F.B.I. estimated its membership at 135,000, including a large number of public officials, possibly including Detroit’s police chief. The Associated Press reported in 1936 that the group was suspected of assassinating as many as 50 people. In 1937, Humphrey Bogart starred in a film about it. In an informal survey, however, I found that many leading historians of the right — including one who wrote an important book covering the 1930s — hadn’t heard of the Black Legion.

Stephen H. Norwood, one of the few historians who did study the Black Legion, also mined another rich seam of neglected history in which far-right vigilantism and outright fascism routinely infiltrated the mainstream of American life. The story begins with Father Charles Coughlin, the Detroit-based “radio priest” who at his peak reached as many as 30 million weekly listeners. In 1938, Coughlin’s magazine, Social Justice, began reprinting “Protocols of the Learned Elders of Zion,” a forged tract about a global Jewish conspiracy first popularized in the United States by Henry Ford. After presenting this fictitious threat, Coughlin’s paper called for action, in the form of a “crusade against the anti-Christian forces of the red revolution” — a call that was answered, in New York and Boston, by a new organization, the Christian Front. Its members were among the most enthusiastic participants in a 1939 pro-Hitler rally that packed Madison Square Garden, where the leader of the German-American Bund spoke in front of an enormous portrait of George Washington flanked by swastikas.

The Bund took a mortal hit that same year — its leader was caught embezzling — but the Christian Front soldiered on. In 1940, a New York chapter was raided by the F.B.I. for plotting to overthrow the government. The organization survived, and throughout World War II carried out what the New York Yiddish paper The Day called “small pogroms” in Boston and New York that left Jews in “mortal fear” of “almost daily” beatings. Victims who complained to authorities, according to news reports, were “insulted and beaten again.” Young Irish-Catholic men inspired by the Christian Front desecrated nearly every synagogue in Washington Heights. The New York Catholic hierarchy, the mayor of Boston and the governor of Massachusetts largely looked the other way.

Why hasn’t the presence of organized mobs with backing in powerful places disturbed historians’ conclusion that the American right was dormant during this period? In fact, the “far right” was never that far from the American mainstream. The historian Richard Steigmann-Gall, writing in the journal Social History, points out that “scholars of American history are by and large in agreement that, in spite of a welter of fringe radical groups on the right in the United States between the wars, fascism never ‘took’ here.” And, unlike in Europe, fascists did not achieve governmental power. Nevertheless, Steigmann-Gall continues, “fascism had a very real presence in the U.S.A., comparable to that on continental Europe.” He cites no less mainstream an organization than the American Legion, whose “National Commander” Alvin Owsley proclaimed in 1922, “the Fascisti are to Italy what the American Legion is to the United States.” A decade later, Chicago named a thoroughfare after the Fascist military leader Italo Balbo. In 2011, Italian-American groups in Chicago protested a movement to rename it.

Anti-Semitism in America declined after World War II. But as Leo Ribuffo points out, the underlying narrative — of a diabolical transnational cabal of aliens plotting to undermine the very foundations of Christian civilization — survived in the anti-Communist diatribes of Joseph McCarthy. The alien narrative continues today in the work of National Review writers like Andrew McCarthy (“How Obama Embraces Islam’s Sharia Agenda”) and Lisa Schiffren (who argued that Obama’s parents could be secret Communists because “for a white woman to marry a black man in 1958, or ’60, there was almost inevitably a connection to explicit Communist politics”). And it found its most potent expression in Donald Trump’s stubborn insistence that Barack Obama was not born in the United States.

Trump’s connection to this alternate right-wing genealogy is not just rhetorical. In 1927, 1,000 hooded Klansmen fought police in Queens in what The Times reported as a “free for all.” One of those arrested at the scene was the president’s father, Fred Trump. (Trump’s role in the melee is unclear; the charge — “refusing to disperse” — was later dropped.) In the 1950s, Woody Guthrie, at the time a resident of the Beach Haven housing complex the elder Trump built near Coney Island, wrote a song about “Old Man Trump” and the “Racial hate/He stirred up/In the bloodpot of human hearts/When he drawed/That color line” in one of his housing developments. In 1973, when Donald Trump was working at Fred’s side, both father and son were named in a federal housing-discrimination suit. The family settled with the Justice Department in the face of evidence that black applicants were told units were not available even as whites were welcomed with open arms.

The 1960s and ’70s New York in which Donald Trump came of age, as much as Klan-ridden Indiana in the 1920s or Barry Goldwater’s Arizona in the 1950s, was at conservatism’s cutting edge, setting the emotional tone for a politics of rage. In 1966, when Trump was 20, Mayor John Lindsay placed civilians on a board to more effectively monitor police abuse. The president of the Patrolmen’s Benevolent Association — responding, “I am sick and tired of giving in to minority groups and their gripes and their shouting” — led a referendum effort to dissolve the board that won 63 percent of the vote. Two years later, fights between supporters and protesters of George Wallace at a Madison Square Garden rally grew so violent that, The New Republic observed, “never again will you read about Berlin in the ’30s without remembering this wild confrontation here of two irrational forces.”

The rest of the country followed New York’s lead. In 1970, after the shooting deaths of four students during antiwar protests at Kent State University in Ohio, a Gallup poll found that 58 percent of Americans blamed the students for their own deaths. (“If they didn’t do what the Guards told them, they should have been mowed down,” one parent of Kent State students told an interviewer.) Days later, hundreds of construction workers from the World Trade Center site beat antiwar protesters at City Hall with their hard hats. (“It was just like Iwo Jima,” an impressed witness remarked.) That year, reports the historian Katherine Scott, 76 percent of Americans “said they did not support the First Amendment right to assemble and dissent from government policies.”

In 1973, the reporter Gail Sheehy joined a group of blue-collar workers watching the Watergate hearings in a bar in Astoria, Queens. “If I was Nixon,” one of them said, “I’d shoot every one of them.” (Who “they” were went unspecified.) This was around the time when New Yorkers were leaping to their feet and cheering during screenings of “Death Wish,” a hit movie about a liberal architect, played by Charles Bronson, who shoots muggers at point-blank range. At an October 2015 rally near Nashville, Donald Trump told his supporters: “I have a license to carry in New York, can you believe that? Nobody knows that. Somebody attacks me, oh, they’re gonna be shocked.” He imitated a cowboy-style quick draw, and an appreciative crowd shouted out the name of Bronson’s then-41-year-old film: “ ‘Death Wish’!”

In 1989, a young white woman was raped in Central Park. Five teenagers, four black and one Latino, confessed to participating in the crime. At the height of the controversy, Donald Trump took out full-page ads in all the major New York daily papers calling for the return of the death penalty. It was later proved the police had essentially tortured the five into their confessions, and they were eventually cleared by DNA evidence. Trump, however, continues to insist upon their guilt. That confidence resonates deeply with what the sociologist Lawrence Rosenthal calls New York’s “hard-hat populism” — an attitude, Rosenthal hypothesizes, that Trump learned working alongside the tradesmen in his father’s real estate empire. But the case itself also resonates deeply with narratives dating back to the first Ku Klux Klan of white womanhood defiled by dark savages. Trump’s public call for the supposed perpetrators’ hides, no matter the proof of guilt or innocence, mimics the rituals of Southern lynchings.

When Trump vowed on the campaign trail to Make America Great Again, he was generally unclear about when exactly it stopped being great. The Vanderbilt University historian Jefferson Cowie tells a story that points to a possible answer. In his book “The Great Exception,” he suggests that what historians considered the main event in 20th-century American political development — the rise and consolidation of the “New Deal order” — was in fact an anomaly, made possible by an unusual convergence of political factors. One of those was immigration. At the beginning of the 20th century, millions of impoverished immigrants, mostly Catholic and Jewish, entered an overwhelmingly Protestant country. It was only when that demographic transformation was suspended by the 1924 Immigration Act that majorities of Americans proved willing to vote for many liberal policies. In 1965, Congress once more allowed large-scale immigration to the United States — and it is no accident that this date coincides with the rise of the conservative backlash against liberalism itself, now that its spoils would be more widely distributed among nonwhites.

The liberalization of immigration law is an obsession of the alt-right. Trump has echoed their rage. “We’ve admitted 59 million immigrants to the United States between 1965 and 2015,” he noted last summer, with rare specificity. “ ‘Come on in, anybody. Just come on in.’ Not anymore.” This was a stark contrast to Reagan, who venerated immigrants, proudly signing a 1986 bill, sponsored by the conservative Republican senator Alan Simpson, that granted many undocumented immigrants citizenship. Shortly before announcing his 1980 presidential run, Reagan even boasted of his wish “to create, literally, a common market situation here in the Americas with an open border between ourselves and Mexico.” But on immigration, at least, it is Trump, not Reagan, who is the apotheosis of the brand of conservatism that now prevails.

A puzzle remains. If Donald Trump was elected as a Marine Le Pen-style — or Hiram Evans-style — herrenvolk republican, what are we to make of the fact that he placed so many bankers and billionaires in his cabinet, and has relentlessly pursued so many 1-percent-friendly policies? More to the point, what are we to make of the fact that his supporters don’t seem to mind?

Here, however, Trump is far from unique. The history of bait-and-switch between conservative electioneering and conservative governance is another rich seam that calls out for fresh scholarly excavation: not of how conservative voters see their leaders, but of the neglected history of how conservative leaders see their voters.

In their 1987 book, “Right Turn,” the political scientists Joel Rogers and Thomas Ferguson presented public-opinion data demonstrating that Reagan’s crusade against activist government, which was widely understood to be the source of his popularity, was not, in fact, particularly popular. For example, when Reagan was re-elected in 1984, only 35 percent of voters favored significant cuts in social programs to reduce the deficit. Much excellent scholarship, well worth revisiting in the age of Trump, suggests an explanation for Reagan’s subsequent success at cutting back social programs in the face of hostile public opinion: It was business leaders, not the general public, who moved to the right, and they became increasingly aggressive and skilled in manipulating the political process behind the scenes.

But another answer hides in plain sight. The often-cynical negotiation between populist electioneering and plutocratic governance on the right has long been not so much a matter of policy as it has been a matter of show business. The media scholar Tim Raphael, in his 2009 book, “The President Electric: Ronald Reagan and the Politics of Performance,” calls the three-minute commercials that interrupted episodes of “General Electric Theater” — starring Reagan and his family in their state-of-the-art Pacific Palisades home, outfitted for them by G.E. — television’s first “reality show.” For the California voters who soon made him governor, the ads created a sense of Reagan as a certain kind of character: the kindly paterfamilias, a trustworthy and nonthreatening guardian of the white middle-class suburban enclave. Years later, the producers of “The Apprentice” carefully crafted a Trump character who was the quintessence of steely resolve and all-knowing mastery. American voters noticed. Linda Lucchese, a Trump convention delegate from Illinois who had never previously been involved in politics, told me that she watched “The Apprentice” and decided that Trump would make a perfect president. “All those celebrities,” she told me: “They showed him respect.”

It is a short leap from advertising and reality TV to darker forms of manipulation. Consider the parallels since the 1970s between conservative activism and the traditional techniques of con men. Direct-mail pioneers like Richard Viguerie created hair-on-fire campaign-fund-raising letters about civilization on the verge of collapse. One 1979 pitch warned that “federal and state legislatures are literally flooded with proposed laws that are aimed at total confiscation of firearms from law-abiding citizens.” Another, from the 1990s, warned that “babies are being harvested and sold on the black market by Planned Parenthood clinics.” Recipients of these alarming missives sent checks to battle phony crises, and what they got in return was very real tax cuts for the rich. Note also the more recent connection between Republican politics and “multilevel marketing” operations like Amway (Trump’s education secretary, Betsy DeVos, is the wife of Amway’s former president and the daughter-in-law of its co-founder); and how easily some of these marketing schemes shade into the promotion of dubious miracle cures (Ben Carson, secretary of housing and urban development, with “glyconutrients”; Mike Huckabee shilling for a “solution kit” to “reverse” diabetes; Trump himself taking on a short-lived nutritional-supplements multilevel marketing scheme in 2009). The dubious grifting of Donald Trump, in short, is a part of the structure of conservative history.

Future historians won’t find all that much of a foundation for Trumpism in the grim essays of William F. Buckley, the scrupulous constitutionalist principles of Barry Goldwater or the bright-eyed optimism of Ronald Reagan. They’ll need instead to study conservative history’s political surrealists and intellectual embarrassments, its con artists and tribunes of white rage. It will not be a pleasant story. But if those historians are to construct new arguments to make sense of Trump, the first step may be to risk being impolite.

Editors’ Note: April 16, 2017
An essay on Page 36 this weekend by a historian about how conservatism has changed over the years cites Jonah Goldberg of the National Review as an example of a conservative intellectual who embraced Donald J. Trump following the presidential election. That is a mischaracterization of the views of Mr. Goldberg, who has continued to be critical of Mr. Trump.

Rick Perlstein is the author, most recently, of “The Invisible Bridge: The Fall of Nixon and the Rise of Reagan.”