Category Archives: Politics

Dianne Dillon-Ridgley

Karen first met Dianne through the Women’s Network for a Sustainable Future (WNSF).

As I got to know Dianne better, I realized that there were many stories: facets of her experience and interests that make her life very complex, but also very interesting.

I came to realize that she believes that her myriad interests are really one interest: justice.

If I were to try to summarize her interests, I might do it this way:

Sustainability (including Energy, Environment, Environmental Health)
Civil Rights
Women’s Rights

Her story includes many close relatives who were part of the Thurgood Marshall precedent cases that led up to Brown v. Board of Education. That 1954 ruling overturned Plessy v. Ferguson (1896), which had held that segregation was legal so long as facilities were “separate but equal”. The Court ruled that segregation violated the Fourteenth Amendment (“no State shall … deny to any person … the equal protection of the laws”).

Her organizational affiliations:

Interface
Howard University (alumna)
Women’s Network for a Sustainable Future (WNSF)
Green Mountain Energy
Auburn University
River Network
Center for International Environmental Law
National Wildlife Federation
University of Indiana (School for Public Environmental Administration)
Zero Population Growth

Her full biography is below:
Ms. Dianne Dillon-Ridgley serves as an Adjunct Lecturer at the University of Indiana School for Public Environmental Administration. Since 1997, she has represented the World Young Women’s Christian Association at U.N. headquarters. From 1995 to 1998, she served as a Senior Policy Analyst of the Women’s Environment and Development Organization, an international women’s advocacy network for environmental, economic, and sustainability issues, and from 1998 to 1999 as that organization’s Interim Executive Director. From 1994 to 1997, she served as National President of Zero Population Growth, the nation’s largest grassroots organization concerned with rapid population growth and the environment. In 1998, she was elected to the Global Water Partnership (Stockholm), and in 1999 she was appointed to the Oxford University Commission on Sustainable Consumption (UK). She chairs the Environmental Advisory Board of Green Mountain Energy Company and has served as a Director of that company since July 1999. She was appointed by President Clinton to the President’s U.S. Council on Sustainable Development in 1994 and served as Co-Chair of the Council’s International and Population/Consumption Task Forces until the Council’s dissolution in June 1999. She serves as a trustee of River Network, the Center for International Environmental Law, the Natural Step-US, and Population Connection; as a Director of the National Wildlife Federation, Inc.; as a trustee of the International Board of Auburn University’s School of Human Sciences; and as a Member of the Editorial Advisory Board for Aspen Law and Business’ Fair Housing and Fair Lending publications. She also serves on the boards of five nonprofit organizations and one private company. She served as a Director of Interface Inc. from February 1997 until May 12, 2014. Ms. Dillon-Ridgley completed her undergraduate work at Howard University and is state-certified by the Iowa Mediation Service as a mediator specializing in agricultural mediation and public policy negotiation.

=============== Notes on Brown v. Board of Education (Topeka) (1954) ===============

CREDIT: https://en.wikipedia.org/wiki/Brown_v._Board_of_Education

Brown v. Board of Education
(Oliver Brown, et al. v. Board of Education of Topeka, et al.)

Supreme Court of the United States
Argued December 9, 1952
Reargued December 8, 1953
Decided May 17, 1954

Citations
347 U.S. 483
74 S. Ct. 686; 98 L. Ed. 873; 1954 U.S. LEXIS 2094; 53 Ohio Op. 326; 38 A.L.R.2d 1180

Prior history
Judgment for defendants, 98 F. Supp. 797 (D. Kan. 1951)
Subsequent history
Judgment on relief, 349 U.S. 294 (1955) (Brown II); on remand, 139 F. Supp. 468 (D. Kan. 1955); motion to intervene granted, 84 F.R.D. 383 (D. Kan. 1979); judgment for defendants, 671 F. Supp. 1290 (D. Kan. 1987); reversed, 892 F.2d 851 (10th Cir. 1989); vacated, 503 U.S. 978 (1992) (Brown III); judgment reinstated, 978 F.2d 585 (10th Cir. 1992); judgment for defendants, 56 F. Supp. 2d 1212 (D. Kan. 1999)

Holding
Segregation of students in public schools violates the Equal Protection Clause of the Fourteenth Amendment, because separate facilities are inherently unequal. District Court of Kansas reversed. NOTE: the Fourteenth Amendment says “no State shall … deny to any person … the equal protection of the laws”.

Court membership
Chief Justice
Earl Warren
Associate Justices
Hugo Black · Stanley F. Reed
Felix Frankfurter · William O. Douglas
Robert H. Jackson · Harold H. Burton
Tom C. Clark · Sherman Minton

Case opinions
Majority
Warren, joined by a unanimous Court
Laws applied
U.S. Const. amend. XIV

This case overturned a previous ruling or rulings
Plessy v. Ferguson (1896)
Cumming v. Richmond County Board of Education (1899)
Berea College v. Kentucky (1908)

[Image: Educational segregation in the US prior to Brown]
Brown v. Board of Education of Topeka, 347 U.S. 483 (1954), was a landmark United States Supreme Court case in which the Court declared state laws establishing separate public schools for black and white students to be unconstitutional. The decision overturned the Plessy v. Ferguson decision of 1896, which allowed state-sponsored segregation, insofar as it applied to public education. Handed down on May 17, 1954, the Warren Court’s unanimous (9–0) decision stated that “separate educational facilities are inherently unequal.” As a result, de jure racial segregation was ruled a violation of the Equal Protection Clause of the Fourteenth Amendment of the United States Constitution. This ruling paved the way for integration and was a major victory of the Civil Rights Movement,[1] and a model for many future impact litigation cases.[2] However, the decision’s fourteen pages did not spell out any sort of method for ending racial segregation in schools, and the Court’s second decision in Brown II, 349 U.S. 294 (1955) only ordered states to desegregate “with all deliberate speed”.

Background
For much of the sixty years preceding the Brown case, race relations in the United States had been dominated by racial segregation. This policy had been endorsed in 1896 by the United States Supreme Court case of Plessy v. Ferguson, which held that as long as the separate facilities for the separate races were equal, segregation did not violate the Fourteenth Amendment (“no State shall … deny to any person … the equal protection of the laws”).

The plaintiffs in Brown asserted that this system of racial separation, while masquerading as providing separate but equal treatment of both white and black Americans, instead perpetuated inferior accommodations, services, and treatment for black Americans. Racial segregation in education varied widely from the 17 states that required racial segregation to the 16 in which it was prohibited. Brown was influenced by UNESCO’s 1950 Statement, signed by a wide variety of internationally renowned scholars, titled The Race Question.[3] This declaration denounced previous attempts at scientifically justifying racism as well as morally condemning racism. Another work that the Supreme Court cited was Gunnar Myrdal’s An American Dilemma: The Negro Problem and Modern Democracy (1944).[4] Myrdal had been a signatory of the UNESCO declaration. The research performed by the educational psychologists Kenneth B. Clark and Mamie Phipps Clark also influenced the Court’s decision.[5] The Clarks’ “doll test” studies presented substantial arguments to the Supreme Court about how segregation affected black school children’s mental status.[6]

The United States and the Soviet Union were both at the height of the Cold War during this time, and U.S. officials, including Supreme Court Justices, were highly aware of the harm that segregation and racism did to America’s international image. When Justice William O. Douglas traveled to India in 1950, the first question he was asked was, “Why does America tolerate the lynching of Negroes?” Douglas later wrote that he had learned from his travels that “the attitude of the United States toward its colored minorities is a powerful factor in our relations with India.” Chief Justice Earl Warren, nominated to the Supreme Court by President Eisenhower, echoed Douglas’s concerns in a 1954 speech to the American Bar Association, proclaiming that “Our American system like all others is on trial both at home and abroad, … the extent to which we maintain the spirit of our constitution with its Bill of Rights, will in the long run do more to make it both secure and the object of adulation than the number of hydrogen bombs we stockpile.”[7][8]

In 1951, a class action suit was filed against the Board of Education of the City of Topeka, Kansas in the United States District Court for the District of Kansas. The plaintiffs were thirteen Topeka parents on behalf of their 20 children.[9]

The suit called for the school district to reverse its policy of racial segregation. The Topeka Board of Education operated separate elementary schools under an 1879 Kansas law, which permitted (but did not require) districts to maintain separate elementary school facilities for black and white students in 12 communities with populations over 15,000. The plaintiffs had been recruited by the leadership of the Topeka NAACP. Notable among the Topeka NAACP leaders were the chairman McKinley Burnett; Charles Scott, one of three serving as legal counsel for the chapter; and Lucinda Todd.

The named plaintiff, Oliver L. Brown, was a parent, a welder in the shops of the Santa Fe Railroad, an assistant pastor at his local church, and an African American.[10] He was convinced to join the lawsuit by Scott, a childhood friend. Brown’s daughter Linda, a third grader, had to walk six blocks to her school bus stop to ride to Monroe Elementary, her segregated black school one mile (1.6 km) away, while Sumner Elementary, a white school, was seven blocks from her house.[11][12]

As directed by the NAACP leadership, the parents each attempted to enroll their children in the closest neighborhood school in the fall of 1951. They were each refused enrollment and directed to the segregated schools. Linda Brown Thompson later recalled the experience in a 2004 PBS documentary:

… well, like I say, we lived in an integrated neighborhood and I had all of these playmates of different nationalities. And so when I found out that day that I might be able to go to their school, I was just thrilled, you know. And I remember walking over to Sumner school with my dad that day and going up the steps of the school and the school looked so big to a smaller child. And I remember going inside and my dad spoke with someone and then he went into the inner office with the principal and they left me out … to sit outside with the secretary. And while he was in the inner office, I could hear voices and hear his voice raised, you know, as the conversation went on. And then he immediately came out of the office, took me by the hand and we walked home from the school. I just couldn’t understand what was happening because I was so sure that I was going to go to school with Mona and Guinevere, Wanda, and all of my playmates.[13]

The case “Oliver Brown et al. v. The Board of Education of Topeka, Kansas” was named after Oliver Brown as a legal strategy to have a man at the head of the roster. The lawyers, and the National Chapter of the NAACP, also felt that having Mr. Brown at the head of the roster would be better received by the U.S. Supreme Court Justices. The 13 plaintiffs were: Oliver Brown, Darlene Brown, Lena Carper, Sadie Emmanuel, Marguerite Emerson, Shirley Fleming, Zelma Henderson, Shirley Hodison, Maude Lawton, Alma Lewis, Iona Richardson, Vivian Scales, and Lucinda Todd.[14][15] The last surviving plaintiff, Zelma Henderson, died in Topeka on May 20, 2008, at age 88.[16][17]

The District Court ruled in favor of the Board of Education, citing the U.S. Supreme Court precedent set in Plessy v. Ferguson, 163 U.S. 537 (1896), which had upheld a state law requiring “separate but equal” segregated facilities for blacks and whites in railway cars.[18] The three-judge District Court panel found that segregation in public education has a detrimental effect on negro children, but denied relief on the ground that the negro and white schools in Topeka were substantially equal with respect to buildings, transportation, curricula, and educational qualifications of teachers.[19]

Supreme Court review

The case of Brown v. Board of Education as heard before the Supreme Court combined five cases: Brown itself, Briggs v. Elliott (filed in South Carolina), Davis v. County School Board of Prince Edward County (filed in Virginia), Gebhart v. Belton (filed in Delaware), and Bolling v. Sharpe (filed in Washington D.C.).

All were NAACP-sponsored cases. The Davis case, the only case of the five originating from a student protest, began when 16-year-old Barbara Rose Johns organized and led a 450-student walkout of Moton High School.[20] The Gebhart case was the only one where a trial court, affirmed by the Delaware Supreme Court, found that discrimination was unlawful; in all the other cases the plaintiffs had lost as the original courts had found discrimination to be lawful.

The Kansas case was unique among the group in that there was no contention of gross inferiority of the segregated schools’ physical plant, curriculum, or staff. The district court found substantial equality as to all such factors. The lower court, in its opinion, noted that, in Topeka, “the physical facilities, the curricula, courses of study, qualification and quality of teachers, as well as other educational facilities in the two sets of schools [were] comparable.”[21] The lower court observed that “colored children in many instances are required to travel much greater distances than they would be required to travel could they attend a white school” but also noted that the school district “transports colored children to and from school free of charge” and that “[n]o such service [was] provided to white children.”[21]

In the Delaware case the district court judge in Gebhart ordered that the black students be admitted to the white high school due to the substantial harm of segregation and the differences that made the separate schools unequal.

The NAACP’s chief counsel, Thurgood Marshall—who was later appointed to the U.S. Supreme Court in 1967—argued the case before the Supreme Court for the plaintiffs. Assistant attorney general Paul Wilson—later distinguished emeritus professor of law at the University of Kansas—conducted the state’s ambivalent defense in his first appellate argument.

In December 1952, the Justice Department filed a friend of the court brief in the case. The brief was unusual in its heavy emphasis on foreign-policy considerations of the Truman administration in a case ostensibly about domestic issues. Of the seven pages covering “the interest of the United States,” five focused on the way school segregation hurt the United States in the Cold War competition for the friendship and allegiance of non-white peoples in countries then gaining independence from colonial rule. Attorney General James P. McGranery noted that

The existence of discrimination against minority groups in the United States has an adverse effect upon our relations with other countries. Racial discrimination furnishes grist for the Communist propaganda mills.[22]

The brief also quoted a letter by Secretary of State Dean Acheson lamenting that
The United States is under constant attack in the foreign press, over the foreign radio, and in such international bodies as the United Nations because of various practices of discrimination in this country.[23]

British barrister and parliamentarian Anthony Lester has written that “Although the Court’s opinion in Brown made no reference to these considerations of foreign policy, there is no doubt that they significantly influenced the decision.”[23]

Unanimous opinion and consensus building

[Image: The members of the U.S. Supreme Court that, on May 17, 1954, ruled unanimously that racial segregation in public schools is unconstitutional.]

In spring 1953, the Court heard the case but was unable to decide the issue and asked to rehear the case in fall 1953, with special attention to whether the Fourteenth Amendment’s Equal Protection Clause prohibited the operation of separate public schools for whites and blacks.[24]

The Court reargued the case at the behest of Associate Justice Felix Frankfurter, who used reargument as a stalling tactic to allow the Court to gather a consensus around a Brown opinion that would outlaw segregation. The justices in support of desegregation spent much effort convincing those who initially intended to dissent to join a unanimous opinion. Although the legal effect would be the same for a majority decision as for a unanimous one, it was felt that any dissent could be used by segregation supporters as a legitimizing counter-argument.

Conference notes and draft decisions illustrate the division of opinions before the decision was issued.[25] Justices Douglas, Black, Burton, and Minton were predisposed to overturn Plessy.[25] Fred M. Vinson noted that Congress had not issued desegregation legislation; Stanley F. Reed discussed incomplete cultural assimilation and states’ rights and was inclined to the view that segregation worked to the benefit of the African-American community; Tom C. Clark wrote that “we had led the states on to think segregation is OK and we should let them work it out.”[25] Felix Frankfurter and Robert H. Jackson disapproved of segregation, but were also opposed to judicial activism and expressed concerns about the proposed decision’s enforceability.[25] Chief Justice Vinson had been a key stumbling block. After Vinson died in September 1953, President Dwight D. Eisenhower appointed Earl Warren as Chief Justice.[25] Warren had supported the integration of Mexican-American students in California school systems following Mendez v. Westminster.[26] However, Eisenhower invited Earl Warren to a White House dinner, where the president told him: “These [southern whites] are not bad people. All they are concerned about is to see that their sweet little girls are not required to sit in school alongside some big overgrown Negroes.” Nevertheless, the Justice Department sided with the African American plaintiffs.[27][28][29]

In his reading of the unanimous decision, Justice Warren noted the adverse psychological effects that segregated schools had on African American children.[30]

While all but one justice personally rejected segregation, the judicial restraint faction questioned whether the Constitution gave the Court the power to order its end. The activist faction believed the Fourteenth Amendment did give the necessary authority and pushed to go ahead. Warren, who was serving under a recess appointment, held his tongue until the Senate confirmed his appointment.

Warren convened a meeting of the justices, and presented to them the simple argument that the only reason to sustain segregation was an honest belief in the inferiority of Negroes. Warren further submitted that the Court must overrule Plessy to maintain its legitimacy as an institution of liberty, and it must do so unanimously to avoid massive Southern resistance. He began to build a unanimous opinion.

Although most justices were immediately convinced, Warren spent some time after this famous speech convincing everyone to sign onto the opinion. Justices Jackson and Reed finally decided to drop their dissent. The final decision was unanimous. Warren drafted the basic opinion and kept circulating and revising it until he had an opinion endorsed by all the members of the Court.[31] Reed was the last holdout and reportedly cried during the reading of the opinion.[32]

Holding

Reporters who observed the announcement of the decision were surprised by two facts. First, the decision was unanimous; prior to the ruling, there had been reports that the justices were sharply divided and might not be able to agree. Second, Justice Robert H. Jackson was in attendance, even though he had suffered a mild heart attack and had not been expected to return to the bench until early June 1954. “Perhaps to emphasize the unanimity of the court, perhaps from a desire to be present when the history-making verdict was announced, Justice Jackson was in his accustomed seat when the court convened.”[33] Reporters also noted that Dean Acheson, former secretary of state, who had related the case to foreign policy considerations, and Herbert Brownell, the current attorney general, were in the courtroom.[34]

The key holding of the Court was that, even if segregated black and white schools were of equal quality in facilities and teachers, segregation by itself was harmful to black students and unconstitutional. They found that a significant psychological and social disadvantage was given to black children from the nature of segregation itself, drawing on research conducted by Kenneth Clark assisted by June Shagaloff. This aspect was vital because the question was not whether the schools were “equal”, which under Plessy they nominally should have been, but whether the doctrine of separate was constitutional. The justices answered with a strong “no”:

[D]oes segregation of children in public schools solely on the basis of race, even though the physical facilities and other “tangible” factors may be equal, deprive the children of the minority group of equal educational opportunities? We believe that it does. …
“Segregation of white and colored children in public schools has a detrimental effect upon the colored children. The effect is greater when it has the sanction of the law, for the policy of separating the races is usually interpreted as denoting the inferiority of the negro group. A sense of inferiority affects the motivation of a child to learn. Segregation with the sanction of law, therefore, has a tendency to [retard] the educational and mental development of negro children and to deprive them of some of the benefits they would receive in a racial[ly] integrated school system.” …

We conclude that, in the field of public education, the doctrine of “separate but equal” has no place. Separate educational facilities are inherently unequal. Therefore, we hold that the plaintiffs and others similarly situated for whom the actions have been brought are, by reason of the segregation complained of, deprived of the equal protection of the laws guaranteed by the Fourteenth Amendment.

Local outcomes

[Image: Judgment in the Supreme Court decision for Brown et al. v. Board of Education of Topeka et al.]

The Topeka junior high schools had been integrated since 1941. Topeka High School was integrated from its inception in 1871 and its sports teams from 1949 on.[35] The Kansas law permitting segregated schools allowed them only “below the high school level”.[36]

Soon after the district court decision, election outcomes and the political climate in Topeka changed. The Board of Education of Topeka began to end segregation in the Topeka elementary schools in August 1953, integrating two attendance districts. All the Topeka elementary schools were changed to neighborhood attendance centers in January 1956, although existing students were allowed to continue attending their prior assigned schools at their option.[37][38][39] Plaintiff Zelma Henderson, in a 2004 interview, recalled that no demonstrations or tumult accompanied desegregation in Topeka’s schools:
“They accepted it,” she said. “It wasn’t too long until they integrated the teachers and principals.”[40]

The Topeka Public Schools administration building is named in honor of McKinley Burnett, the NAACP chapter president who organized the case.

Monroe Elementary was designated a U.S. National Historic Site unit of the National Park Service on October 26, 1992.

Social implications
Not everyone accepted the Brown v. Board of Education decision. In Virginia, Senator Harry F. Byrd, Sr. organized the Massive Resistance movement that included the closing of schools rather than desegregating them.[41] See, for example, The Southern Manifesto. For more implications of the Brown decision, see Desegregation.

Deep South
Texas Attorney General John Ben Shepperd organized a campaign to generate legal obstacles to implementation of desegregation.[42]

In 1957, Arkansas Governor Orval Faubus called out his state’s National Guard to block black students’ entry to Little Rock Central High School. President Dwight Eisenhower responded by deploying elements of the 101st Airborne Division from Fort Campbell, Kentucky, to Arkansas and by federalizing Arkansas’s National Guard.[43]

Also in 1957, Florida’s response was mixed. Its legislature passed an Interposition Resolution denouncing the decision and declaring it null and void. But Florida Governor LeRoy Collins, though joining in the protest against the court decision, refused to sign it, arguing that the attempt to overturn the ruling must be done by legal methods.

In Mississippi, fear of violence prevented any plaintiff from bringing a school desegregation suit for the next nine years.[44] When Medgar Evers sued to desegregate the schools of Jackson, Mississippi, in 1963, White Citizens Council member Byron De La Beckwith murdered him.[45] Two subsequent trials resulted in hung juries. Beckwith was not convicted of the murder until 1994.[46]

In 1963, Alabama Gov. George Wallace personally blocked the door to Foster Auditorium at the University of Alabama to prevent the enrollment of two black students. This became the infamous Stand in the Schoolhouse Door[47] where Wallace personally backed his “segregation now, segregation tomorrow, segregation forever” policy that he had stated in his 1963 inaugural address.[48] He moved aside only when confronted by General Henry Graham of the Alabama National Guard, who was ordered by President John F. Kennedy to intervene.

Upland South

In North Carolina, there was often a strategy of nominally accepting Brown, but tacitly resisting it. On May 18, 1954, the Greensboro, North Carolina school board declared that it would abide by the Brown ruling. This was the result of the initiative of D.E. Hudgins Jr., a former Rhodes Scholar and prominent attorney, who chaired the school board. This made Greensboro the first, and for years the only, city in the South to announce its intent to comply. However, others in the city resisted integration, putting up legal obstacles to the actual implementation of school desegregation for years afterward, and in 1969, the federal government found the city was not in compliance with the 1964 Civil Rights Act. Transition to a fully integrated school system did not begin until 1971, after numerous local lawsuits and both nonviolent and violent demonstrations. Historians have noted the irony that Greensboro, which had heralded itself as such a progressive city, was one of the last holdouts for school desegregation.[49][50]

In Moberly, Missouri, the schools were desegregated, as ordered. However, after 1955, the African-American teachers from the local “negro school” were not retained; this was ascribed to poor performance. They appealed their dismissal in Naomi Brooks et al., Appellants, v. School District of City of Moberly, Missouri, Etc., et al.; but it was upheld, and SCOTUS declined to hear a further appeal.[51][52]

North
Many Northern cities also had de facto segregation policies, which resulted in a vast gulf in educational resources between black and white communities. In Harlem, New York, for example, not a single new school had been built since the turn of the century, nor did a single nursery school exist, even as the Second Great Migration caused overcrowding of existing schools. Existing schools tended to be dilapidated and staffed with inexperienced teachers. Northern officials were in denial of the segregation, but Brown helped stimulate activism among African-American parents like Mae Mallory who, with support of the NAACP, initiated a successful lawsuit against the city and State of New York on Brown’s principles. Mallory and thousands of other parents bolstered the pressure of the lawsuit with a school boycott in 1959. During the boycott, some of the first freedom schools of the period were established. The city responded to the campaign by permitting more open transfers to high-quality, historically-white schools. (New York’s African-American community, and Northern desegregation activists generally, now found themselves contending with the problem of white flight, however.)[53][54]

The intellectual roots of Plessy v. Ferguson, the landmark United States Supreme Court decision upholding the constitutionality of racial segregation in 1896 under the doctrine of “separate but equal”, were, in part, tied to the scientific racism of the era.[55][56] However, the popular support for the decision was more likely a result of the racist beliefs held by many whites at the time.[57] In deciding Brown v. Board of Education, the Supreme Court rejected the ideas of scientific racists about the need for segregation, especially in schools. The Court buttressed its holding by citing (in footnote 11) social science research about the harms to black children caused by segregated schools.

Both scholarly and popular ideas of hereditarianism played an important role in the attack and backlash that followed the Brown decision.[57] The Mankind Quarterly was founded in 1960, in part in response to the Brown decision.[58][59]

Legal criticism and praise

[Image: U.S. circuit judges Robert A. Katzmann, Damon J. Keith, and Sonia Sotomayor at a 2004 exhibit on the Fourteenth Amendment, Thurgood Marshall, and Brown v. Board of Education]
William Rehnquist wrote a memo titled “A Random Thought on the Segregation Cases” when he was a law clerk for Justice Robert H. Jackson in 1952, during early deliberations that led to the Brown v. Board of Education decision. In his memo, Rehnquist argued: “I realize that it is an unpopular and unhumanitarian position, for which I have been excoriated by ‘liberal’ colleagues but I think Plessy v. Ferguson was right and should be reaffirmed.” Rehnquist continued, “To the argument . . . that a majority may not deprive a minority of its constitutional right, the answer must be made that while this is sound in theory, in the long run it is the majority who will determine what the constitutional rights of the minorities are.”[60] Rehnquist also argued for Plessy with other law clerks.[61]
However, during his 1971 confirmation hearings, Rehnquist said, “I believe that the memorandum was prepared by me as a statement of Justice Jackson’s tentative views for his own use.” Justice Jackson had initially planned to join a dissent in Brown.[62] Later, at his 1986 hearings for the slot of Chief Justice, Rehnquist put further distance between himself and the 1952 memo: “The bald statement that Plessy was right and should be reaffirmed, was not an accurate reflection of my own views at the time.”[63] In any event, while serving on the Supreme Court, Rehnquist made no effort to reverse or undermine the Brown decision, and frequently relied upon it as precedent.[64]

Chief Justice Warren’s reasoning was broadly criticized by contemporary legal academics, with Judge Learned Hand decrying that the Supreme Court had “assumed the role of a third legislative chamber”[65] and Herbert Wechsler finding Brown impossible to justify based on neutral principles.[66]

Some aspects of the Brown decision are still debated. Notably, Supreme Court Justice Clarence Thomas, himself an African American, wrote in Missouri v. Jenkins (1995) that at the very least, Brown I has been misunderstood by the courts.

Brown I did not say that “racially isolated” schools were inherently inferior; the harm that it identified was tied purely to de jure segregation, not de facto segregation. Indeed, Brown I itself did not need to rely upon any psychological or social-science research in order to announce the simple, yet fundamental truth that the Government cannot discriminate among its citizens on the basis of race. …

Segregation was not unconstitutional because it might have caused psychological feelings of inferiority. Public school systems that separated blacks and provided them with superior educational resources—making blacks “feel” superior to whites sent to lesser schools—would violate the Fourteenth Amendment, whether or not the white students felt stigmatized, just as do school systems in which the positions of the races are reversed. Psychological injury or benefit is irrelevant …

Given that desegregation has not produced the predicted leaps forward in black educational achievement, there is no reason to think that black students cannot learn as well when surrounded by members of their own race as when they are in an integrated environment. (…) Because of their “distinctive histories and traditions,” black schools can function as the center and symbol of black communities, and provide examples of independent black leadership, success, and achievement.[67]

Some Constitutional originalists, notably Raoul Berger in his influential 1977 book “Government by Judiciary,” make the case that Brown cannot be defended by reference to the original understanding of the 14th Amendment. They support this reading of the 14th amendment by noting that the Civil Rights Act of 1875 did not ban segregated schools and that the same Congress that passed the 14th Amendment also voted to segregate schools in the District of Columbia. Other originalists, including Michael W. McConnell, a federal judge on the United States Court of Appeals for the Tenth Circuit, in his article “Originalism and the Desegregation Decisions,” argue that the Radical Reconstructionists who spearheaded the 14th Amendment were in favor of desegregated southern schools.[68] Evidence supporting this interpretation of the 14th amendment has come from archived Congressional records showing that proposals for federal legislation which would enforce school integration were debated in Congress a few years following the amendment’s ratification.[69]

The case also has attracted some criticism from more liberal authors, including some who say that Chief Justice Warren’s reliance on psychological criteria to find a harm against segregated blacks was unnecessary. For example, Drew S. Days has written:[70] “we have developed criteria for evaluating the constitutionality of racial classifications that do not depend upon findings of psychic harm or social science evidence. They are based rather on the principle that ‘distinctions between citizens solely because of their ancestry are by their very nature odious to a free people whose institutions are founded upon the doctrine of equality,’ Hirabayashi v. United States, 320 U.S. 81 (1943). . . .”

In his book The Tempting of America (page 82), Robert Bork endorsed the Brown decision as follows:
By 1954, when Brown came up for decision, it had been apparent for some time that segregation rarely if ever produced equality. Quite aside from any question of psychology, the physical facilities provided for blacks were not as good as those provided for whites. That had been demonstrated in a long series of cases … The Court’s realistic choice, therefore, was either to abandon the quest for equality by allowing segregation or to forbid segregation in order to achieve equality. There was no third choice. Either choice would violate one aspect of the original understanding, but there was no possibility of avoiding that. Since equality and segregation were mutually inconsistent, though the ratifiers did not understand that, both could not be honored. When that is seen, it is obvious the Court must choose equality and prohibit state-imposed segregation. The purpose that brought the fourteenth amendment into being was equality before the law, and equality, not separation, was written into the law.

In June 1987, Philip Elman, a civil rights attorney who served as an associate in the Solicitor General’s office during Harry Truman’s term, claimed that he and Associate Justice Felix Frankfurter were mostly responsible for the Supreme Court’s decision, and stated that the NAACP’s arguments did not present strong evidence.[71] Elman has been criticized for offering a self-aggrandizing history of the case, omitting important facts, and denigrating the work of civil rights attorneys who had laid the groundwork for the decision over many decades.[72] However, Frankfurter was also known for being one of the court’s most outspoken advocates of the judicial restraint philosophy of basing court rulings on existing law rather than personal or political considerations.[73][74] Public officials in the United States today are nearly unanimous in lauding the ruling. In May 2004, the fiftieth anniversary of the ruling, President George W. Bush spoke at the opening of the Brown v. Board of Education National Historic Site, calling Brown “a decision that changed America for the better, and forever.”[75] Most Senators and Representatives issued press releases hailing the ruling.

In an article in Townhall, Thomas Sowell argued that “When Chief Justice Earl Warren declared in the landmark 1954 case of Brown v. Board of Education that racially separate schools were ‘inherently unequal,’ Dunbar High School was a living refutation of that assumption. And it was within walking distance of the Supreme Court.”[76]

Brown II

In 1955, the Supreme Court considered arguments by the schools requesting relief concerning the task of desegregation. In their decision, which became known as “Brown II”,[77] the court delegated the task of carrying out school desegregation to district courts with orders that desegregation occur “with all deliberate speed,” a phrase traceable to Francis Thompson’s poem “The Hound of Heaven”.[78]

Supporters of the earlier decision were displeased with this decision. The language “all deliberate speed” was seen by critics as too ambiguous to ensure reasonable haste for compliance with the court’s instruction. Many Southern states and school districts interpreted “Brown II” as legal justification for resisting, delaying, and avoiding significant integration for years—and in some cases for a decade or more—using such tactics as closing down school systems, using state money to finance segregated “private” schools, and “token” integration where a few carefully selected black children were admitted to former white-only schools but the vast majority remained in underfunded, unequal black schools.[79]

For example, based on “Brown II,” the U.S. District Court ruled that Prince Edward County, Virginia did not have to desegregate immediately. When faced with a court order to finally begin desegregation in 1959, the county board of supervisors stopped appropriating money for public schools, which remained closed for five years, from 1959 to 1964.

White students in the county were given assistance to attend white-only “private academies” that were taught by teachers formerly employed by the public school system, while black students had no education at all unless they moved out of the county. But the public schools reopened after the Supreme Court overturned “Brown II” in Griffin v. County School Board of Prince Edward County, declaring that “…the time for mere ‘deliberate speed’ has run out,” and that the county must provide a public school system for all children regardless of race.[80]

Brown III

In 1978, Topeka attorneys Richard Jones, Joseph Johnson and Charles Scott, Jr. (son of the original Brown team member), with assistance from the American Civil Liberties Union, persuaded Linda Brown Smith—who now had her own children in Topeka schools—to be a plaintiff in reopening Brown. They were concerned that the Topeka Public Schools’ policy of “open enrollment” had led to and would lead to further segregation. They also believed that, with a choice of open enrollment, white parents would shift their children to “preferred” schools, creating both predominantly African American and predominantly European American schools within the district. The district court reopened the Brown case after a 25-year hiatus, but denied the plaintiffs’ request, finding the schools “unitary”. In 1989, a three-judge panel of the Tenth Circuit, on a 2–1 vote, found that the vestiges of segregation remained with respect to student and staff assignment. In 1993, the Supreme Court denied the appellant school district’s request for certiorari and returned the case to District Court Judge Richard Rodgers for implementation of the Tenth Circuit’s mandate.

After a 1994 plan was approved and a bond issue passed, additional elementary magnet schools were opened and district attendance plans redrawn, which resulted in the Topeka schools meeting court standards of racial balance by 1998. Unified status was eventually granted to Topeka Unified School District No. 501 on July 27, 1999. One of the new magnet schools is named after the Scott family attorneys for their role in the Brown case and civil rights.[81]

Related cases
• Plessy v. Ferguson, 163 U.S. 537 (1896)—separate but equal for public facilities
• Cumming v. Richmond County Board of Education, 175 U.S. 528 (1899)—sanctioned de jure segregation of races
• Lum v. Rice, 275 U.S. 78 (1927)—separate schools for Chinese pupils from white schoolchildren
• Powell v. Alabama, 287 U.S. 45 (1932)—access to counsel
• Missouri ex rel. Gaines v. Canada, 305 U.S. 337 (1938)—states that provide a school to white students must provide in-state education to blacks
• Smith v. Allwright, 321 U.S. 649 (1944)—right of non-white voters to vote in primary elections
• Hedgepeth and Williams v. Board of Education (1944)—prohibited racial segregation in New Jersey schools
• Mendez v. Westminster, 64 F. Supp. 544 (1946)—prohibits segregating Mexican American children in California
• Sipuel v. Board of Regents of Univ. of Okla., 332 U.S. 631 (1948)—access to taxpayer state funded law schools
• Shelley v. Kraemer, 334 U.S. 1 (1948)—restrictive covenants
• Sweatt v. Painter, 339 U.S. 629 (1950)—segregated law schools in Texas
• McLaurin v. Oklahoma State Regents, 339 U.S. 637 (1950)—prohibits segregation in a public institution of higher learning
• Hernandez v. Texas, 347 U.S. 475 (1954)—the Fourteenth Amendment protects those beyond the racial classes of white or Negro.
• Briggs v. Elliott, 347 U.S. 483 (1952) Brown Case #1—Summerton, South Carolina.
• Davis v. County School Board of Prince Edward County, 103 F. Supp. 337 (1952) Brown Case #2—Prince Edward County, Virginia.
• Gebhart v. Belton, 33 Del. Ch. 144 (1952) Brown Case #3—Claymont, Delaware
• Bolling v. Sharpe, 347 U.S. 497 (1954) Brown companion case—dealt with the constitutionality of segregation in the District of Columbia, which—as a federal district, not a state—is not subject to the Fourteenth Amendment.
• Browder v. Gayle, 142 F. Supp. 707 (1956) – Montgomery, Alabama bus segregation is unconstitutional under the Fourteenth Amendment protections for equal treatment.
• NAACP v. Alabama, 357 U.S. 449 (1958)—privacy of NAACP membership lists, and free association of members
• Cooper v. Aaron, 358 U.S. 1 (1958) – Federal court enforcement of desegregation
• Boynton v. Virginia, 364 U.S. 454 (1960) – outlawed racial segregation in public transportation
• Heart of Atlanta Motel v. United States, 379 U.S. 241 (1964)—held constitutional the Civil Rights Act of 1964, which banned racial discrimination in public places, particularly in public accommodations even in private property.
• Loving v. Virginia, 388 U.S. 1 (1967) – banned anti-miscegenation laws (race-based restrictions on marriage).
• Alexander v. Holmes County Board of Education, 396 U.S. 1218 (1969) – changed Brown’s requirement of desegregation “with all deliberate speed” to one of “desegregation now”
• Swann v. Charlotte-Mecklenburg Board of Education, 402 U.S. 1 (1971) – established bussing as a solution
• Guey Heung Lee v. Johnson, 404 U.S. 1215 (1971) – “Brown v. Board of Education was not written for blacks alone”; desegregation of Asian schools over the opposition of parents of Asian students
• Milliken v. Bradley, 418 U.S. 717 (1974) – rejected bussing across school district lines.
• Parents Involved in Community Schools v. Seattle School District No. 1,[82] 551 U.S. 701, 127 S. Ct. 2738 (2007)—rejected using race as the sole determining factor for assigning students to schools.[83]
• List of United States Supreme Court Cases
* See Case citation for an explanation of these numbers.
See also
• African-American Civil Rights Movement (1896–1954)
• Little Rock Nine
• Rubey Mosley Hulen, federal judge who made a similar ruling in an earlier case
• Timeline of the African American Civil Rights Movement
• Ruby Bridges, the first black child to attend an all-white elementary school in the South
References
1. Brown v. Board of Education Decision ~ Civil Rights Movement Veterans.
2. Schuck, P.H. (2006). Meditations of a Militant Moderate: Cool Views on Hot Topics. Rowman & Littlefield. p. 104. ISBN 978-0-7425-3961-7.
3. Harald E.L. Prins. “Toward a World without Evil: Alfred Métraux as UNESCO Anthropologist (1946–1962)”. UNESCO: “As a direct offshoot of the 1948 ‘Universal Declaration of Human Rights,’ it sought to dismantle any scientific justification or basis for racism and proclaimed that race was not a biological fact of nature but a dangerous social myth. As a milestone, this critically important declaration contributed to the 1954 U.S. Supreme Court desegregation decision in Brown v. Board of Education of Topeka.”
4. Myrdal, Gunnar (1944). An American Dilemma: The Negro Problem and Modern Democracy. New York: Harper & Row.
5. “Desegregation to diversity?”. American Psychological Association. 2004. Retrieved May 15, 2008.
6. “Kenneth Clark, 90; His Studies Influenced Ban on Segregation”. Los Angeles Times. May 3, 2005. Retrieved October 15, 2010.
7. Mary L. Dudziak, “The Global Impact of Brown v. Board of Education”, SCOTUSblog.
8. Mary L. Dudziak, “Brown as a Cold War Case”, Journal of American History, June 2004. Archived December 7, 2014, at the Wayback Machine.
9. Ric Anderson, “Legacy of Brown: Many people part of local case; thirteen parents representing 20 children signed up as Topeka plaintiffs”, The Topeka Capital-Journal (May 9, 2004).
10. “Black, White, and Brown”, PBS NewsHour (May 12, 2004).
11. “Brown v. Board of Education of Topeka”, MSN Encarta; archived from the original on October 31, 2009, at WebCite.
12. “Interactive map of locations in Topeka important to the Brown case”, Topeka Capital-Journal online (Cjonline.com). October 26, 1992. Retrieved October 15, 2010.
13. “Black/White & Brown”, transcript of a program produced by KTWU Channel 11 in Topeka, Kansas; originally aired May 3, 2004. Archived September 10, 2005, at the Wayback Machine.
14. Brown Foundation for Educational Equity, Excellence and Research, “Myths Versus Truths” (revised April 11, 2004). Archived June 27, 2005, at the Wayback Machine.
15. Ric Anderson, “Legacy of Brown: Many people part of local case; thirteen parents representing 20 children signed up as Topeka plaintiffs”, The Topeka Capital-Journal (May 9, 2004).
16. Fox, Margalit (May 22, 2008). “Zelma Henderson, Who Aided Desegregation, Dies at 88”. The New York Times. Retrieved May 29, 2008.
17. “Last surviving Brown v. Board plaintiff dies at 88”, The Associated Press, May 21, 2008; archived from the original on May 24, 2008.
18. “School facilities for Negroes here held comparable”, The Topeka State Journal (August 3, 1951).
19. Brown v. Board of Education, 98 F. Supp. 797 (August 3, 1951). Archived January 4, 2009, at the Wayback Machine.
20. Student Strike at Moton High ~ Civil Rights Movement Veterans.
21. Brown v. Board of Education, 98 F. Supp. 797, 798 (D. Kan. 1951), rev’d, 347 U.S. 483 (1954).
22. Aryeh Neier, “Brown v. Board of Ed: Key Cold War weapon”, Reuters Blog, May 14, 2014.
23. Anthony Lester, “Brown v. Board of Education Overseas”, Proceedings of the American Philosophical Society, Vol. 148, No. 4, December 2004.
24. Smithsonian, “Separate is Not Equal: Brown v. Board of Education”. Archived June 30, 2015, at the Wayback Machine.
25. Cass R. Sunstein (May 3, 2004). “Did Brown Matter?”. The New Yorker. Retrieved January 22, 2010.
26. George R. Goethals and Georgia Jones Sorenson (2006). The Quest for a General Theory of Leadership. Edward Elgar Publishing. p. 165. ISBN 978-1-84542-541-8.
27. Digital History: Brown v. Board of Education, 347 U.S. 483 (1954).
28. The Gang That Always Liked Ike.
29. Warren, Earl (1977). The Memoirs of Earl Warren. New York: Doubleday & Company. p. 291. ISBN 0385128355.
30. Mungazi, D. A. (2001). Journey to the Promised Land: The African American Struggle for Development Since the Civil War. Westport, CT: Greenwood Publishing Group. p. 46.
31. Patterson, James T. (2001). Brown v. Board of Education: A Civil Rights Milestone and Its Troubled Legacy. New York: Oxford University Press. ISBN 0-19-515632-3.
32. Caro, Robert A. (2002). Master of the Senate. Vintage Books. p. 696. ISBN 9780394720951. Retrieved May 17, 2017.
33. Huston, Luther A. (May 18, 1954). “High Court Bans School Segregation; 9-to-0 Decision Grants Time to Comply”. The New York Times. Retrieved March 6, 2013.
34. “AP WAS THERE: Original 1954 Brown v. Board story”. Archived December 9, 2014, at the Wayback Machine.
35. “Topeka Capital Journal article on integration of THS sports teams”. Cjonline.com. July 10, 2001. Retrieved October 15, 2010.
36. “Topeka Capital Journal online article”. Cjonline.com. February 28, 2002. Retrieved October 15, 2010.
37. “Racial bar down for teachers here”, Topeka Daily Capital (January 19, 1956).
38. “First step taken to end segregation”, Topeka Daily Capital (September 9, 1953).
39. “Little Effect On Topeka”, Topeka Capital-Journal (May 18, 1954).
40. Erin Adamson, “Breaking barriers: Topekans reflect on role in desegregating nation’s schools”, Topeka Capital Journal (May 11, 2003). Archived April 27, 2004, at the Wayback Machine.
41. “Massive Resistance” to Integration ~ Civil Rights Movement Veterans.
42. Howell, Mark C., John Ben Shepperd, Attorney General of the State of Texas: His Role in the Continuation of Segregation in Texas, 1953–1957. Master’s thesis, The University of Texas of the Permian Basin, Odessa, Texas, July 2003.
43. The Little Rock Nine ~ Civil Rights Movement Veterans.
44. Michael Klarman, “The Supreme Court, 2012 Term – Comment: Windsor and Brown: Marriage Equality and Racial Equality”, 127 Harv. L. Rev. 127, 153 (2013).
45. Id., citing Klarman, From Jim Crow to Civil Rights: The Supreme Court and the Struggle for Racial Equality, pp. 352–354 (2004).
46. De La Beckwith v. State, 707 So. 2d 547 (Miss. 1997).
47. Standing in the Schoolhouse Door ~ Civil Rights Movement Veterans.
48. The American Experience; George Wallace: Settin’ the Woods on Fire; Wallace Quotes. Public Broadcasting Service, pbs.org, 2000. Retrieved February 6, 2007.
49. Desegregation and Integration of Greensboro’s Public Schools, 1954–1974.
50. “Summary of ‘Civilities and Civil Rights’ by William H. Chafe”, George Mason University website.
51. http://law.justia.com/cases/federal/appellate-courts/F2/267/733/393864/
52. http://revisionisthistory.com/episodes/13-miss-buchanans-period-of-adjustment
53. Melissa F. Weiner, Power, Protest, and the Public Schools: Jewish and African American Struggles in New York City (Rutgers University Press, 2010), pp. 51–66.
54. Adina Back, “Exposing the Whole Segregation Myth: The Harlem Nine and New York City Schools”, in Freedom North: Black Freedom Struggles Outside the South, 1940–1980, Jeanne Theoharis and Komozi Woodard, eds. (Palgrave Macmillan, 2003), pp. 65–91.
55. Austin Sarat (1997). Race, Law, and Culture: Reflections on Brown v. Board of Education. Oxford University Press. p. 55. ISBN 978-0-19-510622-0: “What lay behind Plessy v. Ferguson? There were, perhaps, some important intellectual roots; this was the era of scientific racism.”
56. Charles A. Lofgren (1988). The Plessy Case. Oxford University Press. p. 184. ISBN 978-0-19-505684-6: “But he [Henry Billings Brown] at minimum established popular sentiment and practice, along with legal and scientific testimony on race, as a link in his train of reasoning.”
57. Austin Sarat, Race, Law, and Culture: Reflections on Brown v. Board of Education, pp. 55, 59 (1997). ISBN 0-19-510622-9.
58. Schaffer, Gavin (2007). “‘Scientific’ Racism Again?: Reginald Gates, the Mankind Quarterly and the Question of ‘Race’ in Science after the Second World War”. Journal of American Studies 41 (2): 253–278. doi:10.1017/S0021875807003477.
59. John P. Jackson, Science for Segregation: Race, Law, and the Case Against Brown v. Board of Education. p. 148. ISBN 0-8147-4271-8.
60. William Rehnquist, “A Random Thought on the Segregation Cases”, S. Hrg. 99-1067, Hearings Before the Senate Committee on the Judiciary on the Nomination of Justice William Hubbs Rehnquist to be Chief Justice of the United States (July 29, 30, 31, and August 1, 1986). Archived June 15, 2007, at the Wayback Machine.
61. Peter S. Canellos, “Memos may not hold Roberts’s opinions”, The Boston Globe, August 23, 2005. Here is what Rehnquist said in 1986 about his conversations with other clerks about Plessy: “I thought Plessy had been wrongly decided at the time, that it was not a good interpretation of the equal protection clause to say that when you segregate people by race, there is no denial of equal protection. But Plessy had been on the books for 60 years; Congress had never acted, and the same Congress that had promulgated the 14th Amendment had required segregation in the District schools. . . . I saw factors on both sides. . . . I did not agree then, and I certainly do not agree now, with the statement that Plessy against Ferguson is right and should be reaffirmed. I had ideas on both sides, and I do not think I ever really finally settled in my own mind on that. . . . [A]round the lunch table I am sure I defended it. . . . I thought there were good arguments to be made in support of it.” S. Hrg. 99-1067, Hearings Before the Senate Committee on the Judiciary on the Nomination of Justice William Hubbs Rehnquist to be Chief Justice of the United States (July 29, 30, 31, and August 1, 1986).
62. Justice William O. Douglas wrote: “In the original conference there were only four who voted that segregation in the public schools was unconstitutional. Those four were Black, Burton, Minton, and myself.” See Bernard Schwartz, Decision: How the Supreme Court Decides Cases, p. 96 (Oxford 1996). Likewise, Justice Felix Frankfurter wrote: “I have no doubt that if the segregation cases had reached decision last term, there would have been four dissenters—Vinson, Reed, Jackson, and Clark.” Id. Justice Jackson’s longtime legal secretary had a different view, calling Rehnquist’s Senate testimony an attempt to “smear the reputation of a great justice.” See Alan Dershowitz, “Telling the Truth About Chief Justice Rehnquist”, Huffington Post, September 5, 2005. Retrieved March 15, 2007. See also Felix Frankfurter on the death of Justice Vinson.
63. Adam Liptak, “The Memo That Rehnquist Wrote and Had to Disown”, The New York Times (September 11, 2005).
64. Cases where Justice Rehnquist has cited Brown v. Board of Education in support of a proposition, S. Hrg. 99-1067, Hearings Before the Senate Committee on the Judiciary on the Nomination of Justice William Hubbs Rehnquist to be Chief Justice of the United States (July 29, 30, 31, and August 1, 1986). Archived June 15, 2007, at the Wayback Machine. Also see Jeffrey Rosen, “Rehnquist the Great?”, Atlantic Monthly (April 2005): “Rehnquist ultimately embraced the Warren Court’s Brown decision, and after he joined the Court he made no attempt to dismantle the civil-rights revolution, as political opponents feared he would”.
65. Michael Klarman, “The Supreme Court, 2012 Term – Comment: Windsor and Brown: Marriage Equality and Racial Equality”, 127 Harv. L. Rev. 127, 142 (2013), citing Learned Hand, The Bill of Rights at 55 (Oliver Wendell Holmes Lecture, 1958).
66. Id.; Pamela Karlan, “What Can Brown Do for You?: Neutral Principles and the Struggle Over the Equal Protection Clause”, 58 Duke L.J. 1049 (2008), citing Herbert Wechsler, “Toward Neutral Principles of Constitutional Law”, 73 Harv. L. Rev. 1 (Oliver Wendell Holmes Lecture, 1959).
67. Missouri v. Jenkins, 515 U.S. 70 (1995) (Thomas, J., concurring).
68. McConnell, Michael W. (May 1995). “Originalism and the desegregation decisions”. Virginia Law Review 81 (4): 947–1140. JSTOR 1073539. doi:10.2307/1073539.
• Response to McConnell: Klarman, Michael J. (October 1995). “Response: Brown, originalism, and constitutional theory: a response to Professor McConnell”. Virginia Law Review 81 (7): 1881–1936. JSTOR 1073643. doi:10.2307/1073643.
• Response to Klarman: McConnell, Michael W. (October 1995). “Reply: The originalist justification for Brown: a reply to Professor Klarman”. Virginia Law Review 81 (7): 1937–1955. JSTOR 1073644. doi:10.2307/1073644.
69.
70. Adam Liptak (November 9, 2009). “From 19th-Century View, Desegregation Is a Test”. The New York Times. Retrieved June 4, 2013.
71. Days, III, Drew S. (2001), “Days, J., concurring”, in Balkin, Jack; Ackerman, Bruce A., What ‘Brown v. Board of Education’ Should Have Said, New York: New York University Press, p. 97, ISBN 9780814798904.
72. Harvard Law Review, Vol. 100, No. 8 (June 1987), pp. 1938–1948.
73. See, e.g., Randall Kennedy, “A Reply to Philip Elman”, Harvard Law Review 100 (1987): 1938–1948.
74. Kim Isaac Eisler, A Justice for All, p. 11. ISBN 0-671-76787-9.
75. “Supreme Court History: Expanding civil rights, biographies of the robes: Felix Frankfurter”. pbs.org/wnet. Educational Broadcasting Corp., PBS.
76. Remarks by the President at Grand Opening of the Brown v Board of Education National Historic Site, Topeka, Kansas (May 17, 2004).
77. Thomas Sowell (October 4, 2016). “Dunbar High School After 100 Years”. townhall.com.
78. Brown v. Board of Education of Topeka, 349 U.S. 294 (1955).
79. Jim Chen, “Poetic Justice”, 29 Cardozo Law Review (2007).
80. The “Brown II,” “All Deliberate Speed” Decision ~ Civil Rights Movement Veterans.
81. Smith, Bob (1965). They Closed Their Schools. University of North Carolina Press.
82. Topeka Public Schools Desegregation History: “The Naming of Scott Computer Technology Magnet”. Archived October 1, 2007, at the Wayback Machine.
83.
^ “FindLaw | Cases and Codes”. Caselaw.lp.findlaw.com. Retrieved October 15, 2010.
84 Jump up 
^ For analysis of this decision, see also Joel K. Goldstein, “Not Hearing History: A Critique of Chief Justice Roberts’s Reinterpretation of Brown,” 69 Ohio St. L.J. 791 (2008)
Further reading
• Keppel, Ben. Brown v. Board and the Transformation of American Culture (LSU Press, 2016). xiv, 225 pp.
• Kluger, Richard (1975). Simple Justice: The History of Brown v. Board of Education and Black America’s Struggle for Equality. New York: Knopf. ISBN 9780394472898.
External video
Booknotes interview with Charles Ogletree on All Deliberate Speed, May 9, 2004, C-SPAN
• Ogletree, Charles J., Jr. (2004). All Deliberate Speed: Reflections on the First Half Century of Brown v. Board of Education. New York: W.W. Norton. ISBN 9780393058970.
• Patterson, James T., and William W. Freehling. Brown v. Board of Education: A civil rights milestone and its troubled legacy (Oxford University Press, 2001).
• Tushnet, Mark V. (2008). “”Our decision does not end but begins the struggle over segregation” Brown v. Board of Education, 1954: Justice Robert H. Jackson”. In Tushnet, Mark V. I dissent: Great Opposing Opinions in Landmark Supreme Court Cases. Boston: Beacon Press. pp. 133–150. ISBN 9780807000366. Preview.
External links

Wikisource has original text related to this article:
Brown v. Board of Education of Topeka (347 U.S. 483)

Wikimedia Commons has media related to Brown v. Board of Education.
• Case Brief for Brown v. Board of Education of Topeka at Lawnix.com
• Case information and transcripts on The Curiae Project
• Brown v. Board of Education National Historic Site (US Park Service)
• Brown v. Board of Education of Topeka, 347 U.S. 483 (1954) (full text with hyperlinks to cited material)
• A copy of Florida’s 1957 Interposition Resolution in Response to the Brown decision, with Gov. Collin’s handwritten rejection of it. Made available for public use by the State Archives of Florida.
• U.S. District Court of Kansas: Records of Brown v. Board of Education, Dwight D. Eisenhower Presidential Library
• Online documents relating to Brown vs. Board of Education, Dwight D. Eisenhower Presidential Library
• Documents from the district court, including the original complaint and trial transcript, at the Civil Rights Litigation Clearinghouse
• 60th Anniversary of Brown v. Board of Education curated by Michigan State University’s Diversity of Excellence through Artistic Expression
• Brown v. Board of Education, Civil Rights Digital Library.
• “Supreme Court Landmark Case Brown v. Board of Education” from C-SPAN’s Landmark Cases: 12 Historic Supreme Court Decisions
[hide]
• v t e

African-American Civil Rights Movement (1954–1968)

Notable
events
(timeline)

1954–1959
• • • • • Brown v. Board of Education Bolling v. Sharpe Briggs v. Elliott Davis v. County School Board of Prince Edward County Gebhart v. Belton Sarah Keys v. Carolina Coach Company Emmett Till Montgomery bus boycott Browder v. Gayle Tallahassee bus boycott Mansfield school desegregation 1957 Prayer Pilgrimage for Freedom “Give Us the Ballot” Royal Ice Cream Sit-in Little Rock Nine National Guard blockade Civil Rights Act of 1957 Kissing Case Biloxi Wade-Ins

1960–1963
• • • • • Greensboro sit-ins Nashville sit-ins Sit-in movement Civil Rights Act of 1960 Gomillion v. Lightfoot Boynton v. Virginia Rock Hill sit-ins Robert F. Kennedy’s Law Day Address Freedom Rides attacks Garner v. Louisiana Albany Movement University of Chicago sit-ins “Second Emancipation Proclamation” Meredith enrollment, Ole Miss riot “Segregation now, segregation forever” Stand in the Schoolhouse Door 1963 Birmingham campaign Letter from Birmingham Jail Children’s Crusade Birmingham riot 16th Street Baptist Church bombing John F. Kennedy’s Report to the American People on Civil Rights March on Washington “I Have a Dream” St. Augustine movement

1964–1968
• • • • Twenty-fourth Amendment Bloody Tuesday Freedom Summer workers’ murders Civil Rights Act of 1964 1965 Selma to Montgomery marches “How Long, Not Long” Voting Rights Act of 1965 Harper v. Virginia Board of Elections March Against Fear White House Conference on Civil Rights Chicago Freedom Movement/Chicago open housing movement Memphis Sanitation Strike King assassination funeral riots Poor People’s Campaign Civil Rights Act of 1968 Green v. County School Board of New Kent County
Activist
groups
• • Alabama Christian Movement for Human Rights Atlanta Student Movement Brotherhood of Sleeping Car Porters Congress of Racial Equality (CORE) Committee on Appeal for Human Rights Council for United Civil Rights Leadership Dallas County Voters League Deacons for Defense and Justice Georgia Council on Human Relations Highlander Folk School Leadership Conference on Civil Rights Montgomery Improvement Association Nashville Student Movement NAACP Youth Council Northern Student Movement National Council of Negro Women National Urban League Operation Breadbasket Regional Council of Negro Leadership Southern Christian Leadership Conference (SCLC) Southern Regional Council Student Nonviolent Coordinating Committee (SNCC) The Freedom Singers Wednesdays in Mississippi Women’s Political Council

Activists
• Ralph Abernathy Victoria Gray Adams Zev Aelony Mathew Ahmann William G. Anderson Gwendolyn Armstrong Arnold Aronson Ella Baker Marion Barry Daisy Bates Harry Belafonte James Bevel Claude Black Gloria Blackwell Randolph Blackwell Unita Blackwell Ezell Blair Jr. Joanne Bland Julian Bond Joseph E. Boone William Holmes Borders Amelia Boynton Raylawni Branch Ruby Bridges Aurelia Browder H. Rap Brown Guy Carawan Stokely Carmichael Johnnie Carr James Chaney J. L. Chestnut Colia Lafayette Clark Ramsey Clark Septima Clark Xernona Clayton Eldridge Cleaver Kathleen Neal Cleaver Charles E. Cobb Jr. Annie Lee Cooper Dorothy Cotton Claudette Colvin Vernon Dahmer Jonathan Daniels Joseph DeLaine Dave Dennis Annie Devine Patricia Stephens Due Charles Evers Medgar Evers Myrlie Evers-Williams Chuck Fager James Farmer Walter E. Fauntroy James Forman Marie Foster Golden Frinks Andrew Goodman Fred Gray Jack Greenberg Dick Gregory Lawrence Guyot Prathia Hall Fannie Lou Hamer William E. Harbour Vincent Harding Dorothy Height Lola Hendricks Aaron Henry Oliver Hill Donald L. Hollowell James Hood Myles Horton Zilphia Horton T. R. M. Howard Ruby Hurley Jesse Jackson Jimmie Lee Jackson Richie Jean Jackson T. J. Jemison Esau Jenkins Barbara Rose Johns Vernon Johns Frank Minis Johnson Clarence Jones Matthew Jones Vernon Jordan Tom Kahn Clyde Kennard A. D. King C.B. King Coretta Scott King Martin Luther King Jr. Martin Luther King Sr. Bernard Lafayette James Lawson Bernard Lee Sanford R. Leigh Jim Letherer Stanley Levison John Lewis Viola Liuzzo Z. Alexander Looby Joseph Lowery Clara Luper Malcolm X Mae Mallory Vivian Malone Thurgood Marshall Benjamin Mays Franklin McCain Charles McDew Ralph McGill Floyd McKissick Joseph McNeil James Meredith William Ming Jack Minnis Amzie Moore Douglas E. Moore William Lewis Moore Irene Morgan Bob Moses William Moyer Elijah Muhammad Diane Nash Charles Neblett Edgar Nixon Jack O’Dell James Orange Rosa Parks James Peck Charles Person Homer Plessy Adam Clayton Powell Jr. Fay Bellamy Powell Al Raby Lincoln Ragsdale A. Philip Randolph George Raymond Jr. Bernice Johnson Reagon Cordell Reagon James Reeb Frederick D. Reese Gloria Richardson David Richmond Bernice Robinson Jo Ann Robinson Bayard Rustin Bernie Sanders Michael Schwerner Cleveland Sellers Charles Sherrod Alexander D. Shimkin Fred Shuttlesworth Modjeska Monteith Simkins Glenn E. Smiley A. Maceo Smith Kelly Miller Smith Mary Louise Smith Maxine Smith Ruby Doris Smith-Robinson Charles Kenzie Steele Hank Thomas Dorothy Tillman A. P. Tureaud Hartman Turnbow Albert Turner C. T. Vivian Wyatt Tee Walker Hollis Watkins Walter Francis White Roy Wilkins Hosea Williams Kale Williams Robert F. Williams Andrew Young Whitney Young Sammy Younge Jr. James Zwerg

Influences
• • • Nonviolence Padayatra Sermon on the Mount Mohandas K. Gandhi Ahimsa Satyagraha The Kingdom of God is Within You Frederick Douglass W. E. B. Du Bois
Related
• • • • • Jim Crow laws Plessy v. Ferguson Separate but equal Buchanan v. Warley Hocutt v. Wilson Sweatt v. Painter Heart of Atlanta Motel, Inc. v. United States Katzenbach v. McClung Loving v. Virginia Fifth Circuit Four Brown Chapel Holt Street Baptist Church Edmund Pettus Bridge March on Washington Movement African-American churches attacked Journey of Reconciliation Freedom Songs “Kumbaya” “Keep Your Eyes on the Prize” “Oh, Freedom” “This Little Light of Mine” “We Shall Not Be Moved” “We Shall Overcome” Spring Mobilization Committee to End the War in Vietnam “Beyond Vietnam: A Time to Break Silence” Watts riots Voter Education Project 1960s counterculture In popular culture King Memorial Birmingham Civil Rights National Monument Freedom Riders National Monument Civil Rights Memorial

Noted
historians

• Taylor Branch Clayborne Carson John Dittmer Michael Eric Dyson Chuck Fager Adam Fairclough David Garrow David Halberstam Vincent Harding Steven F. Lawson Doug McAdam Diane McWhorter Charles M. Payne Timothy Tyson Akinyele Umoja Movement photographers
[hide]

This page was last edited on 17 October 2017, at 07:10.

Sorting Out Charlottesville

This post is about the VICE footage of Charlottesville.

I just watched it – all the way through. Wow wow wow.

So here’s my take.

I am beyond anger. All these scenes and words and stridency underscore the hot mess we are in. And I am so pissed at POTUS … spitting mad.

But here’s the thing: Trump did not create this. He took the lid off it, and asked everyone to look inside. It’s a Petri dish, and he is a fungus that grew out of it.

The Petri dish is given a beneficial ecology through Fox, so many fungal variations can thrive. Trump just rose up to be the fungus-in-chief.

Fox is wrong that there are 100,000 racists in the country. I think it’s 50-100 million. Trump has made it clear that, with a blind ballot, 30% of the country covertly or overtly has these views.

Maybe they are not as strident. Maybe they are too clever to use these disgusting words (duh, it’s not that smart to talk about that beautiful girl Ivanka being with that piece of sh*t Jew husband). Maybe they whisper to people of like mind. Maybe they are a silent class – and love it when someone speaks up on their behalf.

I’m pretty sure that 50-100 million in the US are glad that someone is speaking out about all “this”.

What “this”? The “this” began, it seems to me, when a rough, tough white Texan, LBJ, used his swagger and savvy to push through the civil rights legislation we know today.

Then came the hot mess that brought us ten-plus million illegal immigrants (as Congress failed to figure something out). That is part of “this”.

Then came affirmative action.

Then came black mayors and electeds.

Then came gay marriage. I’ll bet most of the 30% don’t even know someone who is gay.

Then came all the others – like transgender folks who want to pee.

You get the point. The liberal class has been out there for fifty years “perfecting the union” by extending equality to every group they can think of.

Every time this happens, the class I am talking about says, “Hey, what am I, chopped liver?” They get pissed, but they have no place to put their anger.

Now, mind you, I LOVE “perfecting the union”. I love Obama’s take on this. But I assumed that progress was possible because this silent class would stand down. WRONG.

The silent class is mega-pissed. Republicans figured this out, and have gotten better and better at speaking to this massive crowd of disaffecteds. But they are covert and clever, not overt. They have perfected the “dog whistle”, where only the disaffected can hear “I am totally with you!”.

As you know, I thought the HRC campaign was abominable. Gotta say, I still thought she would win. But what I am saying above is the best articulation yet of WHY she lost, WHY we have an idiot president, and WHY we have Charlottesville.

I wish I had made all this up. I did not. The best version of this position can be found here:

http://www.npr.org/2017/08/15/543730312/the-once-and-future-liberal-looks-at-shortfalls-of-american-liberalism

Columbia Professor Mark Lilla has a VERY controversial point. But I think it’s correct. And I think Democrats are sunk unless they stop their nonsense and start speaking about the whole, more than the parts. The quilt of America is fine. Going to the mat for transgender bathroom access is not. We now know that as “OVER-REACH” – or, to mix metaphors, “A BRIDGE TOO FAR”.

Lilla says that when you speak about the parts, you inevitably leave someone out. I think 50-100 million have heard the electeds speak on all “this”, and they say: I feel left out.

Compounding this problem is that most media types come from diverse cities, and think like I do. They LOVE perfecting the union, and want their readers to know it. And that makes the 50-100 million even MORE pissed off and left out.

We have a hot mess. But maybe, maybe, maybe it’s better that someone, even a complete fool, ripped the cover off the Petri dish. So we can address the hot mess. Before it’s too late.

Folly of One-Way Loyalty

Maybe, instead of bashing Trump at every turn, we can step back and learn from him.

In this case, John Pitney makes a great point about the folly of one-way loyalty:

“John J. Pitney, a political scientist with sterling conservative credentials, has a blistering piece in Politico explaining Trump’s problem: He thinks loyalty flows only one way. “Trump’s life has been a long trail of betrayals,” Pitney writes. He has dumped wives, friends, mentors, protégés, colleagues, business associates, Trump University students and, more recently, political advisers.

“Loyalty is about strength,” Pitney, a professor at Claremont McKenna, writes. “It is about sticking with a person, a cause, an idea or a country even when it is costly, difficult, or unpopular.”

CREDIT: NYT Op Ed

20th Century History on Health Care and Insurance

A historian’s take on health care and insurance in the US:

Key points:

Health care in the US is primarily driven by an “insurance company model”.
There actually was a “medical marketplace” in the early 20th century.
One of the best in that marketplace was a “prepaid physician group” with profit sharing for docs.
Truman proposed universal health care.
A.M.A. fought government intervention.
A.M.A. decided that the best way to keep the government out of their industry was to design a private sector model: the insurance company model.
In the insurance company model, insurance companies would pay physicians using fee-for-service compensation.
Thus, physicians became allied with insurance companies – both striving to keep government out of health care. Fee for service was their chosen model.
The model worked to expand coverage: from 25% of the population in 1945 to about 80% in 1965.
The elderly did not get covered as well. Congress stepped in with Medicare in 1965.
Because of rising prices, insurers gradually took over. “To constrain rising prices, insurers gradually introduced cost containment procedures and incrementally claimed supervisory authority over doctors. Soon they were reviewing their medical work, standardizing treatment blueprints tied to reimbursements and shaping the practice of medicine.”
Innovation is lacking. Concierge medicine experiments show some promise, like Atlas MD in Wichita.

=============
JCR comments:
It’s always easier looking backward. If only 25% of the population has health insurance, it seems eminently sensible that driving that number up to, say, 80% would be a high-priority goal.

That’s what America did: it adopted a high-priority goal to increase health insurance coverage from 25% to 80%. Its chosen method was a fee-for-service reimbursement model – the “insurance model”. We put insurance companies in the driver’s seat, and we encouraged them to work with employers and physician groups.

They were the middlemen:

Insurers made sure that their employer clients had the benefits they needed to attract employees, at a cost that was practical.
Insurers also made sure that their physician partners supplied the services that they needed, at prices that were practical.

So, with the insurer-as-middleman model, we achieved our goal: by 1965, 80% of the American population had health insurance, up from 25% in 1945.

So – what’s wrong with that?

It’s mostly very good. But…

Looking backward, it is obvious now what is wrong: the remaining 20%. These are the unemployed, or the seniors, or the ones whose health conditions make their costs truly exorbitant.

While America was getting the 80% “squared away”, the 20% were left to fend for themselves. They overran emergency rooms; they took beds in charity hospitals; they died.

In 1965, we adopted Medicare and Medicaid. Medicare addressed the 20% who were seniors.
Medicaid addressed the 20% who were poor, such as:

Low-income families
Pregnant women
People of all ages with disabilities
People who need long-term care

Most of this happened over time, not in 1965. State offerings vary.

In 1997, we adopted CHIP for children. This addressed the 20% who were kids: some 11 million kids got coverage, from families with too much income to qualify for Medicaid.

In 2003, we adopted the MMA, the “Medicare Prescription Drug Improvement and Modernization Act of 2003”. Under the MMA, private health plans approved by Medicare (“Medicare Advantage Plans”) were offered, along with an optional prescription drug benefit (“Part D”).

In 2010, the Affordable Care Act was adopted.

So, the key question for today is: why is our health care system such a mess? Read on:

=============
CREDIT: NYT https://www.nytimes.com/2017/06/19/opinion/health-insurance-american-medical-association.html?emc=edit_th_20170619&nl=todaysheadlines&nlid=44049881&_r=0

How Did Health Care Get to Be Such a Mess?
By CHRISTY FORD CHAPIN
JUNE 19, 2017
The problem with American health care is not the care. It’s the insurance.

Both parties have stumbled to enact comprehensive health care reform because they insist on patching up a rickety, malfunctioning model. The insurance company model drives up prices and fragments care. Rather than rejecting this jerry-built structure, the Democrats’ Obamacare legislation simply added a cracked support beam or two. The Republican bill will knock those out to focus on spackling other dilapidated parts of the system.

An alternative structure can be found in the early decades of the 20th century, when the medical marketplace offered a variety of models. Unions, businesses, consumer cooperatives and ethnic and African-American mutual aid societies had diverse ways of organizing and paying for medical care.

Physicians established a particularly elegant model: the prepaid doctor group. Unlike today’s physician practices, these groups usually staffed a variety of specialists, including general practitioners, surgeons and obstetricians. Patients received integrated care in one location, with group physicians from across specialties meeting regularly to review treatment options for their chronically ill or hard-to-treat patients.

Individuals and families paid a monthly fee, not to an insurance company but directly to the physician group. This system held down costs. Physicians typically earned a base salary plus a percentage of the group’s quarterly profits, so they lacked incentive to either ration care, which would lose them paying patients, or provide unnecessary care.

This contrasts with current examples of such financing arrangements. Where physicians earn a preset salary — for example, in Kaiser Permanente plans or in the British National Health Service — patients frequently complain about rationed or delayed care. When physicians are paid on a fee-for-service basis, for every service or procedure they provide — as they are under the insurance company model — then care is oversupplied. In these systems, costs escalate quickly.

Unfortunately, the leaders of the American Medical Association saw early health care models — union welfare funds, prepaid physician groups — as a threat. A.M.A. members sat on state licensing boards, so they could revoke the licenses of physicians who joined these “alternative” plans. A.M.A. officials likewise saw to it that recalcitrant physicians had their hospital admitting privileges rescinded.

The A.M.A. was also busy working to prevent government intervention in the medical field. Persistent federal efforts to reform health care began during the 1930s. After World War II, President Harry Truman proposed a universal health care system, and archival evidence suggests that policy makers hoped to build the program around prepaid physician groups.

A.M.A. officials decided that the best way to keep the government out of their industry was to design a private sector model: the insurance company model.

In this system, insurance companies would pay physicians using fee-for-service compensation. Insurers would pay for services even though they lacked the ability to control their supply. Moreover, the A.M.A. forbade insurers from supervising physician work and from financing multispecialty practices, which they feared might develop into medical corporations.

With the insurance company model, the A.M.A. could fight off Truman’s plan for universal care and, over the next decade, oppose more moderate reforms offered during the Eisenhower years.

Through each legislative battle, physicians and their new allies, insurers, argued that federal health care funding was unnecessary because they were expanding insurance coverage. Indeed, because of the perceived threat of reform, insurers weathered rapidly rising medical costs and unfavorable financial conditions to expand coverage from about a quarter of the population in 1945 to about 80 percent in 1965.

But private interests failed to cover a sufficient number of the elderly. Consequently, Congress stepped in to create Medicare in 1965. The private health care sector had far more capacity to manage a large, complex program than did the government, so Medicare was designed around the insurance company model. Insurers, moreover, were tasked with helping administer the program, acting as intermediaries between the government and service providers.

With Medicare, the demand for health services increased and medical costs became a national crisis. To constrain rising prices, insurers gradually introduced cost containment procedures and incrementally claimed supervisory authority over doctors. Soon they were reviewing their medical work, standardizing treatment blueprints tied to reimbursements and shaping the practice of medicine.

It’s easy to see the challenge of real reform: To actually bring down costs, legislators must roll back regulations to allow market innovation outside the insurance company model.

In some places, doctors are already trying their hand at practices similar to prepaid physician groups, as in concierge medicine experiments like the Atlas MD plan, a physician cooperative in Wichita, Kan. These plans must be able to skirt state insurance regulations and other laws, such as those prohibiting physicians from owning their own diagnostic facilities.

Both Democrats and Republicans could learn from this lost history of health care innovation.

Christy Ford Chapin is an associate professor of history at the University of Maryland, Baltimore County, a visiting scholar at Johns Hopkins University and the author of “Ensuring America’s Health: The Public Creation of the Corporate Health Care System.”

=============
Brian Hudes Comment:

I saw this one, as well. The author lost credibility for me. Ironically, what the author clearly doesn’t realize is that she is making an argument for the Kaiser Permanente model. However, she unfairly and without any data makes the following claim:

“Where physicians earn a preset salary — for example, in Kaiser Permanente plans or in the British National Health Service — patients frequently complain about rationed or delayed care”

Here’s a more balanced and comprehensive assessment supported by third-party research:

Health Care Members Speak


Smuggling, Capitalism and the Law of Unintended Consequences

On its face, this article is about the border wall with Mexico, but it is really about 1) the law of unintended consequences, and 2) the nature of capitalism.

The law of unintended consequences should never be underestimated; nor should the ability of capitalism to bring out the creativity of entrepreneurs and organizations when there is big money to be made.

A few notes:

“But rather than stopping smuggling, the barriers have just pushed it farther into the desert, deeper into the ground, into more sophisticated secret compartments in cars and into the drug cartels’ hands.”

“A majority of Americans now favor marijuana legalization, which is hitting the pockets of Mexican smugglers and will do so even more when California starts issuing licenses to sell recreational cannabis next year.”

The price of smuggling any given drug rises in proportion to the difficulty of smuggling it.

52 legal crossings
Nogales (Mexico) and Nogales (US), and the dense homes on the border

Tricks:
Coyotes (human smugglers)
Donkeys (the people who actually carry the drugs)
“Clavos” (secret compartments, whose sophistication grows)
Trains (a principal means of smuggling)
“Trampolines” (gigantic catapults that hurl the drugs over any wall)
Tunnels (224 discovered between 1990 and 2016)

===============
CREDIT: New York Times Article: mexican-drug-smugglers-to-trump-thanks!

Mexican Drug Smugglers to Trump: Thanks!

Ioan Grillo
MAY 5, 2017

NOGALES, Mexico — Crouched in the spiky terrain near this border city, a veteran smuggler known as Flaco points to the steel border fence and describes how he has taken drugs and people into the United States for more than three decades. His smuggling techniques include everything from throwing drugs over in gigantic catapults to hiding them in the engine cars of freight trains to making side tunnels off the cross-border sewage system.

When asked whether the border wall promised by President Trump will stop smugglers, he smiles. “This is never going to stop, neither the narco trafficking nor the illegals,” he says. “There will be more tunnels. More holes. If it doesn’t go over, it will go under.”

What will change? The fees that criminal networks charge to transport people and contraband across the border. Every time the wall goes up, so do smuggling profits.

The first time Flaco took people over the line was in 1984, when he was 15; he showed them a hole torn in a wire fence on the edge of Nogales for a tip of 50 cents. Today, many migrants pay smugglers as much as $5,000 to head north without papers, trekking for days through the Sonoran Desert. Most of that money goes to drug cartels that have taken over the profitable business.

“From 50 cents to $5,000,” Flaco says. “As the prices went up, the mafia, which is the Sinaloa cartel, took over everything here, drugs and people smuggling.” Sinaloa dominates Nogales and other parts of northwest Mexico, while rivals, including the Juarez, Gulf and Zetas cartels, control other sections of the border. Flaco finished a five-year prison sentence here for drug trafficking in 2009 and has continued to smuggle since.

His comments underline a problem that has frustrated successive American governments and is likely to haunt President Trump, even if the wall becomes more than a rallying cry and he finally gets the billions of dollars needed to fund it. Strengthening defenses does not stop smuggling. It only makes it more expensive, which inadvertently gives more money to criminal networks.

The cartels have taken advantage of this to build a multibillion-dollar industry, and they protect it with brutal violence that destabilizes Mexico and forces thousands of Mexicans to head north seeking asylum.

Stretching almost 2,000 miles from the Pacific Ocean to the Gulf of Mexico, the border has proved treacherous to block. It traverses a sparsely populated desert, patches of soft earth that are easy to tunnel through, and the mammoth Rio Grande, which floods its banks, making fencing difficult.

And it contains 52 legal crossing points, where millions of people, cars, trucks and trains enter the United States every week.

President Trump’s idea of a wall is not new. Chunks of walls, fencing and anti-car spikes have been erected periodically, particularly in 1990 and 2006. On April 30, Congress reached a deal to fund the federal budget through September that failed to approve any money for extending the barriers as President Trump has promised. However, it did allocate several hundred million dollars for repairing existing infrastructure, and the White House has said it will use this to replace some fencing with a more solid wall.

But rather than stopping smuggling, the barriers have just pushed it farther into the desert, deeper into the ground, into more sophisticated secret compartments in cars and into the drug cartels’ hands.

It is particularly concerning how cartels have taken over the human smuggling business. Known as coyotes, these smugglers used to work independently, or in small groups. Now they have to work for the cartel, which takes a huge cut of the profits, Flaco says. If migrants try to cross the border without paying, they risk getting beaten or murdered.

The number of people detained without papers on the southern border has dropped markedly in the first months of the Trump administration, with fewer than 17,000 apprehended in March, the lowest since 2000. But this has nothing to do with the yet-to-be-built new wall. The president’s anti-immigrant rhetoric could be a deterrent — signaling that tweets can have a bigger effect than bricks. However, this may not last, and there is no sign of drug seizures going down.

Flaco grew up in a Nogales slum called Buenos Aires, which has produced generations of smugglers. The residents refer to the people who carry over backpacks full of drugs as burros, or donkeys. “When I first heard about this, I thought they used real donkeys to carry the marijuana,” Flaco says. “Then I realized, we were the donkeys.”

He was paid $500 for his first trip as a donkey when he was in high school, encouraging him to drop out for what seemed like easy money.

The fences haven’t stopped the burros, who use either ropes or their bare hands to scale them. This was captured in extraordinary footage from a Mexican TV crew, showing smugglers climbing into California. But solid walls offer no solution, as they can also be scaled and they make it harder for border patrol agents to spot what smugglers are up to on the Mexican side.

Flaco quickly graduated to building secret compartments in cars. Called clavos, they are fixed into gas tanks, on dashboards, on roofs. The cars, known by customs agents as trap cars, then drive right through the ports of entry. In fact, while most marijuana is caught in the desert, harder drugs such as heroin are far more likely to go over the bridge.

When customs agents learned to look for the switches that opened the secret compartments, smugglers figured out how to do without them. Some new trap cars can be opened only with complex procedures, such as when the driver is in the seat, all doors are closed, the defroster is turned on and a special card is swiped.

Equally sophisticated engineering goes into the tunnels that turn the border into a block of Swiss cheese. Between 1990 and 2016, 224 tunnels were discovered, some with air vents, rails and electric lights. While the drug lord Joaquin Guzman, known as El Chapo, became infamous for using them, Flaco says they are as old as the border itself and began as natural underground rivers.

Tunnels are particularly popular in Nogales, where Mexican federal agents regularly seize houses near the border for having them. Flaco even shows me a filled-in passage that started inside a graveyard tomb. “It’s because Nogales is one of the few border towns that is urbanized right up to the line,” explains Mayor David Cuauhtémoc Galindo. “There are houses that are on both sides of the border at a very short distance,” making it easy to tunnel from one to the other.

Nogales is also connected to its neighbor across the border in Arizona, also called Nogales, by a common drainage system. It cannot be blocked, because the ground slopes downward from Mexico to the United States. Police officers took me into the drainage system and showed me several smuggling tunnels that had been burrowed off it. They had been filled in with concrete, but the officers warned that smugglers could be lurking around to make new ones and that I should hit the ground if we ran into any.

Back above ground, catapults are one of the most spectacular smuggling methods. “We call them trampolines,” Flaco says. “They have a spring that is like a tripod, and two or three people operate them.” Border patrol agents captured one that had been attached to the fence near the city of Douglas, Ariz., in February and showed photos of what looked like a medieval siege weapon.

Freight trains also cross the border, on their way from southern Mexico up to Canada. While agents inspect them, it’s impossible to search all the carriages, which are packed with cargo from cars to canned chilies. Flaco says the train workers are often paid off by the smugglers. He was once caught with a load of marijuana on a train in Arizona, but he managed to persuade police that he was a train worker and did only a month in jail.

While marijuana does less harm, the smugglers also bring heroin, crack cocaine and crystal meth to America, which kill many. Calls to wage war on drugs can be emotionally appealing. The way President Trump linked his promises of a wall to drug problems in rural America was most likely a factor in his victory.

But four decades after Richard Nixon declared a “war on drugs,” despite trillions of dollars spent on agents, soldiers and barriers, drugs are still easy to buy all across America.

President Trump has taken power at a turning point in the drug policy debate. A majority of Americans now favor marijuana legalization, which is hitting the pockets of Mexican smugglers and will do so even more when California starts issuing licenses to sell recreational cannabis next year. President Trump has also called for more treatment for drug addicts. He would be wise to make that, and not the wall, a cornerstone of his drug policy.

Reducing the finances of drug cartels could reduce some of the violence, and the number of people fleeing north to escape it. But to really tackle the issue of human smuggling, the United States must provide a path to papers for the millions of undocumented workers already in the country, and then make sure businesses hire only workers with papers in the future. So long as illegal immigrants can make a living in the United States, smugglers will make a fortune leading them there.

Stopping the demand for the smugglers’ services actually hits them in their pockets. Otherwise, they will just keep getting richer as the bricks get higher.

Ioan Grillo is the author of “Gangster Warlords: Drug Dollars, Killing Fields and the New Politics of Latin America” and a contributing opinion writer.

I Thought I Understood the American Right. Trump Proved Me Wrong.

How to explain Trump? This feature-length article in today’s New York Times Magazine does a great job of pulling together, into one place, the historical strands that made Trump possible.

Including:

- The New Deal put conservatives on their “back foot” and set the stage for an emerging liberal consensus that held for over fifty years.
- The effort by William F. Buckley and the National Review, beginning in 1955, to make conservatism intellectually attractive – and defensible.
- New South talking points, instead of outright racism, that were more palatable, like “stable housing values” and “quality local education.” These had enormous appeal to the white American middle class.
- Alan Brinkley arguing, in 1994, that American conservatism “had been something of an orphan in historical scholarship.”
- Reagan himself, who portrayed a certain kind of character: the kindly paterfamilias, a trustworthy and nonthreatening guardian of the white middle-class suburban enclave.
- Harvard’s Lisa McGirr writing in her 2001 book that a conservative, largely suburban, “highly educated and thoroughly modern group of men and women” took on “liberal permissiveness” about matters like rising crime rates and the teaching of sex education in public schools.

Two quotes stick with me. The first summarizes:

“Future historians won’t find all that much of a foundation for Trumpism in the grim essays of William F. Buckley, the scrupulous constitutionalist principles of Barry Goldwater or the bright-eyed optimism of Ronald Reagan. They’ll need instead to study conservative history’s political surrealists and intellectual embarrassments, its con artists and tribunes of white rage. It will not be a pleasant story. But if those historians are to construct new arguments to make sense of Trump, the first step may be to risk being impolite.”

And a second quote about Goldwater:

Richard Hofstadter asked, one month before the defeat of Barry Goldwater for president: “When, in all our history, has anyone with ideas so bizarre, so self-confounding, so remote from the basic American consensus, ever gone so far?”

I find that quote revealing. He called Goldwater’s crushing defeat. And one would have thought that the exact same quote could have applied on November 1, 2016, anticipating a crushing defeat for Donald J. Trump. And yet, he prevailed!

We owe it to ourselves to ask: Why? Here is one historian’s take:

===============
CREDIT: Feature Article from New York Times Magazine
===============
I Thought I Understood the American Right. Trump Proved Me Wrong.

A historian of conservatism looks back at how he and his peers failed to anticipate the rise of the president.

BY RICK PERLSTEIN
APRIL 11, 2017

Until Nov. 8, 2016, historians of American politics shared a rough consensus about the rise of modern American conservatism. It told a respectable tale. By the end of World War II, the story goes, conservatives had become a scattered and obscure remnant, vanquished by the New Deal and the apparent reality that, as the critic Lionel Trilling wrote in 1950, liberalism was “not only the dominant but even the sole intellectual tradition.”

Year Zero was 1955, when William F. Buckley Jr. started National Review, the small-circulation magazine whose aim, Buckley explained, was to “articulate a position on world affairs which a conservative candidate can adhere to without fear of intellectual embarrassment or political surrealism.” Buckley excommunicated the John Birch Society, anti-Semites and supporters of the hyperindividualist Ayn Rand, and his cohort fused the diverse schools of conservative thinking — traditionalist philosophers, militant anti-Communists, libertarian economists — into a coherent ideology, one that eventually came to dominate American politics.

I was one of the historians who helped forge this narrative. My first book, “Before the Storm,” was about the rise of Senator Barry Goldwater, the uncompromising National Review favorite whose refusal to exploit the violent backlash against civil rights, and whose bracingly idealistic devotion to the Constitution as he understood it — he called for Social Security to be made “voluntary” — led to his crushing defeat in the 1964 presidential election. Goldwater’s loss, far from dooming the American right, inspired a new generation of conservative activists to redouble their efforts, paving the way for the Reagan revolution. Educated whites in the prosperous metropolises of the New South sublimated the frenetic, violent anxieties that once marked race relations in their region into more palatable policy concerns about “stable housing values” and “quality local education,” backfooting liberals and transforming conservatives into mainstream champions of a set of positions with enormous appeal to the white American middle class.

These were the factors, many historians concluded, that made America a “center right” nation. For better or for worse, politicians seeking to lead either party faced a new reality. Democrats had to honor the public’s distrust of activist government (as Bill Clinton did with his call for the “end of welfare as we know it”). Republicans, for their part, had to play the Buckley role of denouncing the political surrealism of the paranoid fringe (Mitt Romney’s furious backpedaling after joking, “No one’s ever asked to see my birth certificate”).

Then the nation’s pre-eminent birther ran for president. Trump’s campaign was surreal and an intellectual embarrassment, and political experts of all stripes told us he could never become president. That wasn’t how the story was supposed to end. National Review devoted an issue to writing Trump out of the conservative movement; an editor there, Jonah Goldberg, even became a leader of the “Never Trump” crusade. But Trump won — and some conservative intellectuals embraced a man who exploited the same brutish energies that Buckley had supposedly banished.

The professional guardians of America’s past, in short, had made a mistake. We advanced a narrative of the American right that was far too constricted to anticipate the rise of a man like Trump. Historians, of course, are not called upon to be seers. Our professional canons warn us against presentism — we are supposed to weigh the evidence of the past on its own terms — but at the same time, the questions we ask are conditioned by the present. That is, ultimately, what we are called upon to explain. Which poses a question: If Donald Trump is the latest chapter of conservatism’s story, might historians have been telling that story wrong?

American historians’ relationship to conservatism itself has a troubled history. Even after Ronald Reagan’s electoral-college landslide in 1980, we paid little attention to the right: The central narrative of America’s political development was still believed to be the rise of the liberal state. But as Newt Gingrich’s right-wing revolutionaries prepared to take over the House of Representatives in 1994, the scholar Alan Brinkley published an essay called “The Problem of American Conservatism” in The American Historical Review. American conservatism, Brinkley argued, “had been something of an orphan in historical scholarship,” and that was “coming to seem an ever-more-curious omission.” The article inaugurated the boom in scholarship that brought us the story, now widely accepted, of conservatism’s triumphant rise.

That story was in part a rejection of an older story. Until the 1990s, the most influential writer on the subject of the American right was Richard Hofstadter, a colleague of Trilling’s at Columbia University in the postwar years. Hofstadter was the leader of the “consensus” school of historians; the “consensus” being Americans’ supposed agreement upon moderate liberalism as the nation’s natural governing philosophy. He didn’t take the self-identified conservatives of his own time at all seriously. He called them “pseudoconservatives” and described, for instance, followers of the red-baiting Republican senator Joseph McCarthy as cranks who salved their “status anxiety” with conspiracy theories and bizarre panaceas. He named this attitude “the paranoid style in American politics” and, in an article published a month before Barry Goldwater’s presidential defeat, asked, “When, in all our history, has anyone with ideas so bizarre, so archaic, so self-confounding, so remote from the basic American consensus, ever gone so far?”

It was a strangely ahistoric question; many of Goldwater’s ideas hewed closely to a well-established American distrust of statism that goes back all the way to the nation’s founding. It betokened too a certain willful blindness toward the evidence that was already emerging of a popular backlash against liberalism. Reagan’s gubernatorial victory in California two years later, followed by his two landslide presidential wins, made a mockery of Hofstadter. Historians seeking to grasp conservatism’s newly revealed mass appeal would have to take the movement on its own terms.

That was my aim when I took up the subject in the late 1990s — and, even more explicitly, the aim of Lisa McGirr, now of Harvard University, whose 2001 book, “Suburban Warriors: The Origins of the New American Right,” became a cornerstone of the new literature. Instead of pronouncing upon conservatism from on high, as Hofstadter had, McGirr, a social historian, studied it from the ground up, attending respectfully to what activists understood themselves to be doing. What she found was “a highly educated and thoroughly modern group of men and women,” normal participants in the “bureaucratized world of post-World War II America.” They built a “vibrant and remarkable political mobilization,” she wrote, in an effort to address political concerns that would soon be resonating nationwide — for instance, their anguish at “liberal permissiveness” about matters like rising crime rates and the teaching of sex education in public schools.

But if Hofstadter was overly dismissive of how conservatives understood themselves, the new breed of historians at times proved too credulous. McGirr diligently played down the sheer bloodcurdling hysteria of conservatives during the period she was studying — for example, one California senator’s report in 1962 that he had received thousands of letters from constituents concerned about a rumor that Communist Chinese commandos were training in Mexico for an imminent invasion of San Diego. I sometimes made the same mistake. Writing about the movement that led to Goldwater’s 1964 Republican nomination, for instance, it never occurred to me to pay much attention to McCarthyism, even though McCarthy helped Goldwater win his Senate seat in 1952, and Goldwater supported McCarthy to the end. (As did William F. Buckley.) I was writing about the modern conservative movement, the one that led to Reagan, not about the brutish relics of a more gothic, ill-formed and supposedly incoherent reactionary era that preceded it.

A few historians have provocatively followed a different intellectual path, avoiding both the bloodlessness of the new social historians and the psychologizing condescension of the old Hofstadter school. Foremost among them is Leo Ribuffo, a professor at George Washington University. Ribuffo’s surname announces his identity in the Dickensian style: Irascible, brilliant and deeply learned, he is one of the profession’s great rebuffers. He made his reputation with an award-winning 1983 study, “The Old Christian Right: The Protestant Far Right From the Great Depression to the Cold War,” and hasn’t published a proper book since — just a series of coruscating essays that frequently focus on what everyone else is getting wrong. In the 1994 issue of The American Historical Review that featured Alan Brinkley’s “The Problem of American Conservatism,” Ribuffo wrote a response contesting Brinkley’s contention, now commonplace, that Trilling was right about American conservatism’s shallow roots. Ribuffo argued that America’s anti-liberal traditions were far more deeply rooted in the past, and far angrier, than most historians would acknowledge, citing a long list of examples from “regional suspicions of various metropolitan centers and the snobs who lived there” to “white racism institutionalized in slavery and segregation.”

After the election, Ribuffo told me that if he were to write a similar response today, he would call it, “Why Is There So Much Scholarship on ‘Conservatism,’ and Why Has It Left the Historical Profession So Obtuse About Trumpism?” One reason, as Ribuffo argues, is the conceptual error of identifying a discrete “modern conservative movement” in the first place. Another reason, though, is that historians of conservatism, like historians in general, tend to be liberal, and are prone to liberalism’s traditions of politesse. It’s no surprise that we are attracted to polite subjects like “colorblind conservatism” or William F. Buckley.

Our work might have been less obtuse had we shared the instincts of a New York University professor named Kim Phillips-Fein. “Historians who write about the right should find ways to do so with a sense of the dignity of their subjects,” she observed in a 2011 review, “but they should not hesitate to keep an eye out for the bizarre, the unusual, or the unsettling.”

Looking back from that perspective, we can now see a history that is indeed unsettling — but also unsettlingly familiar. Consider, for example, an essay published in 1926 by Hiram Evans, the imperial wizard of the Ku Klux Klan, in the exceedingly mainstream North American Review. His subject was the decline of “Americanism.” Evans claimed to speak for an abused white majority, “the so-called Nordic race,” which, “with all its faults, has given the world almost the whole of modern civilization.” Evans, a former dentist, proposed that his was “a movement of plain people,” and acknowledged that this “lays us open to the charge of being hicks and ‘rubes’ and ‘drivers of secondhand Fords.’ ” But over the course of the last generation, he wrote, these good people “have found themselves increasingly uncomfortable, and finally deeply distressed,” watching a “moral breakdown” that was destroying a once-great nation. First, there was “confusion in thought and opinion, a groping and hesitancy about national affairs and private life alike, in sharp contrast to the clear, straightforward purposes of our earlier years.” Next, they found “the control of much of our industry and commerce taken over by strangers, who stacked the cards of success and prosperity against us,” and ultimately these strangers “came to dominate our government.” The only thing that would make America great again, as it were, was “a return of power into the hands of everyday, not highly cultured, not overly intellectualized, but entirely unspoiled and not de-Americanized average citizens of old stock.”

This “Second Klan” (the first was formed during Reconstruction) scrambles our pre-Trump sense of what right-wing ideology does and does not comprise. (Its doctrines, for example, included support for public education, to weaken Catholic parochial schools.) The Klan also put the predations of the international banking class at the center of its rhetoric. Its worldview resembles, in fact, the right-wing politics of contemporary Europe — a tradition, heretofore judged foreign to American politics, called “herrenvolk republicanism,” that reserved social democracy solely for the white majority. By reaching back to the reactionary traditions of the 1920s, we might better understand the alliance between the “alt-right” figures that emerged as fervent Trump supporters during last year’s election and the ascendant far-right nativist political parties in Europe.

None of this history is hidden. Indeed, in the 1990s, a rich scholarly literature emerged on the 1920s Klan and its extraordinary, and decidedly national, influence. (One hotbed of Klan activity, for example, was Anaheim, Calif. McGirr’s “Suburban Warriors” mentions this but doesn’t discuss it; neither did I in my own account of Orange County conservatism in “Before the Storm.” Again, it just didn’t seem relevant to the subject of the modern conservative movement.) The general belief among historians, however, was that the Klan’s national influence faded in the years after 1925, when Indiana’s grand dragon, D.C. Stephenson, who served as the de facto political boss for the entire state, was convicted of murdering a young woman.

But the Klan remained relevant far beyond the South. In 1936 a group called the Black Legion, active in the industrial Midwest, burst into public consciousness after members assassinated a Works Progress Administration official in Detroit. The group, which considered itself a Klan enforcement arm, dominated the news that year. The F.B.I. estimated its membership at 135,000, including a large number of public officials, possibly including Detroit’s police chief. The Associated Press reported in 1936 that the group was suspected of assassinating as many as 50 people. In 1937, Humphrey Bogart starred in a film about it. In an informal survey, however, I found that many leading historians of the right — including one who wrote an important book covering the 1930s — hadn’t heard of the Black Legion.

Stephen H. Norwood, one of the few historians who did study the Black Legion, also mined another rich seam of neglected history in which far-right vigilantism and outright fascism routinely infiltrated the mainstream of American life. The story begins with Father Charles Coughlin, the Detroit-based “radio priest” who at his peak reached as many as 30 million weekly listeners. In 1938, Coughlin’s magazine, Social Justice, began reprinting “Protocols of the Learned Elders of Zion,” a forged tract about a global Jewish conspiracy first popularized in the United States by Henry Ford. After presenting this fictitious threat, Coughlin’s paper called for action, in the form of a “crusade against the anti-Christian forces of the red revolution” — a call that was answered, in New York and Boston, by a new organization, the Christian Front. Its members were among the most enthusiastic participants in a 1939 pro-Hitler rally that packed Madison Square Garden, where the leader of the German-American Bund spoke in front of an enormous portrait of George Washington flanked by swastikas.

The Bund took a mortal hit that same year — its leader was caught embezzling — but the Christian Front soldiered on. In 1940, a New York chapter was raided by the F.B.I. for plotting to overthrow the government. The organization survived, and throughout World War II carried out what the New York Yiddish paper The Day called “small pogroms” in Boston and New York that left Jews in “mortal fear” of “almost daily” beatings. Victims who complained to authorities, according to news reports, were “insulted and beaten again.” Young Irish-Catholic men inspired by the Christian Front desecrated nearly every synagogue in Washington Heights. The New York Catholic hierarchy, the mayor of Boston and the governor of Massachusetts largely looked the other way.

Why hasn’t the presence of organized mobs with backing in powerful places disturbed historians’ conclusion that the American right was dormant during this period? In fact, the “far right” was never that far from the American mainstream. The historian Richard Steigmann-Gall, writing in the journal Social History, points out that “scholars of American history are by and large in agreement that, in spite of a welter of fringe radical groups on the right in the United States between the wars, fascism never ‘took’ here.” And, unlike in Europe, fascists did not achieve governmental power. Nevertheless, Steigmann-Gall continues, “fascism had a very real presence in the U.S.A., comparable to that on continental Europe.” He cites no less mainstream an organization than the American Legion, whose “National Commander” Alvin Owsley proclaimed in 1922, “the Fascisti are to Italy what the American Legion is to the United States.” A decade later, Chicago named a thoroughfare after the Fascist military leader Italo Balbo. In 2011, Italian-American groups in Chicago protested a movement to rename it.

Anti-Semitism in America declined after World War II. But as Leo Ribuffo points out, the underlying narrative — of a diabolical transnational cabal of aliens plotting to undermine the very foundations of Christian civilization — survived in the anti-Communist diatribes of Joseph McCarthy. The alien narrative continues today in the work of National Review writers like Andrew McCarthy (“How Obama Embraces Islam’s Sharia Agenda”) and Lisa Schiffren (who argued that Obama’s parents could be secret Communists because “for a white woman to marry a black man in 1958, or ’60, there was almost inevitably a connection to explicit Communist politics”). And it found its most potent expression in Donald Trump’s stubborn insistence that Barack Obama was not born in the United States.

Trump’s connection to this alternate right-wing genealogy is not just rhetorical. In 1927, 1,000 hooded Klansmen fought police in Queens in what The Times reported as a “free for all.” One of those arrested at the scene was the president’s father, Fred Trump. (Trump’s role in the melee is unclear; the charge — “refusing to disperse” — was later dropped.) In the 1950s, Woody Guthrie, at the time a resident of the Beach Haven housing complex the elder Trump built near Coney Island, wrote a song about “Old Man Trump” and the “Racial hate/He stirred up/In the bloodpot of human hearts/When he drawed/That color line” in one of his housing developments. In 1973, when Donald Trump was working at Fred’s side, both father and son were named in a federal housing-discrimination suit. The family settled with the Justice Department in the face of evidence that black applicants were told units were not available even as whites were welcomed with open arms.

The 1960s and ’70s New York in which Donald Trump came of age, as much as Klan-ridden Indiana in the 1920s or Barry Goldwater’s Arizona in the 1950s, was at conservatism’s cutting edge, setting the emotional tone for a politics of rage. In 1966, when Trump was 20, Mayor John Lindsay placed civilians on a board to more effectively monitor police abuse. The president of the Patrolmen’s Benevolent Association — responding, “I am sick and tired of giving in to minority groups and their gripes and their shouting” — led a referendum effort to dissolve the board that won 63 percent of the vote. Two years later, fights between supporters and protesters of George Wallace at a Madison Square Garden rally grew so violent that, The New Republic observed, “never again will you read about Berlin in the ’30s without remembering this wild confrontation here of two irrational forces.”

The rest of the country followed New York’s lead. In 1970, after the shooting deaths of four students during antiwar protests at Kent State University in Ohio, a Gallup poll found that 58 percent of Americans blamed the students for their own deaths. (“If they didn’t do what the Guards told them, they should have been mowed down,” one parent of Kent State students told an interviewer.) Days later, hundreds of construction workers from the World Trade Center site beat antiwar protesters at City Hall with their hard hats. (“It was just like Iwo Jima,” an impressed witness remarked.) That year, reports the historian Katherine Scott, 76 percent of Americans “said they did not support the First Amendment right to assemble and dissent from government policies.”

In 1973, the reporter Gail Sheehy joined a group of blue-collar workers watching the Watergate hearings in a bar in Astoria, Queens. “If I was Nixon,” one of them said, “I’d shoot every one of them.” (Who “they” were went unspecified.) This was around the time when New Yorkers were leaping to their feet and cheering during screenings of “Death Wish,” a hit movie about a liberal architect, played by Charles Bronson, who shoots muggers at point-blank range. At an October 2015 rally near Nashville, Donald Trump told his supporters: “I have a license to carry in New York, can you believe that? Nobody knows that. Somebody attacks me, oh, they’re gonna be shocked.” He imitated a cowboy-style quick draw, and an appreciative crowd shouted out the name of Bronson’s then-41-year-old film: “ ‘Death Wish’!”

In 1989, a young white woman was raped in Central Park. Five teenagers, four black and one Latino, confessed to participating in the crime. At the height of the controversy, Donald Trump took out full-page ads in all the major New York daily papers calling for the return of the death penalty. It was later proved the police had essentially tortured the five into their confessions, and they were eventually cleared by DNA evidence. Trump, however, continues to insist upon their guilt. That confidence resonates deeply with what the sociologist Lawrence Rosenthal calls New York’s “hard-hat populism” — an attitude, Rosenthal hypothesizes, that Trump learned working alongside the tradesmen in his father’s real estate empire. But the case itself also resonates deeply with narratives dating back to the first Ku Klux Klan of white womanhood defiled by dark savages. Trump’s public call for the supposed perpetrators’ hides, no matter the proof of guilt or innocence, mimics the rituals of Southern lynchings.

When Trump vowed on the campaign trail to Make America Great Again, he was generally unclear about when exactly it stopped being great. The Vanderbilt University historian Jefferson Cowie tells a story that points to a possible answer. In his book “The Great Exception,” he suggests that what historians considered the main event in 20th century American political development — the rise and consolidation of the “New Deal order” — was in fact an anomaly, made politically possible by a convergence of political factors. One of those was immigration. At the beginning of the 20th century, millions of impoverished immigrants, mostly Catholic and Jewish, entered an overwhelmingly Protestant country. It was only when that demographic transformation was suspended by the 1924 Immigration Act that majorities of Americans proved willing to vote for many liberal policies. In 1965, Congress once more allowed large-scale immigration to the United States — and it is no accident that this date coincides with the increasing conservative backlash against liberalism itself, now that its spoils would be more widely distributed among nonwhites.

The liberalization of immigration law is an obsession of the alt-right. Trump has echoed their rage. “We’ve admitted 59 million immigrants to the United States between 1965 and 2015,” he noted last summer, with rare specificity. “ ‘Come on in, anybody. Just come on in.’ Not anymore.” This was a stark contrast to Reagan, who venerated immigrants, proudly signing a 1986 bill, sponsored by the conservative Republican senator Alan Simpson, that granted many undocumented immigrants citizenship. Shortly before announcing his 1980 presidential run, Reagan even boasted of his wish “to create, literally, a common market situation here in the Americas with an open border between ourselves and Mexico.” But on immigration, at least, it is Trump, not Reagan, who is the apotheosis of the brand of conservatism that now prevails.

A puzzle remains. If Donald Trump was elected as a Marine Le Pen-style — or Hiram Evans-style — herrenvolk republican, what are we to make of the fact that he placed so many bankers and billionaires in his cabinet, and has relentlessly pursued so many 1-percent-friendly policies? More to the point, what are we to make of the fact that his supporters don’t seem to mind?

Here, however, Trump is far from unique. The history of bait-and-switch between conservative electioneering and conservative governance is another rich seam that calls out for fresh scholarly excavation: not of how conservative voters see their leaders, but of the neglected history of how conservative leaders see their voters.

In their 1987 book, “Right Turn,” the political scientists Joel Rogers and Thomas Ferguson presented public-opinion data demonstrating that Reagan’s crusade against activist government, which was widely understood to be the source of his popularity, was not, in fact, particularly popular. For example, when Reagan was re-elected in 1984, only 35 percent of voters favored significant cuts in social programs to reduce the deficit. Much excellent scholarship, well worth revisiting in the age of Trump, suggests an explanation for Reagan’s subsequent success at cutting back social programs in the face of hostile public opinion: It was business leaders, not the general public, who moved to the right, and they became increasingly aggressive and skilled in manipulating the political process behind the scenes.

But another answer hides in plain sight. The often-cynical negotiation between populist electioneering and plutocratic governance on the right has long been not so much a matter of policy as it has been a matter of show business. The media scholar Tim Raphael, in his 2009 book, “The President Electric: Ronald Reagan and the Politics of Performance,” calls the three-minute commercials that interrupted episodes of The General Electric Theater — starring Reagan and his family in their state-of-the-art Pacific Palisades home, outfitted for them by G.E. — television’s first “reality show.” For the California voters who soon made him governor, the ads created a sense of Reagan as a certain kind of character: the kindly paterfamilias, a trustworthy and nonthreatening guardian of the white middle-class suburban enclave. Years later, the producers of “The Apprentice” carefully crafted a Trump character who was the quintessence of steely resolve and all-knowing mastery. American voters noticed. Linda Lucchese, a Trump convention delegate from Illinois who had never previously been involved in politics, told me that she watched “The Apprentice” and decided that Trump would make a perfect president. “All those celebrities,” she told me: “They showed him respect.”

It is a short leap from advertising and reality TV to darker forms of manipulation. Consider the parallels since the 1970s between conservative activism and the traditional techniques of con men. Direct-mail pioneers like Richard Viguerie created hair-on-fire campaign-fund-raising letters about civilization on the verge of collapse. One 1979 pitch warned that “federal and state legislatures are literally flooded with proposed laws that are aimed at total confiscation of firearms from law-abiding citizens.” Another, from the 1990s, warned that “babies are being harvested and sold on the black market by Planned Parenthood clinics.” Recipients of these alarming missives sent checks to battle phony crises, and what they got in return was very real tax cuts for the rich. Note also the more recent connection between Republican politics and “multilevel marketing” operations like Amway (Trump’s education secretary, Betsy DeVos, is the wife of Amway’s former president and the daughter-in-law of its co-founder); and how easily some of these marketing schemes shade into the promotion of dubious miracle cures (Ben Carson, secretary of housing and urban development, with “glyconutrients”; Mike Huckabee shilling for a “solution kit” to “reverse” diabetes; Trump himself taking on a short-lived nutritional-supplements multilevel marketing scheme in 2009). The dubious grifting of Donald Trump, in short, is a part of the structure of conservative history.

Future historians won’t find all that much of a foundation for Trumpism in the grim essays of William F. Buckley, the scrupulous constitutionalist principles of Barry Goldwater or the bright-eyed optimism of Ronald Reagan. They’ll need instead to study conservative history’s political surrealists and intellectual embarrassments, its con artists and tribunes of white rage. It will not be a pleasant story. But if those historians are to construct new arguments to make sense of Trump, the first step may be to risk being impolite.

Editors’ Note: April 16, 2017
An essay on Page 36 this weekend by a historian about how conservatism has changed over the years cites Jonah Goldberg of the National Review as an example of a conservative intellectual who embraced Donald J. Trump following the presidential election. That is a mischaracterization of the views of Mr. Goldberg, who has continued to be critical of Mr. Trump.

Rick Perlstein is the author, most recently, of “The Invisible Bridge: The Fall of Nixon and the Rise of Reagan.”

Media Eco-Systems

CREDIT: ARTICLE FROM COLUMBIA JOURNALISM REVIEW

CJR has done a fine piece of work here! They studied 1.25 million stories, published by 25,000 sources, between April 2015 and November 2016.

A few choice insights:

“What we find in our data is a network of mutually-reinforcing hyper-partisan sites that revive what Richard Hofstadter called “the paranoid style in American politics,” combining decontextualized truths, repeated falsehoods, and leaps of logic to create a fundamentally misleading view of the world.”

“Take a look at Ending the Fed, which, according to Buzzfeed’s examination of fake news in November 2016, accounted for five of the top 10 fake stories in the election. In our data, Ending the Fed is indeed prominent by Facebook measures, but not by Twitter shares. In the month before the election, for example, it was one of the three most-shared right-wing sites on Facebook, alongside Breitbart and Truthfeed.”

JCR note: Take a look at www.endingthefed.com. I wasn’t even aware of it. Total scum reporting. If this website is even half as powerful as CJR says, we are in a world of hurt. For more on this, see Buzzfeed Commentary on End the Fed.

“Use of disinformation by partisan media sources is neither new nor limited to the right wing, but the insulation of the partisan right-wing media from traditional journalistic media sources, and the vehemence of its attacks on journalism in common cause with a similarly outspoken president, is new and distinctive.”

“It is a mistake to dismiss these stories as “fake news”; their power stems from a potent mix of verifiable facts (the leaked Podesta emails), familiar repeated falsehoods, paranoid logic, and consistent political orientation within a mutually-reinforcing network of like-minded sites.”

“A remarkable feature of the right-wing media ecosystem is how new it is. Out of all the outlets favored by Trump followers, only the New York Post existed when Ronald Reagan was elected president in 1980. By the election of Bill Clinton in 1992, only the Washington Times, Rush Limbaugh, and arguably Sean Hannity had joined the fray. Alex Jones of Infowars started his first outlet on the radio in 1996. Fox News was not founded until 1996. Breitbart was founded in 2007, and most of the other major nodes in the right-wing media system were created even later.”

And my own reflection is:

I am guilty, as usual, of assuming that revolutions of one time are revolutions for all time. What I mean is … I was so, so impressed with the social media revolution that arguably swept President Obama into the White House. That campaign’s ability to pivot quickly and disseminate its message through social media was a thing to behold!

What I failed to realize is that the NEXT revolution was following right on its heels! And, sadly, I think the Democratic Party missed it too.

The next revolution was the right wing social media eco-system, a complex fabric of sites that reinforced each other. Rather than spouting “fake news”, as the New York Enquirer did with regularity way back when, these sites specialized in disinformation.

And they came on the scene very recently – led by Breitbart. It is stunning to me how Breitbart nudged Fox News out of the center of the media eco-system in early 2016, and then invited it back into the center alongside itself as their views increasingly aligned. This has got to be one of the greatest media coups of all time, orchestrated by Breitbart News, whose leader is now in the White House.

====================STUDY FOLLOWS==================
Study: Breitbart-led right-wing media ecosystem altered broader media agenda
By Yochai Benkler, Robert Faris, Hal Roberts, and Ethan Zuckerman
MARCH 3, 2017

THE 2016 PRESIDENTIAL ELECTION SHOOK the foundations of American politics. Media reports immediately looked for external disruption to explain the unanticipated victory—with theories ranging from Russian hacking to “fake news.”

We have a less exotic, but perhaps more disconcerting explanation: Our own study of over 1.25 million stories published online between April 1, 2015 and Election Day shows that a right-wing media network anchored around Breitbart developed as a distinct and insulated media system, using social media as a backbone to transmit a hyper-partisan perspective to the world. This pro-Trump media sphere appears to have not only successfully set the agenda for the conservative media sphere, but also strongly influenced the broader media agenda, in particular coverage of Hillary Clinton.

While concerns about political and media polarization online are longstanding, our study suggests that polarization was asymmetric. Pro-Clinton audiences were highly attentive to traditional media outlets, which continued to be the most prominent outlets across the public sphere, alongside more left-oriented online sites. But pro-Trump audiences paid the majority of their attention to polarized outlets that have developed recently, many of them only since the 2008 election season.

Attacks on the integrity and professionalism of opposing media were also a central theme of right-wing media. Rather than “fake news” in the sense of wholly fabricated falsities, many of the most-shared stories can more accurately be understood as disinformation: the purposeful construction of true or partly true bits of information into a message that is, at its core, misleading. Over the course of the election, this turned the right-wing media system into an internally coherent, relatively insulated knowledge community, reinforcing the shared worldview of readers and shielding them from journalism that challenged it. The prevalence of such material has created an environment in which the President can tell supporters about events in Sweden that never happened, or a presidential advisor can reference a non-existent “Bowling Green massacre.”

We began to study this ecosystem by looking at the landscape of what sites people share. If a person shares a link from Breitbart, is he or she more likely also to share a link from Fox News or from The New York Times? We analyzed hyperlinking patterns, social media sharing patterns on Facebook and Twitter, and topic and language patterns in the content of the 1.25 million stories, published by 25,000 sources over the course of the election, using Media Cloud, an open-source platform for studying media ecosystems developed by Harvard’s Berkman Klein Center for Internet & Society and MIT’s Center for Civic Media.
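
JCR note: the co-sharing measure the authors describe (which sites the same user shares on the same day) is easy to sketch. Below is a minimal illustration in Python, assuming a hypothetical list of (user, day, domain) share records; Media Cloud’s real data formats and scale are of course different.

    from collections import Counter, defaultdict
    from itertools import combinations

    # Hypothetical share records: (user, day, domain shared).
    shares = [
        ("u1", "2016-10-12", "breitbart.com"),
        ("u1", "2016-10-12", "foxnews.com"),
        ("u2", "2016-10-12", "nytimes.com"),
        ("u2", "2016-10-12", "washingtonpost.com"),
        ("u3", "2016-10-12", "breitbart.com"),
        ("u3", "2016-10-12", "foxnews.com"),
    ]

    # Collect the set of domains each user shared on each day.
    by_user_day = defaultdict(set)
    for user, day, domain in shares:
        by_user_day[(user, day)].add(domain)

    # Count how often two domains were shared by the same user on the
    # same day; higher counts pull two sites closer together on the maps.
    edge_weights = Counter()
    for domains in by_user_day.values():
        for pair in combinations(sorted(domains), 2):
            edge_weights[pair] += 1

    print(edge_weights.most_common(3))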

When we map media sources this way, we see that Breitbart became the center of a distinct right-wing media ecosystem, surrounded by Fox News, the Daily Caller, the Gateway Pundit, the Washington Examiner, Infowars, Conservative Treehouse, and Truthfeed.

Fig. 1: Media sources shared on Twitter during the election (nodes sized in proportion to Twitter shares).

(Chart not printed here)

Fig. 2: Media sources shared on Twitter during the election (nodes sized in proportion to Facebook shares).

(Chart not printed here) 

The most frequently shared media sources for Twitter users that retweeted either Trump or Clinton.

Notes: In the above clouds, the nodes are sized according to how often they were shared on Twitter (Fig. 1) or Facebook (Fig. 2). The location of nodes is determined by whether two sites were shared by the same Twitter user on the same day, representing the extent to which two sites draw similar audiences. The colors assigned to a site in the map reflect the share of that site’s stories tweeted by users who also retweeted either Clinton or Trump during the election. These colors therefore reflect the attention patterns of audiences, not analysis of content of the sites. Dark blue sites draw attention in ratios of at least 4:1 from Clinton followers; red sites 4:1 Trump followers. Green sites are retweeted more or less equally by followers of each candidate. Light-blue sites draw 3:2 Clinton followers, and pink draw 3:2 Trump followers.
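
JCR note: the coloring rule above amounts to bucketing each site by the fraction of its attention that comes from Clinton retweeters. A minimal Python sketch of that rule, with the thresholds taken from the note (the function and variable names are mine, not the study’s):

    def partisanship_color(clinton_shares, trump_shares):
        """Bucket a site by the ratio of attention it drew from
        Clinton retweeters versus Trump retweeters."""
        total = clinton_shares + trump_shares
        if total == 0:
            return "unclassified"
        clinton_frac = clinton_shares / total
        if clinton_frac >= 4 / 5:    # at least 4:1 Clinton
            return "dark blue"
        if clinton_frac >= 3 / 5:    # roughly 3:2 Clinton
            return "light blue"
        if clinton_frac > 2 / 5:     # roughly even attention
            return "green"
        if clinton_frac > 1 / 5:     # roughly 3:2 Trump
            return "pink"
        return "red"                 # at least 4:1 Trump

    print(partisanship_color(100, 900))   # -> red
    print(partisanship_color(500, 500))   # -> green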

Our analysis challenges a simple narrative that the internet as a technology is what fragments public discourse and polarizes opinions, by allowing us to inhabit filter bubbles or just read “the daily me.” If technology were the most important driver towards a “post-truth” world, we would expect to see symmetric patterns on the left and the right. Instead, different internal political dynamics in the right and the left led to different patterns in the reception and use of the technology by each wing. While Facebook and Twitter certainly enabled right-wing media to circumvent the gatekeeping power of traditional media, the pattern was not symmetric.

The size of the nodes marking traditional professional media like The New York Times, The Washington Post, and CNN, surrounded by the Hill, ABC, and NBC, tells us that these media drew particularly large audiences. Their color tells us that Clinton followers attended to them more than Trump followers, and their proximity on the map to more quintessentially partisan sites—like Huffington Post, MSNBC, or the Daily Beast—suggests that attention to these more partisan outlets on the left was more tightly interwoven with attention to traditional media. The Breitbart-centered wing, by contrast, is farther from the mainstream set and lacks bridging nodes that draw attention and connect it to that mainstream.

Moreover, the fact that these asymmetric patterns of attention were similar on both Twitter and Facebook suggests that human choices and political campaigning, not one company’s algorithm, were responsible for the patterns we observe. These patterns might be the result of a coordinated campaign, but they could also be an emergent property of decentralized behavior, or some combination of both. Our data to this point cannot distinguish between these alternatives.

Another way of seeing this asymmetry is to graph how much attention is given to sites that draw attention mostly from one side of the partisan divide. There are very few center-right sites: sites that draw many Trump followers, but also a substantial number of Clinton followers. Between the moderately conservative Wall Street Journal, which draws Clinton and Trump supporters in equal shares, and the starkly partisan sites that draw Trump supporters by ratios of 4:1 or more, there are only a handful of sites. Once a threshold of partisan-only attention is reached, the number of sites in the clearly partisan right increases, and indeed exceeds the number of sites in the clearly partisan left. By contrast, starting at The Wall Street Journal and moving left, attention is spread more evenly across a range of sites whose audience reflects a gradually increasing proportion of Clinton followers as opposed to Trump followers. Unlike on the right, on the left there is no dramatic increase in either the number of sites or levels of attention they receive as we move to more clearly partisan sites.

(Chart not printed here)

Sites by partisan attention and Twitter shares.

(Chart not printed here)

Sites by partisan attention and Facebook shares.
 
The primary explanation of such asymmetric polarization is more likely politics and culture than technology.

A remarkable feature of the right-wing media ecosystem is how new it is. Out of all the outlets favored by Trump followers, only the New York Post existed when Ronald Reagan was elected president in 1980. By the election of Bill Clinton in 1992, only the Washington Times, Rush Limbaugh, and arguably Sean Hannity had joined the fray. Alex Jones of Infowars started his first outlet on the radio in 1996. Fox News was not founded until 1996. Breitbart was founded in 2007, and most of the other major nodes in the right-wing media system were created even later. Outside the right-wing, the map reflects a mixture of high attention to traditional journalistic outlets and dispersed attention to new, online-only, and partisan media.

The pattern of hyper-partisan attack was set during the primary campaign, targeting not only opposing candidates but also media that did not support Trump’s candidacy. In our data, looking at the most widely-shared stories during the primary season and at the monthly maps of media during those months, we see that Jeb Bush, Marco Rubio, and Fox News were the targets of attack.

The first and seventh most highly-tweeted stories from Infowars.com, one of the 10 most influential sites in the right-wing media system.
 
The February map, for example, shows Fox News as a smaller node quite distant from the Breitbart-centered right. It reflects the fact that Fox News received less attention than it did earlier or later in the campaign, and less attention, in particular, from users who also paid attention to the core Breitbart-centered sites and whose attention would have drawn Fox closer to Breitbart. The March map is similar, and only over April and May would Fox’s overall attention, and attention from Breitbart followers, revive.

This sidelining of Fox News in early 2016 coincided with sustained attacks against it by Breitbart. The top-20 stories in the right-wing media ecology during January included, for example, “Trump Campaign Manager Reveals Fox News Debate Chief Has Daughter Working for Rubio.” More generally, the five most-widely shared stories in which Breitbart refers to Fox are stories aimed to delegitimize Fox as the central arbiter of conservative news, tying it to immigration, terrorism and Muslims, and corruption:
• The Anti-Trump Network: Fox News Money Flows into Open Borders Group;
• NY Times Bombshell Scoop: Fox News Colluded with Rubio to Give Amnesty to Illegal Aliens;
• Google and Fox TV Invite Anti-Trump, Hitler-Citing, Muslim Advocate to Join Next GOP TV-Debate;
• Fox, Google Pick 1994 Illegal Immigrant To Ask Question In Iowa GOP Debate;
• Fox News At Facebook Meeting Is Misdirection: Murdoch and Zuckerberg Are Deeply Connected Over Immigration.

The repeated theme of conspiracy, corruption, and media betrayal is palpable in these highly shared Breitbart headlines linking Fox News, Rubio, and illegal immigration.
 

As the primaries ended, our maps show that attention to Fox revived and was more closely integrated with Breitbart and the remainder of the right-wing media sphere. The primary target of the right-wing media then became all other traditional media. While the prominence of different media sources in the right-wing sphere varies when viewed by shares on Facebook and Twitter, the content and core structure, with Breitbart at the center, are stable across platforms. Infowars, and similarly radical sites Truthfeed and Ending the Fed, gain in prominence in the Facebook map.

(Chart not printed here)

October 2016 by Twitter shares

(Chart not printed here)

October 2016 by Facebook shares

These two maps reveal the same pattern. Even in the highly-charged pre-election month, everyone outside the Breitbart-centered universe forms a tightly interconnected attention network, with major traditional mass media and professional sources at the core. The right, by contrast, forms its own insular sphere.
 
The right-wing media was also able to bring the focus on immigration, Clinton emails, and scandals more generally to the broader media environment. A sentence-level analysis of stories throughout the media environment suggests that Donald Trump’s substantive agenda—heavily focused on immigration and direct attacks on Hillary Clinton—came to dominate public discussions.

Number of sentences in mainstream media that address Trump and Clinton issues and scandals.
 
Coverage of Clinton overwhelmingly focused on emails, followed by the Clinton Foundation and Benghazi. Coverage of Trump included some scandal, but the most prevalent topic of Trump-focused stories was his main substantive agenda item—immigration—and his arguments about jobs and trade also received more attention than his scandals.

Proportion of election coverage that discusses immigration for selected media sources.
 
While mainstream media coverage was often critical, it nonetheless revolved around the agenda that the right-wing media sphere set: immigration. Right-wing media, in turn, framed immigration in terms of terror, crime, and Islam, as a review of Breitbart and other right-wing media stories about immigration most widely shared on social media exhibits.

Immigration is the key topic around which Trump and Breitbart found common cause; just as Trump made this a focal point for his campaign, Breitbart devoted disproportionate attention to the topic.
 

Top immigration-related stories from right-wing media shared on Twitter or Facebook.
 
What we find in our data is a network of mutually-reinforcing hyper-partisan sites that revive what Richard Hofstadter called “the paranoid style in American politics,” combining decontextualized truths, repeated falsehoods, and leaps of logic to create a fundamentally misleading view of the world. “Fake news,” which implies content made up out of whole cloth by politically disinterested parties out to make a buck off Facebook advertising dollars, rather than propaganda and disinformation, is not an adequate term. By repetition, variation, and circulation through many associated sites, this network of sites makes its claims familiar to readers, and this fluency with the core narrative gives credence to the incredible.

Take a look at Ending the Fed, which, according to Buzzfeed’s examination of fake news in November 2016, accounted for five of the top 10 fake stories in the election. In our data, Ending the Fed is indeed prominent by Facebook measures, but not by Twitter shares. In the month before the election, for example, it was one of the three most-shared right-wing sites on Facebook, alongside Breitbart and Truthfeed. While Ending the Fed clearly had great success marketing stories on Facebook, our analysis shows nothing distinctive about the site—it is simply part-and-parcel of the Breitbart-centered sphere.

And the false claims perpetuated in Ending the Fed’s most-shared posts are well-established tropes in right-wing media: the leaked Podesta emails, alleged Saudi funding of Clinton’s campaign, and a lack of credibility in media. The most Facebook-shared story by Ending the Fed in October was “IT’S OVER: Hillary’s ISIS Email Just Leaked & It’s Worse Than Anyone Could Have Imagined.” See also Infowars’ “Saudi Arabia has funded 20% of Hillary’s Presidential Campaign, Saudi Crown Prince Claims,” and Breitbart’s “Clinton Cash: Khizr Khan’s Deep Legal, Financial Connections to Saudi Arabia, Hillary’s Clinton Foundation Tie Terror, Immigration, Email Scandals Together.” This mix of claims and facts, linked through paranoid logic, characterizes much of the most-shared content linked to Breitbart. It is a mistake to dismiss these stories as “fake news”; their power stems from a potent mix of verifiable facts (the leaked Podesta emails), familiar repeated falsehoods, paranoid logic, and consistent political orientation within a mutually-reinforcing network of like-minded sites.

Use of disinformation by partisan media sources is neither new nor limited to the right wing, but the insulation of the partisan right-wing media from traditional journalistic media sources, and the vehemence of its attacks on journalism in common cause with a similarly outspoken president, is new and distinctive.

Rebuilding a basis on which Americans can form a shared belief about what is going on is a precondition of democracy, and the most important task confronting the press going forward. Our data strongly suggest that most Americans, including those who access news through social networks, continue to pay attention to traditional media that follow professional journalistic practices, and to cross-reference what they read on partisan sites with what they read on mass media sites.

To accomplish this, traditional media needs to reorient, not by developing better viral content and clickbait to compete in the social media environment, but by recognizing that it is operating in a propaganda and disinformation-rich environment. This, not Macedonian teenagers or Facebook, is the real challenge of the coming years. Rising to this challenge could usher in a new golden age for the Fourth Estate.

The election study was funded by the Open Society Foundations U.S. Program. Media Cloud has received funding from the Bill and Melinda Gates Foundation, the Robert Wood Johnson Foundation, the Ford Foundation, and the Open Society Foundations.

Yochai Benkler, Robert Faris, Hal Roberts, and Ethan Zuckerman are the authors. Benkler is a professor at Harvard Law School and co-director of the Berkman Klein Center for Internet and Society at Harvard; Faris is research director at BKC; Roberts is a fellow at BKC and technical lead of Media Cloud; and Zuckerman is director of the MIT Center for Civic Media.

Our miserable 21st century

Below is dense – but worth it. It is written by a conservative, but an honest one.

It is the best documentation I have found on the thesis that I wrote about last year: that the 21st century economy is a structural mess, and the mess is a non-partisan one!

My basic contention is really simple:

9/11 diverted us from this issue, and then …
we compounded the diversion with two idiotic wars, and then …
we compounded the diversion further with an idiotic, devastating recession, and then …
we started to stabilize, which is why President Obama goes to the head of the class, and then …
we built a three-ring circus, and elected a clown as the ringmaster.

While we watch this three-ring circus in Washington, no one is paying attention to this structural problem in the economy … so we are wasting time, when we should be tackling this central issue of our time. It’s a really complicated one, and there are no easy answers (sorry, Trump and Bernie Sanders).

PUT YOUR POLITICAL ARTILLERY DOWN AND READ ON …..

=======BEGIN=============

CREDIT: https://www.commentarymagazine.com/articles/our-miserable-21st-century/

Our Miserable 21st Century
From work to income to health to social mobility, the year 2000 marked the beginning of what has become a distressing era for the United States
NICHOLAS N. EBERSTADT / FEB. 15, 2017

On the morning of November 9, 2016, America’s elite—its talking and deciding classes—woke up to a country they did not know. To most privileged and well-educated Americans, especially those living in its bicoastal bastions, the election of Donald Trump had been a thing almost impossible even to imagine. What sort of country would go and elect someone like Trump as president? Certainly not one they were familiar with, or understood anything about.

Whatever else it may or may not have accomplished, the 2016 election was a sort of shock therapy for Americans living within what Charles Murray famously termed “the bubble” (the protective barrier of prosperity and self-selected associations that increasingly shield our best and brightest from contact with the rest of their society). The very fact of Trump’s election served as a truth broadcast about a reality that could no longer be denied: Things out there in America are a whole lot different from what you thought.

Yes, things are very different indeed these days in the “real America” outside the bubble. In fact, things have been going badly wrong in America since the beginning of the 21st century.

It turns out that the year 2000 marks a grim historical milestone of sorts for our nation. For whatever reasons, the Great American Escalator, which had lifted successive generations of Americans to ever higher standards of living and levels of social well-being, broke down around then—and broke down very badly.

The warning lights have been flashing, and the klaxons sounding, for more than a decade and a half. But our pundits and prognosticators and professors and policymakers, ensconced as they generally are deep within the bubble, were for the most part too distant from the distress of the general population to see or hear it. (So much for the vaunted “information era” and “big-data revolution.”) Now that those signals are no longer possible to ignore, it is high time for experts and intellectuals to reacquaint themselves with the country in which they live and to begin the task of describing what has befallen the country in which we have lived since the dawn of the new century.

II
Consider the condition of the American economy. In some circles people still widely believe, as one recent New York Times business-section article cluelessly insisted before the inauguration, that “Mr. Trump will inherit an economy that is fundamentally solid.” But this is patent nonsense. By now it should be painfully obvious that the U.S. economy has been in the grip of deep dysfunction since the dawn of the new century. And in retrospect, it should also be apparent that America’s strange new economic maladies were almost perfectly designed to set the stage for a populist storm.

Ever since 2000, basic indicators have offered oddly inconsistent readings on America’s economic performance and prospects. It is curious and highly uncharacteristic to find such measures so very far out of alignment with one another. We are witnessing an ominous and growing divergence between three trends that should ordinarily move in tandem: wealth, output, and employment. Depending upon which of these three indicators you choose, America looks to be heading up, down, or more or less nowhere.

From the standpoint of wealth creation, the 21st century is off to a roaring start. By this yardstick, it looks as if Americans have never had it so good and as if the future is full of promise. Between early 2000 and late 2016, the estimated net worth of American households and nonprofit institutions more than doubled, from $44 trillion to $90 trillion. (SEE FIGURE 1.)

Although that wealth is not evenly distributed, it is still a fantastic sum of money—an average of over a million dollars for every notional family of four. This upsurge of wealth took place despite the crash of 2008—indeed, private wealth holdings are over $20 trillion higher now than they were at their pre-crash apogee. The value of American real-estate assets is near or at all-time highs, and America’s businesses appear to be thriving. Even before the “Trump rally” of late 2016 and early 2017, U.S. equities markets were hitting new highs—and since stock prices are strongly shaped by expectations of future profits, investors evidently are counting on the continuation of the current happy days for U.S. asset holders for some time to come.
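
JCR note: the “average of over a million dollars for every notional family of four” is simple division. A quick Python check, assuming a 2016 U.S. population of roughly 323 million (my round number, not the essay’s):

    total_net_worth = 90e12    # $90 trillion, per the essay
    population = 323e6         # approximate 2016 U.S. population (my assumption)
    notional_families = population / 4
    print(total_net_worth / notional_families)   # about $1.11 million per family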

A rather less cheering picture, though, emerges if we look instead at real trends for the macro-economy. Here, performance since the start of the century might charitably be described as mediocre, and prospects today are no better than guarded.

The recovery from the crash of 2008—which unleashed the worst recession since the Great Depression—has been singularly slow and weak. According to the Bureau of Economic Analysis (BEA), it took nearly four years for America’s gross domestic product (GDP) to re-attain its late 2007 level. As of late 2016, total value added to the U.S. economy was just 12 percent higher than in 2007. (SEE FIGURE 2.) The situation is even more sobering if we consider per capita growth. It took America six and a half years—until mid-2014—to get back to its late 2007 per capita production levels. And in late 2016, per capita output was just 4 percent higher than in late 2007—nine years earlier. By this reckoning, the American economy looks to have suffered something close to a lost decade.

But there was clearly trouble brewing in America’s macro-economy well before the 2008 crash, too. Between late 2000 and late 2007, per capita GDP growth averaged less than 1.5 percent per annum. That compares with the nation’s long-term postwar 1948–2000 per capita growth rate of almost 2.3 percent, which in turn can be compared to the “snap back” tempo of 1.1 percent per annum since per capita GDP bottomed out in 2009. Between 2000 and 2016, per capita growth in America has averaged less than 1 percent a year. To state it plainly: With postwar, pre-21st-century rates for the years 2000–2016, per capita GDP in America would be more than 20 percent higher than it is today.
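
JCR note: the “more than 20 percent” counterfactual is a compounding exercise. A quick Python check, using 2.3 percent as the postwar rate and a full 1 percent for 2000–2016 (the essay says the actual figure was less than that, so the true gap is even wider):

    postwar_rate = 0.023   # 1948-2000 per capita growth, per the essay
    actual_rate = 0.010    # 2000-2016 average, an upper bound per the essay
    years = 16
    gap = (1 + postwar_rate) ** years / (1 + actual_rate) ** years - 1
    print(f"{gap:.1%}")    # about 22.7%, i.e. "more than 20 percent higher"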

The reasons for America’s newly fitful and halting macroeconomic performance are still a puzzlement to economists and a subject of considerable contention and debate.1 Economists are generally in consensus, however, in one area: They have begun redefining the growth potential of the U.S. economy downwards. The U.S. Congressional Budget Office (CBO), for example, suggests that the “potential growth” rate for the U.S. economy at full employment of factors of production has now dropped below 1.7 percent a year, implying a sustainable long-term annual per capita economic growth rate for America today of well under 1 percent.

Then there is the employment situation. If 21st-century America’s GDP trends have been disappointing, labor-force trends have been utterly dismal. Work rates have fallen off a cliff since the year 2000 and are at their lowest levels in decades. We can see this by looking at the estimates by the Bureau of Labor Statistics (BLS) for the civilian employment rate, the jobs-to-population ratio for adult civilian men and women. (SEE FIGURE 3.) Between early 2000 and late 2016, America’s overall work rate for Americans age 20 and older underwent a drastic decline. It plunged by almost 5 percentage points (from 64.6 to 59.7). Unless you are a labor economist, you may not appreciate just how severe a falloff in employment such numbers attest to. Postwar America never experienced anything comparable.

From peak to trough, the collapse in work rates for U.S. adults between 2008 and 2010 was roughly twice the amplitude of what had previously been the country’s worst postwar recession, back in the early 1980s. In that previous steep recession, it took America five years to re-attain the adult work rates recorded at the start of 1980. This time, the U.S. job market has as yet, in early 2017, scarcely begun to claw its way back up to the work rates of 2007—much less back to the work rates from early 2000.

As may be seen in Figure 3, U.S. adult work rates never recovered entirely from the recession of 2001—much less the crash of ’08. And the work rates being measured here include people who are engaged in any paid employment—any job, at any wage, for any number of hours of work at all.

On Wall Street and in some parts of Washington these days, one hears that America has gotten back to “near full employment.” For Americans outside the bubble, such talk must seem nonsensical. It is true that the oft-cited “civilian unemployment rate” looked pretty good by the end of the Obama era—in December 2016, it was down to 4.7 percent, about the same as it had been back in 1965, at a time of genuine full employment. The problem here is that the unemployment rate only tracks joblessness for those still in the labor force; it takes no account of workforce dropouts. Alas, the exodus out of the workforce has been the big labor-market story for America’s new century. (At this writing, for every unemployed American man between 25 and 55 years of age, there are another three who are neither working nor looking for work.) Thus the “unemployment rate” increasingly looks like an antique index devised for some earlier and increasingly distant war: the economic equivalent of a musket inventory or a cavalry count.

By the criterion of adult work rates, by contrast, employment conditions in America remain remarkably bleak. From late 2009 through early 2014, the country’s work rates more or less flatlined. So far as can be told, this is the only “recovery” in U.S. economic history in which that basic labor-market indicator almost completely failed to respond.

Since 2014, there has finally been a measure of improvement in the work rate—but it would be unwise to exaggerate the dimensions of that turnaround. As of late 2016, the adult work rate in America was still at its lowest level in more than 30 years. To put things another way: If our nation’s work rate today were back up to its start-of-the-century highs, well over 10 million more Americans would currently have paying jobs.
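
JCR note: the “well over 10 million” figure follows from the work-rate gap cited earlier (64.6 percent in early 2000 versus 59.7 percent in late 2016). A quick Python check, assuming roughly 247 million Americans age 20 and older in late 2016 (my estimate, not the essay’s):

    work_rate_2000 = 0.646   # early-2000 work rate, age 20+, per the essay
    work_rate_2016 = 0.597   # late-2016 work rate, per the essay
    adults = 247e6           # approximate population age 20+ (my assumption)
    missing_jobs = (work_rate_2000 - work_rate_2016) * adults
    print(f"{missing_jobs / 1e6:.1f} million")   # about 12.1 million jobs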

There is no way to sugarcoat these awful numbers. They are not a statistical artifact that can be explained away by population aging, or by increased educational enrollment for adult students, or by any other genuine change in contemporary American society. The plain fact is that 21st-century America has witnessed a dreadful collapse of work.

For an apples-to-apples look at America’s 21st-century jobs problem, we can focus on the 25–54 population—known to labor economists for self-evident reasons as the “prime working age” group. For this key labor-force cohort, work rates in late 2016 were down almost 4 percentage points from their year-2000 highs. That is a jobs gap approaching 5 million for this group alone.

It is not only that work rates for prime-age males have fallen since the year 2000—they have, but the collapse of work for American men is a tale that goes back at least half a century. (I wrote a short book last year about this sad saga.2) What is perhaps more startling is the unexpected and largely unnoticed fall-off in work rates for prime-age women. In the U.S. and all other Western societies, postwar labor markets underwent an epochal transformation. After World War II, work rates for prime women surged, and continued to rise—until the year 2000. Since then, they too have declined. Current work rates for prime-age women are back to where they were a generation ago, in the late 1980s. The 21st-century U.S. economy has been brutal for male and female laborers alike—and the wreckage in the labor market has been sufficiently powerful to cancel, and even reverse, one of our society’s most distinctive postwar trends: the rise of paid work for women outside the household.

In our era of no more than indifferent economic growth, 21st–century America has somehow managed to produce markedly more wealth for its wealthholders even as it provided markedly less work for its workers. And trends for paid hours of work look even worse than the work rates themselves. Between 2000 and 2015, according to the BEA, total paid hours of work in America increased by just 4 percent (as against a 35 percent increase for 1985–2000, the 15-year period immediately preceding this one). Over the 2000–2015 period, however, the adult civilian population rose by almost 18 percent—meaning that paid hours of work per adult civilian have plummeted by a shocking 12 percent thus far in our new American century.
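
JCR note: the “shocking 12 percent” drop is just the ratio of the two growth figures the essay cites. A quick Python check:

    hours_growth = 1.04        # total paid hours of work, 2000-2015: +4 percent
    population_growth = 1.18   # adult civilian population, 2000-2015: +18 percent
    per_adult_change = hours_growth / population_growth - 1
    print(f"{per_adult_change:.1%}")   # about -11.9%, a 12 percent plunge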

This is the terrible contradiction of economic life in what we might call America’s Second Gilded Age (2000—). It is a paradox that may help us understand a number of overarching features of our new century. These include the consistent findings that public trust in almost all U.S. institutions has sharply declined since 2000, even as growing majorities hold that America is “heading in the wrong direction.” It provides an immediate answer to why overwhelming majorities of respondents in public-opinion surveys continue to tell pollsters, year after year, that our ever-richer America is still stuck in the middle of a recession. The mounting economic woes of the “little people” may not have been generally recognized by those inside the bubble, or even by many bubble inhabitants who claimed to be economic specialists—but they proved to be potent fuel for the populist fire that raged through American politics in 2016.

III
So general economic conditions for many ordinary Americans—not least of these, Americans who did not fit within the academy’s designated victim classes—have been rather more insecure than those within the comfort of the bubble understood. But the anxiety, dissatisfaction, anger, and despair that rage within our borders today are not wholly a reaction to the way our economy is misfiring. On the nonmaterial front, it is likewise clear that many things in our society are going wrong and yet seem beyond our powers to correct.

Some of these gnawing problems are by no means new: A number of them (such as family breakdown) can be traced back at least to the 1960s, while others are arguably as old as modernity itself (anomie and isolation in big anonymous communities, secularization and the decline of faith). But a number have roared down upon us by surprise since the turn of the century—and others have redoubled with fearsome new intensity since roughly the year 2000.

American health conditions seem to have taken a seriously wrong turn in the new century. It is not just that overall health progress has been shockingly slow, despite the trillions we devote to medical services each year. (Which “Cold War babies” among us would have predicted we’d live to see the day when life expectancy in East Germany was higher than in the United States, as is the case today?)

Alas, the problem is not just slowdowns in health progress—there also appears to have been positive retrogression for broad and heretofore seemingly untroubled segments of the national population. A short but electrifying 2015 paper by Anne Case and Nobel Economics Laureate Angus Deaton talked about a mortality trend that had gone almost unnoticed until then: rising death rates for middle-aged U.S. whites. By Case and Deaton’s reckoning, death rates rose slightly over the 1999–2013 period for all non-Hispanic white men and women 45–54 years of age—but they rose sharply for those with high-school degrees or less, and for this less-educated grouping most of the rise in death rates was accounted for by suicides, chronic liver cirrhosis, and poisonings (including drug overdoses).

Though some researchers, for highly technical reasons, suggested that the mortality spike might not have been quite as sharp as Case and Deaton reckoned, there is little doubt that the spike itself has taken place. Health has been deteriorating for a significant swath of white America in our new century, thanks in large part to drug and alcohol abuse. All this sounds a little too close for comfort to the story of modern Russia, with its devastating vodka- and drug-binging health setbacks. Yes: It can happen here, and it has. Welcome to our new America.

In December 2016, the Centers for Disease Control and Prevention (CDC) reported that for the first time in decades, life expectancy at birth in the United States had dropped very slightly (to 78.8 years in 2015, from 78.9 years in 2014). Though the decline was small, it was statistically meaningful—rising death rates were characteristic of males and females alike; of blacks and whites and Latinos together. (Only black women avoided mortality increases—their death levels were stagnant.) A jump in “unintentional injuries” accounted for much of the overall uptick.

It would be unwarranted to place too much portent in a single year’s mortality changes; slight annual drops in U.S. life expectancy have occasionally been registered in the past, too, followed by continued improvements. But given other developments we are witnessing in our new America, we must wonder whether the 2015 decline in life expectancy is just a blip, or the start of a new trend. We will find out soon enough. It cannot be encouraging, though, that the Human Mortality Database, an international consortium of demographers who vet national data to improve comparability between countries, has suggested that health progress in America essentially ceased in 2012—that the U.S. gained on average only about a single day of life expectancy at birth between 2012 and 2014, before the 2015 turndown.

The opioid epidemic of pain pills and heroin that has been ravaging and shortening lives from coast to coast is a new plague for our new century. The terrifying novelty of this particular drug epidemic, of course, is that it has gone (so to speak) “mainstream” this time, effecting breakout from disadvantaged minority communities to Main Street White America. By 2013, according to a 2015 report by the Drug Enforcement Administration, more Americans died from drug overdoses (largely but not wholly opioid abuse) than from either traffic fatalities or guns. The dimensions of the opioid epidemic in the real America are still not fully appreciated within the bubble, where drug use tends to be more carefully limited and recreational. In Dreamland, his harrowing and magisterial account of modern America’s opioid explosion, the journalist Sam Quinones notes in passing that “in one three-month period” just a few years ago, according to the Ohio Department of Health, “fully 11 percent of all Ohioans were prescribed opiates.” And of course many Americans self-medicate with licit or illicit painkillers without doctors’ orders.

In the fall of 2016, Alan Krueger, former chairman of the President’s Council of Economic Advisers, released a study that further refined the picture of the real existing opioid epidemic in America: According to his work, nearly half of all prime working-age male labor-force dropouts—an army now totaling roughly 7 million men—currently take pain medication on a daily basis.

We already knew from other sources (such as BLS “time use” surveys) that the overwhelming majority of the prime-age men in this un-working army generally don’t “do civil society” (charitable work, religious activities, volunteering), or for that matter much in the way of child care or help for others in the home either, despite the abundance of time on their hands. Their routine, instead, typically centers on watching—watching TV, DVDs, Internet, hand-held devices, etc.—and indeed watching for an average of 2,000 hours a year, as if it were a full-time job. But Krueger’s study adds a poignant and immensely sad detail to this portrait of daily life in 21st-century America: In our mind’s eye we can now picture many millions of un-working men in the prime of life, out of work and not looking for jobs, sitting in front of screens—stoned.

But how did so many millions of un-working men, whose incomes are limited, manage en masse to afford a constant supply of pain medication? OxyContin is not cheap. As Dreamland carefully explains, one main mechanism today has been the welfare state: more specifically, Medicaid, Uncle Sam’s means-tested health-benefits program. Here is how it works (we are with Quinones in Portsmouth, Ohio):

[The Medicaid card] pays for medicine—whatever pills a doctor deems that the insured patient needs. Among those who receive Medicaid cards are people on state welfare or on a federal disability program known as SSI. . . . If you could get a prescription from a willing doctor—and Portsmouth had plenty of them—Medicaid health-insurance cards paid for that prescription every month. For a three-dollar Medicaid co-pay, therefore, addicts got pills priced at thousands of dollars, with the difference paid for by U.S. and state taxpayers. A user could turn around and sell those pills, obtained for that three-dollar co-pay, for as much as ten thousand dollars on the street.

In 21st-century America, “dependence on government” has thus come to take on an entirely new meaning.

You may now wish to ask: What share of prime-working-age men these days are enrolled in Medicaid? According to the Census Bureau’s SIPP survey (Survey of Income and Program Participation), as of 2013, over one-fifth (21 percent) of all civilian men between 25 and 55 years of age were Medicaid beneficiaries. For prime-age people not in the labor force, the share was over half (53 percent). And for un-working Anglos (non-Hispanic white men not in the labor force) of prime working age, the share enrolled in Medicaid was 48 percent.

By the way: Of the entire un-working prime-age male Anglo population in 2013, nearly three-fifths (57 percent) were reportedly collecting disability benefits from one or more government disability programs. Disability checks and means-tested benefits cannot support a lavish lifestyle. But they can offer a permanent alternative to paid employment, and for growing numbers of American men, they do. The rise of these programs has coincided with the death of work for larger and larger numbers of American men not yet of retirement age. We cannot say that these programs caused the death of work for millions upon millions of younger men: What is incontrovertible, however, is that they have financed it—just as Medicaid inadvertently helped finance America’s immense and increasing appetite for opioids in our new century.

It is intriguing to note that America’s nationwide opioid epidemic has not been accompanied by a nationwide crime wave (excepting of course the apparent explosion of illicit heroin use). Just the opposite: As best can be told, national victimization rates for violent crimes and property crimes have both reportedly dropped by about two-thirds over the past two decades.3 The drop in crime over the past generation has done great things for the general quality of life in much of America. There is one complication from this drama, however, that inhabitants of the bubble may not be aware of, even though it is all too well known to a great many residents of the real America. This is the extraordinary expansion of what some have termed America’s “criminal class”—the population sentenced to prison or convicted of felony offenses—in recent decades. This trend did not begin in our century, but it has taken on breathtaking enormity since the year 2000.

Most well-informed readers know that the U.S. currently has a higher share of its populace in jail or prison than almost any other country on earth, that Barack Obama and others talk of our criminal-justice process as “mass incarceration,” and that well over 2 million men were in prison or jail in recent years.4 But only a tiny fraction of all living Americans ever convicted of a felony is actually incarcerated at this very moment. Quite the contrary: Maybe 90 percent of all sentenced felons today are out of confinement and living more or less among us. The reason: the basic arithmetic of sentencing and incarceration in America today. Correctional release and sentenced community supervision (probation and parole) guarantee a steady annual “flow” of convicted felons back into society to augment the very considerable “stock” of felons and ex-felons already there. And this “stock” is by now truly enormous.

One forthcoming demographic study by Sarah Shannon and five other researchers estimates that the cohort of current and former felons in America very nearly reached 20 million by the year 2010. If its estimates are roughly accurate, and if America’s felon population has continued to grow at more or less the same tempo traced out for the years leading up to 2010, we would expect it to surpass 23 million persons by the end of 2016 at the latest. Rough calculations on the same basis suggest that America’s population of non-institutionalized adults with a felony conviction somewhere in their past broke the 20 million mark by the end of 2016, and that about 17 million men in our general population have a felony conviction somewhere in their CV. That works out to one of every eight adult males in America today.
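To make that rough arithmetic explicit (a sketch built from the figures above; the adult male population of roughly 130 million is our assumption, not the study’s):

\[ 20\ \text{million (2010)} + 6 \times 0.5\ \text{million per year} \approx 23\ \text{million by end-2016}; \qquad \frac{17\ \text{million men}}{130\ \text{million adult males}} \approx 0.13 \approx \frac{1}{8}. \]

The 17 million figure implies that men account for roughly three-quarters of the total, consistent with men’s heavy predominance among felony convictions.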

We have to use rough estimates here, rather than precise official numbers, because the government does not collect any data at all on the size or socioeconomic circumstances of this population of 20 million, and never has. Amazing as this may sound and scandalous though it may be, America has, at least to date, effectively banished this huge group—a group roughly twice the total size of our illegal-immigrant population and an adult population larger than that in any state but California—to a near-total and seemingly unending statistical invisibility. Our ex-cons are, so to speak, statistical outcasts who live in a darkness our polity does not care enough to illuminate—beyond the scope or interest of public policy, unless and until they next run afoul of the law.

Thus we cannot describe with any precision or certainty what has become of those who make up our “criminal class” after their (latest) sentencing or release. In the most stylized terms, however, we might guess that their odds in the real America are not all that favorable. And when we consider some of the other trends we have already mentioned—employment, health, addiction, welfare dependence—we can see the emergence of a malign new nationwide undertow, pulling downward against social mobility.

Social mobility has always been the jewel in the crown of the American mythos and ethos. The idea (not without a measure of truth to back it up) was that people in America are free to achieve according to their merit and their grit—unlike in other places, where they are trapped by barriers of class or the misfortune of misrule. Nearly two decades into our new century, there are unmistakable signs that America’s fabled social mobility is in trouble—perhaps even in serious trouble.

Consider the following facts. First, according to the Census Bureau, geographical mobility in America has been on the decline for three decades, and in 2016 the annual movement of households from one location to the next was reportedly at an all-time (postwar) low. Second, as a study by three Federal Reserve economists and a Notre Dame colleague demonstrated last year, “labor market fluidity”—the churning between jobs that among other things allows people to get ahead—has been on the decline in the American labor market for decades, with no sign as yet of a turnaround. Finally, and not least important, a December 2016 report by the “Equality of Opportunity Project,” a team led by the formidable Stanford economist Raj Chetty, calculated that the odds of a 30-year-old’s earning more than his parents at the same age were now just 51 percent: down from 86 percent 40 years ago. Other researchers who have examined the same data argue that the odds may not be quite as low as the Chetty team concludes, but agree that the chances of surpassing one’s parents’ real income have been on the downswing and are probably lower now than ever before in postwar America.

Thus the bittersweet reality of life for real Americans in the early 21st century: Even though the American economy still remains the world’s unrivaled engine of wealth generation, those outside the bubble may have less of a shot at the American Dream than has been the case for decades, maybe generations—possibly even since the Great Depression.

IV
The funny thing is, people inside the bubble are forever talking about “economic inequality,” that wonderful seminar construct, and forever virtue-signaling about how personally opposed they are to it. By contrast, “economic insecurity” is akin to a phrase from an unknown language. But if we were somehow to find a “Google Translate” function for communicating from real America into the bubble, an important message might be conveyed:

The abstraction of “inequality” doesn’t matter a lot to ordinary Americans. The reality of economic insecurity does. The Great American Escalator is broken—and it badly needs to be fixed.

With the election of 2016, Americans within the bubble finally learned that the 21st century has gotten off to a very bad start in America. Welcome to the reality. We have a lot of work to do together to turn this around.

1 Some economists suggest the reason has to do with the unusual nature of the Great Recession: that downturns born of major financial crises intrinsically require longer adjustment and correction periods than the more familiar, ordinary business-cycle downturn. Others have proposed theories to explain why the U.S. economy may instead have downshifted to a more tepid tempo in the Bush-Obama era. One such theory holds that the pace of productivity is dropping because the scale of recent technological innovation is unrepeatable. There is also a “secular stagnation” hypothesis, surmising we have entered into an age of very low “natural real interest rates” consonant with significantly reduced demand for investment. What is incontestable is that the 10-year moving average for per capita economic growth is lower for America today than at any time since the Korean War—and that the slowdown in growth commenced in the decade before the 2008 crash. (It is also possible that the anemic status of the U.S. macro-economy is being exaggerated by measurement issues—productivity improvements from information technology, for example, have been oddly elusive in our officially reported national output—but few today would suggest that such concealed gains would totally transform our view of the real economy’s true performance.)
2 Nicholas Eberstadt, Men Without Work: America’s Invisible Crisis (Templeton Press, 2016)
3 This is not to ignore the gruesome exceptions—places like Chicago and Baltimore—or to neglect the risk that crime may make a more general comeback: It is simply to acknowledge one of the bright trends for America in the new century.
4 In 2013, roughly 2.3 million men were behind bars according to the Bureau of Justice Statistics.

One could be forgiven for wondering what Kellyanne Conway, a close adviser to President Trump, was thinking recently when she turned the White House briefing room into the set of the Home Shopping Network. “Go buy Ivanka’s stuff!” she told Fox News viewers during an interview, referring to the clothing and accessories line of the president’s daughter. It’s not clear if her cheerleading led to any spike in sales, but it did lead to calls for an investigation into whether she violated federal ethics rules, and prompted the White House to later state that it had “counseled” Conway about her behavior.

To understand what provoked Conway’s on-air marketing campaign, look no further than the ongoing boycotts targeting all things Trump. This latest manifestation of the passion to impose financial harm to make a political point has taken things in a new and odd direction. Once, boycotts were serious things, requiring serious commitment and real sacrifice. There were boycotts by aggrieved workers, such as the United Farm Workers, against their employers; boycotts by civil-rights activists and religious groups; and boycotts of goods produced by nations like apartheid-era South Africa. Many of these efforts, sustained over years by committed cadres of activists, successfully pressured businesses and governments to change.

Since Trump’s election, the boycott has become less an expression of long-term moral and practical opposition and more an expression of the left’s collective id. As Harvard Business School professor Michael Norton told the Atlantic recently, “Increasingly, the way we express our political opinions is through buying or not buying instead of voting or not voting.” And evidently the way some people express political opinions when someone they don’t like is elected is to launch an endless stream of virtue-signaling boycotts. Democratic politicians ostentatiously boycotted Trump’s inauguration. New Balance sneaker owners vowed to boycott the company and filmed themselves torching their shoes after a company spokesman tweeted praise for Trump. Trump detractors called for a boycott of L.L. Bean after one of its board members was discovered to have (gasp!) given a personal contribution to a pro-Trump PAC.

By their nature, boycotts are a form of proxy warfare, tools wielded by consumers who want to send a message to a corporation or organization about their displeasure with specific practices.

Trump-era boycotts, however, merely seem to be a way to channel an overwhelming yet vague feeling of political frustration. Take the “Grab Your Wallet” campaign, whose mission, described in humblebragging detail on its website, is as follows: “Since its first humble incarnation as a screenshot on October 11, the #GrabYourWallet boycott list has grown as a central resource for understanding how our own consumer purchases have inadvertently supported the political rise of the Trump family.”

So this boycott isn’t against a specific business or industry; it’s a protest against one man and his children, with trickle-down effects for anyone who does business with them. Grab Your Wallet doesn’t just boycott Trump-branded hotels and golf courses; the group also targets businesses such as Bed Bath & Beyond because it carries Ivanka Trump diaper bags. Even QVC and the Carnival Cruise corporation are targeted for boycott because they advertise on Celebrity Apprentice, which supposedly “further enriches Trump.”

Grab Your Wallet has received support from “notable figures” such as “Don Cheadle, Greg Louganis, Lucy Lawless, Roseanne Cash, Neko Case, Joyce Carol Oates, Robert Reich, Pam Grier, and Ben Cohen (of Ben & Jerry’s),” according to the group’s website. This rogues’ gallery of celebrity boycotters has been joined by enthusiastic hashtag activists on Twitter who post remarks such as, “Perhaps fed govt will buy all Ivanka merch & force prisoners & detainees in coming internment camps 2 wear it” and “Forced to #DressLikeaWoman by a sexist boss? #GrabYourWallet and buy a nice FU pantsuit at Trump-free shops.” There’s even a website, dontpaytrump.com, which offers a free plug-in extension for your Web browser. It promises a “simple Trump boycott extension that makes it easy to be a conscious consumer and keep your money out of Trump’s tiny hands.”

Many of the companies targeted for boycott—Bed Bath & Beyond, QVC, TJ Maxx, Amazon—are the kind of retailers that carry moderately priced merchandise that working- and middle-class families can afford. But the list of Grab Your Wallet–approved alternatives for shopping consists of places like Bergdorf’s and Barneys. These are hardly accessible choices for the TJ Maxx customer. Indeed, there is more than a whiff of quasi-racist elitism in the self-congratulatory tweets posted by Grab Your Wallet supporters, such as this response to news that Nordstrom is no longer planning to carry Ivanka’s shoe line: “Soon we’ll see Ivanka shoes at Dollar Store, next to Jalapeno Windex and off-brand batteries.”

If Grab Your Wallet is really about “flexing of consumer power in favor of a more respectful, inclusive society,” then it has some work to do.

And then there are the conveniently malleable ethics of the anti-Trump boycott brigade. A small number of affordable retailers like Old Navy made the Grab Your Wallet cut for “approved” alternatives for shopping. But just a few years ago, a progressive website described in detail the “living hell of a Bangladeshi sweatshop” that manufactures Old Navy clothing. Evidently progressives can now sleep peacefully at night knowing large corporations like Old Navy profit from young Bangladeshis making 20 cents an hour and working 17-hour days churning out cheap cargo pants—as long as they don’t bear a Trump label.

In truth, it matters little if Ivanka’s fashion business goes bust. It was always just a branding game anyway. The world will go on in the absence of Ivanka-named suede ankle booties. And in some sense the rash of anti-Trump boycotts is just what Trump, who frequently calls for boycotts of media outlets such as Rolling Stone and retailers like Macy’s, deserves.

But the left’s boycott braggadocio might prove short-lived. Nordstrom denied that it dropped Ivanka’s line of apparel and shoes because of pressure from the Grab Your Wallet campaign; it blamed lagging sales. And the boycotters’ tone of moral superiority—like the ridiculous posturing of the anti-Trump left’s self-flattering designation, “the resistance”—won’t endear them to the Trump voters they must convert if they hope to gain ground in the midterm elections.

As for inclusiveness, as one contributor to Psychology Today noted, the typical participant in boycotts, “especially consumer and ecological boycotts,” is a young, well-educated, politically left woman, a profile that somewhat undermines the idea of boycotts as a weapon of the weak and oppressed.

Self-indulgent protests and angry boycotts are no doubt cathartic for their participants (a 2016 study in the Journal of Consumer Affairs cited psychological research that found “by venting their frustrations, consumers can diminish their negative psychological states and, as a result, experience relief”). But such protests are not always ultimately catalytic. As researchers noted in a study published recently at Social Science Research Network, protesters face what they call “the activists’ dilemma,” which occurs when “tactics that raise awareness also tend to reduce popular support.” As the study found, “while extreme tactics may succeed in attracting attention, they typically reduce popular public support for the movement by eroding bystanders’ identification with the movement, ultimately deterring bystanders from supporting the cause or becoming activists themselves.”

The progressive left should be thoughtful about the reality of such protest fatigue. Writing in the Guardian, Jamie Peck recently enthused: “Of course, boycotts alone will not stop Trumpism. Effective resistance to authoritarianism requires more disruptive actions than not buying certain products . . . . But if there’s anything the past few weeks have taught us, it’s that resistance must take as many forms as possible, and it’s possible to call attention to the ravages of neoliberalism while simultaneously allying with any and all takers against the immediate dangers posed by our impetuous orange president.”

Boycotts are supposed to be about accountability. But accountability is a two-way street. The motives and tactics of the boycotters themselves are of the utmost importance. In his book about consumer boycotts, scholar Monroe Friedman advises that successful ones depend on a “rationale” that is “simple, straightforward, and appear[s] legitimate.” Whatever Trump’s flaws (and they are legion), by “going low” with scattershot boycotts, the left undermines its own legitimacy—and its claims to the moral high ground of “resistance” in the process.


UHVDC and China

Credit: Economist Article about UHVDC and China

A greener grid
China’s embrace of a new electricity-transmission technology holds lessons for others
The case for high-voltage direct-current connectors
Jan 14th 2017

You cannot negotiate with nature. From the offshore wind farms of the North Sea to the solar panels glittering in the Atacama desert, renewable energy is often generated in places far from the cities and industrial centres that consume it. To boost renewables and drive down carbon-dioxide emissions, a way must be found to send energy over long distances efficiently.

The technology already exists. Most electricity is transmitted today as alternating current (AC), which works well over short and medium distances. But transmission over long distances requires very high voltages, which can be tricky for AC systems. Ultra-high-voltage direct-current (UHVDC) connectors are better suited to such spans. These high-capacity links not only make the grid greener, but also make it more stable by balancing supply. The same UHVDC links that send power from distant hydroelectric plants, say, can be run in reverse when their output is not needed, pumping water back above the turbines.
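Why high voltage matters comes down to simple circuit arithmetic (a textbook sketch, not from the article): a line delivering power P at voltage V carries current I = P/V, so the resistive loss in the conductors is

\[ P_{\text{loss}} = I^{2}R = \frac{P^{2}R}{V^{2}}, \]

and doubling the transmission voltage quarters the losses on the same wires. At very high voltages over very long distances, AC also incurs reactive and skin-effect losses that DC avoids, which is why UHVDC becomes the more efficient option on routes of a thousand kilometres or more.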

Boosters of UHVDC lines envisage a supergrid capable of moving energy around the planet. That is wildly premature. But one country has grasped the potential of these high-capacity links. State Grid, China’s state-owned electricity utility, is halfway through a plan to spend $88bn on UHVDC lines between 2009 and 2020. It wants 23 lines in operation by 2030.

That China has gone furthest in this direction is no surprise. From railways to cities, China’s appetite for big infrastructure projects is legendary. China’s deepest wells of renewable energy are remote—think of the sun-baked Gobi desert, the windswept plains of Xinjiang and the mountain ranges of Tibet where rivers drop precipitously. Concerns over pollution give the government an additional incentive to locate coal-fired plants away from population centres. But its embrace of the technology holds two big lessons for others. The first is a demonstration effect. China shows that UHVDC lines can be built on a massive scale. The largest, already under construction, will have the capacity to power Greater London almost three times over, and will span more than 3,000km.
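That headline figure is easy to sanity-check (our arithmetic, on the assumption that the line described is the roughly 12-gigawatt Changji–Guquan link and that Greater London’s average electricity demand runs on the order of 4–5 GW):

\[ \frac{12\ \text{GW}}{4.5\ \text{GW}} \approx 2.7, \]

or “almost three times over,” as claimed.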

The second lesson concerns the co-ordination problems that come with long-distance transmission. UHVDCs are as much about balancing interests as grids. The costs of construction are hefty. Utilities that already sell electricity at high prices are unlikely to welcome competition from suppliers of renewable energy; consumers in renewables-rich areas who buy electricity at low prices may balk at the idea of paying more because power is being exported elsewhere. Reconciling such interests is easier the fewer the utilities involved—and in China, State Grid has a monopoly.

That suggests it will be simpler for some countries than others to follow China’s lead. Developing economies that lack an established electricity infrastructure have an advantage. Solar farms on Africa’s plains and hydroplants on its powerful rivers can use UHVDC lines to get energy to growing cities. India has two lines on the drawing board, and should have more.

Things are more complicated in the rich world. Europe’s utilities work pretty well together but a cross-border UHVDC grid will require a harmonised regulatory framework. America is the biggest anomaly. It is a continental-sized economy with the wherewithal to finance UHVDCs. It is also horribly fragmented. There are 3,000 utilities, each focused on supplying power to its own customers. Consumers a few states away are not a priority, no matter how much sense it might make to send them electricity. A scheme to connect the three regional grids in America is stuck. The only way that America will create a green national grid will be if the federal government throws its weight behind it.

Live wire
Building a UHVDC network does not solve every energy problem. Security of supply remains an issue, even within national borders: any attacker who wants to disrupt the electricity supply to China’s east coast will soon have a 3,000km-long cable to strike. Other routes to a cleaner grid are possible, such as distributed solar power and battery storage. But to bring about a zero-carbon grid, UHVDC lines will play a role. China has its foot on the gas. Others should follow.
This article appeared in the Leaders section of the print edition under the headline “A greener grid”