Category Archives: Academics

Academics, History, Philosophy, Literature, Music, Drama, Science, Mathematics, Logic, Sociology, Economics, Behavioral Economics, Psychology

A History of Ireland

A BRIEF HISTORY OF IRELAND

CREDIT: http://www.livinginireland.ie/en/culture_society/a_brief_history_of_ireland/
CREDIT: http://www.wesleyjohnston.com/users/ireland/past/history/index.htm
CREDIT: The Course of Irish History, Moody and Martin. (Note this is the companion book to the acclaimed 21-part television series of the same name).
CREDIT: The Great Shame: And the Triumph of the Irish in the English-Speaking World, Thomas Keneally
CREDIT: Ireland: Land, People, History, Richard Killeen
CREDIT: The Druids, by Stuart Piggott
CREDIT: The Ancient Celts, by Barry Cunliffe

Preface
Ireland is an island – at once remote and close. Its remote west coast faces the Atlantic. Its near east coast faces the Irish Sea, easily traversed from Dublin to Holyhead, Wales (110 km). At its closest point, Ireland is just 13 miles from Scotland. It is this closeness that invites adventurers, plunderers, and settlers alike.

It is a beautiful island. For those looking for beauty as an end in itself, Ireland should be high on the list. Beware though: J. H. Andrews cautions that “nature has remained inhospitable”; “seared by winter gales”; “goodness washed from the soil by drenching rain”. All of this beauty comes at a cost.

Ireland’s natural beauty includes highlands; steep-sided glens; lowland fields; peat bogs; abundant rivers, streams, and harbors; hilltops; and long views everywhere. But note: these very natural attributes have confounded invaders for thousands of years.

Ireland is also blessed with what people have added over the years: lovely villages, great ports, winding roads, and castles. And yet, even here it should be noted that savvy people placed their castles at the most secure points, where they command land and river travel, and where the winding roads allow complex traversals through steep hillsides.

It is a country of poets and writers and musicians and farmers. You can’t go far without seeing or being inspired by the land that WB Yeats and James Joyce so proudly wrote about.

Ireland has a tumultuous and complex history. Every Irish man, woman and child proudly feels all the glory and all the pain of this history. It is a rich heritage. And it is by no means monolithic.

Ireland’s heritage is conquest, first by Celts, then by Irish Kings, then by Vikings, then Anglo-Normans, and finally by English Kings. From conquest comes rebellion – the Irish are rebels to the core. With rebellion comes, way too frequently, crushing defeat and retribution.

In spite of this, using today’s lens, Ireland looks resilient and proud – the Irish always come back! And they look victorious – ultimately triumphing. It is an exciting history of leaders, their shifting lands, their shifting allegiances, treacheries and betrayals, myths and oral histories, and evolving forms of church and state.

Overview
The history below starts from the beginning. This overview starts from the end – today – and works backward, from present-day Ireland to the first evidence of human activity.

Today, Ireland is split in two – the Republic of Ireland (population 4.8 million) and Northern Ireland (1.8 million). The split is the legacy of the Irish War of Independence of 1919-1921, whose end culminated in a compromise that neither side liked and that has been contentious ever since.

The Republic of Ireland is independent, and joined the EU in 1973. Northern Ireland is part of the United Kingdom. It was created in 1920, and the War of Independence left it unchanged.

From 1798 to 1922, Ireland lost population and failed to recover from two brutal events: the crushing defeat of the Irish rebellion of 1798, and the devastating Potato Famine of 1846-1848. English rulers were not kind during this period, and they stoked the hatred that the Irish felt for the English.

Why such hatred? Because, from 1541 to 1798, the English attempted to impose their laws and religion. The Irish resisted, and the resistance often became open rebellion. These rebellions were brutally crushed, and retribution was swift. The Irish hated the English for it. The English brought to Ireland hated movements of all kinds: movements to take away Irish religion; to take away Irish land; to take away Irish rights; and to invoke cruel forms of genocide – witness their actions during the potato famine. Henry VIII kicked off this horrible period when he had himself declared King of Ireland in 1541.

From 1169 to 1541, Ireland was in the hands of Anglo-Norman lords and Irish Kings. The lords were the result of the successful conquest of much of Ireland by “Strongbow”, an English lord, beginning in 1169. He brought with him a coalition of English and Normans.

From the eighth to the twelfth century – 400 years – Ireland was ruled by Irish Kings and Vikings. Saint Patrick, centuries earlier, is widely credited with leading Ireland toward Christianity. Viking influence faded as Irish Kings and Christianity came to dominate.

Before the fifth century, Ireland was an island of farmers. Remarkably, the island escaped Roman domination. Historians record that Roman leaders discussed invading Ireland, but those plans were never carried out. As a result, Ireland is one of the few countries in Europe without legacies of the Roman Empire.

A BRIEF HISTORY OF IRELAND

Early Irish History
Ireland’s early settlers trace to about 8000 BC, some 10,000 years ago – a relatively late stage in European terms. Farming began around 4000 BC. Metal-working, thanks to copper deposits, began around 2000 BC. Evidence of European trading (tools, etc.) traces to about 800 BC. Celts came to Ireland from mainland Europe around 300 BC, and their influence lasted well over a thousand years. The Celts had mastered iron-working, and no doubt had superior weapons (note that the Celts sacked Rome in 390 BC). Ireland’s language, Irish (or Gaeilge), stems from the Celtic languages, which are considered “Indo-European” – like Sanskrit, Greek, German, Latin, Slavonic, and Persian.

A fitting summary comes from Binchy, who says that Celtic culture, which spread throughout northwest Europe, was “tribal, rural, hierarchical and familiar”. While Roman culture was redefining civilization, Ireland remained outside its reach – backward by today’s standards.

Many writers say that seeing this era of Ireland as “backward” is a bit unfair. Indeed, early romantics, in idealizing this culture, saw it as virtuous. One spoke of it as a political virtue: “turning humanity back to its state of prehistoric innocence.” Surely this is a stretch, written by those aware of the tendency of civilization to turn harsh, even corrupt, at many historical stages. Nonetheless, it is good to remember that disparaging remarks from the civilized have a counterpoint – the longing for a virtuous state of innocence.

It’s a great image, this “Golden Age”, where “good-hearted barbarians” live outside of corruption, outside of war-making, outside of technological disruption, living on the land and epitomizing the “noble savage”. Even today, that idealization inspires us, and allows us to dream of a distant, remote past where innocence is primary and treachery takes a back seat. Follow this path far enough and the subject of Druids inevitably comes up. They were considered an elite group, with special attributes (some refer to magic). One of those attributes was their wandering, which gave them insights not available to most. They were considered “learned and holy men.”

Early Christian and Viking Ireland – 600 AD
By now, this hierarchical Irish culture had continuity with its past. It had settled into an agricultural island composed of at least 150 small “kingdoms” (called tuatha). Each tuath had its king, and from this unit the hierarchy described below was articulated.

Above the king could be an over-king (ri tuatha), and above him a provincial king (ri coicid).

Beneath this societal unit were extended families, called “fine”, including all relations of male descent going back five generations. Property rights of each fine were respected, and laws evolved that assigned assets, liabilities, and even crimes according to the fine.

So here is where one’s heritage becomes of vital importance: status in society depended on your fine. At the top of this very stratified society was the king of a tuath, but an ollam (chief poet) and a bishop were accorded status equal to a king’s.

Indeed, the ollam enjoyed a special place in Irish society: they could roam freely from tuath to tuath, and their brethren were the “filid” – the “men of art”. The filid, identified by some with the Druids, became the class that eventually formed the basis of higher learning – the “true bearers of the ancient Celtic tradition”.

And kingdoms mattered because of tribute. Unless you were related to the king, you were no doubt obligated to pay him tribute. In return, the king offered security and a rough rule of law.

Interestingly, there were no towns or villages then. The island had fewer than 500,000 people.

Saint Patrick and other missionaries brought Christianity in the early to mid-5th century, replacing the indigenous pagan religion by the year 600 AD. The Rock of Cashel stands today as a testimony to this long history of missionaries for Christ, led by Saint Patrick.

At the end of the 8th century, Vikings from Scandinavia began to invade and then gradually intermingle with Irish society. The Vikings founded Dublin, Ireland’s capital city, in 988. Following the defeat of the Vikings by Brian Boru, the High King of Ireland, at Clontarf in 1014, Viking influence faded.

The Anglo-Norman Era and “Strongbow” – 1166
The English and Normans arrived in the twelfth century (1166), driving out the Vikings. They brought walled towns, castles, churches, and monasteries. They also increased agriculture and commerce in Ireland.

They were actually invited. The just-defeated King of Leinster visited England and met with King Henry II, as well as many noblemen. His mission: garner support to retake his lands. The king demurred, so he sought the support of Anglo-Norman noblemen in re-establishing his prior dominance (he had been badly defeated in battle by the “High King of Ireland”, who was from Connaught).

Several noblemen offered help. They were led by the Earl of Pembroke, also known as “Strongbow”. Strongbow’s army landed in 1169 and quickly re-took Leinster.

Importantly, they went further and also took Dublin, exiling its Viking king. In 1171, the old King of Leinster (Mac Murchada) died, and Strongbow became King of Leinster.

The English King Henry II sailed to Ireland in 1171 – clearly a move to consolidate his power now that his subjects had vanquished their opponents.

The visit went very well, and resulted in Strongbow pledging his loyalty to Henry. Every other major Irish king did as well (other than O’Connor of Connacht and O’Neill in the north).

In the years that followed, the English Kings, with the Anglo-Norman lords, consolidated their new lands by building castles, setting up market towns, and engaging in large-scale colonisation. Prince John arrived in Waterford in 1185 and initiated the building of a number of castles in the South East region. An example is Dungarvan Castle (King John’s Castle).

Importantly, the Anglo-Normans did not move the native Irish from their land in any major way.

Ireland became a Kingdom in 1199 with Papal approval. All English laws were extended to Ireland in 1210. By 1261, most of Ireland was ruled by Anglo-Norman lords, under the watchful eye of the King of England. Thus, what Strongbow had initiated in 1169 had evolved into a Kingdom under the King of England.

But stability was to last less than 100 years. Ireland kept spinning out of control.

English Kings Lose to Scotland’s Bruce family – 1315

In fact, by 1316, the King had essentially lost control of Ireland, except for Dublin. Cleverly, the few remaining Irish Kings forged an alliance with Scotland that upset English domination. Here is what happened: the Irish Lords in Ulster – O’Neill and O’Donnell – allied with Robert Bruce of Scotland.

Bruce, crowned King of Scotland in 1306, had made his military prowess legendary with his victory at Bannockburn in 1314. With Scottish help, the Irish lords essentially defeated the Anglo-Normans in 1315. Robert’s brother Edward Bruce was named King of Ireland, reigning until he was killed in battle with the Normans in 1318.

From then until Henry VIII ultimately succeeded in 1541, English Kings made several major attempts to regain control. The invasions of this period were deeply resented by all locals, including the lords. They were hailed as successes at the time, but the gains were short-lived. By 1450, English control of Ireland had been reduced to Dublin.

Meanwhile, across the centuries, many Norman lords and their descendants essentially “went native” – increasingly adopting Irish customs and culture through inter-marriage and the like.

Anglo-Norman Feudalism – Counties, Liberties and Charters
The Irish adventure of the English Kings can best be understood by studying incentives – the motives by which men went to war, won land, and then subjugated themselves to the King.

Before discussing the “feudal” system of government and tax collection, a simple method of keeping score is useful. The score in question: how can you easily measure how much control England had at any given moment? The answer lies in counting how many areas fell into each of three jurisdiction types at any point in time:

“Counties” were areas in Ireland that were under the complete control of the King and his feudal system.
“Liberties” were areas in Ireland that were loyal to the King, but did not participate in tributes to the king via feudal arrangements.
“Charters” were areas in Ireland that were independent of the King, completely under Irish Kings and Lords. However, importantly, these areas had treaties with the King that specified how they could co-exist.

Knowing this, the King had every incentive to create Counties, which would pay taxes and pledge allegiance to the Crown. The King’s men would conquer lands that were Charters and make them Liberties or Counties. Then they would slowly evolve any remaining Liberties into Counties. This describes the first three centuries of evolving Irish government. In 1250, most of northern Ireland was chartered land, while Counties were strongest on the east coast. (A toy tally along these lines is sketched below.)
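To make this scoring method concrete, here is a minimal sketch in Python. It is a toy model only: the area names and their classifications below are invented for illustration, not historical data.

from collections import Counter

# The three jurisdiction types described above, ordered by degree of English control.
JURISDICTION_TYPES = ("county", "liberty", "charter")

def control_score(jurisdictions):
    """Tally jurisdiction types as a rough proxy for English control.
    `jurisdictions` maps an area name to its type; the more counties
    (and the fewer charters), the tighter the Crown's grip."""
    tally = Counter(jurisdictions.values())
    return {kind: tally.get(kind, 0) for kind in JURISDICTION_TYPES}

# Hypothetical snapshot, loosely in the spirit of 1250: counties strong on
# the east coast, chartered (independent) lands dominant in the north.
ireland_1250 = {
    "Dublin": "county",
    "Waterford": "county",
    "Kildare": "liberty",
    "Ulster (O'Neill)": "charter",
    "Tir Conaill (O'Donnell)": "charter",
}

print(control_score(ireland_1250))  # {'county': 2, 'liberty': 1, 'charter': 2}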

So how did it come to pass that Ireland was mostly counties by the 16th century?

The incentives for the warrior noblemen were enormous. The King, on seeing that Irish land had been conquered by a loyal subject, would frequently grant that land back to the nobleman (mostly the land just conquered) – in return for the feudal arrangements discussed below.

Anglo-Norman society was based on the “feudal” system of government, standard practice throughout Europe in that day. Under this system, the king owned all land. He in turn granted it to Lords. In return, the Lords agreed to pay an annual “tribute” to the King. This could take the form of money, goods, or even armies in times of war.

The Lords, in turn, granted parcels of their lordships to peasants (ordinary people) in return for money, goods, or a soldier in time of war. Many lords set up market towns in their lordships to encourage trade and to convert goods into money. At the bottom of the hierarchy were landless peasants, granted a plot on another peasant’s holding in return for manual labour on the farm.

The Irish system, by contrast, had no overall ownership of land; rather, each individual Lord had absolute ownership of his land. Commoners worked the Lord’s land in return for accommodation and food, or were granted land in return for an annual payment of crops.
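As a reading aid only – the names and tributes are invented, and real feudal arrangements were far messier – the Anglo-Norman chain of grants and tributes described above can be sketched in Python like this:

from dataclasses import dataclass, field

@dataclass
class Holder:
    """A party in the feudal chain: the king, a lord, or a peasant."""
    name: str
    granted_by: object = None     # who granted this holder their land (None for the king)
    annual_tribute: str = ""      # what flows back up: money, goods, or soldiers
    tenants: list = field(default_factory=list)

    def grant(self, tenant_name, tribute):
        """Grant a parcel to a tenant in return for an annual tribute."""
        tenant = Holder(tenant_name, granted_by=self, annual_tribute=tribute)
        self.tenants.append(tenant)
        return tenant

# The king owns all land and grants it downward; tribute flows back up the chain.
king = Holder("King of England")
lord = king.grant("Lord of a liberty", tribute="knights in wartime")
peasant = lord.grant("Peasant family", tribute="a share of the harvest")
labourer = peasant.grant("Landless labourer", tribute="manual labour")

for h in (lord, peasant, labourer):
    print(f"{h.name} holds land from {h.granted_by.name}, owing {h.annual_tribute}")

Each grant records who holds land from whom and what flows back up – the incentive structure the text describes.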

Henry VIII, New English Control and “Plantations” – 1534
King Henry VIII declared himself head of the Church in England in 1534. He also ensured that the Irish Parliament declared him King of Ireland in 1541 – this time without papal sanction, given his break with Rome.

There followed 250 years of brutality that culminated in a deep hatred by the Irish for the English.

From 1541 to 1798, Ireland was in the grip of the English Crown. Those 250 years are scandalous because the English brought to Ireland hated movements of all kinds: movements to change the Irish religion (from Catholic to Anglican); to steal Irish land; to take away Irish rights; and in many ways to invoke on the Irish people a peculiar form of genocide (look, for example, at the history of the decimating Irish potato famine of 1846-8). Henry VIII kicked off this horrible period when he had himself declared King of Ireland in 1541. The English ruled brutally from 1541 to 1798, and the Irish hated them for it.

To enforce his will and change England and Ireland from Catholic to Protestant (he had named himself head of the Anglican church), the King adopted a “plantation” policy. Under this policy, Protestants received massive land grants, displacing Catholic land-holders. Thousands of English and Scottish Protestant settlers arrived. Catholics lost their land.

From this period on, a common theme emerged: Irish rebellion, followed by crushing defeat, followed by retribution. The victors, the English, understandably attempted to impose their will. In doing so, though, they created a hated history – of depriving Catholics of their land, then their rights, and then their very lives.

Bloody 17th Century and “Penal Laws”
The 17th century was a bloody one in Ireland. England imposed the “Penal Laws” on Ireland, stripping rights from Catholics. These laws took away, for example, the right to rent or own land above a certain value. They outlawed Catholic clergy, and they forbade Catholic higher education and entry into the professions. During the 18th century the Penal Laws eased, but by then resentment and hate dominated the country.

Irish Defeated During Rebellion – 1798
In 1782, Henry Grattan (a Protestant) successfully agitated for a more favourable trading relationship with England and for greater legislative independence for the Parliament of Ireland. Inspired by the French Revolution, an organisation called the United Irishmen was formed in 1791 with the ideal of bringing Irish people of all religions together to reform and reduce Britain’s power in Ireland. Its leader was a young Dublin Protestant called Theobald Wolfe Tone. The United Irishmen were the inspiration for the armed rebellion of 1798. Despite attempts at French assistance, the rebellion failed, and in 1801 the Act of Union was passed, uniting Ireland politically with Britain.

Catholic Emancipation and Daniel O’Connell – 1829
In 1829, one of Ireland’s greatest leaders, Daniel O’Connell, known as “the great liberator”, was central in getting the Act of Catholic Emancipation passed in the parliament in London. He succeeded in getting the total ban on voting by Catholics lifted; they could now also become Members of the Parliament in London.

After this success, O’Connell aimed to repeal the Act of Union and re-establish an Irish parliament. However, this was a much bigger task, and O’Connell’s approach of non-violence was not supported by all. Such political issues were overshadowed, however, by the worst disaster and tragedy in Irish history – the Great Famine.

The Great Famine – 1845-1847
When a potato blight destroyed the crops of 1845, 1846, and 1847, disaster followed. Potatoes were the staple food of a growing population, and Ireland’s population plummeted, ultimately falling from 8 million toward 4 million: roughly one million died, and millions more left, seeking refuge in America. The response of the British government also contributed to the disaster. While people were starving, Ireland was forced to export abundant harvests of wheat and dairy products.

The famine brought death. But, with lasting consequences, it also brought resentment and more hate.

First Move to “Home Rule” Defeated – 1877 – Irish Home Rule Party and Charles Stewart Parnell
“Home Rule” became the cry of all who wanted self-government in Ireland. Until 1877, there was no effective challenge to Britain’s rule over Ireland. Then, at the age of 31, Charles Stewart Parnell (1846-91) became leader of the Irish Home Rule Party, which became the Irish Parliamentary Party in 1882.

Parnell failed to achieve Home Rule. For his efforts, though, he was widely recognized as ‘the uncrowned king of Ireland’. His efforts gave the idea of Home Rule legitimacy.

Irish Unionists – led by Sir Edward Carson in Northern Ireland
In Ulster, in the north of Ireland, the majority of people were Protestants. They favored the union with Britain, fearing that as a minority in a Catholic-controlled country they would suffer retribution and a loss of rights. The Unionist Party was led by Sir Edward Carson, who threatened an armed struggle for a separate Northern Ireland if independence was granted to Ireland.

Second Move to Home Rule Successful – 1912 – but not enacted because of WWI
A Home Rule Bill was passed in 1912, but crucially it was not brought into law: the Home Rule Act was suspended at the outbreak of World War One in 1914. Many Irish nationalists believed that Home Rule would be granted after the war if they supported the British war effort. John Redmond, the leader of the Irish Parliamentary Party, encouraged people to join the British forces, and many did.

However, a minority of nationalists did not trust the British government, leading to one of the most pivotal events in Irish history: the Easter Rising.

Declaration of Independence I – Easter Rising – Irish rebels defeated – 1916
On April 24 (Easter Monday) 1916, two groups of armed rebels, the Irish Volunteers and the Irish Citizen Army, seized key locations in Dublin. The Irish Volunteers were led by Padraig Pearse, the Irish Citizen Army by James Connolly. Outside the GPO (General Post Office) in Dublin city centre, Pearse read the “Proclamation of the Republic” (the Irish Declaration of Independence), which declared an Irish Republic independent of Britain. Understandably, Britain was appalled.

Battles ensued with casualties on both sides and among the civilian population. The Easter Rising finished on April 30th with the surrender of the rebels – a bitter defeat for the insurgents.

But public opinion was about to turn. The majority of the public had actually opposed the Rising; opinion shifted when the British administration responded by executing many of its leaders and participants.

All seven signatories to the proclamation were executed, including Pearse and Connolly. These executions became a war cry in the war that followed.

Declaration of Independence II – 1919
Two key figures involved in the Rising who avoided execution were Éamon de Valera and Michael Collins. In the December 1918 elections, the Sinn Féin party, led by Éamon de Valera, won a majority of the Irish seats in the House of Commons. On 21 January 1919, the Sinn Féin members of the House of Commons gathered in Dublin to form an Irish Republic parliament called Dáil Éireann, unilaterally declaring power over the entire island.

War of Independence – 1919-1921
What followed is known as the “War of Independence”, when the Irish Republican Army – the army of the newly declared Irish Republic – waged a guerrilla war against British forces from 1919 to 1921. One of its key leaders was Michael Collins.

Collins led the operation that became known as “Bloody Sunday” – one day of violence in Dublin on November 21, 1920, during the Irish War of Independence. In total, more than thirty people were killed: fourteen British soldiers, police, and intelligence agents; fourteen Irish civilians; and three Irish republican prisoners.

The day began when Collins’s IRA squads set out to assassinate the “Cairo Gang” – British undercover intelligence agents. IRA members went to a number of addresses and shot dead fourteen people.

Later that afternoon, members of the Auxiliary Division and RIC opened fire on the crowd at a Gaelic football match in Croke Park, killing or fatally wounding fourteen civilians and wounding at least sixty. That evening, three IRA suspects being held in Dublin Castle were beaten and killed by their captors, who claimed they were trying to escape.

Overall, while its events cost relatively few lives, Bloody Sunday was considered a victory for the IRA, as Collins’s operation severely damaged British intelligence. The net effect was to increase support for the IRA at home and abroad.

Third attempt at Home Rule adopted – 1920 Government of Ireland Act – Creates Southern and Northern Ireland under Home Rule
This Act of Parliament was intended to keep Ireland part of the United Kingdom through Home Rule institutions. The Act established two new subdivisions of Ireland: the six north-eastern counties were to form “Northern Ireland”, while the larger part of the country was to form “Southern Ireland”.

Home Rule never took effect in Southern Ireland, due to the Irish War of Independence, which resulted instead in the Anglo-Irish Treaty and the establishment in 1922 of the Irish Free State. However, the institutions set up under this Act for Northern Ireland continued to function until they were suspended by the British parliament in 1972 as a consequence of the Troubles. The Parliament of Northern Ireland, with its built-in Protestant majority, presided over relative stability for decades.

Anglo-Irish Treaty Ends War of Independence – Divides Ireland – Dec 1921
In December 1921 a treaty was signed by the Irish and British authorities. One of its key provisions was a compromise: Ireland was to be divided into Northern Ireland (6 counties) and the Irish Free State (26 counties), which was established in 1922.

As with the Home Rule Act of 1920, most signatories of the treaty were hopeful that the division of the country into Northern Ireland and the Free State would be temporary. However, the division has stayed in place for almost a century.

While a clear level of independence was finally granted to Ireland, the contents of the treaty split Irish public and political opinion.

Instead of bringing peace, the signing of the Treaty plunged the country into civil war.

Civil War – 1922-1923
A Civil War followed from 1922 to 1923 between pro- and anti-Treaty forces, with Collins (pro-Treaty) and de Valera (anti-Treaty) on opposing sides. The consequences of the Civil War can be seen to this day: the two largest political parties in Ireland have their roots in its opposing sides – Fine Gael (pro-Treaty) and Fianna Fáil (anti-Treaty).

It was in 1923, after such war-torn strife, that William Butler Yeats was awarded the Nobel Prize for Literature. Fittingly, the citation read: “for his always inspired poetry, which in a highly artistic form gives expression to the spirit of a whole nation”.

Republic of Ireland – 1937 – Joins EU 1973
The 1937 Constitution re-established the state, which was formally declared the Republic of Ireland in 1949.
In 1973 Ireland joined the European Economic Community (now the European Union).

Northern Ireland Catholics Rebel – 1968
Stability in Northern Ireland ended in the late 1960s due to systematic discrimination against Catholics. 1968 saw the beginning of Catholic civil rights marches in Northern Ireland. These protests led to violent reactions from some Protestant loyalists and from the police force. What followed was a period known as ‘the Troubles’ when nationalist/republican and loyalist/unionist groups clashed.

In 1969 British troops were sent to maintain order and to protect the Catholic minority. However, the army soon came to be seen as a tool of the Protestant majority by the minority Catholic community.

Bloody Sunday – 1972 and “The Troubles”

This perception was reinforced by events such as Bloody Sunday in 1972, when British forces opened fire on a Catholic civil rights march in Derry, killing 13 people. An escalation of paramilitary violence followed, with many atrocities committed by both sides.

Between 1969 and 1998, it is estimated that well over 3,000 people were killed in the conflict between the opposing paramilitary groups.

Peace with Belfast Agreement – 1998
The period of ‘the Troubles’ is generally agreed to have finished with the Belfast (or Good Friday) Agreement of April 10th 1998.

Since 1998, considerable stability and peace has come to Northern Ireland. In 2007, the formerly bitterly opposed Democratic Unionist Party (DUP) and Sinn Féin began to co-operate in government together in Northern Ireland.

20th Century to present day
In the 1980s the Irish economy was in recession and large numbers of people emigrated for employment reasons. Many young people emigrated to the United Kingdom, the United States of America and Australia.
Economic reforms in the 1980s, along with membership of the European Community (now the European Union), helped create one of the world’s highest economic growth rates. Ireland in the 1990s, so long considered a country of emigration, became a country of immigration. This period in Irish history was called the Celtic Tiger.

It occurs to me…

My Model of How Civilization Works Needs Revision

It occurs to me, after reading so much Irish History, that perhaps my model of how “civilization” works is somewhat wrong.

My model addresses the “how” of civilization. How does order come out of chaos? How do we rise out of the muck, and become more human and less savage beast? How do we develop rules that guide our behavior, particularly toward each other?

My model is about government, and governing – and the rule of law which protects the average citizen from arbitrariness, and instead substitutes “justice”.

Perhaps the church played, and plays, a greater role than I had originally thought.

Perhaps the church was co-equal with government – or even superior to it – in bringing order out of chaos.

Perhaps the church was the core source of hope: that tomorrow would be better than today, that protections and justice come to those with faith, and that the world is much bigger than me, and even includes the heavens and a place called hell.

So… aren’t there really two rules of law – the law of the church and the law of the government?

And, if we read history with openness, perhaps a truthful statement is:

For any given person, at any given time in history, the experience of “civilization” is either more true or less true.

If that person’s experience is more true:

The person is probably also experiencing a “culture” – sometimes without even being aware of this notion. That culture is gifted with cultural norms that form the basis of civilization – rules or laws that guide all members’ behavior.

The person likely has two institutions to thank for this “culture”: their government and their church.

Which is primary, and by how much, is an important marker for a given time, in a given geography.

History gets very interesting as we ponder which communities were more aligned with the church as their primary institutional reference, and which communities were more aligned with their government.

Is Ireland a Microcosm of the World?

The intrigue, the conquest, the rebellion, the retribution: is Ireland, this small island of six million people, a microcosm of the world? Does it provide universal insights?

Possibly. Here are two:

Incentives to control land and expand reach are powerful.

The story of Ireland is a story of powerful men exploiting opportunities. Irish Kings, Norman Lords, and English Kings alike encouraged loyalists to form alliances that bound them to fight for land. Once victorious, the loyalist would bring that victory back to the King, and – more often than not – the King would grant the conquered land back to that loyalist (Lord). This, of course, was done only with a pledge of continuing loyalty, and a pledge to give back to the king payments in the form of taxes, armies, or whatever the king required.

Conquest breeds resentment, and even hatred.

The story of Ireland is a story of conquest, followed by new rules and retribution for the conquered. We know change is hard, and culture change is hardest of all. In all cases, imposed changes fostered resentment, and retribution, often harsh, bred hatred. When that hatred surfaced, it became rebellion.

Crash and War Anger

All of us love validation – especially when it comes from an admired source.

That’s the way I feel after reading the NYT review by Fareed Zakaria – possibly my most admired journalist.

The review is of a book called Crashed, by an eminent scholar, about the consequences of the crash of 2008. The review is below.

It validates my deep belief that the seeds of Trump’s victory go back to the “crash” of 2008. It was a moment of major negative “reset” for far too many Americans. Their savings, or their employability, or their home values, or their prospects for credit changed so negatively that it created an emergent body politic. The new body politic was characterized by a primary sentiment: seething anger. More importantly, it was characterized by a call to action: “throw the bums out!”.

The deep irony here is that democracy handed the angry a “throw the bums out” choice that many didn’t want – Barack Obama.

But their anger at inside-the-beltway Republicans, and George W. Bush, was so strong that – inside the ballot box – they pulled the lever for Obama.

When Donald Trump had the courage to viciously criticize the Republican establishment, and especially Bush, he was speaking directly to this new body politic. If their sentiment was resentment, they found their gladiator in Trump.

My only beef with the book and the review is that they do not go back far enough.

I believe the seeds of Trump’s victory go back to 9/11. It was that fateful day that itself created a new body politic, whose primary sentiment was “We are under attack and we must fight back.”

George W. Bush was responding to that scary, new sentiment when he announced not just one, but two new wars. History will record that the Iraq War – which cost trillions – was a major mistake. History will be somewhat more kind about the war in Afghanistan, which led the nation into a massively expensive 15+ year engagement of limited success and many, many unintended consequences.

So my point is that 9/11 rained holy hell on the nation – because the new body politic of “we are under attack and we must fight back” pushed a very weak leader, George W. Bush, to start two wars that almost immediately looked incompetent and wrong.

The 2008 crash was the final straw. Two stupid wars and a major economic reset were enough to push most Americans over the edge to a seething anger and a “throw the bums out” call to action.

It took the decade after to weave a tapestry of cause and effect, supported by right-wing media. Never mind that most of the tapestry was a fabrication. Never mind that he was a serial liar. It was soothing to have a gladiator (Trump) who spoke the truth about the subjects that really mattered: “those folks in Washington don’t know what they are doing and they need to go”; “you are being screwed by the economic resets”; and “the war in Iraq was a major mistake”.

There is very little question in my mind that Donald Trump will go down in history as our worst president. He will be remembered for his failures, his indecency and his lack of integrity. He will be remembered for his failures abroad, where he embarrasses us and plays the fool, and his failures at home, where he depletes the treasury and breaks the back of the Affordable Care Act. Everywhere he goes, he does what is bad, and undoes decades of progress in defining what is good, e.g. environmental regulation. His indecency and his lack of integrity will leave lasting scars on the office, but hopefully schools and parents will now have an example of what not to be, and how not to act.

How did the nation get to this horrible outcome? We will only have perspective on this decades from now, but the “second draft of history”, to me, traces it all back to 9/11 and its two awful wars. The cruel irony was that after eight Bush years of misguided foreign adventurism, the American economy collapsed. It was the straw that broke the camel’s back, causing all of us to say “we are mad as hell and we need to throw the bums out!”

So the math, looking back thirty years from now, might well be:

Afghan war + Iraq war + economic crash = Obama
(Obama was the backlash. We threw the bums out for him, and thank God he was as level headed and as competent and decent as he was)

Economic reset for most Americans + unresolved racist and nationalistic impulses + Comey + Russia = Trump
(Trump was the backlash to Obama, supported by all events above)
================
CRASHED

CREDIT: https://www.nytimes.com/2018/08/10/books/review/adam-tooze-crashed.html?rref=collection%2Fsectioncollection%2Fbook-review&action=click&contentCollection=review&region=rank&module=package&version=highlights&contentPlacement=1&pgtype=sectionfront
NONFICTION
Looking Back at the Economic Crash of 2008

By Fareed Zakaria

Aug. 10, 2018

How a Decade of Financial Crises Changed the World

By Adam Tooze
706 pp. Viking. $35.

Steve Bannon can date the start of the Trump “revolution.” When I interviewed him for CNN in May, in Rome, he explained that the origins of Trump’s victory could be found 10 years ago, in the financial crisis of 2008. “The implosion of those world capital markets has never really been sorted out,” he told me. “The fuse that was lit then that eventually brought the Trump revolution is the same thing that’s happened here in Italy.” (Italy had just held elections in which populist forces had won 50 percent of the vote.) Adam Tooze would likely agree. An economic historian at Columbia University, he has written a detailed account of the financial shocks and their aftereffects, which, his subtitle asserts, “changed the world.”

If journalism is the first rough draft of history, Tooze’s book is the second draft. A distinguished scholar with a deep grasp of financial markets, Tooze knows that it is a challenge to gain perspective on events when they have not yet played out. He points out that a 10-year-old history of the crash of 1929 would have been written in 1939, when most of its consequences were ongoing and unresolved. But still he has persisted and produced an intelligent explanation of the mechanisms that produced the crisis and the response to it. We continue to live with the consequences of both today.

As is often the case with financial crashes, markets and experts alike turned out to have been focused on the wrong things, blind to the true problem that was metastasizing. By 2007, many were warning about a dangerous fragility in the system. But they worried about America’s gargantuan government deficits and debt — which had exploded as a result of the Bush administration’s tax cuts and increased spending after 9/11. It was an understandable focus. The previous decade had been littered with collapses when a country borrowed too much and its creditors finally lost faith in it — from Mexico in 1994 to Thailand, Malaysia and South Korea in 1997 to Russia in 1998. In particular, many fretted about the identity of America’s chief foreign creditor — the government of China. Yet it was not a Chinese sell-off of American debt that triggered the crash, but rather, as Tooze writes, a problem “fully native to Western capitalism — a meltdown on Wall Street driven by toxic securitized subprime mortgages.”

Tooze calls it a problem in “Western capitalism” intentionally. It was not just an American problem. When it began, many saw it as such and dumped the blame on Washington. In September 2008, as Wall Street burned, the German finance minister Peer Steinbruck explained that the collapse was centered in the United States because of America’s “simplistic” and “dangerous” laissez-faire approach. Italy’s finance minister assured the world that its banking system was stable because “it did not speak English.”

In fact this was nonsense. One of the great strengths of Tooze’s book is to demonstrate the deeply intertwined nature of the European and American financial systems. In 2006, European banks generated a third of America’s riskiest privately issued mortgage-backed securities. By 2007, two-thirds of commercial paper issued was sponsored by a European financial entity. The enormous expansion of the global financial system had largely been a trans-Atlantic project, with European banks jumping in as eagerly and greedily to find new sources of profit as American banks. European regulators were as blind to the mounting problems as their American counterparts, which led to problems on a similar scale. “Between 2001 and 2006,” Tooze writes, “Greece, Finland, Sweden, Belgium, Denmark, the U.K., France, Ireland and Spain all experienced real estate booms more severe than those that energized the United States.”

But while the crisis may have been caused in both America and Europe, it was solved largely by Washington. Partly, this reflected the post-Cold War financial system, in which the dollar had become the hyperdominant global currency and, as a result, the Federal Reserve had truly become the world’s central bank. But Tooze also convincingly shows that the European Central Bank mismanaged things from the start. The Fed acted aggressively and also in highly ingenious ways, becoming a guarantor of last resort to the battered balance sheets of American but also European banks. About half the liquidity support the Fed provided during the crisis went to European banks, Tooze observes.

Before the rescue and even in its early stages, the global economy was falling into a bottomless abyss. In the first months after the panic on Wall Street, world trade and industrial production fell at least as fast as they did during the first months of the Great Depression. Global capital flows declined by a staggering 90 percent. The Federal Reserve, with some assistance from other central banks, arrested this decline. The Obama fiscal stimulus also helped to break the fall. Tooze points out that almost all serious analyses of the stimulus conclude that it played a significant positive role. In fact, most experts believe it ended much too soon. He also points out that large parts of the so-called Obama stimulus were the result of automatic government spending, like unemployment insurance, that would have happened no matter who was president. And finally, he notes that China, with its own gigantic stimulus, created an oasis of growth in an otherwise stagnant global economy.

The rescue worked better than almost anyone imagined. It is worth recalling that none of the dangers confidently prophesied by legions of critics took place. There was no run on the dollar or American treasuries, no hyperinflation, no double-dip recession, no China crash. American banks stabilized and in fact prospered, households began saving again, growth returned slowly but surely. The governing elite did not anticipate the crisis — as few elites have over hundreds of years of capitalism. But once it happened, many of them — particularly in America — acted quickly and intelligently, and as a result another Great Depression was averted. The system worked, as Daniel Drezner notes in his own book of that title.

But therein lies the unique feature of the crash of 2008. Unlike that of 1929, it was not followed by a Great Depression. It was not so much the crisis as the rescue and its economic, political and social consequences that mattered most. On the left, the entire episode discredited the market-friendly policies of Tony Blair, Bill Clinton and Gerhard Schroeder, disheartening the center-left and emboldening those who want more government intervention in the economy in all kinds of ways. On the right, it became a rallying cry against bailouts and the Fed, buoying an imaginary free-market alternative to government intervention. Unlike in the 1930s, when the libertarian strategy was tried and only deepened the Depression, in the last 10 years it has been possible for the right to argue against the bailouts, secure in the knowledge that their proposed policies will never actually be implemented.

Bannon is right. The crash brought together many forces that were around anyway — stagnant wages, widening inequality, anger about immigration and, above all, a deep distrust of elites and government — and supercharged them. The result has been a wave of nationalism, protectionism and populism in the West today. A confirmation of this can be found in the one major Western country that did not have a financial crisis and has little populism in its wake — Canada.

The facts remain: No government handled the crisis better than that of the United States, which acted in a surprisingly bipartisan fashion in late 2008 and almost seamlessly coordinated policy between the outgoing Bush and incoming Obama administrations. And yet, the backlash to the bailouts has produced the most consequential result in the United States.
Tooze notes in his concluding chapter that experts are considering the new vulnerabilities of a global economy with many new participants, especially the behemoth in Beijing. But instead of a challenge from an emerging China that began its rise outside the economic and political system, we are confronting a quite different problem — an erratic, unpredictable United States led by a president who seems inclined to redo or even scrap the basic architecture of the system that America has painstakingly built since 1945. How will the world handle this unexpected development? What will be its outcome? This is the current crisis that we will live through and that historians will soon analyze.

Fareed Zakaria is a CNN anchor, a Washington Post columnist and the author of “The Post American World.”

Ireland’s History

A BRIEF HISTORY OF IRELAND

CREDIT: http://www.livinginireland.ie/en/culture_society/a_brief_history_of_ireland/
CREDIT: http://www.wesleyjohnston.com/users/ireland/past/history/index.htm

Foreward
Ireland is beautiful – so beautiful that it is overwhelming.

It is blessed with natural beauty: beautiful land; abundant rivers, streams, harbors; hilltops and long views everywhere.

It is also blessed with what history has added: lovely villages, great ports, winding roads, castles and saved remnants of its proud past.

As the visitor moves from county to county, a few will ask: “What is Ireland’s history?”

The answer, of course, is complex – and every Irish man, woman and child proudly feels all the glory and all the pain. It is a rich heritage. And it is by no means monolithic.

Ireland’s heritage is conquest, first by Celts, then by Irish Kings, then by Vikings, then Anglo-Normans, and finally by English Kings. From conquest comes rebellion – the Irish are rebels to the core. With rebellion comes, way too frequently, crushing defeat and retribution.

In spite of this, using today’s lens, Ireland looks resilient and proud – the Irish always come back! And they look victorious – ultimately triumphing. It is an exciting history of leaders, their shifting lands, their shifting allegiances, treacheries and betrayals, and evolving forms of government and administration of justice.

Overview
The history below starts from the beginning. This overview starts from the end, today, and goes backward, from present-day Ireland to first evidence of human activity.

Today, Ireland is split into two – the Republic of Ireland (4.8 million population) and Northern Ireland (1.8 million). The split is the legacy of the Irish War of Independence in 1919-1922. It’s end culminated in this compromise, which neither side liked and which has been contentious ever since.

The Republic of Ireland in independent, and joined the EU in 1973. Northern Ireland is part of Great Britain. It was created in 1920, and the War of Independence left it unchanged.

From 1798 to 1922, Ireland lost population and failed to recover from two brutal events: the crushing defeat of the the Irish rebellion in 1798, and the devastating Potato Famine of 1846-1848. The English rulers were not kind during this period, and it exacerbated the simmering hatred that the Irish felt for the English.

Why such hatred? Because, from 1541 to 1798, the English attempted to impose their laws and customs, and the Irish resisted, and sometimes the resistance became open rebellion. These rebellions were often brutally crushed, and retribution was swift. The Irish hated them for it. The English brought to Ireland hated movements of all kinds. There were movements to take away Irish religion; to take away Irish land; to take away Irish rights; and to invoke cruel forms of genocide (for example, look at the history of the decimating Irish potato famine of 1846-8). Henry VIII kicked off this horrible period when he declared himself King of Ireland in 1541, with the Pope’s blessing.

From 1169 to 1541, England was in the hands of Anglo-Norman lords and Irish Kings. The lords were the result of the successful conquest of much of Ireland by “Strongbow”. The conquest began with his landing in 1169. Strongbow was himself an English Earl, but he brought with him a coalition of English and Normans.

From the eighth century until the end of the twelfth century – 400 years – Ireland was ruled by Irish Kings and Vikings. It was also a period of early christianity, with monasteries being built throughout the country. Viking influence faded as Irish Kings began to dominate.

Before the fifth century, Ireland was an island of farmers. Amazingly, the Island missed a period of Roman Empire domination. Historians have reported discussion by Roman leaders to invade Ireland, but those ideas were never implemented. As a result, Ireland is one of the few countries in Europe without legacies from the Roman Empire.

Microcosm?
The intrigue, the conquest, the rebellion, the retribution: is Ireland, this small island of six million people, a microcosm of the world? Does it provide universal insights?

Possibly. Here are two:

Incentives to control land and expand reach are powerful.

The story of Ireland is a story of powerful men exploiting opportunities. Any Irish King, or Norman Lord, or English Kings encouraged loyalists to to form alliances which bound them to fight for land. Once victorious, the loyalist would bring that victory back to the King, add – more often than not – the King would grant that land back to the loyalist (Lord) who had conquered the land. This, of course, was only done with a pledge of continuing loyalty from the loyalist, and a pledge to give back to the king payments in the form of taxes, armies, or whatever the king required.

Conquest breeds resentment, and even hatred.

The story of Ireland is a story of conquest, followed by new rules and retribution for the conquered. We know change is hard, and culture change is the hardest of all. In all cases, imposed changes fostered resentment, and retribution, often harsh, bred hatred. When it.surfaced, it became rebellion.

A BRIEF HISTORY OF IRELAND

Early Irish History
Ireland’s early settlers trace to about 10,000 years ago – a relatively late stage in European terms. Farming began around 4000 BC. Celts came to Ireland from mainland Europe around 300 BC. Ireland’s language. Irish (or Gaeilge), stems from Celtic language.

Early Christian and Viking Ireland – 600 AD
Saint Patrick and other missionaries brought Christianity in the early to mid-5th century, replacing the indigenous pagan religion by the year 600 AD. The Rock of Cashel stands today as a testimony to this long history of missionaries for Christ, led by Saint Patrick.

At the end of the 8th century, Vikings from Scandinavia began to invade and then gradually settle into and mix with Irish society. The Vikings founded Dublin, Ireland’s capital city in 988. Following the defeat of the Vikings by Brian Boru, the High King of Ireland, at Clontarf in 1014, Viking influence faded.

The Anglo-Norman Era and “Strongbow” – 1166
The English and Normans arrived in the twelfth century (1166), driving out the Vikings. They brought walled towns, castles, churches, and monasteries. They also increased agriculture and commerce in Ireland.

They were actually invited. The then-King of Leinster visited England and met with King Henry there, as well as many noblemen. His mission: garner support to retake his lands. The king demurred,, so he sought the support of Anglo-French noblemen in re-establishing his prior dominance (he had been badly defeated in a battle by the “High King of Ireland”, who was from Connaught).

Several noblemen offered help. They were led by the Earl of Pembroke, also known as Strongbow. Strongbow’s army landed in 1169 and quickly re-took Leinster.

Importantly, they went further and also defeated Dublin, exiling its Viking king. In 1171, the existing King of Leinster (Mac Murchada) died, and and Strongbow became King of Leinster.
The English King Henry decided to sail to Ireland in 1171. Strongbow pledged his loyalty to Henry, and every major Irish king did as well (other than O’Connor of Connacht and O’Neill in the north).

In the next period of years, the Anglo-Normans consolidated their new lands by building castles, setting up market towns and engaging in large-scale colonisation. Prince John arrived in Waterford in 1185 and initiated the building of a number of castles in the South East region. An example is Dungarvan Castle (King John’s Castle).

Importantly, the Anglo-Normans did not move the native Irish from their land in any major way.

Ireland became a Kingdom in 1199 with Papal approval. All English laws were extended to Ireland in 1210. By 1261, most of Ireland was ruled by Anglo-Norman lords, under the watchful eye of the King of England. This was to be short-lived.

In fact, by 1316, the King has essentially lost control of Ireland, except for Dublin. Cleverly, the few remaining Irish Kings (The Irish Lords in Ulster, O’Neill and O’Donnell) allied with Robert Bruce of Scotland, whose military prowess had become so legendary that he actually was crowned King of Scotland in 1314. With Bruce’s help, they together essentially defeated the Anglo-Normans in 1315. Edward Bruce was named King of Ireland until he was assassinated by Normans in 1318.

From 1261 to 1541, several major attempts were made by English Kings to regain control. Several invasions by English kings during this period were deeply resented by all locals, including the lords. They were hailed as successes at the time, but were also short-lived. By 1450, English control of Ireland had been reduced to Dublin.

Meanwhile, across centuries, many Norman lords and their ancestors essentially “went native” – increasingly adopting Irish customs and culture through inter-marriage, etc.

Anglo-Norman Feudalism – Counties, Liberties and Charters
The Irish adventure of English Kings can be best understood by studying incentives – the motives by which men went to war, to win land, and to then subjugate themselves to the King.

Before the “feudal” system of government and tex collection is discussed, a simple method of keeping score is useful. The score in questions is: how can you easily measure how much control England had at any given moment in time? The answer lies in counting how many of three. jurisdictions existed at any point in time:

“counties” were areas in Ireland that were under the complete control of the King and his feudal system.
Liberties were areas in Ireland that were loyal to the King, but did not participate in tributes to the king via feudal arrangements.
Charters were areas in Ireland that were independent of the King, completely under Irish Kings and Lords. However, importantly, these areas had treaties with the King that specified how they could co-exist.

Knowing this, the King had every incentive to create counties, which would pay taxes and pledge their allegiance to the Crown. The King’s men would conquer lands that were Charters, and make them Liberties or Counties. Then, they would slowly evolve any remaining Liberties to be Counties. This describes the first three centuries of evolving Irish Government. In 1250, vast majorities of northern Ireland were chartered lands, while counties were mostly strong on the east coast.

So how did it come to pass that Ireland was mostly counties by the 16th century?

The incentives for the warrior noblemen were enormous. The King, on seeing that Irish land had been conquered by a loyal subject, would frequently grant that land (mostly the land just conquered) back to the nobleman – in return for the feudal obligations discussed below.

Anglo-Norman society was based on the “feudal” system of government, standard practice throughout Europe at the time. Under this system, the king owned all land. He in turn granted it to lords. In return, the lords agreed to pay an annual “tribute” to the King. This could take the form of money, goods, or even armies in times of war.

The lords, in turn, granted parcels of their lordships to peasants (ordinary people) in return for money, goods, or a soldier in time of war. Many lords set up market towns in their lordships to encourage trade and to convert goods into money. At the bottom of the hierarchy were landless peasants, who were granted a plot on another peasant’s holding in return for manual labour on the farm.

The Irish system, by contrast, had no overall ownership of land: each individual lord owned his land outright. Commoners worked the lord’s land in return for accommodation and food, or were granted land in return for an annual payment of crops.

Henry VIII, “Plantations” – 1534
King Henry VIII declared himself head of the Church in England in 1534. He also ensured that the Irish Parliament declared him King of Ireland in 1541 – a title claimed without papal sanction, since he had already broken with Rome.

There followed 250 years of brutality that culminated in a deep hatred by the Irish for the English.

From 1541 to 1798, Ireland was in the grip of the English Crown. Those 250 years are scandalous because the English brought to Ireland hated impositions of all kinds: movements to change the people’s religion (from Catholic to Anglican), to steal Irish land, to take away Irish rights, and in many ways to inflict on the Irish people a peculiar form of genocide (look, for example, at the history of the decimating Irish potato famine of 1846-8). Henry VIII kicked off this horrible period when he declared himself King of Ireland in 1541.

To enforce his will and change England and Ireland from Catholic to Protestant (he had named himself head of the Anglican church), the King adopted a “plantation” policy. Under this policy, Protestants received massive land grants, displacing Catholic land-holders. Thousands of English and Scottish Protestant settlers arrived over the decades that followed. Catholics lost their land.

From this period on, a recurring theme was Irish rebellion, followed by crushing defeat, followed by retribution. The victors, the English, understandably attempted to impose their will. In doing so, however, they created a hated history – of depriving Catholics of their land, then their rights, and then their very lives.

Bloody 17th Century and “Penal Laws”
The 17th century was a bloody one in Ireland. England imposed the “Penal Laws”, which took away rights from Catholics: the right, for example, to rent or own land above a certain value. They outlawed Catholic clergy, and they forbade Catholics higher education and entry into the professions. During the 18th century the Penal Laws eased, but by then resentment and hate dominated the country.

Rebellion Defeated – 1798
In 1782, Henry Grattan (a Protestant) successfully agitated for a more favourable trading relationship with England and for greater legislative independence for the Parliament of Ireland. Inspired by the French Revolution, an organisation called the United Irishmen was formed in 1791 with the ideal of bringing Irish people of all religions together to reform and reduce Britain’s power in Ireland. Its leader was a young Dublin Protestant called Theobald Wolfe Tone. The United Irishmen were the inspiration for the armed rebellion of 1798. Despite attempts at help from the French, the rebellion failed, and in 1801 the Act of Union was passed, uniting Ireland politically with Britain.

Catholic Emancipation and Daniel O’Connell – 1829
In 1829 one of Ireland’s greatest leaders, Daniel O’Connell, known as ‘the Great Liberator’, was central in getting the Act of Catholic Emancipation passed in the parliament in London. He succeeded in getting the total ban on voting by Catholics lifted; they could now also become Members of the Parliament in London.

After this success, O’Connell aimed to repeal the Act of Union and re-establish an Irish parliament. This, however, was a much bigger task, and O’Connell’s approach of non-violence was not supported by all. Such political issues were soon overshadowed by the worst disaster and tragedy in Irish history – the Great Famine.

The Great Famine – 1845-1847
When a potato blight destroyed the crops of 1845, 1846, and 1847, disaster followed. Potatoes were the staple food of a growing population, and within a decade Ireland’s population plummeted from over 8 million to around 6.5 million: roughly a million people died, and well over a million left, seeking refuge in America – a decline that continued for decades afterward. The response of the British government also contributed to the disaster: while millions were starving, Ireland was forced to export abundant harvests of wheat and dairy products.

The famine brought death. But, with lasting consequences, it also brought resentment and more hate.

First Move to “Home Rule” Defeated – 1877 – Irish Home Rule Party and Charles Stewart Parnell
“Home Rule” became the cry of all who wanted self-government in Ireland. Until 1877, there was no effective challenge to Britain’s rule over Ireland. Then, at the age of 31, Charles Stewart Parnell (1846-91) became leader of the Irish Home Rule Party, which became the Irish Parliamentary Party in 1882.

Parnell failed to achieve Home Rule, but for his efforts he was widely recognised as ‘the uncrowned king of Ireland’, and he gave the idea of Home Rule legitimacy.

Irish Unionists – led by Sir Edward Carson in Northern Ireland
In Ulster, in the north of Ireland, the majority of people were Protestants. They favoured the union with Britain, fearing that they would suffer retribution and a loss of rights as a minority in a Catholic-controlled country. The Unionist Party was led by Sir Edward Carson, who threatened an armed struggle for a separate Northern Ireland if independence was granted to Ireland.

Second Move to Home Rule Successful – 1912 – but not enacted because of WWI
A Home Rule Bill was passed in 1912 but, crucially, it was not brought into law: the Home Rule Act was suspended at the outbreak of World War One in 1914. Many Irish nationalists believed that Home Rule would be granted after the war if they supported the British war effort. John Redmond, the leader of the Irish Parliamentary Party, encouraged people to join the British forces, and many did.

However, a minority of nationalists did not trust the British government, leading to one of the most pivotal events in Irish history: the Easter Rising.

Declaration of Independence I – Easter Rising – Irish rebels defeated – 1916
On April 24 (Easter Monday) 1916, two groups of armed rebels, the Irish Volunteers and the Irish Citizen Army, seized key locations in Dublin. The Irish Volunteers were led by Padraig Pearse, the Irish Citizen Army by James Connolly. Outside the GPO (General Post Office) in Dublin city centre, Pearse read the “Proclamation of the Republic” (the Irish Declaration of Independence), which declared an Irish Republic independent of Britain.

Battles ensued with casualties on both sides and among the civilian population. The Easter Rising finished on April 30th with the surrender of the rebels. The majority of the public was actually opposed to the Rising. However, public opinion turned when the British administration responded by executing many of the leaders and participants in the Rising.

All seven signatories to the Proclamation were executed, including Pearse and Connolly.

Declaration of Independence II – 1919
Two key figures involved in the Rising who avoided execution were Éamon de Valera and Michael Collins. In the December 1918 elections, the Sinn Féin party, led by Éamon de Valera, won a majority of the Irish seats in the House of Commons. On 21 January 1919, the Sinn Féin members of the House of Commons gathered in Dublin to form a parliament of the Irish Republic, called Dáil Éireann, unilaterally declaring power over the entire island.

War of Independence – 1919-1921
What followed is known as the ‘War of Independence’: the Irish Republican Army – the army of the newly declared Irish Republic – waged a guerrilla war against British forces from 1919 to 1921. One of the key leaders of this war was Michael Collins.

Michael Collins led an operation that ended with the label “Bloody Sunday” – one day of violence in Dublin on November 21, 1920, during the Irish War of Independence. In total, 32 people were killed: thirteen British soldiers and police, sixteen Irish civilians, and three Irish republican prisoners.

The day began when Michael Collins led the IRA to assassinate the ‘Cairo Gang’ – British undercover intelligence agents. IRA members went to a number of addresses and shot dead fourteen people.

Later that afternoon, members of the Auxiliary Division and RIC opened fire on the crowd at a Gaelic football match in Croke Park, killing eleven civilians and wounding at least sixty. That evening, three IRA suspects being held in Dublin Castle were beaten and killed by their captors, who claimed they were trying to escape.

Overall, while its events cost relatively few lives, Bloody Sunday was considered a victory for the IRA, as Collins’s operation severely damaged British intelligence. The net effect was to increase support for the IRA at home and abroad.

Third attempt at Home Rule adopted – 1920 Government of Ireland Act – Creates Southern and Northern Ireland under Home Rule
This Act of Parliament was intended to keep Ireland part of the United Kingdom through Home Rule institutions. The Act established two new subdivisions of Ireland: the six north-eastern counties were to form “Northern Ireland”, while the larger part of the country was to form “Southern Ireland”.

Home Rule never took effect in Southern Ireland due to the Irish War of Independence, which resulted instead in the Anglo-Irish Treaty and the establishment in 1922 of the Irish Free State. The institutions set up under this Act for Northern Ireland, however, continued to function until they were suspended by the British parliament in 1972 as a consequence of the Troubles. The new Parliament of Northern Ireland had a Protestant majority, and there was relative stability for decades.

Anglo-Irish Treaty Ends War of Independence – Divides Ireland – Dec 1921
In December 1921 a treaty was signed by the Irish and British authorities. One of its key provisions was a compromise: Ireland was to be divided into Northern Ireland (6 counties) and the Irish Free State (26 counties), which was established in 1922.

As with the Home Rule Act of 1920, most signatories of the treaty were hopeful that the division of the country into the Irish Free State and Northern Ireland would be temporary. However, the division has stayed in place for almost a century.

While a clear measure of independence was finally granted to Ireland, the contents of the treaty split Irish public and political opinion.

Instead of bringing peace, the signing of the Treaty plunged the country into civil war.

Civil War – 1922-1923
A civil war followed from 1922 to 1923 between pro- and anti-Treaty forces, with Collins (pro-Treaty) and de Valera (anti-Treaty) on opposing sides. The consequences of the Civil War can be seen to this day: the two largest political parties in Ireland, Fine Gael (pro-Treaty) and Fianna Fáil (anti-Treaty), have their roots in its opposing sides.

Republic of Ireland – 1937 – Joining the EU
The 1937 Constitution re-established the state as Éire (Ireland); it was formally declared the Republic of Ireland in 1949.
In 1973 Ireland joined the European Economic Community (now the European Union).

Northern Ireland Catholics Rebel – 1968
Stability in Northern Ireland ended in the late 1960s due to systematic discrimination against Catholics. 1968 saw the beginning of Catholic civil rights marches in Northern Ireland. These protests led to violent reactions from some Protestant loyalists and from the police force. What followed was a period known as ‘the Troubles’ when nationalist/republican and loyalist/unionist groups clashed.

In 1969 British troops were sent to maintain order and to protect the Catholic minority. However, the army soon came to be seen as a tool of the Protestant majority by the minority Catholic community.

Bloody Sunday – 1972 and “The Troubles”

That perception was reinforced by events such as Bloody Sunday in 1972, when British forces opened fire on a Catholic civil rights march in Derry, killing 13 people. An escalation of paramilitary violence followed, with many atrocities committed by both sides.

Between 1969 and 1998 it is estimated that well over 3,000 people were killed by paramilitary groups on opposing sides of the conflict.

Peace with Belfast Agreement – 1998
The period of ‘the Troubles’ is generally agreed to have finished with the Belfast (or Good Friday) Agreement of April 10th, 1998.

Since 1998, considerable stability and peace have come to Northern Ireland. In 2007, the formerly bitterly opposed Democratic Unionist Party (DUP) and Sinn Féin began to co-operate in government together in Northern Ireland.

20th Century to present day
In the 1980s the Irish economy was in recession, and large numbers of people emigrated in search of employment – many of them to the United Kingdom, the United States of America and Australia.
Economic reforms in the 1980s, along with membership of the European Community (now the European Union), then created one of the world’s highest economic growth rates. In the 1990s Ireland, so long considered a country of emigration, became a country of immigration. This period in Irish history is called the Celtic Tiger.

Pittsburgh

I’m struck by how many locals are here. Like Boston, if you grew up in Pittsburgh, it seems like you never leave. Lots of natives.

The importance of the city seems so obvious, now that I know what I know.

For starters, its location is strategic. It literally sits at the point of land where two great rivers, the Allegheny and the Monongahela, come together to form the Ohio River. These are massive bodies of water.

Before rail and interstate highways, how does the growing economy move its steel, industrial products and consumer products?

The rivers!

And it now makes perfect sense to me that, where the three rivers join, the leadership of this area decided this point should be commemorated as a major park.

Point Park is just that. It’s a monument to this area, where everyone can come and see why the area exists in the first place. And as a monument inside of a monument, a massive fountain stands at the point of Point Park.

As I look to the Ohio River, to my left are the train tracks. I see a massive freight train passing, with hundreds of cars. I see an incline up to Mount Washington, which overlooks the city. The inclines are a vestige of a past when it was difficult to access the hilltops. They are everywhere.

Also to my left is the Fort Pitt Tunnel, the exit from the city to the west across the M River.

Heinz Field, the stadium, overlooks Point Park to the right.

So this is where it all begins. Point Park.

The city seems to grow out of Point Park, in a gradual incline from there, with the Allegheny to the left and the M to the right.

The M River is made for walking and biking.

The Three Rivers Heritage Trail is the bike/walking path that goes from downtown all the way up the west side of the M River.

The Great Allegheny Passage goes up the east side of the M River.

Lots of hills. Lots of green (Schenley, Point, Highland and Frick Park are extra special).

They talk here about the “Pittsburgh Renaissance”.

They mean by that the transition of Pittsburgh from being at the center of the industrial economy, with all of its disgusting grit, to being at the center of the knowledge economy, with the University of Pittsburgh, Carnegie Mellon, and Duquesne University leading the way.

Fifth Avenue, Forbes Avenue, Centre Avenue, Penn Avenue, and Liberty Avenue all connect downtown to these outer neighborhoods.

The neighborhoods are Squirrel Hill, East Liberty, Lawrenceville, the Strip District, Bloomfield, Oakland, Shadyside, and Southside. Like Atlanta, they each have their own pride and style.

Oakland is the University of Pittsburgh, Carnegie Mellon, and Schenley Park. Fifth Avenue runs through it. The University of Pittsburgh is center stage: a huge urban campus, with all of the hallmarks of classical architecture – great libraries, chapels, etc. But everyone knows that Carnegie Mellon is the powerhouse – where you find the rocket fuel of the knowledge economy. It is to Pittsburgh every bit what MIT is to Boston.

Shadyside is a great little find. A real neighborhood, centered on Walnut Street (at Ivy). Only a mile’s walk to Oakland, East Liberty, and Squirrel Hill. Girasole is here – upscale Italian.

Bloomfield is “little Italy”. Not a very good little Italy, but maybe they will keep trying.

Squirrel Hill is awesome. Highly diverse. Walkable. On top of a hill. Close to everything. Great houses. A great find: Everyday Noodles, the home of soup dumplings in Pittsburgh. The place is always full. Big Jewish community here too. Frick Park is here.

The Strip District is converted warehouses. 21st Street and Penn is the center. My favorite: Wholey’s Seafood Market – massive and very cool retail. Like Stew Leonard’s, only better.

Southside is bars, lots of them. The main drag is East Carson. It’s a wide, flat street that looks more like Texas than Pittsburgh. One place in particular, Hofbräuhaus, is a raucous German beer hall – steins, sausages, oom-pah-pah live music – in a section of Southside called SouthSide Works. At night, ask Uber to take you to the Bartram House Bakery at 2612 East Carson. Easy walk from the Birmingham or Hot Metal Bridge.

Lawrenceville is restaurants, lots of them. It runs from 48th down to 40th, past the Strip District near Penn. It’s a hike, but a good walk takes you from 48th to 21st.

Everyone here believes that Pittsburgh, a finalist, is going to land Amazon’s second headquarters.

Their attitude toward Amazon is a little bit like their attitude toward all sports, but particularly the Steelers: can do.

Co-Working – Update

In my first post on this subject, dated 1/2015 and copied below, I said “The field is going to explode”.

Today’s Sunday New York Times published a major article on WeWork, which confirmed my suspicion.

JCR notes:
- they have 200,000 members
- they are in 20 countries
- revenue this year should top $2.3 billion
- founder Adam Neumann has apparently convinced investors, including SoftBank and Benchmark, that the company deserves a valuation of around $20 billion – more than 10x IWG, its publicly traded competitor
- they have started WeLive, a residential offering, and Rise, a gym
- they acquired Meetup, the social network that facilitates in-person gatherings, and the Flatiron School, a coding academy
- they bought the iconic Lord & Taylor building on Fifth Avenue in Manhattan, which is being transformed into the company’s new headquarters
- SoftBank, the Japanese technology group led by the enigmatic billionaire Masayoshi Son, recently invested $4.4 billion
- in the plans: WeGrow, the company’s for-profit elementary school, set to open in September
- IWG, better known as Regus, has been around for decades; it is a publicly traded co-working company with more members and more real estate than WeWork, yet it is valued at just $2 billion – and Mr. Neumann has convinced investors that WeWork is worth 10 times that figure

Here is the article:

CREDIT: Sunday New York Times article on WeWork

The WeWork Manifesto: First, Office Space. Next, the World.

The brash, ambitious founders of WeWork, a global network of shared office spaces, want nothing less than to transform the way we work, live and play.

By DAVID GELLES
FEB. 17, 2018

On a cold February morning at the Brooklyn Navy Yard, the skeleton of a modern 15-story building was rising from a muddy construction site along the East River. As long and as tall as a cruise ship, the sleek glass structure loomed above rusty, century-old dry docks, serving notice to the industrial neighborhood that the new economy was coming.
The project, known as Dock 72, is the brainchild of WeWork, the fast-growing New York start-up valued at a whopping $20 billion. In just eight years, WeWork has built a network of 212 shared working spaces around the globe. But WeWork’s chief executive and co-founder, Adam Neumann, isn’t content to just lease out communal offices. Mr. Neumann — a lanky, longhaired 38-year-old Israeli — wants nothing less than to radically transform the way we work, live and play.

When Dock 72 is completed this year, if the aggressive timeline holds, it will represent the fullest expression of Mr. Neumann’s expansive vision to date. There will be an enormous co-working space, a luxury spa and large offices, for other companies like IBM and Verizon, that are designed and run by WeWork. There will be a juice bar, a real bar, a gym with a boxing studio, an outdoor basketball court and panoramic vistas of Manhattan. There will be restaurants and maybe even dry cleaning services and a barbershop.

It will be the kind of place you never have to leave until you need to go to sleep — and if Mr. Neumann has his way, you’ll sleep at one of the apartments he is renting nearby.

It’s an all-encompassing sort of ambition, and Mr. Neumann is the brash and idealistic pitchman. Simply by encouraging strangers to share a beer at the office, he argues, WeWork can heal our fractured society.

“How do you change the world?” Mr. Neumann asked in a recent interview. “Bring people together. Where is the easiest big place to bring people together? In the work environment.”

It may sound simplistic, but around the globe, companies are buying whatever it is that Mr. Neumann and his co-founder, Miguel McKelvey, are selling. WeWork has rapidly expanded to 20 countries, assembled a formidable executive team and attracted some 200,000 members. Big companies like JPMorgan Chase and Siemens are signing on as tenants, and revenues are growing fast, expected to top $2.3 billion this year.

WeWork last year bought the iconic Lord & Taylor building on Fifth Avenue in Manhattan, which is being transformed into the company’s new headquarters. That deal was made possible in part by a recent $4.4 billion investment from SoftBank, the Japanese technology group led by the enigmatic billionaire Masayoshi Son.

Already the company has started WeLive, its residential offering, and Rise, its gym. It acquired Meetup, the social network that facilitates in-person gatherings, and the Flatiron School, a coding academy. Still to come: WeGrow, the company’s for-profit elementary school, set to open in September. WeWork has even invested in plans to create giant wave pools for inland surfing.

A company ostensibly about co-working now employs yoga instructors, architects, teachers, environmental scientists, software engineers, molecular biologists and social psychologists.

Is it all a bit much for a young company still trying to build out its core business? “I’ve made that argument,” said Bruce Dunlevie, a WeWork board member and partner at the venture capital firm Benchmark. But, he said, “great entrepreneurs like Adam don’t listen to guys like me.”

As WeWork expands in all directions, it faces persistent questions about its rich valuation and the durability of its business model. Critics argue that the company does little more than corporate real estate arbitrage — leasing a space, spiffing it up, then subleasing it out to other tenants. The company owns hardly any properties, giving it precious few hard assets. Its growth projections strike many as unattainable, and it has missed expectations before. A number of upstarts loom as potential competitors, seeking to replicate WeWork’s success. And many WeWork tenants are unproven start-ups that could quickly fold.

IWG, a publicly traded co-working company that has more members and more real estate than WeWork, is valued at just $2 billion. Yet Mr. Neumann has convinced investors that WeWork is worth 10 times that figure.

“Adam’s explanation for the valuation of WeWork speaks for itself,” said Chris Kelly, co-founder and president of Convene, a company that offers flexible event spaces and is backed by major real estate firms. “This is not an Excel spreadsheet calculation. He believes there’s an energy behind the brand, and he’s gotten people to invest at that valuation. He has not tried to explain it in traditional financial terms.”

Indeed, to assess WeWork by conventional metrics is to miss the point, according to Mr. Neumann. WeWork isn’t really a real estate company. It’s a state of consciousness, he argues, a generation of interconnected emotionally intelligent entrepreneurs. And Mr. Neumann, with his combination of inspiration and chutzpah, wants to transform not just the way we work and live, but the very world we live in.

It’s an audacious, perhaps delusional plan for a company that made its mark by building communal desks and providing refreshments. And so far, it seems to be working.

Mr. Son, WeWork’s largest investor, is betting that the company will grow exponentially in the years to come, making his multibillion-dollar investment a veritable bargain.
“Make it 10 times bigger than your original plan,” Mr. Son told Forbes late last year. “If you think in that manner, the valuation is cheap. It can be worth a few hundred billion dollars.”

Close Communities
The notion that white-collar workers might actually like their offices is a relatively new one. From the countinghouses of industrial England to the skyscrapers of 1980s Manhattan, offices were mostly uninspiring places designed to maximize space, often with row upon row of unglamorous desks.

“The only kind of model that anyone had for laying out a large workplace was a factory,” said Nikil Saval, author of “Cubed: A Secret History of the Workplace.” “So the office was made to resemble an assembly line.”

This dreary state of affairs began to change in earnest, at least for some, during the dot-com bubble. Tech companies built playful offices with beanbags and Ping-Pong tables, making work spaces less formal. Free food became commonplace.

Raised expectations for amenities and interior design gradually seeped into the mainstream, and today, more and more employees — especially millennials — expect enlightened, unconventional offices.

Enter WeWork. With people bouncing between employers, jobs concentrated in cities and technology making it easier to work remotely, the demand for co-working was suddenly real, and ready to be monetized. Mr. Neumann, who grew up on a kibbutz in Israel, had an epiphany: Bring the communal vibe to the office.

Soon he and a friend — Mr. McKelvey, an equally tall Oregonian who grew up on a collective and was working as an architect — founded an eco-friendly co-working space in Brooklyn. They sold it, but they quickly turned around and started WeWork in 2010.
“Me and Miguel have this common ground,” Mr. Neumann said. “We both grew up in very close communities.”

WeWork didn’t invent co-working spaces, of course. IWG, better known as Regus, has been around for decades. But Mr. Neumann and Mr. McKelvey quickly hit upon a recipe that drew throngs of start-ups: an industrial chic aesthetic, some big common areas with comfy couches, free beer and piped-in pop music.

Individuals pay as little as $45 a month for occasional access to a desk in a common area. Start-ups can pay a few thousand dollars for a private room on a month-to-month basis, and some big companies pay millions of dollars a year for spaces that hold thousands of employees over multiple locations.

It’s a formula that has caught on from New York to Tel Aviv to Shanghai. In New York alone, WeWork has 49 spaces, most of them nearly full. At the WeWork in Harlem, dance companies share space with hair care start-ups in a common area adorned with murals of jazz musicians. At a WeWork in TriBeCa, fashion designers and alcohol distributors work shoulder to shoulder in a spartan space decorated with neon lighting.

For WeWork to really succeed in changing the way we all work, it is going to have to win over big corporations seeking space for thousands of employees. The strategy is an odd reversal for WeWork, which made its name catering to freelancers and start-ups.
The Weather Channel recently moved its ad sales team into an enormous WeWork in Midtown Manhattan. Barbara Bekkedahl, who runs the group, said the transition was easy and the space comfortable and stylish.
But Ms. Bekkedahl had a complaint, too, one that highlights one of the downsides of communal work space. She suggested that the hygienic and sartorial habits of some of her new office mates were lacking.

“As a TV sales team, we groom and dress for outside sales,” she said. “Some of the techie and start-up types housed at WeWork aren’t facing customers all day, so don’t always have the same standards.”

Gripes about grooming are unlikely to slow down WeWork’s business with corporate clients, especially if Mr. Neumann makes good on his promise to save them money. Because WeWork is building out so much space and buying so much furniture, Mr. Neumann says, he can renovate and operate an office for a fraction of the cost that companies would normally spend.

“We have economies of scale,” he said. “I’ll cut your operational costs between 20 to 50 percent.”

It might seem like another instance of Mr. Neumann’s talking a big game but for the fact that more and more companies — GE, HSBC, Salesforce and Microsoft among them — are signing on.
For years now, big companies have outsourced payroll processing, janitorial services and security. It’s not a stretch to imagine more of them outsourcing the design and maintenance of their offices to a company like WeWork.
“We only have 200,000 members,” Mr. McKelvey, 43, said. “That’s ridiculous. We need to have two million and then 20 million.”

More ‘We’ Than ‘Me’
Bankers and lawyers poured out of skyscrapers and made for the suburbs on a recent Monday night in Manhattan’s financial district. But at 110 Wall Street, a building controlled entirely by WeWork, the party was just getting started.

Last year, this 1960s-era office tower was converted into a mixed-use development of Mr. Neumann’s design. There is a co-working space. On the ground floor are trendy restaurants including Westville, Fuku, Momofuku Milk Bar and a bar called the Mail Room.

And then there is a WeLive: a complex of about 200 fully furnished apartments rented out on a short-term basis. Tenants get the signature WeWork aesthetic of unpolished wood and wrought iron, as well as various perks. There are hot tubs on the terrace. There are arcade games and a pool table in the laundry room. There are a chef’s kitchen and a communal dining room. At a bar on a residential floor, a happy hour was brewing and free Tempranillo was flowing.

In the communal dining area, three brothers — Jordan, Jake and Jimmy DeCicco — were cooking for a half-dozen social media influencers, hoping to stir up enthusiasm for their protein-infused iced coffee company. Over rib-eye steaks and brussels sprouts, they talked about promoting the brand and breaking into new markets, passing out beers to anyone who walked by.

The brothers are all in: They live in WeLive, work in the adjacent WeWork space and exercise at WeWork’s nearby gym, Rise.

“It’s awesome,” Jake DeCicco said. “You just roll out of bed, go down the elevator and get to work.”

Had Mr. Neumann been there to share a beer, those words would have been music to his ears. He believes that creating a work and living environment where people mingle is in fact a world-changing innovation. Each WeWork has a “community manager” who keeps tabs on members, makes introductions and organizes social activities.

If more strangers are colliding by the grapefruit water, the thinking goes, they are more likely to meet up and invest in one another’s socially responsible start-ups, and then the world will be a better place.

“Once you choose to enter a WeWork, you choose to be part of something more ‘we’ than ‘me,’” Mr. Neumann said. “People start coming together. They’ll see each other in the elevator, they talk in the stairways. There’s a thousand other things they do.”
Elevators. Stairways. Hardly world-changing innovations. But WeWork takes extra steps to encourage fraternization. Like beer kegs that never run dry.

More than most companies, WeWork promotes the consumption of alcohol as an inherent virtue. Posters on the wall encourage people to have a drink. There are wine tastings at WeLive. Company parties feature top-shelf liquor. Mr. Neumann has a well-known penchant for tequila, and a well-stocked bar is prominent in his office.

On a recent Tuesday at 4:07 p.m., the community manager of a WeWork in Midtown Manhattan sent an email reading: “It’s time to get your creative juices flowing! Join us on the 5th floor to drink some wine & paint a beautiful picture.” Just after noon on Valentine’s Day, there was an invitation to share wine and cake in the common area.

Though alcohol is a social lubricant for some, it can be off-putting to many others. Many women have shared stories of feeling uncomfortable with what they described as a frat house culture at some WeWorks, prompting some to leave.

As WeWork has grown, minor scandals have rattled the company. In 2015, the company grew ensnared in a complicated legal dispute with a group of former janitors who tried to unionize at a subcontractor that WeWork used. The next year, WeWork drew scrutiny for its use of arbitration to settle workplace disputes, and for its firing of an employee who refused to adhere to a related policy.

But so far nothing — not alcohol, labor disputes, questions about the business fundamentals or bad publicity — has managed to alter the company’s trajectory.
“We’re a disrupter of the way people view the spaces they work in on a day-to-day basis,” said Mr. Dunlevie of Benchmark. “And we’re in the early days of taking advantage of that phenomenon.”

Teaching Tykes
In September, WeWork will open its most ambitious project to date: a kindergarten. It may also be the effort that tests whether WeWork is flying too close to the sun.
The creation of Mr. Neumann’s wife, Rebekah, 39, the school is known as WeGrow. When it opens, it promises a well-designed space with a curriculum that emphasizes socializing and entrepreneurship for 3-year-olds on up.

WeGrow fits neatly into Mr. Neumann’s expansive vision for creating a generation of empathetic social impact entrepreneurs. But the risk-reward calculus is different when starting a school.

WeGrow won’t scale as rapidly as WeWork has, so the financial upside is limited. Yet should something go wrong, the fallout could be devastating: It’s one thing to be responsible for the internet going out or paper running low at the communal printer. It’s another thing to take responsibility for the health and development of someone’s child.

Though Ms. Neumann has no background in education (on the website, she describes herself as “an avid student of life” and says her “superpower” is “intuition”), she has applied for accreditation from the state, has hired a team of career educators and is accepting applications for the coming school year. Tuition for toddlers: $36,000 a year.

“We all understand how complicated and regulated school is compared to the simpler business that we are already in,” Mr. Neumann said. “But we decided we’re going to go into education. If you really want to change the world, change kids when they’re 2.”
As he proselytized, Mr. Neumann was sitting on an enormous leather couch in his Chelsea office, which is bigger than many New York City apartments. It included a conference table, a video conferencing setup, several desks, a bar, spreads of food, a Peloton exercise bike, a climbing machine, a boxing bag hanging from the ceiling, a gong, an antechamber where assistants work and a private bathroom.

“It’s going to work,” Mr. Neumann continued. “Is it going to be perfect? Definitely not. Are we going to make mistakes? A hundred percent. Are we going to be comfortable admitting those mistakes? Definitely. It’s what we do here.”

Though such unbridled zeal can be abrasive to some, it could also be viewed as the mark of a peripatetic savant. Walter Isaacson, the biographer of Steve Jobs, Albert Einstein, Benjamin Franklin and Leonardo da Vinci, counts Mr. Neumann as a friend, and said he shared some of the attributes that had allowed those other titans to succeed.

“He has an instinctive feel for how millennials are going to want to have community work experiences without joining large corporations,” Mr. Isaacson said. “And like Steve Jobs and other great entrepreneurs, he knows how to connect the humanities with business and technology.”

‘Make a Life’
It can be tempting to dismiss WeWork as just another overvalued start-up that is high on its own rhetoric and flush with easy money from naïve investors. With little more than faddish interior design, free beer and an invitation to socialize with strangers, Mr. Neumann claims to have conjured up a whole new paradigm for white-collar workers — and for education — and vows that it can change the world.

It’s the kind of utopian prattle that can come off as dangerously out of touch at a moment when a backlash against big tech is brewing. But if any of these potential pitfalls concern Mr. Neumann, he doesn’t show it.
On a Wednesday night in January, Mr. Neumann strode onstage before a packed house at the Theater at Madison Square Garden, basked in spotlights. Wearing a black leather biker jacket and a T-shirt that read “High on We,” Mr. Neumann was playing host at his own extravagant party, a multiday celebration of WeWork and its extended community.

On this, the first night of festivities, Mr. Neumann would oversee a “Shark Tank”-like competition for socially responsible small businesses — ranging from a start-up that made customized prosthetics to a food-delivery service staffed by refugees — each vying for a $1 million prize.

Earlier, Mr. Neumann had rattled off the company’s achievements and outlined some of its more outsize ambitions. As the evening’s performer, the Grammy-winning rapper Macklemore, waited backstage, Mr. Neumann went on an impromptu riff about how people should “make a life, not just a living,” the company’s aspirational motto.

Mr. Neumann also stated that it was important to support social entrepreneurs. “Of course that makes a lot of sense, but who’s going to pay for that?” he said. “And we said, ‘Well, Masa might!’”

The line generated a laugh among the hundreds of knowing employees in the room — Mr. Son, whose nickname is Masa, was conveniently absent — but it was a tell from Mr. Neumann, a sly admission that at this point he is playing with house money.

Then, when the time came to choose a winner, Mr. Neumann made a surprise announcement: Instead of choosing one recipient, WeWork would give away $1 million each to two of the companies — Re:3D, a 3-D printing company, and Global Vision 2020, a nonprofit that provides prescription glasses to people in the developing world. And it would give another couple of million to the other half-dozen finalists.

Confetti fell from the rafters. The winners cried on stage. Mr. Neumann took it all in, beaming.

Even Macklemore was taken aback by all the money flying around. “I was just watching it, chugging a Red Bull,” he said shortly into his set, “and I immediately thought, ‘Damn, I should have got into technology.’”
David Gelles is the Corner Office columnist and a business reporter.


=================== PRIOR POST from January, 2015 ============
Co-Working

The field is going to explode.

The model for co-working is ROAM. GREAT business model – huge uptake. Place was packed.

They currently are in Alpharetta and Dunwoody, and are opening a Buckhead facility in Tower Place this summer. They have a mini-cafeteria, office space, mail handling, membership services, printing, etc.

Here is the download:

http://meetatroam.com

“Roam is the innovator’s workplace; a meeting and gathering experience for the new workforce. We are partnering for success by creating environments where people focus, collaborate, learn and socialize.”

“We are a Collective, a Local Community of Innovators, Pioneers and Visionaries.”

From a member: “Patrick also thinks that energy is Roam’s differentiator. “When you walk into Roam Dunwoody, it’s like you walk into a room full of vibrations,” he says. He loves interacting with the other members here and feeding off of that energy. “Every Roam member is passionate about whatever they do. They really want their business to make an impact.” The members as a whole are a forward-thinking group, open to new ideas and supportive of innovation.”


Homeostasis

One of the smartest guys in the room, Antonio Damasio, gives his views on neuroscience and its relationship to pain, pleasure, and feelings. He points out that these all play a giant role in one of life’s most important concepts: homeostasis.

CREDIT: http://nautil.us/issue/56/perspective/antonio-damasio-tells-us-why-pain-is-necessary

Antonio Damasio Tells Us Why Pain Is Necessary
The neuroscientist explains why feelings evolved.

BY KEVIN BERGER
JANUARY 18, 2018

Following Oliver Sacks, Antonio Damasio may be the neuroscientist whose popular books have done the most to inform readers about the biological machinery in our heads, how it generates thoughts and emotions, creates a self to cling to, and a sense of transcendence to escape by. But since he published Descartes’ Error in 1994, Damasio has been concerned that a central thesis in his books, that brains don’t define us, has been muted by research that states how much they do. To Damasio’s dismay, the view of the human brain as a computer, the command center of the body, has become lodged in popular culture.

In his new book, The Strange Order of Things, Damasio, a professor of neuroscience and the director of the Brain and Creativity Institute at the University of Southern California, mounts his boldest argument yet for the egalitarian role of the brain. In “Why Your Biology Runs on Feelings,” another article in this chapter of Nautilus, drawn from his new book, Damasio tells us “mind and brain influence the body proper just as much as the body proper can influence the brain and the mind. They are merely two aspects of the very same being.”

BEYOND SCIENCE: Antonio Damasio, director of the Brain and Creativity Institute at USC, sings the glories of the arts in his new book, The Strange Order of Things: “The sciences alone cannot illuminate the entirety of human experience without the light that comes from art and humanities.”

The Strange Order of Things offers a sharp and uncommon focus on feelings, on how their biological evolution fueled our prosperity as a species, spurred science and medicine, religion and art. “When I look back on Descartes’ Error, it was completely timid compared to what I’m saying now,” Damasio says. He knows his new book may rile believers in the brain as emperor of all. “I was entirely open with my ideas,” he says. “If people don’t like it, they don’t like it. They can criticize it, of course, which is fair, but I want to tell them, because it’s so interesting, this is why you have feelings.”
In this interview with Nautilus, Damasio, in high spirits, explains why feelings deserve a starring role in human culture, what the real problem with consciousness studies is, and why Shakespeare is the finest cognitive scientist of them all.

One thing I like about The Strange Order of Things is it counters the idea that we are just our brains.

Oh, that idea is absolutely wrong.

Not long ago I was watching a PBS series on the brain, in which host and neurologist David Eagleman, referring to our brain, declares, “What we feel, what matters to us, our beliefs and our hopes, everything we are happens in here.”

That’s not the whole story. Of course, we couldn’t have minds with all of their enormous complexity without nervous systems. That goes without saying. But minds are not the result of nervous systems alone. The statement you quote reminds me of Francis Crick, someone whom I admired immensely and was a great friend. Francis was quite opposed to my views on this issue. We would have huge discussions because he was the one who said that everything you are, your thoughts, your feelings, your mental this and that, are nothing but your neurons. This is a big mistake, in my view, because we are mentally and behaviorally far more than our neurons. We cannot have feelings arising from neurons alone. The nervous systems are in constant interaction and cooperation with the rest of the organism. The reason why nervous systems exist in the first place is to assist the rest of the organism. That fact is constantly missed.

The concept of “homeostasis” is critical in your new book. What is homeostasis?

It’s the fundamental property of life that governs everything that living cells do, whether they’re living cells alone, or living cells as part of a tissue or an organ, or a complex system such as ourselves. Most of the time, when people hear the word homeostasis, they think of balance, they think of equilibrium. That is incorrect because if we ever were in “equilibrium,” we would be dead. Thermodynamically, equilibrium means zero thermal differences and death. Equilibrium is the last thing that nature aims for.

What we must have is efficient functioning of a variety of components of an organism. We procure energy so that the organism can be perpetuated, but then we do something very important and almost always missed, which is hoard energy. We need to maintain positive energy balances, something that goes beyond what we need right now because that’s what ensures the future. What’s so beautiful about homeostasis is that it’s not just about sustaining life at the moment, but about having a sort of guarantee that it will continue into the future. Without those positive energy balances, we court death.
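
An aside from me, not from Damasio: his point that homeostasis defends a positive reserve rather than seeking equilibrium can be sketched as a simple negative-feedback loop. Everything below – the setpoint, gain, and drain values – is an invented toy model, not biology:

def regulate(reserve, setpoint=10.0, gain=0.5, drain=1.0, steps=10):
    # Each step the organism loses `drain` units of energy (the cost
    # of living), then procures intake in proportion to the shortfall
    # below the setpoint -- classic negative feedback.
    history = []
    for _ in range(steps):
        reserve -= drain
        reserve += gain * max(setpoint - reserve, 0.0)
        history.append(round(reserve, 2))
    return history

print(regulate(reserve=5.0))
# The reserve climbs toward a steady positive level (about 9 here)
# and holds there: a hoarded surplus against future stress, not a
# decay toward thermodynamic equilibrium (which would be zero).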

What’s a good example of homeostasis?

If you are at the edge of your energy reserves and you’re sick with the flu, you can easily tip over and die. That’s one of the reasons why there’s fat accumulation in our bodies. We need to maintain the possibility of meeting the extra needs that come from stress, in the broad sense of the term. I poetically describe this as a desire for permanence, but it’s not just poetic. I believe it’s reality.

You write homeostasis is maintained in complex creatures like us through a constant interplay of pleasure and pain. Are you giving a biological basis to Freud’s pleasure principle—life is governed by a drive for pleasure and avoidance of pain?

Yes, to a great extent. What’s so interesting is that for most of the existence of life on earth, all organisms have had this effective, automated machinery that operates for the purpose of maintenance and continuation of life. I like to call the organisms that only have that form of regulation, “living automata.” They can fight. They can cooperate. They can segregate. But there’s no evidence that they know that they’re doing so. There’s no evidence of anything we might call a mind. Obviously we have more than automatic regulation. We can control regulation in part, if we wish to. How did that come about?
Very late in the game of life there’s the appearance of nervous systems. Now you have the possibility of mapping the inside and outside world. When you map the inside world, guess what you get? You get feelings. Of necessity, the machinery of life is either in a state of reasonable efficiency or in a state of inefficiency, which is most often the case. Organisms with nervous systems can image these states. And when you start having imagery, you start having minds. Now you begin to have the possibility of responding in a way that you could call “knowledgeable.” That happens when organisms make images. A bad internal state would have been imaged as the first pains, the first malaises, the first sufferings. Now the organism has the possibility of knowingly avoiding whatever caused the pain or prefer a place or a thing or another animal that causes the opposite of that, which is well-being and pleasure.

Why would feelings have evolved?

Feelings triumphed in evolution because they were so helpful to the organisms that first had them. It’s important to understand that nervous systems serve the organism and not the other way around. We do not have brains controlling the entire operation. Brains adjust controls. They are the servants of a living organism. Brains triumphed because they provided something useful: coordination. Once organisms got to the point of being so complex that they had an endocrine system, immune system, circulation, and central metabolism, they needed a device to coordinate all that activity. They needed to have something that would simultaneously act on point A and point Z, across the entire organism, so that the parts would not be working at cross purposes. That’s what nervous systems first achieve: making things run smoothly.

Now, in the process of doing that, over millions of years, we have developed nervous systems that do plenty of other things that do not necessarily result in coordination of the organism’s interior, but happen to be very good at coordinating the internal world in relation to the outside world. This is what the higher reaches of our nervous system, namely the cerebral cortex, do. It gives us the possibilities of perceiving, of memorizing, of reasoning over the knowledge that we memorize, of manipulating all of that and even translating it into language. That is all very beautiful, and it is also homeostatic, in the sense that all of it is convenient to maintain life. If it were not, it would just have been discarded by evolution.

How does your thesis square with the hard problem of consciousness, how the physical tissue in our heads produces immaterial sensations?

Some philosophers of mind will say, “Well, we face this gigantic problem. How does consciousness emerge out of these nerve cells?” Well, it doesn’t. You’re not dealing with the brain alone. You have to think in terms of the whole organism. And you have to think in evolutionary terms.

The critical problem of consciousness is subjectivity. You need to have a “subject.” You can call it an I or a self. Not only are you aware right now that you are listening to my words, which are in the panorama of your consciousness, but you are aware of being alive, you realize that you’re there, you’re ticking. We are so distracted by what is going on around us that we forget sometimes that we are, A-R-E in capitals. But actually you are watching what you are, and so you need to have a mechanism in the brain that allows you to fabricate that part of the mind that is the watcher.
You do that with a number of devices that have to do, for example, with mapping the movements of your eyes, the position of your head, and the musculature of your body. This allows you to literally construct images of yourself making images. And you also have a layer of consciousness that is made by your perception of the outside world; and another layer that is made of appreciating the feelings that are being generated inside of you. Once you have this stack of processes, you have a fighting chance of creating consciousness.

Why do you object to comparing the brain to a computer?

In the early days of neuroscience, one of our mentors was Warren McCulloch. He was a gigantic figure of neuroscience, one of the originators of what is today computational neuroscience. When you go back to the ’40s and ’50s, you find this amazing discovery that neurons can be either active or inactive, in a way that can be described mathematically as zeroes and ones. Combine that with Alan Turing and you get this idea that the brain is like a computer and that it produces minds using that same simple method.

That has been a very useful idea. And true enough, it explains a good part of the complex operations that our brains produce, such as language. Those operations require a lot of precision and are carried out by the cerebral cortex, in enormous detail, and probably in a basic computational mode. All the great successes of artificial intelligence have used this idea and have been concerned with high-level reasoning. That is why A.I. has been so successful with games such as chess or Go: they use large memories and powerful reasoning.
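
To make the “zeroes and ones” idea concrete – this gloss is mine, not Damasio’s – the classic McCulloch-Pitts model treats a neuron as a binary threshold unit. A minimal sketch in Python:

def mp_neuron(inputs, weights, threshold):
    # Fire (return 1) if the weighted sum of the binary inputs
    # reaches the threshold; stay silent (return 0) otherwise.
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With weights [1, 1] and threshold 2, the unit computes logical AND:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mp_neuron([a, b], [1, 1], 2))

Wire enough of these all-or-none units together and you arrive at the brain-as-computer picture that Damasio is pushing back against.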

Are you saying neural codes or algorithms don’t blend with living systems?

Well, they match very well with things that are high on the scale of mental operations and behaviors, such as those we require for our conversation. But they don’t match well with the basic systems that organize life – that regulate, for example, the degree of mental energy and excitation, or how you emote and feel. The reason is that the operations of the nervous system responsible for such regulation rely less on synaptic signaling, the kind that can be described in terms of zeroes and ones, and far more on non-synaptic messaging, which lends itself less to a rigid all-or-none operation.
Perhaps more importantly, computers are machines invented by us, made of durable materials. None of those materials has the vulnerability of the cells in our body, all of which are at risk of defective homeostasis, disease, and death. In fact, computers lack most of the characteristics that are key to a living system. A living system is maintained in operation, against all odds, thanks to a complicated mechanism that can fall apart as a result of minimal amounts of malfunction. We are extremely vulnerable creatures. People often forget that. Which is one of the reasons why our culture, and Western cultures in general, are a bit too calm and complacent about the threats to our lives. I think we are becoming less sensitive to the idea that life is what dictates what we should do or not do with ourselves and with others.

What is love for?

To protect, to cause flourishing, to give and receive pleasure, to procreate, to soothe. Endless great uses, as you can see.

How do emotions such as anger or sadness serve homeostasis?

At individual levels, both anger and sadness are protective. Anger lets your adversary know that you mean business and that there may be costs to attacking you. These days anger is an expression of sociopolitical conflicts. It is overused and has largely become ineffectual. Sadness is a prelude to mental hibernation. It lets you retreat and lick your wounds. It lets you plan a strategy of response to the cause of the wounds.

You say feelings spurred the creation of cultures. How so?

Before I started The Strange Order of Things, I was asking friends and colleagues how they thought cultures had begun. Invariably what people said was, “Oh, we’re so smart. We’re so intellectually powerful. We have all this reasoning ability. On top of it all, we have language—and there you are.” To which I say, “Fine, that’s true. How would you invent anything if you were stupid?” You would not. But the issue is to recognize the motive behind what you do. Why is it that you did it in the first place? Why did Moses come down from the mountain with Ten Commandments? Well, the Ten Commandments are representative of homeostasis because they tell you not to kill, not to steal, not to lie, not to do a lot of bad things. It sounds trivial but it’s not. We fail to think about motivation and so we do not factor it into the process of invention. We do not factor in the motives behind science or technology or governance or religion.

How does consciousness emerge out of nerve cells?

Well, it doesn’t. You’re not dealing with the brain alone.

And there’s one more thing: The importance of feeling is that it makes you critically aware of what you are doing in moral terms. It forces you to look back and realize that what people were doing historically, at the outset, at the moment of invention of a cultural instrument or a cultural practice, was an attempt to reduce the amount of suffering and to maximize the amount of wellbeing not only for the inventor, but for the community around them. One person alone can invent a painting or a musical composition, but it is not meant for that person alone. And you do not invent a moral system or a government system alone or for yourself alone. It requires a society, a community.

The assertion that intellect is governed by feelings can sound New Age-y. It seems to undermine the powers of reason. How should we understand reason if it’s always motivated by subjective feelings?

Subjective simply means that it has a personal point of view, that it pertains to the self. It is compatible with “objective” facts and with truth. It is not about relativism. The fact that feelings motivate the use of knowledge and reason does not make the knowledge and the reason any less truthful or valid. Feelings are simply a call to action.

If humans formed societies and cultures to avoid suffering and pain, why do we have violence and wars?

Your question is very important. Take the development of political systems. On the face of it, when you look at Marxist ideas, you say, “This is obviously homeostatic.” What Marx and others were trying to do in the 19th century is confront and modify a social arrangement that was not equitable, that had some people suffering too much and some profiting too much. So having a system that produced equality made a lot of sense. In a way that is something that biological systems have been trying to do, quite naturally, for a long time. And when the natural systems do not succeed at improved regulation, guess what? They are weeded out by evolution because they promote illness.
Biological evolution, through genetic selection, eliminates those mechanisms. At the cultural level something comparable occurs. Seen in retrospect, Marxism as applied in Russia resulted in one of the worst tragedies of humankind. But Russian communism was ultimately weeded out by cultural selection. It took around 70 years to do it, but cultural selection did operate in a homeostatic way. It led to the fall of the Berlin Wall and the Soviet empire. It was a homeostatic correction achieved by social means.
The same reasoning applies to religions. For example, we can claim that religions have been one of the great causes of violence throughout history. But you certainly can’t blame Christ for that violence. He preached compassion, and the pardoning of enemies, and love. It does not follow that good recommendations will always be implemented correctly or will always produce good results. These facts in no way deny the homeostatic intent of religions.

You write, “The increasing knowledge of biology from molecules to systems reinforces the humanist project.” How so?

This knowledge gives us a broader picture of who we are and where we are in the history of life on earth. We had modest beginnings, and we have incorporated an incredible amount of living wisdom that comes from as far down as bacteria. There are characteristics of our personal and cultural behavior that can be found in single-cell organisms or in social insects. They clearly do not have the kind of highly developed brains that we have. In some cases, they don’t have any brain at all. But by analyzing this strange order of developments we are confronted with the spectacle of life processes that are complex and rich in spite of their apparent modesty, so complex and rich that they can deliver the high level of behaviors that we normally, quite pretentiously, attribute only to our great human smarts. We should be far more humble. That’s one of my main messages. In general, connecting cultures to the life process makes apparent a link that we have ignored for far too long.


What would you be if you weren’t a scientist?

When I was an adolescent, I often thought that I might become a philosopher or perhaps a playwright or filmmaker. That’s because I so admired what philosophers and storytellers had found about the human mind. Today when people ask me, “Who’s your most admired cognitive scientist?” I say Shakespeare. He knew it all and knew it with enormous precision. He didn’t have the nice fMRI scanner and electrophysiology techniques we have in our Institute. But he knew human beings. Watch a good performance of Hamlet, King Lear, or Othello. All of our psychology is there, richly analyzed, ready for us to experience and appreciate.

Philip Roth Update

I found this chock full of wisdom:

CREDIT: NYT Interview with Philip Roth

In an exclusive interview, the (former) novelist shares his thoughts on Trump, #MeToo and retirement.

With the death of Richard Wilbur in October, Philip Roth became the longest-serving member in the literature department of the American Academy of Arts and Letters, that august Hall of Fame on Audubon Terrace in northern Manhattan, which is to the arts what Cooperstown is to baseball. He’s been a member so long he can recall when the academy included now all-but-forgotten figures like Malcolm Cowley and Glenway Wescott — white-haired luminaries from another era. Just recently Roth joined William Faulkner, Henry James and Jack London as one of very few Americans to be included in the French Pleiades editions (the model for our own Library of America), and the Italian publisher Mondadori is also bringing out his work in its Meridiani series of classic authors. All this late-life eminence — which also includes the Spanish Prince of Asturias Award in 2012 and being named a commander in the Légion d’Honneur of France in 2013 — seems both to gratify and to amuse him. “Just look at this,” he said to me last month, holding up the ornately bound Mondadori volume, as thick as a Bible and comprising titles like “Lamento di Portnoy” and “Zuckerman Scatenato.” “Who reads books like this?”
In 2012, as he approached 80, Roth famously announced that he had retired from writing. (He actually stopped two years earlier.) In the years since, he has spent a certain amount of time setting the record straight. He wrote a lengthy and impassioned letter to Wikipedia, for example, challenging the online encyclopedia’s preposterous contention that he was not a credible witness to his own life. (Eventually, Wikipedia backed down and redid the Roth entry in its entirety.) Roth is also in regular touch with Blake Bailey, whom he appointed as his official biographer and who has already amassed 1,900 pages of notes for a book expected to be half that length. And just recently, he supervised the publication of “Why Write?,” the 10th and last volume in the Library of America edition of his work. A sort of final sweeping up, a polishing of the legacy, it includes a selection of literary essays from the 1960s and ’70s; the full text of “Shop Talk,” his 2001 collection of conversations and interviews with other writers, many of them European; and a section of valedictory essays and addresses, several published here for the first time. Not accidentally, the book ends with the three-word sentence “Here I am” — between hard covers, that is.
But mostly now Roth leads the quiet life of an Upper West Side retiree. (His house in Connecticut, where he used to seclude himself for extended bouts of writing, he now uses only in the summer.) He sees friends, goes to concerts, checks his email, watches old movies on FilmStruck. Not long ago he had a visit from David Simon, the creator of “The Wire,” who is making a six-part mini-series of “The Plot Against America,” and afterward he said he was sure his novel was in good hands. Roth’s health is good, though he has had several surgeries for a recurring back problem, and he seems cheerful and contented. He’s thoughtful but still, when he wants to be, very funny.
I have interviewed Roth on several occasions over the years, and last month I asked if we could talk again. Like a lot of his readers, I wondered what the author of “American Pastoral,” “I Married a Communist” and “The Plot Against America” made of this strange period we are living in now. And I was curious about how he spent his time. Sudoku? Daytime TV? He agreed to be interviewed but only if it could be done via email. He needed to take some time, he said, and think about what he wanted to say.
C.M. In a few months you’ll turn 85. Do you feel like an elder? What has growing old been like?
P.R. Yes, in just a matter of months I’ll depart old age to enter deep old age — easing ever deeper daily into the redoubtable Valley of the Shadow. Right now it is astonishing to find myself still here at the end of each day. Getting into bed at night I smile and think, “I lived another day.” And then it’s astonishing again to awaken eight hours later and to see that it is morning of the next day and that I continue to be here. “I survived another night,” which thought causes me to smile once more. I go to sleep smiling and I wake up smiling. I’m very pleased that I’m still alive. Moreover, when this happens, as it has, week after week and month after month since I began drawing Social Security, it produces the illusion that this thing is just never going to end, though of course I know that it can stop on a dime. It’s something like playing a game, day in and day out, a high-stakes game that for now, even against the odds, I just keep winning. We will see how long my luck holds out.
C.M. Now that you’ve retired as a novelist, do you ever miss writing, or think about un-retiring?
P.R. No, I don’t. That’s because the conditions that prompted me to stop writing fiction seven years ago haven’t changed. As I say in “Why Write?,” by 2010 I had “a strong suspicion that I’d done my best work and anything more would be inferior. I was by this time no longer in possession of the mental vitality or the verbal energy or the physical fitness needed to mount and sustain a large creative attack of any duration on a complex structure as demanding as a novel…. Every talent has its terms — its nature, its scope, its force; also its term, a tenure, a life span…. Not everyone can be fruitful forever.”
C.M. Looking back, how do you recall your 50-plus years as a writer?
P.R. Exhilaration and groaning. Frustration and freedom. Inspiration and uncertainty. Abundance and emptiness. Blazing forth and muddling through. The day-by-day repertoire of oscillating dualities that any talent withstands — and tremendous solitude, too. And the silence: 50 years in a room silent as the bottom of a pool, eking out, when all went well, my minimum daily allowance of usable prose.
C.M. In “Why Write?” you reprint your famous essay “Writing American Fiction,” which argues that American reality is so crazy that it almost outstrips the writer’s imagination. It was 1960 when you said that. What about now? Did you ever foresee an America like the one we live in today?
P.R. No one I know of has foreseen an America like the one we live in today. No one (except perhaps the acidic H. L. Mencken, who famously described American democracy as “the worship of jackals by jackasses”) could have imagined that the 21st-century catastrophe to befall the U.S.A., the most debasing of disasters, would appear not, say, in the terrifying guise of an Orwellian Big Brother but in the ominously ridiculous commedia dell’arte figure of the boastful buffoon. How naïve I was in 1960 to think that I was an American living in preposterous times! How quaint! But then what could I know in 1960 of 1963 or 1968 or 1974 or 2001 or 2016?
C.M. Your 2004 novel, “The Plot Against America,” seems eerily prescient today. When that novel came out, some people saw it as a commentary on the Bush administration, but there were nowhere near as many parallels then as there seem to be now.
P.R. However prescient “The Plot Against America” might seem to you, there is surely one enormous difference between the political circumstances I invent there for the U.S. in 1940 and the political calamity that dismays us so today. It’s the difference in stature between a President Lindbergh and a President Trump. Charles Lindbergh, in life as in my novel, may have been a genuine racist and an anti-Semite and a white supremacist sympathetic to Fascism, but he was also — because of the extraordinary feat of his solo trans-Atlantic flight at the age of 25 — an authentic American hero 13 years before I have him winning the presidency. Lindbergh, historically, was the courageous young pilot who in 1927, for the first time, flew nonstop across the Atlantic, from Long Island to Paris. He did it in 33.5 hours in a single-seat, single-engine monoplane, thus making him a kind of 20th-century Leif Ericson, an aeronautical Magellan, one of the earliest beacons of the age of aviation. Trump, by comparison, is a massive fraud, the evil sum of his deficiencies, devoid of everything but the hollow ideology of a megalomaniac.
C.M. One of your recurrent themes has been male sexual desire — thwarted desire, as often as not — and its many manifestations. What do you make of the moment we seem to be in now, with so many women coming forth and accusing so many highly visible men of sexual harassment and abuse?
P.R. I am, as you indicate, no stranger as a novelist to the erotic furies. Men enveloped by sexual temptation is one of the aspects of men’s lives that I’ve written about in some of my books. Men responsive to the insistent call of sexual pleasure, beset by shameful desires and the undauntedness of obsessive lusts, beguiled even by the lure of the taboo — over the decades, I have imagined a small coterie of unsettled men possessed by just such inflammatory forces they must negotiate and contend with. I’ve tried to be uncompromising in depicting these men each as he is, each as he behaves, aroused, stimulated, hungry in the grip of carnal fervor and facing the array of psychological and ethical quandaries the exigencies of desire present. I haven’t shunned the hard facts in these fictions of why and how and when tumescent men do what they do, even when these have not been in harmony with the portrayal that a masculine public-relations campaign — if there were such a thing — might prefer. I’ve stepped not just inside the male head but into the reality of those urges whose obstinate pressure by its persistence can menace one’s rationality, urges sometimes so intense they may even be experienced as a form of lunacy. Consequently, none of the more extreme conduct I have been reading about in the newspapers lately has astonished me.
C.M. Before you were retired, you were famous for putting in long, long days. Now that you’ve stopped writing, what do you do with all that free time?
P.R. I read — strangely or not so strangely, very little fiction. I spent my whole working life reading fiction, teaching fiction, studying fiction and writing fiction. I thought of little else until about seven years ago. Since then I’ve spent a good part of each day reading history, mainly American history but also modern European history. Reading has taken the place of writing, and constitutes the major part, the stimulus, of my thinking life.
C.M. What have you been reading lately?
P.R. I seem to have veered off course lately and read a heterogeneous collection of books. I’ve read three books by Ta-Nehisi Coates, the most telling from a literary point of view, “The Beautiful Struggle,” his memoir of the boyhood challenge from his father. From reading Coates I learned about Nell Irvin Painter’s provocatively titled compendium “The History of White People.” Painter sent me back to American history, to Edmund Morgan’s “American Slavery, American Freedom,” a big scholarly history of what Morgan calls “the marriage of slavery and freedom” as it existed in early Virginia. Reading Morgan led me circuitously to reading the essays of Teju Cole, though not before my making a major swerve by reading Stephen Greenblatt’s “The Swerve,” about the circumstances of the 15th-century discovery of the manuscript of Lucretius’ subversive “On the Nature of Things.” This led to my tackling some of Lucretius’ long poem, written sometime in the first century B.C.E., in a prose translation by A. E. Stallings. From there I went on to read Greenblatt’s book about “how Shakespeare became Shakespeare,” “Will in the World.” How in the midst of all this I came to read and enjoy Bruce Springsteen’s autobiography, “Born to Run,” I can’t explain other than to say that part of the pleasure of now having so much time at my disposal to read whatever comes my way invites unpremeditated surprises.
Pre-publication copies of books arrive regularly in the mail, and that’s how I discovered Steven Zipperstein’s “Pogrom: Kishinev and the Tilt of History.” Zipperstein pinpoints the moment at the start of the 20th century when the Jewish predicament in Europe turned deadly in a way that foretold the end of everything. “Pogrom” led me to find a recent book of interpretive history, Yuri Slezkine’s “The Jewish Century,” which argues that “the Modern Age is the Jewish Age, and the 20th century, in particular, is the Jewish Century.” I read Isaiah Berlin’s “Personal Impressions,” his essay-portraits of the cast of influential 20th-century figures he’d known or observed. There is a cameo of Virginia Woolf in all her terrifying genius and there are especially gripping pages about the initial evening meeting in badly bombarded Leningrad in 1945 with the magnificent Russian poet Anna Akhmatova, when she was in her 50s, isolated, lonely, despised and persecuted by the Soviet regime. Berlin writes, “Leningrad after the war was for her nothing but a vast cemetery, the graveyard of her friends. … The account of the unrelieved tragedy of her life went far beyond anything which anyone had ever described to me in spoken words.” They spoke until 3 or 4 in the morning. The scene is as moving as anything in Tolstoy.
Just in the past week, I read books by two friends, Edna O’Brien’s wise little biography of James Joyce and an engagingly eccentric autobiography, “Confessions of an Old Jewish Painter,” by one of my dearest dead friends, the great American artist R. B. Kitaj. I have many dear dead friends. A number were novelists. I miss finding their new books in the mail.
Charles McGrath, a former editor of the Book Review, is a contributing writer for The Times. He is the editor of a Library of America collection of John O’Hara stories.

Fiber’s Role in Diet

In this post, I discuss the role of the microbiome and the role of fiber in supporting a healthy microbiome. The health of a microbiome is related to the amount and diversity of the bacteria found within it.

If I had to summarize, I would say this: new research strongly confirms that high-fiber diets are healthy diets. So eat 20-200 grams of fiber daily, from nuts, berries, whole grains, beans, and vegetables.

The Role of the Microbiome
Bacteria in the gut – the “microbiome” – have been the subject of intense research interest over the last decade.

We now know that a healthy microbiome is essential to health and wellbeing.

On a scientific level, we now know that a healthy biome is one with billions of bacteria, of many kinds.

And specifically, we now know that a healthy biome has a layer of mucus along the walls of the intestine.

“The gut is coated with a layer of mucus, atop which sits a carpet of hundreds of species of bacteria, part of the human microbiome.”

If that mucus layer is thick, it is healthy. If it is thin, it is unhealthy (thin mucus layers have been linked to chronic inflammation). (“Their intestines got smaller, and their mucus layer thinner. As a result, bacteria wound up much closer to the intestinal wall, and that encroachment triggered an immune reaction.”)

The Role of Fiber in Supporting a Healthy Microbiome
“Fiber” refers to roughage from fruits, vegetables, and beans that is hard to digest. If fiber is hard to digest, why is it so universally hailed as “good for you”?

That’s the subject of two newly-reported experiments.

The answer seems to lie in bacteria in the gut – the “microbiome”. Much has been written about their beneficial role in the body. But now it seems that some bacteria in the gut have an additional role: they digest fiber that human enzymes cannot digest.

So some bacteria thrive in the gut because of the fiber they eat. And, in an important natural chain, there are apparently other bacteria in the gut that thrive on the waste of the bacteria that eat fiber. An ecosystem of bacteria, tracing back to fiber!

This speaks to one of the most-discussed subjects in science today: why is one microbiome populated with relatively few bacteria, of relatively few types, while another is much more diverse, with many more bacteria and many more bacterial types?
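
One common way to make “diversity” precise is a single-number index such as Shannon entropy over species abundance counts: the more species present, and the more evenly they are represented, the higher the score. A minimal sketch, assuming that index (the studies cited below do not necessarily use this exact metric) and using made-up counts:

```python
# Minimal sketch: Shannon diversity over species abundance counts.
# The index choice and the example counts are illustrative assumptions.
import math

def shannon_diversity(counts):
    """H = -sum(p_i * ln p_i), where p_i is each species' share of the total."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

# A "rich" microbiome: many species, evenly represented.
rich = [120, 95, 110, 80, 130, 70, 90, 105]
# A "crashed" microbiome: one species dominates, others nearly gone.
crashed = [600, 20, 5, 2, 1, 1]

print(shannon_diversity(rich))     # higher score = more diverse
print(shannon_diversity(crashed))  # lower score = less diverse
```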

One study, excerpted below, reports from Tanzania, reviewing data from a tribe that sustains itself on high-fiber foods. The results, reported in Science, clearly show that an ultra-high-fiber diet results in ultra-high bacteria counts and diversity.

Other findings suggest that fiber is the food of many bacteria types. Because of this, a diverse, healthy bacterial microbiome is dependent on a fiber-rich diet. (“On a low-fiber diet, they found, the population crashed, shrinking tenfold.”)

Indeed, it may well be true that many types of fibers support many types of bacteria.

Proof of this?

Researchers, including Dr. Gewirtz at Georgia State, found that more fiber seems to be better:

Bad: high-fat, low-fiber (“On a low-fiber diet, they found, the population crashed, shrinking tenfold.” “Many common species became rare, and rare species became common.”)

Good: modest fiber
Better: high-dose fiber (“Despite a high-fat diet, the mice had healthy populations of bacteria in their guts, their intestines were closer to normal, and they put on less weight.”)

Best: high dose of fiber-feeding bacteria
(“Once bacteria are done harvesting the energy in dietary fiber, they cast off the fragments as waste. That waste — in the form of short-chain fatty acids — is absorbed by intestinal cells, which use it as fuel.”)

(“Research suggests that when bacteria break dietary fiber down into short-chain fatty acids, some of them pass into the bloodstream and travel to other organs, where they act as signals to quiet down the immune system.”)

===========================
This article documents rich-in-fiber foods:

CREDIT: http://www.todaysdietitian.com/newarchives/063008p28.shtml

In recognition of fiber’s benefits, Today’s Dietitian looks at some of the best ways to boost fiber intake, from whole to fortified foods, using data from the USDA National Nutrient Database for Standard Reference.

Top Fiber-Rich Foods
1. Get on the Bran Wagon (Oat bran, All-bran cereal, fiber-one chewy bars, etc)
One simple way to increase fiber intake is to power up on bran. Bran from many grains is very rich in dietary fiber. Oat bran is high in soluble fiber, which has been shown to lower blood cholesterol levels. Wheat, corn, and rice bran are high in insoluble fiber, which helps prevent constipation. Bran can be sprinkled into your favorite foods, from hot cereal and pancakes to muffins and cookies. Many popular high-fiber cereals and bars are also packed with bran.

2. Take a Trip to Bean Town (Limas, Pintos, Lentils, etc)
Beans really are the magical fruit. They are one of the most naturally rich sources of fiber, as well as protein, lysine, vitamins, and minerals, in the plant kingdom. It’s no wonder so many indigenous diets include a bean or two in the mix. Some people experience intestinal gas and discomfort associated with bean intake, so they may be better off slowly introducing beans into their diet. Encourage a variety of beans as an animal protein replacement in stews, side dishes, salads, soups, casseroles, and dips.

3. Go Berry Picking (especially blackberries and raspberries)
Jewel-like berries are in the spotlight due to their antioxidant power, but let’s not forget about their fiber bonus. Berries happen to yield one of the best fiber-per-calorie bargains on the planet. Since berries are packed with tiny seeds, their fiber content is typically higher than that of many fruits. Clients can enjoy berries year-round by making the most of local berries in the summer and eating frozen, preserved, and dried berries during the other seasons. Berries make great toppings for breakfast cereal, yogurt, salads, and desserts.

4. Wholesome Whole Grains (especially barley, oats, brown rice, rye wafers)
One of the easiest ways to up fiber intake is to focus on whole grains. A grain in nature is essentially the entire seed of the plant made up of the bran, germ, and endosperm. Refining the grain removes the germ and the bran; thus, fiber, protein, and other key nutrients are lost. The Whole Grains Council recognizes a variety of grains and defines whole grains or foods made from them as containing “all the essential parts and naturally-occurring nutrients of the entire grain seed. If the grain has been processed, the food product should deliver approximately the same rich balance of nutrients that are found in the original grain seed.” Have clients choose different whole grains as features in side dishes, pilafs, salads, breads, crackers, snacks, and desserts.

5. Sweet Peas (especially frozen green peas, black eyed peas)
Peas, from fresh green peas to dried peas, are naturally chock-full of fiber. In fact, food technologists have been studying pea fiber as a functional food ingredient. Clients can make the most of peas by using fresh or frozen green peas and dried peas in soups, stews, side dishes, casseroles, salads, and dips.

6. Green, the Color of Fiber (Spinach, etc)
Deep green, leafy vegetables are notoriously rich in beta-carotene, vitamins, and minerals, but their fiber content isn’t too shabby either. There are more than 1,000 species of plants with edible leaves, many with similar nutritional attributes, including high fiber content. While many leafy greens are fabulous tossed in salads, sautéing them in olive oil, garlic, lemon, and herbs brings out a rich flavor.

7. Squirrel Away Nuts and Seeds (especially flaxseed and sesame seed)
Go nuts to pack a fiber punch. One ounce of nuts and seeds can provide a hearty contribution to the day’s fiber recommendation, along with a bonus of healthy fats, protein, and phytochemicals. Sprinkling a handful of nuts or seeds over breakfast cereals, yogurt, salads, and desserts is a tasty way to do fiber.

8. Play Squash (especially acorn squash)
Dishing up squash, from summer to winter squash, all year is another way that clients can ratchet up their fiber intake. These nutritious gems are part of the gourd family and contribute a variety of flavors, textures, and colors, as well as fiber, vitamins, minerals, and carotenoids, to the dinner plate. Squash can be turned into soups, stews, side dishes, casseroles, salads, and crudités. Brush squash with olive oil and grill it in the summertime for a healthy, flavorful accompaniment to grilled meats.

9. Brassica or Bust (broccoli, cauliflower, kale, cabbage, and Brussels sprouts)
Brassica vegetables have been studied for their cancer-protective effects associated with high levels of glucosinolates. But these brassy beauties, including broccoli, cauliflower, kale, cabbage, and Brussels sprouts, are also full of fiber. They can be enjoyed in stir-fries, casseroles, soups, and salads and steamed as a side dish.

10. Hot Potatoes
The humble spud, the top vegetable crop in the world, is plump with fiber. Since potatoes are so popular in America, they’re an easy way to help pump up people’s fiber potential. Why stop at Russets? There are numerous potatoes that can provide a rainbow of colors, nutrients, and flavors; remind clients to eat the skins to reap the greatest fiber rewards. Try adding cooked potatoes with skins to salads, stews, soups, side dishes, stir-fries, and casseroles or simply enjoy baked potatoes more often.

11. Everyday Fruit Basket (especially pears and oranges)
Look no further than everyday fruits to realize your full fiber potential. Many are naturally packed with fiber, as well as other important vitamins and minerals. Maybe the doctor was right when he advised an apple a day, but he could have added pears, oranges, and bananas to the prescription as well. When between fruit seasons, clients can rely on dried fruits to further fortify their diet. Encourage including fruit at breakfast each morning instead of juice; mixing dried fruits into cereals, yogurts, and salads; and reaching for the fruit bowl at snack time. It’s a healthy habit all the way around.

12. Exotic Destinations (especially avocado)
Some of the plants with the highest fiber content in the world may be slightly out of your clients’ comfort zone and, for that matter, time zone. A rainbow of indigenous fruits and vegetables used in cultural food traditions around the globe are very high in fiber. Entice clients to introduce a few new plant foods into their diets to push up the flavor, as well as their fiber, quotient.

13. Fiber Fortification Power
More foods, from juice to yogurt, are including fiber fortification in their ingredient lineup. Such foods may help busy people achieve their fiber goals. As consumer interest in foods with functional benefits, such as digestive health and cardiovascular protection, continues to grow, expect to see an even greater supply of food products promoting fiber content on supermarket shelves.

===========================

This article documents the newly-reported experiments:

CREDIT: NYT Article on Fiber Science

Fiber Is Good for You. Now We Know Why

By Carl Zimmer
Jan. 1, 2018
A diet of fiber-rich foods, such as fruits and vegetables, reduces the risk of developing diabetes, heart disease and arthritis. Indeed, the evidence for fiber’s benefits extends beyond any particular ailment: Eating more fiber seems to lower people’s mortality rate, whatever the cause.

That’s why experts are always saying how good dietary fiber is for us. But while the benefits are clear, it’s not so clear why fiber is so great. “It’s an easy question to ask and a hard one to really answer,” said Fredrik Bäckhed, a biologist at the University of Gothenburg in Sweden.

He and other scientists are running experiments that are yielding some important new clues about fiber’s role in human health. Their research indicates that fiber doesn’t deliver many of its benefits directly to our bodies.

Instead, the fiber we eat feeds billions of bacteria in our guts. Keeping them happy means our intestines and immune systems remain in good working order.

In order to digest food, we need to bathe it in enzymes that break down its molecules. Those molecular fragments then pass through the gut wall and are absorbed in our intestines.
But our bodies make a limited range of enzymes, so that we cannot break down many of the tough compounds in plants. The term “dietary fiber” refers to those indigestible molecules.

But they are indigestible only to us. The gut is coated with a layer of mucus, atop which sits a carpet of hundreds of species of bacteria, part of the human microbiome. Some of these microbes carry the enzymes needed to break down various kinds of dietary fiber.

The ability of these bacteria to survive on fiber we can’t digest ourselves has led many experts to wonder if the microbes are somehow involved in the benefits of the fruits-and-vegetables diet. Two detailed studies published recently in the journal Cell Host and Microbe provide compelling evidence that the answer is yes.

In one experiment, Andrew T. Gewirtz of Georgia State University and his colleagues put mice on a low-fiber, high-fat diet. By examining fragments of bacterial DNA in the animals’ feces, the scientists were able to estimate the size of the gut bacterial population in each mouse.

On a low-fiber diet, they found, the population crashed, shrinking tenfold.

Dr. Bäckhed and his colleagues carried out a similar experiment, surveying the microbiome in mice as they were switched from fiber-rich food to a low-fiber diet. “It’s basically what you’d get at McDonald’s,” Dr. Bäckhed said. “A lot of lard, a lot of sugar, and twenty percent protein.”

The scientists focused on the diversity of species that make up the mouse’s gut microbiome. Shifting the animals to a low-fiber diet had a dramatic effect, they found: Many common species became rare, and rare species became common.

Along with changes to the microbiome, both teams also observed rapid changes to the mice themselves. Their intestines got smaller, and their mucus layer thinner. As a result, bacteria wound up much closer to the intestinal wall, and that encroachment triggered an immune reaction.

After a few days on the low-fiber diet, mouse intestines developed chronic inflammation. After a few weeks, Dr. Gewirtz’s team observed that the mice began to change in other ways, putting on fat, for example, and developing higher blood sugar levels.

Dr. Bäckhed and his colleagues also fed another group of rodents the high-fat menu, along with a modest dose of a type of fiber called inulin. The mucus layer in their guts was healthier than in mice that didn’t get fiber, the scientists found, and intestinal bacteria were kept at a safer distance from their intestinal wall.

Dr. Gewirtz and his colleagues gave inulin to their mice as well, but at a much higher dose. The improvements were even more dramatic: Despite a high-fat diet, the mice had healthy populations of bacteria in their guts, their intestines were closer to normal, and they put on less weight.

Dr. Bäckhed and his colleagues ran one more interesting experiment: They spiked water given to mice on a high-fat diet with a species of fiber-feeding bacteria. The addition changed the mice for the better: Even on a high-fat diet, they produced more mucus in their guts, creating a healthy barrier to keep bacteria from the intestinal walls.

One way that fiber benefits health is by giving us, indirectly, another source of food, Dr. Gewirtz said. Once bacteria are done harvesting the energy in dietary fiber, they cast off the fragments as waste. That waste — in the form of short-chain fatty acids — is absorbed by intestinal cells, which use it as fuel.

But the gut’s microbes do more than just make energy. They also send messages. Intestinal cells rely on chemical signals from the bacteria to work properly, Dr. Gewirtz said. The cells respond to the signals by multiplying and making a healthy supply of mucus. They also release bacteria-killing molecules.
By generating these responses, gut bacteria help maintain a peaceful coexistence with the immune system. They rest atop the gut’s mucus layer at a safe distance from the intestinal wall. Any bacteria that wind up too close get wiped out by antimicrobial poisons.

While some species of gut bacteria feed directly on dietary fiber, they probably support other species that feed on their waste. A number of species in this ecosystem — all of it built on fiber — may be talking to our guts.

Going on a low-fiber diet disturbs this peaceful relationship, the new studies suggest. The species that depend on dietary fiber starve, as do the other species that depend on them. Some species may switch to feeding on the host’s own mucus.

With less fuel, intestinal cells grow more slowly. And without a steady stream of chemical signals from bacteria, the cells slow their production of mucus and bacteria-killing poisons.
As a result, bacteria edge closer to the intestinal wall, and the immune system kicks into high gear.

“The gut is always precariously balanced between trying to contain these organisms and not to overreact,” said Eric C. Martens, a microbiologist at the University of Michigan who was not involved in the new studies. “It could be a tipping point between health and disease.”

Inflammation can help fight infections, but if it becomes chronic, it can harm our bodies. Among other things, chronic inflammation may interfere with how the body uses the calories in food, storing more of it as fat rather than burning it for energy.

Justin L. Sonnenburg, a biologist at Stanford University who was not involved in the new studies, said that a low-fiber diet can cause low-level inflammation not only in the gut, but throughout the body.

His research suggests that when bacteria break dietary fiber down into short-chain fatty acids, some of them pass into the bloodstream and travel to other organs, where they act as signals to quiet down the immune system.

“You can modulate what’s happening in your lung based on what you’re feeding your microbiome in your gut,” Dr. Sonnenburg said.
Hannah D. Holscher, a nutrition scientist at the University of Illinois who was not involved in the new studies, said that the results on mice need to be put to the test in humans. But it’s much harder to run such studies on people.

In her own lab, Dr. Holscher acts as a round-the-clock personal chef. She and her colleagues provide volunteers with all their meals for two weeks. She can then give some of her volunteers an extra source of fiber — such as walnuts — and look for changes in both their microbiome and their levels of inflammation.

Dr. Holscher and other researchers hope that they will learn enough about how fiber influences the microbiome to use it as a way to treat disorders. Lowering inflammation with fiber may also help in the treatment of immune disorders such as inflammatory bowel disease.

Fiber may also help reverse obesity. Last month in the American Journal of Clinical Nutrition, Dr. Holscher and her colleagues reviewed a number of trials in which fiber was used to treat obesity. They found that fiber supplements helped obese people to lose about five pounds, on average.
But for those who want to stay healthy, simply adding one kind of fiber to a typical Western diet won’t be a panacea. Giving mice inulin in the new studies only partly restored them to health.

That’s probably because we depend on a number of different kinds of dietary fiber we get from plants. It’s possible that each type of fiber feeds a particular set of bacteria, which send their own important signals to our bodies.

“It points to the boring thing that we all know but no one does,” Dr. Bäckhed said. “If you eat more green veggies and less fries and sweets, you’ll probably be better off in the long term.”

=====================

CREDIT: https://www.npr.org/sections/goatsandsoda/2017/08/24/545631521/is-the-secret-to-a-healthier-microbiome-hidden-in-the-hadza-diet

Is The Secret To A Healthier Microbiome Hidden In The Hadza Diet?

August 24, 2017, 6:11 PM ET
Heard on All Things Considered

MICHAELEEN DOUCLEFF

The words “endangered species” often conjure up images of big exotic creatures. Think elephants, leopards and polar bears.

But there’s another type of extinction that may be occurring, right now, inside our bodies.

Yes, I’m talking about the microbiome — that collection of bacteria in our intestines that influences everything from metabolism and the immune system to moods and behavior.

For the past few years, scientists around the world have been accumulating evidence that the Western lifestyle is altering our microbiome. Some species of bacteria are even disappearing to undetectable levels.

“Over time we are losing valuable members of our community,” says Justin Sonnenburg, a microbiologist at Stanford University, who has been studying the microbiome for more than a decade.

Now Sonnenburg and his team have evidence for why this microbial die-off is happening — and hints about what we can possibly do to reverse it.

The study, published Thursday in the journal Science, focuses on a group of hunter-gatherers in Tanzania, called Hadza.
Their diet consists almost entirely of food they find in the forest, including wild berries, fiber-rich tubers, honey and wild meat. They basically eat no processed food — or even food that comes from farms.
“They are a very special group of people,” Sonnenburg says. “There are only about 2,200 left and really only about 200 that exclusively adhere to hunting and gathering.”

Sonnenburg and his colleagues analyzed 350 stool samples from Hadza people taken over the course of about a year. They then compared the bacteria found in the Hadza with those found in 17 other cultures around the world, including other hunter-gatherer communities in Venezuela and Peru and subsistence farmers in Malawi and Cameroon.

The trend was clear: The further away people’s diets are from a Western diet, the greater the variety of microbes they tend to have in their guts. And that includes bacteria that are missing from American guts.

“So whether it’s people in Africa, Papua New Guinea or South America, communities that live a traditional lifestyle have common gut microbes — ones that we all lack in the industrialized world,” Sonnenburg says.

In a way, the Western diet — low in fiber and high in refined sugars — is basically wiping out species of bacteria from our intestines.

That’s the conclusion Sonnenburg and his team reached after analyzing the Hadza microbiome at one stage of the yearlong study. But when they checked several months later, they uncovered a surprising twist: The composition of the microbiome fluctuated over time, depending on the season and what people were eating. And at one point, the composition started to look surprisingly similar to that of Westerners’ microbiome.

During the dry season, Hadza eat a lot more meat — kind of like Westerners do. And their microbiome shifted as their diet changed. Some of the bacterial species that had been prevalent disappeared to undetectable levels, similar to what’s been observed in Westerners’ guts.

But then in the wet season — when Hadza eat more berries and honey — these missing microbes returned, although the researchers are not really sure what’s in these foods that brings the microbes back.

“I think this finding is really exciting,” says Lawrence David, who studies the microbiome at Duke University. “It suggests the shifts in the microbiome seen in industrialized nations might not be permanent — that they might be reversible by changes in people’s diets.

“The finding supports the idea that the microbiome is plastic, depending on diet,” David adds.

Now the big question is: What’s the key dietary change that could bring the missing microbes back?

David thinks it could be cutting down on fat. “At a high level, it sounds like that,” he says, “because what changed in the Hadza’s diet was whether or not they were hunting versus foraging for berries or honey.”

But Sonnenburg is placing his bets on another dietary component: fiber — which is a vital food for the microbiome.
“We’re beginning to realize that people who eat more dietary fiber are actually feeding their gut microbiome,” Sonnenburg says.

Hadza consume a huge amount of fiber because throughout the year, they eat fiber-rich tubers and fruit from baobab trees. These staples give them about 100 to 150 grams of fiber each day. That’s equivalent to the fiber in 50 bowls of Cheerios — and 10 times more than many Americans eat.
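
As a quick sanity check, the article’s own numbers can be reconciled in a couple of lines; the per-bowl and per-day derivations below are my arithmetic, not figures reported in the article:

```python
# Back-of-envelope check of the quoted fiber figures.
hadza_low, hadza_high = 100, 150   # grams of fiber per day (from the article)
bowls = 50                         # "50 bowls of Cheerios" (from the article)

print(hadza_low / bowls, hadza_high / bowls)  # ~2-3 g of fiber per bowl
print(hadza_low / 10, hadza_high / 10)        # ~10-15 g/day, the implied
                                              # typical American intake
```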

“Over the past few years, we’ve come to realize how important this gut community is for our health, and yet we’re eating a low-fiber diet that totally neglects them,” he says. “So we’re essentially starving our microbial selves.”

The Dying Algorithm

CREDIT: NYT Article on the Dying Algorithm

This Cat Sensed Death. What if Computers Could, Too?
By Siddhartha Mukherjee
Jan. 3, 2018

Of the many small humiliations heaped on a young oncologist in his final year of fellowship, perhaps this one carried the oddest bite: A 2-year-old black-and-white cat named Oscar was apparently better than most doctors at predicting when a terminally ill patient was about to die. The story appeared, astonishingly, in The New England Journal of Medicine in the summer of 2007. Adopted as a kitten by the medical staff, Oscar reigned over one floor of the Steere House nursing home in Rhode Island. When the cat would sniff the air, crane his neck and curl up next to a man or woman, it was a sure sign of impending demise. The doctors would call the families to come in for their last visit. Over the course of several years, the cat had curled up next to 50 patients. Every one of them died shortly thereafter.
No one knows how the cat acquired his formidable death-sniffing skills. Perhaps Oscar’s nose learned to detect some unique whiff of death — chemicals released by dying cells, say. Perhaps there were other inscrutable signs. I didn’t quite believe it at first, but Oscar’s acumen was corroborated by other physicians who witnessed the prophetic cat in action. As the author of the article wrote: “No one dies on the third floor unless Oscar pays a visit and stays awhile.”
The story carried a particular resonance for me that summer, for I had been treating S., a 32-year-old plumber with esophageal cancer. He had responded well to chemotherapy and radiation, and we had surgically resected his esophagus, leaving no detectable trace of malignancy in his body. One afternoon, a few weeks after his treatment had been completed, I cautiously broached the topic of end-of-life care. We were going for a cure, of course, I told S., but there was always the small possibility of a relapse. He had a young wife and two children, and a mother who had brought him weekly to the chemo suite. Perhaps, I suggested, he might have a frank conversation with his family about his goals?

But S. demurred. He was regaining strength week by week. The conversation was bound to be “a bummah,” as he put it in his distinct Boston accent. His spirits were up. The cancer was out. Why rain on his celebration? I agreed reluctantly; it was unlikely that the cancer would return.

When the relapse appeared, it was a full-on deluge. Two months after he left the hospital, S. returned to see me with sprays of metastasis in his liver, his lungs and, unusually, in his bones. The pain from these lesions was so terrifying that only the highest doses of painkilling drugs would treat it, and S. spent the last weeks of his life in a state bordering on coma, unable to register the presence of his family around his bed. His mother pleaded with me at first to give him more chemo, then accused me of misleading the family about S.’s prognosis. I held my tongue in shame: Doctors, I knew, have an abysmal track record of predicting which of our patients are going to die. Death is our ultimate black box.

In a survey led by researchers at University College London of over 12,000 prognoses of the life span of terminally ill patients, the hits and misses were wide-ranging. Some doctors predicted deaths accurately. Others underestimated death by nearly three months; yet others overestimated it by an equal magnitude. Even within oncology, there were subcultures of the worst offenders: In one story, likely apocryphal, a leukemia doctor was found instilling chemotherapy into the veins of a man whose I.C.U. monitor said that his heart had long since stopped.

But what if an algorithm could predict death? In late 2016 a graduate student named Anand Avati at Stanford’s computer-science department, along with a small team from the medical school, tried to “teach” an algorithm to identify patients who were very likely to die within a defined time window. “The palliative-care team at the hospital had a challenge,” Avati told me. “How could we find patients who are within three to 12 months of dying?” This window was “the sweet spot of palliative care.” A lead time longer than 12 months can strain limited resources unnecessarily, providing too much, too soon; in contrast, if death came less than three months after the prediction, there would be no real preparatory time for dying — too little, too late. Identifying patients in the narrow, optimal time period, Avati knew, would allow doctors to use medical interventions more appropriately and more humanely. And if the algorithm worked, palliative-care teams would be relieved from having to manually scour charts, hunting for those most likely to benefit.

Avati and his team identified about 200,000 patients who could be studied. The patients had all sorts of illnesses — cancer, neurological diseases, heart and kidney failure. The team’s key insight was to use the hospital’s medical records as a proxy time machine. Say a man died in January 2017. What if you scrolled time back to the “sweet spot of palliative care” — the window between January and October 2016 when care would have been most effective? But to find that spot for a given patient, Avati knew, you’d presumably need to collect and analyze medical information before that window. Could you gather information about this man during this prewindow period that would enable a doctor to predict a demise in that three-to-12-month section of time? And what kinds of inputs might teach such an algorithm to make predictions?
Avati drew on medical information that had already been coded by doctors in the hospital: a patient’s diagnosis, the number of scans ordered, the number of days spent in the hospital, the kinds of procedures done, the medical prescriptions written. The information was admittedly limited — no questionnaires, no conversations, no sniffing of chemicals — but it was objective, and standardized across patients.

These inputs were fed into a so-called deep neural network — a kind of software architecture thus named because it’s thought to loosely mimic the way the brain’s neurons are organized. The task of the algorithm was to adjust the weights and strengths of each piece of information in order to generate a probability score that a given patient would die within three to 12 months.
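
The article does not publish the model, but the setup it describes (coded medical-record features in, a single 3-to-12-month mortality probability out) corresponds to a standard feed-forward binary classifier. A minimal sketch in PyTorch; the feature count, layer sizes, and training loop are illustrative assumptions, not the Stanford team’s actual architecture:

```python
# Sketch of a "dying algorithm"-style model: a feed-forward network mapping
# coded medical-record features to a probability of death within 3-12 months.
# All sizes, names, and the random stand-in data are illustrative assumptions.
import torch
import torch.nn as nn

class MortalityRiskNet(nn.Module):
    def __init__(self, n_features: int):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, 1),  # one logit: risk of death within the window
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x).squeeze(-1)

# Hypothetical per-patient inputs: diagnoses, number of scans ordered,
# days in hospital, procedures, prescriptions (random stand-ins here).
n_train, n_features = 160_000, 100
features = torch.randn(n_train, n_features)
died_in_window = torch.randint(0, 2, (n_train,)).float()

model = MortalityRiskNet(n_features)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()  # training adjusts the weights so that
                                  # the probability scores match outcomes

for step in range(10):  # a real run would loop over mini-batches for longer
    optimizer.zero_grad()
    loss = loss_fn(model(features), died_in_window)
    loss.backward()
    optimizer.step()

# Probability scores for new patients:
probs = torch.sigmoid(model(torch.randn(5, n_features)))
```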

The “dying algorithm,” as we might call it, digested and absorbed information from nearly 160,000 patients to train itself. Once it had ingested all the data, Avati’s team tested it on the remaining 40,000 patients. The algorithm performed surprisingly well. The false-alarm rate was low: Nine out of 10 patients predicted to die within three to 12 months did die within that window. And 95 percent of patients assigned low probabilities by the program survived longer than 12 months. (The data used by this algorithm can be vastly refined in the future. Lab values, scan results, a doctor’s note or a patient’s own assessment can be added to the mix, enhancing the predictive power.)
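
Read as standard classifier metrics, those two figures are a positive predictive value near 0.9 (flagged patients who did die in the window) and a negative predictive value near 0.95 (low-scored patients who lived past 12 months). A small sketch of how one might compute them on the held-out 40,000 patients; the 0.5 threshold and the toy data are illustrative assumptions:

```python
import numpy as np

def predictive_values(probs, died, threshold=0.5):
    """Positive and negative predictive value at a decision threshold.

    probs: predicted probability of death within 3-12 months
    died:  1.0 if the patient actually died in that window, else 0.0
    """
    flagged = probs >= threshold        # predicted high risk
    ppv = died[flagged].mean()          # ~0.9 in the article
    npv = 1.0 - died[~flagged].mean()   # ~0.95 in the article
    return ppv, npv

# Illustrative use on a toy held-out set of 40,000 patients:
rng = np.random.default_rng(0)
probs = rng.random(40_000)
died = (rng.random(40_000) < probs).astype(float)
print(predictive_values(probs, died))
```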

So what, exactly, did the algorithm “learn” about the process of dying? And what, in turn, can it teach oncologists? Here is the strange rub of such a deep learning system: It learns, but it cannot tell us why it has learned; it assigns probabilities, but it cannot easily express the reasoning behind the assignment. Like a child who learns to ride a bicycle by trial and error and, asked to articulate the rules that enable bicycle riding, simply shrugs her shoulders and sails away, the algorithm looks vacantly at us when we ask, “Why?” It is, like death, another black box.

Still, when you pry the box open to look at individual cases, you see expected and unexpected patterns. One man assigned a score of 0.946 died within a few months, as predicted. He had had bladder and prostate cancer, had undergone 21 scans, had been hospitalized for 60 days — all of which had been picked up by the algorithm as signs of impending death. But a surprising amount of weight was seemingly put on the fact that scans were made of his spine and that a catheter had been used in his spinal cord — features that I and my colleagues might not have recognized as predictors of dying (an M.R.I. of the spinal cord, I later realized, was most likely signaling cancer in the nervous system — a deadly site for metastasis).
It’s hard for me to read about the “dying algorithm” without thinking about my patient S. If a more sophisticated version of such an algorithm had been available, would I have used it in his case? Absolutely. Might that have enabled the end-of-life conversation S. never had with his family? Yes. But I cannot shake some inherent discomfort with the thought that an algorithm might understand patterns of mortality better than most humans. And why, I kept asking myself, would such a program seem so much more acceptable if it had come wrapped in a black-and-white fur box that, rather than emitting probabilistic outputs, curled up next to us with retracted claws?

Siddhartha Mukherjee is the author of “The Emperor of All Maladies: A Biography of Cancer” and, more recently, “The Gene: An Intimate History.”

Why Facts Don’t Change Our Minds

CREDIT: New Yorker Article

Why Facts Don’t Change Our Minds
New discoveries about the human mind show the limitations of reason.

By Elizabeth Kolbert

The vaunted human capacity for reason may have more to do with winning arguments than with thinking straight.
In 1975, researchers at Stanford invited a group of undergraduates to take part in a study about suicide. They were presented with pairs of suicide notes. In each pair, one note had been composed by a random individual, the other by a person who had subsequently taken his own life. The students were then asked to distinguish between the genuine notes and the fake ones.

Some students discovered that they had a genius for the task. Out of twenty-five pairs of notes, they correctly identified the real one twenty-four times. Others discovered that they were hopeless. They identified the real note in only ten instances.

As is often the case with psychological studies, the whole setup was a put-on. Though half the notes were indeed genuine—they’d been obtained from the Los Angeles County coroner’s office—the scores were fictitious. The students who’d been told they were almost always right were, on average, no more discerning than those who had been told they were mostly wrong.

In the second phase of the study, the deception was revealed. The students were told that the real point of the experiment was to gauge their responses to thinking they were right or wrong. (This, it turned out, was also a deception.) Finally, the students were asked to estimate how many suicide notes they had actually categorized correctly, and how many they thought an average student would get right. At this point, something curious happened. The students in the high-score group said that they thought they had, in fact, done quite well—significantly better than the average student—even though, as they’d just been told, they had zero grounds for believing this. Conversely, those who’d been assigned to the low-score group said that they thought they had done significantly worse than the average student—a conclusion that was equally unfounded.

“Once formed,” the researchers observed dryly, “impressions are remarkably perseverant.”

A few years later, a new set of Stanford students was recruited for a related study. The students were handed packets of information about a pair of firefighters, Frank K. and George H. Frank’s bio noted that, among other things, he had a baby daughter and he liked to scuba dive. George had a small son and played golf. The packets also included the men’s responses on what the researchers called the Risky-Conservative Choice Test. According to one version of the packet, Frank was a successful firefighter who, on the test, almost always went with the safest option. In the other version, Frank also chose the safest option, but he was a lousy firefighter who’d been put “on report” by his supervisors several times. Once again, midway through the study, the students were informed that they’d been misled, and that the information they’d received was entirely fictitious. The students were then asked to describe their own beliefs. What sort of attitude toward risk did they think a successful firefighter would have? The students who’d received the first packet thought that he would avoid it. The students in the second group thought he’d embrace it.

Even after the evidence “for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs,” the researchers noted. In this case, the failure was “particularly impressive,” since two data points would never have been enough information to generalize from.

The Stanford studies became famous. Coming from a group of academics in the nineteen-seventies, the contention that people can’t think straight was shocking. It isn’t any longer. Thousands of subsequent experiments have confirmed (and elaborated on) this finding. As everyone who’s followed the research—or even occasionally picked up a copy of Psychology Today—knows, any graduate student with a clipboard can demonstrate that reasonable-seeming people are often totally irrational. Rarely has this insight seemed more relevant than it does right now. Still, an essential puzzle remains: How did we come to be this way?

In a new book, “The Enigma of Reason” (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at answering this question. Mercier, who works at a French research institute in Lyon, and Sperber, now based at the Central European University, in Budapest, point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context.

Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.

Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.

The students were asked to respond to two studies. One provided data in support of the deterrence argument, and the other provided data that called it into question. Both studies—you guessed it—were made up, and had been designed to present what were, objectively speaking, equally compelling statistics. The students who had originally supported capital punishment rated the pro-deterrence data highly credible and the anti-deterrence data unconvincing; the students who’d originally opposed capital punishment did the reverse. At the end of the experiment, the students were asked once again about their views. Those who’d started out pro-capital punishment were now even more in favor of it; those who’d opposed it were even more hostile.

If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hypersociability.”

Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.

A recent experiment performed by Mercier and some European colleagues neatly demonstrates this asymmetry. Participants were asked to answer a series of simple reasoning problems. They were then asked to explain their responses, and were given a chance to modify them if they identified mistakes. The majority were satisfied with their original choices; fewer than fifteen per cent changed their minds in step two.

In step three, participants were shown one of the same problems, along with their answer and the answer of another participant, who’d come to a different conclusion. Once again, they were given the chance to change their responses. But a trick had been played: the answers presented to them as someone else’s were actually their own, and vice versa. About half the participants realized what was going on. Among the other half, suddenly people became a lot more critical. Nearly sixty per cent now rejected the responses that they’d earlier been satisfied with.

This lopsidedness, according to Mercier and Sperber, reflects the task that reason evolved to perform, which is to prevent us from getting screwed by the other members of our group. Living in small bands of hunter-gatherers, our ancestors were primarily concerned with their social standing, and with making sure that they weren’t the ones risking their lives on the hunt while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from winning arguments.

Among the many, many issues our forebears didn’t worry about were the deterrent effects of capital punishment and the ideal attributes of a firefighter. Nor did they have to contend with fabricated studies, or fake news, or Twitter. It’s no wonder, then, that today reason often seems to fail us. As Mercier and Sperber write, “This is one of many cases in which the environment changed too quickly for natural selection to catch up.”

Steven Sloman, a professor at Brown, and Philip Fernbach, a professor at the University of Colorado, are also cognitive scientists. They, too, believe sociability is the key to how the human mind functions or, perhaps more pertinently, malfunctions. They begin their book, “The Knowledge Illusion: Why We Never Think Alone” (Riverhead), with a look at toilets.

Virtually everyone in the United States, and indeed throughout the developed world, is familiar with toilets. A typical flush toilet has a ceramic bowl filled with water. When the handle is depressed, or the button pushed, the water—and everything that’s been deposited in it—gets sucked into a pipe and from there into the sewage system. But how does this actually happen?

In a study conducted at Yale, graduate students were asked to rate their understanding of everyday devices, including toilets, zippers, and cylinder locks. They were then asked to write detailed, step-by-step explanations of how the devices work, and to rate their understanding again. Apparently, the effort revealed to the students their own ignorance, because their self-assessments dropped. (Toilets, it turns out, are more complicated than they appear.)

Sloman and Fernbach see this effect, which they call the “illusion of explanatory depth,” just about everywhere. People believe that they know way more than they actually do. What allows us to persist in this belief is other people. In the case of my toilet, someone else designed it so that I can operate it easily. This is something humans are very good at. We’ve been relying on one another’s expertise ever since we figured out how to hunt together, which was probably a key development in our evolutionary history. So well do we collaborate, Sloman and Fernbach argue, that we can hardly tell where our own understanding ends and others’ begins.

“One implication of the naturalness with which we divide cognitive labor,” they write, is that there’s “no sharp boundary between one person’s ideas and knowledge” and “those of other members” of the group.

This borderlessness, or, if you prefer, confusion, is also crucial to what we consider progress. As people invented new tools for new ways of living, they simultaneously created new realms of ignorance; if everyone had insisted on, say, mastering the principles of metalworking before picking up a knife, the Bronze Age wouldn’t have amounted to much. When it comes to new technologies, incomplete understanding is empowering.

Where it gets us into trouble, according to Sloman and Fernbach, is in the political domain. It’s one thing for me to flush a toilet without knowing how it operates, and another for me to favor (or oppose) an immigration ban without knowing what I’m talking about. Sloman and Fernbach cite a survey conducted in 2014, not long after Russia annexed the Ukrainian territory of Crimea. Respondents were asked how they thought the U.S. should react, and also whether they could identify Ukraine on a map. The farther off base they were about the geography, the more likely they were to favor military intervention. (Respondents were so unsure of Ukraine’s location that the median guess was wrong by eighteen hundred miles, roughly the distance from Kiev to Madrid.)

Surveys on many other issues have yielded similarly dismaying results. “As a rule, strong feelings about issues do not emerge from deep understanding,” Sloman and Fernbach write. And here our dependence on other minds reinforces the problem. If your position on, say, the Affordable Care Act is baseless and I rely on it, then my opinion is also baseless. When I talk to Tom and he decides he agrees with me, his opinion is also baseless, but now that the three of us concur we feel that much more smug about our views. If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the Trump Administration.

“This is how a community of knowledge can become dangerous,” Sloman and Fernbach observe. The two have performed their own version of the toilet experiment, substituting public policy for household gadgets. In a study conducted in 2012, they asked people for their stance on questions like: Should there be a single-payer health-care system? Or merit-based pay for teachers? Participants were asked to rate their positions depending on how strongly they agreed or disagreed with the proposals. Next, they were instructed to explain, in as much detail as they could, the impacts of implementing each one. Most people at this point ran into trouble. Asked once again to rate their views, they ratcheted down the intensity, so that they either agreed or disagreed less vehemently.

Sloman and Fernbach see in this result a little candle for a dark world. If we—or our friends or the pundits on CNN—spent less time pontificating and more trying to work through the implications of policy proposals, we’d realize how clueless we are and moderate our views. This, they write, “may be the only form of thinking that will shatter the illusion of explanatory depth and change people’s attitudes.”

One way to look at science is as a system that corrects for people’s natural inclinations. In a well-run laboratory, there’s no room for myside bias; the results have to be reproducible in other laboratories, by researchers who have no motive to confirm them. And this, it could be argued, is why the system has proved so successful. At any given moment, a field may be dominated by squabbles, but, in the end, the methodology prevails. Science moves forward, even as we remain stuck in place.

In “Denying to the Grave: Why We Ignore the Facts That Will Save Us” (Oxford), Jack Gorman, a psychiatrist, and his daughter, Sara Gorman, a public-health specialist, probe the gap between what science tells us and what we tell ourselves. Their concern is with those persistent beliefs which are not just demonstrably false but also potentially deadly, like the conviction that vaccines are hazardous. Of course, what’s hazardous is not being vaccinated; that’s why vaccines were created in the first place. “Immunization is one of the triumphs of modern medicine,” the Gormans note. But no matter how many scientific studies conclude that vaccines are safe, and that there’s no link between immunizations and autism, anti-vaxxers remain unmoved. (They can now count on their side—sort of—Donald Trump, who has said that, although he and his wife had their son, Barron, vaccinated, they refused to do so on the timetable recommended by pediatricians.)

The Gormans, too, argue that ways of thinking that now seem self-destructive must at some point have been adaptive. And they, too, dedicate many pages to confirmation bias, which, they claim, has a physiological component. They cite research suggesting that people experience genuine pleasure—a rush of dopamine—when processing information that supports their beliefs. “It feels good to ‘stick to our guns’ even if we are wrong,” they observe.

The Gormans don’t just want to catalogue the ways we go wrong; they want to correct for them. There must be some way, they maintain, to convince people that vaccines are good for kids, and handguns are dangerous. (Another widespread but statistically insupportable belief they’d like to discredit is that owning a gun makes you safer.) But here they encounter the very problems they have enumerated. Providing people with accurate information doesn’t seem to help; they simply discount it. Appealing to their emotions may work better, but doing so is obviously antithetical to the goal of promoting sound science. “The challenge that remains,” they write toward the end of their book, “is to figure out how to address the tendencies that lead to false scientific belief.”

“The Enigma of Reason,” “The Knowledge Illusion,” and “Denying to the Grave” were all written before the November election. And yet they anticipate Kellyanne Conway and the rise of “alternative facts.” These days, it can feel as if the entire country has been given over to a vast psychological experiment being run either by no one or by Steve Bannon. Rational agents would be able to think their way to a solution. But, on this matter, the literature is not reassuring. ♦

This article appears in the print edition of the February 27, 2017, issue, with the headline “That’s What You Think.”

Elizabeth Kolbert has been a staff writer at The New Yorker since 1999. She won the 2015 Pulitzer Prize for general nonfiction for “The Sixth Extinction: An Unnatural History.”

John C. Reid