Category Archives: Systems

Complex adaptive systems, forecasting systems, systems design, systems architecture, modules, modular systems, modular design, system design patterns, forecasting systems that mine big data

Microbiome Update

March 18, 2021

Credit: https://www.nytimes.com/2021/03/18/well/eat/microbiome-aging.html

By Anahad O’Connor

The secret to successful aging may lie in part in your gut, according to a new report. The study found that it may be possible to predict your likelihood of living a long and healthy life by analyzing the trillions of bacteria, viruses and fungi that inhabit your intestinal tract.

The new research, published in the journal Nature Metabolism, found that as people get older, the composition of this complex community of microbes, collectively known as the gut microbiome, tends to change. And the greater the change, the better, it appears.

In healthy people, the kinds of microbes that dominate the gut in early adulthood make up a smaller and smaller proportion of the microbiome over the ensuing decades, while the percentage of other, less prevalent species rises. But in people who are less healthy, the study found, the opposite occurs: The composition of their microbiomes remains relatively static and they tend to die earlier.

The new findings suggest that a gut microbiome that continually transforms as you get older is a sign of healthy aging, said a co-author of the study, Sean Gibbons, a microbiome specialist and assistant professor at the Institute for Systems Biology in Seattle, a nonprofit biomedical research organization.

“A lot of aging research is obsessed with returning people to a younger state or turning back the clock,” he said. “But here the conclusion is very different. Maybe a microbiome that’s healthy for a 20-year-old is not at all healthy for an 80-year-old. It seems that it’s good to have a changing microbiome when you’re old. It means that the bugs that are in your system are adjusting appropriately to an aging body.”

The researchers could not be certain whether changes in the gut microbiome helped to drive healthy aging or vice versa. But they did see signs that what happens in people’s guts may directly improve their health. They found, for example, that people whose microbiomes shifted toward a unique profile as they aged also had higher levels of health-promoting compounds in their blood, including compounds produced by gut microbes that fight chronic disease.

Scientists have suspected for some time that the microbiome plays a role in aging. Studies have found, for example, that people 65 and older who are relatively lean and physically active have a higher abundance of certain microbes in their guts compared to seniors who are less fit and healthy. People who develop early signs of frailty also have less microbial diversity in their guts.

By studying the microbiomes of people of all ages, scientists have found patterns that extend across the entire life span. The microbiome undergoes rapid changes as it develops in the first three years of life. Then it remains relatively stable for decades, before gradually changing in its makeup as people reach midlife, a shift that accelerates into old age in those who are healthy but slows or remains static in people who are less healthy.

Although no two microbiomes are identical, people on average share about 30 percent of their gut bacterial species. A few species that are particularly common and abundant make up a “core” set of gut microbes in all of us, along with smaller amounts of a wide variety of other species that are found in different combinations in every person.

To get a better understanding of what happens in the gut as people age, Dr. Gibbons and his colleagues, including Dr. Tomasz Wilmanski, the lead author of the new study, looked at data on over 9,000 adults who had their microbiomes sequenced. They ranged in age from 18 to 101.

About 900 of these people were seniors who underwent regular checkups at medical clinics to assess their health. Dr. Gibbons and his colleagues found that in midlife, starting at around age 40, people started to show distinct changes in their microbiomes. The strains that were most dominant in their guts tended to decline, while other, less common strains became more prevalent, causing their microbiomes to diverge and look more and more different from others in the population.

“What we found is that over the different decades of life, individuals drift apart — their microbiomes become more and more unique from one another,” said Dr. Gibbons.
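This "drift apart" result rests on giving each person's microbiome a uniqueness score. As a purely illustrative sketch (not the study's actual pipeline), one common ecological approach scores each sample by its Bray-Curtis dissimilarity to the most similar other sample in the cohort; the relative-abundance vectors below are made up for the example:

```python
# Sketch: quantify how "unique" each person's gut microbiome is by
# computing its Bray-Curtis dissimilarity to the closest other sample.
# Illustrative only; the study's actual uniqueness metric may differ.

def bray_curtis(a, b):
    """Dissimilarity between two relative-abundance vectors (0 = identical)."""
    num = sum(abs(x - y) for x, y in zip(a, b))
    den = sum(x + y for x, y in zip(a, b))
    return num / den if den else 0.0

def uniqueness(samples):
    """For each sample, the distance to its nearest neighbor among the others."""
    scores = []
    for i, s in enumerate(samples):
        others = [bray_curtis(s, t) for j, t in enumerate(samples) if j != i]
        scores.append(min(others))
    return scores

# Three toy microbiomes: the third is dominated by different taxa,
# so it sits farthest from everyone else and gets the highest score.
cohort = [
    [0.60, 0.30, 0.10],
    [0.55, 0.35, 0.10],
    [0.10, 0.20, 0.70],
]
print(uniqueness(cohort))  # the third score is the largest
```

A rising score over the decades would correspond to the healthy "drift" Dr. Gibbons describes; a flat score would correspond to the static microbiomes seen in less healthy people.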

People who had the most changes in their microbial compositions tended to have better health and longer life spans. They had higher vitamin D levels and lower levels of LDL cholesterol and triglycerides, a type of fat in the blood. They needed fewer medications, and they had better physical health, with faster walking speeds and greater mobility.

The researchers found that these “unique” individuals also had higher levels of several metabolites in their blood that are produced by gut microbes, including indoles, which have been shown to reduce inflammation and maintain the integrity of the barrier that lines and protects the gut. In some studies, scientists have found that giving indoles to mice and other animals helps them stay youthful, allowing them to be more physically active, mobile and resistant to sickness, injuries and other stresses in old age. Another one of the metabolites identified in the new study was phenylacetylglutamine. It is not clear exactly what this compound does. But some experts believe it promotes longevity because research has shown that centenarians in northern Italy tend to have very high levels of it.

Dr. Wilmanski found that people whose gut microbiomes did not undergo much change as they got older were in poorer health. They had higher cholesterol and triglycerides and lower levels of vitamin D. They were less active and could not walk as fast. They used more medications, and they were nearly twice as likely to die during the study period.

The researchers speculated that some gut bugs that might be innocuous or perhaps even beneficial in early adulthood could turn harmful in old age. The study found, for example, that in healthy people who saw the most dramatic shifts in their microbiome compositions there was a steep decline in the prevalence of bacteria called Bacteroides, which are more common in developed countries where people eat a lot of processed foods full of fat, sugar and salt, and less prevalent in developing countries where people tend to eat a higher-fiber diet. When fiber is not available, Dr. Gibbons said, Bacteroides like to “munch on mucus,” including the protective mucus layer that lines the gut.

“Maybe that’s good when you’re 20 or 30 and producing a lot of mucus in your gut,” he said. “But as we get older, our mucus layer thins, and maybe we need to suppress these bugs.”

If those microbes chew through the barrier that keeps them safely in the gut, it is possible they could trigger an immune system response.

“When that happens, the immune system goes nuts,” Dr. Gibbons said. “Having that mucus layer is like having a barrier that maintains a détente that allows us to live happily with our gut microbes, and if that goes away it starts a war” and could set off chronic inflammation. Increasingly, chronic inflammation is thought to underlie a wide range of age-related ailments, from heart disease and diabetes to cancer and arthritis.

One way to prevent these microbes from destroying the lining of the gut is to give them something else to snack on, such as fiber from nutritious whole foods like beans, nuts and seeds and fruits and vegetables.

Other studies have shown that diet can have a substantial impact on the composition of the microbiome. While the new research did not look closely at the impact of different foods on changes in the microbiome as we age, Dr. Gibbons said he hopes to examine that in a future study.

“It may be possible to preserve the aging mucus layer in the gut by increasing the amount of fiber in the diet,” Dr. Gibbons said. “Or we might identify other ways to reduce Bacteroides abundance or increase indole production through diet. These are not-too-distant future interventions that we hope to test.”

In the meantime, he said, his advice for people is to try to stay physically active, which can have a beneficial effect on the gut microbiome, and eat more fiber and fish and fewer highly processed foods.

“I have started eating a lot more fiber since I began studying the microbiome,” he said. “Whole foods like fresh fruits and veggies have all the complex carbohydrates that our microbes like to eat. So, when you’re feeding yourself, think about your microbes too.”

Anahad O’Connor is a staff reporter covering health, science, nutrition and other topics. He is also a bestselling author of consumer health books such as “Never Shower in a Thunderstorm” and “The 10 Things You Need to Eat.” 

Batteries

CREDIT: https://www.nytimes.com/2021/02/16/business/energy-environment/electric-car-batteries-investment.html

Key points:

Demand for batteries will explode.

Global race to dominate the field.

China currently dominates.

Europe (EU) investing heavily.

The mother lode: a commercial-grade solid state battery, replacing liquid.

Needed: cheaper, longer-lasting.

Article below:

Batteries

Feb. 16, 2021, 5:00 a.m. ET

As automakers like General Motors, Volkswagen and Ford Motor make bold promises about transitioning to an electrified, emission-free future, one thing is becoming obvious: They will need a lot of batteries.

Demand for this indispensable component already outstrips supply, prompting a global gold rush that has investors, established companies and start-ups racing to develop the technology and build the factories needed to churn out millions of electric cars.

Long considered one of the least interesting car components, batteries may now be one of the most exciting parts of the auto industry. Car manufacturing hasn’t fundamentally changed in 50 years and is barely profitable, but the battery industry is still ripe for innovation. Technology is evolving at a pace that is reminiscent of the early days of personal computers, mobile phones or even automobiles and an influx of capital has the potential to mint the next Steve Jobs or Henry Ford.

Wood Mackenzie, an energy research and consulting firm, estimates that electric vehicles will make up about 18 percent of new car sales by 2030. That would increase the demand for batteries by about eight times as much as factories can currently produce. And that is a conservative estimate. Some analysts expect electric vehicle sales to grow much faster.

Carmakers are engaged in an intense race to acquire the chemical recipe that will deliver the most energy at the lowest price and in the smallest package. G.M.’s announcement last month that it would go all electric by 2035 was widely considered a landmark moment by policymakers and environmentalists. But to many people in the battery industry, the company was stating the obvious.

“This was the last in a wave of big announcements that very clearly signaled that electric vehicles are here,” said Venkat Viswanathan, an associate professor at Carnegie Mellon University who researches battery technology.

Battery manufacturing is dominated by companies like Tesla, Panasonic, LG Chem, BYD China and SK Innovation — nearly all of them based in China, Japan or South Korea. But there are also many new players getting into the game. And investors, sensing the vast profits at stake, are hurling money at start-ups that they believe are close to breakthroughs.

“I think we’re in the infancy stage,” said Andy Palmer, the former chief executive of Aston Martin and now the nonexecutive vice chairman of InoBat Auto, a battery start-up. “There is more money than there are ideas.”

QuantumScape, a Silicon Valley start-up whose investors include Volkswagen and Bill Gates, is working on a technology that could make batteries cheaper, more reliable and quicker to recharge. But it has no substantial sales and it could fail to produce and sell batteries. Yet stock market investors consider the company to be more valuable than a major French carmaker.

China and the European Union are injecting government funds into battery technology. China sees batteries as crucial to its ambition to dominate the electric vehicle industry. In response, the Chinese government helped Contemporary Amperex Technology, which is partly state-owned, become one of the world’s biggest battery suppliers seemingly overnight.

The European Union is subsidizing battery production to avoid becoming dependent on Asian suppliers and to preserve auto industry jobs. Last month, the European Commission, the bloc’s administrative arm, announced a 2.9 billion euro, or $3.5 billion, fund to support battery manufacturing and research. That was on top of the more than €60 billion that European governments and automakers had already committed to electric vehicles and batteries, according to the consulting firm Accenture. Some of the government money will go to Tesla as a reward for the company’s decision to build a factory near Berlin.

The United States is also expected to promote the industry in accordance with President Biden’s focus on climate change and his embrace of electric cars. In a campaign ad last year, Mr. Biden, who owns a 1967 Chevrolet Corvette, said he was looking forward to driving an electric version of the sports car if G.M. decides to make one.

Several battery factories are in the planning or construction phase in the United States, including a factory G.M. is building in Ohio with LG, but analysts said federal incentives for electric car and battery production would be crucial to creating a thriving industry in the United States. So will technological advances by government-funded researchers and domestic companies like QuantumScape and Tesla, which last fall outlined its plans to lower the cost and improve the performance of batteries.

“There’s no secret that China strongly promotes manufacturing and new development,” said Margaret Mann, a group manager in the Center for Integrated Mobility Sciences at the National Renewable Energy Laboratory, a unit of the U.S. Energy Department. “I am not pessimistic,” she said of the United States’ ability to gain ground in battery production. “But I don’t think all of the problems have been solved yet.”

Entrepreneurs working in this area said these were early days and U.S. companies could still leapfrog the Asian producers that dominate the industry.

“Today’s batteries are not competitive,” said Jagdeep Singh, chief executive of QuantumScape, which is based in San Jose, Calif. “Batteries have enormous potential and are critical for a renewable energy economy, but they have to get better.”

For the most part, all of the money pouring into battery technology is good news. It puts capitalism to work on solving a global problem. But this reordering of the auto industry will also claim some victims, like the companies that build parts for internal combustion engine cars and trucks, or automakers and investors that bet on the wrong technology.

“Battery innovations are not overnight,” said Venkat Srinivasan, director of the Argonne National Laboratory’s Collaborative Center for Energy Storage Science. “It can take you many years. All sorts of things can happen.”

Most experts are certain that demand for batteries will empower China, which refines most of the metals used in batteries and produces more than 70 percent of all battery cells. And China’s grip on battery production will slip only marginally during the next decade despite ambitious plans to expand production in Europe and the United States, according to projections by Roland Berger, a German management consulting firm.

Battery production has “deep geopolitical ramifications,” said Tom Einar Jensen, the chief executive of Freyr, which is building a battery factory in northern Norway to take advantage of the region’s abundant wind and hydropower. “The European auto industry doesn’t want to rely too much on imports from Asia in general and China in particular,” he added.

Freyr plans to raise $850 million as part of a proposed merger with Alussa Energy Acquisition Corporation, a shell company that sold shares before it had any assets. The deal, announced in January, would give Freyr a listing on the New York Stock Exchange. The company plans to make batteries using technology developed by 24M Technologies in Cambridge, Mass.

The first priority for the industry is to make batteries cheaper. Electric car batteries for a midsize vehicle cost about $15,000, or roughly double the price they need to be for electric cars to achieve mass acceptance, Mr. Srinivasan said.

Those savings can be achieved by making dozens of small improvements — like producing batteries close to car factories to avoid shipping costs — and by reducing waste, according to Roland Berger. About 10 percent of the materials that go into making a battery are wasted because of inefficient production methods.

But, in a recent study, Roland Berger also warned that growing demand could push up prices for raw materials like lithium, cobalt and nickel and cancel out some of those efficiency gains. The auto industry is competing for batteries with electric utilities and other energy companies that need them to store intermittent wind and solar power, further driving up demand.

“We are getting rumbles there may be a supply crunch this year,” said Jason Burwen, interim chief executive for the United States Energy Storage Association.

An entire genre of companies has sprung up to replace expensive minerals used in batteries with materials that are cheaper and more common. OneD Material, based in San Jose, Calif., makes a substance that looks like used coffee grounds for use in anodes, the electrode through which power leaves batteries when a vehicle is underway. The material is made from silicon, which is abundant and inexpensive, to reduce the need for graphite, which is scarcer and more expensive.

Longer term, the industry holy grail is solid state batteries, which will replace the liquid lithium solution at the core of most batteries with solid layers of a lithium compound. Solid state batteries would be more stable and less prone to overheating, allowing faster charging times. They would also weigh less.

Toyota Motor and other companies have invested heavily in the technology, and have already succeeded in building some solid state batteries. The hard part is mass producing them at a reasonable cost. Much of the excitement around QuantumScape stems from the company’s assertion that it has found a material that solves one of the main impediments to mass production of solid state batteries, namely their tendency to short circuit if there are any imperfections.

Still, most people in the industry don’t expect solid state batteries to be widely available until around 2030. Mass producing batteries is “the hardest thing in the world,” Elon Musk, Tesla’s chief executive, said on a recent conference call with analysts. “Prototypes are easy. Scaling production is very hard.”

One thing is certain: It’s a great time to have a degree in electrochemistry. Those who understand the properties of lithium, nickel, cobalt and other materials are to batteries what software coders are to computers. Jakub Reiter, for example, has been fascinated with battery chemistry since he was a teenager growing up in the 1990s in Prague, long before that seemed like a hot career choice.

Mr. Reiter was doing graduate research in Germany in 2011 when a headhunter recruited him to work at BMW, which wanted to understand the underlying science of batteries. Last year, InoBat poached him to help set up a factory in Slovakia, where Volkswagen, Kia, Peugeot and Jaguar Land Rover produce cars.

Mr. Reiter is now head of science at InoBat, whose technology allows customers to quickly develop batteries for different uses, like a low-cost battery for a commuter car or a high-performance version for a roadster.

“Twenty years ago, nobody cared much about batteries,” Mr. Reiter said. Now, he said, there is intense competition and “it’s a big fight.”

Jack Ewing writes about business, banking, economics and monetary policy from Frankfurt, and contributes to breaking news coverage. Previously he worked for a decade at BusinessWeek magazine in Frankfurt, where he was European regional editor. @JackEwingNYT • Facebook

Ivan Penn is a Los Angeles-based reporter covering alternative energy. Before coming to The Times in 2018 he covered utility and energy issues at The Tampa Bay Times and The Los Angeles Times. @ivanlpenn

Arivale busts: a scientific wellness darling

Arivale – the end of a promising “scientific wellness” company

Anyone who cares about well-being, particularly that subset of well-being that many labeled as the “scientific wellness” movement, should note a decade-ender: the failure of Arivale. 

They burned through $50 million. They served about 5,000 customers over the company’s lifetime. Customers paid $99 per month for lab tracking and health coaching tailored to each person’s genome and other critical lab work.

Their conclusion: customers would not pay for what it cost to serve them. In the future? Maybe. But not now. 

=======ARTICLE ON ARIVALE FOLLOWS=======

CREDIT: https://www.geekwire.com/2019/scientific-wellness-startup-arivale-closes-abruptly-tragic-end-vision-transform-personal-health/

Scientific wellness startup Arivale closes abruptly in ‘tragic’ end to vision to transform personal health

BY TODD BISHOP & TAYLOR SOPER on April 24, 2019

Arivale, the genetic testing and personal health coaching startup co-founded by genomics pioneer Leroy “Lee” Hood, shut down unexpectedly Wednesday — bringing an abrupt end to its ambitions to transform the lives of Americans through a new field that Hood dubbed “scientific wellness.”

All of the Seattle-based company’s approximately 120 employees were let go as of noon today, Arivale CEO Clayton Lewis confirmed in an interview. Arivale raised more than $50 million over its lifetime. The company offered ongoing wellness and nutritional coaching tailored to the results of each person’s genetic, blood and microbiome tests.

FOLLOW-UP: Why Arivale failed: Inside the surprise closure of an ambitious ‘scientific wellness’ startup

The decision was a surprise to many Arivale employees and customers. In a message to Arivale customers this afternoon, the company attributed the decision to “the simple fact that the cost of providing the service exceeds what our customers can pay for it.”

The message added, “We believe the costs of collecting the genetic, blood and microbiome assays that form the foundation of the program will eventually decline to a point where the program can be delivered to consumers cost-effectively. However, we are unable to continue to operate at a loss until that time arrives.”

Lewis told GeekWire that the high cost of acquiring customers also played a role in the decision.

“What is tragic on so many levels is that we were not successful in going out and convincing consumers that you could optimize your wellness and avoid disease with a little bit of data and some changes in your lifestyle — that there’s not a market for that product that I believe in passionately,” Lewis said. “And that’s what we were trying to do.”

About 5,000 people took part in the Arivale program over the lifetime of the company, and Lewis said he is “incredibly proud” of the results. The program launched at a cost of $3,500 per year, but the price had dropped to the point where most customers were paying $99 per month for the flagship Arivale program, Lewis said.

The larger personal wellness industry includes heavyweights such as 23andMe, a genetic testing startup valued at more than $1 billion, and smaller players including EverlyWell, which raised a $50 million round last week, and Viome, the Seattle-area microbiome company led by Naveen Jain that just announced a new $25 million funding round from investors including Salesforce CEO Marc Benioff.

Global Wellness Institute estimates that the preventative and personalized medicine and public health industry is worth $575 billion.

Some of Arivale’s underlying work will continue at the Institute for Systems Biology (ISB), the not-for-profit biomedical research organization co-founded by Hood, where the ideas that led to Arivale were originally developed. ISB is now part of Providence St. Joseph Health, where Hood is chief science officer. Lewis said ISB is expected to hire some of the employees let go by Arivale as part of its closure. He declined to disclose details of the severance offered to employees, but said the same package was provided to all executives and employees.

Investors in Arivale included Arch Venture Partners, Polaris, and Maveron, where Lewis worked full-time before joining Arivale as co-founder and CEO.

Its scientific advisory board included George Church, a professor at Harvard and MIT; James Heath, president of the Institute for Systems Biology; and Ed Lazowska, computer science professor at the University of Washington.

“Lee Hood sees the future with unmatched clarity,” said Lazowska, an early participant in the Arivale program. “A clear view, however, does not always imply a short path. Scientific wellness, as pioneered by Arivale, will be a foundation of 21st century medicine. But not right now. Right now, the cost of providing the service (the tests, the coaching) exceeds what people are willing to pay. Those costs will fall in time, and Arivale’s model and Arivale’s discoveries will see another day.”

Lewis said he has come to believe that Arivale was about a decade too early.

Arivale’s executive team included Sean Bell, chief operating officer; Jennifer Lovejoy, chief translational science officer; Mia Nease, head of healthcare and life sciences partnerships; Andrew Magis, director of research; Ashley Wells, chief product officer; and others.

Hood, who led the Caltech team that pioneered the automated DNA sequencer, said in a 2015 interview with GeekWire that Arivale was “the opening shot in a whole new industry called scientific wellness, and it really stands a chance of being the Google or Microsoft of this whole arena.”

GeekWire chief business officer Daniel Rossi was a longtime paying customer of the program, and we chronicled his early experience with Arivale in a series of articles in 2017. “Arivale was crucial to my health journey,” Rossi said. “From them I learned not only the genetic hand I was dealt but also the best ways to maximize my health and well-being. I am most thankful for the weekly calls with my coaches who encouraged me every step of the way. I’ll miss this program. It was terrific.”

Here’s the text of the message sent to Arivale customers earlier today, a version of which was also posted to the Arivale website.

To Our Customers,

We are very sorry to inform you that, effective immediately, Arivale can no longer provide our program to you and our other customers. This letter explains why we are ending the consumer program and answers the questions you are likely to have about the process.

Our decision to terminate the program today comes despite the fact that customer engagement and satisfaction with the program is high and the clinical health markers of many customers have improved significantly. Our decision to cease operations is attributable to the simple fact that the cost of providing the program exceeds what our customers can pay for it. We believe the costs of collecting the genetic, blood and microbiome assays that form the foundation of the program will eventually decline to a point where the program can be delivered to consumers cost-effectively. Regrettably, we are unable to continue to operate at a loss until that time arrives; in other words, we have concluded that it is simply too early for a direct-to-consumer scientific wellness offering to be viable.

We founded Arivale with the vision of making personalized, data-driven, preventive coaching a new wellness paradigm in the United States. Since its launch in 2015, the results of the Arivale program have been remarkable. To cite but one example, our scientific paper describing the improvements seen in multiple health markers in ~2500 participants was recently accepted for publication in the journal Scientific Reports.

While our direct-to-consumer model isn’t yet sustainable, we know that the Arivale program improved the lives of our customers and showed great scientific merit. We are proud of everyone at Arivale for their dedication and devotion to our mission and grateful to you and all of our other customers for joining us on this journey. Together, our efforts have launched a new paradigm—scientific or quantitative wellness—which, we are confident will become a major component of 21st century medicine.

================== END ARTICLE==============

Written by: Todd Bishop is GeekWire’s co-founder and editor, a longtime technology journalist who covers subjects including cloud tech, e-commerce, virtual reality, devices, apps and tech giants such as Amazon.com, Apple, Microsoft and Google. Follow him @toddbishop, email todd@geekwire.com, or call (206) 294-6255.

Deep Learning

Singularity, Deep Learning, and AI

DRAFT: January, 2019

CREDIT: The Deep Learning Revolution, by Terrence J. Sejnowski

CREDIT: https://en.wikipedia.org/wiki/Deep_learning

====================

Sadly, I was giving up on speech recognition just as it was emerging. I gave up, after 30 years of waiting, around 1995. Bad idea.

Speech recognition stopped being just cute, and exploded onto the world scene during the late 1990’s. It has taken almost two decades to commercialize, but the technologies birthed in the late 1990’s have now yielded commercial grade results. 

Why then? Why the late 1990’s?  

In reading “The Deep Learning Revolution”, by Terrence J. Sejnowski, I learned why: an underlying technology called “deep learning” had come of age.

“Deep Learning” was birthed in the late 1990s, but the research leading up to the term goes back to the 1980s (and the foundations of all this go back to 1965).

Many trace the current revolution in deep learning to October 2012, when researchers using deep convolutional networks won the large-scale ImageNet competition. A similar approach won that year’s ICPR contest on the analysis of large medical images for cancer detection.

In 2013 and 2014, the error rate on the ImageNet task using deep learning was further reduced, following a similar trend in large-scale speech recognition. The Wolfram Image Identification project publicized these improvements.

Image classification was then extended to the more challenging task of generating descriptions (captions) for images, often as a combination of CNNs and LSTMs.

In 2015, spectacular practical applications began to burst onto the scene. Speech recognition was one; facial recognition was a second. Pattern recognition could now identify cats, dogs and dog breeds, and power applications that help medical diagnosticians improve their diagnoses.

Today, applications address computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, drug design, medical image analysis, material inspection and board game programs, where they have produced results comparable to and in some cases superior to human experts.

It turns out that new approaches to Deep Learning have broad applicability. But one application that has broken into mass commercialization is speech recognition. These breakthroughs trace back to breakthroughs in “speaker recognition” – results that were achieved at SRI.

To understand the massive improvements, consider this: In 2015, Google Voice Search experienced a dramatic performance jump of 49%.

Or consider this: All major commercial speech recognition systems (e.g., Microsoft Cortana, Xbox, Skype Translator, Amazon Alexa, Google Now, Apple Siri, Baidu and iFlyTek voice search, and a range of Nuance speech products, etc.) are based on deep learning.

More on speaker recognition: The recent history traces back to breakthroughs at SRI in the late 1990’s. The research arms of NSA and DARPA needed answers, and they turned to SRI International. SRI made the biggest breakthroughs, cracking “speaker recognition” at that time. They failed, however, to crack “speech recognition”. That came later, around 2003.

Specifically, important papers were published in the late 1990’s describing how deep learning could solve the nagging issues of speaker and speech recognition. 

The deep learning method used was called long short-term memory (LSTM). (Hochreiter and Schmidhuber, 1997.)

Deep learning for speech recognition came later, in the early 21st century. In 2003, LSTM started to become competitive with traditional speech recognizers on certain tasks. Later it was combined with connectionist temporal classification (CTC) in stacks of LSTM RNNs.
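As a rough illustration of the mechanism, here is a minimal single-step LSTM cell in plain numpy. The gate packing and weight shapes below are one common convention, chosen for this sketch – not the exact formulation of the 1997 paper or of any production system.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step: gates decide what to remember and what to emit.

    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,) biases,
    packed in the order [input, forget, output, candidate] (our convention).
    """
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b              # all four gate pre-activations at once
    i = sigmoid(z[:H])                      # input gate: admit new information
    f = sigmoid(z[H:2*H])                   # forget gate: decay old memory
    o = sigmoid(z[2*H:3*H])                 # output gate: expose the cell state
    g = np.tanh(z[3*H:])                    # candidate values to write
    c = f * c_prev + i * g                  # cell state: forget old, admit new
    h = o * np.tanh(c)                      # hidden state: gated read-out
    return h, c
```

The cell state `c` is the "long short-term memory": because it is updated additively, gradients can flow across many time steps, which is what made LSTMs practical for speech.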

Google Voice Search drew upon “CTC-trained LSTM” – in other words, the LSTM technologies birthed in the late 1990’s had by 2015 yielded commercial-grade results.

Today, lay people understand the power of speech recognition by using “Siri” – or by using the voice transcription technologies on their iPhones. Everyone has noted the vast improvements in the last several years. All of these improvements are due to Deep Learning. 

Let me step back at this point and trace the breakthroughs by researchers. I begin with a glossary:

AI – Artificial Intelligence

ANN – Artificial Neural Networks

DNN – Deep Neural Networks – a variant of artificial intelligence in which software “learns” to recognize patterns in distinct layers

RNN – Recurrent Neural Networks

“Deep” – The “deep” in “deep learning” refers to the number of layers through which the data is transformed.

“Layers” – each layer represents a level of abstraction that allows the machine to separate like data from unlike data (the machine classifies). Each successive layer uses the output from the previous layer as input. Deep learning helps to disentangle these abstractions and pick out which features improve performance.
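To make the “layers” idea concrete, here is a minimal sketch in plain numpy, with made-up layer sizes, of data flowing through a stack of layers – each one consuming the previous layer’s output:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Each (W, b) layer transforms the previous layer's output."""
    for W, b in layers:
        x = relu(W @ x + b)          # one level of abstraction per layer
    return x

rng = np.random.default_rng(0)
sizes = [8, 16, 16, 4]               # "deep" = number of transformations (3 here)
layers = [(rng.normal(size=(m, n)), np.zeros(m))
          for n, m in zip(sizes, sizes[1:])]
out = forward(rng.normal(size=8), layers)
```

The “depth” of this toy network is simply the number of `(W, b)` transformations the data passes through.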

Pattern Recognition

Image Recognition (in 2011, deep learning-based image recognition became “superhuman”, producing more accurate results than human contestants)

Speech Recognition (and ASR – Automatic Speech recognition)

Speaker Recognition (In 1998, deep learning-based speaker recognition was proven to be effective)

Visual Recognition – recognizing objects, faces, handwritten zip codes, etc.

Facial Recognition – for example, Facebook’s AI lab performs tasks such as automatically tagging uploaded pictures with the names of the people in them.

Object Recognition – in 1992, a method was demonstrated for extracting 3D objects from a cluttered scene

Medical Imaging – where each neural-network layer operates both independently and in concert, separating aspects such as color, size and shape before integrating the outcomes

Deep Learning Techniques

Supervised – uses classifications

Unsupervised – uses pattern recognition (without human assistance)

Backpropagation (Backprop) – passing information in the reverse direction and adjusting the network to reflect that information.

LSTM – long short-term memory 

CTC – connectionist temporal classification

CAP – the chain of transformations from input to output. CAPs describe potentially causal connections between input and output.

More precisely, deep learning systems have a substantial credit assignment path (CAP) depth – many transformations separate input from output.
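The backpropagation and CAP entries can be illustrated together with a toy two-layer network (CAP depth 2) fitted to a single made-up data point. The gradients below are hand-derived for this specific sketch, not taken from any library:

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = np.array([1.0, -1.0]), 1.0            # one toy input/target pair
W1, W2 = rng.normal(size=(3, 2)), rng.normal(size=3)

def forward(W1, W2, x):
    h = np.tanh(W1 @ x)                      # layer 1
    return h, float(W2 @ h)                  # layer 2 (CAP depth = 2)

for _ in range(500):
    h, out = forward(W1, W2, x)
    err = out - y                            # forward pass: measure the error
    gh = err * W2                            # backward pass: error reaching layer 1
    W2 -= 0.1 * err * h                      # adjust layer 2 toward less error
    W1 -= 0.1 * np.outer(gh * (1 - h**2), x) # adjust layer 1 (tanh' = 1 - tanh^2)
```

After training, the network’s output for `x` sits very close to the target `y` – the error was passed backward through both layers and each was adjusted, which is all backpropagation is.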

Applications 

TAMER – in 2008, a proposed method for robots or computer programs to learn how to perform tasks by interacting with a human instructor.

TAMER (Deep TAMER) – in 2018, a new algorithm using deep learning to give a robot the ability to learn new tasks through observation (robots learn a task with a human trainer, watching video streams or observing a human perform a task in person). The robot then practices the task with coaching from the trainer, who provides feedback such as “good job” and “bad job.”
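A toy sketch of the feedback idea (this is not the actual TAMER algorithm, just an illustration of learning from human praise): the agent nudges a per-action preference up on “good job” and down on “bad job”. The action names and trainer below are hypothetical.

```python
import random

def train(actions, feedback, steps=500, lr=0.1, seed=0):
    """Learn per-action preferences from a human trainer's verbal feedback."""
    rng = random.Random(seed)
    prefs = {a: 0.0 for a in actions}
    for _ in range(steps):
        a = rng.choice(actions)                   # agent tries an action
        signal = 1.0 if feedback(a) == "good job" else -1.0
        prefs[a] += lr * signal                   # reinforce or discourage it
    return prefs

# Hypothetical trainer: praises "fetch", scolds everything else.
prefs = train(["fetch", "sit", "spin"],
              lambda a: "good job" if a == "fetch" else "bad job")
```

After a few hundred interactions the praised action dominates the preference table – the essence of shaping behavior with human reward signals.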

CRESCEPTRON, in 1991, a method for performing 3-D object recognition in cluttered scenes. 

Hardware

GPU – in 2009, Nvidia graphics processing units (GPUs) were used by Google Brain to create capable DNNs. This increased the speed of deep-learning systems by about 100 times.

Training Sets

TIMIT (Automatic speech recognition trainer)

MNIST (image classification trainer)

The MNIST database is composed of handwritten digits. It includes 60,000 training examples and 10,000 test examples. As with TIMIT, its small size lets users test multiple configurations.

With this glossary, a few simple statements can pinpoint why the current revolution is exploding:

Hardware has advanced, thanks to GPUs applied to deep learning starting in 2009.

Software has advanced, thanks to GPU-based successes in cancer image identification in 2012. 

Pattern recognition has advanced, with speech recognition leading the way; the TIMIT training set enabled rapid progress, especially in 2015. 

Robotics has advanced, thanks to Deep TAMER breakthroughs in 2018. 

Voice Recognition Explodes

CREDIT: The Deep Learning Revolution, by Terrence J. Sejnowski

CREDIT: https://en.wikipedia.org/wiki/Deep_learning

====================

Sadly, I was giving up on voice recognition just as it was emerging. I gave up, after 30 years of waiting, around 1995. Bad idea.

Voice recognition stopped being just cute, and exploded onto the world scene during the late 1990’s. It has taken almost two decades to commercialize, but the technologies birthed in the late 1990’s have now yielded commercial grade results. 

Why then? Why the late 1990’s?  

In reading “The Deep Learning Revolution”, by Terrence J. Sejnowski, I learned why: an underlying technology called “deep learning” had come of age. 

“Deep Learning” was birthed in the late 1990’s, but the research leading up to the term goes back to the 1980’s.

It turns out that new approaches to Deep Learning have broad applicability. But one application that has broken into mass commercialization is voice recognition. 

To understand the massive improvements, consider this: In 2015, Google Voice Search experienced a dramatic performance jump of 49%.

Or consider this: All major commercial speech recognition systems (e.g., Microsoft Cortana, Xbox, Skype Translator, Amazon Alexa, Google Now, Apple Siri, Baidu and iFlyTek voice search, and a range of Nuance speech products, etc.) are based on deep learning.

The recent history traces back to breakthroughs at SRI in the late 1990’s. The research arms of NSA and DARPA needed answers, and they turned to SRI International. SRI made the biggest breakthroughs, cracking “speaker recognition” at that time. They failed, however, to crack “speech recognition”. That came later.

Specifically, important papers were published in the late 1990’s describing how deep learning could solve the nagging issues of speaker and voice recognition. The deep learning method used was called long short-term memory (LSTM). (Hochreiter and Schmidhuber, 1997.)

Deep learning for speech recognition came later, in the early 21st century. In 2003, LSTM started to become competitive with traditional speech recognizers on certain tasks. Later it was combined with connectionist temporal classification (CTC) in stacks of LSTM RNNs.

Google Voice Search drew upon “CTC-trained LSTM” – in other words, the LSTM technologies birthed in the late 1990’s had by 2015 yielded commercial-grade results.

Today, lay people understand the power of speech recognition by using “Siri” – or by using the voice transcription technologies on their iPhones. Everyone has noted the vast improvements in the last several years. All of these improvements are due to Deep Learning. 

Microbiome Science Advances

On January 28, the New York Times published a major article on recent advances in microbiome research.

The article says that breakthroughs began in 2014, when scientists began finding evidence that the microbiome is linked to Alzheimer’s, Parkinson’s, depression, schizophrenia, autism, and other conditions.

The article also describes the early 2000’s, when major advances came from figuring out how to sequence DNA from microbes in the microbiome. Apparently, a gene called SHANK3 is particularly central to autism research.

Also apparently, researchers have isolated one particular bacterium, Lactobacillus reuteri, and identified compounds that it releases. These compounds send a signal to nerve endings in the intestines. The vagus nerve carries these signals from the gut to the brain, where they alter production of a hormone called oxytocin. This hormone apparently promotes social bonding.


Digital Immortality

In this week’s Sunday NYT Magazine, a discussion was recorded about the future of technology. One of my favorite writers, Sid Mukherjee, discussed chronic disease. In that discussion, he touched on a notion of immortality that I have been pondering for some time.

Here is what he said, and after is what I say in response.

MUKHERJEE: “In terms of longevity, the diseases that are most likely to kill us are neurological diseases and heart disease and cancer. In some other countries, there is tuberculosis and malaria and other infectious diseases, but here it’s the chronic diseases that dominate. There are three ways to think about these chronic diseases. One is the disease-specific way. So, you attack Alzheimer’s as Alzheimer’s; you attack cancer as cancer. The second one is that you forget about the disease-specific manners of attacking diseases and you attack longevity or aging reversal in general. You change diet, change genes, change whatever else — we might call them “trans factors,” which would simply override the “cis factors” that existed for individual diseases. And the third option is some combination of that and some digital form of immortality, which is that you record yourself forever, that you clone yourself and somehow pass along that recording. Which is to say that the body is just a repository of memories, images, times. And as a repository, there’s nothing special about it. The body per se, the mortal coil, is just a coil.”

This is the first time I have heard a major thinker put immortality into this context. And yet – it’s so obvious to do so!

For example:

– wouldn’t it be fair to say that every autobiography ever written would be a sincere attempt by the writer to achieve some form of immortality?

– in like manner, isn’t the task of the biographer, in part, to immortalize their subject?

– more broadly, how do societies around the world remember their ancestors? Their memories are their attempts to allow ancestors to live forever!

This point is nicely illustrated by Irish culture. In my work on the History of Ireland, the centrality of “oral tradition” was crystal clear. I continually came across stories the Irish told to revere their ancestors. The Irish would distill their ancestors into a wide variety of stories that helped the present generation understand the past.

So, by extrapolation from this point (which is obvious), can this be asked: “Can I be immortalized digitally?”

Digital storage costs have plummeted. Methods of organizing and tagging video and audio recordings are now commonplace. Search engines are commonplace. Pattern recognition combined with search is exploding.
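As a sketch of how trivial the storage-and-retrieval half of this has become, here is a toy tagged archive with search. All records, years, and tags below are hypothetical examples, not real data.

```python
# A minimal sketch of a tagged personal archive with tag-based search.
records = [
    {"kind": "video", "year": 1998, "tags": {"family", "ireland"}},
    {"kind": "audio", "year": 2005, "tags": {"lecture", "systems"}},
    {"kind": "text",  "year": 2012, "tags": {"family", "memoir"}},
]

def search(records, *tags):
    """Return records carrying every requested tag."""
    wanted = set(tags)
    return [r for r in records if wanted <= r["tags"]]

family = search(records, "family")
```

Everything harder than this – transcription, face and voice recognition, automatic tagging – is exactly what the deep learning advances described elsewhere on this blog now provide.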

So what will prevent me in the future from immortalizing myself digitally? What prevents me from storing who I am, what I did, what I learned, where I have been, what I have experienced, who I knew, who my ancestors were, who my children and grandchildren were, etc etc?

Perhaps the answer is: nothing. Nothing prevents me from being digitally immortal.

Climate Change Language

We Need A Better Language for Climate Change – that Acts as a Call to Action

============================

Below is an essay that makes the case for a new six-box classification system for global climate change – two columns and three rows. The core idea is to move climate change out of the editorial page and into the daily news – much like how storms, earthquakes and epidemics are covered. We want a language that serves as a “call-to-action”.

The news would inform the world about climate-change related occurrences that have impacts that are “major”, “disaster”, or “global disaster”, and that are either “incidents” (one-time) or “recurring”.

I worked this out with Karen. I am the scribe. Obviously, this is DRAFT 1.

=============================
Climate Change Language

CREDIT: Karen Flanders-Reid
CREDIT: https://www.nytimes.com/2018/08/08/opinion/environment/california-wildfires-trump-zinke-climate-change.html

Karen and I read today’s NYT article about California wildfires, and found ourselves musing – is the language of climate change right? Why is a “wildfire” just an isolated incident? Why isn’t it part of a larger wildfire classification system (“BREAKING NEWS: THE CALIFORNIA WILDFIRE HAS JUST BEEN RECLASSIFIED AS CATEGORY V.”)?

We went on to ask: if climate change is the critical issue of our day, why isn’t the wildfire in California a climate-change incident – part of a larger climate change classification system?

Why do the NYT editorial writers have to scream – everything is related to climate change!!!! After all, news breaks when a Hurricane is re-classified: “BREAKING NEWS: THE TROPICAL STORM OVER CUBA HAS JUST BEEN RE-CLASSIFIED BY THE WEATHER SERVICE AS A HURRICANE.”

Why doesn’t climate change have its own global classification system? How do we move from the editorial opinion desk to the news desk? How do we move from “The science is being ignored” to “BREAKING NEWS: THE WILDFIRES IN CALIFORNIA HAVE JUST BEEN RECLASSIFIED BY THE WEATHER SERVICE FROM A CLIMATE-RELATED INCIDENT (CRI) TO A CLIMATE-RELATED DISASTER (CRD)”?

EXAMPLES OF POWERFUL GLOBAL CLASSIFICATION SYSTEMS

To identify a powerful classification system, and the new language it implies, it would first be useful to identify the other global classification systems that exist – especially those that imply a call to action.

There are at least three:

Storms: classified by the World Meteorological Organization (WMO), using the Saffir–Simpson scale:

Tropical Depression
Tropical Storm
Hurricane/Cyclone Categories 1-5

Source: https://en.wikipedia.org/wiki/Maximum_sustained_wind
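The storm scale is simple enough to state as a lookup. The sustained-wind thresholds below (in mph) are the commonly published ones – check the National Hurricane Center for authoritative values:

```python
def classify_storm(wind_mph):
    """Map sustained wind speed (mph) to a Saffir-Simpson-style class."""
    if wind_mph < 39:
        return "Tropical Depression"
    if wind_mph < 74:
        return "Tropical Storm"
    if wind_mph < 96:
        return "Hurricane Category 1"
    if wind_mph < 111:
        return "Hurricane Category 2"
    if wind_mph < 130:
        return "Hurricane Category 3"
    if wind_mph < 157:
        return "Hurricane Category 4"
    return "Hurricane Category 5"
```

The point for our purposes: the whole scale is a handful of thresholds, and reclassification across a threshold is what makes it news.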

Earthquakes: classified by the US Geological Survey, using the Richter Scale:
Great (8 and above)
Major (7-7.9)
Strong (6-6.9)
Moderate (5-5.9)

Infectious Disease: classified by the global centers for disease control, the classes are:

Outbreak (more incidents than expected)
Epidemic (spreads rapidly to many people)
Pandemic (spreads rapidly to many people globally)

Source: https://www.webmd.com/cold-and-flu/what-are-epidemics-pandemics-outbreaks#1

A NEW GLOBAL CLASSIFICATION SYSTEM FOR CLIMATE CHANGE

To Begin

We recommend a simple structure, with easily understood terms, that evolves over time:

Starts with a few terms, and adds terms over time.
Begins classifying major occurrences only, and evolves to classify most occurrences.
Begins classifying evidence-based occurrences only (where science is conclusive that the occurrence is climate-change-related) and evolves as science becomes increasingly conclusive.

Initial Terms

“Occurrence” – a natural phenomenon that occurs somewhere

“Climate-Change-Related” (CR) – a shorthand for saying that the preponderance of science indicates that a given occurrence is a contributor to or the result of climate change.

“Incident” (I) – an episodic occurrence (with a beginning, middle, and end)
“Recurring” (R) – an on-going occurrence (no end in sight)

“Major” (M) – an occurrence of sufficient size to merit classification.
“Disaster” (D) – an occurrence with major impacts.
“Global Disaster” (G) – an occurrence with major global impacts.

Initial Classification System:

Climate-related Occurrences shall be identified.

Once identified, they shall be classified in one of six classes:

Either “incidents” or “recurring”.
Either “major”, “disaster”, or “global disaster”

“Climate-Change-Related Event” (CRE) – any occurrence that is deemed to be a contributor to climate change.

“Climate-Change-Related Outcome” (CRO) – any occurrence that is deemed to be the result of climate change.

All major climate-change-related occurrences would be classified as follows:

CR Incident (CRE-I): An episodic event, with a beginning, a middle, and an end.
CR Disaster (CRE-D): An episodic event with major impacts.
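The six-box scheme is just the cross product of two durations and three severities. A minimal sketch (the acronym codes here are our own draft placeholders, not an official scheme):

```python
# Draft encoding of the proposed six-box climate classification:
# duration (Incident/Recurring) x severity (Major/Disaster/Global Disaster).
DURATIONS = ("Incident", "Recurring")
SEVERITIES = ("Major", "Disaster", "Global Disaster")

def classify(duration, severity):
    """Return a draft code like 'CR-I-D' for a climate-related occurrence."""
    assert duration in DURATIONS and severity in SEVERITIES
    return f"CR-{duration[0]}-{severity[0]}"

wildfire = classify("Incident", "Disaster")   # e.g. a large wildfire
```

Two axes, six codes – simple enough for a news ticker, which is the whole point.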

The Weather Service would be tasked with implementation, and aligning with the World Meteorological Organization (WMO) and other agencies around the world.

History of US Immigration

Borders
A History of Border Security and Legal and Illegal Immigration

Overview

Regulating the flow of immigrants into the United States has a long and often tawdry past.

Once regulated, entry then becomes “legal” or “illegal”. And “legal” entry is now generally highly restricted, on a temporary or permanent basis to three different routes: employment, family reunification, or humanitarian protection. All other entry: “illegal”.

Once regulated, borders then become “secure” or “insecure”. Because of trade, borders needed to be highly efficient for goods, and highly “secure” for people. This distinction, between the flow of goods and the flow of people, created an almost unenforceable dilemma; billions have been expended to do the best we can.

Who should regulate? The Supreme Court settled that issue in 1875, opining that this was the role of the Federal Government. Up until then, it was a state responsibility.

How should it regulate? Congress turned to literacy tests in 1917 and racial quotas in 1921. Before that, it banned Asian immigration outright in 1875. The essential idea of the quotas was to restrict immigration by race to a percentage of that race’s existing population in the US (2% of that population was frequently used, noting that 2% of nothing is nothing). The notion of racial quotas was maintained until 1965!

Would there be any exceptions to racial quotas?

Yes, for refugees and asylum-seekers. Congress responded to American sympathies for those fleeing communism and those fleeing persecution. Recognizing “refugees” added significant new complexity.

Yes, for spouses and children of American citizens.

Yes, for those born in the Western Hemisphere.

Once regulated, politicians could rail against immigrants, but they rarely provided the funds to enforce the border laws. We severely curtailed legal immigration, and illegal immigration was the easily anticipated result. In 1952, Congress specified that legal immigration be limited to 175,455 per year!

Also easily anticipated, “illegals” brought massive issues for schools, health care, housing, etc. As the number of “illegals” grew, so grew the pressure to do something, anything, to reduce the pressure. Congress has been forced to act, as they did in 1986 when they granted amnesty to approximately 3 million illegals!

So the history of immigration in the United States includes major shifts in policy in 1875 (Supreme Court rules), 1891 (Federal bureaucracy formed), 1924 (racial quotas put in place), 1965 (racial quotas abolished), and 1986 (amnesty granted).

“Illegals” are out of control. Estimates are 3 million in 1986, 7 million in 2001, and 12 million in 2017. As a % of U.S. population, the “foreign-born” dropped from 14.7% in 1910 to 4.7% in 1970, and has been rising ever since, reaching 13.1% in 2013 (CREDIT: PEW).

Discussion
Immigration became a full-fledged subject for the nation in 1875, when the Supreme Court ruled that it was a Federal responsibility. Shortly thereafter, Congress stepped up and began excluding people – literally making it “illegal” for them to enter the United States. They banned Asians in 1875 and Chinese in 1882 (the “Asian Exclusion Act” and the “Chinese Exclusion Act” set the stage for all restrictions on immigration that would follow).

In 1891, the Federal Government took a big step: they created a bureaucracy to execute the laws. The Immigration Act of 1891 established a Commissioner of Immigration in the Treasury Department. With the two exceptions noted above, states regulated immigration before 1890.

Before then, this “nation of immigrants” actually had an immigration hiatus from 1790 to 1815, when the “foreign-born” share reached a low. Immigration as we now know it began with some force in 1830, when “foreign-born” reached 9.7% of the population. By 1850, census estimates place immigrants at 1.7 million people, and the “foreign-born” at 2.2 million. Between 1870 and 1910, the foreign-born hovered between 13% and 15% of the population. It then dipped, reaching 4.7% in 1970, and has been climbing since, reaching 13.1% in 2013.

Since then, the country has absorbed immigration in waves:

Between 1850 and 1930, 25 million Europeans immigrated. Italians, Greeks, Hungarians, Poles, and others speaking Slavic languages made up the bulk of this migration. Among them were also 5 million Germans, 3.5 million British, 4.5 million Irish, and 2.5 to 4 million Jews.

The twentieth century began with debates about immigration, and we have been debating the subject ever since.

In 1907, Congress created the Dillingham Commission to investigate the effects of immigration on the country. It produced forty volumes on the subject.

In 1917, Congress changed the nation’s basic policy about immigration. We began limiting access based on literacy – the first such law was the literacy requirement of 1917 – and setting “quotas” soon after.

In 1921, Congress adopted the Emergency Quota Act, which set quotas. The National Origins Formula assigned quotas based on national origins. This complex legislation gave preference to immigrants from Central, Northern and Western Europe, severely limited the numbers from Russia and Southern Europe, and declared all potential immigrants from Asia unworthy of entry into the United States (to our shame, this law made it virtually impossible for Jews fleeing Germany after 1934 to immigrate to the United States).

In 1924, Congress adopted the Immigration Act of 1924. It set quotas for European immigrants so that no more than 2% of the 1890 immigrant stocks were allowed into America.
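The quota arithmetic itself was simple. The sketch below uses a hypothetical nationality with a made-up 1890 population figure, not a census value:

```python
# Illustrative arithmetic only: the 1924 Act capped each nationality's
# annual quota at 2% of its 1890 foreign-born population in the US.
def quota_1924(foreign_born_1890):
    return int(foreign_born_1890 * 0.02)

# Hypothetical nationality with 500,000 foreign-born residents in 1890:
quota = quota_1924(500_000)
```

Note the mechanism’s bias: by anchoring to the 1890 census, the formula favored nationalities already well represented by 1890 and shut out those who arrived later – 2% of nothing is nothing.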

Interestingly, no quotas were set for people born in the Western Hemisphere.

This era, and its legislative framework, lasted until 1965. During this period, Congress recognized the notion of a “refugee” seeking asylum. Jewish Holocaust survivors after the war, those fleeing Communist rule in Central Europe and Russia, Hungarians seeking refuge after their failed uprising in 1956, Cubans after the 1959 revolution, and others moved the conscience of the nation.

In 1965, Congress adopted the Hart-Celler Act. It was a by-product of the civil rights revolution and a jewel in the crown of President Lyndon Johnson’s Great Society programs. It abolished the racially based quota system. The law replaced these quotas with new preferential categories, giving particular preference to immigrants with U.S. relatives and job skills deemed critical.

In 1986, the Immigration Reform and Control Act (IRCA) was adopted. It created, for the first time, penalties for employers who hired illegal immigrants. IRCA also granted amnesty to workers in the country illegally – in practice, about 3,000,000 people, most from Mexico. Legal Mexican immigrant family numbers were 2,198,000 in 1980, 4,289,000 in 1990 (including IRCA), and 7,841,000 in 2000.

References

https://en.wikipedia.org/wiki/History_of_immigration_to_the_United_States

https://www.politico.com/magazine/story/2017/08/06/trump-history-of-american-immigration-215464

https://americanimmigrationcouncil.org/research/why-don’t-they-just-get-line

How U.S. immigration laws and rules have changed through history

http://assets.pewresearch.org/wp-content/uploads/sites/7/reports/39.pdf

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3407978/