Thought Recognition and BCIs

The Economist kicked off 2018 with a bold prediction: “Brain-computer interfaces may change what it means to be human.”

In their lead article, they suggest that BCIs (brain-computer interfaces) like the BrainGate system are leading the way into a new world: one where mind control works.

I feel like I did in 1979 when I first heard about the Apple II. The whole world was mainframe computing and time-sharing on those monsters, and yet two guys in a garage blew a massive hole through this paradigm, turned it on its head, and invented personal computing.

Think about it: personal computing has been evolving and constantly improving now for almost forty years!

Back then, I could see the future vaguely, in very partial outlines, without much practical effect, but with intense curiosity.

Another example is voice recognition. I still remember being introduced to the subject, way back in … 1970? I got all excited about it, until I realized … it sucked! And it wasn’t going to get much better anytime soon. But I remember saying to myself: I can’t be fooled by the first versions of voice recognition. I can’t lull myself to sleep. I need to watch this space because it will evolve and improve over time.

If you think about it, versions 1 through 10 of any technology always suck. The history of speech recognition in the 1950s and 1960s is, well, pathetic.

IBM’s Shoebox was introduced at the 1962 World’s Fair.
DARPA got involved in the early 1970s and partnered with Carnegie Mellon on Harpy – a major advance.
Threshold Technology was founded around the same time to commercialize primitive speech recognition.
And now we have Siri.

And sure enough, after almost 40 years of trying, voice recognition is getting really, really good. Can we see a time within the next 10 years when voice recognition replaces most keyboard applications?

I think so.

And so it is with this subject. We are at the very, very beginning, when it all sounds vague, with partial outlines, without much practical effect, and yet … it fills me with intense curiosity.

What could the next fifty years bring?

Is it possible that we will be able to think something, and have that something (a thought? a prescribed action? an essay?) become physical?

Read on…


CREDIT: Economist Article on The Next Frontier

TECHNOLOGIES are often billed as transformative. For William Kochevar, the term is justified. Mr Kochevar is paralysed below the shoulders after a cycling accident, yet has managed to feed himself by his own hand. This remarkable feat is partly thanks to electrodes, implanted in his right arm, which stimulate muscles. But the real magic lies higher up. Mr Kochevar can control his arm using the power of thought. His intention to move is reflected in neural activity in his motor cortex; these signals are detected by implants in his brain and processed into commands to activate the electrodes in his arm.

An ability to decode thought in this way may sound like science fiction. But brain-computer interfaces (BCIs) like the BrainGate system used by Mr Kochevar provide evidence that mind-control can work. Researchers are able to tell what words and images people have heard and seen from neural activity alone. Information can also be encoded and used to stimulate the brain. Over 300,000 people have cochlear implants, which help them to hear by converting sound into electrical signals and sending them into the brain. Scientists have “injected” data into monkeys’ heads, instructing them to perform actions via electrical pulses.

As our Technology Quarterly in this issue explains, the pace of research into BCIs and the scale of its ambition are increasing. Both America’s armed forces and Silicon Valley are starting to focus on the brain. Facebook dreams of thought-to-text typing. Kernel, a startup, has $100m to spend on neurotechnology. Elon Musk has formed a firm called Neuralink; he thinks that, if humanity is to survive the advent of artificial intelligence, it needs an upgrade. Entrepreneurs envisage a world in which people can communicate telepathically, with each other and with machines, or acquire superhuman abilities, such as hearing at very high frequencies.

These powers, if they ever materialise, are decades away. But well before then, BCIs could open the door to remarkable new applications. Imagine stimulating the visual cortex to help the blind, forging new neural connections in stroke victims or monitoring the brain for signs of depression. By turning the firing of neurons into a resource to be harnessed, BCIs may change the idea of what it means to be human.

That thinking feeling
Sceptics scoff. Taking medical BCIs out of the lab into clinical practice has proved very difficult. The BrainGate system used by Mr Kochevar was developed more than ten years ago, but only a handful of people have tried it out. Turning implants into consumer products is even harder to imagine. The path to the mainstream is blocked by three formidable barriers—technological, scientific and commercial.

Start with technology. Non-invasive techniques like electroencephalography (EEG) struggle to pick up high-resolution brain signals through intervening layers of skin, bone and membrane. Some advances are being made—on EEG caps that can be used to play virtual-reality games or control industrial robots using thought alone. But for the time being at least, the most ambitious applications require implants that can interact directly with neurons. And existing devices have lots of drawbacks. They involve wires that pass through the skull; they provoke immune responses; they communicate with only a few hundred of the 85bn neurons in the human brain. But that could soon change. Helped by advances in miniaturisation and increased computing power, efforts are under way to make safe, wireless implants that can communicate with hundreds of thousands of neurons. Some of these interpret the brain’s electrical signals; others experiment with light, magnetism and ultrasound.

Clear the technological barrier, and another one looms. The brain is still a foreign country. Scientists know little about how exactly it works, especially when it comes to complex functions like memory formation. Research is more advanced in animals, but experiments on humans are hard. Yet, even today, some parts of the brain, like the motor cortex, are better understood. Nor is complete knowledge always needed. Machine learning can recognise patterns of neural activity; the brain itself gets the hang of controlling BCIs with extraordinary ease. And neurotechnology will reveal more of the brain’s secrets.

Like a hole in the head
The third obstacle comprises the practical barriers to commercialisation. It takes time, money and expertise to get medical devices approved. And consumer applications will take off only if they perform a function people find useful. Some of the applications for brain-computer interfaces are unnecessary—a good voice-assistant is a simpler way to type without fingers than a brain implant, for example. The idea of consumers clamouring for craniotomies also seems far-fetched. Yet brain implants are already an established treatment for some conditions. Around 150,000 people receive deep-brain stimulation via electrodes to help them control Parkinson’s disease. Elective surgery can become routine, as laser-eye procedures show.

All of which suggests that a route to the future imagined by the neurotech pioneers is arduous but achievable. When human ingenuity is applied to a problem, however hard, it is unwise to bet against it. Within a few years, improved technologies may be opening up new channels of communications with the brain. Many of the first applications hold out unambiguous promise—of movement and senses restored. But as uses move to the augmentation of abilities, whether for military purposes or among consumers, a host of concerns will arise. Privacy is an obvious one: the refuge of an inner voice may disappear. Security is another: if a brain can be reached on the internet, it can also be hacked. Inequality is a third: access to superhuman cognitive abilities could be beyond all except a self-perpetuating elite. Ethicists are already starting to grapple with questions of identity and agency that arise when a machine is in the neural loop.

These questions are not urgent. But the bigger story is that neither are they the realm of pure fantasy. Technology changes the way people live. Beneath the skull lies the next frontier.

This article appeared in the Leaders section of the print edition under the headline “The next frontier”

================== REFERENCE: History of Speech Recognition =====



Speech Recognition Through the Decades: How We Ended Up With Siri

By Melanie Pinola
PCWorld | Nov 2, 2011

Looking back on the development of speech recognition technology is like watching a child grow up, progressing from the baby-talk level of recognizing single syllables, to building a vocabulary of thousands of words, to answering questions with quick, witty replies, as Apple’s supersmart virtual assistant Siri does.

Listening to Siri, with its slightly snarky sense of humor, made us wonder how far speech recognition has come over the years. Here’s a look at the developments in past decades that have made it possible for people to control devices using only their voice.

1950s and 1960s: Baby Talk
The first speech recognition systems could understand only digits. (Given the complexity of human language, it makes sense that inventors and engineers first focused on numbers.) In 1952, Bell Laboratories designed the “Audrey” system, which recognized digits spoken by a single voice. Ten years later, at the 1962 World’s Fair, IBM demonstrated its “Shoebox” machine, which could understand 16 words spoken in English.

Labs in the United States, Japan, England, and the Soviet Union developed other hardware dedicated to recognizing spoken sounds, expanding speech recognition technology to support four vowels and nine consonants. They may not sound like much, but these first efforts were an impressive start, especially when you consider how primitive computers themselves were at the time.

1970s: Speech Recognition Takes Off

Speech recognition technology made major strides in the 1970s, thanks to interest and funding from the U.S. Department of Defense. The DoD’s DARPA Speech Understanding Research (SUR) program, from 1971 to 1976, was one of the largest of its kind in the history of speech recognition, and among other things it was responsible for Carnegie Mellon’s “Harpy” speech-understanding system. Harpy could understand 1011 words, approximately the vocabulary of an average three-year-old.

Harpy was significant because it introduced a more efficient search approach, called beam search, to “prove the finite-state network of possible sentences,” according to Readings in Speech Recognition by Alex Waibel and Kai-Fu Lee. (The story of speech recognition is very much tied to advances in search methodology and technology, as Google’s entrance into speech recognition on mobile devices proved just a few years ago.)
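
The article only names the technique, so here is a rough sketch of the pruning idea in Python. This is a generic beam search over a lattice of word hypotheses, not a reconstruction of Harpy: the step_logprob and successors callables, and the toy bigram model in the usage example, are hypothetical stand-ins for a real acoustic/language model.

    import heapq

    def beam_search(step_logprob, successors, start, max_len, beam_width=3):
        """Minimal beam search over a lattice of word hypotheses.

        step_logprob(seq, word) -> log-probability of appending `word` to `seq`
        successors(seq)         -> candidate next words for the partial sequence
        (Both callables are hypothetical stand-ins for real models.)
        """
        beam = [(0.0, [start])]                      # (cumulative log-prob, word sequence)
        for _ in range(max_len):
            candidates = []
            for logp, seq in beam:
                for word in successors(seq):
                    candidates.append((logp + step_logprob(seq, word), seq + [word]))
            if not candidates:                       # nothing left to expand
                break
            # Prune: keep only the best few partial hypotheses instead of the full tree.
            beam = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
        return max(beam, key=lambda c: c[0])         # best-scoring hypothesis found

    # Toy usage with an invented bigram model over a handful of words.
    toy_next = {"<s>": ["the", "a"], "the": ["cat", "dog"], "a": ["cat", "dog"],
                "cat": [], "dog": []}
    toy_logp = {("<s>", "the"): -0.4, ("<s>", "a"): -1.1,
                ("the", "cat"): -0.2, ("the", "dog"): -1.5,
                ("a", "cat"): -1.0, ("a", "dog"): -0.7}
    best = beam_search(lambda seq, w: toy_logp[(seq[-1], w)],
                       lambda seq: toy_next[seq[-1]],
                       start="<s>", max_len=2, beam_width=2)
    print(best)   # -> (score, ['<s>', 'the', 'cat'])

The pruning step is the heart of the idea: rather than scoring every path through the network of possible sentences, only a narrow “beam” of promising partial hypotheses is carried forward at each step.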

The ’70s also marked a few other important milestones in speech recognition technology, including the founding of the first commercial speech recognition company, Threshold Technology, as well as Bell Laboratories’ introduction of a system that could interpret multiple people’s voices.

1980s: Speech Recognition Turns Toward Prediction
Over the next decade, thanks to new approaches to understanding what people say, speech recognition vocabulary jumped from a few hundred words to several thousand words, and had the potential to recognize an unlimited number of words. One major reason was a new statistical method known as the hidden Markov model (HMM). Rather than simply using templates for words and looking for sound patterns, the HMM approach considered the probability of unknown sounds’ being words. This foundation would be in place for the next two decades (see Automatic Speech Recognition—A Brief History of the Technology Development by B.H. Juang and Lawrence R. Rabiner).
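
To make the statistical shift concrete, here is a tiny sketch of the HMM “forward” computation in Python. The two states, the transition and emission probabilities, and the observation symbols are invented purely for illustration and are not drawn from any of the systems described above.

    def hmm_forward(obs, states, start_p, trans_p, emit_p):
        """Forward algorithm: total probability that an HMM produced `obs`.

        obs      -- sequence of observed (toy) acoustic symbols
        states   -- hidden states (here, candidate words)
        start_p  -- start_p[s]: probability of starting in state s
        trans_p  -- trans_p[s][t]: probability of moving from state s to state t
        emit_p   -- emit_p[s][o]: probability that state s emits observation o
        """
        # Initialise with the first observation.
        alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
        # Fold in each subsequent observation.
        for o in obs[1:]:
            alpha = {t: sum(alpha[s] * trans_p[s][t] for s in states) * emit_p[t][o]
                     for t in states}
        return sum(alpha.values())

    # Invented two-state toy model: is the speaker saying "yes" or "no"?
    states  = ["yes", "no"]
    start_p = {"yes": 0.5, "no": 0.5}
    trans_p = {"yes": {"yes": 0.7, "no": 0.3}, "no": {"yes": 0.3, "no": 0.7}}
    emit_p  = {"yes": {"eh": 0.6, "oh": 0.4}, "no": {"eh": 0.2, "oh": 0.8}}
    print(hmm_forward(["eh", "oh"], states, start_p, trans_p, emit_p))   # about 0.224

This is the sense in which HMMs “considered the probability” of sounds being words: instead of matching a fixed template, the model weighs how likely each hidden word sequence is to have generated the sounds it actually heard.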

Equipped with this expanded vocabulary, speech recognition started to work its way into commercial applications for business and specialized industry (for instance, medical use). It even entered the home, in the form of Worlds of Wonder’s Julie doll (1987), which children could train to respond to their voice. (“Finally, the doll that understands you.”)

However, whether speech recognition software at the time could recognize 1000 words, as the 1985 Kurzweil text-to-speech program did, or whether it could support a 5000-word vocabulary, as IBM’s system did, a significant hurdle remained: These programs took discrete dictation, so you had … to … pause … after … each … and … every … word.
