http://us.penguingroup.com/static/packages/us/kurzweil/excerpts/timeline/timeline2.htm
TIME LINE
1950
Eckert and Mauchly develop UNIVAC, the first commercially marketed computer. It is used to compile the results of the U.S. census, marking the first time this census is handled by a programmable computer.
1950
In his paper “Computing Machinery and Intelligence,” Alan Turing presents the Turing Test, a means for determining whether a machine is intelligent.
1950
Commercial color television is first broadcast in the United States, and transcontinental black-and-white television is available within the next year.
1950
Claude Elwood Shannon writes “Programming a Computer for Playing Chess,” published in Philosophical Magazine.
1951
Eckert and Mauchly build EDVAC, which is the first computer to use the stored-program concept. The work takes place at the Moore School at the University of Pennsylvania.
1951
Paris is the host to a Cybernetics Congress.
1952
UNIVAC, used by the Columbia Broadcasting System (CBS) television network, successfully predicts the election of Dwight D. Eisenhower as president of the United States.
1952
Pocket-sized transistor radios are introduced.
1952
Nathaniel Rochester designs the 701, IBM’s first production-line electronic digital computer. It is marketed for scientific use.
1953
The chemical structure of the DNA molecule is discovered by James D. Watson and Francis H. C. Crick.
1953
Philosophical Investigations by Ludwig Wittgenstein and Waiting for Godot, a play by Samuel Beckett, are published. Both works are considered of major importance to modern existentialism.
1953
Marvin Minsky and John McCarthy get summer jobs at Bell Laboratories.
1955
William Shockley’s Semiconductor Laboratory is founded, thereby starting Silicon Valley.
1955
The Remington Rand Corporation and Sperry Gyroscope join forces and become the Sperry-Rand Corporation. For a time, it presents serious competition to IBM.
1955
IBM introduces its first transistor calculator. It uses 2,200 transistors instead of the 1,200 vacuum tubes that would otherwise be required for equivalent computing power.
1955
A U.S. company develops the first design for a robotlike machine to be used in industry.
1955
IPL-II, the first artificial intelligence language, is created by Allen Newell, J. C. Shaw, and Herbert Simon.
1955
The new space program and the U.S. military recognize the importance of having computers with enough power to launch rockets to the moon and missiles through the stratosphere. Both organizations supply major funding for research.
1956
The Logic Theorist, which uses recursive search techniques to solve mathematical problems, is developed by Allen Newell, J. C. Shaw, and Herbert Simon.
1956
John Backus and a team at IBM invent FORTRAN, the first scientific computer-programming language.
1956
Stanislaw Ulam and colleagues at Los Alamos develop a chess program for the MANIAC I computer; it becomes the first computer program to beat a human being in a chess game.
1956
The first commercial watch to run on electric batteries is presented by the Lip company of France.
1956
The term Artificial Intelligence is coined at a computer conference at Dartmouth College.
1957
Kenneth H. Olsen founds Digital Equipment Corporation.
1957
The General Problem Solver, which uses recursive search to solve problems, is developed by Allen Newell, J. C. Shaw, and Herbert Simon.
1957
Noam Chomsky writes Syntactic Structures, in which he seriously considers the computation required for natural-language understanding. This is the first of the many important works that will earn him the title Father of Modern Linguistics.
1958
An integrated circuit is created by Texas Instruments’ Jack St. Clair Kilby.
1958
The Artificial Intelligence Laboratory at the Massachusetts Institute of Technology is founded by John McCarthy and Marvin Minsky.
1958
Allen Newell and Herbert Simon make the prediction that a digital computer will be the world’s chess champion within ten years.
1958
LISP, an early AI language, is developed by John McCarthy.
1958
The Defense Advanced Research Projects Agency, which will fund important computer-science research for years in the future, is established.
1958
Seymour Cray builds the Control Data Corporation 1604, the first fully transistorized supercomputer.
1958-1959
Jack Kilby and Robert Noyce each develop the computer chip independently. The computer chip leads to the development of much cheaper and smaller computers.
1959
Arthur Samuel completes his study in machine learning. The project, a checkers-playing program, performs as well as some of the best players of the time.
1959
Electronic document preparation increases the consumption of paper in the United States. This year, the nation will consume 7 million tons of paper. In 1986, 22 million tons will be used. American businesses alone will use 850 billion pages in 1981, 2.5 trillion pages in 1986, and 4 trillion in 1990.
1959
COBOL, a computer language designed for business use, is developed by Grace Murray Hopper, who was also one of the first programmers of the Mark I.
1959
Xerox introduces the first commercial copier.
1960
Theodore Harold Maiman develops the first laser. It uses a ruby cylinder.
1960
The recently established Defense Department’s Advanced Research Projects Agency substantially increases its funding for computer research.
1960
There are now about six thousand computers in operation in the United States.
1960s
Neural-net machines are quite simple and incorporate a small number of neurons organized in only one or two layers. These models are shown to be limited in their capabilities.
1961
The first time-sharing computer is developed at MIT.
1961
President John F. Kennedy provides support for the Apollo space program, and inspiration for important research in computer science, when he addresses a joint session of Congress, saying, “I believe we should go to the moon.”
1962
The world’s first industrial robots are marketed by a U.S. company.
1962
Frank Rosenblatt defines the Perceptron in his Principles of Neurodynamics. Rosenblatt first introduced the Perceptron, a simple processing element for neural networks, at a conference in 1959.
1963
The Artificial Intelligence Laboratory at Stanford University is founded by John McCarthy.
1963
The influential Steps Toward Artificial Intelligence by Marvin Minsky is published.
1963
Digital Equipment Corporation announces the PDP-8, which is the first successful minicomputer.
1964
IBM introduces its 360 series, thereby further strengthening its leadership in the computer industry.
1964
Thomas E. Kurtz and John G. Kemeny of Dartmouth College invent BASIC (Beginner’s All-purpose Symbolic Instruction Code).
1964
Daniel Bobrow completes his doctoral work on Student, a natural-language program that can solve high-school-level word problems in algebra.
1964
Marshall McLuhan, in Understanding Media, foresees the potential of electronic media, especially television, to create a “global village” in which “the medium is the message.”
1965
Gordon Moore predicts that integrated circuits will double in complexity each year. This observation will become known as Moore’s Law and will prove true (with later revisions) for decades to come.
1965
The Robotics Institute at Carnegie Mellon University, which will become a leading research center for AI, is founded by Raj Reddy.
1965
Hubert Dreyfus presents a set of philosophical arguments against the possibility of artificial intelligence in a RAND corporate memo entitled “Alchemy and Artificial Intelligence.”
1965
Herbert Simon predicts that by 1985 “machines will be capable of doing any work a man can do.”
1966
The Amateur Computer Society, possibly the first personal computer club, is founded by Stephen B. Gray. The Amateur Computer Society Newsletter is one of the first magazines about computers.
1967
The first internal pacemaker is developed by Medtronic. It uses integrated circuits.
1968
Gordon Moore and Robert Noyce found Intel (Integrated Electronics) Corporation.
1968
The idea of a computer that can see, speak, hear, and think sparks imaginations when HAL is presented in the film 2001: A Space Odyssey, by Arthur C. Clarke and Stanley Kubrick.
1969
Marvin Minsky and Seymour Papert present the limitation of single-layer neural nets in their book Perceptrons. The book’s pivotal theorem shows that a Perceptron is unable to determine if a line drawing is fully connected. The book essentially halts funding for neural-net research.
1970
The GNP, on a per capita basis and in constant 1958 dollars, is $3,500, or more than six times as much as a century before.
1970
The floppy disc is introduced for storing data in computers.
c. 1970
Researchers at the Xerox Palo Alto Research Center (PARC) develop the first personal computer, called Alto. PARC’s Alto pioneers the use of bit-mapped graphics, windows, icons, and mouse pointing devices.
1970
Terry Winograd completes his landmark thesis on SHRDLU, a natural-language system that exhibits diverse intelligent behavior in the small world of children’s blocks. SHRDLU is criticized, however, for its lack of generality.
1971
Intel introduces the 4004, the first microprocessor.
1971
The first pocket calculator is introduced. It can add, subtract, multiply, and divide.
1972
Continuing his criticism of the capabilities of AI, Hubert Dreyfus publishes What Computers Can’t Do, in which he argues that symbol manipulation cannot be the basis of human intelligence.
1973
Stanley H. Cohen and Herbert W. Boyer show that DNA strands can be cut, joined, and then reproduced by inserting them into the bacterium Escherichia coli. This work creates the foundation for genetic engineering.
1974
Creative Computing starts publication. It is the first magazine for home computer hobbyists.
1974
The 8-bit 8080, which is the first general-purpose microprocessor, is announced by Intel.
1975
Sales of microcomputers in the United States reach more than five thousand, and the first personal computer, the Altair 8800, is introduced. It has 256 bytes of memory.
1975
BYTE, the first widely distributed computer magazine, is published.
1975
Gordon Moore revises his observation on the doubling rate of transistors on an integrated circuit from twelve months to twenty-four months.
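The difference between the original and revised doubling periods compounds dramatically over time. The short Python sketch below is purely illustrative and is not part of the original timeline; the baseline of 2,300 transistors and the 20-year horizon are hypothetical round numbers chosen only to show the arithmetic of the two forms of the observation.

# Illustrative sketch of Moore's observation under two assumed doubling periods.
# The 2,300-transistor baseline and 20-year horizon are hypothetical examples.
def transistor_count(initial, years, doubling_period_years):
    # Exponential growth: the count doubles once every doubling_period_years.
    return initial * 2 ** (years / doubling_period_years)

start = 2300  # hypothetical baseline count
for period, label in [(1.0, "12-month doubling (original 1965 form)"),
                      (2.0, "24-month doubling (1975 revision)")]:
    projected = transistor_count(start, years=20, doubling_period_years=period)
    print(f"{label}: about {projected:,.0f} transistors after 20 years")

Under annual doubling, the projected count after twenty years is roughly a thousand times larger than under biennial doubling, which is why the choice of doubling period matters so much to long-range forecasts.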
1976
Kurzweil Computer Products introduces the Kurzweil Reading Machine (KRM), the first print-to-speech reading machine for the blind. Based on the first omni-font (any font) optical character recognition (OCR) technology, the KRM scans and reads aloud any printed materials (books, magazines, typed documents).
1976
Stephen G. Wozniak and Steven P. Jobs found Apple Computer Corporation.
1977
The concept of true-to-life robots with convincing human emotions is imaginatively portrayed in the film Star Wars.
1977
For the first time, a telephone company conducts large-scale experiments with fiber optics in a telephone system.
1977
The Apple II, the first personal computer to be sold in assembled form and the first with color graphics capability, is introduced and successfully marketed.
1978
Speak & Spell, a computerized learning aid for young children, is introduced by Texas Instruments. This is the first product that electronically duplicates the human vocal tract on a chip.
1979
In a landmark study by nine researchers published in the Journal of the American Medical Association, the performance of the computer program MYCIN is compared with that of doctors in diagnosing ten test cases of meningitis. MYCIN does at least as well as the medical experts. The potential of expert systems in medicine becomes widely recognized.
1979
Dan Bricklin and Bob Frankston establish the personal computer as a serious business tool when they develop VisiCalc, the first electronic spreadsheet.
1980
AI industry revenue is a few million dollars this year.
1980s
As neuron models are becoming potentially more sophisticated, the neural network paradigm begins to make a comeback, and networks with multiple layers are commonly used.
1981
Xerox introduces the Star computer, thus launching the concept of desktop publishing. Apple’s LaserWriter, available in 1985, will further increase the viability of this inexpensive and efficient way for writers and artists to create their own finished documents.
1981
IBM introduces its Personal Computer (PC).
1981
The prototype of the Bubble Jet printer is presented by Canon.
1982
Compact disc players are marketed for the first time.
1982
Mitch Kapor presents Lotus 1-2-3, an enormously popular spreadsheet program.
1983
Fax machines are fast becoming a necessity in the business world.
1983
The Musical Instrument Digital Interface (MIDI) is presented in Los Angeles at the first North American Music Manufacturers show.
1983
Six million personal computers are sold in the United States.
1984
The Apple Macintosh introduces the “desktop metaphor,” pioneered at Xerox, including bit-mapped graphics, icons, and the mouse.
1984
William Gibson uses the term cyberspace in his book Neuromancer.
1984
The Kurzweil 250 (K250) synthesizer, considered to be the first electronic instrument to successfully emulate the sounds of acoustic instruments, is introduced to the market.
1985
Marvin Minsky publishes The Society of Mind, in which he presents a theory of the mind where intelligence is seen to be the result of proper organization of a hierarchy of minds with simple mechanisms at the lowest level of the hierarchy.
1985
MIT’s Media Laboratory is founded by Jerome Wiesner and Nicholas Negroponte. The lab is dedicated to researching possible applications and interactions of computer science, sociology, and artificial intelligence in the context of media technology.
1985
There are 116 million jobs in the United States, compared to 12 million in 1870. In the same period, the proportion of the population that is employed has grown from 31 percent to 48 percent, and the per capita GNP in constant dollars has increased by 600 percent. These trends show no signs of abating.
1986
Electronic keyboards account for 55.2 percent of the American musical keyboard market, up from 9.5 percent in 1980.
1986
Life expectancy is about 74 years in the United States. Only 3 percent of the American workforce is involved in the production of food. Fully 76 percent of American adults have high-school diplomas, and 7.3 million U.S. students are enrolled in college.
1987
NYSE stocks have their greatest single-day loss due, in part, to computerized trading.
1987
Current speech systems can provide any one of the following: a large vocabulary, continuous speech recognition, or speaker independence.
1987
Robotic-vision systems are now a $300 million industry and will grow to $800 million by 1990.
1988
Computer memory today costs only one hundred millionth of what it did in 1950.
1988
Marvin Minsky and Seymour Papert publish a revised edition of Perceptrons in which they discuss recent developments in neural network machinery for intelligence.
1988
In the United States, 4,700,000 microcomputers, 120,000 minicomputers, and 11,500 mainframes are sold this year.
1988
W. Daniel Hillis’s Connection Machine is capable of 65,536 computations at the same time.
1988
Notebook computers are replacing the bigger laptops in popularity.
1989
Intel introduces the 16-megahertz (MHz) 80386SX, 2.5 MIPS microprocessor.
1990
Nautilus, the first CD-ROM magazine, is published.
1990
The development of HyperText Markup Language (HTML) by researcher Tim Berners-Lee and its release by CERN, the high-energy physics laboratory in Geneva, Switzerland, leads to the conception of the World Wide Web.
1991
Cell phones and e-mail are increasing in popularity as business and personal communication tools.
1992
The first double-speed CD-ROM drive becomes available from NEC.
1992
The first personal digital assistant (PDA), a hand-held computer, is introduced at the Consumer Electronics Show in Chicago. The developer is Apple Computer.
1993
The Pentium 32-bit microprocessor is launched by Intel. This chip has 3.1 million transistors.
1994
The World Wide Web emerges.
1994
America Online now has more than 1 million subscribers.
1994
Scanners and CD-ROMs are becoming widely used.
1994
Digital Equipment Corporation introduces a 300-MHz version of the Alpha AXP processor that executes 1 billion instructions per second.
1996
Compaq Computer and NEC Computer Systems ship hand-held computers running Windows CE.
1996
NEC Electronics ships the R4101 processor for personal digital assistants. It includes a touch-screen interface.
1997
Deep Blue defeats Garry Kasparov, the world chess champion, in a regulation match.
1997
Dragon Systems introduces NaturallySpeaking, the first continuous-speech dictation software product.
1997
Video phones are being used in business settings.
1997
Face-recognition systems are beginning to be used in payroll check-cashing machines.
1998
The Dictation Division of Lernout & Hauspie Speech Products (formerly Kurzweil Applied Intelligence) introduces Voice Xpress Plus, the first continuous-speech-recognition program with the ability to understand natural-language commands.
1998
Routine business transactions over the phone are beginning to be conducted between a human customer and an automated system that engages in a verbal dialogue with the customer (e.g., United Airlines reservations).
1998
Investment funds are emerging that use evolutionary algorithms and neural nets to make investment decisions (e.g., Advanced Investment Technologies).
1998
The World Wide Web is ubiquitous. It is routine for high-school students and local grocery stores to have web sites.
1998
Automated personalities, which appear as animated faces that speak with realistic mouth movements and facial expressions, are working in laboratories. These personalities respond to the spoken statements and facial expressions of their human users. They are being developed to be used in future user interfaces for products and services, as personalized research and business assistants, and to conduct transactions.
1998
Microvision’s Virtual Retinal Display (VRD) projects images directly onto the user’s retinas. Although the device is expensive, consumer versions are projected for 1999.
1998
“Bluetooth” technology is being developed for “body” local area networks (LANs) and for wireless communication between personal computers and associated peripherals. Wireless communication is being developed for high-bandwidth connection to the Web.
1999
Ray Kurzweil’s The Age of Spiritual Machines: When Computers Exceed Human Intelligence is published, available at your local bookstore!
FORECASTS:
2009
A $1,000 personal computer can perform about a trillion calculations per second.
Personal computers with high-resolution visual displays come in a range of sizes, from those small enough to be embedded in clothing and jewelry up to the size of a thin book.
Cables are disappearing. Communication between components uses short-distance wireless technology. High-speed wireless communication provides access to the Web.
The majority of text is created using continuous speech recognition. Also ubiquitous are language user interfaces (LUIs).
Most routine business transactions (purchases, travel, reservations) take place between a human and a virtual personality. Often, the virtual personality includes an animated visual presence that looks like a human face.
Although traditional classroom organization is still common, intelligent courseware has emerged as a common means of learning.
Pocket-sized reading machines for the blind and visually impaired, “listening machines” (speech-to-text conversion) for the deaf, and computer-controlled orthotic devices for paraplegic individuals result in a growing perception that primary disabilities do not necessarily impart handicaps.
Translating telephones (speech-to-speech language translation) are commonly used for many language pairs.
Accelerating returns from the advance of computer technology have resulted in continued economic expansion. Price deflation, which had been a reality in the computer field during the twentieth century, is now occurring outside the computer field. The reason for this is that virtually all economic sectors are deeply affected by the accelerating improvement in the price performance of computing.
Human musicians routinely jam with cybernetic musicians.
Bioengineered treatments for cancer and heart disease have greatly reduced the mortality from these diseases.
The neo-Luddite movement is growing.
2019
A $1,000 computing device (in 1999 dollars) is now approximately equal to the computational ability of the human brain.
Computers are now largely invisible and are embedded everywhere -- in walls, tables, chairs, desks, clothing, jewelry, and bodies.
Three-dimensional virtual reality displays, embedded in glasses and contact lenses, as well as auditory "lenses," are used routinely as primary interfaces for communication with other persons, computers, the Web, and virtual reality.
Most interaction with computing is through gestures and two-way natural-language spoken communication.
Nanoengineered machines are beginning to be applied to manufacturing and process-control applications.
High-resolution, three-dimensional visual and auditory virtual reality and realistic all-encompassing tactile environments enable people to do virtually anything with anybody, regardless of physical proximity.
Paper books or documents are rarely used and most learning is conducted through intelligent, simulated software-based teachers.
Blind persons routinely use eyeglass-mounted reading-navigation systems. Deaf persons read what other people are saying through their lens displays. Paraplegic and some quadriplegic persons routinely walk and climb stairs through a combination of computer-controlled nerve stimulation and exoskeletal robotic devices.
The vast majority of transactions include a simulated person.
Automated driving systems are now installed in most roads.
People are beginning to have relationships with automated personalities and use them as companions, teachers, caretakers, and lovers.
Virtual artists, with their own reputations, are emerging in all of the arts.
There are widespread reports of computers passing the Turing Test, although these tests do not meet the criteria established by knowledgeable observers.
2029
A $1,000 (in 1999 dollars) unit of computation has the computing capacity of approximately 1,000 human brains.
Permanent or removable implants (similar to contact lenses) for the eyes as well as cochlear implants are now used to provide input and output between the human user and the worldwide computing network.
Direct neural pathways have been perfected for high-bandwidth connection to the human brain. A range of neural implants is becoming available to enhance visual and auditory perception and interpretation, memory, and reasoning.
Automated agents are now learning on their own, and significant knowledge is being created by machines with little or no human intervention. Computers have read all available human- and machine-generated literature and multimedia material.
There is widespread use of all-encompassing visual, auditory, and tactile communication using direct neural connections, allowing virtual reality to take place without having to be in a "total touch enclosure."
The majority of communication does not involve a human. The majority of communication involving a human is between a human and a machine.
There is almost no human employment in production, agriculture, or transportation. Basic life needs are available for the vast majority of the human race.
There is a growing discussion about the legal rights of computers and what constitutes being "human."
Although computers routinely pass apparently valid forms of the Turing Test, controversy persists about whether or not machine intelligence equals human intelligence in all of its diversity.
Machines claim to be conscious. These claims are largely accepted.
2049
The common use of nanoproduced food, which has the correct nutritional composition and the same taste and texture as organically produced food, means that the availability of food is no longer affected by limited resources, bad crop weather, or spoilage.
Nanobot swarm projections are used to create visual-auditory-tactile projections of people and objects in real reality.
2072
Picoengineering (developing technology at the scale of picometers or trillionths of a meter) becomes practical.
By the year 2099
There is a strong trend toward a merger of human thinking with the world of machine intelligence that the human species initially created.
There is no longer any clear distinction between humans and computers.
Most conscious entities do not have a permanent physical presence.
Machine-based intelligences derived from extended models of human intelligence claim to be human, although their brains are not based on carbon-based cellular processes, but rather electronic and photonic equivalents. Most of these intelligences are not tied to a specific computational processing unit. The number of software-based humans vastly exceeds those still using native neuron-cell-based computation.
Even among those human intelligences still using carbon-based neurons, there is ubiquitous use of neural-implant technology, which provides enormous augmentation of human perceptual and cognitive abilities. Humans who do not utilize such implants are unable to meaningfully participate in dialogues with those who do.
Because most information is published using standard assimilated knowledge protocols, information can be instantly understood. The goal of education, and of intelligent beings, is discovering new knowledge to learn.
Femtoengineering (engineering at the scale of femtometers or one thousandth of a trillionth of a meter) proposals are controversial.
Life expectancy is no longer a viable term in relation to intelligent beings.
Some many millenniums hence . . .
Intelligent beings consider the fate of the Universe.