Great courage by Google to call a spade a spade and close out a product that has little uptake. I am sure they will rise again.
Meanwhile, Microsoft has HealthVault; now let’s see how Apple plays its cards in this incredibly complex space.
Here are the basics….
There are about 200 tests that require blood samples.
The basic metabolic panel is widely used.
40-50 tests are for allergies.
18 are useful for male-related general health screening.
18 are useful for female-related general health screening.
7 are for detecting viruses.
18 are for rheumatic evaluations.
7 are hormone-related – men.
7 are hormone-related – women.
Some labs group the tests. An example is:
Blood test types and panels
LABS By Disease
Quantified Self Movement
Austin Example of LABS: http://www.austinmedclinic.com/lab-pricing.html
Note this example underlines that stool, urine, saliva, and blood are all specimens.
The history of this song involves Pete Seeger, Joan Baez, Bruce Springsteen, the Highlander Folk School, SNCC, and labor organizing in South Carolina tobacco fields. There are many other threads. The song apparently originated in 1901!
The Highlander Folk School seems to be at the center of much of this history.
Listen to NPR’s Noah Adams (8-minute clip):
See this passage below from Wikipedia:
“The civil rights anthem “We Shall Overcome” was adapted (from a gospel song) by Highlander music director Zilphia Horton, wife of Myles Horton, from the singing of striking tobacco factory workers in South Carolina in 1946, and shortly afterward was published by folksinger Pete Seeger in the People’s Songs bulletin. It was revived at Highlander by Guy Carawan, who succeeded Zilphia Horton as Highlander’s music director in 1959. Guy Carawan taught the song to SNCC at their first convening at Shaw University. The song has since spread and become one of the most recognizable movement songs in the world.”
Note that Rosa Parks was herself a trained community organizer – trained at the Highlander Folk School.
In an NPR interview, it was stated that the song was originally sung in a sing-song fashion, not as the anthem it is known as today. Also, it was originally written as “we will overcome,” until Pete Seeger changed it to “we shall overcome.”
The best history I have seen of the song is below:
This song is based on the early hymn “O Sanctissima.” Charles Albert Tindley, who was a minister at Bainbridge St. Methodist Church in Philadelphia and also a Gospel music composer, added the words in 1901 and called this new hymn “I’ll Overcome Some Day.” In the ensuing decades, the song became a favorite at black churches throughout the American south, often sung as “I Will Overcome.”
The song evolved at the Highlander Folk School in Monteagle, Tennessee, which was a meeting place and activity center for civil rights activists founded in 1932 (it was later renamed the Highlander Center and relocated to New Market, Tennessee). In 1947, striking tobacco workers from Charleston, South Carolina attended a workshop there and introduced the song (as “I Will Overcome”) to the cultural director of the school, Zilphia Horton. She began performing the song at all of her workshops, and taught it to Pete Seeger when he visited the center.
Seeger published the song in 1948 in the newsletter for his People’s Songs collective and began performing it. He changed the title to “We Shall Overcome” and also added two new verses and a banjo part.
In 1959, Guy Carawan took over as cultural director at the Highlander school, where the song was now a staple. Carawan brought it to the burgeoning civil rights movement when he played it at the first meeting of the Student Nonviolent Coordinating Committee in April, 1960. Members of this group spread word of the song, and soon it was sung across America at vigils, rallies, protests and other gatherings that called for an inspirational song of freedom.
While the song is most closely associated with Pete Seeger, he downplayed his contribution, saying that the song already existed and that all he really did was change “will” to “shall” because it “opens up the mouth better.”
When Pete Seeger played his updated version of “We Shall Overcome” for the American civil rights leader Martin Luther King Jr., he gave King’s civil rights movement its anthem. Seeger performed the song for King in 1957 when they attended the 25th anniversary of the Highlander Center in Tennessee. Rosa Parks was also at the event.
The only artist to chart with this song was Joan Baez, whose version reached #90 in the US in November, 1963. She performed the song at the March on Washington on August 28th, 1963 before Martin Luther King, Jr. gave his famous “I Have a Dream” speech. The album containing the audio from the event was released as We Shall Overcome: Documentary of the March on Washington.
After her first trip to England in 1965 (where she performed with Bob Dylan), Baez’ version of this classic protest anthem went to #26 on the UK chart.
The song wasn’t copyrighted until October 7, 1963. Listed as “New material arranged for voice and piano with guitar chords and some new words,” the copyright was granted to Seeger, Guy Carawan, Zilphia Horton and Frank Hamilton. Horton had died in 1956, so her husband Myles represented her estate in the claim. Myles Horton was co-founder of the Highlander Folk School; Frank Hamilton was a folk singer who worked with Seeger and often performed the song.
All four of the copyright holders (the composer credit is listed as Guy Carawan/Frank Hamilton/Zilphia Horton/Pete Seeger) advanced the song in some fashion, but none profited from the songwriting royalties, which are donated to the We Shall Overcome Fund. Administered by the Highlander Research and Education Center, the fund supports cultural and educational endeavors in African American communities in the South.
By the time the song was copyrighted, the original words written by Charles Albert Tindley in 1901 had been transformed to the extent that he was not due a writer’s credit (giving him one would have made delivering the song’s proceeds to charity very difficult). Tindley does have another musical claim to fame: he also wrote a hymn called “Stand By Me,” which became the basis for the Ben E. King hit of the same name. He was left off the credits for that one, too.
Notable artists who recorded this song include Louis Armstrong, Mahalia Jackson, Peter, Paul & Mary and Toots & the Maytals. Martin Luther King, Jr. also recorded a spoken word version.
The song was not widely recorded until the 1960s. One of the first recordings appeared in 1960 on the album The Nashville Sit-in Story: Songs and Scenes of Nashville Lunch Counter Desegregation, which was a compilation of songs recorded by demonstrators who participated in a sit-in on February 13 of that year.
Guy Carawan’s version of the song appeared in 1961 on the set Folk Music of the Newport Folk Festival, Vol. 2. The liner notes state: “Here, he sings what has become the theme song of the Negro movement in the south.”
Most of Pete Seeger’s recordings of the song were taken from live performances, including his June 8, 1963 concert at Carnegie Hall in New York City, which was released by Columbia Records as a live album called We Shall Overcome.

In 2006, Bruce Springsteen included this song on his album We Shall Overcome: The Seeger Sessions, which contained his versions of various songs written by Pete Seeger, who, like Springsteen, championed the working class and fought against institutional oppression. Seeger told The Guardian: “I’ve managed to survive all these years by keeping a low profile. Now my cover’s blown. If I had known, I’d have asked him to mention my name somewhere inside.”
The album won a Grammy for Best Traditional Folk Album. Springsteen recorded it without the E Street Band – it was the first album of cover songs he ever recorded.
Internet History Timeline
Yahoo is founded by Stanford engineering students Jerry Yang and David Filo in a campus trailer in 1994; bootstrap innovation and hardcore coding were etched into its corporate DNA.
Bill Gross, the indefatigable entrepreneur behind the business incubator IdeaLab, launched a search site called GoTo.com. Other search engines promised results based on human-built directories or computer-driven algorithms. Gross’ search site auctioned its results to the highest bidder.
At the time, the idea seemed radical, even offensive. Who would want results driven by hordes of sellers hawking goods and services? Advertisers would, as it turned out. Although GoTo never became a top-tier search destination, Gross and CEO Ted Meisel quickly saw that the big Web portals and search engines like AltaVista, Yahoo, AOL, and MSN would pay big money for GoTo’s auction-driven results.
www.goto.com changes its name to www.overture.com. By the end of that year, Web surfers had clicked on Overture ads 1.4 billion times. Advertisers understood the value of being able to bid for juicy keywords. The ads would be laser targeted, and the results — clicks — could be measured precisely. The portals and search sites figured out that the sponsored links could be placed alongside a more objective set of search results. It was a brilliant way to turn searches into revenue.
Laid low by the tech crash, Yahoo brought in Semel in May 2001, when the company was at its nadir. It rebounded spectacularly under his leadership.
Just as GoTo.com changed its name to Overture in 2001, Google saw the power of this approach and decided to grow its own. By the time Google published its financial statements for the first time in 2004, everyone knew that the company had harnessed one of the great innovations of the Internet age.
Google’s revenue stood at a measly $240 million a year. Yahoo’s was about $837 million.
Engineers at Google took the concept of pay-per-click search results and in 2002 turned it into a smooth-running, money-printing machine called AdWords. The company developed an automated process for advertisers to bid on keywords. It also made the auctions more sophisticated so customers couldn’t game the system. Crucially, Google determined ad prominence on a Web page not just by the price advertisers were willing to pay per click — as Overture had done — but also based on how many clickthroughs that ad generated. As a result, Google’s system responded quickly to ineffective ads: They disappeared. Google also had a massive database that tracked which ads worked and which didn’t, information it could pass on to its customers to help them create better ad campaigns.
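To make the ranking rule concrete, here is a minimal sketch in Python of the two approaches described above. The advertisers, bids, and clickthrough rates are hypothetical; this illustrates only the ordering idea, not Google’s actual AdWords auction (which also handled pricing, budgets, and other quality factors).

```python
# Hypothetical ads: advertiser, bid per click in dollars, observed clickthrough rate.
ads = [
    {"advertiser": "A", "bid": 2.00, "ctr": 0.010},  # high bid, ineffective ad
    {"advertiser": "B", "bid": 1.20, "ctr": 0.030},  # lower bid, effective ad
    {"advertiser": "C", "bid": 0.90, "ctr": 0.025},
]

# Overture-style ranking: prominence goes to the highest bid alone.
overture_order = sorted(ads, key=lambda ad: ad["bid"], reverse=True)

# Google-style ranking: weight the bid by clickthrough rate. bid * ctr is the
# expected revenue per impression, so ineffective ads sink automatically.
google_order = sorted(ads, key=lambda ad: ad["bid"] * ad["ctr"], reverse=True)

print([ad["advertiser"] for ad in overture_order])  # ['A', 'B', 'C']
print([ad["advertiser"] for ad in google_order])    # ['B', 'C', 'A']
```

Note how advertiser A, who bids the most but draws few clicks, wins under the bid-only rule yet drops to last once clickthroughs are factored in — exactly the mechanism that made ineffective ads disappear.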
Yahoo CEO Terry Semel offers to buy Google for roughly $3 billion, but the young Internet search firm wasn’t interested. Once upon a time, Google’s founders had come to Yahoo for an infusion of cash; now they were turning up their noses at what Semel believed was a perfectly reasonable offer. Worse, Semel’s lieutenants were telling him that, in fact, Google was probably worth at least $5 billion.
With Yahoo’s stock price still hovering at a bubble-busted $7 a share, a $5 billion purchase price would essentially mean that Yahoo would have to spend its entire market value to swing the deal. It would be a merger of equals, not a purchase.
Overture dominates search-related advertising; its revenue is two times Google’s.
In late 2002, Yahoo acquired Inktomi, which many believed had the second-best search technology on the planet. (Google was still tops.) The price: a bargain $257 million.
In mid-2003, Semel’s patient negotiations with Overture bore fruit. He paid $1.4 billion for the search-driven ad pioneer, roughly 25 percent less than the original asking price.
By the time the deal was actually announced in 2003, Google and Overture were neck and neck in revenue.
Google goes public. Its stock eventually soared above $500 a share, giving the company a market value of $147 billion — right behind Chevron and just ahead of Intel.
Google’s revenue is 2.5 times Overture’s.
By early 2004 — more than a year after the deal — Yahoo had integrated Inktomi’s technology well enough to put an end to a 2000 agreement to use Google’s search technology. Jeff Weiner seems to get the thanks for this. And by late summer, even as Google was preparing to go public, optimism about Yahoo’s future was so high that its stock hit $36.42, eight times its low in 2001. The thinking went that Google’s rapid growth rate and immature management would cause it to stumble, allowing Yahoo to prevail. At the time, Yahoo had a database of 157 million users that could be sliced and diced for advertisers to permit pinpoint targeting, while name brands like Procter & Gamble were skeptical about Google.
Yahoo attempts to placate Microsoft by maintaining Overture as a stand-alone brand. At the same time, they planned an overhaul of Overture’s technology, a project code-named Panama. It was a disaster. With no clear delineations, Yahoo and Overture executives fought over turf. Yahoo hired and fired a half-dozen engineering chiefs at Overture during the first year.
Ted Meisel, Overture’s CEO, is replaced by Jeff Weiner, a Semel protégé. Semel had decided in March 2005 to finally, fully integrate Overture into Yahoo, but those who worked on the project say it didn’t become a truly top priority until Weiner took over.
A year of the new world of social networking and online video
A year of social networking darlings www.facebook.com and www.myspace.com
A year expected by many to be Yahoo’s best — turned out dismally. Brand-advertising growth fell by half, while Yahoo’s share of search-related advertising dropped from 32 to 24 percent, according to Piper Jaffray. (During the same period, Google’s edged up from 64 to 68 percent.) Analysts at Morgan Stanley predicted that operating profits at Yahoo would fall by 20 percent. (Final 2006 results had not posted by press time.) “It’s now a given among advertisers that Google has won the search game,” says Jeff Lanctot, vice president of media for Internet ad firm Avenue A Razorfish. No wonder Yahoo’s share price fell 36 percent last year.
Yahoo was interested in buying YouTube, but Google snatched away the Web video star for $1.65 billion.
In December, Semel shook up his top management team, leading to the departures of COO Dan Rosensweig and content chief Lloyd Braun and to the streamlining of the company’s organizational chart. There’s talk that Semel himself may be on the way out.
Yahoo’s HotJobs was trounced by the competition.
By Fred Vogelstein, 12:00 PM, Jan. 16, 2007
Eckert and Mauchly develop UNIVAC, the first commercially marketed computer. It is used to compile the results of the U.S. census, marking the first time this census is handled by a programmable computer.
In his paper “Computing Machinery and Intelligence,” Alan Turing presents the Turing Test, a means for determining whether a machine is intelligent.
Commercial color television is first broadcast in the United States, and transcontinental black-and-white television is available within the next year.
Claude Elwood Shannon writes “Programming a Computer for Playing Chess,” published in Philosophical Magazine.
Eckert and Mauchly build EDVAC, which is the first computer to use the stored-program concept. The work takes place at the Moore School at the University of Pennsylvania.
Paris is the host to a Cybernetics Congress.
UNIVAC, used by the Columbia Broadcasting System (CBS) television network, successfully predicts the election of Dwight D. Eisenhower as president of the United States.
Pocket-sized transistor radios are introduced.
Nathaniel Rochester designs the 701, IBM’s first production-line electronic digital computer. It is marketed for scientific use.
The chemical structure of the DNA molecule is discovered by James D. Watson and Francis H. C. Crick.
Philosophical Investigations by Ludwig Wittgenstein and Waiting for Godot, a play by Samuel Beckett, are published. Both works are considered of major importance to modern existentialism.
Marvin Minsky and John McCarthy get summer jobs at Bell Laboratories.
William Shockley’s Semiconductor Laboratory is founded, thereby starting Silicon Valley.
The Remington Rand Corporation and Sperry Gyroscope join forces and become the Sperry-Rand Corporation. For a time, it presents serious competition to IBM.
IBM introduces its first transistor calculator. It uses 2,200 transistors instead of the 1,200 vacuum tubes that would otherwise be required for equivalent computing power.
A U.S. company develops the first design for a robotlike machine to be used in industry.
IPL-II, the first artificial intelligence language, is created by Allen Newell, J. C. Shaw, and Herbert Simon.
The new space program and the U.S. military recognize the importance of having computers with enough power to launch rockets to the moon and missiles through the stratosphere. Both organizations supply major funding for research.
The Logic Theorist, which uses recursive search techniques to solve mathematical problems, is developed by Allen Newell, J. C. Shaw, and Herbert Simon.
John Backus and a team at IBM invent FORTRAN, the first scientific computer-programming language.
Stanislaw Ulam develops MANIAC I, the first computer program to beat a human being in a chess game.
The first commercial watch to run on electric batteries is presented by the Lip company of France.
The term Artificial Intelligence is coined at a computer conference at Dartmouth College.
Kenneth H. Olsen founds Digital Equipment Corporation.
The General Problem Solver, which uses recursive search to solve problems, is developed by Allen Newell, J. C. Shaw, and Herbert Simon.
Noam Chomsky writes Syntactic Structures, in which he seriously considers the computation required for natural-language understanding. This is the first of the many important works that will earn him the title Father of Modern Linguistics.
An integrated circuit is created by Texas Instruments’ Jack St. Clair Kilby.
The Artificial Intelligence Laboratory at the Massachusetts Institute of Technology is founded by John McCarthy and Marvin Minsky.
Allen Newell and Herbert Simon make the prediction that a digital computer will be the world’s chess champion within ten years.
LISP, an early AI language, is developed by John McCarthy.
The Defense Advanced Research Projects Agency, which will fund important computer-science research for years in the future, is established.
Seymour Cray builds the Control Data Corporation 1604, the first fully transistorized supercomputer.
Jack Kilby and Robert Noyce each develop the computer chip independently. The computer chip leads to the development of much cheaper and smaller computers.
Arthur Samuel completes his study in machine learning. The project, a checkers-playing program, performs as well as some of the best players of the time.
Electronic document preparation increases the consumption of paper in the United States. This year, the nation will consume 7 million tons of paper. In 1986, 22 million tons will be used. American businesses alone will use 850 billion pages in 1981, 2.5 trillion pages in 1986, and 4 trillion in 1990.
COBOL, a computer language designed for business use, is developed by Grace Murray Hopper, who was also one of the first programmers of the Mark I.
Xerox introduces the first commercial copier.
Theodore Harold Maiman develops the first laser. It uses a ruby cylinder.
The recently established Defense Department’s Advanced Research Projects Agency substantially increases its funding for computer research.
There are now about six thousand computers in operation in the United States.
Neural-net machines are quite simple and incorporate a small number of neurons organized in only one or two layers. These models are shown to be limited in their capabilities.
The first time-sharing computer is developed at MIT.
President John F. Kennedy provides the support for space project Apollo and inspiration for important research in computer science when he addresses a joint session of Congress, saying, “I believe we should go to the moon.”
The world’s first industrial robots are marketed by a U.S. company.
Frank Rosenblatt defines the Perceptron in his Principles of Neurodynamics. Rosenblatt first introduced the Perceptron, a simple processing element for neural networks, at a conference in 1959.
The Artificial Intelligence Laboratory at Stanford University is founded by John McCarthy.
The influential Steps Toward Artificial Intelligence by Marvin Minsky is published.
Digital Equipment Corporation announces the PDP-8, which is the first successful minicomputer.
IBM introduces its 360 series, thereby further strengthening its leadership in the computer industry.
Thomas E. Kurtz and John G. Kemeny of Dartmouth College invent BASIC (Beginner’s All-purpose Symbolic Instruction Code).
Daniel Bobrow completes his doctoral work on Student, a natural-language program that can solve high-school-level word problems in algebra.
Gordon Moore’s prediction, made this year, says integrated circuits will double in complexity each year. This will become known as Moore’s Law and prove true (with later revisions) for decades to come.
Marshall McLuhan, via his Understanding Media, foresees the potential for electronic media, especially television, to create a “global village” in which “the medium is the message.”
The Robotics Institute at Carnegie Mellon University, which will become a leading research center for AI, is founded by Raj Reddy.
Hubert Dreyfus presents a set of philosophical arguments against the possibility of artificial intelligence in a RAND Corporation memo entitled “Alchemy and Artificial Intelligence.”
Herbert Simon predicts that by 1985 “machines will be capable of doing any work a man can do.”
The Amateur Computer Society, possibly the first personal computer club, is founded by Stephen B. Gray. The Amateur Computer Society Newsletter is one of the first magazines about computers.
The first internal pacemaker is developed by Medtronic. It uses integrated circuits.
Gordon Moore and Robert Noyce found Intel (Integrated Electronics) Corporation.
The idea of a computer that can see, speak, hear, and think sparks imaginations when HAL is presented in the film 2001: A Space Odyssey, by Arthur C. Clarke and Stanley Kubrick.
Marvin Minsky and Seymour Papert present the limitation of single-layer neural nets in their book Perceptrons. The book’s pivotal theorem shows that a Perceptron is unable to determine if a line drawing is fully connected. The book essentially halts funding for neural-net research.
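The textbook stand-in for the class of predicates Minsky and Papert analyzed is XOR, which no single-layer perceptron can compute because the two output classes are not linearly separable. The sketch below (training parameters are arbitrary, chosen for illustration) shows the same perceptron learning rule mastering AND while never getting all four XOR cases right.

```python
# A single-layer perceptron: one weight per input plus a bias, hard threshold.
def train_perceptron(samples, epochs=100, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            w0 += lr * (target - out) * x0  # classic perceptron update rule
            w1 += lr * (target - out) * x1
            b += lr * (target - out)
    return lambda x0, x1: 1 if w0 * x0 + w1 * x1 + b > 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
AND_data = [((x0, x1), x0 & x1) for x0, x1 in inputs]  # linearly separable
XOR_data = [((x0, x1), x0 ^ x1) for x0, x1 in inputs]  # not separable

for name, data in (("AND", AND_data), ("XOR", XOR_data)):
    predict = train_perceptron(data)
    correct = sum(predict(x0, x1) == target for (x0, x1), target in data)
    print(f"{name}: {correct}/4 correct")  # AND: 4/4; XOR: never 4/4
```

On AND the weights converge after a few epochs; on XOR they cycle forever, which is the behavior the book’s theorems explain.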
The GNP, on a per capita basis and in constant 1958 dollars, is $3,500, or more than six times as much as a century before.
The floppy disc is introduced for storing data in computers.
Researchers at the Xerox Palo Alto Research Center (PARC) develop the first personal computer, called Alto. PARC’s Alto pioneers the use of bit-mapped graphics, windows, icons, and mouse pointing devices.
Terry Winograd completes his landmark thesis on SHRDLU, a natural-language system that exhibits diverse intelligent behavior in the small world of children’s blocks. SHRDLU is criticized, however, for its lack of generality.
The Intel 4004, the first microprocessor, is introduced by Intel.
The first pocket calculator is introduced. It can add, subtract, multiply, and divide.
Continuing his criticism of the capabilities of AI, Hubert Dreyfus publishes What Computers Can’t Do, in which he argues that symbol manipulation cannot be the basis of human intelligence.
Stanley H. Cohen and Herbert W. Boyer show that DNA strands can be cut, joined, and then reproduced by inserting them into the bacterium Escherichia coli. This work creates the foundation for genetic engineering.
Creative Computing starts publication. It is the first magazine for home computer hobbyists.
The 8-bit 8080, which is the first general-purpose microprocessor, is announced by Intel.
Sales of microcomputers in the United States reach more than five thousand, and the first personal computer, the Altair 8800, is introduced. It has 256 bytes of memory.
BYTE, the first widely distributed computer magazine, is published.
Gordon Moore revises his observation on the doubling rate of transistors on an integrated circuit from twelve months to twenty-four months.
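As a back-of-the-envelope illustration of how much the doubling period matters, the sketch below projects transistor counts under both rates. The starting count is roughly the Intel 4004’s (see the 1971 entry above), and the ten-year horizon is an arbitrary choice for the example.

```python
def projected_transistors(start_count, years, months_per_doubling):
    """Count after `years` if complexity doubles every `months_per_doubling` months."""
    doublings = years * 12 / months_per_doubling
    return start_count * 2 ** doublings

start = 2_300  # approximate transistor count of the Intel 4004
for months in (12, 24):
    print(f"{months}-month doubling, 10 years: "
          f"{projected_transistors(start, 10, months):,.0f} transistors")
# 12-month doubling -> ~2.4 million; 24-month doubling -> ~73,600
```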
Kurzweil Computer Products introduces the Kurzweil Reading Machine (KRM), the first print-to-speech reading machine for the blind. Based on the first omni-font (any font) optical character recognition (OCR) technology, the KRM scans and reads aloud any printed materials (books, magazines, typed documents).
Stephen G. Wozniak and Steven P. Jobs found Apple Computer Corporation.
The concept of true-to-life robots with convincing human emotions is imaginatively portrayed in the film Star Wars.
For the first time, a telephone company conducts large-scale experiments with fiber optics in a telephone system.
The Apple II, the first personal computer to be sold in assembled form and the first with color graphics capability, is introduced and successfully marketed. (JCR buys first Apple II at KO in 1978.)
Speak & Spell, a computerized learning aid for young children, is introduced by Texas Instruments. This is the first product that electronically duplicates the human vocal tract on a chip.
In a landmark study by nine researchers published in the Journal of the American Medical Association, the performance of the computer program MYCIN is compared with that of doctors in diagnosing ten test cases of meningitis. MYCIN does at least as well as the medical experts. The potential of expert systems in medicine becomes widely recognized.
Dan Bricklin and Bob Frankston establish the personal computer as a serious business tool when they develop VisiCalc, the first electronic spreadsheet.
AI industry revenue is a few million dollars this year.
As neuron models are becoming potentially more sophisticated, the neural network paradigm begins to make a comeback, and networks with multiple layers are commonly used.
Xerox introduces the Star Computer, thus launching the concept of desktop publishing. Apple’s LaserWriter, available in 1985, will further increase the viability of this inexpensive and efficient way for writers and artists to create their own finished documents.
IBM introduces its Personal Computer (PC).
The prototype of the Bubble Jet printer is presented by Canon.
Compact disc players are marketed for the first time.
Mitch Kapor presents Lotus 1-2-3, an enormously popular spreadsheet program.
Fax machines are fast becoming a necessity in the business world.
The Musical Instrument Digital Interface (MIDI) is presented in Los Angeles at the first North American Music Manufacturers show.
Six million personal computers are sold in the United States.
The Apple Macintosh introduces the “desktop metaphor,” pioneered at Xerox, including bit-mapped graphics, icons, and the mouse.
William Gibson uses the term cyberspace in his book Neuromancer.
The Kurzweil 250 (K250) synthesizer, considered to be the first electronic instrument to successfully emulate the sounds of acoustic instruments, is introduced to the market.
Marvin Minsky publishes The Society of Mind, in which he presents a theory of the mind where intelligence is seen to be the result of proper organization of a hierarchy of minds with simple mechanisms at the lowest level of the hierarchy.
MIT’s Media Laboratory is founded by Jerome Wiesner and Nicholas Negroponte. The lab is dedicated to researching possible applications and interactions of computer science, sociology, and artificial intelligence in the context of media technology.
There are 116 million jobs in the United States, compared to 12 million in 1870. Over the same period, the share of the population that is employed has grown from 31 percent to 48 percent, and the per capita GNP in constant dollars has increased by 600 percent. These trends show no signs of abating.
Electronic keyboards account for 55.2 percent of the American musical keyboard market, up from 9.5 percent in 1980.
Life expectancy is about 74 years in the United States. Only 3 percent of the American workforce is involved in the production of food. Fully 76 percent of American adults have high-school diplomas, and 7.3 million U.S. students are enrolled in college.
NYSE stocks have their greatest single-day loss due, in part, to computerized trading.
Current speech systems can provide any one of the following: a large vocabulary, continuous speech recognition, or speaker independence.
Robotic-vision systems are now a $300 million industry and will grow to $800 million by 1990.
Computer memory today costs only one hundred millionth of what it did in 1950.
Marvin Minsky and Seymour Papert publish a revised edition of Perceptrons in which they discuss recent developments in neural network machinery for intelligence.
In the United States, 4,700,000 microcomputers, 120,000 minicomputers, and 11,500 mainframes are sold this year.
W. Daniel Hillis’s Connection Machine is capable of 65,536 computations at the same time.
Notebook computers are replacing the bigger laptops in popularity.
Intel introduces the 16-megahertz (MHz) 80386SX, 2.5 MIPS microprocessor.
Nautilus, the first CD-ROM magazine, is published.
The development of HyperText Markup Language by researcher Tim Berners-Lee and its release by CERN, the high-energy physics laboratory in Geneva, Switzerland, leads to the conception of the World Wide Web.
Cell phones and e-mail are increasing in popularity as business and personal communication tools.
The first double-speed CD-ROM drive becomes available from NEC.
The first personal digital assistant (PDA), a hand-held computer, is introduced at the Consumer Electronics Show in Chicago. The developer is Apple Computer.
The Pentium 32-bit microprocessor is launched by Intel. This chip has 3.1 million transistors.
The World Wide Web emerges.
America Online now has more than 1 million subscribers.
Scanners and CD-ROMs are becoming widely used.
Digital Equipment Corporation introduces a 300-MHz version of the Alpha AXP processor that executes 1 billion instructions per second.
Compaq Computer and NEC Computer Systems ship hand-held computers running Windows CE.
NEC Electronics ships the R4101 processor for personal digital assistants. It includes a touch-screen interface.
Deep Blue defeats Garry Kasparov, the world chess champion, in a regulation match.
Dragon Systems introduces Naturally Speaking, the first continuous-speech dictation software product.
Video phones are being used in business settings.
Face-recognition systems are beginning to be used in payroll check-cashing machines.
The Dictation Division of Lernout & Hauspie Speech Products (formerly Kurzweil Applied Intelligence) introduces Voice Xpress Plus, the first continuous-speech-recognition program with the ability to understand natural-language commands.
Routine business transactions over the phone are beginning to be conducted between a human customer and an automated system that engages in a verbal dialogue with the customer (e.g., United Airlines reservations).
Investment funds are emerging that use evolutionary algorithms and neural nets to make investment decisions (e.g., Advanced Investment Technologies).
The World Wide Web is ubiquitous. It is routine for high-school students and local grocery stores to have web sites.
Automated personalities, which appear as animated faces that speak with realistic mouth movements and facial expressions, are working in laboratories. These personalities respond to the spoken statements and facial expressions of their human users. They are being developed to be used in future user interfaces for products and services, as personalized research and business assistants, and to conduct transactions.
Microvision’s Virtual Retina Display (VRD) projects images directly onto the user’s retinas. Although expensive, consumer versions are projected for 1999.
“Bluetooth” technology is being developed for “body” local area networks (LANs) and for wireless communication between personal computers and associated peripherals. Wireless communication is being developed for high-bandwidth connection to the Web.
Ray Kurzweil’s The Age of Spiritual Machines: When Computers Exceed Human Intelligence is published, available at your local bookstore!
2009
A $1,000 personal computer can perform about a trillion calculations per second.
Personal computers with high-resolution visual displays come in a range of sizes, from those small enough to be embedded in clothing and jewelry up to the size of a thin book.
Cables are disappearing. Communication between components uses short-distance wireless technology. High-speed wireless communication provides access to the Web.
The majority of text is created using continuous speech recognition. Also ubiquitous are language user interfaces (LUIs).
Most routine business transactions (purchases, travel, reservations) take place between a human and a virtual personality. Often, the virtual personality includes an animated visual presence that looks like a human face.
Although traditional classroom organization is still common, intelligent courseware has emerged as a common means of learning.
Pocket-sized reading machines for the blind and visually impaired, “listening machines” (speech-to-text conversion) for the deaf, and computer-controlled orthotic devices for paraplegic individuals result in a growing perception that primary disabilities do not necessarily impart handicaps.
Translating telephones (speech-to-speech language translation) are commonly used for many language pairs.
Accelerating returns from the advance of computer technology have resulted in continued economic expansion. Price deflation, which had been a reality in the computer field during the twentieth century, is now occurring outside the computer field. The reason for this is that virtually all economic sectors are deeply affected by the accelerating improvement in the price performance of computing.
Human musicians routinely jam with cybernetic musicians.
Bioengineered treatments for cancer and heart disease have greatly reduced the mortality from these diseases.
The neo-Luddite movement is growing.
2019
A $1,000 computing device (in 1999 dollars) is now approximately equal to the computational ability of the human brain.
Computers are now largely invisible and are embedded everywhere: in walls, tables, chairs, desks, clothing, jewelry, and bodies.
Three-dimensional virtual reality displays, embedded in glasses and contact lenses, as well as auditory “lenses,” are used routinely as primary interfaces for communication with other persons, computers, the Web, and virtual reality.
Most interaction with computing is through gestures and two-way natural-language spoken communication.
Nanoengineered machines are beginning to be applied to manufacturing and process-control applications.
High-resolution, three-dimensional visual and auditory virtual reality and realistic all-encompassing tactile environments enable people to do virtually anything with anybody, regardless of physical proximity.
Paper books and documents are rarely used, and most learning is conducted through intelligent, simulated software-based teachers.
Blind persons routinely use eyeglass-mounted reading-navigation systems. Deaf persons read what other people are saying through their lens displays. Paraplegic and some quadriplegic persons routinely walk and climb stairs through a combination of computer-controlled nerve stimulation and exoskeletal robotic devices.
The vast majority of transactions include a simulated person.
Automated driving systems are now installed in most roads.
People are beginning to have relationships with automated personalities and use them as companions, teachers, caretakers, and lovers.
Virtual artists, with their own reputations, are emerging in all of the arts.
There are widespread reports of computers passing the Turing Test, although these tests do not meet the criteria established by knowledgeable observers.
2029
A $1,000 (in 1999 dollars) unit of computation has the computing capacity of approximately 1,000 human brains.
Permanent or removable implants (similar to contact lenses) for the eyes, as well as cochlear implants, are now used to provide input and output between the human user and the worldwide computing network.
Direct neural pathways have been perfected for high-bandwidth connection to the human brain. A range of neural implants is becoming available to enhance visual and auditory perception and interpretation, memory, and reasoning.
Automated agents are now learning on their own, and significant knowledge is being created by machines with little or no human intervention. Computers have read all available human- and machine-generated literature and multimedia material.
There is widespread use of all-encompassing visual, auditory, and tactile communication using direct neural connections, allowing virtual reality to take place without having to be in a “total touch enclosure.”
The majority of communication does not involve a human. The majority of communication involving a human is between a human and a machine.
There is almost no human employment in production, agriculture, or transportation. Basic life needs are available for the vast majority of the human race.
There is a growing discussion about the legal rights of computers and what constitutes being “human.”
Although computers routinely pass apparently valid forms of the Turing Test, controversy persists about whether or not machine intelligence equals human intelligence in all of its diversity.
Machines claim to be conscious. These claims are largely accepted.
2049
The common use of nanoproduced food, which has the correct nutritional composition and the same taste and texture as organically produced food, means that the availability of food is no longer affected by limited resources, bad crop weather, or spoilage.
Nanobot swarm projections are used to create visual-auditory-tactile projections of people and objects in real reality.
2072
Picoengineering (developing technology at the scale of picometers, or trillionths of a meter) becomes practical.
By the year 2099
There is a strong trend toward a merger of human thinking with the world of machine intelligence that the human species initially created. There is no longer any clear distinction between humans and computers.
Most conscious entities do not have a permanent physical presence.
Machine-based intelligences derived from extended models of human intelligence claim to be human, although their brains are not based on carbon-based cellular processes, but rather electronic and photonic equivalents. Most of these intelligences are not tied to a specific computational processing unit. The number of software-based humans vastly exceeds those still using native neuron-cell-based computation.
Even among those human intelligences still using carbon-based neurons, there is ubiquitous use of neural-implant technology, which provides enormous augmentation of human perceptual and cognitive abilities. Humans who do not utilize such implants are unable to meaningfully participate in dialogues with those who do.
Because most information is published using standard assimilated knowledge protocols, information can be instantly understood. The goal of education, and of intelligent beings, is discovering new knowledge to learn.
Femtoengineering (engineering at the scale of femtometers, or one thousandth of a trillionth of a meter) proposals are controversial.
Life expectancy is no longer a viable term in relation to intelligent beings.
Some many millenniums hence…
Intelligent beings consider the fate of the Universe.