Tag Archives: TIMS

Voice Recognition – Update from the Economist

Excellent commentary in the Economist on the state of the art of voice recognition. The entire article is below.

I think it’s fair to say, as the article does, that “we’re in 1994 for voice.” In other words, just as the internet’s core technology was in place in 1994 before anyone really had a clue what it would ultimately mean for society, voice is at a similar point today.

My guess is … it will be a game-changer of the first order.

ECHO, SIRI, CORTANA – the beginning of a new era!

Just as the GUI, the mouse, and WINDOWS allowed computers to go mainstream, my instinct is that removing the keyboard as a requirement will take the computer from a daily tool to a second-by-second tool. The Apple Watch, which looks rather benign right now, could easily become the central means of communication. And the hard-to-use keyboard on the iPhone will increasingly become a white elephant – rarely used and quaint.

==================

CREDIT: http://www.economist.com/technology-quarterly/2017-05-01/language

TECHNOLOGY QUARTERLY
FINDING A VOICE

Language: Finding a voice
Computers have got much better at translation, voice recognition and speech synthesis, says Lane Greene. But they still don’t understand the meaning of language.

“I’M SORRY, Dave. I’m afraid I can’t do that.” With chilling calm, HAL 9000, the on-board computer in “2001: A Space Odyssey”, refuses to open the doors to Dave Bowman, an astronaut who had ventured outside the ship. HAL’s decision to turn on his human companion reflected a wave of fear about intelligent computers.

When the film came out in 1968, computers that could have proper conversations with humans seemed nearly as far away as manned flight to Jupiter. Since then, humankind has progressed quite a lot farther with building machines that it can talk to, and that can respond with something resembling natural speech. Even so, communication remains difficult. If “2001” had been made to reflect the state of today’s language technology, the conversation might have gone something like this: “Open the pod bay doors, Hal.” “I’m sorry, Dave. I didn’t understand the question.” “Open the pod bay doors, Hal.” “I have a list of eBay results about pod doors, Dave.”

Creative and truly conversational computers able to handle the unexpected are still far off. Artificial-intelligence (AI) researchers can only laugh when asked about the prospect of an intelligent HAL, Terminator or Rosie (the sassy robot housekeeper in “The Jetsons”). Yet although language technologies are nowhere near ready to replace human beings, except in a few highly routine tasks, they are at last about to become good enough to be taken seriously. They can help people spend more time doing interesting things that only humans can do. After six decades of work, much of it with disappointing outcomes, the past few years have produced results much closer to what early pioneers had hoped for.

Speech recognition has made remarkable advances. Machine translation, too, has gone from terrible to usable for getting the gist of a text, and may soon be good enough to require only modest editing by humans. Computerised personal assistants, such as Apple’s Siri, Amazon’s Alexa, Google Now and Microsoft’s Cortana, can now take a wide variety of questions, structured in many different ways, and return accurate and useful answers in a natural-sounding voice. Alexa can even respond to a request to “tell me a joke”, but only by calling upon a database of corny quips. Computers lack a sense of humour.

When Apple introduced Siri in 2011 it was frustrating to use, so many people gave up. Only around a third of smartphone owners use their personal assistants regularly, even though 95% have tried them at some point, according to Creative Strategies, a consultancy. Many of those discouraged users may not realise how much these assistants have improved.

In 1966 John Pierce was working at Bell Labs, the research arm of America’s telephone monopoly. Having overseen the team that had built the first transistor and the first communications satellite, he enjoyed a sterling reputation, so he was asked to take charge of a report on the state of automatic language processing for the National Academy of Sciences. In the period leading up to this, scholars had been promising automatic translation between languages within a few years.

But the report was scathing. Reviewing almost a decade of work on machine translation and automatic speech recognition, it concluded that the time had come to spend money “hard-headedly toward important, realistic and relatively short-range goals”—another way of saying that language-technology research had overpromised and underdelivered. In 1969 Pierce wrote that both the funders and eager researchers had often fooled themselves, and that “no simple, clear, sure knowledge is gained.” After that, America’s government largely closed the money tap, and research on language technology went into hibernation for two decades.

The story of how it emerged from that hibernation is both salutary and surprisingly workaday, says Mark Liberman. As professor of linguistics at the University of Pennsylvania and head of the Linguistic Data Consortium, a huge trove of texts and recordings of human language, he knows a thing or two about the history of language technology. In the bad old days researchers kept their methods in the dark and described their results in ways that were hard to evaluate. But beginning in the 1980s, Charles Wayne, then at America’s Defence Advanced Research Projects Agency, encouraged them to try another approach: the “common task”.

Step by step
Researchers would agree on a common set of practices, whether they were trying to teach computers speech recognition, speaker identification, sentiment analysis of texts, grammatical breakdown, language identification, handwriting recognition or anything else. They would set out the metrics they were aiming to improve on, share the data sets used to train their software and allow their results to be tested by neutral outsiders. That made the process far more transparent. Funding started up again and language technologies began to improve, though very slowly.

Many early approaches to language technology—and particularly translation—got stuck in a conceptual cul-de-sac: the rules-based approach. In translation, this meant trying to write rules to analyse the text of a sentence in the language of origin, breaking it down into a sort of abstract “interlanguage” and rebuilding it according to the rules of the target language. These approaches showed early promise. But language is riddled with ambiguities and exceptions, so such systems were hugely complicated and easily broke down when tested on sentences beyond the simple set they had been designed for.

Nearly all language technologies began to get a lot better with the application of statistical methods, often called a “brute force” approach. This relies on software scouring vast amounts of data, looking for patterns and learning from precedent. For example, in parsing language (breaking it down into its grammatical components), the software learns from large bodies of text that have already been parsed by humans. It uses what it has learned to make its best guess about a previously unseen text. In machine translation, the software scans millions of words already translated by humans, again looking for patterns. In speech recognition, the software learns from a body of recordings and the transcriptions made by humans.

Thanks to the growing power of processors, falling prices for data storage and, most crucially, the explosion in available data, this approach eventually bore fruit. Mathematical techniques that had been known for decades came into their own, and big companies with access to enormous amounts of data were poised to benefit. People who had been put off by the hilariously inappropriate translations offered by online tools like BabelFish began to have more faith in Google Translate. Apple persuaded millions of iPhone users to talk not only on their phones but to them.

The final advance, which began only about five years ago, came with the advent of deep learning through digital neural networks (DNNs). These are often touted as having qualities similar to those of the human brain: “neurons” are connected in software, and connections can become stronger or weaker in the process of learning.
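
As a concrete (if toy) picture of the statistical “learning from precedent” idea, the sketch below counts word co-occurrences in a tiny invented English–French parallel corpus and uses those counts to guess single-word translations. The sentence pairs, the frequency-normalisation trick and the guess_translation helper are all assumptions made up for the example, not any production system.

```python
# Toy illustration of statistical "learning from precedent": count how often
# each English word co-occurs with each French word in a tiny parallel corpus,
# then guess a word's translation from those counts. The corpus is invented.
from collections import Counter, defaultdict

parallel_corpus = [
    ("the house", "la maison"),
    ("the blue house", "la maison bleue"),
    ("the car", "la voiture"),
    ("the blue car", "la voiture bleue"),
]

cooc = defaultdict(Counter)   # cooc[english_word][french_word] = count
french_totals = Counter()     # how often each French word appears overall

for english, french in parallel_corpus:
    french_words = french.split()
    french_totals.update(french_words)
    for e in english.split():
        cooc[e].update(french_words)

def guess_translation(english_word):
    """Pick the French word whose co-occurrence count is highest relative to
    its overall frequency, so very common words like 'la' are not always chosen."""
    candidates = cooc[english_word]
    if not candidates:
        return None
    return max(candidates, key=lambda f: candidates[f] / french_totals[f])

print(guess_translation("house"))  # maison
print(guess_translation("blue"))   # bleue
```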

But Nils Lenke, head of research for Nuance, a language-technology company, explains matter-of-factly that “DNNs are just another kind of mathematical model,” the basis of which had been well understood for decades. What changed was the hardware being used. Almost by chance, DNN researchers discovered that the graphics processing units (GPUs) used to render graphics fluidly in applications like video games were also brilliant at handling neural networks. In computer graphics, basic small shapes move according to fairly simple rules, but there are lots of shapes and many rules, requiring vast numbers of simple calculations. The same GPUs are used to fine-tune the weights assigned to “neurons” in DNNs as they scour data to learn. The technique has already produced big leaps in quality for all kinds of deep learning, including deciphering handwriting, recognising faces and classifying images. Now these networks are helping to improve all manner of language technologies, often bringing enhancements of up to 30%. That has shifted language technology from usable at a pinch to really rather good. But so far no one has quite worked out what will move it on from merely good to reliably great.

Speech recognition: I hear you
Computers have made huge strides in understanding human speech

WHEN a person speaks, air is forced out through the lungs, making the vocal cords vibrate, which sends out characteristic wave patterns through the air. The features of the sounds depend on the arrangement of the vocal organs, especially the tongue and the lips, and the characteristic nature of the sounds comes from peaks of energy in certain frequencies. The vowels have frequencies called “formants”, two of which are usually enough to differentiate one vowel from another. For example, the vowel in the English word “fleece” has its first two formants at around 300Hz and 3,000Hz. Consonants have their own characteristic features.
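
As a toy illustration of formants (my own sketch, not from the article), the NumPy snippet below builds half a second of a vowel-like signal from two sine waves at roughly the frequencies quoted for “fleece”, then recovers those two energy peaks with a Fourier transform. The sample rate and amplitudes are arbitrary choices.

```python
# Toy sketch: a vowel-like sound as energy peaks at two formant frequencies.
# The 300Hz / 3,000Hz values follow the article's example for "fleece"; real
# vowels have much richer spectra, so this is only an illustration.
import numpy as np

sample_rate = 16_000                 # samples per second
n = 8_000                            # half a second of audio
t = np.arange(n) / sample_rate       # time points

f1, f2 = 300, 3_000                  # first two formants (Hz)
signal = np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)

# A Fourier transform recovers the energy peaks a recogniser would look for.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(n, d=1 / sample_rate)
peaks = sorted(round(float(f)) for f in freqs[np.argsort(spectrum)[-2:]])
print(peaks)                         # [300, 3000]
```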

In principle, it should be easy to turn this stream of sound into transcribed speech. As in other language technologies, machines that recognise speech are trained on data gathered earlier. In this instance, the training data are sound recordings transcribed to text by humans, so that the software has both a sound and a text input. All it has to do is match the two. It gets better and better at working out how to transcribe a given chunk of sound in the same way as humans did in the training data. The traditional matching approach was a statistical technique called a hidden Markov model (HMM), making guesses based on what was done before. More recently speech recognition has also gained from deep learning.
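
A hidden Markov model can be sketched at toy scale: hidden phoneme states, a handful of made-up acoustic symbols as observations, and the standard Viterbi recursion to recover the likeliest state sequence. Every probability below is invented purely for illustration; real acoustic models use thousands of context-dependent states trained on huge corpora.

```python
# Toy HMM decoder: given a sequence of (made-up) acoustic symbols, find the
# most probable sequence of hidden phoneme states with the Viterbi algorithm.
# Every probability here is invented purely for illustration.

states = ["s", "p", "i", "n"]
start_p = {"s": 0.7, "p": 0.1, "i": 0.1, "n": 0.1}
trans_p = {                     # P(next phoneme | current phoneme)
    "s": {"s": 0.1, "p": 0.7, "i": 0.1, "n": 0.1},
    "p": {"s": 0.1, "p": 0.1, "i": 0.7, "n": 0.1},
    "i": {"s": 0.1, "p": 0.1, "i": 0.1, "n": 0.7},
    "n": {"s": 0.25, "p": 0.25, "i": 0.25, "n": 0.25},
}
emit_p = {                      # P(acoustic symbol | phoneme)
    "s": {"hiss": 0.8, "burst": 0.1, "voiced": 0.1},
    "p": {"hiss": 0.1, "burst": 0.8, "voiced": 0.1},
    "i": {"hiss": 0.05, "burst": 0.05, "voiced": 0.9},
    "n": {"hiss": 0.1, "burst": 0.1, "voiced": 0.8},
}

def viterbi(observations):
    """Return the most likely hidden-state path for the observation sequence."""
    # best[t][state] = (probability of best path ending in state, previous state)
    best = [{s: (start_p[s] * emit_p[s][observations[0]], None) for s in states}]
    for obs in observations[1:]:
        best.append({
            s: max(
                (best[-1][prev][0] * trans_p[prev][s] * emit_p[s][obs], prev)
                for prev in states
            )
            for s in states
        })
    # Trace back from the most probable final state.
    path = [max(states, key=lambda s: best[-1][s][0])]
    for column in reversed(best[1:]):
        path.append(column[path[-1]][1])
    return list(reversed(path))

print(viterbi(["hiss", "burst", "voiced", "voiced"]))  # ['s', 'p', 'i', 'n']
```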

English has about 44 “phonemes”, the units that make up the sound system of a language. P and b are different phonemes, because they distinguish words like pat and bat. But in English p with a puff of air, as in “party”, and p without a puff of air, as in “spin”, are not different phonemes, though they are in other languages. If a computer hears the phonemes s, p, i and n back to back, it should be able to recognise the word “spin”.

But the nature of live speech makes this difficult for machines. Sounds are not pronounced individually, one phoneme after the other; they mostly come in a constant stream, and finding the boundaries is not easy. Phonemes also differ according to the context. (Compare the l sound at the beginning of “light” with that at the end of “full”.)

Speakers differ in timbre and pitch of voice, and in accent. Conversation is far less clear than careful dictation. People stop and restart much more often than they realise.
All the same, technology has gradually mitigated many of these problems, so error rates in speech-recognition software have fallen steadily over the years—and then sharply with the introduction of deep learning. Microphones have got better and cheaper. With ubiquitous wireless internet, speech recordings can easily be beamed to computers in the cloud for analysis, and even smartphones now often have computers powerful enough to carry out this task.

Bear arms or bare arms?
Perhaps the most important feature of a speech-recognition system is its set of expectations about what someone is likely to say, or its “language model”. Like other training data, the language models are based on large amounts of real human speech, transcribed into text. When a speech-recognition system “hears” a stream of sound, it makes a number of guesses about what has been said, then calculates the odds that it has found the right one, based on the kinds of words, phrases and clauses it has seen earlier in the training text.

At the level of phonemes, each language has strings that are permitted (in English, a word may begin with str-, for example) or banned (an English word cannot start with tsr-). The same goes for words. Some strings of words are more common than others. For example, “the” is far more likely to be followed by a noun or an adjective than by a verb or an adverb. In making guesses about homophones, the computer will have remembered that in its training data the phrase “the right to bear arms” came up much more often than “the right to bare arms”, and will thus have made the right guess.
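
The “bear arms” example boils down to a few lines of code. The sketch below (an illustration of the idea, not how any production recogniser is built) counts trigrams in a scrap of invented training text and picks whichever candidate transcription those counts favour.

```python
# Toy homophone disambiguation: choose between candidate transcriptions by
# how often their word sequences appeared in (invented) training text.
from collections import Counter

training_text = (
    "the right to bear arms shall not be infringed . "
    "citizens have the right to bear arms . "
    "she wanted to bare her arms in summer ."
).split()

trigram_counts = Counter(zip(training_text, training_text[1:], training_text[2:]))

def score(phrase):
    """Sum of training counts for the phrase's trigrams (a crude language model)."""
    words = phrase.split()
    return sum(trigram_counts[t] for t in zip(words, words[1:], words[2:]))

candidates = ["the right to bear arms", "the right to bare arms"]
print(max(candidates, key=score))   # the right to bear arms
```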

Training on a specific speaker greatly cuts down on the software’s guesswork. Just a few minutes of reading training text into software like Dragon Dictate, made by Nuance, produces a big jump in accuracy. For those willing to train the software for longer, the improvement continues to something close to 99% accuracy (meaning that of each hundred words of text, not more than one is wrongly added, omitted or changed). A good microphone and a quiet room help.

Advance knowledge of what kinds of things the speaker might be talking about also increases accuracy. Words like “phlebitis” and “gastrointestinal” are not common in general discourse, and uncommon words are ranked lower in the probability tables the software uses to guess what it has heard. But these words are common in medicine, so creating software trained to look out for such words considerably improves the result. This can be done by feeding the system a large number of documents written by the speaker whose voice is to be recognised; common words and phrases can be extracted to improve the system’s guesses.
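
One crude way to picture this kind of adaptation (purely a sketch, not Nuance’s or anyone else’s product) is to interpolate general-English word frequencies with frequencies drawn from the speaker’s own documents, so that words that are rare in general but common for this speaker stop being heavily penalised. All counts below are invented.

```python
# Toy vocabulary adaptation: blend general word frequencies with frequencies
# from one speaker's own documents, so that a domain word such as "phlebitis"
# is no longer treated as wildly improbable. All counts are invented.
from collections import Counter

general_counts = Counter({"the": 50_000, "fell": 120, "bite": 80, "us": 900,
                          "phlebitis": 1})
doctor_documents = "patient presents with phlebitis phlebitis treated with rest".split()
domain_counts = Counter(doctor_documents)

def adapted_probability(word, weight=0.5):
    """Interpolate general and speaker-specific relative frequencies."""
    general = general_counts[word] / sum(general_counts.values())
    domain = domain_counts[word] / max(sum(domain_counts.values()), 1)
    return (1 - weight) * general + weight * domain

# After adaptation the medical term outranks a similar-sounding common word.
print(adapted_probability("phlebitis") > adapted_probability("fell"))  # True
```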

As with all other areas of language technology, deep learning has sharply brought down error rates. In October Microsoft announced that its latest speech-recognition system had achieved parity with human transcribers in recognising the speech in the Switchboard Corpus, a collection of thousands of recorded conversations in which participants are talking with a stranger about a randomly chosen subject.

Error rates on the Switchboard Corpus are a widely used benchmark, so claims of quality improvements can be easily compared. Fifteen years ago quality had stalled, with word-error rates of 20-30%. Microsoft’s latest system, which has six neural networks running in parallel, has reached 5.9% (see chart), the same as a human transcriber’s. Xuedong Huang, Microsoft’s chief speech scientist, says that he expected it to take two or three years to reach parity with humans. It got there in less than one.

The improvements in the lab are now being applied to products in the real world. More and more cars are being fitted with voice-activated controls of various kinds; the vocabulary involved is limited (there are only so many things you might want to say to your car), which ensures high accuracy. Microphones—or often arrays of microphones with narrow fields of pick-up—are getting better at identifying the relevant speaker among a group.

Some problems remain. Children and elderly speakers, as well as people moving around in a room, are harder to understand. Background noise remains a big concern; if it is different from that in the training data, the software finds it harder to generalise from what it has learned. So Microsoft, for example, offers businesses a product called CRIS that lets users customise speech-recognition systems for the background noise, special vocabulary and other idiosyncrasies they will encounter in that particular environment. That could be useful anywhere from a noisy factory floor to a care home for the elderly.

But for a computer to know what a human has said is only a beginning. Proper interaction between the two, of the kind that comes up in almost every science-fiction story, calls for machines that can speak back.

Hasta la vista, robot voice
Machines are starting to sound more like humans
“I’LL be back.” “Hasta la vista, baby.” Arnold Schwarzenegger’s Teutonic drone in the “Terminator” films is world-famous. But in this instance film-makers looking into the future were overly pessimistic. Some applications do still feature a monotonous “robot voice”, but that is changing fast.

[Audio examples omitted: a basic and an advanced sample from the OSX speech synthesiser, and a sample from Amazon’s “Polly” synthesiser.]

Creating speech is roughly the inverse of understanding it. Again, it requires a basic model of the structure of speech. What are the sounds in a language, and how do they combine? What words does it have, and how do they combine in sentences? These are well-understood questions, and most systems can now generate sound waves that are a fair approximation of human speech, at least in short bursts.
Heteronyms require special care. How should a computer pronounce a word like “lead”, which can be a present-tense verb or a noun for a heavy metal, pronounced quite differently? Once again a language model can make accurate guesses: “Lead us not into temptation” can be parsed for its syntax, and once the software has worked out that the first word is almost certainly a verb, it can cause it to be pronounced to rhyme with “reed”, not “red”.
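
A minimal sketch of that heteronym rule follows, with a deliberately crude stand-in tagger in place of a real parser; the stub, the pronunciation table and the example sentences are all assumptions made for illustration.

```python
# Toy heteronym resolver: choose a pronunciation for "lead" from its part of
# speech. The tagger below is a deliberately crude stub; a real system would
# rely on a full statistical tagger or parser.

PRONUNCIATIONS = {
    ("lead", "VERB"): "/li:d/",   # rhymes with "reed"
    ("lead", "NOUN"): "/lɛd/",    # rhymes with "red"
}

def crude_tag(word, previous_word):
    """Stub tagger: 'lead' right after a determiner is a noun, otherwise a verb."""
    if word == "lead" and previous_word in {"the", "a", "some"}:
        return "NOUN"
    return "VERB"

def pronounce(sentence):
    words = sentence.lower().split()
    out = []
    for i, word in enumerate(words):
        tag = crude_tag(word, words[i - 1] if i else None)
        out.append(PRONUNCIATIONS.get((word, tag), word))
    return " ".join(out)

print(pronounce("Lead us not into temptation"))   # /li:d/ us not into temptation
print(pronounce("The lead pipe was heavy"))       # the /lɛd/ pipe was heavy
```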

Traditionally, text-to-speech models have been “concatenative”, consisting of very short segments recorded by a human and then strung together as in the acoustic model described above. More recently, “parametric” models have been generating raw audio without the need to record a human voice, which makes these systems more flexible but less natural-sounding.

DeepMind, an artificial-intelligence company bought by Google in 2014, has announced a new way of synthesising speech, again using deep neural networks. The network is trained on recordings of people talking, and on the texts that match what they say. Given a text to reproduce as speech, it churns out a far more fluent and natural-sounding voice than the best concatenative and parametric approaches.

The last step in generating speech is giving it prosody—generally, the modulation of speed, pitch and volume to convey an extra (and critical) channel of meaning. In English, “a German teacher”, with the stress on “teacher”, can teach anything but must be German. But “a German teacher” with the emphasis on “German” is usually a teacher of German (and need not be German). Words like prepositions and conjunctions are not usually stressed. Getting machines to put the stresses in the correct places is about 50% solved, says Mark Liberman of the University of Pennsylvania.

Many applications do not require perfect prosody. A satellite-navigation system giving instructions on where to turn uses just a small number of sentence patterns, and prosody is not important. The same goes for most single-sentence responses given by a virtual assistant on a smartphone.

But prosody matters when someone is telling a story. Pitch, speed and volume can be used to pass quickly over things that are already known, or to build interest and tension for new information. Myriad tiny clues communicate the speaker’s attitude to his subject. The phrase “a German teacher”, with stress on the word “German”, may, in the context of a story, not be a teacher of German, but a teacher being explicitly contrasted with a teacher who happens to be French or British.

Text-to-speech engines are not much good at using context to provide such accentuation, and where they do, it rarely extends beyond a single sentence. When Alexa, the assistant in Amazon’s Echo device, reads a news story, her prosody is jarringly un-humanlike. Talking computers have yet to learn how to make humans want to listen.

Machine translation: Beyond Babel
Computer translations have got strikingly better, but still need human input
IN “STAR TREK” it was a hand-held Universal Translator; in “The Hitchhiker’s Guide to the Galaxy” it was the Babel Fish popped conveniently into the ear. In science fiction, the meeting of distant civilisations generally requires some kind of device to allow them to talk. High-quality automated translation seems even more magical than other kinds of language technology because many humans struggle to speak more than one language, let alone translate from one to another.

The idea has been around since the 1950s, and computerised translation is still known by the quaint moniker “machine translation” (MT). It goes back to the early days of the cold war, when American scientists were trying to get computers to translate from Russian. They were inspired by the code-breaking successes of the second world war, which had led to the development of computers in the first place. To them, a scramble of Cyrillic letters on a page of Russian text was just a coded version of English, and turning it into English was just a question of breaking the code.

Scientists at IBM and Georgetown University were among those who thought that the problem would be cracked quickly. Having programmed just six rules and a vocabulary of 250 words into a computer, they gave a demonstration in New York on January 7th 1954 and proudly produced 60 automated translations, including that of “Mi pyeryedayem mislyi posryedstvom ryechyi,” which came out correctly as “We transmit thoughts by means of speech.” Leon Dostert of Georgetown, the lead scientist, breezily predicted that fully realised MT would be “an accomplished fact” in three to five years.

Instead, after more than a decade of work, the report in 1966 by a committee chaired by John Pierce, mentioned in the introduction to this report, recorded bitter disappointment with the results and urged researchers to focus on narrow, achievable goals such as automated dictionaries. Government-sponsored work on MT went into near-hibernation for two decades. What little was done was carried out by private companies. The most notable of them was Systran, which provided rough translations, mostly to America’s armed forces.
La plume de mon ordinateur
The scientists got bogged down by their rules-based approach. Having done relatively well with their six-rule system, they came to believe that if they programmed in more rules, the system would become more sophisticated and subtle. Instead, it became more likely to produce nonsense. Adding extra rules, in the modern parlance of software developers, did not “scale”.

Besides the difficulty of programming grammar’s many rules and exceptions, some early observers noted a conceptual problem. The meaning of a word often depends not just on its dictionary definition and the grammatical context but the meaning of the rest of the sentence. Yehoshua Bar-Hillel, an Israeli MT pioneer, realised that “the pen is in the box” and “the box is in the pen” would require different translations for “pen”: any pen big enough to hold a box would have to be an animal enclosure, not a writing instrument.

How could machines be taught enough rules to make this kind of distinction? They would have to be provided with some knowledge of the real world, a task far beyond the machines or their programmers at the time. Two decades later, IBM stumbled on an approach that would revive optimism about MT. Its Candide system was the first serious attempt to use statistical probabilities rather than rules devised by humans for translation. Statistical, “phrase-based” machine translation, like speech recognition, needed training data to learn from. Candide used Canada’s Hansard, which publishes that country’s parliamentary debates in French and English, providing a huge amount of data for that time. The phrase-based approach would ensure that the translation of a word would take the surrounding words properly into account.

But quality did not take a leap until Google, which had set itself the goal of indexing the entire internet, decided to use those data to train its translation engines; in 2007 it switched from a rules-based engine (provided by Systran) to its own statistics-based system. To build it, Google trawled about a trillion web pages, looking for any text that seemed to be a translation of another—for example, pages designed identically but with different words, and perhaps a hint such as the address of one page ending in /en and the other ending in /fr. According to Macduff Hughes, chief engineer on Google Translate, a simple approach using vast amounts of data seemed more promising than a clever one with fewer data.

Training on parallel texts (which linguists call corpora, the plural of corpus) creates a “translation model” that generates not one but a series of possible translations in the target language. The next step is running these possibilities through a monolingual language model in the target language. This is, in effect, a set of expectations about what a well-formed and typical sentence in the target language is likely to be. Single-language models are not too hard to build. (Parallel human-translated corpora are hard to come by; large amounts of monolingual training data are not.) As with the translation model, the language model uses a brute-force statistical approach to learn from the training data, then ranks the outputs from the translation model in order of plausibility.
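
The two-model pipeline can be sketched in a few lines: a translation model proposes several candidate English sentences (simply hard-coded here), and a monolingual bigram model trained on a scrap of English text ranks them by plausibility. Everything in the snippet is invented for illustration.

```python
# Toy two-stage translation: a translation model proposes candidate sentences
# (hard-coded here), and a monolingual language model ranks them by how
# familiar their word sequences look. All data are invented.
from collections import Counter

candidates = [
    "the cat sat on the mat",
    "the cat sat the mat on",
    "cat the on sat mat the",
]

english_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat slept on the mat ."
).split()
bigrams = Counter(zip(english_text, english_text[1:]))

def plausibility(sentence):
    """Score a candidate by how often its bigrams occurred in the training text."""
    words = sentence.split()
    return sum(bigrams[b] for b in zip(words, words[1:]))

print(max(candidates, key=plausibility))   # the cat sat on the mat
```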

Statistical machine translation rekindled optimism in the field. Internet users quickly discovered that Google Translate was far better than the rules-based online engines they had used before, such as BabelFish. Such systems still make mistakes—sometimes minor, sometimes hilarious, sometimes so serious or so many as to make nonsense of the result. And language pairs like Chinese-English, which are unrelated and structurally quite different, make accurate translation harder than pairs of related languages like English and German. But more often than not, Google Translate and its free online competitors, such as Microsoft’s Bing Translator, offer a usable approximation.

Such systems are set to get better, again with the help of deep learning from digital neural networks. The Association for Computational Linguistics has been holding workshops on MT every summer since 2006. One of the events is a competition between MT engines turned loose on a collection of news text. In August 2016, in Berlin, neural-net-based MT systems were the top performers (out of 102), a first.
Now Google has released its own neural-net-based engine for eight language pairs, closing much of the quality gap between its old system and a human translator.
This is especially true for closely related languages (like the big European ones) with lots of available training data. The results are still distinctly imperfect, but far smoother and more accurate than before. Translations between English and (say) Chinese and Korean are not as good yet, but the neural system has brought a clear improvement here too.

The Coca-Cola factor

Neural-network-based translation actually uses two networks. One is an encoder. Each word of an input sentence is converted into a multidimensional vector (a series of numerical values), and the encoding of each new word takes into account what has happened earlier in the sentence. Marcello Federico of Italy’s Fondazione Bruno Kessler, a private research organisation, uses an intriguing analogy to compare neural-net translation with the phrase-based kind. The latter, he says, is like describing Coca-Cola in terms of sugar, water, caffeine and other ingredients. By contrast, the former encodes features such as liquidness, darkness, sweetness and fizziness.
Once the source sentence is encoded, a decoder network generates a word-for-word translation, once again taking account of the immediately preceding word. This can cause problems when the meaning of words such as pronouns depends on words mentioned much earlier in a long sentence. This problem is mitigated by an “attention model”, which helps maintain focus on other words in the sentence outside the immediate context.
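
As a very small illustration of the encoder-decoder idea with attention, the NumPy sketch below stands in random vectors for learned word encodings and shows one decoder step computing dot-product attention weights over the encoder states. It is a shape-level sketch of the mechanism, not a trained model.

```python
# Minimal dot-product attention over encoder states. The vectors are random
# stand-ins for learned encodings, not a trained model.
import numpy as np

rng = np.random.default_rng(0)

source_words = ["le", "chat", "dort"]
encoder_states = rng.normal(size=(len(source_words), 8))   # one vector per word

def attention(decoder_state, encoder_states):
    """Return a weighted mix of encoder states plus the attention weights."""
    scores = encoder_states @ decoder_state      # dot-product scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax
    context = weights @ encoder_states           # weighted sum of encodings
    return context, weights

decoder_state = rng.normal(size=8)   # decoder state while emitting the next word
context, weights = attention(decoder_state, encoder_states)

for word, weight in zip(source_words, weights):
    print(f"{word}: {weight:.2f}")   # how much each source word is attended to
```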

Neural-network translation requires heavy-duty computing power, both for the original training of the system and in use. The heart of such a system can be the GPUs that made the deep-learning revolution possible, or specialised hardware like Google’s Tensor Processing Units (TPUs). Smaller translation companies and researchers usually rent this kind of processing power in the cloud. But the data sets used in neural-network training do not need to be as extensive as those for phrase-based systems, which should give smaller outfits a chance to compete with giants like Google.
Fully automated, high-quality machine translation is still a long way off. For now, several problems remain. All current machine translations proceed sentence by sentence. If the translation of such a sentence depends on the meaning of earlier ones, automated systems will make mistakes. Long sentences, despite tricks like the attention model, can be hard to translate. And neural-net-based systems in particular struggle with rare words.

Training data, too, are scarce for many language pairs. They are plentiful between European languages, since the European Union’s institutions churn out vast amounts of material translated by humans between the EU’s 24 official languages. But for smaller languages such resources are thin on the ground. For example, there are few Greek-Urdu parallel texts available on which to train a translation engine. So a system that claims to offer such translation is in fact usually running it through a bridging language, nearly always English. That involves two translations rather than one, multiplying the chance of errors.
Even if machine translation is not yet perfect, technology can already help humans translate much more quickly and accurately. “Translation memories”, software that stores already translated words and segments, first came into use as early as the 1980s. For someone who frequently translates the same kind of material (such as instruction manuals), they serve up the bits that have already been translated, saving lots of duplication and time.

A similar trick is to train MT engines on text dealing with a narrow real-world domain, such as medicine or the law. As software techniques are refined and computers get faster, training becomes easier and quicker. Free software such as Moses, developed with the support of the EU and used by some of its in-house translators, can be trained by anyone with parallel corpora to hand. A specialist in medical translation, for instance, can train the system on medical translations only, which makes them far more accurate.
At the other end of linguistic sophistication, an MT engine can be optimised for the shorter and simpler language people use in speech to spew out rough but near-instantaneous speech-to-speech translations. This is what Microsoft’s Skype Translator does. Its quality is improved by being trained on speech (things like film subtitles and common spoken phrases) rather than the kind of parallel text produced by the European Parliament.

Translation management has also benefited from innovation, with clever software allowing companies quickly to combine the best of MT, translation memory, customisation by the individual translator and so on. Translation-management software aims to cut out the agencies that have been acting as middlemen between clients and an army of freelance translators. Jack Welde, the founder of Smartling, an industry favourite, says that in future translation customers will choose how much human intervention is needed for a translation. A quick automated one will do for low-stakes content with a short life, but the most important content will still require a fully hand-crafted and edited version. Noting that MT has both determined boosters and committed detractors, Mr Welde says he is neither: “If you take a dogmatic stance, you’re not optimised for the needs of the customer.”

Translation software will go on getting better. Not only will engineers keep tweaking their statistical models and neural networks, but users themselves will make improvements to their own systems. For example, a small but much-admired startup, Lilt, uses phrase-based MT as the basis for a translation, but an easy-to-use interface allows the translator to correct and improve the MT system’s output. Every time this is done, the corrections are fed back into the translation engine, which learns and improves in real time. Users can build several different memories—a medical one, a financial one and so on—which will help with future translations in that specialist field.

TAUS, an industry group, recently issued a report on the state of the translation industry saying that “in the past few years the translation industry has burst with new tools, platforms and solutions.” Last year Jaap van der Meer, TAUS’s founder and director, wrote a provocative blogpost entitled “The Future Does Not Need Translators”, arguing that the quality of MT will keep improving, and that for many applications less-than-perfect translation will be good enough.

The “translator” of the future is likely to be more like a quality-control expert, deciding which texts need the most attention to detail and editing the output of MT software. That may be necessary because computers, no matter how sophisticated they have become, cannot yet truly grasp what a text means.

Meaning and machine intelligence: What are you talking about?
Machines cannot conduct proper conversations with humans because they do not understand the world

IN “BLACK MIRROR”, a British science-fiction satire series set in a dystopian near future, a young woman loses her boyfriend in a car accident. A friend offers to help her deal with her grief. The dead man was a keen social-media user, and his archived accounts can be used to recreate his personality. Before long she is messaging with a facsimile, then speaking to one. As the system learns to mimic him ever better, he becomes increasingly real.

This is not quite as bizarre as it sounds. Computers today can already produce an eerie echo of human language if fed with the appropriate material. What they cannot yet do is have true conversations. Truly robust interaction between man and machine would require a broad understanding of the world. In the absence of that, computers are not able to talk about a wide range of topics, follow long conversations or handle surprises.

Machines trained to do a narrow range of tasks, though, can perform surprisingly well. The most obvious examples are the digital assistants created by the technology giants. Users can ask them questions in a variety of natural ways: “What’s the temperature in London?” “How’s the weather outside?” “Is it going to be cold today?” The assistants know a few things about users, such as where they live and who their family are, so they can be personal, too: “How’s my commute looking?” “Text my wife I’ll be home in 15 minutes.”
And they get better with time. Apple’s Siri receives 2bn requests per week, which (after being anonymised) are used for further teaching. For example, Apple says Siri knows every possible way that users ask about a sports score. She also has a delightful answer for children who ask about Father Christmas. Microsoft learned from some of its previous natural-language platforms that about 10% of human interactions were “chitchat”, from “tell me a joke” to “who’s your daddy?”, and used such chat to teach its digital assistant, Cortana.

The writing team for Cortana includes two playwrights, a poet, a screenwriter and a novelist. Google hired writers from Pixar, an animated-film studio, and The Onion, a satirical newspaper, to make its new Google Assistant funnier. No wonder people often thank their digital helpers for a job well done. The assistants’ replies range from “My pleasure, as always” to “You don’t need to thank me.”
Good at grammar

How do natural-language platforms know what people want? They not only recognise the words a person uses, but break down speech for both grammar and meaning. Grammar parsing is relatively advanced; it is the domain of the well-established field of “natural-language processing”. But meaning comes under the heading of “natural-language understanding”, which is far harder.

First, parsing. Most people are not very good at analysing the syntax of sentences, but computers have become quite adept at it, even though most sentences are ambiguous in ways humans are rarely aware of. Take a sign on a public fountain that says, “This is not drinking water.” Humans understand it to mean that the water (“this”) is not a certain kind of water (“drinking water”). But a computer might just as easily parse it to say that “this” (the fountain) is not at present doing something (“drinking water”).

As sentences get longer, the number of grammatically possible but nonsensical options multiplies exponentially. How can a machine parser know which is the right one? It helps for it to know that some combinations of words are more common than others: the phrase “drinking water” is widely used, so parsers trained on large volumes of English will rate those two words as likely to be joined in a noun phrase. And some structures are more common than others: “noun verb noun noun” may be much more common than “noun noun verb noun”. A machine parser can compute the overall probability of all combinations and pick the likeliest.

A “lexicalised” parser might do even better. Take the Groucho Marx joke, “One morning I shot an elephant in my pyjamas. How he got in my pyjamas, I’ll never know.” The first sentence is ambiguous (which makes the joke)—grammatically both “I” and “an elephant” can attach to the prepositional phrase “in my pyjamas”. But a lexicalised parser would recognise that “I [verb phrase] in my pyjamas” is far more common than “elephant in my pyjamas”, and so assign that parse a higher probability.
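
The pyjamas joke can be turned into a toy attachment decision: look up, in a pretend table of hand-annotated counts, how often each head word has been seen governing the prepositional phrase, and attach the phrase to whichever head is more frequent. The counts are invented.

```python
# Toy lexicalised prepositional-phrase attachment: attach the phrase to the
# head word whose combination with the preposition and its object was seen
# more often in (pretend) hand-annotated training data.
from collections import Counter

attachment_counts = Counter({
    ("shot", "in", "pyjamas"): 3,
    ("slept", "in", "pyjamas"): 40,
    ("elephant", "in", "pyjamas"): 0,   # elephants in pyjamas are rare in corpora
    ("elephant", "in", "zoo"): 25,
})

def attach(verb_head, noun_head, preposition, pp_object):
    """Return the head the prepositional phrase more plausibly modifies."""
    verb_score = attachment_counts[(verb_head, preposition, pp_object)]
    noun_score = attachment_counts[(noun_head, preposition, pp_object)]
    return verb_head if verb_score >= noun_score else noun_head

print(attach("shot", "elephant", "in", "pyjamas"))   # shot (the speaker wore them)
print(attach("shot", "elephant", "in", "zoo"))       # elephant
```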

But meaning is harder to pin down than syntax. “The boy kicked the ball” and “The ball was kicked by the boy” have the same meaning but a different structure. “Time flies like an arrow” can mean either that time flies in the way that an arrow flies, or that insects called “time flies” are fond of an arrow.

“Who plays Thor in ‘Thor’?” Your correspondent could not remember the beefy Australian who played the eponymous Norse god in the Marvel superhero film. But when he asked his iPhone, Siri came up with an unexpected reply: “I don’t see any movies matching ‘Thor’ playing in Thor, IA, US, today.” Thor, Iowa, with a population of 184, was thousands of miles away, and “Thor”, the film, has been out of cinemas for years. Siri parsed the question perfectly properly, but the reply was absurd, violating the rules of what linguists call pragmatics: the shared knowledge and understanding that people use to make sense of the often messy human language they hear. “Can you reach the salt?” is not a request for information but for salt. Natural-language systems have to be manually programmed to handle such requests as humans expect them, and not literally.

Multiple choice
Shared information is also built up over the course of a conversation, which is why digital assistants can struggle with twists and turns in conversations. Tell an assistant, “I’d like to go to an Italian restaurant with my wife,” and it might suggest a restaurant. But then ask, “is it close to her office?”, and the assistant must grasp the meanings of “it” (the restaurant) and “her” (the wife), which it will find surprisingly tricky. Nuance, the language-technology firm, which provides natural-language platforms to many other companies, is working on a “concierge” that can handle this type of challenge, but it is still a prototype.
Such a concierge must also offer only restaurants that are open. Linking requests to common sense (knowing that no one wants to be sent to a closed restaurant), as well as a knowledge of the real world (knowing which restaurants are closed), is one of the most difficult challenges for language technologies.

Common sense, an old observation goes, is uncommon enough in humans. Programming it into computers is harder still. Fernando Pereira of Google points out why. Automated speech recognition and machine translation have something in common: there are huge stores of data (recordings and transcripts for speech recognition, parallel corpora for translation) that can be used to train machines. But there are no training data for common sense.

Brain scan: Terry Winograd
The Winograd Schema tests computers’ “understanding” of the real world

THE Turing Test was conceived as a way to judge whether true artificial intelligence has been achieved. If a computer can fool humans into thinking it is human, there is no reason, say its fans, to say the machine is not truly intelligent.
Few giants in computing stand with Turing in fame, but one has given his name to a similar challenge: Terry Winograd, a computer scientist at Stanford. In his doctoral dissertation Mr Winograd posed a riddle for computers: “The city councilmen refused the demonstrators a permit because they feared violence. Who feared violence?”

It is a perfect illustration of a well-recognised point: many things that are easy for humans are crushingly difficult for computers. Mr Winograd went into AI research in the 1960s and 1970s and developed an early natural-language program called SHRDLU that could take commands and answer questions about a group of shapes it could manipulate: “Find a block which is taller than the one you are holding and put it into the box.” This work brought a jolt of optimism to the AI crowd, but Mr Winograd later fell out with them, devoting himself not to making machines intelligent but to making them better at helping human beings. (These camps are sharply divided by philosophy and academic pride.) He taught Larry Page at Stanford, and after Mr Page went on to co-found Google, Mr Winograd became a guest researcher at the company, helping to build Gmail.

In 2011 Hector Levesque of the University of Toronto became annoyed by systems that “passed” the Turing Test by joking and avoiding direct answers. He later asked to borrow Mr Winograd’s name and the format of his dissertation’s puzzle to pose a more genuine test of machine “understanding”: the Winograd Schema. The answers to its battery of questions were obvious to humans but would require computers to have some reasoning ability and some knowledge of the real world. The first official Winograd Schema Challenge was held this year, with a $25,000 prize offered by Nuance, the language-software company, for a program that could answer more than 90% of the questions correctly. The best of them got just 58% right.
Though officially retired, Mr Winograd continues writing and researching. One of his students is working on an application for Google Glass, a computer with a display mounted on eyeglasses. The app would help people with autism by reading the facial expressions of conversation partners and giving the wearer information about their emotional state. It would allow him to integrate linguistic and non-linguistic information in a way that people with autism find difficult, as do computers.

Asked to trick some of the latest digital assistants, like Siri and Alexa, he asks them things like “Where can I find a nightclub my Methodist uncle would like?”, which requires knowledge about both nightclubs (which such systems have) and Methodist uncles (which they don’t). When he tried “Where did I leave my glasses?”, one of them came up with a link to a book of that name. None offered the obvious answer: “How would I know?”

Knowledge of the real world is another matter. AI has helped data-rich companies such as America’s West-Coast tech giants organise much of the world’s information into interactive databases such as Google’s Knowledge Graph. Some of the content of that appears in a box to the right of a Google page of search results for a famous figure or thing. It knows that Jacob Bernoulli studied at the University of Basel (as did other people, linked to Bernoulli through this node in the Graph) and wrote “On the Law of Large Numbers” (which it knows is a book).

Organising information this way is not difficult for a company with lots of data and good AI capabilities, but linking information to language is hard. Google touts its assistant’s ability to answer questions like “Who was president when the Rangers won the World Series?” But Mr Pereira concedes that this was the result of explicit training. Another such complex query—“What was the population of London when Samuel Johnson wrote his dictionary?”—would flummox the assistant, even though the Graph knows about things like the historical population of London and the date of Johnson’s dictionary. IBM’s Watson system, which in 2011 beat two human champions at the quiz show “Jeopardy!”, succeeded mainly by calculating huge numbers of potential answers based on key words by probability, not by a human-like understanding of the question.

Making real-world information computable is challenging, but it has inspired some creative approaches. Cortical.io, a Vienna-based startup, took hundreds of Wikipedia articles, cut them into thousands of small snippets of information and ran an “unsupervised” machine-learning algorithm over it that required the computer not to look for anything in particular but to find patterns. These patterns were then represented as a visual “semantic fingerprint” on a grid of 128×128 pixels. Clumps of pixels in similar places represented semantic similarity. This method can be used to disambiguate words with multiple meanings: the fingerprint of “organ” shares features with both “liver” and “piano” (because the word occurs with both in different parts of the training data). This might allow a natural-language system to distinguish between pianos and church organs on one hand, and livers and other internal organs on the other.
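
The fingerprint idea can be approximated in a few lines: represent each word by the set of snippets it occurs in and measure the overlap between those sets. This is a drastic simplification offered only as an illustration, with an invented micro-corpus, not a description of Cortical.io’s actual method.

```python
# Crude stand-in for "semantic fingerprints": each word is represented by the
# set of snippets it appears in, and overlap between those sets approximates
# similarity of meaning. The snippets are invented for illustration.

snippets = [
    "the liver is an internal organ that filters blood",
    "the surgeon examined the damaged organ before the transplant",
    "the church organ filled the hall with music",
    "she practised piano and organ every evening",
    "the pianist tuned the piano before the concert",
    "doctors monitored the patient's liver function",
]

def fingerprint(word):
    """Set of snippet indices in which the word occurs."""
    return {i for i, text in enumerate(snippets) if word in text.split()}

def overlap(word_a, word_b):
    """Jaccard overlap between two fingerprints (0 = unrelated, 1 = identical)."""
    a, b = fingerprint(word_a), fingerprint(word_b)
    return len(a & b) / len(a | b) if a | b else 0.0

# "organ" shares context with both "liver" and "piano"; those two share none.
print(overlap("organ", "liver"), overlap("organ", "piano"), overlap("liver", "piano"))
```
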
Proper conversation between humans and machines can be seen as a series of linked challenges: speech recognition, speech synthesis, syntactic analysis, semantic analysis, pragmatic understanding, dialogue, common sense and real-world knowledge. Because all the technologies have to work together, the chain as a whole is only as strong as its weakest link, and the first few of these are far better developed than the last few.

The hardest part is linking them together. Scientists do not know how the human brain draws on so many different kinds of knowledge at the same time. Programming a machine to replicate that feat is very much a work in progress.

Looking ahead: For my next trick
Talking machines are the new must-haves
IN “WALL-E”, an animated children’s film set in the future, all humankind lives on a spaceship after the Earth’s environment has been trashed. The humans are whisked around in intelligent hovering chairs; machines take care of their every need, so they are all morbidly obese. Even the ship’s captain is not really in charge; the actual pilot is an intelligent and malevolent talking robot, Auto, and like so many talking machines in science fiction, he eventually makes a grab for power.

Speech is quintessentially human, so it is hard to imagine machines that can truly speak conversationally as humans do without also imagining them to be superintelligent. And if they are superintelligent, with none of humans’ flaws, it is hard to imagine them not wanting to take over, not only for their good but for that of humanity. Even in a fairly benevolent future like “WALL-E’s”, where the machines are doing all the work, it is easy to see that the lack of anything challenging to do would be harmful to people.
Fortunately, the tasks that talking machines can take off humans’ to-do lists are the sort that many would happily give up. Machines are increasingly able to handle difficult but well-defined jobs. Soon all that their users will have to do is pipe up and ask them, using a naturally phrased voice command. Once upon a time, just one tinkerer in a given family knew how to work the computer or the video recorder. Then graphical interfaces (icons and a mouse) and touchscreens made such technology accessible to everyone. Frank Chen of Andreessen Horowitz, a venture-capital firm, sees natural-language interfaces between humans and machines as just another step in making information and services available to all. Silicon Valley, he says, is enjoying a golden age of AI technologies. Just as in the early 1990s companies were piling online and building websites without quite knowing why, now everyone is going for natural language. Yet, he adds, “we’re in 1994 for voice.”
1995 will soon come. This does not mean that people will communicate with their computers exclusively by talking to them. Websites did not make the telephone obsolete, and mobile devices did not make desktop computers obsolete. In the same way, people will continue to have a choice between voice and text when interacting with their machines.
Not all will choose voice. For example, in Japan yammering into a phone is not done in public, whether the interlocutor is a human or a digital assistant, so usage of Siri is low during business hours but high in the evening and at the weekend. For others, voice-enabled technology is an obvious boon. It allows dyslexic people to write without typing, and the very elderly may find it easier to talk than to type on a tiny keyboard. The very young, some of whom today learn to type before they can write, may soon learn to talk to machines before they can type.
Those with injuries or disabilities that make it hard for them to write will also benefit. Microsoft is justifiably proud of a new device that will allow people with amyotrophic lateral sclerosis (ALS), which immobilises nearly all of the body but leaves the mind working, to speak by using their eyes to pick letters on a screen. The critical part is predictive text, which improves as it gets used to a particular individual. An experienced user will be able to “speak” at around 15 words per minute.
People may even turn to machines for company. Microsoft’s Xiaoice, a chatbot launched in China, learns to come up with the responses that will keep a conversation going longest. Nobody would think it was human, but it does make users open up in surprising ways. Jibo, a new “social robot”, is intended to tell children stories, help far-flung relatives stay in touch and the like.

Another group that may benefit from technology is smaller language communities. Networked computers can encourage a winner-take-all effect: if there is a lot of good software and content in English and Chinese, smaller languages become less valuable online. If they are really tiny, their very survival may be at stake. But Ross Perlin of the Endangered Languages Alliance notes that new software allows researchers to document small languages more quickly than ever. With enough data comes the possibility of developing resources—from speech recognition to interfaces with software—for smaller and smaller languages. The Silicon Valley giants already localise their services in dozens of languages; neural networks and other software allow new versions to be generated faster and more efficiently than ever.

There are two big downsides to the rise in natural-language technologies: the implications for privacy, and the disruption it will bring to many jobs.

Increasingly, devices are always listening. Digital assistants like Alexa, Cortana, Siri and Google Assistant are programmed to wait for a prompt, such as “Hey, Siri” or “OK, Google”, to activate them. But allowing always-on microphones into people’s pockets and homes amounts to a further erosion of traditional expectations of privacy. The same might be said for all the ways in which language software improves by training on a single user’s voice, vocabulary, written documents and habits.

All the big companies’ location-based services—even the accelerometers in phones that detect small movements—are making ever-improving guesses about users’ wants and needs. The moment when a digital assistant surprises a user with “The chemist is nearby—do you want to buy more haemorrhoid cream, Steve?” could be when many may choose to reassess the trade-off between amazing new services and old-fashioned privacy. The tech companies can help by giving users more choice; the latest iPhone will not be activated when it is laid face down on a table. But hackers will inevitably find ways to get at some of these data.

The other big concern is for jobs. To the extent that they are routine, they face being automated away. A good example is customer support. When people contact a company for help, the initial encounter is usually highly scripted. A company employee will verify a customer’s identity and follow a decision-tree. Language technology is now mature enough to take on many of these tasks.

For a long transition period humans will still be needed, but the work they do will become less routine. Nuance, which sells lots of automated online and phone-based help systems, is bullish on voice biometrics (customers identifying themselves by saying “my voice is my password”). Using around 200 parameters for identifying a speaker, it is probably more secure than a fingerprint, says Brett Beranek, a senior manager at the company. It will also eliminate the tedium, for both customers and support workers, of going through multi-step identification procedures with PINs, passwords and security questions. When Barclays, a British bank, offered it to frequent users of customer-support services, 84% signed up within five months.

Digital assistants on personal smartphones can get away with mistakes, but for some business applications the tolerance for error is close to zero, notes Nikita Ivanov. His company, Datalingvo, a Silicon Valley startup, answers questions phrased in natural language about a company’s business data. If a user wants to know which online ads resulted in the most sales in California last month, the software automatically translates his typed question into a database query. But behind the scenes a human working for Datalingvo vets the query to make sure it is correct. This is because the stakes are high: the technology is bound to make mistakes in its early days, and users could make decisions based on bad data.
This process can work the other way round, too: rather than natural-language input producing data, data can produce language. Arria, a company based in London, makes software into which a spreadsheet full of data can be dragged and dropped, to be turned automatically into a written description of the contents, complete with trends. Matt Gould, the company’s chief strategy officer, likes to think that this will free chief financial officers from having to write up the same old routine analyses for the board, giving them time to develop more creative approaches.

Carl Benedikt Frey, an economist at Oxford University, has researched the likely effect of artificial intelligence on the labour market and concluded that the jobs most likely to remain immune include those requiring creativity and skill at complex social interactions. But not every human has those traits. Call centres may need fewer people as more routine work is handled by automated systems, but the trickier inquiries will still go to humans.

Much of this seems familiar. When Google search first became available, it turned up documents in seconds that would have taken a human operator hours, days or years to find. This removed much of the drudgery from being a researcher, librarian or journalist. More recently, young lawyers and paralegals have taken to using e-discovery. These innovations have not destroyed the professions concerned but merely reshaped them.

Machines that relieve drudgery and allow people to do more interesting jobs are a fine thing. In net terms they may even create extra jobs. But any big adjustment is most painful for those least able to adapt. Upheavals brought about by social changes—like the emancipation of women or the globalisation of labour markets—are already hard for some people to bear. When those changes are wrought by machines, they become even harder, and all the more so when those machines seem to behave more and more like humans. People already treat inanimate objects as if they were alive: who has never shouted at a computer in frustration? The more that machines talk, and the more that they seem to understand people, the more their users will be tempted to attribute human traits to them.

That raises questions about what it means to be human. Language is widely seen as humankind’s most distinguishing trait. AI researchers insist that their machines do not think like people, but if they can listen and talk like humans, what does that make them? As humans teach ever more capable machines to use language, the once-obvious line between them will blur.

Voice Recognition 3.0

Can we just pass right over voice recognition 2.0? I am getting tired and bored after 25+ years of incrementalism. To be fair, the incrementalism has paid off – just look at cars, iPhones, Siri and a host of call-center applications (which are annoying and impressive in equal measure).

So here is my version of voice recognition 3.0. It would have these characteristics:

Goodbye typing: it is so good that typing becomes a secondary input method. Most emails, text messages and the like would be generated by voice, not by typing.
Grammar, stutter, and "uh" checks: it is so good that it recognizes my "uhs", simply confirms that I want them deleted, and then removes them automatically. Ditto bad grammar.
Personal vocabularies – it is so good that it builds a personal vocabulary for me over time and remembers it. For example, why can't I expect my voice recognition system to learn that I live in Serenbe, and that I use the word in my communications a lot? Why does the best system today choke and think I am saying "Saran Be" or "Seren Bee"? That is the easy part. The hard part is that it should also know that my brother lives in West Townsend, and that the movie I saw last Saturday was called The Hunger Games – all I should need to say is "the movie I saw last Saturday night" or "the people I met at lunch today". In other words, it should draw regularly and liberally from my calendar and emails, as I allow it to do so. (A toy sketch of the personal-vocabulary idea follows this list.)
Search: I am tired of talking about this one. We should be able to search using natural language – just by saying what is on my mind: "I vaguely recall a great article in one of the national magazines, published I guess in the last 24 months, about genomic mapping getting cheaper and better. Can you find me that article?"
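
Here is a minimal sketch of the personal-vocabulary idea from the list above. It assumes a recognizer that already returns several candidate transcripts with confidence scores; everything else – the terms, scores and thresholds – is made up for illustration and is not any vendor's API. The point is simply that terms harvested (with permission) from a user's contacts, calendar and place names can be used to re-rank those candidates.

```python
# Minimal sketch of a "personal vocabulary" re-ranker for speech recognition.
# Assumes some recognizer hands back (transcript, confidence) alternatives;
# the terms, scores and thresholds below are invented for illustration only.
import difflib

PERSONAL_TERMS = {"Serenbe", "West Townsend", "The Hunger Games"}

def personal_score(text):
    """Reward transcripts that contain, or nearly contain, personal terms."""
    score = 0.0
    for term in PERSONAL_TERMS:
        if term.lower() in text.lower():
            score += 1.0                      # exact hit
        else:
            ratio = difflib.SequenceMatcher(None, term.lower(), text.lower()).ratio()
            if ratio > 0.5:
                score += ratio * 0.5          # partial, fuzzy hit
    return score

def rerank(alternatives):
    """alternatives: list of (transcript, recognizer_confidence) pairs."""
    return sorted(alternatives,
                  key=lambda alt: alt[1] + personal_score(alt[0]),
                  reverse=True)

if __name__ == "__main__":
    hypotheses = [("I live in Saran Be", 0.62),
                  ("I live in Serenbe", 0.58),
                  ("I live in Seren Bee", 0.60)]
    print(rerank(hypotheses)[0][0])   # -> "I live in Serenbe"
```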

Well-Being Assessment 101 – LABS

The exploration here is to try to get very clear about how LABS and their associated datasets can inform the average person’s curiosity about their general well-being.

Obviously, this is a highly technical subject, and not one that I am trying to master – I just want the basics.

So any good “well-being assessment” begins with MARVELS (see JCR Post on MARVELS).

This post focuses on the “L” – LABS.

The "L" in MARVELS is for "LABS" – in this domain there are blood, stool, urine, saliva, and hair labs. Specimens need to be collected and shipped per protocols that can be very stringent, and costs are incurred accordingly.

An interesting distinction: which tests are cellular and which are blood tests? See THIS ARTICLE FROM WOMEN'S SPORTS NUTRITION, which argues that cellular tests are better for well-being assessments, whereas medicine addresses disease in its advanced stages and therefore relies on blood – which reveals a progression that is not yet evident in healthy people, even though their cellular assessment might already reveal important deficiencies.

Hundreds of Tests Available

Just as there are hundreds of medical tests for diagnosing disease, there are hundreds of scientific nutritional tests available to Advanced Clinical Nutrition to identify the causes of nutritional deficiencies, biochemical imbalances and organ/gland dysfunctions.

Because blood, urine and stool are the primary specimens collected and analyzed for medical diagnosis, some people may not be aware of saliva and hair testing. Advanced Clinical Nutrition utilizes all of these specimens for clinical nutrition analyses. Below, we have provided an analogy of how saliva, urine and hair analyses differ from blood testing.

Any Fluid or Tissue Can Be Analyzed

It is important to note that any fluid or tissue of the body can provide insight into its level of malnutrition, deficiencies, imbalances and dysfunctions, and into patterns of developing or existing disease.

Single Tests and Testing Profiles Available

There are single tests and testing profiles. For example, one of the female saliva hormone profiles for a woman in menopause, or who has had a hysterectomy, includes six individual saliva tests: the three estrogens, progesterone, testosterone and DHEA.

Note: when obtaining blood testing for medical purposes, your physician may order from 16 to 25 blood tests in their profile. At Advanced Clinical Nutrition, we order 44 different blood chemistries in our profile. Due to insurance and Medicare cut-backs and new regulations, physicians no longer order the comprehensive blood chemistry profile (44 blood tests), as they did years ago. However, we do.

In blood, 200 test protocols have been identified – see below.

Reference: LABCORP DESCRIPTION OF SPECIMEN COLLECTION
======================
See this post below at: JCR Post on Blood Testing 101
Blood Testing 101
Here are the basics….


LABS (blood testing, etc)
CDC Blood Test List

There are about 200 tests that require blood samples.

The basic metabolic panel is widely used.

40–50 tests are for allergies.
18 are useful for male-related general health screening.
18 are useful for female-related general health screening.
7 are for detecting viruses.
18 are for rheumatic evaluations.
7 are hormone-related – male.
7 are hormone-related – female.

Some labs group the tests into panels. An example is: Blood test types and panels (a rough sketch of the idea follows).
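
As a rough sketch of what "grouping tests into panels" means in practice, here is a toy data structure. The panel names and test lists are illustrative examples only, not a clinical reference.

```python
# Illustrative only: a rough sketch of how a lab might group individual blood
# tests into ordering panels. Panel names and contents are examples, not a
# clinical reference.
PANELS = {
    "Basic metabolic panel": [
        "Glucose", "Calcium", "Sodium", "Potassium",
        "CO2", "Chloride", "BUN", "Creatinine",
    ],
    "Lipid panel": ["Total cholesterol", "HDL", "LDL", "Triglycerides"],
    "Thyroid panel": ["TSH", "Free T4", "Free T3"],
}

for panel, tests in PANELS.items():
    print(f"{panel}: {len(tests)} tests -> {', '.join(tests)}")
```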

References:
LABS revolution
LABS By Disease
Quantified Self Movement

Austin Example of LABS: http://www.austinmedclinic.com/lab-pricing.html
Note this example underlines that stool, urine, saliva, and blood are all specimens.

This entry was posted in Uncategorized, Well-Being, Well-Being – Personal and tagged LABS on February 1, 2009.

Proteomics

“Systems biology…is about putting together rather than taking apart, integration rather than reduction. It requires that we develop ways of thinking about integration that are as rigorous as our reductionist programmes, but different….It means changing our philosophy, in the full sense of the term” (Denis Noble).[5]

Proteomics
From Wikipedia, the free encyclopedia

Proteomics is the large-scale study of proteins, particularly their structures and functions.[1][2] Proteins are vital parts of living organisms, as they are the main components of the physiological metabolic pathways of cells.

The term proteomics was first coined in 1997[3] to make an analogy with genomics, the study of the genome. The word proteome is a blend of protein and genome, and was coined by Marc Wilkins in 1994 while working on the concept as a PhD student.[4][5]

The proteome is the entire set of proteins,[4] produced or modified by an organism or system. This varies with time and distinct requirements, or stresses, that a cell or organism undergoes.

Proteomics is an interdisciplinary domain formed on the basis of the research and development of the Human Genome Project; it also encompasses the scientific exploration of proteomes at the level of intracellular protein composition, structure, and activity patterns. It is an important component of functional genomics.

While proteomics generally refers to the large-scale experimental analysis of proteins, it is often specifically used for protein purification and mass spectrometry.
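
As a concrete, heavily simplified illustration of the mass-spectrometry side: one routine early step is to cut each protein into peptides at tryptic cleavage sites and compute the peptides' masses, which are later matched against observed spectra. The sketch below assumes Biopython is installed and uses a made-up sequence; it is an illustration of the idea, not a production pipeline.

```python
# Sketch of in-silico tryptic digestion plus peptide mass calculation.
# Assumes Biopython is available (pip install biopython); the sequence is a
# toy example, not a real database entry.
import re
from Bio.SeqUtils.ProtParam import ProteinAnalysis

def tryptic_peptides(protein_seq):
    """Split a protein at tryptic cleavage sites (after K or R, not before P)."""
    return [p for p in re.split(r"(?<=[KR])(?!P)", protein_seq) if p]

if __name__ == "__main__":
    protein = "MKWVTFISLLLLFSSAYSRGVFRRDTHK"   # toy sequence
    for pep in tryptic_peptides(protein):
        mass = ProteinAnalysis(pep).molecular_weight()
        print(f"{pep:>20s}  ~{mass:8.2f} Da")
```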


Complexity of the problem

After genomics and transcriptomics, proteomics is the next step in the study of biological systems. It is more complicated than genomics because an organism’s genome is more or less constant, whereas the proteome differs from cell to cell and from time to time. Distinct genes are expressed in different cell types, which means that even the basic set of proteins that are produced in a cell needs to be identified.
In the past this was assessed by mRNA analysis, but mRNA levels were found not to correlate well with protein content.[6][7] It is now known that mRNA is not always translated into protein,[8] and the amount of protein produced for a given amount of mRNA depends on the gene it is transcribed from and on the current physiological state of the cell. Proteomics confirms the presence of the protein and provides a direct measure of the quantity present.

Post-translational modifications
Not only does the translation from mRNA cause differences, but many proteins are also subjected to a wide variety of chemical modifications after translation. Many of these post-translational modifications are critical to the protein’s function.

Phosphorylation
One such modification is phosphorylation, which happens to many enzymes and structural proteins in the process of cell signaling. The addition of a phosphate to particular amino acids—most commonly serine and threonine[9] mediated by serine/threonine kinases, or more rarely tyrosine mediated by tyrosine kinases—causes a protein to become a target for binding or interacting with a distinct set of other proteins that recognize the phosphorylated domain.
Because protein phosphorylation is one of the most-studied protein modifications, many “proteomic” efforts are geared to determining the set of phosphorylated proteins in a particular cell or tissue-type under particular circumstances. This alerts the scientist to the signaling pathways that may be active in that instance.

Ubiquitination
Ubiquitin is a small protein that can be affixed to certain protein substrates by enzymes called E3 ubiquitin ligases. Determining which proteins are poly-ubiquitinated helps understand how protein pathways are regulated. This is, therefore, an additional legitimate “proteomic” study. Similarly, once a researcher determines which substrates are ubiquitinated by each ligase, determining the set of ligases expressed in a particular cell type is helpful.
Additional modifications
Listing all the protein modifications that might be studied in a “proteomics” project would require a discussion of most of biochemistry. Therefore, a short list illustrates the complexity of the problem. In addition to phosphorylation and ubiquitination, proteins can be subjected to (among others) methylation, acetylation, glycosylation, oxidation and nitrosylation. Some proteins undergo all these modifications, often in time-dependent combinations. This illustrates the potential complexity of studying protein structure and function.

(TEXT OMITTED)

Practical applications of proteomics

One major development to come from the study of human genes and proteins has been the identification of potential new drugs for the treatment of disease. This relies on genome and proteome information to identify proteins associated with a disease, which computer software can then use as targets for new drugs. For example, if a certain protein is implicated in a disease, its 3D structure provides the information to design drugs to interfere with the action of the protein. A molecule that fits the active site of an enzyme, but cannot be released by the enzyme, inactivates the enzyme. This is the basis of new drug-discovery tools, which aim to find new drugs to inactivate proteins involved in disease. As genetic differences among individuals are found, researchers expect to use these techniques to develop personalized drugs that are more effective for the individual.[19]
Proteomics is also used to reveal complex plant-insect interactions that help identify candidate genes involved in the defensive response of plants to herbivory.[20][21]

(TEXT OMITTED)

Proteomics and Systems Biology

Proteomics has recently emerged as a promising force for transforming biology and medicine. It is becoming increasingly apparent that changes in mRNA expression correlate poorly with changes in protein expression. Protein expression patterns change enormously across developmental and physiological responses, and also in response to environmental perturbations, and proteins are the actual effectors driving cell behavior. The field of proteomics strives to characterize protein structure and function; protein–protein, protein–nucleic acid, protein–lipid and enzyme–substrate interactions; protein processing and folding; protein activation; cellular and sub-cellular localization; protein turnover and synthesis rates; and even promoter usage. Integrating proteomic data with information such as gene, mRNA and metabolic profiles helps in better understanding how the system works.[37]

See also

Activity based proteomics
Bioinformatics
Bottom-up proteomics
Cytomics
Functional genomics
Genomics
Heat stabilization
Immunomics
Immunoproteomics
Lipidomics
List of biological databases
List of omics topics in biology
Metabolomics
PEGylation
Phosphoproteomics
Proteogenomics
Proteomic chemistry
Secretomics
Shotgun proteomics
Top-down proteomics
Systems biology
Transcriptomics
Yeast two-hybrid system
Protein databases
Human Protein Atlas
Cardiac Organellar Protein Atlas Knowledgebase (COPaKB)
Human Protein Reference Database
Model Organism Protein Expression Database (MOPED)
National Center for Biotechnology Information (NCBI)
Protein Data Bank (PDB)
Protein Information Resource (PIR)
Proteomics Identifications Database (PRIDE)
Proteopedia – the collaborative, 3D encyclopedia of proteins and other molecules
Swiss-Prot
UniProt
Research centers
European Bioinformatics Institute
Netherlands Proteomics Centre (NPC)
Proteomics Research Resource for Integrative Biology (NIH)
Global map of proteomics labs
References

1. Anderson NL, Anderson NG (1998). "Proteome and proteomics: new technologies, new concepts, and new words". Electrophoresis 19 (11): 1853–61. doi:10.1002/elps.1150191103. PMID 9740045.
2. Blackstock WP, Weir MP (1999). "Proteomics: quantitative and physical mapping of cellular proteins". Trends Biotechnol. 17 (3): 121–7. doi:10.1016/S0167-7799(98)01245-1. PMID 10189717.
3. P. James (1997). "Protein identification in the post-genome era: the rapid rise of proteomics". Quarterly Reviews of Biophysics 30 (4): 279–331. doi:10.1017/S0033583597003399. PMID 9634650.
4. Marc R. Wilkins, Christian Pasquali, Ron D. Appel, Keli Ou, Olivier Golaz, Jean-Charles Sanchez, Jun X. Yan, Andrew A. Gooley, Graham Hughes, Ian Humphery-Smith, Keith L. Williams & Denis F. Hochstrasser (1996). "From Proteins to Proteomes: Large Scale Protein Identification by Two-Dimensional Electrophoresis and Amino Acid Analysis". Nature Biotechnology 14 (1): 61–65. doi:10.1038/nbt0196-61. PMID 9636313.
5. UNSW Staff Bio: Professor Marc Wilkins.
6. Simon Rogers, Mark Girolami, Walter Kolch, Katrina M. Waters, Tao Liu, Brian Thrall and H. Steven Wiley (2008). "Investigating the correspondence between transcriptomic and proteomic expression profiles using coupled cluster models". Bioinformatics 24 (24): 2894–2900. doi:10.1093/bioinformatics/btn553. PMID 18974169.
7. Vikas Dhingraa, Mukta Gupta, Tracy Andacht and Zhen F. Fu (2005). "New frontiers in proteomics research: A perspective". International Journal of Pharmaceutics 299 (1–2): 1–18. doi:10.1016/j.ijpharm.2005.04.010. PMID 15979831.
8. Buckingham, Steven (May 2003). "The major world of microRNAs". Retrieved 2009-01-14.
9. Olsen JV, Blagoev B, Gnad F, Macek B, Kumar C, Mortensen P, Mann M (2006). "Global, in vivo, and site-specific phosphorylation dynamics in signaling networks". Cell 127 (3): 635–648. doi:10.1016/j.cell.2006.09.026. PMID 17081983.
10. Gygi, S. P.; Rochon, Y.; Franza, B. R.; Aebersold, R. (1999). "Correlation between protein and mRNA abundance in yeast". Molecular and Cellular Biology 19 (3): 1720–1730. PMC 83965. PMID 10022859.
11. Archana Belle, Amos Tanay, Ledion Bitincka, Ron Shamir and Erin K. O'Shea (2006). "Quantification of protein half-lives in the budding yeast proteome". PNAS 103 (35): 13004–13009. Bibcode:2006PNAS..10313004B. doi:10.1073/pnas.0605420103. PMC 1550773. PMID 16916930.
12. Peng, J.; Elias, J. E.; Thoreen, C. C.; Licklider, L. J.; Gygi, S. P. (2003). "Evaluation of multidimensional chromatography coupled with tandem mass spectrometry (LC/LC-MS/MS) for large-scale protein analysis: The yeast proteome". Journal of Proteome Research 2 (1): 43–50. PMID 12643542.
13. Washburn, M. P.; Wolters, D.; Yates, J. R. (2001). "Large-scale analysis of the yeast proteome by multidimensional protein identification technology". Nature Biotechnology 19 (3): 242–247. doi:10.1038/85686. PMID 11231557.
14. Klopfleisch R, Klose P, Weise C, Bondzio A, Multhaup G, Einspanier R, Gruber AD (2010). "Proteome of metastatic canine mammary carcinomas: similarities to and differences from human breast cancer". J Proteome Res 9 (12): 6380–91. doi:10.1021/pr100671c. PMID 20932060.
15. Dix MM, Simon GM, Cravatt BF (August 2008). "Global mapping of the topography and magnitude of proteolytic events in apoptosis". Cell 134 (4): 679–91. doi:10.1016/j.cell.2008.06.038. PMC 2597167. PMID 18724940.
16. Minde DP (2012). "Determining biophysical protein stability in lysates by a fast proteolysis assay, FASTpp". PLOS ONE 7 (10): e46147. Bibcode:2012PLoSO...746147M. doi:10.1371/journal.pone.0046147. PMC 3463568. PMID 23056252.
17. Maron JL, Alterovitz G, Ramoni M, Johnson KL, Bianchi DW (December 2009). "High-throughput discovery and characterization of fetal protein trafficking in the blood of pregnant women". Proteomics Clin Appl 3 (12): 1389–96. doi:10.1002/prca.200900109. PMC 2825712. PMID 20186258.
18. Alterovitz G, Xiang M, Liu J, Chang A, Ramoni MF (2008). "System-wide peripheral biomarker discovery using information theory". Pacific Symposium on Biocomputing: 231–42. PMID 18229689.
19. Vaidyanathan G (March 2012). "Redefining clinical trials: the age of personalized medicine". Cell 148 (6): 1079–80. doi:10.1016/j.cell.2012.02.041. PMID 22424218.
20. Rakwal, Randeep; Komatsu, Setsuko (2000). "Role of jasmonate in the rice (Oryza sativa L.) self-defense mechanism using proteome analysis". Electrophoresis 21 (12): 2492–500. doi:10.1002/1522-2683(20000701)21:12<2492::AID-ELPS2492>3.0.CO;2-2. PMID 10939463.
21. Wu, Jianqiang; Baldwin, Ian T. (2010). "New Insights into Plant Responses to the Attack from Insect Herbivores". Annual Review of Genetics 44: 1–24. doi:10.1146/annurev-genet-102209-163500. PMID 20649414.
22. Sangha J.S., Chen Y.H., Kaur Jatinder, Khan Wajahatullah, Abduljaleel Zainularifeen, Alanazi Mohammed S., Mills Aaron, Adalla Candida B., Bennett John et al. (2013). "Proteome Analysis of Rice (Oryza sativa L.) Mutants Reveals Differentially Induced Proteins during Brown Planthopper (Nilaparvata lugens) Infestation". Int. J. Mol. Sci. 14 (2): 3921–3945. doi:10.3390/ijms14023921. PMC 3588078. PMID 23434671.
23. Strimbu, Kyle; Tavel, Jorge A (2010). "What are biomarkers?". Current Opinion in HIV and AIDS 5 (6): 463–6. doi:10.1097/COH.0b013e32833ed177. PMC 3078627. PMID 20978388.
24. Biomarkers Definitions Working Group (2001). "Biomarkers and surrogate endpoints: preferred definitions and conceptual framework". Clinical Pharmacology & Therapeutics 69 (3): 89–95. doi:10.1067/mcp.2001.113989. PMID 11240971.
25. Klopfleisch R, Gruber AD (2009). "Increased expression of BRCA2 and RAD51 in lymph node metastases of canine mammary adenocarcinomas". Veterinary Pathology 46 (3): 416–22. doi:10.1354/vp.08-VP-0212-K-FL. PMID 19176491.
26. Hathout, Yetrib (2007). "Approaches to the study of the cell secretome". Expert Review of Proteomics 4 (2): 239–48. doi:10.1586/14789450.4.2.239. PMID 17425459.
27. Gupta N, Tanner S, Jaitly N, et al. (September 2007). "Whole proteome analysis of post-translational modifications: applications of mass-spectrometry for proteogenomic annotation". Genome Res. 17 (9): 1362–77. doi:10.1101/gr.6427907. PMC 1950905. PMID 17690205.
28. Gupta N, Benhamida J, Bhargava V, et al. (July 2008). "Comparative proteogenomics: combining mass spectrometry and comparative genomics to analyze multiple genomes". Genome Res. 18 (7): 1133–42. doi:10.1101/gr.074344.107. PMC 2493402. PMID 18426904.
29. Tonge R, Shaw J, Middleton B, et al. (March 2001). "Validation and development of fluorescence two-dimensional differential gel electrophoresis proteomics technology". Proteomics 1 (3): 377–96. doi:10.1002/1615-9861(200103)1:3<377::AID-PROT377>3.0.CO;2-6. PMID 11680884.
30. Li-Ping Wang, Jun Shen, Lin-Quan Ge, Jin-Cai Wu, Guo-Qin Yang, Gary C. Jahn (November 2010). "Insecticide-induced increase in the protein content of male accessory glands and its effect on the fecundity of females in the brown planthopper, Nilaparvata lugens Stål (Hemiptera: Delphacidae)". Crop Protection 29 (11): 1280–5. doi:10.1016/j.cropro.2010.07.009.
31. Ge, Lin-Quan; Cheng, Yao; Wu, Jin-Cai; Jahn, Gary C. (2011). "Proteomic Analysis of Insecticide Triazophos-Induced Mating-Responsive Proteins of Nilaparvata lugens Stål (Hemiptera: Delphacidae)". Journal of Proteome Research 10 (10): 4597–612. doi:10.1021/pr200414g. PMID 21800909.
32. Reumann S (May 2011). "Toward a definition of the complete proteome of plant peroxisomes: Where experimental proteomics must be complemented by bioinformatics". Proteomics 11 (9): 1764–79. doi:10.1002/pmic.201000681. PMID 21472859.
33. Uhlen M, Ponten F (April 2005). "Antibody-based proteomics for human tissue profiling". Mol. Cell Proteomics 4 (4): 384–93. doi:10.1074/mcp.R500009-MCP200. PMID 15695805.
34. Ole Nørregaard Jensen (2004). "Modification-specific proteomics: characterization of post-translational modifications by mass spectrometry". Current Opinion in Chemical Biology 8 (1): 33–41. doi:10.1016/j.cbpa.2003.12.009. PMID 15036154.
35. Chandramouli, Kondethimmanahalli; Qian, Pei-Yuan (2009). "Proteomics: Challenges, Techniques and Possibilities to Overcome Biological Sample Complexity". Human Genomics and Proteomics 2009: 1. doi:10.4061/2009/239204.
36. "What is Proteomics?". ProteoConsult.
37. Weston, Andrea D.; Hood, Leroy (2004). "Systems Biology, Proteomics, and the Future of Health Care: Toward Predictive, Preventative, and Personalized Medicine". Journal of Proteome Research 3 (2): 179–96. doi:10.1021/pr0499693. PMID 15113093.

Personalized medicine
From Wikipedia, the free encyclopedia
Personalized medicine or PM is a medical model that proposes the customization of healthcare – with medical decisions, practices, and/or products being tailored to the individual patient. In this model, diagnostic testing is essential for selecting appropriate therapies; terms used to describe these tests include "companion diagnostics", "theranostics" (a portmanteau of therapeutics and diagnostics), and "therapygenetics". The use of genetic information has played a major role in certain aspects of personalized medicine, and the term was even first coined in the context of genetics (though it has since broadened to encompass all sorts of personalization measures). To distinguish from the sense in which medicine has always been inherently "personal" to each patient, PM commonly denotes the use of some kind of technology or discovery enabling a level of personalization not previously feasible or practical.
Background

Traditional clinical diagnosis and management focuses on the individual patient’s clinical signs and symptoms, medical and family history, and data from laboratory and imaging evaluation to diagnose and treat illnesses. This is often a reactive approach to treatment, i.e., treatment/medication starts after the signs and symptoms appear.
Advances in medical genetics and human genetics have enabled a more detailed understanding of the impact of genetics in disease. Large collaborative research projects (for example, the Human genome project) have laid the groundwork for the understanding of the roles of genes in normal human development and physiology, revealed single nucleotide polymorphisms (SNPs) that account for some of the genetic variability between individuals, and made possible the use of genome-wide association studies (GWAS) to examine genetic variation and risk for many common diseases.
Historically, the pharmaceutical industry has developed medications based on empiric observations and, more recently, on known disease mechanisms. For example, antibiotics were based on the observation that microbes produce substances that inhibit other species. Agents that lower blood pressure have typically been designed to act on certain pathways involved in hypertension (such as renal salt and water absorption, vascular contractility, and cardiac output). Medications for high cholesterol target the absorption, metabolism, and generation of cholesterol. Treatments for diabetes are aimed at improving insulin release from the pancreas and sensitivity of the muscle and fat tissues to insulin action. Thus, medications are developed based on mechanisms of disease that have been extensively studied over the past century. It is hoped that recent advancements in the genetic etiologies of common diseases will improve pharmaceutical development.
Technologies

Since the late 1990s, the advent of research using biobanks has brought advances in molecular biology, proteomics, metabolomic analysis, genetic testing, and molecular medicine. Another significant development has been the notion of companion diagnostics, whereby molecular assays that measure levels of proteins, genes, or specific mutations are used to provide a specific therapy for an individual's condition – by stratifying disease status, selecting the proper medication, and tailoring dosages to that patient's specific needs. Additionally, such methods might be used to assess a patient's risk factor for a number of conditions and tailor individual preventative treatments.
Pharmacogenetics (also termed pharmacogenomics) is the field of study that examines the impact of genetic variation on responses to therapeutic interventions.[1] This approach is aimed at tailoring drug therapy at a dosage that is most appropriate for an individual patient, with the potential benefits of increasing the efficacy and safety of medications.[2] Other benefits include reduced time, cost, and failure rates of clinical trials in the production of new drugs by using precise biomarkers.[3] Gene-centered research may also speed the development of novel therapeutics.[4]
The field of proteomics, or the comprehensive analysis and characterization of all of the proteins and protein isoforms encoded by the human genome, may eventually have a significant impact on medicine. This is because while the DNA genome[5] is the information archive, it is the proteins that do the work of the cell: the functional aspects of the cell are controlled by and through proteins, not genes.
It has also been demonstrated that pre-dose metabolic profiles from urine can be used to predict drug metabolism.[6][7] Pharmacometabolomics refers to the direct measurement of metabolites in an individual's bodily fluids, in order to predict or evaluate the metabolism of pharmaceutical compounds.
Examples

Some examples of personalized medicine include:
Genotyping for SNPs in genes involved in the action and metabolism of warfarin (Coumadin). This medication is used clinically as an anticoagulant but requires periodic monitoring and is associated with adverse side effects. Recently, genetic variants in the gene encoding the Cytochrome P450 enzyme CYP2C9, which metabolizes warfarin,[8] and in the Vitamin K epoxide reductase gene (VKORC1), a target of coumarins,[9] have led to commercially available testing that enables more accurate dosing based on algorithms that take into account the age, gender, weight, and genotype of an individual. (A purely illustrative sketch of genotype-based stratification follows this list.)
Genotyping variants in genes encoding Cytochrome P450 enzymes (CYP2D6, CYP2C19, and CYP2C9), which metabolize neuroleptic medications, to improve drug response and reduce side-effects.[10]
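
As a purely illustrative sketch of the genotype-based stratification mentioned in the warfarin example above: the qualitative associations (reduced-function CYP2C9 alleles, the VKORC1 -1639 A allele) are well known, but the categories and wording below are invented placeholders, not clinical guidance, and real dosing relies on validated algorithms.

```python
# Purely illustrative: coarse genotype-based sensitivity categories for a
# warfarin-like anticoagulant. NOT clinical guidance; real dosing uses
# validated, published algorithms.
REDUCED_FUNCTION_CYP2C9 = {"*1/*2", "*1/*3", "*2/*2", "*2/*3", "*3/*3"}

def sensitivity_category(cyp2c9, vkorc1_1639):
    """Return a coarse sensitivity label from two genotypes."""
    slow_metabolizer = cyp2c9 in REDUCED_FUNCTION_CYP2C9
    high_sensitivity = vkorc1_1639 == "A/A"
    if slow_metabolizer and high_sensitivity:
        return "high sensitivity - consider a substantially lower starting dose"
    if slow_metabolizer or high_sensitivity:
        return "intermediate sensitivity - consider a lower starting dose"
    return "typical sensitivity - standard starting dose, with monitoring"

print(sensitivity_category("*1/*3", "G/A"))
# -> intermediate sensitivity - consider a lower starting dose
```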
Cancer management
Oncology is a field of medicine with a long history of classifying tumor stages and subtypes based on anatomic and pathologic findings. This approach includes histological examination of tumor specimens from individual patients (such as HER2/NEU in breast cancer) to look for markers associated with prognosis and likely treatment responses. Thus, “personalized medicine” was in practice long before the term was coined. New molecular testing methods have enabled an extension of this approach to include testing for global gene, protein, and protein pathway activation expression profiles and/or somatic mutations in cancer cells from patients in order to better define the prognosis in these patients and to suggest treatment options that are most likely to succeed.[11][12]
Examples of personalized cancer management include:
Companion diagnostics for targeted therapies.
Trastuzumab (trade names Herclon, Herceptin) is a monoclonal antibody drug that interferes with the HER2/neu receptor. Its main use is to treat certain breast cancers. The drug is only used if a patient's cancer tests positive for overexpression of the HER2/neu receptor. Two of the most common tests used are the (Dako) HercepTest and Genentech's Herceptin.[13] Only Her2+ patients will be treated with Herceptin therapy (trastuzumab).[14]
Tyrosine kinase inhibitors such as imatinib (marketed as Gleevec) have been developed to treat chronic myeloid leukemia (CML), in which the BCR-ABL fusion gene (the product of a reciprocal translocation between chromosome 9 and chromosome 22) is present in >95% of cases and produces hyperactivated abl-driven protein signaling. These medications specifically inhibit the Abelson tyrosine kinase (ABL) protein and are thus a prime example of "rational drug design" based on knowledge of disease pathophysiology.[15]
Testing for disease-causing mutations in the BRCA1 and BRCA2 genes, which are implicated in hereditary breast–ovarian cancer syndromes. Discovery of a disease-causing mutation in a family can inform "at-risk" individuals as to whether they are at higher risk for cancer and may prompt individualized prophylactic therapy including mastectomy and removal of the ovaries. This testing involves complicated personal decisions and is undertaken in the context of detailed genetic counseling. More detailed molecular stratification of breast tumors may pave the way for future tailored treatments.[16] These tests are part of the emerging field of cancer genetics, which is a specialized field of medical genetics concerned with hereditary cancer risk.
Psychiatry and psychological therapy

Efforts are underway to apply the tools of personalized medicine to psychiatry and psychological therapy; these technologies are still under development as of 2013.
In 2012 Professor Thalia Eley and her research team coined the term "therapygenetics", which refers to a branch of psychiatric genetic research looking at the relationship between specific genetic variants and differences in the level of success of psychological therapy.[17][18] The field is parallel to pharmacogenetics, which explores the association between specific genetic variants and the efficacy of drug treatments. Therapygenetics also relates to the differential susceptibility hypothesis,[19] which proposes that individuals have a genetic predisposition to respond to a greater or lesser extent to their environment, be it positive or negative.
See also

Predictive medicine
Whole genome sequencing
Drug development
Translational Research
$1,000 genome
References

1. Shastry BS (2006). "Pharmacogenetics and the concept of individualized medicine". Pharmacogenomics J. 6 (1): 16–21. doi:10.1038/sj.tpj.6500338. PMID 16302022.
2. Ozdemir, Vural; Williams-Jones, Bryn; Glatt, Stephen J; Tsuang, Ming T; Lohr, James B; Reist, Christopher (August 2006). "Shifting emphasis from pharmacogenomics to theranostics". Nature Biotechnology 24 (8): 942–946. Retrieved 30 March 2013.
3. Galas, D. J.; Hood, L. (2009). "Systems Biology and Emerging Technologies Will Catalyze the Transition from Reactive Medicine to Predictive, Personalized, Preventive and Participatory (P4) Medicine". Interdisciplinary Bio Central 1: 1–4. doi:10.4051/ibc.2009.2.0006.
4. Shastry BS (2006). "Pharmacogenetics and the concept of individualized medicine". Pharmacogenomics J. 6 (1): 16–21. doi:10.1038/sj.tpj.6500338. PMID 16302022.
5. Harmon, Katherine (2010-06-28). "Genome Sequencing for the Rest of Us". Scientific American. Retrieved 2010-08-13.
6. Clayton TA, Lindon JC, Cloarec O, et al. (April 2006). "Pharmaco-metabonomic phenotyping and personalized drug treatment". Nature 440 (7087): 1073–7. doi:10.1038/nature04648. PMID 16625200.
7. Clayton TA, Baker D, Lindon JC, Everett JR, Nicholson JK (August 2009). "Pharmacometabonomic identification of a significant host-microbiome metabolic interaction affecting human drug metabolism". Proc. Natl. Acad. Sci. U.S.A. 106 (34): 14728–33. doi:10.1073/pnas.0904489106. PMC 2731842. PMID 19667173.
8. Schwarz UI (November 2003). "Clinical relevance of genetic polymorphisms in the human CYP2C9 gene". Eur. J. Clin. Invest. 33 Suppl 2: 23–30. doi:10.1046/j.1365-2362.33.s2.6.x. PMID 14641553.
9. Oldenburg J, Watzka M, Rost S, Müller CR (July 2007). "VKORC1: molecular target of coumarins". J. Thromb. Haemost. 5 Suppl 1: 1–6. doi:10.1111/j.1538-7836.2007.02549.x. PMID 17635701.
10. Cichon S, Nöthen MM, Rietschel M, Propping P (2000). "Pharmacogenetics of schizophrenia". Am. J. Med. Genet. 97 (1): 98–106. doi:10.1002/(SICI)1096-8628(200021)97:1<98::AID-AJMG12>3.0.CO;2-W. PMID 10813809.
11. Mansour JC, Schwarz RE (August 2008). "Molecular mechanisms for individualized cancer care". J. Am. Coll. Surg. 207 (2): 250–8. doi:10.1016/j.jamcollsurg.2008.03.003. PMID 18656055.
12. van't Veer LJ, Bernards R (April 2008). "Enabling personalized cancer medicine through analysis of gene-expression patterns". Nature 452 (7187): 564–70. doi:10.1038/nature06915. PMID 18385730.
13. Carney, Walter (2006). "HER2/neu Status is an Important Biomarker in Guiding Personalized HER2/neu Therapy". Connection 9: 25–27.
14. Telli, M. L.; Hunt, S. A.; Carlson, R. W.; Guardino, A. E. (2007). "Trastuzumab-Related Cardiotoxicity: Calling Into Question the Concept of Reversibility". Journal of Clinical Oncology 25 (23): 3525–3533. doi:10.1200/JCO.2007.11.0106. ISSN 0732-183X. PMID 17687157.
15. Saglio G, Morotti A, Mattioli G, et al. (December 2004). "Rational approaches to the design of therapeutics targeting molecular markers: the case of chronic myelogenous leukemia". Ann. N. Y. Acad. Sci. 1028 (1): 423–31. doi:10.1196/annals.1322.050. PMID 15650267.
16. Gallagher, James (19 April 2012). "Breast cancer rules rewritten in 'landmark' study". BBC News. Retrieved 19 April 2012.
17. Lester, KJ; Eley TC (2013). "Therapygenetics: Using genetic markers to predict response to psychological treatment for mood and anxiety disorders". Biology of Mood & Anxiety Disorders 3 (1): 1–16. doi:10.1186/2045-5380-3-4. PMC 3575379. PMID 23388219.
18. Beevers, CG; McGeary JE (2012). "Therapygenetics: moving towards personalized psychotherapy treatment". Trends in Cognitive Sciences 16 (1): 11–12. doi:10.1016/j.tics.2011.11.004. PMC 3253222. PMID 22104133.
19. Belsky J, Jonassaint C, Pluess M, Stanton M, Brummett B, Williams R (August 2009). "Vulnerability genes or plasticity genes?". Mol. Psychiatry 14 (8): 746–54. doi:10.1038/mp.2009.44. PMC 2834322. PMID 19455150.

Adam Arkin

Adam Arkin (colleague of Vijay Vaserani)

Berkeley BIO page

Research Expertise and Interest
Systems and synthetic biology; environmental microbiology of bacteria and viruses; bioenergy; biomedicine; bioremediation.

Description
The Arkin laboratory for systems and synthetic biology seeks to uncover the evolutionary design principles of cellular networks and populations and to exploit them for applications. To do so they are developing a framework to effectively combine comparative functional genomics, quantitative measurement of cellular dynamics, biophysical modeling of cellular networks, and cellular circuit design to ultimately facilitate applications in health, the environment, and bioenergy.