Monthly Archives: January 2017

UHVDC and China

Credit: Economist Article about UHVDC and China

A greener grid
China’s embrace of a new electricity-transmission technology holds lessons for others
The case for high-voltage direct-current connectors
Jan 14th 2017

YOU cannot negotiate with nature. From the offshore wind farms of the North Sea to the solar panels glittering in the Atacama desert, renewable energy is often generated in places far from the cities and industrial centres that consume it. To boost renewables and drive down carbon-dioxide emissions, a way must be found to send energy over long distances efficiently.

The technology already exists (see article). Most electricity is transmitted today as alternating current (AC), which works well over short and medium distances. But transmission over long distances requires very high voltages, which can be tricky for AC systems. Ultra-high-voltage direct-current (UHVDC) connectors are better suited to such spans. These high-capacity links not only make the grid greener, but also make it more stable by balancing supply. The same UHVDC links that send power from distant hydroelectric plants, say, can be run in reverse when their output is not needed, pumping water back above the turbines.

Boosters of UHVDC lines envisage a supergrid capable of moving energy around the planet. That is wildly premature. But one country has grasped the potential of these high-capacity links. State Grid, China’s state-owned electricity utility, is halfway through a plan to spend $88bn on UHVDC lines between 2009 and 2020. It wants 23 lines in operation by 2030.

That China has gone furthest in this direction is no surprise. From railways to cities, China’s appetite for big infrastructure projects is legendary (see article). China’s deepest wells of renewable energy are remote—think of the sun-baked Gobi desert, the windswept plains of Xinjiang and the mountain ranges of Tibet where rivers drop precipitously. Concerns over pollution give the government an additional incentive to locate coal-fired plants away from population centres. But its embrace of the technology holds two big lessons for others. The first is a demonstration effect. China shows that UHVDC lines can be built on a massive scale. The largest, already under construction, will have the capacity to power Greater London almost three times over, and will span more than 3,000km.

The second lesson concerns the co-ordination problems that come with long-distance transmission. UHVDCs are as much about balancing interests as grids. The costs of construction are hefty. Utilities that already sell electricity at high prices are unlikely to welcome competition from suppliers of renewable energy; consumers in renewables-rich areas who buy electricity at low prices may balk at the idea of paying more because power is being exported elsewhere. Reconciling such interests is easier the fewer the utilities involved—and in China, State Grid has a monopoly.

That suggests it will be simpler for some countries than others to follow China’s lead. Developing economies that lack an established electricity infrastructure have an advantage. Solar farms on Africa’s plains and hydroplants on its powerful rivers can use UHVDC lines to get energy to growing cities. India has two lines on the drawing-board, and should have more.

Things are more complicated in the rich world. Europe’s utilities work pretty well together but a cross-border UHVDC grid will require a harmonised regulatory framework. America is the biggest anomaly. It is a continental-sized economy with the wherewithal to finance UHVDCs. It is also horribly fragmented. There are 3,000 utilities, each focused on supplying power to its own customers. Consumers a few states away are not a priority, no matter how much sense it might make to send them electricity. A scheme to connect the three regional grids in America is stuck. The only way that America will create a green national grid will be if the federal government throws its weight behind it.

Live wire
Building a UHVDC network does not solve every energy problem. Security of supply remains an issue, even within national borders: any attacker who wants to disrupt the electricity supply to China’s east coast will soon have a 3,000km-long cable to strike. Other routes to a cleaner grid are possible, such as distributed solar power and battery storage. But to bring about a zero-carbon grid, UHVDC lines will play a role. China has its foot on the gas. Others should follow.
This article appeared in the Leaders section of the print edition under the headline “A greener grid”

Fixed Costs of the Grid … 55%?


“Distributed generation” (DG) is what the electric utility industry calls solar panels, wind turbines, etc.

The article points out what is well-known: even with aggressive use of solar, any DG customer still needs the grid ….. at least this is true until a reasonable cost methodology for storing electricity at the point of generation comes on-line (at which time perhaps a true “off-grid” location is possible).

So …. for a DG customer …. the grid becomes a back-up, a source of power when the sun does not shine, the wind does not blow, etc.

So the fairness question is: should a DG customer pay for their fair share of the grid? Asked this way, the answer is obvious: yes. Just as people pay for insurance, people should be asked to pay for the cost of the grid.

Unfortunately, these costs are astronomical. This paper claims that they are 55% of total costs!

“In this example, the typical residential customer consumes, on average, about 1000 kWh per month and pays an average monthly bill of about $110 (based on EIA data). About half of that bill (i.e., $60 per month) covers charges related to the non-energy services provided by the grid….”
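A quick sanity check on the quoted figures (a sketch; the numbers come straight from the quote above, nothing else is assumed):

```python
# Back-of-the-envelope check of the quoted EIA-based example.
monthly_kwh = 1000      # typical residential consumption per month
monthly_bill = 110.0    # average monthly bill, dollars
grid_charge = 60.0      # non-energy (grid) portion of the bill, dollars

grid_share = grid_charge / monthly_bill
energy_cost_per_kwh = (monthly_bill - grid_charge) / monthly_kwh

print(f"Grid share of bill: {grid_share:.0%}")           # ~55%
print(f"Energy-only cost: ${energy_cost_per_kwh:.3f}/kWh")
```

So the paper's "55%" is simply $60 of $110, and the energy itself costs this hypothetical customer only about five cents per kWh.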

Batteries Update

New York Times article on big batteries

Notes from the article: Susan Kennedy, a former state utility regulator, knows a lot about this. She now runs an energy-storage start-up.

AES has the contract. This is one of three major installations in Southern California.

This one is 130 miles south east of Aliso Canyon, the site of the major gas leak in 2015.

The second installation is in Escondido, California, 30 miles north of San Diego. It will be the largest of its kind in the world.

The third is being built by Tesla – for Southern California Edison – near Chino, California.

AES has two executives, Chris Shelton and John Zahurancik, who have driven the project since 2006. Their inspiration came from a professor’s paper that predicted a future dominated by electric cars. When parked, the cars could be connected to the grid so that their batteries could act as storage devices to help balance electricity demand.

They are buying the batteries that they are installing from manufacturers like Samsung, LG, and Panasonic.

Voice Recognition – Update from the Economist

Excellent comment on the state of the art of voice recognition in the Economist. The entire article is below.

I think it’s fair to say, as the article does, “we’re in 1994 for voice.” In other words, just like the internet had core technology in place in 1994, no one really had a clue about what it would ultimately mean to society.

My guess is …. it will be a game-changer of the first order.

ECHO, SIRI, CORTANA – the beginning of a new era!

Just like the GUI, the mouse, and WINDOWS allowed computers to go mainstream, my instinct is that removing the keyboard as a requirement will take the computer from a daily tool to a second-by-second tool. The Apple Watch, which looks rather benign right now, could easily become the central means of communication. And the hard-to-use keyboard on the iPhone will become increasingly a white elephant – rarely used and quaint.

Language: Finding a voice
Computers have got much better at translation, voice recognition and speech synthesis, says Lane Greene. But they still don’t understand the meaning of language.

“I’M SORRY, Dave. I’m afraid I can’t do that.” With chilling calm, HAL 9000, the on-board computer in “2001: A Space Odyssey”, refuses to open the doors to Dave Bowman, an astronaut who had ventured outside the ship. HAL’s decision to turn on his human companion reflected a wave of fear about intelligent computers.

When the film came out in 1968, computers that could have proper conversations with humans seemed nearly as far away as manned flight to Jupiter. Since then, humankind has progressed quite a lot farther with building machines that it can talk to, and that can respond with something resembling natural speech. Even so, communication remains difficult. If “2001” had been made to reflect the state of today’s language technology, the conversation might have gone something like this: “Open the pod bay doors, Hal.” “I’m sorry, Dave. I didn’t understand the question.” “Open the pod bay doors, Hal.” “I have a list of eBay results about pod doors, Dave.”

Creative and truly conversational computers able to handle the unexpected are still far off. Artificial-intelligence (AI) researchers can only laugh when asked about the prospect of an intelligent HAL, Terminator or Rosie (the sassy robot housekeeper in “The Jetsons”). Yet although language technologies are nowhere near ready to replace human beings, except in a few highly routine tasks, they are at last about to become good enough to be taken seriously. They can help people spend more time doing interesting things that only humans can do. After six decades of work, much of it with disappointing outcomes, the past few years have produced results much closer to what early pioneers had hoped for.

Speech recognition has made remarkable advances. Machine translation, too, has gone from terrible to usable for getting the gist of a text, and may soon be good enough to require only modest editing by humans. Computerised personal assistants, such as Apple’s Siri, Amazon’s Alexa, Google Now and Microsoft’s Cortana, can now take a wide variety of questions, structured in many different ways, and return accurate and useful answers in a natural-sounding voice. Alexa can even respond to a request to “tell me a joke”, but only by calling upon a database of corny quips. Computers lack a sense of humour.

When Apple introduced Siri in 2011 it was frustrating to use, so many people gave up. Only around a third of smartphone owners use their personal assistants regularly, even though 95% have tried them at some point, according to Creative Strategies, a consultancy. Many of those discouraged users may not realise how much these assistants have improved.

In 1966 John Pierce was working at Bell Labs, the research arm of America’s telephone monopoly. Having overseen the team that had built the first transistor and the first communications satellite, he enjoyed a sterling reputation, so he was asked to take charge of a report on the state of automatic language processing for the National Academy of Sciences. In the period leading up to this, scholars had been promising automatic translation between languages within a few years.

But the report was scathing. Reviewing almost a decade of work on machine translation and automatic speech recognition, it concluded that the time had come to spend money “hard-headedly toward important, realistic and relatively short-range goals”—another way of saying that language-technology research had overpromised and underdelivered. In 1969 Pierce wrote that both the funders and eager researchers had often fooled themselves, and that “no simple, clear, sure knowledge is gained.” After that, America’s government largely closed the money tap, and research on language technology went into hibernation for two decades.

The story of how it emerged from that hibernation is both salutary and surprisingly workaday, says Mark Liberman. As professor of linguistics at the University of Pennsylvania and head of the Linguistic Data Consortium, a huge trove of texts and recordings of human language, he knows a thing or two about the history of language technology. In the bad old days researchers kept their methods in the dark and described their results in ways that were hard to evaluate. But beginning in the 1980s, Charles Wayne, then at America’s Defence Advanced Research Projects Agency, encouraged them to try another approach: the “common task”.

Many early approaches to language technology got stuck in a conceptual cul-de-sac

Step by step
Researchers would agree on a common set of practices, whether they were trying to teach computers speech recognition, speaker identification, sentiment analysis of texts, grammatical breakdown, language identification, handwriting recognition or anything else. They would set out the metrics they were aiming to improve on, share the data sets used to train their software and allow their results to be tested by neutral outsiders. That made the process far more transparent. Funding started up again and language technologies began to improve, though very slowly.

Many early approaches to language technology—and particularly translation—got stuck in a conceptual cul-de-sac: the rules-based approach. In translation, this meant trying to write rules to analyse the text of a sentence in the language of origin, breaking it down into a sort of abstract “interlanguage” and rebuilding it according to the rules of the target language. These approaches showed early promise. But language is riddled with ambiguities and exceptions, so such systems were hugely complicated and easily broke down when tested on sentences beyond the simple set they had been designed for.

Nearly all language technologies began to get a lot better with the application of statistical methods, often called a “brute force” approach. This relies on software scouring vast amounts of data, looking for patterns and learning from precedent. For example, in parsing language (breaking it down into its grammatical components), the software learns from large bodies of text that have already been parsed by humans. It uses what it has learned to make its best guess about a previously unseen text. In machine translation, the software scans millions of words already translated by humans, again looking for patterns. In speech recognition, the software learns from a body of recordings and the transcriptions made by humans.

Thanks to the growing power of processors, falling prices for data storage and, most crucially, the explosion in available data, this approach eventually bore fruit. Mathematical techniques that had been known for decades came into their own, and big companies with access to enormous amounts of data were poised to benefit. People who had been put off by the hilariously inappropriate translations offered by online tools like BabelFish began to have more faith in Google Translate. Apple persuaded millions of iPhone users to talk not only on their phones but to them.

The final advance, which began only about five years ago, came with the advent of deep learning through digital neural networks (DNNs). These are often touted as having qualities similar to those of the human brain: “neurons” are connected in software, and connections can become stronger or weaker in the process of learning.

But Nils Lenke, head of research for Nuance, a language-technology company, explains matter-of-factly that “DNNs are just another kind of mathematical model,” the basis of which had been well understood for decades. What changed was the hardware being used. Almost by chance, DNN researchers discovered that the graphical processing units (GPUs) used to render graphics fluidly in applications like video games were also brilliant at handling neural networks. In computer graphics, basic small shapes move according to fairly simple rules, but there are lots of shapes and many rules, requiring vast numbers of simple calculations. The same GPUs are used to fine-tune the weights assigned to “neurons” in DNNs as they scour data to learn. The technique has already produced big leaps in quality for all kinds of deep learning, including deciphering handwriting, recognising faces and classifying images. Now DNNs are helping to improve all manner of language technologies, often bringing enhancements of up to 30%. That has shifted language technology from usable at a pinch to really rather good. But so far no one has quite worked out what will move it on from merely good to reliably great.

Speech recognition: I hear you
Computers have made huge strides in understanding human speech

WHEN a person speaks, air is forced out through the lungs, making the vocal cords vibrate, which sends out characteristic wave patterns through the air. The features of the sounds depend on the arrangement of the vocal organs, especially the tongue and the lips, and the characteristic nature of the sounds comes from peaks of energy in certain frequencies. The vowels have frequencies called “formants”, two of which are usually enough to differentiate one vowel from another. For example, the vowel in the English word “fleece” has its first two formants at around 300Hz and 3,000Hz. Consonants have their own characteristic features.

In principle, it should be easy to turn this stream of sound into transcribed speech. As in other language technologies, machines that recognise speech are trained on data gathered earlier. In this instance, the training data are sound recordings transcribed to text by humans, so that the software has both a sound and a text input. All it has to do is match the two. It gets better and better at working out how to transcribe a given chunk of sound in the same way as humans did in the training data. The traditional matching approach was a statistical technique called a hidden Markov model (HMM), making guesses based on what was done before. More recently speech recognition has also gained from deep learning.
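The hidden Markov model idea can be sketched in a few lines. The sketch below runs the standard Viterbi algorithm over a toy model: the states, observation labels and every probability are invented for illustration, and a real recogniser would learn these numbers from transcribed recordings rather than hard-code them.

```python
# Toy Viterbi decode: find the most likely phoneme sequence for a
# sequence of acoustic frames. All probabilities are illustrative.
states = ["s", "p", "i", "n"]
start_p = {"s": 0.7, "p": 0.1, "i": 0.1, "n": 0.1}
trans_p = {  # P(next phoneme | current phoneme)
    "s": {"s": 0.1, "p": 0.7, "i": 0.1, "n": 0.1},
    "p": {"s": 0.1, "p": 0.1, "i": 0.7, "n": 0.1},
    "i": {"s": 0.1, "p": 0.1, "i": 0.1, "n": 0.7},
    "n": {"s": 0.25, "p": 0.25, "i": 0.25, "n": 0.25},
}
emit_p = {  # P(acoustic frame | phoneme); frames labelled a0..a3
    "s": {"a0": 0.8, "a1": 0.1, "a2": 0.05, "a3": 0.05},
    "p": {"a0": 0.1, "a1": 0.8, "a2": 0.05, "a3": 0.05},
    "i": {"a0": 0.05, "a1": 0.05, "a2": 0.8, "a3": 0.1},
    "n": {"a0": 0.05, "a1": 0.05, "a2": 0.1, "a3": 0.8},
}

def viterbi(obs):
    # V[t][s] = (best probability of a path ending in state s, that path)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for frame in obs[1:]:
        row = {}
        for s in states:
            prob, path = max(
                (V[-1][prev][0] * trans_p[prev][s] * emit_p[s][frame],
                 V[-1][prev][1])
                for prev in states
            )
            row[s] = (prob, path + [s])
        V.append(row)
    return max(V[-1].values())[1]

print(viterbi(["a0", "a1", "a2", "a3"]))  # ['s', 'p', 'i', 'n']
```

Given four frames that each most resemble one phoneme, the decoder recovers the word “spin” — the "guesses based on what was done before" that the article describes.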

English has about 44 “phonemes”, the units that make up the sound system of a language. P and b are different phonemes, because they distinguish words like pat and bat. But in English p with a puff of air, as in “party”, and p without a puff of air, as in “spin”, are not different phonemes, though they are in other languages. If a computer hears the phonemes s, p, i and n back to back, it should be able to recognise the word “spin”.

But the nature of live speech makes this difficult for machines. Sounds are not pronounced individually, one phoneme after the other; they mostly come in a constant stream, and finding the boundaries is not easy. Phonemes also differ according to the context. (Compare the l sound at the beginning of “light” with that at the end of “full”.)

Speakers differ in timbre and pitch of voice, and in accent. Conversation is far less clear than careful dictation. People stop and restart much more often than they realise.

All the same, technology has gradually mitigated many of these problems, so error rates in speech-recognition software have fallen steadily over the years—and then sharply with the introduction of deep learning. Microphones have got better and cheaper. With ubiquitous wireless internet, speech recordings can easily be beamed to computers in the cloud for analysis, and even smartphones now often have computers powerful enough to carry out this task.

Bear arms or bare arms?
Perhaps the most important feature of a speech-recognition system is its set of expectations about what someone is likely to say, or its “language model”. Like other training data, the language models are based on large amounts of real human speech, transcribed into text. When a speech-recognition system “hears” a stream of sound, it makes a number of guesses about what has been said, then calculates the odds that it has found the right one, based on the kinds of words, phrases and clauses it has seen earlier in the training text.

At the level of phonemes, each language has strings that are permitted (in English, a word may begin with str-, for example) or banned (an English word cannot start with tsr-). The same goes for words. Some strings of words are more common than others. For example, “the” is far more likely to be followed by a noun or an adjective than by a verb or an adverb. In making guesses about homophones, the computer will have remembered that in its training data the phrase “the right to bear arms” came up much more often than “the right to bare arms”, and will thus have made the right guess.
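The bear-arms/bare-arms decision reduces to a table lookup over bigram counts. A minimal sketch, with counts invented for illustration (a real language model would estimate them from a large transcribed corpus):

```python
# Toy language-model disambiguation of homophones.
from collections import Counter

# Bigram counts from a hypothetical training corpus. Counter returns 0
# for unseen pairs, which stands in for "never observed in training".
bigrams = Counter({
    ("bear", "arms"): 120,
    ("bare", "arms"): 3,
    ("bear", "market"): 40,
})

def pick_spelling(candidates, next_word):
    """Choose the candidate most often seen before next_word in training."""
    return max(candidates, key=lambda w: bigrams[(w, next_word)])

print(pick_spelling(["bear", "bare"], "arms"))  # bear
```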

Training on a specific speaker greatly cuts down on the software’s guesswork. Just a few minutes of reading training text into software like Dragon Dictate, made by Nuance, produces a big jump in accuracy. For those willing to train the software for longer, the improvement continues to something close to 99% accuracy (meaning that of each hundred words of text, not more than one is wrongly added, omitted or changed). A good microphone and a quiet room help.
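The accuracy figure above is one minus the word-error rate: additions, omissions and changes counted against the reference transcript. That is just word-level edit distance divided by the length of the reference, which can be sketched directly:

```python
# Word-error rate: insertions, deletions and substitutions per
# reference word, computed with the classic edit-distance recurrence.
def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[-1][-1] / len(ref)

# One substitution in five words: 20% WER.
print(word_error_rate("the right to bear arms",
                      "the right to bare arms"))  # 0.2
```

By this measure, "close to 99% accuracy" means a WER of about 0.01.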

Advance knowledge of what kinds of things the speaker might be talking about also increases accuracy. Words like “phlebitis” and “gastrointestinal” are not common in general discourse, and uncommon words are ranked lower in the probability tables the software uses to guess what it has heard. But these words are common in medicine, so creating software trained to look out for such words considerably improves the result. This can be done by feeding the system a large number of documents written by the speaker whose voice is to be recognised; common words and phrases can be extracted to improve the system’s guesses.

As with all other areas of language technology, deep learning has sharply brought down error rates. In October Microsoft announced that its latest speech-recognition system had achieved parity with human transcribers in recognising the speech in the Switchboard Corpus, a collection of thousands of recorded conversations in which participants are talking with a stranger about a randomly chosen subject.

Error rates on the Switchboard Corpus are a widely used benchmark, so claims of quality improvements can be easily compared. Fifteen years ago quality had stalled, with word-error rates of 20-30%. Microsoft’s latest system, which has six neural networks running in parallel, has reached 5.9% (see chart), the same as a human transcriber’s. Xuedong Huang, Microsoft’s chief speech scientist, says that he expected it to take two or three years to reach parity with humans. It got there in less than one.

The improvements in the lab are now being applied to products in the real world. More and more cars are being fitted with voice-activated controls of various kinds; the vocabulary involved is limited (there are only so many things you might want to say to your car), which ensures high accuracy. Microphones—or often arrays of microphones with narrow fields of pick-up—are getting better at identifying the relevant speaker among a group.

Some problems remain. Children and elderly speakers, as well as people moving around in a room, are harder to understand. Background noise remains a big concern; if it is different from that in the training data, the software finds it harder to generalise from what it has learned. So Microsoft, for example, offers businesses a product called CRIS that lets users customise speech-recognition systems for the background noise, special vocabulary and other idiosyncrasies they will encounter in that particular environment. That could be useful anywhere from a noisy factory floor to a care home for the elderly.

But for a computer to know what a human has said is only a beginning. Proper interaction between the two, of the kind that comes up in almost every science-fiction story, calls for machines that can speak back.

Hasta la vista, robot voice
Machines are starting to sound more like humans
“I’LL be back.” “Hasta la vista, baby.” Arnold Schwarzenegger’s Teutonic drone in the “Terminator” films is world-famous. But in this instance film-makers looking into the future were overly pessimistic. Some applications do still feature a monotonous “robot voice”, but that is changing fast.

Examples of speech synthesis were embedded here as audio clips: a basic sample and an advanced sample from the OSX synthesiser, and a sample from Amazon’s “Polly” synthesiser.

Creating speech is roughly the inverse of understanding it. Again, it requires a basic model of the structure of speech. What are the sounds in a language, and how do they combine? What words does it have, and how do they combine in sentences? These are well-understood questions, and most systems can now generate sound waves that are a fair approximation of human speech, at least in short bursts.

Heteronyms require special care. How should a computer pronounce a word like “lead”, which can be a present-tense verb or a noun for a heavy metal, pronounced quite differently? Once again a language model can make accurate guesses: “Lead us not into temptation” can be parsed for its syntax, and once the software has worked out that the first word is almost certainly a verb, it can cause it to be pronounced to rhyme with “reed”, not “red”.
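The lookup at the end of that pipeline is simple once a part-of-speech tag is available. In the sketch below the tagger is faked with a one-line heuristic, and the pronunciation labels are invented; a real system would run a full syntactic parse and use a pronouncing dictionary:

```python
# Sketch: choosing a heteronym's pronunciation from its part of speech.
PRONUNCIATIONS = {
    ("lead", "VERB"): "LEED",  # rhymes with "reed"
    ("lead", "NOUN"): "LED",   # rhymes with "red"
}

def guess_pos(word, sentence):
    # Toy heuristic standing in for a real parser: sentence-initial
    # "lead" is treated as an imperative verb, otherwise a noun.
    return "VERB" if sentence.split()[0].lower() == word else "NOUN"

def pronounce(word, sentence):
    return PRONUNCIATIONS[(word, guess_pos(word, sentence))]

print(pronounce("lead", "Lead us not into temptation"))  # LEED
print(pronounce("lead", "The pipe is made of lead"))     # LED
```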

Traditionally, text-to-speech models have been “concatenative”, consisting of very short segments recorded by a human and then strung together as in the acoustic model described above. More recently, “parametric” models have been generating raw audio without the need to record a human voice, which makes these systems more flexible but less natural-sounding.

DeepMind, an artificial-intelligence company bought by Google in 2014, has announced a new way of synthesising speech, again using deep neural networks. The network is trained on recordings of people talking, and on the texts that match what they say. Given a text to reproduce as speech, it churns out a far more fluent and natural-sounding voice than the best concatenative and parametric approaches.

The last step in generating speech is giving it prosody—generally, the modulation of speed, pitch and volume to convey an extra (and critical) channel of meaning. In English, “a German teacher”, with the stress on “teacher”, can teach anything but must be German. But “a German teacher” with the emphasis on “German” is usually a teacher of German (and need not be German). Words like prepositions and conjunctions are not usually stressed. Getting machines to put the stresses in the correct places is about 50% solved, says Mark Liberman of the University of Pennsylvania.

Many applications do not require perfect prosody. A satellite-navigation system giving instructions on where to turn uses just a small number of sentence patterns, and prosody is not important. The same goes for most single-sentence responses given by a virtual assistant on a smartphone.

But prosody matters when someone is telling a story. Pitch, speed and volume can be used to pass quickly over things that are already known, or to build interest and tension for new information. Myriad tiny clues communicate the speaker’s attitude to his subject. The phrase “a German teacher”, with stress on the word “German”, may, in the context of a story, not be a teacher of German, but a teacher being explicitly contrasted with a teacher who happens to be French or British.

Text-to-speech engines are not much good at using context to provide such accentuation, and where they do, it rarely extends beyond a single sentence. When Alexa, the assistant in Amazon’s Echo device, reads a news story, her prosody is jarringly un-humanlike. Talking computers have yet to learn how to make humans want to listen.

Machine translation: Beyond Babel
Computer translations have got strikingly better, but still need human input
IN “STAR TREK” it was a hand-held Universal Translator; in “The Hitchhiker’s Guide to the Galaxy” it was the Babel Fish popped conveniently into the ear. In science fiction, the meeting of distant civilisations generally requires some kind of device to allow them to talk. High-quality automated translation seems even more magical than other kinds of language technology because many humans struggle to speak more than one language, let alone translate from one to another.

Computer translation is still known as “machine translation”
The idea has been around since the 1950s, and computerised translation is still known by the quaint moniker “machine translation” (MT). It goes back to the early days of the cold war, when American scientists were trying to get computers to translate from Russian. They were inspired by the code-breaking successes of the second world war, which had led to the development of computers in the first place. To them, a scramble of Cyrillic letters on a page of Russian text was just a coded version of English, and turning it into English was just a question of breaking the code.

Scientists at IBM and Georgetown University were among those who thought that the problem would be cracked quickly. Having programmed just six rules and a vocabulary of 250 words into a computer, they gave a demonstration in New York on January 7th 1954 and proudly produced 60 automated translations, including that of “Mi pyeryedayem mislyi posryedstvom ryechyi,” which came out correctly as “We transmit thoughts by means of speech.” Leon Dostert of Georgetown, the lead scientist, breezily predicted that fully realised MT would be “an accomplished fact” in three to five years.

Instead, after more than a decade of work, the report in 1966 by a committee chaired by John Pierce, mentioned in the introduction to this report, recorded bitter disappointment with the results and urged researchers to focus on narrow, achievable goals such as automated dictionaries. Government-sponsored work on MT went into near-hibernation for two decades. What little was done was carried out by private companies. The most notable of them was Systran, which provided rough translations, mostly to America’s armed forces.
La plume de mon ordinateur
The scientists got bogged down by their rules-based approach. Having done relatively well with their six-rule system, they came to believe that if they programmed in more rules, the system would become more sophisticated and subtle. Instead, it became more likely to produce nonsense. Adding extra rules, in the modern parlance of software developers, did not “scale”.

Besides the difficulty of programming grammar’s many rules and exceptions, some early observers noted a conceptual problem. The meaning of a word often depends not just on its dictionary definition and the grammatical context but the meaning of the rest of the sentence. Yehoshua Bar-Hillel, an Israeli MT pioneer, realised that “the pen is in the box” and “the box is in the pen” would require different translations for “pen”: any pen big enough to hold a box would have to be an animal enclosure, not a writing instrument.

How could machines be taught enough rules to make this kind of distinction? They would have to be provided with some knowledge of the real world, a task far beyond the machines or their programmers at the time. Two decades later, IBM stumbled on an approach that would revive optimism about MT. Its Candide system was the first serious attempt to use statistical probabilities rather than rules devised by humans for translation. Statistical, “phrase-based” machine translation, like speech recognition, needed training data to learn from. Candide used Canada’s Hansard, which publishes that country’s parliamentary debates in French and English, providing a huge amount of data for that time. The phrase-based approach would ensure that the translation of a word would take the surrounding words properly into account.

But quality did not take a leap until Google, which had set itself the goal of indexing the entire internet, decided to use those data to train its translation engines; in 2007 it switched from a rules-based engine (provided by Systran) to its own statistics-based system. To build it, Google trawled about a trillion web pages, looking for any text that seemed to be a translation of another—for example, pages designed identically but with different words, and perhaps a hint such as the address of one page ending in /en and the other ending in /fr. According to Macduff Hughes, chief engineer on Google Translate, a simple approach using vast amounts of data seemed more promising than a clever one with fewer data.

Training on parallel texts (which linguists call corpora, the plural of corpus) creates a “translation model” that generates not one but a series of possible translations in the target language. The next step is running these possibilities through a monolingual language model in the target language. This is, in effect, a set of expectations about what a well-formed and typical sentence in the target language is likely to be. Single-language models are not too hard to build. (Parallel human-translated corpora are hard to come by; large amounts of monolingual training data are not.) As with the translation model, the language model uses a brute-force statistical approach to learn from the training data, then ranks the outputs from the translation model in order of plausibility.
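The division of labour between the two models can be sketched in a few lines. This is a toy illustration with invented probabilities, not any real system’s numbers: the translation model proposes candidates, the language model scores their fluency, and the system picks the combination with the highest joint score.

```python
from math import log

# Invented translation-model scores: how faithfully each English candidate
# renders the French source sentence.
translation_model = {
    "the box is in the pen":    0.40,
    "the box is in the corral": 0.35,
    "the box is into pen":      0.25,
}

# Invented language-model scores: how plausible each candidate is
# as a well-formed English sentence, learned from monolingual text.
language_model = {
    "the box is in the pen":    0.30,
    "the box is in the corral": 0.20,
    "the box is into pen":      0.01,
}

# Rank candidates by translation score times fluency score (summed in
# log space), the classic "noisy channel" formulation of statistical MT.
def score(sentence):
    return log(translation_model[sentence]) + log(language_model[sentence])

best = max(translation_model, key=score)
print(best)  # "the box is in the pen"
```

Note that the disfluent candidate is eliminated almost entirely by the language model, even though the translation model considered it a contender.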

Statistical machine translation rekindled optimism in the field. Internet users quickly discovered that Google Translate was far better than the rules-based online engines they had used before, such as BabelFish. Such systems still make mistakes—sometimes minor, sometimes hilarious, sometimes so serious or so many as to make nonsense of the result. And language pairs like Chinese-English, which are unrelated and structurally quite different, make accurate translation harder than pairs of related languages like English and German. But more often than not, Google Translate and its free online competitors, such as Microsoft’s Bing Translator, offer a usable approximation.

Such systems are set to get better, again with the help of deep learning from digital neural networks. The Association for Computational Linguistics has been holding workshops on MT every summer since 2006. One of the events is a competition between MT engines turned loose on a collection of news text. In August 2016, in Berlin, neural-net-based MT systems were the top performers (out of 102), a first.
Now Google has released its own neural-net-based engine for eight language pairs, closing much of the quality gap between its old system and a human translator.
This is especially true for closely related languages (like the big European ones) with lots of available training data. The results are still distinctly imperfect, but far smoother and more accurate than before. Translations between English and (say) Chinese and Korean are not as good yet, but the neural system has brought a clear improvement here too.

What machines cannot yet do is have true conversations
The Coca-Cola factor

Neural-network-based translation actually uses two networks. One is an encoder. Each word of an input sentence is converted into a multidimensional vector (a series of numerical values), and the encoding of each new word takes into account what has happened earlier in the sentence. Marcello Federico of Italy’s Fondazione Bruno Kessler, a private research organisation, uses an intriguing analogy to compare neural-net translation with the phrase-based kind. The latter, he says, is like describing Coca-Cola in terms of sugar, water, caffeine and other ingredients. By contrast, the former encodes features such as liquidness, darkness, sweetness and fizziness.
Once the source sentence is encoded, a decoder network generates a word-for-word translation, once again taking account of the immediately preceding word. This can cause problems when the meaning of words such as pronouns depends on words mentioned much earlier in a long sentence. This problem is mitigated by an “attention model”, which helps maintain focus on other words in the sentence outside the immediate context.
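The attention step above can be sketched numerically. This is a bare-bones illustration with random stand-in vectors, not the architecture of any production system: each source word has an encoding, the decoder’s current state is scored against every source position, and a softmax turns those scores into weights for a “context” mix of the whole sentence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for learned word encodings: one 4-dimensional vector per
# source word (real systems use hundreds of dimensions).
source_words = ["the", "pen", "is", "in", "the", "box"]
encoder_states = rng.normal(size=(len(source_words), 4))

# The decoder's state while it is producing the current target word.
decoder_state = rng.normal(size=4)

# Attention: score every source position against the decoder state,
# then softmax the scores into weights that sum to one.
scores = encoder_states @ decoder_state
weights = np.exp(scores) / np.exp(scores).sum()

# Context vector: a weighted mix of all encoder states, letting the
# decoder "look back" at relevant source words however far away they are.
context = weights @ encoder_states

for word, w in zip(source_words, weights):
    print(f"{word}: {w:.2f}")
```

The weights are what lets a long-distance pronoun find its antecedent: a position far back in the sentence can still dominate the mix if its score is high.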

Neural-network translation requires heavy-duty computing power, both for the original training of the system and in use. The heart of such a system can be the GPUs that made the deep-learning revolution possible, or specialised hardware like Google’s Tensor Processing Units (TPUs). Smaller translation companies and researchers usually rent this kind of processing power in the cloud. But the data sets used in neural-network training do not need to be as extensive as those for phrase-based systems, which should give smaller outfits a chance to compete with giants like Google.
Fully automated, high-quality machine translation is still a long way off. For now, several problems remain. All current machine translation proceeds sentence by sentence. If the translation of a sentence depends on the meaning of earlier ones, automated systems will make mistakes. Long sentences, despite tricks like the attention model, can be hard to translate. And neural-net-based systems in particular struggle with rare words.

Training data, too, are scarce for many language pairs. They are plentiful between European languages, since the European Union’s institutions churn out vast amounts of material translated by humans between the EU’s 24 official languages. But for smaller languages such resources are thin on the ground. For example, there are few Greek-Urdu parallel texts available on which to train a translation engine. So a system that claims to offer such translation is in fact usually running it through a bridging language, nearly always English. That involves two translations rather than one, multiplying the chance of errors.
Even if machine translation is not yet perfect, technology can already help humans translate much more quickly and accurately. “Translation memories”, software that stores already translated words and segments, first came into use as early as the 1980s. For someone who frequently translates the same kind of material (such as instruction manuals), they serve up the bits that have already been translated, saving lots of duplication and time.

A similar trick is to train MT engines on text dealing with a narrow real-world domain, such as medicine or the law. As software techniques are refined and computers get faster, training becomes easier and quicker. Free software such as Moses, developed with the support of the EU and used by some of its in-house translators, can be trained by anyone with parallel corpora to hand. A specialist in medical translation, for instance, can train the system on medical translations only, which makes them far more accurate.
At the other end of linguistic sophistication, an MT engine can be optimised for the shorter and simpler language people use in speech to spew out rough but near-instantaneous speech-to-speech translations. This is what Microsoft’s Skype Translator does. Its quality is improved by being trained on speech (things like film subtitles and common spoken phrases) rather than the kind of parallel text produced by the European Parliament.

Translation management has also benefited from innovation, with clever software allowing companies quickly to combine the best of MT, translation memory, customisation by the individual translator and so on. Translation-management software aims to cut out the agencies that have been acting as middlemen between clients and an army of freelance translators. Jack Welde, the founder of Smartling, an industry favourite, says that in future translation customers will choose how much human intervention is needed for a translation. A quick automated one will do for low-stakes content with a short life, but the most important content will still require a fully hand-crafted and edited version. Noting that MT has both determined boosters and committed detractors, Mr Welde says he is neither: “If you take a dogmatic stance, you’re not optimised for the needs of the customer.”

Translation software will go on getting better. Not only will engineers keep tweaking their statistical models and neural networks, but users themselves will make improvements to their own systems. For example, a small but much-admired startup, Lilt, uses phrase-based MT as the basis for a translation, but an easy-to-use interface allows the translator to correct and improve the MT system’s output. Every time this is done, the corrections are fed back into the translation engine, which learns and improves in real time. Users can build several different memories—a medical one, a financial one and so on—which will help with future translations in that specialist field.

TAUS, an industry group, recently issued a report on the state of the translation industry saying that “in the past few years the translation industry has burst with new tools, platforms and solutions.” Last year Jaap van der Meer, TAUS’s founder and director, wrote a provocative blogpost entitled “The Future Does Not Need Translators”, arguing that the quality of MT will keep improving, and that for many applications less-than-perfect translation will be good enough.

The “translator” of the future is likely to be more like a quality-control expert, deciding which texts need the most attention to detail and editing the output of MT software. That may be necessary because computers, no matter how sophisticated they have become, cannot yet truly grasp what a text means.

Meaning and machine intelligence: What are you talking about?
Machines cannot conduct proper conversations with humans because they do not understand the world

IN “BLACK MIRROR”, a British science-fiction satire series set in a dystopian near future, a young woman loses her boyfriend in a car accident. A friend offers to help her deal with her grief. The dead man was a keen social-media user, and his archived accounts can be used to recreate his personality. Before long she is messaging with a facsimile, then speaking to one. As the system learns to mimic him ever better, he becomes increasingly real.

This is not quite as bizarre as it sounds. Computers today can already produce an eerie echo of human language if fed with the appropriate material. What they cannot yet do is have true conversations. Truly robust interaction between man and machine would require a broad understanding of the world. In the absence of that, computers are not able to talk about a wide range of topics, follow long conversations or handle surprises.

Machines trained to do a narrow range of tasks, though, can perform surprisingly well. The most obvious examples are the digital assistants created by the technology giants. Users can ask them questions in a variety of natural ways: “What’s the temperature in London?” “How’s the weather outside?” “Is it going to be cold today?” The assistants know a few things about users, such as where they live and who their family are, so they can be personal, too: “How’s my commute looking?” “Text my wife I’ll be home in 15 minutes.”
And they get better with time. Apple’s Siri receives 2bn requests per week, which (after being anonymised) are used for further teaching. For example, Apple says Siri knows every possible way that users ask about a sports score. She also has a delightful answer for children who ask about Father Christmas. Microsoft learned from some of its previous natural-language platforms that about 10% of human interactions were “chitchat”, from “tell me a joke” to “who’s your daddy?”, and used such chat to teach its digital assistant, Cortana.

The writing team for Cortana includes two playwrights, a poet, a screenwriter and a novelist. Google hired writers from Pixar, an animated-film studio, and The Onion, a satirical newspaper, to make its new Google Assistant funnier. No wonder people often thank their digital helpers for a job well done. The assistants’ replies range from “My pleasure, as always” to “You don’t need to thank me.”
Good at grammar

How do natural-language platforms know what people want? They not only recognise the words a person uses, but break down speech for both grammar and meaning. Grammar parsing is relatively advanced; it is the domain of the well-established field of “natural-language processing”. But meaning comes under the heading of “natural-language understanding”, which is far harder.

First, parsing. Most people are not very good at analysing the syntax of sentences, but computers have become quite adept at it, even though most sentences are ambiguous in ways humans are rarely aware of. Take a sign on a public fountain that says, “This is not drinking water.” Humans understand it to mean that the water (“this”) is not a certain kind of water (“drinking water”). But a computer might just as easily parse it to say that “this” (the fountain) is not at present doing something (“drinking water”).

As sentences get longer, the number of grammatically possible but nonsensical options multiplies exponentially. How can a machine parser know which is the right one? It helps for it to know that some combinations of words are more common than others: the phrase “drinking water” is widely used, so parsers trained on large volumes of English will rate those two words as likely to be joined in a noun phrase. And some structures are more common than others: “noun verb noun noun” may be much more common than “noun noun verb noun”. A machine parser can compute the overall probability of all combinations and pick the likeliest.

A “lexicalised” parser might do even better. Take the Groucho Marx joke, “One morning I shot an elephant in my pyjamas. How he got in my pyjamas, I’ll never know.” The first sentence is ambiguous (which makes the joke)—grammatically both “I” and “an elephant” can attach to the prepositional phrase “in my pyjamas”. But a lexicalised parser would recognise that “I [verb phrase] in my pyjamas” is far more common than “elephant in my pyjamas”, and so assign that parse a higher probability.
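The lexicalised decision can be reduced to a comparison of corpus counts. The numbers below are invented for illustration; a real parser would estimate them from millions of parsed sentences, but the logic is the same: attach the prepositional phrase to whichever head it co-occurs with more often.

```python
# Invented corpus counts: how often each head word appears with the
# prepositional phrase "in ... pyjamas" in training text.
corpus_counts = {
    ("shot", "in", "pyjamas"):     40,  # verb + "in pyjamas": plausible
    ("elephant", "in", "pyjamas"):  1,  # noun + "in pyjamas": rare
}

def attachment(verb, noun, prep, obj):
    """Attach the prepositional phrase to the more frequent head."""
    vp_count = corpus_counts.get((verb, prep, obj), 0)
    np_count = corpus_counts.get((noun, prep, obj), 0)
    return "verb phrase" if vp_count >= np_count else "noun phrase"

# "I shot an elephant in my pyjamas": the parser prefers attaching
# "in my pyjamas" to "shot", i.e. the sensible reading.
print(attachment("shot", "elephant", "in", "pyjamas"))  # "verb phrase"
```

Groucho’s joke works precisely because the improbable noun-phrase attachment is still grammatically available; the parser, like the listener, merely bets on the likelier one.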

But meaning is harder to pin down than syntax. “The boy kicked the ball” and “The ball was kicked by the boy” have the same meaning but a different structure. “Time flies like an arrow” can mean either that time flies in the way that an arrow flies, or that insects called “time flies” are fond of an arrow.

“Who plays Thor in ‘Thor’?” Your correspondent could not remember the beefy Australian who played the eponymous Norse god in the Marvel superhero film. But when he asked his iPhone, Siri came up with an unexpected reply: “I don’t see any movies matching ‘Thor’ playing in Thor, IA, US, today.” Thor, Iowa, with a population of 184, was thousands of miles away, and “Thor”, the film, has been out of cinemas for years. Siri parsed the question perfectly properly, but the reply was absurd, violating the rules of what linguists call pragmatics: the shared knowledge and understanding that people use to make sense of the often messy human language they hear. “Can you reach the salt?” is not a request for information but for salt. Natural-language systems have to be manually programmed to handle such requests as humans expect them, and not literally.

Multiple choice
Shared information is also built up over the course of a conversation, which is why digital assistants can struggle with twists and turns in conversations. Tell an assistant, “I’d like to go to an Italian restaurant with my wife,” and it might suggest a restaurant. But then ask, “is it close to her office?”, and the assistant must grasp the meanings of “it” (the restaurant) and “her” (the wife), which it will find surprisingly tricky. Nuance, the language-technology firm, which provides natural-language platforms to many other companies, is working on a “concierge” that can handle this type of challenge, but it is still a prototype.
Such a concierge must also offer only restaurants that are open. Linking requests to common sense (knowing that no one wants to be sent to a closed restaurant), as well as a knowledge of the real world (knowing which restaurants are closed), is one of the most difficult challenges for language technologies.

Common sense, an old observation goes, is uncommon enough in humans. Programming it into computers is harder still. Fernando Pereira of Google points out why. Automated speech recognition and machine translation have something in common: there are huge stores of data (recordings and transcripts for speech recognition, parallel corpora for translation) that can be used to train machines. But there are no training data for common sense.

Brain scan: Terry Winograd
The Winograd Schema tests computers’ “understanding” of the real world

THE Turing Test was conceived as a way to judge whether true artificial intelligence has been achieved. If a computer can fool humans into thinking it is human, there is no reason, say its fans, to say the machine is not truly intelligent.
Few giants in computing stand with Turing in fame, but one has given his name to a similar challenge: Terry Winograd, a computer scientist at Stanford. In his doctoral dissertation Mr Winograd posed a riddle for computers: “The city councilmen refused the demonstrators a permit because they feared violence. Who feared violence?”

It is a perfect illustration of a well-recognised point: many things that are easy for humans are crushingly difficult for computers. Mr Winograd went into AI research in the 1960s and 1970s and developed an early natural-language program called SHRDLU that could take commands and answer questions about a group of shapes it could manipulate: “Find a block which is taller than the one you are holding and put it into the box.” This work brought a jolt of optimism to the AI crowd, but Mr Winograd later fell out with them, devoting himself not to making machines intelligent but to making them better at helping human beings. (These camps are sharply divided by philosophy and academic pride.) He taught Larry Page at Stanford, and after Mr Page went on to co-found Google, Mr Winograd became a guest researcher at the company, helping to build Gmail.

In 2011 Hector Levesque of the University of Toronto became annoyed by systems that “passed” the Turing Test by joking and avoiding direct answers. He later asked to borrow Mr Winograd’s name and the format of his dissertation’s puzzle to pose a more genuine test of machine “understanding”: the Winograd Schema. The answers to its battery of questions were obvious to humans but would require computers to have some reasoning ability and some knowledge of the real world. The first official Winograd Schema Challenge was held this year, with a $25,000 prize offered by Nuance, the language-software company, for a program that could answer more than 90% of the questions correctly. The best of them got just 58% right.
Though officially retired, Mr Winograd continues writing and researching. One of his students is working on an application for Google Glass, a computer with a display mounted on eyeglasses. The app would help people with autism by reading the facial expressions of conversation partners and giving the wearer information about their emotional state. It would help the wearer integrate linguistic and non-linguistic information in a way that people with autism find difficult, as do computers.

Asked to trick some of the latest digital assistants, like Siri and Alexa, he asks them things like “Where can I find a nightclub my Methodist uncle would like?”, which requires knowledge about both nightclubs (which such systems have) and Methodist uncles (which they don’t). When he tried “Where did I leave my glasses?”, one of them came up with a link to a book of that name. None offered the obvious answer: “How would I know?”

Knowledge of the real world is another matter. AI has helped data-rich companies such as America’s West-Coast tech giants organise much of the world’s information into interactive databases such as Google’s Knowledge Graph. Some of the content of that appears in a box to the right of a Google page of search results for a famous figure or thing. It knows that Jacob Bernoulli studied at the University of Basel (as did other people, linked to Bernoulli through this node in the Graph) and wrote “On the Law of Large Numbers” (which it knows is a book).

Organising information this way is not difficult for a company with lots of data and good AI capabilities, but linking information to language is hard. Google touts its assistant’s ability to answer questions like “Who was president when the Rangers won the World Series?” But Mr Pereira concedes that this was the result of explicit training. Another such complex query—“What was the population of London when Samuel Johnson wrote his dictionary?”—would flummox the assistant, even though the Graph knows about things like the historical population of London and the date of Johnson’s dictionary. IBM’s Watson system, which in 2011 beat two human champions at the quiz show “Jeopardy!”, succeeded mainly by calculating huge numbers of potential answers based on key words by probability, not by a human-like understanding of the question.

Making real-world information computable is challenging, but it has inspired some creative approaches. One Vienna-based startup took hundreds of Wikipedia articles, cut them into thousands of small snippets of information and ran an “unsupervised” machine-learning algorithm over them that required the computer not to look for anything in particular but to find patterns. These patterns were then represented as a visual “semantic fingerprint” on a grid of 128×128 pixels. Clumps of pixels in similar places represented semantic similarity. This method can be used to disambiguate words with multiple meanings: the fingerprint of “organ” shares features with both “liver” and “piano” (because the word occurs with both in different parts of the training data). This might allow a natural-language system to distinguish between pianos and church organs on the one hand, and livers and other internal organs on the other.
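The fingerprint comparison amounts to measuring overlap between sets of active grid cells. The cell indices below are arbitrary stand-ins, invented for illustration rather than taken from any real system, but they show how a shared-cell measure separates the two senses of “organ”.

```python
# Invented "semantic fingerprints": each word is the set of active cells
# on a 128x128 grid (cells indexed 0..16383). Which cells fire would be
# learned from text in a real system; these indices are made up.
fingerprints = {
    "organ": {10, 11, 12, 200, 5000, 5001, 5002},
    "piano": {10, 11, 13, 777},          # shares "music" cells 10, 11
    "liver": {5000, 5001, 5003, 888},    # shares "anatomy" cells
    "cloud": {9000, 9001},               # unrelated: no shared cells
}

def similarity(a, b):
    """Jaccard overlap of two fingerprints: shared cells / total cells."""
    fa, fb = fingerprints[a], fingerprints[b]
    return len(fa & fb) / len(fa | fb)

# "organ" resembles both "piano" and "liver", but not "cloud".
for other in ("piano", "liver", "cloud"):
    print(f"organ vs {other}: {similarity('organ', other):.2f}")
```

Because the overlaps with “piano” and “liver” occupy different regions of the grid, a system can also tell *which* sense of “organ” a given context has activated.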
Proper conversation between humans and machines can be seen as a series of linked challenges: speech recognition, speech synthesis, syntactic analysis, semantic analysis, pragmatic understanding, dialogue, common sense and real-world knowledge. Because all the technologies have to work together, the chain as a whole is only as strong as its weakest link, and the first few of these are far better developed than the last few.

The hardest part is linking them together. Scientists do not know how the human brain draws on so many different kinds of knowledge at the same time. Programming a machine to replicate that feat is very much a work in progress.

Looking ahead: For my next trick
Talking machines are the new must-haves
IN “WALL-E”, an animated children’s film set in the future, all humankind lives on a spaceship after the Earth’s environment has been trashed. The humans are whisked around in intelligent hovering chairs; machines take care of their every need, so they are all morbidly obese. Even the ship’s captain is not really in charge; the actual pilot is an intelligent and malevolent talking robot, Auto, and like so many talking machines in science fiction, he eventually makes a grab for power.

Speech is quintessentially human, so it is hard to imagine machines that can truly speak conversationally as humans do without also imagining them to be superintelligent. And if they are superintelligent, with none of humans’ flaws, it is hard to imagine them not wanting to take over, not only for their own good but for that of humanity. Even in a fairly benevolent future like “WALL-E’s”, where the machines are doing all the work, it is easy to see that the lack of anything challenging to do would be harmful to people.
Fortunately, the tasks that talking machines can take off humans’ to-do lists are the sort that many would happily give up. Machines are increasingly able to handle difficult but well-defined jobs. Soon all that their users will have to do is pipe up and ask them, using a naturally phrased voice command. Once upon a time, just one tinkerer in a given family knew how to work the computer or the video recorder. Then graphical interfaces (icons and a mouse) and touchscreens made such technology accessible to everyone. Frank Chen of Andreessen Horowitz, a venture-capital firm, sees natural-language interfaces between humans and machines as just another step in making information and services available to all. Silicon Valley, he says, is enjoying a golden age of AI technologies. Just as in the early 1990s companies were piling online and building websites without quite knowing why, now everyone is going for natural language. Yet, he adds, “we’re in 1994 for voice.”
1995 will soon come. This does not mean that people will communicate with their computers exclusively by talking to them. Websites did not make the telephone obsolete, and mobile devices did not make desktop computers obsolete. In the same way, people will continue to have a choice between voice and text when interacting with their machines.
Not all will choose voice. For example, in Japan yammering into a phone is not done in public, whether the interlocutor is a human or a digital assistant, so usage of Siri is low during business hours but high in the evening and at the weekend. For others, voice-enabled technology is an obvious boon. It allows dyslexic people to write without typing, and the very elderly may find it easier to talk than to type on a tiny keyboard. The very young, some of whom today learn to type before they can write, may soon learn to talk to machines before they can type.
Those with injuries or disabilities that make it hard for them to write will also benefit. Microsoft is justifiably proud of a new device that will allow people with amyotrophic lateral sclerosis (ALS), which immobilises nearly all of the body but leaves the mind working, to speak by using their eyes to pick letters on a screen. The critical part is predictive text, which improves as it gets used to a particular individual. An experienced user will be able to “speak” at around 15 words per minute.
People may even turn to machines for company. Microsoft’s Xiaoice, a chatbot launched in China, learns to come up with the responses that will keep a conversation going longest. Nobody would think it was human, but it does make users open up in surprising ways. Jibo, a new “social robot”, is intended to tell children stories, help far-flung relatives stay in touch and the like.

Another group that may benefit from technology is smaller language communities. Networked computers can encourage a winner-take-all effect: if there is a lot of good software and content in English and Chinese, smaller languages become less valuable online. If they are really tiny, their very survival may be at stake. But Ross Perlin of the Endangered Languages Alliance notes that new software allows researchers to document small languages more quickly than ever. With enough data comes the possibility of developing resources—from speech recognition to interfaces with software—for smaller and smaller languages. The Silicon Valley giants already localise their services in dozens of languages; neural networks and other software allow new versions to be generated faster and more efficiently than ever.

There are two big downsides to the rise in natural-language technologies: the implications for privacy, and the disruption it will bring to many jobs.

Increasingly, devices are always listening. Digital assistants like Alexa, Cortana, Siri and Google Assistant are programmed to wait for a prompt, such as “Hey, Siri” or “OK, Google”, to activate them. But allowing always-on microphones into people’s pockets and homes amounts to a further erosion of traditional expectations of privacy. The same might be said for all the ways in which language software improves by training on a single user’s voice, vocabulary, written documents and habits.

All the big companies’ location-based services—even the accelerometers in phones that detect small movements—are making ever-improving guesses about users’ wants and needs. The moment when a digital assistant surprises a user with “The chemist is nearby—do you want to buy more haemorrhoid cream, Steve?” could be when many may choose to reassess the trade-off between amazing new services and old-fashioned privacy. The tech companies can help by giving users more choice; the latest iPhone will not be activated when it is laid face down on a table. But hackers will inevitably find ways to get at some of these data.

The other big concern is for jobs. To the extent that they are routine, they face being automated away. A good example is customer support. When people contact a company for help, the initial encounter is usually highly scripted. A company employee will verify a customer’s identity and follow a decision-tree. Language technology is now mature enough to take on many of these tasks.

For a long transition period humans will still be needed, but the work they do will become less routine. Nuance, which sells lots of automated online and phone-based help systems, is bullish on voice biometrics (customers identifying themselves by saying “my voice is my password”). Using around 200 parameters for identifying a speaker, it is probably more secure than a fingerprint, says Brett Beranek, a senior manager at the company. It will also eliminate the tedium, for both customers and support workers, of going through multi-step identification procedures with PINs, passwords and security questions. When Barclays, a British bank, offered it to frequent users of customer-support services, 84% signed up within five months.

Digital assistants on personal smartphones can get away with mistakes, but for some business applications the tolerance for error is close to zero, notes Nikita Ivanov. His company, Datalingvo, a Silicon Valley startup, answers questions phrased in natural language about a company’s business data. If a user wants to know which online ads resulted in the most sales in California last month, the software automatically translates his typed question into a database query. But behind the scenes a human working for Datalingvo vets the query to make sure it is correct. This is because the stakes are high: the technology is bound to make mistakes in its early days, and users could make decisions based on bad data.
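The translation from question to query can be sketched crudely. This toy pattern-matcher is purely illustrative and has nothing to do with Datalingvo’s actual technology; it shows the general idea of mapping a recognised question shape onto a database query, with anything unrecognised falling through for human review.

```python
import re

def to_query(question):
    """Map one recognised question pattern to a SQL string (toy example).

    Real systems parse far more flexibly; here a single hand-written
    pattern stands in for the whole natural-language front end.
    """
    m = re.match(
        r"which online ads resulted in the most sales in (\w+) last month",
        question.lower().rstrip("?"))
    if m:
        return ("SELECT ad_id FROM sales "
                f"WHERE region = '{m.group(1)}' AND month = LAST_MONTH "
                "GROUP BY ad_id ORDER BY SUM(amount) DESC LIMIT 1")
    return None  # unrecognised: route to a human vetter

query = to_query("Which online ads resulted in the most sales in California last month?")
print(query)
```

The `return None` branch is where the human-in-the-loop vetting the article describes would take over: when the stakes are high, a wrong guess is worse than no answer.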
This process can work the other way round, too: rather than natural-language input producing data, data can produce language. Arria, a company based in London, makes software into which a spreadsheet full of data can be dragged and dropped, to be turned automatically into a written description of the contents, complete with trends. Matt Gould, the company’s chief strategy officer, likes to think that this will free chief financial officers from having to write up the same old routine analyses for the board, giving them time to develop more creative approaches.

Carl Benedikt Frey, an economist at Oxford University, has researched the likely effect of artificial intelligence on the labour market and concluded that the jobs most likely to remain immune include those requiring creativity and skill at complex social interactions. But not every human has those traits. Call centres may need fewer people as more routine work is handled by automated systems, but the trickier inquiries will still go to humans.

Much of this seems familiar. When Google search first became available, it turned up documents in seconds that would have taken a human operator hours, days or years to find. This removed much of the drudgery from being a researcher, librarian or journalist. More recently, young lawyers and paralegals have taken to using e-discovery. These innovations have not destroyed the professions concerned but merely reshaped them.

Machines that relieve drudgery and allow people to do more interesting jobs are a fine thing. In net terms they may even create extra jobs. But any big adjustment is most painful for those least able to adapt. Upheavals brought about by social changes—like the emancipation of women or the globalisation of labour markets—are already hard for some people to bear. When those changes are wrought by machines, they become even harder, and all the more so when those machines seem to behave more and more like humans. People already treat inanimate objects as if they were alive: who has never shouted at a computer in frustration? The more that machines talk, and the more that they seem to understand people, the more their users will be tempted to attribute human traits to them.

That raises questions about what it means to be human. Language is widely seen as humankind’s most distinguishing trait. AI researchers insist that their machines do not think like people, but if they can listen and talk like humans, what does that make them? As humans teach ever more capable machines to use language, the once-obvious line between them will blur.

California Grid

Path 26


In winter, the Pacific Northwest needs power for heat – and must import it from So Cal.
In summer, the Pacific Northwest has excess power – and exports it to So Cal.

“Paths” are the major transmission lines that form the “grid”, connecting the geographic areas covered by different utilities.

Of interest here are the “paths” that transmit power north and south in California. These paths make the importing and exporting of power possible.

These paths were built in the 1970s and 1980s in order to provide California and the Southwest with excess hydropower from the Pacific Northwest without actually having to construct any new power plants.

During the cold Pacific Northwest winters, power is sent north for heating. The flow reverses in the hot, dry summers, when many people in the south run air conditioners.[11] Most of the corridor supports a maximum south-to-north transmission capacity of 5,400 MW,[8] but between the Los Banos and Gates substations there were originally only two 500 kV lines.

The capacity at this bottleneck was only 3,900 MW. It was identified as a trouble spot in the 1990s, but no one acted on it,[2] and it became one of the leading causes of the California electricity crisis of 2000-2001. To remedy the problem, WAPA, along with several utilities, built a third 500 kV line between the two substations, eliminating the constraint and raising the maximum south-to-north transmission capacity to 5,400 MW.[2] The project was completed under budget and on time on December 21, 2004.[12] California’s governor, Arnold Schwarzenegger, attended the commissioning ceremony at California ISO’s control center in Folsom.[12]
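The arithmetic behind the bottleneck is simple: for segments in series, a path can carry no more than its weakest segment. A small sketch (MW figures from the article; the function is mine) shows how the third Los Banos–Gates line changed the picture:

```python
# For transmission segments in series, usable path capacity is the
# minimum segment capacity. Figures (in MW) are from the article.

def path_capacity(segment_capacities_mw):
    """South-to-north capacity of a corridor of series segments."""
    return min(segment_capacities_mw)

# Before December 2004: two 500 kV Los Banos-Gates lines limited that
# segment to 3,900 MW, even though the rest could carry 5,400 MW.
print(path_capacity([5_400, 3_900]))  # 3900

# After WAPA's third 500 kV line raised the segment to 5,400 MW:
print(path_capacity([5_400, 5_400]))  # 5400
```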

Path 26 is three 500 kV lines with 3,700 MW capacity north to south and 3,000 MW capacity south to north. It links PG&E (north) to SCE (south).

Path 26 forms Southern California Edison’s (SCE) intertie (link) with Pacific Gas & Electric (PG&E) to the north. Since PG&E’s grid interconnects with the Pacific Northwest and SCE’s with the Southwestern United States, Path 26 is a southern extension of Path 15 and Path 66, and a crucial link between the two regions’ grids.[3]

The path consists of three transmission lines, Midway–Vincent No. 1, Midway–Vincent No. 2 and Midway–Whirlwind. Midway–Whirlwind was part of what was called Midway–Vincent No. 3 before Whirlwind was built, as part of the Tehachapi Renewable Transmission Project.


Path 26 – Vincent to Midway
Starting from the south, the path begins at the large Vincent substation close to State Route 14 and Soledad Pass near Acton, east of the Santa Clarita Valley. The same Vincent substation is linked to Path 46 and Path 61 via two SCE 500 kV lines that head southeast to Lugo substation. As with Path 15 to the north, the three 500 kV lines are never built together for the entire length of the route. Straight from the substation, all three lines head north-northwest. The westernmost SCE 500 kV line splits away and runs west of the other two SCE 500 kV lines.[2]
After crossing State Route 14, two 500 kV wires built by Los Angeles Department of Water and Power (LADW&P) join the eastern two SCE 500 kV wires. At some point west of Palmdale, one line (SCE) continues northwest and the other three (one SCE, two LADW&P) head west. The lone SCE line continuing northwest (with 230 kV lines) runs close to the Antelope Valley California Poppy Reserve, famed for its California Poppy flowers. The one SCE line that ran west of the other two SCE lines (now separated) re-joins the single SCE 500 kV line running west with the two LADW&P lines. The four 500 kV lines run together for some distance until, at some point in the mountains, the two SCE lines continue to head west and the two LADW&P lines turn southwest and head for Sylmar in the San Fernando Valley (close to the Sylmar Converter Station, the southern terminus of the Pacific Intertie HVDC line). The two SCE lines heading west meet up with Interstate 5 on the arid foothills of the Sierra Pelona Mountains to the east of Pyramid Lake. The lines parallel I-5, crossing Tejon Pass (running on the eastern foothills of Frazier Mountain), and run out of sight for a while as they cross the high woodlands of the northern San Emigdio Mountains at their highest point at around 5,350 ft (1,630 m).[2][5]
As for the third line, north of Lancaster and State Route 138, it runs through a remote, roadless area of the Tehachapi Mountains with two 230 kV lines. Although it runs across sparse to dense oak woodlands at around 5,300 ft (1,615 m),[5] it is not easy to spot on Google Earth since its right of way is not as clear-cut as those of Path 15 and Path 66 to the north. Due to this, the line is not readily seen again until it crosses State Route 184 as a PG&E power line. Somewhere to the east of State Route 184, in the mountains, the line changes from SCE towers to PG&E towers.[2][6][7] By the time all three lines are visible from Interstate 5, they roughly parallel each other until all three, two SCE and one PG&E, terminate at the massive Midway substation in Buttonwillow in the San Joaquin Valley.[8] Two pairs of PG&E 500 kV lines, heading north and southwest separately, form Path 15.[2]
Connecting wires to Path 46 – Vincent to Lugo
Adjacent to the Path 26 wires, two other SCE 500 kV lines also begin at Vincent substation. The two 500 kV power lines head northeast from Vincent to meet up with LADW&P’s two other 500 kV wires from Rinaldi, and then all four lines head east in the Antelope Valley along the northern foothills of the San Gabriel Mountains. Another LADW&P line from Toluca joins the four-line transmission corridor, resulting in a large path of five power lines. However, one LADW&P line splits off from the other four lines and heads southeast. Soon after, the SCE lines split away from the remaining two LADW&P lines and head southeast as well. They cross the lone LADW&P line that split away, and Interstate 15, as they head to the Lugo substation northeast of Cajon Pass. The lines terminate at Lugo, where one SCE Path 61 500 kV line, two SCE Path 46 500 kV lines, and three other SCE 500 kV lines end.[2][9][10]

Path 15

Path 15 is an 84-mile (135 km) portion[1] of the north-south power transmission corridor in California, U.S. It forms a part of the Pacific AC Intertie and the California-Oregon Transmission Project.

Path 15, along with the Pacific DC Intertie running far to the east, forms an important transmission interconnection between the hydroelectric plants to the north and the fossil-fuel plants to the south. Most of the length of the three AC 500 kV lines south of Tesla substation was built by Pacific Gas and Electric (PG&E).

Path 15 consists of three lines at 500 kV and four lines at 230 kV. The 500 kV lines connect Los Banos to Gates and Los Banos to Midway. All four 230 kV lines have Gates at one end with the other ends at Panoche, Gregg, and McCall.[2]

There are only two connecting PG&E lines north of Tracy substation that connect Path 15 to Path 66 at the Round Mountain substation. The third line between Los Banos and Gates substation, south of Tracy, is operated by the Western Area Power Administration (WAPA), a division of the United States Department of Energy. This line was constructed away from the other two lines and is often out of sight. Most of the time the lines are in California’s Sierra foothills and the Central Valley, but there are some PG&E lines that come from power plants along the shores of the Pacific Ocean and cross the California Coast Ranges and connect with the intertie. The Diablo Canyon Power Plant and the Moss Landing Power Plant are two examples.[3][4]

The Vaca-Dixon substation (38°24′8.33″N 121°55′14.75″W) was the world’s largest substation at the time of its inauguration in 1922.[6]

Mr President

Credit: Washington Post Article authored by David Maraniss, author of ‘Barack Obama: The Story’

His journey to become a leader of consequence
How Barack Obama’s understanding of his place in the world, as a mixed-race American with a multicultural upbringing, affected his presidency.
By David Maraniss, author of ‘Barack Obama: The Story’  

When Barack Obama worked as a community organizer amid the bleak industrial decay of Chicago’s far South Side during the 1980s, he tried to follow a mantra of that profession: Dream of the world as you wish it to be, but deal with the world as it is.

The notion of an Obama presidency was beyond imagining in the world as it was then. But, three decades later, it has happened, and a variation of that saying seems appropriate to the moment: Stop comparing Obama with the president you thought he might be, and deal with the one he has been.

Seven-plus years into his White House tenure, Obama is working through the final months before his presidency slips from present to past, from daily headlines to history books. That will happen at noontime on the 20th of January next year, but the talk of his legacy began much earlier and has intensified as he rounds the final corner of his improbable political career.

Of the many ways of looking at Obama’s presidency, the first is to place it in the continuum of his life. The past is prologue for all presidents to one degree or another, even as the job tests them in ways that nothing before could. For Obama, the line connecting his life’s story with the reality of what he has been as the 44th president is consistently evident.

The first connection involves Obama’s particular form of ambition. His political design arrived relatively late. He was no grade school or high school or college leader. Unlike Bill Clinton, he did not have a mother telling everyone that her first-grader would grow up to be president. When Obama was a toddler in Honolulu, his white grandfather boasted that his grandson was a Hawaiian prince, but that was more to explain his skin color than to promote family aspirations.
But once ambition took hold of Obama, it was with an intense sense of mission, sometimes tempered by self-doubt but more often self-assured and sometimes bordering messianic. At the end of his sophomore year at Occidental College, he started to talk about wanting to change the world. At the end of his time as a community organizer in Chicago, he started to talk about how the only way to change the world was through electoral power. When he was defeated for the one and only time in his career in a race for Congress in 2000, he questioned whether he indeed had been chosen for greatness, as he had thought he was, but soon concluded that he needed another test and began preparing to run for the Senate seat from Illinois that he won in 2004.

That is the sensibility he took into the White House. It was not a careless slip when he said during the 2008 campaign that he wanted to emulate Ronald Reagan and change “the trajectory of America” in ways that recent presidents, including Clinton, had been unable to do. Obama did not just want to be president. His mission was to leave a legacy as a president of consequence, the liberal counter to Reagan. To gauge himself against the highest-ranked presidents, and to learn from their legacies, Obama held private White House sessions with an elite group of American historians.

It is now becoming increasingly possible to argue that he has neared his goal. His decisions were ineffective in stemming the human wave of disaster in Syria, and he has thus far failed to close the detention camp at Guantanamo Bay, Cuba, and to make anything more than marginal changes on two domestic issues of importance to him, immigration and gun control. But from the Affordable Care Act to the legalization of same-sex marriage and the nuclear deal with Iran, from the stimulus package that started the slow recovery from the 2008 recession to the Detroit auto industry bailout, from global warming and renewable energy initiatives to the veto of the Keystone pipeline, from the withdrawal of combat troops from Iraq and Afghanistan and the killing of Osama bin Laden to the opening of relations with Cuba, the liberal achievements have added up, however one judges the policies.

This was done at the same time that he faced criticism from various quarters for seeming aloof, if not arrogant, for not being more effective in his dealings with members of Congress of either party, for not being angry enough when some thought he should be, or for not being an alpha male leader.

A promise of unity
His accomplishments were bracketed by two acts of negation by opponents seeking to minimize his authority: first a vow by Republican leaders to do what it took to render him a one-term president; and then, with 11 months left in his second term, a pledge to deny him the appointment of a nominee for the crucial Supreme Court seat vacated by the death of Antonin Scalia, a conservative icon. Obama’s White House years also saw an effort to delegitimize him personally by shrouding his story in fallacious myth — questioning whether he was a foreigner in our midst, secretly born in Kenya, despite records to the contrary, and insinuating that he was a closet Muslim, again defying established fact. Add to that a raucous new techno-political world of unending instant judgments and a decades-long erosion of economic stability for the working class and middle class that was making an increasingly large segment of the population, of various ideologies, feel left behind, uncertain, angry and divided, and the totality was a national condition that was anything but conducive to the promise of unity that brought Obama into the White House.

To the extent that his campaign rhetoric raised expectations that he could bridge the nation’s growing political divide, Obama owns responsibility for the way his presidency was perceived. His political rise, starting in 2004, when his keynote convention speech propelled him into the national consciousness, was based on his singular ability to tie his personal story as the son of a father from Kenya and mother from small-town Kansas to some transcendent common national purpose. Unity out of diversity, the ideal of the American mosaic that was constantly being tested, generation after generation, part reality, part myth. Even though Obama romanticized his parents’ relationship, which was brief and dysfunctional, his story of commonality was more than a campaign construct; it was deeply rooted in his sense of self.

As a young man, Obama at times felt apart from his high school and college friends of various races and perspectives as he watched them settle into defined niches in culture, outlook and occupation. He told one friend that he felt “large dollops of envy for them” but believed that because of his own life’s story, his mixed-race heritage, his experiences in multicultural Hawaii and exotic Indonesia, his childhood without “a structure or tradition to support me,” he had no choice but to seek the largest possible embrace of the world. “The only way to assuage my feelings of isolation are to absorb all the traditions [and all the] classes, make them mine, me theirs,” he wrote. He carried that notion with him through his political career in Illinois and all the way to the White House, where it was challenged in ways he had never confronted before.

With most politicians, their strengths are their weaknesses, and their weaknesses are their strengths.

With Obama, one way that was apparent was in his coolness. At various times in his presidency, there were calls from all sides for him to be hotter. He was criticized by liberals for not expressing more anger at Republicans who were stifling his agenda, or at Wall Street financiers and mortgage lenders whose wheeler-dealing helped drag the country into recession. He was criticized by conservatives for not being more vociferous in denouncing Islamic terrorists, or belligerent in standing up to Russian President Vladimir Putin.

His coolness as president can best be understood by the sociological forces that shaped him before he reached the White House. There is a saying among native Hawaiians that goes: Cool head, main thing. This was the culture in which Obama reached adolescence on the island of Oahu, and before that during the four years he lived with his mother in Jakarta. Never show too much. Never rush into things. Maintain a personal reserve and live by your own sense of time. This sensibility was heightened when he developed an affection for jazz, the coolest mode of music, as part of his self-tutorial on black society that he undertook while living with white grandparents in a place where there were very few African Americans. As he entered the political world, the predominantly white society made it clear to him the dangers of coming across as an angry black man. As a community organizer, he refined the skill of leading without being overt about it, making the dispossessed citizens he was organizing feel their own sense of empowerment. As a constitutional law professor at the University of Chicago, he developed an affinity for rational thought.

Differing approaches
All of this created a president who was comfortable coolly working in his own way at his own speed, waiting for events to turn his way.
Was he too cool in his dealings with other politicians? One way to consider that question is by comparing him with Clinton. Both came out of geographic isolation, Hawaii and southwest Arkansas, far from the center of power, in states that had never before offered up presidents. Both came out of troubled families defined by fatherlessness and alcoholism. Both at various times felt a sense of abandonment. Obama had the additional quandary of trying to figure out his racial identity. And the two dealt with their largely similar situations in diametrically different ways.

Rather than deal with the problems and contradictions of his life head-on, Clinton became skilled at moving around and past them. He had an insatiable need to be around people for affirmation. As a teenager, he would ask a friend to come over to the house just to watch him do a crossword puzzle. His life became all about survival and reading the room. He kept shoeboxes full of file cards of the names and phone numbers of people who might help him someday. His nature was to always move forward. He would wake up each day and forgive himself and keep going. His motto became “What’s next?” He refined these skills to become a political force of nature, a master of transactional politics. This got him to the White House, and into trouble in the White House, and out of trouble again, in a cycle of loss and recovery.

Obama spent much of his young adulthood, from when he left Hawaii for the mainland and college in 1979 to the time he left Chicago for Harvard Law School nearly a decade later, trying to figure himself out, examining the racial, cultural, personal, sociological and political contradictions that life threw at him. He internalized everything, first withdrawing from the world during a period in New York City and then slowly reentering it as he was finding his identity as a community organizer in Chicago.

Rather than plow forward relentlessly, like Clinton, Obama slowed down. He woke up each day and wrote in his journal, analyzing the world and his place in it. He emerged from that process with a sense of self that helped him rise in politics all the way to the White House, then led him into difficulties in the White House, or at least criticism for the way he operated. His sensibility was that if he could resolve the contradictions of his own life, why couldn’t the rest of the country resolve the larger contradictions of American life? Why couldn’t Congress? The answer from Republicans was that his actions were different from his words, and that while he talked the language of compromise, he did not often act on it. He had built an impressive organization to get elected, but it relied more on the idea of Obama than on a long history of personal contacts. He did not have a figurative equivalent of Clinton’s shoebox full of allies, and he did not share his Democratic predecessor’s profound need to be around people. He was not as interested in the personal side of politics that was so second nature to presidents such as Clinton and Lyndon Johnson.

Politicians of both parties complained that Obama seemed distant. He was not calling them often enough. When he could be schmoozing with members of Congress, cajoling them and making them feel important, he was often back in the residence having dinner with his wife, Michelle, and their two daughters, or out golfing with the same tight group of high school chums and White House subordinates.

Here again, some history provided context. Much of Obama’s early life had been a long search for home, which he finally found with Michelle and their girls, Malia and Sasha. There were times when Obama was an Illinois state senator and living for a few months at a time in a hotel room in Springfield, when Michelle made clear her unhappiness with his political obsession, and the sense of home that he had strived so hard to find was jeopardized. Once he reached the White House, with all the demands on his time, if there was a choice, he was more inclined to be with his family than hang out with politicians. A weakness in one sense, a strength in another, enriching the image of the first-ever black first family.

A complex question
The fact that Obama was the first black president, and that his family was the first African American first family, provides him with an uncontested hold on history. Not long into his presidency, even to mention that seemed beside the point, if not tedious, but it was a prejudice-shattering event when he was elected in 2008, and its magnitude is not likely to diminish. Even as some of the political rhetoric this year longs for a past America, the odds are greater that as the century progresses, no matter what happens in the 2016 election, Obama will be seen as the pioneer who broke an archaic and distant 220-year period of white male dominance.

But what kind of black president has he been?

His life illuminates the complexity of that question. His white mother, who conscientiously taught him black history at an early age but died nearly a decade before her son reached the White House, would have been proud that he broke the racial barrier. But she also inculcated him in the humanist idea of the universality of humankind, a philosophy that her life exemplified as she married a Kenyan and later an Indonesian and worked to help empower women in many of the poorest countries in the world. Obama eventually found his own comfort as a black man with a black family, but his public persona, and his political persona, was more like his mother’s.

At various times during his career, Obama faced criticism from some African Americans that, because he did not grow up in a minority community and received an Ivy League education, he was not “black enough.” That argument was one of the reasons he lost that 2000 congressional race to Bobby L. Rush, a former Black Panther, but fortunes shift and attitudes along with them; there was no more poignant and revealing scene at Obama’s final State of the Union address to Congress than Rep. Rush waiting anxiously at the edge of the aisle and reaching out in the hope of recognition from the passing president.

As president, Obama rarely broke character to show what was inside. He was reluctant to bring race into the political discussion, and never publicly stated what many of his supporters believed: that some of the antagonism toward his presidency was rooted in racism. He wished to be judged by the content of his presidency rather than the color of his skin. One exception came after February 2012, when Trayvon Martin, an unarmed black teenager, was shot and killed in Florida by a gun-toting neighborhood zealot. In July 2013, commenting on the verdict in the case, Obama talked about the common experience of African American men being followed when shopping in a department store, or being passed up by a taxi on the street, or a car door lock clicking as they walked by — all of which he said had happened to him. He said Trayvon Martin could have been his son, and then added, “another way of saying that is: Trayvon Martin could have been me 35 years ago.”

Nearly two years later, in June 2015, Obama hit what might be considered the most powerful emotional note of his presidency, a legacy moment, by finding a universal message in black spiritual expression. Time after time during his two terms, he had performed the difficult task of trying to console the country after another mass shooting, choking up with tears whenever he talked about little children being the victims, as they had been in 2012 at Sandy Hook Elementary School in Newtown, Conn. Now he was delivering the heart-rending message one more time, nearing the end of a eulogy in Charleston, S.C., for the Rev. Clementa Pinckney, one of nine African Americans killed by a young white gunman during a prayer service at Emanuel African Methodist Episcopal Church. It is unlikely that any other president could have done what Barack Obama did that day, when all the separate parts of his life story came together with a national longing for reconciliation as he started to sing, “Amazing grace, how sweet the sound, that saved a wretch like me. . . .”

Engie Takes Majority Stake in Green Charge Networks


Behind-the-Meter Battery Acquisition: Engie Takes Majority Stake in Green Charge Networks

The first big acquisition in the space puts a big balance sheet behind the startup’s storage tech as it faces rivals like Stem and Tesla.
by Jeff St. John
May 10, 2016

Green Charge Networks, one of the country’s pioneers in behind-the-meter batteries, has just been taken over by France’s Engie. The energy giant, formerly known as GDF Suez, announced Tuesday that it has acquired an 80 percent stake in the Santa Clara, Calif.-based startup, and plans to put its building energy storage and battery-solar expertise to work for its commercial, industrial and public energy services customers.

Terms of the deal weren’t disclosed. Green Charge has previously raised $56 million from K Road DG in 2014, and an undisclosed amount from angel investors including ChargePoint founder Richard Lowenthal in its early days in New York City.

Green Charge CEO Vic Shao wouldn’t say how much Engie spent to take Green Charge under its wing, but insisted that “investors definitely made money” on the deal. The company got its start deploying its battery and control systems in 7-Eleven stores and rental car lots in New York City under an $18 million Department of Energy grant, which helped it reach scale without too much capital, he noted.

Green Charge has also lined up $50 million in non-recourse debt financing from Ares for new projects, which will remain intact under Engie’s ownership, he said. But with the deep pockets of a multinational energy services company behind it, he’s expecting a lot more growth.

“Engie does a little bit of everything — or a lot of everything,” he said. “They have 150,000 employees worldwide, and I think they’re in fact the world’s largest provider of energy efficiency services. They have a footprint in every state in the U.S. and in most countries around the world.”

The companies were introduced through Engie subsidiaries Ecova and OpTerra Energy Services, which do work with the same kind of commercial and industrial clients that Green Charge does, he said. “Those entities provide different services than Green Charge Networks does,” largely focused on reducing waste and optimizing energy use, in terms of the kilowatt-hours of energy consumed.

Energy storage, by contrast, focuses on reducing the demand side of the energy equation, injecting stored power to avoid spikes in grid power consumption at any one moment. That can help reduce demand charges, a portion of the utility bill that’s invisible to residential customers but can add up to nearly half of a commercial or industrial customer’s costs in high-priced states like California and New York.
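A demand charge bills a customer's single highest power draw (kW) in the billing period, on top of energy (kWh) charges. A toy calculation makes the arithmetic concrete — the tariff rates and load profile below are invented for illustration, not from any actual utility:

```python
# Toy commercial electric bill: an energy charge per kWh consumed
# plus a demand charge per kW of peak draw. Rates are illustrative.

ENERGY_RATE = 0.15   # $ per kWh
DEMAND_RATE = 20.00  # $ per kW of billing-period peak

def monthly_bill(hourly_load_kw):
    """Bill for a load profile sampled at 1-hour intervals."""
    energy_charge = sum(hourly_load_kw) * ENERGY_RATE
    demand_charge = max(hourly_load_kw) * DEMAND_RATE
    return energy_charge + demand_charge

def shave_peaks(hourly_load_kw, cap_kw):
    """Battery discharges whenever load exceeds cap_kw, so the grid
    never sees a draw above the cap. (Recharging the battery during
    off-peak hours is ignored in this sketch.)"""
    return [min(kw, cap_kw) for kw in hourly_load_kw]

load = [80, 80, 120, 200, 90, 80]  # one afternoon spike to 200 kW
print(monthly_bill(load))                    # 4097.5 (demand is $4,000 of it)
print(monthly_bill(shave_peaks(load, 120)))  # 2485.5 after peak shaving
```

Here a single one-hour spike sets the whole period's demand charge, which is why clipping it with a battery pays off far beyond the energy it displaces.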

“On that kilowatt-hours side of the business, most of the low-hanging fruit is gone,” Shao said. “The next frontier is on the kilowatt side, on the power side — and offering energy storage.”

As for how Engie plans to put Green Charge’s technology to use, Frank Demaille, CEO of the company’s North American business, said it will be deploying it in standalone storage and storage-plus-solar configurations for clients in the United States.
But the “acquisition will also reinforce Engie’s strengths and skills in the activities of decentralized energy management, off-grid solutions, and power reliability, which are identified as areas for growth for the company around the world,” he said.

As Shao said, “A lot of what Engie is acquiring here is the very sophisticated software and analytics [and] operational capabilities of our energy storage system.” Green Charge has about 48 megawatt-hours of storage deployed or under construction, and has “real-time communications and monitoring, and analytics for charge-discharge activity being done every couple of seconds.” While opportunities to put aggregated behind-the-meter capacity to use for grid or utility needs are still rare, Green Charge has aggregated a portion of its portfolio in California to serve the state’s new demand response auction mechanism (DRAM) program, and it is looking at more opportunities, he said.

This is one of the first big acquisitions in the behind-the-meter battery space, at least in the United States. Green Charge competes against rival California startup Stem, which has raised about $75 million from investors including Angeleno Group, Iberdrola (Inversiones Financieras Perseo), GE Ventures, Constellation New Energy, and Total Energy Ventures, and has some $135 million in non-recourse debt project financing.

It also competes against SolarCity and Tesla, which have deployed dozens of megawatts of behind-the-meter storage projects in California and plan to deploy a lot more this year. Newer entrants include Gexpro, the electrical equipment distributor that is selling a C&I storage system using software from startup Geli, batteries from LG Chem and inverters from Ideal Power. Another rival in the field, Coda Energy, closed its doors in December.

8 Health Habits


Weigh yourself often.
Learn to cook.
Cut back on sugar.
Live an active life.
Eat your veggies.
Practice portion control.
Adopt a post-party exercise routine.
Find a job you love.

============ JCR NOTES ==========

As we enter 2017, I am in the mood for simplifying well-being, which is why I like this list above. But I want to cross-check it against what I know.

For example, I have long asserted that “MARVELS” are critical to well-being. MARVELS stands for MEDS (M), ACTIVITY (A), RESILIENCE (R), VITALS (V), EATING (E), LABS (L), and SLEEP (S).

I still believe this. But it is complicated – can it be simpler?

The article suggests – correctly – that in your 20’s, “what’s important now” is developing and maintaining an active, healthy lifestyle. It emphasizes E (a healthy diet, moderate alcohol consumption and no smoking), and A (regular physical activity).

Just for fun, what if the simplest acronym were “EAT”, which stands for:

E – eating, drinking, smoking (what you put into your body)
“Be mindful about what you put into your body, by tracking it, and enjoy taking increasing control over this by developing related habits such as learning to cook or juicing.”
A – activity, including rest and sleep (what you do with your body)
“Be mindful about what you do with your body by staying active and getting plenty of rest.”
T – track E and A
“Develop quantified-self habits that track E and A, and regularly verify that your body is operating normally.”

So I might restate “what is important now” for people in their 20’s:

You want to fully enjoy your life in your 20’s – without putting your 30’s, 40’s and beyond at risk. So develop AHL (active, healthy living) habits that you enjoy, so that they have a good chance of staying with you the rest of your life: stay lean and well-rested, eat and drink well, and don’t smoke.


Daily (make it a routine, like taking a shower): M, A, E, and S. Track what you put into your body (E) and what you do with your body (A and S) daily.

A simple checklist is this. Today, did you:
“Take MEDS as prescribed?” (Were you in compliance?)
“Eat your veggies?”
“Use sugar, especially alcohol, in moderation?”
“Stay active, with activities that can become life habits?”

Monthly: V (Track your vital signs, including body weight and body mass index, monthly.)

Annually: L (Track your lab results annually, and more frequently if results are out of the normal range.) L includes genomes – so do them once, then annually if V or L is out of the normal range.
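The cadence above – daily M/A/E/S, monthly V, annual L – can be sketched as a simple tracking schedule. This is just a toy illustration of the idea; the function name and the choice of "first of the month / New Year's Day" as trigger dates are my own assumptions, and R (resilience) is omitted because no cadence is stated for it above.

```python
from datetime import date

# MARVELS letters grouped by how often each should be tracked
SCHEDULE = {
    "daily":   ["M", "A", "E", "S"],   # meds, activity, eating, sleep
    "monthly": ["V"],                  # vitals: weight, body mass index
    "yearly":  ["L"],                  # labs (genome done once, up front)
}

def checks_due(today: date) -> list[str]:
    """Return the MARVELS letters to record on a given day."""
    due = list(SCHEDULE["daily"])
    if today.day == 1:                       # first of the month: vitals
        due += SCHEDULE["monthly"]
    if today.month == 1 and today.day == 1:  # once a year: labs
        due += SCHEDULE["yearly"]
    return due
```

So an ordinary day yields just the daily four, while January 1 rolls up all three cadences.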

==============KEY STUDY===========

Staying healthy in your 20s is strongly associated with a lower risk for heart disease in middle age, according to research from Northwestern University. That study showed that most people who adopted five healthy habits in their 20s – a lean body mass index, moderate alcohol consumption, no smoking, a healthy diet and regular physical activity – stayed healthy well into middle age.

The 8 Health Habits Experts Say You Need in Your 20s
If you had just one piece of health advice for people in their 20s, what would it be?
That’s the question we posed to a number of experts in nutrition, obesity, cardiology and other health disciplines. While most 20-year-olds don’t worry much about their health, studies show the lifestyle and health decisions we make during our third decade of life have a dramatic effect on how well we age.
Staying healthy in your 20s is strongly associated with a lower risk for heart disease in middle age, according to research from Northwestern University. That study showed that most people who adopted five healthy habits in their 20s – a lean body mass index, moderate alcohol consumption, no smoking, a healthy diet and regular physical activity – stayed healthy well into middle age.
And a disproportionate amount of the weight we gain in life is accumulated in our 20s, according to data from the Centers for Disease Control and Prevention. The average woman in the United States weighs about 150 pounds when she’s 19, but by the time she’s 29 she weighs 162 pounds – a gain of 12 pounds. The average 19-year-old man weighs 175 pounds, but by the time he hits 29 he is nine pounds heavier, weighing in at 184 pounds.
But it can be especially difficult for a young adult to focus on health. Young people often spend long hours at work, which can make it tough to exercise and eat well. They face job pressure, romantic challenges, money problems and family stress. Who has time to think about long-term health?
To make it easier, we asked our panel of experts for just one simple piece of health advice. We skipped the obvious choices like no smoking or illegal drug use – you know that already. Instead we asked them for simple strategies to help a 20-something get on the path to better health. Here’s what they had to say.

Weigh yourself often. 
- Susan Roberts, professor of nutrition at Tufts University and co-founder of the iDiet weight management program 
Buy a bathroom scale or use one at the gym and weigh yourself regularly. There is nothing more harmful to long-term health than carrying excess pounds, and weight tends to creep up starting in the 20s. It is pretty easy for most people to get rid of three to five pounds and much harder to get rid of 20. If you keep an eye on your weight you can catch it quickly.

Learn to cook. 
- Barbara J. Rolls, professor and Guthrie Chair of Nutritional Sciences at Penn State 
Learning to cook will save you money and help you to eat healthy. Your focus should be on tasty ways to add variety to your diet and to boost intake of veggies and fruits and other nutrient-rich ingredients. As you experiment with herbs and spices and new cooking techniques, you will find that you can cut down on the unhealthy fats, sugar and salt, as well as the excess calories found in many prepared convenience foods. Your goal should be to develop a nutritious and enjoyable eating pattern that is sustainable and that will help you not only to be well, but also to manage your weight. 
(Related: The foods you should stop buying and start making yourself) 

Cut back on sugar. 
- Steven E. Nissen, chairman of cardiovascular medicine at the Cleveland Clinic Foundation 
I suggest that young people try to avoid excessive simple sugar by eliminating the most common sources of consumption: 1) sugared soft drinks 2) breakfast cereals with added sugar and 3) adding table sugar to foods. Excessive sugar intake has been linked to obesity and diabetes, both of which contribute to heart disease. Sugar represents “empty calories” with none of the important nutrients needed in a balanced diet. Conversely, the traditional dietary villains, fat, particularly saturated fat, and salt, have undergone re-examination by many thoughtful nutrition experts. In both cases, the available scientific evidence does not clearly show a link to heart disease. 

Live an active life. 
- Walter Willett, chairman of the nutrition department at the Harvard School for Public Health 
While many people can’t find time for a scheduled exercise routine, that doesn’t mean you can’t find time to be active. Build physical activity into your daily life. Find a way to get 20 or 30 minutes of activity each day, including riding a bike or briskly walking to work. 
(Related: Learn how to run like a pro.) 

Eat your veggies. 
- Marion Nestle, professor of nutrition, food studies and public health at New York University 
Nutrition science is complicated and debated endlessly, but the basics are well established: Eat plenty of plant foods, go easy on junk foods, and stay active. The trick is to enjoy your meals, but not to eat too much or too often. 

Practice portion control. 
- Lisa R. Young, adjunct professor of nutrition at New York University 
My tip would be not to ban entire food groups but to practice portion control. Portion control doesn’t mean tiny portions of all foods – quite the opposite. It’s okay to eat larger portions of healthy foods like vegetables and fruit. No one got fat from eating carrots or bananas. Choose smaller portions of unhealthy foods such as sweets, alcohol and processed foods. When eating out, let your hand be your guide. A serving of protein like chicken or fish should be the size of your palm. (Think 1-2 palms of protein.) A serving of starch, preferably a whole grain such as brown rice or quinoa, should be the size of your fist. Limit high-fat condiments like salad dressing to a few tablespoons – a tablespoon is about the size of your thumb tip.

Adopt a post-party exercise routine. 
- Barry Popkin, professor of global nutrition at the University of North Carolina at Chapel Hill 
If you engage in a lot of drinking and snacking, ensure you exercise a lot to offset all those extra calories from Friday to Sunday that come with extra drinking and eating. We found in a study that on Friday through Sunday young adults consumed about 115 more calories than on other days, mainly from fat and alcohol. 

Find a job you love. 
- Hui Zheng, associate sociology professor, population health, Ohio State University 
Ohio State University research found that work life in your 20s can affect your midlife mental health. People who are less happy in their jobs are more likely to report depression, stress and sleep problems and have lower overall mental health scores. If I can give just one piece of health advice for 20-year-olds, I would suggest finding a job they feel passionate about. This passion can keep them motivated, help them find meaning in life, and increase expectations about their future. That in turn will make them more engaged in life and healthier behaviors, which will have long term benefits for their well-being.


Bill Gates recommended GRID as one of his five favorite books in 2016. Here is what Business Insider said:

“The Grid: The Fraying Wires Between Americans and Our Energy Future” by Gretchen Bakke

“The Grid” is a perfect example of how Bill Gates thinks about book genres the way Netflix thinks about TV and movies.

“This book, about our aging electrical grid, fits in one of my favorite genres: ‘Books About Mundane Stuff That Are Actually Fascinating,'” he writes.

Gates grew up in the Seattle area, and his first job was writing software for a company that provided energy to the Pacific Northwest. He learned just how vital power grids are to everyday life, and “The Grid” serves as an important reminder that they really are engineering marvels.

“I think you would also come to see why modernizing the grid is so complex,” he writes, “and so critical for building our clean-energy future.”

My son received it as a Christmas gift, and stayed up all night finishing it. I ordered it the same day he told me.

Finally, a readable history of energy. Why does our grid look as it does?

The book covers the incredible role that Jimmy Carter played in the creation of the Department of Energy and the passage of two major pieces of legislation:
1. National Energy Act

GRID traces the emergence of the California wind energy industry. According to the author, the industry emerged in spite of bad technology; the growth traced instead to enormous tax credits. The Federal tax credit was 25%, and California doubled it to 50%. Today Texas and California are by far the largest producers of wind energy in the US.

GRID traces energy from Thomas Edison to Samuel Insull, who was his personal secretary. It was Insull who formulated, and then implemented, an ambitious plan to centralize the nation’s power grid. Until he took over in Chicago, no one could figure out how to create, through government regulation and clever pricing, what today is an effective monopoly. What makes this even more remarkable: the monopolies are largely for-profit.

GRID traces the emergence of energy policy, beginning with President Jimmy Carter.

It includes the Energy Policy Act of 1978 and the Energy Policy Act of 1982.

Postscript: I just read the book a second time, and was struck by its notes at the end, its index, and its general comprehensiveness.

I guess, for me, the big ideas in this book can be boiled down as follows:

LOAD IS DOWN: the planet is rife with innovations that save electricity – and most of them come without burden to the consumer (no need to turn thermostats down or wear sweaters). So the demand for electricity peaked in 2007, and is unlikely to go higher until at least 2040.

GENERATION IS UP: At the same time, the ways to generate power are improving. Solar panels have dropped at least 50% in cost in a decade while getting more efficient. Wind turbines are excellent and continuing to improve. Coal generators are slowly being replaced by natural gas, and natural gas plants have desirable properties beyond generation – e.g. they can ramp up and down quickly.

GENERATION IS BECOMING MORE RESILIENT AND MORE DISTRIBUTED. After a decade of blackouts largely traceable to storms and poor line maintenance, the push is on for resilience, and it is working. The means to resilience is distributed generation (DG), which ultimately will prove to be very beneficial. However, because of regulatory roadblocks, perverse incentives, and a host of other complexities, it will be some time before the benefits of resilient DG are fully realized.

PREDICTING LOAD IS IMPROVING: Predicting load in five-minute increments is improving. Smart meters and smart algorithms make it entirely plausible to predict load well 24 hours ahead, and extremely well four hours ahead.
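To make "predicting load in five-minute increments" concrete, here is the simplest possible baseline: predict each future slot as the load observed at the same slot one day earlier. This is a hypothetical sketch of my own, not anything from the book – real utility models layer weather and calendar features on top of a baseline like this.

```python
SLOTS_PER_DAY = 24 * 60 // 5  # 288 five-minute intervals per day

def day_ahead_forecast(history, horizon):
    """Naive seasonal baseline: predict each future five-minute slot
    as the load (kW) observed at the same slot one day earlier."""
    assert len(history) >= SLOTS_PER_DAY, "need at least one full day of readings"
    return [history[len(history) - SLOTS_PER_DAY + h] for h in range(horizon)]
```

The point of a baseline this dumb is that smarter models (and better weather data) are measured by how much they beat it, four hours ahead and 24 hours ahead.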

PREDICTING GENERATION IS IMPROVING: the book tells horror stories about DG increasing instability and unpredictability. How can a utility plan for a surge due to a scorching sun? A big breeze? I find these horror stories suggestive of where this dysfunction will all end up, namely: prediction will improve dramatically through better weather forecasting and better detailed knowledge of all contributing generators.

A NEW MATCHING OF LOAD TO GENERATION IS VISIBLE. For all the horror stories, I think the future looks bright because matching predictable load to predictable generation is doable today, and will become a norm in the future once all the roadblocks are removed.

ASYNCHRONOUS POWER IS ALMOST HERE. Just as email is asynchronous while telephony is synchronous, electricity has always been a synchronous technology – because there has never been a way of storing it at scale. The world is moving fast toward asynchronous power because of batteries. When this happens, the world is going to change very fast.

TIME OF DAY PRICING WILL ACCELERATE ALL CHANGES. I am shocked at how pathetic time-of-day pricing is. It’s ubiquitous – but pathetic. Once time-of-day pricing sends market signals that discourage peak power use, managers will take increasing advantage of using power (load) when it is cheapest and avoiding power use when it is most expensive, and then we will begin to see thousands of innovative solutions for accomplishing this very simple goal.
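The "use load when it is cheapest" behavior those market signals would reward can be sketched as a tiny greedy scheduler. The tariff numbers and function name below are hypothetical, invented purely for illustration:

```python
def cheapest_hours(prices, hours_needed):
    """Rank a day's hourly prices and return the cheapest hours
    (sorted chronologically) in which to run a deferrable load,
    such as EV charging or pre-cooling a building."""
    ranked = sorted(range(len(prices)), key=lambda h: prices[h])
    return sorted(ranked[:hours_needed])

# Hypothetical tariff ($/kWh): cheap overnight, expensive evening peak
tariff = [0.08] * 6 + [0.12] * 10 + [0.30] * 4 + [0.12] * 4
```

Under this made-up tariff, a three-hour deferrable load lands in the overnight hours – exactly the peak-avoiding behavior a real time-of-day price signal is supposed to elicit.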