CREDIT: The Economist cover story on Amazon (reprinted in full below)
===Amazon Crush Sept 2017 Edition===
Amazon now has the world’s attention. It just landed the cover story of The Economist, printed in its entirety below (together with a YouTube video and a Bloomberg article cited below).
A quick scan of this blog reminds me that I began tracking Amazon in early 2014 with multiple posts, copied here.
I speculated in EARLY 2014 (see post below) that Amazon revenue in 2015 would exceed $100 billion. They finished 2015 at $107 billion – up $18 billion vs 2014’s $89 billion. I forecast $130 billion in 2016. They closed at $136 billion. Look at this remarkable growth curve!
The old forecast that Amazon will exceed $250 billion in sales by 2020 is looking about right. People are now saying Amazon is headed to $500 billion. I believe them.
Salient points from the article below:
The former bookseller accounts for more than half of every new dollar spent online in America.
Since the beginning of 2015 its share price has jumped by 173%, seven times quicker than in the two previous years (and 12 times faster than the S&P 500 index).
With a market capitalisation of some $400bn, it is the fifth-most-valuable firm in the world.
Last year cashflow (before investment) was $16bn, more than quadruple the level five years ago.
It continues to struggle with grocery, and yet Amazon is moving aggressively to learn from its mistakes. Its purchase of Whole Foods signals its recognition that many consumers want to touch their groceries before buying, and that they often enjoy the buying experience. A recent Bloomberg article on that grocery struggle is reprinted below.
Amazon, the world’s most remarkable firm, is just getting started
Amazon has the potential to meet the expectations of investors. But success will bring a big problem
Mar 25th 2017
AMAZON is an extraordinary company. The former bookseller accounts for more than half of every new dollar spent online in America. It is the world’s leading provider of cloud computing. This year Amazon will probably spend twice as much on television as HBO, a cable channel. Its own-brand physical products include batteries, almonds, suits and speakers linked to a virtual voice-activated assistant that can control, among other things, your lamps and sprinkler.
Yet Amazon’s shareholders are working on the premise that it is just getting started. Since the beginning of 2015 its share price has jumped by 173%, seven times quicker than in the two previous years (and 12 times faster than the S&P 500 index). With a market capitalisation of some $400bn, it is the fifth-most-valuable firm in the world. Never before has a company been worth so much for so long while making so little money: 92% of its value is due to profits expected after 2020.
That is because investors anticipate both an extraordinary rise in revenue, from sales of $136bn last year to half a trillion over the next decade, and a jump in profits. The hopes invested in it imply that it will probably become more profitable than any other firm in America. Ground for scepticism does not come much more fertile than this: Amazon will have to grow faster than almost any big company in modern history to justify its valuation. Can it possibly do so?
It is easy to tick off some of the pitfalls. Rivals will not stand still. Microsoft has cloud-computing ambitions; Walmart already has revenues nudging $500bn and is beefing up online. If anything happened to Jeff Bezos, Amazon’s founder and boss, the gap would be exceptionally hard to fill. But the striking thing about the company is how much of a chance it has of achieving such unprecedented goals.
A new sort of basket-case
This is largely due to the firm’s unusual approach to two dimensions of corporate life. The first of these is time. In an era when executives routinely whine about pressure to produce short-term results, Amazon is resolutely focused on the distant horizon. Mr Bezos emphasizes continual investment to propel its two principal businesses, e-commerce and Amazon Web Services (AWS), its cloud-computing arm.
In e-commerce, the more shoppers Amazon lures, the more retailers and manufacturers want to sell their goods on Amazon. That gives Amazon more cash for new services—such as two-hour shipping and streaming video and music—which entice more shoppers. Similarly, the more customers use AWS, the more Amazon can invest in new services, which attract more customers. A third virtuous circle is starting to whirl around Alexa, the firm’s voice-activated assistant: as developers build services for Alexa, it becomes more useful to consumers, giving developers reason to create yet more services.
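The compounding logic of those flywheels is easy to make concrete. Below is a toy simulation of the loop the article describes – customers generate cash, cash funds services, services attract customers – where every number is an invented assumption, not an Amazon figure:

```python
# Toy sketch of a "virtuous circle": more customers -> more cash ->
# more services -> more customers. All parameters are illustrative
# assumptions, not Amazon data.

def flywheel(customers=100.0, services=10.0, years=10,
             cash_per_customer=1.0, reinvest_rate=0.8,
             customers_per_service=2.0):
    for year in range(1, years + 1):
        cash = customers * cash_per_customer           # shoppers generate cash
        services += reinvest_rate * cash / 10          # cash funds new services
        customers += customers_per_service * services  # services attract shoppers
        print(f"year {year:2d}: customers={customers:10.0f}  services={services:8.0f}")

flywheel()
```

The point of the sketch is only the shape of the curve: because each turn of the loop feeds the next, growth accelerates rather than merely persists, which is why investors tolerate thin current profits.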
So long as shareholders retain their faith in this model, Amazon’s heady valuation resembles a self-fulfilling prophecy. The company will be able to keep spending, and its spending will keep making it more powerful. Their faith is sustained by Amazon’s record. It has had its failures—its attempt to make a smartphone was a debacle. But the business is starting to crank out cash. Last year cashflow (before investment) was $16bn, more than quadruple the level five years ago.
If Amazon’s approach to time-frames is unusual, so too is the sheer breadth of its activities. The company’s list of current and possible competitors, as described in its annual filings, includes logistics firms, search engines, social networks, food manufacturers and producers of “physical, digital and interactive media of all types”. A wingspan this large is more reminiscent of a conglomerate than a retailer, which makes Amazon’s share price seem even more bloated: stockmarkets typically apply a “conglomerate discount” to reflect their inefficiencies.
Many of these services support Amazon’s own expansion and that of other companies. The obvious example is AWS, which powers Amazon’s operations as well as those of other firms. But Amazon also rents warehouse space to other sellers. It is building a $1.5bn air-freight hub in Kentucky. It is testing technology in stores to let consumers skip the cash register altogether, and experimenting with drone deliveries to the home. Such tools could presumably serve other customers, too. Some think that Amazon could become a new kind of utility: one that provides the infrastructure of commerce, from computing power to payments to logistics.
A giant cannot hide
And here lies the real problem with the expectations surrounding Amazon. If it gets anywhere close to fulfilling them, it will attract the attention of regulators. For now, Amazon is unlikely to trigger antitrust action. It is not yet the biggest retailer in America, its most mature market. America’s antitrust enforcers look mainly at a firm’s effect on consumers and pricing. Seen through this lens, Amazon appears pristine. Consumers applaud it; it is the most well-regarded company in America, according to a Harris poll. (AWS is a boon to startups, too.)
But as it grows, so will concerns about its power. Even on standard antitrust grounds, that may pose a problem: if it makes as much money as investors hope, a rough calculation suggests its earnings could be worth the equivalent of 25% of the combined profits of listed Western retail and media firms. But regulators are also changing the way they think about technology. In Europe, Google stands accused of using its clout as a search engine to extend its power to adjacent businesses. The comparative immunity from legal liability of digital platforms—for the posting of inflammatory content on Facebook, say, or the vetting of drivers on Uber—is being chipped away.
Amazon’s business model will also encourage regulators to think differently. Investors value Amazon’s growth over profits; that makes predatory pricing more tempting. In future, firms could increasingly depend on tools provided by their biggest rival. If Amazon does become a utility for commerce, the calls will grow for it to be regulated as one. Shareholders are right to believe in Amazon’s potential. But success will bring it into conflict with an even stronger beast: government.
This article appeared in the Leaders section of the print edition under the headline “Amazon’s empire”
===Amazon Crush Apr 2017 Edition===
Incredible 18 months for Amazon, but the new phrase is “retail apocalypse”.
For all prior Amazon Updates, see 11/2016 Update
====Amazon Crush March 2017 Edition====
Inside Amazon’s Battle to Break Into the $800 Billion Grocery Market
After almost a decade of food retail experiments with little success online, the e-commerce giant is embracing the physical stores it once shunned.
By Spencer Soper and Olivia Zaleski
March 20, 2017, 6:00 AM EDT
“Very wasteful” isn’t a phrase usually associated with Amazon.com Inc., which is so cost-conscious it once removed the light bulbs from its cafeteria’s vending machines. But after spending several months analyzing the online retailer’s grocery-shipping hubs back in 2014, that’s exactly how a mechanical engineering student described its approach to selling bananas.
Workers at Amazon Fresh, the company’s grocery-delivery business, threw away about a third of the bananas it purchased because the service only sold the fruit in bunches of five, the student concluded. Employees trimmed each bunch down to size and chucked the excess.
The research paper by Vrajesh Modi, who now works for Boston Consulting Group, highlighted other problems: Poorly trained employees often stood around with nothing to do. Moldy strawberries were frequently returned by disappointed customers. Amazon’s inspectors believed their corporate bosses didn’t care much about the quality of the food.
Such challenges linger for Amazon. Despite several attempts to break into the $800 billion grocery industry and almost a decade in the business, the company has struggled to entice shoppers en masse to buy eggs, steaks and berries online the same way they’ve flocked to buy books, tablets and toys.
“Online grocery is failing,” said Kurt Jetta, chief executive officer of TABS Analytics, a consumer products research firm. Only 4.5 percent of shoppers made frequent online grocery purchases in 2016, up just slightly from 4.2 percent four years earlier despite big investments from companies such as Amazon, according to the firm’s annual surveys. “There’s just not a lot of demand there. The whole premise is that you’re saving people a trip to the store, but people actually like going to the store to buy groceries.”
Amazon CEO Jeff Bezos now seems to understand that he can’t win the grocery game with websites, warehouses and trucks alone. The world’s biggest online retailer sees brick-and-mortar stores playing a key role in a renewed grocery push, documents reviewed by Bloomberg show. And like it did with Amazon Fresh, the company is launching its newest projects in Seattle, its home town.
Last Tuesday, men in cherry pickers worked through driving rain to affix “Amazon Fresh” signs to a drive-in grocery location in Seattle’s Ballard neighborhood, where shoppers can stop and have online orders loaded into their cars. Crews were busy on a similar site south of downtown, readying canopies over parking spaces to protect customers from the elements as they pick up their shopping bags. The secretive company has yet to announce the projects, and crews have covered the Amazon signs in black fabric and paper.
Late last year, Amazon purchased supply-chain software from LLamasoft Inc. – a major departure for a company known for its logistics prowess, and one that defies an internal mantra of “we don’t buy, we build.” And it more recently restructured how various grocery teams were managed to narrow their focus and set clear priorities, according to people familiar with the company’s business.
These changes come as Amazon breaks from its standard formula of shipping products in boxes out of jam-packed warehouses. Instead, it will invite shoppers inside its own grocery stores to smell the oranges, see the tomatoes and tap the watermelons. Ahead of a national rollout next year, Amazon is testing three brick-and-mortar grocery formats in Seattle — convenience stores called Amazon Go, the drive-in grocery kiosks, and a hybrid supermarket that mixes the best of online and in-store shopping. The company may open as many as 2,000 stores, according to internal documents.
The company has said little about its grocery-store plans, aside from a video about Amazon Go’s no-checkout format that has racked up more than 8.7 million views on YouTube. An Amazon spokeswoman declined to comment for this story. Reports on its moves have dribbled out over the past several months, prompting occasional denials and retorts from the company. Seattle technology site Geekwire in August uncovered Amazon’s mysterious drive-in grocery kiosk in Ballard. The New York Post in February said Amazon aimed to create “robot-run supermarkets” that would operate with only a few people. Bezos responded by tweeting to the Post: “Whoever your anonymous sources are on this story — they’ve mixed up their meds!”
Amazon’s goal is to become a Top 5 grocery retailer by 2025, according to a person familiar with the matter. That would require more than $30 billion in annual food and beverage spending through its sites, up from $8.7 billion — including Amazon Fresh and all other food and drink sales — in 2016, according to Cowen & Co.
Reaching that milestone would require a new wave of store and warehouse investments around the country, costing billions of dollars. That’s an existential change for Amazon, which initially stayed away from perishable goods and has mostly avoided the overhead of physical stores since it started in 1994.
“A bunch of smart people at Amazon have been thinking about re-imagining the next phase of physical retail,” said Scott Jacobson, a former Amazon executive who is now a managing director at Madrona Venture Group. “They want more share of the wallet, and habitual, frequent use of Amazon for groceries is the ultimate goal.”
For Amazon shoppers interested in buying groceries online, the company’s current offerings can be confusing. Amazon Fresh is available in about 20 U.S. cities for those paying $14.99 a month. Amazon Pantry lets shoppers buy crackers, cookies, chips, coffee and other non-perishables for a delivery fee of $5.99 per box. Amazon’s speedy drop-off service, Prime Now, offers items from local grocers in some cities, but no major chains. Its stick-on Dash Buttons let people order many household products — including some groceries, but not fresh food — with a finger tap. And Subscribe & Save offers discounts to Amazon customers who sign up for periodic delivery of laundry detergent, toothpaste, diapers, paper towels and other items frequently purchased in grocery stores.
The various initiatives have been a source of increasing internal tension as employees on different projects compete to sell the same things, according to a person familiar with the matter.
One problem saddling Amazon Fresh is the high cost of losses caused by food going bad, an issue it’s never faced with books and toys. For conventional grocery sellers, browning bananas can be sold at a discount to smoothie-makers and bread bakers. Chicken breasts nearing their expiration dates can be marked down. With Amazon Fresh, such items must be discarded or are returned by frustrated customers, according to a person familiar with the matter. That has meant Amazon Fresh has lost money from spoilage at more than double the rate for a typical supermarket, said the person, who asked not to be identified discussing internal operations. The main reason Amazon began delivering groceries through Prime Now was to hand that risk back to the local grocers to lower Amazon’s costs. The company didn’t originally anticipate the scope or difficulty of these problems because so few people working on its grocery push have experience in the industry.
“Grocery is the most alluring and treacherous category,” said Nadia Shouraboura, a former Amazon executive whose company, Hointer, has been working on redefining in-store grocery shopping for the past 18 months. “It lures inventors and retailers with shopping volume and frequency, and then sinks them with low margin.”
Beyond grocery, Amazon executives have also discussed opening consumer electronics stores to showcase its gadgets and better compete with Best Buy Co., according to three people familiar with the plan. For years, Amazon executives have discussed the downside of an online-only strategy, mostly with regard to a lack of places for shoppers to try out Kindle electronic readers, the voice-activated Echo speaker and its defunct Fire smartphone. Amazon considered holding events similar to Tupperware parties when it introduced its first Kindle in 2007, fearing the products would languish unseen, Jacobson said. The handful of bookstores Amazon has opened around the country double as gadget showrooms, similar to the Apple Store.
Long term, a stronger grocery business could position Amazon to become a wholesale food-distribution business serving supermarkets, convenience stores, restaurants, hotels, hospitals and schools. But first the company has to find a way to get more people to think of Amazon when stocking their refrigerators and pantries.
A group of Amazon executives met late last year to discuss the disadvantage Amazon faced compared with grocery competitors such as Wal-Mart and Kroger because of its lack of physical stores and customer apprehension about buying fresh foods online. They decided they needed something more to jump-start Amazon’s grocery push beyond plans already under way for the Amazon Go convenience store, modeled for urban areas, and drive-in grocery pick-up stations suited for the suburbs.
They worked out plans for a third approach: grocery stores closer in size to a Trader Joe’s than a Wal-Mart to offer easy access to milk, eggs and produce. Other items like paper towels, cereal, canned goods and dish detergent would be stocked on-site in a warehouse where they could be easily packed and delivered to shoppers at the location, according to documents reviewed by Bloomberg. It would also serve as a delivery hub for online orders.
Brittain Ladd, a supply chain consultant who joined Amazon in 2015 and most recently worked on its Amazon Fresh and Pantry expansions, wrote about such a store prior to joining Amazon in an academic paper called “A Beautiful Way to Save Woolworths.”
Ladd envisioned two-story buildings where shoppers browse produce, bread and other fresh items on the ground level while their orders for paper towels, canned goods and cereal are packed in a warehouse above. “The stores will have the capability to fulfill online orders placed by customers within a specific radius of the store,” he wrote. “Amazon drivers and/or contractors will be assigned to deliver groceries.”
The executives decided such a store would be worth pursuing for Amazon Fresh and ordered further research about ideal locations, how to integrate the stores with grocery delivery, and the use of automation to reduce overhead. Site selection for this store’s first model is happening now in Seattle, according to a person familiar with the plan.
Meanwhile, the first wave of its new grocery experiment, Amazon Go, was unveiled in December and for now is only open to employees while the systems are tested. Cameras and sensors monitor shoppers who scan their smartphones upon entering, allowing them to grab items like sandwiches, yogurt, drinks and snacks and automatically pay for them without a checkout kiosk. Products are embedded with tracking devices that pair with customers’ phones to charge their accounts. Weight-sensitive shelves tell Amazon when to restock. A patent filed by Amazon in 2014 suggests it could use facial-recognition technology to identify and then automatically charge in-store shoppers.
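In outline, the “just walk out” mechanics described above reduce to an event stream: shelf sensors emit pick and put-back events tied to an identified shopper, and the net cart is charged when the shopper exits. Here is a minimal sketch of that settlement logic; the event format, item names, and prices are invented for illustration and do not reflect Amazon’s actual system:

```python
# Hypothetical sketch of a checkout-free session: shelf sensors emit
# pick/return events tied to a shopper, and the cart is settled on exit.
# Event names, prices, and structure are illustrative assumptions only.
from collections import Counter

PRICES = {"sandwich": 4.99, "yogurt": 1.49, "soda": 1.99}

def settle(events):
    """events: (action, item) pairs in order; returns items to charge."""
    cart = Counter()
    for action, item in events:
        if action == "pick":
            cart[item] += 1
        elif action == "return" and cart[item] > 0:
            cart[item] -= 1          # weight-sensitive shelf saw a put-back
    return {item: n for item, n in cart.items() if n > 0}

session = [("pick", "sandwich"), ("pick", "yogurt"),
           ("return", "yogurt"), ("pick", "soda")]
cart = settle(session)
total = sum(PRICES[item] * n for item, n in cart.items())
print(cart, f"charged ${total:.2f}")  # {'sandwich': 1, 'soda': 1} charged $6.98
```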
In its video touting Amazon Go, the company said it was aiming to open the site to the public in “early 2017,” and it hasn’t provided an update to that timing. But the technology has been crashing in tests when the store gets too crowded, and it still requires human quality control – people watching video images to make sure customers are charged for the right things – according to a person familiar with the plan.
Beyond letting customers skip lines, the technology gives Amazon valuable data, said Guru Hariharan, founder of Boomerang Commerce Inc., which designs software for large retailers. Even if customers don’t purchase everything they touch, there’s value in understanding what shoppers consider but don’t ultimately buy, he said. That makes it worthwhile for Amazon to work through the kinks in the technology.
“It takes a lot of time and experimentation to work through unpredictable scenarios like a child picking up an item or a person wearing sunglasses or a face muffler,” he said.
At the same time, recent work and construction permits indicate the new drive-in grocery kiosk in Ballard could open any day.
===Amazon Crush Nov 2016 Edition===
I began tracking Amazon in early 2014 with multiple posts, copied here.
I speculated in EARLY 2014 (see post below) that Amazon revenue in 2015 would exceed $100 billion.
They finished 2015 at $107 billion – up $18 billion vs 2014.
Revenue is up 21% for the first 9 months of 2016. So my bet is that 2016 will be about $130 billion. The old forecast that Amazon will exceed $250 billion in sales by 2020 is looking about right.
Cash for the latest 12 months is way up – they have $13 billion on hand in cash and cash equivalents.
Their stock price is at an all-time high: $767.
Reminders of earlier posts about moves by Amazon:
ECHO: The Echo is a stout, plain-looking cylinder, about the height of a toaster, that you can park just about anywhere you have Wi-Fi access, though it seems most useful in the kitchen. You can ask it anything, beginning with the word “ALEXA….”
TWITCH: video streaming …. has 40% of all internet video streaming bandwidth????
AMAZONFRESH: lets customers purchase groceries online, including perishable items like dairy, meat, and fish, which are delivered within a day.
They tested in Seattle for years, and then rolled out in 2014 to most of California, New Jersey, New York City, Philadelphia, and Washington. They then took an 18-month expansion hiatus. They will expand to Boston and the UK this year. This is slower than expected.
As for its progression into other markets, AmazonFresh faces more hurdles than the site’s other services, which has slowed it down: the company needs to open refrigerated warehouses, carry its own stock of perishable groceries, and hire additional delivery people in each new market.
Word is that it’s difficult to convince customers it’s worth the $299/year price tag. Amazon is trying to grab a larger share of the grocery market with this expansion. Delivery currently makes up less than 5% of all grocery sales.
Here is how their 2015 10K describes their business.
Amazon.com opened its virtual doors on the World Wide Web in July 1995. We seek to be Earth’s most customer-centric company. We are guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. In each of our segments, we serve our primary customer sets, consisting of consumers, sellers, developers, enterprises, and content creators. In addition, we provide services, such as advertising services and co-branded credit card agreements.
Beginning in the first quarter of 2015, we changed our reportable segments to North America, International, and Amazon Web Services (“AWS”). These segments reflect the way the Company evaluates its business performance and manages its operations. Additional information on our operating segments and product information is contained in Item 8 of Part II, “Financial Statements and Supplementary Data—Note 11—Segment Information.” See Item 7 of Part II, “Management’s Discussion and Analysis of Financial Condition and Results of Operations—Results of Operations—Supplemental Information” for supplemental information about our net sales. Our company-sponsored research and development expense is set forth within “Technology and content” in Item 8 of Part II, “Financial Statements and Supplementary Data—Consolidated Statements of Operations.”
We serve consumers through our retail websites and focus on selection, price, and convenience. We design our websites to enable millions of unique products to be sold by us and by third parties across dozens of product categories. Customers access our websites directly and through our mobile websites and apps. We also manufacture and sell electronic devices, including Kindle e-readers, Fire tablets, Fire TVs, and Echo. We strive to offer our customers the lowest prices possible through low everyday product pricing and shipping offers, and to improve our operating efficiencies so that we can continue to lower prices for our customers. We also provide easy-to-use functionality, fast and reliable fulfillment, and timely customer service. In addition, we offer Amazon Prime, an annual membership program that includes unlimited free shipping on millions of items, access to unlimited instant streaming of thousands of movies and TV episodes, and other benefits.
We fulfill customer orders in a number of ways, including through: North America and International fulfillment and delivery networks that we operate; co-sourced and outsourced arrangements in certain countries; and digital delivery. We operate customer service centers globally, which are supplemented by co-sourced arrangements. See Item 2 of Part I, “Properties.”
We offer programs that enable sellers to sell their products on our websites and their own branded websites and to fulfill orders through us. We are not the seller of record in these transactions, but instead earn fixed fees, a percentage of sales, per-unit activity fees, or some combination thereof.
Here are prior posts:
====June 2015 Post on Echo====
See NYT article below on Amazon’s Echo (and note comparisons to other voice command systems, such as Siri, Google Now, and Cortana):
“If it moves nimbly, keeping ahead of Apple and Google, Amazon could transform the Echo into something like a residential hub, the one device to control pretty much everything attached to your home.”
Functionality at the moment is:
– telling you the weather
– playing music you ask for
– adding stuff to your shopping list
– reordering items you frequently buy from Amazon
– giving you a heads-up about your nearing calendar appointments
– setting a kitchen timer
– answering the most basic of search queries
Amazon Echo, a.k.a. Alexa, Is a Personal Aide in Need of Schooling
By FARHAD MANJOO, JUNE 24, 2015
The Amazon Echo, a wireless speaker and artificially intelligent personal assistant, can tell you the weather, play music and reorder items you frequently buy from Amazon, among other things.
THIS week, I asked a friend for help: “Alexa, can you write this review for me?”
“What’s your question?” Alexa responded.
“Can you write this review for me?”
“Review is spelled R-E-V-I-E-W.”
“Thanks,” I said. “That about sums it up.”
O.K., so Alexa isn’t perfect; far from it, in fact. If there is one glaring flaw in the Amazon Echo — the tiny wireless speaker and artificially intelligent personal assistant, a machine that one always addresses with the honorific “Alexa,” as if she’s some kind of digital monarch — it is that she is quite stupid.
If Alexa were a human assistant, you’d fire her, if not have her committed. “Sorry, I didn’t understand the question I heard” is her favorite response, though honestly she really doesn’t sound very sorry. She’ll resort to that line whether you ask her questions answered by a simple Google search (“How much does a cup of flour weigh?”) or something more complicated (“Alexa, what was that Martin Scorsese movie with Joe Pesci and Robert De Niro?”).
Other times, she is mind-numbingly literal. One night during the N.B.A. playoffs, I asked, “Alexa, what’s the score of the basketball game?” She proceeded to give me a two-minute, 18-part definition of the word “score” that included “a seduction culminating in sexual intercourse.” Not exactly what I was going for.
And yet, after spending three weeks testing the Echo, I really kind of love Alexa. She is just smart enough to be useful. And she keeps getting smarter. This week, after a long invitation-only preview period, Amazon began selling the Echo to the public. At $179.99, Alexa is more expensive than I’d like. (Subscribers to Amazon’s $99-a-year Prime subscription service could buy the Echo for only $100 during the preview.) But if you’re the type who enjoys taking chances on early, halfway useful tech novelties, the Echo is a fun thing to try.
And if you’re anything like me, after a week with the Echo, you may feel the device begin to change how you think about home tech. It will not seem far-fetched to expect that one day soon, you’ll have an all-knowing, all-seeing talking assistant to control your lights, thermostat, entertainment system and just about anything else at home. In Alexa, Amazon has created the perfect interface to control your home; if it adds some more intelligence, it would be quite handy.
The Echo is a stout, plain-looking cylinder, about the height of a toaster, that you can park just about anywhere you have Wi-Fi access, though it seems most useful in the kitchen. It comes with a remote control that you don’t really need, because after a quick initial setup using your smartphone, you can control pretty much everything the Echo does with your voice. (The remote does have a microphone that allows you to speak to the Echo from far away.) From there, the Echo is terrifically easy to use — say “Alexa” and ask your question.
At the moment, there are only a handful of uses for the Echo. She’s great at telling you the weather, adding stuff to your shopping list, reordering items you frequently buy from Amazon, giving you a heads-up about your nearing calendar appointments, and answering the most basic of search queries.
She is pretty good at playing music, though her main source is Amazon Prime Music, a streaming service that is included with a Prime membership. Prime Music’s selection is dreadfully limited, though, and at the moment, the Echo can’t connect to many other streaming services. Thankfully, with a few quick voice commands, Alexa can connect to your phone like any other Bluetooth speaker. That way, she can take control of music you play from most apps, including streaming apps like Spotify. You can’t call out for specific songs this way, but you can say “Alexa, pause” or “Alexa, next” and she’ll control the tunes playing from your phone.
The Echo is also a very good kitchen timer. Put your cookies in the oven; yell out, “Alexa, set timer for 12 minutes”; and she’s off. It’s far easier than fumbling with buttons on the microwave, especially when you have your hands full.
But wait a minute — can’t you do pretty much all this on your phone, your smartwatch or many other devices? Yes, you can, but Alexa is right there. She’s always plugged in. She’s always listening, and she’s fast. It’s surprising how much of a difference a few milliseconds make in maintaining the illusion of intelligence in our machines. Because Alexa is far quicker to spring into action than Siri, Apple’s digital personal assistant, especially Siri on the Apple Watch, I found her to be much more pleasant to use, even if she is frequently wrong.
Amazon says that it plans to constantly improve the Echo. During the preview period, it added a host of new features, including the ability to control some smart-home devices, built-in integration with the Pandora streaming service, and traffic information for your morning commute. I’m hoping Amazon creates an open system — what developers call an API — for the Echo, which will allow a wide variety of online services and apps to connect to the device. If it moves nimbly, keeping ahead of Apple and Google, Amazon could transform the Echo into something like a residential hub, the one device to control pretty much everything attached to your home.
At the moment, that dream is far off. But dumb as she sometimes sounds, Alexa may be just smart enough to make it happen.
====May 2015 Post====
The Amazon Crush continues:
Just look at cash:
[Chart: Cash and cash equivalents, end of period]
Amazon has published its 10Q for Q1 2015
Note that free cash flow has doubled:
“Free cash flow, a non-GAAP financial measure, was $3.2 billion for the trailing twelve months ended March 31, 2015, compared to $1.5 billion for the trailing twelve months ended March 31, 2014.”
Note also that international revenue is down.
They also have published their annual report:
Net sales are continuing sharp growth – to $89 billion.
“Sales increased 20%, 22%, and 27% in 2014, 2013, and 2012, compared to the comparable prior year periods. Changes in foreign currency exchange rates impacted net sales by $(636) million, $(1.3) billion, and $(854) million for 2014, 2013, and 2012. For a discussion of the effect on sales growth of foreign exchange rates, see ‘Effect of Foreign Exchange Rates’ below.

North America sales increased 25%, 28%, and 30% in 2014, 2013, and 2012, compared to the comparable prior year periods. The sales growth in each year primarily reflects increased unit sales, including sales by marketplace sellers, and AWS, which was partially offset by AWS pricing changes. Increased unit sales were driven largely by our continued efforts to reduce prices for our customers, including from our shipping offers, by sales in faster growing categories such as electronics and other general merchandise, by increased in-stock inventory availability, and by increased selection of product offerings.

International sales increased 12%, 14%, and 23% in 2014, 2013, and 2012, compared to the comparable prior year periods. The sales growth in each year primarily reflects increased unit sales, including sales by marketplace sellers. Increased unit sales were driven largely by our continued efforts to reduce prices for our customers, including from our shipping offers, by sales in faster growing categories such as electronics and other general merchandise, by increased in-stock inventory availability, and by increased selection of product offerings. Additionally, changes in foreign currency exchange rates impacted International net sales by $(580) million, $(1.3) billion, and $(853) million in 2014, 2013, and 2012.”
In their annual report, they state the key to their cash flow business model:
“Because of our model we are able to turn our inventory quickly and have a cash-generating operating cycle. On average, our high inventory velocity means we generally collect from consumers before our payments to suppliers come due.”
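That sentence describes a negative cash conversion cycle: days of inventory plus days of receivables minus days of payables comes out below zero, so customer cash arrives before supplier bills are due. A quick worked example with illustrative day counts (not Amazon’s actual figures):

```python
# Cash conversion cycle = days inventory + days receivables - days payables.
# A negative value means cash is collected before suppliers are paid.
# All day counts below are illustrative assumptions, not Amazon's figures.

def cash_conversion_cycle(days_inventory, days_receivables, days_payables):
    return days_inventory + days_receivables - days_payables

# Fast-turning e-commerce: inventory sells in ~30 days, customers pay at
# checkout (~0 days receivable), suppliers are paid on ~60-day terms.
print(cash_conversion_cycle(30, 0, 60))   # -30: hold cash ~30 days before paying

# A conventional retailer with slower turns and 30-day supplier terms:
print(cash_conversion_cycle(70, 5, 30))   # 45: cash is tied up for ~45 days
```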
====Posted 2014-08-20: Amazon Crush – Update====
Thanks to Oliver Wyman, Fast Company and many others for this update to the last post on Amazon. My sense from reading it all:
– Twitch is amazing! Driving 40% of all internet bandwidth???? Is that even possible? With 55 million users spending an average of 100+ minutes per day??? That is enormous! Bezos is obviously intrigued and willing to take a risk to bring this three-year-old start-up into the fold. But why exactly? Don’t know.
– FRESH is moving out. Per plan, per announcement to stockholders, and also per rumor, Amazon Fresh is in soft-launch mode. The big news was the launch in LA (after 7 years of tweaking in Seattle), and then it was small news that they rounded out most of the rest of the California markets – that is a HUGE expansion in less than a year. Moreover, Amazon green trucks are riding through Manhattan, and a rollout there is imminent. No word yet, but I really think Chicago might be next – rumors say I am right.
– Manufacturers think this isn’t their fight. Most of them are just glad they are not retailers. But the truth is that Amazon will cause a massive reduction in retailer margin, and will wipe out many of the brand-building activities at retail that manufacturers are used to: horrible merchandising will replace great merchandising, forget about cold beverage sales, forget about impulse sales, etc.
– The truth is that retailers are not at risk – just the marginally profitable ones.
Here are the articles:
AMAZONFRESH IN THE U.S.
After years of anticipation, AmazonFresh has now expanded its U.S. home grocery delivery service beyond its home market of Seattle.
In June 2013, it launched in Los Angeles, with more markets expected to follow. From conversations with supermarket retailers all over the U.S. and globally, it is clear that online and multi-channel competitors have come into focus as a key competitive threat, and AmazonFresh is by far the most dangerous of the new breed. What is striking is the similarity between what we hear from food retailers today and what leaders of category killers were saying back in 2009 – and we know that the category killers’ fears of Amazon proved to be well-founded.
WHAT IS AMAZON FRESH?
AmazonFresh, operating in pilot mode in Seattle since 2007, allows shopping online and on mobile apps. The assortment is surprisingly broad and deep, with between 10,000 and 30,000 items, depending on the market, including (for example) 400+ produce items, 500+ meat and seafood items, 1,300+ beverage items and 4,000+ health and beauty items. Unlike the traditional Amazon model, AmazonFresh prices on consumables are currently higher than those found in local supermarkets, as promotions are mostly absent – the current customer proposition focus is on convenience.
The differences between the Seattle and LA models (different membership and delivery pricing models and different assortment depth, to name a few) seem to indicate that Amazon is still trialling many elements of the business, but the rollout to additional markets suggests underlying confidence in the economics.
When Amazon decides to move from pilot to rollout, history indicates they will move very rapidly. The company has reportedly told vendors it could roll out to 40 U.S. markets by the end of 2014!
The direct impact that Amazon had on many category killers by winning market share is obvious, as is the impact on consumers’ price expectations, but one under-reported aspect of what is happening to category killers is the channel conflict Amazon provokes. Not only does Amazon take share, it also forces category killers to shift transactions to their own websites. But those sites are not the basket-building machines that stores are. For one major category killer, the average online transaction has only a quarter of the number of items that the average in-store transaction has. So the incumbents face a conundrum – they must grow online sales, but doing so dramatically worsens their economics.
However, Amazon will never take as much share away from food retailers as it has taken from category killers. Food retailers’ natural defenses – low gross margins, focus on fresh product, “need it now” consumption patterns, the emotional aspect of personally selecting food to feed one’s family – mean the supermarket channel as a whole will not suffer the fate of Borders or even Best Buy.
The threat is not that stores will become obsolete; such notions are alarmist and naïve. But AmazonFresh can force dramatic change in the shape of the food retail industry with even modest market share. It doesn’t take complicated analysis to prove this. The industry overall runs with about a 2% bottom line and a 20% volume variable margin. This means a 10% sales loss would wipe out the entire industry’s profit. Any experienced food retail executive knows that most chains have a “mushy middle” of stores that generate reasonable operating income with current sales volumes, but would quickly tip into negative store profit with a modest reduction in volume. We don’t know what Amazon’s ultimate ambitions in the food space are, but if they achieve even a 5% volume share it would force significant changes. Current players would have to either raise prices – kicking off the vicious cycle of volume loss, causing deleveraged fixed costs, leading to even more price rises – or close stores to bring costs into line. A 5% volume loss to AmazonFresh would result in 10-20% reduction in store count, because not all the volume from closed stores will be clawed back by surviving stores: as supermarkets become relatively less convenient, some of the volume would go to specialists (clubs, hard discounters, premium players) and online channels. It’s too early to know the full extent of the impact, but a good guess is that around one in eight supermarkets would have to close to maintain current profitability without raising prices.
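The arithmetic behind that claim checks out directly from the two figures quoted – a 2% bottom line and a 20% variable margin – as this back-of-envelope calculation shows:

```python
# Back-of-envelope check of the report's claim: a 2% bottom line and a
# 20% volume variable margin imply a 10% sales loss erases all profit.

revenue = 100.0          # index the industry's sales to 100
net_margin = 0.02        # "about a 2% bottom line"
variable_margin = 0.20   # "a 20% volume variable margin"

profit = revenue * net_margin                       # 2.0 profit units
for loss in (0.05, 0.10):
    lost_contribution = revenue * loss * variable_margin
    print(f"{loss:.0%} sales loss -> profit {profit - lost_contribution:+.1f}")
# 5% sales loss  -> profit +1.0  (half the industry's profit gone)
# 10% sales loss -> profit +0.0  (the entire bottom line wiped out)
```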
Supermarkets should not count on their ability to weather this disruption the way they weathered the last major disruption: the Walmart supercenter tsunami, in the case of the U.S. Then, the best grocers got better, slashed cost out of their networks, improved their capabilities, and prospered at the expense of weaker competitors who couldn’t adapt fast enough. They had time to pull this off because Walmart couldn’t open a thousand supercenters overnight. But this time, the starting point is much, much more efficient – there will be a lot less “fat” to cut to preserve profitability in the face of falling volume – and the weaker players that were the victims last time are already gone. Most significantly, the rate of change in the competitive landscape will not be constrained by the process of opening new stores. Amazon only has to set up distribution centers and networks. It already has a strong consumer brand. This disruption could happen much faster than anything the industry has seen before.
WHAT SHOULD FOOD RETAILERS BE DOING? AT LEAST THREE THINGS:
Build a multi-channel offering. Of course, grocers must develop their own answer to online and mobile shopping. And it is better to cannibalize one’s own in-store sales than to surrender them to the competition. That said, it is very tricky to make the economics of these models work, so great care must be taken to manage the bottom line as online and mobile sales ramp up.
Get seriously good at fresh. The fresh categories represent a cushion around the rest of the customer offer. If customers believe they can’t get the same quality, freshness and selection online as they would in a store, it will be a formidable barrier to switching to online purchasing. So far AmazonFresh’s fresh product offering is highly variable (see Exhibit 1), but it stands to reason that they will get better with experience. And they have a built-in freshness advantage – as do all online grocers – because products take less time to go from the distribution center to the customer’s home. Most U.S. retailers are nowhere near where they could be, and the same is true in many other geographies. Getting good enough at fresh to fend off online competition means re-thinking the supply chain, store practices and merchandising standards. It means breaking the usual trade-off between availability and shrink, shifting the efficient frontier through better capabilities and greater accountability.
Prepare for a world with fewer stores. Possibly a lot fewer. Even an excellent multichannel platform and a rejuvenated fresh offer will not be nearly enough. We believe food retailers should be planning now for a world with far fewer stores. If, say, 15% of the square footage is going to have to close down, survival will depend on making sure the competition bears more than their fair share of the pain. So grocers’ competitive strategy should be focused on ensuring they get to keep more of their stores than the competition does, and being prepared to pounce when weakened competitors begin to wobble. This means understanding how to win store-by-store battles and drive maximum profit out of every square foot. To be clear, we are bullish on the supermarket industry. While the industry’s transformation will be painful for all and fatal for a few, the survivors will be better placed. Surviving stores will do much higher volume and, with higher fixed cost leverage, they could be massively more profitable. Stronger competition will force grocers to become even better operators and even more responsive to customer needs. Retailers who act fast can not only survive, but adapt their businesses to thrive in the new world.
Exhibit 1: Preliminary customer research by Oliver Wyman shows a wide range of customer perceptions of the quality of AmazonFresh products:
“The apples were the kind I would have hunted for and maybe not found. They looked great and the taste was fresh and sweet, the texture crisp which is exactly what I like. They were delicious.”
“The sell by date on the frozen beef says July 2013 [delivered in September 2013]. Granted it was frozen, this makes me believe it’s not fresh beef and I’m a little disappointed by that.”
“Some of the berries were very soft and leaking all over the box.”
FastCompany: AmazonFresh a “Trojan horse;” 20 more markets expected
Aug 11 2013, 18:15 ET
In a cover story on Jeff Bezos and Amazon (AMZN), FastCompany’s J.J. McCorvey observes the company’s new AmazonFresh grocery service (offered via its $299/year Prime Fresh free shipping plan) is a “Trojan horse” meant to give Amazon’s broader same-day delivery efforts needed scale.
Amazon is also hoping its same-day infrastructure (replete with Amazon trucks) will increase its appeal to 3rd-party sellers (now responsible for 40% of unit sales) by lowering delivery times. Merchants already cite access to Prime as a reason for outsourcing fulfillment to Amazon (and giving a ~20% cut).
eBay could prove a formidable same-day rival. Instead of building its own soup-to-nuts infrastructure, eBay is relying on dozens of offline retailers (including major national chains) to help handle fulfillment. Google is also dipping its toes into same-day.
Currently available in L.A. and Seattle, AmazonFresh is expected to expand to 20 more markets, including some international ones. SunTrust recently predicted an NYC launch will happen in 2014.
Also mentioned by McCorvey: Amazon is now able to ship items less than 2.5 hours after an order is placed, and wants to lower that number further; Prime now covers 15M+ items (up from 1M in ’05); and Amazon is still “evaluating” how to use Kiva’s robots.
Amazon buying Twitch for $970M in cash
AMAZONFRESH IS JEFF BEZOS’ LAST MILE QUEST FOR TOTAL RETAIL DOMINATION
AMAZON UPENDED RETAIL, BUT CEO JEFF BEZOS — WHO JUST BOUGHT THE WASHINGTON POST FOR $250 MILLION — INSISTS IT’S STILL “DAY ONE.” WHAT COMES NEXT? A RELENTLESS PURSUIT OF CHEAPER GOODS AND FASTER SHIPPING. THE COMPETITION IS ALREADY GASPING FOR BREATH.
BY J.J. MCCORVEY
The first thing you notice about Jeff Bezos is how he strides into a room.
A surprisingly diminutive figure, clad in blue jeans and a blue pinstripe button-down, Bezos flings open the door with an audible whoosh and instantly commands the space with his explosive voice, boisterous manner, and a look of total confidence. “How are you?” he booms, in a way that makes it sound like both a question and a high-decibel announcement.
Each of the dozen buildings on Amazon’s Seattle campus is named for a milestone in the company’s history–Wainwright, for instance, honors its first customer. Bezos and I meet in a six-floor structure known as Day One North. The name means far more than the fact that Amazon, like every company in the universe, opened on a certain date (in this case, it’s July 16, 1995). No, Day One is a central motivating idea for Bezos, who has been reminding the public since his first letter to shareholders in 1997 that we are only at Day One in the development of both the Internet and his ambitious retail enterprise. In one recent update for shareholders he went so far as to assert, with typical I-know-something-you-don’t flair, that “the alarm clock hasn’t even gone off yet.” So I ask Bezos: “What exactly does the rest of day one look like?” He pauses to think, then exclaims, “We’re still asleep at that!”
He’s a liar.
Amazon is a company that is anything but asleep. Amazon, in fact, is an eyes-wide-open army fighting–and winning–a battle that no one can map as well as its general. Yes, it is still the ruthless king of books–especially after Apple’s recent loss in a book price-fixing suit. But nearly two decades after its real day one, the e-commerce giant has evolved light-years from being just a book peddler. More than 209 million active customers rely on Amazon for everything from flat-panel TVs to dog food. Over the past five years, the retailer has snatched up its most sophisticated competition–shoe seller Zappos and Quidsi, parent of such sites as Diapers.com, Soap.com, Wag.com, and BeautyBar.com. It has purchased the robot maker Kiva Systems, because robots accelerate the speed at which Amazon can assemble customer orders, sometimes getting it down to 20 minutes from click to ship. Annual sales have quadrupled over the same period to a whopping $61 billion. Along the way, incidentally, Amazon also became the world’s most trusted company. Consumers voted it so in a recent Harris Poll, usurping the spot formerly held by Apple.
Amazon has done a lot more than become a stellar retailer. It has reinvented, disrupted, redefined, and renovated the global marketplace. Last year, e-commerce sales around the world surpassed $1 trillion for the first time; Amazon accounted for more than 5% of that volume. This seemingly inevitable shift has claimed plenty of victims, with more to come. Big-box retailers like Circuit City and Best Buy bore the brunt of Amazon’s digital assault, while shopping-mall mainstays such as Sears and JCPenney have also seen sales tank. Malls in general, which once seemed to offer some shelter from the online pummeling, have been hollowed out. By Green Street Advisors’ estimate, 10% of the country’s large malls will close in the next decade. It has become painfully clear that the chance to sift through bins of sweaters simply isn’t enough of a draw for shoppers anymore. “It has been this way in retail forever,” says Kevin Sterneckert, a research VP at Gartner who focuses on shopping trends, and who lays out a strategy that should blow nobody’s mind: “If you don’t innovate and address who your customers are, you become irrelevant.” And now that means fending off threats from every phone, tablet, and laptop on the planet.
Amazon’s increasing dominance is now less about what it sells than how it sells. And that portends a second wave of change that will further devastate competitors and transform retail again. It’s not just “1-Click Ordering” on Amazon’s mobile app, which is tailor-made for impulse buying. It’s not just the company’s “Subscribe & Save” feature, which lets customers schedule regular replenishments of essentials like toilet paper and deodorant. It’s not just Amazon’s “Lockers” program, in which huge metal cabinets are installed at 7-Elevens and Staples in select cities, letting customers securely pick up packages at their convenience instead of risking missed (or stolen) deliveries.
“AMAZONFRESH IS REALLY A TROJAN HORSE. IT’S NOT ABOUT WINNING IN GROCERY SERVICES. IT’S ABOUT DOMINATING THE MARKET IN SAME-DAY DELIVERIES. ”
No, it’s all this, plus something more primal: speed. Bezos has turned Amazon into an unprecedented speed demon that can give you anything you want. Right. Now. To best understand Amazon’s aggressive game plan–and its true ambitions–you need to begin with Amazon Prime, the company’s $79-per-year, second-day delivery program. “I think Amazon Prime is the best bargain in the history of shopping,” Bezos tells me, noting that the service now includes free shipping on more than 15 million items, up from the 1 million it launched with in 2005. Prime members also gain access to more than 40,000 streaming Instant Video programs and 300,000 free books in the Kindle Owners’ Lending Library. As annoying as this might be to Netflix, it is not intended primarily as an assault on that business. Rather, Bezos is willing to lose money on shipping and services in exchange for loyalty. Those 10 million Prime members (up from 5 million two years ago, according to Morningstar) are practically addicted to using Amazon. The average Prime member spends an astounding $1,224 a year on Amazon, which is $700 more than a regular user. Members’ purchases and membership fees make up more than a third of Amazon’s U.S. profit. And memberships are projected to rise 150%, to 25 million, by 2017.
[Photo: Nadia Shouraboura of Hointer, a new store that represents how retail must adapt in the Age of Amazon]
Robbie Schwietzer, VP of Prime, is more candid than his boss when explaining Prime’s true purpose: “Once you become a Prime member, your human nature takes over. You want to leverage your $79 as much as possible,” he says. “Not only do you buy more, but you buy in a broader set of categories. You discover all the selections we have that you otherwise wouldn’t have thought to look to Amazon for.” And what you buy at Amazon you won’t buy from your local retailer.
Prime is phase one in a three-tiered scheme that also involves expanding Amazon’s local fulfillment capabilities and a nascent program called AmazonFresh. Together, these pillars will remake consumers’ expectations about retail. Bezos seems to relish the coming changes. “In the old world, you could make a living by hoping that your customer didn’t know whether your price was actually competitive. That’s a very”–Bezos pauses for a second to rummage for the least insulting word–“tenuous strategy in the new world. [Now] you can’t convince people you have the low price; you actually have to have the low price. You can’t persuade people that your delivery speeds are fast; you actually have to have fast delivery speeds!” With that last challenge, he erupts in a thunderous laugh, throwing his cleanly depilated head so far back that you can see the dark fillings on his upper molars. He really does seem to know something the rest of us don’t. We’re still asleep, he says? The alarm clock at Amazon went off hours ago. Whether the rest of the retail world has woken up yet is another question.
Amazon’s 1-million-square-foot Phoenix fulfillment center produces a steady and syncopated rhythm. It is the turn of mechanical conveyor belts, the thud of boxes hitting metal, the beeping of forklifts moving to and fro, and the hum of more than 100 industrial-size air conditioners whirring away. This is the sound of speed–a sonic representation of what it takes to serve millions of customers scattered across the globe.
In centers like this one, of which there are 89 globally (with more to come), Amazon has built the complex machinery to make sure a product will ship out in less than 2.5 hours from the time a customer clicks place your order.
From that click, a set of algorithms calculates the customer’s location, desired shipping speed, and product availability; it then dispatches the purchase request to “pickers” on duty at the nearest fulfillment center. The system directs the new order to the picker who is closest on the floor to that product, popping up with a bleep on the picker’s handheld scanner gun. These men and women roam the sea of product shelves with carts, guided by Amazon’s steady hand to the precise location of the product on the color-coded shelves. The picker gathers the item and puts it into a bin with other customer orders. And from there, the item zooms off on a conveyor belt to a boxing station, where a computer instructs a worker on what size box to grab and what items belong in that box. After the packer completes an order, the word success lights up in big green letters on a nearby computer screen. Then the package goes back on a conveyor, where the fastest delivery method is calculated by scanning the box, which is then kicked down a winding chute to the appropriate truck.
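Stripped of the physical details, the dispatch step described above is a nearest-resource assignment: among fulfillment centers that have the item in stock, pick the one that minimizes distance (or delivery time) to the customer, then route the pick to the closest available worker. A simplified sketch, with center names, coordinates, and stock levels invented for illustration:

```python
# Simplified sketch of the dispatch logic described above: choose the
# nearest fulfillment center with stock, then reserve one unit there.
# Center names, coordinates, and stock levels are made up for illustration.
import math

CENTERS = {
    "Phoenix":      {"loc": (33.4, -112.1), "stock": {"B00X": 12}},
    "Robbinsville": {"loc": (40.2, -74.6),  "stock": {"B00X": 3}},
}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])   # flat-grid approximation

def dispatch(item, customer_loc):
    candidates = [(name, c) for name, c in CENTERS.items()
                  if c["stock"].get(item, 0) > 0]
    if not candidates:
        return None                                # nothing in stock anywhere
    name, center = min(candidates,
                       key=lambda nc: dist(nc[1]["loc"], customer_loc))
    center["stock"][item] -= 1                     # reserve one unit
    return name

print(dispatch("B00X", (40.7, -74.0)))  # a New York order -> Robbinsville
```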
How one store merges digital and physical
If anyone can design a brick-and-mortar store for an e-commerce world, it should be Nadia Shouraboura. She used to be Amazon’s VP of global supply chain and fulfillment technology and has since created Hointer, a fully automated store run on software algorithms and machinery. She calls it a “microwarehouse” that marries digital’s instant gratification with in-store benefits. “In apparel, this will win,” she predicts. It works like this:
STEP 1. SEARCH
A customer enters the spare store, where there’s only one of every product in view. She pulls up the Hointer app, scans the QR code on a pair of jeans she likes, and enters her size.
STEP 2. DELIVER
Within 30 seconds of scanning the code, a pair of jeans in her size travels through a chute and lands in her dressing room. She can scan as many items as she likes.
STEP 3. REFINE
Inside the dressing room, she tries on the jeans, but they’re too baggy. So she chucks them down another chute and selects a smaller size from the app.
STEP 4. PURCHASE
The jeans fit! She pays on her phone or swipes her card at a kiosk, and leaves the store with her purchase. No sales clerk necessary.
The process is efficient, but still lower tech than it could be. Although Amazon shelled out $775 million last year for those orange Kiva robots, it says it’s still “evaluating” how to deploy the bots, and they’re nowhere to be seen here. “Fulfillment by Amazon” is still a very human endeavor–and the company’s creativity thrives within that limitation. A team at the Phoenix center is constantly thinking of ways to chip away at the 2.5-hour processing time. For instance, when products arrive from Amazon’s vendors and the 2 million third-party merchants who sell their goods on the site, workers now scan them into Amazon’s inventory system (again, with a handheld gun) instead of entering the details manually. Also, products have been stowed on shelves in what otherwise might appear to be a random way–for example, a single stuffed teddy bear might be next to a college biology book–because it reduces the potential distance a worker must trek between popular products that might be ordered together. Small tweaks like these have an impact: In the past two years, Amazon has reduced the time it took to move a product by a quarter. During the past holiday season, the company processed 306 items per second worldwide.
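The seemingly random stow strategy can be sketched the same way. The toy below places an incoming item near items it is frequently co-ordered with, rather than with its own category; the co-purchase table, bin coordinates, and scoring rule are invented purely for illustration.

```python
import math
import random

def stow(item, free_bins, placed, co_ordered_with):
    """Pick a free bin close to this item's frequent co-purchases."""
    partners = [placed[p] for p in co_ordered_with.get(item, []) if p in placed]
    if not partners:
        return random.choice(free_bins)  # no signal: stow anywhere

    def avg_dist(bin_xy):
        return sum(math.hypot(bin_xy[0] - p[0], bin_xy[1] - p[1])
                   for p in partners) / len(partners)

    return min(free_bins, key=avg_dist)  # shortest expected walk for pickers

placed = {"bio-textbook": (3.0, 4.0)}        # already on the shelves
co = {"teddy-bear": ["bio-textbook"]}        # often bought together
bins = [(3.5, 4.0), (90.0, 60.0)]            # free slots on the floor
print(stow("teddy-bear", bins, placed, co))  # -> (3.5, 4.0)
```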
These centers aren’t just about warehouse speed, though: They’re also about proximity. Over the past several years, Bezos has poured billions into building them in areas closer and closer to customers. The Phoenix warehouse, one of four in the region, serves a metro area of nearly 4 million. Robbinsville, New Jersey, is roughly one hour from 8 million New Yorkers. Patterson, California, is an hour and a half from 7 million people living in the San Francisco Bay Area. Three locations in Texas–Coppell, Haslet, and Schertz–will serve not only the nearly 9 million citizens of the Dallas and San Antonio metro areas but also the other 17 million or so customers in the state (and possibly neighboring states too) who live only a few hundred miles away.
“What you see happening,” Bezos explains, “is that we can have inventory geographically near major urban populations. If we can be smart enough–and when I say ‘smart enough,’ I mean have the right technology, the right software systems, machine-learning tools–to position inventory in all the right places, over time, your items never get on an airplane. It’s lower cost, less fuel burned, and faster delivery.”
The holy grail of shipping–same-day delivery–is tantalizingly within reach. Amazon already offers that service, which it calls “local express” delivery, in select cities, but the big trick is to do it nationally. And the crucial element of this ambitious plan is revealed by something wonkier than a bunch of buildings. It is something only an accountant could see coming: a cunning shift in tax strategy.
“IN THE NEW DIGITAL WORLD,” SAYS BEZOS, “YOU CAN’T CONVINCE PEOPLE YOU HAVE THE LOW PRICE; YOU ACTUALLY HAVE TO HAVE THE LOW PRICE.”
If you were a competitor who knew what to listen for, you’d practically hear the Jaws theme every time Bezos said the word taxes. For years, Amazon fervently avoided establishing what is called a “tax nexus”–that is, a large-enough physical presence–in states that could potentially force it to collect sales tax from its customers, something brick-and-mortar and mom-and-pop stores had long argued would finally remove Amazon’s unfair pricing advantage. In states that dared to challenge Amazon, the company would quickly yank operations. The scrutiny even extended to its sale of products by other merchants. “We had to be very careful, even with the third-party business, about not incurring tax-nexus stuff,” recalls John Rossman, a former Amazon executive and current managing director at Alvarez and Marsal, a Seattle-based consulting firm.
But Amazon has since changed its mind. It determined that the benefits of more fulfillment centers–and all the speed they’ll provide–will outweigh the tax cost they’ll incur. So it began negotiating with states for tax incentives. South Carolina agreed to let the company slide without collecting sales tax until 2016, in exchange for bringing 2,000 jobs to the state. In California, Amazon was given a year to start collecting taxes in exchange for building three new warehouses. And at the end of 2011, Amazon even threw its support behind a federal bill that would require all online retailers with sales of more than $1 million to collect tax in states where they sell to customers. In 2012 alone, Amazon spent $2.5 million lobbying for issues that included what’s known as the Marketplace Fairness Act–the same law, essentially, it had once moved heaven and earth to eradicate. The bill recently cleared the U.S. Senate and awaits passage in the House.
“The general perception is companies thinking, Oh, great, finally a level playing field,” Rossman says. “But other retailers are going to regret the day. Sales tax was one of the few things impeding Amazon from expanding. Now it’s like wherever Amazon wants to be, whatever Amazon wants to do, they are going to do it.”
There’s yet another weapon in Amazon’s offensive, and it’s ready for rollout. It’s called AmazonFresh, a grocery delivery service that has long been available only in Seattle. The site has a selection of 100,000 items, and from my hotel room in that city on a recent Saturday at 11 a.m., I gave it a try. I clicked on chips, bananas, apples, yogurt, and a case of bottled water–along with a DVD of Silver Linings Playbook and a Moleskine reporter’s notebook. After checking out and paying the $10 delivery fee, I requested my goods to arrive during the 7 p.m. to 8 p.m. window. At 7:15 that evening, De, my AmazonFresh delivery woman, showed up in the lobby. She helped carry my bags up the elevator and to my hotel room, and tried several times to refuse a $5 tip for the trouble I put her through in the name of research. It was simple, easy–and for Amazon competitors, very threatening.
De and the Kiva robots are central to what Amazon sees as the future of shopping: whatever you want, whenever you want it, wherever you want it, as fast as you demand it. AmazonFresh is expected to expand soon to 20 more urban markets–including some outside America. Los Angeles became the second AmazonFresh market, this past June, and customers there were offered something the folks in Seattle must wish they got: a free trial of Prime Fresh, the upgraded version of Amazon Prime, which provides free shipping of products and free delivery of groceries for orders over $35. Subscribers will pay an annual fee of $299. Considering that grocery delivery otherwise costs between $8 and $10 each time (depending on order size), the subscription covers itself after about 30 deliveries–which busy families will quickly exceed.
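That break-even claim is easy to verify. A quick calculation using only the figures quoted above (the $299 annual fee and the $8-$10 per-delivery charge):

```python
# Prime Fresh break-even, using the article's figures.
annual_fee = 299.0                    # $/year
for per_delivery in (8.0, 10.0):      # typical per-delivery charge, $
    print(f"${per_delivery:.0f}/delivery -> break-even at "
          f"{annual_fee / per_delivery:.0f} deliveries per year")
# $8/delivery  -> break-even at 37 deliveries per year
# $10/delivery -> break-even at 30 deliveries per year
```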
Bezos, in his cagey, friendly way, seems more excited about my Fresh experience than he is about describing Fresh’s future. He seems almost surprised that the service worked so well at a hotel, given that it was designed for home delivery. “Thank you!” he shouts. After peppering me with questions on how, precisely, the delivery went down, he finally gets around to addressing the service’s business purpose.
“WE WON’T INVEST IN A COMPANY UNLESS THEY CAN TELL US WHY THEY WON’T GET STEAMROLLED BY AMAZON.”
“We’d been doing a very efficient job with our current distribution model for a wide variety of things,” Bezos says. “Diapers? Fine, no problem. Even Cheerios. But there are a bunch of products that you can’t just wrap up in a cardboard box and ship ’em. It doesn’t work for milk. It doesn’t work for hamburger.” So he developed a service that would work–not because he suddenly wanted to become your full-service grocer but because of how often people buy food.
AmazonFresh is actually a Trojan horse, a service designed for a much greater purpose. “It was articulated [in the initial, internal pitch to Bezos] that this would work with the broader rollout of same-day delivery,” says Tom Furphy, a former Amazon executive who launched Fresh in 2007 and ran it until 2009. Creating a same-day delivery service poses tremendous logistical and economic hurdles. It’s the so-called last-mile problem–you can ship trucks’ worth of packages from a warehouse easily enough, but getting an individual package to wind its way through a single neighborhood and arrive at a single consumer’s door isn’t easy. The volume of freight and frequency of delivery must outweigh the costs of fuel and time, or else this last mile is wildly expensive. You can’t hire a battalion of Des unless they earn their keep. So by expanding grocery delivery, Amazon hopes to transform monthly customers into weekly–or even thrice-weekly–customers. And that, in turn, will produce the kind of order volume that makes same-day delivery worth investing in. “Think of the synergy between Prime, same-day delivery, and Fresh,” says Furphy. “When all of those things start working in concert, it can be a very beautiful thing.”
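Furphy’s point can be captured in a toy cost model: a delivery run has a roughly fixed hourly cost (driver, truck, fuel), so the cost per package falls as order density rises. The $60-per-hour figure below is an assumption chosen only to show the shape of the curve, not a real Amazon number.

```python
def cost_per_package(stops_per_hour, hourly_run_cost=60.0):
    # Fixed hourly cost spread over however many doors one truck reaches.
    return hourly_run_cost / stops_per_hour

for stops in (2, 6, 12):  # sparse suburban route -> dense urban route
    print(f"{stops:>2} stops/hour -> ${cost_per_package(stops):.2f} per package")
# 2 stops/hour -> $30.00; 6 -> $10.00; 12 -> $5.00
```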
AmazonFresh is arguably the last link in Bezos’s big plan: to make Amazon the dominant servicer–not just seller–of the entire retail experience. The difference is crucial. Third-party sellers, retailers large and small, now account for 40% of Amazon’s product sales. Amazon generally gets up to a 20% slice of each transaction. Those sellers are also highly incentivized to use Fulfillment by Amazon (known as FBA). Rather than shipping their products themselves after a sale is made on the Amazon site, these retailers let Amazon do the heavy lifting, picking and packing at places like the Phoenix center. For the sellers, an FBA agreement grants them access to Prime shipping speeds, which can help them win new customers and can allow them to sell at slightly higher prices. For Amazon, FBA increases sales, profits, and the likelihood that any shopper can find any item on its website.
“NOW YOU HAVE SMART BRICK-AND-MORTAR STORES SAYING, ‘WHY ISN’T OUR EXPERIENCE MORE INTUITIVE, AS IT IS ON THE WEB?’”
The burgeoning AmazonFresh transportation network will help expand these numbers. In Los Angeles and Seattle, a fleet of Fresh trucks delivers everything from full-course meals to chocolate from local merchants. The bright green branded trucks–with polite drivers in branded uniforms–let Amazon personify its brand, giving it the same kind of trustworthy familiarity that fueled the rise of UPS in the 1930s. “If you have all kinds of fly-by-night operations coming to your door, people don’t like that,” says Yossi Sheffi, professor and director of the MIT Center for Transportation and Logistics. “It’s different with someone in a U.S. Postal Service or FedEx uniform. Those brands inspire confidence.”
As Amazon evolves into a same-day delivery service, its active transportation fleet could become yet another competitive advantage. By supplementing its long-term relationships with UPS and FedEx with its own Fresh trucks, Amazon may well be able to deliver faster than retailers that depend entirely on outside services. “Pretty soon, if you’re a retailer with your online business, you’re going to be faced with a choice,” says Brian Walker, a former analyst at Forrester Research who is now with Hybris, a provider of e-commerce software. “You’re not going to be able to match Amazon, so you’re going to have to consider partnering with them and leveraging their network.”
This shift could even turn Amazon into a competitor to UPS and FedEx, the long-standing duopoly of next-day U.S. shipping. “If Amazon could do it at enough scale, they could offer shipping at a great value and still eke out some margin,” says Walker. “In classic Amazon fashion, they could leverage the infrastructure they’ve built for themselves, take a disruptive approach to the pricing, and run it as an efficiency play.”
Amazon has been down this road before. Its Web Services began as an efficient, reliable back end to handle its own web operations–then became so adept that it now provides digital services for an enormous range of customers, including Netflix and, reportedly, Apple. It’s not impossible to imagine Amazon doing the same with shipping. Last year, the company cut its shipping costs as a percentage of sales from 5.4% to 4.5%. As it builds more distribution centers, installs more lockers, and builds out its fleet, Amazon is likely to drive those costs down even further.
So is Amazon Freight Services Bezos’s next mission? When I ask, the laugh lines vanish from his face as if someone flipped a switch on his back. He contends that same-day delivery is too expensive outside of urban markets and that it only makes sense for Amazon to deliver its own products within the Fresh program. In China, he explains, Amazon does in fact deliver products via many couriers and bicycle messengers. “But in a country like the United States,” he says, “we have such a sophisticated last-mile delivery system that it makes more sense for Amazon to use that system to reach its customers in a rapid and accurate way.” When I ask whether he would consider, say, buying UPS, with its 90,000 trucks–or even more radically, purchasing the foundering USPS, with its 213,000 vehicles running daily through America’s cities and towns–Bezos scoffs. But he won’t precisely say no.
Rivals aren’t waiting for an answer. EBay has launched eBay Now, a $5 service that uses its own branded couriers in New York, San Francisco, and San Jose, to fetch products from local retail stores like Best Buy and Toys “R” Us and deliver them to customers within an hour. Google, fully aware that Amazon’s market share in product search is substantial (now 30% to Google’s 13%), has launched a pilot service called Google Shopping Express, which partners with courier companies. Walmart–which has booted all Kindles from its stores–started testing same-day delivery in select cities during the last holiday season, shipping items directly from its stores. (Joel Anderson, chief executive of Walmart.com, even suggested paying in-store shoppers to deliver online orders to other customers the same day. Come for a handsaw, leave with a job!)
These are the sort of ideas that retailers–both e-commerce and physical, large and small–will have to consider as Amazon expands. Guys like Jeff Jordan, partner at well-known venture firm Andreessen Horowitz, will make sure of it. His firm follows and invests in direct-to-consumer businesses. “We won’t invest in a company,” he says, “unless they can tell us why they won’t get steamrolled by Amazon.”
Given the astounding growth of Amazon, and the seemingly infinite ways it has defied the critics, Bezos may have proved himself the best CEO in the world at taking the long view. But he doesn’t like talking about it. “Did you bring the crystal ball? I left mine at home today,” he quips. He does, however, like discussing what the future might bring for his customers. In fact, he likes talking about his customer so much that the word can seem like a conversational tic; he used it 40 times, by my count, in just one interview. “It’s impossible to imagine that 10 years from now, I could interview an Amazon customer and they would tell me, ‘Yeah, I really love Amazon. I just wish your prices were a little higher,’” he says. “Or, ‘I just wish you’d deliver a little more slowly.’” In Bezos’s world, the goal of the coming decade is a lot like the goal of the past two: Be cheap. Be fast. That’s how you win.
There is, naturally, no guarantee that Bezos will simply win and win and win. The bigger Amazon gets, the greater the number and variety of stakeholders required to make the Amazon machine hum. Many seem to be getting increasingly frustrated. Consider Amazon’s third-party sellers–that group making up 40% of the company’s product sales. Earlier this year, Amazon issued a series of fee hikes for use of its fulfillment services, ranging from as low as 5 cents per smallish unit to as much as $100 for heavier or awkwardly shaped items (like a whiteboard, say, or roll-away bed). Many sellers took to Amazon’s forums to complain, and others threatened to go to eBay, which mostly leaves fulfillment to its sellers. “I think Amazon is a necessary evil,” says Louisa Eyler, distributor for Lock Laces, a shoelace product that sells as many as 3,000 units per week on Amazon. After the price hike, Eyler says her total fees for the $7.99 item went from $2.37 to $3.62. She says Amazon now makes more per unit than she does.
Or consider the frustrations of Amazon employees, who are striking at two of its eight German facilities in an effort to wrest higher wages and overtime pay. At the height of the conflict, on June 17, 1,300 workers walked off the job. (It is one of Amazon’s largest walk-offs in its biggest foreign market, and could result in shipping delays.) Meanwhile, Amazon workers in the U.S. have filed a lawsuit claiming that they’ve been subject to excessive security checks–to search for pilfered items–at warehouses. The suit alleges their wait could last as long as 25 minutes, an inconvenience Amazon would never subject its customers to. “It means there’s a broken process somewhere,” says Annette Gleneicki, an executive at Confirmit, a software company that helps businesses capture customer and employee feedback. “[Bezos] clearly inspires passion in his employees, but that’s only sustainable for so long.”
The company could be vulnerable on other fronts as well. Target and Walgreens have “geo-fenced” their stores so their mobile apps can guide customers directly to the products they desire. Walmart and Macy’s have begun making their stores do double-duty, both as a place to shop and a warehouse from which to ship products. (The strategy seems to be paying off for Macy’s, which recently reported a jump in first-quarter profit and is now fulfilling 10% of its online purchases from its stores.) They’re proving that retail won’t go away–it’ll learn and adapt. “Now you have smart brick-and-mortar stores saying, Why isn’t our experience more intuitive, as it is on the web?” says Doug Stephens, author of The Retail Revival: Re-Imagining Business for the New Age of Consumerism. “We should know a consumer when they walk in, and what they bought before, in the same way as Amazon’s recommendation engine.”
Bezos won’t admit to any deep concern. While Amazon’s paper-thin profits continue to perplex observers (the company netted only $82 million in the first quarter of 2013), the three primary weapons in its retail takeover–fulfillment centers, Amazon Prime, and now AmazonFresh–are coming to maturity. If the next year tells us anything about Amazon’s future, it should reveal whether Bezos’s decision to plow billions back into these operations will give the company an end-to-end service advantage that might be nearly impossible for its competitors to overcome.
The sun seems to be setting on Bezos’s big Day One. Before we part ways in Seattle, I ask him what we can expect to see on Day Two. “Day Two will be when the rate of change slows,” he replies. “But there’s still so much you can do with technology to improve the customer experience. And that’s the sense in which I believe it’s still Day One, and that it’s early in the day. If anything, the rate of change is accelerating.”
Of course, Bezos is the accelerator.
Amazon Buys Twitch For $970 Million In Cash
AUG. 25, 2014, 4:03 PM
Patrick T. Fallon/Getty Images
Twitch CEO Emmett Shear.
Amazon said on Monday it would pay $970 million in cash for Twitch, a live video-game-streaming site with more than 55 million users that’s like YouTube for video games.
As of July, Twitch viewers had watched more than 15 billion minutes of content, and users were spending more than 100 minutes a day on the site, on average. Twitch users can host live streams of their gaming sessions and broadcast them to the world. They can also chop up their sessions into segments for streaming later.
It’s also a resource for gamers who like to show off their unique skills. For example, there’s an entire community on Twitch dedicated to doing weird stuff like beating Zelda games in under 20 minutes or playing massively collaborative games of Pokemon.
A Twitch streaming session.
Twitch is a huge part of the internet, and it accounts for nearly 2% of all traffic in the U.S. during peak hours, according to a report by The Wall Street Journal. Only Netflix, Google, and Apple account for more traffic. In that respect, Twitch even streams more video than Hulu.
Twitch also accounts for 40% of all live-streamed internet content, according to Business Insider Intelligence.
What’s really impressive is that Twitch was able to become so big after just three years.
You can see Amazon’s purchase of Twitch as a play to take over the future of TV. More and more content is being streamed online, and more and more hours of video watching are being done on sites like YouTube, Netflix, and Hulu. Amazon has its own streaming video service called Amazon Instant that comes with Amazon Prime memberships. Amazon Instant includes thousands of streaming movies and TV shows, including original shows like “Alpha House.”
Alpha House is an original Amazon show.
Earlier Monday, multiple reports indicated Amazon was in late-stage talks to acquire Twitch. The news came as a big surprise because just last month it was reported that Google had agreed to acquire Twitch for about $1 billion. That deal, however, was never officially confirmed.
The Google-Twitch deal felt like a natural fit, since it would’ve been a good way for YouTube to expand its video offerings. Yahoo also tried to buy Twitch for $970 million, but Amazon swooped in and got it instead.
It’s unclear what caused the Google-Twitch deal to fall through, but antitrust concerns are one possible reason: Google already owns YouTube, the world’s largest video-streaming site, and acquiring another massive streaming site like Twitch could have invited regulatory scrutiny. According to Forbes, the two sides also couldn’t agree on a potential breakup fee.
Here’s the official announcement from Amazon:
Amazon.com, Inc. (NASDAQ: AMZN) today announced that it has reached an agreement to acquire Twitch Interactive, Inc., the leading live video platform for gamers. In July, more than 55 million unique visitors viewed more than 15 billion minutes of content on Twitch produced by more than 1 million broadcasters, including individual gamers, pro players, publishers, developers, media outlets, conventions and stadium-filling esports organizations.
“Broadcasting and watching gameplay is a global phenomenon and Twitch has built a platform that brings together tens of millions of people who watch billions of minutes of games each month – from The International, to breaking the world record for Mario, to gaming conferences like E3. And, amazingly, Twitch is only three years old,” said Jeff Bezos, founder and CEO of Amazon.com. “Like Twitch, we obsess over customers and like to think differently, and we look forward to learning from them and helping them move even faster to build new services for the gaming community.”
“Amazon and Twitch optimize for our customers first and are both believers in the future of gaming,” said Twitch CEO Emmett Shear. “Being part of Amazon will let us do even more for our community. We will be able to create tools and services faster than we could have independently. This change will mean great things for our community, and will let us bring Twitch to even more people around the world.”
Twitch launched in June 2011 to focus exclusively on live video for gamers. Under the terms of the agreement, which has been approved by Twitch’s shareholders, Amazon will acquire all of the outstanding shares of Twitch for approximately $970 million in cash, as adjusted for the assumption of options and other items. Subject to customary closing conditions, the acquisition is expected to close in the second half of 2014.
Here’s a letter from Twitch’s CEO:
Dear Twitch Community,
It’s almost unbelievable that slightly more than 3 years ago, Twitch didn’t exist. The moment we launched, we knew we had stumbled across something special. But what followed surprised us as much as anyone else, and the impact it’s had on both the community and us has been truly profound. Your talent, your passion, your dedication to gaming, your memes, your brilliance – these have made Twitch what it is today.

Every day, we strive to live up to the standard set by you, the community. We want to create the very best place to share your gaming and life online, and that mission continues to guide us. Together with you, we’ve found new ways of connecting developers and publishers with their fans. We’ve created a whole new kind of career that lets people make a living sharing their love of games. We’ve brought billions of hours of entertainment, laughter, joy and the occasional ragequit. I think we can all call that a pretty good start.

Today, I’m pleased to announce we’ve been acquired by Amazon. We chose Amazon because they believe in our community, they share our values and long-term vision, and they want to help us get there faster. We’re keeping most everything the same: our office, our employees, our brand, and most importantly our independence. But with Amazon’s support we’ll have the resources to bring you an even better Twitch.

I personally want to thank you, each and every member of the Twitch community, for what you’ve created. Thank you for putting your faith in us. Thank you for sticking with us through growing pains and stumbles. Thank you for bringing your very best to us and sharing it with the world. Thank you, from a group of gamers who never dreamed they’d get to help shape the face of the industry that we love so much.

It’s dangerous to go alone. On behalf of myself and everyone else at Twitch, thank you for coming with us.

Emmett Shear, CEO
Disclosure: Jeff Bezos is an investor in Business Insider through his personal investment company Bezos Expeditions.
SEE ALSO: Here’s Why Amazon Just Paid Nearly $1 Billion For A Site Where You Can Watch People Play Video Games
Read more: http://www.businessinsider.com/amazon-buys-twitch-2014-8
Amazon Is Turning Into Google
AUG. 25, 2014, 7:15 PM
Amazon CEO Jeff Bezos.
Tell me which company this sounds like:
A company that…
• Has its own mobile operating system for tablets and smartphones.
• Has its own app store.
• Sells digital music, books, movies, and TV shows.
• Will soon have an online ad network.
• Created a way to accept payments with a smartphone.
• Owns the servers that act as the backbone for several major apps and startups and even parts of the CIA.
• Is experimenting with drones.
It’s not Google. It’s Amazon.
But just like Google has expanded beyond search into everything from finding ways to cheat death to making cars that can drive themselves, Amazon has been increasingly expanding beyond its core e-commerce business.
And in recent months, that only seems to be speeding up.
Amazon’s $970 million purchase of Twitch, a site that lets you watch people play video games via a live stream, is its latest push into original video content and a move to transform itself, in part, into a media company. It’s a longer-term bet that the trend of watching video online rather than on cable will continue.
Add that on top of the stuff listed above, and Amazon suddenly sounds less like an online store for buying books and gifts and more like a company trying to insert itself into everything you do online. It sounds very Google-y.
There’s experimentation with same-day delivery, grocery delivery, and point of sale systems for brick-and-mortar retailers. Those are all things Google is working on or has at least experimented with.
The only difference, of course, is that Google is wildly profitable while Amazon continues to post losses each quarter. (Next quarter could be a doozy. Amazon said to expect at least a $410 million operating loss.)
But it’s also a changing company, one that’s no longer simply “the everything store” but an entity creeping its way into everything we do, from shopping to playing games to running our small businesses.
Disclosure: Jeff Bezos is an investor in Business Insider through his personal investment company Bezos Expeditions.
SEE ALSO: 9 impressive stats about Twitch
Read more: http://www.businessinsider.com/amazon-is-google-2014-8
Bezos Confirms AmazonFresh Expansion Plans, Says Drones Are for Real
April 10, 2014, 9:15 AM PDT
By Jason Del Rey
Grocery delivery fans outside of Seattle and California, rejoice: Amazon plans to expand its AmazonFresh offering beyond its current three markets, CEO Jeff Bezos confirmed in his 2013 letter to shareholders published today.
“We’ll continue our methodical approach — measuring and refining AmazonFresh — with the goal of bringing this incredible service to more cities over time,” he said in the letter.
For five years, Fresh was only available in Seattle, before the company launched the program last year in Los Angeles and, six months later, San Francisco. Several reports over the past year have said Amazon plans to expand the delivery service into 10 to 20 more markets this year, but this may be the first time Bezos has publicly acknowledged the expansion plans.
Through Fresh, shoppers can order deliveries of groceries and hundreds of thousands of other items, from TVs to toys, that arrive either that same day or the following morning. Industry observers believe that part of Amazon’s reason for delivering groceries is that it will create enough sales volume and delivery demand to justify delivering all other Amazon merchandise within one day.
Another highlight from the letter: Drones.
“The Prime Air team is already flight testing our 5th and 6th generation aerial vehicles,” Bezos wrote, “and we are in the design phase on generations 7 and 8.”
Is it possible that drone delivery is still a marketing stunt? Sure. If so, Bezos is sticking to the script.
Headed your way: AmazonFresh widens range of grocery deliveries
Doorstep delivery: Our reporter gives AmazonFresh grocery service a whirl
BY NANCY LUNA / STAFF WRITER
Published: June 20, 2014 Updated: June 23, 2014 11:44 a.m.
STEVEN GEORGES, CONTRIBUTING PHOTOGRAPHER
VONS VS. AMAZONFRESH
Here are a few price comparisons based on items found online this week:
1 gallon of Alta Dena fat-free milk: $5.49 Vons vs. $4.99 AmazonFresh
59-ounce jug of Simply Lemonade: $2.50 Vons vs. $2 AmazonFresh
5 ounces organic baby romaine: $3.90 Vons O Organics private-label brand vs. $3.39 Earthbound brand at AmazonFresh
24-pack of Aquafina 16.9-ounce bottled water: $5.49 Vons vs. $4.29 AmazonFresh
Tide Free and Gentle (100 ounces): $11.99 Vons vs. $11.97 AmazonFresh
20-pack Coke Zero: $8.29 Vons vs. $6.99 AmazonFresh
Winder Farms: A Utah-based delivery service in California, Utah and Nevada. Delivers roughly 300 farm fresh items. Delivery in Orange County and parts of Los Angeles County. winderfarms.com
Good Eggs: Delivery of locally grown, sustainable goods from stores or farmers’ markets. Los Angeles County only. goodeggs.com/about/mission
Vons: Traditional market with home delivery in Orange County. shop.safeway.com
Instacart: Delivers from local stores like Whole Foods, Ralphs and Bristol Farms. Limited to a few ZIP codes in Los Angeles County. instacart.com/store/whole-foods
Deliveer: Personal shoppers deliver groceries from Whole Foods Market, Trader Joe’s, Vons and Costco in Pasadena, San Marino, South Pasadena and Altadena. Expansion to other parts of Los Angeles County coming soon. deliveer.com
As Amazon’s fledgling grocery service in Southern California widens its reach, some boutique food suppliers say the experiment has proven to be a boon for business.
Huntington Meats saw sales go from single-digit growth to double digits after the first month of partnering with AmazonFresh, a doorstep food service that launched last summer in Los Angeles.
The 30-year-old butcher shop, known for its top-grade meats and wild game, partnered with AmazonFresh last summer. Co-owner Jim Cascone said his meat market sells the “whole store,” or 175 items, through the online site, from free-range chickens to ground elk.
Demand for his specialty goods continues to soar and was boosted in recent weeks when the company expanded its service to most of Orange County.
“We’re very pleased,” said Cascone. “We’re definitely getting a lot of business out of it.”
For other Amazon partners, the impact has been much less dramatic. Greg Daniels, executive chef-partner at Haven Collective, has been working with AmazonFresh the last six months. The company’s Provisions Market bottle and cheese shop in Old Towne Orange offers Amazon shoppers specialty cheeses, cured meats, chocolate and a wide selection of craft beer.
“Cheese is popular, and beer not too much yet,” said Daniels. “It’s definitely brand exposure more than money.”
Amazon’s doorstep service initially was limited to Los Angeles, four cities in Orange County and parts of Long Beach. Shoppers choose from a wide selection – some 500,000 items – of merchandise, groceries and specialty foods.
In recent weeks, AmazonFresh has expanded to Orange, Tustin, Garden Grove, Aliso Viejo, Santa Ana, Laguna Niguel and Mission Viejo in addition to Irvine, Anaheim, Huntington Beach and Newport Beach. All of Long Beach is also eligible for delivery, a company spokesperson said.
The expansion comes as Amazon sees positive results in the greater Los Angeles area.
“While I can’t share specific numbers, we are very pleased with the response from our customers so far,” AmazonFresh said in a statement.
AmazonFresh’s grocery delivery expansion comes as doorstep food services experience a resurgence after failing years ago.
In 2013, revenue from online grocery sales reached $6.5 billion, according to market research firm IBISWorld. By 2018, sales are projected to reach $10.1 billion as time-strapped consumers seek convenient ways to shop through mobile devices and home computers, IBISWorld said.
AmazonFresh entered Los Angeles last summer after testing its grocery service near its home turf in Seattle. The service is also available in San Francisco and Berkeley.
Other food delivery options in the region include Winder Farms, Good Eggs, Deliveer, Instacart and Vons.
AmazonFresh rolls into San Diego
By Katherine P. Harvey | 2:08 p.m. July 29, 2014, updated 5:36 p.m.
This entry was posted in Business Models, E-Commerce and tagged Amazon on August 20, 2014.
==============AMAZING AMAZON POST AUGUST, 2014 ========
Continually updated notes as I try to keep up with Jeff Bezos (impossible)
As of 10/23/2013
JCR was working with a partner at a major consulting firm on CGF business. In a casual moment, they got talking about e-commerce, and the subject of Amazon came up. The partner shared that they had just completed a major piece about Amazon, using entirely public sources, for a retailer client. He graciously offered to share the work, and did not label the work confidential. JCR reviewed it and thought that the sources and insights were outstanding – but he thought it best not to quote or share the document directly. So these facts are largely from that analysis and that analysis’ public sources (shown at the end of this paper). They are extended by other facts and articles discovered by JCR.
The purpose of this working paper is to lay out a case that Amazon deserves high-priority consideration by virtually all Fortune 1000 companies operating in a retail or manufacturer environment.
From 2015-2018, there is a high likelihood that:
1. E-commerce will be mainstream. It will become the preferred method of shopping for many consumers, and it will enjoy ubiquity and mainstream use by the global middle class like cell phones do today;
2. Amazon will lead e-commerce. Amazon will be – far and away – the leader in the e-commerce retailing space;
3. E-commerce will impact food and beverage. Food and beverages will grow as a proportion of total e-commerce sales;
4. Amazon will aggressively enter food and beverage retail globally. They will establish themselves in key markets as one of the top 10 customers of most manufacturers;
5. Amazon will “perfect” home delivery. They will crack the “last mile” of retail. They will “perfect” delivering direct to the home or to a designated agent of the home, thereby making obsolete traditional retailers who cannot do this;
6. Amazon will “perfect” their business model. Amazon will dominate best practice in logistics, fulfillment, and customer satisfaction over this planning period, in a manner so effective that others who fail to keep up will be left behind by 2016;
7. Amazon will disrupt most business models. Amazon can potentially disrupt fundamental assumptions about store delivery, merchandising, and the viability of home delivery.
Experts project that:
– E-Commerce will exceed $1,400 billion revenue by 2020
– It will be ubiquitous, accepted by virtually all (like cell phones today)
– It will be the primary source of purchasing by consumers, who will be intensively engaged
– It will extend from its current 33 retail categories into all retail categories
1. E-Commerce Will Be Mainstream
2. Amazon Will Lead E-Commerce
Amazon will be – far and away – the leader in the e-commerce retailing space. Amazon revenue will continue to grow fast: in 2013 it was $75 billion, up from $61, $48, $34, $25, and $19 billion in 2012, 2011, 2010, 2009, and 2008 respectively. Analysts predict revenue will reach $90 billion in 2014. By 2015, Amazon is highly likely to have revenues exceeding $100 billion annually. Conservatively, Amazon revenue is likely to grow 20% per year from 2015 to 2020, reaching at least $250 billion in 2020 (one third of e-commerce and more than half the size of Wal-Mart today).
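The projection is at least internally consistent: 20% annual growth compounds roughly $100 billion into roughly $250 billion over five years, as a one-line check shows.

```python
# Sanity check on the 2015-2020 projection above ($ billions).
print(100 * 1.20 ** 5)  # -> 248.832, i.e. ~ $250B by 2020
```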
Exhibit 2: Projected Amazon Revenue Growth (2008-2024)
Note: 2008-2012 are actual revenues.
Amazon will begin to directly threaten Walmart over this 2015-2020 planning cycle
Today, Amazon revenue ($61 billion) is small compared to brick & mortar Wal-Mart, which closed 2012 with revenue of $444 billion. But it is nonetheless remarkable for an online retailer. In 2000, the entire universe of e-commerce was predicted by Forrester to be less than $20 billion, and yet today Amazon alone sells $61 billion and is closing in on $100 billion.
In contrast, it appears that Wal-Mart online sales will be less than $10 billion. $61 billion versus $10 billion: it seems reasonably clear who is going to win in e-commerce. Recent reports, though, make it very clear that Wal-Mart has woken up to the threat and is responding, so no one can know for sure what the outcome of this battle will be.
Not bad for a company that opened for business as a bookseller less than 20 years ago – in 1995.
3. E-Commerce Will Impact Food And Beverage
Food & Beverages to Grow as a Proportion of Total E-Commerce Sales
Amazon will emerge as a force in food and beverage retail. Some have concluded that online grocery was approximately $36B in 2011, would nearly double to $57B by 2015, and would nearly triple from 2011 to $101B in 2020. Furthermore, it is predicted that beverages will grow from $6B in 2011, to $8B in 2015, and to $17B by 2020 (see Exhibit 1).
Amazon will establish itself in key markets as one of the top 10 customers of most manufacturers. The food and beverage category will grow in importance online, and Amazon is expected to own 30% of that market.
Amazon will crack the “last mile” of retail. They will “perfect” delivering direct to the home or to a designated agent of the home, thereby marginalizing traditional retailers who cannot do this. Amazon is currently testing and honing their approach through AmazonFresh.
AmazonFresh is a new service that is currently available in Seattle and Los Angeles in select zip codes. The service offers same-day and early morning delivery on orders of over $35 of more than 500,000 Amazon items, including fresh grocery and local products. The annual “membership” costs $299 with unlimited free delivery and is offered as an additional level of Amazon Prime.
4. Amazon Will Aggressively Enter Food and Beverage Retail Globally
5. Amazon Will “Perfect” Home Delivery
Exhibit 3: AmazonFresh Sortable Shopping
6. Amazon Will “Perfect” Their Business Model
Amazon will dominate best practice in logistics, fulfillment, and customer satisfaction over this planning period, in a manner so effective that others who fail to keep up will be left behind by 2016. One example of their current testing in fulfillment, logistics, and delivery is the launch of Amazon Locker.
Now Amazon has taken a small step toward eliminating the UPS wait with a service inspired less by the internet and more by the Port Authority. Amazon Locker allows you to have your packages sent to the equivalent of single-use P.O. boxes housed in 24-hour convenience stores, grocery stores and drug stores. Amazon sends you an email with a pickup code, which you enter on a touchscreen to open the door of the locker containing your package. You have three days from the delivery date to pick it up.
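Here is a minimal sketch of the locker lifecycle as described: assign a free door, issue a one-time pickup code, and enforce the three-day window. The class layout, code format, and expiry behavior are assumptions, not Amazon’s implementation.

```python
import secrets
from datetime import datetime, timedelta

PICKUP_WINDOW = timedelta(days=3)  # per the description above

class LockerBank:
    def __init__(self, doors):
        self.free = set(doors)
        self.held = {}  # pickup code -> (door, delivered_at)

    def deliver(self, now):
        door = self.free.pop()
        code = secrets.token_hex(3).upper()  # e.g. '9F2C1A', emailed to customer
        self.held[code] = (door, now)
        return code

    def pickup(self, code, now):
        door, delivered_at = self.held.pop(code)
        self.free.add(door)  # the door is freed either way
        if now - delivered_at > PICKUP_WINDOW:
            raise TimeoutError("Pickup window expired; package returned")
        return door

bank = LockerBank({"A1", "A2"})
code = bank.deliver(now=datetime(2013, 6, 1, 9, 0))
print(bank.pickup(code, now=datetime(2013, 6, 2, 18, 0)))  # opens 'A1' or 'A2'
```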
Additionally, Amazon has started sharing warehouse space with some of its key suppliers. Amazon has been sharing warehouse space with P&G for 3 years and is now in at least 7 P&G distribution centers. Amazon also has arrangements in place or is working out deals with companies such as Kimberly-Clark, Seventh Generation, and Georgia-Pacific.
7. Amazon Will Disrupt Many Business Models
Amazon can potentially disrupt fundamental assumptions about direct store delivery, merchandising, bottlers, OBPPC, and the viability of home delivery.
================ 2013 AMAZON FACT SHEET ========
Amazon Fact Sheet
Last Updated in Late 2013
Corporate mission: We seek to be Earth’s most customer-centric company for four primary customer sets: consumers, sellers, enterprises, and content creators
Headquarters: Seattle, WA
When Founded: The company was incorporated in 1994 as Cadabra and went online as Amazon.com in 1995
CEO: Jeffrey Bezos
Number of Employees: 97,000
Number of Retail Categories: 33 categories
Customers: 200 million active customers; 132 million unique visitors each month
Fulfillment Centers: 89 worldwide, 54 million square feet of total space
Amazon has 89 fulfillment centers worldwide
Fulfillment centers are located in 8 countries including USA, Canada, France, Germany, Italy, China, Japan, and the UK
Amazon has separate retail sites for USA, Canada, France, Germany, Italy, China, Japan, and UK plus Brazil, India, Mexico, and Spain
The US fulfillment centers are located in: Arizona, California, Delaware, Indiana, Kansas, Kentucky, Nevada, New Hampshire, Pennsylvania, South Carolina, Tennessee, Texas, Virginia, and Washington
Growth: 1995: 400 square feet; 2010: 50 fulfillment centers, 26M square feet
Revenue was $61, $48, $34, $25, and $19B in 2012, 2011, 2010, 2009, and 2008 respectively
o Annual revenue in 2011 was 27% more than Google’s
Revenue to reach $74.6B in 2013 (though $0 net income), 24.6% CAGR from 2011
Amazon’s market share represents one third of U.S. e-commerce sales
Amazon on pace to reach over $125B globally by 2016; 24.5% CAGR from 2010
North America: $65.4B by 2016; 23.2% CAGR from 2010
International: $61.3B by 2016; 26.3% CAGR from 2010
Amazon has had one of the fastest growth rates in the internet’s history
After 5 years eBay reached $0.4B, Google reached $1.5B, and Amazon reached $2.8B
[Chart: Amazon Revenue vs. Net Income ($M), 2008-2012]
CEO and founder Jeffrey Bezos and an eight-member board of directors
CEO oversees the Chief Financial Officer (CFO), the Chief Technology Officer, and the following 8 departments:
o Business Development
o E-Commerce Platform
o International Retail
o North America Retail
o Web Services
o Digital Media
o Legal & Secretary
o Kindle
CFO oversees the Real Estate and Control department
International Retail oversees three separate departments: China, Europe, and India
North America Retail oversees the following five departments: Seller Services, Operations, Toys, Sports & Home Improvement, Amazon Publishing, and Music & Video
Web Services department oversees Amazon S3 and Database Services
Other departments include Product Development & Studios, Europe Operations, Global Advertising Sales, Computing Services, and Global Customer Fulfillment
1998: PlanetAll, Junglee, Bookpages.co.uk
1999: Internet Movie Database (IMDb), Alexa Internet, Accept.com, Exchange.com, Pets.com, HomeGrocer, Back-to-Basics Toys, drugstore.com
2005: BookSurge, Mobipocket.com, CreateSpace.com
2007: DPReview.com, Brilliance Audio
2008: Audible.com, Fabric.com, Box Office Mojo, AbeBooks, Shelfari, Reflexive Entertainment
2009: Zappos, Lexcycle, SnapTell, Stanza
2010: Touchco, Woot, Quidsi, BuyVIP, Amie Street
2011: LoveFilm, The Book Depository, Pushbutton, Yap
2012: Kiva Systems, TeachStreet, Evi
2013: IVONA Software, Goodreads, Liquavista
Corporate Timeline (1995-2013)
1995
o Began selling books online
o Two small fulfillment centers – Seattle and Delaware
1999
o Acquired Pets.com for $58 million
o Acquired HomeGrocer for $42.5 million
o Acquired Back-to-Basics Toys for $135 million
o Acquired drugstore.com for $44 million
o (Since then have acquired hardware, car, electronics, sporting goods, luxury, wine, etc.)
o Became the online engine behind Borders.com
o Broadened beyond books to CDs and DVDs
2005
o Launched Amazon Prime
2007
o Launched Kindle (developed by Lab126, their internal appliance R&D shop)
o Launched AmazonFresh in Seattle
2008
o $19 billion revenue
2009
o $24.5 billion revenue (+28%)
o $0.6 billion net income
o Acquired Zappos for $920 million
2010
o Acquired Quidsi for $500 million (owns Diapers.com)
2011
o $48 billion revenue (+41%)
2012
o 164 million active customers
o $61 billion revenue (+27%)
o $0.6 billion net income
o Launched AmazonSupply (with 500,000 products, in 14 categories, B2B target)
o (Instant new major competitor for Blockbuster and Netflix)
2013
o $75 billion revenue PROJECTED (+22%)
o $0 net income
o 200 million active customers
o (132 million unique visitors every month, compared to eBay 60MM; Wal-Mart 63MM; Apple 18MM)
In 15 years, Amazon went from one category (books) to 33 (cloud services, clothing, baby products, sports, electronics, music, video games, books, film, audio, beauty products, tools & home improvement, office products etc.)
Has introduced two new product categories every year for almost a decade

Strategies
Build, buy, partner
o Build: new categories (e.g., MYHABIT)
o Buy: well-established competitors (e.g., Quidsi)
o Partner: offers tech services / e-commerce expertise to third parties (e.g., cobranded website with Toys “R” Us)

Customer-first solutions
o Bottom-up approach: customer needs drive everything
o Frugality: Amazon continually seeking to do things cost-efficiently
o Innovation: Amazon always seeking simpler solutions
Data & human driven customer service
o Every employee, even the CEO, spends two days every two years on the service desk to answer calls and help customers
o 90% of customer service by email rather than by telephone
o Amazon has developed its own software to manage email centers
o 1-Click ordering
o Amazon Prime – $79/year: instant streaming of movies & TV shows, instant access to thousands of Kindle books, free two-day shipping
o Amazon Locker – lockers installed in grocery, convenience, and drugstore outlets that can accept packages for customers for later pick-up
o Moving towards same-day delivery
▪ Building warehouses close to city centers – risky because Amazon will pay states taxes it did not pay before, but it will get closer to same-day delivery
▪ Warehouses currently being built in California, Indiana, New Jersey, Tennessee, South Carolina, Virginia
o AmazonSupply – free two-day shipping for orders over $50

Low price
o Amazon significantly cheaper than competitors

Digital optimization of supply chain
o Amazon automatically chooses the cheapest origin for the customer’s order in real time, and re-optimizes as new orders come in (see the sketch below)
o Fast-moving items are stored in all the fulfillment centers
o Hard-to-find items are kept in small quantities in one or two fulfillment centers
o Easily movable items (e.g., media) are stored in highly automated facilities
o Extensive use of tracing
o Drop shipping: when applicable, Amazon provides packaging and asks the supplier to ship the product itself
o Third-party sellers follow the same principle, which increases margins
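A hedged sketch of the cheapest-origin rule in the list above: score every fulfillment center that stocks the item and ship from the cheapest. The cost model here (a flat handling fee plus per-mile linehaul) is an assumed stand-in for whatever Amazon actually optimizes.

```python
def cheapest_origin(sku, qty, centers):
    """Return (cost, center name) for the cheapest in-stock origin."""
    options = []
    for fc in centers:
        if fc["stock"].get(sku, 0) >= qty:
            cost = fc["handling_fee"] + fc["miles_to_customer"] * fc["rate_per_mile"]
            options.append((cost, fc["name"]))
    if not options:
        return None  # would trigger a split shipment or drop-ship instead
    return min(options)

centers = [
    {"name": "PHX", "stock": {"X": 5}, "handling_fee": 1.0,
     "miles_to_customer": 300, "rate_per_mile": 0.01},
    {"name": "EWR", "stock": {"X": 2}, "handling_fee": 1.0,
     "miles_to_customer": 2400, "rate_per_mile": 0.01},
]
print(cheapest_origin("X", 1, centers))  # -> (4.0, 'PHX')
```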
Selling at a loss
o Kindle Fire costs around $210 to produce, sold at $199
o But over the first 6 months of use, Amazon makes $136 of margin on average on every Kindle Fire by selling digital content (see the arithmetic below)
o Amazon is developing international partnerships with retailers (e.g., Darty in France) to sell more Kindles
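The loss-leader arithmetic, made explicit with the figures above:

```python
# Kindle Fire economics per the figures quoted above.
hardware_margin = 199 - 210         # -$11: each device sold below cost
content_margin_6mo = 136            # digital-content margin, first 6 months
print(hardware_margin + content_margin_6mo)  # -> 125, ~$125 net per device
```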
Wal-Mart had 62.5M unique visitors in August 2013, compared with Amazon’s 133M
Wal-Mart copycatting some of Amazon’s most successful tactics:
o Trying out lockers, allowing shoppers to order items online and pick them up in stores
o Dabbling in same-day delivery (testing in four cities) and even going a step further than Amazon by attempting to crowd-source package drop-off among customers
o Investing in web technology to improve both their site’s appearance and ease of navigation
Comparisons to Wal-Mart
Unlike Wal-Mart, which failed to enter the German and South Korean markets, Amazon has expanded internationally with success
Amazon to reach $74.6B in 2013, 24.6% CAGR from 2011
o Wal-Mart’s revenue will be $500B in 2013, but its revenue in e-commerce by 2014 will reach just $10B
E-commerce is growing at 11% a year, but sales for consumer packaged goods online – food, groceries, everyday items – are growing at closer to 20%; this is the area Wal-Mart will go after
o Amazon already one step ahead with AmazonFresh
How Amazon Controls Ecommerce (Slides)
This post is about two important articles related to primary-care best practice: one by Atul Gawande called “Big Med” and the other from Harvard Medical School about physician burnout.
As usual, Atul tells stories. His stories begin with his positive experience at the Cheesecake Factory and with his mother’s knee replacement surgery.
Article by Atul Gawande Big Med and the Cheesecake Factory
The article explores the potential for transferring some of the operational excellence of the Cheesecake Factory to aspects of health care.
He finds it tempting to look for 95% standardization and 5% customization.
He sees lessons in rolling out innovations through test kitchens and training that includes how to train others.
He sees heroes in doctors who push to articulate a standard of care, technology, equipment, or pharmaceuticals.
CREDIT: New Yorker Article by Atul Gawande “Big Med”
Annals of Health Care
August 13, 2012 Issue
Restaurant chains have managed to combine quality control, cost control, and innovation. Can health care?
By Atul Gawande
Medicine has long resisted the productivity revolutions that transformed other industries. But the new chains aim to change this. (Illustration by Harry Campbell)
It was Saturday night, and I was at the local Cheesecake Factory with my two teen-age daughters and three of their friends. You may know the chain: a hundred and sixty restaurants with a catalogue-like menu that, when I did a count, listed three hundred and eight dinner items (including the forty-nine on the “Skinnylicious” menu), plus a hundred and twenty-four choices of beverage. It’s a linen-napkin-and-tablecloth sort of place, but with something for everyone. There’s wine and wasabi-crusted ahi tuna, but there’s also buffalo wings and Bud Light. The kids ordered mostly comfort food—pot stickers, mini crab cakes, teriyaki chicken, Hawaiian pizza, pasta carbonara. I got a beet salad with goat cheese, white-bean hummus and warm flatbread, and the miso salmon.
The place is huge, but it’s invariably packed, and you can see why. The typical entrée is under fifteen dollars. The décor is fancy, in an accessible, Disney-cruise-ship sort of way: faux Egyptian columns, earth-tone murals, vaulted ceilings. The waiters are efficient and friendly. They wear all white (crisp white oxford shirt, pants, apron, sneakers) and try to make you feel as if it were a special night out. As for the food—can I say this without losing forever my chance of getting a reservation at Per Se?—it was delicious.
The chain serves more than eighty million people per year. I pictured semi-frozen bags of beet salad shipped from Mexico, buckets of precooked pasta and production-line hummus, fish from a box. And yet nothing smacked of mass production. My beets were crisp and fresh, the hummus creamy, the salmon like butter in my mouth. No doubt everything we ordered was sweeter, fattier, and bigger than it had to be. But the Cheesecake Factory knows its customers. The whole table was happy (with the possible exception of Ethan, aged sixteen, who picked the onions out of his Hawaiian pizza).
I wondered how they pulled it off. I asked one of the Cheesecake Factory line cooks how much of the food was premade. He told me that everything’s pretty much made from scratch—except the cheesecake, which actually is from a cheesecake factory, in Calabasas, California.
I’d come from the hospital that day. In medicine, too, we are trying to deliver a range of services to millions of people at a reasonable cost and with a consistent level of quality. Unlike the Cheesecake Factory, we haven’t figured out how. Our costs are soaring, the service is typically mediocre, and the quality is unreliable. Every clinician has his or her own way of doing things, and the rates of failure and complication (not to mention the costs) for a given service routinely vary by a factor of two or three, even within the same hospital.
It’s easy to mock places like the Cheesecake Factory—restaurants that have brought chain production to complicated sit-down meals. But the “casual dining sector,” as it is known, plays a central role in the ecosystem of eating, providing three-course, fork-and-knife restaurant meals that most people across the country couldn’t previously find or afford. The ideas start out in élite, upscale restaurants in major cities. You could think of them as research restaurants, akin to research hospitals. Some of their enthusiasms—miso salmon, Chianti-braised short ribs, flourless chocolate espresso cake—spread to other high-end restaurants. Then the casual-dining chains reëngineer them for affordable delivery to millions. Does health care need something like this?
Big chains thrive because they provide goods and services of greater variety, better quality, and lower cost than would otherwise be available. Size is the key. It gives them buying power, lets them centralize common functions, and allows them to adopt and diffuse innovations faster than they could if they were a bunch of small, independent operations. Such advantages have made Walmart the most successful retailer on earth. Pizza Hut alone runs one in eight pizza restaurants in the country. The Cheesecake Factory’s major competitor, Darden, owns Olive Garden, LongHorn Steakhouse, Red Lobster, and the Capital Grille; it has more than two thousand restaurants across the country and employs more than a hundred and eighty thousand people. We can bristle at the idea of chains and mass production, with their homogeneity, predictability, and constant genuflection to the value-for-money god. Then you spend a bad night in a “quaint” “one of a kind” bed-and-breakfast that turns out to have a manic, halitoxic innkeeper who can’t keep the hot water running, and it’s right back to the Hyatt.
Medicine, though, had held out against the trend. Physicians were always predominantly self-employed, working alone or in small private-practice groups. American hospitals tended to be community-based. But that’s changing. Hospitals and clinics have been forming into large conglomerates. And physicians—facing escalating demands to lower costs, adopt expensive information technology, and account for performance—have been flocking to join them. According to the Bureau of Labor Statistics, only a quarter of doctors are self-employed—an extraordinary turnabout from a decade ago, when a majority were independent. They’ve decided to become employees, and health systems have become chains.
I’m no exception. I am an employee of an academic, nonprofit health system called Partners HealthCare, which owns the Brigham and Women’s Hospital and the Massachusetts General Hospital, along with seven other hospitals, and is affiliated with dozens of clinics around eastern Massachusetts. Partners has sixty thousand employees, including six thousand doctors. Our competitors include CareGroup, a system of five regional hospitals, and a new for-profit chain called the Steward Health Care System.
Steward was launched in late 2010, when Cerberus—the multibillion-dollar private-investment firm—bought a group of six failing Catholic hospitals in the Boston area for nine hundred million dollars. Many people were shocked that the Catholic Church would allow a corporate takeover of its charity hospitals. But the hospitals, some of which were more than a century old, had been losing money and patients, and Cerberus is one of those firms which specialize in turning around distressed businesses.
Cerberus has owned controlling stakes in Chrysler and GMAC Financing and currently has stakes in Albertsons grocery stores, one of Austria’s largest retail bank chains, and the Freedom Group, which it built into one of the biggest gun-and-ammunition manufacturers in the world. When it looked at the Catholic hospitals, it saw another opportunity to create profit through size and efficiency. In the past year, Steward bought four more Massachusetts hospitals and made an offer to buy six financially troubled hospitals in south Florida. It’s trying to create what some have called the Southwest Airlines of health care—a network of high-quality hospitals that would appeal to a more cost-conscious public.
Steward’s aggressive growth has made local doctors like me nervous. But many health systems, for-profit and not-for-profit, share its goal: large-scale, production-line medicine. The way medical care is organized is changing—because the way we pay for it is changing.
Historically, doctors have been paid for services, not results. In the eighteenth century B.C., Hammurabi’s code instructed that a surgeon be paid ten shekels of silver every time he performed a procedure for a patrician—opening an abscess or treating a cataract with his bronze lancet. It also instructed that if the patient should die or lose an eye, the surgeon’s hands be cut off. Apparently, the Mesopotamian surgeons’ lobby got this results clause dropped. Since then, we’ve generally been paid for what we do, whatever happens. The consequence is the system we have, with plenty of individual transactions—procedures, tests, specialist consultations—and uncertain attention to how the patient ultimately fares.
Health-care reforms—public and private—have sought to reshape that system. This year, my employer’s new contracts with Medicare, BlueCross BlueShield, and others link financial reward to clinical performance. The more the hospital exceeds its cost-reduction and quality-improvement targets, the more money it can keep. If it misses the targets, it will lose tens of millions of dollars. This is a radical shift. Until now, hospitals and medical groups have mainly had a landlord-tenant relationship with doctors. They offered us space and facilities, but what we tenants did behind closed doors was our business. Now it’s their business, too.
The theory the country is about to test is that chains will make us better and more efficient. The question is how. To most of us who work in health care, throwing a bunch of administrators and accountants into the mix seems unlikely to help. Good medicine can’t be reduced to a recipe.
Then again, neither can good food: every dish involves attention to detail and individual adjustments that require human judgment. Yet some chains manage to achieve good, consistent results thousands of times a day across the entire country. I decided to get inside one and find out how they did it.
Dave Luz is the regional manager for the eight Cheesecake Factories in the Boston area. He oversees operations that bring in eighty million dollars in yearly revenue, about as much as a medium-sized hospital. Luz (rhymes with “fuzz”) is forty-seven, and had started out in his twenties waiting tables at a Cheesecake Factory restaurant in Los Angeles. He was writing screenplays, but couldn’t make a living at it. When he and his wife hit thirty and had their second child, they came back east to Boston to be closer to family. He decided to stick with the Cheesecake Factory. Luz rose steadily, and made a nice living. “I wanted to have some business skills,” he said—he started a film-production company on the side—“and there was no other place I knew where you could go in, know nothing, and learn top to bottom how to run a business.”
To show me how a Cheesecake Factory works, he took me into the kitchen of his busiest restaurant, at Prudential Center, a shopping and convention hub. The kitchen design is the same in every restaurant, he explained. It’s laid out like a manufacturing facility, in which raw materials in the back of the plant come together as a finished product that rolls out the front. Along the back wall are the walk-in refrigerators and prep stations, where half a dozen people stood chopping and stirring and mixing. The next zone is where the cooking gets done—two parallel lines of countertop, forty-some feet long and just three shoe-lengths apart, with fifteen people pivoting in place between the stovetops and grills on the hot side and the neatly laid-out bins of fixings (sauces, garnishes, seasonings, and the like) on the cold side. The prep staff stock the pullout drawers beneath the counters with slabs of marinated meat and fish, serving-size baggies of pasta and crabmeat, steaming bowls of brown rice and mashed potatoes. Basically, the prep crew handles the parts, and the cooks do the assembly.
Computer monitors positioned head-high every few feet flashed the orders for a given station. Luz showed me the touch-screen tabs for the recipe for each order and a photo showing the proper presentation. The recipe has the ingredients on the left part of the screen and the steps on the right. A timer counts down to a target time for completion. The background turns from green to yellow as the order nears the target time and to red when it has exceeded it.
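The countdown-and-color behavior described above is simple enough to sketch in code. A minimal illustration in Python, assuming invented names (`Order`, `status_color`) and an 80-per-cent warning threshold, which the article does not specify; the restaurant's actual system is proprietary and surely richer.

```python
import time
from dataclasses import dataclass

@dataclass
class Order:
    dish: str
    started_at: float      # epoch seconds when the cook opened the order
    target_seconds: float  # e.g., 600 for the ten-minute hibachi-steak timer

def status_color(order: Order, now: float, warn_fraction: float = 0.8) -> str:
    """Background color for an order on the kitchen screen: green with
    comfortable time left, yellow as the order nears its target time,
    red once the target is exceeded. The 80% warning threshold is an
    assumption for illustration, not a documented value."""
    elapsed = now - order.started_at
    if elapsed > order.target_seconds:
        return "red"
    if elapsed > warn_fraction * order.target_seconds:
        return "yellow"
    return "green"

steak = Order("hibachi steak", started_at=time.time() - 540, target_seconds=600)
print(status_color(steak, now=time.time()))  # nine minutes into a ten-minute order: "yellow"
```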
I watched Mauricio Gaviria at the broiler station as the lunch crowd began coming in. Mauricio was twenty-nine years old and had worked there eight years. He’d got his start doing simple prep—chopping vegetables—and worked his way up to fry cook, the pasta station, and now the sauté and broiler stations. He bounced in place waiting for the pace to pick up. An order for a “hibachi” steak popped up. He tapped the screen to open the order: medium-rare, no special requests. A ten-minute timer began. He tonged a fat hanger steak soaking in teriyaki sauce onto the broiler and started a nest of sliced onions cooking beside it. While the meat was grilling, other orders arrived: a Kobe burger, a blue-cheese B.L.T. burger, three “old-fashioned” burgers, five veggie burgers, a “farmhouse” burger, and two Thai chicken wraps. Tap, tap, tap. He got each of them grilling.
I brought up the hibachi-steak recipe on the screen. There were instructions to season the steak, sauté the onions, grill some mushrooms, slice the meat, place it on the bed of onions, pile the mushrooms on top, garnish with parsley and sesame seeds, heap a stack of asparagus tempura next to it, shape a tower of mashed potatoes alongside, drop a pat of wasabi butter on top, and serve.
Two things struck me. First, the instructions were precise about the ingredients and the objectives (the steak slices were to be a quarter of an inch thick, the presentation just so), but not about how to get there. The cook has to decide how much to salt and baste, how to sequence the onions and mushrooms and meat so they’re done at the same time, how to swivel from grill to countertop and back, sprinkling a pinch of salt here, flipping a burger there, sending word to the fry cook for the asparagus tempura, all the while keeping an eye on the steak. In producing complicated food, there might be recipes, but there was also a substantial amount of what’s called “tacit knowledge”—knowledge that has not been reduced to instructions.
Second, Mauricio never looked at the instructions anyway. By the time I’d finished reading the steak recipe, he was done with the dish and had plated half a dozen others. “Do you use this recipe screen?” I asked.
“No. I have the recipes right here,” he said, pointing to his baseball-capped head.
He put the steak dish under warming lights, and tapped the screen to signal the servers for pickup. But before the dish was taken away, the kitchen manager stopped to look, and the system started to become clearer. He pulled a clean fork out and poked at the steak. Then he called to Mauricio and the two other cooks manning the grill station.
“Gentlemen,” he said, “this steak is perfect.” It was juicy and pink in the center, he said. “The grill marks are excellent.” The sesame seeds and garnish were ample without being excessive. “But the tower is too tight.” I could see what he meant. The mashed potatoes looked a bit like something a kid at the beach might have molded with a bucket. You don’t want the food to look manufactured, he explained. Mauricio fluffed up the potatoes with a fork.
I watched the kitchen manager for a while. At every Cheesecake Factory restaurant, a kitchen manager is stationed at the counter where the food comes off the line, and he rates the food on a scale of one to ten. A nine is near-perfect. An eight requires one or two corrections before going out to a guest. A seven needs three. A six is unacceptable and has to be redone. This inspection process seemed a tricky task. No one likes to be second-guessed. The kitchen manager prodded gently, being careful to praise as often as he corrected. (“Beautiful. Beautiful!” “The pattern of this pesto glaze is just right.”) But he didn’t hesitate to correct.
“We’re getting sloppy with the plating,” he told the pasta station. He was unhappy with how the fry cooks were slicing the avocado spring rolls. “Gentlemen, a half-inch border on this next time.” He tried to be a coach more than a policeman. “Is this three-quarters of an ounce of Parm-Romano?”
And that seemed to be the spirit in which the line cooks took him and the other managers. The managers had all risen through the ranks. This earned them a certain amount of respect. They in turn seemed respectful of the cooks’ skills and experience. Still, the oversight is tight, and this seemed crucial to the success of the enterprise.
The managers monitored the pace, too—scanning the screens for a station stacking up red flags, indicating orders past the target time, and deciding whether to give the cooks at the station a nudge or an extra pair of hands. They watched for waste—wasted food, wasted time, wasted effort. The formula was Business 101: Use the right amount of goods and labor to deliver what customers want and no more. Anything more is waste, and waste is lost profit.
I spoke to David Gordon, the company’s chief operating officer. He told me that the Cheesecake Factory has worked out a staff-to-customer ratio that keeps everyone busy but not so busy that there’s no slack in the system in the event of a sudden surge of customers. More difficult is the problem of wasted food. Although the company buys in bulk from regional suppliers, groceries are the biggest expense after labor, and the most unpredictable. Everything—the chicken, the beef, the lettuce, the eggs, and all the rest—has a shelf life. If a restaurant were to stock too much, it could end up throwing away hundreds of thousands of dollars’ worth of food. If a restaurant stocks too little, it will have to tell customers that their favorite dish is not available, and they may never come back. Groceries, Gordon said, can kill a restaurant.
The company’s target last year was at least 97.5-per-cent efficiency: the managers aimed at throwing away no more than 2.5 per cent of the groceries they bought, without running out. This seemed to me an absurd target. Achieving it would require knowing in advance almost exactly how many customers would be coming in and what they were going to want, then insuring that the cooks didn’t spill or toss or waste anything. Yet this is precisely what the organization has learned to do. The chain-restaurant industry has produced a field of computer analytics known as “guest forecasting.”
“We have forecasting models based on historical data—the trend of the past six weeks and also the trend of the previous year,” Gordon told me. “The predictability of the business has become astounding.” The company has even learned how to make adjustments for the weather or for scheduled events like playoff games that keep people at home.
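Gordon's formula, a recent six-week trend blended with the same period a year earlier, maps onto a very simple forecasting sketch. The function below and its 50/50 blend weight are assumptions for illustration; real guest-forecasting systems layer adjustments for weather and scheduled events on top of a baseline like this.

```python
from statistics import mean

def forecast_covers(last_six_weeks: list[float], same_week_last_year: float,
                    blend: float = 0.5) -> float:
    """Blend the six-week average with last year's figure for the same
    week. The 50/50 weighting is an illustrative assumption; weather
    and event adjustments would be applied on top of this baseline."""
    return blend * mean(last_six_weeks) + (1 - blend) * same_week_last_year

# Hypothetical weekly guest counts for one restaurant:
history = [5200, 5350, 5100, 5500, 5400, 5450]
print(round(forecast_covers(history, same_week_last_year=5600)))  # 5467
```

A baseline like this is what makes the 97.5-per-cent grocery target conceivable: stocking decisions can be keyed to expected demand rather than guesswork.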
A computer program known as Net Chef showed Luz that for this one restaurant food costs accounted for 28.73 per cent of expenses the previous week. It also showed exactly how many chicken breasts were ordered that week ($1,614 worth), the volume sold, the volume on hand, and how much of last week’s order had been wasted (three dollars’ worth). Chain production requires control, and they’d figured out how to achieve it on a mass scale.
As a doctor, I found such control alien—possibly from a hostile planet. We don’t have patient forecasting in my office, push-button waste monitoring, or such stringent, hour-by-hour oversight of the work we do, and we don’t want to. I asked Luz if he had ever thought about the contrast when he went to see a doctor. We were standing amid the bustle of the kitchen, and the look on his face shifted before he answered.
“I have,” he said. His mother was seventy-eight. She had early Alzheimer’s disease, and required a caretaker at home. Getting her adequate medical care was, he said, a constant battle.
Recently, she’d had a fall, apparently after fainting, and was taken to a local emergency room. The doctors ordered a series of tests and scans, and kept her overnight. They never figured out what the problem was. Luz understood that sometimes explanations prove elusive. But the clinicians didn’t seem to be following any coördinated plan of action. The emergency doctor told the family one plan, the admitting internist described another, and the consulting specialist a third. Thousands of dollars had been spent on tests, but nobody ever told Luz the results.
A nurse came at ten the next morning and said that his mother was being discharged. But his mother’s nurse was on break, and the discharge paperwork with her instructions and prescriptions hadn’t been done. So they waited. Then the next person they needed was at lunch. It was as if the clinicians were the customers, and the patients’ job was to serve them. “We didn’t get to go until 6 p.m., with a tired, disabled lady and a long drive home.” Even then she still had to be changed out of her hospital gown and dressed. Luz pressed the call button to ask for help. No answer. He went out to the ward desk.
The aide was on break, the secretary said. “Don’t you dress her yourself at home?” He explained that he didn’t, and made a fuss.
An aide was sent. She was short with him and rough in changing his mother’s clothes. “She was manhandling her,” Luz said. “I felt like, ‘Stop. I’m not one to complain. I respect what you do enormously. But if there were a video camera in here, you’d be on the evening news.’ I sent her out. I had to do everything myself. I’m stuffing my mom’s boob in her bra. It was unbelievable.”
His mother was given instructions to check with her doctor for the results of cultures taken during her stay, for a possible urinary-tract infection. But when Luz tried to follow up, he couldn’t get through to her doctor for days. “Doctors are busy,” he said. “I get it. But come on.” An office assistant finally told him that the results wouldn’t be ready for another week and that she was to see a neurologist. No explanations. No chance to ask questions.
The neurologist, after giving her a two-minute exam, suggested tests that had already been done and wrote a prescription that he admitted was of doubtful benefit. Luz’s family seemed to encounter this kind of disorganization, imprecision, and waste wherever his mother went for help.
“It is unbelievable to me that they would not manage this better,” Luz said. I asked him what he would do if he were the manager of a neurology unit or a cardiology clinic. “I don’t know anything about medicine,” he said. But when I pressed he thought for a moment, and said, “This is pretty obvious. I’m sure you already do it. But I’d study what the best people are doing, figure out how to standardize it, and then bring it to everyone to execute.”
This is not at all the normal way of doing things in medicine. (“You’re scaring me,” he said, when I told him.) But it’s exactly what the new health-care chains are now hoping to do on a mass scale. They want to create Cheesecake Factories for health care. The question is whether the medical counterparts to Mauricio at the broiler station—the clinicians in the operating rooms, in the medical offices, in the intensive-care units—will go along with the plan. Fixing a nice piece of steak is hardly of the same complexity as diagnosing the cause of an elderly patient’s loss of consciousness. Doctors and patients have not had a positive experience with outsiders second-guessing decisions. How will they feel about managers trying to tell them what the “best practices” are?
In March, my mother underwent a total knee replacement, like at least six hundred thousand Americans each year. She’d had a partial knee replacement a decade ago, when arthritis had worn away part of the cartilage, and for a while this served her beautifully. The surgeon warned, however, that the results would be temporary, and about five years ago the pain returned.
She’s originally from Ahmadabad, India, and has spent three decades as a pediatrician, attending to the children of my small Ohio home town. She’s chatty. She can’t go through a grocery checkout line or get pulled over for speeding without learning people’s names and a little bit about them. But she didn’t talk about her mounting pain. I noticed, however, that she had developed a pronounced limp and had become unable to walk even moderate distances. When I asked her about it, she admitted that just getting out of bed in the morning was an ordeal. Her doctor showed me her X-rays. Her partial prosthesis had worn through the bone on the lower surface of her knee. It was time for a total knee replacement.
This past winter, she finally stopped putting it off, and asked me to find her a surgeon. I wanted her to be treated well, in both the technical and the human sense. I wanted a place where everyone and everything—from the clinic secretary to the physical therapists—worked together seamlessly.
My mother planned to come to Boston, where I live, for the surgery so she could stay with me during her recovery. (My father died last year.) Boston has three hospitals in the top rank of orthopedic surgery. But even a doctor doesn’t have much to go on when it comes to making a choice. A place may have a great reputation, but it’s hard to know about actual quality of care.
Unlike some countries, the United States doesn’t have a monitoring system that tracks joint-replacement statistics. Even within an institution, I found, surgeons take strikingly different approaches. They use different makes of artificial joints, different kinds of anesthesia, different regimens for post-surgical pain control and physical therapy.
In the absence of information, I went with my own hospital, the Brigham and Women’s Hospital. Our big-name orthopedic surgeons treat Olympians and professional athletes. Nine of them do knee replacements. Of most interest to me, however, was a surgeon who was not one of the famous names. He has no national recognition. But he has led what is now a decade-long experiment in standardizing joint-replacement surgery.
John Wright is a New Zealander in his late fifties. He’s a tower crane of a man, six feet four inches tall, and so bald he barely seems to have eyebrows. He’s informal in attire—I don’t think I’ve ever seen him in a tie, and he is as apt to do rounds in his zip-up anorak as in his white coat—but he exudes competence.
“Customization should be five per cent, not ninety-five per cent, of what we do,” he told me. A few years ago, he gathered a group of people from every specialty involved—surgery, anesthesia, nursing, physical therapy—to formulate a single default way of doing knee replacements. They examined every detail, arguing their way through their past experiences and whatever evidence they could find. Essentially, they did what Luz considered the obvious thing to do: they studied what the best people were doing, figured out how to standardize it, and then tried to get everyone to follow suit.
They came up with a plan for anesthesia based on research studies—including giving certain pain medications before the patient entered the operating room and using spinal anesthesia plus an injection of local anesthetic to block the main nerve to the knee. They settled on a postoperative regimen, too. The day after a knee replacement, most orthopedic surgeons have their patients use a continuous passive-motion machine, which flexes and extends the knee as they lie in bed. Large-scale studies, though, have suggested that the machines don’t do much good. Sure enough, when the members of Wright’s group examined their own patients, they found that the ones without the machine got out of bed sooner after surgery, used less pain medication, and had more range of motion at discharge. So Wright instructed the hospital to get rid of the machines, and to use the money this saved (ninety thousand dollars a year) to pay for more physical therapy, something that is proven to help patient mobility. Therapy, starting the day after surgery, would increase from once to twice a day, including weekends.
Even more startling, Wright had persuaded the surgeons to accept changes in the operation itself; there was now, for instance, a limit as to which prostheses they could use. Each of our nine knee-replacement surgeons had his preferred type and brand. Knee surgeons are as particular about their implants as professional tennis players are about their racquets. But the hardware is easily the biggest cost of the operation—the average retail price is around eight thousand dollars, and some cost twice that, with no solid evidence of real differences in results.
Knee implants were largely perfected a quarter century ago. By the nineteen-nineties, studies showed that, for some ninety-five per cent of patients, the implants worked magnificently a decade after surgery. Evidence from the Australian registry has shown that not a single new knee or hip prosthesis had a lower failure rate than that of the established prostheses. Indeed, thirty per cent of the new models were likelier to fail. Like others on staff, Wright has advised companies on implant design. He believes that innovation will lead to better implants. In the meantime, however, he has sought to limit the staff to the three lowest-cost knee implants.
These have been hard changes for many people to accept. Wright has tried to figure out how to persuade clinicians to follow the standardized plan. To prevent revolt, he learned, he had to let them deviate at times from the default option. Surgeons could still order a passive-motion machine or a preferred prosthesis. “But I didn’t make it easy,” Wright said. The surgeons had to enter the treatment orders in the computer themselves. To change or add an implant, a surgeon had to show that the performance was superior or the price at least as low.
I asked one of his orthopedic colleagues, a surgeon named John Ready, what he thought about Wright’s efforts. Ready was philosophical. He recognized that the changes were improvements, and liked most of them. But he wasn’t happy when Wright told him that his knee-implant manufacturer wasn’t matching the others’ prices and would have to be dropped.
“It’s not ideal to lose my prosthesis,” Ready said. “I could make the switch. The differences between manufacturers are minor. But there’d be a learning curve.” Each implant has its quirks—how you seat it, what tools you use. “It’s probably a ten-case learning curve for me.” Wright suggested that he explain the situation to the manufacturer’s sales rep. “I’m my rep’s livelihood,” Ready said. “He probably makes five hundred dollars a case from me.” Ready spoke to his rep. The price was dropped.
Wright has become the hospital’s kitchen manager—not always a pleasant role. He told me that about half of the surgeons appreciate what he’s doing. The other half tolerate it at best. One or two have been outright hostile. But he has persevered, because he’s gratified by the results. The surgeons now use a single manufacturer for seventy-five per cent of their implants, giving the hospital bargaining power that has helped slash its knee-implant costs by half. And the start-to-finish standardization has led to vastly better outcomes. The distance patients can walk two days after surgery has increased from fifty-three to eighty-five feet. Nine out of ten could stand, walk, and climb at least a few stairs independently by the time of discharge. The amount of narcotic pain medications they required fell by a third. They could also leave the hospital nearly a full day earlier on average (which saved some two thousand dollars per patient).
My mother was one of the beneficiaries. She had insisted to Dr. Wright that she would need a week in the hospital after the operation and three weeks in a rehabilitation center. That was what she’d required for her previous knee operation, and this one was more extensive.
“We’ll see,” he told her.
The morning after her operation, he came in and told her that he wanted her getting out of bed, standing up, and doing a specific set of exercises he showed her. “He’s pushy, if you want to say it that way,” she told me. The physical therapists and nurses were, too. They were a team, and that was no small matter. I counted sixty-three different people involved in her care. Nineteen were doctors, including the surgeon and chief resident who assisted him, the anesthesiologists, the radiologists who reviewed her imaging scans, and the junior residents who examined her twice a day and adjusted her fluids and medications. Twenty-three were nurses, including her operating-room nurses, her recovery-room nurse, and the many ward nurses on their eight-to-twelve-hour shifts. There were also at least five physical therapists; sixteen patient-care assistants, helping check her vital signs, bathe her, and get her to the bathroom; plus X-ray and EKG technologists, transport workers, nurse practitioners, and physician assistants. I didn’t even count the bioengineers who serviced the equipment used, the pharmacists who dispensed her medications, or the kitchen staff preparing her food while taking into account her dietary limitations. They all had to coördinate their contributions, and they did.
Three days after her operation, she was getting in and out of bed on her own. She was on virtually no narcotic medication. She was starting to climb stairs. Her knee pain was actually less than before her operation. She left the hospital for the rehabilitation center that afternoon.
The biggest complaint that people have about health care is that no one ever takes responsibility for the total experience of care, for the costs, and for the results. My mother experienced what happens in medicine when someone takes charge. Of course, John Wright isn’t alone in trying to design and implement this kind of systematic care, in joint surgery and beyond. The Virginia Mason Medical Center, in Seattle, has done it for knee surgery and cancer care; the Geisinger Health System, in Pennsylvania, has done it for cardiac surgery and primary care; the University of Michigan Health System standardized how its doctors give blood transfusions to patients, reducing the need for transfusions by thirty-one per cent and expenses by two hundred thousand dollars a month. Yet, unless such programs are ramped up on a nationwide scale, they aren’t going to do much to improve health care for most people or reduce the explosive growth of health-care costs.
In medicine, good ideas still take an appallingly long time to trickle down. Recently, the American Academy of Neurology and the American Headache Society released new guidelines for migraine-headache treatment. They recommended treating severe migraine sufferers—who have more than six attacks a month—with preventive medications and listed several drugs that markedly reduce the occurrence of attacks. The authors noted, however, that previous guidelines going back more than a decade had recommended such remedies, and doctors were still not providing them to more than two-thirds of patients. One study examined how long it took several major discoveries, such as the finding that the use of beta-blockers after a heart attack improves survival, to reach even half of Americans. The answer was, on average, more than fifteen years.
Scaling good ideas has been one of our deepest problems in medicine. Regulation has had its place, but it has proved no more likely to produce great medicine than food inspectors are to produce great food. During the era of managed care, insurance-company reviewers did hardly any better. We’ve been stuck. But do we have to be?
Every six months, the Cheesecake Factory puts out a new menu. This means that everyone who works in its restaurants expects to learn something new twice a year. The March, 2012, Cheesecake Factory menu included thirteen new items. The teaching process is now finely honed: from start to finish, rollout takes just seven weeks.
The ideas for a new dish, or for tweaking an old one, can come from anywhere. One of the Boston prep cooks told me about an idea he once had that ended up in a recipe. David Overton, the founder and C.E.O. of the Cheesecake Factory, spends much of his time sampling a range of cuisines and comes up with many dishes himself. All the ideas, however, go through half a dozen chefs in the company’s test kitchen, in Calabasas. They figure out how to make each recipe reproducible, appealing, and affordable. Then they teach the new recipe to the company’s regional managers and kitchen managers.
Dave Luz, the Boston regional manager, went to California for training this past January with his chief kitchen manager, Tom Schmidt, a chef with fifteen years’ experience. They attended lectures, watched videos, participated in workshops. It sounded like a surgical conference. Where I might be taught a new surgical technique, they were taught the steps involved in preparing a “Santorini farro salad.” But there was a crucial difference. The Cheesecake instructors also trained the attendees how to teach what they were learning. In medicine, we hardly ever think about how to implement what we’ve learned. We learn what we want to, when we want to.
On the first training day, the kitchen managers worked their way through thirteen stations, preparing each new dish, and their performances were evaluated. The following day, they had to teach their regional managers how to prepare each dish—Schmidt taught Luz—and this time the instructors assessed how well the kitchen managers had taught.
The managers returned home to replicate the training session for the general manager and the chief kitchen manager of every restaurant in their region. The training at the Boston Prudential Center restaurant took place on two mornings, before the lunch rush. The first day, the managers taught the kitchen staff the new menu items. There was a lot of poring over the recipes and videos and fussing over the details. The second day, the cooks made the new dishes for the servers. This gave the cooks some practice preparing the food at speed, while allowing the servers to learn the new menu items. The dishes would go live in two weeks. I asked a couple of the line cooks how long it took them to learn to make the new food.
“I know it already,” one said.
“I make it two times, and that’s all I need,” the other said.
Come on, I said. How long before they had it down pat?
“One day,” they insisted. “It’s easy.”
I asked Schmidt how much time he thought the cooks required to master the recipes. They thought a day, I told him. He grinned. “More like a month,” he said.
Even a month would be enviable in medicine, where innovations commonly spread at a glacial pace. The new health-care chains, though, are betting that they can change that, in much the same way that other chains have.
Armin Ernst is responsible for intensive-care-unit operations in Steward’s ten hospitals. The I.C.U.s he oversees serve some eight thousand patients a year. In another era, an I.C.U. manager would have been a facilities expert. He would have spent his time making sure that the equipment, electronics, pharmacy resources, and nurse staffing were up to snuff. He would have regarded the I.C.U. as the doctors’ workshop, and he would have wanted to give them the best possible conditions to do their work as they saw fit.
Ernst, though, is a doctor—a new kind of doctor, whose goal is to help disseminate good ideas. He doesn’t see the I.C.U. as a doctors’ workshop. He sees it as the temporary home of the sickest, most fragile people in the country. Nowhere in health care do we expend more resources. Although fewer than one in four thousand Americans are in intensive care at any given time, they account for four per cent of national health-care costs. Ernst believes that his job is to make sure that everyone is collaborating to provide the most effective and least wasteful care possible.
He looked like a regular doctor to me. Ernst is fifty years old, a native German who received his medical degree at the University of Heidelberg before training in pulmonary and critical-care medicine in the United States. He wears a white hospital coat and talks about drips and ventilator settings, like any other critical-care specialist. But he doesn’t deal with patients: he deals with the people who deal with patients.
Ernst says he’s not telling clinicians what to do. Instead, he’s trying to get clinicians to agree on precise standards of care, and then make sure that they follow through on them. (The word “consensus” comes up a lot.) What I didn’t understand was how he could enforce such standards in ten hospitals across three thousand square miles.
Late one Friday evening, I joined an intensive-care-unit team on night duty. But this team was nowhere near a hospital. We were in a drab one-story building behind a meat-trucking facility outside of Boston, in a back section that Ernst called his I.C.U. command center. It was outfitted with millions of dollars’ worth of technology. Banks of computer screens carried a live feed of cardiac-monitor readings, radiology-imaging scans, and laboratory results from I.C.U. patients throughout Steward’s hospitals. Software monitored the stream and produced yellow and red alerts when it detected patterns that raised concerns. Doctors and nurses manned consoles where they could toggle on high-definition video cameras that allowed them to zoom into any I.C.U. room and talk directly to the staff on the scene or to the patients themselves.
The command center was just a few months old. The team had gone live in only four of the ten hospitals. But in the next several months Ernst’s “tele-I.C.U.” team will have the ability to monitor the care for every patient in every I.C.U. bed in the Steward health-care system.
A doctor, two nurses, and an administrative assistant were on duty in the command center each night I visited. Christina Monti was one of the nurses. A pixie-like thirty-year-old with nine years’ experience as a cardiac intensive-care nurse, she was covering Holy Family Hospital, on the New Hampshire border, and St. Elizabeth’s Medical Center, in Boston’s Brighton neighborhood. When I sat down with her, she was making her rounds, virtually.
First, she checked on the patients she had marked as most critical. She reviewed their most recent laboratory results, clinical notes, and medication changes in the electronic record. Then she made a “visit,” flicking on the two-way camera and audio system. If the patients were able to interact, she would say hello to them in their beds. She asked the staff members whether she could do anything for them. The tele-I.C.U. team provided the staff with extra eyes and ears when needed. If a crashing patient diverts the staff’s attention, the members of the remote team can keep an eye on the other patients. They can handle computer paperwork if a nurse falls behind; they can look up needed clinical information. The hospital staff have an OnStar-like button in every room that they can push to summon the tele-I.C.U. team.
Monti also ran through a series of checks for each patient. She had a reference list of the standards that Ernst had negotiated with the people running the I.C.U.s, and she looked to see if they were being followed. The standards covered basics, from hand hygiene to measures for stomach-ulcer prevention. In every room with a patient on a respirator, for instance, Monti made sure the nurse had propped the head of the bed up at least thirty degrees, which makes pneumonia less likely. She made sure the breathing tube in the patient’s mouth was secure, to reduce the risk of the tube’s falling out or becoming disconnected. She zoomed in on the medication pumps to check that the drips were dosed properly. She was not looking for bad nurses or bad doctors. She was looking for the kinds of misses that even excellent nurses and doctors can make under pressure.
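Monti's room-by-room checks read like a rule list evaluated against each patient's state. A hedged sketch, with an invented `PatientRoom` record; only the three standards mentioned above (bed elevation, tube security, drip dosing) are drawn from the text, and the field names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class PatientRoom:
    on_respirator: bool
    bed_head_degrees: float
    tube_secured: bool
    drip_dose_ok: bool  # dose on the pump matches the order

def checklist_findings(room: PatientRoom) -> list[str]:
    """Misses a remote nurse would flag, per the standards described
    above: respirator patients should have the head of the bed at
    thirty degrees or more (pneumonia prevention), a secured breathing
    tube, and properly dosed drips."""
    findings = []
    if room.on_respirator:
        if room.bed_head_degrees < 30:
            findings.append("head of bed below thirty degrees")
        if not room.tube_secured:
            findings.append("breathing tube not secured")
    if not room.drip_dose_ok:
        findings.append("drip dose does not match the order")
    return findings

room = PatientRoom(on_respirator=True, bed_head_degrees=20,
                   tube_secured=True, drip_dose_ok=True)
print(checklist_findings(room))  # ['head of bed below thirty degrees']
```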
The concept of the remote I.C.U. started with an effort to let specialists in critical-care medicine, who are in short supply, cover not just one but several community hospitals. Two hundred and fifty hospitals from Alaska to Virginia have installed a version of the tele-I.C.U. It produced significant improvements in outcomes and costs—and, some discovered, a means of driving better practices even in hospitals that had specialists on hand.
After five minutes of observation, however, I realized that the remote I.C.U. team wasn’t exactly in command; it was in negotiation. I observed Monti perform a video check on a middle-aged man who had just come out of heart surgery. A soft chime let the people in the room know she was dropping in. The man was unconscious, supported by a respirator and intravenous drips. At his bedside was a nurse hanging a bag of fluid. She seemed to stiffen at the chime’s sound.
“Hi,” Monti said to her. “I’m Chris. Just making my evening rounds. How are you?” The bedside nurse gave the screen only a sidelong glance.
Ernst wasn’t oblivious of the issue. He had taken pains to introduce the command center’s team, spending weeks visiting the units and bringing doctors and nurses out to tour the tele-I.C.U. before a camera was ever turned on. But there was no escaping the fact that these were strangers peering over the staff’s shoulders. The bedside nurse’s chilliness wasn’t hard to understand.
In a single hour, however, Monti had caught a number of problems. She noticed, for example, that a patient’s breathing tube had come loose. Another patient wasn’t getting recommended medication to prevent potentially fatal blood clots. Red alerts flashed on the screen—a patient with an abnormal potassium level that could cause heart-rhythm problems, another with a sudden leap in heart rate.
Monti made sure that the team wasn’t already on the case and that the alerts weren’t false alarms. Checking the computer, she figured out that a doctor had already ordered a potassium infusion for the woman with the low level. Flipping on a camera, she saw that the patient with the high heart rate was just experiencing the stress of being helped out of bed for the first time after surgery. But the unsecured breathing tube and the forgotten blood-clot medication proved to be oversights. Monti raised the concerns with the bedside staff.
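The triage Monti performed, suppressing alerts that are already being handled or that context explains, and escalating genuine oversights to the bedside staff, is itself a small piece of logic. A sketch with invented field names; the monitoring software's real rules are not public.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    patient_id: str
    reason: str                 # e.g., "abnormal potassium"
    order_already_placed: bool  # a doctor has already ordered a fix
    explained_by_context: bool  # e.g., stress of first time out of bed

def needs_escalation(alert: Alert) -> bool:
    """Mirror the triage steps described above: skip alerts the team is
    already handling and readings with an innocent explanation;
    escalate everything else to the bedside staff."""
    return not (alert.order_already_placed or alert.explained_by_context)

alerts = [
    Alert("A", "abnormal potassium", True, False),    # infusion already ordered
    Alert("B", "heart-rate spike", False, True),      # patient helped out of bed
    Alert("C", "unsecured breathing tube", False, False),
]
print([a.patient_id for a in alerts if needs_escalation(a)])  # ['C']
```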
Sometimes they resist. “You have got to be careful from patient to patient,” Gerard Hayes, the tele-I.C.U. doctor on duty, explained. “Pushing hard on one has ramifications for how it goes with a lot of patients. You don’t want to sour whole teams on the tele-I.C.U.” Across the country, several hospitals have decommissioned their systems. Clinicians have been known to place a gown over the camera, or even rip the camera out of the wall. Remote monitoring will never be the same as being at the bedside. One nurse called the command center to ask the team not to turn on the video system in her patient’s room: he was delirious and confused, and the sudden appearance of someone talking to him from the television would freak him out.
Still, you could see signs of change. I watched Hayes make his virtual rounds through the I.C.U. at St. Anne’s Hospital, in Fall River, near the Rhode Island border. He didn’t yet know all the members of the hospital staff—this was only his second night in the command center, and when he sees patients in person it’s at a hospital sixty miles north. So, in his dealings with the on-site clinicians, he was feeling his way.
Checking on one patient, he found a few problems. Mr. Karlage, as I’ll call him, was in his mid-fifties, an alcoholic smoker with cirrhosis of the liver, severe emphysema, terrible nutrition, and now a pneumonia that had put him into respiratory failure. The I.C.U. team injected him with antibiotics and sedatives, put a breathing tube down his throat, and forced pure oxygen into his lungs. Over a few hours, he stabilized, and the I.C.U. doctor was able to turn his attention to other patients.
But stabilizing a sick patient is like putting out a house fire. There can be smoldering embers just waiting to reignite. Hayes spotted a few. The ventilator remained set to push breaths at near-maximum pressure, and, given the patient’s severe emphysema, this risked causing a blowout. The oxygen concentration was still cranked up to a hundred per cent, which, over time, can damage the lungs. The team had also started several broad-spectrum antibiotics all at once, and this regimen had to be dialled back if they were to avoid breeding resistant bacteria.
Hayes had to notify the unit doctor. An earlier interaction, however, had not been promising. During a video check on a patient, Hayes had introduced himself and mentioned an issue he’d noticed. The unit doctor stared at him with folded arms, mouth shut tight. Hayes was a former Navy flight surgeon with twenty years’ experience as an I.C.U. doctor and looked to have at least a decade on the St. Anne’s doctor. But the doctor was no greenhorn, either, and gave him the brushoff: “The morning team can deal with that.” Now Hayes needed to call him about Mr. Karlage. He decided to do it by phone.
“Sounds like you’re having a busy night,” Hayes began when he reached the doctor. “Mr. Karlage is really turning around, huh?” Hayes praised the doctor’s work. Then he brought up his three issues, explaining what he thought could be done and why. He spoke like a consultant brought in to help. This went over better. The doctor seemed to accept Hayes’s suggestions.
Unlike a mere consultant, however, Hayes took a few extra steps to make sure his suggestions were carried out. He spoke to the nurse and the respiratory therapist by video and explained the changes needed. To carry out the plan, they needed written orders from the unit doctor. Hayes told them to call him back if they didn’t get the orders soon.
Half an hour later, Hayes called Mr. Karlage’s nurse again. She hadn’t received the orders. For all the millions of dollars of technology spent on the I.C.U. command center, this is where the plug meets the socket. The fundamental question in medicine is: Who is in charge? With the opening of the command center, Steward was trying to change the answer—it gave the remote doctors the authority to issue orders as well. The idea was that they could help when a unit doctor got too busy and fell behind, and that’s what Hayes chose to believe had happened. He entered the orders into the computer. In a conflict, however, the on-site physician has the final say. So Hayes texted the St. Anne’s doctor, informing him of the changes and asking if he’d let him know if he disagreed.
Hayes received no reply. No “thanks” or “got it” or “O.K.” After midnight, though, the unit doctor pressed the video call button and his face flashed onto Hayes’s screen. Hayes braced for a confrontation. Instead, the doctor said, “So I’ve got this other patient and I wanted to get your opinion.”
Hayes suppressed a smile. “Sure,” he said.
When he signed off, he seemed ready to high-five someone. “He called us,” he marvelled. The command center was gaining credibility.
Armin Ernst has big plans for the command center—a rollout of full-scale treatment protocols for patients with severe sepsis, acute respiratory-distress syndrome, and other conditions; strategies to reduce unnecessary costs; perhaps even computer forecasting of patient volume someday. Steward is already extending the command-center concept to in-patient psychiatry. Emergency rooms and surgery may be next. Other health systems are pursuing similar models. The command-center concept provides the possibility of, well, command.
Today, some ninety “super-regional” health-care systems have formed across the country—large, growing chains of clinics, hospitals, and home-care agencies. Most are not-for-profit. Financial analysts expect the successful ones to drive independent medical centers out of existence in much of the country—either by buying them up or by drawing away their patients with better quality and cost control. Some small clinics and stand-alone hospitals will undoubtedly remain successful, perhaps catering to the luxury end of health care the way gourmet restaurants do for food. But analysts expect that most of us will gravitate to the big systems, just as we have moved away from small pharmacies to CVS and Walmart.
Already, there have been startling changes. Cleveland Clinic, for example, opened nine regional hospitals in northeast Ohio, as well as health centers in southern Florida, Toronto, and Las Vegas, and is now going international, with a three-hundred-and-sixty-four-bed hospital in Abu Dhabi scheduled to open next year. It reached an agreement with Lowe’s, the home-improvement chain, guaranteeing a fixed price for cardiac surgery for the company’s employees and dependents. The prospect of getting better care for a lower price persuaded Lowe’s to cover all out-of-pocket costs for its insured workers to go to Cleveland, including co-payments, airfare, transportation, and lodging. Three other companies, including Kohl’s department stores, have made similar deals, and a dozen more, including Boeing, are in negotiations.
Big Medicine is on the way.
Reinventing medical care could produce hundreds of innovations. Some may be as simple as giving patients greater e-mail and online support from their clinicians, which would enable timelier advice and reduce the need for emergency-room visits. Others might involve smartphone apps for coaching the chronically ill in the management of their disease, new methods for getting advice from specialists, sophisticated systems for tracking outcomes and costs, and instant delivery to medical teams of up-to-date care protocols. Innovations could take a system that requires sixty-three clinicians for a knee replacement and knock the number down by half or more. But most significant will be the changes that finally put people like John Wright and Armin Ernst in charge of making care coherent, coördinated, and affordable. Essentially, we’re moving from a Jeffersonian ideal of small guilds and independent craftsmen to a Hamiltonian recognition of the advantages that size and centralized control can bring.
Yet it seems strange to pin our hopes on chains. We have no guarantee that Big Medicine will serve the social good. Whatever the industry, an increase in size and control creates the conditions for monopoly, which could do the opposite of what we want: suppress innovation and drive up costs over time. In the past, certainly, health-care systems that pursued size and market power were better at raising prices than at lowering them.
A new generation of medical leaders and institutions professes to have a different aim. But a lesson of the past century is that government can influence the behavior of big corporations, by requiring transparency about their performance and costs, and by enacting rules and limitations to protect the ordinary citizen. The federal government has broken up monopolies like Standard Oil and A.T. & T.; in some parts of the country, similar concerns could develop in health care.
Mixed feelings about the transformation are unavoidable. There’s not just the worry about what Big Medicine will do; there’s also the worry about how society and government will respond. For the changes to live up to our hopes—lower costs and better care for everyone—liberals will have to accept the growth of Big Medicine, and conservatives will have to accept the growth of strong public oversight.
The vast savings of Big Medicine could be widely shared—or reserved for a few. The clinicians who are trying to reinvent medicine aren’t doing it to make hedge-fund managers and bondholders richer; they want to see that everyone benefits from the savings their work generates—and that won’t be automatic.
Our new models come from industries that have learned to increase the capabilities and efficiency of the human beings who work for them. Yet the same industries have also tended to devalue those employees. The frontline worker, whether he is making cars, solar panels, or wasabi-crusted ahi tuna, now generates unprecedented value but receives little of the wealth he is creating. Can we avoid this as we revolutionize health care?
Those of us who work in the health-care chains will have to contend with new protocols and technology rollouts every six months, supervisors and project managers, and detailed metrics on our performance. Patients won’t just look for the best specialist anymore; they’ll look for the best system. Nurses and doctors will have to get used to delivering care in which our own convenience counts for less and the patients’ experience counts for more. We’ll also have to figure out how to reward people for taking the time and expense to teach the next generations of clinicians. All this will be an enormous upheaval, but it’s long overdue, and many people recognize that. When I asked Christina Monti, the Steward tele-I.C.U. nurse, why she wanted to work in a remote facility tangling with staffers who mostly regarded her with indifference or hostility, she told me, “Because I wanted to be part of the change.”
And we are seeing glimpses of this change. In my mother’s rehabilitation center, miles away from where her surgery was done, the physical therapists adhered to the exercise protocols that Dr. Wright’s knee factory had developed. He didn’t have a video command center, so he came out every other day to check on all the patients and make sure that the staff was following the program. My mother was sure she’d need a month in rehab, but she left in just a week, incurring a fraction of the costs she would have otherwise. She walked out the door using a cane. On her first day at home with me, she climbed two flights of stairs and walked around the block for exercise.
The critical question is how soon that sort of quality and cost control will be available to patients everywhere across the country. We’ve let health-care systems provide us with the equivalent of greasy-spoon fare at four-star prices, and the results have been ruinous. The Cheesecake Factory model represents our best prospect for change. Some will see danger in this. Many will see hope. And that’s probably the way it should be. ♦
===Article on Physician Burnout and Best Practice===
A primary care physician’s work includes vaccinations, screenings, chronic disease prevention and treatment, relationship building, family planning, behavioral health, counseling, and other vital but time-consuming work.
To be in full compliance with the U.S. Preventive Services Task Force recommendations, primary care physicians with average-sized patient populations need to dedicate 7.4 hours per day to preventative care alone. Taken in conjunction with the other primary care services, namely acute and chronic care, the estimated total workload per primary care physician comes to 21.7 hours per day, or 108.5 hours per week.
“Complete Care” across 8,500 physicians and 4.4 million members at SCPMG has four elements:
1. Share accountability: preventative and chronic care services (e.g., treating people with hypertension or women in need of a mammogram) are shared with high-volume specialties.
2. Right work, right people: transfer tasks from physicians — not just those in primary care — to non-physicians, so that everyone works at the top of his or her license.
3. Information technology: an “outreach team” manages information technologies that allow patients to schedule visits from mobile apps, access online personalized health care plans (e.g., customized weight-loss calendars and healthy recipes), and manage complex schedules (e.g., the steps prior to a kidney transplant).
4. Standardized care process (see Atul Gawande, “Big Med”): the “Proactive Office Encounter” (POE) ensures consistent evidence-based care at every encounter across the organization. At its core, the POE is an agreement of process and delegation of tasks between physicians and their administrative supports, such as medical assistants (MAs) and licensed vocational nurses (LVNs).
How One California Medical Group Is Decreasing Physician Burnout
Erin E. Sullivan
JUNE 07, 2017
Physician burnout is a growing problem for all health care systems in the United States. Burned-out physicians deliver lower quality care, reduce their hours, or stop practicing, reducing access to care around the country. Primary care physicians are particularly vulnerable: They have some of the highest burnout rates of any medical discipline.
As part of our work researching high-performing primary care systems, we discovered a system-wide approach launched by Southern California Permanente Medical Group (SCPMG) in 2004 that unburdens primary care physicians. We believe the program — Complete Care — may be a viable model for other institutions looking to decrease burnout or increase physician satisfaction. (While burnout can easily be measured, institutions often don’t publicly report their own rates and the associated turnover they experience. Consequently, we used physician satisfaction as a proxy for burnout in our research.)
In most health care systems, primary care physicians are the first stop for patients needing care. As a result, their patients’ needs — and their own tasks — vary immensely. A primary care physician’s work includes vaccinations, screenings, chronic disease prevention and treatment, relationship building, family planning, behavioral health, counseling, and other vital but time-consuming work.
Some studies have examined just how much time a primary care physician needs to do all of these tasks, and the results are staggering. To be in full compliance with the U.S. Preventive Services Task Force recommendations, primary care physicians with average-sized patient populations need to dedicate 7.4 hours per day to preventative care alone. Taken in conjunction with the other primary care services, namely acute and chronic care, the estimated total workload per primary care physician comes to 21.7 hours per day, or 108.5 hours over a five-day week. Given such workloads, the high burnout rate is hardly surprising.
While designed with the intent to improve quality of care, SCPMG’s Complete Care program also alleviates some of the identified drivers of physician burnout by following a systematic approach to care delivery. Comprising 8,500 physicians, SCPMG consistently provides the highest quality care to the region’s 4.4 million plan members. And a recent study of SCPMG physician satisfaction suggests that regardless of discipline, physicians feel high levels of satisfaction in three key areas: their compensation, their perceived ability to deliver high-quality care, and their day-to-day professional lives.
Complete Care has four core elements:
Share Accountability with Specialists
A few years ago, SCPMG’s regional medical director of quality and clinical analysis noticed a plateauing effect in some preventative screenings: screening rates failed to rise past a certain percentage. He asked his team to analyze how certain patient populations — for example, women in need of a mammogram — accessed the health care system. As approximately one in eight women will develop invasive breast cancer over the course of their lifetimes, a failure to receive the recommended preventative screening could have serious health repercussions.
What the team found was startling: Over the course of a year, nearly two-thirds of women clinically eligible for a mammogram never set foot in their primary care physician’s office. Instead they showed up in specialty care or urgent care.
While this discovery spurred more research into patient access, the outcome remained the same: To achieve better rates of preventative and chronic care compliance, specialists had to be brought into the fold.
SCPMG slowly started to share accountability for preventative and chronic care services (e.g., treating people with hypertension or women in need of a mammogram) with high-volume specialties. In order to bring the specialists on board, SCPMG identified and enlisted physician champions across the medical group to promote the program throughout the region; carefully timed the rollouts of the program’s different pieces so increased demands wouldn’t overwhelm specialists; and crafted incentive programs whose payouts were tied to specialists’ performance of preventative and chronic-care activities.
This reallocation of traditional primary care responsibilities has allowed SCPMG to achieve a high level of care integration and challenge traditional notions of roles and systems. Its specialists now have to respond to patients’ needs outside their immediate expertise: For example, a podiatrist will inquire whether a diabetic patient has had his or her regular eye examination, and an emergency room doctor will stitch up a cut and give immunizations in the same visit. And the whole system, not just primary care, is responsible for quality metrics related to prevention and chronic care (e.g., the percentage of eligible patients who received a mammogram).
In addition, SCPMG revamped the way it provided care to match how patients accessed and used their system. For example, it began promoting the idea of the comprehensive visit, where patients could see their primary care provider, get blood drawn, and pick up prescribed medications in the same building.
Ultimately, the burden on primary care physicians started to ease. Even more important, SCPMG estimates that Complete Care has saved over 17,000 lives.
“Right work, right people,” a guiding principle, helped shape the revamping of the organization’s infrastructure. One fundamental move was to transfer tasks from physicians — not just those in primary care — to non-physicians so physicians could spend their time doing tasks only they could do and everyone was working at the top of his or her license. For example, embedded nurse managers of diabetic patients help coordinate care visits, regularly communicate directly with patients about meeting their health goals (such as weekly calls about lowering HbA1c levels), and track metrics on diabetic populations across the entire organization. At the same time, dedicated prescribing nurse practitioners work closely with physicians to monitor medication use, which, in the case of blood thinners, is very time-intensive and requires careful titration.
SCPMG invested in information technologies that allowed patients to schedule visits from mobile apps, access online personalized health care plans (e.g., customized weight-loss calendars and healthy recipes), and manage complex schedules (e.g., the steps prior to a kidney transplant). It also established a small outreach team (about four people) that uses large automated registries of patients to mail seasonal reminders (e.g., “it’s time for your flu vaccine shot”) and alerts about routine checkups (e.g., “you are due for a mammogram”) and to handle other duties (e.g., coordinating mail-order, at-home fecal tests for colon cancer). In addition, the outreach team manages automated calls and e-mail reminders for the region’s 4.4 million members.
Thanks to this reorganization of responsibilities and use of new technology, traditional primary care tasks such as monitoring blood thinners, managing diabetic care, and tracking patients’ eligibility for cancer screenings have been transferred to other people and processes within the SCPMG system.
Standardize Care Processes
The final element of Complete Care is the kind of process standardization advocated by Atul Gawande in his New Yorker article “Big Med.” Standardizing processes — and in particular, workflows — removes duplicative work, strengthens working relationships, and results in higher-functioning teams, reliable routines, and higher-quality outcomes. In primary care, standardized workflows help create consistent communications between providers and staff, and between providers and patients, which allows physicians to spend more time during visits on patients’ pressing needs.
One such process, the “Proactive Office Encounter” (POE), ensures consistent evidence-based care at every encounter across the organization. At its core, the POE is an agreement of process and delegation of tasks between physicians and their administrative supports. It was originally developed to improve communications between support staff and physicians after SCPMG’s electronic medical record was introduced.
Medical assistants (MAs) and licensed vocational nurses (LVNs) are key players. A series of checklists embedded into the medical record guides their work both before and after the visit. These checklists contain symptoms, actions, and questions that are timely and specific to each patient based on age, disease status, and reason for the visit. Prior to the visit, MAs or LVNs contact patients with pre-visit instructions or to schedule necessary lab work. During the visit, they use the same checklists to follow up on pre-visit instructions, take vitals, conduct medication reconciliation, and prep the patient for the provider.
Pop-ups within the medical record indicate a patient’s eligibility for a new screening or regular test based on new literature, prompting the MAs or LVNs to ask patients for additional information. During the visit, physicians have access to the same checklists and data collected by the MAs or LVNs. This enables them to review the work quickly and efficiently and follow up on any flagged issues. After the visit with the physician, patients see an MA or LVN again and receive a summary of topics discussed with the provider and specific instructions or health education resources.
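As a rough illustration of how such an encounter-specific checklist might be assembled from a patient record, here is a minimal sketch in Python; the field names and screening rules are hypothetical stand-ins, not SCPMG’s actual system:

    from dataclasses import dataclass

    @dataclass
    class Patient:
        age: int
        sex: str              # "F" or "M"
        conditions: list      # e.g. ["diabetes"]
        visit_reason: str

    def pre_visit_checklist(p: Patient) -> list:
        """Build the MA/LVN worklist for one encounter."""
        items = ["Take vitals", "Reconcile medications"]
        if p.sex == "F" and 50 <= p.age <= 74:
            items.append("Confirm mammogram is up to date")
        if "diabetes" in p.conditions:
            items.append("Check date of last eye examination")
            items.append("Order HbA1c lab work if none in past 3 months")
        return items

    print(pre_visit_checklist(Patient(58, "F", ["diabetes"], "foot pain")))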
Contemporary physicians face many challenges: an aging population, rising rates of chronic conditions, workforce shortages, technological uncertainty, changing governmental policies, and greater disparities in health outcomes across populations. All of this, it could be argued, disproportionately affects primary care specialties. These factors promise to increase physician burnout unless health care organizations do something to ease physicians’ burden. SCPMG’s Complete Care initiative offers a viable blueprint to do just that.
Sophia Arabadjis is a researcher and case writer at the Harvard Medical School Center for Primary Care and a research assistant at the University of Colorado. She has investigated health systems in Europe and the United States.
Erin E. Sullivan is the research director of the Harvard Medical School Center for Primary Care. Her research focuses on high-performing primary care systems.
I have been tracking Google’s NEST for a while now. It’s the best example I know of a learning system for the home. The latest is …. it is still the best!
We spent more than a month trying five popular smart thermostats—testing the hardware, their accompanying mobile apps, and their integrations with various smart-home systems—and the third-generation Nest remains our pick. Five years after the Nest’s debut, a handful of bona fide competitors approach it in style and functionality, but the Nest Learning Thermostat remains the leader. It’s still the easiest, most intuitive thermostat we tested, offering the best combination of style and substance.
Last Updated: November 10, 2016
We’ve added our review of Ecobee’s new Ecobee3 Lite, and we’ve updated our thoughts on HomeKit integration following the launch of Apple’s Home app. We’ve also included details on Nest’s new Eco setting and color options, a brief look at the upcoming Lyric T5, and a clarification regarding the use of a C wire for the Emerson Sensi.
The Nest works well on its own or integrated with other smart-home products. Its software and apps are solid and elegant, too, and it does a really good job of keeping your home at a comfortable temperature with little to no input from you. Plus, if you want to change the temperature yourself, you can easily do so from your smartphone or computer, or with your voice via Google or an Amazon Echo. All of that means never having to get up from a cozy spot on the couch to mess with the thermostat. While the competition is catching up, none of the other devices we tested could match the Nest’s smarts. The expansion of the Works with Nest smart-home ecosystem and the introduction of Home/Away Assist have kept the Nest in the lead by fine-tuning those smart capabilities. The recent hardware update merely added a larger screen and a choice of clock interfaces, but the ongoing software improvements (which apply to all three generations of the product) have helped keep the Nest in its position as the frontrunner in this category without leaving its early adopters out in the cold.
Not as sleek or intuitive as the Nest, but it supports Apple’s HomeKit and uses stand-alone remote sensors to register temperature in different parts of a house, making it an option for large homes with weak HVAC systems.
The Ecobee3’s support for remote sensors makes it appealing if your thermostat isn’t in the best part of your house to measure the temperature. If you have a large, multistory house with a single-zone HVAC system, you can have big temperature differences between rooms. With Ecobee3’s add-on sensors (you get one with the unit and can add up to 32 more), the thermostat uses the sensors’ occupancy detectors to match the target temperature in occupied rooms, rather than just wherever the thermostat is installed. However, it doesn’t have the level of intelligence of the Nest, or that model’s retro cool look (which even the Honeywell Lyric takes a good stab at). Its black, rounded-rectangle design and touchscreen interface have a more modern feel; it looks a bit like someone mounted a smartphone app on your wall.
Ecobee’s new Lite model is a great budget option. It doesn’t have any occupancy sensors or remote temperature sensors, but it would work well for a smaller home invested in the Apple ecosystem.
For a cheaper smart thermostat with most of the important features of the more expensive models, we suggest the Ecobee3 Lite. This budget version of the Ecobee3 lacks the remote sensors and occupancy sensors of its predecessor but retains the programming and scheduling features, and like the main Ecobee3, it works with a variety of smart-home systems, including HomeKit, Alexa, SmartThings, Wink, and IFTTT. However, the lack of an occupancy sensor means you’ll have to manually revert it to its prescheduled state anytime you use Alexa, Siri, or any other integration to change its temperature.
Table of contents
Why a smart thermostat?
Who this is for
The C-wire conundrum
How we picked and tested
Who else likes our pick
Flaws but not deal breakers
Potential privacy issues
The next best thing (for larger homes)
What to look forward to
Wrapping it up
Why a smart thermostat?
A smart thermostat isn’t just convenient: Used wisely, it can save energy (and money), and it offers the potential for some cool integrations. If you upgrade to any smart thermostat after years with a basic one, the first and most life-changing difference will be the ability to control it remotely, from your phone, on your tablet, or with your voice. No more getting up in the middle of the night to turn up the AC. No dashing back into the house to lower the heat before you go on errands (or vacation). No coming home to a sweltering apartment—you just fire up the AC when you’re 10 minutes away, or even better, have your thermostat turn itself on in anticipation of your arrival.
Technically, thermostats have been “smart” since the first time a manufacturer realized that such devices could be more than a mercury thermometer and a metal dial. For years, the Home Depots of the world were full of plastic rectangles that owed a lot to the digital clock: They’d let you dial in ideal heating and cooling temperatures, and maybe even set different temperatures for certain times of the day and particular days of the week.
The thermostat landscape changed with the introduction of the Nest in 2011 by Nest Labs, a company led by Tony Fadell, generally credited as one of the major forces behind Apple’s iPod. (Google acquired Nest Labs in 2014; Fadell has since moved on to an advisory position at Alphabet, Google’s parent company.) The original Nest was a stylish metal-and-glass Wi-Fi–enabled device, with a bright color screen and integrated smartphone apps—in other words, a device that combined style and functionality in a way never before seen in the category.
The Nest got a lot of publicity, especially when you consider that it’s a thermostat. Within a few months, Nest Labs was slapped with a patent suit by Honeywell, maker of numerous competing thermostats.
But once the Nest was out there, it was hard to deny that the thermostat world had needed a kick in the pants. And five years later, not only have the traditional plastic beige rectangles gained Wi-Fi features and smartphone apps, but other companies have also entered the high-feature, high-design thermostat market, including the upstart Ecobee and the old standards Honeywell, Emerson, and Carrier.
The fact is, a cheap plastic thermostat with basic time programming—the kind people have had for two decades—will do a pretty good job of keeping your house at the right temperature without wasting a lot of money, so long as you put in the effort to program it and remember to shut it off. But that’s the thing: Most people don’t.
“The majority of people who have a programmable thermostat don’t program it, or maybe they program it once and never update it when things change,” said Bronson Shavitz, a Chicago-area contractor who has installed and serviced hundreds of heating and cooling systems over the years.
Smart thermostats spend time doing the thinking that most people just don’t do, turning themselves off when nobody’s home, targeting temperatures only in occupied rooms, and learning your household schedule through observation. Plus, with their sleek chassis and integrated smartphone apps, these thermostats are fun to use.
Nest Labs claims that a learning thermostat (well, its learning thermostat) saves enough energy to pay for itself in as little as two years.
Since the introduction of the Nest, energy companies have begun offering rebates and incentives for their customers to switch to a smart thermostat, and some have even developed their own devices and apps and now offer them for free or at a greatly reduced price to encourage customers to switch. Clearly, these devices provide a larger benefit than simple convenience. Because they can do a better job of scheduling the heating and cooling of your house than you can, they save money and energy.
Among the useful features of smart thermostats is the ability to work as part of a larger smart-home system and to keep developing even after you’ve purchased one. For example, many of the thermostats we tested now integrate with the Amazon Echo, a Wi-Fi–connected speaker that can control many smart-home devices. You can speak commands to Alexa, Echo’s personal assistant, to adjust your climate control. This function came to the thermostats via a software update, so a smart thermostat purchased last year has the same functionality as one bought yesterday.
These over-the-air software updates, while sometimes known to cause issues, are a key feature of smart devices. Shelling out $250 for a thermostat that has the potential to become better as it sits on your wall helps cushion some of the sticker shock. The Nest earns particularly high marks in this area, because whether you bought one in 2011 or 2016, you get the same advanced learning algorithms and smart integrations.
Additionally, all of the thermostats we tested work with one or more smart-home hubs such as SmartThings and Wink, or within a Web-enabled ecosystem like Amazon’s Alexa or IFTTT (If This Then That). The Nest also has its own developer program, Works with Nest, which integrates the company’s thermostat and other products directly with a long and growing list of devices including smart lights, appliances, locks, cars, shades, and garage door openers. This means you can add your thermostat to different smart scenarios and have it react to other actions in your home: It could set itself to Away mode and lock your Kevo smart door lock when you leave your house, for instance, or it could turn up the heat when your Chamberlain MyQ garage door opener activates. These ecosystems are continually growing, meaning the interactions your thermostat is capable of are growing as well (sometimes with the purchase of additional hardware).
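To make those scenarios concrete, here is a toy event-to-action rule table in the spirit of the integrations described above; the device names and actions are hypothetical, and the real ecosystems (Works with Nest, IFTTT, SmartThings) each expose their own APIs:

    # Hypothetical smart-home rules: an event triggers device actions.
    RULES = {
        "front_door_locked_from_outside": [("thermostat", "set_mode", "away")],
        "garage_door_opened":             [("thermostat", "set_heat_f", 70)],
    }

    def handle_event(event: str) -> None:
        for device, action, arg in RULES.get(event, []):
            # a real hub would dispatch this to the device's cloud API
            print(f"{device}: {action}({arg!r})")

    handle_event("garage_door_opened")   # thermostat: set_heat_f(70)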
With the release of the Home app for HomeKit, Apple’s smart-home unification plans have taken a bigger step toward fruition. While the devices are still limited (a hardware update is required for compatibility), you can now create scenes (linking devices together) and control them from outside the home on an iPad; previously you had to use a third-generation Apple TV. This change increases the number of people who will see HomeKit as a viable smart-home option. Even without an iPad permanently residing in your home, you can still talk to and operate HomeKit products using Siri on your iPhone or iPad while you are at home. The system works in the same way Alexa does, and it’s actually a little more pleasant to use than shouting across the room.
The Ecobee3, Ecobee3 Lite, and Honeywell Lyric (released January 2016) are all HomeKit compatible and can communicate with other HomeKit devices to create scenes such as “I’m Home,” which can set your thermostat to your desired temperature and turn on your HomeKit-compatible lights.
Google now offers its own voice-activated speaker similar to Amazon’s Echo, the Google Home. The Home, which integrates with Nest as well as IFTTT, SmartThings, and Philips Hue, allows you to control your Nest thermostat via voice.
Who this is for
Get a smart thermostat if you’re interested in saving more energy and exerting more control over your home environment. If you like the prospect of turning on your heater on your way home from work, or having your home’s temperature adjust intelligently, a smart thermostat will suit you. And, well, these devices just look cooler than those plastic rectangles of old.
If you already have a smart thermostat, such as a first- or second-generation Nest, you don’t need to upgrade. And if you have a big, complex home-automation system that includes a thermostat, you may prefer the interoperability of your current setup to the intelligence and elegance of a Nest or similar thermostat.
If you don’t care much about slick design and attractive user interfaces, you can find cheaper thermostats (available from companies such as Honeywell) that offer Wi-Fi connectivity and some degree of scheduling flexibility. The hardware is dull and interfaces pedestrian, but they’ll do the job and save you a few bucks.
The devices we looked at are designed to be attached to existing heating and cooling systems. Most manufacturers now offer Wi-Fi thermostats of their own, and while they’re generally not as stylish as the models we looked at, they have the advantage of being designed specifically for that manufacturer’s equipment. That offers some serious benefits, including access to special features and a deep understanding of how specific equipment behaves that a more general thermostat can’t have.
The C-wire conundrum
One major caveat with all smart thermostats is the need for a C wire, or “common wire,” which supplies AC power from your furnace to connected devices such as thermostats. Smart thermostats are essentially small computers that require power to operate—even more so if you want to keep their screens illuminated all the time. If your heating and cooling system is equipped with a C wire, you won’t have any concerns about power. The problem is, common wires are not very common in houses.
In the absence of a C wire, both the Nest and the Honeywell Lyric can charge themselves by stealing power from other wires, but that can cause serious side effects, according to contractor Bronson Shavitz. He told us that old-school furnaces are generally resilient enough to provide power for devices such as the Nest and the Lyric, but that the high-tech circuit boards on newer models can be more prone to failure when they’re under stress from the tricks the Nest and Lyric use to charge themselves without a common wire.
Installing a C wire requires hiring an electrician and will add about $150 to your costs. The Ecobee3 includes an entire wiring kit to add a C wire if you don’t have one (for the previous version of this guide, reviewer Jason Snell spent about two hours rewiring his heater to accommodate the wiring kit). The Emerson Sensi is the only thermostat we tested that claims not to need a C wire, but it too draws power from whichever system is not currently in use (for example, the heating system if you’re using the AC). This means that if you have a heat- or air-only system, you will need a C wire.
Note: If the power handling is not correct, the damage to your system can be significant. The expense of replacing a furnace or AC board, plus the cost of professional installation, will probably outweigh the convenience or energy savings of a smart thermostat. Nest addresses the power requirements of its thermostat, including whether a common wire is necessary, in detail on its website, so if you’re unsure whether your system is suited for it, check out this page for C wire information, as well as this page for system compatibility questions and this page for solutions to wiring problems.
If you have more than one zone in your HVAC system, you will need to purchase a separate smart thermostat for each zone. Currently, while all of the smart thermostats we tested are compatible with multizone systems, none can control more than one zone. Even though the Ecobee3 supports remote sensors, those feed only a single thermostat—so if you want more zones, you’ll still need separate thermostats, with their own sensors. However, the Ecobee3 is the only thermostat we tested that allows you to put more than one thermostat into a group so that you can program them to act identically, if you choose.
How we picked and tested
We put these five smart thermostats through their paces to bring you our top pick. Photo: Michael Hession
By eliminating proprietary and basic Wi-Fi–enabled thermostats, we ended up with six finalists: the third-generation Nest, Ecobee’s Ecobee3 and Ecobee3 Lite, Honeywell’s second-generation Lyric, Emerson’s Sensi Wi-Fi thermostat, and Carrier’s Cor. We installed each model ourselves and ran them for three to 10 days in routine operation. We did our testing in a 2,200-square-foot, two-story South Carolina home, running a two-zone HVAC system with an electric heat pump and forced air.
For each thermostat, our testing considered ease of installation and setup, ease of adjusting the temperature, processes for setting a schedule and using smartphone app features, multizone control capabilities, and smart-home interoperability.
A greener grid
China’s embrace of a new electricity-transmission technology holds lessons for others
The case for high-voltage direct-current connectors
Jan 14th 2017
YOU cannot negotiate with nature. From the offshore wind farms of the North Sea to the solar panels glittering in the Atacama desert, renewable energy is often generated in places far from the cities and industrial centres that consume it. To boost renewables and drive down carbon-dioxide emissions, a way must be found to send energy over long distances efficiently.
The technology already exists (see article). Most electricity is transmitted today as alternating current (AC), which works well over short and medium distances. But transmission over long distances requires very high voltages, which can be tricky for AC systems. Ultra-high-voltage direct-current (UHVDC) connectors are better suited to such spans. These high-capacity links not only make the grid greener, but also make it more stable by balancing supply. The same UHVDC links that send power from distant hydroelectric plants, say, can be run in reverse when their output is not needed, pumping water back above the turbines.
Boosters of UHVDC lines envisage a supergrid capable of moving energy around the planet. That is wildly premature. But one country has grasped the potential of these high-capacity links. State Grid, China’s state-owned electricity utility, is halfway through a plan to spend $88bn on UHVDC lines between 2009 and 2020. It wants 23 lines in operation by 2030.
That China has gone furthest in this direction is no surprise. From railways to cities, China’s appetite for big infrastructure projects is legendary (see article). China’s deepest wells of renewable energy are remote—think of the sun-baked Gobi desert, the windswept plains of Xinjiang and the mountain ranges of Tibet where rivers drop precipitously. Concerns over pollution give the government an additional incentive to locate coal-fired plants away from population centres. But its embrace of the technology holds two big lessons for others. The first is a demonstration effect. China shows that UHVDC lines can be built on a massive scale. The largest, already under construction, will have the capacity to power Greater London almost three times over, and will span more than 3,000km.
The second lesson concerns the co-ordination problems that come with long-distance transmission. UHVDCs are as much about balancing interests as grids. The costs of construction are hefty. Utilities that already sell electricity at high prices are unlikely to welcome competition from suppliers of renewable energy; consumers in renewables-rich areas who buy electricity at low prices may balk at the idea of paying more because power is being exported elsewhere. Reconciling such interests is easier the fewer the utilities involved—and in China, State Grid has a monopoly.
That suggests it will be simpler for some countries than others to follow China’s lead. Developing economies that lack an established electricity infrastructure have an advantage. Solar farms on Africa’s plains and hydroplants on its powerful rivers can use UHVDC lines to get energy to growing cities. India has two lines on the drawing-board, and should have more.
Things are more complicated in the rich world. Europe’s utilities work pretty well together but a cross-border UHVDC grid will require a harmonised regulatory framework. America is the biggest anomaly. It is a continental-sized economy with the wherewithal to finance UHVDCs. It is also horribly fragmented. There are 3,000 utilities, each focused on supplying power to its own customers. Consumers a few states away are not a priority, no matter how much sense it might make to send them electricity. A scheme to connect the three regional grids in America is stuck. The only way that America will create a green national grid will be if the federal government throws its weight behind it.
Building a UHVDC network does not solve every energy problem. Security of supply remains an issue, even within national borders: any attacker who wants to disrupt the electricity supply to China’s east coast will soon have a 3,000km-long cable to strike. Other routes to a cleaner grid are possible, such as distributed solar power and battery storage. But to bring about a zero-carbon grid, UHVDC lines will play a role. China has its foot on the gas. Others should follow.
This article appeared in the Leaders section of the print edition under the headline “A greener grid”
“Distributed generation” (DG) is what the electric utility industry calls solar panels, wind turbines, etc.
The article points out what is well-known: even with aggressive use of solar, any DG customer still needs the grid ….. at least this is true until a reasonable-cost method for storing electricity at the point of generation comes on-line (at which time perhaps a true “off-grid” location is possible).
So …. for a DG customer …. the grid becomes a back-up, a source of power when the sun does not shine, the wind does not blow, etc.
So the fairness question is: should a DG customer pay for their fair share of the grid? Asked this way, the answer is obvious: yes. Just as people pay for insurance, people should be asked to pay for the cost of the grid.
Unfortunately, these costs are astronomical. This paper claims that they are 55% of total costs!
“In this example, the typical residential customer consumes, on average, about 1000 kWh per month and pays an average monthly bill of about $110 (based on EIA data). About half of that bill (i.e., $60 per month) covers charges related to the non-energy services provided by the grid….”
Excellent comment on the state of the art of voice recognition in the Economist. The entire article is below.
I think it’s fair to say, as the article does, “we’re in 1994 for voice.” In other words, just like the internet had core technology in place in 1994, no one really had a clue about what it would ultimately mean to society.
My guess is …. it will be a game-changer of the first order.
ECHO, SIRI, CORTANA – the beginning of a new era!
Just like the GUI, the mouse, and WINDOWS allowed computers to go mainstream, my instinct is that removing the keyboard as a requirement will take the computer from a daily tool to a second-by-second tool. The Apple Watch, which looks rather benign right now, could easily become the central means of communication. And the hard-to-use keyboard on the iPhone will increasingly become a white elephant – rarely used and quaint.
FINDING A VOICE
Language: Finding a voice
Computers have got much better at translation, voice recognition and speech synthesis, says Lane Greene. But they still don’t understand the meaning of language.
“I’M SORRY, Dave. I’m afraid I can’t do that.” With chilling calm, HAL 9000, the on-board computer in “2001: A Space Odyssey”, refuses to open the doors to Dave Bowman, an astronaut who had ventured outside the ship. HAL’s decision to turn on his human companion reflected a wave of fear about intelligent computers.
When the film came out in 1968, computers that could have proper conversations with humans seemed nearly as far away as manned flight to Jupiter. Since then, humankind has progressed quite a lot farther with building machines that it can talk to, and that can respond with something resembling natural speech. Even so, communication remains difficult. If “2001” had been made to reflect the state of today’s language technology, the conversation might have gone something like this: “Open the pod bay doors, Hal.” “I’m sorry, Dave. I didn’t understand the question.” “Open the pod bay doors, Hal.” “I have a list of eBay results about pod doors, Dave.”
Creative and truly conversational computers able to handle the unexpected are still far off. Artificial-intelligence (AI) researchers can only laugh when asked about the prospect of an intelligent HAL, Terminator or Rosie (the sassy robot housekeeper in “The Jetsons”). Yet although language technologies are nowhere near ready to replace human beings, except in a few highly routine tasks, they are at last about to become good enough to be taken seriously. They can help people spend more time doing interesting things that only humans can do. After six decades of work, much of it with disappointing outcomes, the past few years have produced results much closer to what early pioneers had hoped for.
Speech recognition has made remarkable advances. Machine translation, too, has gone from terrible to usable for getting the gist of a text, and may soon be good enough to require only modest editing by humans. Computerised personal assistants, such as Apple’s Siri, Amazon’s Alexa, Google Now and Microsoft’s Cortana, can now take a wide variety of questions, structured in many different ways, and return accurate and useful answers in a natural-sounding voice. Alexa can even respond to a request to “tell me a joke”, but only by calling upon a database of corny quips. Computers lack a sense of humour.
When Apple introduced Siri in 2011 it was frustrating to use, so many people gave up. Only around a third of smartphone owners use their personal assistants regularly, even though 95% have tried them at some point, according to Creative Strategies, a consultancy. Many of those discouraged users may not realise how much they have improved.
In 1966 John Pierce was working at Bell Labs, the research arm of America’s telephone monopoly. Having overseen the team that had built the first transistor and the first communications satellite, he enjoyed a sterling reputation, so he was asked to take charge of a report on the state of automatic language processing for the National Academy of Sciences. In the period leading up to this, scholars had been promising automatic translation between languages within a few years.
But the report was scathing. Reviewing almost a decade of work on machine translation and automatic speech recognition, it concluded that the time had come to spend money “hard-headedly toward important, realistic and relatively short-range goals”—another way of saying that language-technology research had overpromised and underdelivered. In 1969 Pierce wrote that both the funders and eager researchers had often fooled themselves, and that “no simple, clear, sure knowledge is gained.” After that, America’s government largely closed the money tap, and research on language technology went into hibernation for two decades.
The story of how it emerged from that hibernation is both salutary and surprisingly workaday, says Mark Liberman. As professor of linguistics at the University of Pennsylvania and head of the Linguistic Data Consortium, a huge trove of texts and recordings of human language, he knows a thing or two about the history of language technology. In the bad old days researchers kept their methods in the dark and described their results in ways that were hard to evaluate. But beginning in the 1980s, Charles Wayne, then at America’s Defence Advanced Research Projects Agency, encouraged them to try another approach: the “common task”.
Step by step
Researchers would agree on a common set of practices, whether they were trying to teach computers speech recognition, speaker identification, sentiment analysis of texts, grammatical breakdown, language identification, handwriting recognition or anything else. They would set out the metrics they were aiming to improve on, share the data sets used to train their software and allow their results to be tested by neutral outsiders. That made the process far more transparent. Funding started up again and language technologies began to improve, though very slowly.
Many early approaches to language technology—and particularly translation—got stuck in a conceptual cul-de-sac: the rules-based approach. In translation, this meant trying to write rules to analyse the text of a sentence in the language of origin, breaking it down into a sort of abstract “interlanguage” and rebuilding it according to the rules of the target language. These approaches showed early promise. But language is riddled with ambiguities and exceptions, so such systems were hugely complicated and easily broke down when tested on sentences beyond the simple set they had been designed for.

Nearly all language technologies began to get a lot better with the application of statistical methods, often called a “brute force” approach. This relies on software scouring vast amounts of data, looking for patterns and learning from precedent. For example, in parsing language (breaking it down into its grammatical components), the software learns from large bodies of text that have already been parsed by humans. It uses what it has learned to make its best guess about a previously unseen text. In machine translation, the software scans millions of words already translated by humans, again looking for patterns. In speech recognition, the software learns from a body of recordings and the transcriptions made by humans.

Thanks to the growing power of processors, falling prices for data storage and, most crucially, the explosion in available data, this approach eventually bore fruit. Mathematical techniques that had been known for decades came into their own, and big companies with access to enormous amounts of data were poised to benefit. People who had been put off by the hilariously inappropriate translations offered by online tools like BabelFish began to have more faith in Google Translate. Apple persuaded millions of iPhone users to talk not only on their phones but to them.

The final advance, which began only about five years ago, came with the advent of deep learning through digital neural networks (DNNs). These are often touted as having qualities similar to those of the human brain: “neurons” are connected in software, and connections can become stronger or weaker in the process of learning.
But Nils Lenke, head of research for Nuance, a language-technology company, explains matter-of-factly that “DNNs are just another kind of mathematical model,” the basis of which had been well understood for decades. What changed was the hardware being used. Almost by chance, DNN researchers discovered that the graphical processing units (GPUs) used to render graphics fluidly in applications like video games were also brilliant at handling neural networks. In computer graphics, basic small shapes move according to fairly simple rules, but there are lots of shapes and many rules, requiring vast numbers of simple calculations. The same GPUs are used to fine-tune the weights assigned to “neurons” in DNNs as they scour data to learn.

The technique has already produced big leaps in quality for all kinds of deep learning, including deciphering handwriting, recognising faces and classifying images. Now it is helping to improve all manner of language technologies, often bringing enhancements of up to 30%. That has shifted language technology from usable at a pinch to really rather good. But so far no one has quite worked out what will move it on from merely good to reliably great.
Speech recognition: I hear you
Computers have made huge strides in understanding human speech
WHEN a person speaks, air is forced out through the lungs, making the vocal cords vibrate, which sends out characteristic wave patterns through the air. The features of the sounds depend on the arrangement of the vocal organs, especially the tongue and the lips, and the characteristic nature of the sounds comes from peaks of energy in certain frequencies. The vowels have frequencies called “formants”, two of which are usually enough to differentiate one vowel from another. For example, the vowel in the English word “fleece” has its first two formants at around 300Hz and 3,000Hz. Consonants have their own characteristic features.
In principle, it should be easy to turn this stream of sound into transcribed speech. As in other language technologies, machines that recognise speech are trained on data gathered earlier. In this instance, the training data are sound recordings transcribed to text by humans, so that the software has both a sound and a text input. All it has to do is match the two. It gets better and better at working out how to transcribe a given chunk of sound in the same way as humans did in the training data. The traditional matching approach was a statistical technique called a hidden Markov model (HMM), making guesses based on what was done before. More recently speech recognition has also gained from deep learning.
English has about 44 “phonemes”, the units that make up the sound system of a language. P and b are different phonemes, because they distinguish words like pat and bat. But in English p with a puff of air, as in “party”, and p without a puff of air, as in “spin”, are not different phonemes, though they are in other languages. If a computer hears the phonemes s, p, i and n back to back, it should be able to recognise the word “spin”.
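To make the hidden Markov model idea from above concrete with this “spin” example, here is a toy Viterbi decoder; the probabilities are invented for illustration, standing in for what a real system learns from transcribed recordings:

    states = ["s", "p", "i", "n"]
    start = {"s": 0.7, "p": 0.1, "i": 0.1, "n": 0.1}
    trans = {                      # P(next phoneme | current phoneme)
        "s": {"s": 0.1, "p": 0.7, "i": 0.1, "n": 0.1},
        "p": {"s": 0.1, "p": 0.1, "i": 0.7, "n": 0.1},
        "i": {"s": 0.1, "p": 0.1, "i": 0.1, "n": 0.7},
        "n": {"s": 0.25, "p": 0.25, "i": 0.25, "n": 0.25},
    }
    emit = {                       # P(acoustic label | phoneme)
        "s": {"hiss": 0.8, "burst": 0.1, "voiced": 0.1},
        "p": {"hiss": 0.1, "burst": 0.8, "voiced": 0.1},
        "i": {"hiss": 0.1, "burst": 0.1, "voiced": 0.8},
        "n": {"hiss": 0.1, "burst": 0.1, "voiced": 0.8},
    }

    def viterbi(observations):
        # best[s] = (probability of the likeliest path ending in s, that path)
        best = {s: (start[s] * emit[s][observations[0]], [s]) for s in states}
        for obs in observations[1:]:
            new_best = {}
            for s in states:
                prev, (p, path) = max(
                    best.items(), key=lambda kv: kv[1][0] * trans[kv[0]][s]
                )
                new_best[s] = (p * trans[prev][s] * emit[s][obs], path + [s])
            best = new_best
        return max(best.values(), key=lambda t: t[0])[1]

    print(viterbi(["hiss", "burst", "voiced", "voiced"]))  # ['s', 'p', 'i', 'n']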
But the nature of live speech makes this difficult for machines. Sounds are not pronounced individually, one phoneme after the other; they mostly come in a constant stream, and finding the boundaries is not easy. Phonemes also differ according to the context. (Compare the l sound at the beginning of “light” with that at the end of “full”.)
Speakers differ in timbre and pitch of voice, and in accent. Conversation is far less clear than careful dictation. People stop and restart much more often than they realise.
All the same, technology has gradually mitigated many of these problems, so error rates in speech-recognition software have fallen steadily over the years—and then sharply with the introduction of deep learning. Microphones have got better and cheaper. With ubiquitous wireless internet, speech recordings can easily be beamed to computers in the cloud for analysis, and even smartphones now often have computers powerful enough to carry out this task.
Bear arms or bare arms?
Perhaps the most important feature of a speech-recognition system is its set of expectations about what someone is likely to say, or its “language model”. Like other training data, the language models are based on large amounts of real human speech, transcribed into text. When a speech-recognition system “hears” a stream of sound, it makes a number of guesses about what has been said, then calculates the odds that it has found the right one, based on the kinds of words, phrases and clauses it has seen earlier in the training text.
At the level of phonemes, each language has strings that are permitted (in English, a word may begin with str-, for example) or banned (an English word cannot start with tsr-). The same goes for words. Some strings of words are more common than others. For example, “the” is far more likely to be followed by a noun or an adjective than by a verb or an adverb. In making guesses about homophones, the computer will have remembered that in its training data the phrase “the right to bear arms” came up much more often than “the right to bare arms”, and will thus have made the right guess.
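A toy version of that homophone guess, with made-up counts standing in for a real training corpus:

    # Bigram counts a speech recogniser might have accumulated from
    # training text (the numbers here are invented for illustration).
    bigram_counts = {
        ("bear", "arms"): 950,
        ("bare", "arms"): 50,
    }

    def pick_homophone(next_word, candidates):
        # choose the candidate whose pairing with the following word
        # was most frequent in the training data
        return max(candidates, key=lambda w: bigram_counts.get((w, next_word), 0))

    print(pick_homophone("arms", ["bear", "bare"]))   # bear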
Training on a specific speaker greatly cuts down on the software’s guesswork. Just a few minutes of reading training text into software like Dragon Dictate, made by Nuance, produces a big jump in accuracy. For those willing to train the software for longer, the improvement continues to something close to 99% accuracy (meaning that of each hundred words of text, not more than one is wrongly added, omitted or changed). A good microphone and a quiet room help.
Advance knowledge of what kinds of things the speaker might be talking about also increases accuracy. Words like “phlebitis” and “gastrointestinal” are not common in general discourse, and uncommon words are ranked lower in the probability tables the software uses to guess what it has heard. But these words are common in medicine, so creating software trained to look out for such words considerably improves the result. This can be done by feeding the system a large number of documents written by the speaker whose voice is to be recognised; common words and phrases can be extracted to improve the system’s guesses.
As with all other areas of language technology, deep learning has sharply brought down error rates. In October Microsoft announced that its latest speech-recognition system had achieved parity with human transcribers in recognising the speech in the Switchboard Corpus, a collection of thousands of recorded conversations in which participants are talking with a stranger about a randomly chosen subject.
Error rates on the Switchboard Corpus are a widely used benchmark, so claims of quality improvements can be easily compared. Fifteen years ago quality had stalled, with word-error rates of 20-30%. Microsoft’s latest system, which has six neural networks running in parallel, has reached 5.9% (see chart), the same as a human transcriber’s. Xuedong Huang, Microsoft’s chief speech scientist, says that he expected it to take two or three years to reach parity with humans. It got there in less than one.
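Word-error rate itself is straightforward to compute: the minimum number of word substitutions, insertions, and deletions needed to turn the hypothesis into the reference, divided by the reference length. A small sketch:

    def word_error_rate(reference: str, hypothesis: str) -> float:
        ref, hyp = reference.split(), hypothesis.split()
        # d[i][j] = edit distance between first i reference words
        # and first j hypothesis words
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
                d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
        return d[len(ref)][len(hyp)] / len(ref)

    # one substitution in five words -> 0.2
    print(word_error_rate("the right to bear arms", "the right to bare arms"))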
The improvements in the lab are now being applied to products in the real world. More and more cars are being fitted with voice-activated controls of various kinds; the vocabulary involved is limited (there are only so many things you might want to say to your car), which ensures high accuracy. Microphones—or often arrays of microphones with narrow fields of pick-up—are getting better at identifying the relevant speaker among a group.
Some problems remain. Children and elderly speakers, as well as people moving around in a room, are harder to understand. Background noise remains a big concern; if it is different from that in the training data, the software finds it harder to generalise from what it has learned. So Microsoft, for example, offers businesses a product called CRIS that lets users customise speech-recognition systems for the background noise, special vocabulary and other idiosyncrasies they will encounter in that particular environment. That could be useful anywhere from a noisy factory floor to a care home for the elderly.
But for a computer to know what a human has said is only a beginning. Proper interaction between the two, of the kind that comes up in almost every science-fiction story, calls for machines that can speak back.
Hasta la vista, robot voice
Machines are starting to sound more like humans
“I’LL be back.” “Hasta la vista, baby.” Arnold Schwarzenegger’s Teutonic drone in the “Terminator” films is world-famous. But in this instance film-makers looking into the future were overly pessimistic. Some applications do still feature a monotonous “robot voice”, but that is changing fast.
[Audio examples omitted from this copy: a basic and an advanced sample from the OS X speech synthesiser, and a sample from Amazon’s “Polly” synthesiser.]
Creating speech is roughly the inverse of understanding it. Again, it requires a basic model of the structure of speech. What are the sounds in a language, and how do they combine? What words does it have, and how do they combine in sentences? These are well-understood questions, and most systems can now generate sound waves that are a fair approximation of human speech, at least in short bursts.
Heteronyms require special care. How should a computer pronounce a word like “lead”, which can be a present-tense verb or a noun for a heavy metal, pronounced quite differently? Once again a language model can make accurate guesses: “Lead us not into temptation” can be parsed for its syntax, and once the software has worked out that the first word is almost certainly a verb, it can cause it to be pronounced to rhyme with “reed”, not “red”.
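A deliberately naive sketch of that idea; real systems use a trained part-of-speech tagger and a full language model rather than the crude rule below:

    # Toy heteronym disambiguation: a crude part-of-speech guess
    # picks the pronunciation.
    PRONUNCIATIONS = {("lead", "VERB"): "/li:d/", ("lead", "NOUN"): "/lEd/"}

    def pronounce(sentence: str, word: str = "lead") -> str:
        tokens = sentence.lower().split()
        i = tokens.index(word)
        # naive heuristic: after an article it's a noun, otherwise a verb
        pos = "NOUN" if i > 0 and tokens[i - 1] in {"the", "a", "some"} else "VERB"
        return PRONUNCIATIONS[(word, pos)]

    print(pronounce("lead us not into temptation"))   # /li:d/
    print(pronounce("the lead was heavy"))            # /lEd/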
Traditionally, text-to-speech models have been “concatenative”, consisting of very short segments recorded by a human and then strung together as in the acoustic model described above. More recently, “parametric” models have been generating raw audio without the need to record a human voice, which makes these systems more flexible but less natural-sounding.
DeepMind, an artificial-intelligence company bought by Google in 2014, has announced a new way of synthesising speech, again using deep neural networks. The network is trained on recordings of people talking, and on the texts that match what they say. Given a text to reproduce as speech, it churns out a far more fluent and natural-sounding voice than the best concatenative and parametric approaches.
The last step in generating speech is giving it prosody—generally, the modulation of speed, pitch and volume to convey an extra (and critical) channel of meaning. In English, “a German teacher”, with the stress on “teacher”, can teach anything but must be German. But “a German teacher” with the emphasis on “German” is usually a teacher of German (and need not be German). Words like prepositions and conjunctions are not usually stressed. Getting machines to put the stresses in the correct places is about 50% solved, says Mark Liberman of the University of Pennsylvania.
Many applications do not require perfect prosody. A satellite-navigation system giving instructions on where to turn uses just a small number of sentence patterns, and prosody is not important. The same goes for most single-sentence responses given by a virtual assistant on a smartphone.
But prosody matters when someone is telling a story. Pitch, speed and volume can be used to pass quickly over things that are already known, or to build interest and tension for new information. Myriad tiny clues communicate the speaker’s attitude to his subject. The phrase “a German teacher”, with stress on the word “German”, may, in the context of a story, not be a teacher of German, but a teacher being explicitly contrasted with a teacher who happens to be French or British.
Text-to-speech engines are not much good at using context to provide such accentuation, and where they do, it rarely extends beyond a single sentence. When Alexa, the assistant in Amazon’s Echo device, reads a news story, her prosody is jarringly un-humanlike. Talking computers have yet to learn how to make humans want to listen.
Machine translation: Beyond Babel
Computer translations have got strikingly better, but still need human input
IN “STAR TREK” it was a hand-held Universal Translator; in “The Hitchhiker’s Guide to the Galaxy” it was the Babel Fish popped conveniently into the ear. In science fiction, the meeting of distant civilisations generally requires some kind of device to allow them to talk. High-quality automated translation seems even more magical than other kinds of language technology because many humans struggle to speak more than one language, let alone translate from one to another.
The idea has been around since the 1950s, and computerised translation is still known by the quaint moniker “machine translation” (MT). It goes back to the early days of the cold war, when American scientists were trying to get computers to translate from Russian. They were inspired by the code-breaking successes of the second world war, which had led to the development of computers in the first place. To them, a scramble of Cyrillic letters on a page of Russian text was just a coded version of English, and turning it into English was just a question of breaking the code.
Scientists at IBM and Georgetown University were among those who thought that the problem would be cracked quickly. Having programmed just six rules and a vocabulary of 250 words into a computer, they gave a demonstration in New York on January 7th 1954 and proudly produced 60 automated translations, including that of “Mi pyeryedayem mislyi posryedstvom ryechyi,” which came out correctly as “We transmit thoughts by means of speech.” Leon Dostert of Georgetown, the lead scientist, breezily predicted that fully realised MT would be “an accomplished fact” in three to five years.
Instead, after more than a decade of work, the report in 1966 by a committee chaired by John Pierce, mentioned in the introduction to this report, recorded bitter disappointment with the results and urged researchers to focus on narrow, achievable goals such as automated dictionaries. Government-sponsored work on MT went into near-hibernation for two decades. What little was done was carried out by private companies. The most notable of them was Systran, which provided rough translations, mostly to America’s armed forces.
La plume de mon ordinateur
The scientists got bogged down by their rules-based approach. Having done relatively well with their six-rule system, they came to believe that if they programmed in more rules, the system would become more sophisticated and subtle. Instead, it became more likely to produce nonsense. Adding extra rules, in the modern parlance of software developers, did not “scale”.
Besides the difficulty of programming grammar’s many rules and exceptions, some early observers noted a conceptual problem. The meaning of a word often depends not just on its dictionary definition and the grammatical context but the meaning of the rest of the sentence. Yehoshua Bar-Hillel, an Israeli MT pioneer, realised that “the pen is in the box” and “the box is in the pen” would require different translations for “pen”: any pen big enough to hold a box would have to be an animal enclosure, not a writing instrument.
How could machines be taught enough rules to make this kind of distinction? They would have to be provided with some knowledge of the real world, a task far beyond the machines or their programmers at the time.

Two decades later, IBM stumbled on an approach that would revive optimism about MT. Its Candide system was the first serious attempt to use statistical probabilities rather than rules devised by humans for translation. Statistical, “phrase-based” machine translation, like speech recognition, needed training data to learn from. Candide used Canada’s Hansard, which publishes that country’s parliamentary debates in French and English, providing a huge amount of data for that time. The phrase-based approach would ensure that the translation of a word would take the surrounding words properly into account.
But quality did not take a leap until Google, which had set itself the goal of indexing the entire internet, decided to use those data to train its translation engines; in 2007 it switched from a rules-based engine (provided by Systran) to its own statistics-based system. To build it, Google trawled about a trillion web pages, looking for any text that seemed to be a translation of another—for example, pages designed identically but with different words, and perhaps a hint such as the address of one page ending in /en and the other ending in /fr. According to Macduff Hughes, chief engineer on Google Translate, a simple approach using vast amounts of data seemed more promising than a clever one with fewer data.
Training on parallel texts (which linguists call corpora, the plural of corpus) creates a “translation model” that generates not one but a series of possible translations in the target language. The next step is running these possibilities through a monolingual language model in the target language. This is, in effect, a set of expectations about what a well-formed and typical sentence in the target language is likely to be. Single-language models are not too hard to build. (Parallel human-translated corpora are hard to come by; large amounts of monolingual training data are not.) As with the translation model, the language model uses a brute-force statistical approach to learn from the training data, then ranks the outputs from the translation model in order of plausibility.
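In miniature, that two-model pipeline looks like this; all the probabilities are invented for illustration (the candidates nod to Bar-Hillel’s “pen” example above):

    # The translation model proposes candidates with P(candidate | source);
    # a monolingual language model supplies P(candidate) and reranks them.
    candidates = {
        "the pen is in the box": 0.40,
        "the feather is in the box": 0.35,
        "the enclosure is in the box": 0.25,
    }
    language_model = {
        "the pen is in the box": 0.50,
        "the feather is in the box": 0.10,
        "the enclosure is in the box": 0.05,
    }

    best = max(candidates, key=lambda c: candidates[c] * language_model[c])
    print(best)    # "the pen is in the box" (0.40 * 0.50 beats the rest)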
Statistical machine translation rekindled optimism in the field. Internet users quickly discovered that Google Translate was far better than the rules-based online engines they had used before, such as BabelFish. Such systems still make mistakes—sometimes minor, sometimes hilarious, sometimes so serious or so many as to make nonsense of the result. And language pairs like Chinese-English, which are unrelated and structurally quite different, make accurate translation harder than pairs of related languages like English and German. But more often than not, Google Translate and its free online competitors, such as Microsoft’s Bing Translator, offer a usable approximation.
Such systems are set to get better, again with the help of deep learning from digital neural networks. The Association for Computational Linguistics has been holding workshops on MT every summer since 2006. One of the events is a competition between MT engines turned loose on a collection of news text. In August 2016, in Berlin, neural-net-based MT systems were the top performers (out of 102), a first.
Now Google has released its own neural-net-based engine for eight language pairs, closing much of the quality gap between its old system and a human translator. This is especially true for closely related languages (like the big European ones) with lots of available training data. The results are still distinctly imperfect, but far smoother and more accurate than before. Translations between English and (say) Chinese and Korean are not as good yet, but the neural system has brought a clear improvement here too.
The Coca-Cola factor
Neural-network-based translation actually uses two networks. One is an encoder. Each word of an input sentence is converted into a multidimensional vector (a series of numerical values), and the encoding of each new word takes into account what has happened earlier in the sentence. Marcello Federico of Italy’s Fondazione Bruno Kessler, a private research organisation, uses an intriguing analogy to compare neural-net translation with the phrase-based kind. The latter, he says, is like describing Coca-Cola in terms of sugar, water, caffeine and other ingredients. By contrast, the former encodes features such as liquidness, darkness, sweetness and fizziness.
Once the source sentence is encoded, a decoder network generates the translation one word at a time, once again taking account of the immediately preceding word. This can cause problems when the meaning of words such as pronouns depends on words mentioned much earlier in a long sentence. The problem is mitigated by an “attention model”, which helps the decoder maintain focus on the relevant words elsewhere in the sentence, outside the immediate context.
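A bare-bones sketch of the attention computation, using numpy, may help. The vectors here are random stand-ins for the encoder's learned word encodings; only the shape of the calculation reflects the real thing.

```python
# Attention in miniature: score every source position against the
# decoder's current state, softmax the scores into weights, and mix
# the source encodings into a single "context" vector.
import numpy as np

rng = np.random.default_rng(0)
source_words = ["the", "box", "is", "in", "the", "pen"]
encoder_states = rng.normal(size=(len(source_words), 8))  # one vector per word
decoder_state = rng.normal(size=8)  # state while emitting one target word

scores = encoder_states @ decoder_state   # one score per source word
weights = np.exp(scores - scores.max())   # softmax, numerically stable
weights /= weights.sum()

# The decoder can now "look back" at any source word, however distant,
# in proportion to its weight.
context = weights @ encoder_states

for word, w in zip(source_words, weights):
    print(f"{word:>4s}: {w:.2f}")
```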
Neural-network translation requires heavy-duty computing power, both for the original training of the system and in use. The heart of such a system can be the GPUs that made the deep-learning revolution possible, or specialised hardware like Google’s Tensor Processing Units (TPUs). Smaller translation companies and researchers usually rent this kind of processing power in the cloud. But the data sets used in neural-network training do not need to be as extensive as those for phrase-based systems, which should give smaller outfits a chance to compete with giants like Google.
Fully automated, high-quality machine translation is still a long way off. For now, several problems remain. All current machine translations proceed sentence by sentence. If the translation of such a sentence depends on the meaning of earlier ones, automated systems will make mistakes. Long sentences, despite tricks like the attention model, can be hard to translate. And neural-net-based systems in particular struggle with rare words.
Training data, too, are scarce for many language pairs. They are plentiful between European languages, since the European Union’s institutions churn out vast amounts of material translated by humans between the EU’s 24 official languages. But for smaller languages such resources are thin on the ground. For example, there are few Greek-Urdu parallel texts available on which to train a translation engine. So a system that claims to offer such translation is in fact usually running it through a bridging language, nearly always English. That involves two translations rather than one, multiplying the chance of errors: if each leg preserves the meaning, say, 90% of the time, the chained result does so only about 81% of the time.
Even if machine translation is not yet perfect, technology can already help humans translate much more quickly and accurately. “Translation memories”, software that stores already translated words and segments, first came into use as early as the 1980s. For someone who frequently translates the same kind of material (such as instruction manuals), they serve up the bits that have already been translated, saving lots of duplication and time.
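The mechanism is simple enough to sketch in Python. The stored segment pairs below are invented; real translation memories hold many thousands of segments and use more refined fuzzy-matching, but the exact-then-fuzzy lookup is the heart of it.

```python
# A translation memory in miniature: exact hits are served directly,
# near-misses are offered as fuzzy matches for the translator to edit.
import difflib

memory = {
    "Press the power button.": "Appuyez sur le bouton d'alimentation.",
    "Remove the battery cover.": "Retirez le couvercle de la batterie.",
}

def lookup(segment, threshold=0.85):
    if segment in memory:                      # exact match
        return memory[segment], 1.0
    close = difflib.get_close_matches(segment, list(memory), n=1,
                                      cutoff=threshold)
    if close:                                  # fuzzy match, needs review
        ratio = difflib.SequenceMatcher(None, segment, close[0]).ratio()
        return memory[close[0]], ratio
    return None, 0.0                           # translate from scratch

print(lookup("Press the power button."))        # exact: score 1.0
print(lookup("Press the power button twice."))  # fuzzy: score ~0.88
```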
A similar trick is to train MT engines on text dealing with a narrow real-world domain, such as medicine or the law. As software techniques are refined and computers get faster, training becomes easier and quicker. Free software such as Moses, developed with the support of the EU and used by some of its in-house translators, can be trained by anyone with parallel corpora to hand. A specialist in medical translation, for instance, can train the system on medical translations only, which makes its output far more accurate.
At the other end of linguistic sophistication, an MT engine can be optimised for the shorter and simpler language people use in speech to spew out rough but near-instantaneous speech-to-speech translations. This is what Microsoft’s Skype Translator does. Its quality is improved by being trained on speech (things like film subtitles and common spoken phrases) rather than the kind of parallel text produced by the European Parliament.
Translation management has also benefited from innovation, with clever software allowing companies quickly to combine the best of MT, translation memory, customisation by the individual translator and so on. Translation-management software aims to cut out the agencies that have been acting as middlemen between clients and an army of freelance translators. Jack Welde, the founder of Smartling, an industry favourite, says that in future translation customers will choose how much human intervention is needed for a translation. A quick automated one will do for low-stakes content with a short life, but the most important content will still require a fully hand-crafted and edited version. Noting that MT has both determined boosters and committed detractors, Mr Welde says he is neither: “If you take a dogmatic stance, you’re not optimised for the needs of the customer.”
Translation software will go on getting better. Not only will engineers keep tweaking their statistical models and neural networks, but users themselves will make improvements to their own systems. For example, a small but much-admired startup, Lilt, uses phrase-based MT as the basis for a translation, but an easy-to-use interface allows the translator to correct and improve the MT system’s output. Every time this is done, the corrections are fed back into the translation engine, which learns and improves in real time. Users can build several different memories—a medical one, a financial one and so on—which will help with future translations in that specialist field.
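In outline (and only in outline; Lilt's actual engine is a full statistical system, not a lookup table), the feedback loop works like this:

```python
# A schematic feedback loop: each human correction is written back
# into the engine's lookup table, so the next occurrence of the same
# source word is translated correctly. All entries are invented.
phrase_table = {"hello": "salut"}

def translate(words):
    # Words absent from the table pass through untranslated.
    return [phrase_table.get(w, w) for w in words]

def correct(source_word, fixed_translation):
    # The translator's fix is fed straight back into the engine.
    phrase_table[source_word] = fixed_translation

print(translate(["hello", "world"]))   # ['salut', 'world']
correct("world", "monde")
print(translate(["hello", "world"]))   # ['salut', 'monde'] -- learned
```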
TAUS, an industry group, recently issued a report on the state of the translation industry saying that “in the past few years the translation industry has burst with new tools, platforms and solutions.” Last year Jaap van der Meer, TAUS’s founder and director, wrote a provocative blogpost entitled “The Future Does Not Need Translators”, arguing that the quality of MT will keep improving, and that for many applications less-than-perfect translation will be good enough.
The “translator” of the future is likely to be more like a quality-control expert, deciding which texts need the most attention to detail and editing the output of MT software. That may be necessary because computers, no matter how sophisticated they have become, cannot yet truly grasp what a text means.
Meaning and machine intelligence: What are you talking about?
Machines cannot conduct proper conversations with humans because they do not understand the world
IN “BLACK MIRROR”, a British science-fiction satire series set in a dystopian near future, a young woman loses her boyfriend in a car accident. A friend offers to help her deal with her grief. The dead man was a keen social-media user, and his archived accounts can be used to recreate his personality. Before long she is messaging with a facsimile, then speaking to one. As the system learns to mimic him ever better, he becomes increasingly real.
This is not quite as bizarre as it sounds. Computers today can already produce an eerie echo of human language if fed with the appropriate material. What they cannot yet do is have true conversations. Truly robust interaction between man and machine would require a broad understanding of the world. In the absence of that, computers are not able to talk about a wide range of topics, follow long conversations or handle surprises.
Machines trained to do a narrow range of tasks, though, can perform surprisingly well. The most obvious examples are the digital assistants created by the technology giants. Users can ask them questions in a variety of natural ways: “What’s the temperature in London?” “How’s the weather outside?” “Is it going to be cold today?” The assistants know a few things about users, such as where they live and who their family are, so they can be personal, too: “How’s my commute looking?” “Text my wife I’ll be home in 15 minutes.”
And they get better with time. Apple’s Siri receives 2bn requests per week, which (after being anonymised) are used for further teaching. For example, Apple says Siri knows every possible way that users ask about a sports score. She also has a delightful answer for children who ask about Father Christmas. Microsoft learned from some of its previous natural-language platforms that about 10% of human interactions were “chitchat”, from “tell me a joke” to “who’s your daddy?”, and used such chat to teach its digital assistant, Cortana.
The writing team for Cortana includes two playwrights, a poet, a screenwriter and a novelist. Google hired writers from Pixar, an animated-film studio, and The Onion, a satirical newspaper, to make its new Google Assistant funnier. No wonder people often thank their digital helpers for a job well done. The assistants’ replies range from “My pleasure, as always” to “You don’t need to thank me.”
Good at grammar
How do natural-language platforms know what people want? They not only recognise the words a person uses, but break down speech for both grammar and meaning. Grammar parsing is relatively advanced; it is the domain of the well-established field of “natural-language processing”. But meaning comes under the heading of “natural-language understanding”, which is far harder.
First, parsing. Most people are not very good at analysing the syntax of sentences, but computers have become quite adept at it, even though most sentences are ambiguous in ways humans are rarely aware of. Take a sign on a public fountain that says, “This is not drinking water.” Humans understand it to mean that the water (“this”) is not a certain kind of water (“drinking water”). But a computer might just as easily parse it to say that “this” (the fountain) is not at present doing something (“drinking water”).
As sentences get longer, the number of grammatically possible but nonsensical options multiplies exponentially. How can a machine parser know which is the right one? It helps for it to know that some combinations of words are more common than others: the phrase “drinking water” is widely used, so parsers trained on large volumes of English will rate those two words as likely to be joined in a noun phrase. And some structures are more common than others: “noun verb noun noun” may be much more common than “noun noun verb noun”. A machine parser can compute the overall probability of all combinations and pick the likeliest.
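A toy illustration in Python: score each candidate parse of the fountain sign as the product of the probabilities of the patterns it uses. The probabilities are invented stand-ins for statistics a parser would gather from a large corpus.

```python
# Two candidate parses of "this is not drinking water", each scored as
# the product of (invented) pattern probabilities.
parses = {
    # The intended reading: "drinking water" is a compound noun.
    "this / is not / [drinking water]": [
        ("'drinking water' as a noun phrase", 0.9),
        ("'is not' followed by a noun phrase", 0.7),
    ],
    # The absurd reading: the fountain is not currently drinking water.
    "this / is not [drinking] [water]": [
        ("'drinking' as a progressive verb", 0.5),
        ("a fountain-like subject that drinks", 0.01),
    ],
}

def score(rules):
    p = 1.0
    for _, prob in rules:
        p *= prob
    return p

best = max(parses, key=lambda name: score(parses[name]))
print(best)  # the compound-noun parse wins, 0.63 to 0.005
```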
A “lexicalised” parser might do even better. Take the Groucho Marx joke, “One morning I shot an elephant in my pyjamas. How he got in my pyjamas, I’ll never know.” The first sentence is ambiguous (which makes the joke)—grammatically both “I” and “an elephant” can attach to the prepositional phrase “in my pyjamas”. But a lexicalised parser would recognise that “I [verb phrase] in my pyjamas” is far more common than “elephant in my pyjamas”, and so assign that parse a higher probability.
But meaning is harder to pin down than syntax. “The boy kicked the ball” and “The ball was kicked by the boy” have the same meaning but a different structure. “Time flies like an arrow” can mean either that time flies in the way that an arrow flies, or that insects called “time flies” are fond of an arrow.
“Who plays Thor in ‘Thor’?” Your correspondent could not remember the beefy Australian who played the eponymous Norse god in the Marvel superhero film. But when he asked his iPhone, Siri came up with an unexpected reply: “I don’t see any movies matching ‘Thor’ playing in Thor, IA, US, today.” Thor, Iowa, with a population of 184, was thousands of miles away, and “Thor”, the film, has been out of cinemas for years. Siri parsed the question perfectly properly, but the reply was absurd, violating the rules of what linguists call pragmatics: the shared knowledge and understanding that people use to make sense of the often messy human language they hear. “Can you reach the salt?” is not a request for information but for salt. Natural-language systems have to be manually programmed to handle such requests as humans expect them, and not literally.
Shared information is also built up over the course of a conversation, which is why digital assistants can struggle with twists and turns in conversations. Tell an assistant, “I’d like to go to an Italian restaurant with my wife,” and it might suggest a restaurant. But then ask, “is it close to her office?”, and the assistant must grasp the meanings of “it” (the restaurant) and “her” (the wife), which it will find surprisingly tricky. Nuance, the language-technology firm, which provides natural-language platforms to many other companies, is working on a “concierge” that can handle this type of challenge, but it is still a prototype.
Such a concierge must also offer only restaurants that are open. Linking requests to common sense (knowing that no one wants to be sent to a closed restaurant), as well as a knowledge of the real world (knowing which restaurants are closed), is one of the most difficult challenges for language technologies.
Common sense, an old observation goes, is uncommon enough in humans. Programming it into computers is harder still. Fernando Pereira of Google points out why. Automated speech recognition and machine translation have something in common: there are huge stores of data (recordings and transcripts for speech recognition, parallel corpora for translation) that can be used to train machines. But there are no training data for common sense.
Brain scan: Terry Winograd
The Winograd Schema tests computers’ “understanding” of the real world
THE Turing Test was conceived as a way to judge whether true artificial intelligence has been achieved. If a computer can fool humans into thinking it is human, there is no reason, say its fans, to say the machine is not truly intelligent.
Few giants in computing stand with Turing in fame, but one has given his name to a similar challenge: Terry Winograd, a computer scientist at Stanford. In his doctoral dissertation Mr Winograd posed a riddle for computers: “The city councilmen refused the demonstrators a permit because they feared violence. Who feared violence?”
It is a perfect illustration of a well-recognised point: many things that are easy for humans are crushingly difficult for computers. Mr Winograd went into AI research in the 1960s and 1970s and developed an early natural-language program called SHRDLU that could take commands and answer questions about a group of shapes it could manipulate: “Find a block which is taller than the one you are holding and put it into the box.” This work brought a jolt of optimism to the AI crowd, but Mr Winograd later fell out with them, devoting himself not to making machines intelligent but to making them better at helping human beings. (These camps are sharply divided by philosophy and academic pride.) He taught Larry Page at Stanford, and after Mr Page went on to co-found Google, Mr Winograd became a guest researcher at the company, helping to build Gmail.
In 2011 Hector Levesque of the University of Toronto became annoyed by systems that “passed” the Turing Test by joking and avoiding direct answers. He later asked to borrow Mr Winograd’s name and the format of his dissertation’s puzzle to pose a more genuine test of machine “understanding”: the Winograd Schema. The answers to its battery of questions were obvious to humans but would require computers to have some reasoning ability and some knowledge of the real world. The first official Winograd Schema Challenge was held this year, with a $25,000 prize offered by Nuance, the language-software company, for a program that could answer more than 90% of the questions correctly. The best of them got just 58% right.
Though officially retired, Mr Winograd continues writing and researching. One of his students is working on an application for Google Glass, a computer with a display mounted on eyeglasses. The app would help people with autism by reading the facial expressions of conversation partners and giving the wearer information about their emotional state. It would thus allow the wearer to integrate linguistic and non-linguistic information in a way that both people with autism and computers find difficult.
Asked to trick some of the latest digital assistants, like Siri and Alexa, he asks them things like “Where can I find a nightclub my Methodist uncle would like?”, which requires knowledge about both nightclubs (which such systems have) and Methodist uncles (which they don’t). When he tried “Where did I leave my glasses?”, one of them came up with a link to a book of that name. None offered the obvious answer: “How would I know?”
Knowledge of the real world is another matter. AI has helped data-rich companies such as America’s West-Coast tech giants organise much of the world’s information into interactive databases such as Google’s Knowledge Graph. Some of the content of that appears in a box to the right of a Google page of search results for a famous figure or thing. It knows that Jacob Bernoulli studied at the University of Basel (as did other people, linked to Bernoulli through this node in the Graph) and wrote “On the Law of Large Numbers” (which it knows is a book).
Organising information this way is not difficult for a company with lots of data and good AI capabilities, but linking information to language is hard. Google touts its assistant’s ability to answer questions like “Who was president when the Rangers won the World Series?” But Mr Pereira concedes that this was the result of explicit training. Another such complex query—“What was the population of London when Samuel Johnson wrote his dictionary?”—would flummox the assistant, even though the Graph knows about things like the historical population of London and the date of Johnson’s dictionary. IBM’s Watson system, which in 2011 beat two human champions at the quiz show “Jeopardy!”, succeeded mainly by calculating huge numbers of potential answers based on key words by probability, not by a human-like understanding of the question.
Making real-world information computable is challenging, but it has inspired some creative approaches. Cortical.io, a Vienna-based startup, took hundreds of Wikipedia articles, cut them into thousands of small snippets of information and ran an “unsupervised” machine-learning algorithm over them that required the computer not to look for anything in particular but simply to find patterns. These patterns were then represented as a visual “semantic fingerprint” on a grid of 128×128 pixels. Clumps of pixels in similar places represent semantic similarity. This method can be used to disambiguate words with multiple meanings: the fingerprint of “organ” shares features with both “liver” and “piano” (because the word occurs with both in different parts of the training data). This might allow a natural-language system to distinguish between pianos and church organs on one hand, and livers and other internal organs on the other.
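The overlap idea can be sketched in a few lines of Python. The pixel sets below are invented (Cortical.io's real fingerprints are learned from the Wikipedia snippets), but the similarity measure, shared pixels between sparse grids, is the essence of the method.

```python
# Semantic fingerprints in miniature: each word activates a sparse set
# of cells on a 128x128 grid; similarity is the overlap of active cells.
fingerprints = {                     # invented positions on the grid
    "organ": {10, 11, 12, 500, 501, 900},
    "piano": {10, 11, 13, 777},      # shares "music" cells 10, 11
    "liver": {500, 501, 502, 888},   # shares "anatomy" cells 500, 501
    "arrow": {4000, 4001},           # shares nothing
}

def similarity(a, b):
    # Overlap coefficient: shared cells over the smaller fingerprint.
    fa, fb = fingerprints[a], fingerprints[b]
    return len(fa & fb) / min(len(fa), len(fb))

for other in ("piano", "liver", "arrow"):
    print(f"organ ~ {other}: {similarity('organ', other):.2f}")
```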
Proper conversation between humans and machines can be seen as a series of linked challenges: speech recognition, speech synthesis, syntactic analysis, semantic analysis, pragmatic understanding, dialogue, common sense and real-world knowledge. Because all the technologies have to work together, the chain as a whole is only as strong as its weakest link, and the first few of these are far better developed than the last few.
The hardest part is linking them together. Scientists do not know how the human brain draws on so many different kinds of knowledge at the same time. Programming a machine to replicate that feat is very much a work in progress.
Looking ahead: For my next trick
Talking machines are the new must-haves
IN “WALL-E”, an animated children’s film set in the future, all humankind lives on a spaceship after the Earth’s environment has been trashed. The humans are whisked around in intelligent hovering chairs; machines take care of their every need, so they are all morbidly obese. Even the ship’s captain is not really in charge; the actual pilot is an intelligent and malevolent talking robot, Auto, and like so many talking machines in science fiction, he eventually makes a grab for power.
Speech is quintessentially human, so it is hard to imagine machines that can truly speak conversationally as humans do without also imagining them to be superintelligent. And if they are superintelligent, with none of humans’ flaws, it is hard to imagine them not wanting to take over, not only for their own good but for that of humanity. Even in a fairly benevolent future like “WALL-E’s”, where the machines do all the work, it is easy to see that the lack of anything challenging to do would be harmful to people.
Fortunately, the tasks that talking machines can take off humans’ to-do lists are the sort that many would happily give up. Machines are increasingly able to handle difficult but well-defined jobs. Soon all that their users will have to do is pipe up and ask them, using a naturally phrased voice command. Once upon a time, just one tinkerer in a given family knew how to work the computer or the video recorder. Then graphical interfaces (icons and a mouse) and touchscreens made such technology accessible to everyone. Frank Chen of Andreessen Horowitz, a venture-capital firm, sees natural-language interfaces between humans and machines as just another step in making information and services available to all. Silicon Valley, he says, is enjoying a golden age of AI technologies. Just as in the early 1990s companies were piling online and building websites without quite knowing why, now everyone is going for natural language. Yet, he adds, “we’re in 1994 for voice.”
1995 will soon come. This does not mean that people will communicate with their computers exclusively by talking to them. Websites did not make the telephone obsolete, and mobile devices did not make desktop computers obsolete. In the same way, people will continue to have a choice between voice and text when interacting with their machines.
Not all will choose voice. For example, in Japan yammering into a phone is not done in public, whether the interlocutor is a human or a digital assistant, so usage of Siri is low during business hours but high in the evening and at the weekend. For others, voice-enabled technology is an obvious boon. It allows dyslexic people to write without typing, and the very elderly may find it easier to talk than to type on a tiny keyboard. The very young, some of whom today learn to type before they can write, may soon learn to talk to machines before they can type.
Those with injuries or disabilities that make it hard for them to write will also benefit. Microsoft is justifiably proud of a new device that will allow people with amyotrophic lateral sclerosis (ALS), which immobilises nearly all of the body but leaves the mind working, to speak by using their eyes to pick letters on a screen. The critical part is predictive text, which improves as it gets used to a particular individual. An experienced user will be able to “speak” at around 15 words per minute.
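The personalisation is easy to sketch: a model that counts which word a given user typically types after another will, as that user's history grows, rank the user's habitual phrases first. A minimal version, with invented example sentences:

```python
# Per-user predictive text in miniature: count the words this user
# types after each word, and suggest the most frequent ones first.
from collections import Counter, defaultdict

history = defaultdict(Counter)   # word -> counts of the following word

def learn(sentence):
    words = sentence.lower().split()
    for a, b in zip(words, words[1:]):
        history[a][b] += 1

def suggest(word, n=3):
    return [w for w, _ in history[word.lower()].most_common(n)]

learn("please turn on the light")
learn("please turn up the volume")
learn("please call my nurse")
print(suggest("please"))  # ['turn', 'call']
print(suggest("turn"))    # ['on', 'up']
```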
People may even turn to machines for company. Microsoft’s Xiaoice, a chatbot launched in China, learns to come up with the responses that will keep a conversation going longest. Nobody would think it was human, but it does make users open up in surprising ways. Jibo, a new “social robot”, is intended to tell children stories, help far-flung relatives stay in touch and the like.
Another group that may benefit from technology is smaller language communities. Networked computers can encourage a winner-take-all effect: if there is a lot of good software and content in English and Chinese, smaller languages become less valuable online. If they are really tiny, their very survival may be at stake. But Ross Perlin of the Endangered Languages Alliance notes that new software allows researchers to document small languages more quickly than ever. With enough data comes the possibility of developing resources—from speech recognition to interfaces with software—for smaller and smaller languages. The Silicon Valley giants already localise their services in dozens of languages; neural networks and other software allow new versions to be generated faster and more efficiently than ever.
There are two big downsides to the rise in natural-language technologies: the implications for privacy, and the disruption it will bring to many jobs.
Increasingly, devices are always listening. Digital assistants like Alexa, Cortana, Siri and Google Assistant are programmed to wait for a prompt, such as “Hey, Siri” or “OK, Google”, to activate them. But allowing always-on microphones into people’s pockets and homes amounts to a further erosion of traditional expectations of privacy. The same might be said for all the ways in which language software improves by training on a single user’s voice, vocabulary, written documents and habits.
All the big companies’ location-based services, and even the accelerometers in phones that detect small movements, are making ever-improving guesses about users’ wants and needs. The moment when a digital assistant surprises a user with “The chemist is nearby—do you want to buy more haemorrhoid cream, Steve?” could be the moment when many users reassess the trade-off between amazing new services and old-fashioned privacy. The tech companies can help by giving users more choice; the latest iPhone, for example, will not activate when it is laid face down on a table. But hackers will inevitably find ways to get at some of these data.
The other big concern is for jobs. To the extent that they are routine, they face being automated away. A good example is customer support. When people contact a company for help, the initial encounter is usually highly scripted. A company employee will verify a customer’s identity and follow a decision-tree. Language technology is now mature enough to take on many of these tasks.
For a long transition period humans will still be needed, but the work they do will become less routine. Nuance, which sells lots of automated online and phone-based help systems, is bullish on voice biometrics (customers identifying themselves by saying “my voice is my password”). Using around 200 parameters for identifying a speaker, it is probably more secure than a fingerprint, says Brett Beranek, a senior manager at the company. It will also eliminate the tedium, for both customers and support workers, of going through multi-step identification procedures with PINs, passwords and security questions. When Barclays, a British bank, offered it to frequent users of customer-support services, 84% signed up within five months.
Digital assistants on personal smartphones can get away with mistakes, but for some business applications the tolerance for error is close to zero, notes Nikita Ivanov. His company, Datalingvo, a Silicon Valley startup, answers questions phrased in natural language about a company’s business data. If a user wants to know which online ads resulted in the most sales in California last month, the software automatically translates his typed question into a database query. But behind the scenes a human working for Datalingvo vets the query to make sure it is correct. This is because the stakes are high: the technology is bound to make mistakes in its early days, and users could make decisions based on bad data.
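The flow, though not Datalingvo's actual code, can be sketched as follows; the table, column names and the pattern-matching "translator" are all hypothetical placeholders for the real natural-language engine.

```python
# Human-in-the-loop querying in miniature: the software drafts a SQL
# query from a typed question; a human vets it before it runs.
def draft_query(question):
    # Placeholder for the natural-language-to-SQL step.
    if "most sales" in question and "California" in question:
        return ("SELECT ad_id, SUM(sales) AS total FROM ad_sales "
                "WHERE state = 'CA' AND month = '2016-09' "
                "GROUP BY ad_id ORDER BY total DESC LIMIT 10")
    raise ValueError("question not understood")

def vet(sql):
    # Placeholder for the human reviewer who approves or rejects it.
    print("Proposed query:\n ", sql)
    return input("Run it? [y/n] ").strip().lower() == "y"

question = "Which online ads resulted in the most sales in California last month?"
sql = draft_query(question)
if vet(sql):
    print("query approved; sending to the database")
else:
    print("query rejected; returned to the engine for another try")
```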
This process can work the other way round, too: rather than natural-language input producing data, data can produce language. Arria, a company based in London, makes software into which a spreadsheet full of data can be dragged and dropped, to be turned automatically into a written description of the contents, complete with trends. Matt Gould, the company’s chief strategy officer, likes to think that this will free chief financial officers from having to write up the same old routine analyses for the board, giving them time to develop more creative approaches.
Carl Benedikt Frey, an economist at Oxford University, has researched the likely effect of artificial intelligence on the labour market and concluded that the jobs most likely to remain immune include those requiring creativity and skill at complex social interactions. But not every human has those traits. Call centres may need fewer people as more routine work is handled by automated systems, but the trickier inquiries will still go to humans.
Much of this seems familiar. When Google search first became available, it turned up documents in seconds that would have taken a human operator hours, days or years to find. This removed much of the drudgery from being a researcher, librarian or journalist. More recently, young lawyers and paralegals have taken to using e-discovery. These innovations have not destroyed the professions concerned but merely reshaped them.
Machines that relieve drudgery and allow people to do more interesting jobs are a fine thing. In net terms they may even create extra jobs. But any big adjustment is most painful for those least able to adapt. Upheavals brought about by social changes—like the emancipation of women or the globalisation of labour markets—are already hard for some people to bear. When those changes are wrought by machines, they become even harder, and all the more so when those machines seem to behave more and more like humans. People already treat inanimate objects as if they were alive: who has never shouted at a computer in frustration? The more that machines talk, and the more that they seem to understand people, the more their users will be tempted to attribute human traits to them.
That raises questions about what it means to be human. Language is widely seen as humankind’s most distinguishing trait. AI researchers insist that their machines do not think like people, but if they can listen and talk like humans, what does that make them? As humans teach ever more capable machines to use language, the once-obvious line between them will blur.
In winter, the Pacific Northwest needs power for heat – and must import it from So Cal.
In summer, the Pacific Northwest has excess power – and exports it to So Cal.
“Paths” are the major transmission lines that form the “grid”, connecting the geographic areas covered by different utilities.
Of interest here are the paths that transmit power north and south in California. These paths make the importing and exporting of power possible.
These paths were built in the 1970s and 1980s in order to provide California and the Southwest with excess hydropower from the Pacific Northwest without actually having to construct any new power plants.
During the cold Pacific Northwest winters, power is sent north to run heaters; the flow reverses in the hot, dry summers, when many people in the South run air conditioners. To support this, the corridor’s maximum south-to-north transmission capacity is 5,400 MW along most of its length, but between the Los Banos and Gates substations there were only two 500 kV lines.
The capacity at this bottleneck was only 3,900 MW. It was identified as a trouble spot in the 1990s, but no one acted on it, and the constraint became one of the leading causes of the California electricity crisis of 2000-2001. To remedy the problem, WAPA, along with several utilities, built a third 500 kV line between the two substations, adding roughly 1,500 MW and raising the maximum south-to-north capacity to 5,400 MW. The project was completed under budget and on time on December 21, 2004. California’s governor, Arnold Schwarzenegger, attended the commissioning ceremony at California-ISO’s control center in Folsom.
Path 26 comprises three 500 kV lines with 3,700 MW of capacity north to south and 3,000 MW south to north. It links PG&E (north) to SCE (south).
Path 26 forms Southern California Edison’s (SCE) intertie (link) with Pacific Gas & Electric (PG&E) to the north. Because PG&E’s grid connects onward to the Pacific Northwest and SCE’s to the Southwestern United States, Path 26 acts as a southern extension of Path 15 and Path 66 and a crucial link between the two regions’ grids.
The path consists of three transmission lines, Midway–Vincent No. 1, Midway–Vincent No. 2 and Midway–Whirlwind. Midway–Whirlwind was part of what was called Midway–Vincent No. 3 before Whirlwind was built, as part of the Tehachapi Renewable Transmission Project.
Path 26 – Vincent to Midway
Starting from the south, the path begins at the large Vincent substation close to State Route 14 and Soledad Pass near Acton, east of the Santa Clarita Valley. The same Vincent substation is linked to Path 46 and Path 61 via two SCE 500 kV lines that head southeast to Lugo substation. As with Path 15 to the north, the three 500 kV lines are never built together for the entire length of the route. Leaving the substation, all three lines head north-northwest. The westernmost SCE 500 kV line splits away and runs west of the other two.
After crossing State Route 14, two 500 kV lines built by the Los Angeles Department of Water and Power (LADW&P) join the eastern two SCE 500 kV lines. At some point west of Palmdale, one line (SCE) continues northwest and the other three (one SCE, two LADW&P) head west. The lone SCE line continuing northwest (with 230 kV lines) runs close to the Antelope Valley California Poppy Reserve, famed for its California poppies. The SCE line that had split away to the west re-joins the single SCE 500 kV line running west with the two LADW&P lines. The four 500 kV lines run together for some distance until, at some point in the mountains, the two SCE lines continue west and the two LADW&P lines turn southwest towards Sylmar in the San Fernando Valley (close to the Sylmar Converter Station, the southern terminus of the Pacific Intertie HVDC line). The two SCE lines heading west meet up with Interstate 5 in the arid foothills of the Sierra Pelona Mountains east of Pyramid Lake. The lines parallel I-5 across Tejon Pass (running on the eastern foothills of Frazier Mountain) and pass out of sight for a while as they cross the high woodlands of the northern San Emigdio Mountains, topping out at around 5,350 ft (1,630 m).
As for the third line, north of Lancaster and State Route 138 it runs through a remote, roadless area of the Tehachapi Mountains alongside two 230 kV lines. Although it crosses sparse to dense oak woodland at around 5,300 ft (1,615 m), it is not easy to spot on Google Earth, since its right of way is not as clear-cut as those of Path 15 and Path 66 to the north. The line is therefore not readily seen again until it crosses State Route 184 as a PG&E power line; somewhere in the mountains east of State Route 184 it changes from SCE towers to PG&E towers. By the time all three lines are visible from Interstate 5, they roughly parallel one another until the two SCE lines and the one PG&E line terminate at the massive Midway substation in Buttonwillow in the San Joaquin Valley. Two pairs of PG&E 500 kV lines, heading north and southwest separately, form Path 15.
Connecting wires to Path 46 – Vincent to Lugo
Adjacent to the Path 26 wires, two other SCE 500 kV lines also begin at Vincent substation. They head northeast from Vincent to meet up with LADW&P’s two other 500 kV lines from Rinaldi, and then all four head east through the Antelope Valley along the northern foothills of the San Gabriel Mountains. Another LADW&P line, from Toluca, joins the corridor, making a path of five power lines. One LADW&P line then splits off and heads southeast; soon after, the SCE lines split away from the remaining two LADW&P lines and head southeast as well, crossing both the lone LADW&P line and Interstate 15 on their way to the Lugo substation northeast of Cajon Pass. The lines terminate at Lugo, where one SCE Path 61 500 kV line, two SCE Path 46 500 kV lines, and three other SCE 500 kV lines end.
Path 15 is an 84-mile (135 km) portion of the north-south power transmission corridor in California, U.S. It forms a part of the Pacific AC Intertie and the California-Oregon Transmission Project.
Path 15, along with the Pacific DC Intertie running far to the east, forms an important transmission interconnection between the hydroelectric plants to the north and the fossil-fuel plants to the south. Most of the length of the three 500 kV AC lines south of Tesla substation was built by Pacific Gas and Electric (PG&E).
Path 15 consists of three lines at 500 kV and four lines at 230 kV. The 500 kV lines connect Los Banos to Gates and Los Banos to Midway. All four 230 kV lines have Gates at one end with the other ends at Panoche, Gregg, and McCall.
Only two PG&E lines north of Tracy substation connect Path 15 to Path 66 at the Round Mountain substation. The third line between the Los Banos and Gates substations, south of Tracy, is operated by the Western Area Power Administration (WAPA), a division of the United States Department of Energy; it was constructed away from the other two lines and is often out of sight. For most of their length the lines run through California’s Sierra foothills and the Central Valley, but some PG&E lines come from power plants along the shores of the Pacific Ocean, crossing the California Coast Ranges to connect with the intertie; the Diablo Canyon Power Plant and the Moss Landing Power Plant are two examples.
The Vaca-Dixon substation (38°24′8.33″N 121°55′14.75″W) was the world’s largest substation at the time of its inauguration in 1922.