
User Pays is Trending

“User pays” is trending.

…because convenient methods of paying are trending.

This new trend is making the economics of capitalization and payback MUCH easier. Markets will be built for investment capital where payback rates can be credibly estimated.

Discussion

New apps, GPS, smartphones, Internet connectivity, credit-card integration and sensors make it simpler for users to pay for the services they use.

An example of app-based technology is ParkMobile – an app for conveniently paying for parking instead of “feeding the meter”. Its popularity is making it possible for paid parking to work.

A second app-based example is Uber. Uber users summon a ride with an app, which charges a pre-assigned credit card in advance, so a ride can be requested simply by entering the pick-up location and destination. GPS lets the app calculate the price of the ride and scan for nearby available Uber cars.
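As a rough sketch of the GPS piece: given two coordinate fixes, an app can compute the great-circle distance between them and apply a rate. The rate constants below are invented for illustration, and real services price on routed distance, time and demand rather than straight-line distance, but the basic mechanics look something like this:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS coordinates, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def estimate_fare(pickup, dropoff, base=2.50, per_km=1.75):
    """Base fare plus a per-kilometer rate over the straight-line distance.
    The base and per-km rates are made-up illustrations, not Uber's pricing."""
    km = haversine_km(*pickup, *dropoff)
    return round(base + per_km * km, 2)

# Midtown Manhattan to JFK airport, roughly 21 km apart as the crow flies.
print(estimate_fare((40.7549, -73.9840), (40.6413, -73.7781)))
```

The same distance calculation, run continuously against the positions drivers report, is what lets the app find nearby available cars.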

An example of sensor-based technology is E-ZPass and systems like it. E-ZPass reads windshield-mounted transponders at bridges and on toll roads (cameras capture license plates as a backup). It replaces the “toll booth”, where vehicles needed to stop, wait in line, and then pay their toll to a toll worker.

E‑ZPass enjoys tremendous brand recognition and high levels of customer satisfaction, and is the world leader in toll interoperability, with over 35 million E‑ZPass devices in circulation. It operates in all of New England except Vermont and Connecticut, extends west to Ohio, Indiana and Illinois, and south to North Carolina.

But E-ZPass is one of many. Here is a rundown:

Simple Examples of User Pays

ParkMobile (app that allows users to pay for parking)
Uber (app that allows users to pay for taxi service)
Meters (meter readers are being replaced by smart meters)
Toll roads (with EZ Pass and similar technologies)
Train, Bus, Boat and Plane Tickets
Concert Tickets and Ticketmaster
Hotel Rooms and Expedia and Hotels.com
Rent and Airbnb
Electricity Service and smart meters

California – FASTRAK

Colorado – EXPRESS TOLL®

Delaware – E-ZPASS

District of Columbia – E-ZPASS

Florida – E-PASS® / LEEWAY / SUN PASS®

Illinois – I-PASS

Indiana – E-ZPASS

Maine – E-ZPASS

Maryland – E-ZPASS

Massachusetts – E-ZPASS

New Hampshire – E-ZPASS

New Jersey – E-ZPASS

New York – E-ZPASS

North Carolina – E-ZPASS

Ohio – E-ZPASS

Oklahoma – PIKEPASS™

Pennsylvania – E-ZPASS

Puerto Rico – AUTOEXPRESO

Rhode Island – E-ZPASS

Texas – EZ TAG / TX TAG / TOLLTAG

Virginia – E-ZPASS

Washington – GOOD TO GO!

West Virginia – E-ZPASS

PlatePass makes all of this happen for rental cars.

The 6 Best Payment Apps to Get in 2018 (The Balance, Sep 20, 2018):

PayPal – the granddaddy of payment companies, with a history going back to 1998
Venmo
Square Cash
Zelle
Google Wallet
Facebook Messenger

https://www.thebalance.com/best-payment-apps-4159058

HuffPost and the 3rd metric

Today’s NYT has a long story on HuffPost: its acquisition by AOL, then Verizon; its toxic culture; the reasons for its success; and Arianna’s “third metric”: digital detoxing, sleep, and reflection.

=============
One morning in March, a dozen Huffington Post staff members gathered around a glass table in Arianna Huffington’s office. They had been summoned to deliver a progress report to Huffington, the site’s president, editor in chief and co-founder, on a new initiative, What’s Working. It was created to help the site cover solutions, rather than focusing only on the world’s problems — or as Huffington explained in an internal memo in January, to ‘‘start a positive contagion by relentlessly telling the stories of people and communities doing amazing things, overcoming great odds and facing real challenges with perseverance, creativity and grace.’’
Huffington, who is 64, was getting over a cold, and coughed hoarsely now and then. She sipped a soy cappuccino through a straw as she asked for updates in her purring, singsongy Greek accent. One by one, staff members went through their story lists: corporations with innovative plans to reduce water use, a nonprofit putting former gang members to work, Muslims confronting radicalism. Huffington kept the pace brisk; she sounded like a person in a hurry trying hard to not sound like one. When an editor hashed out ways to present a new, recurring feature called the What’s Working Media Honor Roll — a roundup of similarly positive journalism from other publications — she suggested that he launch first and tinker later.
‘‘I think let’s start iterating,’’ she said. ‘‘Let’s not wait for the perfect product.’’
What’s Working might sound like a significant departure for a site that, like most media outlets, thrives on tales of conflict and wrongdoing. But in a sense What’s Working is not a departure at all. The Huffington Post has always been guided by the question: What works? Namely, what draws traffic? The answer has changed constantly. When Huffington co-founded the site in 2005, Facebook was still just a network for college students. Today, roughly half her mobile traffic comes from social media, Facebook above all. Arguably, this shift in browsing habits, as much as Huffington’s distaste for the media’s built-in bias toward negativity, helped inspire What’s Working. The initiative is in part an effort to get readers to share more Huffington Post stories on Facebook.
‘‘The numbers are amazing,’’ Huffington said as staff members filed out. ‘‘You’re not as likely to share a story of a beheading. Right? I mean, you’ll read it.’’
Within The Huffington Post, and away from the glass table, some staff members have fretted that What’s Working could result in a steady drip of pallid, upbeat stories (e.g., ‘‘How Hugh Jackman’s Coffee Brand Is Changing Lives’’). But it’s hard to argue with Huffington’s intuitions when it comes to generating traffic. Her site has more than 200 million unique visitors each month, according to comScore, and it is one of the country’s top online destinations for news.
Nevertheless, in May, Huffington’s tenure as editor in chief was briefly in question. The site’s corporate parent, AOL, was sold to Verizon for $4.4 billion, and Huffington was forced to spend a few weeks negotiating the terms of the site’s future with her new overlords. During that process, AOL revealed that two suitors, earlier in the year, tried to buy The Huffington Post for $1 billion, or roughly four times what Jeff Bezos paid for The Washington Post two years ago. Plainly, to certain investors, digital media companies are valuable because they deliver enormous audiences. Any difficulty turning a profit — The Huffington Post broke even last year on $146 million in revenue, according to someone familiar with the site’s finances — is considered a temporary problem that will eventually be fixed by the sheer size of the readership.
This singular focus on audience development expresses itself in different ways at different publications. At The Huffington Post, it takes the shape of an editorial mandate that, much like the universe itself, is unfathomably broad and constantly expanding. At least in theory, nothing gets past its editors and writers. They cover, in most cases through aggregation, everything from Federal Reserve policy to celebrity antics, from Islamic State atrocities to parenting tips, supplemented with a steady stream of uncategorizable click bait (‘‘Can Cannibalism Fight Brain Disease? Only Sort Of’’).
To work at The Huffington Post is to run a race without a finish line, at a clip that is forever quickening. The pace is stressful for many employees, who describe a newsroom with plenty of turnover. One former staff member I spoke with, who developed an ulcer while working there, called The Huffington Post ‘‘a jury-rigged, discombobulated chaos machine.’’
Huffington may be the Internet’s most improbable media pioneer. This is her first job as an editor or publisher, and few would describe her as a techie. But as one of the first major media properties born in the full light of the digital age, The Huffington Post has always been a skunk works for the sorts of experiments that have come to define the news business in the Internet era.
In its early days, when most visits came through Google searches, the site mastered search-engine optimization (S.E.O.), the art of writing stories based on topics trending on Google and larding headlines with keywords. The site’s annual ‘‘What Time Is the Super Bowl?’’ post has become such a famous example of S.E.O.-driven non-news that other media outlets have written half-disgusted, half-admiring posts dissecting its history.
When most sites were merely guessing about what would resonate with readers, The Huffington Post brought a radical data-driven methodology to its home page, automatically moving popular stories to more prominent spaces and A-B testing its headlines. The site’s editorial director, Danny Shea, demonstrated to me how this works a few months ago, opening an online dashboard and pulling up an article about General Motors. One headline was ‘‘How GM Silenced a Whistleblower.’’ Another read ‘‘How GM Bullied a Whistleblower.’’ The site had automatically shown different headlines to different readers and found that ‘‘Silence’’ was outperforming ‘‘Bully.’’ So ‘‘Silence’’ it would be. It’s this sort of obsessive data analysis that has helped web-headline writing become so viscerally effective.
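The mechanics of such a headline test are simple to sketch. The snippet below is a toy simulation, not HuffPost's actual system: traffic is split randomly between variants, clicks are tallied per variant, and the headline with the higher click-through rate wins. The click probabilities here are invented for illustration.

```python
import random

def ab_test_headline(headlines, get_click, impressions=10_000):
    """Split traffic between headline variants, tally clicks,
    and return the variant with the best click-through rate."""
    stats = {h: {"shown": 0, "clicks": 0} for h in headlines}
    for _ in range(impressions):
        h = random.choice(headlines)  # randomly assign each reader a variant
        stats[h]["shown"] += 1
        if get_click(h):
            stats[h]["clicks"] += 1
    # Pick the variant with the highest observed click-through rate.
    return max(headlines, key=lambda h: stats[h]["clicks"] / max(stats[h]["shown"], 1))

# The two GM headlines from the article; the click probabilities are made up.
rates = {
    "How GM Silenced a Whistleblower": 0.08,
    "How GM Bullied a Whistleblower": 0.05,
}
random.seed(0)
winner = ab_test_headline(list(rates), lambda h: random.random() < rates[h])
print(winner)  # with enough impressions, the higher-rate variant wins
```

A production system would add statistical significance checks and stop the test early once one variant is clearly ahead, which is what lets the home page settle on “Silenced” automatically.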
Above all, from its founding in an era dominated by ‘‘web magazines’’ like Slate, The Huffington Post has demonstrated the value of quantity. Early in its history, the site increased its breadth on the cheap by hiring young writers to quickly summarize stories that had been reported by other publications, marking the birth of industrial aggregation.
Today, The Huffington Post employs an armada of young editors, writers and video producers: 850 in all, many toiling at an exhausting pace. It publishes 13 editions across the globe, including sites in India, Germany and Brazil. Its properties collectively push out about 1,900 posts per day. In 2013, Digiday estimated that BuzzFeed, by contrast, was putting out 373 posts per day, The Times 350 per day and Slate 60 per day. (At the time, The Huffington Post was publishing 1,200 posts per day.) Four more editions are in the works — The Huffington Post China among them — and a franchising model will soon take the brand to small and midsize markets, according to an internal memo Huffington sent in late May.
Throughout its history, the site’s scale has also depended on free labor. One of Huffington’s most important insights early on was that if you provide bloggers with a big enough stage, you don’t have to pay them. This audience-for-content trade has been imitated successfully by outlets like Thought Catalog and Bleacher Report, a sports-news website that Turner Broadcasting bought in 2012 for somewhere between $150 million and $200 million.

As this more-is-better ethos has come to define the industry, shifts in online advertising have begun to favor publications that already attract large audiences. Display advertising — wherein advertisers pay each time an ad is shown to a reader — still dominates the market. But native advertising, designed to match the look and feel of the editorial content it runs alongside, has been on the rise for years. BuzzFeed, the media company started in 2006 by Jonah Peretti, a co-founder of The Huffington Post, was built to rely entirely on native advertising. The Huffington Post offers to make its advertisers custom quizzes, listicles, slide shows, videos, infographics, feature articles and blog posts. Prices start at $130,000 for three pieces of content. This is where size matters; top-tier sites can fetch premium rates because advertisers know their messages could be seen by millions. There have been concerns that readers might be deceived by native ads if they are not properly identified — The Huffington Post always clearly labels its sponsored content — but the ethical debate in the media world is over. Socintel360, a research firm, predicts that spending on native advertising in the United States will more than double in the next four years to $18.4 billion.
Kenneth Lerer, another Huffington Post co-founder, believes that news start-ups today are like cable-television networks in the early ’80s: small, pioneering companies that will be handsomely rewarded for figuring out how to monetize your attention through a new medium. If this is so, the size of The Huffington Post’s audience could one day justify that $1 billion valuation. But at least in cable, the ratings-driven mania of sweeps week comes only four times a year.
Even as she oversees an international news operation, Huffington spends most of her days and nights in a globe-spanning run of lectures, parties, talk shows, conferences and meetings, a never-ending tour that she chronicles in a dizzying Instagram feed. Her stamina is a source of awe to members of what she calls her A-Team — the A is for Arianna — a group of 9 or so Huffington Post staff members who, in addition to their editorial duties, help keep her in perpetual motion. Within the organization, A-Team jobs are known to be all-consuming — but also, for those who last, a ticket to promotion later on. While some stick around for years, many A-Teamers endure only about 12 months before calling it quits or asking to be transferred.
The first time I saw Huffington’s unremitting style up close was in New Haven last year, at the start of a tour to promote her self-help book, ‘‘Thrive.’’ In all of our interviews, she was warm and entertaining. She has a politician’s gift for seeming sincerely interested, having learned that nothing is so disarming as asking personal questions and then listening. She also has a comic’s timing. At a Barnes & Noble onstage chat in Manhattan, shortly before the trip to New Haven, the moderator, Katie Couric, asked her who was to blame for the merciless pace of life in corporate America. Huffington paused for a moment. Then she turned to the audience.
‘‘Men,’’ she deadpanned.
In her talk, she described her own transformation from fast-lane addict to evangelist for reflection, sleep and ‘‘digital detoxing’’ — basically, turning off your smartphone whenever possible. This is a catechism she has branded the ‘‘third metric’’ of success, with money and power being the first two. Her conversion narrative begins on the morning of April 6, 2007, when she collapsed from exhaustion. She fell to the floor in her home office, hitting her face on her desk and breaking her cheekbone. Medical tests found nothing that could explain the episode. Huffington realized that her lifestyle, which at the time was filled with 18-hour workdays, seven days a week, was wrecking her health.
‘‘By any sane definition of success,’’ she told the crowd that day in New Haven, ‘‘if you are lying in your office in a pool of blood, you are not a success.’’
The speech, which I caught a few times, is always a hit. Huffington presents herself as a redemption story, someone who overdosed on her mobile phone and survived to warn others. She looks the part, too: a Dolce & Gabbana-ed woman of a certain age, perfectly at ease, regularly brushing back a forelock of honey-blond hair with her fingers. After the speech in New Haven, people lined up to have their copies of ‘‘Thrive’’ signed. One by one, they offered Huffington variations of, ‘‘You are an inspiration.’’ Some shared their own success stories.
One woman told her: ‘‘In the horse world, I do holistic care, and I’m embarking on a barn that’s cutting-edge. It’s all about positive reinforcement.’’
‘‘We’d love you to write about it!’’ Huffington exclaimed.
Five years ago, in 2010, the site was successful, attracting nearly 25 million unique visitors a month, but it lacked the money Huffington felt it needed to expand. So it seemed fortunate when, later that year, she met Tim Armstrong, chief executive of AOL, at a media conference in New York.
‘‘He asked to meet with me privately, and he said: ‘What do you want to do with The Huffington Post?’ ’’ Huffington recalled. ‘‘And I said, ‘I want to be a global company, I want us to be everywhere in the world.’ ’’
Armstrong offered $315 million, and on Feb. 7, 2011, AOL announced the acquisition. Huffington was made the head of the Huffington Post Media Group, an entity that would control AOL’s empire of content — an odd mixture of offerings including Moviefone, TechCrunch, Engadget, MapQuest, Autoblog, AOL Music and the collection of hundreds of hyperlocal websites called Patch. In a stroke, Huffington found herself overseeing a diverse portfolio with 117 million unique visitors per month in the United States and 270 million around the world. She also managed several thousand editors, writers, bloggers and business staff members.
Initially, Armstrong and Huffington seemed like a natural match. Each is a fan of big ideas that can be executed quickly; each prizes boldness and energy. At one AOL meeting with brand managers, Armstrong ribbed his underlings by recounting how Huffington called him on a Sunday to tell him what was wrong with AOL’s home page. Why had no one else done that?
Integrating a group of such varied websites and personnel would have posed a challenge to any manager. For one who admits to having little interest in organization and planning, it was impossible. Huffington preferred to improvise, and she did so aggressively — ‘‘like a hockey player,’’ as one former AOL executive put it, with some admiration.
A clash of cultures, however, was soon evident. Many of AOL’s sites did little more than promote their sponsors; AOL Real Estate, for instance, was mainly a home for Bank of America ads, next to stories about the joys of mortgage refinancing. In an attempt to restore some semblance of editorial integrity, Huffington fired the freelancers who worked for the site and replaced them with young staff members. Many were recent graduates of Yale — her feeder of choice — whose chief qualification, aside from the obvious, was a willingness to work for a pittance. But the hiring spree was rushed and filled the sites with fledglings. Page views plunged, irking corporate sponsors.
At first, many on Armstrong’s team had been awed by her energy and range, but they quickly grasped that these didn’t always translate into results. ‘‘No one else could give a commencement speech at Smith one day, meet the prime minister of Japan on Tuesday and debate the Middle East on MSNBC on Wednesday,’’ one former executive said. ‘‘But that doesn’t mean she knows the ins and outs of running Moviefone.’’ It didn’t help that AOL stock, following the acquisition, had fallen to less than $12 by August from just above $20 at the time of the purchase half a year earlier. At some point, Huffington stopped going to meetings of AOL executives, and in April 2012, an organizational reshuffling quietly moved every AOL site except The Huffington Post out of Huffington’s portfolio. Her tenure as AOL content czar was over.
By then, Huffington was having a serious case of seller’s remorse. During a tech conference, she was overheard at a bar in Rancho Palos Verdes, Calif., talking to the venture capitalist Scott Stanford, then a Goldman Sachs banker. Speaking in a voice loud enough for many to hear, she posed questions like, ‘‘Who would buy The Huffington Post?’’ and ‘‘How much would it fetch?’’ Around that time, Huffington Post employees recall, she went on trips with very rich people and returned with news that the site was about to be purchased again, this time for $1 billion.
But The Huffington Post was no longer Huffington’s to sell, and AOL seemed uninterested in parting with it. By October 2012, discontent with Huffington was widespread enough that top executives at AOL were quietly strategizing about ways to ease her into a kind of ceremonial role — one in which she would only promote the site rather than running its day-to-day operations. (A source said the effort was given a one-word shorthand: ‘‘Popemobile.’’ Like the Pope in his bulletproof bubble, Huffington would glide through the world and wave.) The idea never caught on, mostly because it was clear Huffington would never agree to it, and by May of this year, when Verizon announced its acquisition of AOL, it had long been abandoned.
Verizon went after AOL principally for its ad-buying technologies, but in mid-June, Verizon’s C.E.O. and chairman, Lowell McAdam, said he was committed to keeping The Huffington Post, as incidental as its acquisition may have been. Huffington, whose contract with AOL expired earlier in the year, wanted guarantees that Verizon would finance the site’s growth and keep its hands off articles with which it may have a difference of opinion — those on net neutrality, for instance.
Soon after the Verizon-AOL deal was announced, Huffington began to negotiate her future and the future of The Huffington Post. According to two sources, Armstrong suggested closing the acquisition first and prodding Verizon to make promises about The Huffington Post later. Huffington refused, and she held out until mid-June, when Verizon pledged more than $100 million a year for ongoing operations and vowed to give the site editorial autonomy. (Others with knowledge of the talks say that no financial commitments have been made yet.) The money will allow The Huffington Post to broaden its video offerings, supporting a 24-hour online network and what Huffington called, in an internal memo, a ‘‘rapid-response satire unit.’’ Assurances in hand, Huffington signed a new four-year contract that will keep her in situ as editor in chief.
None of this necessarily means that The Huffington Post will remain in Verizon’s permanent portfolio. In fact, if Verizon’s real goal is to offload the site, it has done exactly what it should to burnish the asset for eventual sale. Were suitors to come courting again, they would surely offer less for a Huffington-less Huffington Post.
That is not to say that Huffington is inexpensive to keep around. She flies all around the planet, occasionally with members of the A-Team in tow. A-Team duties include tending to Huffington’s Twitter account, her Instagram feed and her Facebook posts; running her errands; organizing her day; planning her travel; and prepping her speeches, which, if they aren’t pro bono, cost at least $100,000. One former A-Teamer recalled loading The Huffington Post on Huffington’s computer when she showed up at the office.
‘‘Arianna doesn’t surf the web,’’ the former A-Teamer explained. ‘‘She reads stories that people send on her iPhone, and she sends and receives emails on her BlackBerry. But I’ve never seen her on a computer, surfing the web.’’ Huffington said that this is not true, and stated that she ditched her BlackBerry nearly two years ago. But more than a dozen former and current Huffington Post staff members said they had never seen her so much as open a web browser.
Some former employees file this in the category, Things at Odds With Arianna’s Public Image. Also in this category is the vibe at The Huffington Post’s downtown Manhattan office. Despite its nap rooms, meditation rooms and breathing classes, which were introduced as Huffington entered her ‘‘Thrive’’ phase, it is described as a surpassingly difficult place to work.
Much of this difficulty is inherent to life at an Internet news site, where victory means beating the competition by a matter of seconds with a post that might yield gobs of traffic. This is why so many editors and writers at The Huffington Post remain at their desks during lunch and keep an eye on the web at all times. If, while you’re offline, three new Instagram filters are announced and you’re late to post the news, that’s a problem. ‘‘Just about everyone works continuously, whether you’re at the office or not,’’ one former employee said. ‘‘That little green light that says you’re available on Gchat is what matters.’’
Low pay worsens the strain. One former employee said that some staff members take second jobs to cover their expenses. Some tutor; others wait on tables; others babysit. (A representative for The Huffington Post said the company was unaware of any moonlighting.) Many staff members rely on what has been called ‘‘HuffPost lunch’’ — Luna Bars, carrots, hummus, apples, bananas and sometimes string cheese, all served gratis in a kitchen area of the office.
Inevitably, there is burnout. At the New York office, nearly two dozen employees have left since the start of this year, either because they were laid off or found more enticing and less hectic jobs. A Gawker post in early June, written by an anonymous former staff member, said the recent departures were hardly a surprise because the place has long been ‘‘so brutal and toxic it would meet with approval from committed sociopaths.’’
A former editor told me about a period in 2013 when a series of departures left a cluster of empty desks along a wall that Huffington walks past on the way to her office. ‘‘Someone told my manager, ‘Arianna is really stressed out about the number of people leaving, so we need a bunch of people to sit at those desks in the path from the elevator to her office, to make her feel better,’ ’’ the former editor said. ‘‘So we sat there, waiting to say: ‘Hello! Greetings!’ as she walked by. It was supposed to be for two hours, but she got there at about 3 in the afternoon instead of 11 in the morning. It was absurd. I had to interrupt my workday because this woman was stressed out, because so many people had left, because they were stressed out.’’ (A Huffington Post representative denied this story, saying it was ‘‘clearly made up by someone with an ax to grind.’’)
Staff members in Huffington’s inner circle must also contend with her superhuman endurance. Her oft-repeated claim to sleep eight hours a night notwithstanding, she rarely seems to be idle. Emails from her cease, several ex-employees told me, only between 1 a.m. and 5 a.m.
There are staff members who have stuck it out for years and speak highly of the site as a place to work. They say they form lasting bonds with co-workers and relish the sense that they are writing for millions of readers. Some, like Daniel Koh, a former A-Teamer, speak with a reverence and fondness for Huffington herself. Koh described her as a perfectionist of exceptional intellectual wattage, a leader who never raises her voice and never holds a grudge. ‘‘Was it intense, and long hours, and did she teach me to maximize my workday?’’ he said. ‘‘Absolutely.’’
But others who have worked closely with Huffington have found it a bruising experience, saying that she is perpetually on the lookout for signs of disloyalty, to a degree that bespeaks paranoia or, at the very least, pettiness. Employees cycle in and out of her favor, hailed as the site’s savior one moment, ignored the next. (The Gawker post called the office ‘‘essentially Soviet in its functioning.’’) ‘‘Everyone’s stock is shooting up or falling at any given moment, so everyone is rattled with uncertainty and insecurity,’’ one former employee said. ‘‘I’ve never seen anything like it.’’
When I asked Huffington about criticisms of the newsroom, she was unmoved. She pointed to the nap rooms and breathing classes as evidence that she took employee well-being seriously. Only the voices of current employees were worth listening to, she cautioned, because the opinions of people who were laid off or left were likely to skew negative. I noted that she seemed unwilling to accept any responsibility for what a lot of former employees said was a vexing atmosphere.
‘‘I’m definitely a work in progress,’’ she acknowledged. ‘‘I’m not by any means saying I’m perfect. But I feel very good about our culture here, because a lot of our top leaders have embraced it.’’
The Huffington Post is hardly the only web media company with a reputation as an arduous place to work. Nor is Huffington the only editor in chief considered capricious and exasperating by employees. But she is surely the first described in those terms to install hammocks in a newsroom. Only someone with her unique combination of drive and outward placidity could run a tremendously popular, hugely productive website and then begin a second career chastening us for our addiction to the Internet. Somehow she has pulled it off. In her site’s parenting section, some of the most successful posts target moms who are checking their Facebook feeds late at night, apparently yearning to be told that they shouldn’t be on Facebook at that hour. ‘‘You know, posts about, ‘Stop procrastinating and go sleep,’ ‘Disconnect your devices,’ ’’ said Ethan Fedida, the site’s senior social media editor. ‘‘They go crazy for it.’’
It’s as though Huffington is spreading an illness while simultaneously peddling the cure. Call it hypocrisy, but it testifies to her savvy. The business of web media is figuring out what people want — and if what we want is contradictory, why shouldn’t Huffington profit from that contradiction?
Huffington may be engaged in a bit of wishful projection when she presents herself as an apostle of serenity. But it is a veneer she never drops, at least in public. At the Barnes & Noble event for ‘‘Thrive’’ last year, the one moderated by Katie Couric, a young woman rose during the question-and-answer session.
‘‘What do you say to employers who are now seeking people specifically to work in social media?’’ she asked. ‘‘Our job is to be connected 24-7, where we have to manage your Facebook, your Instagram, your Twitter, your Pinterest. How do we detox when we’re told we have to be in the social-media revolution in order to earn our living?’’
After thinking for a moment, Huffington suggested that she tell her employer that tweets can be scheduled in advance, so she doesn’t have to be awake at all hours. Remind your boss that people are paid for their judgment, Huffington added, not their endurance. Couric then asked the young woman, ‘‘You think your boss would be receptive to that?’’
‘‘No,’’ she said, flatly.
All eyes turned back to Huffington. Some bosses are toxic, she offered, so start looking for a new job. With a smile, she added: ‘‘We’re hiring.’’

Forbes Discusses Google and Nest

Forbes Article

Nest Labs is taking the next step in its quest to become a hub for the smart home, by letting other gadgets and services access its learning thermostat and smoke detector for the first time.

With the long-awaited developer program Nest is launching today, other apps and devices will be able to access what Nest detects through its sensors, including vague readings on temperature and settings that show if a person is away from their home for long periods. These services will even be able to talk to one another via Nest as the hub.

Nest, founded by former Apple executive Tony Fadell, has long been seen as one of the leading companies in the smart home revolution. Google bought the company for $3.2 billion in January, and last week Nest bought video monitoring service Dropcam for $555 million to (for better or worse) learn how people behave in their homes, for instance by reportedly tracking how doors are opened and shut.

Crucially, Nest is not letting third parties get access to the motion sensors on its thermostat and smoke alarm, says co-founder Matt Rogers — though it’s unclear what sort of access Nest might eventually give to DropCam’s video footage. “We’ve been building it for about a year,” he says. “One reason it’s taken us this long to build is we realized we had to be incredibly transparent with our user about data privacy.”

That means plenty of reminders to developers about what data can be used for, and requirements that they get user permission before sharing data with Nest. It will be a private, but very open platform, says Rogers. Apple’s own foray into smart homes with a service called HomeKit will likely have far more restrictions.

“Also,” he points out, “ours is not vaporware.”

Nest is expecting myriad developers to start building integrations into its two main devices, but it’s already done some early integrations with eight other companies, including wearable-fitness tracker firm Jawbone, Mercedes-Benz and Google Now, the digital mobile assistant that learns about a person’s routines and notifies them of important information. The pitch from Nest: “create a more conscious and thoughtful home.”

As of today, the Jawbone UP24 band will have a setting that turns on the Nest thermostat when it senses its wearer has woken up from a night’s sleep. Mercedes-Benz’s cars will be able to tell Nest when a driver is expected home, so it can set the temperature ahead of time. Smart lights made by LIFX can also be programmed to flash red when the Nest Protect detects smoke, or randomly turn off and on to make it look like someone is home when Nest’s thermostat is in “away” mode.
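The integrations described above all follow the same hub pattern: a third-party device reads a piece of shared state that Nest exposes (such as home/away status or smoke detection) and reacts to it. Here is a minimal sketch of that pattern; every name in it (HubClient, light_behavior) is invented for illustration, and this is not Nest's actual developer API:

```python
# Hypothetical sketch of the hub pattern described above: a third-party
# device reads shared home/away and smoke state from the hub and reacts.
# All names here are invented for illustration; this is NOT Nest's real API.

class HubClient:
    """Stands in for a smart-home hub exposing shared state to devices."""
    def __init__(self):
        self._state = {"away": False, "smoke_detected": False}

    def get_state(self, key):
        return self._state[key]

    def set_state(self, key, value):
        self._state[key] = value


def light_behavior(hub):
    """Decide a light's mode from hub state, as in the LIFX example above:
    flash red on smoke, randomize on/off while the home is in away mode."""
    if hub.get_state("smoke_detected"):
        return "flash_red"
    if hub.get_state("away"):
        return "random_presence"
    return "normal"


hub = HubClient()
hub.set_state("away", True)
print(light_behavior(hub))  # -> random_presence
```

The point of the sketch is the design choice the article describes: devices never talk to each other directly, they only read and write state through the hub, which is what lets Nest mediate permissions and privacy.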

Developers are excited about the program because it means they can learn more about users than they could before. One partner in the program, who didn't want to be named, said that the extra data they could collect from Nest's devices could help them become more competitive in their own field. "We can't live with just the information we get naturally," they said.

….

Opening up to other services is integral to Nest’s re-invention of the humble thermostat, which some say parallels the way Apple reinvented the mobile phone. “It’s going to be a huge, huge game changer and it’s only the beginning,” Wernick says, adding that the role of the smart thermostat may be gradually morphing “to being a controller for your house and lifestyle.”

The bigger advantage for Google is what it can learn through Nest and potentially through other devices connected to it. Wernick believes Google Now will eventually be able to use Nest as just another sensor point to learn more about people’s lifestyles, so it can better predict habits. “It’s going to understand your behavior better to help guide you in your life,” he says.

Would Google Now be able to use Nest’s data to serve Google’s all-important advertising ambitions?

….

“Nest is sticking its toe in home automation, which opens them to all the same problems that home automation companies are dealing with,” says Dan Tentler, founder of security company Aten Labs and expert on SHODAN, the search engine for Internet-connected devices.

Reference:
Google buys Nest

History of Computing


www.computerhistory.org/timeline/

See also the 2001 and 2007 posts (this is a more extensive, more current update):

And

History of Computing

http://us.penguingroup.com/static/packages/us/kurzweil/excerpts/timeline/timeline2.htm

TIME LINE
1950
Eckert and Mauchly develop UNIVAC, the first commercially marketed computer. It is used to compile the results of the U.S. census, marking the first time this census is handled by a programmable computer.
1950
In his paper “Computing Machinery and Intelligence,” Alan Turing presents the Turing Test, a means for determining whether a machine is intelligent.
1950
Commercial color television is first broadcast in the United States, and transcontinental black-and-white television is available within the next year.
1950
Claude Elwood Shannon writes “Programming a Computer for Playing Chess,” published in Philosophical Magazine.
1951
Eckert and Mauchly build EDVAC, which is the first computer to use the stored-program concept. The work takes place at the Moore School at the University of Pennsylvania.
1951
Paris is the host to a Cybernetics Congress.
1952
UNIVAC, used by the Columbia Broadcasting System (CBS) television network, successfully predicts the election of Dwight D. Eisenhower as president of the United States.
1952
Pocket-sized transistor radios are introduced.
1952
Nathaniel Rochester designs the 701, IBM’s first production-line electronic digital computer. It is marketed for scientific use.
1953
The chemical structure of the DNA molecule is discovered by James D. Watson and Francis H. C. Crick.
1953
Philosophical Investigations by Ludwig Wittgenstein and Waiting for Godot, a play by Samuel Beckett, are published. Both documents are considered of major importance to modern existentialism.
1953
Marvin Minsky and John McCarthy get summer jobs at Bell Laboratories.
1955
William Shockley’s Semiconductor Laboratory is founded, thereby starting Silicon Valley.
1955
The Remington Rand Corporation and Sperry Gyroscope join forces and become the Sperry-Rand Corporation. For a time, it presents serious competition to IBM.
1955
IBM introduces its first transistor calculator. It uses 2,200 transistors instead of the 1,200 vacuum tubes that would otherwise be required for equivalent computing power.
1955
A U.S. company develops the first design for a robotlike machine to be used in industry.
1955
IPL-II, the first artificial intelligence language, is created by Allen Newell, J. C. Shaw, and Herbert Simon.
1955
The new space program and the U.S. military recognize the importance of having computers with enough power to launch rockets to the moon and missiles through the stratosphere. Both organizations supply major funding for research.
1956
The Logic Theorist, which uses recursive search techniques to solve mathematical problems, is developed by Allen Newell, J. C. Shaw, and Herbert Simon.
1956
John Backus and a team at IBM invent FORTRAN, the first scientific computer-programming language.
1956
Stanislaw Ulam develops MANIAC I, the first computer program to beat a human being in a chess game.
1956
The first commercial watch to run on electric batteries is presented by the Lip company of France.
1956
The term Artificial Intelligence is coined at a computer conference at Dartmouth College.
1957
Kenneth H. Olsen founds Digital Equipment Corporation.
1957
The General Problem Solver, which uses recursive search to solve problems, is developed by Allen Newell, J. C. Shaw, and Herbert Simon.
1957
Noam Chomsky writes Syntactic Structures, in which he seriously considers the computation required for natural-language understanding. This is the first of the many important works that will earn him the title Father of Modern Linguistics.
1958
An integrated circuit is created by Texas Instruments’ Jack St. Clair Kilby.
1958
The Artificial Intelligence Laboratory at the Massachusetts Institute of Technology is founded by John McCarthy and Marvin Minsky.
1958
Allen Newell and Herbert Simon make the prediction that a digital computer will be the world’s chess champion within ten years.
1958
LISP, an early AI language, is developed by John McCarthy.
1958
The Defense Advanced Research Projects Agency, which will fund important computer-science research for years in the future, is established.
1958
Seymour Cray builds the Control Data Corporation 1604, the first fully transistorized supercomputer.
1958-1959
Jack Kilby and Robert Noyce each develop the computer chip independently. The computer chip leads to the development of much cheaper and smaller computers.
1959
Arthur Samuel completes his study in machine learning. The project, a checkers-playing program, performs as well as some of the best players of the time.
1959
Electronic document preparation increases the consumption of paper in the United States. This year, the nation will consume 7 million tons of paper. In 1986, 22 million tons will be used. American businesses alone will use 850 billion pages in 1981, 2.5 trillion pages in 1986, and 4 trillion in 1990.
1959
COBOL, a computer language designed for business use, is developed by Grace Murray Hopper, who was also one of the first programmers of the Mark I.
1959
Xerox introduces the first commercial copier.
1960
Theodore Harold Maiman develops the first laser. It uses a ruby cylinder.
1960
The recently established Defense Department’s Advanced Research Projects Agency substantially increases its funding for computer research.
1960
There are now about six thousand computers in operation in the United States.
1960s
Neural-net machines are quite simple and incorporate a small number of neurons organized in only one or two layers. These models are shown to be limited in their capabilities.
1961
The first time-sharing computer is developed at MIT.
1961
President John F. Kennedy provides the support for space project Apollo and inspiration for important research in computer science when he addresses a joint session of Congress, saying, “I believe we should go to the moon.”
1962
The world’s first industrial robots are marketed by a U.S. company.
1962
Frank Rosenblatt defines the Perceptron in his Principles of Neurodynamics. Rosenblatt first introduced the Perceptron, a simple processing element for neural networks, at a conference in 1959.
1963
The Artificial Intelligence Laboratory at Stanford University is founded by John McCarthy.
1963
The influential Steps Toward Artificial Intelligence by Marvin Minsky is published.
1963
Digital Equipment Corporation announces the PDP-8, which is the first successful minicomputer.
1964
IBM introduces its 360 series, thereby further strengthening its leadership in the computer industry.
1964
Thomas E. Kurtz and John G. Kemeny of Dartmouth College invent BASIC (Beginner's All-purpose Symbolic Instruction Code).
1964
Daniel Bobrow completes his doctoral work on Student, a natural-language program that can solve high-school-level word problems in algebra.
1964
Gordon Moore’s prediction, made this year, says integrated circuits will double in complexity each year. This will become known as Moore’s Law and prove true (with later revisions) for decades to come.
1964
Marshall McLuhan, via his Understanding Media, foresees the potential for electronic media, especially television, to create a “global village” in which “the medium is the message.”
1965
The Robotics Institute at Carnegie Mellon University, which will become a leading research center for AI, is founded by Raj Reddy.
1965
Hubert Dreyfus presents a set of philosophical arguments against the possibility of artificial intelligence in a RAND corporate memo entitled “Alchemy and Artificial Intelligence.”
1965
Herbert Simon predicts that by 1985 “machines will be capable of doing any work a man can do.”
1966
The Amateur Computer Society, possibly the first personal computer club, is founded by Stephen B. Gray. The Amateur Computer Society Newsletter is one of the first magazines about computers.
1967
The first internal pacemaker is developed by Medtronics. It uses integrated circuits.
1968
Gordon Moore and Robert Noyce found Intel (Integrated Electronics) Corporation.
1968
The idea of a computer that can see, speak, hear, and think sparks imaginations when HAL is presented in the film 2001: A Space Odyssey, by Arthur C. Clarke and Stanley Kubrick.
1969
Marvin Minsky and Seymour Papert present the limitation of single-layer neural nets in their book Perceptrons. The book’s pivotal theorem shows that a Perceptron is unable to determine if a line drawing is fully connected. The book essentially halts funding for neural-net research.
1970
The GNP, on a per capita basis and in constant 1958 dollars, is $3,500, or more than six times as much as a century before.
1970
The floppy disc is introduced for storing data in computers.
c. 1970
Researchers at the Xerox Palo Alto Research Center (PARC) develop the first personal computer, called Alto. PARC’s Alto pioneers the use of bit-mapped graphics, windows, icons, and mouse pointing devices.
1970
Terry Winograd completes his landmark thesis on SHRDLU, a natural-language system that exhibits diverse intelligent behavior in the small world of children’s blocks. SHRDLU is criticized, however, for its lack of generality.
1971
The Intel 4004, the first microprocessor, is introduced by Intel.
1971
The first pocket calculator is introduced. It can add, subtract, multiply, and divide.
1972
Continuing his criticism of the capabilities of AI, Hubert Dreyfus publishes What Computers Can’t Do, in which he argues that symbol manipulation cannot be the basis of human intelligence.
1973
Stanley H. Cohen and Herbert W. Boyer show that DNA strands can be cut, joined, and then reproduced by inserting them into the bacterium Escherichia coli. This work creates the foundation for genetic engineering.
1974
Creative Computing starts publication. It is the first magazine for home computer hobbyists.
1974
The 8-bit 8080, which is the first general-purpose microprocessor, is announced by Intel.
1975
Sales of microcomputers in the United States reach more than five thousand, and the first personal computer, the Altair 8800, is introduced. It has 256 bytes of memory.
1975
BYTE, the first widely distributed computer magazine, is published.
1975
Gordon Moore revises his observation on the doubling rate of transistors on an integrated circuit from twelve months to twenty-four months.
1976
Kurzweil Computer Products introduces the Kurzweil Reading Machine (KRM), the first print-to-speech reading machine for the blind. Based on the first omni-font (any font) optical character recognition (OCR) technology, the KRM scans and reads aloud any printed materials (books, magazines, typed documents).
1976
Stephen G. Wozniak and Steven P. Jobs found Apple Computer Corporation.
1977
The concept of true-to-life robots with convincing human emotions is imaginatively portrayed in the film Star Wars.
1977
For the first time, a telephone company conducts large-scale experiments with fiber optics in a telephone system.
1977
The Apple II, the first personal computer to be sold in assembled form and the first with color graphics capability, is introduced and successfully marketed. (JCR buys first Apple II at KO in 1978.)
1978
Speak & Spell, a computerized learning aid for young children, is introduced by Texas Instruments. This is the first product that electronically duplicates the human vocal tract on a chip.
1979
In a landmark study by nine researchers published in the Journal of the American Medical Association, the performance of the computer program MYCIN is compared with that of doctors in diagnosing ten test cases of meningitis. MYCIN does at least as well as the medical experts. The potential of expert systems in medicine becomes widely recognized.
1979
Dan Bricklin and Bob Frankston establish the personal computer as a serious business tool when they develop VisiCalc, the first electronic spreadsheet.
1980
AI industry revenue is a few million dollars this year.
1980s
As neuron models are becoming potentially more sophisticated, the neural network paradigm begins to make a comeback, and networks with multiple layers are commonly used.
1981
Xerox introduces the Star Computer, thus launching the concept of Desktop Publishing. Apple's LaserWriter, available in 1985, will further increase the viability of this inexpensive and efficient way for writers and artists to create their own finished documents.
1981
IBM introduces its Personal Computer (PC).
1981
The prototype of the Bubble Jet printer is presented by Canon.
1982
Compact disc players are marketed for the first time.
1982
Mitch Kapor presents Lotus 1-2-3, an enormously popular spreadsheet program.
1983
Fax machines are fast becoming a necessity in the business world.
1983
The Musical Instrument Digital Interface (MIDI) is presented in Los Angeles at the first North American Music Manufacturers show.
1983
Six million personal computers are sold in the United States.
1984
The Apple Macintosh introduces the “desktop metaphor,” pioneered at Xerox, including bit-mapped graphics, icons, and the mouse.
1984
William Gibson uses the term cyberspace in his book Neuromancer.
1984
The Kurzweil 250 (K250) synthesizer, considered to be the first electronic instrument to successfully emulate the sounds of acoustic instruments, is introduced to the market.
1985
Marvin Minsky publishes The Society of Mind, in which he presents a theory of the mind where intelligence is seen to be the result of proper organization of a hierarchy of minds with simple mechanisms at the lowest level of the hierarchy.
1985
MIT’s Media Laboratory is founded by Jerome Weisner and Nicholas Negroponte. The lab is dedicated to researching possible applications and interactions of computer science, sociology, and artificial intelligence in the context of media technology.
1985
There are 116 million jobs in the United States, compared to 12 million in 1870. In the same period, the number of those employed has grown from 31 percent to 48 percent, and the per capita GNP in constant dollars has increased by 600 percent. These trends show no signs of abating.
1986
Electronic keyboards account for 55.2 percent of the American musical keyboard market, up from 9.5 percent in 1980.
1986
Life expectancy is about 74 years in the United States. Only 3 percent of the American workforce is involved in the production of food. Fully 76 percent of American adults have high-school diplomas, and 7.3 million U.S. students are enrolled in college.
1987
NYSE stocks have their greatest single-day loss due, in part, to computerized trading.
1987
Current speech systems can provide any one of the following: a large vocabulary, continuous speech recognition, or speaker independence.
1987
Robotic-vision systems are now a $300 million industry and will grow to $800 million by 1990.
1988
Computer memory today costs only one hundred millionth of what it did in 1950.
1988
Marvin Minsky and Seymour Papert publish a revised edition of Perceptrons in which they discuss recent developments in neural network machinery for intelligence.
1988
In the United States, 4,700,000 microcomputers, 120,000 minicomputers, and 11,500 mainframes are sold this year.
1988
W. Daniel Hillis’s Connection Machine is capable of 65,536 computations at the same time.
1988
Notebook computers are replacing the bigger laptops in popularity.
1989
Intel introduces the 16-megahertz (MHz) 80386SX, 2.5 MIPS microprocessor.
1990
Nautilus, the first CD-ROM magazine, is published.
1990
The development of HyperText Markup Language by researcher Tim Berners-Lee and its release by CERN, the high-energy physics laboratory in Geneva, Switzerland, leads to the conception of the World Wide Web.
1991
Cell phones and e-mail are increasing in popularity as business and personal communication tools.
1992
The first double-speed CD-ROM drive becomes available from NEC.
1992
The first personal digital assistant (PDA), a hand-held computer, is introduced at the Consumer Electronics Show in Chicago. The developer is Apple Computer.
1993
The Pentium 32-bit microprocessor is launched by Intel. This chip has 3.1 million transistors.
1994
The World Wide Web emerges.
1994
America Online now has more than 1 million subscribers.
1994
Scanners and CD-ROMs are becoming widely used.
1994
Digital Equipment Corporation introduces a 300-MHz version of the Alpha AXP processor that executes 1 billion instructions per second.
1996
Compaq Computer and NEC Computer Systems ship hand-held computers running Windows CE.
1996
NEC Electronics ships the R4101 processor for personal digital assistants. It includes a touch-screen interface.
1997
Deep Blue defeats Garry Kasparov, the world chess champion, in a regulation tournament.
1997
Dragon Systems introduces Naturally Speaking, the first continuous-speech dictation software product.
1997
Video phones are being used in business settings.
1997
Face-recognition systems are beginning to be used in payroll check-cashing machines.
1998
The Dictation Division of Lernout & Hauspie Speech Products (formerly Kurzweil Applied Intelligence) introduces Voice Xpress Plus, the first continuous-speech-recognition program with the ability to understand natural-language commands.
1998
Routine business transactions over the phone are beginning to be conducted between a human customer and an automated system that engages in a verbal dialogue with the customer (e.g., United Airlines reservations).
1998
Investment funds are emerging that use evolutionary algorithms and neural nets to make investment decisions (e.g., Advanced Investment Technologies).
1998
The World Wide Web is ubiquitous. It is routine for high-school students and local grocery stores to have web sites.
1998
Automated personalities, which appear as animated faces that speak with realistic mouth movements and facial expressions, are working in laboratories. These personalities respond to the spoken statements and facial expressions of their human users. They are being developed to be used in future user interfaces for products and services, as personalized research and business assistants, and to conduct transactions.
1998
Microvision’s Virtual Retina Display (VRD) projects images directly onto the user’s retinas. Although expensive, consumer versions are projected for 1999.
1998
“Bluetooth” technology is being developed for “body” local area networks (LANs) and for wireless communication between personal computers and associated peripherals. Wireless communication is being developed for high-bandwidth connection to the Web.
1999
Ray Kurzweil’s The Age of Spiritual Machines: When Computers Exceed Human Intelligence is published, available at your local bookstore!
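The Moore's Law entries in the timeline above (the 1964 prediction of annual doubling, revised in 1975 to a twenty-four-month doubling period) can be illustrated with a short calculation. The starting figure of roughly 2,300 transistors for the 1971 Intel 4004 is a commonly cited number, used here only for illustration:

```python
# Illustrative sketch of Moore's Law as given in the timeline: transistor
# counts doubling every 24 months (the 1975 revision of the 1964 prediction).
def transistors(initial_count, years, doubling_period_years=2):
    """Project a transistor count forward under a fixed doubling period."""
    return initial_count * 2 ** (years / doubling_period_years)

# Roughly 2,300 transistors (a commonly cited figure for the 1971 Intel 4004)
# projected 22 years forward to 1993 under a 2-year doubling period:
projected = transistors(2_300, 22)
print(f"{projected:,.0f} transistors")
```

The projection comes out in the low millions, which is consistent in order of magnitude with the 3.1 million transistors the timeline reports for the 1993 Pentium.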

FORECASTS:

2009
A $1,000 personal computer can perform about a trillion calculations per second.
Personal computers with high-resolution visual displays come in a range of sizes, from those small enough to be embedded in clothing and jewelry up to the size of a thin book.
Cables are disappearing. Communication between components uses short-distance wireless technology. High-speed wireless communication provides access to the Web.
The majority of text is created using continuous speech recognition. Also ubiquitous are language user interfaces (LUIs).
Most routine business transactions (purchases, travel, reservations) take place between a human and a virtual personality. Often, the virtual personality includes an animated visual presence that looks like a human face.
Although traditional classroom organization is still common, intelligent courseware has emerged as a common means of learning.
Pocket-sized reading machines for the blind and visually impaired, "listening machines" (speech-to-text conversion) for the deaf, and computer-controlled orthotic devices for paraplegic individuals result in a growing perception that primary disabilities do not necessarily impart handicaps.
Translating telephones (speech-to-speech language translation) are commonly used for many language pairs.
Accelerating returns from the advance of computer technology have resulted in continued economic expansion. Price deflation, which had been a reality in the computer field during the twentieth century, is now occurring outside the computer field. The reason for this is that virtually all economic sectors are deeply affected by the accelerating improvement in the price performance of computing.
Human musicians routinely jam with cybernetic musicians.
Bioengineered treatments for cancer and heart disease have greatly reduced the mortality from these diseases.
The neo-Luddite movement is growing.
2019
A $1,000 computing device (in 1999 dollars) is now approximately equal to the computational ability of the human brain.
Computers are now largely invisible and are embedded everywhere -- in walls, tables, chairs, desks, clothing, jewelry, and bodies.
Three-dimensional virtual reality displays, embedded in glasses and contact lenses, as well as auditory "lenses," are used routinely as primary interfaces for communication with other persons, computers, the Web, and virtual reality.
Most interaction with computing is through gestures and two-way natural-language spoken communication.
Nanoengineered machines are beginning to be applied to manufacturing and process-control applications.
High-resolution, three-dimensional visual and auditory virtual reality and realistic all-encompassing tactile environments enable people to do virtually anything with anybody, regardless of physical proximity.
Paper books or documents are rarely used and most learning is conducted through intelligent, simulated software-based teachers.
Blind persons routinely use eyeglass-mounted reading-navigation systems. Deaf persons read what other people are saying through their lens displays.
Paraplegic and some quadriplegic persons routinely walk and climb stairs through a combination of computer-controlled nerve stimulation and exoskeletal robotic devices.
The vast majority of transactions include a simulated person.
Automated driving systems are now installed in most roads.
People are beginning to have relationships with automated personalities and use them as companions, teachers, caretakers, and lovers.
Virtual artists, with their own reputations, are emerging in all of the arts.
There are widespread reports of computers passing the Turing Test, although these tests do not meet the criteria established by knowledgeable observers.
2029
A $1,000 (in 1999 dollars) unit of computation has the computing capacity of approximately 1,000 human brains.
Permanent or removable implants (similar to contact lenses) for the eyes as well as cochlear implants are now used to provide input and output between the human user and the worldwide computing network.
Direct neural pathways have been perfected for high-bandwidth connection to the human brain. A range of neural implants is becoming available to enhance visual and auditory perception and interpretation, memory, and reasoning.
Automated agents are now learning on their own, and significant knowledge is being created by machines with little or no human intervention. Computers have read all available human- and machine-generated literature and multimedia material.
There is widespread use of all-encompassing visual, auditory, and tactile communication using direct neural connections, allowing virtual reality to take place without having to be in a "total touch enclosure."
The majority of communication does not involve a human. The majority of communication involving a human is between a human and a machine.
There is almost no human employment in production, agriculture, or transportation. Basic life needs are available for the vast majority of the human race.
There is a growing discussion about the legal rights of computers and what constitutes being "human." Although computers routinely pass apparently valid forms of the Turing Test, controversy persists about whether or not machine intelligence equals human intelligence in all of its diversity.
Machines claim to be conscious. These claims are largely accepted.
2049
The common use of nanoproduced food, which has the correct nutritional composition and the same taste and texture of organically produced food, means that the availability of food is no longer affected by limited resources, bad crop weather, or spoilage.
Nanobot swarm projections are used to create visual-auditory-tactile projections of people and objects in real reality.
2072
Picoengineering (developing technology at the scale of picometers, or trillionths of a meter) becomes practical.
By the year 2099
There is a strong trend toward a merger of human thinking with the world of machine intelligence that the human species initially created.
There is no longer any clear distinction between humans and computers.
Most conscious entities do not have a permanent physical presence.
Machine-based intelligences derived from extended models of human intelligence claim to be human, although their brains are not based on carbon-based cellular processes, but rather electronic and photonic equivalents. Most of these intelligences are not tied to a specific computational processing unit.
The number of software-based humans vastly exceeds those still using native neuron-cell-based computation.
Even among those human intelligences still using carbon-based neurons, there is ubiquitous use of neural-implant technology, which provides enormous augmentation of human perceptual and cognitive abilities. Humans who do not utilize such implants are unable to meaningfully participate in dialogues with those who do.
Because most information is published using standard assimilated knowledge protocols, information can be instantly understood. The goal of education, and of intelligent beings, is discovering new knowledge to learn.
Femtoengineering (engineering at the scale of femtometers, or one thousandth of a trillionth of a meter) proposals are controversial.
Life expectancy is no longer a viable term in relation to intelligent beings.
Some many millenniums hence . . .
Intelligent beings consider the fate of the Universe.