Fifty thousand years ago with the rise of Homo sapiens sapiens.
Ten thousand years ago with the invention of civilization.
Five hundred years ago with the invention of the printing press.
Fifty years ago with the invention of the computer.
In less than thirty years, it will end.
Jaan Tallinn stumbled across these words in 2007, in an online essay called Staring into the Singularity. The “it” was human civilisation. Humanity would cease to exist, predicted the essay’s author, with the emergence of superintelligence, or AI that surpasses human-level intelligence in a broad array of areas.
Tallinn, an Estonia-born computer programmer, has a background in physics and a propensity to approach life like one big programming problem. In 2003, he co-founded Skype, developing the backend for the app. He cashed in his shares after eBay bought it two years later, and now he was casting about for something to do. Staring into the Singularity mashed up computer code, quantum physics and Calvin and Hobbes quotes. He was hooked.
On 21 November 2015, James Bates had three friends over to watch the Arkansas Razorbacks play the Mississippi State Bulldogs. Bates, who lived in Bentonville, Arkansas, and his friends drank beer and did vodka shots as a tight football game unfolded. After the Razorbacks lost 51–50, one of the men went home; the others went out to Bates’s hot tub and continued to drink. Bates would later say that he went to bed around 1am and that the other two men – one of whom was named Victor Collins – planned to crash at his house for the night. When Bates got up the next morning, he didn’t see either of his friends. But when he opened his back door, he saw a body floating face-down in the hot tub. It was Collins.
A grim local affair, the death of Victor Collins would never have attracted international attention if it were not for a facet of the investigation that pitted the Bentonville authorities against one of the world’s most powerful companies – Amazon. Collins’ death triggered a broad debate about privacy in the voice-computing era, a discussion that makes the big tech companies squirm.
The media bombards us with news about the threats to our security: will China invade Taiwan as a punishment for the US trade war? Will the US attack Iran? Will the EU descend into chaos after the Brexit mess? But I think there is one topic which – in the long view, at least – dwarfs all others: the effort of the US to contain the expansion of Huawei. Why?
Today’s digital network controls and regulates our lives: most of our activities (and passivities) are now registered in some digital cloud that also permanently evaluates us, tracing not only our acts but also our emotional states. When we experience ourselves as most free (surfing the web, where everything is available), we are totally “externalised” and subtly manipulated. The digital network gives new meaning to the old slogan “the personal is political”.
all but pleads guilty to a severe form of data addiction, confessing its digital sins and promising to reinvent itself as a privacy-worshiping denizen of the global village, the foundations of Big Tech’s cultural hegemony appear to be crumbling. Most surprisingly, it’s in the United States, Silicon Valley’s home territory, where they seem to be the weakest.
Even in these times of extreme polarization, Trump, who has habitual outbursts against censorship by social media platforms, eagerly joins left-wing politicians like Elizabeth Warren and Bernie Sanders in presenting Big Tech as America’s greatest menace. The recent call by Chris Hughes, Facebook’s co-founder, to break up the firm hints at things to come.
Neither the Silicon Valley moguls nor the financial markets seem to care, though. The recent decision by Warren Buffett – one of America’s most successful but also most conservative investors – to finally invest in Amazon is probably a better indication of what awaits the tech giants in the medium term: more lavish initial public offerings, more Saudi cash, more promises to apply artificial intelligence to resolve the problems caused by artificial intelligence.
Hundreds of human reviewers across the globe, from Romania to Venezuela, listen to audio clips recorded from Amazon Echo speakers, usually without owners’ knowledge, Bloomberg reported last week. We knew Alexa was listening; now we know someone else is, too.
global review team fine-tunes the Amazon Echo’s software by listening to clips of users asking Alexa questions or issuing commands, and then verifying whether Alexa responded appropriately. The team also annotates specific words the device struggles with when it’s addressed in
According to Amazon, users can opt out of the service, but they seem to be enrolled automatically. Amazon says these recordings are anonymized, with any identifying information removed, and that each of these recorded exchanges came only after users engaged with the device by uttering the “wake word.” But in the examples in Bloomberg’s report—a woman overheard singing in the shower, a child screaming for help—the users seem unaware of the device.
His speech then took a turn: “Now, we’ve had a lot of interesting tools over the years, but fundamentally the way that we work with those tools is through our bodies.” Then a further turn: “Here’s a situation that I know all of you know very well—your frustration with your smartphones, right? This is another tool, right? And we are still communicating with these tools through our
And then it made a leap: “I would claim to you that these tools are not so smart. And maybe one of the reasons why they’re not so smart is because they’re not connected to our brains. Maybe if we could hook those devices into our brains, they could have some idea of what our goals are, what our intent is, and what our frustration is.”
So began “Beyond Bionics,” a talk by Justin C. Sanchez, then an associate professor of biomedical engineering and neuroscience at the University of Miami, and a faculty member of the Miami Project to Cure Paralysis. He was speaking at a TEDx conference in Florida in 2012. What lies beyond bionics? Sanchez described his work as trying to “understand the neural code,” which would involve putting “very fine microwire electrodes”—the diameter of a human hair—“into the brain.” When we do that, he said, we would be able to “listen in to the music of the brain” and “listen in to what somebody’s motor intent might be” and get a glimpse of “your goals and your rewards” and then “start to understand how the brain encodes behavior.”
You’re probably most familiar with recognition systems, like Facebook’s photo-tagging recommender and Apple’s FaceID, which can identify specific individuals. Detection systems, on the other hand, determine whether a face is present at all; and analysis systems try to identify aspects like gender and race. All of these systems are now being used for a variety of purposes, from hiring and retail to security and surveillance.
Many people believe that such systems are both highly accurate and impartial. The logic goes that airport security staff can get tired and police can misjudge suspects, but a well-trained AI system should be able to consistently identify or categorize any image of a face.
As a teenager in Maryland in the 1950s, Mary Allen Wilkes had no plans to become a software pioneer — she dreamed of being a litigator. One day in junior high in 1950, though, her geography teacher surprised her with a comment: “Mary Allen, when you grow up, you should be a computer programmer!” Wilkes had no idea what a programmer was; she wasn’t even sure what a computer was. Relatively few Americans were. The first digital computers had been built barely a decade earlier at universities and in government labs.
By the time she was graduating from Wellesley College in 1959, she knew her legal ambitions were out of reach. Her mentors all told her the same thing: Don’t even bother applying to law school. “They said: ‘Don’t do it. You may not get in. Or if you get in, you may not get out. And if you get out, you won’t get a job,’” she recalls. If she lucked out and got hired, it wouldn’t be to argue cases in front of a judge. More likely, she would be a law librarian, a legal secretary, someone processing trusts and estates.
But Wilkes remembered her junior high school teacher’s suggestion. In college, she heard that computers were supposed to be the key to the future. She knew that the Massachusetts Institute of Technology had a few of them. So on the day of her graduation, she had her parents drive her over to M.I.T. and marched into the school’s employment office. “Do you have any jobs for computer programmers?” she asked. They did, and they hired her.
It might seem strange now that they were happy to take on a random applicant with absolutely no experience in computer programming. But in those days, almost nobody had any experience writing code. The discipline did not yet really exist; there were vanishingly few college courses in it, and no majors. (Stanford, for example, didn’t create a computer-science department until 1965.) So instead, institutions that needed programmers just used aptitude tests to evaluate applicants’ ability to think logically. Wilkes happened to have some intellectual preparation: As a philosophy major, she had studied symbolic logic, which can involve creating arguments and inferences by stringing together and/or statements in a way that resembles coding.
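The kinship between symbolic logic and coding is easy to make concrete. A hypothetical sketch (not from Wilkes’s coursework): the classic inference rule modus ponens, built out of nothing but the and/or connectives a logic course drills, checked against every truth assignment.

```python
# Illustrative sketch: "p implies q" is equivalent to (not p) or q.
def implies(p, q):
    return (not p) or q

# Modus ponens: from p and (p implies q), we may infer q.
# Verify the inference is valid under all four truth assignments.
for p in (False, True):
    for q in (False, True):
        premise = p and implies(p, q)
        assert implies(premise, q)

print("modus ponens holds under all assignments")
```

Stringing connectives together and testing whether a conclusion follows is, in miniature, the same habit of mind an aptitude test for programmers was probing.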
In a series of remarkably prescient articles, the first of which was published in the German newspaper Frankfurter Allgemeine Zeitung in the summer of 2013, Shoshana Zuboff pointed to an alarming phenomenon: the digitization of everything was giving technology firms immense social power. From the modest beachheads inside our browsers, they conquered, Blitzkrieg-style, our homes, cars, toasters, and even mattresses. Toothbrushes, sneakers, vacuum cleaners: our formerly dumb household subordinates were becoming our “smart” bosses. Their business models turned data into gold, favoring further expansion.
Google and Facebook were restructuring the world, not just solving its problems. The general public, seduced by the tech world’s youthful, hoodie-wearing ambassadors and lobotomized by TED Talks, was clueless. Zuboff saw a logic to this digital mess; tech firms were following rational—and terrifying—imperatives. To attack them for privacy violations was to miss the scale of the transformation—a tragic miscalculation that has plagued much of the current activism against Big Tech.
The 18th of March 2018 was the day tech insiders had been dreading. That night, a new moon added almost no light to a poorly lit four-lane road in Tempe, Arizona, as a specially adapted Uber Volvo XC90 detected an object ahead. Part of the modern gold rush to develop self-driving vehicles, the SUV had been driving autonomously, with no input from its human backup driver, for 19 minutes. An array of radar and light-emitting lidar sensors allowed onboard algorithms to calculate that, given their host vehicle’s steady speed of 43mph, the object was six seconds away – assuming it remained stationary. But objects in roads seldom remain stationary, so more algorithms crawled a database of recognizable mechanical and biological entities, searching for a fit from which this one’s likely behavior could be inferred.
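A rough back-of-the-envelope check shows what that six-second margin meant in distance, assuming constant speed (the figures below are derived from the reported 43mph, not from the crash investigation itself):

```python
# Convert the Volvo's reported speed from miles per hour to metres per second.
speed_mph = 43
speed_ms = speed_mph * 1609.344 / 3600  # ~19.2 m/s

# At constant speed, distance covered in the estimated six seconds
# before the vehicle would reach the stationary object.
time_s = 6
distance_m = speed_ms * time_s  # ~115 m

print(f"{speed_ms:.1f} m/s, {distance_m:.0f} m to the object")
```

Roughly 115 metres: a comfortable margin for a braking car, but only if the object is correctly classified in time.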
At first the computer drew a blank; seconds later, it decided it was dealing with another car, expecting it to drive away and require no special action. Only at the last second was a clear identification found – a woman with a bike, shopping bags hanging confusingly from handlebars, doubtless assuming the Volvo would route around her as any ordinary vehicle would. Barred from taking evasive action on its own, the computer abruptly handed control back to its human master, but the master wasn’t paying attention. Elaine Herzberg, aged 49, was struck and killed, leaving more reflective members of the tech community with two uncomfortable questions: was this algorithmic tragedy inevitable? And how used to such incidents would we, should we, be prepared to get?