This summer I spent a month in Beijing. I’d last lived in China in 2016,
and I was relieved to find my favorite noodle shops in their usual
niches. But this time round, navigating the city felt inexplicably
different. The cabs I tried to hail passed me by. On the subway, other
riders jostled past me, swiping their phones at the turnstiles as I
fumbled with my ticket. When I tried to sneak into the cafeteria in
Renmin University for a cheap lunch, clutching my grubby backpack, I
made it past the guards only to be stopped at the cash register—apart
from student cards, the only form of payment accepted was Alipay.
Some people worry about robots taking work away from human beings, but there are a few jobs that even these sceptics admit most folk would not want. One is cleaning up radioactive waste, particularly when it is inside a nuclear power station—and especially if the power station in question has suffered a recent accident.
Those who do handle radioactive material must first don protective suits that are inherently cumbersome and are further encumbered by the air hoses needed to allow the wearer to breathe. Even then their working hours are strictly limited, in order to avoid prolonged exposure to radiation and because operating in the suits is exhausting. Moreover, some sorts of waste are too hazardous for even the besuited to approach safely.
Fifty thousand years ago with the rise of Homo sapiens sapiens.
Ten thousand years ago with the invention of civilization.
Five hundred years ago with the invention of the printing press.
Fifty years ago with the invention of the computer.
In less than thirty years, it will end.
Jaan Tallinn stumbled across these words in 2007, in an online essay called Staring into the Singularity. The “it” was human civilisation. Humanity would cease to exist, predicted the essay’s author, with the emergence of superintelligence, or AI that surpasses human-level intelligence in a broad array of areas.
Tallinn, an Estonia-born computer programmer, has a background in physics and a propensity to approach life like one big programming problem. In 2003, he co-founded Skype, developing the backend for the app. He cashed in his shares after eBay bought it two years later, and now he was casting about for something to do. Staring into the Singularity mashed up computer code, quantum physics and Calvin and Hobbes quotes. He was hooked.
On 21 November 2015, James Bates had three friends over to watch the Arkansas Razorbacks play the Mississippi State Bulldogs. Bates, who lived in Bentonville, Arkansas, and his friends drank beer and did vodka shots as a tight football game unfolded. After the Razorbacks lost 51–50, one of the men went home; the others went out to Bates’s hot tub and continued to drink. Bates would later say that he went to bed around 1am and that the other two men – one of whom was named Victor Collins – planned to crash at his house for the night. When Bates got up the next morning, he didn’t see either of his friends. But when he opened his back door, he saw a body floating face-down in the hot tub. It was Collins.
A grim local affair, the death of Victor Collins would never have attracted international attention if it were not for a facet of the investigation that pitted the Bentonville authorities against one of the world’s most powerful companies – Amazon. Collins’ death triggered a broad debate about privacy in the voice-computing era, a discussion that makes the big tech companies squirm.
We are already controlled by the digital giants, but Huawei’s expansion will usher in China-style surveillance
The media bombards us with news about the threats to our security: will China invade Taiwan as a punishment for the US trade war? Will the US attack Iran? Will the EU descend into chaos after the Brexit mess? But I think there is one topic which – in the long view, at least – dwarfs all others: the effort of the US to contain the expansion of Huawei. Why?
Today’s digital network controls and regulates our lives: most of our activities (and passivities) are now registered in some digital cloud that also permanently evaluates us, tracing not only our acts but also our emotional states. When we experience ourselves as free to the utmost (surfing the web, where everything is available), we are totally “externalised” and subtly manipulated. The digital network gives new meaning to the old slogan “the personal is political”.
As Facebook all but pleads guilty to a severe form of data addiction, confessing its digital sins and promising to reinvent itself as a privacy-worshiping denizen of the global village, the foundations of Big Tech’s cultural hegemony appear to be crumbling. Most surprisingly, it’s in the United States, Silicon Valley’s home territory, where they seem to be the weakest.
Even in these times of extreme polarization, Trump, with his habitual outbursts against censorship by social media platforms, eagerly joins left-wing politicians like Elizabeth Warren and Bernie Sanders in presenting Big Tech as America’s greatest menace. The recent call by Chris Hughes, Facebook’s co-founder, to break up the firm hints at things to come.
Neither the Silicon Valley moguls nor the financial markets seem to care, though. The recent decision by Warren Buffett – one of America’s most successful but also most conservative investors – to finally invest in Amazon is probably a better indication of what awaits the tech giants in the medium term: more lavish initial public offerings, more Saudi cash, more promises to apply artificial intelligence to resolve the problems caused by artificial intelligence.
Hundreds of human reviewers across the globe, from Romania to Venezuela, listen to audio clips recorded from Amazon Echo speakers, usually without owners’ knowledge, Bloomberg reported last week. We knew Alexa was listening; now we know someone else is, too.
This global review team fine-tunes the Amazon Echo’s software by listening to clips of users asking Alexa questions or issuing commands, and then verifying whether Alexa responded appropriately. The team also annotates specific words the device struggles with when it’s addressed in different accents.
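To make that workflow concrete, here is a purely illustrative sketch in Python of what a single review record might look like; every field name here is hypothetical and implies nothing about Amazon’s internal tooling.

    # Hypothetical shape of one human-review record (illustrative only,
    # not Amazon's schema). Requires Python 3.9+ for list[str].
    from dataclasses import dataclass, field

    @dataclass
    class ReviewItem:
        clip_id: str           # anonymized identifier for the audio clip
        transcript_auto: str   # what the speech recognizer heard
        transcript_human: str  # what the reviewer actually heard
        response_ok: bool      # did Alexa respond appropriately?
        hard_words: list[str] = field(default_factory=list)  # words the device struggled with

    item = ReviewItem(
        clip_id="clip-0001",
        transcript_auto="play some jass",
        transcript_human="play some jazz",
        response_ok=False,
        hard_words=["jazz"],
    )
    print(item.response_ok)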
According to Amazon, users can opt out of the service, but they seem to be enrolled automatically. Amazon says these recordings are anonymized, with any identifying information removed, and that each of these recorded exchanges came only after users engaged with the device by uttering the “wake word.” But in the examples in Bloomberg’s report—a woman overheard singing in the shower, a child screaming for help—the users seem unaware that the device was listening at all.
His speech then took a turn: “Now, we’ve had a lot of interesting tools over the years, but fundamentally the way that we work with those tools is through our bodies.” Then a further turn: “Here’s a situation that I know all of you know very well—your frustration with your smartphones, right? This is another tool, right? And we are still communicating with these tools through our bodies.”
And then it made a leap: “I would claim to you that these tools are not so smart. And maybe one of the reasons why they’re not so smart is because they’re not connected to our brains. Maybe if we could hook those devices into our brains, they could have some idea of what our goals are, what our intent is, and what our frustration is.”
So began “Beyond Bionics,” a talk by Justin C. Sanchez, then an associate professor of biomedical engineering and neuroscience at the University of Miami, and a faculty member of the Miami Project to Cure Paralysis. He was speaking at a TEDx conference in Florida in 2012. What lies beyond bionics? Sanchez described his work as trying to “understand the neural code,” which would involve putting “very fine microwire electrodes”—the diameter of a human hair—“into the brain.” When we do that, he said, we would be able to “listen in to the music of the brain” and “listen in to what somebody’s motor intent might be” and get a glimpse of “your goals and your rewards” and then “start to understand how the brain encodes behavior.”
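Sanchez’s own methods are not spelled out in the talk, but a classic, minimal version of decoding motor intent is a linear map fitted from electrode firing rates to movement velocity. The sketch below is exactly that, run on synthetic data; the shapes and numbers are assumptions for illustration, not anything recorded from a real brain.

    # Minimal linear decoder: firing rates on 32 electrodes -> 2D hand velocity.
    # Entirely synthetic; illustrates the idea, not any specific lab's method.
    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, n_electrodes = 500, 32

    true_weights = rng.normal(size=(n_electrodes, 2))  # electrodes -> (vx, vy)
    rates = rng.poisson(lam=5.0, size=(n_samples, n_electrodes)).astype(float)
    velocity = rates @ true_weights + rng.normal(scale=0.5, size=(n_samples, 2))

    # Least-squares fit: the "decoder" that turns neural activity into intent.
    weights, *_ = np.linalg.lstsq(rates, velocity, rcond=None)
    predicted = rates @ weights
    print("mean squared decoding error:", np.mean((predicted - velocity) ** 2))

In real systems the decoder is trained on trials where the intended movement is known (a cursor task, say), then used to drive a prosthetic from brain activity alone.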
You’re probably most familiar with recognition systems, like Facebook’s photo-tagging recommender and Apple’s FaceID, which can identify specific individuals. Detection systems, on the other hand, determine whether a face is present at all; and analysis systems try to identify aspects like gender and race. All of these systems are now being used for a variety of purposes, from hiring and retail to security and surveillance.
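To make the taxonomy concrete, here is a short sketch using the open-source face_recognition Python library: detection asks whether and where a face appears, while recognition compares each detected face against a known person. The analysis case (inferring attributes such as gender or race) needs a separately trained classifier and is omitted here; the image file names are placeholders.

    # Detection vs. recognition with the face_recognition library.
    # File names are placeholders; install with: pip install face_recognition
    import face_recognition

    image = face_recognition.load_image_file("crowd.jpg")

    # Detection: is a face present at all, and where?
    locations = face_recognition.face_locations(image)
    print(f"{len(locations)} face(s) detected")

    # Recognition: does any face match a specific, known individual?
    # (Assumes known_person.jpg contains exactly one detectable face.)
    known = face_recognition.face_encodings(
        face_recognition.load_image_file("known_person.jpg"))[0]
    for encoding in face_recognition.face_encodings(image, locations):
        match, = face_recognition.compare_faces([known], encoding)
        print("known person" if match else "stranger")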
Many people believe that such systems are both highly accurate and impartial. The logic goes that airport security staff can get tired and police can misjudge suspects, but a well-trained AI system should be able to consistently identify or categorize any image of a face.
As a teenager in Maryland in the 1950s, Mary Allen Wilkes had no plans to become a software pioneer — she dreamed of being a litigator. One day in junior high in 1950, though, her geography teacher surprised her with a comment: “Mary Allen, when you grow up, you should be a computer programmer!” Wilkes had no idea what a programmer was; she wasn’t even sure what a computer was. Relatively few Americans were. The first digital computers had been built barely a decade earlier at universities and in government labs.
By the time she was graduating from Wellesley College in 1959, she knew her legal ambitions were out of reach. Her mentors all told her the same thing: Don’t even bother applying to law school. “They said: ‘Don’t do it. You may not get in. Or if you get in, you may not get out. And if you get out, you won’t get a job,’ ” she recalls. If she lucked out and got hired, it wouldn’t be to argue cases in front of a judge. More likely, she would be a law librarian, a legal secretary, someone processing trusts and estates.
But Wilkes remembered her junior high school teacher’s suggestion. In college, she heard that computers were supposed to be the key to the future. She knew that the Massachusetts Institute of Technology had a few of them. So on the day of her graduation, she had her parents drive her over to M.I.T. and marched into the school’s employment office. “Do you have any jobs for computer programmers?” she asked. They did, and they hired her.
It might seem strange now that they were happy to take on a random applicant with absolutely no experience in computer programming. But in those days, almost nobody had any experience writing code. The discipline did not yet really exist; there were vanishingly few college courses in it, and no majors. (Stanford, for example, didn’t create a computer-science department until 1965.) So instead, institutions that needed programmers just used aptitude tests to evaluate applicants’ ability to think logically. Wilkes happened to have some intellectual preparation: As a philosophy major, she had studied symbolic logic, which can involve creating arguments and inferences by stringing together and/or statements in a way that resembles coding.
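The resemblance is easy to see in practice. The snippet below, a hypothetical modern illustration in Python rather than anything Wilkes wrote, verifies a textbook inference rule, modus ponens, by stringing together and/or statements over every truth assignment.

    # Symbolic logic as code: check that ((p -> q) and p) -> q holds for all
    # truth assignments, writing the implication p -> q as (not p) or q.
    from itertools import product

    for p, q in product([False, True], repeat=2):
        premise = ((not p) or q) and p
        assert (not premise) or q  # modus ponens never fails
    print("modus ponens is valid for every assignment")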