A year ago, you couldn’t go anywhere in Silicon Valley without being reminded in some way of Tristan Harris.
The former Googler was giving talks, appearing on podcasts, counseling Congress, sitting on panels, posing for photographers. The central argument of his evangelism—that the digital revolution had gone from expanding our minds to hijacking them—had hit the zeitgeist, and maybe even helped create it.
We were addicted to likes, retweets, and reshares, and our addiction was making us distracted and depressed. Democracy itself was faltering. Harris coined a series of phrases that became so popular they morphed into cliché. “The people behind the screen have a lot more power than the people in front of the screen,” he said, pithily explaining the power of engineers. He talked about the ability of technology to manipulate our basest instincts through “the race to the bottom of the brain stem.” The problem was “the attention economy.”
And most significantly, he popularized a three-word phrase—time well spent—that became a rallying cry for people arguing that we needed to look up from our damn phones from time to time. In February 2018, Harris founded an organization called the Center for Humane Technology. But then, oddly, he seemed to disappear.
What happened? It turns out he snuck off into seclusion to write on his walls. Today he’s rejoining the public conversation, and he’s doing it in part because he believes the old phrases—the words themselves—were tepid and insufficient. Talking about the attention economy or time well spent didn’t capture the true ability of modern technology to dismantle free will and create social anomie. The words didn’t say anything about the risks that increase as AI improves and as deepfakes proliferate.
Harris says language shapes reality, but in his estimation the language describing the real impact of technology wasn’t sufficient to illustrate the ever-darkening storms. So, months ago, he draped his office with white sheets of paper and began attacking them with dark markers.
He jotted phrases, made doodles, put things in all caps. He was looking for the right combination of words, a conceptual framework that could help reverse the trends tearing society apart. “There’s this sort of cacophony of grievances and scandals in the tech industry,” he says. “But there’s no coherent agenda about what specifically is wrong that we’re agreeing on, and what specifically we need to do, and what specifically we want.”
His brainstorming was almost manic: part Don Draper, part Carrie Mathison, and part John Nash as portrayed by Russell Crowe. He and his colleague Aza Raskin went down to the Esalen Institute in Big Sur, California, and covered the walls of their room with paper. They went back to San Francisco and did it again, scrawling phrases like human swiss cheese and lists of problems caused by technology. Addiction. Fake news. Rising populism. There was a sketch of a deer in headlights.
Recently, Harris gave WIRED a tour of the sketches via Zoom, pacing around the room and reading from the walls: “What if you had headgear for tech? Aligning your paleolithic instincts to your orthodontics for humanity; humanity headgear for humanity’s technological adolescence. Paleolithic headgear. Chewing on ourselves so we can chew on our biggest problems.”
As he struggled with the words, he had a few eureka moments. One was when he realized that the danger for humans isn’t when technology surpasses our strengths, like when machines powered by AI can make creative decisions and write symphonies better than Beethoven. The danger point is when computers can overpower our weaknesses—when algorithms can sense our emotional vulnerabilities and exploit them for profit.
Another breakthrough came in a meeting when he blurted out, “There’s Hurricane Cambridge Analytica, Hurricane Fake News, and there’s Hurricane Tech Addiction. And no one’s asking the question ‘Why are we getting all these hurricanes?’” He hadn’t thought of the problem that way until he said it, so he made a note. He fixated on E. O. Wilson’s concise observation that humans have “paleolithic emotions, medieval institutions, and god-like technology.”
Harris still didn’t have the right phrase, though. Occasionally he came up with something he liked but that wasn’t exactly it. Could he call what was happening “aggressive algorithms”? Not really. The algorithms weren’t actually aggressive. What about “hostile technology”? That didn’t entirely suit either.
He didn’t want to define the problem as one of evil technology companies. Even social media platforms do all sorts of good, and Harris, in fact, uses them all, albeit in grayscale. There are also plenty of technologies that don’t ever hack us, help elect fascists, or drive teens to cut themselves. Think about Adobe Photoshop or Microsoft Word. He needed a phrase that didn’t make him seem like a Luddite or a crank.
Finally, in February, he got it. He and Raskin had been spending time with someone whom Harris won’t identify, except to note that the mysterious friend consulted on the famous “story of stuff” video. In any case, the three of them were brainstorming, kicking around the concept of downgrading. “It feels like a downgrading of humans, a downgrading of humanity,” he remembers them saying, “a downgrading of our relationships, a downgrading of our attention, a downgrading of democracy, a downgrading of our sense of decency.”
That was it. Perfect, he decided. His fellow brainstormers agreed. And with that he started planning a big event at the SFJAZZ Center auditorium in San Francisco to explain his process and to unveil the phrase he thinks will help people understand how computers are changing our lives and our minds for the worse.
Today, on stage, in front of some of the most important people in the most important industry in the world, Harris will show a series of slides detailing his thought process and then building to the climax. The phrase he found after months of thought: human downgrading.
The story of Tristan Harris up to this point takes place in four acts. In the first act, he’s an undergraduate at Stanford, meandering among the palm trees, taking a class in the famous persuasive technology lab, and joining a generation of students who learned how to harness the magic of notifications, nudges, and streaks to convince people to keep using their products.
He is classmates with one of Instagram’s founders, Kevin Systrom, and Harris helps create a demo app with the other one, Mike Krieger. His contemporaries as undergraduates include, among others, Chris Cox, future head of product at Facebook, and Sam Altman, future president of Y Combinator. Evan Spiegel, CEO of Snap, comes right after them.
Google, founded on campus a decade earlier, has made a decent chunk of the faculty rich. The university’s president sits on the search company’s board. Stanford, in short, has become the world’s most successful startup incubator, one that also happens to sponsor a nationally ranked football team.
In the second act, Harris is working at Google but disillusioned by all the work the company devotes to locking people into their screens. Why does your phone have to buzz every time a new message comes into your Gmail, even when it’s just a newsletter from which you can’t unsubscribe?
The company, Harris realizes, has unparalleled power over a vast swath of humanity, and it makes him question whether all that power is going into making people’s lives richer. So he writes a manifesto in February 2013—“A Call to Minimize Distraction & Respect Users’ Attention”—that he sends to 10 friends.
Those friends send it to their own 10 friends, who then send it to their friends and so on. It’s Harris’ first experience with the delightfully recursive phenomenon of going viral by attacking virality. Google gives Harris the slightly awkward title of “design ethicist” but doesn’t actually change the onslaught of notifications it sends to users.
For act three, Harris takes the ideas in his manifesto mainstream. He leaves Google to focus on a nonprofit he’s founded called Time Well Spent, and he starts talking to reporters. He’s profiled in The Atlantic, alongside a huge photo of Harris standing with his eyes closed, as though he’s just produced an album of introspective acoustic love songs.
He appears on 60 Minutes. He gives an interview on Sam Harris’ podcast, which introduces a vast audience to his ideas and also explains how to pronounce his first name. (Think of Tristan and Isolde, not Tristan Thompson.) He talks to WIRED and advises Congress. Somehow, he’s able to do something almost no one else in Silicon Valley can do: give interviews and stay relentlessly on message while somehow appearing thoughtful rather than canned.
He also starts talking to tech companies. Because the executives have listened to his interviews or talked to people who have, they echo his language. He finds himself sitting in meetings and hearing billionaires unconsciously mimicking his phrases.
And then, in late 2017, he has the awkward experience of learning that Mark Zuckerberg has professed his allegiance to his signature idea in a quarterly earnings call and even used the phrase time well spent. At the beginning of 2018, the Facebook CEO revealed a major revision to the company’s core News Feed algorithm focused on meaningful interactions, which is essentially Facebook’s modification of Harris’ idea.
Harris thought Zuckerberg’s embrace of the phrase was a step in the right direction, and he was even more pleased when Apple and Google—which he calls “the central banks of the attention economy”—rolled out features to try to help people limit their addictions to their screens.
But he also considered the actions woefully insufficient. He said that Zuckerberg’s affinity for Time Well Spent was “dishonest” given the incentives of Facebook’s business model, which depends on selling users’ attention to advertisers. As for Apple’s and Google’s offerings—essentially providing people with information about how much time they spend on their phones and what they do there—he pronounced them Band-Aids on a wound that needs a tourniquet. “We set off a race to the top for who can build a better chart of where you spend your time on your phone,” he says. “That’s a pathetically small, insufficient race.”
At the tech companies he criticizes, Harris is viewed with both respect and resentment. To some, Harris provides an aspirational vision. Every tech platform was founded in idealism, some of which got lost in the gold rush but much of which remains. Others, though, view his critique as absurd. People don’t use social media platforms because they’re addicted; they use them because they provide value and connect people to friends, ideas, and information.
According to one executive at a company that Harris often criticizes, “Tristan sees humans as pawns incapable of managing their own lives. He thinks designers are infinitely powerful and can coerce people to do whatever they want. It is a pure farce.” The executive adds, “I like to imagine Tristan reviewing the latest restaurant. ‘They have clearly intentionally added flavor to this dish to make me want to come back and visit this business again. What scoundrels!’”
Harris, of course, doesn’t see it that way. Steve Jobs talked about technology as a “bicycle for the mind.” Harris’ argument last year was that the bicycle was now taking us places we didn’t want to go. His argument this year is that the tires are flat, the handlebars are broken, and there’s a truck coming right at us. And that’s why he decided to try to come up with something new for act four.
Tristan Harris had lots of options in 2018. He’s charismatic and eloquent, and he could probably raise millions for any idea he wanted. He could play the former Google product manager card whenever someone accused him of not understanding the industry he’s criticizing. He could have sold out and joined a large tech company, offering them his implicit endorsement in exchange for stock options. He could have gone all Beto O’Rourke, driving from city to city, charming the media and trying to start a movement. He also could have let the whole thing get to him: microdosing, moving to Humboldt County, making his own sauerkraut.
His actual choice—starting an organization called the Center for Humane Technology and then obsessing over language—seems, in one way, remarkably unambitious. If you really believe that the most powerful companies in the world are destroying the human species, shouldn’t you counterattack with more than a Sharpie and a thesaurus?
Harris doesn’t buy that argument. For one thing, he’s good at language, so why shouldn’t he focus on it? More importantly, he believes in its power. Pressed on this point, he recalls a moment back at Stanford. “We studied the power of language—semiotics and Alfred Korzybski and people like him,” he says. “And they have this concept that something doesn’t exist until there’s a language symbol for it. I used to think of that as a kind of a poetic thing. But I’m really convinced now that language actually does create things, and it creates momentum and pressure. That’s why we’re focused on it.”
He believes this phenomenon was at play over the past two years. Yes, the world began to question Silicon Valley in part because of the election of Donald Trump and in part because our addiction to our devices seemed so obvious. But the sense of alarm also spiked because people had language that allowed them to name the icky feelings they had about what their phones were doing to them and to society. They had language that helped them center their thoughts and thus their critique.
Harris is sometimes called the conscience of Silicon Valley, but it’s more accurate to say that he’s the spokesperson for the conscience of Silicon Valley. His campaign has been run without writing code, without hiring engineers, and without Harris getting arrested on a picket line by the Menlo Park police. Oddly, his critics sometimes embrace him more tightly than his allies. And he’s completely comfortable with this role. All he wants is the perfect phrase.
But will this new phrase catch on? Will there be network television specials soon about “human downgrading,” and will Zuckerberg appropriate it for one of his periodic essays? Perhaps. It’s clever and original. It plays on some of the themes expounded by Yuval Noah Harari in his best-selling books, without sounding exactly like Yuval Noah Harari. It twins nicely with the fears that people have about being rendered obsolete in a world of near-infinite Moore’s law upgrades. It sounds both existentially threatening and manageable.
But perhaps critics will think it misses the target. The problems that Harris is explaining do concern downgraded humans, but they also come from upgraded machines. The phrase also isn’t uplifting. Time well spent had aspirational overtones. In any case, today Harris will stand before a packed house at SFJAZZ and present a detailed Keynote presentation that culminates with his description of downgrading. “The reason we’re inviting people into the room,” he says, “is because we can’t change a system unless we have a shared understanding and shared language.”
“It’s not like I want to get up and think about words,” he adds. “I want to get up and actually see these problems go away. But if we don’t have the language as a lever it’s not going to happen.” And so, with that, Tristan Harris is back.
Nicholas Thompson (@nxthompson) is WIRED's editor in chief.