In Trump, Tech Finds a Troll It Can't Ignore

The President presents an impossible, if familiar, question: How do you respond when a system that you respect produces a result that you cannot?

To adapt one of our new president's favorite aphorisms: We knew he was a troll when we elected him. Throughout the campaign, Donald Trump gleefully behaved more like a social-network scourge than a presidential candidate, combining a slash-and-burn approach to social norms with an aggressive strategy of constant provocation. So it's perhaps not surprising that, in the not-quite-two-weeks since his inauguration, internet companies have struggled to respond to his presidency. After all, they're not very good at dealing with trolls.

Like internet trolls, Trump poses an impossible question: How do you respond when a process that you respect---in this case, American democracy---produces a result that you cannot? The problem offers no easy way out. Democrats struggle with whether to take Trump's nominees seriously on a case-by-case basis, or reject them outright. The press wrestles with how to cover an administration in "open warfare" against it. Working within the accepted rules of the system might empower a threat to the system itself. But rejecting Trump shows contempt for the people who voted for him, and the system that produced him.

It’s an unforgiving paradox, but it’s one with which internet companies, at least, are painfully familiar. They’ve been wrestling with some version of this problem for years as trolls overran Twitter, fake news overtook Facebook, and racists spread across Reddit. In each of those cases, the platforms responded gingerly. Trained in the McLuhan-esque belief that the medium is more important than the individual messages it carries, they resisted calls to expel the trolls or exert editorial judgment. Doing so would imply a distrust of the systems they built, promoting fallible human opinion over seemingly objective algorithms and democratic principles.

And so they argued that the process through which information is delivered was more sacrosanct than the quality or content of the information itself. In 2014, Reddit CEO Yishan Wong shut down a particularly egregious subreddit, even as he defended his platform as a bastion of free speech. “[W]e believe that you, the user, has the right to choose between right and wrong, good and evil, and that is your responsibility to do so,” he wrote. Twitter was slow to respond to reports of harassment, apparently out of a commitment to “freedom of speech with clear limits.” And the day after the election, Facebook CEO Mark Zuckerberg famously dismissed concerns about fake news as a red herring that betrayed a lack of faith in democracy. “I do think that there is a certain profound lack of empathy in asserting that the only reason why someone could have voted the way that they did was because they saw some fake news,” he said. “I think if you believe that, then I don’t think you have internalized the message that Trump supporters are trying to send in this election.”

So perhaps it wasn’t surprising to see all those tech CEOs trudging to Trump Tower to partake in his “tech summit” in December. None of them looked thrilled to be there---you could practically hear the strains of the Curb Your Enthusiasm theme, our new national anthem, unspooling behind them. Still, they all showed up. They may not respect the content of the Trump presidency, but they seemed to respect the platform that produced it.

Many tech observers strenuously objected. Kara Swisher derided them as “sheeple” who were willing to “walk the gantlet of prostration at Trump Tower and get exactly nothing for handing over their dignity so easily.” Investor Chris Sacca was even more forthright: “They are being used to legitimize a fascist.” But the technologists preached faith in the system, even as they were notably unenthusiastic about its consequences. “The way that you influence these issues is to be in the arena,” Apple CEO Tim Cook told his employees. After all, what is the point of a complex system---be it a social network or a method of self-governance---if you refuse to accept results that you happen to dislike?

An Unacceptable Result

Over the weekend, the tech industry’s calculus shifted dramatically. After Trump attempted to ban immigration from seven majority-Muslim countries, the CEOs of Google, Facebook, Lyft, and several other tech companies all spoke out. Amazon joined a lawsuit. Airbnb offered to provide free shelter to refugees worldwide. Some of the flip was due to the unique nature of the ban---Sergey Brin was a refugee, and many of these companies’ employees were personally impacted by it. But it also felt like a tipping point, as if the tech companies were coming to grips with the idea that sometimes a system, even one as robust as American democracy, can produce results that they are duty-bound to oppose.

Perhaps it’s just a coincidence that we’re also starting to see tech companies finally grapple with the content that their platforms distribute. Just yesterday Twitter announced it was developing a new, more aggressive approach to help combat abuse. “Making Twitter a safer place is our primary focus and we are now moving with more urgency than ever,” tweeted vice president of engineering Ed Ho. Facebook finally seems to be backing away from its insistence that it is not a media company, accepting its responsibility not only to tackle its fake news problem, but also to take steps to ensure that higher-quality content finds a home on its service. And Steve Huffman, now Reddit’s CEO, seems to have lost faith in his users’ inherent goodness. Just after the election, he was caught anonymously editing abusive posts from what he deemed a “toxic minority” of rabid Trump fans.

Huffman ended up apologizing for that overstep, as he should. Still, it’s encouraging to see technologists recognize the limits of their hands-off approach to content moderation. After all, technology is never neutral, and history does not look kindly on companies that allow their products, or their executives, to be used for immoral purposes. Just ask IG Farben, once the world’s largest chemical and pharmaceutical company, whose cooperation with the Nazi government led to its breakup in the years following World War II.

Many of the employees of Silicon Valley---the engineers and designers who are actually building these services---seem to understand that. In the weeks after Trump’s election, hundreds of them signed a pledge never to build a registry of Muslim citizens, a step that Trump had openly contemplated during the campaign. Until now, their bosses had been silent about their own willingness to comply. But if the events of this weekend are any indication, Trump should not count on their acquiescence.

Last summer, in the thick of the election, Chris Tolles, the conservative CEO of Topix, tweeted out a response to the political turmoil gripping the world: "Asking for tech to 'take a stand' on issues? Look down at the device and out at the platforms enabling the conversation. That's the stand." It's a more aggressive version of the kind of context-over-content argument that Silicon Valley has relied upon for years. But in Trump's America, that's not going to cut it anymore.