My people, I have missed you!
Who would have thought that my bliss in these pandemic days would come from being able to bend my phalanges to ninety degrees without pain?
But here we are. I’m glad to be back strumming this nearly fully-functional finger on these laptop keys like the guitarist I’ll never be.
Thank you for all the well wishes.
What have you been up to? What have you been reading?
Did you watch the Euros?
Football has always been my biggest passion. And as I watched the final unfold, I realized that one of the biggest stories of this Euros was a story about social media. As a football-mad man who writes a tech-focused newsletter, I had to give you something.
For the uninitiated, Italy beat England on penalties in the final. Despite their obsession with football, the English have only ever won one international trophy and that was fifty-five years ago. In a year full of isolation and loss, the English team provided a beacon of hope for the country to rally around. More so than any other English team in recent history, the players and coaches seemed fully united and genuinely likable. Overall, the vibe seemed great and they made it to the final.
And then in the final, three black players missed their penalty kicks. It’s important to specify their race here because the team had galvanized to protest against racist abuse. Yet the moment they missed their kicks, and I truly mean the moment, these black players began to receive a deluge of disgusting, cowardly racist abuse on social media. If they had scored, they would’ve been English heroes, deified and sung about for decades. But one wrong swing of each player’s boot, and they were racially abused.
I know social media cannot solve racism. It’s a deeply human problem.
But social media products can do more to counteract the toxicity and abuse that their users experience. Racism is only one flavor of this toxicity; there’s also hate speech, disinformation, bullying and so much more.
👩🏽‍🔬 Product experiments
I often criticize social media companies for not doing enough to counteract the toxicity that festers on their platforms. But it’s easy to criticize while chilling in the backseat. So today, I’m sharing some product ideas worth exploring that might reduce the harm users experience online.
🚦Let users decide how much sensitive content they see on these products
Every platform draws a line between content it considers acceptable and content that’s against its Terms of Service. But then there’s the stuff in the middle - posts that might not break existing rules, but would still upset people. Think violent, abusive, explicit or sexually suggestive posts.
Why not let users decide how much of this stuff they want to see? Want all the juice? Go ahead! Want a more sanitized experience? Or want to dial in your own balance of explicit content? You decide!
Why this might work: It’s really difficult to have one set of content rules for everyone in a given country. So many of us exist in thought bubbles or have strong ideological differences. Things get far more complicated when you have users from hundreds of countries and thousands of cultures. Letting each person choose their preferences allows everyone to have an experience where they feel more welcome and comfortable. Instagram recently announced a version of this idea: Sensitive Content Control. It’s still early days but I reckon more products should try this.
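The idea above can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the `Post` class, the `sensitivity` score and the user threshold are all made up for this sketch - a real platform would score content with trained classifiers, not hand-assigned numbers.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    sensitivity: float  # 0.0 (benign) .. 1.0 (highly sensitive), hypothetical classifier score

def visible_feed(posts: list[Post], user_threshold: float) -> list[Post]:
    """Show only posts at or below the sensitivity level this user chose."""
    return [p for p in posts if p.sensitivity <= user_threshold]

feed = [Post("cute cat", 0.1), Post("graphic injury", 0.9), Post("edgy joke", 0.5)]

# A cautious user sets a low threshold; a thick-skinned user sets it to 1.0.
print([p.text for p in visible_feed(feed, 0.5)])  # ['cute cat', 'edgy joke']
print([p.text for p in visible_feed(feed, 1.0)])  # everything
```

The key design point is that the threshold belongs to the user, not the platform: the same feed renders differently per person, which is roughly what Instagram’s Sensitive Content Control does.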
🚥 Slow users down when you detect they’re about to post a sensitive message
Hardcore social media abusers will abuse people regardless of the tool. But I reckon there is a subset of online abusers that hide behind the cloaks of anonymity and imperfect internet justice to “jump into” hateful behaviors they see online. They abuse people in ways they’d never attempt face-to-face. It’s morally indefensible but it exists.
For this demographic, increasing the friction when sending messages might cause them to rethink - for example, adding an in-product label like the following:
“500,000 people have flagged messages like this as hateful. Are you sure you want to send it?”
Product teams should experiment with changing the copy, adding links to credible social institutions, and borrowing tactics from behavioral economics that are proven to nudge people’s behavior for the better.
Why this might work: Technology influences us, and whether we like it or not, there are behaviors that are rewarded on every social platform. Why not use technology to influence those who might be deterred with a simple message asking for pause? Banking apps often ask users to double-check before making transactions. We can borrow that pattern here. Again, no guarantee it works, but no harm trying.
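Here’s a toy sketch of that friction flow. Everything in it is hypothetical: the keyword check stands in for a real moderation model, and `confirm` stands in for whatever dialog the product shows.

```python
FLAGGED_TERMS = {"idiot", "loser"}  # placeholder list; a real system would use a classifier

def likely_hateful(message: str) -> bool:
    """Toy stand-in for a moderation model: flag messages containing known terms."""
    return bool(set(message.lower().split()) & FLAGGED_TERMS)

def send_with_friction(message: str, confirm) -> bool:
    """Return True if the message is sent, False if the user backs out."""
    if likely_hateful(message):
        prompt = ("500,000 people have flagged messages like this as hateful. "
                  "Are you sure you want to send it?")
        if not confirm(prompt):
            return False  # the user reconsidered - friction worked
    return True

# Simulate a user who backs out when prompted.
print(send_with_friction("you absolute idiot", confirm=lambda prompt: False))  # False
```

Note that the flagged message is still sendable if the user confirms - the point is a pause, not a ban, which is what makes this palatable to ship as an experiment.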
💪🏽 Let users gain access to features by earning trust over time
Most of the internet ecosystem is predicated on the belief that users will be “good custodians” of tech products. By default, users are given access to most or all of the features a product offers. But we understand that some users will abuse this trust, so we reprimand, suspend or ban those users.
An alternative, radical approach is to do the opposite. Give new users access to a useful, minimal set of features - enough for them to gain reasonable-but-not-optimal utility from your product. Then, as they start to “earn karma” by modeling good behavior (as defined by your product terms), you progressively give them access to more features. Some will hate this idea as it feels a little infantilizing, but it’s worth considering: we don’t allow anyone to drive cars, work in biology labs, or operate heavy machinery without showing they can handle it. We know those actions are dangerous, so we build speed bumps into those processes. Some would argue that gaining a microphone to the entire world, and being able to cost-effectively microtarget ads at susceptible people, is equally menacing and should be treated with appropriate caution. The hypothesis here is that users who progressively earn trust are less likely to abuse other users.
Why this might work: It could work if users care enough about the product to wait till they get access to the hidden features. This wait could be designed in an interactive way so it remains engaging and people feel excited about waiting. Or this idea could totally backfire if users are not fulfilled by the provided features - they might leave the product and never come back.
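A trust-gated feature set could be as simple as a karma ladder. The thresholds and feature names below are invented for illustration - every product would tune its own ladder.

```python
# Hypothetical karma thresholds: each feature unlocks once the user reaches it.
FEATURE_THRESHOLDS = [
    (0,   "read_posts"),
    (10,  "comment"),
    (50,  "direct_message"),
    (200, "go_live"),
]

def unlocked_features(karma: int) -> list[str]:
    """Return the features this user has earned access to."""
    return [name for threshold, name in FEATURE_THRESHOLDS if karma >= threshold]

print(unlocked_features(0))   # ['read_posts']
print(unlocked_features(60))  # ['read_posts', 'comment', 'direct_message']
```

The interesting product question is what earns karma (age of account, posts that aren’t reported, verified identity?) and whether karma can be lost - both of which are policy choices, not code.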
❗️Provide a one-click abuse “siren” and emergency playbook
In the event of a social media emergency (e.g. a user receiving a torrent of racial abuse or online bullying), there should be an easy way to “ring the siren”. Ringing the siren should notify the product operators and execute a pre-defined playbook. I say a playbook because there are certain individuals who will absolutely receive abuse and thus, the product should be prepared for this. When Saka missed his penalty, several of my friends instantly told me that he would be racially abused. If you know it’s gonna happen, you should be better prepared for it.
This playbook should be flexible enough to handle different needs: some users might want to temporarily freeze all incoming messages, others might want to restrict incoming messages to only users on their trusted list, and others will want to continue as normal so the abusers “don’t win”. Some will want to drown out the nasty abuse with a deluge of positive messages. No two victims will want the same things. Let them choose.
A post-abuse playbook is no substitute for removing the hateful messages. But it can potentially dampen the embers and make an awful situation a little less bad.
Why this might work: Once a user suffers abuse on a platform, it’s a lose-lose situation. The product can’t “win” per se, but it can surely make things worse. Imagine if Twitter were recommending hateful tweets on your timeline, or dodgy bots were showing up in your suggested-follows list. The best scenario is that all abuse is filtered out before users notice; the next best is that there’s a playbook for mitigating the damage.
If you can’t stop the ignition or the fuel source, it’s best to extinguish the fire before it ravages everything.
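The siren-plus-playbook idea sketches out naturally as a dispatch table. The playbook names and actions here are illustrative stand-ins, not a real product’s API.

```python
# Hypothetical playbooks a victim can pre-select; each returns a description
# of the action taken (a real product would actually apply the filters).
PLAYBOOKS = {
    "freeze_all":   lambda user: f"freezing all incoming messages for {user}",
    "trusted_only": lambda user: f"restricting {user}'s inbox to their trusted list",
    "continue":     lambda user: f"no filtering for {user}; abuse reports fast-tracked",
}

def ring_siren(user: str, choice: str) -> str:
    """Run the victim's pre-selected playbook, defaulting to the safest option."""
    action = PLAYBOOKS.get(choice, PLAYBOOKS["freeze_all"])
    # In a real product this would also page the trust & safety team.
    return action(user)

print(ring_siren("saka", "trusted_only"))
```

Because the choice is made in advance, in a calm moment, the victim doesn’t have to configure anything mid-crisis - one tap rings the siren and the rest is automatic.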
✨Talk to Tobi
What do you think about these ideas? Talk to me in the comments!