Why should an engineer think about ethics?
And why do we complain that engineers try to fix the world with code?
Welcome to The Perfect MVP. If you’re joining the party for the first time, I’m glad you’re here! We meet once every two weeks, and I write my reflections on how we can build tech to be better for all of us. You can subscribe by clicking the purple button below.
Today, I’m trying out a new format. Some of you, my dear readers, have texted me to say that you love the long essays. Others think they’re good but too long. As a Product Engineer, I’m listening to your feedback, iterating, and giving you a considered solution.
In this post, I’m using TLDRs (too long, didn’t read ❗️) and TSWMs (too short, want more 📚) in every section. Can’t stay for the entire party but don’t want to miss the action? Read the TLDR in each section - one heaping spoonful of condensed decadence. Want the full meal with every last sinful calorie? I present to you the TSWM. Sink your teeth into the meal of your choosing.
Oh, one more thing. This week, I’m responding to two questions from this tweet. It caught my eye because it asks some reasonable, fundamental questions about technology ethics. Though entire dissertations and books have been written on this topic, I’ll try my best to be concise.
Why do you complain that engineers think they can fix the world with code?
❗️TLDR (too long, didn’t read)
Tech solutionism is a problem because it is often short-sighted, arrogant, and unrealistic. There’s nothing wrong with being optimistic, but when you have a hammer, you start to see the world as a field of nails. Technology is our best bet for solving many intractable problems, but we can’t out-innovate deeply human problems like sexism and corruption. Moreover, technology is only one part of the puzzle - we can’t address climate change through innovation alone, for example.
📚TSWM (too short, want more)
I’ve written before about the sprawling effects of tech companies - Google won search, then funded the solution to the protein folding problem. I consider this a mostly positive thing. Still, this kind of reach is unique to tech companies. We don’t expect the same from Kraft Foods, Sherwin-Williams, or Zara. Each of these is a billion-dollar company and a market leader in its field, but none of them presumes its products are tools for solving society’s ills. When tech companies and engineers venture out of their domain, they need to be cognizant of the following:
🎓Optimism bias and real-world messiness
It’s common for innovators to suffer from optimism bias - the unrealistic belief that their outcomes will be exceptional. It’s unrealistic because not everyone can get exceptional results; if everyone did, the results wouldn’t be exceptional. Innovators also tend to focus on the technological aspects of problems because those aspects are less messy. Computers are dependable and definite, while humans are messy and emotional. So innovators spend a disproportionate amount of time on the technology and too little time working with governments, regulators, and international bodies. But what progress can you make when the bottleneck is not technology?
When innovators speak, they tend to consume all the oxygen in the room, distracting from the political and social problem-solving approaches that are often more impactful. So we need to approach our work in this area with humility and an understanding that our help may not be needed.
🤝Pick projects prudently
This doesn’t mean innovators should step back. We should carefully figure out which problems are actually blocked by technological barriers. But we shouldn’t delude ourselves into thinking technology solutions are applicable in all domains. Even if they were, the results would be unlikely to be systemic and lasting.
In his book To Save Everything, Click Here, Evgeny Morozov argues:
Recasting all complex social situations either as neat problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized--if only the right algorithms are in place!--this quest is likely to have unexpected consequences that could eventually cause more damage than the problems they seek to address.
🎲Reject framing things in binary terms
Instead of seeing the available options as “step back from fixing large societal problems” and “fix large societal problems”, consider a third option: “thoughtfully consider where our help is needed”. If we find that the barriers are not technological, we don’t engage there. Several other engagement models exist that could yield positive results for society. We need to reject the notion that we only have two options here, because we don’t.
Why should an engineer or a researcher think in a detailed way about ethics?
Jon says: “I did an engineering degree, and neither I nor my peers had serious ethical training. In my career, I’ve known many programmers, engineers, mathematicians, and scientists of different types, and passing few of them are sophisticated philosophers or spiritual authorities. Why do you think ethics belongs on the plate of these professions?”
❗️TLDR (too long, didn’t read)
Tech ethics is on all of us. There is no all-knowing ethics specialist or philosopher who can prescribe ethical solutions to the problems introduced or exacerbated by technology. Even if such people existed, it is unlikely they would have a deep understanding of how technology products are created, scaled, or maintained. If you’re building products with social consequences, you are already drawing ethical lines in your products, and thus you should be part of the discussion about where those lines are.
📚TSWM (too short, want more)
This past February, I served as a cohort leader for this Stanford/Bloomberg Beta course where we discussed issues like algorithmic fairness, data privacy, and the power of private platforms. Our professors were from the Philosophy, Political Science, and Computer Science departments. Participants in the class were VCs, content moderators, lawyers, engineers, product managers, and everyone in between.
So I’ve heard variations of this idea before. Jon’s phrasing suggests he believes thinking about ethics in technology is a valuable endeavor, but that engineers or researchers are not well-equipped to make useful contributions to it. I’d argue the responsibility falls on all job functions - from engineers to content moderators to marketers to executives.
📊Our tools are not ethically neutral
The first thing technologists need to understand when it comes to ethics is that our tools are not neutral. Technology products are embedded with ethical ideas. Whether or not you openly discuss and acknowledge them doesn’t change their existence. Building communication tools is predicated on the idea that global communication is a “good thing”. If the Internet pioneers did not believe this, the Internet would look different today. Robinhood’s mission to “democratize finance” is an extension of the belief that everyone should be able to invest in financial assets.
🏄🏽♂️You don’t need to be a Priest to be productive in ethics discussions
Who is an ethics specialist? Who is so knowledgeable about philosophy that they can prescribe the panacea to issues so fundamental to who we are, what we do, how we exist in the world? The Pope? Priests? Ethics researchers? Rabbis?
The only person I know whom I’d consider an ethics specialist, Rob Reich (the director of the Center for Ethics in Society at Stanford), posted in our class:
“Ethics specialist = oxymoron”
Ethics is not something that should be handed off to specialists to “fix” and decide for the rest of us. I don’t expect many tech employees to be Priests, and while I’m sure some exist, I don’t think spiritual authority is required. Moreover, the kinds of questions we ask in tech ethics conversations are questions that should be answered by the general public, tech employees, and regulators. A little curiosity goes a long way. Ethics conversations should provoke us to consider where violations exist, acknowledge that there are no easy, silver-bullet solutions, and still encourage us to take action.
Software engineers can’t outsource ethics. We don’t make such allowances for other engineering disciplines. Process engineers are ethically responsible if they choose to pollute rivers. Civil engineers are ethically responsible for designing bridges that hold. To be an engineer is to be entrusted by society to be thoughtful about your creations.
The fact that engineers don’t learn ethics in college is an indictment of the education system, not an excuse. We also didn’t learn how to file taxes or build credit history in college, and those are important things. I’m happy to blame college for the miss, but it doesn’t absolve us of the responsibility.
🎯Scale of harm
Imagine if Airbnb decided that “anybody could be a host, even people with criminal records and ongoing investigations”. Yes, this is fanciful, but humor me for a second.
Airbnb has 2 million hosts worldwide. Suppose a tiny fraction of them, 0.2%, harmed their guests in a year. Harm could be anything from physical assault to discrimination. That yields 4,000 harmed guests in a year.
If a doctor went rogue and wanted to harm every patient they saw in a year, they could probably harm fewer than 5,000 people. In medicine, that would be totally unacceptable. Yet in tech, there is the temptation to say that 99.8% of hosts did no harm, call those 4,000 people outliers as if that were acceptable, and pat ourselves on the back for a job well done.
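To make the back-of-envelope math explicit, here’s a minimal sketch in Python. The host count and harm rate are the illustrative assumptions from the hypothetical above, not real Airbnb data:

```python
# Back-of-envelope scale-of-harm calculation (illustrative numbers, not real Airbnb data)
hosts = 2_000_000   # assumed number of hosts worldwide
harm_rate = 0.002   # assumed fraction of hosts who harm a guest in a year (0.2%)

harmed_guests = int(hosts * harm_rate)
unharmed_share = 1 - harm_rate

print(f"Harmed guests per year: {harmed_guests:,}")    # 4,000
print(f"Hosts who did no harm: {unharmed_share:.1%}")  # 99.8%
```

The point of the sketch: a rate that sounds negligible still translates into thousands of people once you multiply it by platform scale.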
The consequences of our product decisions are too important for us to abstain from having these discussions, and having them with people from different roles is the bare minimum. In 1984, toxic gas leaked from a chemical plant in Bhopal, India, and harmed up to 600,000 people - thousands died and hundreds of thousands more were injured. It was one of the worst industrial accidents in history. In response, the chemical process industry built rigorous safety and ethical standards. While imperfect, they have significantly raised the level of quality and safety engineering in the industry. These standards were not developed in a silo or solely by ethics experts - they involved safety engineers, process engineers, plant operators, plant managers, and others.
In tech, a single engineer can push out a product experiment that reaches 600,000 people, yet we barely have the vocabulary to reason about the probability of harming consumers. We need to appreciate the scale of harm in tech products sooner rather than later.
✨Talk to Tobi
Did you like this new format? Drop me a line in the comments, reply to this email, or email me at hi@tobiwrites.com.
If this post resonated with you, consider sharing this with your friends.