Welcome to The Perfect MVP. If you’re joining the party for the first time, welcome! We meet here once every two weeks. I write my reflections on how we can build tech to be better for all of us. You can subscribe by clicking the purple button below.
If you teleport into any reasonably ambitious tech company, you are guaranteed to hear the following question:
Yeah, I see what you’re saying. But would this scale?
Scale.
For a word so simple, this five-letter term plays an outsized role in the vernacular of techspeak. You hear it when founders describe their company vision: “We want to give everyone the ability to do X”. We use words like unicorn and decacorn to describe the size of privately held companies. And whenever anyone suggests a new solution to any tech problem, a sleepy coworker awakens from their slumber to question whether this approach might work at scale.
stu•pen•dous scale
when a software company has so many users that its product failures have significant real-world consequences.
Software products fail all the time, for a variety of reasons. Sometimes, they fail for innocuous but unfortunate reasons. Other times, the products technically work as designed, but their use results in unintended consequences. So while there’s not a “product failure” per se, there’s a societal cost.

There was a time when tech companies released compelling products but stayed in the background of society - think Cisco, Compaq, Intel, and Microsoft in the 80s and 90s. Yes, IBM and Microsoft faced legal antitrust challenges. But largely, their influence was limited to their domains. Then the boom of the 2010s gave us immovable titans masquerading as big tech companies.

Today, our software products directly influence the most important underpinnings of modern life, like the state and health of multiple democracies and the sanctity and acceptance of shared truths. Beyond that, our products can accelerate or decelerate tragedies like genocides and self-harm. And as we deal with the pandemic, we are seeing that social media platforms can affect how people view the truth on vaccines, masks and bogus, alleged treatments like bleach. Simply put, the scale of harm in today’s software world is greatly magnified.
Nonetheless, we live in the information age, where access to quality data has become a competitive moat in itself. Tesla’s Autopilot AI is the best self-driving software partly because it has billions of miles of driving data. TikTok’s recommendations are so good because you can watch many short videos in a short time - you feed the algorithm more digital fuel to better understand your likes. If 23andMe didn’t have troves of data, how useful or accurate would their ancestry predictions be? Probably not great. Scale, it seems, is both the Judas and the supposed Messiah of the technophile’s scripture.
So how do we contend with the massive scale of tech companies? What risks do we face in a tech ecosystem where scale is everything? How can we improve the status quo?
I. The good
Scale is the reason why an ad company (Google) could fund the solution to the 50-year-old protein-folding problem. For many decades, scientists have toiled to better understand how proteins fold - because the shape of a protein greatly affects its function. So if you can predict with great certainty the shape of a protein, the hope is that you can design better drugs, better foods, better genetic tools, or even enzymes that can “eat” oil or plastics. In recent years, it’s become cheap to read and write DNA, but predicting protein shape has remained elusive. For years, many groups of smart people worked very hard and made progress but never solved the challenge. Then in 2014, Google bought DeepMind, a promising deep learning startup, fed it hundreds of millions of dollars a year, gave it access to engineering talent and immense computing power, and let it run wild. Late last year, DeepMind announced that they had solved the challenge.
Think about this for a moment. Larry Page and Sergey Brin created an algorithm for online search, and in a major miracle, their innovation ended up giving us a breakthrough in biology. Only capitalism could yield such a feat.
This cross-domain influence is pretty unique to the tech industry. You’d never hear of Exxon spending millions to make sleek smart speakers. If I told you that Unilever was building a nuclear fusion lab, you’d be a little alarmed. Yet this is the standard playbook in the technophilic creed - big tech companies are not content with owning large parts of online ads or entertainment or app stores - they also fund self-driving cars, clean energy production, and cheap water purification technology.
I consider this mostly a good thing. We live in a time filled with vexing technical challenges that require smart people pursuing novel ideas. If Google has to choose between using their profits to pay dividends or pursue moonshot ideas, I’d prefer the latter. Sure, governments should do more to encourage basic research and to give other, smaller players a fighting chance. But these endeavours are not mutually exclusive. Climate change, for example, is far too important and too complex for one single agent to do all the work. We need all the help we can get.
The pursuit of stupendous scale has other important benefits. The obvious one is money, that sweet, sweet nectar that capitalism rewards its winners with. It doesn’t take a genius to realize that stupendous scale leads to stupendous revenues. Judging by their combined market caps, big tech has created trillions of dollars of value - enriching not just their founders but thousands of employees to different degrees. Building your startup for high scale can also improve your product early on: architecting your product to be fast and resilient can lead to a better product experience even if you never hit that wild scale. Aiming your startup for high scale also significantly helps with hiring. When founders preach their ambitious visions on mountain tops, they’re screaming out for smart, ambitious people to join their ranks. People, whether motivated by money, mission or prestige, want to work at these high-flying startups. On balance, these are all positive effects if they produce great products.
II. The bad
When a company gets so large and influential that it has to invent an independent oversight board to guide its decisions, it’s a clue that things have gotten out of hand. When CEOs (of Twitter, of FB, of Cloudflare) publicly lament that they have too much power when making content moderation decisions, it stands out. Remember, these are CEOs - they aren’t normally averse to power or the use of it.
Like any good capitalist enterprise that gets high on its own supply, today’s tech titans do not want to give up their scale or power. Why should they? They have shareholders to pay, trust funds to pad, foundations to fund, earnings forecasts to beat, and new countries to expand to. The nature of unchecked capitalism demands that these companies get bigger and bigger, further entrenching their influence and power over us. This necessary expansion widens the area under the graph of societal harms. Greater scale means more power, and over enough time, we know that power corrupts even the most angelic intentions. The once starry-eyed, idealistic founder becomes the Emperor fighting to keep the empire intact. To protect against irrelevance, the big company starts to engage in anti-competitive behaviour that would’ve precluded that same founder from ever succeeding in the first place. Wielding its cashbook as a sharpened sword, the empire dismantles invaders (new startups) that dare trespass on its turf. The end result is fewer and worse choices for the consumer.
Beyond illegal monopolies, there is a whole class of social costs that we don’t talk about enough in tech. We don’t yet have a good way of quantifying and conceptualizing these harms, partly because technophiles continue to proclaim that their tools are all neutral (they are not). In the process engineering world, there are many ways of estimating potential harms. One might consider the total amount of carbon emissions or particulate matter that are caused by a chemical process. Or perhaps you evaluate the outcomes - you could roughly estimate the number of hospitalizations that will occur because of air pollution that arises from a certain density of cars in an area. Or when figuring out where to build a new refinery, your safety engineers should be calculating how many people might be harmed if there was a catastrophic incident.
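To make the process-engineering analogy concrete, here is a minimal sketch of what a likelihood-times-severity-times-exposure estimate could look like for a software product. Everything here is a hypothetical assumption - the function name, the failure modes, and the numbers are illustrative, not an established methodology for scoring societal harm:

```python
# Illustrative sketch only: the failure modes and numbers below are
# hypothetical assumptions, not an established harm-scoring methodology.
# Process safety engineering often estimates risk roughly as
# likelihood x severity x exposure; a product team could borrow that shape.

def expected_harm(likelihood: float, severity: float, users_exposed: int) -> float:
    """Rough expected societal cost of one failure mode.

    likelihood:    probability the failure occurs in a given year (0 to 1)
    severity:      estimated cost per affected user if it does (arbitrary units)
    users_exposed: how many users the failure could touch
    """
    return likelihood * severity * users_exposed

# Two hypothetical failure modes for the same product:
rare_catastrophe = expected_harm(likelihood=0.01, severity=1000.0, users_exposed=1_000_000)
common_annoyance = expected_harm(likelihood=0.9, severity=0.1, users_exposed=1_000_000)

# The rare-but-severe failure dominates, which is the refinery-siting
# point: at stupendous scale, low-probability events matter most.
assert rare_catastrophe > common_annoyance
```

The exact units don’t matter; the discipline of writing down the failure modes and multiplying through does, because it forces the rare-but-catastrophic cases into the open.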
Tech is not one single business but dozens of interrelated businesses that run the gamut of modern life. Each business has distinct harm profiles, different degrees of social costs and varying likelihoods of catastrophic events. Netflix is less likely to make product decisions that have life-or-death consequences than Robinhood - where tragically we know a young man took his life because he thought he owed $750k. Yet, Netflix is more likely to receive takedown requests from governments or citizens, especially as it scales globally across different cultures, including non-democracies. The pursuit of stupendous scale means startups will make many if not all possible mistakes available to them. Some of these failures will have high societal costs - think FB and Cambridge Analytica, Robinhood and Gamestop, Uber and all its scandals. This is a feature of our tech ecosystem. This is a decision we’ve made, consciously or unintentionally. And if left unchecked, tomorrow’s startups will also fail in profound ways that have high societal costs. This is the tradeoff we’ve accepted.
📚Improvements
The pursuit of scale itself is not evil. That said, certain harms are more likely to persist in startups aiming for high scale. Here are some recommendations for fixing things:
Research your product failures: As your startup approaches product adolescence, you start to gain more understanding of the impact your product has on the world. Facebook, for instance, conducted internal research that allegedly proved its algorithms “exploit the human brain’s attraction to divisiveness”. Reporting claims FB execs shelved those findings, which is regrettable, but more tech companies should be running such analyses to understand how their products are failing society. And then have the courage to actually do something to remedy the situation.
Figure out your risk profile and invest accordingly: Startups have limited resources and by definition are swimming against the tide in a piranha-infested pool with multiple, larger competitors. It’s hard, brutal, exhausting work trying to build a better product while not running out of money. Nonetheless, startups have a responsibility to understand the ways their products can be abused, and they need to address these early on. Obvious attack surfaces should be plugged early on, e.g. Amazon Ring should have never launched before enforcing 2FA (two-factor authentication), because there are bad people on the internet who like to watch children and you have to protect against that. There can’t be any excuses.
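“Enforcing 2FA” concretely means the login path refuses to proceed without a second factor, rather than politely suggesting one. A minimal sketch of that gate, assuming hypothetical names (`User`, `login`, the `totp_*` fields - a real product would use a vetted TOTP/WebAuthn library, not hand-rolled checks):

```python
# Minimal sketch of "enforce 2FA before launch" as a hard gate in the
# login path. All names here are hypothetical; a real system would rely
# on a vetted second-factor library (e.g. an RFC 6238 TOTP implementation).

from dataclasses import dataclass

@dataclass
class User:
    username: str
    password_ok: bool    # stand-in for a real password verification
    totp_enrolled: bool  # has the user set up a second factor?
    totp_ok: bool        # stand-in for verifying the one-time code

def login(user: User) -> str:
    if not user.password_ok:
        return "denied"
    # Enforcement, not suggestion: users without a second factor cannot
    # proceed - they are routed to enrollment instead of into the product.
    if not user.totp_enrolled:
        return "must_enroll_2fa"
    if not user.totp_ok:
        return "denied"
    return "ok"
```

The design point is that the enrollment check sits inside the authentication flow itself, so there is no code path where an un-enrolled account reaches the product.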
Sort your house out before you expand internationally: Venture-funded software startups often suffer from a peculiar malaise where they feel compelled to ship experiments everywhere. This pressure usually comes from the desire to grow rapidly to meet KPIs, and it’s understandable. However, more companies should consider a more thoughtful approach to scaling. Why would a company that hasn’t figured out political ads in one country try launching in another? Why have we normalized running massive experiments across borders en masse? Why not figure out the nuances of one market before launching half-baked in the next? When the stakes are so high, please pump the brakes.
Antitrust: Experts like Lina Khan are pursuing multiple legal approaches to curtail the excesses of stupendous scale. Thoughtful regulation is badly needed to level the playing field and to reintroduce a thriving digital ecosystem. I’ll write a separate essay about this someday.
Build explainable products: In my last post, I argued that algorithm-driven companies need to build products that are appealable. As tech companies get even bigger, the need for transparency, explainability, and first-class appeals only grows.
✨Talk to Tobi
What do you think about the length of this article? Do you wrestle with the scale of big tech companies? Drop me a line in the comments, reply to this email or mail me hi@tobiwrites.com.
If this post resonated with you, consider sharing it with your friends.