How do we get AI to work for us?
Today’s essay is a dispatch from SIGNAL, my editorial on technology. Long-time readers will remember that from 2020 to 2022, I wrote about AI ethics and algorithmic fairness. I’ve since worked with AI companies as both a writer and an engineer. Now I’m asking: What are we building? For whom? And how will we know if it worked?
Between 2021 and 2023, a semi-religious war brewed among proponents of artificial intelligence. The debate centered on one question: “How quickly should we develop and deploy AI?”

On the dirt-stricken streets of SoMa and in the many overpriced coffee shops of Russian Hill, engineers debated these questions with gusto. Many of the concerns were theoretical and existential: Could bad actors use AI to create and deploy bioweapons? How will humans find meaning when we no longer need to work? Will AI usher in more or less inequality?
Forward progress was taken as a given. The genie was already out of the bottle, and if we didn’t accelerate, China would get there first, which was deemed an unimaginable national security risk. At the peak of the AI culture wars, if you raised concerns about the pace of development, you were branded a “doomer” or “decel”[1] in bright red paint and ridiculed for it.
The AI field has since matured. 800 million people use ChatGPT every week[2]. It’s no longer theoretical. Anthropic and OpenAI have released data showing how people use LLMs, and we have learnt some interesting patterns. Chatbots have evolved from Q&A machines into trusted confidantes for many people. We increasingly share sensitive subjects with them, exploring our relationships, work, and mental health[3]. It’s no longer absurd to hear your friend say “So I was talking to Claude the other day…”
But what is Claude? What is ChatGPT? What are these systems really supposed to be? Are they coaches or coworkers? Therapists or tutors? Broad assistants empowering us everywhere? With each new capability and model release, the limits extend, and we find new ways to use them for better and for worse.
Many AI tools are genuinely transformational: Claude Code and Cursor are incredible force multipliers for software engineers. NotebookLM is a study tool on steroids. Gamma creates stunning presentations in minutes. Learn Your Way transforms static, one-size-fits-all textbooks into personalized learning experiences, so if you’re a soccer-mad student learning high-school physics, it teaches you Newton’s laws using soccer illustrations.[4]
Thousands of AI companies now exist, and the breadth of ambition is vast: from accelerating scientific discovery to replacing human friends. We’re giving computers a sense of smell. We’re building household robots that live amongst us, fold our laundry, and wash our dishes. AI has even crossed the final chasm of death: 2wai.ai uses AI to create lifelike replicas of our dead relatives so we can “talk to them”. (A terrible idea that deservedly received a lot of criticism online.)

Enormous philosophical weight is embedded in the decisions of which AI products to build and how to deploy them. You don’t stumble onto a humanoid robot by mistake. For better and for worse, all products are living, breathing manifestations of design values and proclamations of the futures we want to live in.
But with billions of dollars in compute sloshing around, AI startups have felt the urge to move fast, and they have predictably made a number of huge mistakes. We have first-hand reports of AI psychosis, a growing spectrum of symptoms in which people become paranoid and delusional after extended chatbot use. Not one, or two, but seven (!) lawsuits allege that ChatGPT drove, and even “coached”, up to five people to commit suicide. Emotional dependence on these tools has become a worrying trend, with AI girlfriends and boyfriends becoming reality. When OpenAI changed the behavior of its GPT-4o model overnight, there were public displays of grief, with users lamenting, “It felt like my friend was lobotomized overnight.”
Society and AI are now locked in a tug of war, each side negotiating better terms to deal with this Cambrian explosion of new products. Video used to be proof that something happened in real life. Not anymore. Social media is now flooded with AI-generated content, and bots run amok on Twitter. Cryptographic watermarks[5] can detect whether images or videos are AI-generated, but they’re not widely deployed yet.
There used to be a tacit understanding online that when you sent a message or a DM, you were talking to a person. This is no longer the case. And if Doublespeed gets its wish, things will soon devolve further. Last month, the company announced its product, a spam generator on steroids that makes it cheap and easy for clients to flood social media with content from thousands of bots. Its founder’s claim that “we’re not breaking the internet, it was already broken to begin with” is as disappointing as it is predictable.
Is this the world we want? Just because a market for something exists, should we make it? How do we, as citizens, steer AI to work for us?
We need to articulate our ideal visions of the future. I’ve worked in AI, doing everything from technical storytelling to building multi-agent AI tools to representing companies at conferences. I’ve spoken to hundreds of AI engineers and builders, and I’ve learned there are wildly different dreams of the future.
AI is not a single technology. It’s a spectrum of hundreds of technologies and applications that span the gamut of our lives. Its builders want it everywhere, touching everything human: how we work, how we think, how we entertain ourselves, how we create. With such a wide surface area, we can’t afford the current asymmetry between those who build AI and those who use it.
We need Humanist AI
I believe in humanist AI[6]. By that, I mean AI that helps us lead healthier, happier, more creative lives. Such AI systems should exist solely to augment our capabilities in selected domains, while always staying subservient to us. I consider this view “common sense”, but I have learnt it is not universal, and hence it needs stating.
I believe technology, including AI, can do tremendous good for us. But we don’t need AI everywhere. We need analog spaces. We need sacred, offline human connection. We need boredom. I believe in the careful application of AI in areas where there is true need and potential, not in carrying around a bunch of hammers looking for nails.
In practice, Humanist AI should have the following traits:
Transparency
When I’m talking to an AI agent or bot, I should know I’m talking to AI. When I’m consuming content created by AI, that should be made clear. I am bearish on synthetic AI art (music, poetry, movies, and so on), but given that it already exists, it should be labeled as such, so people can choose whether to opt in or opt out. I expect “AI-free” and “100% human” to become very strong marketing plays in the future.
AI should make us more creative, not less
Canva lowered the barriers to visual design and now more people can make posters, graphics, and videos without learning Photoshop. This is a good thing.
AI that writes essays for you is different. Writing is thinking. It’s a dangerous abdication to surrender your writing ability to AI. We should not outsource our thinking to machines. Instead, AI writing products should help people find their voice and become better writers.
Anthropomorphism is not necessary
We should be cautious about making machines look, move, and sound like us. Doing so can create false intimacy and emotional dependence on computers. We don’t fully know how LLMs work, yet we use words like “reasoning” and “thinking” to describe their actions. When you combine human-looking robots with human-sounding language, you amplify the risk of emotional dependence.
AI must never make life-or-death decisions
AI in all its forms (embodied, virtual, hardware, humanoid, software) must never make autonomous decisions that end human life anywhere, from battlefields to end-of-life care in hospitals. AI can inform those decisions, but the final decision must rest with a human who holds accountability.
AI should help us connect with each other more
We live in societies reeling from record levels of loneliness and isolation. If AI is to intervene in this space, it should give us more reasons to connect with each other in real life, not fewer! I’m deeply skeptical of “AI friends”, given that we already choose screens over faces. I host phone-free events for this reason.
Maybe there are future therapeutic opportunities for AI companions, but that requires real research, with randomized trials and careful deployment. I don’t expect, or recommend, that befriending computers become mainstream.
AI should serve our best selves
AI should help us live our aspirational lives. Instead of “cheating on everything” or “spamming the internet”, what if AI helped the chronically online touch grass? Where’s the AI that makes me scroll less? Can we use AI to redesign the incentives of social media, away from rage-baiting and toward genuine, even compassionate, conversations? Can AI help us return to a common sense of shared reality instead of living in fractured reality bubbles? These are design problems, and if AI is truly intelligent, these kinds of problems should be solvable, right?
AI stays subservient to humans
If you spend enough time in the AI space in SF, you’ll meet a few “Posthumanists” for whom the long-term survival and thriving of humans is not of paramount importance. They’re open to new kinds of beings: human-machine cyborgs and new “intelligent species”. If this sounds bonkers or otherworldly, you’re not alone. We should summarily reject this perspective. AI must stay subservient to humans. It beggars belief we have to say this out loud.
These are the AI principles I believe in. Yours might differ, and that’s the point. We need more voices articulating what we want from AI, not just accepting what we are given. On the other hand, we technologists need to describe futures people actually want to live in. If your goal is to “automate everything”, it is incumbent upon you to explain why that is a good thing and for whom.
What do you want from AI? Please tell me in the comments. I’m especially interested in hearing from people who live outside the Bay Area :)
[1] “decel” is short for “decelerationist”, as in you want to slow things down.
[2] According to WSJ analysis of publicly shared ChatGPT data.
[4] “Learn Your Way” is a research experiment by Google Labs. Here’s the academic paper.
[5] Here is a technical assessment of cryptographic watermarking by Cloudflare.
[6] I did not coin this term. I’ve seen Mustafa Suleyman (CEO of Microsoft AI) use it a lot, and I’m piggybacking off his definitions.



