Sam Altman: Saying ‘please’ and ‘thank you’ to ChatGPT is costing OpenAI millions—Should users stop?

OpenAI CEO Sam Altman just pulled back the curtain on a strange but fascinating cost driver behind ChatGPT: politeness.
In a recent statement, Altman revealed that simple phrases like “please” and “thank you” are quietly racking up millions in computing costs. Why? Every word typed into ChatGPT is broken into tokens, the small chunks of text the model processes to generate responses. Add a couple of extra words to billions of queries, and suddenly you’re looking at a serious tab.
Politeness to ChatGPT Is Costing Millions
The conversation started on X when a user named Tomie asked, “I wonder how much money OpenAI has lost in electricity costs from people saying ‘please’ and ‘thank you’ to their models.”
Sam Altman didn’t miss a beat. The OpenAI CEO replied:
“Tens of millions of dollars well spent—you never know.”
— Sam Altman (@sama) April 16, 2025
Why People Keep Being Polite to AI Anyway
Despite the cost, users aren’t in a rush to stop saying “please.” And that’s not just nostalgia or muscle memory. Plenty of posts on X show that people are using courtesy as a way to stay grounded in the human experience, even when talking to machines.
Tomie added, “sama i just want to say thank you for OpenAI, your models have been so useful for helping me with complex tasks like understanding their names.”
Back in the summer, Scientific American explored why so many people are polite to ChatGPT. The article pointed to social norms and the possibility that courtesy might even improve the AI’s responses. “The benefits of being polite to AI may include prompting better chatbot replies—and nurturing our humanity,” the piece noted.
That sentiment showed up in real-world behavior, too. In an informal online survey by Ethan Mollick, a professor at the University of Pennsylvania, nearly half of the respondents said they’re often polite to the chatbot. Only about 16% admitted they “just give orders.”
Even developers echo this habit. In an OpenAI forum, one user explained: “I find myself using please and thanks with [ChatGPT] because it’s how I would talk to a real person who was helping me.”
The Token Economy of AI and the Technical Toll
The backend story is pretty straightforward: more tokens = more processing. ChatGPT’s responses are powered by GPU-heavy infrastructure. Even small additions to input—like “please”—require a little extra computation. One token doesn’t cost much, but across billions of interactions, that math catches up fast.
Large language models like ChatGPT break down user input into tokens—tiny bits of text that include words, punctuation, and even spaces. Each token taps into computing resources: GPU cycles, electricity, and server time. So when users add polite phrases like “please” and “thank you,” it’s not just extra words—it’s extra processing.
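To make the token difference concrete, here is a minimal sketch. Note the hedge: production models use subword (BPE) tokenizers, which typically produce more tokens than a whitespace split, so this crude stand-in undercounts; it only illustrates that a polite prompt carries extra tokens.

```python
def rough_token_count(text: str) -> int:
    # Crude stand-in for a real tokenizer: actual LLMs use subword
    # (BPE) tokenization, which usually yields MORE tokens than a
    # whitespace split, so this is an undercount for illustration only.
    return len(text.split())

terse = "Summarize this article"
polite = "Please summarize this article, thank you"

print(rough_token_count(terse))   # 3
print(rough_token_count(polite))  # 6
```

Either way, the polite version roughly doubles the prompt's token count here, and every one of those tokens is processed by the same GPU pipeline.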
At scale, it adds up. ChatGPT handles millions of queries daily. Multiply that by just a few extra tokens per prompt, and the cost becomes noticeable.
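To put rough numbers on that scaling argument, here is a hypothetical back-of-envelope calculation. All three inputs are illustrative assumptions, not OpenAI figures: the query volume, the number of extra tokens a pleasantry adds, and a per-million-token compute cost in the general ballpark of public API pricing.

```python
# Back-of-envelope estimate of what a few extra "politeness" tokens
# could cost at scale. Every constant below is an assumption for
# illustration, not a real OpenAI number.

QUERIES_PER_DAY = 1_000_000_000      # assumed daily query volume
EXTRA_TOKENS_PER_PROMPT = 3          # e.g. "please", "thank", "you"
COST_PER_MILLION_TOKENS = 10.0       # assumed compute cost in USD

def yearly_politeness_cost(queries_per_day: int,
                           extra_tokens: int,
                           cost_per_million: float) -> float:
    """Annual cost (USD) of the extra tokens across all queries."""
    extra_tokens_per_year = queries_per_day * extra_tokens * 365
    return extra_tokens_per_year / 1_000_000 * cost_per_million

cost = yearly_politeness_cost(QUERIES_PER_DAY,
                              EXTRA_TOKENS_PER_PROMPT,
                              COST_PER_MILLION_TOKENS)
print(f"~${cost:,.0f} per year")  # ~$10,950,000 per year
```

Under these made-up inputs, three extra tokens per prompt already lands in the eight-figure range per year, which is how a throwaway courtesy can plausibly reach the “tens of millions” Altman joked about.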
A March 2025 report from MIT Technology Review notes that while each individual query consumes only a sliver of energy, the volume makes it a growing concern. When billions of interactions stack up, even small inefficiencies carry a hefty price tag.
So these words have a ripple effect. Not huge for any one person. But at scale? It’s real money.
Efficiency vs Etiquette
Altman’s comments highlight a bigger question for AI platforms: how do you scale efficiently without alienating users who value natural interactions?
A Bloomberg report also noted that AI companies are quietly testing ways to reduce token bloat—things like prompt compression or subtle reminders to keep it short. But there’s a tradeoff. Go too far, and it starts to feel like you’re being told how to talk.
This is where things get tricky. If people start treating AI more like a utility and less like a conversational partner, what do we lose in the process?
So… Should We Stop?
That’s the million-dollar question. Or more like multi-million, if you ask OpenAI’s finance team. OpenAI might eventually build systems that trim out filler words before they hit the GPU. Or maybe they’ll launch awareness campaigns about token limits. But stripping out pleasantries comes with a cost of its own: it makes interactions feel colder, more mechanical.
For now, users are split. Some are trimming the fluff to keep things lean. Others are sticking with their manners, out of principle, habit, or superstition.
Whether it’s about efficiency, values, or future-proofing your robot karma, one thing’s clear—those tiny words aren’t so small after all.