The OpenAI chief executive took the TED stage with no product to announce but something more significant to offer: a glimpse of the mind shaping our collective future. With his company serving half a billion weekly active users, Sam Altman now wields influence that rivals (and perhaps exceeds) that of most elected officials.
I find myself somewhat conflicted watching this interview. On one hand, I’m genuinely impressed by what OpenAI has built. ChatGPT, Sora and their other tools are remarkable achievements that are changing our world. We’re truly living in the future that science fiction promised us. The innovation is extraordinary. The dystopian trajectory? Not so much.
And that’s the tension at the heart of AI. Revolutionary technology with unprecedented benefits coupled with concerning questions about who controls it and how it’s governed. Chris Anderson spent 30 minutes trying to get straight answers from Altman about accountability and control. What the audience witnessed instead was a display of verbal parkour and Altman avoiding every substantive question about who ultimately holds power.
The All-Seeing AI Companion
Altman described ChatGPT’s new memory system as if it’s just a helpful friend who remembers your conversations. “We’re uploading bit by bit,” he said, before casually dropping that “eventually, it may listen to you throughout the day.”
When Anderson tried the system and looked visibly uncomfortable at how much it knew about him, Altman compared it to people once being afraid to enter their credit card details online – making legitimate concerns sound like outdated technophobia.
The technology itself is fascinating but the crucial issue Altman conveniently ignores is that OpenAI can see everything we write to ChatGPT. This is Google’s “just use Gmail, just download Chrome” approach multiplied a hundredfold. Except now we’re becoming reliant on a tool that knows us better than we know ourselves – a tool that lives entirely on OpenAI’s servers.
OpenAI currently states it won't abuse your data, sell it, use it to train models or build profiles on you. That's great – but Google once had "Don't be evil" as its company motto, and that promise disappeared with a simple website update. Without actual governance and strong legal protections, such promises only last until the next terms-of-service update. We made this mistake with the rise of search and ad networks, and we repeated it with social media. WHY are we doing it again with AI?
The Robot That Spends Your Money
OpenAI’s upcoming “Operator” system takes things further – it can book restaurants and make purchases using your credit card. Anderson revealed he’d tested it but wouldn’t share his payment details. I guess there’s that technophobia Altman was talking about.
When pressed on safety, Altman mentioned OpenAI’s “preparedness framework” – corporate-speak that tells us absolutely nothing. I looked it up: it’s a “living document” on protecting against “catastrophic” risks posed by AI, and it hasn’t been updated since December 2023.
Asked about red lines OpenAI wouldn’t cross, he nodded without specifying any, before shifting responsibility: “There will be many powerful models in the world. Not just ours.”
Translation: “If we don’t build dangerous AI first, someone else will.” It’s the same logic as every other arms race in history.
Creativity Gets “Disrupted”
The conversation turned awkward when discussing AI copying artists’ styles. Audience members applauded concerns about IP theft but Altman had a ready answer about the “creative spirit of humanity” while offering no real solution.
I agree that these AI tools have democratised creativity. I’m using AI to help write this very article – something that would have taken much longer otherwise. These tools genuinely empower people to create in ways previously inaccessible. In that sense, Altman is right about AI expanding creative possibilities.
Sora is incredible technology but when Altman claims it won’t generate images in the style of living artists without permission, the question remains: who’s enforcing that? What recourse do creators have? His vague idea about revenue sharing for artists who “opt in” conveniently ignores the fact that no one was given the option to opt out to begin with.
The issue isn’t the democratisation of creativity but that we’re feeding our creative processes, writing and thinking into systems we don’t control. The data doesn’t remain our own. People like Altman decide what happens to it, how it’s used and ultimately what it will cost us to access the tools we’re becoming dependent on.
AGI? What even is that?
When Anderson suggested ChatGPT might already qualify as artificial general intelligence (AGI), Altman actually gave a straight answer: “It can’t improve itself. It can’t do any knowledge work you can do in front of a computer. That’s not AGI.”
But then he immediately pivoted: “Call it AGI, call it whatever you want. The point is, it won’t stop. It’ll go way beyond what we call AGI anyway.”
So, in short: definitions, boundaries and accountability mechanisms are inconvenient obstacles on the path of technological advance. “AGI”, it seems, is a bit like the “open” in OpenAI – it’s just a bunch of letters, don’t worry about it.
Who Put You in Charge?
The interview’s most revealing moment came when Anderson asked ChatGPT what question he should pose to Altman. The AI’s suggestion was spot on: Who gave him the authority to reshape humanity’s future?
Altman’s response? “Probably some of the criticism is true.” Talk about an understatement.
When Anderson showed an image of Sauron’s ring from Lord of the Rings (referencing Elon Musk’s jabs, accusing Altman of being corrupted by power), Altman deflected: “How do you think I’m doing relative to other CEOs that have gotten a lot of power?”
Right… I guess this is a popularity contest? The question isn’t whether he’s better than Elon or Zuckerberg; it’s why unelected tech executives get to make these decisions at all.
When Anderson suggested gathering experts to establish safety standards, Altman rejected the idea of “elite consensus”, adding: “I’m much more interested in what our hundreds of millions of users want as a whole.”
An answer that should be called out for what it is: a populist dogwhistle that pretends to be democratic whilst minimising the reality that user feedback is just data to improve products and drive engagement.
OpenAI isn’t letting users vote on how the company operates – they’re mining our interactions to build more profitable products. Just like the algorithm-driven newsfeeds of social networks before them, whose aim was to capture and keep our attention, AI tools are being improved with our own feedback to become indispensable to every facet of our lives.
The Parent
Perhaps the strangest moment was Altman talking about his son. He admitted he wouldn’t press a button giving his child an amazing life if it came with a 10% chance of destroying him.
One wonders whether Sam has considered applying that same thoughtful risk assessment to everybody else’s children.
He closed by hoping his kids would look back at us with pity: “They lived such horrible lives, they were so limited, the world sucked so much.”
This reveals the underlying belief system that Altman and Silicon Valley seem to exude: our present human experience is somehow deficient, and only technology-driven transformation can save us from our limited existence. The world needs saving and, conveniently, they’re just the ones to do it.
Anderson’s parting words acknowledged what we are all witnessing: “Over the next few years, you’re going to have some of the biggest moral challenges, the biggest decisions to make of perhaps any human in history.”
The audience applauded but after 30 minutes of watching Altman skirt around every question, I find it difficult to believe the carefully constructed persona – the thoughtful tech leader reluctantly accepting responsibility for our future. He’s running a business valued at $80 billion, not a charity gifting humanity AI out of altruism.
This is the fundamental tension with AI: these tools are genuinely transformative and beneficial in countless ways. The problem isn’t the technology itself but the power structures forming around it. Watching someone in Altman’s position avoid every important question about governance while accumulating unprecedented influence over our collective future should concern all of us, regardless of how much we benefit from the tools.
We can simultaneously embrace technological progress and demand better oversight. We can use AI tools while questioning the unaccountable power of those who control them. And we can acknowledge the amazing future these technologies might create while insisting on having a say in how we get there.
The technology is brilliant. The governance is not. And for me, Altman’s responses don’t inspire confidence about where we’re headed. On the plus side, there’s going to be an open source model so I guess it’s not all bad.