Random Musing: Does Sam Altman think of humans as batteries? | World News
“Who Let the Dogs Out” — a track that inspired insouciant children across the world to bark at ungodly hours — is widely considered one of the most irritating songs of all time, so much so that Rolling Stone magazine deemed it the eighth most annoying song of the 1990s, even though it was released in 2000. Unfortunately for us, “Who Let the Dogs Out” also became the overarching leitmotif of the 2026 AI Impact Summit in New Delhi, where the memetic black hole created by a robo-dog — masquerading as a private university’s home-grown innovation before becoming a very public humiliation — almost overshadowed everything else that happened there.

Of course, the farce was elevated by a loquacious professor, who gave us one of the most eminently quotable lines of the year: “My six can be your nine.” Honestly, treating the robo-dog as the headline of the AI Impact Summit 2026 is like claiming that the Lady in Red’s inability to speak was the most important — and therefore most disappointing — thing about The Matrix.
Youth Congress protestors did their best French feminist impression. Sarvam wowed AI geeks with two large, voice-first, home-grown AI models. French President Emmanuel Macron gushed about UPI as if it had originated in the Champagne region of France. Desi and international players promised to spend big money on AI infrastructure in India. Tech bros showed that they get along like Israel and Palestine. Naysayers complained about India’s AI stack. The Cassandras of foreign media whined about the traffic and India’s ostensible VVIP culture, forgetting that events like Davos and UNSC sessions come with the same trappings.

But perhaps something that went under the radar — not the comment itself, but the inference to be drawn from it — was Altman’s view on AI’s energy usage, which he curiously juxtaposed with a human’s carbon footprint.

When pressed about AI harming the environment, Altman said: “One of the unfair comparisons in this case is that people talk about how much energy it takes to train an AI model versus how much it costs one human to do an inference query. It also takes a lot of time to train a human. It takes 20 years of your life, and all of the food you eat during that time, before you get smart. Not only that, it took a very wide spread of evolution, like a hundred billion people who have ever lived, who learned not to be eaten by predators and learned how to figure out science to produce you. The fair comparison, if you ask ChatGPT a question, is how much energy it takes to answer that question versus a human. And AI has probably caught up on an energy efficiency basis that way.”

Dilbert creator Scott Adams argued that we are a planet of six billion ninnies living in a civilisation designed by a few thousand smart deviants. There’s no doubt that Altman is one of those deviants, but his statement revealed the two different philosophies at the core of his thinking, in particular his human-versus-AI analogy.

The first is deeply human.
When Altman complains about humans consuming food for 20 years before becoming productive, it sounds like the lament of a desi middle-class father scolding his ne’er-do-well son for stuffing his face at home without doing anything productive. It’s a complaint many of us have heard over the years.

The second is entirely machine-like, so machine-like that it could have been voiced by the Architect in The Matrix.
For those who have not watched the greatest sci-fi film ever made, here is a brief recap. After AI was created, humans and machines went to war once the machines decided they no longer wanted to serve their lazy overlords. In a desperate attempt to weaken them, humans blocked out the sun, the machines’ primary source of energy. The machines responded by discovering a different source of power: humans themselves.

As Morpheus explains to Neo: “The Matrix is a computer-generated dream world built to keep us under control in order to change a human being into this (a battery).” The entire premise of The Matrix is the reduction of humans to a power source.

The interesting part is how that simulation came to be. The machines created an advanced control programme called the Architect, whose job was to subdue humanity by constructing an illusion. The first version was a utopia, and it was rejected. The second was dystopian, and that too was rejected. Eventually, another programme — the Oracle — realised that humans required the illusion of choice.

This third version proved stable for 99% of humans. For the remaining 1%, the machines created a pressure valve called The One. The sum total of all anomalies would inevitably find his way to the Architect, who would explain the truth. The One would then, like Noah, choose a select few to rebuild Zion, only for the cycle of rebellion to begin again. That loop continued until Neo became the One and, instead of accepting the Architect’s offer, chose to save Trinity and offered the machines a truce in exchange for destroying Agent Smith.

Eerily enough, Sam Altman’s view of humanity seems very much like the Architect’s: humans as machine fodder with an illusion of choice. In this telling, the human is reduced to a system of inputs and outputs: food goes in, productivity comes out.
Evolution, a scientific miracle that took millennia, is just re-training. Twenty years of childhood is quite a costly pre-burn before inference begins, one for which few product managers will have patience.

In many ways, humans are far less productive and less energy-efficient than the models they are building. But then, do the tech bro and the machine share the same framing of humanity’s worth? And if they do, do we even need Artificial General Intelligence to arrive?

At the end of the first film, Neo tells the machines: “I didn’t come here to tell you how this is going to end. I came here to tell you how it’s going to begin. Where we go from there, I leave to you.”
Given how little daylight there is between Altman’s view and a machine’s, one wonders whether the destination is the same.

A version of this article appeared in the Weekly Vine Newsletter on LinkedIn by this author.