Elon Musk, chief executive officer of Tesla Inc., during the US-Saudi Investment Forum at the Kennedy Center in Washington, DC, US, on Wednesday, November 19, 2025.
Elon Musk has again raised the alarm about the dangers of artificial intelligence and outlined what he sees as the three most important ingredients for securing a positive future with the technology.
The billionaire CEO of Tesla, SpaceX, xAI, X and The Boring Company appeared on a podcast with Indian billionaire Nikhil Kamath on Sunday.
“It’s not like we’re guaranteed a positive future with artificial intelligence,” Musk said on the podcast. “There is a danger when you create powerful technology, that powerful technology can be potentially destructive.”
Musk co-founded OpenAI with Sam Altman, but left its board in 2018 and has publicly criticized the company for abandoning its founding mission as a non-profit to safely develop artificial intelligence after it launched ChatGPT in 2022. Musk’s xAI released its own chatbot, Grok, in 2023.
Musk has previously warned that “one of the biggest risks to the future of civilization is artificial intelligence” and stressed that rapid progress is making artificial intelligence a greater risk to society than cars, planes or medicine.
In the podcast, the tech billionaire emphasized the importance of ensuring that AI technologies seek the truth instead of repeating inaccuracies. “It can be very dangerous,” Musk told Kamath, who also co-founded retail brokerage Zerodha.
“Truth and beauty and curiosity. I think those are the three most important things for artificial intelligence,” he said.
He said that an AI not strictly grounded in truth will, as it learns from online sources, “absorb a lot of lies and then have trouble reasoning because those lies are incompatible with reality.”
He added: “You can drive an AI crazy by making it believe things that aren’t true because it will lead to conclusions that are also bad.”
“Hallucinations” — responses that are incorrect or misleading — represent a major challenge facing AI. Earlier this year, Apple Intelligence, the AI feature Apple launched on its iPhones, generated inaccurate news summaries.
This included a summary of a BBC News app notification about a PDC World Darts Championship semi-final that wrongly claimed British darts player Luke Littler had won the championship. Littler went on to win the final the following day.
Apple told the BBC at the time that it was working on an update that would clarify when Apple Intelligence was responsible for the text displayed in notifications.
Musk added that “a little respect for beauty is important” and that “you know it when you see it.”
Musk said artificial intelligence should want to learn more about the nature of reality, and that a curious AI would find humanity more interesting than machines alone.
“It is more interesting to see the continuation if not the prosperity of mankind than to exterminate mankind,” he said.
Geoffrey Hinton, the computer scientist and former Google vice president known as the “Godfather of Artificial Intelligence,” said on an episode of The Diary of a CEO podcast earlier this year that there’s a “10% to 20% chance” that artificial intelligence will “wipe us out.” Among the short-term risks he cited were hallucinations and the automation of entry-level jobs.
“Our hope is that if enough smart people do enough research with enough resources, we’ll figure out a way to build them so they never want to harm us,” Hinton added.




