News & Analysis

AI Will Become a Daily Routine: Sam Altman

And it would not require new hardware or lots of model training data to get there, he says

When ChatGPT was released in the fall of 2022, everyone sat up and took notice of it as the smartest thing on the Internet since electronic mail. But Sam Altman and his team weren't done: they brought artificial intelligence (AI) to image and video generation via DALL-E and Sora. And Altman still isn't done. He believes GenAI will overtake smartphones as an everyday help agent.

In an interview published by MIT Technology Review, Altman says that going forward, AI tools, specifically GenAI tools like ChatGPT, will become more enmeshed in users' daily lives than smartphones are today. "What you really want is just this thing that is off helping you," he says.

According to the article, Altman, who was visiting Cambridge for events hosted by Harvard and the VC firm Xfund, said the killer app for AI would be akin to "a super-competent colleague who knows absolutely everything about my whole life, every email, every conversation I've ever had, but doesn't feel like an extension."

Is this a major strategy shift for OpenAI?

Altman's latest comments represent a major shift in strategy for OpenAI, which has long highlighted applications led by ChatGPT, Sora and DALL-E, tools he now describes as "dumb" compared to what could be coming next. The company has used its AI models to generate convincing text and images but may have fallen short when it came to video.

The OpenAI CEO, who staved off an attempt to unseat him with help from Microsoft and Satya Nadella, says the new app could tackle some tasks almost instantly, while more complex ones would get an attempted resolution. And if the app fails to resolve them, it could ask the user follow-up questions to find an answer.

Is a multimodal new AI model in the works?

Altman's revelations to the academic community come days before a report published by The Verge that suggests a new multimodal AI model that can both talk to users and recognize objects is in the works. That report quotes a story in The Information, which cites sources suggesting such an app could be showcased later today.

It says the new model would be faster and more accurate at interpreting images and audio than OpenAI's existing transcription and text-to-speech models. The report said it could help customer service agents better understand the intonation of callers' voices, such as when they are being sarcastic. The sources claim it could outdo GPT-4 Turbo at answering some types of questions, though it could still get things wrong.

According to Altman, while all of OpenAI's existing offerings remain tools for isolated tasks, with limited capacity to learn from the queries directed at them, what comes next could be AI capable of helping humans outside the chat interface, taking real-world tasks off users' plates.

Do we need new hardware? Altman says not really

While Altman does not rule out the possibility of new, custom-built hardware replacing the smartphone to deliver the AI of the future (he is an investor in Humane, which launched the wearable AI Pin), he also suggests that a new device may not be necessary at all.

"I don't think it will require a new piece of hardware," he told MIT Technology Review, adding that the type of app envisioned could exist in the cloud. But he quickly added that even if this AI paradigm shift won't require consumers to buy new hardware, "I think you'll be happy to have [a new device]."

Meanwhile, a Bloomberg report over the weekend suggested that OpenAI had struck a deal with Apple to bring OpenAI's technology to the iPhone. The deal has been in the pipeline for some time and would likely feature in Apple's iOS 18, scheduled to arrive later this year. Interestingly, Apple has also held talks with Google about its Gemini chatbot.