OpenAI gave up on Sora last week, but in a post about Google’s new “cost efficient” Veo 3.1 Lite AI video model, DeepMind staffer Logan Kilpatrick says that “video’s here to stay.” And in a blog post about the model, the authors discuss Google’s “commitment to making video generation more available to developers.”
OpenAI’s latest round of private investment has closed, with participation from Amazon, Nvidia, Softbank, and Microsoft, as well as $3 billion from individual investors, as it prepares for a potential IPO. The news follows the shutdown of its video generator Sora, and the announcement says the company will now focus on building a “unified superapp” with ChatGPT, Codex, browsing, and other agents all built in.
ChatGPT has 6x the monthly web visits and mobile sessions of the next largest AI app, while total AI time spent is 4x that of the next largest AI app and 4x all others combined. Search usage has nearly tripled in a year, and our ads pilot reached more than $100 million in ARR in under six weeks.





Sam Altman announced that he’s stepping down from the board of nuclear fusion startup Helion Energy. Axios reports that OpenAI is in “in advanced talks” with Helion, even though significant scientific advancements still need to be made for nuclear fusion — long considered the Holy Grail of clean energy — to become a reality.
According to Axios, the two companies are discussing a potential energy deal for “a guaranteed portion of Helion’s production, potentially scaling to 50 gigawatts by 2035 (assuming the company can develop a fusion process that generates more energy than it consumes).”
Axios also reports Altman has stepped down as Helion’s board chair and recused himself from discussions.

Ghost in the Machine director Valerie Veatch wants you to understand how race science has shaped this moment in tech.



A sick dog, desperate owner, and a bunch of chatbots made for a great story. The actual science was much messier.
OpenAI’s CEO of applications, Fidji Simo, told staff the company will prioritize coding and enterprise users over the wide array of projects it has been pursuing, the WSJ reports, including deepfake machine Sora, browser Atlas, and gadgets.
It gave a similar explanation when it delayed the release of ChatGPT’s “adult mode.”



The job requires the ‘ability to recognize, express, and shift between emotions in a way that feels authentic and human.’




The AI startup, founded by ex-OpenAI executive Mira Murati, will team up with Nvidia on a “long-term gigawatt scale strategic partnership” to power TML’s AI model training. The news comes after multiple founding members of the startup left for OpenAI at the same time earlier this year.
[Thinking Machines Lab]
The tool, which OpenAI said is used by 25 percent of Fortune 500 companies, helps “identify and remediate vulnerabilities in AI systems during development.”
[OpenAI]


ChatGPT’s long-promised “adult mode” is still on the way, but will miss its planned Q1 launch. For some, that means the long, lonely wait continues.
pretendworld:
reading the news that your new girlfriend has been delayed must be just brutal
Get the day’s best comment and more in my free newsletter, The Verge Daily.
Elon Musk’s short-lived agency rolled into the NEH with the mandate to cancel grants that it deemed contrary to Donald Trump’s anti-DEI agenda. According to the New York Times, decisions about which grants to cancel weren’t made after careful analysis and deliberation. Instead, they were made with a ChatGPT prompt.
… instead of looking closely at funded projects, they pulled short summaries off the internet and fed them into the A.I. chatbot.
The prompt was simple: “Does the following relate at all to D.E.I.? Respond factually in less than 120 characters. Begin with ‘Yes’ or ‘No.’” The results were sweeping, and sometimes bizarre.
[New York Times]
Caitlin Kalinowski posted on X that she resigned from OpenAI, saying the company’s contract didn’t do enough to protect Americans from warrantless surveillance and that granting AI “lethal autonomy without human authorization” was a line that “deserved more deliberation.”
The feature was expected to launch sometime this quarter, but an OpenAI spokesperson tells former Verge staffer Alex Heath that “We’re pushing out the launch of adult mode so we can focus on work that is a higher priority for more users right now, including gains in intelligence, personality improvements, personalization, and making the experience more proactive.”
OpenAI just launched Codex Security, a new research preview AI agent focused on identifying and fixing app security issues. Separately, it announced that the Codex Open Source Fund now includes “conditional access” to Codex Security as part of the six-month ChatGPT Pro with Codex subscriptions it’s offering open source developers.

Focus Features’ new documentary about gen AI suffers from having too much access and not enough thought.


Prompted by recent GitHub outages, OpenAI is in the early stages of developing its own code repository, with completion still months away. The company is considering making it available to OpenAI customers, putting the ChatGPT creator in direct competition with Microsoft, a company holding a significant stake in OpenAI.
[The Information]
The new GPT-5.3-Instant model rolling out today is supposed to be more accurate, understand the context of questions, and “reduces unnecessary dead ends, caveats, and overly declarative phrasing that can interrupt the flow of conversation.”
Does that mean the return of 4o’s glaze? Maybe not, but a blog post says 5.2-Instant’s tendency toward “coming across as overbearing or making unwarranted assumptions about user intent or emotions” has been addressed.
The protest’s planned focus is AI-powered mass domestic surveillance and lethal autonomous weapons. It’s organized by QuitGPT, a grassroots campaign that says it has inspired more than 1.5 million people to take relevant action (by either sharing on social media or signing up for the boycott itself).
The OpenAI CEO laid out some updated wording he hoped would address people’s concerns about mass domestic surveillance, though the new language still included the phrase “consistent with applicable laws.” Altman also repeated his statement from over the weekend that Anthropic should not be designated a supply chain risk.
[X (formerly Twitter)]

The law doesn’t say what Sam Altman claims it does.
CEO Sam Altman wrote on X that the agreement allowed the US military to “deploy our models in their classified network.” He said the agreement reflects OpenAI’s desire for prohibitions on domestic mass surveillance and “human responsibility for the use of force, including for autonomous weapon systems.” Altman also wrote that OpenAI is “asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept.” This follows a rollercoaster week of negotiations between Anthropic and the Pentagon.
[X (formerly Twitter)]
The OpenAI co-founder, who left after CEO Sam Altman’s ouster and reinstatement and then started his own AI startup called Safe Superintelligence, posted on X:
It’s extremely good that Anthropic has not backed down, and it’s significant that OpenAI has taken a similar stance.
In the future, there will be much more challenging situations of this nature, and it will be critical for the relevant leaders to rise up to the occasion, for fierce competitors to put their differences aside. Good to see that happen today.

AI companies could stand together to draw red lines on military AI — why aren’t they?