
Hello, dear readers. Happy Belated Thanksgiving and Black Friday!
This year felt like living in a permanent DevDay. Every week a lab releases a new model, agent framework, or “this changes everything” demo. It’s overwhelming. But it’s also the first year that I feel like AI is finally diversifying – not just one or two frontier models in the cloud, but a whole ecosystem: open and closed, huge and tiny, Western and Chinese, cloud and on-premises.
So for this Thanksgiving edition, I want to highlight the releases in the AI space in 2025 that I'm genuinely thankful for – the ones that look like they'll matter 12 to 24 months from now, not just during this week's hype cycle.
1. OpenAI kept up its shipping streak: GPT-5, GPT-5.1, Atlas, Sora 2 and open weights
As the company that undeniably kicked off the generative AI era in late 2022 with its viral hit ChatGPT, OpenAI faced arguably one of the most difficult tasks of any AI company in 2025: continuing its growth even as well-funded competitors like Google, with its Gemini models, and start-ups like Anthropic brought highly competitive offerings of their own to market.
Luckily, OpenAI met the challenge and then some. Its main act was GPT-5, unveiled in August as its next frontier reasoning model, followed in November by GPT-5.1 with new Instant and Thinking variants that dynamically adjust how much “thinking time” they spend per task.
In practice, GPT-5’s launch was rocky – VentureBeat documented early calculation and coding errors and a cooler-than-expected community reaction in “OpenAI’s GPT-5 rollout is not going smoothly.” However, OpenAI quickly corrected course based on user feedback, and as a daily user of the model, I am personally happy and impressed with it.
At the same time, companies that actually use the models are reporting solid results. Zendesk, for example, says its GPT-5-powered agents now resolve more than half of customer tickets, with some customers reporting an 80-90% resolution rate. This is the quiet story: these models may not always impress the chattering classes on X, but they are starting to move real KPIs.
As for tools, OpenAI finally gave developers a serious AI engineer with GPT-5.1-Codex-Max, a new coding model capable of running long, agentic workflows that is already the default in OpenAI’s Codex environment. VentureBeat covered it in detail in “OpenAI introduces the GPT-5.1-Codex-Max coding model and has already completed a 24-hour task internally.”
Then there is ChatGPT Atlas, a complete browser with ChatGPT built directly in – sidebar summaries, on-page analysis, and search tightly integrated with normal browsing. It’s the clearest sign yet that “assistant” and “browser” are on a collision course.
On the media side, Sora 2 transformed the original Sora video demo into a full video-and-audio model with better physics, synchronized sound and dialogue, and more control over style and shot structure – plus a dedicated Sora app with a full-fledged social networking component that lets every user create their own TV network in their pocket.
Finally – and perhaps most symbolically – OpenAI released gpt-oss-120b and gpt-oss-20b, open MoE reasoning models under an Apache 2.0 license. Whatever you think of their quality (and early open source users have been vocal about their complaints), this is the first time since GPT-2 that OpenAI has made a serious contribution to the open-weights commons.
2. China’s open source wave goes mainstream
If 2023-24 was about Llama and Mistral, 2025 belongs to China’s open ecosystem.
According to a study by MIT and Hugging Face, China is now slightly ahead of the US in global open model downloads, largely thanks to DeepSeek and Alibaba’s Qwen family.
Highlights:
- DeepSeek R1 dropped in January as an open source reasoning model that competes with OpenAI’s o1, with MIT-licensed weights and a family of distilled smaller models. VentureBeat tracked the story from its release through its cybersecurity implications to performance-enhanced R1 variants.
- Kimi K2 Thinking from Moonshot, a “thinking” open source model that reasons step by step with tools, in the spirit of o1/R1, and is positioned as the best open reasoning model in the world to date.
- Z.ai shipped GLM-4.5 and GLM-4.5-Air as “agentic” models, with open source base and hybrid reasoning variants on GitHub.
- Baidu’s ERNIE 4.5 family appeared as a fully open source, multimodal MoE suite under Apache 2.0, including a 0.3B dense model and visual “Thinking” variants focused on diagrams, STEM and tool use.
- Alibaba’s Qwen3 line – including Qwen3-Coder, large reasoning models and the Qwen3-VL series, released over the summer and fall of 2025 – continues to set a high bar for open weights in coding, translation and multimodal reasoning.
VentureBeat has been tracking these developments, including Chinese math and reasoning models like Light-R1-32B and Weibo’s tiny VibeThinker-1.5B, which beats DeepSeek on key benchmarks despite a tight training budget.
If you’re interested in open ecosystems or on-premise options, this is the year China’s open weight scene stopped being just a curiosity and became a serious alternative.
3. Small and local models are emerging
Something else I’m grateful for: we’re finally getting good small models, not just toys.
Liquid AI spent 2025 advancing its Liquid Foundation Models (LFM2), including LFM2-VL vision-language variants designed from day one for device-aware, low-latency deployments – edge boxes, robots and constrained servers, not just massive clusters. The newer LFM2-VL-3B targets embedded robotics and industrial autonomy, with demos planned at ROSCon.
On the Big Tech side, Google’s Gemma 3 series made a strong case that “tiny” can still be capable. Gemma 3 spans sizes from 270M to 27B parameters, all with open weights and multimodal support in the larger variants.
The standout is Gemma 3 270M, a compact model designed specifically for fine-tuning and structured text tasks – think custom formatters, routers and watchdogs – which has been covered both on Google’s developer blog and in community discussions in local LLM circles.
Models like these are why I finally take “small and local” seriously as a deployment option.
4. Meta + Midjourney: Aesthetics as a service
One of the strangest twists this year: Meta partnered with Midjourney instead of simply trying to beat it.
In August, Meta announced a deal to license Midjourney’s “aesthetic technology” – its image and video generation stack – and integrate it into Meta’s future models and products, from Facebook and Instagram feeds to Meta AI capabilities.
VentureBeat reported on the partnership in “Meta is working with Midjourney and will license its technology for future models and products,” which raises the obvious question: does this slow down or change Midjourney’s own API roadmap? I’m still waiting for an answer, but Midjourney’s previously stated plans for an API release have not yet materialized, which suggests it might.
For creators and brands, however, the immediate impact is simple: Midjourney-grade visuals appear in mainstream social tools instead of being locked away in a Discord bot. That could normalize higher-quality AI art for a much broader audience, and force competitors like OpenAI, Google and Black Forest Labs to keep raising the bar.
5. Google’s Gemini 3 and Nano Banana Pro
Google answered GPT-5 with Gemini 3, billed as its most powerful model yet, with better reasoning, coding and multimodal understanding, plus a new Deep Think mode for slow, difficult problems.
VentureBeat’s coverage, “Google unveils Gemini 3, taking the lead in math, science, multimodal and agent-based AI,” framed it as a direct run at frontier benchmarks and agentic workflows.
But the surprise hit is Nano Banana Pro (Gemini 3 Pro Image), Google’s new flagship image generator. It specializes in infographics, charts, multi-subject scenes, and multilingual text that actually renders legibly at 2K and 4K resolutions.
In the world of enterprise AI — where diagrams, product schematics, and images to “visually explain this system” are more important than fantasy dragons — this is a big deal.
6. Wildcards I’m keeping an eye on
I’m grateful for a few more releases, even if they don’t fit neatly into one bucket:
- Black Forest Labs’ Flux.2 image models, which launched just earlier this week with the aim of challenging both Nano Banana Pro and Midjourney on quality and control. VentureBeat dug into the details in “Black Forest Labs introduces Flux.2 AI image models to challenge Nano Banana Pro and Midjourney.”
- Anthropic’s Claude Opus 4.5, a new flagship aimed at cheaper, more capable coding and long-horizon task execution, covered in “Anthropic’s Claude Opus 4.5 is here: Cheaper AI, endless chats, and programming skills that surpass humans.”
- A steady drumbeat of open math/reasoning models – from Light-R1 to VibeThinker and others – that show you don’t need $100 million training runs to move the needle.
Final thought (for now)
If 2024 was the year of “one big model in the cloud,” 2025 is the year the map exploded: multiple frontiers at the top, China taking the lead in open models, small and efficient systems maturing quickly, and creative ecosystems like Midjourney being pulled into Big Tech stacks.
I’m grateful not for any single model, but for the fact that we now have options – closed and open, local and hosted, reasoning-first and media-first. For journalists, developers and companies, this diversity is the real story of 2025.
Happy holidays and all the best to you and your loved ones!




