Google is back at the top of the AI race!
The last few months have been very significant. Significant not only in terms of the AI tools and capabilities we have access to, but also in terms of the overall dynamics of the AI race.
Google is becoming the most important player in the AI race
Since the release of Bard a few years ago, Google had been lagging behind, at times to the point of being embarrassed by OpenAI. The first versions of Gemini were rushed and came with all sorts of problems, ranging from wild recommendations in AI Overviews to questionable results in AI search. And don’t get me started on image generation.
Let’s just say that for the past two years, Gemini was not the AI of choice for most of us.
The release of NotebookLM was the first step towards changing that perception. It was an AI product that started to win an audience because it was actually useful: it solved a real problem. NotebookLM changed how people study and offered a variety of useful AI features, in particular Audio Overviews.
The Gemini 2.5 models offered a solid improvement. The Flash model was strong from a speed and cost perspective, and the Pro model was able to compete with other top LLMs for most use cases.
Gemini 3 and a state-of-the-art image generation model
Google’s newest general-purpose model, Gemini 3, and its state-of-the-art image generation model, Nano Banana Pro, have pushed Google to the top of the board in the AI race.
Gemini 3 Pro scored top marks on the closely watched AI benchmark Humanity’s Last Exam, outscoring GPT-5.1 by 11% and Claude Sonnet 4.5 by 24%. That is a 16% improvement over the previous Gemini 2.5 Pro model (benchmarks below).
Not to mention that with one subscription you get access to Gemini, NotebookLM, extra storage, experimental models, and more. Also, when Google drops a new model, access is given to all users at the same time, unlike OpenAI, where the latest models are often available only to a select few.
Google’s bet on Tensor Processing Units (TPUs)
Another point that must be considered is Google’s aim to reduce its dependency on external suppliers like Nvidia by using its own Tensor Processing Units (TPUs). This allows Google to control the entire technology stack and offers room for faster scaling, better efficiency, and lower operational costs.
So all in all, perhaps Google was caught off guard by the likes of OpenAI and Anthropic a few years back, but as of recently, the sleeping giant is fully awake!
