Google Gemini 2.0 Flash Series AI Models Launched, Reaching a New Level of Reasoning


Source: IT Home

Google published a blog post yesterday (February 5) announcing that the latest Gemini 2.0 Flash model is now available to all Gemini app users, and opening access to the 2.0 Flash Thinking experimental reasoning model.

2.0 Flash: Fully Updated and Open to All

The Flash series first appeared at Google I/O 2024 and, with its low latency and high performance, quickly became a popular choice among developers. The model is suited to large-scale, high-frequency tasks, handles a context window of up to 1 million tokens, and demonstrates strong multimodal reasoning capabilities.
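For developers, the model can be called through the Google Gen AI SDK. The sketch below is a minimal example, assuming Python, the google-genai package, the model ID "gemini-2.0-flash" as named at launch, and an API key obtained from Google AI Studio.

```python
# Minimal sketch: calling Gemini 2.0 Flash via the google-genai Python SDK.
# Assumes `pip install google-genai` and an API key from Google AI Studio;
# the model ID "gemini-2.0-flash" reflects the naming used at launch.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Summarize the trade-offs between low latency and long context windows.",
)
print(response.text)
```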

The Gemini 2.0 Flash model can interact with applications such as YouTube, Google Search, and Google Maps, helping users discover and expand their knowledge across a variety of scenarios.

Gemini 2.0 Flash Thinking Model

The Gemini 2.0 Flash Thinking model builds on the speed and performance of 2.0 Flash. It has been trained to break prompts down into a series of steps, strengthening its reasoning and producing higher-quality responses.

The 2.0 Flash Thinking Experimental model demonstrates its thought process, allowing users to see why it responds in a certain way, what its assumptions are, and track the model's reasoning logic. This transparency enables users to better understand the model's decision-making process.
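A hypothetical call to the thinking model might look like the sketch below. The model ID "gemini-2.0-flash-thinking-exp" and the way intermediate "thought" parts are surfaced through the API are assumptions based on the experimental release; in the Gemini app and AI Studio the thought process is displayed in the interface itself.

```python
# Sketch: querying the Flash Thinking experimental model with the google-genai SDK.
# The model ID and the per-part "thought" flag are assumptions; treat as illustrative.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.0-flash-thinking-exp",  # assumed experimental model ID
    contents="A train leaves at 9:00 and travels 120 km at 80 km/h. When does it arrive?",
)

# Depending on the API version, thinking models may return reasoning steps
# as separate parts; fall back to treating everything as the answer otherwise.
for part in response.candidates[0].content.parts:
    label = "thought" if getattr(part, "thought", False) else "answer"
    print(f"[{label}] {part.text}")
```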


Gemini has also launched 2.0 Flash Thinking versions that interact with applications such as YouTube, Search, and Google Maps. These connected applications make Gemini a distinctive AI assistant, and future work will explore combining the new reasoning capabilities with user applications to help users accomplish more tasks.

2.0 Pro Experimental Version: Best Programming Performance and Complex Prompt Handling

Google has also launched the experimental Gemini 2.0 Pro, which the company says excels at coding and handling complex prompts. The model has a context window of 2 million tokens, allowing it to analyze and understand large amounts of information, and it supports tools such as Google Search and code execution.
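As a rough illustration of that tool support, the sketch below enables Google Search grounding for the Pro experimental model via the google-genai SDK. The model ID "gemini-2.0-pro-exp-02-05" is an assumption based on launch-time naming and should be checked against current documentation.

```python
# Sketch: enabling the Google Search tool for Gemini 2.0 Pro Experimental.
# The model ID is assumed from launch-time naming; verify against current docs.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.0-pro-exp-02-05",
    contents="What changed in the latest Gemini 2.0 model lineup?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)
print(response.text)
```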

Developers can now try this experimental model in Google AI Studio and Vertex AI, and Gemini Advanced users can access it on desktop and mobile. IT Home provides the following performance comparison:

[Image: performance comparison of the Gemini 2.0 models]

2.0 Flash-Lite: The Most Cost-Effective Model

Google AI Studio has also launched the Gemini 2.0 Flash-Lite model, which the company calls its most cost-effective model to date. It aims to keep costs low and responses fast while delivering higher quality than 1.5 Flash.

This model also supports a 1-million-token context window and multimodal input. For example, on Google AI Studio's paid tier it can generate a relevant one-line caption for each of 40,000 unique photos at a total cost of less than $1.
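A back-of-envelope check of that claim is sketched below. Every price and token count in it is an assumption chosen for illustration (per-image token cost, caption length, and per-million-token rates); the authoritative figures are on Google's pricing page and may differ.

```python
# Rough cost estimate for captioning 40,000 photos with Flash-Lite.
# All prices and token counts are assumptions for illustration only;
# check Google's current pricing before relying on these numbers.
PHOTOS = 40_000
TOKENS_PER_IMAGE = 258      # assumed fixed token cost per small image
TOKENS_PER_CAPTION = 15     # assumed length of a one-line caption
PRICE_IN_PER_M = 0.075      # assumed USD per 1M input tokens
PRICE_OUT_PER_M = 0.30      # assumed USD per 1M output tokens

input_cost = PHOTOS * TOKENS_PER_IMAGE / 1_000_000 * PRICE_IN_PER_M
output_cost = PHOTOS * TOKENS_PER_CAPTION / 1_000_000 * PRICE_OUT_PER_M
print(f"Estimated total: ${input_cost + output_cost:.2f}")  # ≈ $0.95
```

Under these assumed numbers the total stays just under a dollar, consistent with the claim above.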
