PANews reported on April 14 that Google DeepMind has released Gemini Robotics-ER 1.6, an embodied reasoning model for robotics. Compared with Gemini Robotics-ER 1.5 and Gemini 3.0 Flash, it significantly improves spatial and physical reasoning tasks such as pointing and localization, object counting, and multi-view success detection, and adds the ability to read industrial instruments. The model can serve as a robot's high-level decision-making hub, natively calling tools such as Google Search and VLA models. For instrument reading, it magnifies key regions of an image and then combines pointing with code-based calculation to produce high-precision readings. Google also states that it outperforms previous models in safety-instruction compliance and in judging physical safety constraints. It is now available to developers through the Gemini API and Google AI Studio.
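Since the model is exposed through the standard Gemini API, a pointing query of the kind described above can be sketched with the `google-genai` Python SDK. This is a minimal sketch, not an official example: the model id `gemini-robotics-er-1.6` is an assumption (earlier releases used ids like `gemini-robotics-er-1.5-preview`), and the JSON reply shape follows the format documented for the 1.5 model, which returns points as `[y, x]` coordinates normalized to 0-1000.

```python
# Hypothetical sketch: asking Gemini Robotics-ER to point at objects in an
# image via the Gemini API. Model id and reply format are assumptions; see
# the lead-in above.
import json

MODEL_ID = "gemini-robotics-er-1.6"  # assumed; check the official model list

POINTING_PROMPT = (
    "Point to the pressure gauge needle and the gauge center. "
    'Answer as a JSON list: [{"point": [y, x], "label": "<name>"}], '
    "with coordinates normalized to 0-1000."
)


def parse_points(raw: str) -> list[tuple[str, int, int]]:
    """Parse the model's JSON reply into (label, y, x) tuples."""
    items = json.loads(raw)
    return [(it["label"], it["point"][0], it["point"][1]) for it in items]


def query_points(image_bytes: bytes) -> list[tuple[str, int, int]]:
    """Send one image plus the pointing prompt (requires GEMINI_API_KEY)."""
    from google import genai  # pip install google-genai
    from google.genai import types

    client = genai.Client()  # reads GEMINI_API_KEY from the environment
    resp = client.models.generate_content(
        model=MODEL_ID,
        contents=[
            types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
            POINTING_PROMPT,
        ],
    )
    return parse_points(resp.text)


# Offline check of the parser against the assumed reply shape:
sample = '[{"point": [412, 655], "label": "gauge needle"}]'
print(parse_points(sample))  # -> [('gauge needle', 412, 655)]
```

Returned points can then feed the code-based calculation step the article mentions, e.g. computing the needle angle from the needle and gauge-center coordinates.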