Google’s AI Studio adds adjustable video frame extraction, context caching

Google announced that AI Studio now supports adjustable video frame extraction, with context caching coming soon to its Gemini API.
Admin

AI-generated image of a storage mechanism where relevant information is placed in cached memory.

Two new features are coming to Google AI Studio, one of which was a top developer request. The platform for prototyping and experimenting with machine learning models now supports native video frame extraction and context caching. The former is available today, while the latter is coming soon to the Gemini API.

Using video frame extraction, developers can take videos uploaded to their apps and have Gemini extract individual frames, or a series of frames from a specific segment. This helps the model better understand what’s happening in a scene, produce concise summaries, and improve the user experience. The adjustable frame extraction controls live within the Gemini API.


With context caching, developers whose apps deal with large amounts of information can cache frequently used context, reducing costs and streamlining workflows. In other words, files can be sent to Gemini once rather than with every request. Google says context caching is useful for scenarios like “brainstorming content ideas based on your existing work, analyzing complex documents, or providing summaries of research papers and training materials.” Context caching will be supported in the Gemini API when it launches.
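The flow Google describes has two steps: create a cache once with the large, reusable context and a lifetime, then reference that cache by name in later requests instead of re-uploading the files. A minimal sketch of what those two request bodies might look like; the endpoint-style names (`cachedContents`, `ttl`, `cached_content`) are assumptions patterned on the Gemini API’s REST conventions, not a confirmed schema:

```python
# Sketch of the two-step context-caching flow: (1) create a cache holding
# the reusable context with a time-to-live, (2) send lightweight follow-up
# queries that reference the cache by name. Field names (ttl, cached_content)
# are assumptions for illustration.

def build_cache_request(model: str, file_uris: list[str],
                        ttl_seconds: int = 3600) -> dict:
    """Request body to create a cache holding reusable context files."""
    return {
        "model": model,
        "contents": [{
            "role": "user",
            "parts": [
                {"file_data": {"file_uri": uri,
                               "mime_type": "application/pdf"}}
                for uri in file_uris
            ],
        }],
        "ttl": f"{ttl_seconds}s",  # how long the cache lives before expiring
    }

def build_cached_query(cache_name: str, question: str) -> dict:
    """Follow-up request that reuses the cache instead of re-sending files."""
    return {
        "cached_content": cache_name,  # e.g. "cachedContents/abc123"
        "contents": [{"role": "user", "parts": [{"text": question}]}],
    }

cache_req = build_cache_request("models/gemini-1.5-flash",
                                ["files/research-paper"], ttl_seconds=7200)
query = build_cached_query("cachedContents/abc123",
                           "Summarize the key findings.")
```

The cost savings come from the second step: each follow-up query carries only the question and a short cache reference, while the heavy documents are tokenized and billed for storage once.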

These features are part of a wave of Gemini announcements Google is making during its developer conference, alongside the release of Gemini 1.5 Flash, a new Gemma 2 model, and PaliGemma, a pre-trained vision-language model.

About the author

I am Manish Singh Adhikari. I just love tech. Feel free to ask any question.
