Powerful FunctionGemma 270M Model
We’re releasing a specialized version of our Gemma 3 270M model fine-tuned for function calling, along with a training recipe so users can specialize it further for even better performance.
It has been a transformative year for the Gemma family of models. In 2025, we have grown from 100 million to over 300 million downloads while demonstrating the potential of open models, from defining state-of-the-art single-accelerator performance with Gemma 3 to advancing cancer research through the C2S Scale initiative.
Since launching the Gemma 3 270M model, the number one request we’ve received from developers is for native function calling capabilities. We listened, recognizing that as the industry shifts from purely conversational interfaces to active agents, models need to do more than just talk — they need to act. This is particularly compelling on-device, where agents can automate complex, multi-step workflows, from setting reminders to toggling system settings. To enable this at the edge, models must be lightweight enough to run locally and specialized enough to be reliable.
Today, we are releasing FunctionGemma, a specialized version of our Gemma 3 270M model tuned for function calling. It is designed as a strong base for further training into custom, fast, private, local agents that translate natural language into executable API actions.
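To make that concrete, here is a minimal sketch of one turn of function calling through Hugging Face transformers. It is not the official API: it assumes a transformers-compatible checkpoint and the library's standard tool-calling chat-template support, and the model ID and the reminder function are placeholders for illustration.

```python
# A minimal sketch, assuming a Hugging Face transformers checkpoint and the
# library's standard tools= chat-template support. Placeholder model ID.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "google/functiongemma-270m"  # hypothetical model ID

def set_reminder(time: str, text: str):
    """Set a reminder at a given time.

    Args:
        time: When the reminder should fire, e.g. "5pm".
        text: The reminder message.
    """

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

messages = [{"role": "user", "content": "Remind me to call John at 5pm"}]

# apply_chat_template renders the function's schema into the prompt so the
# model can answer with a structured function call instead of free text.
inputs = tokenizer.apply_chat_template(
    messages, tools=[set_reminder], add_generation_prompt=True, return_tensors="pt"
)
out = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```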
FunctionGemma acts as a fully independent agent for private, offline tasks, or as an intelligent traffic controller for larger connected systems. In this role, it can handle common commands instantly at the edge, while routing more complex tasks to models like Gemma 3 27B.
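The traffic-controller pattern can be sketched in a few lines. Everything below is illustrative: the two model calls are stubs standing in for an on-device FunctionGemma and a hosted Gemma 3 27B.

```python
from typing import Optional

def local_function_model(text: str) -> Optional[dict]:
    # Stand-in for FunctionGemma running on-device: here it only "understands"
    # flashlight commands and defers everything else.
    if "flashlight" in text.lower():
        return {"name": "set_flashlight", "args": {"on": "on" in text.lower()}}
    return None

def remote_large_model(text: str) -> str:
    # Stand-in for a call to a larger hosted model such as Gemma 3 27B.
    return f"[large model] free-form answer to: {text!r}"

def execute(call: dict) -> str:
    return f"executed {call['name']} with {call['args']}"

def route(text: str) -> str:
    """Handle common commands instantly at the edge; escalate the rest."""
    call = local_function_model(text)
    return execute(call) if call else remote_large_model(text)

print(route("Turn on the flashlight"))          # handled locally
print(route("Plan a three-day trip to Kyoto"))  # routed to the larger model
```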
FunctionGemma accuracy on the Mobile Actions dataset before and after fine-tuning, measured on a held-out eval set.
FunctionGemma is the bridge between natural language and software execution. It is the right tool when you need a fast, private, local agent that turns natural language into executable API actions.
Let's look at how these models transform actual user experiences. You can explore these capabilities in the Google AI Edge Gallery app through two distinct experiences: an interactive game and a developer challenge.
This demo reimagines assistant interaction as a fully offline capability. Whether it’s "Create a calendar event for lunch tomorrow," "Add John to my contacts" or "Turn on the flashlight," the model parses the natural language and identifies the correct OS tool to execute the command. To unlock this agent, developers are invited to use our fine-tuning cookbook to build the model and load it onto their mobile device.
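As a rough sketch of that last step, once the model has emitted a structured function call, dispatching it to a device action is a plain lookup. The JSON shape and action names here are assumptions for illustration, not the demo's actual contract.

```python
import json

# Hypothetical device actions; in the demo these would map to real OS calls.
def create_calendar_event(title: str, date: str) -> str:
    return f"event {title!r} created for {date}"

def add_contact(name: str) -> str:
    return f"contact {name!r} added"

def set_flashlight(on: bool) -> str:
    return f"flashlight {'on' if on else 'off'}"

ACTIONS = {
    "create_calendar_event": create_calendar_event,
    "add_contact": add_contact,
    "set_flashlight": set_flashlight,
}

# Suppose the model emitted this for "Create a calendar event for lunch
# tomorrow" (the JSON shape is an assumption for illustration):
model_output = (
    '{"name": "create_calendar_event",'
    ' "arguments": {"title": "Lunch", "date": "tomorrow"}}'
)

call = json.loads(model_output)
print(ACTIONS[call["name"]](**call["arguments"]))
```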
In this interactive mini-game, you can see FunctionGemma’s function calling in action.