Comments (32)
- canyon289: Hey folks, I'm on the Gemma team. We released new model(s) just recently, and I saw many questions here about function calling. We just published the docs to detail this more. In short, Gemma 3's prompted instruction following is quite good for the larger models, and that's how you use the feature. You don't need to take our word for it! We were waiting for external and independent validation from the Berkeley team, and they just published their results. You can use their metrics to get a rough sense of performance, and of course try it out yourself in AI Studio or locally with your own prompts. https://gorilla.cs.berkeley.edu/leaderboard.html Hope you all enjoy the models!
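To make the "prompted instruction following" approach concrete, here is a minimal sketch of how prompt-based function calling typically works: the tool's JSON schema is embedded directly in the prompt, and the model is instructed to reply with a JSON object naming the function and its arguments. The `get_weather` tool, the prompt wording, and the parsing helper below are all illustrative assumptions, not the Gemma docs' actual format.

```python
import json

# Hypothetical tool schema; in practice you would describe your own function.
TOOL_SCHEMA = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}


def build_prompt(user_message: str) -> str:
    """Embed the tool schema in plain instructions -- the 'it's just a prompt' part."""
    return (
        "You have access to this function:\n"
        f"{json.dumps(TOOL_SCHEMA, indent=2)}\n"
        "If the user's request needs it, reply ONLY with JSON of the form "
        '{"name": ..., "arguments": {...}}.\n\n'
        f"User: {user_message}"
    )


def parse_tool_call(model_output: str):
    """Extract the function name and arguments from the model's JSON reply."""
    call = json.loads(model_output)
    return call["name"], call["arguments"]


# Simulated model reply; a real run would send build_prompt(...) to the model.
reply = '{"name": "get_weather", "arguments": {"city": "Berlin"}}'
name, args = parse_tool_call(reply)
print(name, args["city"])
```

The point of the sketch is that no special API is required: the calling convention lives entirely in the prompt text and in client-side parsing of the model's reply.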
- minimaxir: The example of function calling/structured output here is the cleanest example of how function calling works behind the scenes, incorporating prompt engineering and a JSON schema. With the advent of agents/MCP, the low-level workflow has only become more confusing.
- zellyn: Am I getting slightly different use-cases mixed up, or would it be better if everything just spoke MCP?
- mentalgear: Great, your work on open-source SLMs is much appreciated! (btw: it seems the Google page does not respect the device's "auto" theme setting)
- nurettin: So it's just a prompt? Well then you can do function calling with pretty much any model from this quarter.
- sunrabbit: It's honestly frightening to see how fast it's evolving. It hasn't even been that many years since GPT was first released.
- behnamoh: I'm glad this exists. It ruins the day for Trelis, who took the open-source and free Llama and made it commercial by giving it function-calling abilities: https://huggingface.co/Trelis/Meta-Llama-3-70B-Instruct-func...