Using Gemma 4 with profClaw for free local AI #37
thegdsks started this conversation in Show and tell
Gemma 4 E4B dropped April 2026. Apache 2.0 license. Native tool calling. 128K context. Runs on a 16GB Mac via Ollama. Costs nothing.
We added support in v2.2.0. Here is the setup:
```
profclaw chat -m gemma4:e4b 'What can you help me with?'
```

Or via API:
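The API snippet didn't survive the page formatting here, so the following is only a sketch of what an equivalent request could look like, assuming profClaw serves an OpenAI-compatible chat endpoint on localhost. The URL, port, and path are assumptions, not from this post; adjust to whatever your profClaw instance actually exposes.

```python
import json
from urllib import request

# Hypothetical endpoint: assumes profClaw serves an OpenAI-compatible
# API locally. Host, port, and path are guesses; change to match your setup.
URL = "http://localhost:8080/v1/chat/completions"

def build_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(model: str, prompt: str) -> str:
    """POST the prompt and return the assistant's reply text."""
    body = json.dumps(build_payload(model, prompt)).encode()
    req = request.Request(
        URL, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Usage (with a local server running):
# print(chat("gemma4:e4b", "What can you help me with?"))
```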
Token usage: 16-24 input tokens per message (we optimized the system prompt). Tool calling works; the model calls `web_search`, `exec`, and `read_file` correctly.
We also added Qwen 3 4B, which is even better at tool calling, if you want an alternative.
Anyone else running local models with profClaw? What is your setup?