A few days ago Google released Gemma 4, and the GGUF quantization runs comfortably on my RTX 5060 Ti GPU. In this guide I will walk you step by step through setting up your own Gemma 4 to run locally on Windows using OpenClaw.
I was planning to do this with the NVFP4 model, but it’s not supported by llama.cpp at this time.
Note that the performance comparison below is for the unquantized models, but it gives an idea of their power.

Installation
Installing llama.cpp
Open PowerShell as admin and type the following command:
winget install llama.cpp
Once the installation is done, pick which GGUF quantization you want to use. I went with the Q5_K_M model, but you can pick whichever you like from the Unsloth repository. Once you have decided which model to use, download it and start the llama server by entering the following command in your PowerShell terminal:
llama-server -hf unsloth/gemma-4-E4B-it-GGUF:Q5_K_M
Note: If you plan to use a GGUF quantization other than Q5_K_M, replace Q5_K_M in the command above with the quantization you want to use.
Alternatively, if you prefer manual file management, create a folder named AI_Models in your C:\ drive and download the model there. You can then point the server to that specific file using this command:
llama-server -m "C:\AI_Models\gemma-4-E4B-it-UD-Q5_K_XL.gguf" --port 8080
Make sure that the file name in the path matches the name of the file you actually downloaded.
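If you want more control over the launch, llama-server accepts a few useful tuning flags: -ngl (--n-gpu-layers) controls how many layers are offloaded to the GPU, and -c (--ctx-size) sets the context window. A sketch using the same file path as above (the values are illustrative; adjust them to your hardware):

```powershell
# Illustrative launch with tuning flags; values are examples, not requirements.
llama-server -m "C:\AI_Models\gemma-4-E4B-it-UD-Q5_K_XL.gguf" `
  --port 8080 `
  -ngl 99 `
  -c 8192
```

Setting -ngl to a large number simply offloads as many layers as will fit; if you run out of VRAM, lower it.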
Once the model is downloaded and the server is running, you should see output like this in your terminal.

This means that the model is now loaded and the server is running. To verify that everything is working and connected, open a new PowerShell terminal (as admin) and enter the command:
curl http://127.0.0.1:8080/v1/models
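The endpoint returns an OpenAI-style model list. As a rough, abridged sketch of the shape to expect (the model id here is illustrative; yours will reflect the file you loaded):

```json
{
  "object": "list",
  "data": [
    { "id": "gemma-4-E4B-it", "object": "model" }
  ]
}
```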
You should see this message in your terminal:

Installing OpenClaw
The next step is to install OpenClaw. In a separate PowerShell terminal, enter the following command:
powershell -c "irm https://openclaw.ai/install.ps1 | iex"
The command will download and install all dependencies needed, and will also start an on-screen installation wizard.
When the installation wizard starts, you will first be shown a safety and security notice. Once you accept the risks, you will be asked if you wish to proceed with the QuickStart. Select yes here as well.


For the rest of the options in the installation wizard pick these settings:
- Model/Auth provider: Skip for now
- Filter models by provider: All providers
- Default model: Keep current
- Select channel: Skip for now
- Configure skills: No
- Enable hooks: Enable boot-md, command-logger and session-memory
Once you are done with that part of the wizard, you will see a Dashboard Ready message, which includes a link to the OpenClaw UI that you should save for later. Once the installation is done, paste the link that includes token= followed by the long string of numbers and letters into your browser to open the OpenClaw UI.

When asked if you want to install the shell completion script, pick yes.
Make llama.cpp and OpenClaw Play Together
To open Gemma 4 in OpenClaw, you need to update the settings. Locate your openclaw.json in the installation folder (by default at C:\Users\your_username\.openclaw\) and edit it manually.
Alternatively, you can download my pre-configured openclaw.json and edit only the workspace location and your token (the one you got from the installation).
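If you edit the file by hand, the key idea is to register the local llama-server as an OpenAI-compatible provider pointing at http://127.0.0.1:8080/v1. The exact schema depends on your OpenClaw version, so treat the field names below as assumptions, a sketch of the idea rather than a drop-in file:

```json
{
  "providers": {
    "llama-local": {
      "baseUrl": "http://127.0.0.1:8080/v1",
      "apiKey": "not-needed",
      "models": ["gemma-4-E4B-it"]
    }
  }
}
```

The apiKey value is a placeholder; a local llama-server does not require authentication by default.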



Once you have updated and saved openclaw.json, restart the OpenClaw gateway using:
openclaw gateway restart
When the gateway has restarted, you should be able to access Gemma inside OpenClaw UI.

You can now chat with Gemma 4 in the OpenClaw webchat, and for any questions about further configuration, plugins, and skills, you can simply ask Gemma for help. It can complete many configurations by itself, and it will ask you for permission before making any changes.
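If you prefer to test the model outside the UI, llama-server also exposes an OpenAI-compatible chat endpoint. A minimal sketch, run from PowerShell (curl.exe avoids PowerShell's curl alias, which behaves differently):

```powershell
# Minimal chat request against the local llama-server; the prompt is just an example.
curl.exe http://127.0.0.1:8080/v1/chat/completions `
  -H "Content-Type: application/json" `
  -d '{\"messages\": [{\"role\": \"user\", \"content\": \"Hello, Gemma!\"}]}'
```

The escaped inner quotes are needed so the JSON survives Windows command-line parsing.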
Here is a list of some useful commands that can be used in PowerShell.
openclaw gateway install
openclaw gateway start
openclaw gateway stop
openclaw gateway restart
openclaw gateway uninstall
openclaw daemon status
openclaw daemon install
openclaw daemon start
openclaw daemon stop
openclaw daemon restart
openclaw doctor
openclaw doctor --repair
A full list of commands, as well as the complete OpenClaw documentation, can be found here: OpenClaw docs
If you found this guide helpful, consider signing up for my newsletter to get news, tips, and tricks directly in your inbox.
