GLM-4.7: How to Run Locally Guide
A guide on how to run Z.ai GLM-4.7 model on your own local device!
GLM-4.7 is Z.ai’s latest thinking model, delivering stronger coding, agent, and chat performance than GLM-4.6. It achieves SOTA performance on SWE-bench (73.8%, +5.8), SWE-bench Multilingual (66.7%, +12.9), and Terminal Bench 2.0 (41.0%, +16.5).
The full 355B parameter model requires 400GB of disk space, while the Unsloth Dynamic 2-bit GGUF reduces the size to 134GB (about a 66% reduction): GLM-4.7-GGUF
All uploads use Unsloth Dynamic 2.0 for SOTA 5-shot MMLU and Aider performance, meaning you can run & fine-tune quantized GLM LLMs with minimal accuracy loss.
⚙️ Usage Guide
The 2-bit dynamic quant UD-Q2_K_XL uses 135GB of disk space - this works well on a single 24GB GPU with 128GB of system RAM using MoE offloading. The 1-bit UD-TQ1 GGUF also works natively in Ollama!
You must use --jinja for llama.cpp quants - this enables our fixed chat template and ensures the correct prompt format. You might get incorrect results if you do not use --jinja.
The 4-bit quants will fit on a single 40GB GPU (with MoE layers offloaded to RAM). Expect around 5 tokens/s with this setup if you also have roughly 165GB of system RAM. For the 4-bit quant we recommend at least 205GB of RAM; for optimal performance (5+ tokens/s) you will need at least 205GB of unified memory or 205GB of combined RAM + VRAM. To learn how to increase generation speed and fit longer contexts, read here.
Though not a must, for best performance have your combined VRAM + RAM equal to at least the size of the quant you're downloading. If not, llama.cpp will still work with hard drive / SSD offloading, just with slower inference. Also use --fit on in llama.cpp to automatically make maximum use of your GPU!
Recommended Settings
Use distinct settings for different use cases. Recommended settings for the default and multi-turn agentic use cases:
Default: temperature = 1.0, top_p = 0.95, 131,072 max new tokens
Multi-turn agentic: temperature = 0.7, top_p = 1.0, 16,384 max new tokens
Use --jinja for llama.cpp variants - we fixed some chat template issues as well! Maximum context window: 131,072 tokens.
Run GLM-4.7 Tutorials:
See our step-by-step guides for running GLM-4.7 in Ollama and llama.cpp.
✨ Run in llama.cpp
Obtain the latest llama.cpp on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.
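A typical build looks roughly like this (a sketch assuming an Ubuntu-like machine with git, cmake and, for GPU builds, the CUDA toolkit installed):

```bash
# Install build tools (Ubuntu; prefix with sudo if needed)
apt-get update
apt-get install build-essential cmake curl libcurl4-openssl-dev -y

# Clone and build llama.cpp with CUDA (set -DGGML_CUDA=OFF for CPU-only inference)
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first \
    --target llama-cli llama-server llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp
```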
If you want to use llama.cpp directly to download and load the model, you can do the below. The suffix (:Q2_K_XL) is the quantization type; you can also download the files first via Hugging Face (point 3). This works similarly to ollama run. Use export LLAMA_CACHE="folder" to force llama.cpp to save downloads to a specific location. Remember the model has a maximum context length of 128K tokens.
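For example (a sketch - the repo id unsloth/GLM-4.7-GGUF follows the naming above and is an assumption, so adjust it and the flags to your setup):

```bash
# Cache downloaded GGUFs in a specific folder
export LLAMA_CACHE="unsloth/GLM-4.7-GGUF"

# Download and run the 2-bit dynamic quant straight from Hugging Face
./llama.cpp/llama-cli \
    -hf unsloth/GLM-4.7-GGUF:Q2_K_XL \
    --jinja \
    --threads 32 \
    --ctx-size 16384 \
    --temp 1.0 \
    --top-p 0.95
```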
Use --fit on (introduced 15 Dec 2025) for maximum usage of your GPU and CPU.
Optionally, try -ot ".ffn_.*_exps.=CPU" to offload all MoE layers to the CPU. This uses the least VRAM and lets all non-MoE layers fit on one GPU, improving generation speed. You can customize the regex expression to keep more layers on the GPU if you have more capacity:
If you have a bit more GPU memory, try -ot ".ffn_(up|down)_exps.=CPU" - this offloads only the up and down projection MoE layers.
Try -ot ".ffn_(up)_exps.=CPU" if you have even more GPU memory. This offloads only the up projection MoE layers.
You can also customize the regex, for example -ot "\.(6|7|8|9|[0-9][0-9]|[0-9][0-9][0-9])\.ffn_(gate|up|down)_exps.=CPU" means to offload gate, up and down MoE layers but only from the 6th layer onwards.
Download the model via the Python snippet below (after pip install huggingface_hub hf_transfer). You can choose UD-Q2_K_XL (dynamic 2-bit quant) or other quantized versions like Q4_K_XL. We recommend our 2.7-bit dynamic quant UD-Q2_K_XL to balance size and accuracy.
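A minimal sketch of the download, assuming the repo id unsloth/GLM-4.7-GGUF used above; change allow_patterns to the quant you want:

```python
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"  # optional: enables faster hf_transfer downloads

from huggingface_hub import snapshot_download

# Download only the UD-Q2_K_XL (dynamic 2-bit) files into a local folder
snapshot_download(
    repo_id="unsloth/GLM-4.7-GGUF",
    local_dir="unsloth/GLM-4.7-GGUF",
    allow_patterns=["*UD-Q2_K_XL*"],  # use ["*Q4_K_XL*"] for the 4-bit dynamic quant
)
```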
You can edit --threads 32 for the number of CPU threads, --ctx-size 16384 for the context length, and --n-gpu-layers 2 for how many layers to offload to the GPU. Try adjusting these if your GPU goes out of memory, and remove --n-gpu-layers if you are doing CPU-only inference.
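Putting it together, a full run command might look like the following sketch (the GGUF filename and split count are assumptions - check the folder you downloaded):

```bash
# -ot forces the MoE expert layers to the CPU while --n-gpu-layers keeps the
# rest on the GPU; lower --n-gpu-layers if you run out of VRAM.
./llama.cpp/llama-cli \
    --model unsloth/GLM-4.7-GGUF/UD-Q2_K_XL/GLM-4.7-UD-Q2_K_XL-00001-of-00003.gguf \
    --threads 32 \
    --ctx-size 16384 \
    --n-gpu-layers 99 \
    -ot ".ffn_.*_exps.=CPU" \
    --jinja \
    --temp 1.0 \
    --top-p 0.95
```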
🦙 Run in Ollama
Install ollama if you haven't already! To run more variants of the model, see here.
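On Linux, the usual one-line installer is:

```bash
# Official Ollama install script
curl -fsSL https://ollama.com/install.sh | sh
```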
Run the model! Note you can call ollama serve in another terminal if it fails! We include all our fixes and suggested parameters (temperature etc.) in the params file of our Hugging Face upload!
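A sketch of running the 1-bit dynamic quant directly from Hugging Face (the exact quant tag is an assumption - check the tags listed on the repo):

```bash
# Pull and run the 1-bit dynamic GGUF straight from Hugging Face
ollama run hf.co/unsloth/GLM-4.7-GGUF:TQ1_0
```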
To run other quants, you first need to merge the GGUF split files into one, as in the code below. You can then run the merged model locally.
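For example, with the llama-gguf-split tool built earlier (the filenames are assumptions - point it at the first split of the quant you downloaded):

```bash
# Merge all splits into a single GGUF file, starting from the first split
./llama.cpp/llama-gguf-split --merge \
    unsloth/GLM-4.7-GGUF/UD-Q2_K_XL/GLM-4.7-UD-Q2_K_XL-00001-of-00003.gguf \
    GLM-4.7-UD-Q2_K_XL-merged.gguf
```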
✨ Deploy with llama-server and OpenAI's completion library
To use llama-server for deployment, use the following command:
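A sketch of the server command, reusing the flags from the llama.cpp section (the GGUF path and port are assumptions):

```bash
# Serve GLM-4.7 over an OpenAI-compatible HTTP API on port 8001
./llama.cpp/llama-server \
    --model unsloth/GLM-4.7-GGUF/UD-Q2_K_XL/GLM-4.7-UD-Q2_K_XL-00001-of-00003.gguf \
    --threads 32 \
    --ctx-size 16384 \
    --n-gpu-layers 99 \
    -ot ".ffn_.*_exps.=CPU" \
    --jinja \
    --host 0.0.0.0 \
    --port 8001
```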
Then use OpenAI's Python library (after pip install openai):
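A minimal sketch, assuming the llama-server started above on port 8001 (match whatever --port you used):

```python
from openai import OpenAI

# Point the OpenAI client at the local llama-server endpoint; no real API key is needed
client = OpenAI(base_url="http://127.0.0.1:8001/v1", api_key="sk-no-key-required")

response = client.chat.completions.create(
    model="GLM-4.7",  # llama-server serves whichever model it was launched with
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
    temperature=1.0,
    top_p=0.95,
)
print(response.choices[0].message.content)
```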
🔨 Tool Calling with GLM 4.7
See our Tool Calling LLMs Guide for more details on how to do tool calling. In a new terminal (if using tmux, press CTRL+B then D to detach), we create some tools, such as adding two numbers, executing Python code, executing Linux commands and much more:
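For example (a minimal sketch - the tool names and schemas below are illustrative, not the exact ones from the Tool Calling LLMs Guide):

```python
# Two toy tools: add two numbers and execute a snippet of Python code.
def add_two_numbers(a: float, b: float) -> float:
    """Return the sum of two numbers."""
    return a + b

def execute_python(code: str) -> str:
    """Execute Python code and return its printed output (unsafe outside a sandbox)."""
    import contextlib, io
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, {})
    return buffer.getvalue()

# OpenAI-style function-calling schemas that describe the tools to the model
tools = [
    {
        "type": "function",
        "function": {
            "name": "add_two_numbers",
            "description": "Add two numbers together.",
            "parameters": {
                "type": "object",
                "properties": {
                    "a": {"type": "number", "description": "First number"},
                    "b": {"type": "number", "description": "Second number"},
                },
                "required": ["a", "b"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "execute_python",
            "description": "Execute a Python code snippet and return its printed output.",
            "parameters": {
                "type": "object",
                "properties": {
                    "code": {"type": "string", "description": "Python code to run"},
                },
                "required": ["code"],
            },
        },
    },
]
```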
We then use the functions below (copy, paste and execute them), which parse the function calls automatically and call the OpenAI endpoint for any model:
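A minimal sketch of such a helper, assuming the llama-server endpoint from the previous section and the tools defined above:

```python
import json
from openai import OpenAI

# Local llama-server endpoint (port 8001 is an assumption; match your --port)
client = OpenAI(base_url="http://127.0.0.1:8001/v1", api_key="sk-no-key-required")

# Map tool names the model may call to the local Python functions defined above
AVAILABLE_FUNCTIONS = {
    "add_two_numbers": add_two_numbers,
    "execute_python": execute_python,
}

def chat_with_tools(messages, tools, model="GLM-4.7"):
    """Send a chat request, run any tool calls locally, and return the final reply."""
    while True:
        response = client.chat.completions.create(
            model=model, messages=messages, tools=tools, temperature=0.7, top_p=1.0
        )
        message = response.choices[0].message
        if not message.tool_calls:
            return message.content
        messages.append(message)  # keep the assistant's tool-call turn in the history
        for call in message.tool_calls:
            result = AVAILABLE_FUNCTIONS[call.function.name](
                **json.loads(call.function.arguments)
            )
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": str(result),
            })
```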
After launching GLM 4.7 via llama-server as in ✨ Deploy with llama-server and OpenAI's completion library (or see the Tool Calling LLMs Guide for more details), we can then make some tool calls:
Tool Call for mathematical operations for GLM 4.7
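A sketch of a request that should trigger the add_two_numbers tool defined above:

```python
# Ask the model to use the maths tool; the helper executes the call and returns the reply
print(chat_with_tools(
    messages=[{"role": "user", "content": "Use the add_two_numbers tool to add 3.5 and 4.25."}],
    tools=tools,
))
```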

Tool Call to execute generated Python code for GLM 4.7
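A sketch of a request that should trigger the execute_python tool defined above:

```python
# Ask the model to write Python code and run it through the execute_python tool
print(chat_with_tools(
    messages=[{
        "role": "user",
        "content": "Write Python code that prints the first 10 Fibonacci numbers, "
                   "then run it with the execute_python tool.",
    }],
    tools=tools,
))
```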

🏂 Improving generation speed
Use --fit on (introduced 15 Dec 2025) for maximum usage of your GPU and CPU. See https://github.com/ggml-org/llama.cpp/pull/16653. --fit on automatically offloads as much of the model as possible to the GPU, then places the rest on the CPU.
If you have more VRAM, you can try offloading more MoE layers, or offloading whole layers themselves.
Normally, -ot ".ffn_.*_exps.=CPU" offloads all MoE layers to the CPU! This effectively allows you to fit all non MoE layers on 1 GPU, improving generation speeds. You can customize the regex expression to fit more layers if you have more GPU capacity.
If you have a bit more GPU memory, try -ot ".ffn_(up|down)_exps.=CPU" This offloads up and down projection MoE layers.
Try -ot ".ffn_(up)_exps.=CPU" if you have even more GPU memory. This offloads only up projection MoE layers.
You can also customize the regex, for example -ot "\.(6|7|8|9|[0-9][0-9]|[0-9][0-9][0-9])\.ffn_(gate|up|down)_exps.=CPU" means to offload gate, up and down MoE layers but only from the 6th layer onwards.
Llama.cpp also offers a high-throughput mode via llama-parallel - read more about it here. You can also quantize the KV cache (to 4-bit, for example) to reduce VRAM / RAM data movement, which can also speed up generation.
📐 How to fit long context (full 128K)
To fit longer context, you can use KV cache quantization to quantize the K and V caches to lower bits. This can also increase generation speed due to reduced RAM / VRAM data movement. The allowed options for K quantization (default is f16) include the below.
--cache-type-k f32, f16, bf16, q8_0, q4_0, q4_1, iq4_nl, q5_0, q5_1
You should use the _1 variants (e.g. q4_1, q5_1) for somewhat better accuracy, albeit slightly slower.
You can also quantize the V cache, but you will need to compile llama.cpp with Flash Attention support via -DGGML_CUDA_FA_ALL_QUANTS=ON and use --flash-attn to enable it. Then you can use it together with --cache-type-k:
--cache-type-v f32, f16, bf16, q8_0, q4_0, q4_1, iq4_nl, q5_0, q5_1
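For example, a sketch that quantizes both caches to q4_1 and enables flash attention for the full 128K context (combine with the rest of your usual flags; newer llama.cpp builds may expect --flash-attn on instead of the bare flag):

```bash
# Quantize K and V caches to 4-bit and enable flash attention to fit 131,072 tokens of context
./llama.cpp/llama-server \
    --model unsloth/GLM-4.7-GGUF/UD-Q2_K_XL/GLM-4.7-UD-Q2_K_XL-00001-of-00003.gguf \
    --ctx-size 131072 \
    --flash-attn \
    --cache-type-k q4_1 \
    --cache-type-v q4_1 \
    -ot ".ffn_.*_exps.=CPU" \
    --jinja
```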