Z-Image De-Turbo De-distilled Model
De-distilled version of the Z-Image model that removes the turbo distillation constraints and restores trainability and flexibility
Tags: De-distilled · LoRA Training · Deep Fine-tuning · ComfyUI · Trainability
Overview
Z-Image De-Turbo is a de-distilled version of Tongyi-MAI/Z-Image-Turbo, fine-tuned on images generated by Z-Image-Turbo to undo the limitations imposed by turbo distillation. It is designed specifically for training and deep fine-tuning, offering greater trainability and flexibility than the original turbo model.
Features
- De-distillation that removes the original turbo model's limitations
- Direct training without adapters
- LoRA training support, with adapters remaining compatible with the base model
- Deep fine-tuning beyond what the original turbo model supports
- Available in both ComfyUI and diffusers-based versions
- CFG normalization support
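The LoRA compatibility point above rests on the structure of the LoRA update itself: the adapted weight is the frozen base weight plus a scaled low-rank product, so an adapter can be merged into, or stripped from, a checkpoint without modifying the base parameters. A minimal NumPy sketch (shapes, `alpha`, and scaling follow the generic LoRA recipe, not this model's actual layer dimensions):

```python
import numpy as np

# Toy illustration of the LoRA update. The adapted weight is the frozen
# base weight plus a scaled low-rank product, so an adapter trained on
# the de-distilled model can be merged or removed without touching the
# base parameters. (Generic LoRA recipe, for illustration only.)
rng = np.random.default_rng(0)
d_out, d_in, rank, alpha = 32, 16, 4, 8

W = rng.normal(size=(d_out, d_in))   # frozen base weight
A = rng.normal(size=(rank, d_in))    # trainable down-projection
B = np.zeros((d_out, rank))          # trainable up-projection, zero-init

def merged(W, B, A, alpha, rank):
    """Base weight plus the scaled low-rank LoRA update."""
    return W + (alpha / rank) * (B @ A)

# With B zero-initialized, the merged weight equals the base weight,
# so the adapted model starts out behaving exactly like the base model.
W_merged = merged(W, B, A, alpha, rank)
```

Because the update `B @ A` has rank at most `rank`, the adapter adds only a small number of trainable parameters, which is what keeps LoRA training on the de-distilled model cheap and reversible.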
Images
Sample image generated using Z-Image De-Turbo
Installation
git clone https://huggingface.co/ostris/Z-Image-De-Turbo
cd Z-Image-De-Turbo
pip install -r requirements.txt
Usage
For inference, use a low CFG scale (2.0-3.0) and 20-30 sampling steps. The model supports CFG normalization for better generation results.
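CFG normalization is commonly implemented by rescaling the guided prediction back toward the statistics of the conditional prediction, which counteracts the over-saturation plain CFG can cause at higher scales. A minimal NumPy sketch of one such formulation (the `rescale` blend and std-matching here follow the widely used "rescale CFG" recipe, assumed for illustration, not necessarily this model's exact implementation):

```python
import numpy as np

def cfg_normalize(cond, uncond, scale, rescale=0.7):
    """Classifier-free guidance with norm rescaling (illustrative).

    Plain CFG, uncond + scale * (cond - uncond), can inflate the
    magnitude of the prediction at higher scales. Rescaling pulls the
    guided prediction's standard deviation back toward that of the
    conditional prediction, then blends with the unrescaled result.
    """
    guided = uncond + scale * (cond - uncond)   # standard CFG
    std_ratio = cond.std() / guided.std()       # how much CFG inflated it
    rescaled = guided * std_ratio               # match cond's std
    return rescale * rescaled + (1 - rescale) * guided

# Toy stand-ins for the model's conditional/unconditional predictions.
rng = np.random.default_rng(0)
cond = rng.normal(size=(4, 64))
uncond = rng.normal(size=(4, 64))
out = cfg_normalize(cond, uncond, scale=2.5)
```

With `rescale=1.0` the output's standard deviation matches the conditional prediction exactly; values below 1.0 keep some of the raw guided signal, which is why a low CFG scale in the 2.0-3.0 range pairs well with normalization.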
Requirements
- Python 3.8+
- PyTorch
- Diffusers library
- CUDA-compatible GPU
- 16GB+ VRAM recommended for training