# GLM-Image

## Overview

GLM-Image is an image generation model that adopts a hybrid autoregressive + diffusion decoder architecture, effectively pushing the upper bound of visual fidelity and fine-grained detail. In general image generation quality it is on par with industry-standard LDM-based approaches, while demonstrating significant advantages in knowledge-intensive image generation scenarios.

Model architecture: a hybrid autoregressive + diffusion decoder design.

- **Autoregressive generator**: a 9B-parameter model initialized from GLM-4-9B-0414, with an expanded vocabulary that incorporates visual tokens. The model first generates a compact encoding of approximately 256 tokens, then expands it to 1K-4K tokens, corresponding to 1K-2K high-resolution image outputs. The AR model is available as the `GlmImageForConditionalGeneration` class in the `transformers` library.
- **Diffusion decoder**: a 7B-parameter decoder based on a single-stream DiT architecture for latent-space image decoding. It is equipped with a Glyph Encoder text module, which significantly improves accurate text rendering within images. (A sketch of inspecting these components from Python follows this list.)
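
A quick way to confirm this two-stage layout is to load the pipeline and print the modules it registers. This is a minimal sketch that relies only on the generic `DiffusionPipeline.components` property, so it makes no assumption about GLM-Image's exact component names:

```python
import torch

from diffusers.pipelines.glm_image import GlmImagePipeline

pipe = GlmImagePipeline.from_pretrained("zai-org/GLM-Image", torch_dtype=torch.bfloat16)

# `components` maps each registered module name to the loaded object
# (the AR generator, the DiT decoder, the VAE, tokenizers, ...).
for name, module in pipe.components.items():
    print(f"{name}: {type(module).__name__}")
```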

Post-training with decoupled reinforcement learning: the model introduces a fine-grained, modular feedback strategy using the GRPO algorithm, substantially enhancing both semantic understanding and visual detail quality.

- **Autoregressive module**: provides low-frequency feedback signals focused on aesthetics and semantic alignment, improving instruction following and artistic expressiveness.
- **Decoder module**: delivers high-frequency feedback targeting detail fidelity and text accuracy, resulting in highly realistic textures, lighting, and color reproduction, as well as more precise text rendering.

GLM-Image supports both text-to-image and image-to-image generation within a single model:

- **Text-to-image**: generates high-detail images from textual descriptions, with particularly strong performance in information-dense scenarios.
- **Image-to-image**: supports a wide range of tasks, including image editing, style transfer, multi-subject consistency, and identity-preserving generation for people and objects.

This pipeline was contributed by zRzRzRzRzRzRzR. The codebase can be found here.

## Usage examples

### Text-to-Image Generation

```python
import torch

from diffusers.pipelines.glm_image import GlmImagePipeline

pipe = GlmImagePipeline.from_pretrained(
    "zai-org/GLM-Image", torch_dtype=torch.bfloat16, device_map="cuda"
)
prompt = "A beautifully designed modern food magazine style dessert recipe illustration, themed around a raspberry mousse cake. The overall layout is clean and bright, divided into four main areas: the top left features a bold black title 'Raspberry Mousse Cake Recipe Guide', with a soft-lit close-up photo of the finished cake on the right, showcasing a light pink cake adorned with fresh raspberries and mint leaves; the bottom left contains an ingredient list section, titled 'Ingredients' in a simple font, listing 'Flour 150g', 'Eggs 3', 'Sugar 120g', 'Raspberry puree 200g', 'Gelatin sheets 10g', 'Whipping cream 300ml', and 'Fresh raspberries', each accompanied by minimalist line icons (like a flour bag, eggs, sugar jar, etc.); the bottom right displays four equally sized step boxes, each containing high-definition macro photos and corresponding instructions, arranged from top to bottom as follows: Step 1 shows a whisk whipping white foam (with the instruction 'Whip egg whites to stiff peaks'), Step 2 shows a red-and-white mixture being folded with a spatula (with the instruction 'Gently fold in the puree and batter'), Step 3 shows pink liquid being poured into a round mold (with the instruction 'Pour into mold and chill for 4 hours'), Step 4 shows the finished cake decorated with raspberries and mint leaves (with the instruction 'Decorate with raspberries and mint'); a light brown information bar runs along the bottom edge, with icons on the left representing 'Preparation time: 30 minutes', 'Cooking time: 20 minutes', and 'Servings: 8'. The overall color scheme is dominated by creamy white and light pink, with a subtle paper texture in the background, featuring compact and orderly text and image layout with clear information hierarchy."
image = pipe(
    prompt=prompt,
    height=32 * 32,  # 1024; the examples express dimensions as multiples of 32
    width=36 * 32,  # 1152
    num_inference_steps=30,
    guidance_scale=1.5,
    generator=torch.Generator(device="cuda").manual_seed(42),
).images[0]

image.save("output_t2i.png")
```
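
With a 9B AR generator plus a 7B diffusion decoder, the full pipeline can exceed the memory of smaller GPUs. The standard diffusers offloading helper should apply here as in other pipelines; this is an untested sketch rather than a measured recommendation:

```python
import torch

from diffusers.pipelines.glm_image import GlmImagePipeline

pipe = GlmImagePipeline.from_pretrained("zai-org/GLM-Image", torch_dtype=torch.bfloat16)
# Keep sub-models on CPU and move each one to the GPU only while it runs,
# trading some speed for a much lower peak VRAM footprint.
pipe.enable_model_cpu_offload()
```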

### Image-to-Image Generation

```python
import torch
from PIL import Image

from diffusers.pipelines.glm_image import GlmImagePipeline

pipe = GlmImagePipeline.from_pretrained(
    "zai-org/GLM-Image", torch_dtype=torch.bfloat16, device_map="cuda"
)
image_path = "cond.jpg"
prompt = "Replace the background of the snow forest with an underground station featuring an automatic escalator."
image = Image.open(image_path).convert("RGB")
image = pipe(
    prompt=prompt,
    image=[image],  # multiple images, e.g. [image, image1], enable multi-image-to-image generation
    height=33 * 32,  # 1056
    width=32 * 32,  # 1024
    num_inference_steps=30,
    guidance_scale=1.5,
    generator=torch.Generator(device="cuda").manual_seed(42),
).images[0]

image.save("output_i2i.png")
```
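
As the inline comment notes, the `image` argument accepts a list, so multi-reference tasks such as multi-subject consistency can pass several conditioning images at once. A minimal sketch, reusing the `pipe` loaded above and assuming two hypothetical local reference files `subject.jpg` and `style.jpg`:

```python
from PIL import Image

refs = [
    Image.open("subject.jpg").convert("RGB"),  # hypothetical reference image
    Image.open("style.jpg").convert("RGB"),  # hypothetical reference image
]
image = pipe(
    prompt="Place the subject from the first image into the painting style of the second.",
    image=refs,  # multiple conditioning images
    height=32 * 32,
    width=32 * 32,
    num_inference_steps=30,
    guidance_scale=1.5,
).images[0]

image.save("output_multi_i2i.png")
```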
- Since the AR model used in GLM-Image is configured with `do_sample=True` and a temperature of 0.95 by default, generated images can vary significantly across runs. We do not recommend setting `do_sample=False`, as this may lead to incorrect or degenerate outputs from the AR model.

## GlmImagePipeline

[[autodoc]] pipelines.glm_image.pipeline_glm_image.GlmImagePipeline
  - all
  - __call__

## GlmImagePipelineOutput

[[autodoc]] pipelines.glm_image.pipeline_output.GlmImagePipelineOutput