DavidBert | 0ef0dc6837 | use dispatch_attention_fn for multiple attention backend support | 2025-10-21 07:29:30 +00:00
DavidBert | d0c029f15d | built-in RMSNorm | 2025-10-21 07:27:15 +00:00
DavidBert | 015774399e | Refactor PhotonAttention to match Flux pattern | 2025-10-21 07:27:14 +00:00
David Bertoin | bb36735379 | make quality + style | 2025-10-21 07:27:14 +00:00
davidb | 12dbabe607 | fix timestep shift | 2025-10-21 07:27:14 +00:00
davidb | ec70e3fdc0 | fix T5Gemma loading from hub | 2025-10-21 07:27:14 +00:00
davidb | 6a66fbd2c4 | just store the T5Gemma encoder | 2025-10-21 07:27:13 +00:00
davidb | e487660e05 | Add Photon model and pipeline support | 2025-10-21 07:27:13 +00:00
    This commit adds support for the Photon image generation model:
    - PhotonTransformer2DModel: Core transformer architecture
    - PhotonPipeline: Text-to-image generation pipeline
    - Attention processor updates for Photon-specific attention mechanism
    - Conversion script for loading Photon checkpoints
    - Documentation and tests