WanGP by DeepBeepMeep : The best Open Source Video Generative Models Accessible to the GPU Poor
WanGP supports the Wan (and derived models) but also Hunyuan Video, Flux, Qwen, Z-Image, LongCat, Kandinsky, LTXV, LTX-2, Qwen3 TTS, Chatterbox, HearMula, ... with:
Discord Server to get Help from the WanGP Community and show your Best Gens: https://discord.gg/g7efUW9jGV
Follow DeepBeepMeep on Twitter/X to get the Latest News: https://x.com/deepbeepmeep
From then on, WanGP's Ic Lora will work this way too. The downside is that a single full-resolution pass is much more GPU intensive. But all is good in the WanGP world, as the LTX2 VRAM optimisations will let you use Ic Loras at resolutions impossible anywhere else.
As a bonus I have tuned Sliding Windows for Ic Loras: if you set Overlap Size to a single frame, transitions between windows when using an Ic Lora will be almost invisible.
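To make the overlap behaviour concrete, here is a small sketch (illustrative only, not WanGP's actual windowing code) of how sliding windows with a 1-frame overlap chain together:

```python
def sliding_windows(total_frames, window_size, overlap):
    """Yield (start, end) frame ranges; each window re-uses `overlap` frames
    from the end of the previous window as its starting context."""
    start = 0
    while start < total_frames:
        end = min(start + window_size, total_frames)
        yield (start, end)
        if end == total_frames:
            break
        start = end - overlap

# With Overlap Size = 1, each window starts on the last frame of the
# previous one, so the seam between windows is a single shared frame.
print(list(sliding_windows(241, 81, 1)))  # → [(0, 81), (80, 161), (160, 241)]
```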
Outpaint Ic Lora: this new impressive Ic Lora will be loaded automatically if you select the Control Video for Ic Lora option and enable Outpainting. If you use Sliding Windows with Outpainting you will be able to outpaint a full movie (assuming you have enough RAM).
New Outpainting Auto Change Aspect Ratio: As a reminder, WanGP lets you define manually where Outpainting should happen. Alternatively, you can now ask WanGP to use outpainting to change the Width/Height aspect ratio of the Control Video. For instance, you can turn any 16:9 video into a 4:3 video by generating new details instead of adding black bars. In this new mode, the Top/Bottom/Left/Right sliders define which areas should be expanded in priority to meet the requested aspect ratio.
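The arithmetic behind this mode can be sketched as follows (a hypothetical illustration of the padding computation, not WanGP's actual implementation): expand only one dimension until the target ratio is met.

```python
def outpaint_padding(width, height, target_w, target_h):
    """Extra pixels of (width, height) needed to reach the target aspect
    ratio by expansion only, never by cropping or letterboxing."""
    current = width / height
    target = target_w / target_h
    if current > target:
        # Too wide for the target ratio: grow the height.
        new_height = round(width * target_h / target_w)
        return 0, new_height - height
    else:
        # Too tall (or exact): grow the width.
        new_width = round(height * target_w / target_h)
        return new_width - width, 0

# Turning a 1920x1080 (16:9) video into 4:3 means outpainting
# 360 extra pixels of height (e.g. split across Top/Bottom).
print(outpaint_padding(1920, 1080, 4, 3))  # → (0, 360)
```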
-- New One Click Install / Update Scripts: We have Tophness / @steve_Jabz to thank for this one. Huge kudos to him! The scripts will not only install WanGP but also all the kernels (among Triton, Sage, Flash, GGUF, Lightx2v, Nunchaku) supported by your GPU. Please have a look at the instructions further down. Don't hesitate to share feedback or report any issue.
update 11.26: fixed Outpainting being ignored if Manual Expansion was selected
I have spent a lot of time optimizing Magi Human, but I am not yet sure it is worth keeping it given all the constraints to run this model. So this is where I need YOU. Please share your experience using Magi Human on the Discord server and you shall decide its fate. Should we keep it or send it to the model graveyard ?
Ace 1.5 Turbo XL: the best open source song generator now has a big brother, XL, which delivers better audio quality and sticks closer to the requested lyrics.
LTX 2 Id Lora: due to huge popular demand I have added this one (it is a new Generate Video option). You can provide a voice audio sample, a start image and a text script, and it will turn LTX 2/2.3 into talking heads. This feature comes at a cost, as Id Lora works only with LTX2/2.3 DEV. Fortunately, it seems it can produce decent results in only 10 inference steps. To get the best results it is recommended to use the prefix tags [VISUAL], [SPEECH] & [SOUND]. Alternatively, you can use the WanGP Prompt Enhancer, which has been tuned to generate a prompt following this syntax.
LTX 2 NAG: you can now inject a Negative Prompt even if you use the Distilled Model, thanks to NAG support for LTX 2.
LTX 2 DEV HQ Mode: this High Quality mode should produce better output at higher resolutions. You can turn it on using the new HQ (res2s) Sampler, with 15 steps and the guidance rescaler set to 0.45. It is compatible with Id Loras. Note that an HQ step is twice as slow as a vanilla Dev step, so this mode will be as slow as Dev if not slower.
LTX2 DEV Presets: Vanilla Dev mode & HQ Mode have lots of tunable settings. To make your life easier I have added selectable presets in the Settings dropdown box.
More Deepy:
As a reminder, besides writing huge essays about how great you are, Deepy can generate Video, Image & Audio; extract / transcribe / trim / resize (when applicable) video or audio clips; inspect the content of an image or a video frame; generate black frames; ... Deepy uses Tool templates, but you can specify for a given task the loras, number of frames, dimensions, ... There is also a CLI version of Deepy, quite useful for remote use. Please check the full doc: docs/DEEPY.md.
update 11.21: added Ace Step 1.5 Turbo XL
update 11.22: added LTX2 NAG
Meet Deepy your friendly WanGP Agent.
It works offline with as little as 8 GB of VRAM and won't divulge your secrets. It is 100% free (no need for a ChatGPT/Claude subscription).
You can ask Deepy to perform for you tedious tasks such as:
generate a black frame, crop a video, extract a specific frame from a video, trim an audio, ...
Deepy can also perform full workflows:
1) Generate an image of a robot disco dancing on top of a horse in a nightclub. 2) Now edit the image so the setting stays the same, but the robot has gotten off the horse and the horse is standing next to the robot. 3) Verify that the edited image matches the description; if it does not, generate another one. 4) Generate a transition between the two images.
or
Create a high quality image portrait that you think represents you best in your favorite setting. Then create an audio sample in which you will introduce the users to your capabilities. When done generate a video based on these two files.
Deepy can also transcribe the audio content of a video (new to WanGP 11.11)
extract the video from the moment it says "Deepy changed my life"
Deepy reuses the Qwen3VL Abliterated checkpoints, and it is highly recommended to install the GGUF kernels (check docs/INSTALLATION.md) for low VRAM / fast inference, now also available on Linux!
Please also install Flash Attention 2 and Triton to enable vLLM and get a x2/x3 speed gain and lower VRAM usage.
You can customize Deepy to use the settings of your choice when generating a video, image, ... (please check docs/DEEPY.md).
Go to the Config > Prompt Enhancer / Deepy tab to enable Deepy (you must first choose a Qwen3.5VL Prompt Enhancer).
Important: in order to spare Deepy from learning all the specificities of each model to generate images, videos or audio, Deepy uses Predefined Settings Templates for its six main tools (Generate Video, Generate Image, ...). You can change the templates used in a session or even add your own settings. Just have a look at the doc.
With WanGP 11.11 you can ask Deepy to generate a Video or an Image in specific dimensions, and also a number of frames for a video. You can also specify an optional number of inference steps or loras to use, with multipliers. If you don't mention any of these to Deepy, Deepy's default settings or the current Settings Template will be used instead.
WanGP 11 addresses a long-standing Gradio issue: queues now keep being processed even if your Web Browser is in the background. Beware that this feature may drain more battery, so you can disable it in the Config / General tab.
You may also have noticed the new Keep Intermediate Sliding Windows option in the Config / Outputs tab, which lets you choose whether the intermediate Sliding Windows are kept or discarded.
Qwen3.5 VL Abliterated Prompt Enhancer: new choice of Prompt Enhancer
You can also now expand or override a Prompt Enhancer System Prompt with @ or @@ (check the new doc PROMPTS.md).
GGUF CUDA Kernels: 15% speed gain when using GGUF on Diffusion Video Models & x3 speed with GGUF LLM (Qwen 3.5 VL GGUF for instance). GGUF Kernels are for the moment only available for Windows (please check docs/INSTALLATION.md).
LTX2.3 Improvements
WanGP API: rejoice, developers (and agents) among you! WanGP now offers an internal API that allows you to use WanGP as a backend for your apps. Use is subject to compliance with the terms & conditions of the WanGP license, and more specifically to informing the users of your app that WanGP is working behind the scenes.
LTX Desktop WanGP: as a sample app (made just for fun) that uses the WanGP API, you may try LTX Desktop. This app offers nice Video / Audio editing capabilities but originally required 32+ GB of VRAM to run. Now that it uses WanGP as its core engine, VRAM requirements are much smaller. It will use LTX 2.3 for Video gen & Z Image Turbo for Image gen. You can (in theory) reuse your current WanGP install with LTX Desktop WanGP. https://github.com/deepbeepmeep/LTX-Desktop-WanGP
New Audio Output formats in mp4: the audio stored in the video file can now be of higher quality (AAC192 - AAC320) or ALAC (lossless). Please note that you won't be able to listen to an ALAC audio track directly in the webapp.
Also note that, as people preferred Matanyone v1 over v2, I have added an option to select the Matanyone version in the Config / Extension tab.
update 10.9871: Improved Qwen3.5 GGUF Prompt Enhancer Output Quality & added Think mode
update 10.9872: Added LTX 2.0/2.3 frames injection
update 10.9873: Fixed low fidelity LTX2 injected frames + added Image Strength slider for end & injected frames
update 10.9874: Replaced LTX-2.3 spatial upsampler by hotfix v1.1
update 10.9875: LTX-2 more VRAM optimisations + NVFP4 checkpoint
Control Video Support (Ic Lora Union Control) will let you transfer Human Motion, Edges, ... into your new video.
For expert users, the Dev finetune offers extra configurable settings (modality guidance, audio guidance, STG perturbation / skip self attention, guidance rescaling). The LTX team suggests: Cfg=3, Audio cfg=7, Modality Cfg=3, Rescale=0.7, STG Perturbation Skip Attention on all steps.
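For reference, the LTX-team-suggested values above could be captured as a settings dict (the key names here are hypothetical, mirroring the UI sliders rather than WanGP's internal keys):

```python
# Hypothetical key names; values are the LTX-team suggestions quoted above.
ltx2_dev_expert = {
    "guidance_scale": 3.0,               # Cfg
    "audio_guidance": 7.0,               # Audio cfg
    "modality_guidance": 3.0,            # Modality Cfg
    "guidance_rescale": 0.7,             # Rescale
    "stg_skip_attention_steps": "all",   # STG perturbation / skip self attention
}
```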
I recommend sticking to the Distilled finetune for higher resolutions (see sample video below), as it seems to have been distilled from a higher quality model (the Pro model?).
Kiwi Edit: a great model that lets you edit a video and / or inject objects into a video. It comes in 3 flavours depending on what you want to do.
SVI PRO2 End Frames: in theory this should allow you to generate very long shots by splitting one shot into sub-shots (sliding windows), inserting key frames (the End Frames) at the boundaries. This is an alternative to the Infinitetalk reference frames method (see my old release notes). I am waiting for your feedback to know which method is the best one.
Upgraded Models Selector with an already-Downloaded indicator: next to each model or finetune, you will find a colored square: Blue = fully downloaded & available, Yellow = partially downloaded, and Black = not downloaded at all. Please note that the square color will depend on your current choice of model quantization.
Upgraded Models Manager: colored squares have also been added so that you can see at a glance what has already been downloaded. There is a new filter for quick model lookup, and a list of missing files per finetune.
Matanyone 2: everyone's favorite Mask extractor has been updated and is now more precise
update 10.981: LTX2.3 Ic Lora Support & expert settings, Matanyone 2, SVI Pro end frames
See full changelog: Changelog
The 1-click automated scripts for both Windows (.bat) and Linux/macOS (.sh) make installation, environment management, and updates as seamless as possible. These scripts will not only install WanGP but also the best acceleration kernels (Triton, Sage, Flash, GGUF, Lightx2v, Nunchaku) available for your config.
👉 Windows Users: Double-click the .bat files. Linux Users: Run the .sh files in your terminal.
Choose Installation Type
Manual Install
If you selected Manual Install, you will be guided through:
Once installed, use this script to launch the application. It runs WAN2GP using your active environment.
If you want to pass extra command-line flags to the WAN2GP launcher (like enabling advanced UI features or automatically opening your browser), create an args.txt file in your scripts folder.
Example args.txt:
--advanced --open-browser
Use this script to get the latest updates for WAN2GP and upgrade dependencies.
It pulls the latest code (git pull) and updates requirements (pip install -r requirements.txt).

Use this script to manage and switch between your sandboxed environments safely.
For instance, suppose you have an environment env_stable that works perfectly, but you want to try the new "Use Latest" combo. Instead of risking your working setup, you can run install.bat, create a new environment called env_testing, and select "Use Latest". If anything goes wrong, run manage.bat, select Set Active Environment, and switch back to env_stable. You are back up and running instantly.

Get started instantly with the Pinokio App
In Pinokio, it is recommended to use the Community Scripts wan2gp or wan2gp-amd by Morpheus rather than the official Pinokio install.
git clone https://github.com/deepbeepmeep/Wan2GP.git
cd Wan2GP
conda create -n wan2gp python=3.10.9
conda activate wan2gp
pip install torch==2.7.1 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu128
pip install -r requirements.txt
git clone https://github.com/deepbeepmeep/Wan2GP.git
cd Wan2GP
conda create -n wan2gp python=3.11.14
conda activate wan2gp
pip install torch==2.10.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu130
pip install -r requirements.txt
python wgp.py
First time using WanGP? Just check the Guides tab, and you will find a selection of recommended models to use.
If you are using Pinokio, use Pinokio to update. Otherwise, go to the directory where WanGP is installed and:
git pull
conda activate wan2gp
pip install -r requirements.txt
I recommend creating a new conda env for Python 3.11 to avoid bad surprises. Let's call the new conda env wangp (instead of wan2gp, the old name of this project). Go to the directory where WanGP is installed and:
git pull
conda create -n wangp python=3.11.9
conda activate wangp
pip install torch==2.10.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu130
pip install -r requirements.txt
Once you are done, you will have to reinstall Sage Attention, Triton and Flash Attention. Check the Installation Guide.
If you get error messages related to git, you may try the following (beware: this will overwrite local changes made to the WanGP source code):
git fetch origin && git reset --hard origin/main
conda activate wangp
pip install -r requirements.txt
When you have confirmed it works well, you can delete the old conda env:
conda uninstall -n wan2gp --all
Process saved queues without launching the web UI:
# Process a saved queue
python wgp.py --process my_queue.zip
Create your queue in the web UI, save it with "Save Queue", then process it headless. See CLI Documentation for details.
For Debian-based systems (Ubuntu, Debian, etc.):
./run-docker-cuda-deb.sh
This automated script will:
Docker environment includes:
Supported GPUs: RTX 40XX, RTX 30XX, RTX 20XX, GTX 16XX, GTX 10XX, Tesla V100, A100, H100, and more.
For detailed installation instructions for different GPU generations:
Made with ❤️ by DeepBeepMeep