I tested this GitHub project from the THUDM user on my Google Colab account, and it works well with the CogVideoX-5B model.
You can find the default implementation in my Colab GitHub repo.
The default example comes with this prompt:
prompt = (
    "A panda, dressed in a small, red jacket and a tiny hat, sits on a wooden stool in a serene bamboo forest. "
    "The panda's fluffy paws strum a miniature acoustic guitar, producing soft, melodic tunes. Nearby, a few other "
    "pandas gather, watching curiously and some clapping in rhythm. Sunlight filters through the tall bamboo, "
    "casting a gentle glow on the scene. The panda's face is expressive, showing concentration and joy as it plays. "
    "The background includes a small, flowing stream and vibrant green foliage, enhancing the peaceful and magical "
    "atmosphere of this unique musical performance."
)
The rendering speed starts out at:
38% ... 19/50 [30:54<50:57, 98.61s/it] using T4 GPU
... and then finishes at:
100% ... 50/50 [1:26:09<00:00, 109.87s/it]
When the video render reached 100%, Google somehow gave this error, even though the source code itself ran fine:
OutOfMemoryError: CUDA out of memory. Tried to allocate 1.32 GiB. GPU 0 has a total capacity of 14.74 GiB of which 654.12 MiB is free. Process 26970 has 14.10 GiB memory in use. Of the allocated memory 12.97 GiB is allocated by PyTorch, and 1.00 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
I think it needs some extra CUDA memory settings, but that would require going through the documentation, and it is not a task for me right now.
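For anyone who wants to try, here is a rough, untested sketch of the knobs that the traceback and the diffusers memory docs point at. These are standard PyTorch/diffusers options, not something taken from the notebook:

import os
# What the traceback itself suggests: let the allocator grow segments
# instead of fragmenting; must be set before CUDA is initialized
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch
from diffusers import CogVideoXPipeline

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16
)
pipe.enable_sequential_cpu_offload()  # stream weights layer by layer: slower, but much leaner on VRAM
pipe.vae.enable_slicing()             # decode the latent video in slices
pipe.vae.enable_tiling()              # ...and in spatial tiles, so the final decode fits

Since the error appears right after the 50 denoising steps finish, it is most likely the VAE decode that runs out of memory, which is exactly what the slicing/tiling calls are meant to address.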