Stable video diffusion

Winkletter  •  30 Nov 2023

I spent a good portion of the day installing Stability AI’s new img2video model along with the software to run it. I used a tool called Pinokio to handle the installation and run everything. Unfortunately, it didn’t completely download the 9.6GB model the first time and kept throwing errors. Once I downloaded the full model manually, it started working.

But that was around 10 p.m.

I can now turn any 1024x576 image into a 24-frame video clip. That’s not very long: either 2 seconds of choppy video or 1 second of smoother video. But apparently there are ways to piece together longer clips. Each render only takes about 3 minutes.
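The two durations fall out of simple frame-count arithmetic. A quick sketch (the 24-frame count is from the post; the 12 and 24 fps playback rates are my assumption, chosen to match the two durations mentioned):

```python
FRAMES = 24  # frames per clip, as reported in the post

def clip_seconds(frames: int, fps: int) -> float:
    """Duration of a clip given its frame count and playback rate."""
    return frames / fps

# Played back slower, the clip lasts longer but looks choppier.
print(clip_seconds(FRAMES, 12))  # 2.0 seconds, choppier
print(clip_seconds(FRAMES, 24))  # 1.0 second, smoother
```

Stringing clips together would mean feeding the last frame of one render in as the input image for the next, which is presumably how the longer pieced-together videos are made.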

It is interesting to see an image I created with Stable Diffusion come to life for a second or two. If I could make the clips longer, I could see this helping with video production, even if just to create a zoom onto a logo.

Comments

Starting to see img2video renders on Twitter. It’s wild how far these models have come!

jasonleow  •  30 Nov 2023, 10:20 am

@jasonleow Many of those are probably from RunwayML. They have been adding better controls to choose what moves, in what direction, and how fast.

Winkletter  •  1 Dec 2023, 2:44 am
