Image(s) to Video
Summary
Image-to-Video models generate videos from a single image or multiple input images. The input image(s) guide scene continuity, motion, and transitions, expanding static visuals into a moving narrative while maintaining visual consistency.

Parameters
These parameters apply to all of our base models.
Prompt
Text
The text prompt is processed through the selected model.
Seed
Seed
The seed is a deterministic number that indexes a generation from the model. It's randomized by default, but you can set a specific seed if there's a particular output you're looking for! Keep in mind that all other parameters must stay the same for a given seed to reproduce its output.
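The reproducibility the seed provides can be illustrated with a minimal sketch. The `generate` function below is hypothetical, not this product's API; it just shows how seeding a random source makes output deterministic:

```python
import random

def generate(prompt: str, seed: int) -> list[float]:
    # Hypothetical stand-in for a model call: seeding the RNG
    # makes the "generation" fully reproducible.
    rng = random.Random(seed)
    return [round(rng.random(), 4) for _ in range(3)]

# Same prompt + same seed -> identical output.
a = generate("a cat surfing", seed=42)
b = generate("a cat surfing", seed=42)
assert a == b

# A different seed (or any changed parameter) gives a different output.
c = generate("a cat surfing", seed=7)
assert a != c
```

This is why all other parameters must match: the seed only pins down the random component, not the rest of the configuration.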

Images to Video
The Images to Video node generates a video by connecting multiple frames. You can input up to 9 frames, and the model will fill in the gaps! Under the hood, it primarily uses IP-Adapter-based morphing techniques.
You can interact with it in a number of ways:
Input images by clicking the upload button in the node itself
Connect any image output to the node and it'll populate the images section
Reorder or delete images once populated
Use the prompt to help guide the output!
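To build intuition for how the model "fills in the gaps" between your input frames, here is a toy sketch that linearly cross-fades between consecutive keyframes. This is an illustration only: the actual backend uses IP-Adapter-based morphing in a learned feature space, not pixel-space blending, and the function below is not part of the product's API.

```python
def interpolate_frames(keyframes, steps_between=4):
    """Toy stand-in for frame morphing: linearly blends each pair
    of consecutive keyframes (represented here as lists of pixel
    values). The real backend morphs in a learned feature space."""
    if len(keyframes) > 9:
        raise ValueError("up to 9 keyframes are supported")
    frames = []
    for a, b in zip(keyframes, keyframes[1:]):
        for t in range(steps_between):
            alpha = t / steps_between
            frames.append([(1 - alpha) * x + alpha * y
                           for x, y in zip(a, b)])
    frames.append(keyframes[-1])
    return frames

# Two 1-pixel "frames": 3 in-between steps plus both keyframes.
video = interpolate_frames([[0.0], [1.0]], steps_between=4)
# video -> [[0.0], [0.25], [0.5], [0.75], [1.0]]
```

Note that keyframe order matters in this sketch just as it does in the node, which is why reordering images changes the resulting video.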
How to use
Here are some example workflows using Images to Video in our community page: