Generating Videos
Mavic can generate short-form videos for Reels, Shorts, TikTok, ads, and product demos — from a text prompt, from a single image, or from two images that act as the first and last frame.
This article covers how each method works, which models support which mode, and how to keep videos on-brand using @-mentions.
Three ways to generate a video
| Method | What it does | When to use |
|---|---|---|
| Text-to-video | You describe the video in words and Mavic generates it | You're starting from scratch — no source image |
| Image-to-video (animate) | You give Mavic one image and it brings it to life | You already have a strong still and want gentle motion |
| Image-to-video (first & last frame) | You give two images — Mavic morphs between them | You want a directed transformation (open → close, day → night, before → after) |
Heads up: the first-and-last-frame mode is only supported by select models. See Which model does what below.
How to start
You have two entry points.
1. From a chat prompt
Open a chat and type what you want.
Text-to-video:

Generate a 6-second vertical video of a coffee cup being poured against a sunlit kitchen window, soft natural light, in @Brand Colors.

Image-to-video (animate):

Animate @Image: spring_collection_hero — gentle camera push-in, soft fabric movement, 5 seconds, vertical 9:16.

Image-to-video (first & last frame):

Generate a video that morphs from @Image: closed_box to @Image: open_box_with_product. Smooth reveal motion, 4 seconds, square.
2. From the Video Generation workflow
Click the workflow icon in the chat composer and pick a video workflow. The form asks for:
- Mode — text-to-video / animate image / first-and-last frame
- Subject and action — what's in the video, what's happening
- Style — photorealistic, cinematic, anime, illustrated, product-shot
- Camera motion — static, slow zoom-in, push-in, pan, tracking
- Mood / lighting — golden hour, studio, moody, bright
- Duration — typically 4–8 seconds for short-form
- Aspect ratio — 9:16 (Reels/Shorts/TikTok), 1:1, 16:9
- Source image(s) — required for image-to-video modes
- Model — leave on Auto, or pick a specific one
Mavic builds a polished prompt from your inputs and generates the video.
Using @ to keep videos on-brand
The @ system is what makes Mavic videos feel yours and not stock-AI. Tag in any of these:
- @Image — the still you want animated, or your first/last-frame inputs
- @Logo — to overlay your brand mark
- @Product — so Mavic grounds the video in real product specs (color, material, packaging)
- @Customer profile — Mavic adapts setting and casting to match your audience
- @Writing style — applied to any on-screen text, captions, or voice-over scripts
- @Template — for branded intros / outros if you've saved one
- @Social post — to extend an existing post into video form
Strong prompt example:
Animate @Image: founder_portrait with a slow push-in camera. Add the @Logo bottom-right at 60% opacity. Voice-over script in @Writing Style: Founder Voice. 9:16, 8 seconds.
Which model does what
Mavic includes several video models. Auto picks the best one for your prompt — leave it on Auto unless you have a reason to override.
| Model | Strengths | Modes supported |
|---|---|---|
| Google Veo 3.1-Fast | Fast generation, great for short-form social | Text-to-video |
| Seedance 1.0 Lite / 1.5 Pro (ByteDance) | High visual quality, strong realism | Text-to-video, image-to-video (animate) |
| Kling v2.6 / v3 Omni | Best for directed motion and complex transformations | Text-to-video, image-to-video (animate), first-and-last frame |
If you specifically need first-and-last frame morphing, pick Kling v3 Omni in the model dropdown.
Tips for stronger video output
- Describe motion explicitly. "Slow push-in" beats "dynamic camera". Generic terms produce generic motion.
- Keep one clear action per video. Trying to fit three things into 6 seconds reads as chaos.
- Pair with a script. Use a chat prompt to generate the voice-over or on-screen text, then @-tag your Writing style.
- Plan for sound-off. 85% of social video is watched muted — add captions or on-screen text in the post editor.
- Generate multiple takes. Ask for 3 variations and pick the best — generation is fast.
- Use square or vertical for social. 9:16 for Reels/Shorts/TikTok, 1:1 for feed.
Aspect ratios
| Aspect | Best for |
|---|---|
| 9:16 (vertical) | Instagram Reels, TikTok, YouTube Shorts, Stories |
| 1:1 (square) | Instagram feed, LinkedIn |
| 16:9 (landscape) | YouTube, in-stream ads, web embeds |
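If you're preparing thumbnails or companion assets outside Mavic, the ratios above map to common export dimensions. A minimal standalone sketch (`export_size` is a hypothetical helper, not a Mavic feature; 1080 px on the short side is a typical social-video target):

```python
def export_size(ratio: str, short_side: int = 1080) -> tuple[int, int]:
    """Map an aspect ratio like '9:16' to pixel dimensions,
    keeping the shorter side at `short_side`."""
    w, h = (int(x) for x in ratio.split(":"))
    scale = short_side / min(w, h)
    return round(w * scale), round(h * scale)

print(export_size("9:16"))   # vertical: (1080, 1920)
print(export_size("1:1"))    # square:   (1080, 1080)
print(export_size("16:9"))   # landscape: (1920, 1080)
```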
Where generated videos go
Every video lands in Library → All Assets with version history. From there you can:
- Download — MP4
- Use in a post — attach to a Reel, Story, or Short draft
- @-tag in a future prompt — to extend, restyle, or repurpose
- Restore previous versions — Mavic keeps a history of every refinement
Refining a generated video
After Mavic produces a video, you can:
- Type follow-up changes in the Add feedback box: "Make the camera slower and brighten the highlights."
- Re-roll the same prompt for new takes
- Compare versions side-by-side
- Restore an earlier version
See Editing & versioning generated assets.
Common questions
My first-and-last-frame video looks distorted.
The two source images need to share roughly the same composition. If frame 1 is a close-up and frame 2 is a wide shot, the morph will warp. Either re-shoot with similar framing or describe the transition explicitly: "Slow zoom-out from close-up to wide."
Can I generate videos longer than 8 seconds?
Most models are tuned for 4–8 second clips. For longer videos, generate multiple clips and stitch them together in your editor. Mavic can also generate a continuation that starts from the last frame of a previous clip.
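If you'd rather stitch clips on the command line than in an editor, ffmpeg's concat demuxer joins same-format clips without re-encoding. A sketch with hypothetical file names — substitute your own downloaded MP4s:

```shell
# Manifest listing clips in playback order (file names are placeholders).
cat > clips.txt <<'EOF'
file 'clip_01.mp4'
file 'clip_02.mp4'
EOF

# Lossless join (no re-encode). All clips must share codec, resolution,
# and frame rate — which they will if they came from the same model run.
if command -v ffmpeg >/dev/null && [ -f clip_01.mp4 ]; then
  ffmpeg -y -f concat -safe 0 -i clips.txt -c copy stitched.mp4
fi
```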
Why is my video taking so long?
Video generation is more credit- and compute-heavy than images. Expect 30 seconds to a few minutes per clip depending on model and length.
Related articles
- Creating content with Mavic
- Generating images
- Using @ to reference brand data
- Editing & versioning generated assets
- Instagram post formats
Updated: May 2026
