
AI workflow tips from pro motion designers

How motion designers are using AI: Workflow tips, tools & real examples

How can you use AI for motion design? Jess Riley asked a bunch of her fellow motion designers exactly how they’re using AI in their work nowadays. Here are their tips.

By Jess Riley  |  Updated July 28, 2025

Here’s an understatement: AI in motion design is evolving at lightning speed. As a motion graphics designer, I’ve been intrigued and a little overwhelmed by the explosion of creative AI tools in everyday workflows.

After Envato launched its own AI video generation tool, VideoGen, ignoring this shift wasn’t an option. So I dove in — asking around the community, testing new tools, and learning how AI tools for animators and motion designers are changing the game.

From prompt engineering for visuals to style transfer, here’s a practical look at how generative AI for creatives is being used by pros and the AI workflows for motion designers shaping the future of video content.

AI workflows for motion designers: How pros use it in real projects

Image and video generation tools are probably the first thing you think of when you think of AI. Platforms like ImageGen can produce everything from photorealistic fashion shoots to the trippy images we see all over the internet today. These platforms use algorithms trained on large data sets (models) to produce images based on written or visual prompt inputs — it’s what gives you the ability to turn your profile picture into a Studio Ghibli-esque illustration or a dragon, as you prefer.

I have to be honest, I couldn’t initially see the fun in prompting as a form of creative expression; wrestling with AI to get the result I wanted felt tiresome. But I’ve encountered a few main workflows that alleviate many of my former concerns. The first is training your own models.

Training custom AI models for motion graphics projects

Vitor Texeria, from the 3D-focused, Madrid-based studio NotReal, has been experimenting with creating his own data sets trained on the distinct style of NotReal.

He impressed on me the importance of owning the data that feeds the AI:

“If you want to work commercially, you have to protect yourself… So, the database that you create, it’s very important that it’s based on your work. Not just because of the law and copyright, but also because of the results that you might get.”

Vitor’s process includes using NotReal’s portfolio of past work to train specific models in an AI tool. He then uses these data sets to generate imagery that can become moodboards for art directors and 3D designers in the initial conceptual stage of a project.

This kind of workflow reflects the growing role of generative AI in 3D, where image creation and scene ideation are becoming more efficient and scalable with the help of AI tools.

“Prior to storyboarding… you’re gathering images [in AI], for example, instead of relying on Google images. It can be interesting to have the 3D designers starting to do some images, grabbing the images that the 3D designers do, grab some references that you have of your previous work, mix them all and create a database.”

“If you’re doing a specific frame, it can be interesting to have a lot of variation… You have like 300 images and you start looking at them. Okay, I like this, this detail. That one. That chair or that lamp or that color mix. It’s interesting to have options to present to the creative director.”

Most motion designers I spoke to were using generative AI in this way—not as an end in itself, but as a tool for quicker iteration in the early stages of a project: trialing a wider and more specific range of ideas, then taking them into their traditional pipelines.

How AI helps motion designers scale projects and save time

I’ve also encountered some interesting workflows that use generative AI tools past the point of ideation. The music video clip for Cuco’s A Love Letter to LA used some pretty groundbreaking generative techniques to scale and speed up a traditional animation workflow.

ComfyUI is an open-source, node-based program that grants you a lot of control over parameters and feels similar to working in Blender or Houdini. Director Paul Trillo and his team of animators created complex node systems in ComfyUI that allowed them to input a cel-animated sketch and a filled style frame and output a fully painted cel animation.
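Under the hood, a ComfyUI graph can be expressed as JSON: node ids mapping to a node type and its inputs, where an input like `["1", 0]` references output 0 of node "1". Here is a minimal sketch of that structure for the sketch-plus-style-frame idea, in JavaScript for readability. The `StyleTransferApply` node name is a placeholder of my own, not a reproduction of Trillo’s actual graph.

```javascript
// A minimal sketch of a ComfyUI-style node graph. Node ids map to a
// class_type and its inputs; an array input like ["1", 0] means
// "output 0 of node 1". Node choices here are illustrative only.
const workflow = {
  // the cel-animated sketch drives the structure of the output
  "1": { class_type: "LoadImage", inputs: { image: "sketch_frame_012.png" } },
  // the filled style frame supplies the painted look
  "2": { class_type: "LoadImage", inputs: { image: "style_frame.png" } },
  "3": { class_type: "CheckpointLoaderSimple", inputs: { ckpt_name: "model.safetensors" } },
  // hypothetical node combining both inputs (placeholder name)
  "4": {
    class_type: "StyleTransferApply",
    inputs: { structure: ["1", 0], style: ["2", 0], model: ["3", 0] },
  },
  "5": { class_type: "SaveImage", inputs: { images: ["4", 0] } },
};

// Walk the graph to list which node types feed a given node
const upstreamOf = (graph, id) =>
  Object.values(graph[id].inputs)
    .filter((v) => Array.isArray(v))
    .map(([srcId]) => graph[srcId].class_type);

console.log(upstreamOf(workflow, "4"));
```

The appeal of this representation is that the whole pipeline is data: a studio can version it, tweak one node, and rerun hundreds of frames through the same graph.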

Other motion designers and animators have used a similar workflow, where the AI makes itself useful in the tweening stages while human animators and illustrators handle the keyframes. 

Using AI for visual style transfer in animation

Style transfer is another good use for AI. For A Love Letter to LA, illustrations by local artist Paul Flores were used to train a model that could then be applied at the end, like a filter. This approach can combine the varied styles within a team of animators into one unified style at the click of a button.

Creative use cases: Extending backgrounds and generating assets with AI

In terms of my own process, I’ve found generative AI useful in extending photo backgrounds and creating still assets for use in broader, collage-like animations (a method that’s increasingly common in emerging 3D design trends).

For instance, check out this statue. I couldn’t find a good image on a stock site, so I made it using AI. I was able to prompt details like the phone and the bird to better suit the project’s messaging. This approach has saved me lots of time searching for the perfect stock photo.

[Images: the AI-generated statue and the resulting animation]

I have found that how you construct your prompt matters a lot; different phrasings can change the output as much as switching render engines can. For example, I tested the above animation using AI to see what results I would get, and they were interesting.

Prompt:

“Roman marble sculpture of a woman, with a small blackbird perched on her outstretched hand. Green screen background. The statue at first has her arm by her side but then she reaches up as the bird flies in from the top left of frame to land on her hand.”

Start frame:

[Image: start frame for my animation]

VideoGen with prompt enhance:

VideoGen without prompt enhance:

For short, specific moments like this scene, where, for instance, animating the bird realistically would take a long time, I could see certain models being super helpful. But for the day-to-day precise keyframing work, I tend to agree with Vitor:

“I’m not going to talk about animation because it’s not even close. The animation team that works at NotReal is really, really good and they have a very specific way of animating, so that is something that cannot be done in AI.”

Quick tips for using AI in motion design workflows

  • When training models, the more specific the data set, the better. For instance, divide it by style or subject matter, rather than feeding in your entire back catalog, for more control over your results.
  • Test different approaches to learn which platforms and models to use when.
  • Use AI to prompt itself. Upload your visual reference into a tool like ChatGPT to generate a better written prompt, or lean on the prompt enhancer built into VideoGen, then feed the result back in to create your video.
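To make the last tip concrete, here is a toy illustration of what a prompt enhancer adds: it expands a terse idea into the structured detail (style, lighting, camera, motion) that video models respond well to. Real enhancers like ChatGPT or VideoGen’s built-in one use an LLM; this fixed template, with default values I made up, just makes the structure visible.

```javascript
// Toy "prompt enhancer": expands a terse idea into a structured prompt.
// The default style/lighting/camera/motion values are illustrative only.
function enhancePrompt(idea, opts = {}) {
  const {
    style = "photorealistic",
    lighting = "soft natural light",
    camera = "static camera",
    motion = "subtle, realistic motion",
  } = opts;
  return (
    [
      idea.trim().replace(/\.+$/, ""), // the core idea, trailing dots stripped
      `${style}, ${lighting}`,
      `${camera}, ${motion}`,
    ].join(". ") + "."
  );
}

console.log(
  enhancePrompt("Roman marble statue, a blackbird lands on her hand", {
    style: "classical sculpture, fine marble detail",
    motion: "the bird glides in from the top left and settles gently",
  })
);
```

Comparing the enhanced and raw versions side by side, as I did with the statue clip above, is a quick way to learn what a given model actually pays attention to.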

Pro tips: How motion designers are actually using AI tools

Aside from the image and video generation tools, there are, of course, many more use cases for other kinds of AI that are more closely related to a supercharged plugin.

For example, Cavalry has emerged as a flexible platform for procedural motion graphics, especially when combined with AI-generated elements in data-driven workflows.

A quick survey of industry professionals reveals some common threads.

Upscaling low-res imagery using AI tools

“When pitching, I render in low resolution, then use Topaz Labs to upscale for clean, fast 3D output. I also use it to slow down shots for quick and easy retiming. It speeds up my workflow and lets me jump right back into the sandbox to keep pumping out new animated ideas.”
Jack Prenc, Kojo

Generating hard-to-find elements

“Rather than generating the whole image, I generate elements which I can’t find in stock imagery like ‘open book with empty pages’ and then collage them the way I need them.”
Giedre Elliott, Ginger Fox Studios

Coding in After Effects

“ChatGPT is a really good tool to help build complex animation presets in After Effects. At Datamate, we’ve actually built our own version of templater-style rig and layout tools with ChatGPT. So we now have a really easy tool to auto size and position layers with constraints against other elements in the composition.”
Jack Prenc, Kojo
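The constraint logic behind an auto-layout rig like the one Jack describes is simple enough to sketch. In After Effects it would live in an expression or ExtendScript; here it is in plain JavaScript so the math is easy to follow. The anchor names and padding behavior are my own assumptions, not Datamate’s actual tool.

```javascript
// Sketch of constraint-based layer placement: pin a layer to a named
// anchor of its container, with optional padding. Anchor strings such
// as "bottom-right" combine a vertical and a horizontal constraint.
function positionLayer(layer, container, anchor, padding = 0) {
  const x = {
    left: padding,
    center: (container.width - layer.width) / 2,
    right: container.width - layer.width - padding,
  };
  const y = {
    top: padding,
    middle: (container.height - layer.height) / 2,
    bottom: container.height - layer.height - padding,
  };
  const [v, h] = anchor.split("-"); // e.g. "bottom" and "right"
  return { x: x[h], y: y[v] };
}

// Pin a 400x100 lower-third to the bottom-left of a 1920x1080 comp:
const pos = positionLayer(
  { width: 400, height: 100 },
  { width: 1920, height: 1080 },
  "bottom-left",
  40
);
// pos → { x: 40, y: 940 }
```

The win of wrapping this in a preset is that when the comp size or the layer’s text changes, every constrained element snaps back into place instead of being nudged by hand.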

Whether it’s building custom animation presets or developing unique workflows, motion designers continue to experiment across tools—many drawing inspiration from artists like Ben Marriott, who regularly explore both technical and stylistic techniques in motion design.

For a deeper dive on AI coding, you can read our recent article Vibe Coding in After Effects.

Prompt rotoscoping

Mask Prompter for After Effects uses AI technology within an After Effects plugin to quickly and accurately rotoscope your video.

AI-based frame interpolation

Bullet Time allows you to up the frame rate of your footage using AI for super slo-mo effects.
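For contrast, here is the pre-AI way to fake extra frames: blending adjacent frames together. Blending ghosts anything that moves, which is exactly why AI interpolation, which predicts where pixels travel and synthesizes genuinely new in-between frames, looks so much better. This minimal sketch treats frames as flat arrays of pixel values.

```javascript
// Naive frame blending: crossfade between two frames. t=0 returns
// frameA, t=1 returns frameB. Moving objects "ghost" with this method,
// which is the problem AI-based interpolation solves.
function blendFrames(frameA, frameB, t) {
  return frameA.map((a, i) => a * (1 - t) + frameB[i] * t);
}

// Double the frame rate by inserting one blended frame between each pair.
function doubleFrameRate(frames) {
  const out = [];
  for (let i = 0; i < frames.length - 1; i++) {
    out.push(frames[i], blendFrames(frames[i], frames[i + 1], 0.5));
  }
  out.push(frames[frames.length - 1]);
  return out;
}
```

A clip of n frames comes out as 2n − 1 frames, so retiming it to the original duration plays at half speed.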

You only get out what you put in

If there’s one key takeaway from my conversations and experiments, it’s this: to create anything truly worthwhile with AI, you still need vision. The output quality depends entirely on the effort, creativity, and ideas you bring to it.

AI is a powerful tool for iterating on ideas. Motion graphics is a precise and time-consuming art, and AI can help speed up some of the more tedious tasks, like rotoscoping or sourcing stock footage. It can also open up new possibilities and scale projects to levels that were previously out of reach, as in the case of A Love Letter to LA.
