Creating fire with RunwayML and Nuke's CopyCat

This project was done for the very talented Mark Lorincz's fashion documentary 'Not Allowed'. The brief was simple: light the old Puskin cinema on fire while a contemporary artist dances his way through the seats. The only difficulty? It needed to be done in a day, on a five-minute video that could be used as a teaser for online PR. The solution: a mix of traditional VFX compositing and generative AI magic.

First, Runway's Gen-3 Alpha was put to the test by prompting it to generate fire elements based on a noise render from Nuke:

a static camera looking at dense orange flames of fire in front of a constant black background

This already gave some very good, usable results. However, the noise layers had to be rendered out before generation, leaving little control in Nuke afterwards. Around this time Josh Parks published his trained CopyCat checkpoint for fire creation. Using his model, inference could be run on the same noise directly inside Nuke, which allowed real-time flexibility in shapes and sizes.
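The starting point in both approaches is the same: a soft, animatable noise element that the generator or the CopyCat model turns into fire. As a rough illustration of that kind of input, here is a minimal, hedged sketch of a fractal value-noise frame in plain Python. This is not the production Nuke setup (which used Nuke's own Noise node); the function name and parameters are illustrative only.

```python
import random

def value_noise_frame(width, height, octaves=4, seed=0):
    """Generate one grayscale fractal value-noise frame as floats in 0..1.
    A stand-in for the kind of noise render fed to Gen-3 / CopyCat;
    names and parameters here are assumptions, not the real pipeline."""
    rng = random.Random(seed)

    # One random lattice of values per octave; resolution doubles per octave.
    lattices = []
    for o in range(octaves):
        size = 2 ** (o + 2)
        lattices.append([[rng.random() for _ in range(size + 1)]
                         for _ in range(size + 1)])

    def sample(lattice, u, v):
        # Bilinear interpolation of the lattice at normalised coords (u, v).
        size = len(lattice) - 1
        x, y = u * size, v * size
        x0 = min(int(x), size - 1)
        y0 = min(int(y), size - 1)
        fx, fy = x - x0, y - y0
        top = lattice[y0][x0] * (1 - fx) + lattice[y0][x0 + 1] * fx
        bot = lattice[y0 + 1][x0] * (1 - fx) + lattice[y0 + 1][x0 + 1] * fx
        return top * (1 - fy) + bot * fy

    frame = []
    for j in range(height):
        row = []
        for i in range(width):
            value, amp, total = 0.0, 1.0, 0.0
            for o in range(octaves):
                value += amp * sample(lattices[o],
                                      i / (width - 1), j / (height - 1))
                total += amp
                amp *= 0.5  # halve the contribution of each finer octave
            row.append(value / total)  # normalise back to 0..1
        frame.append(row)
    return frame
```

Animating the seed (or offsetting the sample coordinates) per frame gives the evolving, flame-like movement that the generative step then interprets as fire.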

These already looked pretty convincing, but given the one-day turnaround there was little room to experiment with getting similar generations on the trickier angles, like corners or columns. For those, more traditional CG elements from ActionVFX were used. Both Marks were very happy!
