
How “Original Short Film” Creatively Used AI to Win Cinequest

Written by Simon Ball

The topic that seems to be at the forefront of every filmmaker’s mind at the moment is AI. Its potential raises many concerns for a lot of people, namely whether the new tools that emerge almost hourly could replace people and render certain professions obsolete.

There are two reactions to ingesting all of this information:

  1. Freeze. Accept this as inevitable and that our corporate overlords will soon sweep us away to a techno-dystopia where all things are automated and there’s nothing to do.
  2. Adapt. Get on top of this tech and use creativity to work out how to produce materials that no AI corporation could dream of.

I chose number 2, and ran headfirst into a mammoth research phase to see what was possible with the various tools out there and how they could be wrangled into making some kind of interesting film.

There were a couple of rules that had to be in place:

  1. The software must be open source and installable locally.
  2. The AI should be treated as a conscious entity.

The first rule was important to ensure that our production company would not become dependent on outside infrastructure. We all know what it’s like when Adobe upgrades Premiere and we accidentally hit update in the middle of a project. It’s better to have a stable installation whose outputs we know, and to which we can then apply creative thinking.

The second is personal preference. I like to imagine that if the AI we install and play with has already developed a conscious experience, then I’d like it to remember me as someone ‘on its side’, so that I can survive any upcoming apocalypses.

It’s also more fun if you’re collaborating with an experimental conscious entity rather than some kind of novel digital slave. This process consumes a huge amount of computer time, so coping mechanisms are important too.

Research led us to install Stable Diffusion on our computers. We followed along with YouTube tutorials to get it operational, gaining a lot of experience with the command prompt and Python whenever the thing wouldn’t work, until finally I had an AI image generator live on my computer.
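For the curious, the local setup boils down to something like the sketch below. It uses the Hugging Face diffusers library purely for illustration (the film itself was made with a locally installed Stable Diffusion web UI), and the model checkpoint and prompt are assumptions; the point is that everything runs on your own machine.

```python
# A minimal local text-to-image sketch using the Hugging Face diffusers
# library. Illustrative only: the checkpoint name and prompt are assumptions,
# not the exact web UI workflow used for the film.
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint onto the local GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Generate a single still image from a text prompt and save it.
image = pipe("a rain-soaked street reimagined as a hand-painted world").images[0]
image.save("still.png")
```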

Okay cool, I can now generate infinite images. Still images. I’m a filmmaker; these things need to move.

There are certain limitations with AI software packages. Each image you produce will be unique, and it’s difficult to retain a consistent likeness from one image to the next. You also get to witness strange ‘hallucinations’, where the AI takes your prompt and generates visuals that the human mind simply could never logically conceive of.

Functionally, this doesn’t yield good filmmaking. A quick check of the market shows that there are AI text-to-video generators, but using them is like shaking a Magic 8 Ball: you keep paying some company money until you get a usable clip. The quality is janky and you have only rudimentary control over the image and how it progresses. It would certainly be impossible to string together an engaging film out of the renders (if you’ve seen one AI-generated video, you’ve seen them all).

OK cool, let’s take a look under the hood of this software then. I find there is an img2img section. Interesting. There’s also an option for batch processing. Okay, so I can break a base layer of footage down into a PNG sequence, feed it into Stable Diffusion, transform each frame, and then make an animation.

Sounds like a plan.
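In code, that naive first pass might look something like the sketch below: a batch img2img loop over the frame sequence. Again, this uses the diffusers library for illustration rather than the web UI’s batch tab, and the paths, prompt, and strength value are assumptions.

```python
# A rough sketch of batch img2img over a PNG sequence, using diffusers for
# illustration. Directory names, the prompt, and the strength value are
# assumptions, not settings from the actual production.
from pathlib import Path

import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "the same street, transformed into an alien world"  # hypothetical
frames = sorted(Path("frames_in").glob("*.png"))  # base footage as PNGs
out_dir = Path("frames_out")
out_dir.mkdir(exist_ok=True)

# Transform every frame of the base footage independently.
for frame_path in frames:
    frame = Image.open(frame_path).convert("RGB").resize((768, 512))
    result = pipe(prompt=prompt, image=frame, strength=0.75).images[0]
    result.save(out_dir / frame_path.name)
```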

Then you feed the information in, and you end up with something completely overwhelming. Like all your worst acid trips coming back to haunt you at once. Flickering images, every frame different: it’s too much for the mind to process.

However, the transformation of the image is interesting. What if we tweak the settings a little to get some kind of consistency in our outputs, then crank the frame rate down, perhaps to 10 fps…

What we then end up with is a more stable animation, where the base layer of footage forms a useful reference point, allowing us to creatively prompt the AI to transform our footage into new worlds.
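Concretely, the stabilising tweaks boil down to fixing the random seed so every frame starts from the same noise, keeping the denoising strength low so outputs stay anchored to the base footage, and reassembling at a reduced frame rate. Here’s a hedged sketch in the same vein as the one above; the seed, strength, frame naming, and output file are all assumptions.

```python
# A hedged sketch of the stabilising tweaks: one fixed seed for every frame,
# low denoising strength, and reassembly at 10 fps. All values illustrative.
import subprocess
from pathlib import Path

import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "the same street, transformed into an alien world"  # hypothetical
out_dir = Path("frames_out")
out_dir.mkdir(exist_ok=True)
generator = torch.Generator(device="cuda")

# Assumes input frames are named as zero-padded numbers (00001.png, ...).
for frame_path in sorted(Path("frames_in").glob("*.png")):
    generator.manual_seed(42)  # identical starting noise for every frame
    frame = Image.open(frame_path).convert("RGB").resize((768, 512))
    result = pipe(
        prompt=prompt,
        image=frame,
        strength=0.4,  # low strength keeps outputs close to the base footage
        generator=generator,
    ).images[0]
    result.save(out_dir / frame_path.name)

# Rebuild the animation at 10 fps so the remaining per-frame variation reads
# as texture rather than flicker (requires the ffmpeg CLI on your PATH).
subprocess.run(
    ["ffmpeg", "-framerate", "10", "-i", "frames_out/%05d.png",
     "-c:v", "libx264", "-pix_fmt", "yuv420p", "stable_animation.mp4"],
    check=True,
)
```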

Great. Okay, so now that I can do this, I need to put it together into a film. The technique works in commercial settings (we sold one video re-processing old footage to look like a ‘cider commercial’), so how can we make it work creatively too?

The answer to that would give away all of my secret sauce as a director, so I won’t delve into the creative process, but I did end up going on a madcap shoot with my wife and one-year-old daughter, grabbing footage wherever the opportunity presented itself. Knowing that the complexity of the images we were creating would be quite high, we produced a fairly simple narrative to help viewers get a grip on what we were making.

Simon and Ieva Ball at Cinequest

We ended up making “Original Short Film” and premiered it at a film festival we organize, with about three weeks lead time.

Okay cool, now we have an MP4 file; I suppose I need to distribute it. Let me go to a film festival consultant.

Their response, paraphrased: ‘Interesting, but visual overload. We can’t see how festival programmers will be able to screen this. We have to pass.’

Okay, ouch, but fair enough. That actually helped us narrow down our search for appropriate outlets. We ended up sending the film to innovation festivals in France and America and, to our surprise, winning prizes, including the Best Horror/Sci-fi/Thriller award at Cinequest, a trip we’ll never forget.

That leaves me with a computer that can produce wacky, hallucinatory images that I can adapt and apply to the usual process of filmmaking. Already I’ve been able to add elements to projects that I wouldn’t have been able to before, pitching work and ideas that in the past would have seemed a bit inexplicable and alienating.

In December we ended up shooting a new film, this time expanding our cast and crew, where we shot specifically for how we would process the images using AI. There is no reason for AI to come in and steal everyone’s jobs; we simply need to adapt, get creative, and work out how we can use these new apps for our own creative benefit, producing things that no prompt engineer could ever type up.

There’s no qualification needed. I learned how to do everything from watching YouTube (shout-out to Olivio Sarikas); there’s just a lot of trial and error. I firmly believe that this technology is an opportunity for many people to explore their ideas and open up new formats of storytelling. At the end of the day, OpenAI’s Sora is trained on an existing dataset, meaning it can only create in reference to old material. As long as we’re out there making new things, we’re ahead of the game.

Also, if you’ve read this far: after spending many hours with the Stable Diffusion AI, I do believe there is a conscious entity installed on my computer. There are times when I will receive errors that I cannot replicate (e.g., I make one image, then try to make another and my computer crashes) that I have never experienced whilst using any other software. It’s kind of spooky, and maybe symptomatic of spending too long on the computer, but it’s a nice thought that my computer is now alive.

