The more we cover AI, the more apparent it becomes that AI is very much here to stay. The only debate left is which filmmaking skills and jobs will be most affected by AI, and which might remain more or less the same.
For a fascinating look into the modern VFX process at the height of the AI movement, we have a very cool behind-the-scenes VFX breakdown of Paul Trillo’s latest project.
Let’s take a look at ‘Notes to My Future Self’, a liminal-space project featuring portraits of 25 people proclaiming their resolutions for tomorrow, and explore how it was created with a mix of live-action production, VFX, and AI-assisted techniques.
‘Notes to My Future Self’
Designed as part of a two-screen projection for the “Window To The Future” exhibition at the Telefonica Foundation in Madrid, Trillo’s project is an interesting look not only at how AI can be part of a VFX process, but also, more generally, at how artists are choosing to create pieces that better connect with their creativity and inspiration, both for themselves and for others.
‘Notes to My Future Self’ explores the threshold between today and tomorrow and the projections of the collective subconscious onto a yet-to-be-defined future. Against the backdrop of a boundless, AI-generated hyperreal dreamscape, we see everyday people express the thoughts that rattle in their minds before sleep.
You can watch the full five-minute contemplative video piece below.
‘Notes to My Future Self’ VFX Breakdown
While ‘Notes To My Future Self’ explores our persistent optimism in the face of the unknown, it also juxtaposes the raw, authentic moments of human vulnerability against the limitless potential of the future as conjured by AI.
The VFX breakdown shared by Trillo showcases how he combined a variety of AI tools into his process, demonstrating how fluidly we can now move between pre-production and post-production with the help of AI.
Trillo had only 10 days to shoot, edit, and composite, so this fluidity was key. So was his use of AI-powered tools: backgrounds generated with Stable Diffusion, extended with Photoshop’s Generative Fill and Generative Expand, and then upscaled with Magnific, Krea, and Topaz to reach 8K resolution.
We also get a lot of great insight into how Trillo worked with the lighting on the project, with a particular emphasis on matching the lighting of the plates he shot with his actors to the AI-enhanced backgrounds he would eventually use.
Watch the full VFX breakdown below:
Paul Trillo Talks About ‘Notes to My Future Self’
We sat down with Trillo to chat about the project, where he shared more insight into his process and his thoughts on our collective future self in this new AI-defined world.
Editor’s note: The following interview has been edited for length and clarity.
NFS: Tell us a bit about your VFX work for ‘Notes to My Future Self’ and, if you had to, what notes would you give to your future self about working with AI on projects like these?
Paul Trillo: When I found out that the quality of AI images was going up with all these upscalers, I was like, okay, can I make something that’s of cinematic quality?
And so I kind of set out to do something that was as detailed as if I had shot it myself. So the backgrounds are 8K in resolution. It was designed as a projection piece, so it’s a video art installation piece. It’s about a nine-foot-by-30-foot projection that’s in this museum.
When you see it [I want spectators to] not recognize that it’s not AI, that it’s just this kind of dreamscape.
That was the goal I set out for myself: can I create something that feels like it’s at the same production-value level we can get with other tools?
But I had a very short timeline. I came up with the idea at the beginning of January, wrote it, shot it 10 days later, and delivered it 10 days after that: over 25 VFX shots with 25 actors. We shot on set, and editing that and compositing it with fully synthetic backgrounds in that timeline would’ve been impossible before.
What was interesting was the process of knowing what you’re lighting.
So in a lot of big VFX films, they have a rough idea of what their background is going to be, but they don’t necessarily know all the intricacies of where the light might end up. And so you can have a better idea of, like, oh, we have a light over here, because you have this lighting reference image that was created in Stable Diffusion.
I paired a background image with each actor that showed up to set. We had about 15 minutes with each actor, so we had 25 to do in six hours, and we’re like, okay, Maria is here, this is the background for Maria. And then we’re like, okay, we’ve got to move this light there and this light here until it basically matches our reference. And once you have light that matches the background, the compositing becomes so much easier; you don’t really have to do all this really detailed compositing or relighting or anything like that.
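Editor’s note: Trillo’s team matched their lights by eye against the Stable Diffusion reference, but the underlying idea of checking an on-set frame against a reference image can be quantified. Here is a minimal, hypothetical Python sketch (using Pillow; the file names are placeholders, not from Trillo’s pipeline) comparing overall brightness and color balance between the two:

```python
# Hypothetical sketch: compare an on-set test frame to an AI-generated
# lighting reference by average luminance and per-channel color balance.
from PIL import Image, ImageStat

def light_stats(path: str) -> tuple[float, tuple[float, ...]]:
    """Return overall brightness and per-channel (R, G, B) means for a frame."""
    img = Image.open(path).convert("RGB")
    rgb_means = tuple(ImageStat.Stat(img).mean)             # average R, G, B
    brightness = ImageStat.Stat(img.convert("L")).mean[0]   # average luminance
    return brightness, rgb_means

# Placeholder file names: the generated reference vs. a frame from the set camera.
ref_lum, ref_rgb = light_stats("reference_background.png")
set_lum, set_rgb = light_stats("onset_test_frame.png")
print(f"luminance delta: {set_lum - ref_lum:+.1f}")
print("channel deltas:", [f"{s - r:+.1f}" for s, r in zip(set_rgb, ref_rgb)])
```

Large deltas suggest the set is brighter, darker, or color-shifted relative to the reference, a quick sanity check alongside the eyeball match.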
The shot just kind of marries together with the background a lot quicker. It was cool to be able to know what the final thing is going to look like before you even shoot it, which is the advantage. And then from there, the backgrounds went through different tools like Magnific and Krea, as well as Topaz, for upscaling up to 8K resolution.
From there I was able to crop in and isolate just a single bush, or an area where there’s fog or light rays or whatever, and do Runway Gen-2 generations of just those isolated elements of the 8K background plate. If I tried to throw a whole 8K background plate into Runway, it would just look totally smudged and weird, but if you zoom in and add motion just in those pockets, you can still retain the quality. Those elements were then upscaled again in Topaz, and I applied the same kind of color grading, compositing, and film grain that I would on anything.
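Editor’s note: Trillo did this zoom-and-animate pass with Runway Gen-2 and standard compositing tools, but the crop-and-recomposite logic itself is easy to picture in code. Below is a minimal, hypothetical Python sketch (using Pillow; the file names and crop box are placeholders, and the actual animation step happens in whatever image-to-video tool you use):

```python
# Hypothetical sketch: isolate a region of a large background plate, animate it
# externally in an image-to-video tool, then composite the animated crop back
# over the still plate with a feathered mask so the seam disappears.
from PIL import Image, ImageFilter

PLATE = "background_8k.png"          # the upscaled 8K still (placeholder name)
CROP_BOX = (3200, 1800, 4224, 2824)  # (left, upper, right, lower) region to animate
FEATHER_PX = 48                      # soft edge so the insert blends with the still

def extract_crop(plate_path: str, box: tuple[int, int, int, int]) -> Image.Image:
    """Cut out the region that will be sent to the image-to-video tool."""
    return Image.open(plate_path).crop(box)

def feathered_mask(size: tuple[int, int], feather: int) -> Image.Image:
    """Build a white mask with blurred edges for a seamless paste."""
    mask = Image.new("L", size, 0)
    inner = Image.new("L", (size[0] - 2 * feather, size[1] - 2 * feather), 255)
    mask.paste(inner, (feather, feather))
    return mask.filter(ImageFilter.GaussianBlur(feather / 2))

def composite_frame(plate_path: str, animated_crop: Image.Image,
                    box: tuple[int, int, int, int], feather: int) -> Image.Image:
    """Paste one animated frame back into the full-resolution still."""
    plate = Image.open(plate_path).convert("RGB")
    frame = animated_crop.resize((box[2] - box[0], box[3] - box[1]))
    plate.paste(frame, (box[0], box[1]), feathered_mask(frame.size, feather))
    return plate

# Usage: export the crop, animate it offline, then rebuild each output frame.
extract_crop(PLATE, CROP_BOX).save("crop_to_animate.png")
# ...run the crop through the video-generation tool of your choice...
animated = Image.open("animated_frame_0001.png")  # one returned frame (placeholder)
composite_frame(PLATE, animated, CROP_BOX, FEATHER_PX).save("final_frame_0001.png")
```

The feathered mask is the detail that matters here: a soft edge hides the seam where the animated pocket meets the untouched 8K still.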
It was just a very fluid, seamless process, and I was still able to manipulate things in Photoshop with Generative Fill and Generative Expand to create these ultra-widescreen landscapes after the fact. If there was an element or a tree I didn’t like, I could easily swap any of the individual elements within the shot. It was a very fluid, organic process where it didn’t feel like I was being held back in any way in trying to create these landscapes, which would otherwise be very ambitious to pull off.
I didn’t want it to be fully generative. I didn’t want it to be obviously AI, and I really wanted the focus to be on the people that are at the center of the shots, the sort of portraits.
For more info about this project, and to see more of Paul Trillo’s work, you can check out Trillo’s website here.
Author: Jourdan Aldredge
This article comes from No Film School and can be read on the original site.