In the constant landscape of AI’s infiltration into filmmaking, the most productive conversation, outside of the fear of its corporate potential to overtake our creative careers, is in the space of “how can we use AI as a tool to make our jobs easier?”
Trust me, I’m as fearful and hesitant to trust AI as the next filmmaker, but I do think there are some tools out there that are pretty neat and less intrusively intimidating than the rest. One of them is the newly released, web-based AI tool called Playbook.
Playbook was developed by Jean-Daniel LeRoy (CEO) and Skylar Thomas (CTO) as part of their capstone at USC, and the two have since developed it into marketable web software. Playbook uses 3D rendering software with tools and keyframes similar to those of Blender and After Effects, adding the additional element of generative AI prompts to create digital sets and elements that animators and filmmakers can quickly design, then play around with using the built-in camera tools for film projects.
As not the tech-iest movie editor, I was impressed with the intuitive user interface and saw its usefulness as a method of quickly rendering screen tests that filmmakers can then apply IRL on set, almost like an advanced storyboard for shots.
Below, we chat with LeRoy and Thomas about Playbook: its origins, its possible practical applications for filmmakers at large, as well as the recent Culver Cup short film contest, where filmmakers were given the opportunity to make their own short films in the Playbook sandbox.
Check it out, and check out Playbook for yourself here.
Editor’s note: the following quotes are edited for length and clarity.
What Exactly is Playbook?
When asked about the process of developing Playbook, LeRoy had this to say:
“Skylar and I shared this interest in AR, VR, and the future of entertainment.
The original vision of Playbook was very much to allow 3D design to happen on the web to make it accessible for those that didn’t have access to a crazy computer and for it to be collaborative.
The big aspect of Playbook at the time was it being cross-platform and collaborative, allowing multiple users to edit the same scene at the same time. We’ve since focused on a new part of the 3D problem, which is the rendering side. That was our showcase during the Culver Cup and what users have been using, which is Play 3D—a diffusion-based render engine.
What that means is we use diffusion technology, which is the same as Stable Diffusion, Runway, Pika, all these other kinds of AI creative tools. But we use 3D to guide that diffusion so that the creative has full control over what’s coming up and what they’re producing.
That’s our background from the 3D side and how we’ve integrated the latest AI technology to build a tool that can help storytellers very much control and create from their imagination.
The tool is accessible from the web browser, so it’s accessible to anyone that has a computer. It works on certain iPads. It doesn’t work on phones just yet, but anyone can access it. It’s not locked up or anything.
We’re launching new features pretty regularly, basing them on feedback. In terms of the model, I think a big part of what we’re building is AI, but we’re not training our own models. This isn’t like stealing filmmaker data and scraping all of Hollywood to create this. We’re based on open source models that different creators are fine-tuning for their own purposes.”
How is It Useful For Filmmakers?
With the intention of Playbook’s utilization for filmmakers, LeRoy said:
“With full 3D controls, we’re able to animate the different subjects live with a keyframe system. So for the creators that are used to Premiere Pro and After Effects, these more traditional animation tools, we tried to make it friendly. This feature wasn’t able to be developed in time for the Culver Cup, but we did show some previews, and all the creators are going to have access to it. This is kind of our idea of what the future of AI 3D filmmaking is going to look like: a combination of more traditional animation and keyframing, plus the ability to AI render.”
Thomas added:
“We’re also dropping plugins into pro tools, which should allow this to be used directly within production pipelines without having to leave and go into another tool.
So we expose those workflows, but we’re also going to open it up to those who might create their own very specialized, production-specific workflows for the kind of rendering or style that they’re trying to achieve.”
The Culver Cup Challenge
Per Culver Cup’s press release:
The Culver Cup is hosted by FBRC.ai, a startup accelerator that builds AI tools and workflows for the entertainment industry, along with Amazon Web Services.
FBRC.ai’s current focus is building tools at the intersection of 3D and AI.
Playbook3D is one of the 3D tools in their portfolio of startups.
Three pillars at FBRC.ai:
- Community: hosts community events, screenings, and workshops with a focus on the AI in entertainment community in Los Angeles
- Startup accelerator: five startups right now, all focused on the intersection of 3D and AI
- Internal AI innovation programs within studios: bringing in tools and creators, tinkering with workflows, documenting, and basically creating a safe container for studios to test how they could implement AI safely
Todd Terrazas, co-founder of FBRC.ai, added:
“We really think for AI to have a longer term production impact with more creator control and more consistency of output, 3D and AI are going to need to work hand in hand, which is why we are investing in the development of tools like Playbook3D and testing them in events like the Culver Cup.”
LeRoy also said of the Culver Cup and their collaboration:
“As part of the Culver Cup, every contestant had access to a base source 3D canvas and 3D scene that was provided by Global Objects, one of the partners of the competition. And since we’re the only tool that combines 3D and gen AI to create images and videos, we were the ideal partner for them to integrate.
So that’s how that relationship came to be. One of the big goals of the competition was to see if we could push the boundaries of AI filmmaking. And in this—I think it’s been about two years since AI filmmaking has really been on the scene, and Skylar and I have attended a number of hackathons and film competitions and seen the early work—a lot of it at that time was fragmented shots. You could barely get motion out of the early generators, let alone control the kind of output you were getting. Mainly because it’s very, very key to have persistence between every frame of a shot, especially when you’re thinking about trying to tell a story about a certain subject.
You want that same subject to be the same subject across the entire movie. So with Playbook, we thought we could solve that problem of consistency and control by using 3D in that process.
We wrapped up the Culver Cup on Monday, October 14, so we’re really excited about that and the future of filmmaking. We have a couple of plans for the future, including another contest, a community contest more focused on the future of fashion and fashion rendering, where image and video will also be a workflow for us.
AI filmmaking isn’t just storytelling for creative purposes. It could also be applied for brands and marketing agencies. We’re pretty interested in expanding use cases and showing how the tool could be used in those different contexts. So fashion is the one we’re planning next, but yeah, we’re pretty open, and we’re just listening to what comes to us on our inbound to steer that.”
Watch the Culver Cup short film finalists at Escape.ai.
Author: Grant Vance
This article comes from No Film School and can be read on the original site.