How AI can benefit Hollywood (Part 2 of 3)

Russell S.A. Palmer
6 min read · Nov 29, 2021


The entertainment industry might be the final one disrupted by Machine Learning, but if done right it can be as revolutionary as the camera.

Be sure to read Part 1 first

Robot hand reaching out to touch human hand, in style of “The Creation of Adam” by Michelangelo
AI art could be great

Act 2 — What’s On Next

Artificial Intelligence, Machine Learning, Deep Learning, Neural Networks, and GANs: Hollywood will use more and more CGI to replace expensive film production, and will use script recommendations from foundation models like OpenAI's GPT-3 to get to production faster.

Other models, like DALL-E, can be used for storyboards and GIF-animated pre-viz, helping speed up production (and reduce costs). Filmmaking will be democratized for the masses of creatives with stories to tell. But first, a short history of the past to better understand the future:

The industries of Northern and Southern California have been connected since the dawn of the 20th century. Technology has played a key role in the growth of Hollywood across Los Angeles County. Some people may think that artists and technologists are separate types, but even before the Italian Renaissance, artists were often brilliant scientists and engineers [6].

The San Francisco (SF) Bay Area has become critically influential in recent decades. Just take a walk around the Presidio near the Golden Gate Bridge, and you’ll find inspiring offices of LucasFilm and Industrial Light & Magic, not to mention the iconic Yoda fountain — a must see for film and tech geeks alike!

This landmark SF park houses statues of more pioneers, like Eadweard Muybridge, whose motion studies gave birth to motion pictures. There's also Philo Farnsworth, who created the first electronic television in his SF lab. Nearby, the Walt Disney Family Museum celebrates his creations, which grew into the studios and theme parks in LA.

Technology and entertainment go hand-in-hand. From ancient paint sprayers to 3D glasses, every aspect of technology can be harnessed for creating art, and every generation pushes the envelope further. When the first "Motion Pictures" were projected for audiences, they were blown away. When sound was added, the "Talkies" were a game changer, soon followed by "Technicolor".

One might have assumed that filmmaking had been perfected, until stereoscopic cameras were able to produce three-dimensional images. Enter the digital era, and a team hired to build non-linear editing software would soon produce the first computer-generated animation, notably Pixar, now in Emeryville. The list goes on: inventions like green screens, 4K resolution, AR/VR, drones, DVDs, even Netflix streaming.

A big step for Creativity and AI is today's English-language generative "Foundation Models" such as GPT-3 or GPT-Neo (generative pre-trained transformers), with more languages to follow in coming years. AI can already be used to invent log-lines and titles for movies, and there are several free apps online you can use today as "idea generators" to overcome writer's block.

We believe the next step could be using similar models, but with more input around plot and character, to help complete a movie "one-pager" including all acts and major plot points. You could enter any story idea (e.g. plot, title, setting, log-line, and hero) and an AI could suggest the pieces you're stuck on. It might use its knowledge of film to construct a complementary b-story, complex villains, ideal mentors, or even suggest a tone and genre based on your writing. Edit the things you like, delete those you don't, and try again for more ideas.
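The "one-pager" workflow above can be sketched as a small prompt builder: the writer's known story elements go in, and blank slots are left for the beats the model should fill. This is a minimal illustrative sketch; the field names, story details, and prompt wording are assumptions, not any real product's format.

```python
# A minimal sketch of turning story elements into a structured prompt
# for a text-generation model such as GPT-3. All field names and the
# example story are illustrative assumptions.

def build_one_pager_prompt(title, logline, hero, setting, missing=()):
    """Assemble a prompt asking the model to fill in missing story beats."""
    lines = [
        f"Title: {title}",
        f"Log-line: {logline}",
        f"Hero: {hero}",
        f"Setting: {setting}",
    ]
    for beat in missing:
        # Leave a blank slot for each piece the writer is stuck on.
        lines.append(f"{beat}:")
    lines.append("Complete the missing story beats above, one paragraph each.")
    return "\n".join(lines)

prompt = build_one_pager_prompt(
    title="Static",
    logline="A sound engineer hears the future in radio noise.",
    hero="Maya, a burned-out audio restorer",
    setting="1970s Los Angeles",
    missing=["B-story", "Villain", "Mentor"],
)
print(prompt)
```

The resulting text would then be sent to the model of your choice; editing the filled-in beats and re-running is the "try again for more ideas" loop.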

After that, we will see more creatives scan collections of movie data for fine-tuning, and use this to start filling in dialog for a few unfinished scenes. This workflow would require a professional screenwriter as a "human-in-the-loop" to construct and approve the output, setting a quality bar such that they would put their own name on it. As mentioned in Part 1, this human-in-the-loop aspect is critical for doctors using AI when making a final diagnosis and taking action. It's just as important for published artworks, forming a sort of master-apprentice relationship like Da Vinci and his background painters, where the leading human can focus on the critical details like perfecting the Mona Lisa's smile.
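One way to picture the human-in-the-loop gate in that fine-tuning workflow: only scenes a screenwriter has signed off on become training examples. The record fields ("scene", "dialog", "approved") and the prompt/completion pairing below are assumptions for illustration, loosely following the JSONL style common for fine-tuning text models.

```python
import json

# A minimal sketch of preparing fine-tuning data from a screenplay corpus
# with a human-in-the-loop: only scenes the screenwriter has approved are
# converted into training records. Field names are illustrative assumptions.

scenes = [
    {"scene": "INT. DINER - NIGHT. Maya studies a reel of tape.",
     "dialog": "MAYA: This isn't static. Someone's talking.",
     "approved": True},
    {"scene": "EXT. ROOFTOP - DAY. The antenna hums.",
     "dialog": "(placeholder dialog, not yet reviewed)",
     "approved": False},
]

def to_training_records(scenes):
    """Convert approved scenes into prompt/completion pairs (JSONL-style)."""
    records = []
    for s in scenes:
        if not s["approved"]:
            continue  # the human gatekeeper has not signed off on this one
        records.append({"prompt": s["scene"], "completion": s["dialog"]})
    return records

records = to_training_records(scenes)
print(json.dumps(records[0]))
```

The unapproved rooftop scene never reaches the training set, which is exactly the quality bar the screenwriter's name depends on.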

For media, the human artist will finalize the output before sharing it publicly, filtering as necessary and setting editorial standards for the audience around quality, language, sexuality, violence, discrimination, and other problematic content that AI is known to produce because of its training data. Having been trained on humanity's data from the Internet and public movies, AI sometimes comes up with outrageous ideas not fit for the public, just as any human could.

Next up is using scripts and screenplays as custom input sets, with tools like DALL-E creating simple cartoon images from text prompts (scene descriptions, such as the narration in a screenplay). These can then be used to generate storyboards. Perhaps someday, as these models improve in quality, they can be used to create graphic novels and comic books. Other uses of images in film pre-production include title art, movie posters, story bibles, and look-books [7].
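The screenplay-to-storyboard step above could start as simply as pulling the narration out of a script and prefixing it with an art-style instruction for a text-to-image model. This sketch assumes a simplified screenplay convention (dialog cues start with an ALL-CAPS character name; sluglines start with "INT." or "EXT."); real screenplay parsing is messier.

```python
# A minimal sketch of extracting scene descriptions from screenplay text
# to use as prompts for a text-to-image model like DALL-E. The formatting
# conventions assumed here (ALL-CAPS dialog cues, INT./EXT. sluglines)
# are a simplification.

def storyboard_prompts(screenplay_lines, style="simple cartoon storyboard panel"):
    """Keep sluglines and narration as image prompts; skip dialog."""
    prompts = []
    for line in screenplay_lines:
        stripped = line.strip()
        if not stripped:
            continue
        first_word = stripped.split()[0].rstrip(":")
        if (first_word.isupper() and len(first_word) > 1
                and not stripped.startswith(("INT.", "EXT."))):
            continue  # an ALL-CAPS lead word that isn't a slugline is dialog
        prompts.append(f"{style}: {stripped}")
    return prompts

lines = [
    "INT. DINER - NIGHT",
    "Maya studies a reel of tape under a flickering lamp.",
    "MAYA: This isn't static.",
]
panels = storyboard_prompts(lines)
print(panels)
```

Each prompt string would then be sent to the image model, one panel per line of narration.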

At some point, we believe GANs (Generative Adversarial Networks) will be used to judge outputs for quality, and even provide feedback in the form of script coverage, allowing the user to iterate against a bar they set, such as: "Would this script have over a 70% probability of a Best Picture Academy Award nomination?" or other metrics like box office revenue, or ratings from IMDb and Rotten Tomatoes.
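The quality-bar check described above can be pictured as a discriminator-style scorer: several normalized metrics are combined into one probability-like score and compared against the user's threshold. The metric names and weights below are illustrative assumptions, standing in for what a trained discriminator would actually learn.

```python
# A minimal sketch of the "quality bar" idea: combine normalized metrics
# into one score and compare it to the user's threshold (e.g. 70%).
# The weights and metric names are illustrative assumptions, not a
# trained model.

WEIGHTS = {"coverage_score": 0.5, "audience_rating": 0.3, "critic_rating": 0.2}

def passes_quality_bar(metrics, threshold=0.70):
    """Weighted average of metrics in [0, 1], compared against the bar."""
    score = sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS)
    return score, score >= threshold

score, ok = passes_quality_bar(
    {"coverage_score": 0.8, "audience_rating": 0.7, "critic_rating": 0.6}
)
print(round(score, 2), ok)
```

In the iterative workflow the article describes, a failing score would come back with script-coverage-style notes, and the writer would revise and rescore.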

Naturally, after images would come animation; everything from GIFs to short clips could be generated. Cartoons for children would be a less discerning market for the quality AI is capable of at this early stage, but with time and enough data it could improve to the level of Disney and Pixar films.

AI can generate "deep fakes" for videos with humans, escaping the uncanny valley for photo-realistic portrayals in film. CGI is already good at creating synthetic video, such as Josh Brolin playing Thanos and Andy Serkis playing a multitude of characters (albeit with face-tracking captured as part of the acting process).

With quality CGI may come a reduction in some roles, yet an upgrade for others. People who used to do make-up and lighting could move into overseeing teams producing visual effects. Things like wardrobe and costume design especially benefit from a human touch, with the artist curating the best choices between their own ideas and what the AI can offer up.

Movies will make more money through global ticket sales, using AI for translation and release in more locales around the world. AI's near-perfect translation of vocals and lip movement offers a more authentic experience than dubbing. Every movie could be watched seamlessly in every language, and new films could soon be marketed to an audience of 8 billion people, growing the industry like never before.

Instead of technology hurting careers in media entertainment, picture this: a world where anyone with artistic vision can input a few seminal ideas and, with the push of a button, generate a full-length feature film ready for streaming and sharing.

To Be Concluded…

Now Playing! Check out the final part 3.

Attribution and Links

Top photo by Tara Winstead from Pexels

[6] Leonardo da Vinci (Walter Isaacson, 2017)

[7] DALL-E: Creating Images from Text (OpenAI)



Russell S.A. Palmer

CEO of CyberFilm AI in SF. From Toronto Canada. AI PM for 15 years across Silicon Valley at Microsoft, Viv Labs, Samsung, and JPMorgan Chase.