In The AI Doc: Or How I Became an Apocaloptimist, filmmakers Daniel Roher and Charlie Tyrell use their cinematic talents to understand the labyrinthine phenomenon of AI. Bringing together an impressive array of industry experts, including CEOs from the most popular large language model firms, the filmmakers weave together a complex and probing impression of the force of AI from many points of view. This same democratic creative process informed the making of the film, with its five producers each bringing a different area of expertise.
We spoke with two of the producers, Jonathan Wang and Daniel Kwan, about how the film came about. In 2023, Kwan and Wang gained international attention when their multiverse adventure, Everything Everywhere All at Once, was nominated for 11 Academy Awards and won seven—including Best Director and Best Original Screenplay for Kwan and Best Picture for producers Kwan and Wang. Now, the two bring their boundless curiosity and imagination to the world of AI.
Get tickets for The AI Doc: Or How I Became an Apocaloptimist—now playing in theaters!
The official trailer for The AI Doc: Or How I Became an Apocaloptimist
How did The AI Doc come about?
Daniel Kwan: The week after the Oscars in which Everything Everywhere All at Once did a pretty good job, a couple of guys from Silicon Valley reached out to us and asked, “What do you know about AI?”
Jonathan Wang: I remember you telling me, “I hope they're not asking us to help save the world.”
DK: They told us, “This technology is coming and it's coming fast.” We realized that we needed to bring clarity to the conversation. We needed a film that could introduce the topic clearly, be entertaining and emotional, and meet people where they live. We put together a team, including the people from Navalny [the documentary for which Daniel Roher won an Academy Award® that year], whom we met on the awards circuit. Three years later, it feels like the film is coming out at the right time, with so many people who are really curious, excited, and a little scared about AI.

Producers Daniel Kwan and Jonathan Wang
What was important for you to include in the documentary for people just learning about the technology?
DK: One of the film’s subjects says, “AI is moving so fast that this documentary will be out of date by the time it comes out.” Early on, we realized that we couldn’t make a documentary just about AI’s technical capabilities. We needed to examine the underlying drivers, the things that are pushing the industry to rapidly deploy this technology as fast as possible. We decided to showcase the different drivers, be they economic, political, or ideological. What do the people building this technology believe about AI? What are the stories they are telling themselves about its future?
JW: When we started making it, people would immediately ask about AI, “Is it good or bad?” But that question is fundamentally flawed. It's both. So, what is the problem? It's this no-holds-barred race to get it out as fast as possible.
You have an impressive group of people in the film. How did you get them on camera?
DK: A lot of the credit goes to Ted Tremper. He figured out the tech industry and community, learning what all these different people believed about AI. In a way, he created a map showing where everyone stood and he gained their trust from the bottom all the way to the top. It was really important that we got a broad spectrum of voices from the critics to the CEOs. By including all these different voices, we can start to see the overlap where we agree rather than simply acknowledging all the points of difference.
JW: We had five producers and two directors, and each of them had a different niche they really cared about. My way into AI was through seeing how it was supercharging all these environmental and social issues. Diane Becker, another producer, fought for female and diverse voices talking about AI. Producer Shane Boris, who is a deeply spiritual person, was concerned about AI stripping wisdom out of humanity. Daniel was asking, “How can we make sure that we get the future that we want?” Ted [Tremper] was tasked with hearing all of our opinions, as well as the directors' ideas, and figuring out how to put them all together in a movie.

Daniel Roher in The AI Doc: Or How I Became an Apocaloptimist
As producers, what did you do to get the movie made?
JW: For me, the core job of a producer is to protect the DNA of a movie because it shifts constantly. For this movie, that meant navigating all these disparate voices so they didn't disrupt the balance and the film could effect social change. In this film, everyone earned the title producer. I feel that this film needed five producers because it was such a Herculean effort.
DK: There are so many different ways to produce and contribute. Some of us focused more on the day-to-day production, from getting cameras and crews to coordinating flights. That level of production is absolutely necessary. I was mostly a creative producer whose job was to be sure everyone felt heard.
How did you work with the two directors to assist them creatively?
DK: From the start, we wanted to be sure that this film was a very human, emotional, and handcrafted experience. After we brought on Daniel—and he reached out to Charlie—we looked to them to make this cold subject into something personal. Daniel had the perspective, “I'm finding out about this technology just as I'm about to have my first son.” Interestingly, Charlie also was expecting a child, so both directors were becoming first-time fathers. That became our first “aha” moment. Becoming a parent became a powerful way to ask, “What does this future hold for us?” Charlie has made a name for himself creating short, handmade docs with lots of texture. He does stop-motion animation in a very personal, beautiful way, so we were really excited to see how he could translate this very technical subject into something personal and inviting.
JW: Roher also served as a great BS meter in the film. If someone said something a bit too complex, he would simply ask, “What exactly does that mean?” That is his superpower, knowing when to stop and say, “Explain that to me.” Charlie was always looking out for when the story might feel soulless and finding a way to make it relatable and human. Together, they created just the right counterbalance to the intellectual nature of the story.
What made you first aware that AI was no longer science fiction but a force that was going to disrupt the world?
DK: I remember reading an essay by Meghan O'Gieblyn on AI large language models and what they could do. It was then I realized that AI's ability to use pattern recognition to form coherent sentences and paragraphs was really interesting. Of course, at the time, it was a bit flawed in how it handled language. A year later, though, when we talked to Tristan Harris and Aza Raskin [founders of the Center for Humane Technology], I realized that people were rapidly building and deploying this technology, but there was no plan. There were no adults in the room.
JW: There were two moments for me. One was getting an article sent to me that was generated by AI with credible-looking links which were all fake. I couldn't tell who drafted it or that the references were made up. Seeing that article was a bit of a canary in the coal mine. The other was when I asked AI to create a 15-second clip to connect two key frames: the first showed a guy standing with a bunch of people around him in a casino, and the end frame showed the same guy from a top angle with all the people lying beaten up on the ground around him. The clip it created had all these different shots, from a high-impact, low-angle shot of a punch to a tracking shot of someone being thrown into a table. The AI had made the decision to cast Donnie Yen, whose likeness it had clearly stolen. It didn't look perfect, but it looked pretty good. What worried me is how all the cinematic artistry had been farmed out. I suddenly saw how there is a deeper problem beyond just the economic contraction.

Have there been moments where AI astonished you in a positive way?
DK: Right now, this technology makes the most sense when it's used for very specific, narrow use cases. Audrey Tang, an ex-hacker who became Taiwan's first Minister of Digital Affairs, built digital democracy platforms. She used algorithms to find alignments and places of overlap, specifically around governmental work, that could build consensus rather than division. If you do that enough, you actually build up enough trust in the people and lawmakers for them to work on the more polarizing issues. Her work has helped Taiwan become a very robust democracy.
JW: Tristan and Aza talked to us about how AI is able to help decode animal communication. That's amazing. If we can actually communicate with other species, we can increase our ability to empathize and love the planet. Of course, the problem is that the technology could also be put to use as a more efficient way to capture and kill other animals. Whenever someone's touting the good of AI, I always want to ask: what is the underlying wisdom behind it? AI always has this dual-use problem. Until you can name that problem succinctly, I don't want to destroy all of the global economy and the environment to try to solve any problem.
What has been the reception to the doc at different film festivals?
DK: The moment the credits roll, people want to talk. It's amazing watching complete strangers start talking to each other about what they've just seen. The movie evokes so many different emotions, which is great, because what we really want is conversation. Showing the film at festivals, we have been able to see how those conversations ignite people. We showed the film to an entire audience of high schoolers in Salt Lake City. It was exciting to see how teens really engaged with the topic, understanding that this is going to be their future. It was also great to see them question their assumptions. One kid, a ChatGPT power user, said the film made him question how he was using the platform. I think it's really important that the doc doesn't tell you what to do, but it does tell you how to think about it.
How did you arrive at the film’s title?
DK: The film's title became The AI Doc through a very collaborative process. Sometimes when you do things by committee, you create a title as generic as The AI Doc. That generic quality was really important for the film itself, because we wanted it to be as accessible as possible to many different people. At the same time, because we wanted to make sure that the doc felt special, we added the subtitle “Or How I Became an Apocaloptimist.” Jamming together the generic title with something so specific in many ways reflected the complexity of the documentary. To get through this together, we have to embrace that we all have very different ways of thinking about what is the right way to deal with this technology.
What do you hope audiences take away from the film?
JW: I hope they gain a clarity of vision to see what is happening in front of them. It's a bit like seeing The Matrix happening in real life. It's interesting to me that so many tech billionaires are investing in spaceships to other worlds or in building underground bunkers—both of which suggest rather dystopian visions of the future. I hope that when people leave the theater, they can ask themselves what they want from an AI future. What do I value here? What do I want to protect? And then work hard with their communities and government to protect the thing that makes us human.
DK: I hope people leave the theater with a sense of agency. Whether you're looking at climate change, income inequality, or global military conflicts, AI will have a part to play in it. It is all connected. I hope that this film gives people an entry point into gaining some agency over how AI will be used.
For resources and ways to join the Apocaloptimist community, please visit theaidocgetinvolved.com.
This interview has been edited and condensed for clarity.
