
ITP Fall '21 — Hypercinema — Project 2

Team members

Project Concept

Most Hollywood movies are overwhelmingly white, and the top blockbusters even more so. So we wondered: what would it feel like if, when we went into a movie theatre, the faces we saw on screen were our own (and those of everyone else in the theatre)? How would it feel if the people we saw on screen were truly representative of the diversity of America? What would that mean for those of us who have long been under-represented, or even unrepresented, in American cinema?

For our concept video, we decided to show what this process could look like: from a character selection screen, to our faces being scanned, to a scene from the Avengers with our faces swapped in.

The result is social commentary: uncanny, seeing such a familiar scene with faces that aren't the actors and actresses we've become so accustomed to, but also a little emotional, seeing faces so similar to our own (and definitely a bit cheesy haha).

With more resources, it'd be interesting to bring this experience to a small movie theatre, with a whole movie rendered with the audience's faces. What would that experience be like?

And what if the technology advances to the point where this becomes mainstream practice, and instead of going to movie theatres to see our favorite actors and actresses, we go to see interesting stories with our own faces, or at least the faces of those around us? What would that mean for scriptwriting when writers are no longer writing for specific actors, and what would it mean for acting when an actor's face might not even appear on screen? Would casting become more diverse, based on skill rather than the color of an actor's skin or whether they meet certain beauty standards?

Of course, there would be logistical challenges: how would roles be assigned to an audience? Would we swap bodies as well as faces? Regardless, it is an interesting thought experiment.

Process documentation

To create the video, we used the Reface app to put our faces onto the Avengers', RunwayML for the "scanning" scene, and Adobe Premiere for editing.

We started with a rough storyboard for the video:

The rough storyboard for our video.

We then went looking for a movie scene and landed on the 2012 Avengers assemble scene: iconic enough, but without too many faces to swap in. We asked three of our classmates to join us as the rest of the Avengers, and loaded our images and the movie clip into the Reface app.

To remove the watermark, we placed the source video on a bottom layer and masked the corners containing the watermark, letting the original video show through instead.
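Conceptually, this corner mask is just per-pixel compositing: inside the masked corner regions, show the clean source frame underneath; everywhere else, keep the face-swapped frame on top. A minimal sketch of the idea in Python (frames as nested lists of pixel values; the region coordinates are made up for illustration, not the actual watermark position):

```python
def mask_corners(top_frame, bottom_frame, regions):
    """Composite two same-sized frames: inside each masked region,
    show the bottom (clean) frame; elsewhere, keep the top frame.

    Frames are 2D lists of pixel values; regions are (x0, y0, x1, y1)
    boxes, half-open on the right/bottom edges.
    """
    out = [row[:] for row in top_frame]  # start from the face-swapped layer
    for x0, y0, x1, y1 in regions:
        for y in range(y0, y1):
            for x in range(x0, x1):
                out[y][x] = bottom_frame[y][x]  # let the clean layer show through
    return out

# Toy 4x4 frames: 'S' = face-swapped pixel, 'C' = clean source pixel.
swapped = [["S"] * 4 for _ in range(4)]
clean = [["C"] * 4 for _ in range(4)]

# Hypothetical watermark box in the bottom-right corner.
result = mask_corners(swapped, clean, [(2, 2, 4, 4)])
```

In Premiere this is done visually with a mask on the top layer rather than in code, but the effect is the same: the watermarked corners are replaced by the corresponding pixels of the clean clip.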

Next, to present the experience of swapping faces before the movie starts, we added two scenes from the storyboard:

  1. Choosing the character
  2. Scanning for facial recognition + analysis
A recording of RunwayML's facial detection model, to use in our scene of "scanning" faces to add into the movie scene.

Jingjing took charge of this section. We initially had a video from AMC, which would give the intro a familiar movie-theatre feel. She then tried overlaying a character-selection UI designed in Illustrator, but the video was too aesthetically busy, and the two were difficult to blend cohesively.

Jingjing then found another video, from IMAX, that paired much better with the character selection screen. She kept the first 14 seconds of sound, then used Audition to remove the announcer's voice while keeping the background music. To make the whole video more cohesive, she used a color key to remove the background from the scanning video and added transitions between each video and audio clip.
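A color key like the one used here makes every pixel close to a chosen key color transparent, so only the foreground (the detected faces in the scanning clip) remains when layered over another video. A rough sketch of the idea in Python (pure lists, no video libraries; the key color and tolerance are arbitrary example values):

```python
def color_key(frame, key, tolerance):
    """Return an RGBA frame where pixels within `tolerance` of the key
    color (per channel) become fully transparent (alpha 0)."""
    kr, kg, kb = key
    out = []
    for row in frame:
        new_row = []
        for r, g, b in row:
            near_key = (abs(r - kr) <= tolerance
                        and abs(g - kg) <= tolerance
                        and abs(b - kb) <= tolerance)
            alpha = 0 if near_key else 255  # keyed-out pixels are see-through
            new_row.append((r, g, b, alpha))
        out.append(new_row)
    return out

# A 1x3 frame: near-black background pixel, a grey pixel, a bright pixel.
frame = [[(10, 10, 12), (128, 128, 128), (250, 240, 230)]]
keyed = color_key(frame, key=(0, 0, 0), tolerance=30)
```

Premiere's Color Key effect does this per frame (with extra controls for edge thinning and feathering), which is why it works well when the keyed clip has a fairly uniform background.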

Video timeline in Premiere.