
ITP Fall '21 — Hypercinema — Week 4

This week's readings are about synthetic media and its effects on the world: text-based ("The Supply of Disinformation Will Soon Be Infinite"), video-based ("Deepfakes Are Becoming the Hot New Training Tool"), and art-based ("The AI Art at Christie's is Not What You Think").

I was so alarmed when I first learned about deepfakes. I'd already learned to be skeptical of photos because they could be photoshopped, but I used to be able to rely on videos as a source of truth. Deepfakes have changed that: videos can be manipulated, and people can be made to say things they've never said. Thankfully, the tech is still at a stage where we can usually tell when a video is a deepfake, but it's advancing quickly. Even scarier, the people whose videos we'd most want to verify, those at the top levels of government, have so much footage of themselves available that it's presumably not hard to create a convincing deepfake of them even now.

Having said that, it was interesting to read about deepfakes in a much less alarming use case: training videos. It feels very welcoming to have a training video tailored to each new employee, where the person in the video greets the employee by name and speaks their language, and it's also clever from a cost-cutting perspective.

I have mixed feelings about the other use case, advertising campaigns with AI-generated models, because I wonder how many real-life humans they're replacing, particularly since many of the AI-generated models are women and people of color. Brands get to appear more diverse while cutting costs, without actually paying women and POC models. That money goes instead to the people behind the tech, who, I presume, are predominantly white men.

For some reason, I was even more disturbed reading about GPT-3 and the text it generated. Perhaps because with deepfakes I could still have a clue that something is fake, but with text, real people misspell and write with awkward grammar all the time. I couldn't tell the difference between the author's writing and the GPT-3-generated text that had been seeded with just a few sentences of the author's writing. How do we even begin to discern what is real from what is fake?
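For anyone curious what "seeding a model with a few sentences" actually looks like, here's a minimal sketch using the openly available GPT-2 through Hugging Face's transformers library as a stand-in for GPT-3 (whose API is gated). The prompt below is invented for illustration; any few sentences in someone's voice would do.

```python
# A minimal sketch of prompt-conditioned text generation, using GPT-2
# (openly available) as a stand-in for GPT-3. The prompt is invented
# for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A few sentences "in the author's voice" to condition the model on.
prompt = (
    "I was so alarmed when I first learned about deepfakes. "
    "I used to be able to rely on videos as a source of truth."
)

# The model continues the prompt, mimicking its tone and vocabulary.
result = generator(prompt, max_length=100, num_return_sequences=1)
print(result[0]["generated_text"])
```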

All of which made me realize: synthetic media challenges our trust in the media we consume.

video assignment

We're really struggling with this assignment, which is to augment a piece of media by addition, subtraction, or generation. We're just not sure of the direction, or even the concept, to go with.

The broad direction we currently have is "Contrast", or "scenes that can never happen". To home in on something more specific, we agreed to bring a few ideas to class to discuss and pick from.

The two I thought of:

  1. Contrast: speaking, and seeing someone else (who doesn't exist) speaking with our voice. How would that challenge our sense of self? Would it be uncomfortable, or liberating?
  2. Data: student debt or the wealth gap, personalized and overlaid on top of video. I don't have a clear vision for this one yet.

We couldn't quite land on an idea in class either, so we decided to work through the questions Gabe provided us to drill down into our ideas further.

Anief's idea

Jingjing's idea (environmental design, environmental protection)

In the end, we decided to go with Anief's idea, but with a spin inspired by his observation that face-swapping tech carries no racial or gender discrimination of its own: the algorithm indiscriminately applies the faces it's given to whatever faces it detects in an image or video.

So could we make a social commentary by homing in on top blockbusters with white male leads and purposefully face-swapping them with non-male, non-white people? What would that viewing experience be like?
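As a starting point for prototyping, here's a minimal sketch of that indiscriminate detect-and-replace step, assuming OpenCV and its bundled Haar-cascade face detector. The file names are placeholders, and a real face swap would warp and blend rather than paste; the point is just that the detector fires on anything face-shaped, with no notion of who the underlying person is.

```python
# Toy face swap: detect every face in a frame and paste the same
# replacement face over each one. The detector has no concept of race
# or gender; it fires on anything face-shaped. File names below
# ("frame.jpg", "new_face.jpg") are placeholders.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

frame = cv2.imread("frame.jpg")
new_face = cv2.imread("new_face.jpg")

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Apply the replacement indiscriminately to every detected face.
for (x, y, w, h) in faces:
    frame[y:y + h, x:x + w] = cv2.resize(new_face, (w, h))

cv2.imwrite("swapped.jpg", frame)
```

Running this over a movie still with multiple actors replaces every detected face with the same new one, regardless of who the original actors were, which is exactly the indiscriminateness our concept hinges on.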