Synthetic media as defined in these slides is "any media created or modified by algorithmic means, especially through the use of artificial intelligence algorithms".
And perhaps it's not what the definition intended, but I do wonder if data visualization can fit within that definition. After all, data visualization is a form of media whose primary purpose is to communicate something about the underlying dataset, and in translating the data into a visual form, we are oftentimes algorithmically modifying the source material. And so it is both media and synthetic (in that it doesn't occur naturally, but I guess neither does data?).
But I feel that synthetic tends towards a slightly negative connotation, that something is fake. Which then leads to: is data visualization fake?
And that, I guess, goes back to how it's being used and applied—just like all the deep fakes that are so troubling because they can do both good (presumably; I haven't quite figured out what good they do yet) and bad. Data visualization can be used for good—for communicating important information, like the spread of a pandemic—but it can certainly be used for bad, like visually manipulating the results of an election. It depends on how it's used.
"The Machine Stops"
"The Machine Stops" by E.M. Forster was a wild read. I thought it was something machine generated at first, maybe like one of those movie scripts produced by a model that was fed all the other scripts. I couldn't fully understand it at first; the descriptions and even the grammar sounded a little bit off—until I realized it was written in 1909 and perhaps that's just how people wrote back then.
It was even wilder once I realized someone had written it in 1909; some aspects were far from how life is currently—we certainly don't have a room that takes care of all of our bodily and social needs. But the image of each person being isolated in their own room, able to communicate only via video, feels chillingly close to this past pandemic year.
One of the things I appreciated was Kuno asking Vashti to visit him in person, because the Machine could only express the vague idea of a person but couldn't render their expressions or body language in full detail. I feel that way about the direction we're taking with VR: if we start communicating with avatars, we'll miss important aspects of human communication.
But I digress.
The overarching theme I took away from the story was how we as humans have become more and more reliant on technology to take care of every aspect of our lives, that we've given away our control, and that this would bring about the end of (at least a subset of) humanity.
I fully agree that we've become overly reliant on technology, and that there are plenty of orphaned codebases out there still running really important functions that no developer knows how to maintain, let alone refactor for the better. And that we trust not only our finances but at times even our lives to these algorithms scares me.
I don't think the majority of humanity will actually perish like they did at the end of the story, but I do think many lives will be very negatively impacted—and that negative impact will most likely be very skewed in that the most marginalized populations will be the most affected.
I really strongly believe that, with all the advancements in machine learning research, we should at the same time slow down enough for conversations to take place about who the new technology will affect positively and negatively, and for public awareness to rise.
And in an ideal world, we'd have regulations that match the pace of advancement.
"Can Fake Images Show Us Something Real?"
Maybe I'm an old grumpy soul but I've always viewed ML with a lot of skepticism and distrust where others have been excited. I've also struggled with acknowledging AI generated art as art, because when a human being hunkers down to produce art, they presumably do it with intention. Can a machine have intention?
I much prefer thinking of AI as another tool that we have at our disposal, one that helps us create art—rather than as the artist itself.
In that sense, I found the article interesting.
Aarati Akkapeddi uses AI as a tool to generate images of their mother. And what they say about the generated images—that they are fuzzy in some places and well defined in others—is, as the article points out, really poetic in how similar it is to memory. When we recall something or someone from memory, we don't have photo-perfect recall; we remember bits and pieces.
Another interesting tidbit: our memory is malleable, constantly being edited and re-contextualized. So perhaps these generated images are helping Akkapeddi re-contextualize how they saw their mother.
But whether they show us something real? Perhaps only Akkapeddi would know.