Artificial intelligence generates the cinematic substance of today, molding the clay of the cinematic image itself. This can most readily be observed in the cultural phenomenon of deepfakes, produced by generative adversarial networks (GANs). A GAN pairs two neural networks in a binary relationship: a generator that synthesizes images and a discriminator that judges them against real training data. Locked in competition, the two networks learn to produce visuals by recombining multiple data inputs, a mechanism with the potential to generate infinitely long single takes and to redefine the cinematic cut.
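A minimal sketch of that adversarial setup, assuming PyTorch; the toy dimensions, layer sizes, and names below are illustrative, not taken from any specific deepfake system:

```python
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # toy sizes for illustration only

# The generator maps random noise to a synthetic image.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# The discriminator scores how "real" an image looks.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    """One round of the two-network competition."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator step: learn to separate real images from generated ones.
    fake_images = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fake_images), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: learn to fool the discriminator.
    g_loss = bce(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Each network improves only by pushing against the other: the images no human has photographed are a by-product of this contest, not of any single model.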
As these networks continuously invent images we have never seen before, they introduce potent cultural ambiguities around questions of authenticity. GAN images mash together content that no human has ever photographed, while GAN videos are produced by algorithms trained to simulate and cut time-based content. In this sense, AI is the contemporary non-linear film editor: collapsing pre-production with post-production and transforming cinematic content into an informational substance.
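To make the generated transition concrete: a sketch, assuming any trained GAN generator (such as the toy one above), of how a hard cut from shot A to shot B can be replaced by a walk through latent space that renders synthetic in-between frames. The function name and step count are hypothetical.

```python
import torch

def neural_transition(generator, z_a, z_b, steps=30):
    """Yield intermediate frames between two latent codes z_a and z_b."""
    for t in torch.linspace(0.0, 1.0, steps):
        z = (1 - t) * z_a + t * z_b  # linear walk through latent space
        with torch.no_grad():
            yield generator(z.unsqueeze(0))  # one synthetic in-between frame

# Usage: a continuous morph where an editor would once have spliced.
frames = list(neural_transition(generator, torch.randn(64), torch.randn(64)))
```

The "cut" here is not an instantaneous splice but a trajectory the network invents; every frame of the transition is new footage.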
The current state of GANs is shifting to include three-dimensional inputs, adding z-depth to the geometry and introducing hyperreal volumetric simulations of reality. Both mechanisms, the deepfake and the hyperreal, collapse information into a single viewing unit. Current takes this into account to speculate on a new form of editing, framing, and transitioning in cinema that echoes AI mechanisms: unbound by traditional frame-to-frame montage and instead grounded in real-time, neural-network-generated transitions. Within a livestream context, algorithms are capable of finding events within non-events, collapsing significant moments with the mundane and predicting outcomes. Current imagines how time might collapse based on a viewer's engagement, forming an algorithmically infused cinematic vocabulary.
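The livestream behavior can be illustrated with a deliberately simple sketch, not the piece's actual pipeline: a running-average novelty score stands in for whatever event-detection model is used, and the `threshold` and `momentum` parameters are hypothetical.

```python
import numpy as np

def collapse_stream(frames, threshold=0.1, momentum=0.9):
    """Keep only frames whose pixel-level novelty exceeds `threshold`."""
    baseline, kept = None, []
    for frame in frames:  # frames: iterable of float arrays scaled to [0, 1]
        if baseline is None:
            baseline = frame.astype(np.float64)
            kept.append(frame)
            continue
        novelty = np.mean(np.abs(frame - baseline))      # departure from the mundane
        baseline = momentum * baseline + (1 - momentum) * frame  # drifting baseline
        if novelty > threshold:
            kept.append(frame)  # an "event" found within the non-event
    return kept
```

Time collapses in proportion to sameness: long mundane stretches vanish while moments of change survive, a crude stand-in for the engagement-driven editing the work imagines.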