Current is a speculation on the future of broadcasting cinema. It emerges from the intersection of contemporary trends in live-streaming culture, volumetric cinema, AI deepfakes and personalised narratives. The film Current is an example of what this new cinematic experience might look and feel like within a few years, based on the convergence of these trends. As artificial intelligence increasingly moulds the clay of the cinematic image, it optimises its vocabulary to project information in a more dynamic space, embeds data in visuals without deconstructing their complexity, and directs a new way of seeing: from planar to global, flat to volumetric, personal to planetary. This information-rich space implies an economy of values, one with potential across multiple live streams, especially as artificially generated content enters the deep-learning algorithms of personalisation.
Core ideas:

PERSONALISED NARRATIVE
Some 400 hours of video are uploaded to YouTube every minute, and 1 billion hours of video are watched every day. To optimise content suggestion on YouTube, Google's deep-learning research team built one of the largest-scale industrial recommendation systems in existence. Algorithmic personalisation of content is present in most online platforms and in impression-based advertising mechanisms such as Google Ads. Users' search-engine histories, viewing data and banking statements are all taken into account by these algorithms. Netflix uses user interactions, title ratings and viewing history to group users into global communities with similar tastes and preferences. Current anticipates that the future of algorithmic curation will be linked not only to a user's archived database but also to their real-time biometric inputs. For example, detecting how long a user's gaze lingers in a specific environment allows the system to prioritise and optimise similar content for their own non-linear narrative. With such a personalised current, curated from our embodied experiences, we may no longer want to sit in an audience with 50 other people watching exactly the same content.

AI OPTIMISATION
The outsourcing of imagination to artificial intelligence can most readily be observed in the cultural phenomena of generative adversarial networks (GANs) and deepfakes. These neural networks operate in two contrasting ways: the former through competing networks, the latter through collaborative encoding. Both mechanisms recompose visuals by combining multiple data inputs, and both have the potential to generate infinitely long single takes that redefine the cinematic cut. As GANs shift to include three-dimensional inputs, they add z-depth to the geometry and introduce hyperreal volumetric simulations of reality.
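The gaze-driven prioritisation described under PERSONALISED NARRATIVE could be sketched as a simple re-weighting step. This is a toy illustration, not any platform's actual recommender: the function names, the decay parameter and the category labels are all hypothetical, and a real system would operate on learned embeddings rather than a small dictionary of scores.

```python
def update_preferences(preferences, gaze_events, decay=0.9):
    """Re-weight content-category scores from real-time gaze dwell times.

    preferences: dict mapping category -> score (the archived profile)
    gaze_events: list of (category, dwell_seconds) pairs from an eye tracker
    decay: how strongly the archived profile persists against live input
    """
    # Decay the archived profile so live biometric input can shift it.
    updated = {cat: score * decay for cat, score in preferences.items()}
    total_dwell = sum(d for _, d in gaze_events) or 1.0
    for category, dwell in gaze_events:
        # Longer gaze on a category raises its priority proportionally.
        updated[category] = updated.get(category, 0.0) + (1 - decay) * (dwell / total_dwell)
    return updated


def next_segment(preferences):
    """Pick the highest-priority category for the next stream segment."""
    return max(preferences, key=preferences.get)
```

Under this sketch, a viewer whose profile weighs "ocean" and "city" equally, but whose gaze dwells four times longer on ocean imagery, would be steered toward ocean content for the next segment of their non-linear narrative.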
Neural networks are being trained to simulate and reinvent time-based moving images through content mashups, introducing potent cultural ambiguities around questions of authenticity. At the same time, they redefine the role of the movie director: collapsing pre-production into post-production and transforming cinema into an information-based transference. Within a livestream context, algorithms are capable of finding events within non-events, collapsing significant moments into the mundane and predicting outcomes. Current imagines how time might collapse based on viewers' engagement with AI-generated content, forming an algorithmically infused cinematic vocabulary.

VOLUMETRIC CINEMA
The contemporary advent of new sensing technologies is shifting the current state of visual content into three dimensions. Through remote satellites we have transitioned from viewing the Earth as a planar map to navigating it as a three-dimensional globe. Depth sensors such as lidar, and the motion trackers used in the cinema, virtual-reality and gaming industries, have transformed navigation into an editing technique: instead of pre-edited cut-to-cut montage, we now experience navigation-based world-to-world transitions, where spaces are constructed through recursive interactions between the virtual space and the user's real-time attention and movement. The transition from two to three dimensions has enriched the image with more information. Increasingly, images and videos are embedded with spatial metadata; photogrammetry reconstructions and lidar scans of environments can be localised to GPS-specific points. When coupled with multiple real-time cameras and sensor inputs from the same location, the informationally rich space of volumetric reconstruction can provide decentralised perspectives on events. The vision systems of self-driving cars already use real-time collaborative perception to cross-check what they see with each other.
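Localising scans to GPS-specific points, as described above, can be sketched as a spatial query over tagged reconstructions. The scan records and the 50-metre radius are illustrative assumptions; the distance function is the standard haversine formula for great-circle distance between two GPS coordinates.

```python
import math


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def scans_near(scans, lat, lon, radius_m=50.0):
    """Return scans whose GPS anchor falls within radius_m of a query point."""
    return [s for s in scans if haversine_m(s["lat"], s["lon"], lat, lon) <= radius_m]
```

A query at a given coordinate would then surface every photogrammetry reconstruction or lidar scan anchored to that place, which is the precondition for layering multiple real-time perspectives onto one volumetric space.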
Within the framework of volumetric, attention-based navigation, Current speculates on the potential of this kind of collaborative vision to authenticate truth for users.

INFINITE LIVESTREAM
Did you know that an estimated 13 million people are streaming their lives right now? Self-streaming culture in Asia is growing exponentially, with hundreds of streaming agencies hiring broadcasters to perform for their virtual audiences, often augmenting their bodies in real time. On the city scale, more than 350 million video-surveillance cameras worldwide record petabytes of footage every day. In the wild, hundreds of cameras are placed in remote environments and on endangered animals, observing landscapes and their inhabitants. Meanwhile, in outer space, NASA streams a real-time view of the Earth from the International Space Station. Access to these streams has transformed the moving image into an endless current that users can step in and out of at any time. This livestreaming condition extends the human eye into inaccessible environments and non-human perspectives, all within the immediacy of the present moment. In the lineage of expanded cinema, Current asks: what might a form of expanded streaming look and feel like?
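The collaborative vision that Current invokes to authenticate truth can be reduced to a minimal consensus check across independent streams. This is a deliberately naive sketch, assuming each camera reports a single label for the same moment; real multi-sensor fusion would reason over geometry and confidence, not raw votes.

```python
from collections import Counter


def cross_check(observations, quorum=0.5):
    """Authenticate an event label by majority agreement across streams.

    observations: dict mapping stream_id -> the label that camera reports
    Returns (label, verified), where verified is True when more than
    `quorum` of the streams agree on the same label.
    """
    counts = Counter(observations.values())
    label, votes = counts.most_common(1)[0]
    return label, votes / len(observations) > quorum
```

With three cameras watching the same corner, two reporting "pedestrian" and one reporting "shadow", the majority label would be accepted; a lone dissenting or spoofed stream cannot overturn the shared account of the event.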

micro values:

attention economy

blockchain camera

predictive self

time voxel

outsourced imagination

