Generative Videoart

written by fitipeArt

11 Nov 2022 · 33 EDITIONS
1.5 TEZ

Hello world ~

My name is Filipe Britto. I'm a lens-based artist and newbie programmer, working at the intersection of video and code since 2017.

It all started during my Arts degree at UFF (Universidade Federal Fluminense), in a class about art and psychoanalysis, where we were discussing how video editing compares to the mysterious process through which dreams are dreamed. Are dreams like video edits of memories? If so, why are they so unpredictable?

Well, I already had years of experience editing videos and, from that discussion on, I got very curious about the possible relations between dreams and videos ... that same day, when I got home, I started to experiment with ways of applying unpredictability to video editing processes and importing the variable time-flow and rhythm of oneiric experiences. Two years later, this would lead me to a master's research project at the same university.

But ... first, I knew I’d need to learn how to code.

Even though I took a very hybrid "Arts" degree, with practical classes in drawing, dancing, painting, photography, acting, literature, concrete music, conceptualism ... I was focused on visual arts when I found a path into code with PureData (PD), where objects are laid out on a plane and connected via wires.

So I met FFmpeg. And FFmpeg came with ffplay. Together, they taught me that I could play videos in specific ways by calling them from the prompt with specific arguments, such as starting and ending points ... and that gave me the ability to create cuts by writing.

Here is an example of a command line that would call ffplay to find the "4.mov" file, located on my desktop, and play it from a starting point of 13 seconds for 8 seconds, looping the whole process 3 times and closing right after it's done:

ffplay -ss 13 -t 8 -loop 3 -autoexit /User/filipe/Desktop/4.mov

Then, in order to make everything unpredictable, dreamier, I would rewrite those command lines in PureData, turning the argument values and video clip names into variables whose values are chosen at random. PD would then write a string with those numbers inside and drop it into a shell, calling ffplay.

If I close a loop by connecting the "shell" object back to the squared-circle toggle at the top, this simple program keeps playing randomly chosen video files (with randomly chosen cuts) one after another. If I create a sequence of multiple groups of those same objects, it goes through the whole process a specific number of times.
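
For readers who don't use PD, the same logic can be sketched in a few lines of Python (this is not my original patch, just an illustration; the clip names and number ranges are made up):

import random
import subprocess

clips = ["1.mov", "2.mov", "3.mov", "4.mov"]   # hypothetical clip names
for _ in range(5):                             # play five random cuts, one after another
    clip = random.choice(clips)
    start = random.uniform(0, 30)              # random starting point, in seconds
    duration = random.uniform(2, 10)           # random cut length, in seconds
    subprocess.run(["ffplay", "-ss", str(start), "-t", str(duration),
                    "-autoexit", "/User/filipe/Desktop/" + clip])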

Here is how one of the first prototypes came out:

What the program is doing here is simply writing command lines with some randomly generated numbers and dropping them on a shell. At that time, I still couldn't figure out how to connect the clips directly, in dry cuts ... you might notice that between the playing actions of each command, the PD window always appears from behind.

To deal with that, my first solution was to leave a black image always open behind the videos and in front of the PD window, and it sort of worked out.

At the end of the year I joined a students' art exhibit, got myself a Raspberry Pi (RPI) and started testing how to make it run for about 8 hours a day over a week. So every night I would set up the RPI + TV facing a wall to run the codes while I was asleep, and hope it was still running by the morning. Most times it wasn't, so I'd spend the whole day trying to fix it and repeat the whole process until it worked.

A curious fact: on some of those nights, when I accidentally ran two versions of the same code at once and went to sleep, I woke up in the middle of the night with the TV flashing just like this:

I got frozen, half-asleep, staring at the flashing screen for several minutes, and thought how curious it was to be setting the television to dream at the same time I was asleep, maybe dreaming too. Could it be dreaming my dreams, and was I dreaming in code? If so, where are the electric sheep?

The next day I found out those flashes happened because the RPI's GPU was trying to play two video instances in fullscreen at the same time, and its way of dealing with that was to interleave the frames of both videos, one by one.

That day, I learned a pretty interesting effect, transition, and way of dealing with time in videos, from an awesome bug. This is the basis for what I later reproduced in Python and for the transitions in Teleport Loopr, my genesis on FxHash, coded in p5.js.
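
Just to illustrate the idea (this is not the original code, and the file names are hypothetical), interleaving the frames of two videos could be sketched in Python with OpenCV like this:

import cv2  # pip install opencv-python

cap_a = cv2.VideoCapture("first_mirror.mov")    # hypothetical file names
cap_b = cv2.VideoCapture("second_mirror.mov")

show_a = True
while True:
    cap = cap_a if show_a else cap_b
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("interleaved", frame)            # alternate frames: A, B, A, B ...
    show_a = not show_a
    if cv2.waitKey(33) & 0xFF == ord("q"):      # ~30 fps; press q to quit
        break

cap_a.release()
cap_b.release()
cv2.destroyAllWindows()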

But, before that, the two following videoarts were exhibited using the Raspberry's purposely bugging GPU, this time with two copies of the same code running one and the same video file.

A 30'' cut of the recording of "First Mirror" was my 3rd NFT minted on Tezos, through Teia: https://objkt.com/asset/hicetnunc/732900

A 60'' cut of the recording of "Second Mirror" was also minted on Tezos, through Teia: https://objkt.com/asset/hicetnunc/745316


Around 2018 / 2019, an old friend who had become a programmer, Matheus / @MatheusMortatti, taught me how to reproduce what I was doing in Python. We also got another player, MPV, which allowed me to play with speed changes, connect clips directly in dry cuts and, later, send and receive info through sockets. This gave me the ability to change attributes of a video on the fly, while it is being played.
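
I won't reproduce Matheus' code here, but the mechanism MPV offers for this is its JSON IPC: you start the player with a socket and send it one JSON command per line. A minimal sketch, assuming MPV was started with --input-ipc-server=/tmp/mpv1 (the socket path and values are just examples):

import json
import socket

def send(sock_path, command):
    # MPV's JSON IPC expects one JSON object per line
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall((json.dumps({"command": command}) + "\n").encode())

send("/tmp/mpv1", ["loadfile", "/User/filipe/Desktop/4.mov", "replace"])  # dry cut into another clip
send("/tmp/mpv1", ["set_property", "speed", 1.5])                         # change playback speed on the go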

Below is a recording of "Escape into the eternal return", the generative videoart I exhibited as my final work at graduation. It never loops, but keeps being generated as it runs and only stops when everything is turned off.

From then on, I also started to play with some photography within the videos, created many artworks and exhibited some of them:


Finally, it was using sockets that I managed to reproduce those flashing effects I got from a bug, but on better GPUs: playing two video instances at once and setting a Python script to send strings that make MPV quickly switch which instance's window is on top. That's how I created the three videoarts found here:

https://fitipe.github.io/terceiro_contato

They were created during 2020 for a virtual event organized by the public culture department of the state of Rio de Janeiro to promote art during pandemic times. For that, I made three different live transmissions of those codes running, on three consecutive weekends.
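
A minimal sketch of that window-switching logic, assuming two MPV instances were started with --input-ipc-server=/tmp/mpv1 and /tmp/mpv2 and reusing the send() helper from the sketch above (the timing values are examples, not my original ones):

import random
import time

while True:
    front = random.choice(["/tmp/mpv1", "/tmp/mpv2"])   # pick which window jumps to the front
    back = "/tmp/mpv2" if front == "/tmp/mpv1" else "/tmp/mpv1"
    send(front, ["set_property", "ontop", True])
    send(back, ["set_property", "ontop", False])
    time.sleep(random.uniform(0.05, 0.3))               # flicker rhythm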


Then in 2020 my master's research was approved and I got back to studying.

Reading neuroscience by Sidarta Ribeiro, I learned that dreams have nothing to do with randomness, but a lot to do with desire, prospection and utopian imagination. I also learned that what we see on screens, especially in the last hour before going to bed, directly affects our dreams.

Reading the real-life stories of Davi Kopenawa and the ethnographies of Bruce Albert (@Bruce_Albert), Viveiros de Castro and Barbara Glowczewski, I learned about the political dimension of dreaming and how indigenous groups, from all over the Americas to Australia's central deserts, have been putting dreams into collective exchange for countless years, and how that helps in making both routine and major life decisions.

Reading the manifestos of Patricia Reed (@AestheManag) and Laboria Cuboniks (@Xenofeminism), I learned how social media algorithms create echo chambers by sorting information homophilically. That would have been partially responsible for the fast escalation of fake news which, at the end of the line, helped a lot in electing a far-right president first in the USA, then here in Brazil.

Connecting all those thoughts, my hypothesis was: if those audiovisual virtual echo chambers are affecting the way we dream, maybe even infiltrating our dreams, could utopian imagination be in crisis partly because oneiric processes, narratives, plots and storylines are not varying?

So I started to create groups of about ten people for a ten-day practice focused on dream exchange and video creation.

And how does all that relate to generative videoart? Because during those days the participants, in their homes, were tasked with recording every day: one video, one audio and one photo. That created a growing archive shared by all of us in a GoogleDrive folder. Then, every night, I would go to that folder, re-download everything and run a randomizing program, adding the audio files that everyone recorded right after waking up, with dream reports and memories.
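
Something in the spirit of that randomizing step could look like the loose sketch below (not the actual program; the folder layout is hypothetical), since MPV can layer an external audio file over a video:

import random
import subprocess
from pathlib import Path

videos = list(Path("archive/videos").glob("*.mov"))   # the shared, growing archive
audios = list(Path("archive/audios").glob("*.wav"))   # dream reports recorded on waking up

while True:
    video = random.choice(videos)
    audio = random.choice(audios)
    subprocess.run(["mpv", "--audio-file=" + str(audio), str(video)])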

The resulting video was transmitted live for two hours on a twitch link shared with all members and, within that period, each one could watch whenever and for as long as they wanted, with one agreement, a screen-fasting agreement: that randomized video would be the last screen they saw before going to bed that night.

Why were those codes so important to these group processes? Because they create continuously edited sequences in which there is no specific beginning or ending; it's all middle, with no defined order ... so all the hierarchical relations that would naturally appear in scripted edits dissolve within a few seconds of watching. Also, of course, because of the dreamy atmosphere these unpredictable edits create.




Ok, so our far-right president abolished our culture ministry, drained our economy through multiple pandemic scandals, cut public universities' funding ... and job opportunities for videomakers like me became scarcer, at the same time as academic research started to be scrapped.

Then I found out about web3.

I first minted a bunch of photography on Polygon and deleted most of it. Then I started minting photography editions on ETH that hardly sell (opensea.io/collection/twinvolcanos), and finally found the Tezos community. With the help of Scott (@spike_0124) and the Teia Fountain, I got my first 0.5 XTZ and started to mint on Teia.

Once you are on Tezos, you'll eventually end up finding FxHash.

I first heard about it at a workshop given by Morbeck (@morbeck_art) to the Brashill Collective (@BrashillNFT), where I'm one of the artists, creating 1/1s monthly.

Then Scott took me to NullDAO (@nullDAO), where I found very nourishing discussions about generative art with really good artists active here on FxHash.

It had been more than a year since the last generative videoart I made individually and, from NullDAO on, I started to dream about transposing my Python code to p5.js, so I could keep all this video x code research going on web3.

In two months of learning and working, that all brought me to Teleport Loopr, my genesis here on the platform:

[Teleport Loopr]

Then we had elections again, Lula was thankfully elected, and I got busy defending my master's research and volunteering for the campaign: pasting posters on the streets and doing projections on the building in front of my window.

Finally, two months after Teleport Loopr, I released Runaway Loopr, my second drop in the series:

[Runaway Loopr]

With Runaway Loopr, I also dropped two 1/1s on FxHash.

I made them "one iteration only" because I didn't want variations from one piece to another. I wanted the variations within each piece, regenerating live and continuously, just like the videoarts I coded in Python.

Here they are:

[the two 1/1 pieces]


I will write a specific article about each of those projects and about the ones still to come.

For now, I can tell you there are lots of ideas popping up and video clips waiting to be used. I can also say that I'll keep focusing on the dreams subject for a while and start studying shaders as soon as we get to 2023.

(. . .)

~ written by Filipe Britto (@fitipeArt), 11/11/2022
