hydra
hydrasynth
videosynthesis

elapse: video feedback processes

written by lilcode

10 Aug 2023

elapse consists of two closed-feedback systems and one generating pattern

hydra-synth is a modular video synthesizer inspired by early analog modular video synthesizers, which (much like modular sound synths) consist of cables and modules: by stacking multiple generators, modulators and feedback loops we can create endless possibilities of composition and evolution.

as a modular videosynth, hydra's approach is framed in terms of signal flow rather than canvas or geometric composition. this approach has roots in the experimental contemporary art world since early electronic art: artists have been using synthesis, both visual and sonic, as a fundamental part of their work, navigating the path of electronic signals through various interconnected components.

the concept of signal flow continues to inspire contemporary explorations through frameworks like hydra, touchdesigner or shaders, where manipulating the flow of information creates layers of complexity, shaping both the process and the final artwork.

with that in mind, elapse is an exploration of the building blocks of videofeedback in hydra, without stacking multiple complex layers of modulation or sources: the toolbox is limited to just one source texture and one modulation layer, carefully handcrafting the intricacies of the displacement texture. i let myself wonder how, from these simple gestures, intricate and organic patterns and micropatterns emerge, without directly manipulating any pixel or texture beyond the displacement.


the code will be built iteratively, each snippet building upon prior commentaries and tips, all written as comments in the code snippets, avoiding redundancies and overexplaining, with links to the hydra editor.

displacement modulation pushes the coordinates of the pixels, their position, according to the brightness of the modulating texture, so we can use one texture to displace another texture. by doing this displacement modulation inside a closed feedback loop, motion emerges.
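a minimal sketch to see the displacement on its own, before any feedback is involved (arbitrary values, not code from elapse):

//a noise texture displaces the pixels of an oscillator texture. no feedback yet
osc(10,.1,1)
.modulate(noise(3,.2),.2) //the second argument sets how far the pixels get pushed
.out()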

a closed videofeedback loop, or no-input feedback loop, is made by routing the output of a buffer back to its input, similar to the effect produced by pointing a camera at the screen that displays it.


how to create videofeedback loops in hydra-synth:

it basically consists of the input-output routing src(o0).out(o0), plus a masked layer that passes a texture through and a modulating layer that displaces the pixels

//here we create a no-input videofeedback loop by routing a buffer's output back to its input
src(o0)
// -> here we can add image processing effects such as panning, scaling, modulating, etc. this is the displacement layer that makes motion emerge
//      .modulate(texture)
// -> here we can add a 'pass-through' masked layer that feeds a texture into the feedback. it has to be masked and threshed
//      .layer(texture.mask(texture.thresh(.5,0)))
.out(o0)

the layering is done with .layer(texture.mask(texture.thresh(.5,0))), with texture being any hydra generator: osc(), noise(), shape(), voronoi(), src().

the displacement is done with .modulate(texture.color(1,1)), with .color() defining the horizontal (red) and vertical (green) displacement.
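putting both building blocks together outside the loop, a quick illustrative sketch (arbitrary values, not from elapse):

//a voronoi texture passes through a threshed noise mask,
//and an oscillator's red channel displaces the result horizontally
solid()
.layer(voronoi(5,.3).mask(noise(4,.1).thresh(.5,0)))
.modulate(osc(6,.1).color(1,0),.05)
.out()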

quick example of a basic videofeedback patch with one 'passing' texture and two displacement layers

//this patch creates a central square that moves horizontally as a mask, with a rotating oscillator texture as color
//this colored square is passed to the feedback layer and vertically displaced with a noise texture
//and displaced in another way with 'modulateScale', which pushes the pixels inwards and outwards instead of horizontally and vertically
//this results in 'color swirls'
src(o0)
.modulateScale(noise(3,.1).color(1,0),.0125)
.modulate(noise(2,.25).pixelate(width,1).color(0,1),.01)
.layer(osc(Math.PI,.5,.5).rotate(0,.25).mask(shape(4,.025,0).scrollX(0,.1)))
.out(o0)

quick example of a basic videofeedback patch with one passing texture and fixed displacement (without modulating with another texture)

//this patch creates a circle that spins around the center and has a displacement feedback of scaling and rotating
//here the displacement is not done by a texture, but by a fixed amount
src(o0)
.rotate(.01).scale(1.015)
.layer(osc(Math.PI*2,.5,.5).rotate(0,.25).mask(shape(300,.025,0).scrollX(-.05)).rotate(0,Math.PI*2))
.out(o0)

we can displace inside the feedback with any of the modulation or geometry functions in hydra (see the sketch after this list)

GEOMETRY MANIPULATION
.scroll() // arguments are .scroll(fixed x scroll, fixed y scroll, x scroll per second, y scroll per second)
.scale() // arguments are .scale(global scale, x scale, y scale, x position, y position), positions go from 0 to 1
.rotate() // arguments are .rotate(fixed rotation, rotation per second), both measured in radians (multiples of Math.PI)

TEXTURE MODULATION // all texture modulation displaces a texture with another texture, and allows setting a global amount.
.modulate() // .modulate(texture.color(X,Y),amount), where .color(X,Y) indicates horizontal and vertical displacement
.modulateScale() // .modulateScale(texture.color(X,Y),amount), where .color(X,Y) indicates the amount of both horizontal and vertical scaling
.modulateRotate() // .modulateRotate(texture.color(X),amount), where the red channel (.color(X)) indicates the amount of rotation
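a small sketch of one geometry function and one texture modulation working inside the loop (arbitrary values, just to illustrate):

//constant horizontal drift plus texture-driven rotation inside the feedback loop
src(o0)
.scroll(0,0,.02,0) //geometry: x scroll per second
.modulateRotate(osc(2,.1).color(1,0),.1) //modulation: the red channel drives rotation
.layer(osc(10,.1,1).mask(shape(3,.2,0)))
.out(o0)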

elapse signal flow breakdown

elapse makes use of these basic building blocks and explores the emergence of micropatterns and intricate textures out of basic, inter-modulated modules

the input texture to the feedback system is the following micropattern

micro-pattern example

rn=()=>Math.random() //we set rn() as our parametrizing function; for fxhash use fxrand()
A=height/width //fix the dimension-ratio distortion
shape(4,.5,0) //creates a square shape in the middle of the screen
.scroll(.25,.25) //moves this square to the bottom-right corner, creating a '2x2' grid
.repeat(width/2,height/2) //we repeat this grid many times, turning the 2x2 grid into a tiling micropattern
.out()

this tiling micropattern needs to be masked so that only certain sections pass into the feedback loop. the mask needs to be 1-bit; any texture can be converted to 1-bit with .thresh(.5,0)

the masking texture is a custom noise function that allows setting the position of the noise texture, kind of seeding it. this allows for different noises with the same frequency and speed.

masked micro-pattern

rn=()=>Math.random()
A=height/width
//create a custom noise function that allows setting its position,
//effectively 'seeding' the noise (hydra's simplex noise texture can't be seeded,
//so this works for using multiple same-frequency noise textures)
noisenp=(freq,vel,scl,posx=rn(),posy=rn())=>noise(freq,vel).scale(scl,A,1,posx,posy)
shape(4,.5,0)
.scroll(.25,.25)
.repeat(width/2,height/2)
.mask(noisenp(23,.25,4).thresh(.75,0).pixelate(12/A,12)) //mask the micropattern with a threshed noise texture, passing only the brightest
.out()

this example is the previous masked gesture inside the feedback loop, without any displacement modulation. as pixels are only being added to the feedback loop, without any being displaced, the canvas will eventually fill up

visualization of feedback without displacement

//this is the way to set up a videofeedback loop: empty, but already feeding back.
src(o0)
.out(o0)


//with our masking gesture; the canvas will eventually fill up because there is no displacement in the feedback loop yet.
rn=()=>Math.random()
A=height/width
noisenp=(freq,vel,scl,posx=rn(),posy=rn())=>noise(freq,vel).scale(scl,A,1,posx,posy)
src(o0) //route the output of the buffer to its input. when not specified, .out() defaults to o0
.layer(shape(4,.5,0)
    .scroll(.25,.25)
    .repeat(width/2,height/2)
    .mask(noisenp(23,.25,4).thresh(.75,0).pixelate(12/A,12)))
.out()

the emergence of motion and patterns is given by the displacement map. in elapse the displacement map is two noise functions inside a .modulate(), one noise per channel: red and green, horizontal and vertical displacement.

visualization of the displacement layer. for visualization purposes this example shows vertical lines for vertical displacement and horizontal lines for horizontal displacement, red and green channels accordingly

rn=()=>Math.random()
A=height/width
noisenp=(freq,vel,scl,posx=rn(),posy=rn())=>noise(freq,vel).scale(scl,A,1,posx,posy)
solid() //black background
.add(noisenp(25,.25,4).color(1,0,0) //add a noise layer, but only its red channel (for horizontal displacement)
    .mask(noisenp(10,.5,4).brightness(1)
        .mask(shape(4,1,0).scale(1,1,.125).repeatY(height/16))
        .thresh(.125,.125))//horizontal line masking, only for visualization. the actual modulating layer is not line-masked
    ,1)
.add(noisenp(25,.25,4).color(0,1,0) //add a noise layer, but only its green channel (for vertical displacement)
    .mask(noisenp(10,.5,4).brightness(1)
        .mask(shape(4,1,0).scale(1,.125,1).repeatX(width/16))
        .thresh(.375,.125))//vertical line masking
    ,1)
.out()

this displacement map is applied in the feedback loop before the .layer(). this is a visualization of both the feedback loop (emergent patterns) and its displacement map layered on top

out of just two simplex noise textures, complex and organic patterns and motion can emerge

first feedback loop
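as a sketch, the whole first loop could be wired together like this (the modulation amount and exact parametrization in elapse differ; this just connects the previous snippets):

rn=()=>Math.random()
A=height/width
noisenp=(freq,vel,scl,posx=rn(),posy=rn())=>noise(freq,vel).scale(scl,A,1,posx,posy)
src(o0)
.modulate(
    noisenp(25,.25,4).color(1,0,0) //red channel: horizontal displacement
    .add(noisenp(25,.25,4).color(0,1,0),1) //green channel: vertical displacement
    ,.005) //assumed global displacement amount
.layer(shape(4,.5,0)
    .scroll(.25,.25)
    .repeat(width/2,height/2)
    .mask(noisenp(23,.25,4).thresh(.75,0).pixelate(12/A,12)))
.out(o0)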

the second feedback loop consists of another buffer, with the first feedback loop as input, masked in a way that only allows the outer pixel columns to pass. its displacement is only horizontal, both unipolar and bipolar (towards one side or both)

second and final feedback loop
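a hedged sketch of this second buffer (the mask shape, noise values and buffer routing are assumptions for illustration, not the actual elapse code):

//o1 feeds back on itself, displaced horizontally only (red channel),
//while the first loop (o0) enters through a mask that passes only the outer columns
src(o1)
.modulate(noise(4,.2).color(1,0,0),.01) //horizontal-only displacement
.layer(src(o0).mask(shape(4,1,0).scale(1,.9,1).invert())) //only the outer pixel columns pass through
.out(o1)
render(o1)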

hydra has an active community playing and building. some of the places to find more: instagram, the interactive docs, the discord server, and the hydra, hydrasynth, videosynthesis and videofeedback tags on fxhash and socials.

thx :)
