
Autapse


About

Multi-channel generative projection mapping. Scalable to any size and shape.

This artwork draws its inspiration from the autapse, an electrical connection a neuron forms with itself in the human brain. The structure of the code behind the artwork follows the same kind of structure used in deep learning and neural networks.

The artwork is made from several procedural, generative video layers, which keep generating new variations all the time. The videos run through multiple layers of video filters.
The algorithm collects data from an IR camera that tracks passers-by, so every passer-by slightly changes how the video filters behave.
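The idea of passers-by nudging the filter behaviour can be sketched roughly as below. The actual software behind Autapse is not public, so every name and parameter here is a hypothetical illustration, not the artwork's real code.

```python
import random

class FilterLayer:
    """One video-filter layer whose strength drifts with audience traffic."""

    def __init__(self, name, strength=0.5):
        self.name = name
        self.strength = strength  # filter intensity, clamped to 0.0 .. 1.0

    def on_passerby(self):
        # Each tracked passer-by shifts the parameter a little,
        # so the output is never quite the same twice.
        self.strength += random.uniform(-0.05, 0.05)
        self.strength = min(1.0, max(0.0, self.strength))

# Three illustrative filter layers in the chain.
layers = [FilterLayer("blur"), FilterLayer("displace"), FilterLayer("hue")]

def ir_camera_event():
    """Called once per passer-by detected by the (hypothetical) IR camera."""
    for layer in layers:
        layer.on_passerby()

for _ in range(10):  # ten people walk past the projection
    ir_camera_event()
```

After any number of passers-by, every layer's strength stays inside its valid range, while its exact value depends on who happened to walk by.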

Macro-level interactivity comes from a motion sensor that tracks the audience's hand gestures and makes the artwork's pixels start to flow around. The motion of the hand controls the direction of the pixels, creating a kind of watercolor effect. At the same time, the algorithm records the interaction, and when the hand is taken away, it feeds the recorded video back into the code, so every participant changes and develops the artwork. Although participants can change the artwork a little, the computer remains in charge, controlling most of the pixel movement.
This plays with the question of who the actual artist is: the creator, the participant, or the computer. Or is the artwork made through the synergy of them all?
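The balance described above, where the hand steers the pixels but the computer keeps most of the control, could look something like this. The function names, the `(dx, dy)` sensor reading, and the blending weight are all assumptions made for illustration.

```python
def advect_pixels(pixels, hand_velocity, influence=0.3, dt=1.0):
    """Push pixel positions partly along the hand's motion vector.

    `pixels` is a list of ((x, y), (gx, gy)) pairs: a position plus the
    artwork's own generative drift for that pixel. `hand_velocity` is the
    (dx, dy) read from a motion sensor. `influence` keeps the computer in
    charge: only a fraction of each pixel's movement follows the hand.
    """
    moved = []
    for (x, y), (gx, gy) in pixels:
        # Blend the artwork's own drift with the hand's pull.
        vx = (1 - influence) * gx + influence * hand_velocity[0]
        vy = (1 - influence) * gy + influence * hand_velocity[1]
        moved.append(((x + vx * dt, y + vy * dt), (gx, gy)))
    return moved

# Two pixels with their own drift directions; the hand sweeps upward.
pixels = [((0.0, 0.0), (1.0, 0.0)), ((5.0, 5.0), (0.0, -1.0))]
out = advect_pixels(pixels, hand_velocity=(0.0, 2.0))
```

With `influence=0.3`, the hand bends each pixel's path without overriding the generative drift, which matches the description of the computer still leading the movement.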

The sensor is interchangeable: it can be a lidar scanner, a MIDI keyboard, or even a foot pedal, depending on the venue and the current COVID-19 regulations.

Autapse can create over 100 000 000 000 different variations by itself, and with interactivity included it can produce an endless number of variations.
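Purely for a sense of scale: a figure of that size follows from quite modest parameter counts. The counts below are assumptions for illustration, not the artwork's real parameter space.

```python
# If 11 independent generative parameters each took 10 distinguishable
# values, the number of combinations alone would reach the quoted figure.
states_per_parameter = 10
parameters = 11
variations = states_per_parameter ** parameters
print(variations)  # 100000000000, i.e. 100 000 000 000
```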

Data

  • Video dimensions: 1920×1080


Author

tiainen.xyz

Finland