avatars4all

  • Workshop
How can you be anybody in Zoomspace? Recent developments in deep learning allow creating synthetic media of unprecedented quality and ease. The first-order motion model can do facial reenactment in real time, given only a single image of your desired avatar. This came not a moment too soon, as human communication was forced to move online due to the Coronavirus pandemic. Can this be an opportunity to fulfill the long-promised cybernetic utopia, where we could shed our physical shells and become whoever we wish to be? And how does this pertain to issues of privacy, identity and trust? I will review the contemporary technologies and show how I use them in my artistic and activist practices. I will present accessible online interfaces that I am developing, which make some of the latest released techniques available as fully automatic, free, online, ready-to-use yet fully customizable tools.
This is a hands-on, participatory and creative tutorial, where you will create deep-fake videos using your own materials and play with various options of becoming a live avatar, as well as recent tools for automatic video manipulation, mixing and glitching. No prior knowledge is needed. I will also introduce and demo the first available purely online solution for live real-time avatars from the webcam, and show how I try to make these state-of-the-art technologies accessible to a wide audience using Google Colab. repo: github.com/eyaler/avatars4all

Duration (minutes)

480

What is needed

Laptops, electricity, tables and chairs, good internet, projector

What the artist brings

Custom, online-accessible tools for deep-fakes, avatars, automatic background replacement, and video manipulation, mixing and glitching.


Authors

Eyal Gruss

Tel Aviv-Yafo, Israel