AInim explores the possibilities of employing artificial intelligence technologies (AIT) in arts and media productions. The project combines audio-reactive visuals with image-to-image translation by conditional adversarial networks [Isola et al. 2016] trained on hand-drawn animations by renowned artists. The integration of these technologies within artistic processes can be viewed as an artistic toolset rather than as a surrogate artist. The project is developed with the support of ART&TECHLAB, a cooperation between Animationsinstitut (Filmakademie BW) and Samsung Electronics.
Prototype realized with support of ART&TECHLAB, Filmakademie Baden-Württemberg
Live performances are often accompanied by visual elements that interact directly with the artists on stage. For certain venues, an expressive and personal style is valued more than a purely digital look. However, the expressiveness of a hand-drawn animation is out of reach in a live context because of the high production effort and time it requires. Pre-produced content, on the other hand, forces the artists to perform within a fixed time frame and limits their artistic freedom. The project AInim delivers a solution that combines the dynamic synergy between sound and motion achieved by reactive computer graphics with the analog expressiveness of hand-drawn animation. The AInim prototype was built in collaboration with Irina Rubina. Her animated short "JazzOrgie" served as the foundation for a flexible audio-reactive environment that transfers the original artistic intent to a system that allows user interaction and can be controlled entirely by the artist.
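The audio-reactive side of such an environment can be reduced to a simple principle: extract a feature from the live audio signal and map it onto a visual parameter. The following sketch (a hypothetical illustration, not the actual AInim implementation) shows one common approach, mapping a per-frame RMS amplitude envelope onto an object-scale range:

```python
import numpy as np

def rms_envelope(samples, frame_size=1024):
    """Root-mean-square amplitude per frame: a simple audio feature
    that can drive visual parameters (scale, speed, brightness)."""
    n = len(samples) // frame_size
    frames = samples[:n * frame_size].reshape(n, frame_size)
    return np.sqrt((frames ** 2).mean(axis=1))

def amplitude_to_scale(env, lo=0.5, hi=2.0):
    """Map the envelope linearly onto an object-scale range [lo, hi],
    normalized so the loudest frame reaches the maximum scale."""
    peak = env.max()
    if peak == 0:
        return np.full_like(env, lo)
    return lo + (hi - lo) * env / peak

# Example: a quiet sine frame followed by a loud one
t = np.arange(1024)
samples = np.concatenate([0.1 * np.sin(2 * np.pi * t / 64),
                          1.0 * np.sin(2 * np.pi * t / 64)])
scales = amplitude_to_scale(rms_envelope(samples))
```

In a live setup the envelope would be computed per audio buffer and fed into the rendering loop each frame; the function names and the linear mapping here are illustrative assumptions.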
The project explores the possibilities of so-called artificial intelligence (AI) in conjunction with art and artistic processes. It combines 3D-generated content that reacts to sound with an AI setup that transfers the structure of hand-drawn animations onto the 3D input. By combining the fast reaction times and precision of computer-generated graphics with the organic feel of painted animation, the best of both worlds is united. Within the experimental setup, the artists and creators of the original imagery are in direct exchange with the programming of the configuration, and the setup itself is subject to theoretical reflection on technology and artistic practice. The AI system runs a conditional adversarial network that reconstructs objects from edge maps.
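In a pix2pix-style pipeline [Isola et al. 2016], the conditioning input for the generator is an edge map extracted from each rendered frame; the trained network then reconstructs imagery in the style of the training data (here, hand-drawn animation frames). As a minimal, dependency-free sketch of the first step (assuming grayscale frames normalized to [0, 1]; the trained generator itself is not shown), an edge map can be computed with a Sobel filter:

```python
import numpy as np

def sobel_edge_map(frame, threshold=0.25):
    """Binary edge map from a grayscale frame with values in [0, 1].

    The result is the kind of conditioning input a pix2pix-style
    generator consumes to reconstruct objects from edge maps.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = frame.shape
    padded = np.pad(frame, 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Correlate the frame with both Sobel kernels
    for i in range(3):
        for j in range(3):
            patch = padded[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    magnitude = np.hypot(gx, gy)
    peak = magnitude.max()
    if peak > 0:
        magnitude /= peak
    return (magnitude > threshold).astype(np.uint8)
```

At runtime, each frame of the audio-reactive 3D render would be converted to such an edge map and passed through the trained generator; the Sobel filter here stands in for whatever edge extractor the actual production used.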
Dr. phil. Alexander König is a media theoretician and AV artist living in Berlin. He was a postdoctoral researcher at the art academy of Trondheim (part of NTNU) and works as a freelancer in the fields of real-time animation, livestream system engineering, and digital video. In 2017 the National University of Singapore (NUS) invited him as Artist in Residence at Tembusu College and as a guest lecturer in the Department of Communication & New Media. He received his Dr. phil. from the University of Fine Arts Vienna (Department of Cultural Theory, Prof. Diedrich Diederichsen). Alexander König taught semiotics and media theory at the Merz-Akademie Stuttgart and has worked closely with the Animationsinstitut of Filmakademie Ludwigsburg since 2006.
Contact: akoenig (at) media-art-theory.com