Quick facts
- Format: Live audiovisual performance
- Duration: 50 min
- On stage: 1 performer (Chi Him Chik)
- FOH (front of house): 1 operator (visual system / live control)
- Premiere: 5–6 December 2025, Schwankhalle Bremen (Neuer Saal)
- Workshop: 6 December 2025 (audience invited inside the kinetic structure)
For sound & composition details, see Chi Him Chik’s page: www.chihimchik.com
Media
Teaser
Full documentation
Idea
umbra is a live media performance built as a feedback loop between body, sound, and a computational double.
It questions the boundary between the human performer and a system that observes, remembers, and responds — a digital shadow that is at once a reflection and an agent.
At its core, umbra unfolds through three overlapping layers:
- the extractivist layer: what traces do we leave when our data is harvested — and who owns the digital double?
- the psychological layer: shadow as the repressed, drawing on Jungian and mythological archetypes
- the ontological / epistemological layer: what is a shadow, where does it begin, and how autonomous can it become?
Memory becomes an instrument: past gestures return as material for the next decisions, turning the performance into a living archive.
Experience
On stage, a motorized semi-transparent screen structure forms a 3×3m kinetic cube.
It opens and transforms during the performance, functioning alternately as container, stage, and projection surface.
The cube acts as a compositional and symbolic device: an “ideal room” of encapsulation and extraction.
When closed, it frames the performer as if inside a controlled chamber — a space where observation and capture intensify.
When it opens, shadows and body “spill” outward, shifting the choreography from containment to release — and, later, to re-capture.
Performer Chi Him Chik engages with his improvisation machine Aiii and the multimedia instrument Type-0《零式》, generating live sound and light.
Media artist Slava Romanov designs real-time visuals, depth-sensor tracking, and projection mapping — making the digital shadow appear as point-cloud bodies, particles, and AI-reinterpreted figures that answer the performer back.
The audience witnesses a fragile dialogue between body and double, shadow and source, analogue and digital — a choreography of presence and disappearance.
Composition (7 acts)
- Intro — bodyless contour → intentional encapsulation
- Development / recording gestures — exposition, absorption, classification, emerging conflict, shadow overtake, breakout
- Body as “Truth” — Type-0《零式》 performance
- Aria of umbra
- Encounter & improvisation — a multitude of reimagined twins
- Reverse aria — separation toward autonomy, then trapping the shadow back into synergy
- Outro — resolution
Visual system (real-time)
The visual layer of umbra is built as an instrument: a real-time pipeline that observes the performer, constructs a 3D shadow-body, mutates its rules, and reprojects it back into the scenography.
Two intertwined logics shape the visuals:
1) Audio-reactive immediacy
Onset-driven events, dynamics, and rhythmic impulses articulate particles, density shifts, and abrupt transitions — keeping the image tied to live physical energy.
2) Internal system states
“Physiological” values such as breath and heartbeat from Aiii act as slower, structural controls — shaping the temperament of the shadows: persistence, delay, instability, drift, and intensity.
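As a rough illustration of how these two logics can merge into a single parameter stream, here is a minimal Python sketch, not the production TouchDesigner network: fast onset impulses decay quickly, while breath- and heartbeat-like values are heavily smoothed before they shape the shadow's temperament. All names, ranges, and scalings (ShadowParams, ControlMixer, the 0..1 normalisation) are illustrative assumptions.

```python
# Illustrative only: a minimal mixer for the two control logics described above.
# Assumes the onset flux and the "physiological" inputs are already normalised to 0..1.
import numpy as np
from dataclasses import dataclass

@dataclass
class ShadowParams:
    burst: float        # fast: particle emission impulse
    density: float      # fast: point density shift
    persistence: float  # slow: how long traces linger
    drift: float        # slow: spatial instability of the shadow

class ControlMixer:
    def __init__(self, fast_decay: float = 0.85, slow_smooth: float = 0.02):
        self.fast_decay = fast_decay    # per-frame fall-off for audio impulses
        self.slow_smooth = slow_smooth  # heavy smoothing for breath / heartbeat
        self._burst = 0.0
        self._breath = 0.0
        self._heart = 0.0

    @staticmethod
    def spectral_flux(prev_mag: np.ndarray, cur_mag: np.ndarray) -> float:
        """Crude onset measure: summed positive change of the magnitude spectrum."""
        return float(np.maximum(cur_mag - prev_mag, 0.0).sum())

    def update(self, flux: float, breath: float, heartbeat: float) -> ShadowParams:
        # Fast path: onsets kick the burst value, which then decays every frame.
        self._burst = max(self._burst * self.fast_decay, min(flux, 1.0))
        # Slow path: exponential smoothing, so breath and heartbeat act as
        # structural controls rather than frame-by-frame reactions.
        a = self.slow_smooth
        self._breath = (1 - a) * self._breath + a * breath
        self._heart = (1 - a) * self._heart + a * heartbeat
        return ShadowParams(
            burst=self._burst,
            density=0.2 + 0.8 * self._burst,
            persistence=0.3 + 0.7 * self._breath,  # deeper breath: longer traces
            drift=0.1 + 0.9 * self._heart,         # faster pulse: more instability
        )
```

In a TouchDesigner context, update() would typically be called once per frame from a Script CHOP or an Execute DAT, with the outputs wired to particle and feedback parameters.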
Visual language moves between two poles:
- reduced, high-contrast white point constellations on black, flowing and mutating as a legible “data-body”
- occasional transitions into colorful, more figurative AI reinterpretations, where the same embodied stream is translated into unstable image-memory — not as background video, but as a responding agent.
Toolchain / setup: TouchDesigner + Resolume; 3× Azure Kinect; 2 PCs in the visual subsystem.
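A minimal sketch of how such a split setup could stay in sync, assuming OSC as the glue (via the python-osc library; hosts, ports, and addresses below are placeholders, not the production mapping): one process broadcasts the current system-state values to both the TouchDesigner machine and Resolume.

```python
# Illustrative glue, not the show's actual mapping: broadcast one frame of
# system-state values over OSC to both renderers (python-osc library).
from pythonosc.udp_client import SimpleUDPClient

# Hosts and ports are assumptions; TouchDesigner and Resolume both accept OSC input.
td_client = SimpleUDPClient("10.0.0.2", 9000)        # TouchDesigner (OSC In CHOP)
resolume_client = SimpleUDPClient("10.0.0.3", 7000)  # Resolume OSC input

def broadcast(params: dict) -> None:
    """Send the shared control values to both machines once per frame."""
    td_client.send_message("/umbra/shadow/persistence", params["persistence"])
    td_client.send_message("/umbra/shadow/drift", params["drift"])
    td_client.send_message("/umbra/particles/burst", params["burst"])
    # Resolume addresses depend on the composition layout; this one is only an example.
    resolume_client.send_message("/composition/master", params["density"])

broadcast({"persistence": 0.6, "drift": 0.3, "burst": 0.0, "density": 0.4})
```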
Technology
- Kinetic screen structure — semi-transparent, motorized cube (3×3m), opening/closing as an active scenographic gesture
- Depth cameras & point cloud tracking — live 3D shadow-body and motion-based interaction (a back-projection sketch follows this list)
- Type-0《零式》 — katana-inspired multimedia instrument with motion sensors, LED strip, and wireless data/audio
- Aiii — improvisation machine by Chi Him Chik, an algorithmic persona reacting across sound and control streams
- AI image synthesis (real-time) — driven by the same embodied data stream, transforming point-cloud shadows into unstable image-memory
- Shared control layer — synchronization across gesture, sound, and image through system-state signals and triggers
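For context on the depth cameras and point-cloud tracking listed above, a minimal back-projection sketch (the pinhole intrinsics are placeholders; in practice the Azure Kinect SDK supplies per-device calibration and conversion functions): each valid depth pixel is lifted into 3D, and clouds like these, merged across the three cameras, can feed the live shadow-body.

```python
# Illustrative only: back-project one depth frame into a point cloud with a
# pinhole model. fx/fy/cx/cy are placeholder values, not real calibration data.
import numpy as np

def depth_to_points(depth_mm: np.ndarray,
                    fx: float = 504.0, fy: float = 504.0,
                    cx: float = 320.0, cy: float = 288.0) -> np.ndarray:
    """(H, W) depth image in millimetres -> (N, 3) array of points in metres."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float32) / 1000.0
    valid = z > 0                       # zero depth means no measurement
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)
```

In a TouchDesigner setup this conversion would more likely run on the GPU (e.g. via the built-in Kinect Azure operators) than in Python; the sketch only shows the underlying geometry.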
Workshops
After the performance, an interactive workshop invites audience members to step inside the kinetic screen structure, try sensors and instruments, and learn how sound, visuals, and shadows are produced in real time.
The workshop fosters transparency, discussion, and shared experimentation around AI and media art.
Photo galleries
Performance
Kinetic screen (build / details)
Photos: Jimi Liu.
Presentation
Premiere: 5–6 December 2025, Schwankhalle Bremen (Neuer Saal)
Workshop: 6 December 2025
Supported by: Der Senator für Kultur Bremen, Schwankhalle Bremen, Hochschule für Künste Bremen, We Dig It!, URBANSCREEN GmbH & Co. KG.
Credits
Concept / Direction — Slava Romanov × Chi Him Chik
Performance — Chi Him Chik
Sound / Type-0《零式》 / Aiii — Chi Him Chik
Visuals / Real-time system (depth scanning, point clouds, generative visuals) — Slava Romanov
Kinetic screen design & fabrication — Slava Romanov, Juan Camilo Luque, Leonard Spillner
External Communications — Alexandra Reinig
Photo — Jimi Liu
Video documentation — Patrick Peljhan
Teaser — Chi Him Chik
Links / Contact
Instagram: @umbra_mediaart
Email: chihimchik@gmail.com / node@slavaromanov.art
A compact touring configuration is in development. Tech rider available on request.