Taipei Performing Arts Center
2024/09/05

The Dance of Technology and Arts: AI’s Transforming Role in the Theatre

In the rapidly evolving intersection of technology and the arts, artificial intelligence’s (AI) role in creative processes has become a focal point for leading innovators. This discussion, the fourth session of this year’s AAPPAC Annual Gathering, delves into how AI is transforming artistic practices across disciplines. Moderated by Ariel Yonzon, Associate Artistic Director of the Production & Exhibition Department at the Cultural Center of the Philippines (CCP), the session features insights from Pierre Caessa of Google Arts & Culture, Kathy Hong of Cloud Gate, choreographer Hiroaki Umeda, and Chieh-Hua Hsieh of Anarchy Dance Theatre. Each speaker explores how AI can expand creative possibilities while shaping the future of art, navigating both the challenges and the opportunities presented by this collaboration between the digital and the physical.

How AI is Expanding Creative Possibilities: Pierre Caessa on Google Arts & Culture's Pioneering Projects


With an introduction by the moderator Ariel Yonzon, the session opened with a pre-recorded video from Pierre Caessa, Program Manager of Google Arts & Culture. Caessa began by posing the question of how artist innovation and AI could intersect, while summarizing over a decade of Google Arts & Culture’s work aimed at “giving a stronger voice online for cultural organizations” and “bringing organizations the tools they need.” He emphasized how technology’s ability to reveal what’s invisible to the naked eye offers art professionals new ways to tell stories.

 

Caessa proudly announced that Google Arts & Culture has partnered with more than 3,000 cultural organizations across 80+ countries. He likened the platform to “a daily dose of inspiration in [people’s] pocket,” noting how it makes sharing and accessing resources easy and free through an app, website, and even a game still in development. The platform not only provides insights from leading experts but also allows people to explore cultures from diverse lenses.

 

Since 2018, Google Arts & Culture has been exploring how AI can contribute to the cultural ecosystem, particularly at the intersection of AI and artistic expression. One notable collaboration with the Centre Pompidou in Paris involved a project that speculated on what artist Wassily Kandinsky might have been hearing while painting the iconic Yellow-Red-Blue, with a feature allowing audiences to create their own interpretations. Another early initiative explored how to engage people with art portraits by using AI to match participants’ selfies with portraits from a vast collection spanning over 3,000 museums. Caessa shared his personal experience of being matched with a portrait from a Spanish museum he hadn’t known, highlighting the idea of expanding horizons.

 

He then focused on a unique project developed in collaboration with British choreographer Wayne McGregor for the dance production Living Archive. Drawing from McGregor’s extensive video documentation of his choreography, the AI algorithm was designed not to mimic, but to generate new, original movement sequences based on his archived works.

 

In addition to McGregor using this AI system to further his choreographic practice, the system was also made accessible to the public. Anyone interested can interact with the AI by simply opening their webcam and making a movement, such as waving an arm. The model then detects the motion and matches it with movements from McGregor’s archive, allowing users to create their own dance sequence.
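The article does not detail how the public tool works internally, but the interaction it describes, detecting a pose from the webcam and finding the closest movement in McGregor’s archive, can be sketched as a nearest-neighbour search over pose vectors. The joint layout, array shapes, and data below are invented for illustration and are not from the actual Living Archive system:

```python
import numpy as np

def match_movement(live_pose, archive, k=1):
    """Return indices of the k archived poses closest to the live pose.

    live_pose: flattened (x, y) joint coordinates from one webcam frame
    archive:   2-D array, one flattened pose per row (the movement archive)
    """
    # Euclidean distance between the live pose and every archived pose
    dists = np.linalg.norm(archive - live_pose, axis=1)
    return np.argsort(dists)[:k]

# Toy archive of three 4-joint poses (x, y per joint, so 8 numbers each)
archive = np.array([
    [0, 0, 1, 0, 0, 1, 1, 1],   # pose A
    [5, 5, 6, 5, 5, 6, 6, 6],   # pose B
    [9, 0, 9, 1, 8, 0, 8, 1],   # pose C
], dtype=float)

# A live pose close to pose B, e.g. captured mid-wave
live = np.array([5.1, 5.0, 6.0, 5.2, 5.0, 6.1, 6.0, 6.0])
print(match_movement(live, archive))  # pose B (index 1) is the nearest match
```

A production system would match short movement sequences rather than single frames, but the principle of retrieving archived choreography by proximity is the same.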

 

The session concluded by reflecting on how projects like Wayne McGregor’s AI-driven choreography exemplify the potential for technology to democratize creative processes, enabling artists and the public alike to engage with artistic archives in novel, dynamic ways.

Cloud Gate’s AI Journey: Kathy Hong on the Dance Troupe’s Latest Work, Waves


The next presenter, Kathy Hong, Executive Director of the Cloud Gate Culture and Arts Foundation, shared behind-the-scenes insights into the dance troupe’s latest work, Waves, which marks the first time Cloud Gate has integrated AI technology into its performances. The exploration began with a collaboration between Tsung-Lung Cheng, Cloud Gate’s choreographer and Artistic Director, and renowned Japanese new media artist Daito Manabe, best known for his technical work on the visuals for the Tokyo Olympics’ closing ceremony.

 

Early dialogues about the production started in 2021 during the height of the pandemic, but it wasn’t until 2022 that Cheng met Manabe in Tokyo to fully dive into the creative process. Kathy explained that the team quickly recognized the significant gap between their understanding of technology and the possibilities of AI. However, they saw the immense potential AI offered, particularly its ability to perform movements beyond human physical limitations. This idea inspired the concept of a “13th dancer,” a virtual figure that could push the boundaries of choreography alongside human performers.

 

In February 2023, Manabe visited Taiwan to collect data from Cloud Gate’s dancers, but even before that, the team had been feeding his program video clips of their performances to build a movement database. This database was then used to generate visuals and music for Waves, transforming the dancers’ movements into digital representations for stage display. 

Although AI played a key role in shaping the creative process, the team opted not to use live AI during performances. Instead, they finalized the visuals and music before each show to ensure touring feasibility. This decision allowed Cloud Gate to maintain the flexibility needed for the limited setup and rehearsal time at each venue.

 

Beyond the stage, the team introduced an interactive AI component for audiences at the National Theatre in Taipei. A mega LED screen in the foyer allowed audience members to engage with AI-generated visuals of their own body movements before and after the show, giving the public a deeper understanding of the creative process and how AI influenced the work.

 

Working with AI also posed communication challenges, particularly between artists and engineers. Kathy highlighted the learning curve as the two groups adapted to each other’s language. Choreographers often describe ideas using adjectives, while engineers work with precise measurements. For instance, she joked that when Cheng requested a 60-second fade-out, the engineer needed clarification on exactly when and how the fade should begin. 
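Hong’s fade-out anecdote captures the translation problem well: a request like “a 60-second fade-out” only becomes unambiguous once it is written as a function of time, with an explicit start point and curve shape, which is exactly what the engineer had to ask for. A hypothetical linear version (the parameter names and the choice of a linear curve are illustrative, not from the production):

```python
def fade_out(t, start=0.0, duration=60.0):
    """Audio/visual level (1.0 = full, 0.0 = silent) at time t seconds.

    The choreographer's "60-second fade-out" is pinned down by two
    parameters the engineer needs: when it begins and how long it runs.
    """
    if t <= start:
        return 1.0
    if t >= start + duration:
        return 0.0
    return 1.0 - (t - start) / duration

print(fade_out(30))  # halfway through a fade starting at t=0 -> 0.5
```

A real cue might use an exponential or eased curve instead of a linear one, which is precisely the kind of detail that adjectives alone cannot convey.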

 

In July, Cloud Gate experimented with audience interaction during an outdoor performance of Waves. The final segment, Movement, a 16-minute piece choreographed by Tsung-Lung Cheng, involved collaboration with the Taiwan and Hong Kong artist group Dimension Plus and musician Lim Giong as the DJ. Dimension Plus captured city elements and used live cameras to track the movements of both dancers and the audience, feeding this data into their program, which then blended the visuals. The visuals, triggered by Lim Giong’s live music, created a dynamic interaction between performers and AI-generated content.

 

For Hong, working with AI proved unpredictable, as the technology introduces variability, making it difficult to control in real time. Despite these challenges, the team remains open-minded, exploring AI as a creative partner and progressing gradually with each step.

Hiroaki Umeda on Integrating Technology and Dance: From Early Experiments to Recent Innovations


Choreographer Hiroaki Umeda shared his journey of merging technology and dance, beginning with his exploration of visual and musical elements during his university studies in photography. Although photography was not his main focus later on, it paved the way for his gradual transition into dance, where visual components have become integral to his work. Since purchasing his first computer in 2000, Umeda has used technology to blend music, visuals, and choreography into cohesive performances.

 

Umeda described his approach as “layers of choreographic creation,” aimed at realizing his “aesthetic” through various “concepts” such as movement and the stream of forces. To achieve this, different “ideas,” such as projection and light colors, are employed. In 2011, he collaborated with YCAM (Yamaguchi Center for Arts and Media) on the project Holistic Strata, integrating motion sensors into dance. However, early technology proved too slow for the dynamic nature of dance, leading him to experiment with newer systems that responded better to human movement. He continued by recounting his use of advanced motion sensors in his 2015 work Intensional Particle. Collaborating with programmers, he developed a fluid simulation program that incorporated real-time data from his movements to create spatial effects. During the COVID-19 pandemic in 2021, he restaged Intensional Particle as a live-streamed performance, combining visuals and dance through his phone.

 

Umeda is captivated by the concept of choreographing the human body as an extension of nature, exploring how technology can enhance this integration. He is particularly interested in incorporating natural elements, such as water, into his choreography, inspired by the fact that the human body is over 50% water. This gave birth to the installation Choreograph 1 – Water (2023), as well as the dance production Assimilating (2023). In these works, Umeda uses sound and strobe lights to generate unique movements in water. The varying frequencies of sound and the dynamic visual effects create distinct and unusual patterns. This approach enables him to engage with nature through his art, integrating natural elements with technological innovations and deepening his exploration of dance and choreography.

 

He also addressed the challenges of working with both human performers and technology. While technology introduces its own uncontrollable elements, he finds the unpredictability of human interactions to be a greater challenge. Umeda illustrated this with two new pieces: the solo performance Assimilating, which he choreographed, programmed, and performed, and Moving State 1, created in collaboration with dancers from his Somatic Field Project.

 

Reflecting on his creative process, Umeda highlighted his commitment to using technology to enhance dance while preserving its sensory and emotional elements, though he has not yet incorporated AI. He has explored the potential of technology for generating and analyzing movement data but emphasized that technology complements rather than replaces human creativity. His work strives to balance technological advancements with the profound human experience of performance.

Dancing with Technology: Anarchy Dance Theatre’s Exploration with AI


Chieh-Hua Hsieh, the Artistic Director of Anarchy Dance Theatre, reflected on the company’s journey of merging dance with technology since 2010, pushing the boundaries of performing arts. He explained that their early experiments aimed to find a balance between technologies—particularly video—and live performance. Although video can often feel dominant, the dancer remains at the core of their work. Achieving harmony between these two elements has been a long-term focus for the company.

 

In their earlier works, such as Seventh Sense (2011) and Second Body (2015), this vision took shape. Audiences were drawn to focus on the dancers, as the video projections were intricately tied to their actions. This created a unique fusion of visual and sensory experiences, where the video was not merely a backdrop but an extension of the dancers' movements. By 2019, the company introduced The Eternal Straight Line, exploring how uncertain elements like smoke could become a part of the performance, adding a dynamic and spontaneous quality. As dancers engaged with the smoke, this semi-controlled environment enhanced the vitality of the show, further deepening the complex interplay between human performers and technological elements.

 

In 2020, the company shifted its focus toward AI, especially the rapid advancements in Generative Adversarial Networks (GANs). Hsieh saw this as a turning point in AI development, as GANs allowed AI to learn and generate new outputs without relying solely on pre-set instructions, suggesting that AI was no longer just a tool but a collaborator capable of innovation. Through a partnership with the Industrial Technology Research Institute, they explored AI’s capacity to recognize and interpret human movement. By adjusting the dancers’ movements to confuse or challenge the AI’s recognition systems, they probed the boundaries of AI’s understanding of the human body. This process became not just a technical test but an artistic investigation into the potential of the human form.

 

Hsieh introduced CyborgEros (2023), a collaboration with the National Taichung Theater and technology company IF Plus, which employed the skeletal recognition system OpenPose to create digital representations of the human body on stage. The system detected dancers’ postures and movements, generating virtual skeletons in response. Hsieh also highlighted their use of BigPix AI, which involved feeding thousands of dancer photos into an AI system to generate a virtual dancer. This digital counterpart allowed the real dancers to interact with their virtual reflections, offering new insights into their own movements and bodies. He likened this process to a “mirror theory,” in which the dancers, and even the audience, rediscovered themselves through their digital reflections. The AI does not merely project a virtual image but engages in real-time interaction with the dancers, making the performance richer and more multi-dimensional.
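The “virtual skeleton” step can be pictured as follows: a pose-estimation system such as OpenPose returns (x, y) keypoints per joint for each frame, and the stage visual then connects predefined joint pairs (“bones”) into drawable segments. The joint names, bone list, and coordinates below are invented for illustration and are not from the CyborgEros production:

```python
# One detected pose: joint name -> normalized (x, y) position on screen
keypoints = {
    "head": (0.50, 0.10), "neck": (0.50, 0.20),
    "l_hand": (0.30, 0.45), "r_hand": (0.70, 0.45),
    "hip": (0.50, 0.55),
}

# Which joints to connect when drawing the virtual skeleton
BONES = [("head", "neck"), ("neck", "l_hand"), ("neck", "r_hand"), ("neck", "hip")]

def skeleton_segments(kp, bones):
    """Turn detected keypoints into line segments a renderer can draw.

    Joints the detector missed in this frame are simply skipped, so the
    skeleton degrades gracefully rather than breaking.
    """
    return [(kp[a], kp[b]) for a, b in bones if a in kp and b in kp]

for seg in skeleton_segments(keypoints, BONES):
    print(seg)
```

In a live setting this runs per frame, so the virtual skeleton moves with the dancer in real time, which is what enables the kind of interaction between performer and reflection that Hsieh describes.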

 

Looking to the future, Hsieh is excited about the growing possibilities of AI in dance. He believes that rapid advancements in AI, particularly in deep learning, will help artists better understand complex movements and creative processes. He is also eager to explore how different cultures can approach the intersection of art and technology, using AI to deepen our mutual understanding in the digital age.

 

In response to moderator Yonzon’s concluding question on AI’s potential to replace human creativity in dance, Hsieh asserted that AI will not replace human artists but will change how art is created and perceived. He highlighted that while AI can aid decision-making, it cannot replicate the human intuition and vision essential for originality. Hong noted that although AI can process extensive data, it is the human creator’s unique creativity and vision that define truly original work. Umeda added that while AI can support technical aspects, it lacks the nuanced taste and aesthetic sense inherent to human creators. Therefore, as art and cultural practitioners continue to “Mind the Gap” between human and AI contributions, more possibilities will surely “Lead the Path” toward innovative and collaborative artistic futures.

Written by I-Ying Liu, Photo by Grace Lin