Photo credit: Fu Xiao by Romanus Fuhrmann

Xiao Fu

Xiao Fu completed both her Master’s and Doctoral degrees in Composition at the HfMT Hamburg. In 2025, supported by Musikfonds, she served as a visiting scholar at Stanford University. Her work lies at the intersection of acoustic and electronic music, exploring gesture-based instrumental control, interactive performance, and the application of artificial intelligence in music creation. Her compositions – ranging from solo to chamber works with live electronics – have been presented at major international festivals. She is currently developing a transmedia project that integrates contemporary music, sign-language poetry, and dance. With the help of artificial intelligence, the Deaf performer can not only perceive the music but also generate new sounds in real time with her movements.

Questions and Answers

3 FACTS

1. I compose for both humans and machines — sometimes they misunderstand each other, and that’s where my music begins.

2. I believe silence also has a rhythm; we just need to listen closer.

3. My six-year-old daughter once said, “I like your music,” even though that piece had no tonality at all — just fragments of sound.

11 QUESTIONS

1. What is the biggest inspiration for your music?
Moments of transition — between people, languages, sounds, and systems. I’m fascinated by how we move from one state to another, and how fragile that movement can sound.

2. How and when did you get into making music?
I started with classical piano as a child. When I applied for university, I actually failed the piano entrance exam — so I switched to Sound Production instead. At that time, I had absolutely no idea what that even meant. Looking back now, I’m so grateful it happened — it opened a completely new world for me.

3. How will you integrate artificial intelligence into your project and which specific AI technologies or tools are you using?
In this project, I use AI as a collaborator rather than a tool. Working in Max/MSP with AI tools such as FluCoMa, and with techniques like neural synthesis and gesture mapping, I explore how AI can listen back to humans, translating motion and physical expression into evolving sound textures. It’s less about control and more about conversation.
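To illustrate the kind of gesture mapping described here, the following is a minimal sketch in Python, not the actual Max/MSP patch. It assumes a hypothetical stream of 3-D position samples from a motion sensor, extracts two simple features (movement energy and jitter), and scales them into two synthesis parameters (amplitude and filter cutoff); all function names and parameter ranges are illustrative.

```python
# Hypothetical gesture-mapping sketch: motion features -> synthesis parameters.
import math

def gesture_features(positions):
    """Compute mean movement energy and jitter from (x, y, z) samples."""
    if len(positions) < 2:
        return 0.0, 0.0
    # Displacement between consecutive samples.
    steps = [math.dist(a, b) for a, b in zip(positions, positions[1:])]
    energy = sum(steps) / len(steps)  # mean displacement per frame
    # Jitter: standard deviation of the step sizes (shakiness of the motion).
    jitter = math.sqrt(sum((s - energy) ** 2 for s in steps) / len(steps))
    return energy, jitter

def map_to_sound(energy, jitter, max_energy=1.0, max_jitter=0.5):
    """Scale features into amplitude (0..1) and filter cutoff (200 Hz..5 kHz)."""
    amplitude = min(energy / max_energy, 1.0)
    cutoff = 200.0 + 4800.0 * min(jitter / max_jitter, 1.0)
    return amplitude, cutoff

# Example: a slow, steady sweep along the x-axis -> low amplitude, dark timbre.
trace = [(0.1 * i, 0.0, 0.0) for i in range(10)]
amp, cutoff = map_to_sound(*gesture_features(trace))
```

In a live setting, a patch like this would run per control frame, with the resulting parameters smoothed before being sent to the synthesis engine; the point is the mapping idea, not the specific scaling.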

4. What do you associate with Berlin?
A city full of energy and poetry.

5. What’s your favorite place in your town?
Home. It’s where I live, work, and create — a place where chaos and precision coexist in perfect balance.

6. If there was no music in the world, what would you do instead?
Coding. I would probably not be a great programmer, but I love the logic behind it — it feels a bit like composing with silence and structure.

7. What was the last record/music you bought or listened to?
A piece I just finished myself — for accordion, guitar, and electronics. We had the last rehearsal this morning, and it’s still echoing in my head.

8. Who would you most like to collaborate with?
Right now, it’s Rita, who is working with me — a Deaf performer whose movement and presence inspire me to rethink what listening can mean.

9. What was your best gig (as performer or spectator)?
You won’t believe it — it was 13 years ago, an immersive sound installation where we stretched 1,000 strings across the performance space. Three harps and one virtual harp resonated with the room itself. It felt like the architecture was breathing with us.

10. How important is technology to your creative process?
Essential — but not as decoration. Technology is a way of listening differently, not just producing more.

11. How do you plan to present the results of your research at Radialsystem?
Through a performance that merges sound and light. AI-generated soundscapes will respond in real time to performers’ gestures and physiological data, creating a constantly shifting environment that listens as much as it speaks.