Session 8

Presenters: Alex White, Federico Camara Halac

2:30pm: Alex White, "Rethinking the analog-digital dichotomy through the lens of contemporary modular synthesiser practice"

Higher Ground is an electronic work based on the acoustic properties of the bells of St Mary's Cathedral, Sydney. The attraction to cathedral bells grew from a desire to explore the sound spectra of church bells and their connection with people and urban spaces. This became even more important with the onset of the COVID-19 pandemic in 2020.

Higher Ground is an exploration in composing music based on the sound spectra of cathedral bells. Initially, the bells themselves were to be included in the piece, but because of COVID-19 restrictions on bellringers, the approach shifted to measuring the spectral features of the bells, with particular analysis of the frequencies, volume and density of the sound. Other considerations included texture, in particular the pairing of instruments and sounds to create an interpretive, ethereal response incorporating intuitive expressive elements.

As mentioned, Higher Ground was written using both intuitive and informed choices. Informed choices of fundamental pitches and desired harmonics were made through analysis of the acoustic properties of the ringing bells, using a recording provided by St Mary's Cathedral. This analysis, combined with intuitive choices, formed the foundational compositional material for Higher Ground.
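The kind of spectral analysis described above can be illustrated with a short sketch. The partial ratios below are generic bell-like values chosen for the example, not measurements from the St Mary's recording:

```python
import numpy as np

# Illustrative sketch only: recovering candidate partials from a bell
# recording via a magnitude spectrum. A synthetic "bell" stands in for
# the actual St Mary's recording; its partial ratios are generic values.
sr = 44100
t = np.arange(0, 1.0, 1 / sr)

# Hum note at 440 Hz plus typical inharmonic bell partials,
# with decreasing amplitude per partial.
partial_ratios = [1.0, 2.0, 2.4, 3.0, 4.5]
signal = sum(np.sin(2 * np.pi * 440 * r * t) / (i + 1)
             for i, r in enumerate(partial_ratios))

# Windowed magnitude spectrum over the positive frequencies.
spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
freqs = np.fft.rfftfreq(len(signal), 1 / sr)

# Candidate partials: local maxima above a relative threshold.
mids = spectrum[1:-1]
mask = (mids > spectrum[:-2]) & (mids > spectrum[2:]) \
       & (mids > 0.1 * spectrum.max())
peaks = [round(f) for f in freqs[1:-1][mask]]
print(peaks)
```

The printed peak frequencies correspond to the hum note and the inharmonic partials of the synthetic bell; on a real recording, these peaks would supply the fundamental pitches and desired harmonics mentioned above.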

This composition presented many challenges and opportunities to grow as a composer. These included translating a large compositional concept from its theoretical vision to paper, growing the seed through ongoing practical research and planning, and discovering unique compositional techniques to achieve the vision for the piece.

3:00pm: Federico Camara Halac, "DreamSound: Deep Activation Layer Sonification"

Deep Learning in raw audio-based musical contexts has received much attention in the last three years, and it remains a rich field for exploration. At the intersection of Deep Learning and audio, several attempts have been made to translate deep networks from image to audio applications.

DreamSound, an adaptation of the Deep Dream project into an audio-based musical context, is presented as the first such translation to use an audio-trained network, YAMNet. The results are analyzed in detail, and future work is discussed.
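The core mechanism behind Deep Dream, and hence behind DreamSound, is gradient ascent on the input: the signal is nudged so that a chosen activation inside a trained network grows. A minimal sketch of this idea, using a tiny random two-layer network as a stand-in for YAMNet (which DreamSound actually uses), might look like:

```python
import numpy as np

# Toy illustration of the Deep Dream mechanism: adjust an input buffer by
# gradient ascent so a chosen network activation increases. The two-layer
# random network below is only a stand-in for an audio-trained network
# such as YAMNet; it is not DreamSound's implementation.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((16, 64)) * 0.1   # hidden-layer weights
W2 = rng.standard_normal(16) * 0.1         # readout for the target unit

x = rng.standard_normal(64) * 0.01         # the "audio" buffer we sculpt

def activation(x):
    h = np.tanh(W1 @ x)                    # hidden layer
    return float(W2 @ h)                   # scalar target activation

def grad(x):
    h = np.tanh(W1 @ x)
    # Backprop through tanh: d(act)/dx = W1^T (W2 * (1 - h^2))
    return W1.T @ (W2 * (1 - h ** 2))

before = activation(x)
for _ in range(100):                       # gradient ascent on the input
    x += 0.1 * grad(x)
after = activation(x)
print(before, after)
```

After the loop, the target activation is larger than before: the input has been "dreamed" toward whatever excites that unit. In DreamSound, the same principle is applied to audio with YAMNet's activation layers, which is what turns classification features into sound material.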