‘No sweeter sound than my own name’ – Reflections and Commentary on Reading Audio-Score Drafts

I have been working with Beavan Flanagan to develop a new vocal piece for me titled no sweeter sound than my own name. Beavan and I are exploring the use of an audio score, and along the way I have been writing a few reflections on the experience of ‘reading’ the score. A couple of weeks ago Beavan and I met up again to try out an updated audio-score. What follows are my reflections on practicing the original audio-score and reading the updated audio-score. My reflections are accompanied by commentary from Beavan and video excerpts of the run-throughs we did during our workshops.

Practice sessions with the first version of the audio-score from 7 July 2015

SCORE DETAILS
This score had only the breathing and humming parameters. I was sent individual tracks for each parameter and arranged the playback so that the breath parameter was sent to both ears and the pitch parameter (hums) to my left ear. I listened to the audio-score with traditional (air-conduction) headphones.
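For readers curious about this routing, here is a minimal sketch of how the playback arrangement might be reproduced, assuming the two parameter tracks exist as mono WAV files (the filenames below are hypothetical placeholders, not the actual score files):

```python
import numpy as np
import soundfile as sf  # assumes the parameter tracks exist as mono WAV files

# Hypothetical filenames for the two parameter tracks
breath, sr = sf.read("breath_parameter.wav")   # mono breath-instruction track
hums, sr2 = sf.read("hum_parameter.wav")       # mono pitch (hum) track
assert sr == sr2, "tracks should share a sample rate"

# Pad the shorter track so both have equal length
n = max(len(breath), len(hums))
breath = np.pad(breath, (0, n - len(breath)))
hums = np.pad(hums, (0, n - len(hums)))

# Breath to both ears, hums to the left ear only
left = breath + hums
right = breath
stereo = np.column_stack([left, right])

# Normalise to avoid clipping where the two tracks sum
stereo /= max(1.0, np.abs(stereo).max())
sf.write("audio_score_stereo.wav", stereo, sr)
```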

REFLECTIONS/COMMENTARY
MB: The speed of pitch change is too fast relative to the type of articulation that the hum pitches currently have. The articulation is segmented and step-like, although at the speed at which some passages move, the result can be more melismatic when executed. This is not a problem, but I think it may be beneficial to represent this difference in the sound more clearly. This is a matter of communicating the aural information as directly as possible, to decrease the amount of superfluous information being ‘read’ (or sonically attended to) at any one time.

BF: When I was making the pitched parameter, I found that, through happenstance, the scales and aleatoric rhythms sounded like some of the chanting that we have been hearing emerging from the mosque across the street. Thus, when listening to the score, I think I tended to hear ‘beyond’ the step-like and somewhat mechanical nature of the sine tones, to something richer. Apparently, though, this was not conveyed when passing the score on to Michael, which should not be surprising, as he perhaps did not have the associations with chanting that I did. I plan on adding small glides between each stepwise motion in order to convey the desired effect more clearly, which should also make the humming easier to perform.
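As an illustration of the glide idea (not Beavan’s actual synthesis method), here is a short sketch that renders a stepwise hum line as sine tones with a brief linear glide inserted between consecutive pitches; the pitch list, step duration, and glide duration are hypothetical placeholders:

```python
import numpy as np

def hum_line(freqs, step_dur=1.0, glide_dur=0.08, sr=44100):
    """Render a stepwise hum line as sine tones, inserting a short
    linear glide between consecutive pitches (hypothetical parameters)."""
    inst_freq = []
    for i, f in enumerate(freqs):
        hold = step_dur - (glide_dur if i < len(freqs) - 1 else 0.0)
        inst_freq.append(np.full(int(hold * sr), f))
        if i < len(freqs) - 1:
            # linear glide from this pitch to the next
            inst_freq.append(np.linspace(f, freqs[i + 1], int(glide_dur * sr)))
    inst_freq = np.concatenate(inst_freq)
    phase = 2 * np.pi * np.cumsum(inst_freq) / sr  # integrate frequency to phase
    return 0.3 * np.sin(phase)

# e.g. a few chant-like steps (hypothetical pitches in Hz)
signal = hum_line([196.0, 220.0, 233.1, 220.0])
```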

MB: Manually removing any redundancies of ‘hold’ in the breath track would be ideal.

BF: Obviously, this was a glitch.

MB: It is often difficult to remember whether I am breathing in or out after information from the pitch parameter drops away and breathing is supposed to still be happening. My natural instinct is to assume a ‘hold’ position. Perhaps, if there were subtly different types of background noise (different colors such as white, grey, pink, brown, etc.) that persist beyond the ‘in’ or ‘out’ instruction (possibly reflecting the different types of noise produced by the ingressive or egressive flow of air?), it might be easier to return to the background parameter of breathing once a second-order parameter terminates. I’m imagining this as being akin to a suspension of the technique or instruction carried out through the aid of a subtle sonic cue.
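As a rough sketch of this noise-coloration idea (the exponents, durations, and level here are hypothetical placeholders, not values from the score), noise of different ‘colors’ can be generated by shaping a white-noise spectrum as 1/f raised to a chosen power:

```python
import numpy as np

def colored_noise(n, exponent, sr=44100, seed=0):
    """Generate noise with a 1/f**exponent power spectrum:
    0 = white, 1 = pink, 2 = brown(ish). A rough sketch only."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n, d=1.0 / sr)
    freqs[0] = freqs[1]                      # avoid division by zero at DC
    spectrum /= freqs ** (exponent / 2.0)    # shape amplitude as 1/f^(exp/2)
    noise = np.fft.irfft(spectrum, n)
    return noise / np.abs(noise).max() * 0.1  # keep it quiet, background level

# Hypothetical mapping: e.g. pink noise under 'in', brown noise under 'out'
in_bed = colored_noise(44100 * 4, exponent=1.0)   # pink
out_bed = colored_noise(44100 * 4, exponent=2.0)  # brown
```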

BF: Agreed, although part of me also thinks that you could get used to this with little effort – personally I found this to be a relatively easy obstacle to overcome during my own experience of trying to perform the score. However, adding the background noise would indeed render the score more representative of the resulting sound. Also, if it makes things easier for you, then I’m all for it.

MB: Careful consideration of the vertical alignment of each parameter is paramount. In general, I feel as though I need the text instruction of when/how to breathe before any other information. I need to know in what manner the airflow is operating so that I can activate that breath. So, this means that there needs to be a slight gap or delay between the text instructions and the pitched sounds.

BF: Yes, and I suspect that this will be useful when it comes to adding more layers/parameters. This brings up the issue of ‘parsing’ the different parameters in the performer’s mind: on a written score this is generally done using spatial layout, i.e. simultaneous events are stacked vertically on the page, allowing for visual separation. This is less straightforward in the case of audio, and I think the idea of using slight delays between the introduction of different parameters which should ultimately be performed simultaneously could potentially work quite well. This would give you time to listen to each parameter in isolation immediately before performing all of the parameters simultaneously. In this fashion the audio information could also be given in advance, in order to give you a bit of time to understand it before vocalising – something to experiment with anyways.

The other thing to try out is spatializing the audio score across the surface of your head, using several transducers attached to different spots on the head. That way each parameter could be sent to a specific location, allowing you to ‘parse’ the information effectively. We currently have no idea whether or not this will work however…see below for further discussion of this.
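A rough sketch of how both ideas – staggered parameter onsets and one transducer per parameter – might be prototyped is given below; the parameter names, channel positions, offset, and output filename are all hypothetical and would need to be matched to whatever multi-output interface drives the transducers:

```python
import numpy as np
import soundfile as sf

# Hypothetical channel positions for the transducers:
# 0 = left mastoid, 1 = right mastoid, 2 = forehead, 3 = crown.
CHANNEL_MAP = {"breath": 0, "hums": 1, "text": 2, "noise_bed": 3}
ONSET_OFFSET = 0.5  # seconds between the introduction of successive parameters

def build_score(parameters, sr=44100):
    """Stagger the onset of each parameter slightly, then place each one
    on its own output channel so a multi-output interface can feed one
    bone-conduction transducer per parameter."""
    delayed = {}
    for i, (name, sig) in enumerate(parameters.items()):
        pad = int(i * ONSET_OFFSET * sr)
        delayed[name] = np.concatenate([np.zeros(pad), sig])
    n = max(len(sig) for sig in delayed.values())
    out = np.zeros((n, len(CHANNEL_MAP)))
    for name, sig in delayed.items():
        out[:len(sig), CHANNEL_MAP[name]] = sig
    sf.write("audio_score_multichannel.wav", out, sr)
    return out
```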

Hearing/reading the updated audio score from 7 August 2015

SCORE DETAILS
The score had five parameters. Four parameters persist globally across the piece and are understood by me to be foundational background processes/material. The fifth parameter was a computer-generated string of ‘text’ (gibberish – but still 1960s masculinity gibberish!) that I am simply to imitate and read naturally.

VIDEO DETAILS
The first video excerpt is of me reading the updated score for the first time; the second video is of another reading of the same score the next day. The score has changed throughout the collaboration process, but its end form will be static/deterministic and able to be read by other performers.

TECHNICAL CONSIDERATIONS
MB: During these two recordings Beavan and I are working with the piezo microphone attached externally to my throat and two bone-conduction transducers placed behind my earlobes. In the first video I wear a beanie hat that provides pressure to the bone-conduction transducers as a way of obtaining a perceptually stronger signal-to-noise ratio and reducing audio(-score) bleed. In the second video, I tie a scarf around my head with the bone-conduction transducers underneath as a way of applying a greater degree of pressure to my head. As a result, my ability to hear the score is improved, but both approaches are still problematic, as it is intended that the audio coming from the score not be perceived by the audience. [As a side note: I think it could potentially be interesting to have a small amount of bleed-through that the audience may hear. There is something about moving in and out of multiple parameters and the extreme locality of the sound that I think makes for a provocative and rewarding listening/watching experience for an audience member.]

BF: I have decided that I really don’t want the bleed. It’s just too distracting – the whole point really in making the audio score was to make its presence felt as little as possible. If I wanted the score to be present during the performance I would have just written one down.

MB: One possible solution that Beavan and I will be considering for the future is the use of a swimmer’s cap to hold the bone-conduction transducers in place. The transducers have also been encased in Sugru (a patented, multi-purpose, non-slumping silicone rubber that resembles modelling clay). We speculate that a benefit of using bone-conduction transducers is the possibility of locating the sounds at different points beyond a stereo (left-right) placement. This is especially relevant in terms of a tactile and physical engagement with the frequencies that are being sent to the transducers. With the end piece having somewhere in the estimated range of 7–8 parameters potentially interacting at once or in close succession to one another, this distributed and spatialized placement of bone-conduction transducers could be useful in directing my listening/reading attention around the score.

BF: I am still concerned about bleed. I know that the tighter the transducers are on your head, the less sound gets transmitted into the air; however, it remains to be seen whether or not the swimming cap will do the job of covering up the sound.

MB: Perhaps placing a thin layer of acoustic foam between the transducers and the swimming cap could eliminate the bleeding…

REFLECTIONS ON THE TWO RUN-THROUGHS
MB: I feel as though the score can be played at a much quieter volume. I may just need some time to settle into the lower amplitude of the score at the beginning of each performance. A count-in similar to the one you provided before the playback of the piece could be useful to incorporate. I would be careful, though, about the use of numbers at a certain tempo. Maybe a single beep to signify that I need to begin listening and that the score has been activated. After that beep, any amount of time can pass, in which my listening recalibrates to my aural environment in an attempt to quiet my mind’s ear. Once that amount of time has passed, the score begins and I perform.
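As a sketch of what such a cue could look like (the frequency, duration, and settling time are hypothetical placeholders), a single short beep followed by a stretch of silence could simply be prepended to the score playback:

```python
import numpy as np

def cue_beep(sr=44100, freq=880.0, dur=0.2, settle_time=5.0):
    """A single short beep followed by silence in which the performer's
    listening can settle before the score proper begins.
    All values are hypothetical placeholders."""
    t = np.arange(int(dur * sr)) / sr
    beep = 0.2 * np.sin(2 * np.pi * freq * t)
    # short fade in/out to avoid clicks
    fade = int(0.01 * sr)
    env = np.ones_like(beep)
    env[:fade] = np.linspace(0, 1, fade)
    env[-fade:] = np.linspace(1, 0, fade)
    return np.concatenate([beep * env, np.zeros(int(settle_time * sr))])

# prepend to the score: score_with_cue = np.concatenate([cue_beep(), score])
```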

One possible reason for keeping the audio-score as something that may be barely overheard is that it makes the surface-level activity of my performance more ‘active’. What I mean is, the surface can at times seem very flat (in terms of a kind of dynamic between actions), and the slight presence of the audio score seems to slightly ‘amplify’ the dynamic of interaction in performance – especially as an element that I respond to.

BF: Flatness is good; I want flatness, objectification, distance…these are all very important aspects of the piece. I don’t want the piece to be about your response to a score; I want you to be an object on stage from which sound is emerging. Even if the audience knows that you are the performer, it should in fact be unclear that it is you making the sound, hence the closed mouth and the absence of gesture and facial expression. There should be equal measures of distance and proximity – distance created by the electronic mediation of the sound, and proximity caused by the amplification of the mechanics of the voice box.

MB: After I reach an extreme of either ‘in’ or ‘out’ as a result of the breath score, and then have a medium-length or long ‘hold’, I find that I somehow physically “reset.” It is as if I unfreeze physically – as though I’m cheating… my eyes wander. This to me is a sign that my attention has been distracted or that I am becoming conscious of myself performing. It is difficult at this point to determine whether this is happening through a lack of concentration on my part, or whether it is because the holds are not possible to sustain for the prescribed durations and some slippage occurs. I suspect it is the former, and will be working to reduce this tendency.

At 7:50 in the second recording, something stops working musically.

I think I should lower my eyes during performance. The dead-ahead gaze is somehow off to me, and I tend to wander from it a bit. I think a downward gaze would be easier to sustain. Maybe m.b.v. shoe-gaze-like?

BF: I like the idea of a wandering gaze actually…not intentionally however. It’s as if there is a human being, somewhere hidden, or the remnants of a human being…

MB: I like this comment from Beavan: “In the ‘hold’ you hear the heartbeat”

The ‘hold’ is a powerful instruction. It is a cancellation of intentionally activated sound – a verbal instruction that freezes or paralyses bodily motion.

I am also finding that the breath parameter is splitting into two independent streams: 1) the ‘in’ + ‘out’ and 2) the ‘hold’. This is partially because I am starting to imagine sounds that will eventually make their way into future versions of the score. Specifically, I am thinking about the use of background noise-coloration attached to and associated with inward and outward flows of breath, with an absence of noise-coloration during the ‘holds’. To me, it is almost as if I am having sonic hallucinations, the result of the personal discourse and dialogue with Beavan, which manifest as ‘future sounds not-yet-present heard’.

BF: As we discussed, I think the ‘holds’ need to occur less frequently…their regularity somewhat neutralises them, and I think if they were more rare events they would be more ‘poignant’ somehow. But then again, perhaps ‘poignancy’ is not a desired effect, if I am wanting to create a flat piece…

MB: In both read-throughs I read only twelve of the twenty minutes of the score. This was because, after the introduction of the computer-generated speech at around 10 minutes, there is no new audio material in the score. In the future, though, I will continue to perform for the whole 20 minutes, even if it feels like the piece has run its course at an earlier point. I get the feeling that something about the nature of this piece could lend itself to pushing beyond what seems like a neat and ‘natural’ form – emphasizing an aesthetics of flatness, perhaps.

More information on this project can be found at the project’s hub.
