
10/12/18 – Blog Post 11 – Reflection

The research process could be seen as a never-ending journey of discovery. Cliché though that may sound, it is definitely true. In this period of nearly three months, I have not only learned plenty of things I didn’t know before, but in doing so I have also unearthed even more questions whose answers I want to pursue in the future. Since it is decidedly impossible to know absolutely everything about a subject as broad as sound, that process of answering questions and generating new ones seems unlikely to end at any point, for which I am actually quite glad.


The goal of my research blog is to document this journey as far as it goes, with a view to becoming a better composer and sound designer. ‘Better’ means a whole host of different things to me: more effective, more economical, more widely influenced, more original, more recognised, and so on. Ultimately, the audio I produce after continuously conducting relevant research should be of a higher quality than if I had spent that time doing something else! Though that seems obvious now, when I started this blog I think I was too focussed on the ‘what’ and didn’t really address the ‘why’ until I started seeing how my research was influencing me. The whole point of research started to make sense to me in a new light at that point, and with this vision in mind, I aspire to continue using this blog as a way to collect my thoughts, to look back on as a reminder of what I’ve learned and why it is worth remembering.


Research is an exciting and often unpredictable practice to engage with; I could dig up something that I didn’t know before and gain that knowledge for myself, or I could dig up a route to something that nobody knew before and gain that knowledge on behalf of everybody. One day it would be great to build upon the research the experts in my field have published, so that I can push the boundaries of our knowledge of sound even further, rather than just my own.


The key for me has been to listen to as many different voices as possible. The more varied the sources, the more well-rounded I believe the results of research are. That doesn’t necessarily mean every source brings knowledge directly; indeed, some can challenge answers and facts which you thought were set in stone, and thus cause more confusion in the short term. However, the ‘can of worms’ method of generating new questions that I have been employing has been extremely beneficial to my own understanding in the long run, and will continue to be until I run out of rabbit holes.

7/12/18 – Blog Post 10 – Reaper, and Music Editing/Sound Design Tips

As part of a university assignment I have been learning how to use a Digital Audio Workstation (DAW) called Reaper, through YouTube tutorials, hands-on practice, and the advice of Josh Smith, a Senior Audio Designer at Splash Damage. Some of its functionality is directly comparable to that of Logic Pro X, my usual DAW of choice; however, there are quite a few differences which give Reaper pros and cons in comparison to Logic. As such, it has been a useful piece of software to research, because I would now consider using Reaper over Logic in certain scenarios.


The most obvious tool I have found useful in Reaper is its automatic crossfade function. If you move two audio clips on the same track so that they overlap, a visible and editable crossfade curve is generated between them so that they transition seamlessly into each other, fading out the volume of one clip while fading in the other simultaneously. Although crossfading is also possible in Logic, it is not automatic as it is in Reaper, so it is simpler and quicker to achieve in the latter. As a result, Reaper would now be my preferred DAW for situations where complex blending of two elements is often required, such as editing a music file or an ambience recording.

[Image: Screenshot from a Reaper project, showing two crossfades blending together three separate sections of music]
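
Under the hood a crossfade is simple: over the overlap region, the outgoing clip’s gain ramps down while the incoming clip’s gain ramps up. Below is a minimal NumPy sketch of an equal-power crossfade (my own illustration of the idea, not Reaper’s actual implementation):

    import numpy as np

    def equal_power_crossfade(clip_a, clip_b, overlap):
        """Join two mono clips, blending the tail of clip_a into the head
        of clip_b over `overlap` samples."""
        t = np.linspace(0.0, 1.0, overlap)
        fade_out = np.cos(t * np.pi / 2)  # outgoing clip ramps down
        fade_in = np.sin(t * np.pi / 2)   # incoming clip ramps up
        blended = clip_a[-overlap:] * fade_out + clip_b[:overlap] * fade_in
        return np.concatenate([clip_a[:-overlap], blended, clip_b[overlap:]])

Because cos² + sin² = 1, the summed power stays constant across the fade, avoiding the dip in loudness that two straight linear fades can cause.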

Another difference is the set of effects plug-ins which comes as standard when Reaper is downloaded. One I have found particularly useful is the ReaFIR equalisation plug-in, which provides a quick and easy way to make extreme frequency cuts and boosts, allowing you to shape a sound very effectively. The reason I use the word “extreme” is that I’m accustomed to Logic’s EQ plug-in, which works well but is better at smoother, less drastic frequency control. Below, I attempted to recreate an EQ setting from ReaFIR in Logic’s EQ, just to demonstrate this visually in a side-by-side comparison:

[Image: Screenshot of the ReaFIR EQ plug-in in Reaper]
[Image: Screenshot of Logic’s standard EQ plug-in, attempting to duplicate the ReaFIR settings]

As shown, the curve of the EQ is more severe in Reaper than it is in Logic, despite attempts to force Logic’s EQ to work to the same extreme as Reaper’s. The advantage of sharper frequency control is precision, and the advantage of smoother frequency control is subtlety, so I will be accordingly selective about when Reaper’s EQ is more appropriate to use than Logic’s.
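
I don’t know exactly how ReaFIR is implemented internally, but the classic way to turn a freely drawn frequency response into a filter is the frequency-sampling FIR method, sketched below in Python as a hedged illustration of the general technique; with enough taps, filters designed this way can produce the kind of near-vertical cuts shown above:

    import numpy as np

    def fir_from_drawn_response(gains_db, n_taps=1024):
        """Design a linear-phase FIR filter from a 'drawn' magnitude response.
        `gains_db` lists the desired gain at evenly spaced points from
        0 Hz up to the Nyquist frequency."""
        drawn = np.linspace(0.0, 1.0, len(gains_db))
        bins = np.linspace(0.0, 1.0, n_taps // 2 + 1)
        magnitude = 10.0 ** (np.interp(bins, drawn, gains_db) / 20.0)
        kernel = np.fft.irfft(magnitude)        # zero-phase impulse response
        kernel = np.roll(kernel, n_taps // 2)   # shift to make it causal
        return kernel * np.hanning(n_taps)      # window to reduce ripple

    # A drastic notch in the middle of the spectrum:
    kernel = fir_from_drawn_response([0, 0, -60, -60, 0, 0])
    # filtered = np.convolve(dry_signal, kernel)  # apply to any mono signal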


Reaper shows how powerful it is with its video engine too. All the basics of video editing are present, so Reaper could in theory compete with Windows Movie Maker or iMovie, not just with other DAWs. Couple that with advanced audio editing as its primary function, plus a licence that is cheap for individuals (and a full version that can be evaluated for free), and it is easy to see why this workstation is quickly becoming an industry favourite. I don’t find Reaper’s general User Experience (UX) as pleasing as Logic’s, so I feel that Logic will remain my number one DAW, but I look forward to seeing what Reaper can do for me in the future whenever Logic isn’t best suited to the task at hand.

A couple of lectures from Josh Smith informed not only my knowledge of Reaper’s functionality, but also my approach to music editing and sound design in general. Hearing an expert from the game audio industry talk specifically about my work and the work of my peers was absolutely invaluable. For example, he stressed that “dialogue is king” (i.e. the most important part of a mix), which seems obvious but is completely worth remembering. As a sound designer, it is so easy to get wrapped up in all the minutiae and sound effects that one forgets to bring the dialogue forward so that the words are clear and comprehensible.


Josh also talked about reverb in trailers specifically. He made an interesting point that the narrative of a trailer is more important than conveying a realistic environment. He suggested that in a trailer, reverb should be used sparingly and only for a specific reason, rather than to simulate the space the sounds are occurring in. This is especially important if it’s a busy trailer with lots of short punchy sound effects, because reverb automatically pushes sounds into the background of a mix, when conversely they should all be heard in their full glory in the foreground. Similarly, a lot of the time a realistic ambience isn’t necessary in a trailer either, since an ambience layer will compete with the sound effects more than it will help them stand out.


One final tip from him was about realistic explosion sounds. When the ears pick up an explosion in real life, the sound distorts at the onset as they receive the shockwave of air. Josh suggested that a small crunch sound, such as the sound of biting into a crisp, layered underneath the beginning of an explosion noise will elevate it to a more realistic level by simulating that distortion of the blast.
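
The layering itself only takes a few lines to try out; here is a quick Python sketch of the idea (the file names are hypothetical, and both recordings are assumed to be mono at the same sample rate):

    import numpy as np
    import soundfile as sf

    explosion, rate = sf.read("explosion.wav")  # hypothetical file names;
    crunch, _ = sf.read("crunch.wav")           # e.g. a bite into a crisp

    layered = explosion.copy()
    n = min(len(crunch), len(explosion))
    layered[:n] += 0.5 * crunch[:n]        # tuck the crunch under the attack
    layered /= np.max(np.abs(layered))     # normalise to prevent clipping
    sf.write("explosion_layered.wav", layered, rate)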



2/12/18 – Blog Post 9 – ‘Arrival’ Soundtrack and Vertical Re-orchestration

I have developed a particular fascination with the soundtrack for the film Arrival (2016), composed by the late Jóhann Jóhannsson. The music follows a recent trend of what Kristopher Tapley calls a crossing of the “sonic barrier” (Tapley, 2016): blurring the line between the concepts of ‘music’ and ‘sound design’, which had traditionally been treated as separate disciplines.


I enjoyed reading and hearing what Jóhannsson himself had to say about his soundtrack at the time. A ‘behind the scenes’ video on YouTube shows a sizeable 16-track analog tape loop reeling around the room, and it is explained that much of the film’s sound texture was created by layering up sustained piano notes over and over again on this loop. There was, of course, once a time when tape loops were the ONLY way to record sound, but nowadays the majority of audio specialists record digitally for convenience. In spite of this, the Arrival composer is quoted as saying, “There are circular motifs in the film – the logogram the aliens use, their written language. So I wanted to work with loops.” (Burlingame, 2016). The analog technique and ‘circular motifs’ seemed to inspire Jóhannsson to compose in a certain way, using the 16-track limitation to build a thick texture rather than focussing on melody or rhythm as traditional composers might. This way of thinking could be seen as more a trait of sound design than of composition, because it handles music more ‘constructively’ than ‘compositionally’.


He chose to use vocals because the film’s focus is on language and communication, but again the voices were used texturally: “using vowels, sounds, the voice as a textural instrument” (Burlingame, 2016). Additionally, he used simple blocks of wood as soft percussion layers at times, and minimal orchestral parts for yet more ‘drone’ layers. All of these layers together amount to an eerie, cyclic, ethereal, and otherworldly audio backdrop for the film, supporting the visuals in exactly the way a soundtrack should.

[Image: The poster for Arrival next to a photo of Jóhann Jóhannsson (retrieved from http://songexploder.net/arrival)]

I have also engaged with my research into video game immersion on a practical level recently. In my job at the microgaming company I often compose dynamic music, as is the custom for most recent video games, using a technique called horizontal re-sequencing, where tracks transition into one another and adapt to the player’s situation. However, I had never dabbled in the other dynamic music technique, known as vertical re-orchestration, until now. Hans-Peter Gasselseder states that this technique “adds and removes separate instrument layers in accordance to the portrayed intensity of the scene and adapts its content and expressive arousal characteristics to the actions of the player” (Gasselseder, 2014). On reading another paper by Gasselseder for my critical analysis, I stumbled across that simple binary explanation of how dynamic music can be achieved, and questioned why I had only ever used one of the two solutions. By lucky coincidence, an opportunity to use vertical re-orchestration presented itself at work: five boss battles in a row called for short musical loops which grow in intensity every time a player progresses further. The table below shows the idea visually.

[Image: Screenshot from my audio guide, explaining my vertical re-orchestration idea to the game developers]

I made a plan and a first draft, and in this video (https://www.youtube.com/watch?v=6PSSuDKhUAQ) I compiled each short stem I wrote into one linear track, so it is easy to hear each new layer entering and the intensity building. The opening track is the music for the menu screen; at 0:14 ‘Boss1_stem.mp3’ begins, and the structure follows what the table above demonstrates. At present I feel that the idea is conceptually good, but the intensity should build faster, as only a fraction of players will progress beyond the third boss, and the first three stems build the intensity only slightly in comparison to the fourth and fifth. Therefore, to improve my first attempt at vertical re-orchestration, I will experiment with bringing the percussion and solo violin in earlier, or add more to the first few stems to make the building intensity audible and obvious from the outset. All in all I believe this was a successful first step towards mastering vertical re-orchestration, which will come with further practice and research into how other composers achieve it.
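
To make the mechanism concrete, here is a minimal sketch of the logic as I understand it, in plain Python rather than the game engine’s real code: every stem loops in sync for the whole battle, and the boss number simply decides which layers are audible. A real implementation would fade layers in and out rather than switching them abruptly.

    class VerticalScore:
        """All stems loop in sync; intensity unmutes the first `level` layers."""

        def __init__(self, stems):
            self.stems = stems              # ordered from base layer upwards
            self.gains = [0.0] * len(stems)

        def set_intensity(self, level):
            for i in range(len(self.stems)):
                self.gains[i] = 1.0 if i < level else 0.0
            return list(zip(self.stems, self.gains))

    score = VerticalScore(["Boss%d_stem.mp3" % n for n in range(1, 6)])
    print(score.set_intensity(3))  # third boss: first three layers audible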


Sources:


  • Burlingame, J., “Going Where No Scores Have Gone Before”, Variety; Los Angeles, vol. 334, pp. 40-43, 2016
     

  • Clark, K., “Arrival | Behind the Scenes | Eternal Recurrence | The Score”, Extra View YouTube Channel, https://www.youtube.com/watch?v=O_zpYNcXces, 2017, last accessed 5/12/18
     

  • Gasselseder, H., “Dynamic Music and Immersion in the Action-Adventure; An Empirical Investigation”, Proceedings of the 9th Audio Mostly: A Conference on Interaction With Sound, Article No. 28, 2014
     

  • Tapley, K., “Composers Cross the Sonic Barrier”, Variety; Los Angeles, vol. 333, pp. 113-114, 2016

25/11/18 – Blog Post 8 – Ambience Recordings, Production Libraries, ‘Chirality’

Since capturing ambience recordings of my own, I have been interested to see whether there is any merit in sending them to production libraries for sale. I was aware that entire CDs of nature sounds, such as the example from Audible below, are available to buy as aids for relaxation and meditation, and have also been reported to help manage stress and concentration. Further, the majority of audiovisual media includes ambiences of some sort in its soundtrack to make scene settings more realistic. So there is definitely a market for the purchase of ambiences, in the same way as there is a market for the purchase of conventional music.

[Image: Screenshot from the Audible website, showing information for an album of nature sounds]

However, despite an intensive search, there don’t appear to be any production libraries online with an “ambience” section; they might have ‘ambient music’, but not ambiences in the sense of recordings of spaces. There are sample packs of field recordings available, such as this one from Luftrum: https://www.luftrum.com/free-field-recordings/. Unfortunately, one is then downloading hundreds of sounds (taking up lots of digital storage space) when often one is just looking for a handful. The modern role of a production library is to “address the need for convenient, cost-effective music that is tailor-made” for the “1,000-plus television channels and round-the-clock advertising” (Raine, 2014), a description which would nowadays also cover the huge amount of content on on-demand services such as Netflix and Amazon Prime. It is therefore surprising to me, especially given the ever-blurring line between music and sound design, that production libraries haven’t started including a section for ambiences alongside their more typical ‘epic’, ‘dark’, ‘trailer’, and ‘feel-good’ search terms. In my opinion it would be wise to jump on this trend and either suggest the notion to a production library, or even establish a new production library dedicated to ambience recordings of all types.


The only thing remotely close to what I was searching for was a non-profit online project called Nature Soundmap. It contains a wide range of field recordings from all over the world, collected into one place by voluntary contributors, and some of them have links to where they can be downloaded or bought from those contributors. As such, the individual licences for the usage of those ambiences differ. This differs from a production library, where payment is always split between the library itself and the owner of the intellectual property, and where buying the audio file always grants the customer the right to use that track in their project.

[Image: Screenshot from the Nature Soundmap website, showing information for an example track]

This research could set in motion a line of enquiry into the possibility of founding a new production library, looking from a music business perspective and aiming to hit the gap in the market that is evident from the research undertaken so far.


It is interesting to see how reflecting upon my research activities as a whole is impacting my practical work. For a recent university assignment I was asked to create the soundtrack for a short animation called ‘Chirality’, approaching the audio from a ‘sound design’ rather than a ‘musical’ direction. This assignment is now submitted, so I’d like to share how I used the fruits of my research in the task. My submission can be found here: https://www.youtube.com/watch?v=swh-cfy7nTA.


The Jurassic Park T-Rex paddock attack scene immediately came to mind when I was encouraged to focus on sound design more than conventional music. I remember the way tension was induced by the lack of music and intense foley, as though the sense of hearing had been sharpened and every little detail was audible, even while heavy footstep sounds dominated the foreground. Similarly in ‘Chirality’, a heavy animal’s footsteps as well as crashing rubble make up a large portion of the overall mix, so I made sure the other important sounds still had room to be heard. It was an especially difficult challenge to do so in the cave setting, as the reverb applied to the bass sounds made them even harder to keep under control.


I found that the second half of the sound design seemed quite flat and one-dimensional to begin with, so I attempted to match the varying levels of intensity in the visuals by making the audio follow a similar curve. Using a couple of non-diegetic tonal elements helped to fill in any gaps when the texture needed thickening, effectively integrating hints of conventional music with the purpose of supporting the sound design, as I was being encouraged to do by my university tutor. I also incorporated Michel Chion‘s concept of microrhythms (Chion, 1994), with a chirping cricket providing part of the ambience in the first section, and that same cricket sped up and layered while the small creature is hanging off the edge of the canyon, forming a layer akin to an atonal tremolo string section to convey the tension of that moment.
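
The sped-up cricket was done in my DAW, but the underlying trick is plain resampling: playing a recording back faster raises its pitch and quickens its rhythm at once, which is what pushes the layered chirps towards that tremolo quality. A rough Python sketch (hypothetical file name, mono recording assumed):

    import numpy as np
    import soundfile as sf

    cricket, rate = sf.read("cricket.wav")  # hypothetical mono recording
    factor = 3.0                            # 3x faster and higher, tape-style

    positions = np.arange(0, len(cricket) - 1, factor)
    fast = np.interp(positions, np.arange(len(cricket)), cricket)

    # Layer offset copies to thicken the chirps into a tremolo-like bed
    bed = np.zeros(len(fast) + 1600)
    for k, offset in enumerate((0, 700, 1600)):
        bed[offset:offset + len(fast)] += fast * (0.8 ** k)
    sf.write("cricket_tremolo.wav", bed / np.max(np.abs(bed)), rate)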

Lastly, it really helped to book a Critical Listening room for ninety minutes to listen to my audio on a very high-quality sound system. While in there it was easier to hear where I’d overlooked a couple of minor problems, which I was then able to fix, and it was also useful to just sit and listen to the audio in its own right, bringing Pierre Schaeffer‘s theories on ‘reduced listening’ (Schaeffer, 2017) into play again. Listening to the sounds without any visual stimulus made it easier to focus on them, and I mastered the final audio track based on that period of critical listening.


Sources:


  • Anderson, M., “Nature Soundmap”, http://www.naturesoundmap.com/, last accessed 25/11/18
     

  • Chion, M., “Audio-Vision”, Columbia University Press, 1994
     

  • Katz, D., “Audible”, http://www.audible.co.uk/, last accessed 25/11/18
     

  • Luftrum, “Luftrum Free Field Recordings”, https://www.luftrum.com/free-field-recordings/, last accessed 25/11/18
     

  • MacMillan, A., “Why Nature Sounds Help You Relax, According To Science”, http://www.health.com/, 2017, last accessed 25/11/18
     

  • Raine, M., “Writing For Production Libraries”, Canadian Musician, Vol. 36, 2014
     

  • Schaeffer, P., “Treatise on Musical Objects: An Essay Across Disciplines”, University of California Press, 2017

12/11/18 – Blog Post 7 – The Research Process, ‘The Birds’, Video Game Immersion

As I am about halfway through my research and enquiry topic, it seems appropriate to reflect briefly upon the process so far. What I have noticed is that the topic encourages me to actively seek out peer-reviewed sources about subjects I’m interested in, which is useful because it means I’m actually reading about sound and adding another well of information to my usual research. Typically I’d spend so much time just practicing and listening that I wouldn’t pick up a book or journal to read an expert’s view on elements of my craft, which seems a waste now that I am setting time aside to do just that. Having a blog to collect my reflections on my research has been invaluable too; it really helps to organise my thoughts and ask myself the question, “so what did I actually get from that?”. Where there is a wealth of (often overwhelming) incoming information, the blog allows me to process that input and convert it into an output, filing everything in an order which makes sense of whatever I have researched. Lastly, another benefit has been the now very direct lines of enquiry that I’m taking. Whereas before I would be interested and passionate about a very vague topic, I am now homing in on more specific areas which I find interesting. All of this is helping me to become a well-rounded “expert” on music and sound design, not just a musician/sound designer.


As part of my electroacoustic music research, I have recently read a chapter by Randolph Jordan about the use of electroacoustic music in film, with references to the work of Michel Chion and Pierre Schaeffer. The chapter debates whether electroacoustic music is effective in the context of visual media. The ayes to the right see the potential for what Chion calls “added value”: an effective electroacoustic soundscape can bring a certain quality to how a scene is perceived, according to the director’s purpose, in exactly the same manner as traditional music would. One example is the jungle ambience I discussed from the film ‘Elephant’ (2003) earlier in this blog – that could be seen as electroacoustic music, with definite added value in its audiovisual counterpoint.


The noes to the left argue that electroacoustic music is a) to be listened to in a reduced way, as intended by Pierre Schaeffer, and b) to be experienced through loudspeakers. This seems pedantic to me, but attaching electroacoustic music to visuals does invariably give it semantic meaning which audioviewers cannot ignore. Listening to a sound for its meaning is technically selective listening rather than reduced listening, so can it really be called electroacoustic music while part of a soundtrack, if reduced listening is impossible in that context? Similarly, films are only heard through loudspeakers at home if that home has a stereo or surround sound system, and since electroacoustic music relies so heavily on the loudspeaker medium for its effectiveness, it is argued that watching a film with electroacoustic music through TV speakers alone is highly detrimental to the sonic experience.


An example of a film which uses electroacoustic music is ‘The Birds’ (1963), so I have been researching that specific case study. Oskar Sala used an early form of synthesiser called the Trautonium to create the sound of the birds (and many other sounds) in the film, usurping Hitchcock‘s usual choice of Bernard Herrmann to create the ‘score’. Although it is not published anywhere precisely how Sala achieved such a feat, his intention could arguably be described as musical, rather than, as Herrmann put it, not “musical at all” (Wierzbicki, 2008). The soundtrack album available on iTunes certainly argues that case!

[Image: Screenshot from the iTunes app, showing the track listing for The Birds’ soundtrack album]

The line between music and sound design is becoming more and more accepted as a blurry one, because although some things are clearly music and some are clearly sound design, examples like ‘The Birds’ can debatably be defined as both. As composer Patrick Kirst is quoted as saying, “the tradition that comes from a pitch-oriented score has been replaced by a sound-oriented world. Sound is not just a carrier of pitch anymore; it has its own character and personality” (Tapley, 2016).


Finally, I read an extremely interesting study related to my interest in the concept of spatial presence, and generally in how to create an immersive game experience with audio. The source is called “Re-sequencing the ludic orchestra: Evaluating the immersive effects of dynamic music and situational context in video games“, and I have decided that I will cover it in greater detail in my critical analysis. In short, the study looks at dynamic music and compares its immersive effects in the same game situation with non-dynamic low arousal- and high arousal-potential music, using data collected from players of the level with these music types applied. Interestingly, the concept of spatial presence (or ‘self-location’) is described as only one parameter of what contributes to an overall sense of immersion. The other variables in the investigation are ‘imaginary and sensory immersion’ (empathetic response to the narrative), ‘flow’ (enjoying performing the individual tasks as a result of being absorbed in the action as a whole), ‘suspension of disbelief’ (how real it felt), and ‘possible actions’ (how interactive it was). It had not previously occurred to me that immersion was more than feeling as though you were actually in the game; as it turns out there is so much more to it than that! See my critical analysis for more on this topic.


Sources:


  • Gasselseder, H., “Re-sequencing the ludic orchestra: Evaluating the immersive effects of dynamic music and situational context in video games”, Lecture Notes in Computer Science, 2015
     

  • Hitchcock, A., “The Birds”, Universal Pictures, 1963
     

  • Jordan, R., “Case Study: Film Sound, Acoustic Ecology and Performance in Electroacoustic Music”, Edinburgh Scholarship Online, 2007
     

  • Tapley, K., “Composers Cross the Sonic Barrier”, Variety; Los Angeles, Vol. 333, pp. 113-114, 2016
     

  • Wierzbicki, J., “Shrieks, Flutters, and Vocal Curtains: electronic sound/electronic music in Hitchcock’s The Birds”, Music and the Moving Image, 2008

31/10/18 – Blog Post 6 – ‘3 Schwestern’ Ambience Recording, Schaeffer’s Four Listening Modes, Impulse Responses

Recently I went on a business trip to Berlin with the games company I work for. The only sound equipment I took with me was my Zoom H4 portable recorder, mobile phone, laptop, and headphones. With such limited resources, I was even more determined while I was there to practice my craft via Pierre Schaeffer‘s proposed ‘listening modes’.


Schaeffer lists these modes in his Treatise on Musical Objects: An Essay Across Disciplines: ‘ouïr’ (passive hearing), ‘écouter’ (active listening), ‘comprendre’ (selective listening), and ‘entendre’ (reduced listening). As a composer and sound designer, the ability to listen is my most powerful tool, and it therefore follows that I should be more switched on to the selective and reduced listening modes. ‘Selective’ refers to a focused form of listening, such as understanding what a person is saying in a noisy room, and ‘reduced’ refers to hearing and analysing a sound exclusively “as a sound”, disassociated from any connotations or meaning that sound has.


For example, one thing I did notice was that the morning birds in Berlin sound much more abrasive than they do in Hatfield where I live, by which I mean they occupy more of the high-mid frequency range of human hearing, between 500 and 2,000 Hz. Analysing the sound in this way is both selective and reduced – selective because I recognised and inferred meaning from the sound, reduced because I analysed the sound objectively in terms of its frequencies in comparison to another sound.


I managed to capture a really detailed ambience recording inside a restaurant called ‘3 Schwestern’ in Kreuzberg, using my Zoom H4 (follow this link to hear it: https://www.youtube.com/watch?v=e3_8eADlQno). Listening selectively, one can hear the violin and guitar duo playing a Django Reinhardt-esque tune in the gypsy jazz style. It is difficult to pick out the words being said by the people in the recording, but one can at least notice a mix of genders and of distances from the microphone (foreground and background), as well as the occasional clink of tableware. Without knowing the context of the recording, one could probably infer the restaurant setting from the reverberation (indicative of a certain room size), the association of clinking tableware with a sit-down meal, the number of different voices heard, and the fact that people are talking over the music (implying that the music isn’t the main attraction of the place). Further, one could also ascertain a certain class of restaurant, since predominantly middle-class and well-renowned restaurants can afford to hire a jazz duo to provide live music while everybody eats.


Listening in a reduced way, the frequency range of the recording could be described generally as muddy, as there is a real wash of sound below 200 Hz or so. The standout higher frequencies are a consistently rhythmic scratching sound, some metallic clinking, the occasional short burst of white noise at varying volumes, and a rich sawtooth-like wave darting around the mids and above throughout the recording, with fluid bends and vibrato in some of its pitches. This is an interesting exercise because, even as I was typing that, I had to force myself to describe the sounds without associating them with their respective sources (e.g. the “sawtooth-like wave”: the sound of the violin). It is a useful exercise too, because if I were a sound designer on a project and was asked to recreate this ambience, I would be able to approximate it to quite a detailed extent (without having to fly back to Berlin!) by matching the reduced listening description. Reduced listening forces you to hear every single sound as though it has been crafted, which is precisely the craft of a sound designer.


Apart from my research and practice of the listening modes, I have also furthered my investigation into impulse responses. I wanted to go beyond comparing ‘without reverb’ and ‘with reverb’, and start comparing impulse responses from different spaces and how those differing kinds of reverb affect the same sound. I repeated the same process as before, this time downloading six free impulse responses from the OpenAIR website to compare to the one I had already tested from the disused factory in York. As before, I applied the reverb to Mike Oldfield‘s “Tubular Bells” and compiled them all into a video, including a dry version for comparison: https://www.youtube.com/watch?v=3aRNI1EQCZA&feature=youtu.be.
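
For anyone curious about the mechanics, applying an impulse response is a convolution of the dry signal with the IR recording. Here is a minimal Python sketch of the comparison process as I understand it (file names hypothetical, mono files assumed), not how Space Designer works internally:

    import numpy as np
    import soundfile as sf
    from scipy.signal import fftconvolve

    dry, rate = sf.read("tubular_bells_dry.wav")    # hypothetical file names
    impulse, _ = sf.read("disused_factory_ir.wav")  # one IR per space

    wet = fftconvolve(dry, impulse)                 # the reverb itself
    wet /= np.max(np.abs(wet))                      # normalise

    # Blend dry and wet so the note attacks stay clear under the tail
    mix = 0.6 * np.pad(dry, (0, len(wet) - len(dry))) + 0.4 * wet
    sf.write("tubular_bells_factory.wav", mix, rate)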


Here are my notes on the effect of each space:
 

  • Disused Factory – long decay, very slightly bassier sound

  • Tvisöngur ‘Sound Sculpture’ – medium decay, brighter sound

  • Tunnel – short decay, noticeable bass boost

  • Goseck Circle – initial direct echo and then quiet but medium-length decay, bassier

  • Car Park – long decay, obvious bass boost

  • Dromagurteen Stone Circle – less delayed echo than Goseck (sounds like a chorus effect) plus a quiet and long echoey decay, slightly bassier

  • Grain Silo – longest decay (big wash of sound), slight bass boost
     

The next step into this vein of research will be to create my own impulse response and test it.


Sources:
 

  • Murphy, D., and Shelley, S., “OpenAIR”, http://www.openairlib.net/, last accessed 10/10/18
     

  • Schaeffer, P., “Treatise on Musical Objects: An Essay Across Disciplines”, University of California Press, 2017
     

  • Valiquet, P., “Hearing the Music of Others: Pierre Schaeffer’s Humanist Interdiscipline”, Music and Letters, Vol. 98, Oxford University Press, 2017

22/10/18 – Blog Post 5 – Creating Spatial Presence with Audio

Today I have begun a line of enquiry about ‘spatial presence’ within video games. I read about a study into the video game experience and how music contributes to player enjoyment, in an article in the Media Psychology journal titled “Effects of Soundtrack Music on the Video Game Experience”. The article discussed spatial presence as one of its criteria for the study, defining it as the feeling of being physically located in an environment (despite not actually being physically there). This particular concept inspired me because player immersion is an important part of gaming; it’s an escape into another reality, and if the audio plays a role in creating that escape, then it’s crucial as a composer and sound designer that I know how best to achieve it.


The study specifically dealt with the ‘soundtrack’ as solely the music, which I would argue was an oversight, because ambience and sound effects could be just as important in creating an immersive game environment as the music is. The two focus groups played Assassin’s Creed IV: Black Flag, one group with music and one with just sound effects. If I had conducted the study, I would have included a third focus group who played the game with no sound at all, just for comparison, and maybe even a fourth who played it with just music and no sound effects.


Nevertheless, the study found that more people agreed with statements such as “I felt like I was actually there in the environment of the presentation” in the group who played the game with music than in the group who played it without. This shows a clear correlation between the music heard and the spatial presence reported by the players.


Looking wider, I made a mind-map of my thoughts about spatial presence, relating it to ambience and sound effects too. I hope to find papers soon which reflect and/or add to these ideas.

[Image: A mind-map showing my thoughts on what types of audio content could affect spatial presence]

I think that for non-diegetic music to create the feeling of being in an environment, it would need to be empathetic to the emotions of the situation (e.g. isolation, fear, euphoria, excitement), so that it sets the right atmosphere to put the player in the right frame of mind, driving them forward. In my own work for the microgaming company, often I try to create a flowing and almost hypnotic mood for the player, so that the music subliminally persuades them to keep playing, just as music in supermarkets persuades customers to keep shopping.


Diegetic music, however, only needs to sound as if it is coming from the environment itself, as though it belongs in the environment, and should therefore react accordingly to the player’s movements. An example might be if you were to pass a street musician playing a folk tune on a hurdy-gurdy in the game. Both of these uses of music should draw the player into an immersive state, in my opinion.


A layer of ambience in the sound design does the same job as diegetic music in many respects. It still reacts to the movements of the player, but is more of a subliminal scene-setter, not often noticed until it is absent, or until the visuals aren’t there as an aid. It creates the background of the environment, and therefore should promote spatial presence. If a player were to close their eyes while playing Black Flag, the ambience alone should be enough to tell them they’re on a ship at sea, for example. Ambiences can often portray a ‘heightened sensitivity’ version of a realistic ambience, full of details that would be noticed if a player were listening out for them, which is why I mention hyper-realism in my mind-map with a question mark.


Sound effects which sound like they are coming from the environment are equally important to immerse the player in the action. These sound effects could be both anticipated and unanticipated, just as they are in reality. Detailed little triggers as a player explores an open-world map, for example, would be particularly effective. Anticipated: if they were to disturb a flock of birds, they could hear them flapping as they fled the tree above them. Unanticipated: if an unseen enemy was to sneak up behind them, they could hear the snapping of a twig.


It is my theory that a blend of all of these elements together creates a ‘sound-world’ that promotes a sense of spatial presence. It is my aim to find more sources related to video game sound and see if there are any studies which prove/disprove/add to my ideas.


Sources:


  • Klimmt, C., Possler, D., May, N., Auge, H., Wanjek, L., Wolf, A., ‘Effects of soundtrack music on the video game experience’, Media Psychology, 2018

12/10/18 – Blog Post 4 – Audiovisual Counterpoint, Drum Tuning, Ian Ring

Having read through another section of Audio-Vision by Michel Chion, I’ve been inspired by his description of a concept he calls “audiovisual counterpoint”, by which he means a relationship or ‘harmony’ between sound and visuals. On a subliminal level, a disagreement or dissonance between what the viewer sees and what they hear can be used to give the overall experience added value.


In a lecture this week an example of such a counterpoint in the film ‘Elephant’ (2003) was discussed. In a scene halfway through the film, the visuals show a boy walking through a school corridor, but the sound design describes a bustling train station and a children’s park instead, all over a recording of Beethoven‘s Moonlight Sonata. The overall effect is unsettling; the elements together don’t make aural sense, but in such a way that it’s hard to put your finger on exactly why the scene seems so disconcerting. If the sound design had described a school corridor instead, the scene wouldn’t have had the same effect.


Later in the film, two students with guns are prowling the school, hunting down everybody still left in the building. Instead of corridor ambience, this time there is a jungle ambience as the backdrop, which makes the gunmen seem like game hunters, or even predators stalking prey. This shapes the way the viewer sees them, even if they don’t consciously notice the counterpointing ambience. This subtle and creative sound design technique is something I would love to incorporate into my own audio work, as in certain scenarios it can be incredibly effective.


I have also been watching more of the ‘Studio Pass: Periphery’ course online, which has recently covered drum tuning and given me some valuable information about the best way to tune drums for recording purposes. I have learned that for most scenarios:


Snare Drums


  • Should be tuned no lower than B

  • Sound good in a rock context at C# (e.g. Dave Grohl tunes to C#)

  • Generally sound and feel the best at E

  • Have a resonant head that doesn’t hugely affect the overall tone, but which is typically tuned around G/A
     

Toms


  • Are more tonal than snare drums, so the intervals between the resonant heads (bottom) and the batter heads (top) are important

  • Should be tuned in fourths from the high to low toms (e.g. D, A, E, B), similar to guitar tuning

  • Follow this method for achieving a desired tone: tune the resonant head down one semitone from the target, and then the batter head a minor third down from that (e.g. desired tone = D, resonant head = C#, batter head = A#) – see the sketch below
     

It was suggested that as a guitarist I should be quite good at tuning drums, because I’m more sensitive to relative pitch and matching pitches. As such, this might be something I should consider doing myself if I’m in a drum recording situation in future, now that I know what to do; previously I would have probably assumed the drummer themselves would be the best person to tune the kit.
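
Since the head relationship is just semitone arithmetic, it is easy to sanity-check with a few lines of Python (my own sketch of the method described in the list above, not anything from the course):

    NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    def tom_heads(desired):
        """Return (resonant, batter) head pitches for a desired tom pitch:
        resonant one semitone below, batter a minor third below that."""
        i = NOTES.index(desired)
        return NOTES[(i - 1) % 12], NOTES[(i - 4) % 12]

    for target in ("D", "A", "E", "B"):   # toms tuned in fourths, as above
        print(target, tom_heads(target))  # D -> ('C#', 'A#'), etc.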


Lastly I want to discuss the Canadian composer Ian Ring. While researching for an essay about the relationship between mathematics and music last year, I discovered a piece of computer code Ring wrote to generate every possible 4/4 rhythm using note durations between a semibreve (whole note) and semiquavers (sixteenth notes). The other day I discovered that he has also written code for an in-depth study of every possible musical scale, which is an exciting find. I think analysing music theory in a purely mathematical way can provide an interesting viewpoint on how music actually works, and having access to every possible rhythm in 4/4 AND every possible scale in the common 12-tone equal temperament system could really help a musician who is in a rut of using the same rhythms and scales all the time. The webpage also features a really useful “Scale Finder” where you can input the intervals you’re using and find out information about that scale, its modes, its features, and other scales related to it.

[Image: Screenshots from Ian Ring’s Scale Finder, showing an example scale input and the information generated from it]
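
I haven’t reproduced Ring’s code itself, but the combinatorial core is easy to sketch: if a scale is treated as any set of the 12 pitch classes that contains the root, there are 2^11 = 2048 candidates (Ring’s study applies further musical constraints on top of this). A few lines of Python can enumerate them along with their step patterns:

    from itertools import combinations

    def all_scales():
        """Every pitch-class set in 12-tone equal temperament that
        contains the root (pitch class 0): 2**11 = 2048 of them."""
        for size in range(12):
            for rest in combinations(range(1, 12), size):
                yield (0,) + rest

    def steps(scale):
        """Interval pattern in semitones, wrapping round to the octave."""
        loop = scale + (scale[0] + 12,)
        return [loop[i + 1] - loop[i] for i in range(len(scale))]

    print(steps((0, 2, 4, 5, 7, 9, 11)))  # major scale: [2, 2, 1, 2, 2, 2, 1]
    print(sum(1 for _ in all_scales()))   # 2048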


8/10/18 – Blog Post 3 – Audio-Vision, No Such Thing As A Fish, Impulse Responses

I have a few thoughts to share from the past week about my recent research activities.

I have started reading a book called “Audio-Vision” by composer and director Michel Chion, who offers a unique insight into the relationship between sound and visuals, with a foreword by editor and sound designer Walter Murch. So far I have read up to the end of the first chapter and taken notes throughout, from which I’ll select a few salient points to discuss.

[Image: The cover of Michel Chion’s Audio-Vision (retrieved from https://cup.columbia.edu/book/audio-vision/9780231078993)]

Murch highlights that he himself was influenced by hearing Pierre Schaeffer and Pierre Henry‘s “Premier Panorama de Musique Concrète” on the radio. He had already been experimenting with sounds on a tape recorder at that point, and after hearing the electroacoustic work of Schaeffer and Henry, was inspired to keep experimenting and eventually go into sound design. This ties nicely into my study of Musique Concrète and how listening closely to electroacoustic music in general could benefit a sound designer. In fact, Chion was at one time mentored by Schaeffer and Henry, and so Chion’s perspective on sound comes very strongly from the world of electroacoustic music.


Chion writes as part of his introduction, “we never see the same thing when we also hear; we don’t hear the same thing when we see as well” (Chion, 1994). To me this attitude towards audio-vision is paramount. Treating audio as 50% of the experience of watching a film, rather than as an afterthought, would make any sound designer treat their craft much more carefully; it is equally as important as the visuals. In my own experience, budding film-makers almost never consider the sound with much care, and certainly don’t give it the respect it deserves for how difficult it is to get right! The quote from Chion, maybe with the examples he lists too, might be a good way to get the value of sound across to future commissioners.


I was really interested in the way Chion discussed ‘microrhythms’ (Chion, 1994) in sound design, such as the pitter-patter of snowflakes or the rumble of machinery, as a replacement for rhythmic music. Although not necessarily foreground sounds, they can still subliminally indicate changes of pace by quickening or slowing. Similarly, the tension traditionally conveyed by tremolo strings could be achieved more subtly by naturally fluctuating ‘tremolo ambiences’, such as nocturnal insects chirping. I thought these were interesting examples of ways that sound design can do the job traditionally done by music – an idea I mentioned in my reflection on the Jurassic Park concert.


Aside from reading this week, I also listened to a podcast on Spotify called “No Such Thing as a Fish”, which in episode 225 (“No Such Thing as an Interesting Riddle”) had a section about foley. They mentioned the BBC public sound effects database, which I already had bookmarked to look into as a good reference list. Although not available for commercial use, there are 16,000 sound effects there which could be useful as temporary placeholder sound effects in future projects, so I can get an idea of the sound I’m aiming for when I record/edit it all myself. In the podcast they also discussed some great foley art anecdotes, some of a comedic nature, but others which were great ideas for how to achieve certain sounds. One example I hadn’t heard about before was the famous rolling boulder in Indiana Jones, which was apparently made using a car rolling down a slope on gravel.


Lastly, I have begun my investigation into impulse responses. I want to embark on a project of collecting the reverb tails of interesting spaces and collating them into a list of importable impulse responses for convolution reverb plug-ins. To start with I simply looked up methods of IR recording, and discovered that the simplest way I could do it myself would be to play a short burst of white noise from a speaker and record the reverb as a 24-bit/96 kHz stereo .wav using my Zoom microphone. Christopher Winter, an audio tutorial-based content creator on YouTube, suggested that 25 milliseconds was a good duration for that source sound.
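
Generating that source burst is straightforward; here is a short NumPy sketch following the suggested settings (25 ms of white noise, written as a 24-bit file at 96 kHz):

    import numpy as np
    import soundfile as sf

    rate = 96000
    burst = np.random.uniform(-1.0, 1.0, int(0.025 * rate))  # 25 ms of noise

    # A one-millisecond fade at each edge avoids clicks from the speaker
    edge = np.linspace(0.0, 1.0, 96)
    burst[:96] *= edge
    burst[-96:] *= edge[::-1]

    sf.write("ir_test_burst.wav", burst, rate, subtype="PCM_24")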


I was aware that it was possible to import impulse responses into the ‘Space Designer’ plug-in in Logic Pro X, but had never tested that until today. I downloaded a free impulse response recorded in “Terry’s Typing Room”, at the now disused Terry’s chocolate factory in York, and imported that file into Space Designer.

[Image: Screenshot of the User Interface for the Space Designer reverb plug-in, with the Terry’s Typing Room impulse response applied]

Then I tested it by programming the opening of Mike Oldfield‘s ‘Tubular Bells’ on a dry electric piano, and then applying the Terry’s Typing Room reverb to it for comparison, which worked perfectly. This means that all my plans for the impulse response project should work fine in theory!


Follow this link for a demo of the impulse response test described: https://www.youtube.com/watch?v=GXCg0EjBZx8&feature=youtu.be



30/9/18 – Blog Post 2 – Studio Pass: Periphery, Musique Concrète, Jurassic Park in Concert

I have done three new research activities in the last few days that are worth discussing.


I purchased an online course made by Adam “Nolly” Getgood (bass guitar) and Matt Halpern (drums) from the band Periphery, and began watching it. The course is called “Studio Pass: Periphery” and deals with a range of subjects relating to studio recording, such as mic positions, drum tuning, and how to approach mixing. Although Periphery are a progressive metal band, Nolly and Matt talk about these subjects very generally and insist that the skills and knowledge they cover will be transferable to all genres of music, and as a result there will be extremely useful tips and tricks which should inform the production of my own music. I like the sound of Periphery‘s mixes, particularly the punchy drums and the way they mix synthesisers and guitars together for a really thick and powerful texture in the lead melodies. I look forward to seeing what I learn from the course and how I can apply it.


Aside from watching the videos of Nolly and Matt, I have also been reading up on Pierre Schaeffer and Musique Concrète: the art of recording sounds and manipulating/mixing them to create music. Schaeffer was the experimental composer who pioneered this compositional practice. I found out that he was the first person to combine recorded sounds to be listened to acousmatically (i.e. listening to the sounds just ‘as sounds’, without any link to their source), and the first to meaningfully reverse, speed up, and slow down audio for a specific purpose. Researching Musique Concrète is important from a sound design point of view, since sound design often combines direct and acousmatic sounds, i.e. sounds whose sources are seen and sounds whose sources are not. A big part of my job as a sound designer is to experiment with the timbre of different sounds to find one that fits the brief, often taking a leaf out of Schaeffer’s book by reversing or altering the speed of recordings as forms of manipulation. I have only scratched the surface of this subject so far, but I’m preparing a presentation on it for 24th October, so I will be diving in more deeply soon.


Lastly, and most excitingly, last night I went to the Royal Albert Hall to watch Jurassic Park in Concert. John Williams is arguably the most influential film composer in history, and his score for Jurassic Park was one of the main reasons I became interested in film music in the first place. The Czech National Symphony Orchestra performed every music cue in perfect synchronisation with the film playing on the cinema screen behind them, and seeing and hearing the music in this way really brought Williams’ genius into even sharper focus for me.

[Image: My own photo from the Jurassic Park in Concert event]

The real takeaway lessons were as follows:


Firstly, part of the art of effective film scoring is knowing which scenes need music and which do not. Williams strikes that balance perfectly in Jurassic Park; in some of the tension scenes where you might expect music to play an important role in creating that suspense, the complete lack of music makes it even more unsettling as it subverts the expectations of the viewer. In the T-Rex paddock attack scene, a scene I must have watched a hundred times, seeing the orchestra remain dormant and hearing the sound design speak for itself really drove home that point, and it’s a lesson I definitely want to consider in my own film compositions.


Secondly, the music for ‘Mr DNA’ in the animated scene of the film is a brilliant nod to the kind of ‘mickey-mousing’ music used in Disney and Warner Bros cartoons. Whether a parody or just a homage, Williams nails that sound. That particular track was unfortunately not included in the edition of the score I listen to on iTunes, but having heard it played in all its glory by the CNSO, I will definitely be paying closer attention to it, both in isolation and in context in the film. The ‘mickey-mousing’ score is a very difficult but popular brief for animators to give, so knowing that track inside out might well come in useful as a reference for future commissions.


Thirdly, Williams is a master of orchestration, often using different instrumentation as a means of developing material, which not only makes a repeated theme seem less repetitive, but also smooths out the changes in mood that the music undergoes. That’s not something I had considered at any great length before, so I want to try to incorporate that technique into my own compositions too. In particular, I have noticed that I often neglect the woodwind and percussion sections and focus more on strings and brass, so I would certainly like to use more of those neglected sections when I develop my own thematic material in future.


Sources:


  • De Reydellet, J., “Pierre Schaeffer, 1910-1995: The Founder of ‘Musique Concrète.’” Computer Music Journal, 1996, pp. 10–11
     

  • Brümmer, L., “Why Is Good Electroacoustic Music So Good? Why Is Bad Electroacoustic Music So Bad?” Computer Music Journal, 1994, pp. 7-8
     

  • Creative Live, “Studio Pass: Periphery”, https://www.creativelive.com/, 2018, last accessed 10/10/18
     

  • Spielberg, S., “Jurassic Park”, Universal Pictures, 1993

25/9/18 – Blog Post 1 – Introduction

Welcome to my blog! Here I will document everything I research relating to music and sound for film and games.


I work as a composer and sound designer for a microgaming company, and receive commissions here and there to create original audio for visual media.


When I write music just for fun, I’ve recently been inspired by Adam Neely and Ben Levin. They are both on YouTube and make videos about music in general, but often compose music of an experimental nature and discuss that music on their channels. They are influential to me because they compose with a lot of advanced music theory knowledge behind them, which is similar to the way I compose too, but they equally have a lot of experience in music production. As a result, I both relate to them and learn from them, as I have less experience with production and technology than I do with composition.


Film and game composers who have inspired me recently are Greg Edmonson/Henry Jackman for their music in the “Uncharted” (2007) games, Michael Giacchino for his music in the “Jurassic World” (2015) and “The Incredibles” (2004) films, and Gustavo Santaolalla for his music in the game “The Last of Us”. Edmonson and Jackman inspire me with the way they make the music during gameplay and cinematic scenes link together so seamlessly, just as the visuals do. Giacchino inspires me with the way he can set up any mood with his music cues, writing thematically but changing the instrumentation to fit what is happening on-screen. Santaolalla inspires me with his experimental, perhaps even unorthodox method of composing, where he records and manipulates sounds, and blends them with traditional acoustic instruments to create atmospheric music.


It is the work of Santaolalla which I want to research further, and more generally the blurred line between music and sound design. It has recently become a trend among soundtrack composers to write in this way; other examples include the “Gravity” (2013) music by Steven Price, and the “Arrival” (2016) music by Jóhann Jóhannsson. Each of these composers uses sound design creatively and unconventionally in their scores, and as it seems to be trending I want to find out how, where, and why that composition technique might be used. Further, I want to incorporate that knowledge into my own compositions.


Another area I want to research is impulse responses. While editing sound effects on a daily basis I often apply digital reverberation to make them sound like they are being heard in a certain type of space. I make that decision based upon what makes a sound seem realistic in a given cinematic scenario; e.g. a hand clap heard in a deep canyon would need different reverb settings to a hand clap heard in a bedroom. Digital convolution reverb works via impulse responses: recordings taken in real locations to capture how sound behaves in that space, which can then be applied digitally to any given sound afterwards. I’m fascinated to find out what weird and wacky reverbs I could create by taking impulse responses in uncommon places, e.g. in an empty fridge, which I could then add to recordings to make interesting sounds for use in my sound design or composition projects.
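
In signal-processing terms (my own gloss), applying an impulse response h to a dry signal x is a convolution: every input sample launches its own scaled copy of the space’s response, and they all sum together:

    y[n] = Σₖ h[k] · x[n − k]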


Sources:


  • Bird, B., “The Incredibles”, Walt Disney Pictures, 2004
     

  • Cuarón, A., “Gravity”, Warner Bros. Pictures, 2013
     

  • Levin, B., YouTube channel, https://www.youtube.com/channel/UCLuHOqDilyLQT4NPXQuVN4Q, 2009, last accessed 7/12/18
     

  • Naughty Dog, “The Last of Us”, Sony Computer Entertainment, 2013
     

  • Naughty Dog, “Uncharted: Drake’s Fortune”, Sony Interactive Entertainment, 2007
     

  • Neely, A., YouTube channel, https://www.youtube.com/channel/UCnkp4xDOwqqJD7sSM3xdUiQ, 2006, last accessed 7/12/18
     

  • Trevorrow, C., “Jurassic World”, Universal Pictures, 2015
     

  • Villeneuve, D., “Arrival”, Paramount Pictures, 2016
