Categories
matt's blog post-production blog

Week 2: Audiovisual Analysis

Today in class we discussed elements and methods of analysing sound design, in particular the observation methods of the renowned film theorist Michel Chion.

Audiovisual analysis is important when it comes to understanding the ways in which a scene or entire movie utilises sound and images in combination. By analysing such details we can “deepen our aesthetic pleasure and understand the rhetorical power of films” (Chion, 1994).

What Chion is implying here is that by engaging in such analysis I can uncover a film’s messages and the techniques it uses to convey them through images and sound design, while also heightening my appreciation of the film’s artistic qualities.

In his book “Audio-Vision: Sound on Screen” he details a method of audiovisual analysis he coined the “masking” method. The idea is to watch a scene multiple times: first with sound and image together, then with the sound muted, and then once more with the image hidden instead.

As he says in the book: “This gives you the opportunity to hear the sound as it is and not as the image transforms and disguises it; it also lets you see the image as it is and not as sound recreates it” (Chion, 1994). Chion’s premise is that sound can “mask” or obscure certain aspects of a film and direct your attention to specific elements within a scene, essentially manipulating the audience’s perception of what they see on the screen.

He uses the 1963 film “The Silence” as an example: “We only need cut the sound to demonstrate that here we have a typical effect of added value. Without sound, the tank appears to move regularly and without difficulty on its treads, aside from one brief pause. It’s the sound that makes us view, or rather, audio-view, a tank that’s seen better days and moves with difficulty” (Chion, 1994). This is a key part of his theory of the relationship between sound and image in film: sound can bring significant “added value” to the visual experience by influencing how the viewer interprets the scene.

We put this experiment to the test in class by watching scenes from a few films and then discussing our observations. One example we looked at was the opening scene of “Mad Max: Fury Road”. During the first half of the scene there is a heavy focus on the diegetic music being performed. Without the audio I had trouble grasping the fast-paced, angsty nature of the film; on the other hand, it let me imagine how the scene might sound based solely on the images. When I then watched it with audio and compared, the version I had built up in my head bore only a passing resemblance to the final product (I had even imagined sounds that weren’t actually there).

Something else the author covers is how changing the tone of the music can completely alter how a scene is conveyed to the audience. He calls this exercise the forced marriage of sound and image: “Changing music over the same image forcefully illustrates the phenomena of added value, synchresis, sound-image association, and so forth. By observing the kinds of music the image ‘resists’ and the kinds of music cues it yields to, we begin to see the image in all its potential signification and expression” (Chion, 1994).

We explored this with a clip from “No Country For Old Men”. Listening to the clip with different soundtracks conveyed a different tone from the original, sometimes completely altering my perspective on what I was watching. What’s more important in this scene, though, is that the lack of music (and dialogue) in the original shows that the deliberate omission of music can sometimes evoke emotion more effectively than music itself; in this case it amplifies the uneasy suspense and emphasises the overall gritty nature of the movie.

These are vital experiments that are likely to guide my research and give me countless ideas for sculpting the sound of my post-production artefacts.

Chion, M. (1994). Audio-vision: Sound on Screen. New York: Columbia University Press.


Week 1: Audio Post Production

Audio post-production is the creation and manipulation of sound that is synchronised with a moving image, specifically during the post-production phase after everything has been filmed. It includes sound design, the creation of sound effects and automatic dialogue replacement (ADR). In class we looked at the technological shift from analogue to digital, which has drastically changed the way sound is edited, mixed and produced.

In the early days of film sound, sound effects, dialogue and music were recorded onto magnetic tape. Engineers had to manually cut and splice these elements and layer them on reel-to-reel tape machines, while balancing them on analogue mixing desks. The process was physically demanding and time consuming, as any edit required precision in cutting physical tape and reassembling it, with each splice being difficult to undo. Even with these significant limitations, vintage tape machines are still used today for their warm, dynamic tone.

In class we looked at the evolution of sound design in cinema by comparing four separate versions of the final scene of King Kong, which helped me grasp how far sound design technology has come: starting in 1933 with a completely mono mix of only three layers (foley, dialogue and music), continuing to 1976 with some of the first uses of stereo surround sound, and on to the late 2000s with full 7.1 setups and dozens of layers of foley.

The latest version of King Kong uses a newer technology called Dolby Atmos. It expands upon existing 5.1 and 7.1 surround-sound setups by adding sound channels overhead, enveloping the audience in a dome of audio. With Atmos you can produce up to 118 simultaneous sound objects, allowing the sound designer to place each individual sound and line of dialogue at a specific point in the sound field rather than routing it to a particular channel. These objects can be moved and manipulated within the space, generating a 3D sound field (Roberts, 2022).
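Object-based panning of this kind can be sketched in a few lines of code. The speaker layout and the inverse-distance weighting below are purely illustrative assumptions on my part, not how a real Atmos renderer works, but they show the core idea: an object carries a position, and the renderer derives per-speaker gains from that position instead of being routed to a fixed channel.

```python
import math

# Illustrative speaker positions (x, y, z) -- NOT real Atmos coordinates.
SPEAKERS = {
    "L":    (-1.0,  1.0, 0.0),
    "R":    ( 1.0,  1.0, 0.0),
    "C":    ( 0.0,  1.0, 0.0),
    "Ls":   (-1.0, -1.0, 0.0),
    "Rs":   ( 1.0, -1.0, 0.0),
    "TopL": (-1.0,  0.0, 1.0),
    "TopR": ( 1.0,  0.0, 1.0),
}

def object_gains(pos, speakers=SPEAKERS):
    """Derive per-speaker gains for a sound object from its 3D position:
    speakers closer to the object receive more of its signal."""
    weights = {name: 1.0 / (math.dist(pos, spk) + 1e-6)
               for name, spk in speakers.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}  # gains sum to 1

# An object placed up and to the left leans toward the top-left speaker
gains = object_gains((-0.8, 0.0, 0.9))
```

Moving the object over time (automation) just means recomputing the gains each frame, which is what gives object-based audio its sense of sounds travelling through the room.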

In conclusion, sound design technology has come a long way. From tape machines to Fantasound to Dolby Atmos, these innovations have reshaped the film and music industries, giving us new opportunities to create art.

Roberts, B. (2022). Dolby Atmos: What is it? How can you get it? [online] whathifi. Available at: https://www.whathifi.com/advice/dolby-atmos-what-it-how-can-you-get-it.

Categories
matt's blog mixing blog

Reflective Report: Group Project

For this project we decided to write something emulating early-70s folk music. Isaac wrote a fairly complex but simple-sounding chord progression in an open D tuning, and we all decided to build off that. We used two microphones on the acoustic guitar, allowing us to capture different aspects of its sound as well as widen the stereo image.

The setup was a little something like this

Using a stereo micing technique on the guitars greatly enhanced the overall sound, providing natural width and depth to the tone. This was accompanied by a room mic two metres from the guitar, creating a full, naturally reverberant sound without the use of external plugins, which in turn helped emulate the studio recordings of Nick Drake, whose album ‘Pink Moon’ was one of my chosen mix references.

stereo mic setup with room mic

As the track varied in tempo, we worked together to change the speed of the metronome as Isaac played through the song so we could overdub instruments easily, improving our problem-solving skills and our collaborative creativity. The tempo change helps keep the song interesting to the listener, providing a segue into a new, faster, upbeat section.

We didn’t do anything new with the bass, using micing techniques we had already used multiple times in previous sessions; why change something that works? We recorded the bass part and produced a dark sound that suited the vibe of the track we were going for.

bass amp setup + DI

Reflecting on the final rough mix, we realised we weren’t overly happy with the main vocals and decided to head back to the studio once more to re-record Isaac’s main vocal using an SM7B. This provided a much warmer tone that emphasised his baritone range, helped bring the vocals to the forefront of the song, and tightened up the timing of the take.

this is what the vocal setup was like

For the mixing side of the project I aimed to keep things simple: using few plugins and keeping the mix fairly dry worked well with the 70s aesthetic. I initially had more vocal takes and more reverb in the mix, but after listening back a few times and consulting my mix references I cut back to better suit the tone I was going for and to add a slight build that would engage the listener. I cut and pasted Hoji’s vocal takes to create a seamless sound that emphasises the harmony with Isaac, and used echo and reverb sends to enhance the stereo image and the depth of their mellow singing voices.

my edit and mix windows shown above in Pro Tools

After sending a first draft to my tutor, he gave me some constructive feedback on what I could do to improve my mix. This was highly useful: when you’ve been working on a mix for a few hours or a couple of days, it can be easy to become biased towards your own creation. Fresh ears provide a different perspective and can identify areas that spark new ideas and creative approaches.

I incorporated Matt’s feedback into my work accordingly and as a whole, I felt the mix was just that little bit more cohesive.

On top of my stereo mix, I also completed a 7.1 surround mix. As I had never mixed in surround before, it took me a while to get used to it and learn to use it effectively; it was like opening a whole new can of worms. I could pan tracks in a complete 360-degree image, and I used this new playback system to enhance the subtleties of my mix; for example, I automated the tape echo samples to pan and swirl around the listener’s head.

Collaborating with my peers has provided a supportive environment where I’ve been able to explore and express my musical ideas with great success, giving me newfound knowledge of recording, mastering and, most importantly, teamwork. It also helped refine my technical skills and achieve higher levels of precision in my mixes. Learning new techniques such as mic placement and surround mixing will prove a great skill to utilise down the line, opening up new possibilities in the world of creative mixing.


Week 10: Reverb

Reverb is an integral part of music production. Its roots trace back to the early days of sound recording, when natural reverb was a byproduct of recording in large spaces such as churches and concert halls. As technology evolved, so did the methods for creating and manipulating reverb, leading to its essential role in modern music.

In the 40s and 50s, studio engineers began to experiment with artificial reverb; the echo chambers at Abbey Road Studios are a famous example. These chambers were rooms with highly reflective surfaces, where sound from a speaker would bounce around before being picked up by a microphone. This technique created a lush, natural reverb that could be added to recordings.

Abbey Road echo chambers

Plate reverb was invented in 1957: a mechanical method where sound waves reverberate across a large, suspended metal plate. The EMT 140 plate reverb, the first of its kind, offered a more controllable and consistent effect than echo chambers. Plate reverb is still widely used in studios today, providing the shimmering, dense wash that I personally have used on almost all of my compositions.

Plate reverb hardware

In the late 1970s and 1980s, reverb technology took another leap forward with digital reverb units. The Lexicon 224 allowed precise control over various reverb parameters and introduced the ability to create entirely new reverb sounds that weren’t possible before.

Lexicon 224

Today reverb is available in several formats, from hardware units to software plugins. Modern digital audio workstations (DAWs) like Ableton Live or Pro Tools come stocked with reverb plugins that emulate classic hardware and offer a wide range of parameters for customisation.

Pro Tools reverb plugin

Producers often use techniques such as pre-delay, which separates the reverb onset from the dry signal, to keep the source clear. Additionally, using EQ on reverb tails helps avoid frequency buildup and ensures the reverb complements the mix rather than overwhelming it.
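As a worked example of the pre-delay idea: one common rule of thumb (an assumption here, not a universal rule) is to sync the pre-delay time to the track’s tempo, for instance the length of a 1/64 note.

```python
def predelay_ms(bpm, note_division=64):
    """Tempo-synced pre-delay time in milliseconds.

    A quarter note lasts 60000 / bpm ms; a 1/64 note is a common
    starting point for separating the reverb onset from the dry signal.
    """
    quarter_ms = 60_000 / bpm            # one beat (quarter note) in ms
    return quarter_ms * (4 / note_division)

# At 120 BPM a quarter note is 500 ms, so a 1/64-note pre-delay is 31.25 ms
pd = predelay_ms(120)
```

From there the value can be nudged by ear: longer pre-delays push the reverb further behind the vocal, shorter ones glue it to the source.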


Week 9: A Professional Music Producer

For this week’s blog we were tasked with researching a producer of our choice; I chose to study Nigel Godrich.

In the music production industry, few names carry such admiration as Nigel Godrich. A world-renowned producer known for his work with iconic artists such as Radiohead, Pavement, Beck and R.E.M., he has left a huge mark on the world of audio engineering.

Nigel pictured above

His discography spans multiple genres and generations. His collaborations with Radiohead on albums like “OK Computer” and “Kid A” redefined the boundaries of experimental rock, while his work with Beck on “Sea Change” and “Morning Phase” showcases his versatility and his ability to capture a diverse array of sounds across genres.

One of his defining characteristics is his ability to create immersive audio landscapes that draw listeners into a world of depth and texture. For example, when recording Radiohead’s magnum opus “OK Computer” at St Catherine’s Court, many of the songs were recorded in separate areas of the house to add a certain atmosphere. The acoustic guitar part for ‘Exit Music’ was recorded in a stone stairwell, while ‘Let Down’ was recorded in a ballroom at 3 o’clock in the morning, with most of the overdubs tracked live in the same room, adding to the overall open vibe of the album.

The house where they recorded OK Computer

His mixes are known for their clarity, warmth, and ability to evoke emotion, whether it’s the haunting melancholy of a Radiohead ballad or the ethereal beauty of a Beck composition. Studying Nigel Godrich’s approach to mixing has provided insights that will for sure shape my future work as a music producer.

His fearless experimentation and intuitive understanding of the bridge between art and technology serve as a big inspiration to push the boundaries of my creativity. By incorporating elements of his production and mixing techniques into my own workflow, I hope to elevate the quality of my productions and create more immersive listening experiences that resonate with listeners on a deeper level.

Hi-Fi News. (2021). Radiohead: OK Computer Production Notes. [online] Available at: https://www.hifinews.com/content/radiohead-ok-computer-production-notes [Accessed 1 May 2024].

nigelgodrichproducer (n.d.). Nigel Godrich and the Studio. [online] A blog dedicated to the work of Nigel Godrich. Available at: https://nigelgodrichproducer.tumblr.com/.


Week 8: Multichannel Audio Solutions

In last week’s blog I went into detail about the history of surround sound: its inception, uses and benefits. This week I continue my journey with research on audio immersion.

Dolby Atmos, introduced in 2012, is the leading multichannel audio technology in surround sound. It revolutionised cinema audio by adding height channels to the mix, creating a three-dimensional soundstage. Unlike traditional channel-based audio systems, where sounds are assigned to specific speakers, Dolby Atmos treats sounds as objects that can move freely around the listener. This approach unlocks a new level of immersion for the audience, as sounds can originate from above, below and all around, creating a lifelike audio environment that mirrors real-life perception.

Atmos cinema setup

Immersive audio like this is not limited to cinemas; Dolby Atmos has rapidly expanded across various entertainment services. Today it is not uncommon to find Dolby Atmos in home cinema systems, in headphones and, if you are a producer, in your DAW, bringing the experience into the comfort of our own homes.

On top of Dolby Atmos there are multiple other multichannel systems, including DTS:X, Auro-3D, Sony 360 Reality Audio and THX Spatial Audio. These solutions cater to different applications and offer varying levels of immersion and spatial accuracy; each has its own features and benefits, providing options for consumers and professionals seeking high-quality audio experiences.

THX home cinema

As technology continues to evolve, the possibilities for immersive audio are endless. From live concerts streamed in Dolby Atmos to personalised audio experiences tailored to individual preferences, the future holds lots of opportunities to utilise this incredible sound system.

Morrison, G. (2020). Surrounded by sound: Dolby Atmos explained. [online] CNET. [Accessed 21 Jan. 2020].


Week 7: History Of Surround Sound

Surround sound is an audio technology that aims to enhance the width, depth, fidelity and spatialisation of sound reproduction. It achieves this by using multiple audio channels from speakers strategically positioned around the listener; these additional outputs can create the illusion of sound coming from multiple directions, producing a realistic audio environment.

Common speaker arrangement in current cinemas

The most common surround configuration is 5.1, which makes use of six speakers: a centre speaker, left and right speakers in front, left and right surround speakers positioned to the sides of or behind the listener, and a subwoofer for the low-frequency channel.

5.1 surround setup

The idea of surround sound dates back to the early 20th century. One of the first documented surround sound experiments was in 1940, when the Walt Disney film “Fantasia” introduced the concept of multichannel audio through what was called the ‘Fantasound’ system (Wierzbicki, 2014). Developed with the help of Bell Labs, Fantasound used numerous speakers strategically placed around the theatre to envelop the viewer in a rich, wide sound that greatly increased the immersion of the film.

Fantasound being set up in the 40s

With the success of surround sound in films of the era, the technology gained headway, leading to newer systems aimed at further enhancing the cinematic experience. In 1977 the original Star Wars was released in theatres, captivating cinemagoers with its use of ‘Dolby Stereo’, a fairly new system that made use of quadraphonic surround sound (Bordwell, Staiger, & Thompson, 1985). This innovation set a new standard for audio in cinemas.

Poster for Star Wars advertising Dolby Stereo

Wierzbicki, J. (2014). Fantasound. In J. Richardson & C. Gorbman (Eds.), The Oxford Handbook of New Audiovisual Aesthetics (pp. 345-362). Oxford University Press.


Bordwell, D., Staiger, J., & Thompson, K. (1985). The Classical Hollywood Cinema: Film Style and Mode of Production to 1960. Columbia University Press.


Week 6: Future Of Mastering

Mastering is the final step in the audio production process, where tracks are balanced, polished and optimised for distribution across streaming platforms and physical media. Mastering engineers use their keen ears, expertise and specialised hardware and software to enhance the music, ensuring the best possible quality for the final product.

Mastering engineers blend artistry and technical skill to home in on a mix, and the future of the field is poised to be revolutionised by the integration of AI-driven tools and online services. Plugins like iZotope’s Ozone use machine-learning algorithms and neural networks to analyse the user’s audio and make intelligent processing decisions, greatly aiding the user’s workflow and delivering more consistent results.

Ozone 11 AI tool

For example, these tools can analyse audio data quickly and make processing decisions in real time, reducing the amount of time spent on manual adjustments and leading to faster turnaround times for mastering projects. This includes tasks such as noise reduction, dynamic-range optimisation and harmonic balancing.

Another progression in mastering is spatial audio. Spatial audio involves creating a sense of space and depth within the audio experience, allowing listeners to perceive sound as coming from various directions, heights, and distances. With the increasing rise of audio formats such as Dolby Atmos and Sony 360 Reality Audio, mastering engineers are tasked with not only preserving the artistic intent of the music but also enhancing it for multidimensional playback environments.

Artistic interpretation of spatial audio

www.dolby.com. (n.d.). What is spatial audio? How it works and how to use it. [online] Available at: https://www.dolby.com/experience/home-entertainment/articles/what-is-spatial-audio/#whatisspatialaudio.

Anderson, N. (2024). AI can now master your music—and it does shockingly well. [online] Ars Technica. Available at: https://arstechnica.com/ai/2024/02/mastering-music-is-hard-can-one-click-ai-make-it-easy/#:~:text=AI%2Dpowered%20mastering%20systems%20allow [Accessed 15 Apr. 2024].


Week 5: The Loudness War

The ‘loudness war’ is an ongoing ‘battle’ within the music industry: a competition of sorts between producers and mastering engineers to make their tracks as loud as humanly possible. Driven by the belief that ‘louder is better’ and fuelled by advances in technology and changing consumer listening habits, the trend took hold in the early 00s and heavily impacted the quality of the music of the time.

With record makers striving for loudness, dynamics have been sacrificed, leading to a huge loss of depth and fidelity in the music. As each new track tried to be the loudest on the radio, the subtleties and nuances that make music rich and engaging were compressed and crushed, leading to listener fatigue and decreased enjoyment.

The main example discussed in class was Metallica’s “Death Magnetic”, an album infamous for its heavily compressed and distorted sound. It received widespread criticism from both fans and audio engineers for its poor sound quality and absence of dynamics.

Death Magnetic waveform compared to Guitar Hero release version.

In his book “Mastering Audio: The Art and the Science”, engineer Bob Katz highlights the degrading effects of excessive loudness on the quality of music, noting how hyper-compression can diminish the listener’s emotional connection to the music.

With my newfound knowledge of mastering, I can make sure not to repeat the mistakes previous mix engineers have made, preserving dynamics rather than selling out and brickwalling my work for attention.
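One simple way to put a number on “crushed” dynamics is the crest factor: the peak-to-RMS ratio of the waveform. The sketch below is only a rough illustration of the concept; real loudness metering follows standards such as ITU-R BS.1770, which this is not.

```python
import math

def crest_factor_db(samples):
    """Peak-to-RMS ratio in dB. Heavily limited ('brickwalled') audio
    approaches 0 dB; dynamic material scores much higher."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# A full-scale square wave has peak == RMS, i.e. a crest factor of 0 dB,
# which is what a totally brickwalled waveform approaches; a sine wave
# retains roughly 3 dB of crest factor.
square = [1.0, -1.0] * 100
sine = [math.sin(2 * math.pi * n / 50) for n in range(1000)]
```

Comparing the crest factor of a remaster against the original pressing makes the loss of dynamics in albums like “Death Magnetic” concrete rather than just a matter of taste.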

Mastering studio

Hiatt, B. (2008). Death Magnetic. [online] Rolling Stone. Available at: https://www.rollingstone.com/music/music-album-reviews/death-magnetic-250620/ [Accessed 11 Apr. 2024].

Wikipedia. (2020). Loudness war. [online] Available at: https://en.wikipedia.org/wiki/Loudness_war.

Katz, B. (2013). Mastering audio : the art and the science. New York: Focal Press.


Week 4: History Of EQ

The history of EQ began in the early 20th century alongside the rise of audio technology. Back then it was mostly used in recording and broadcasting studios to adjust the frequency response of audio signals. In the early days of EQ, bass and treble frequencies were adjusted using basic passive circuitry such as filters and tone controls.

Hardware EQ

As technology advanced, EQ grew more sophisticated and was able to provide more accurate and adjustable settings. The development of parametric equalisation in the early 1970s gave engineers the ability to modify a filter’s bandwidth, or “Q”, as well as its frequency and amplitude, making much more precise adjustment and fine-tuning possible.

Eight Band Parametric EQ
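Those three parametric controls map directly onto a standard filter design. The sketch below uses the widely known “Audio EQ Cookbook” (Robert Bristow-Johnson) peaking-filter formulas to turn frequency, gain and Q into biquad coefficients; it is a minimal illustration, not production DSP code.

```python
import math

def peaking_eq_coeffs(fs, f0, gain_db, q):
    """Biquad coefficients (b, a) for a peaking/bell EQ band,
    following the RBJ Audio EQ Cookbook formulas."""
    amp = 10 ** (gain_db / 40)          # amplitude (sqrt of linear gain)
    w0 = 2 * math.pi * f0 / fs          # centre frequency in radians/sample
    alpha = math.sin(w0) / (2 * q)      # bandwidth, set by Q
    b0 = 1 + alpha * amp
    b1 = -2 * math.cos(w0)
    b2 = 1 - alpha * amp
    a0 = 1 + alpha / amp
    a1 = -2 * math.cos(w0)
    a2 = 1 - alpha / amp
    # Normalise so the recursive coefficient a0 == 1
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

# A +6 dB bell at 1 kHz, Q of 1.4, at a 48 kHz sample rate
b, a = peaking_eq_coeffs(fs=48_000, f0=1_000, gain_db=6.0, q=1.4)
```

Each band of an eight-band parametric EQ is just one of these biquads in series; a DAW plugin recomputes the coefficients whenever you drag a node.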

Since then, EQ has become a vital tool across a variety of audio fields, including live sound, post-production for film, and music creation. EQ capabilities increased even further with the introduction of digital audio technology, providing a wide range of filter types, adjustable curves and real-time analysis tools.

One of the more recent advancements in EQ technology is dynamic EQ, which marries dynamic processing with traditional EQ principles. By adapting to the characteristics of the incoming audio, it provides enhanced control over frequency balance, which is particularly beneficial for dynamic material such as music recordings or live performances.
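The core of a dynamic EQ band can be reduced to a simple gain law: measure the band’s level, and once it crosses a threshold, attenuate the excess at some ratio, just like a compressor acting on a single frequency band. The parameter names and numbers below are illustrative assumptions, not taken from any particular plugin.

```python
def dynamic_eq_gain(band_level_db, threshold_db=-18.0, ratio=3.0):
    """Attenuation (in dB) applied to one EQ band.

    Below the threshold the band is left untouched; above it, the
    excess is reduced compressor-style by the given ratio.
    """
    if band_level_db <= threshold_db:
        return 0.0                        # static EQ behaviour: no cut
    excess = band_level_db - threshold_db
    return -(excess - excess / ratio)     # e.g. 9 dB over at 3:1 -> -6 dB

# A band peaking 9 dB over the threshold is pulled down by 6 dB
cut = dynamic_eq_gain(-9.0)
```

This is why dynamic EQ shines on material like a vocal that only turns harsh on loud phrases: the cut engages only when the offending band actually exceeds the threshold.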

Eargle, John M., and Chris Foreman. “The Microphone Book.” Focal Press, 2004.
Rumsey, Francis, and Tim McCormick. “Sound and Recording: Applications and Theory.” Focal Press, 2014.
Katz, Bob. “Mastering Audio: The Art and the Science.” Focal Press, 2014.