
Week 10: Reflection and Submission

Over the past nine weeks, I have learnt plenty. By being placed in a new, uncomfortable environment and tasked with challenges I had never faced before, such as audiovisual analysis and dialogue editing, I have been able to research and study completely new production, recording and mixing techniques. One of the first of these was field recording, something I used not just on my London artefact but on my final project as well, using the H5 kit to record audio clips in high detail and then implement them into the short film I chose.

In this final class before we submit our work, our lecturer Diego gave us valuable feedback to incorporate before the deadline. It is crucial to get feedback from a fresh set of ears (especially from someone who does sound design for a living), because when you are working on a project like this, your ears can become desensitised to the mistakes or creative choices you have made. Since Diego has worked in the field for many years, he was able to tell me exactly where I went “wrong”, explain how I could fix it, and give me additional ideas for what I could add to improve my project overall.

For example, in the astronaut clip artefact, it was pointed out to me that as the astronaut moves through the ship, the sound of the sliding door closing behind him does not get any quieter as he moves further away. To fix this I added automation to lower the volume in time with the astronaut’s movement away from the door, as well as some reverb and pre-delay (as per Diego’s suggestion) to reinforce the dynamic soundscape of the spaceship interior and push the audio to the ‘back’ of the mix.

Here you can see the updated automation I used to do this.
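To sanity-check the curve I was drawing, it helps to remember that level falls off at roughly 6 dB per doubling of distance. Here is a minimal Python sketch of that rule of thumb; the hit points and distances are hypothetical values eyeballed from the picture, not measurements taken from the actual clip.

```python
import numpy as np

def distance_gain_db(distance_m: float, ref_m: float = 1.0) -> float:
    """Attenuation in dB for a source at distance_m, relative to ref_m,
    using the inverse-distance law (-6 dB per doubling of distance)."""
    return -20.0 * np.log10(max(distance_m, ref_m) / ref_m)

# Hypothetical hit points: seconds into the shot vs. the astronaut's
# distance from the door (eyeballed, for illustration only).
hit_points = [(0.0, 1.0), (1.5, 2.0), (3.0, 4.0), (5.0, 8.0)]

for t, d in hit_points:
    print(f"{t:4.1f}s  distance {d:4.1f} m  ->  automation {distance_gain_db(d):+5.1f} dB")
```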

For the second of the two ADR clips, Diego was able to show me how to effectively use the FabFilter plugin to EQ match the main villain’s voice lines. Before he showed me this I was struggling to match the tone, leading me to use more of the boom mic dialogue. The boom audio, however, provided a much less realistic and believable tone for the villain character (too much room noise and not enough bass). By using FabFilter and sticking to the character’s mic over the boom, I was able to bring back the more intense vocal sound for the character while keeping the match clean enough that there are no noticeable jumps in the tone of the vocal delivery.

The EQ curve from FabFilter that I used to match the vocal takes.
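For anyone curious what “EQ matching” is actually doing under the hood, here is a toy Python sketch of the idea: compare the average spectra of a reference take and the take you want to fix, and the dB difference between them is your starting EQ curve. This is only a rough stand-in for FabFilter’s matching feature, and the variable names are my own.

```python
import numpy as np
from scipy.signal import welch

def match_curve(reference, target, sr=48000):
    """Rough EQ-match curve: how many dB to boost or cut each band of
    `target` so its average spectrum lines up with `reference`."""
    f, p_ref = welch(reference, fs=sr, nperseg=4096)
    _, p_tgt = welch(target, fs=sr, nperseg=4096)
    eps = 1e-12  # avoid log of zero in silent bands
    return f, 10.0 * np.log10((p_ref + eps) / (p_tgt + eps))

# e.g. f, gain_db = match_curve(lav_take, boom_take)
# then approximate gain_db with a handful of parametric EQ bands.
```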

I was also able to receive feedback from my peers in class, who recommended I use additional equalisation on the forest footsteps artefact to cut the high end out of one of the footstep samples, making it sound more appropriate to the material the character is walking on. This is another detail that went completely unnoticed by me until it was pointed out.

This is the EQ I applied to the track.

Throughout these past weeks, writing and researching these blog posts has made me a lot more confident in myself and in my ability to create, mix and produce within the realm of post-production and sound design. You can see this evidenced in my work, which utilises a number of new skills such as field recording and sound processing. Over the weeks of class I have been able to improve on these skills, creating a more and more polished final project as I progressed through this term’s classes.


Week 9: Exports and Deliveries

Today we looked at preparing and exporting projects for delivery. When you are tasked with completing a project for someone, every client will have their own criteria or prerequisites you must abide by when it comes time to deliver the finished product. For example, Netflix requires all audio to be mixed at 48 kHz with a 24-bit depth. They also require a 5.1 mix; the stereo mix is completely optional. If you are given a project to complete without set criteria to follow, it is a good idea to ask, to make sure the customer is getting what they need.

Netflix’s prerequisites for deliveries

If you are mixing in 5.1, for example, you may need to downmix your project if the criteria you are given require it. Downmixing is the process of combining all the audio channels of a surround mix into a stereo mix, typically to ensure compatibility with playback systems that cannot support the original format while preserving the details of the stereo image on those systems. Some examples are a 5.1 movie soundtrack being downmixed for playback on a laptop with stereo speakers, or a multitrack music recording (with separate vocals, instruments, etc.) being downmixed into a standard stereo mix for distribution.

When it comes to folding down a mix from 5.1 to stereo, there are two commonly used methods: LoRo and LtRt. LoRo, meaning “left only, right only”, is the more commonly used method. In this procedure, the surround channels are brought forward into the front left and right (with the level reduced by 3 dB) and the LFE is discarded. LtRt, meaning “left total, right total”, is slightly more complex in its execution. This procedure works by combining the surround channels to create an “S” signal, which is then added to the mix −90 degrees and +90 degrees out of phase in the left and right channels respectively.

Diagram for downmixing
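To make the two fold-downs concrete, here is a minimal Python sketch of both, assuming the usual −3 dB (0.707) mix coefficients described above. Real encoders differ in their details, and the Hilbert transform is just one convenient way to get the ±90-degree phase shift.

```python
import numpy as np
from scipy.signal import hilbert

def downmix_loro(L, R, C, LFE, Ls, Rs):
    """LoRo: centre and surrounds folded into the fronts at -3 dB (0.707).
    The LFE is discarded and no phase manipulation takes place."""
    Lo = L + 0.707 * C + 0.707 * Ls
    Ro = R + 0.707 * C + 0.707 * Rs
    return Lo, Ro

def downmix_ltrt(L, R, C, LFE, Ls, Rs):
    """LtRt: the surrounds are summed into an 'S' signal that is phase-
    shifted 90 degrees (here via a Hilbert transform) and added out of
    phase between Lt and Rt, so a matrix decoder can steer it back out."""
    S = 0.707 * (Ls + Rs)
    S90 = np.imag(hilbert(S))      # 90-degree phase-shifted copy of S
    Lt = L + 0.707 * C - S90       # S at -90 degrees
    Rt = R + 0.707 * C + S90       # S at +90 degrees
    return Lt, Rt
```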

Both methods have advantages and disadvantages. If a sound mixed to a surround channel is encoded in LtRt, it will still be there when decoded. However, if played back undecoded, that sound may be inaudible because of its phase alignment with the L and R channels. This is not an issue with LoRo encoding: the phase is unaffected, and a new mix is created with the surround content folded into the L and R channels at lower overall levels, “creating a sense of distance without compromising the intelligibility of the program” (Media, 2023).

Each encoding format offers a different approach to stereo playback. LtRt is useful when it comes to working with a mono signal: in this format, a decoding matrix can be used to create a stereo image from a mono signal. The LoRo format, by contrast, is useful because it uses two separate signals to create a more in-depth stereo image, giving you greater flexibility when it comes to placing sounds in the stereo field.

Researching these methods and requirements will greatly help me downmix to stereo with ease, simplifying my delivery process and guiding me to stick to the right criteria. For my last surround sound mixing project in first year, I did not check the prerequisites for my submission delivery, leading me to receive a lower grade than expected. This time round, though, that will no longer be a problem because of the research I have done today.

Netflix | Partner Help Center. (2016). Post Production Branded Delivery Specifications. [online] Available at: https://partnerhelp.netflixstudios.com/hc/en-us/articles/7262346654995-Post-Production-Branded-Delivery-Specifications#h_01GBBMVW3XFY9FHESGX72KBJNK [Accessed 21 Jan. 2025].

Media, E. (2023). Downmixing LtRt vs LoRo. What’s the Difference? – Enhanced Media – Medium. [online] Medium. Available at: https://enhancedmedia.medium.com/downmixing-ltrt-vs-loro-whats-the-difference-d2071837e123 [Accessed 19 Jan. 2025].


Week 8: Mixing for Surround Sound

Today we delved into the challenge of multi-channel mixing for films, specifically 5.1. A big difference between 5.1 and standard stereo setups is the extra speakers to the rear of the listener. These extra speakers provide a much greater sense of depth, with the sound able to travel around you in 360 degrees. However, this setup comes with additional obstacles, mainly potential phase issues. Eric Dienstfrey provides a great example of this from Apocalypse Now and its Dolby sound system, writing that “mixing track six with tracks two and four would inadvertently mute sound effects intended for the rear loudspeakers”, a phenomenon Dolby consultant John Iles recalls as “signal cancellation” (Dienstfrey, 2016).

The speaker arrangement for Apocalypse Now

In 5.1 you are able to create a more immersive soundscape, making the listener feel as if they are inside the film itself and providing a contrast between the locations shown on screen. Eric Dienstfrey again provides an excellent example of this in his journal article, describing two scenes from the film Tron (1982): “The use of atmospheric effects accentuates the differences between Los Angeles and the computer world. For instance, when Kevin, Lora, and Alan converse in Kevin’s small apartment, their voices and the noises of the city remain in the front loudspeakers (0:19:10). In contrast, when Kevin, Tron, and Ram discuss their plan to attack Master Control, the buzzes of the mainframe’s neon lights emanate from every channel (0:48:15) (fig. 4). In order to emphasize the digital world’s stadium-like size” (Dienstfrey, 2016). In the apartment scene the sounds come directly from the front speakers, directly involving the viewer in the conversation. In contrast, in the second scene the sound arriving from all sides emulates the disorientation the characters would feel when placed in the centre of a stadium. As I said before, these details in the film’s sound design really help to evoke the locations shown on screen, vastly improving the viewer’s experience.

The Tron characters mentioned in the paragraph above

When it comes to mixing dialogue in 5.1, according to Diego, the raw dialogue is almost always placed on the centre channel, with the surround channels used to create the surround image by bleeding in the reverb from the actor’s vocals. If the dialogue tracks are instead sent to the surround speakers to match the actor’s position within the camera frame, the audio can jump around and become jarring if not done correctly. It is excellently utilised in the film “Gravity”, for example, with Sandra Bullock’s dialogue coming from all sides as she and the camera swing around in the depths of space. For a dialogue-heavy film, however, this may not be ideal: too many jumps or volume changes could ruin the flow of the film.

The set of Alfonso Cuaron’s Gravity

This was an issue I ran into when mixing my dialogue edit artefacts, specifically the casino scene. Though these clips are in stereo rather than 5.1, the point I made above still applies. I initially tried to match the dialogue volume and tone to the distance of the characters from the camera for realism. After completing my mix and listening back, however, I realised it didn’t sound right. I compared my mix to the poker scene from ‘Casino Royale’, since they were stylistically similar, and noticed that no matter how far the camera is from the characters or how loud they are speaking, the volume of their dialogue stays at relatively the same level. A volume change is best reserved for emphasising a certain aspect of the scene, intentionally drawing the viewer’s focus to that point. I used these observations to improve my dialogue edit by toning down my use of perspective, making the scene a smoother watch.

A shot from the poker match scene in Casino Royale

I don’t plan on mixing any of my current projects in surround sound, but when the time comes for me to start, the information I have learnt from today’s class and through my research will be invaluable in overcoming the challenge that is multi-channel mixing. Being able to design and mix a project in surround is a crucial and valuable skill to have, as almost everything on screen nowadays is mixed for at least a 5.1 speaker setup. It is also useful to know that anything mixed in surround can be folded down into a stereo session with ease in Pro Tools, something I didn’t know about last term when I was mixing my project in 5.1.

Dienstfrey (2016). The Myth of the Speakers: A Critical Reexamination of Dolby History. Film History, 28(1), p.167. doi:https://doi.org/10.2979/filmhistory.28.1.06.


Week 7: Dialogue Editing pt. 2

In this week of class, we continued with dialogue editing, expanding on our newfound knowledge from last week. When you are handed a project (such as the one I am tasked with, for example), you are almost always working under a strict time constraint. That is why in today’s class we had a look at time-saving techniques to make the mixing and editing process easier and quicker while still producing the same quality of product.

Starting where I left off in our last studio session, the AAF had been uploaded and synced perfectly with the timecode, and all the audio files were sorted into groups (MX, DX and FX) and subgroups (for each character, etc.). As I mentioned last week, dialogue editing is not always easy and it can take a lot of time, which is why in this session I created a mix template. The template will be greatly useful for future ADR mixing sessions: I won’t have to spend extra time creating new folders and subgroups, as it will all be there for me when I open a new session.

My organised Pro Tools session

Because I have sorted the dialogue clips onto a track per character, I am able to apply plugins like a de-noiser to all of a character’s dialogue at once rather than going clip by clip, which saves a lot of time. This doesn’t mean I never add plugins or edits directly to the clips themselves; where required, I still do. In the short clip I was given there are some vocal lines from the same characters that need balancing together. Some lines of dialogue from the operator have a pre-existing radio vocal effect on them, so I had to make sure that any clips that needed this effect but lacked it were mixed to sound similar enough not to distract potential viewers. Initially I tried using FabFilter’s EQ match feature to accomplish this, but I could not get the plugin to work correctly. Instead, I used the stock Pro Tools EQ plugin to emulate the radio vocals, eliminating everything but the midrange frequencies to create that boxy vocal effect.
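As a rough illustration of what that EQ move amounts to, here is a small Python sketch that band-passes dialogue to the classic ‘radio’ band. The exact corner frequencies I dialled in during the session differed; the 300 Hz–3.4 kHz range here is the usual telephone-bandwidth convention, not a value taken from my mix.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def radio_voice(audio: np.ndarray, sr: int = 48000,
                low_hz: float = 300.0, high_hz: float = 3400.0) -> np.ndarray:
    """Emulate a radio/comms voice by keeping only the midrange:
    a 4th-order Butterworth band-pass strips the lows and highs."""
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=sr, output="sos")
    return sosfilt(sos, audio)
```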

Certain elements of the dialogue are slightly louder than others within each vocal delivery. To deal with this I used clip gain to balance out each syllable where needed. In certain cases this can be vastly time-consuming, so to combat this and save precious time, I wrote in automation changing the volume levels of the actors’ voices as the clip played. Using the digital desk in studio 3 and the Touch/Latch automation modes in Pro Tools, I was able to ride the vocals to a smooth level that I was content with.

Some of the gain editing I have applied to the boom mic channel
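The fader rides I wrote by hand can also be approximated offline. Here is a minimal sketch, under the assumption of a single fixed target level, that measures short-term RMS and suggests per-window gain moves; in practice I drew these moves with Touch/Latch rather than generating them.

```python
import numpy as np

def suggest_gain_rides(audio, sr=48000, target_rms_db=-20.0, win_ms=50.0):
    """Suggest (time_s, gain_dB) automation breakpoints that would bring
    each short window of dialogue up or down to a target RMS level."""
    win = int(sr * win_ms / 1000)
    moves = []
    for i in range(len(audio) // win):
        chunk = audio[i * win:(i + 1) * win]
        rms_db = 20 * np.log10(np.sqrt(np.mean(chunk ** 2)) + 1e-12)
        moves.append((i * win_ms / 1000, target_rms_db - rms_db))
    return moves
```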

Learning these techniques in this week’s class and implementing them in my project will be a great aid when it comes to sound designing the short film I chose. Since I have a strict deadline to abide by, the techniques I have utilised today will help guarantee I use my time in the studio effectively.


Week 6: Dialogue Editing

Today we looked at dialogue editing in films. This is an important part of the post-production process that focuses on cleaning up all the sound issues from the set and smoothing out the actors’ vocal performances. This matters because smooth, articulate dialogue is one of the foundations of any movie and the first step in a sound mix.

In class, we were tasked with editing two scenes from a short film. After loading in our AAF file from week two and lining up the timecodes, I organised all the tracks and audio clips by copying them into groups and subgroups, using the shortcut Ctrl + Option while dragging clips. This lets me keep the original copies from the AAF to compare back to, as well as retrieve a fresh unedited clip if I make a mistake. The shortcut also has the added benefit of locking the clips in place, so as I drag them to a new track there is no chance of them going out of sync. This will be useful for organising and cleaning up my Pro Tools session, making my mixing process easier in the long run, especially since I’m working with a strict time constraint in each studio session. It will also make it a breeze to export separate audio stems, since all the audio is organised into the correct subgroups.

Dialogue editing is not always an easy process, but it’s a crucial and necessary one. You will spend a great amount of time selecting the best microphone sources, smoothing background noise from cut to cut, and removing non-dialogue production sound effects for use on their own tracks. Empty spaces are filled with room tone, and unwanted sounds like heavy mouth clicks or noises that distract from the viewing experience are taken out.

Through research, I discovered a book to help guide my workflow. Titled “Dialogue Editing for Motion Pictures: A Guide to the Invisible Art”, this book by John Purcell provides a detailed breakdown of the dialogue editing process, from preparing the edit through to finalising the dialogue tracks. The book has helped me understand not only the “how” but also the “why” behind each step in the editing process.

For example, in this passage he discusses shot balancing, a key aspect of dialogue editing that ensures the dialogue sounds consistent even when recorded in different environments or conditions. The focus on creating a “living scene” reinforces the idea that dialogue editing is as much about storytelling as it is about technical precision. Effective shot balancing removes the “mechanics of filmmaking” to immerse the audience fully in the story. It highlights how editing choices influence the audience’s perception of the dialogue, ensuring it feels natural and contributes to the scene’s authenticity.

This is a piece of information I called back to numerous times while mixing my dialogue editing artefacts. I was able to use it as a guide for my editing process; since this was the first time I had taken on such a task, it was a key asset in giving me an initial picture of what I had to do, bypassing my creative block and helping me produce a believable dialogue edit.

Purcell, J. (2015). Dialogue editing for motion pictures: a guide to the invisible art. Focal Press.

Enhanced Media. (n.d.). What is dialogue editing and why is it important for your film? | Enhanced Media – Audio Post Production Company. [online] Available at: https://enhanced.media/blog/2021/10/6/what-is-dialogue-editing-and-why-is-it-important-for-your-film.


Week 5: Foley Sound Design

In this week of class, we looked at using props and materials to record foley. This is a vital step in the post-production process in which already recorded sounds are replaced and enhanced to underscore the visual effect or action on screen. An example of this is fist-fight scenes in action movies, which are usually staged by the stunt actors and therefore do not have the actual sounds of blows landing.

Foley studio

We used various objects to create foley for two separate clips. In the first clip, a man walks through a forest with a backpack. Each of us took turns recreating and adding sounds to the session, recording footsteps, fabric movements, the shuffling of bags, etc., using a shotgun mic for each. We had to do this a few times to get every footstep and rustle in time with the video.

Sound designer recording footsteps

With the next clip, however, we were tasked with recreating an on-screen zombie bite. We had to use creative sound substitutions, or “foley artistry”, to get the desired sound. Some of the things we used included chewing chocolate for the initial bite, snapping celery to create the sounds of bones breaking, and peeling oranges to emulate the tearing of flesh.

This is a technique that is vital for me to learn and experiment with, because it lets me emphasise the emotion or tone of my project. For example, exaggerated or stylised sounds, like the use of celery to emulate breaking bones, are highly effective in creating a more dramatic experience, as well as giving me the ability to manipulate the audio to suit the narrative and visuals of my projects. The piece of text below, from the article “Sync Tanks: The Art and Technique of Postproduction Sound”, backs up my claim.

(Weis, 1995)

In the article “Unpacking a Punch: Transduction and the Sound of Combat Foley in Fight Club”, the author describes many of the sound substitution methods used to generate the punching sound effects for the film. The pieces of text pictured below show how sound substitutions were used creatively to emphasise the aggression in each blow.

(Hagood, 2014)

Foley isn’t just using objects to make sounds, but also taking those sounds and editing them. In the second clip, for example, to make the bite more convincing, the audio was pitched down an octave or so and some equalisation was added. Another thing we did was record “groaning” sounds and use a plugin called Reformer (which we first used in week 3) to completely transform the vocalisations using its library of sound emulators.

(Weis, 1995)
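That pitch-down move is easy to prototype outside the studio too. Here is a minimal Python sketch using librosa; the filenames are hypothetical stand-ins, and in a real session this would be done with the DAW’s own pitch tools instead.

```python
import librosa
import soundfile as sf

# Drop the bite recording by an octave (12 semitones) without changing
# its length, ready for EQ afterwards. "zombie_bite.wav" is a stand-in name.
y, sr = librosa.load("zombie_bite.wav", sr=None)
y_low = librosa.effects.pitch_shift(y, sr=sr, n_steps=-12)
sf.write("zombie_bite_down.wav", y_low, sr)
```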

Hagood, M. (2014). Unpacking a Punch: Transduction and the Sound of Combat Foley in Fight Club. Cinema Journal, 53(4), pp.98–120. doi:https://doi.org/10.1353/cj.2014.0048.

Weis, E. (1995). Sync Tanks: The Art and Technique of Postproduction Sound. Cineaste, 21(1/2), pp.56–61.


Week 4: Field Recording and Ambience

This week in class, we were given two new short clips to design sound for: a drone view of London and a short animation of an astronaut in space. With this, I was handed an H5 stereo field recorder and told to spend the next hour exploring Elephant and Castle gathering sounds.

Field recording is the act of capturing sounds and audio recordings in an open environment without the aid of a studio. Recording in the “field” means working outside, with no walls, booths or control room filled with audio engineers and high-tech computers. Field recording is all about capturing audio straight from the source.

What’s great about the H5 recorder is its X/Y stereo mic pattern. The X/Y pattern is the most commonly used stereo technique. Because the two capsules sit at almost the same point, angled apart, the stereo image comes mainly from the level differences between them rather than from time delays, which keeps the recording largely free of phase problems while still providing a deep sense of ambience.
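To illustrate how a coincident pair encodes direction purely as a level difference, here is a small Python sketch assuming ideal cardioid capsules angled 90 degrees apart; real capsules are messier, so treat the numbers as indicative only.

```python
import numpy as np

def xy_capsule_gains(source_deg, capsule_angle_deg=45.0):
    """Level each capsule of a 90-degree X/Y pair picks up from a source
    at source_deg. Because the capsules are coincident, direction is
    encoded purely as a level difference, with no time-of-arrival delay."""
    def cardioid(theta_deg):
        return 0.5 * (1 + np.cos(np.radians(theta_deg)))
    left = cardioid(source_deg + capsule_angle_deg)   # left capsule aimed -45 deg
    right = cardioid(source_deg - capsule_angle_deg)  # right capsule aimed +45 deg
    return left, right

for angle in (-90, -45, 0, 45, 90):
    l, r = xy_capsule_gains(angle)
    print(f"source {angle:+4d} deg  L {l:.2f}  R {r:.2f}")
```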

Before I went out to record, I watched the clip through a couple of times while noting down important ‘hit points’ (as you might call them) to collect sounds for. By doing this, I could record everything I needed without having to go back out and record more, saving myself a lot of time and energy. For example, there was plenty of visible traffic moving from left to right in the clip, so I parked myself in front of a busy street and recorded the passing cars.

Since it was my first time using a field mic, there was a slight learning curve. I learnt this the hard way, with a bunch of my first recordings being completely unusable because I hadn’t gain-staged the microphone correctly. Another thing I figured out was that the mic was very directional, and I had to point it at exactly what I wanted to hear or else it wouldn’t be picked up. I also had to be careful not to use too much gain, as the field recorder could capture sounds from surprisingly far away that would bleed into the audio. Luckily for me, the clip I was given benefited from this, with the ambience I recorded capturing all the minor details of what you hear walking through the streets of London.

For the astronaut clip, there were some sounds I wanted to add that would have been almost impossible to recreate myself so I utilised an application I found called Soundly, which gave me the ability to download ready-made sound effects in an instant. This will be useful for me down the line when I’m struggling to get the right tone for a piece of foley.

I came back, loaded everything I recorded into Ableton, and spent the next hour or so lining up all of the usable audio clips and panning them to create my sound design piece. When it came to exporting, I ran into a challenging issue involving the codec of the video clip, so after troubleshooting with no success, I had to use another application to export it. Diego recommended I try DaVinci Resolve, so I installed it and roughly figured out how to use it so that I could successfully export my project.

Field (2020). Acoustic Nature. [online] Acoustic Nature. Available at: https://acousticnature.com/journal/what-is-field-recording [Accessed 2 Dec. 2024].

www.sfu.ca. (2020). Field Recording. [online] Available at: https://www.sfu.ca/sonic-studio-webdav/cmns/Handbook%20Tutorial/FieldRecording.html.


Week 3: Synthesising, Layering and Processing

In this first artefact workshop, we were tasked with providing sound for two clips. We had to take these short animations and bring them to life using different sound editing, layering, and synthesis techniques that we were taught in class. This was a great opportunity for me to expand my knowledge of sound design and prepare for the final project submission.

The first clip I was tasked with completing was a user interface loading screen. I had to create all the sound design for this using nothing but sound synthesis. Since I had already experimented heavily with synthesis in my first-year projects, this was a breeze for me. I used three different synths, Operator, Wavetable and Analog, to create swirling pads and rumbling bass motifs that lined up with the movement of the animation, each synth producing a unique layer to add to the overall sound.

For the basis of the artefact I used two different sounds from the Analog synth to provide an underlying drone layer. I chose Analog for this because of the warm tone it produced; it was also easier to use for producing a full-sounding pad than the other synths, which was a welcome bonus. I also used it to create a thumping bass that lined up with the arrows bursting out in the animation, which added character to the video and provided a sense of movement in the image.

One of the Analog synth sounds I created.

I used an arpeggiator paired with the Operator synth to animate the lightbulb forming. I wanted a sound that emulated the small pieces pulling together, and the fast arpeggiator was useful for creating that. I used Operator to get a clean, thin tone to match the material of the glass lightbulb, something the Analog synth might have struggled to replicate.

The Operator synth I designed.
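Since Operator is an FM synth, that ‘clean, thin’ tone comes from a sine modulator driving a sine carrier. Here is a minimal Python sketch of the two-operator idea; the frequencies, ratio and envelope are made-up values for illustration, not the settings from my actual patch.

```python
import numpy as np

def fm_tone(freq=880.0, ratio=3.0, index=1.5, dur=0.25, sr=44100):
    """Two-operator FM voice: a sine modulator wobbles the phase of a
    sine carrier. A high ratio plus a fast decay gives a thin, glassy pluck."""
    t = np.linspace(0, dur, int(sr * dur), endpoint=False)
    modulator = np.sin(2 * np.pi * freq * ratio * t)
    carrier = np.sin(2 * np.pi * freq * t + index * modulator)
    return carrier * np.exp(-t * 18.0)   # percussive decay envelope

# A rising arpeggio, loosely like the lightbulb-forming gesture:
notes = [880 * 2 ** (n / 12) for n in (0, 4, 7, 12)]
arp = np.concatenate([fm_tone(f) for f in notes])
```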

In the next clip, I was given a choice between three different character animations to bring to life through sound design. I chose the steampunk robot. In this clip, we see a robot walking on what seems to be a metallic surface (perhaps a submarine exterior?) on the ocean floor.

To obtain some of the sounds for the robot’s vocalisations and movements, I used a plugin called Reformer. This plugin allows you to perform pre-recorded audio with a microphone, styling and shaping the sound to match what is happening on screen. It uses an algorithm to choose the splices of clips that best suit your performance, with several sound libraries to choose from.

I used the electric setting to emulate the sparks inside the robot’s head. This is something that would be fairly difficult for me to replicate myself or find in a sound library, so by using the Reformer plugin I am able to create and design sounds to fit my narrative, saving vast amounts of time and stress compared with trying to record them myself.

The reformer settings I used

You can see a perfect example of Reformer being used by sound designer Tsvi Sherman in the YouTube video linked below. In the video he uses the plugin to design foley, using his vocal input to sculpt the dynamics of a scene from the video game ‘The Witcher 3’. As you can see, the plugin is simple, effective and very quick to use.

Sometimes, instead of recording the desired object for the action you want, it is more beneficial to layer multiple different sounds on top of one another to create a more dramatic effect. For something like film or TV, the authentic sound can often be perceived as dull or uneventful. This is where layering comes in handy to create more dynamics and add depth to the foley.

A great example of this is in the film ‘Jurassic Park’. In the sound design commentary for the film, sound designer Gary Rydstrom analyses all of the layers that went into designing the sound of the T-Rex. Showing us the dinosaur’s first on-screen appearance in the film, he explains how they used sounds from all sorts of animals, including a lion, a whale and, most importantly, a baby elephant. All of these sounds were combined to create the magnificent roar you hear from the dinosaur, giving an already intense scene a terrifying edge.

As I mentioned earlier, with authentic sounds sometimes being perceived as dull or boring, Gary explains in the video that if they had designed the sound of the T-Rex the way it would have sounded realistically, the only noise it would make would be the “gurgling of its stomach”, leading to a much less intense moment in the film. This is why you occasionally have to over-emphasise certain aspects of the sound design to really draw a reaction from viewers. You can see this video linked below.


Week 2: Audiovisual Analysis

Today in class we discussed elements and methods of analysing sound design, in particular the observation methods of renowned film theorist Michel Chion.

Audiovisual analysis is important when it comes to understanding the ways in which a scene, or an entire movie, uses sound and images in combination. By analysing such details we can “deepen our aesthetic pleasure and understand the rhetorical power of films” (Chion, 1994).

What Chion is implying here is that by engaging in such analysis I am able to uncover a film’s messages and the techniques it uses to convey them through images and sound design, while also heightening my appreciation of a film’s artistic qualities.

In his book “Audio-Vision: Sound on Screen” he details a method of audiovisual analysis that he coined the “masking” method. The idea is to watch a scene multiple times: first with sound and image together, then again with the sound muted, and then once more with the image hidden instead.

As he says in his book: “This gives you the opportunity to hear the sound as it is and not as the image transforms and disguises it; it also lets you see the image as it is and not as sound recreates it” (Chion, 1994). Chion’s point is that sound can “mask” or obscure certain aspects of a film and direct your attention to specific elements within a scene, essentially manipulating the audience’s perception of what they see on screen.

He uses the 1963 film “The Silence” as an example in his book: “We only need cut the sound to demonstrate that here we have a typical effect of added value. Without sound, the tank appears to move regularly and without difficulty on its treads, aside from one brief pause. It’s the sound that makes us view, or rather, audio-view, a tank that’s seen better days and moves with difficulty” (Chion, 1994). This is a key part of his theory of the relationship between sound and image in film, where sound can contribute significant “added value” to the visual experience by influencing how the viewer interprets the scene.

We put this experiment to the test in class by watching scenes from a few films and then discussing our observations. One example we looked at was the opening scene from “Mad Max: Fury Road”. During the first half of the scene there is a heavy focus on the diegetic music being performed. Without the audio I had trouble grasping the fast-paced, angsty nature of the film; however, it allowed me to imagine how the scene might sound based purely on the images shown to me. After watching with audio and comparing, the idea I had built up in my head bore only a passing resemblance to the final product (I had potentially been ‘hearing’ sounds that weren’t actually there).

Something else the author mentions in his book is the relationship between music and image, and how differing tones can completely change how a scene is conveyed to the audience. He calls this the arranged marriage of sound and image: “Changing music over the same image forcefully illustrates the phenomena of added value, synchresis, sound-image association, and so forth. By observing the kinds of music the image ‘resists’ and the kinds of music cues it yields to, we begin to see the image in all its potential signification and expression” (Chion, 1994).

We looked at this experiment with a clip from the film “No Country For Old Men”. Listening to the clip with different soundtracks conveyed a different tone from the original, sometimes completely altering my perspective on what I was watching. What’s more important in this scene, though, is that the lack of music (and dialogue) in the original showcases that the deliberate omission of music can sometimes evoke emotion more effectively than music itself, in this case amplifying the uneasy suspense and emphasising the overall gritty nature of the movie.

These are vital experiments that are likely to guide my research and give me countless ideas for sculpting the sound of my post-production artefacts.

Chion, M. (1994). Audio-vision: Sound on Screen. New York: Columbia University Press.


Week 1: Audio Post Production

Audio post-production is the creation and manipulation of sounds that are synchronised with a moving image, specifically during the post-production phase after everything has been filmed. This includes sound design, the creation of sound effects, and automatic dialogue replacement. In class we took a look at the technological shift from analogue to digital, which has drastically changed the way sound is edited, mixed and produced.

In the early days of film sound, sound effects, dialogue and music were recorded onto magnetic tape. Engineers had to manually cut and splice these elements and then layer them onto a reel-to-reel tape machine, while also mixing these clips on analogue desks. This process was physically demanding and time-consuming, as any edit required precision in cutting physical tape and reassembling it, with each splice being irreversible. However, even with these significant limitations, vintage tape machines are still used today for their warm, dynamic tone.

In class we looked at the evolution of sound design in cinema by comparing four separate versions of the final scene of King Kong. By doing this I was able to grasp how far sound design technology has come: starting in 1933 with a completely mono mix and only three layers of audio for foley, dialogue and music; continuing to 1976, with some of the first uses of stereo surround sound; and then up to the late 2000s, with full 7.1 setups and dozens of layers of foley.

The latest version of King Kong uses a newer technology called Dolby Atmos. It expands upon the existing 5.1 and 7.1 surround-sound set-ups by adding sound channels coming from overhead, engulfing the audience in a dome of audio. With Atmos you can produce up to 118 simultaneous sound objects, allowing the sound designer to place each individual sound and piece of dialogue at a specific point in the sound field rather than routing it to certain channels. These objects can be moved and manipulated within the space, generating a 3D sound field (Roberts, 2022).

In conclusion, sound design technology has come a long way. From tape machines to Fantasound to Dolby Atmos, these technologies have reshaped the film and music industries, giving us new opportunities to create art.

Roberts, B. (2022). Dolby Atmos: What is it? How can you get it? [online] whathifi. Available at: https://www.whathifi.com/advice/dolby-atmos-what-it-how-can-you-get-it.