Category: Solent Acoustics
The following post gives a brief summary of a research paper submitted to Reproduced Sound 2018, written primarily by Ludovico Ausiello with contributions from Lawrence Yule, Giacomo Squicciarini, and Chris Barlow.
A system for performing fast, accurate and objective assessment of the time-frequency response of guitar soundboards has been developed. It applies the sine-sweep method commonly used to retrieve impulse responses of acoustic spaces and electro-acoustic devices.
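The sine-sweep technique mentioned above can be sketched in a few lines. The snippet below is a minimal illustration of the general method (after Farina's exponential-sweep formulation), not the authors' actual measurement code; function names and parameter values are ours. A logarithmic sweep is generated together with its matching inverse filter, and convolving a recording of the sweep with that filter collapses the energy into an impulse response.

```python
import numpy as np

def exp_sweep(f1, f2, duration, fs):
    """Exponential sine sweep from f1 to f2 Hz, plus its inverse filter.

    The inverse filter is the time-reversed sweep with a decaying
    amplitude envelope that compensates for the extra time the sweep
    spends at low frequencies.
    """
    t = np.arange(int(duration * fs)) / fs
    rate = np.log(f2 / f1)
    sweep = np.sin(2 * np.pi * f1 * duration / rate
                   * (np.exp(t * rate / duration) - 1.0))
    inv = sweep[::-1] * np.exp(-t * rate / duration)
    return sweep, inv

fs = 48000
sweep, inv = exp_sweep(20.0, 20000.0, 2.0, fs)

# In a real measurement, `sweep` is played through the device or room and
# the microphone recording replaces it here; convolving with the inverse
# filter yields the impulse response.
ir = np.convolve(sweep, inv, mode="full")
peak = int(np.argmax(np.abs(ir)))
```

In this ideal loopback case (recording equals sweep), the deconvolution peak lands at the end of the sweep, confirming the pseudo-delta.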
Welcome to RM707, the newly configured spatial audio lab at Solent University! The loudspeaker installation here allows us to pan sounds around in full surround sound using a system known as ambisonics. One of the major drawbacks of traditional surround-sound formats (5.1, 7.1, etc.) is that they are channel-based: the loudspeakers have to be oriented in a particular way (to the ITU specification), and when mixing there are only a limited number of positions to work with. Ambisonics allows us to decode audio to any loudspeaker configuration, including loudspeakers positioned at different heights, and to pan sounds to positions in space rather than just to particular channels.
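To give a flavour of how ambisonic panning works, here is a minimal sketch of first-order B-format encoding using the traditional FuMa conventions; it is illustrative only, not the lab's actual toolchain. A mono source is panned to an azimuth and elevation by weighting it onto four channels (W, X, Y, Z) that together describe the sound field, and those channels can later be decoded to whatever loudspeaker layout is available.

```python
import numpy as np

def encode_first_order(signal, azimuth_deg, elevation_deg):
    """Encode a mono signal into first-order B-format (FuMa W, X, Y, Z).

    Azimuth is measured anticlockwise from straight ahead, elevation
    upward from the horizontal plane.
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = signal / np.sqrt(2.0)             # omnidirectional component
    x = signal * np.cos(az) * np.cos(el)  # front/back figure-of-eight
    y = signal * np.sin(az) * np.cos(el)  # left/right figure-of-eight
    z = signal * np.sin(el)               # up/down figure-of-eight
    return w, x, y, z

# Pan a unit sample straight ahead, then hard left
s = np.array([1.0])
front = encode_first_order(s, 0.0, 0.0)
left = encode_first_order(s, 90.0, 0.0)
```

Note that the panning direction lives entirely in the channel weights, which is exactly why the format is independent of any particular loudspeaker layout.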
Sonic booms are something that most people will have heard at some point in their lives, perhaps from planes passing by at an airshow, or from a bullwhip (yes, the tip travels faster than the speed of sound!). But what exactly are they, and how do they produce such an incredible noise? This post explores the acoustics of sonic booms.
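The geometry behind a sonic boom is surprisingly simple: a source travelling at Mach number M drags behind it a shock cone whose half-angle μ satisfies sin μ = 1/M. A small illustrative calculation (the speed of sound value is an assumption for dry air at around 20 °C):

```python
import math

def mach_angle_deg(speed_mps, speed_of_sound_mps=343.0):
    """Half-angle of the Mach cone in degrees: sin(mu) = 1/M."""
    mach = speed_mps / speed_of_sound_mps
    if mach <= 1.0:
        raise ValueError("no shock cone below Mach 1")
    return math.degrees(math.asin(1.0 / mach))

# An aircraft at Mach 2 (686 m/s) trails a 30-degree cone
angle = mach_angle_deg(686.0)
```

The faster the source, the narrower the cone, which is why the boom from a high-Mach aircraft sweeps the ground along a relatively tight "carpet".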
In many cases the use of hearing protection is essential for protecting yourself from loud and potentially dangerous noise, but there are many different types of hearing protection available for many different scenarios. Knowing which type of protection is the most appropriate for you is important, and the guide below should help you make a more informed decision about which to use and when.
The modern sound level meter is a powerful tool with many useful functions, but what are the most important things to know? This post aims to serve as a simple-to-follow guide.
Written by Lee Davison
As acousticians, we know (or like to think!) that the sound around us affects us in ways that most people don’t realise. Whether it’s reverb in your classroom that means you can’t hear the teacher properly, or in the shower making you think you’re a great singer, the acoustic spaces around us have a pretty profound, and often unnoticed, effect on the way we experience life.
This makes you wonder what the ideal acoustic specification for a space is. What’s the best reverb time for music, or the best noise level for concentrating, or perhaps being creative? This is the question that Ravi Mehta, Rui Zhu and Amar Cheema set out to answer in their 2012 paper: “Is Noise Always Bad? Exploring the Effects of Ambient Noise on Creative Cognition”.1
Have you ever covered your ears with your hands to protect yourself from loud noise? That’s the closest to natural hearing protection that we’ve got, but just how much does it reduce the sound pressure level reaching your ear? And what’s the best method? This experiment aims to find out.
In our previous reverberation time measurement tutorial an impulse response, created by bursting a balloon, was used as the measurement signal. This is a quick and simple way of carrying out a reverberation time measurement, but it may not be the most accurate. In this tutorial we will look at an alternative method that can provide improved results.
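Whichever signal is used to capture the impulse response, the reverberation time itself is commonly read off a Schroeder backward-integrated decay curve. Here is a minimal illustrative sketch of that evaluation step, not this tutorial's exact procedure; it estimates RT60 from the −5 dB to −25 dB portion of the decay (a T20 fit) of a synthetic impulse response with a known reverberation time.

```python
import numpy as np

def schroeder_decay_db(ir):
    """Schroeder backward-integrated energy decay curve in dB."""
    energy = np.cumsum((ir ** 2)[::-1])[::-1]
    energy = energy / energy[0]
    return 10.0 * np.log10(np.maximum(energy, 1e-12))

def rt60_from_t20(decay_db, fs):
    """Fit the -5 to -25 dB range of the decay and extrapolate to -60 dB."""
    i5 = int(np.argmax(decay_db <= -5.0))
    i25 = int(np.argmax(decay_db <= -25.0))
    t = np.arange(len(decay_db)) / fs
    slope, _ = np.polyfit(t[i5:i25], decay_db[i5:i25], 1)
    return -60.0 / slope

# Synthetic impulse response decaying 60 dB every 0.5 s
fs = 1000
t = np.arange(fs) / fs
ir = 10.0 ** (-3.0 * t / 0.5)
rt = rt60_from_t20(schroeder_decay_db(ir), fs)
```

Fitting a limited range of the curve avoids both the direct-sound peak at the start and the noise floor at the tail, which is why standards favour T20 or T30 fits over reading the full 60 dB drop directly.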
The World Health Organisation states that loud noise is the single biggest preventable cause of hearing loss in the UK. Advances in portable media player technology mean that users can now store and play music for much longer, creating a huge potential risk of overexposure to noise from these devices. It is now estimated that over 4 million young people in the UK are suffering the effects of noise-induced hearing loss from listening to amplified music.
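For context, the standard way to compare listening sessions of different lengths is the equivalent 8-hour daily exposure level, L_EP,d = L_Aeq + 10·log10(T/8), where T is the exposure duration in hours. A minimal illustration (the function name is ours):

```python
import math

def daily_exposure_level(laeq_db, hours):
    """Equivalent 8-hour daily exposure: L_EP,d = L_Aeq + 10*log10(T/8)."""
    return laeq_db + 10.0 * math.log10(hours / 8.0)

# One hour of music at 94 dB(A) gives roughly the same daily dose
# as a full 8-hour working day at 85 dB(A)
one_hour = daily_exposure_level(94.0, 1.0)
eight_hours = daily_exposure_level(85.0, 8.0)
```

The logarithmic trade-off is the key point: every halving of listening time buys only 3 dB of extra safe level, so loud earphone listening accumulates dose quickly.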
Research carried out by the Massachusetts Institute of Technology, along with Microsoft and Adobe, has been used to extract audio data from video by analysing the tiny, imperceptible vibrations that occur in objects when they are subjected to sound.