
'Dynamic range mis-match'?


Thanks. So am I right in my understanding that the 'resolution' is ultimately set by the microphones, and that 16-bit is more than is needed to sample/capture that analogue signal? Is that related to the SNR of the mic?

I suppose in my case there must be a limiting SNR for the photomultiplier tubes?

Indeed. Condenser microphones have a certain amount of self-noise. Small mics have more self-noise but tend to have a smoother frequency response; large mics are more coloured but produce more output. That's why recordings are done with a wide range of microphones. A kick-drum mic needs to be able to handle much higher SPLs without distorting than a microphone that might be used for orchestral capture as part of a crossed pair or Decca Tree.

The microphone's self-noise sets the lower limit, say 14dB-A, and the distortion of the mic sets the upper limit, say 134dB (for this example I used the AKG C414 B-ULS). This gives a potential dynamic range of 120dB, but as mentioned before, this is way in excess of the dynamic range available in the studio. What I do when recording is to set the mic-amp gain so that the loudest bits in rehearsal get to -10dBFS. This allows 10dB of headroom for the musicians playing that bit louder when it's for real rather than at rehearsal, and allows for the occasional unexpected peak which didn't come out at rehearsal.
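To make the gain-staging arithmetic concrete, here's a minimal sketch (a hypothetical illustration only, using the C414 figures and the -10dBFS target described above; the variable names are mine):

```python
# Illustrative only: mic dynamic range and recording headroom,
# using the AKG C414 B-ULS figures quoted in the text above.
self_noise_dba = 14      # dB-A: the mic's self-noise sets the floor
max_spl_db = 134         # dB SPL: the mic's distortion limit sets the ceiling
dynamic_range = max_spl_db - self_noise_dba
print(dynamic_range)     # 120 (dB)

# Gain staging: aim the loudest rehearsal peaks at -10 dBFS,
# leaving headroom for the musicians playing louder on the take.
rehearsal_peak_dbfs = -10
headroom_db = 0 - rehearsal_peak_dbfs
print(headroom_db)       # 10 (dB)
```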

When playing back the recording, the acoustic noise at the recording venue, whether traffic, air con or musicians/audience shuffling is hugely in excess of the noise floor of my microphones and mixer. I cannot hear any hiss at all in my recordings, the background noises swamp these.

S.


OK... and the distortion limit of a mic is (depending on mic tech?) similar to the excursion limits of speaker drivers, and/or something else?

I guess the maximum limit on the PMT tubes I deal with would be if something was still saturating the tubes at the lowest settable laser power with zero gain... I'm presuming saturation is identified electronically, or by software recognising that the number of photon events has gone beyond the linear working range of the tube...

I never work with anything that bright...

...also, I guess 16-bit was chosen for a reason? :P

It was, as it was all that was available at the time; if I remember correctly, the first CD players were in fact using 14-bit DACs. Ultimately you should get higher resolution using 24 bits to describe a waveform than with a 16-bit system, especially with the increased sample rate, but as said above, it doesn't necessarily mean that you'll get a higher dynamic range capability, for all the reasons mentioned above.

OK... and the distortion limit of a mic is (depending on mic tech?) similar to the excursion limits of speaker drivers, and/or something else?

I guess the maximum limit on the PMT tubes I deal with would be if something was still saturating the tubes at the lowest settable laser power with zero gain... I'm presuming saturation is identified electronically, or by software recognising that the number of photon events has gone beyond the linear working range of the tube...

I never work with anything that bright...

Distortion limits on condenser microphones are dependent on diaphragm excursion and the internal amplifier's clipping limits. I suspect the latter comes first, as the physical movement of the diaphragm is very small indeed. Many microphones have a -10dB or even a -20dB switch which puts a pad between the microphone capsule and amplifier to allow higher SPLs. This is common when a microphone is placed on the bell of a saxophone or inside a kick drum. Local SPLs of over 120dB are possible, so a pad will reduce the level into the amplifier. This further points to the amplifier being the limiting factor, not the diaphragm excursion.

With dynamic (moving-coil) microphones, the limit is the physical excursion of the coil, which works exactly like a moving-coil loudspeaker, but in reverse. In fact, you can use a moving-coil loudspeaker as a microphone, something that is sometimes done in non-critical applications like an intercom. There's no amplifier involved in a moving-coil microphone other than the one in the mixer, so the gain can simply be turned down if the level's excessive. Dynamic mics are used for vocals, especially on stage, where they're a lot more rugged than a condenser (think Roger Daltrey swinging his SM58 about), and for very high SPL applications, like kick drums, where the ability to handle the extreme SPL is more important than any subtleties of absolute fidelity.

As to why 16-bit was chosen, I don't know, but possibly it was felt to be more than adequate and was about the limit for A-Ds & D-As at the time. In fact, Philips preferred to use 14-bit DACs, but 4-times oversampled, which gave 16-bit (actually closer to 15.5-bit) resolution. As I understand it, 44.1kHz sampling was chosen for CD because 48k resulted in more data than could fit on a CD, given that it was felt commercially necessary for a CD to hold 1 hour of recording. Later, of course, CDs were able to hold up to 80 minutes at 44.1kHz, so 48kHz sampling could have been used and still achieved 60 minutes, but by then the standard had been set. One overwhelming reason for the acceptance of CD was that it was a standard: the Red Book was accepted by all participants in CD, including the music companies.
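As a rough sketch of the oversampling arithmetic (plain oversampling only, under the standard rule of thumb of 3dB of in-band noise per doubling of rate; the noise shaping Philips also used buys the remaining fraction of a bit):

```python
import math

def effective_bits(n_bits, oversampling_ratio):
    # Plain oversampling spreads the quantisation noise over a wider
    # band; only 1/OSR of it remains in the audio band, which is worth
    # 3 dB -- i.e. half a bit -- per doubling of the sample rate.
    return n_bits + 0.5 * math.log2(oversampling_ratio)

# Philips' 14-bit DAC, 4-times oversampled:
print(effective_bits(14, 4))   # 15.0 bits from oversampling alone
```

Noise shaping then pushes the in-band noise down further still, which is how a 14-bit converter could reach a notional 15.5-16 bits on playback.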

S.


As to why 16-bit was chosen... possibly it was felt to be more than adequate and was about the limit for A-Ds & D-As at the time. In fact, Philips preferred to use 14-bit DACs, but 4-times oversampled, which gave 16-bit (actually closer to 15.5-bit) resolution. As I understand it, 44.1kHz sampling was chosen for CD because 48k resulted in more data than could fit on a CD.

Marketing and the perception of progress. Anything new had to improve visibly and audibly on what the consumer and pro worlds back then perceived as the best in SNR. For consumer that would have been cassette with Dolby C, for pro Dolby A or perhaps nascent SR (disregarding dbx which, specifically in its type II consumer guise, surely had its problems). So SNR had to improve on 80dB give or take in the living room, and perhaps 90dB in the studio.

The digital recorders of the time were 16 bit or 18 bit. Sampling rates 50k, 48k (not sure), 44.1k.

Most European companies working on consumer digital were proponents of 14bit, and Philips were confident enough to plan the development of 14 bit DACs only.

The story goes that they were surprised by Sony's insistence on 16 bit, quickly having to develop the concepts of oversampling with digital reconstruction filtering and noise shaping to get to a notional 16 bit resolution on playback.

I don't buy that. You don't do that 'quickly'. It must have been planned from before. The more so as the CAE software to get from filter spec to silicon on its own was a multi-year research project, a project that Philips was executing at that time ...

44.1kHz because it conveniently fit onto PAL and NTSC video tape. This helped the fast and relatively economic proliferation of digital for two-track recording and mastering. Must be one of the causes for its quick acceptance.

When Skalpol posted those dithered files we were getting beyond-16-bit resolution then, and if you whacked up your amp you could hear that, albeit only just.

The popular saying that "dithering 16 bit brings beyond-16-bit resolution" is actually not true.

Dithering a 16-bit channel decorrelates the quantisation distortion from the payload signal, turning it into white noise. The result is that the N-bit digital channel becomes the equivalent of an analogue channel with summed noise over the bandwidth of interest at minus 6*N dB, plus the dither noise. Let's say -93dB for 16 bit, summed over 20kHz. Such a channel can pass as much information as an analogue channel with the same SNR and the same spectral noise distribution.

And the cowboy stories that "we can hear into the noise floor" are also just that. There is no magic going on.

Imagine listening to a 1kHz tone at a low level through 16 bit. The total noise is indeed at -93dB, but only a small fraction of this broadband noise is located in the bands adjacent to 1kHz, and only these bands contribute to any potential masking of the payload 1kHz signal. If the noise energy in these adjacent bands approaches the energy in the 1kHz signal, the latter is lost, almost regardless of the noise energy in the other, non-adjacent bands.

If one were to monitor a fade-to-noise of a 1kHz tone on an FFT spectrometer, with the FFT bin widths set similar to the widths of the ear's critical bands, it quickly becomes clear that we do not hear much below the noise floor.

Adding noise shaping to the mix allows one to bend the digital channel's spectral noise versus frequency, increasing the channel's resolution (= decreasing its noise) in a band where we are more sensitive, at the cost of decreasing the resolution (= increasing the noise) in a band where we are less sensitive.
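The decorrelating effect of dither described above is easy to demonstrate. A minimal sketch in plain Python, with a "signal" that is a constant 0.3 LSB, well below one quantisation step (the trial count and seed are arbitrary illustration values):

```python
import random

random.seed(1)
signal = 0.3        # a constant level of 0.3 LSB
trials = 100_000

# Without dither, round() snaps 0.3 to 0 every time: the signal vanishes
# into quantisation error that is fully correlated with it.
undithered = [round(signal) for _ in range(trials)]
print(sum(undithered) / trials)    # 0.0

# TPDF dither: the sum of two uniform values in [-0.5, 0.5). Each
# dithered sample is still just a whole number of LSBs, but the error
# is now decorrelated noise, and the average recovers the true 0.3 LSB.
def tpdf():
    return random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)

dithered = [round(signal + tpdf()) for _ in range(trials)]
print(sum(dithered) / trials)      # ~0.3
```

This is the precise sense in which dithered low-level detail survives: it is recoverable by band-limited averaging, as in an analogue channel with the same SNR, not as extra "resolution" in any single sample.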


Mono & Stereo : Interview with Dan Lavry of Lavry Engineering

We see that year by year there is a kind of race for more bits and higher sampling rates. Where do you think this will stop?

Regarding bits: The ear cannot hear more than about 126dB of dynamic range under extreme conditions. At around 6dB per bit, that amounts to 21 bits, which is what my AD122 MKIII provides (unweighted).

Regarding sample rate: The ear cannot hear over 25-30kHz, therefore 60-70kHz would be ideal. Unfortunately there is no 65kHz standard, but 88.2kHz or even 96kHz is not too far from the optimal rate. 192kHz is way off the mark. It brings higher distortion, bigger data files, and increased processing costs, all for no upside! People who think that more samples are better, and that digital is only an approximation, do not understand the fundamentals of digital audio.
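The arithmetic behind these two figures is simple to check (a sketch only; 6.02dB per bit and the 2x Nyquist factor are the standard rules of thumb, not anything beyond what the answer itself states):

```python
import math

# ~6 dB of dynamic range per bit (6.02 dB, more precisely):
ear_dynamic_range_db = 126
bits_needed = math.ceil(ear_dynamic_range_db / 6.02)
print(bits_needed)       # 21

# Nyquist: capturing content up to ~30 kHz needs at least twice that rate:
highest_audible_hz = 30_000
minimum_rate_hz = 2 * highest_audible_hz
print(minimum_rate_hz)   # 60000 -- hence "60-70kHz would be ideal"
```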

What rate and how many bits are enough for today's music reproduction and recording?

Regarding processing bits:

For music production, for adding and mixing many channels, and for various digital processing, we need more bits. One must make a distinction between processing bits and conversion bits. Say for example that you have 32 channels, each channel made out of 24 conversion bits. If you sum the channels you end up with 31 bits. At the end of the process, the 31-bit sum can be reduced back to, say, 24 bits, or to 16 bits, because the ear cannot hear 31 bits (186dB dynamic range). It is best to have a lot of processing bits. How many depends on the number of channels and on the type of processing.

Regarding the rate:

One has to make a distinction between the audio sample rate and the rate of a localized process:

The audio sample rate is the rate that carries the music data itself. Roughly speaking, the audio bandwidth itself is slightly less than half the sample rate. A 44.1kHz CD can contain music to about 20kHz.

At the same time, there are many cases when we use much higher "localized rates". Such higher rates do not increase the musical content. The higher rates still offer the same original bandwidth of the sample rate. We upsample or downsample between localized rates for various technical reasons. For example, virtually all modern DACs operate at 64-1024 times the sample rate (in the many-MHz range). Operating at such high rates simplifies the requirements of the anti-imaging filter (an analogue filter located after the D-A conversion). The decision about the ideal localized rate depends on the technology and the task at hand. It is an engineering decision, not an ear-based decision. As always, a poor implementation may introduce sonic problems, and it would be wise to refrain from the often-encountered practice of far-reaching false generalizations, so common in the audio community.
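A sketch of why those high "localized rates" relax the analogue anti-imaging filter (the 64x ratio is one of the values quoted above; the 20kHz audio band and 44.1kHz rate are assumptions for illustration):

```python
audio_band = 20_000   # Hz: highest frequency the filter must pass flat
fs = 44_100           # Hz: the audio sample rate

# Without oversampling, the first spectral image sits at fs - audio_band,
# so the analogue filter must go from flat to deep attenuation within:
transition_plain = (fs - audio_band) - audio_band
print(transition_plain)        # 4100 Hz -- a brutally steep analogue filter

# With 64x oversampling, a digital filter removes the nearby images;
# the surviving ones sit near 64*fs, leaving a huge, gentle transition:
osr = 64
transition_oversampled = (osr * fs - audio_band) - audio_band
print(transition_oversampled)  # 2782400 Hz
```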

When CD appeared on the horizon, nobody talked about jitter. Nowadays everyone has their own philosophy around it. Can you elaborate on this subject, please? What is its real importance, and how should one approach it?

Jitter is not only an audio issue. I was dealing with jitter issues in medical conversion, way before the days of digital audio. Jitter is an issue for all conversion (video, instrumentation, telecom, medical, industrial controls…).

The concept of conversion is based on two requirements:

1. Taking precise "snapshots".

2. Taking the snapshots at evenly spaced intervals, and playing them back at the same evenly spaced intervals.

Think of a movie camera with an "unsteady motor", or a playback film projector with a motor that rattles between too slow and too fast. Either case will distort the outcome, and the distortion depends on both the jitter (speed variations) and on the subject itself. Jitter would not do much harm to a steady object, but it does alter the view of a fast-moving object. Similarly, in audio, the distortions due to uneven timing (jitter) are due to the interaction between the clock imperfection and the audio itself. Unlike tube, transformer or many other distortions, the outcome due to jitter is NOT predictable or repeatable. There is no such thing as "good-sounding jitter". There are many types and causes of jitter. What we hear is not only about the jitter amplitude and frequency; it is also about the jitter type.

Conversion jitter may alter the sound significantly. At the same time, transferring data (that was already converted) between, say, an A-D and a computer, or between other digital sources and digital destinations, does not call for great jitter performance. It is only during the conversion process that jitter needs to be very low. Data-transfer jitter is not much of an issue when you are only moving ones and zeros.
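The "fast-moving object" analogy can be sketched numerically: sample a full-scale sine at instants displaced by clock jitter and compare against ideal sampling. The 1ns jitter, the 192kHz clock, and the tone frequencies below are arbitrary illustration values, not anything from the interview:

```python
import math
import random

random.seed(0)

def jitter_error_rms(tone_hz, jitter_rms_s, fs=192_000, n=50_000):
    # RMS difference between sampling sin(2*pi*f*t) at instants
    # displaced by Gaussian clock error and at the ideal instants.
    err2 = 0.0
    for k in range(n):
        t = k / fs
        dt = random.gauss(0.0, jitter_rms_s)
        err2 += (math.sin(2 * math.pi * tone_hz * (t + dt))
                 - math.sin(2 * math.pi * tone_hz * t)) ** 2
    return math.sqrt(err2 / n)

jitter = 1e-9   # 1 ns RMS clock jitter (illustrative)
e_1k = jitter_error_rms(1_000, jitter)
e_10k = jitter_error_rms(10_000, jitter)
print(e_10k / e_1k)   # ~10: the faster signal suffers ten times the error
```

A slow-moving signal barely notices the timing error; a fast-moving one is distorted in proportion to its slew rate, exactly as in the movie-camera analogy.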

Marketing and the perception of progress. Anything new had to improve visibly and audibly on what the consumer and pro worlds back then perceived as the best in SNR. For consumer that would have been cassette with Dolby C, for pro Dolby A or perhaps nascent SR (disregarding dbx which, specifically in its type II consumer guise, surely had its problems). So SNR had to improve on 80dB give or take in the living room, and perhaps 90dB in the studio.

The digital recorders of the time were 16 bit or 18 bit. Sampling rates 50k, 48k (not sure), 44.1k.

Most European companies working on consumer digital were proponents of 14bit, and Philips were confident enough to plan the development of 14 bit DACs only.

The story goes that they were surprised by Sony's insistence on 16 bit, quickly having to develop the concepts of oversampling with digital reconstruction filtering and noise shaping to get to a notional 16 bit resolution on playback.

I don't buy that. You don't do that 'quickly'. It must have been planned from before. The more so as the CAE software to get from filter spec to silicon on its own was a multi-year research project, a project that Philips was executing at that time ...

44.1kHz because it conveniently fit onto PAL and NTSC video tape. This helped the fast and relatively economic proliferation of digital for two-track recording and mastering. Must be one of the causes for its quick acceptance.

In the mid 70s, I worked for Philips and first heard CD in 1977 or 78. This was before Philips and Sony started co-operating, and the Philips conception of CD was as a digital replacement for the Compact Cassette for in-car use. The CD had to be sufficiently small that the resulting player would fit into a DIN car-radio slot using the technology of the day. This meant that the disc had to be rather smaller than the CD turned out to be later, and consequently couldn't hold as much data. The two demo discs I heard used, if I remember correctly, 32kHz sampling at 14 bits, or possibly even 12 bits. It was already very much better than analogue cassette and LP playback in terms of frequency response flatness and low noise.

It was the subsequent co-operation with Sony that turned CD into a home hi-fi medium, which however resulted in a larger disc, and delayed the availability of in-car players for several years until the manufacturing technology allowed.

One professional digital audio machine of the era did indeed sample at 50kHz; the 48k rate came slightly later, as it did indeed fit with video rates. The CD sample rate of 44.1kHz, as Werner said, also comes out of the video technology used to make CD masters.

60 × 245 × 3 = 44.1kHz (60Hz TV countries)

50 × 294 × 3 = 44.1kHz (50Hz TV countries)

It is a rate that's common to both 50Hz and 60Hz TV countries. 48k would have been a better rate still, but my understanding is that the extra amount of data so created would have reduced the playing time of a CD below the minimum of 1 hour that was considered commercially essential.
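The video-rate arithmetic above can be checked directly:

```python
# 44.1 kHz falls out of the field structure of both TV standards:
ntsc = 60 * 245 * 3   # 60 fields/s x 245 usable lines x 3 samples per line
pal = 50 * 294 * 3    # 50 fields/s x 294 usable lines x 3 samples per line
print(ntsc, pal)      # 44100 44100
```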

S.


I don't think anyone would disagree that the dynamic range afforded by 16 bit is sufficient. So if a recording has appropriate levels to use the dynamic range of the medium, no problem. In the process of getting to the final recording the extra headroom from additional bits can certainly be useful though.

Sample-rate wise, 44.1kHz really pushes the digital reconstruction at high frequencies; at 88.2/96kHz, where the maximum frequency you're trying to recover is ~1/4 of the sample rate, things are much simpler. (Note that quite a lot of current DACs keep aiming to reproduce frequencies up to half the sample rate... whether that is beneficial or not.)

Sadly, perception being what it is, just going to 88.2/96kHz while sticking to 16-bit wouldn't come across as a proper "upgrade" to Joe Public, hence more bits. And once you've got Joe Public happy that a higher rate/more bits is better, then even more must be better... so you get a demand for 176.4kHz/192kHz.
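A quick sketch of the filtering point made above, assuming the reconstruction filter must stay flat to 20kHz and be fully attenuated by half the sample rate:

```python
# Transition band available to the reconstruction filter:
def transition_hz(sample_rate, audio_band=20_000):
    return sample_rate / 2 - audio_band

print(transition_hz(44_100))   # 2050.0 Hz: a very steep filter is needed
print(transition_hz(96_000))   # 28000.0 Hz: a gentle filter will do
```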

-Brian.


Sample-rate wise, 44.1kHz really pushes the digital reconstruction at high frequencies; at 88.2/96kHz, where the maximum frequency you're trying to recover is ~1/4 of the sample rate, things are much simpler.

How so?

Can you show me one example of a conventional player with seriously impaired reconstruction of the 10-20kHz octave?

