
MQA trying to get the anti-compression crew on side.


Recommended Posts



There are two schools of thought. Perfection cannot ever exist in this imperfect world.

And... whatever comes from this world is perfect just as it is. The very fact that it exists shows it is meant to be, and that is perfect!

Much depends on a person's point of view ... sort of like the glass being half full or half empty.


15 hours ago, eltech said:

One other thing I've been meaning to say is that I think people are somewhat silly for thinking that improvements to sound are infinite.

I have done audio recording.

I am pleased to report that sounds recorded at 44.1 kHz are an accurate capture of the audio. When played back, they sound the same.

 

Indeed. Well-recorded audio is "fine" ... the important thing is that, once it's recorded, you don't damage it. This is one of the things MQA are attempting to do (avoid damage).

 

15 hours ago, eltech said:

In percent, how much better is 192 kHz, DSD, or 30 ips tape than CD?

Blind tests say there's no difference!

They are all capable of storing/producing audibly identical content.

 

... and so, if they all produce the same output ... then they will sound the same.

 

There are many varied reasons why they might not produce the same output in practice, though (mostly due to "what people did", as opposed to specific details/limitations of the format).
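(For anyone who wants to poke at the "audibly identical" claim numerically rather than philosophically, here is a small illustrative numpy sketch, my own toy example with arbitrary test tones, not anything from MQA or the posters above: a signal with no content above Nyquist can be evaluated between its 44.1 kHz samples from those samples alone.)

```python
# Toy sketch only: band-limited content sampled at 44.1 kHz can be reconstructed
# between the samples via sinc (Whittaker-Shannon) interpolation.
import numpy as np

fs = 44_100
n = np.arange(8192)
tones = [440.0, 5_000.0, 15_000.0]            # all comfortably below Nyquist (22.05 kHz)
x = sum(np.sin(2 * np.pi * f * n / fs) for f in tones)

# Evaluate the true signal and the reconstruction half-way between samples,
# away from the block edges (the finite sum truncates the ideal infinite series).
t = np.arange(3000, 5000) + 0.5               # fractional sample instants
truth = sum(np.sin(2 * np.pi * f * t / fs) for f in tones)
recon = np.array([np.sum(x * np.sinc(ti - n)) for ti in t])

# Any residual error here comes from truncating the sinc sum, not from 44.1 kHz itself.
print("max reconstruction error:", np.max(np.abs(recon - truth)))
```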


17 minutes ago, davewantsmoore said:

Indeed. Well-recorded audio is "fine" ... the important thing is that, once it's recorded, you don't damage it. This is one of the things MQA are attempting to do (avoid damage).

I respectfully disagree with you.

If music is recorded at a particular sample rate and bit depth, the best way to not damage it is to provide the listener with a bit perfect copy of that recording without using processing to modify it for distribution. 

 

1 hour ago, davewantsmoore said:

They are all capable of storing/producing audibly identical content.

 

... and so, if they all produce the same output ... then they will sound the same.

 

There are many varied reasons why they might not produce the same output in practice, though (mostly due to "what people did", as opposed to specific details/limitations of the format).

Seems like we agree.


3 hours ago, eltech said:

I respectfully disagree with you.

If music is recorded at a particular sample rate and bit depth, the best way to not damage it is to provide the listener with a bit perfect copy of that recording without using processing to modify it for distribution. 

You misunderstand... this is exactly the ideology of MQA. They're just not afraid of the "evil filter".

 

In some ways, putting out the original copy isn't the best way to ensure it stays undamaged, as you suggest. Playback hardware which can't support high rates will resample and may harm the audio. Playback hardware which oversamples may also harm the audio.

 

Audiophiles can fix this by carefully resampling the audio themselves ... but this requires a computer, is usually not well automated, and isn't suitable for the everyman.
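(For anyone wondering what that "carefully resampling it yourself" step actually involves, a rough sketch is below. It assumes scipy and the soundfile library are installed; the file names and target rate are made up for illustration.)

```python
# Sketch of offline "do it yourself" resampling so the playback hardware never has to.
import soundfile as sf
from scipy.signal import resample_poly
from math import gcd

audio, fs_in = sf.read("track_96k.flac")        # hypothetical 96 kHz source file
fs_out = 44_100                                 # e.g. the rate your DAC handles best

g = gcd(fs_in, fs_out)
up, down = fs_out // g, fs_in // g              # 96000 -> 44100 becomes up=147, down=320
resampled = resample_poly(audio, up, down, axis=0)   # polyphase resampler with anti-alias filter

sf.write("track_44k1.flac", resampled, fs_out, subtype="PCM_24")
```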

 

For the masses, this new system automagically (and carefully) renders the correct sample rate for your playback chain, using the filters/settings most appropriate for that chain (e.g. undoing known issues in your converter, etc.) ... i.e. it allows the everyman to ensure that the audio is damaged as little as possible.

 

The reality is that unless a competing system offers the same types of abilities as MQA ... it is inferior to MQA's (potential) performance (correction for recording-chain issues, correction for playback-chain issues, and access to better, less damaged source material).

 

 

You argue that MQA should eschew processing ... but this assumes the processing is bad. Processing can improve audio playback. Your recording and playback have flaws which can be corrected ... and the MQA system is flexible enough that they don't have to stand still on this in future (they can update the pre- and post-filtering).

 

Based on this, the real worry about MQA adoption should not be whether they can deliver high(est) fidelity ... or continue improving fidelity. It's definitely possible, and makes good business sense for them to do so. The real worry should be that improvements in compression and filtering techniques will ALSO allow them to artificially reduce quality for those who do not pay.

 

 




14 minutes ago, davewantsmoore said:

In some ways, putting out the original copy isn't the best way to ensure it stays undamaged, as you suggest.

There is no way you can make something out of nothing and then call it real, or like the original.

It's like saying a dbx unit can create better dynamic range without affecting the original's integrity.

 

Every time MQA was demoed to me against the original non-MQA PCM, it left me flat and wondering what it's all about. One time after the demo, the one (pushing) demoing it kept on about temporal lobe functions, at which stage I went and had coffee.

 

Cheers George


On 3/26/2018 at 1:11 PM, davewantsmoore said:

Analysis of MQA-decoded content shows (the expected) content above 48 kHz. Can you provide some verification of what you're saying? ... or explain what I've misunderstood about it.

 

 

 

Heh, that's pretty usual.  :)

 

My understanding is that there is no "maximum" ... although a practical upper bound would be ~17 bits, because recordings don't have more dynamic range than that. 2L were waxing on about the 14 bits of DR in their MQA demo files (which is a lot!).

 

"not anything like"

 

Even though I'm (obviously?) not a big proponent of MQA ... this is a bit of a stretch to my understanding.

 

What is lossy about MQA? ... The noise floor is replaced by 'noise' ... and above 44/48, where there's no content, it's discarded.

 

... so what comes out of an MQA decoder (can be) very much like what went in.

 

Almost certainly.   

 

What do you mean by this?

Read the actual analyses of what's actually in an MQA file; they are freely available online. Or read the details of how it compresses, not the marketing speak. You have misunderstood it. Anything above 48k is stripped out. Bob Stuart himself has admitted on tape that MQA files are 15-17 bits in depth, no more.

All of that by definition means it is lossy. Lossless has a specific definition in this context, i.e., that the original file can be perfectly reconstructed from the compressed file. That isn't true of MQA, as it "throws away" bits it thinks are unnecessary. MQA thinks it is only throwing away bits you can't hear, so it is as good as lossless. Many who have listened beg to differ.

 

If you see an MQA file on playback that says it is a 4X or 8X sample rate, that is a lie. The original master before the MQA version may have been that, but as I've said, the MQA encoding process discards a lot of bits. When you send it to an MQA DAC, any file that was originally above a 96k rate is upsampled to give you a "fake" 4X or 8X file. That's just how it works. Spend some time reading up on it if you care, I don't need to do your homework for you.
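(Rather than trading claims, this is something readers can measure for themselves. A rough numpy/soundfile sketch is below; the file names are placeholders, and it assumes you have both the hi-res original and a decoded capture at the same rate to compare.)

```python
# Compare how much spectral energy sits above 48 kHz in two files.
import numpy as np
import soundfile as sf

def energy_above_db(path: str, cutoff_hz: float = 48_000.0) -> float:
    x, fs = sf.read(path)
    if x.ndim > 1:
        x = x.mean(axis=1)                       # quick mono mix for a rough look
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return 10 * np.log10(spec[freqs > cutoff_hz].sum() / spec.sum() + 1e-30)

for name in ("original_192k.flac", "mqa_decoded_192k.flac"):    # hypothetical files
    print(name, f"{energy_above_db(name):.1f} dB of total energy above 48 kHz")
```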


2 hours ago, davewantsmoore said:

You misunderstand... this is exactly the ideology of MQA. They're just not afraid of the "evil filter".

In some ways, putting out the original copy isn't the best way to ensure it stays undamaged, as you suggest. Playback hardware which can't support high rates will resample and may harm the audio. Playback hardware which oversamples may also harm the audio.

Audiophiles can fix this by carefully resampling the audio themselves ... but this requires a computer, is usually not well automated, and isn't suitable for the everyman.

For the masses, this new system automagically (and carefully) renders the correct sample rate for your playback chain, using the filters/settings most appropriate for that chain (e.g. undoing known issues in your converter, etc.) ... i.e. it allows the everyman to ensure that the audio is damaged as little as possible.

The reality is that unless a competing system offers the same types of abilities as MQA ... it is inferior to MQA's (potential) performance (correction for recording-chain issues, correction for playback-chain issues, and access to better, less damaged source material).

You argue that MQA should eschew processing ... but this assumes the processing is bad. Processing can improve audio playback. Your recording and playback have flaws which can be corrected ... and the MQA system is flexible enough that they don't have to stand still on this in future (they can update the pre- and post-filtering).

Based on this, the real worry about MQA adoption should not be whether they can deliver high(est) fidelity ... or continue improving fidelity. It's definitely possible, and makes good business sense for them to do so. The real worry should be that improvements in compression and filtering techniques will ALSO allow them to artificially reduce quality for those who do not pay.

 

 

You have drunk a large pitcher of MQA flavored kool aid.  MQA isn't using "the correct" sample rate for your chain or  the "filters/settings" most appropriate for your chain. That's all propaganda by them. 

MQA DACs have a set of 16 preset filters. Same ones for all MQA DACs, AFAIK (except for dCS, which wrote their own filters with MQA just for their DACs). On playback it picks one of the standard 16 it thinks is most appropriate. That's a guess by them. How do you know the one they pick is the most appropriate and best sounding? To whom?  Do they let YOU decide?

 

And by the way, most of the MQA-enabled DACs on the market also filter non-MQA material with MQA filters, unless you specifically set them up not to once you've activated the MQA filter. Mytek is one example: once you play back in MQA filter mode, you stay in MQA filter mode, even with non-MQA files, unless you manually change the mode. How is that the "most appropriate" for your chain and DAC? On some MQA DACs you can only play back with MQA filters - there are no non-MQA filters on board. Does that sound "most appropriate" to you?

 


2 hours ago, davewantsmoore said:

You misunderstand... this is exactly the ideology of MQA. They're just not afraid of the "evil filter".

In some ways, putting out the original copy isn't the best way to ensure it stays undamaged, as you suggest. Playback hardware which can't support high rates will resample and may harm the audio. Playback hardware which oversamples may also harm the audio.

Audiophiles can fix this by carefully resampling the audio themselves ... but this requires a computer, is usually not well automated, and isn't suitable for the everyman.

For the masses, this new system automagically (and carefully) renders the correct sample rate for your playback chain, using the filters/settings most appropriate for that chain (e.g. undoing known issues in your converter, etc.) ... i.e. it allows the everyman to ensure that the audio is damaged as little as possible.

 

 

But they target audiophiles. As you say, we already know how to do it right. So, if it really is just for the masses after all, I smell a cynical marketing exercise to make people pay licence fees and lock them into MQA.




2 hours ago, georgehifi said:

There is no way you can make something out of nothing and then call it real, or like the original.

I can't agree that preventing the audio from being distorted (i.e. moved further from the original) is either "something from nothing" or less like "the original".

 

Quote

Every time MQA was demoed to me against the original non-MQA PCM, it left me flat and wondering what it's all about. One time after the demo, the one (pushing) demoing it kept on about temporal lobe functions, at which stage I went and had coffee.

Indeed. It's not clear to me whether the differences MQA claims are audible ... and there are certainly a lot of things which can go on that will cause one to sound different from the other.


3 minutes ago, davewantsmoore said:

There seems to be too much 'bias' in this post for it to be sensible to comment ... other than to say that applying a filter which corrects an error is not "something out of nothing".

I am sorry, Dave, but that doesn't make any sense. All you can say definitively is that the filter alters the original.


33 minutes ago, firedog said:

You have drunk a large pitcher of MQA flavored kool aid.

No, I don't think so.

 

33 minutes ago, firedog said:

You have drunk a large pitcher of MQA flavored kool aid.  MQA isn't using "the correct" sample rate for your chain or  the "filters/settings" most appropriate for your chain.

MQA will render the audio at any sampling rate as they convolve the encoded data, hence they can pick the most 'appropriate' rate for a converter (i.e. so the converter does not resample). Are you saying they are not doing it? (Ever?) That they "can't" do it? Or what?

 

Yes, current DACs have fixed filters. Yes, some have bugs. The system described, though, allows an internet-connected device to download any filter.

 

Quote

Do they let YOU decide?

Yes, I would like to decide ... but only a very small segment of the market does. 

 

Quote

How is that the "most appropriate" for your chain and DAC? 

Well, it's clearly not.  It's clearly a bug.

 

54 minutes ago, firedog said:

Anything above 48k is stripped out.

I'm not able to find any mention of this anywhere  (?!)

 

55 minutes ago, firedog said:

Bob Stuart himself has admitted on tape that MQA files are 15-17 bits in depth, no more. 

Of course ... what's the shame in that? That's all the bits needed to represent the content above the noise floor.

 

MQA can theoretically be any number of bits (e.g. you could have 23 bits of PCM and 1 bit of encoded data ... if you could find a sensible recording with that much real dynamic range).
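(The arithmetic behind the "~17 bits is plenty" argument is just the usual quantisation rule of thumb, roughly 6.02 dB per bit plus 1.76 dB; a short illustration:)

```python
# Rule-of-thumb dynamic range of N-bit PCM (full-scale sine vs. quantisation noise).
def dynamic_range_db(bits: int) -> float:
    return 6.02 * bits + 1.76

for bits in (14, 16, 17, 24):
    print(f"{bits:2d} bits ~ {dynamic_range_db(bits):6.1f} dB")
# 17 bits ~ 104 dB, already beyond the real dynamic range of almost any commercial recording.
```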

 

 

58 minutes ago, firedog said:

All of that by definition means it is lossy. Lossless has a specific definition in this context, i.e., that the original file can be perfectly reconstructed from the compressed file.

Sure - so they should be more careful and say that they are lossless above the noise floor ... if they turn off their correction of the DAC and of the original recording.

 

59 minutes ago, firedog said:

MQA thinks it is only throwing away bits you can't hear so it is as good as lossless. Many who have listened beg to differ. 

It is almost certainly not the "reduction in bits" causing any audible difference ... as these are below the noise floor of the original.

 

1 hour ago, firedog said:

If you see an MQA file on playback that says it is a 4X or 8X sample rate, that is a lie. The original master before the MQA version may have been that, but as I've said, the MQA encoding process discards a lot of bits.

Discarding bits does not affect the sampling rate of the audio. These things are not related.

 

So what you are saying is that if I encode audio that has content above 48 kHz with an MQA encoder ... I won't get that content back when I decode it? I'm not able to find any mention of this anywhere.
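(To make the 'bits vs. sample rate' distinction concrete, here is a small sketch, nothing to do with MQA's actual code, showing that word-length reduction and rate reduction are independent operations; numpy and scipy are assumed.)

```python
# Bit depth and sample rate are separate axes: requantising touches the noise floor,
# decimation touches bandwidth.
import numpy as np
from scipy.signal import resample_poly

rng = np.random.default_rng(0)
fs = 96_000
x = 0.5 * np.sin(2 * np.pi * 1_000 * np.arange(fs) / fs)       # one second of a 1 kHz tone

# 1) Reduce word length to 16 bits with TPDF dither -- the sample rate is untouched.
lsb = 2.0 ** -15
dither = (rng.random(x.size) - rng.random(x.size)) * lsb
x_16bit = np.round((x + dither) / lsb) * lsb                    # still 96 000 samples per second

# 2) Reduce the sample rate 96k -> 48k -- the word length is untouched.
x_48k = resample_poly(x, 1, 2)

print("requantised:", x_16bit.size, "samples (rate unchanged, noise floor raised)")
print("decimated:  ", x_48k.size, "samples (bandwidth halved, word length unchanged)")
```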

1 hour ago, firedog said:

Spend some time reading up on it if you care, I don't need to do your homework for you. 

... but that's what I did ... and what I found was that these frequencies are not being lost as you claim. I'm not asking you to do "my homework" ... just tell me where you heard what you heard, so I can investigate. You seem angry.


28 minutes ago, aussievintage said:

I am sorry, Dave, but that doesn't make any sense. All you can say definitively is that the filter alters the original.

If you have the original ... which is XYZ

Then the original gets damaged ... and so it is XZY

... and now you apply a filter, so it is XYZ again.

 

Then you can say definitively that you are closer to the original.

 

I"m not saying that MQA is great or anything .... just that the system as described, gives them the opportunity to filter your audio.... and by doing that, the opportunity to remove distortion.


43 minutes ago, aussievintage said:

But they target audiophiles.

Hah. Yes, they "target" audiophiles. Audiophiles need to be convinced in order to win over a big enough share of the content producers.

 

The goal is not audiophiles ... but to onboard the production and distribution channels, and then to deliver the format to the mass market. "Audiophiles" are but a drop in the ocean of the mass market.

 

... but why stop there? Encoding unique data into audio files and taking control of playback hardware ... means you can do just about anything.




2 hours ago, davewantsmoore said:

No, I don't think so.

 

MQA will render the audio at any sampling rate as they convolve the encoded data, hence they can pick the most 'appropriate' rate for a converter (i.e. so the converter does not resample). Are you saying they are not doing it? (Ever?) That they "can't" do it? Or what?

Yes, current DACs have fixed filters. Yes, some have bugs. The system described, though, allows an internet-connected device to download any filter.

Yes, I would like to decide ... but only a very small segment of the market does.

Well, it's clearly not. It's clearly a bug.

I'm not able to find any mention of this anywhere (?!)

Of course ... what's the shame in that? That's all the bits needed to represent the content above the noise floor.

MQA can theoretically be any number of bits (e.g. you could have 23 bits of PCM and 1 bit of encoded data ... if you could find a sensible recording with that much real dynamic range).

Sure - so they should be more careful and say that they are lossless above the noise floor ... if they turn off their correction of the DAC and of the original recording.

It is almost certainly not the "reduction in bits" causing any audible difference ... as these are below the noise floor of the original.

Discarding bits does not affect the sampling rate of the audio. These things are not related.

So what you are saying is that if I encode audio that has content above 48 kHz with an MQA encoder ... I won't get that content back when I decode it? I'm not able to find any mention of this anywhere.

... but that's what I did ... and what I found was that these frequencies are not being lost as you claim. I'm not asking you to do "my homework" ... just tell me where you heard what you heard, so I can investigate. You seem angry.

Not angry. Just tired of hearing MQA marketing propaganda repeated as fact.

Quote

Yes, current DACs have fixed filters. Yes, some have bugs. The system described, though, allows an internet-connected device to download any filter.

No. Pretty much every MQA DAC that has been checked so far has the same set of 16 filters built in. The filters aren't DAC-dependent or unique to the DAC. dCS is an exception: it has its own specialized filters built in. None of the others "load" the necessary filters from the Net. On playback, one of those 16 is used, partially dependent on the source file tagging and partially dependent on the DAC.

 

When you throw away bits you certainly can affect the effective sample rate of the audio - it depends which bits you throw away. And as I said, MQA encoding specifically throws away anything to do with frequencies above 48k - as they say it doesn't exist in most recordings and isn't important anyway. If you are throwing away those bits you are effectively changing the sample rate of the original file. So even if the original recording was 176 or 192k or above, that isn't what's there after the "folding". When you see it in playback being called a 4X or 8X file, that is a result of the upsampling done as part of the MQA "unfolding".

 

How do I know this? Well, right here on this site is the thread "how mqa works", which explicitly backs up my claim.

As does the "MQA technical analysis" thread at CA.

As does the "MQA tested" series at Stereophile - they try to dance around it by saying "it doesn't matter, etc.", but admit it is true. In fact the author (who is pro MQA) even says something to the effect of "buy those 24/192 downloads while you still can". 

Or try the "MQA: A Review of controversies, concerns, and cautions" at CA. Same findings. 

 

As far as it "fixing" what is wrong with a recording: that's a matter of taste, certainly not a fact. Even Paul McGowan at PS Audio, who is now including the "first unfold" in his DAC network cards ("bridge"), says that fully unfolded MQA is lossy, and that on a highly resolving system MQA playback is a reduction in sound quality vs. the original hi-res. See his video chats.

 

And by the way, the fact that most MQA DACs have difficulty turning off the MQA filters isn't a bug. It's a feature of the standard MQA setup. It's possible - but difficult and expensive - to avoid, so most manufacturers of MQA-certified DACs are accepting it and not telling their customers that that is how their MQA DAC works. (Aurender has admitted that they now include only MQA filters on their MQA-enabled DACs.) Mytek and others never mention the issue.

 

It's just one of many examples of how the whole MQA rollout has been full of obfuscations, misleading claims, and straight-out lies.

 

If you actually want to know what MQA is and does, you need to read articles that analyze and test it, not most of what's written about it. Most material written about MQA so far is either just reprints of PR material from MQA or rewrites of MQA-supplied material - not written from actual knowledge of what it does, or with any checking to see if MQA's claims are true.

 

Here's an example of how the MQA process works:

This is the process for a 384k master:

compression:

-downsample to 192k with a leaky MQA filter ('leaky' meaning it allows aliasing distortions to leak into the playback below 20 kHz)

-downsample to 96k with (leaky) MQA filter

-fold into 48k space using Quadrature Mirror Filter pair 1

 

note: the downsampling is "throwing away bits" as it were.

=============================================

unfolding/decompression:

-unfold 48k to 96k using Quadrature Mirror Filter pair 2

-upsample to 192k using leaky filter

-upsample to 384k using leaky filter

-light the blue LED

The decoder may or may not upsample to the original rate of the master. It depends on the capabilities of the DAC chip and other parts. The blue light comes on regardless, of course.

 

The upsampling is necessary b/c they've thrown away all those hi-res bits during the encoding into the lossy MQA format. Without the upsampling you can't get back a 192 or 384k file. 
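(For readers who have never met the two-band "fold/unfold" idea, the toy sketch below shows the bare structure using a trivial 2-tap Haar filter pair. It is emphatically not MQA's proprietary filter set, and it quantises nothing; it just shows how a signal can be split into two half-rate bands and recombined.)

```python
# Toy two-band analysis/synthesis ("fold" / "unfold") with Haar filters.
import numpy as np

def fold(x: np.ndarray):
    """Split a signal into a low band and a high band, each at half the input rate."""
    x = x[: len(x) // 2 * 2]
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)      # the part a legacy player would treat as the audio
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)      # the part a real codec would quantise and bury
    return lo, hi

def unfold(lo: np.ndarray, hi: np.ndarray) -> np.ndarray:
    """Recombine the two half-rate bands into one full-rate signal."""
    y = np.empty(2 * len(lo))
    y[0::2] = (lo + hi) / np.sqrt(2)
    y[1::2] = (lo - hi) / np.sqrt(2)
    return y

x = np.random.default_rng(1).standard_normal(1_000)
lo, hi = fold(x)
print("perfect reconstruction with this toy pair:", np.allclose(unfold(lo, hi), x))
# With longer ("leaky") filters plus quantisation of the high band, reconstruction is no
# longer exact -- which is precisely where the lossy-vs-lossless argument above lives.
```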

 

How many of you are surprised MQA works this way? You probably are, because they've tried to hide it in all their descriptions. Until recently they were calling it lossless. Now that they've been caught lying about it, they are obfuscating and saying something to the effect that it is "perceptually lossless" (my terminology).


1 hour ago, davewantsmoore said:

If you have the original ... which is XYZ

Then the original gets damaged ... and so it is XZY

... and now you apply a filter, so it is XYZ again.

 

Then you can say definitively that you are closer to the original.

 

The fix they apply is a fudge, not error correction in any mathematical sense.  You do not end up with XYZ, although they claim it sounds like it.   Many listeners are saying they are wrong.


12 hours ago, firedog said:

How many of you are surprised MQA works this way? You probably are, because they've tried to hide it in all their descriptions. Until recently they were calling it lossless. Now that they've been caught lying about it, they are obfuscating and saying something to the effect that it is "perceptually lossless" (my terminology).

 

I don't think anyone would be surprised, because this has already been discussed in other MQA threads on the forum. Most people would agree their marketing let them down badly; I have not seen anyone defend their marketing. If they had described MQA as 'the modern-day version of MP3 for hi-res audio', then few would have a problem with that.


12 hours ago, firedog said:

When you throw away bits you certainly can affect the effective sample rate of the audio - it depends which bits you throw away.

No.

 

In MQA's case, the content above the downsampled rate is being encoded and stored below the noise floor (i.e. in the bits you say they are "throwing away") ... so perhaps I can see where you get this idea.

 

... but the bits below the noise floor are useless, so why not reuse them for something? The idea that "throwing them away" is "bad" is nonsense.
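(The general "reuse the bits below the noise floor" trick is easy to illustrate without reference to MQA's actual encoding, which is proprietary. The toy numpy sketch below just hides an arbitrary bitstream in the least-significant bits of some fake 24-bit samples and pulls it back out.)

```python
# Toy buried-data illustration: stash k bits of side data per 24-bit sample, then recover it.
import numpy as np

def bury(samples: np.ndarray, payload: np.ndarray, k: int = 4) -> np.ndarray:
    """samples: integer PCM values; payload: k-bit integers, one per sample used."""
    out = samples.copy()
    out[: len(payload)] = (out[: len(payload)] & ~((1 << k) - 1)) | payload
    return out

def dig_up(samples: np.ndarray, n: int, k: int = 4) -> np.ndarray:
    return samples[:n] & ((1 << k) - 1)

rng = np.random.default_rng(2)
pcm = rng.integers(-2**23, 2**23, size=1_000, dtype=np.int64)   # fake 24-bit samples
aux = rng.integers(0, 2**4, size=1_000, dtype=np.int64)         # 4 bits of side data per sample
print("payload recovered intact:", np.array_equal(dig_up(bury(pcm, aux), 1_000), aux))
```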

 

12 hours ago, firedog said:

How many of you are surprised MQA works this way?

I'm not. Although you seem sure that I am. ;)

 

"throwing away"

"leaky"

 

.... hand-wave through what is happening, inferring it is bad, or audible, or not a good way to treat audio.

 

Quote

"perceptually lossless" (my terminology)

I like it.

 

It describes exactly what is going on.

 

Content below the noise floor is gone (replaced by encoded noise).

Content above ~20 kHz is "lossy", with a focus on timing precision.

 

12 hours ago, firedog said:

The upsampling is necessary b/c they've thrown away all those hi-res bits during the encoding into the lossy MQA format. Without the upsampling you can't get back a 192 or 384k file. 

The high frequency samples (not "bits") are not "thrown away".   They're encoded, and used when the "unfolding" (blerg, stupid term) is done.

 

12 hours ago, firedog said:

Until recently they were calling it lossless. 

... but nobody who understands how MQA works was "fooled" by that ... because it is, as you say, "perceptually lossless". It is immediately apparent to anyone that it is not "literally lossless" ... as they store frequencies above 1x using bits below the noise floor.

 

 

There's no reason to keep content below the noise floor ... and there's no reason to reproduce sounds above 20 kHz ... but you need to handle the audio in a way which preserves the timing (something which is rarely done).




1 minute ago, LHC said:

MQA as 'the modern-day version of MP3 for hi-res audio', then few would have a problem with that.

... but that wouldn't be a very accurate portrayal in the minds of the masses, who rightly or wrongly associate MP3 with "lower audible quality".


4 hours ago, davewantsmoore said:

but you need to handle the audio in a way which preserves the timing  (something which is rarely done).

I think this point is debatable.

If Redbook is transparent, as confirmed in various tests, then timing must also be transparent.

Or should I say "perceptually" transparent?
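(On the timing point itself: a quick numpy sketch, my own illustration rather than anything from the thread, showing that 44.1 kHz sampling does not quantise timing to the sample period. A band-limited pulse shifted by a hundredth of a sample still produces clearly different sample values, far above a 24-bit noise floor.)

```python
# Timing resolution at 44.1 kHz is not limited to one sample period.
import numpy as np

fs = 44_100
n = np.arange(4096)
centre = 2048

def band_limited_pulse(delay_in_samples: float) -> np.ndarray:
    # sinc pulse centred near the middle of the block, delayed by a fractional amount
    return np.sinc(n - centre - delay_in_samples)

a = band_limited_pulse(0.0)
b = band_limited_pulse(0.01)            # shifted by 1/100 of a sample (~0.23 microseconds)
print("max sample-value difference:", np.max(np.abs(a - b)))   # clearly non-zero
```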

 

 


4 hours ago, davewantsmoore said:

The high frequency samples (not "bits") are not "thrown away".   They're encoded, and used when the "unfolding" (blerg, stupid term) is done.

 

 

4 hours ago, davewantsmoore said:

It describes exactly what is going on.

 

Content below the noise floor is gone (replaced by encoded noise).

Content above ~20 kHz is "lossy", with a focus on timing precision.

 

Only if you drink their Kool-Aid uncritically. Their leaky filters add lots of high-frequency noise to the unfolded result. Earlier you said there was no proof they were throwing bits away - and yes, they are. By definition, if you downsample a higher-res file to 2X from 4X or 8X sample rates, you are throwing bits away. It has nothing to do with them encoding those bits - they aren't encoding them, they are throwing away anything encoded above 48k in the original (before compression). That's simply a fact, and no fancy explanations make it not so. Yes, their content above 20k is lossy - in other words, bits thrown away. That's what lossy means.

Your quotes above contradict each other and show you still don't understand the basics of what the MQA process is. What do you think the high-frequency samples are in a digital file? They are "bits". That's why a 24/192 version of a file is a lot bigger than a 24/96 version of the same file. MQA throws all of those "extra" bits away.
 

 

Below the noise floor? Perceptually lossless? This is not the first compression scheme to make such a claim. Multiple users, writers, and audio professionals have subjected themselves to blind listening tests and decided that MQA, at least in many cases, sounds worse than the original. Sure, not everyone agrees. But if it were so obviously better, everyone would hear the improvements, or at least not hear reduced SQ.

 

What is their "timing precision"? They haven't actually explained it or shown what is happening. It's just a phrase you are taking on faith because they told you it was true. In their "presentations" they show slides which don't actually show a timing accuracy improvement, in spite of their claims that they do. It's all part of their proprietary format; they refuse to let us know exactly what it does.

 

In short, it's a clever lossy compression scheme that they claim is at worst transparent and at best can even improve the SQ of the file it is made from. They use a poor filter which trades frequency accuracy for what they claim is timing accuracy and leaves lots of aliasing artifacts in the final result. Some people like how it sounds. Others don't. Certainly it is no SQ revolution. It's just another format with some run-of-the-mill filters that aren't anything new and can be essentially duplicated by anyone with some basic knowledge and software.

 

But unfortunately it's a closed, proprietary format that could give MQA and the record labels the ability to reap royalties at multiple places in the production and distribution chain, as well as use its built-in DRM abilities to control what SQ level you are able to play your files back at. Only a very naive person would think this scheme is about SQ. It's not. It's about giving market power back to the corporate interests in the music industry, finding ways to prevent actual master-quality files from getting into the hands of the music consumer (they've admitted as much), and at the same time using the scheme to generate additional cash flows for the industry. If this weren't true, the full MQA decoding (not just the first fold) could be done in software. There would be no need for "MQA DACs" and that level of control over the recording and distribution chain.

 

So tell me again why we need it? What problem does it solve? What does it give us?



 

4 minutes ago, firedog said:

So tell me again why we need it?

It gives us a blue light

 

4 minutes ago, firedog said:

What problem does it solve?

We don't currently have a blue light. 

 

5 minutes ago, firedog said:

What does it give us?

A blue light 



