
Chord Electronics Owners & Discussion Thread





6 hours ago, bhobba said:

...except to say what Rob considers a Tap requires more processing power than what Miska thinks of as a Tap.

Sure, but let's look at exactly what I've said .... rather than your faux outrage about what you think I said.

 

a)   A faster computer than the one Chord have used can compute sinc more accurately within a limited timespan

b)   A faster and more flexible computer can compute sinc over a long ('infinite') period

 

 

B is what the article posted on the Chord website mentioned.     A is interesting, as it raises a number of caveats on both the audio and computing sides.     Would you really want to use a faster processor (perhaps not, due to power, cost, etc.), and do you really need to use a faster processor given what you're doing with the "WTA", in that you've already compensated for your imprecise version of sinc?

 

The intention of my comment was simply to say that if the approximation of sinc is so important (as claimed by RW) ... then you could do it better now (by using more computing grunt), or you could do it better tomorrow when more computing power is available at the same power/cost/complexity as the A200T.

 

6 hours ago, bhobba said:

BTW HQ player is the real deal - it does sound good, but the reports are that the M-Scaler is better.

I can't help thinking that PCM seems better for this approach...   unless DSD goes really really fast.

 

 .... or really the specific DAC.   All of the big players in this space use their own "blameless" DAC .... which obviously makes a lot of sense, as you can't very well claim that a very, very accurate reconstruction of the waveform is important .... and then ship it off to a DAC which does who-knows-what to it.


47 minutes ago, Sime V2 said:

My understanding is that an

Yes, in a roundabout way this is right.     You wouldn't make an M Scaler with an x86 CPU.... it's too expensive, too power-hungry, too big, and it does too many things you don't need.

 

The article talks about an x86 CPU because, if you were going to do what they suggested (i.e. spend a lot of time and power to accurately represent the sinc function), then you would need that x86 CPU.

 

47 minutes ago, Sime V2 said:

Of course this one was probably chosen because of cost and the right amount of horsepower to achieve the 1m tap goal.

Yes, and due to the power requirements .... the power supplies found in x86 computers (where you might have 100A moving around) are not easy to use in an audio device...  You can isolate yourself from them, but that's not very practical for a device like the M Scaler, which needs to be in one small box, placed close to other devices, and play nicely with everything......   i.e. unless you went to (relatively) extreme and impractical lengths, you'd find that the different hardware ended up with differing SQ.

 

I'd very much imagine that part of the answer to "why didn't you use a more powerful DSP ... or an x86"  (aside from all the impracticality and cost) would be "I couldn't make it sound as good".

 

47 minutes ago, Sime V2 said:

And as far as I know, he’s only using 540 of the total cores.  

Really, they're not even "cores" in a sense equivalent to x86 cores ..... so this discussion is off the deep end.

47 minutes ago, Sime V2 said:

Running an M-Scaler with an x86 will be plagued with traditional PC issues, where an FPGA has zero (?)

Sound about right?

Yes.   If you replace "zero" with "more manageable".    The main problem with an x86 is it's just too "big".


1 hour ago, davewantsmoore said:

A few who also like to quick-fire reply about things which they don't really understand, from the look of a few of the replies you already have. "My hearing cuts off at 12000Hz" ... Heh.

If you can't tell humor when you see it - well that does make discussions hard to follow.

 

You might like to have a look at Vandium 50's response and check out his bio.  

 

Thanks

Bill

 

 




1 hour ago, Sime V2 said:

Running an M-Scaler with an x86 will be plagued with traditional PC issues, where an FPGA has zero (?)

Based on the replies over at the technical forum I'm involved with, that seems to be the situation.   Modern Intel processors seem to have the grunt, but are plagued with other issues.

 

Thanks

Bill


I think Dave's approach is like using a big jackhammer on a small nail.

 

The device RW has created is perfectly suited to the situation it's intended for.

 

Now bring on the listening impressions!

Looking forward to reading about the current tour. Well done Sime.

Edited by rocky500



53 minutes ago, Sir Sanders Zingmore said:

Hang on, didn't you just spend the last umpteen posts arguing that they don't have the grunt ??

Well golly gee - an expert has said that modern processors have the grunt (just - my math indicated you need about 200 billion multiplications per second) - but they have other issues.  As for 10-year-old processors - well, you can look up the specs of those, do the math, and then post it.  Here is the analysis:

 

An AVX512 processor can do two 8-wide double precision FMA instructions per cycle. For example, an i7-7800X runs at 3.5 GHz, so you have 56 billion multiplications per second per core. You have 6 physical cores, so in principle you have the horsepower. In practice, this would require that you can keep the cores fed, and these million multiplications involve information that is always in registers. That is almost certainly not the case, and it's also much easier to achieve with an FPGA. The other problem is that Intel chips throttle when one part of them gets hot, so you run the risk of it slowing down the FP arithmetic if you tried to run it flat out. Again, an FPGA is less prone to this.

 

So yes I goofed - but the result is the same - the Intel processor is unsuitable for it.
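
For anyone who wants to check that arithmetic, here is a rough sketch in Python (the 1-million tap count and 192 kHz output rate are my assumptions, chosen to match the ~200 billion figure above; "peak" throughput ignores whether you can actually keep the cores fed):

```python
# Naive (non-polyphase) FIR cost: one multiply-accumulate per tap, per output sample.
# The numbers below are the ones quoted in this thread - treat this as an estimate only.

def naive_fir_macs_per_sec(taps: int, output_rate_hz: float) -> float:
    return taps * output_rate_hz

needed = naive_fir_macs_per_sec(1_000_000, 192_000)       # 1M taps at a 192 kHz output rate
print(f"{needed / 1e9:.0f} billion MACs/s")               # 192 -> "about 200 billion"

# Quoted AVX-512 peak: two 8-wide double-precision FMAs per cycle = 16 multiplies/cycle.
per_core = 16 * 3.5e9                                     # i7-7800X at 3.5 GHz
print(f"{per_core / 1e9:.0f} billion multiplies/s/core")  # 56
print(f"{needed / per_core:.1f} cores at peak")           # ~3.4 of the 6 physical cores
```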

 

Thanks

Bill


4 hours ago, bhobba said:

You might like to have a look at

 

Quote

this type of processing is what the XC7A200T is well suited for

You are correct on this front .... but it was never in question.    The XC7 can do this work with a lot less power, due to its architecture.

1 hour ago, bhobba said:

Well golly gee - an expert has said that modern processors have the grunt

His answer is less than perfect.... the next poster is closer.

Quote

The other problem is that Intel chips throttle when one part of them gets hot, so you run the risk of it slowing down the FP arithmetic if you tried to run it flat out.

There's a significant difference between "can throttle" and "does throttle".    A sensibly deployed CPU doesn't get hot enough to throttle, even when running at full tilt 24/7.

Quote

10 cores, and running at 3.0 GHz. The vmulps instruction can multiply 16 pairs of floats in half a clock (there's a latency of 4 clocks if all values are in registers). So at 3.0 GHz, my computer could theoretically do 6.0 × 10^9 multiplications of 16 pairs of floats, or 96 billion multiplications per second. That's in one half of one core. Delegating the work to additional threads would bump up the throughput.

At 10 cores, this is just shy of 2 trillion (2000 billion)....  which is the (ballpark) number I quoted earlier  (I might have said 1 trillion?  I don't recall).
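
Taking the quoted numbers at face value, the arithmetic checks out (peak-rate only, ignoring memory bandwidth and whether the data stays in registers):

```python
# 16 pairs of floats per half clock, per the vmulps figures quoted above.
half_core = 16 * 2 * 3.0e9    # 96 billion multiplies/s in "one half of one core"
per_core = half_core * 2      # both halves of a core
total = per_core * 10         # 10 cores
print(f"{total / 1e12:.2f} trillion multiplies/s")   # 1.92 -> "just shy of 2 trillion"
```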

 

 

Anyways.... as I've said, we're a long way from where we began, which was the simple statement that you can have more than 1 million taps if you want ..... and that an x86-style CPU can absolutely (just like in the article referenced) take as long as it wants to compute the sinc construction of the waveform, which would be "much better" than what the M Scaler does.     It isn't at all controversial.

Quote

the Intel processor is unsuitable for it.

This was never in question (aside from the approach mentioned in the article, which would require an x86 CPU) ..... The implication was just that 1 million taps isn't a breakthrough (even in FPGA/ASIC land) .... and if the quality depends so closely on the approximation of sinc, then they'll be able to "upgrade" this soon.

Edited by davewantsmoore



4 hours ago, bhobba said:

You might like to have a look at Vandium 50's response and check out his bio. 

Someone's "bio" doesn't make facts any more or less true.    "Appeal to authority" when we are looking for facts, not opinions, is weak.

 

Edited by davewantsmoore

2 minutes ago, davewantsmoore said:

The implication was just that 1 million taps isn't a breakthrough (even in FPGA/ASIC land) .... and if the quality depends so closely on the approximation of sinc, then they'll be able to "upgrade" this soon.

On that we can agree - it's only a breakthrough in Rob's own WTA filter.

 

We will need to wait and see if other makers follow down the same path.   But care must be taken with statements - witness the difference in opinion on what a Tap is between Rob and Miska.   I am pretty sure I know what Rob means by a tap - but what Miska means I do not know. He claims 2 million already in his HQ player (and I know from personal use it is good), which would mean about 400 billion multiplications per second for 96k upsampling - even with six cores and a co-processor that's pushing it somewhat on an i7 - yet I have had his HQ player running on a pretty basic i3 - so I am unsure exactly what he means by a tap.

 

Thanks

Bill


21 minutes ago, davewantsmoore said:

Someone's "bio" doesn't make facts any more or less true.    "Appeal to authority" when we are looking for facts, not opinions, is weak.

Of course not - but a guy with a PhD means what he says should be looked at and not dismissed.   I personally found his analysis insightful; it considered things I did not think of, e.g. often with Intel processors you use a co-processor, which brings a multiplication down to just a couple of clock cycles.  And Rob taking so long to code the million taps - as one person correctly said, he doesn't get that (after thinking on it, neither do I really) - a filter should be rather simple to do - it may have something to do with the WTA filter - who knows.  Rob said he did half a million pretty quickly - it's the million that took a while.

 

Thanks

Bill

Edited by bhobba



A huge shout out and thanks to Sime for flying down expressly for this day and bringing his hardware at his own expense.

 

Whilst others are off on their way to listen to the M Scaler in another system, I'll give my rundown of my experience with it. All listening tests were done sighted, so if this is an issue for you, please skip my post right now.

As you would have seen in the picture above, my system has an MSB Reference DAC which is fed DSP-modified 44/48/88/96 data from a PC. Normally my system has the DSP acting as a crossover to feed active subs, but with the variable latency (up to 600ms) the M Scaler would have added, I deemed this inadequate for a fair comparison, since the alignment between main speakers and subs would have been way out. So my speakers were used full range for the comparison with the M Scaler in, and various amounts of upsampling from the base frequency were done where possible, or it was used in bypass mode.

One extremely unfortunate limitation of this comparison was clear even before we started. The M Scaler works best when run at its highest sample rates, over 700kHz, which would require the proprietary Chord dual-coaxial DAC interface, which my DAC doesn't support. Furthermore, I contacted MSB in advance to discover that the coaxial input, as I have it configured on my DAC, only supports 192kHz as the maximum input frequency - even though the DAC allegedly supports up to 3072, it only does so with the network renderer. This means at best we were only listening to 1/4 of the capability of the M Scaler.

The most interesting part of this experiment to me was the use of long-filter oversampling in combination with an R2R (AKA ladder) DAC. At first we listened to a lot of my regular sample tracks; these are almost exclusively classical, and all some form of high-res - all 24 bit, most 88 or 96, with some 44. My experience with the top of the line Chord DAC (the DAVE) in the past was very unsatisfying - it sounded nothing short of awful in my system (see my system showcase thread for my review) - so I was wary that this was perhaps the Chord sound, and that introducing the M Scaler would bring back that sound. The good news is that it did not in any way change the sonic signature of my DAC to make it sound like the DAVE. Any difference imparted to 96/24 sources was quite subtle, but none of it was bad. As we went down to 24/44 sources the difference became more obvious, and it was most pronounced by the time we got to 16/44 sources.

 

There was a subtle improvement in top-end sweetness, with a little less edge to the sound; voices sounded a little less harsh and more full-bodied and warm; bass notes seemed more focused in space; and the presentation was a little more distant and relaxed. Other reviews have described more speed, impact, and a greater bass leading edge - I did not experience that, which is interesting since that's meant to be one of its aims. With the 16/44 sources, we could do this in steps, going to 16/88 and then 16/176. Each step amplified the changes of the one before. There was nothing negative about the change to the sound, which was very reassuring. It was certainly a fascinating and rewarding experiment, but it's a crying shame that we couldn't even remotely test the limits of what it had to offer. Lack of time and unbalanced inputs meant we couldn't just replace my DAC with the Qutest that Sime brought along, to see how much it improves that particular DAC, which does support the highest resolutions. Was this a night and day difference? No, not on high-res sources, where it was quite subtle, but it was clearly advantageous on the lower-resolution sources. Had we been able to go to 768kHz, who knows.

 

Will I be buying one? No. Given most of my source is high-res these days, there's very little to be gained for the price, especially given I would only be using 1/4 of its upscaling capacity. If I were still listening to predominantly CD, I'd seriously consider it. I think a more interesting experiment would be to compare a $10k standalone DAC with a $1k similar-technology DAC chained to this $7.5k upsampler. It'd be interesting to see what's actually more important. In fact, a simple but very quiet "technical" type DAC, like the top Topping DAC at $500, would be interesting, since we'd be moving all the filtering in the audible range and most of the oversampling to the upsampler - but alas, 384 is the highest you'd be able to use there as well. Maybe if Chord offered an upsampler at half the price that was aimed more at the general market it would be worth its weight in gold, but they obviously have to lean people towards buying their DACs as well.

Edited by Ittaku

45 minutes ago, Ittaku said:

Will I be buying one? No. Given most of my source is high-res these days, there's very little to be gained for the price,

This sums it all up perfectly, and it's refreshingly honest. Since 80% of my music is hi-res, my curiosity has been squashed (not that I had $7,500 to spend anyway).

 


2 hours ago, bhobba said:

Of course not - but a guy with a PhD means what he says should be looked at and not dismissed.

Yet, you immediately dismissed what I had said...... and not even based on anything "kinda close".

2 hours ago, bhobba said:

I personally found his analysis insightful; it considered things I did not think of, e.g. often with Intel processors you use a co-processor, which brings a multiplication down to just a couple of clock cycles.

At the risk of splitting hairs.... there was no 'co-processor' in his answer.

2 hours ago, bhobba said:

And Rob taking so long to code the million taps - as one person correctly said, he doesn't get that (after thinking on it, neither do I really) - a filter should be rather simple to do - it may have something to do with the WTA filter - who knows.  Rob said he did half a million pretty quickly - it's the million that took a while.

I can (attempt to) explain this, if you'll allow.

 

It took so long not to code the filter, but to select the filter which had the optimum sound quality.....   The scenario being that when you 'truncate' sinc .... perhaps you want to modify it a little - rather than just stopping it .....  and he did a lot of subjective research into what he felt sounded best.

 

It's a bit like saying a crossover filter is easy enough to design.... If we have a target transfer function, then designing the filter to apply to the speaker to get there is a simple engineering problem.     However, the challenge is selecting the target transfer function in the first place.
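
For anyone wanting to see what "modifying rather than just stopping" sinc looks like in practice: the textbook version is to multiply the truncated sinc by a window function. A minimal sketch (using a Kaiser window purely as a stand-in - the actual WTA window is Rob's own work and isn't public):

```python
import numpy as np

def windowed_sinc_lowpass(num_taps: int, cutoff: float, beta: float = 8.6) -> np.ndarray:
    """Truncated sinc lowpass, tapered by a Kaiser window rather than chopped dead.

    cutoff is the normalised cutoff (0..0.5 of the sample rate); beta trades
    stopband rejection against transition-band width.
    """
    n = np.arange(num_taps) - (num_taps - 1) / 2.0
    h = 2 * cutoff * np.sinc(2 * cutoff * n)   # the ideal (infinite) sinc, truncated
    h *= np.kaiser(num_taps, beta)             # taper the ends instead of a hard stop
    return h / h.sum()                         # normalise for unity gain at DC

taps = windowed_sinc_lowpass(num_taps=1023, cutoff=0.25)
```

The choice of window (and there are endless variants) is exactly the sort of thing you can only settle once you've decided what "best" means - which is the point above.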

 

2 hours ago, bhobba said:

witness the difference in opinion on what a Tap is between Rob and Miska

There is no real difference; this is a misconception.

2 hours ago, bhobba said:

yet I have had his HQ player running on a pretty basic i3

The filters aren't always as difficult to run as a simple back-of-the-envelope calculation might suggest.... especially when the oversampling rate is so high (as in HQplayer, which can oversample CD 512 or more times ....   vs Chord's 16x) ......   there's also the possibility that you weren't running filters which used '2 million taps'.
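
To put a number on that: the standard trick for long interpolation filters is polyphase decomposition - an N-tap, L-times oversampling filter only needs N/L multiplies per output sample, so the total work is N × input-rate MACs per second, independent of how far you oversample. I'm assuming HQplayer does something like this (it's the textbook approach, not something Miska has confirmed here). A minimal illustrative sketch:

```python
import numpy as np

def polyphase_macs_per_sec(taps: int, input_rate_hz: float) -> float:
    """Total MACs/s for an L-fold polyphase interpolator - independent of L,
    because each output sample only touches taps/L coefficients."""
    return taps * input_rate_hz

print(polyphase_macs_per_sec(2_000_000, 44_100) / 1e9)   # ~88.2 billion MACs/s

def polyphase_interpolate(x: np.ndarray, h: np.ndarray, L: int) -> np.ndarray:
    """L-fold interpolation via polyphase decomposition (illustrative, not optimised)."""
    n = len(h) - (len(h) % L)          # trim so the filter splits evenly into L phases
    phases = h[:n].reshape(-1, L).T    # phases[p] = h[p::L], one short sub-filter each
    y = np.empty(len(x) * L)
    for p in range(L):                 # each sub-filter runs at the *input* rate
        y[p::L] = np.convolve(x, phases[p])[:len(x)]
    return y * L                       # restore the passband gain lost to zero-stuffing
```

On that arithmetic, '2 million taps' on 44.1k material is roughly 88 billion MACs/s whether you oversample 16x or 512x - which goes some way to explaining why it can run on fairly modest hardware.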


2 hours ago, Ittaku said:

The most interesting part of this experiment to me was the use of long-filter oversampling in combination with an R2R (AKA ladder) DAC.

That is how Phasure does it.     Their reasoning is quite interesting.

 

2 hours ago, Ittaku said:

3 MHz

I've been waiting to say this ever since you mentioned you were going to try the M Scaler.

 

 

Have you considered feeding your DAC via the network port .... using an SDM-type upscaler (like HQplayer), including the needed DSP.

 

Then if you need a separate feed for your subwoofers (e.g. to apply additional / different DSP) .... take a split of the analogue output of the MSB into some sort of DSP which has an analogue input and output.

 

?


1 minute ago, davewantsmoore said:

That is how Phasure does it.     Their reasoning is quite interesting.

 

I've been waiting to say this ever since you mentioned you were going to try the M Scaler.

 

 

Have you considered feeding your DAC via the network port .... using an SDM-type upscaler (like HQplayer), including the needed DSP.

 

Then if you need a separate feed for your subwoofers (e.g. to apply additional / different DSP) .... take a split of the analogue output of the MSB into some sort of DSP which has an analogue input and output.

 

?

Considered many, many times. The room correction in my DSP is essential - otherwise my system sounds schmit - and the DSpeaker has the crossover built in, with its multiple outputs making it a nice one-box solution. Ideally I'd offload the DSP to a completely custom solution within my PC. I don't use Windows BTW; my Linux PC setup involves a custom-hacked Clementine (the stupid thing doesn't play 24 bit by default) and PulseAudio, plus or minus various customised upsamplers I've tried. Given enough time, anything may happen...



