Full Member

  • Community Reputation: 653 Superstar
  • Followers: 1
  • Rank: 5000+ Post Club
  • Location: Brisbane (ex-DTV Forum member)

  1. I have to agree, aussievintage, that it is not clear how the sound of the amplifier would be affected. And of course before pondering that question in depth, it could be useful to identify which particular equipment has given subpar performance with a particular active-neutral connection orientation. * * * In many parts of the world, non-earthed plugs (as used to power double-insulated equipment from a power point on the wall) can be inserted in either orientation. The manufacturer should design their product to perform properly whichever way the plug happens to be inserted into the power point by the customer. The opening post appears to take the view there will be an audible difference if the mains active and neutral connections are interchanged. (A few posters to this thread have indeed reported hearing a difference. However, others have reported hearing no difference.) I would suggest that failure to hear a difference with particular equipment if the active and neutral connections are swapped does not necessarily imply that the listener has "cloth ears". A simple and plausible explanation would be that there happens to be no audible difference, for the particular equipment, in these circumstances.
  2. Yes, you have found your answer! The meaning of the CD player output figure is quite specific. You asked: "For CD what does "0dBFS 2.3VRMS Signal Output" really mean?". It simply means that when playing a test sine wave recorded on the CD at the maximum level the 16-bit Red Book standard can accommodate, 0 dBFS, the analogue output will be 2.3V RMS (plus or minus small manufacturing tolerances). This will be the case at 100Hz, 1kHz, 10kHz and frequencies in between. There is no RIAA equalisation curve to complicate things! Modern CDs are typically mastered so that a piece of music reaches, or almost reaches, 0 dBFS for at least some of the time. The meaning of the magnetic cartridge sensitivity is also specific, but it is a much more complicated story. A 0.4mV cartridge output at 1kHz when tracking a sine wave cut with a tip velocity of 5cm/s is one thing. How the signal is processed before being sent to the cutting head for a particular disc is another. The frequency response and the levels sent to the cutting head are manipulated so as to get the best overall result. It is a compromise between, on the one hand, attaining a high tip velocity so as to improve the signal-to-noise ratio, and on the other hand reining in the tip velocity so as to guarantee trackability, lessen THD and IMD, avoid sacrificing an extended low-frequency response, and perhaps even fit extra material onto that side of the disc. In practice the recorded level achieved on a stereo vinyl disc can be greater than what the standards might suggest. The following graph shows measured recorded levels for a selection of vinyl discs. (The measurements were reported in a Shure Technical Seminar in 1978.) It can be seen that some discs were found to have a recorded level at 1kHz well in excess of 5cm/s. (This graph appears on the webpage http://pspatialaudio.com/max_velo.htm )
  3. On Netflix. Recommended. Anthony Hopkins' acting is, as usual, superb. Some Italian, Spanish and Latin, but most dialogue in English. From Wikipedia: "The Two Popes is a 2019 biographical drama film directed by Fernando Meirelles and written by Anthony McCarten, adapted from McCarten's play The Pope which premiered at Royal & Derngate Theatre in 2019.[3][4] Predominantly set in the Vatican City in the aftermath of the Vatican leaks scandal, the film follows Pope Benedict XVI, played by Anthony Hopkins, as he attempts to convince Cardinal Jorge Mario Bergoglio, played by Jonathan Pryce, to reconsider his decision to resign as an archbishop as he confides his own intentions to abdicate the papacy."
  4. Certainly -- and I think this is well known as an important issue for magnetic cartridge preamps -- a variation in the load impedance for a magnetic cartridge can potentially make a noticeable audible difference in the frequency response performance of the cartridge. (This is without worrying about anything further along in the reproduction chain that might further modify the frequency response.) So we are looking at the impedance of the connecting cable to the input of the phono preamp, and the input impedance of the phono preamp itself, as potentially being very important for achieving a nominally flat response. I guess though (and this may be the type of point you're making here, Muon) that if the rest of the system tilted the frequency response, then one might wish to counteract that by deliberately making the phono preamp tilt the frequency response in the opposite direction.
  5. Possibly because it is hard to retain a memory of how something sounds, and possibly because how something sounds can depend on one's mood at the time of listening. Have you ever tried making recordings of the same track of a vinyl disc as amplified by a range of different phono preamps? Listening to playbacks of the various recordings could make it easier to decide which phono stage you prefer. (At least for the particular track that was chosen for the test! And for the particular cartridge.)
  6. A lot of people report finding the book helpful. Others criticize it for erring on the side of unsubstantiated subjective claims that echo doubtful audiophile lore. See for example https://www.amazon.com/gp/customer-reviews/R1OC2CRW52TJ0I/ref=cm_cr_dp_d_rvw_ttl?ie=UTF8&ASIN=0978649362 I'm sure there's a lot in it that's helpful but I would recommend taking its recommendations with a grain of salt, and seeking confirmation from other sources.
  7. As no one else has replied, I'll have a go. @JN*, the DAC output is high (according to the specs up to 4.2V RMS, or 2.1V per phase), and it's possible that a powered-down amplifier connected to it would present a non-linear load. That could lead to distortion if it affects the signal being routed to the amplifier in use. So I'd be a bit wary about connecting "both to the unbalanced outputs via a splitter", though that could work out ok - you'd need to trial it and evaluate. I note there is a headphone output. That would be likely to provide isolation, i.e. connecting one amp to that source would most likely not affect the signal going to the line-level balanced and unbalanced outputs. [There would be the possibility of an earth loop issue if using the headphone output for one amp and the unbalanced line out for the other amp.] It's possible the balanced and unbalanced line outputs of the DAC are fully independent of each other -- that would depend on the circuit configuration. I haven't tried to look into that.
  8. I certainly have encountered some Toslink male connectors that wobble if pushed when engaged in the female socket. However I've never had one that lost signal when it was wobbled with my finger. (Oftentimes I find a Toslink connector will fit snugly and tightly.) I guess a thumping bass causing vibration of the floor might lead to a small wobble in a loose Toslink connection. Whether that would result in an audible difference I don't know, but I strongly suspect not. The Toslink receiver chips these days tend to iron out any short-term jitter. In any case, the speed of light in the fibre and in any air space in the connection is so high that wobble would produce a timing difference so minute it would require laboratory-grade equipment to measure. So there may well be no need for a Toslink connection to be wobble-free. If anyone has seen published research on the effect of Toslink connector wobble on audio signal timing, perhaps that could be linked to.
  9. I can understand your confusion. A lot of interconnect cables on the market are extremely expensive, and a lot of retail salespeople will recommend them. There is a lot of controversy in audiophile forums over interconnect cables, with one camp insisting you need very expensive cables and the other camp suggesting that for most purposes cheap cables will do a perfectly good job, audibly indistinguishable from much more expensive cables. I happen to align myself with the second camp.
  10. I don't know why you describe the book as "excellent". That page you've reproduced sets out a set of audiophile views about alleged deficiencies in sound quality. It provides no evidence that the deficiencies would be audible. It's something anyone with a rudimentary knowledge of the technologies could have typed out off the top of their head in the 1980s. To my mind that really is not good enough in a lengthy publication about high end audio. (It's perhaps acceptable in a short, informal, magazine article. Or perhaps someone posting a subjective view to this forum.) With a book one would expect footnotes, providing references to properly conducted double blind tests, or at least to measurements. One would expect quantification of the claimed deficiencies. One would expect commentary on how technology has evolved over the decades to address and combat such deficiencies. For example the TOSLINK interface at the receiving end for a device manufactured in 2019 is likely to be more effective at dealing with jitter than a device manufactured in 1983. Technology has moved on and an author of a lengthy work would need to develop an understanding of that before being in a position to write authoritatively on TOSLINK as it relates to "high end audio". Is that page you have reproduced followed [or preceded] by an explanation that if a buffer follows the interface with a suitably well controlled clock, any jitter will be eliminated, or at least substantially reduced? That is a significant matter that would need to be mentioned for completeness.
  11. Is it audibly so? That's an important matter to pin down. We see so much written about 44.1/16 being indistinguishable from 96/24 for human ears. If there is any audible difference it may be at an only "just noticeable" level, and then only with "killer" test samples. This may partly explain why so many devices do not tie the DSP sample rate to the SR of the incoming stream.
  12. I find ABC classical more satisfying to listen to in the car using FM rather than its DAB+ simulcast. (I find typical bitrates on YouTube quite ok for classical music, but not the 80kbps "nominal" (less for the actual audio) used by the ABC for its classical DAB+ station.) I don't bother with MBS light as I find the 64kbps nominal bitrate too frustrating to try to listen to. In Brisbane, in our house we use DAB+ for the clock radio to give us the news in the morning, but not as a substitute for FM. It's clear that some people are much more tolerant than others of the low-to-medium bitrate HE-AAC used for DAB+ radio in Australia. A small number of the DAB+ stations do have (relatively) high actual bitrates. 4KQ doesn't, at least not at the moment. My DAB+ radio currently shows a nominal bitrate for it of only 48kbps. The bitrate for the audio would be lower. To my ears the audio sounds like low bitrate mp3. Both 4BC1116 and Macquarie Sports are currently broadcasting in Brisbane with a nominal bitrate of 104kbps, despite the content mostly being "talk" rather than music.
  13. (The MQA process alters the audio intentionally, so it could sound slightly different. MQA versions call for a separate specialized discussion.) Generally, there should be no difference*. With a Digital Audio Workstation capable of importing two decoded versions and subtracting one from the other, the subtraction should result in a solid null. The difference products should be signals of low enough intensity, and of a spectral distribution, to be inaudible to human ears - notably quantization noise. One factor if using hardware to decode and play files is that the playback device will not necessarily generate the same output level for different formats. For example a DSD version may not result in the same playback reference "0dB" level as a PCM version. ________ * Downsampling to 44.1kHz can create minor issues because of the different filters that could be applied. The differences tend to be slight and subtle, barely audible. As for bit-depth reduction to 16 bits, if done with appropriate noise-shaped dither this appears to give a result indistinguishable from higher bit depths, for standard recording and playback levels, for human ears.
  14. If a hi-res format of itself made as clear a difference as you report, then the hi-res industry would have taken over the audiophile market years ago. Also, there would be one or more published formal listening tests confirming the readily heard difference. However these outcomes have not eventuated. It appears that the best result the hi-res industry can cite is a statistical analysis combining data from different studies. This analysis showed a success rate of slightly over 50% in distinguishing hi-res from 44.1/16, and this slight success appeared not to be due to chance. I am referring to the paper published in 2016 titled A Meta-Analysis of High Resolution Audio Perceptual Evaluation. The absence of even a single formal study demonstrating a clear ability by test subjects to distinguish hi-res from Red Book CD is notable. __________ It appears you were trying to compare files with different sample rates. Your method of testing would not have guarded against different DAC performance at a 44.1kHz sample rate. A protocol that sidesteps that issue is as follows: 1. Downsample the hi-res file to 44.1/16 using noise-shaped dither. 2. Upsample the result of 1 to the same sample rate as the hi-res file. 3. Compare the hi-res file and the twice-resampled file under blind conditions. Tools that could be used: Audacity (for the downsampling and upsampling); Foobar2000 with the ABX plug-in for automated blind testing. You could use Audacity for informal comparisons by importing the two files and using the solo or mute buttons to alternate between tracks. People who go to the trouble of undertaking this type of exercise may be surprised at their inability to tell any difference whatsoever.
  15. My recent post was about bit-depth only, specifically the result of noise-shaped dithering a 24 bit file to 16 bits. You appear to be talking about changing the sample rate as well, as I presume the 24 bit versions you used had sample rates higher than 44.1kHz. Was that the case? Changing the sample rate would lead to a different discussion. (There are indeed different filtering possibilities for 44.1kHz that can make an audible difference, depending on what choices are made.)
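The dBFS arithmetic in post 2 can be made concrete with a minimal sketch. The 2.3V RMS full-scale reference is taken from the spec quoted in that post; the function name is mine:

```python
import math  # not strictly needed here, but handy for dB work generally

def dbfs_to_vrms(dbfs: float, full_scale_vrms: float = 2.3) -> float:
    """Convert a level in dBFS to an RMS output voltage, given the
    player's stated full-scale (0 dBFS) output of 2.3 V RMS."""
    return full_scale_vrms * 10 ** (dbfs / 20)

# A 0 dBFS test tone gives the full 2.3 V RMS;
# a signal at -6 dBFS gives roughly half that.
print(dbfs_to_vrms(0))            # 2.3
print(round(dbfs_to_vrms(-6), 3))  # 1.153
```

Because the relationship is fixed (no RIAA curve), the same formula holds at any frequency in the audio band.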
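The loading effect described in post 4 is, for a moving-magnet cartridge, largely a resonance between the coil inductance and the total load capacitance (cable plus preamp input). A small sketch with hypothetical but typical values (500 mH, 250 pF total), not figures from any particular cartridge:

```python
import math

def resonance_hz(inductance_h: float, capacitance_f: float) -> float:
    """Electrical resonance of the cartridge inductance with the total
    load capacitance: f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Hypothetical moving-magnet cartridge: 500 mH coil inductance,
# with 150 pF of cable plus 100 pF of preamp input capacitance.
f = resonance_hz(0.5, 250e-12)
print(f"{f / 1000:.1f} kHz")  # 14.2 kHz
```

With the resonance sitting inside the audible band, changing the cable or the preamp input capacitance shifts the treble response, which is why the load can make a noticeable difference.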
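The timing claim in post 8 is easy to check with back-of-envelope arithmetic: even a generous 1 mm change in optical path length from connector wobble shifts arrival time by only a few picoseconds, a vanishing fraction of one sample period.

```python
SPEED_OF_LIGHT_AIR = 3.0e8  # m/s, approximately

def path_length_delay_s(delta_m: float, speed_m_s: float) -> float:
    """Extra propagation delay caused by a change in optical path length."""
    return delta_m / speed_m_s

# Assume a generous 1 mm change in the air gap from connector wobble.
delay = path_length_delay_s(1e-3, SPEED_OF_LIGHT_AIR)
sample_period = 1 / 44100  # one sample period at 44.1 kHz

print(f"delay: {delay * 1e12:.1f} ps")  # ~3.3 ps
print(f"fraction of one 44.1 kHz sample period: {delay / sample_period:.1e}")
```

A few picoseconds is far below anything a Toslink receiver's clock recovery would even register, consistent with the suspicion that wobble is inaudible.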
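The null test described in post 13 (subtract one decoded version from the other and measure the residual) can be sketched with NumPy. The two "decoded files" here are synthetic stand-ins: a 1 kHz tone at full float precision versus the same tone quantised to 16 bits with flat (rectangular) dither, a simplification of noise-shaped dither:

```python
import numpy as np

def residual_dbfs(a: np.ndarray, b: np.ndarray) -> float:
    """RMS level of the difference (null) signal, in dB re full scale (1.0)."""
    diff = a - b
    rms = np.sqrt(np.mean(diff ** 2))
    return 20 * np.log10(rms) if rms > 0 else float("-inf")

# Synthetic stand-ins for two decoded versions of the same master.
sr = 48000
t = np.arange(sr) / sr
master = 0.5 * np.sin(2 * np.pi * 1000 * t)

step = 1 / 32768                      # one 16-bit quantisation step
rng = np.random.default_rng(0)
dither = rng.uniform(-step / 2, step / 2, size=master.shape)
sixteen_bit = np.round((master + dither) / step) * step

# The residual is quantisation noise, far below audibility
# at normal playback levels (roughly -98 dBFS here).
print(f"residual: {residual_dbfs(master, sixteen_bit):.1f} dBFS")
```

With real files the same subtraction works in a DAW, provided the two versions are sample-aligned and level-matched first, which is exactly where hardware level differences (the DSD vs PCM point above) would spoil the null.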
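Steps 1 and 2 of the protocol in post 14 can be sketched with SciPy's polyphase resampler. This is only an illustration of the resampling round trip, it omits the 16-bit noise-shaped dither step, and it assumes a 96 kHz source (44100/96000 reduces to the integer ratio 147/320):

```python
import numpy as np
from scipy.signal import resample_poly

def round_trip_to_44k1(x_96k: np.ndarray) -> np.ndarray:
    """Protocol steps 1-2 (bit-depth reduction omitted for brevity):
    96 kHz -> 44.1 kHz -> 96 kHz, via integer-ratio polyphase resampling."""
    at_44k1 = resample_poly(x_96k, 147, 320)   # down to 44.1 kHz
    return resample_poly(at_44k1, 320, 147)    # back up to 96 kHz

# A 1 kHz tone sits far below either Nyquist limit, so it should
# survive the round trip essentially unchanged (bar edge effects).
sr = 96000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)
rt = round_trip_to_44k1(tone)

mid = slice(1000, -1000)  # skip filter transients at the edges
print(np.max(np.abs(rt[mid] - tone[mid])))  # tiny residual
```

Step 3 (blind comparison of the original and the round-tripped file) is then done with an ABX tool such as the Foobar2000 plug-in mentioned above, so both files reach the DAC at the same sample rate.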