Audiophile Network Switches for Streaming... really?

With all (and I do mean all) due respect, no.

This suggests that a switch makes a difference only (a) where the user has an overactive imagination and/or (b) where the streamer and/or DAC they use is poorly designed. You really do need to hear the difference a switch makes to even a well-designed DAC before you tie yourself to that mast.

The noise is not audible in the sense most people would interpret that. We're not talking hisses and pops and stuff. You only "hear" it when you've heard your system with and without it.

Think high-quality passive preamp, like the Music First Audio Baby Ref. I thought I had a great pre, very transparent, didn't get in the way of the music. When I replaced it with a Baby Ref, the difference was jaw-dropping. With the former preamp, the system sounded great; no "distortion", just great music; but with the Baby Ref, drums were visceral, double bass plucks were in the room, etc. etc. A Baby Ref passive pre can't add anything to the music; all it can do is show what getting out of the way sounds like - what removing distortion sounds like. It's incredible.

I'm not seeking to equate the scale of impact of a switch (installed just before the streamer, not just anywhere) with that of something like an MFA Baby Ref, of course; I merely seek to point out how the absence of something you didn't previously realise was there - because you can't hear it explicitly - can make a serious difference. I just wish more people would risk exposing themselves to the experience so they could speak from both theoretical and practical bases.
I think you are talking about clarity there, which is not always the absence of noise. Clarity could be an improvement in frequency response and bandwidth, a change to the group delay, or a different harmonic spread, for example. Some "noise", for example harmonics, can provide a better listening experience, and is readily measurable at levels that can be heard.

It was established earlier in this thread that a switch doesn't change the actual data; more likely, the "sound" people were hearing was an artefact of some noise leaking onto the ground and power of the streamer from the switch. One has to consider that the DAC and its own PSU could also be adding noise to their own ground and power lines. Streamer designers understand this, and understand the joys of a mixed-signal environment, and will, unless incompetent, ensure that regardless of where the noise comes from, that out-of-band noise is filtered out - noting that any in-band noise could be measured in the audio frequency range, just as our ears can hear it.

However, because there is no evidence - measured, that is - of in-band noise, it doesn't exist. For some that will be hard to grasp, but the area of science and technology in play here is very well understood; we are not at the edge of science. And no one on this thread so far has demonstrated, or communicated in an accepted method of capturing and presenting evidence of this type - one that makes it reviewable and reliably, consistently repeatable - that, within the capabilities of our ears, any change to the sound occurs when, all other things being constant, the network switch has been changed.

Take a switch, measure the output of the DAC, change the switch, measure the output of the same DAC. Subtract one from the other and plot the difference on a log graph, please :)
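For what it's worth, a minimal sketch of that comparison in Python, assuming two time-aligned captures of the DAC's analogue output made with the same test signal and level (the filenames and FFT length are placeholders, not anyone's actual files):

```python
# Sketch: compare the DAC's analogue output captured with two different switches.
# Assumes "switch_a.wav" and "switch_b.wav" are captures of the same test signal
# at the same level and sample rate; filenames and FFT length are illustrative.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import welch

def spectrum_db(path, nfft=65536):
    rate, x = wavfile.read(path)
    x = np.asarray(x, dtype=np.float64)
    if x.ndim > 1:
        x = x[:, 0]                       # use the left channel only
    f, pxx = welch(x, fs=rate, nperseg=nfft)
    return f, 10 * np.log10(pxx + 1e-30)  # power spectral density in dB

f, a_db = spectrum_db("switch_a.wav")
_, b_db = spectrum_db("switch_b.wav")

plt.semilogx(f[1:], (a_db - b_db)[1:])    # difference between the two runs
plt.xlabel("Frequency (Hz)")
plt.ylabel("Level difference (dB)")
plt.title("DAC output: switch A minus switch B")
plt.grid(True, which="both")
plt.show()
```

Any real, repeatable difference between the switches should show up as a deviation from a flat 0 dB line sitting above the measurement noise floor.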

Apologies for the ramble...
 
I think you'll find a transformer pre can add plenty to the music: inductive coupling, ringing, harmonic distortion, just like active pres.

I can't find any specs for it online.
 
"But I see a bump so I must be able to hear it! Oh, the Y axis scale, that doesn't matter does it....but look a bump, I hear it, I hear it!"

:D:D
If you read the source article for context, this is likely what happens when an AC-powered PC, floating at half mains, is plugged in. The leakage through the ~30 pF transformer coupling is tiny, but still measurable with modern test equipment.
No way that you can hear it though.
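For a sense of scale, a back-of-envelope sketch of that coupling, assuming 230 V / 50 Hz mains, so roughly 115 V with the PC floating at half mains:

```python
# Rough scale of mains leakage through a ~30 pF transformer coupling.
# 230 V / 50 Hz mains is an assumption; the device floats at about half mains.
import math

C = 30e-12        # coupling capacitance (F)
f = 50.0          # mains frequency (Hz)
V = 230.0 / 2     # device floating at ~half mains (V)

Z = 1 / (2 * math.pi * f * C)   # capacitive reactance, about 106 Mohm
I = V / Z                       # leakage current, about 1 uA

print(f"Reactance of 30 pF at 50 Hz: {Z / 1e6:.0f} Mohm")
print(f"Leakage current at ~115 V:   {I * 1e6:.2f} uA")
```

Around a microamp of 50 Hz leakage: measurable with decent test equipment, as the article apparently shows, but nothing you could hear directly.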
 
According to Jussi Laako the relevant range is above 20 kHz and up to 1 MHz.
Not much of a relevant feast...
To pass EN 55032 Class B conducted emissions for CE on a domestic product, the conducted noise in the 150 kHz to 30 MHz range has to be below about 50 dBμV.
This is tiny
 
This suggests that a switch makes a difference only (a) where the user has an overactive imagination and/or (b) where the streamer and/or DAC they use is poorly designed. You really do need to hear the difference a switch makes to even a well-designed DAC before you tie yourself to that mast.
If it's really as easy as adding a cheap switch just before the streamer, and so effective, then why wouldn't streamer manufacturers just stick a cheap switch inline in the box? It seems like this would be a no-brainer. OTOH it's less sexy if it's not another box I guess.
 
I think you are talking about clarity there, which is not always the absence of noise. Clarity could be an improvement in frequency response and bandwidth, a change to the group delay, or a different harmonic spread, for example. Some "noise", for example harmonics, can provide a better listening experience, and is readily measurable at levels that can be heard.

It was established earlier in this thread that a switch doesn't change the actual data; more likely, the "sound" people were hearing was an artefact of some noise leaking onto the ground and power of the streamer from the switch. One has to consider that the DAC and its own PSU could also be adding noise to their own ground and power lines. Streamer designers understand this, and understand the joys of a mixed-signal environment, and will, unless incompetent, ensure that regardless of where the noise comes from, that out-of-band noise is filtered out - noting that any in-band noise could be measured in the audio frequency range, just as our ears can hear it.

However, because there is no evidence - measured, that is - of in-band noise, it doesn't exist. For some that will be hard to grasp, but the area of science and technology in play here is very well understood; we are not at the edge of science. And no one on this thread so far has demonstrated, or communicated in an accepted method of capturing and presenting evidence of this type - one that makes it reviewable and reliably, consistently repeatable - that, within the capabilities of our ears, any change to the sound occurs when, all other things being constant, the network switch has been changed.

Take a switch, measure the output of the DAC, change the switch, measure the output of the same DAC. Subtract one from the other and plot the difference on a log graph, please :)

Apologies for the ramble...
No apologies required.

A switch can’t improve frequency response or bandwidth. As you say, it doesn’t/can’t change the data, which would be required if the switch were able to do this.

I am sure there are people here who could devise a method which would satisfy those who suggest all is in the mind. That won’t be me.
Bizarrely I have better things to do with my time than to try to develop and apply such a method, simply to “prove” something self-evident if one listens.

All the best.
 
If it's really as easy as adding a cheap switch just before the streamer, and so effective, then why wouldn't streamer manufacturers just stick a cheap switch inline in the box? It seems like this would be a no-brainer. OTOH it's less sexy if it's not another box I guess.
Good Q. Maybe it’s the future!

There are things which can be done to improve on a “cheap switch” but IMHO the best value for money comes from spending that first £20-30 on something like a Zyxel GS108B and a 0.5m Cat6 cable.
 
I think you'll find a transformer pre can add plenty to the music: inductive coupling, ringing, harmonic distortion, just like active pres.

I can't find any specs for it online.
Quite possibly. My point is simply to draw the comparison that this is another domain where the listener only perceives/acknowledges a “problem” when they have a comparator which is the problem-less alternative.
 
To pass EN 55032 Class B conducted emissions for CE on a domestic product, the conducted noise in the 150 kHz to 30 MHz range has to be below about 50 dBμV.
This is tiny

If you say it's tiny I am not the person to disagree. But Jussi "Miska" Laako (HQPlayer's developer) is an expert, and he says that it affects the clock and D/A chip. He's posted measurements a couple of times, but he posts so much that I cannot find them right now.
He talks a bit about Ethernet network-generated noise in this topic (he posts several messages):

Best Ethernet Cards for Streaming
https://audiophilestyle.com/forums/...for-streaming/?do=findComment&comment=1163705
 
To pass EN 55032 Class B conducted emissions for CE on a domestic product, the conducted noise in the 150 kHz to 30 MHz range has to be below about 50 dBμV.
This is tiny
Thanks. Yes, if 50 dBμV doesn't resonate with people, then I think that's about 0.32 millivolts. Compare that to the 2,000 millivolts full-scale (FS) output (or more) from a DAC on its analogue side, and the noise is already less than 1 part in 6,000, or at least 76 dB below FS on the digital side of the DAC.

Yes, it's tiny. There shouldn't be a lot of noise going into the digital side of a DAC in the first place if you have functioning CE-marked network products.

Then to complete the story, @tuga posted a treatise I thought was good on circuit layout for D/A converters, showing ways to make sure only a tiny fraction of that noise reaches the analogue side - which is where it can cause the IMD mischief that is postulated. Even with a very modest 40 dB of isolation between the digital side and the analogue side we are down to noise no greater than 116 dB below FS.

Then the intermodulation mechanism itself is lossy. Getting at least 20 dB of loss is no great achievement. But we are now down to IMD products that might fall into the audio band at no worse than 136 dB below FS. For comparison, that's about the full range from the threshold of hearing to the threshold of pain. IMHO you need a very high FOMO index indeed to worry (which is what the industry promotes, of course). And I do know of Rob Watts' claims.
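Laying that arithmetic out as a sketch (the 2 V full-scale output, 40 dB of digital-to-analogue isolation and 20 dB of IMD loss are the illustrative assumptions used above):

```python
# Rough noise budget from the posts above: the CE conducted-emissions limit,
# referenced to a 2 V full-scale analogue output, then the assumed isolation
# and intermodulation losses. All figures are the thread's own assumptions.
import math

limit_dbuv = 50.0                           # EN 55032 Class B limit (dBuV)
noise_v = 10 ** (limit_dbuv / 20) * 1e-6    # -> volts, about 0.32 mV

fs_v = 2.0                                  # full-scale analogue output (V)
noise_dbfs = 20 * math.log10(noise_v / fs_v)   # about -76 dBFS

isolation_db = 40.0    # assumed digital-to-analogue isolation
imd_loss_db = 20.0     # assumed loss in the intermodulation mechanism

print(f"Conducted noise:        {noise_v * 1e3:.2f} mV ({noise_dbfs:.0f} dBFS)")
print(f"After 40 dB isolation:  {noise_dbfs - isolation_db:.0f} dBFS")
print(f"After 20 dB IMD loss:   {noise_dbfs - isolation_db - imd_loss_db:.0f} dBFS")
```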

IMHO, we need to postulate another mechanism to explain what some people experience.
 
An "improvement" (if any) usually wears off after a while and all becomes "normal" again.
Some people are happy with that while others start to chase a new fix, a new experience of improvement (if any) and on it goes from one cable to the other, from one box to the other and a steady stream of gadgets.
I have read posts describing what must be a massive change in the frequency response and yet it is just another cable.
I can experience an improvement myself, but often it is just my imagination or very marginal, which an A-B test easily reveals. I still like to be in placebo heaven for a couple of hours though, and laugh at myself.
Today I adjusted the frequency response according to a measurement I made with a calibrated mic. Now, that is what I call an improvement, but I know it will wear off in a couple of days.
 
An "improvement" (if any) usually wears off after a while and all becomes "normal" again.
Some people are happy with that while others start to chase a new fix, a new experience of improvement (if any) and on it goes from one cable to the other, from one box to the other and a steady stream of gadgets.
I have read posts describing what must be a massive change in the frequency response and yet it is just another cable.
I can experience an improvement myself, but often it is just my imagination or very marginal, which an A-B test easily reveals. I still like to be in placebo heaven for a couple of hours though, and laugh at myself.
Today I adjusted the frequency response according to a measurement I made with a calibrated mic. Now, that is what I call an improvement, but I know it will wear off in a couple of days.

Perhaps this habituation that you refer to is why I think that short-term A/B testing is insufficient to validate or invalidate audibility.
Some issues require a particular sound/programme to reveal themselves, and a change in your own system+room, your "normal" or reference, may be more obvious than in an unknown space listening to an unknown system. And the opposite can be true as well: an unfamiliar space and system may make some issues more obvious. It probably depends on the listener too.
When I tested Redbook vs. High Res I listened to three tracks in Redbook for a month, then spent a whole morning comparing those tracks to their High Res counterparts (all PlayClassics files had been produced for this specific purpose). Having read Siau, Waldrep and Weiss, my expectations had been conditioned to "no difference", and yet the differences were obvious. Not huge, but obvious. I still buy mostly Redbook, by the way. For me the mastering is the most important aspect and I find it difficult to justify the extra cost of high-res downloads.
When I took the Philips Golden Ear Challenge I struggled to pass the highest-bit-rate MP3 vs. Redbook test. I think this happened because of the unfit-for-purpose test tracks and also because I was using budget headphones and the desktop's headphone output. Since then I have been listening to Spotify and find that in some tracks the limitations of their compression algorithm are obvious vs. my own CD copies of the same master.

I agree that room correction will potentially make a larger difference than many tweaks. This is of course room/speaker dependent.
I feel no shame in saying that I think I perceive a difference, sometimes I am not certain that there's one, and I prefer not to worry about cables any more than buying the most affordable fit-for-purpose option available.

For some, deviations from flat frequency response seem to be the only issue worth addressing (the Spinorama cult is a good example). Others wish to go a bit further, and they may not actually care that much about frequency response flatness.
There are many ways to practice audiophilia.
 
I'm with you on the spinorama. It's a great thing to look at to see how easy a speaker is likely to be to integrate with a given room and listening position, but ultimately you'll still end up treating the room and adjusting speaker placement to get to what measures well and sounds best.

They are all just tools to give a picture of what might be; there's still no replacement for a final in-room validation: listening and measuring.
 
OK, I have not read this thread, but I'm sure I know exactly how it goes.

So I worked in IT for some years; I know the "bits are bits" mantra, and for data transmission I think that holds true. However, and it's a big however, I use standard Netgear switches in my audio chain and the most important thing I've found is power.

A good LPS on a switch can help enormously with streamed music. Do I understand why? NOPE, but it sounds better to me.

Would I lay out hundreds/thousands on an audiophile switch? No, I would not. I would buy a decent Netgear pro switch and power it with a good LPS.

I've measured with network analysers and oscilloscopes. Can I see the difference? Not reliably. Can I hear the difference reliably? Yep.

Anyway, bits are bits, right!!!!
 
You’ve got a good handle on the thread without reading it!
What you say about adding a power supply agrees with what Alpha Audio said in one of their group tests.
I’d be interested to know which Netgear switch you use and which power supply?
I’ve just bought a secondhand Cisco 2960-C which has a built-in power supply. Before this I was using a patch cable from the wall socket directly to my server; I’m not sure I can hear a difference now that I’ve installed the switch.
Thanks.
 
Good Q. Maybe it’s the future!

There are things which can be done to improve on a “cheap switch” but IMHO the best value for money comes from spending that first £20-30 on something like a Zyxel GS108B and a 0.5m Cat6 cable.
Another can of worms is re-opened.
 
More wood for the fire:

Why are 802.3x, 802.1p and 802.3az important to HQPlayer?

802.3x to avoid packet losses and re-sends on devices that cannot cope with constant full network speed data flows.
802.1p to deal with traffic prioritization.
802.3az to keep network electrically quiet.

Note that 802.3x also applies to switches, for example in cases where you have two switches connected to each other over a gigabit link. On one switch, two computers could be trying to send a gigabit-speed flow over to the other switch. That means 2 Gbps worth of traffic trying to go over a 1 Gbps link. 802.3x is the most efficient way to manage the situation. If you have a 48-port gigabit switch connected to another switch through one port, in the worst case you could have 47 Gbps worth of aggregate traffic trying to go over a 1 Gbps link.
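As a rough illustration of why 802.3x matters in that oversubscribed case, a small sketch; the shared packet-buffer size is an assumed round number, not a figure for any particular switch:

```python
# How quickly a switch's packet buffer fills when ingress exceeds the uplink,
# absent 802.3x pause frames. The buffer size is an illustrative assumption;
# real switches vary widely.
ingress_gbps = 2.0            # two hosts sending flat out
uplink_gbps = 1.0             # single gigabit uplink
buffer_bytes = 512 * 1024     # assumed shared packet buffer (512 KiB)

excess_bps = (ingress_gbps - uplink_gbps) * 1e9   # surplus traffic in bits/s
fill_time_ms = buffer_bytes * 8 / excess_bps * 1e3

print(f"Buffer overflows (and packets start dropping) after ~{fill_time_ms:.1f} ms "
      f"unless 802.3x pauses the senders")
```

With flow control enabled, the receiving switch sends pause frames before the buffer overflows, so packets are briefly delayed rather than dropped and re-sent.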
 

