Mike
Would like to hear your view on this quote taken from the page of Acousence about their GISO LAN isolator:
"...devices which serve as a "bridge" between the analog and digital world (AD or DA converters) have always been [using] {word added by jabbr} special components which are recommended (AES) or even obligatory (EBU) in the standardisation guidelines of the professional studio industry; so-called transformers - small hardware elements, which transfer a signal in a purely inductive manner without a physical connection to the conductor - prevent or at least decrease these disruptive influences."
Full text at: http://www.artistic-fidelity.de/index.php/en/giso-isolator
Though the Rednet isn't attached to the analogue device itself, it is the beginning of the chain that is.
Wouldn't the quote imply there are benefits to be had by adding a GISO in front of a Rednet?
Cheers
Honestly? I think this is marketing hype. Ethernet connections over UTP cable are already transformer coupled on all connected pins by design, and AES should be transformer coupled by design as well.
So in connecting the PC to the Rednet box, then out through AES, you have isolation at multiple points without adding any special additional hardware.
PC -> Transformer -> RJ45 -> Ethernet Cable -> RJ45 -> Transformer -> RedNet Interface -> Transformer -> AES -> Transformer -> DAC.
Everything is encapsulated in packets on the Ethernet network, with built-in error checking and re-transmits. You can easily monitor the network connection to determine whether there is any packet loss or there are errors in the data stream, in which case the culprit is most likely a faulty cable, NIC or other hardware issue, not interference. If noise somewhere in the connection corrupts the data in a packet, the checksum will fail and the packet will be re-transmitted. With the relatively small amounts of data we're talking about, you don't come remotely close to hitting the performance limits of the network - especially if you are connecting directly from the PC to the Rednet box.
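To make the checksum point concrete, here's a minimal sketch (using Python's CRC-32, standing in for the 32-bit frame check sequence an Ethernet frame actually carries): flip even a single bit in a payload and the check fails, so the corrupted frame is discarded rather than delivered.

```python
import zlib

# Stand-in for the payload of one packet of audio data
payload = bytes(range(256))
fcs = zlib.crc32(payload)  # checksum computed by the sender

# Simulate electrical noise flipping a single bit in transit
corrupted = bytearray(payload)
corrupted[100] ^= 0x01

# A clean frame passes the receiver's check; a corrupted one is rejected,
# and (with TCP on top) the data gets re-sent
assert zlib.crc32(payload) == fcs
assert zlib.crc32(bytes(corrupted)) != fcs
```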
I know I'll probably hear something along the lines of "It's audio, it's different" for saying what I'm about to say, but the reality is, the underlying network infrastructure and protocols do not care what the data is. They are designed to ensure it gets from point A to point B without errors.
I work in IT, specializing in enterprise storage. We have storage racks in extremely busy data centers running 40 gigabit Ethernet connections across 4x 10GbE using Cat 6a cables under the floor, in data centers with potential levels of electrical interference you would never even begin to see in a home environment. Even within the rack, you're talking about a rack switch for the management interfaces, two or more storage controllers (high-end servers with 1.5TB of RAM and 32-64 CPU cores), then 8 disk shelves with 24x drives in each... over 1.5PB of raw storage in a rack. Multiple power supplies in each box, lots of cabling. Not an audiophile power conditioner, cable or other gadget in sight. Noisy, high speed/high volume fans all over the place. Yet these things can run for months at a time between maintenance windows or reboots, flat-out, pushing even the 40 gig network connection to its limit, serving thousands of client machines... with zero packet loss or network errors.
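If you want to check this on your own machine, the error and drop counters are exposed by the OS. Here's a minimal sketch, assuming Linux and its `/proc/net/dev` interface (the path and parsing are Linux-specific; other systems expose the same counters through different tools):

```python
# Read per-interface error/drop counters from /proc/net/dev (Linux).
# Non-zero error counters point at a cable, NIC or other hardware fault.
def read_net_counters(path="/proc/net/dev"):
    counters = {}
    with open(path) as f:
        for line in f.readlines()[2:]:  # skip the two header lines
            name, data = line.split(":", 1)
            fields = data.split()
            counters[name.strip()] = {
                "rx_errs": int(fields[2]),   # receive errors
                "rx_drop": int(fields[3]),   # receive drops
                "tx_errs": int(fields[10]),  # transmit errors
                "tx_drop": int(fields[11]),  # transmit drops
            }
    return counters

# Example usage:
#   for iface, c in read_net_counters().items():
#       print(iface, c)
```

On a healthy link feeding a streamer or Dante box, every one of those counters should sit at zero.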
I'm not saying there aren't potential ways to improve what gets from the PC to the DAC; I'm not getting into clocking or anything like that here. But from a pure data integrity standpoint, talking about the data the PC sends across the network: if all of your hardware and connections are good, and you're using a well-built cable that meets or exceeds spec (like the BJC cables), the data you feed into the network is going to be *exactly* the data that arrives at the Rednet box, and no isolation device is going to change that.