I haven't read through the whole thing, but wouldn't it make more sense to simply compare the signal going into the cable with the signal coming out? Basically, loop a cable from an output to an input on a computer (obviously, don't have it pass the input back through to the output, or you'll just get an infinite feedback loop). Have a program monitor the input feed (by monitor, I mean make graphs, etc., not actually play it), then play something known through the output.

In theory, since it's digital and optical, the input and output should be identical, so a diff of the captured signal against the original should come out all zeros. Do this for each cable; unless the diff ends up non-zero on one or more of them, you know the cables are all equal. Only if the results differ between cables do you need to worry about figuring out how to rank them.

Sorry if this doesn't make sense or has already been addressed; I didn't read all the pages.

EDIT: Also, due to the nature of optics, I'd imagine the quality of the actual jacks (i.e. the transmitter and receiver) would have more effect than the cables themselves. Maybe a test could be done on this as well.
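Just to make the diff-test idea concrete, here's a rough Python sketch using only the standard library's `wave` module. It assumes you've already recorded the loopback capture to a WAV file and have the original WAV on hand (the file names and the little synthetic demo at the bottom are made up for illustration; a real test would record the sound card's digital input):

```python
import os
import tempfile
import wave

def count_differing_frames(path_a, path_b):
    """Compare two WAV files frame by frame; return how many frames differ.

    If the cable passes bits through unchanged, this should return 0.
    """
    with wave.open(path_a, "rb") as a, wave.open(path_b, "rb") as b:
        # channels / sample width / sample rate must match for a fair diff
        if a.getparams()[:3] != b.getparams()[:3]:
            raise ValueError("format mismatch (channels/sampwidth/framerate)")
        n = min(a.getnframes(), b.getnframes())
        fa = a.readframes(n)
        fb = b.readframes(n)
        width = a.getsampwidth() * a.getnchannels()  # bytes per frame
        return sum(1 for i in range(0, len(fa), width)
                   if fa[i:i + width] != fb[i:i + width])

def write_wav(path, data, framerate=48000):
    """Helper: write raw 16-bit stereo frames to a WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(2)
        w.setsampwidth(2)
        w.setframerate(framerate)
        w.writeframes(data)

# --- synthetic demo standing in for a real play/capture run ---
sent = bytes(range(256)) * 4          # 1024 bytes = 256 stereo 16-bit frames
tmp = tempfile.mkdtemp()
sent_path = os.path.join(tmp, "sent.wav")
capture_path = os.path.join(tmp, "captured.wav")

write_wav(sent_path, sent)
write_wav(capture_path, sent)          # perfect capture: identical bits
clean_diff = count_differing_frames(sent_path, capture_path)   # expect 0

corrupted = bytearray(sent)
corrupted[10] ^= 0x01                  # flip one bit to fake a cable error
write_wav(capture_path, bytes(corrupted))
dirty_diff = count_differing_frames(sent_path, capture_path)   # expect 1
```

The point is just that the pass/fail criterion is trivial: any non-zero count means the cable (or the transmitter/receiver) corrupted bits, and a zero count for every cable means they're all delivering identical data.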