Schiit Happened: The Story of the World's Most Improbable Start-Up
Oct 19, 2024 at 3:37 PM Post #168,601 of 179,578
Hmm, thinking about the 2ch/speakers part of the upcoming Schiitr.

Salk is no more (he retired and didn't find anyone he was willing to sell to), so those are out.
Tyler Acoustics is comparable but has a long wait.
I don't like the British sound (B&W, Harbeth, LS3/5A, etc.).
Maggies are fun and "cheap" but need a lot of space.
I do like my Philharmonics and they'd certainly do Schiit bass/Moffat Bass some true justice.

Are there any TX speaker makers? (While looking, I came across some interestingly named companies: Playmate Enterprises and Skynet Security.)

I'm British. And I'll try not to take offence...
 
Oct 19, 2024 at 3:39 PM Post #168,602 of 179,578
.... “double blind” testing is that neither the recipient nor provider know what was being tested till the results are in. ....
How about "double blind purchasing": You get a number at the door and when you leave you have to buy whatever the number matches, neither Schiit nor the purchaser knowing ahead of time what it is. :kimono:
 
Oct 19, 2024 at 3:41 PM Post #168,603 of 179,578
I'm British. And I'll try not to take offence...
Hey, it's just a personal preference. I owned B&W years ago so I acknowledge they have their charms.

Now, if you really want to get offended, I think the French speaker manufacturers are eclipsing their Anglo rivals...
 
Oct 19, 2024 at 4:06 PM Post #168,606 of 179,578
Rythmik subwoofers might be a good pairing for the Schiitr 2-channel room. They are Texas-based.
 
Oct 19, 2024 at 4:07 PM Post #168,607 of 179,578
Happy Saturday!

[attached album image]


*Older stuff is better, but a fun listen.

And for the heck of it...

[attached album image]


And some Jazz. That's it for today. 😏 👍

[attached album image]
 
Oct 19, 2024 at 4:16 PM Post #168,608 of 179,578
You've got the juices flowing and the wallet quaking ...
I'd probably use my boxed-up Vidar where I have my Gjallarhorn and use the GJ with the above instead of the Rekkr.
Thinking a bit more about this....

Asgard 3 and Gjallarhorn same form factor? Good synergy? That's one less box (a DAC) so I could also step up to a slightly larger streamer because the SR11 seems unavailable?
 
Oct 19, 2024 at 4:22 PM Post #168,609 of 179,578
Hey, it's just a personal preference. I owned B&W years ago so I acknowledge they have their charms.

Now, if you really want to get offended, I think the French speaker manufacturers are eclipsing their Anglo rivals...

Haha... I'm not, really. I'm a fan of pretty much anything that sounds good.
 
Oct 19, 2024 at 4:32 PM Post #168,610 of 179,578
Hey, it's just a personal preference. I owned B&W years ago so I acknowledge they have their charms.

Now, if you really want to get offended, I think the French speaker manufacturers are eclipsing their Anglo rivals...
Now them's fightin' words! :laughing:

Your throw down brought this to mind:
 
Oct 19, 2024 at 5:09 PM Post #168,611 of 179,578
Warning: geeky. Double-blind testing ("DBT"). Going through the weeds in search of rabbit holes to dive into.

--------------------
I'd argue that having a non-human "test administrator" doing the switching fully meets the INTENT of double-blind testing, if not the standard wording that tries to implement that intent, given one added set-up requirement. Therefore "the way I would do it", as described below, does qualify as double-blind testing.

The intent of double-blind testing is to eliminate opportunities for human biases to confound the results.


IMO the process I'm describing would work for substitutions of most of the gear in the signal path, but NOT:
* headphones and/or the cables connecting them
* the music source, unless the sources can be time-synched
* speakers. (May be possible with a lot of work. Not going there today.)

Disclaimer: As a person who has worked as a data analyst for almost 40 years, I am actually very comfortable with decision-making based on (partially) subjective factors. That said, there's a lot of power/confidence to be gained by more rigorous/objective/repeatable evaluations. With the exception of Paladin79's group, audio consumers don't do DBTs very often because it's so difficult. But IMO the Schiitr could have some gear set up to make DBT of selected components pretty darn easy, and my blue-sky dream is that the Schiit community could and should take advantage of that opportunity. Jason did ask for blue sky dreams.

The "one added set-up requirement" is that the A/B switch is the LAST piece in the signal path before the transducers of choice, and that the signal is always running in parallel through the entirety of both the A-path and the B-path. The listeners can see all the gear, but cannot visually detect whether what they are hearing is from Path A or from Path B, because in fact both paths are always active, at some set volume. Tubes are glowing, VU meters are moving, the DAC/s are signaling 24/96 bits or whatever, etc. The switchbox itself cannot have a visible indicator of A vs B.

Here's the NIH's National Cancer Institute's online definition of a double-blind study: "A type of clinical trial in which neither the participants nor the researcher knows which treatment or intervention participants are receiving until the clinical trial is over. This makes results of the study less likely to be biased. This means that the results are less likely to be affected by factors that are not related to the treatment or intervention being tested."

--------------------

One of the most important of the biases is confirmation bias, where one's pre-test expectations tilt the evaluations. The expectations of the person administering the test, if s/he knows which of the alternatives being tested ... say Bifrost vs Yggy ... is playing, can be manifested in subtle facial expressions or body language. And humans are exquisitely evolved to detect body language in others. So the test admin's feeling of "I really like this Yggy that's hiding behind the sheets" can be inadvertently communicated to the listener. Maybe not to every listener. Probably many listeners would not be consciously aware. But even a subtle "this other person likes this choice better" becomes a factor when the listener's brain compiles all of its multi-dimensional inputs AND PRIOR KNOWLEDGE into a reduced binary choice on that particular sample, i.e., is it A or is it B.

Let's call my automated test administrator Mr. Robo Schiitr. A dedicated public servant. Let's assume in this experiment that Yggy is "Effect A" and Bifrost is "Effect B". While both pieces of gear are in plain sight of the human participants, Mr Robo doesn't recognize a Bifrost, doesn't recognize an Yggy, has no knowledge of what they do, has no personal impression of either, and has never read a review or talked to anyone about either of them. The listener/s can't ascertain Mr Robo's preconceptions because (a) Mr Robo has none, (b) Mr Robo has no body or indicators to use for non-verbal communication, and (c) his process & timing are identical on every iteration. There is no communication path between Mr Robo's subjective mind and the listeners; the listeners are completely blinded with regard to Mr Robo's knowledge. Unlike when a human does this part.

Mr Robo's job has these phases:
1 - when the "start the listening session" button is pushed, he runs some process that equalizes the final output signal strength in electronics-path-A and electronics-path-B. We can't use this version of double-blind on headphones or speakers, so the sensitivity of whatever's being listened to is a constant, and no additional adjustments are needed here to balance perceived loudness. This deals with another well-known/well-studied contributor to listener bias.
2 - display "A", and play the music selection through path A as a reference.
3 - display "B", and play the music selection through path B as a reference.
4 - then consult a pre-computed random-numbers table, read the next entry (which will say either A or B), set the output signal path accordingly, display the sample number ("sample #1", "sample #2", ...), and after a small gap start the music selection.
5 - do phase 4 nine more times.
6 - when that's done and the listener/s are ready, they hit a "give us the sequence" button, and Mr Robo confesses that the sequence used for this specific test was AABABBBABA.

The listeners then score themselves. "I recognized A or B correctly 6 of 10 times" is relatively easy to score and interpret, using a handy-dandy poster on the wall. If you're doing more elaborate scoring on a variety of sound attributes like the Paladin79 semi-secret society, bring a pre-built spreadsheet that will do the calculations.
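For the terminally geeky, here's a minimal Python sketch of Mr Robo's brain for phases 2-6 plus the self-scoring. The play_reference() and play_sample() helpers are hypothetical stand-ins for whatever actually drives the switchbox (phase 1's level matching is assumed to have happened upstream); the rest is standard library.

```python
import random

# Hypothetical stand-ins for the switchbox driver -- not a real API.
def play_reference(path_label):
    print(f"Reference play-through on path {path_label}")

def play_sample(n, path_label):
    # The display shows only "sample #n"; the active path stays hidden.
    print(f"Sample #{n} playing (path hidden)")

def run_session(trials=10):
    play_reference("A")                      # phase 2
    play_reference("B")                      # phase 3
    # Phases 4-5: forced-balance random sequence (5 A's and 5 B's),
    # per the "why balance 5x/5x" nuance below.
    sequence = ["A"] * (trials // 2) + ["B"] * (trials - trials // 2)
    random.shuffle(sequence)
    for i, path in enumerate(sequence, start=1):
        play_sample(i, path)
    return sequence                          # phase 6: revealed on request

sequence = run_session()
guesses = input("Your 10 calls, e.g. ABBABAABBA: ").strip().upper()
correct = sum(g == s for g, s in zip(guesses, sequence))
print(f"Sequence was {''.join(sequence)}; you got {correct} of {len(sequence)}.")
```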

-----------
Nuances:

* Q: why test sequences of 10 plays?
A: That's arbitrary. You have to find a happy balance between sample size (more is always better, at least up to some point) vs how long the test/s will take.
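To put rough numbers on that trade-off: under pure guessing, each call is a coin flip, so the chance of getting at least k of n right is a binomial tail. A quick standard-library sketch (this assumes independent trials, which forced balancing technically violates, but it's fine for intuition):

```python
from math import comb

def p_at_least(k, n):
    """Chance of at least k correct out of n if the listener is purely guessing."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

print(f" 6/10: p = {p_at_least(6, 10):.3f}")   # ~0.377 -- could easily be luck
print(f" 9/10: p = {p_at_least(9, 10):.3f}")   # ~0.011 -- probably real
print(f"12/20: p = {p_at_least(12, 20):.3f}")  # ~0.252 -- same 60% hit rate, still weak
print(f"15/20: p = {p_at_least(15, 20):.3f}")  # ~0.021 -- longer test, more power
```

In other words, "6 of 10" barely beats a coin flip, while the same hit rate over a longer session starts to mean something.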

* Q: why balance 5x 'A' and 5x 'B'? It's supposed to be a random draw. Random draws of an extended binary series usually do not balance exactly, so forcing a 5-5 balance means it's not fully random.
A: True. But if you allow pure random, sooner or later you'll get an AAAAAAAAAA sequence, which isn't going to be helpful. If forced balance bothers you, you need to consult with a better statistician than me.
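A quick back-of-envelope on how often pure randomness misbehaves at n=10: the all-same draw is rare, but lopsided splits are not, which is the practical argument for shuffling a fixed 5/5 deck instead (as the sketch above does).

```python
from math import comb

n = 10
total = 2**n
all_same = 2 / total                                      # AAAAAAAAAA or BBBBBBBBBB
lopsided = 2 * sum(comb(n, k) for k in range(3)) / total  # an 8-2 split or worse

print(f"P(all ten the same)   = {all_same:.4f}")  # 0.0020 -- once in ~500 sessions
print(f"P(8-2 split or worse) = {lopsided:.3f}")  # 0.109 -- roughly 1 session in 9
```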

Q: Same piece of music played 10 times, or 5 different music selections each played twice?
A: That's tricky. With 5 musical selections, I think you'd have to play them in order. So now the experiment changes from "is this 1 of 10 soundbites Path A or Path B?" to "is this 1 of 2 soundbites Path A or Path B?", replicated 5 times. Because if Sample #3 is B, then Sample #4 must be A. To me that additional constraint means (loosely stated) there are fewer degrees of freedom, and therefore my gut is that the same piece × 10 yields more information than the 5 tests of pairs (see the sketch below). But OTOH, if it's bring-your-own music, you may need multiple soundbites to hear all the kinds of things you are listening for. Again, maybe an actual statistician should be consulted. Or mimic an existing A-B test protocol designed by a statistician.
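One way to make the "fewer degrees of freedom" gut feeling concrete is to count how many distinct answer sequences each design allows. This counts possibilities rather than statistical power, so treat it as intuition, not proof:

```python
from math import comb, log2

balanced_10 = comb(10, 5)  # one piece x 10 plays, forced 5/5 balance
paired_5x2 = 2**5          # 5 pieces x 2 plays each, one A and one B per pair

print(f"10 balanced plays: {balanced_10} possible sequences (~{log2(balanced_10):.1f} bits)")
print(f"5 A/B pairs:       {paired_5x2} possible sequences ({log2(paired_5x2):.0f} bits)")
```

252 sequences versus 32, so the single-piece design leaves the guesser more room to be wrong, which loosely matches the "more information" intuition.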

Q: How long before listener fatigue sets in, and that starts contaminating the results?
A: Personally, I have no idea. But that will absolutely be a factor at some point. I'm sure that's been studied. Google is your friend.

Q: In doing A/B on different tubes, you're assuming that say Valhalla #1 is identical to Valhalla #2, and so that any differences you think you hear are purely caused by the different tubes. But there will inevitably be some unit-to-unit variations between the Valhallas.
A: Yep. Can't control for everything. You have to assume that 2 working Valhallas introduce sound variations that are much smaller than the difference between the tubes you are testing. But consider that you are also assuming that the Sylvania tubes you happen to have in hand are good representatives of what's typical for their class, and the KenRads you happen to have are likewise typical of their class. And all the tubes are somewhat used, and marching towards the end of their life cycle. (With enough gear and time, you could measure both Valhallas to ensure they are operating close to design specs.) If those are assumptions you are not willing to make, you should state your published or internal-to-you conclusions in precise language to prevent generalizations you believe are inappropriate, as in "On 6-July-2025 at the Schiitr, I preferred these specific 4 used KenRad tubes in Valhalla3 #123456 over these other specific 4 used Sylvania tubes in Valhalla3 #123501 ... YMMV."

TL;DR? Your wisdom & discipline are impressive.
Sorry for any grammatical errors I missed.
Fascinating comments.😜 Today is my day to watch college football so I will read your post more thoroughly later.
Some things apply to my experiences, many do not. We use large groups, and the test subjects may not even know the type of product under test, just how a specific sound trait comes across in the music we use.😃

I think you know my group eliminates confirmation and anticipation bias. Those setting up gear are not involved in the testing. I rarely pay attention to anyone who knows they are listening to specific gear worth x amount of dollars.

We also train those listening; we use a reference system as well as music prepared to show off specific sonic traits. A home listener would most assuredly use similar traits, yet does he or she consistently listen for 25 of them, with the music on one CD of amazingly good recordings? That is hard to say.

Regardless, what we do is not for the masses, unless I assist in something for Schiit, or, as one friend did, for an audio show.
 
Oct 19, 2024 at 5:44 PM Post #168,612 of 179,578
You get one guess as to who this belongs to... :wink:

No clue. Who? 🤣

I use it to fix the broken Barbie dolls you send me to repair every week.
 
Oct 19, 2024 at 5:50 PM Post #168,613 of 179,578
No clue. Who? 🤣

I use it to fix the broken Barbie dolls you send me to repair every week and ultimately keep for myself.

Explains why I never get them back. 🙄
 
Oct 19, 2024 at 5:53 PM Post #168,614 of 179,578
Fascinating comments.😜 Today is my day to watch college football so I will read your post more thoroughly later.
Some things apply to my experiences, many do not. We use large groups, and the test subjects may not even know the type of product under test, just how a specific sound trait comes across in the music we use.😃

I think you know my group eliminates confirmation and anticipation bias. Those setting up gear are not involved in the testing. I rarely pay attention to anyone who knows they are listening to specific gear worth x amount of dollars.

Definitely, what your group does is grander in scope and scale, and has more moving parts, than what I am envisioning for the Schiitr.

I am assuming that the typical use-case at the Schiitr is that someone arrives wanting to know whether, everything else being equal, that person prefers Yggy MIB or LIM. Or Aegir vs Tyr. Or can differentiate Bifrost from Modi Multibit enough to justify the cost. Or wants to have fun with tubes.

Some of these comparisons surely happen often enough that Team Schiit might consider setting up a permanent MIB vs LIM station, etc. Plug in your headphones and boom, you're ready to go.

Won't be able to replicate all the good or bad synergies of the other gear in people's normal home setup, of course. Or their room, for speakers.

It just occurred to me that what I have described does NOT HAVE to be used for DBT. The parallel nature of it means it could also be used for manually controlled fast-switching, after using the automated volume equalizer. Unequal volume is the most pernicious bias in A-vs-B comparisons, per my understanding. So a good & easy way of dealing with that would be a big step forward.
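On the volume-equalizer point: if you can measure each path's output (say, RMS voltage on a test tone), the correction is just 20·log10 of the voltage ratio, and level differences well under 1 dB are commonly reported to bias A/B preferences. A toy sketch with made-up meter readings:

```python
import math

def gain_db(rms_a, rms_b):
    """dB trim to apply to path B so its level matches path A."""
    return 20 * math.log10(rms_a / rms_b)

# Hypothetical readings on a pink-noise test tone, in volts RMS:
print(f"Trim path B by {gain_db(1.00, 1.12):+.2f} dB")  # about -0.98 dB
```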

Best Saturday of the Week, so far !!!?!?!?!
 