Schiit Happened: The Story of the World's Most Improbable Start-Up

Oct 19, 2024 at 9:40 AM Post #168,586 of 182,625
@DougD those were some great ideas, and they stimulated another one (at least for me, likely not for you):

With (soon to be) 3 phono preamps in the lineup, it would be excellent if:

1. Comparison station. To do it right, it would of course require 3 separate - preferably identical - turntable/arm/cartridge setups; otherwise it's hard to do any blind/double-blind testing, due to the physical necessity of moving cabling (it’s not like you can take phono-level signals through a switcher before the preamp without destroying the sound…)

2. Bonus points: have a set of identical musical pieces (especially good would be great Mercury or RCA classical LPs, or great Jazz LPs, versus their digital variants). This would be easier to blind/double-blind and would be a MIND blower for many people; it might just introduce LP/Analog to a new generation and drive growth in that share of Schiit's business.

3. At least two separate acoustically isolated 2 channel listening rooms, but MOAR better of course. Definitely have both planars and point source, maybe also good horns…

Might be time to take a step back and reflect on what the PURPOSE of the Schiitr is, its primary mission: Brand awareness? Drive revenue/profit? Grow your TAM?
Personally I would use one turntable and switch over to each phono preamp under test. Levels would have to be adjusted, of course; then the outputs of each phono preamp would be switched before the line preamp and amp. 1-1, 2-2, 3-3, and someone else should decide which phono preamp goes where. It requires an accomplice, and that person controls the switches. All you should see or know is that you are hearing 1, 2, or 3. I have also split the signal from the same source device, but I prefer switches.🤪

In Jason’s shootout I split the source signal to four Sagas, then used a switch box so the Sagas would connect to an amp and speakers one at a time. In another case I built four identical amps. I'd best not get into scoring; your gear and your ears.😉

You do have to understand that all a switch box does is complete a circuit, and those I have built use the highest-quality switches and cabling. Eventually you are using cables to go from one device to another; that is unavoidable.

By early next year I may well be able to build a proper test setup, since I can lay my hands on multiple high-end phono preamps, including Schiit of course. Friends can run it through their lab, since I keep less test gear now. When completed, most any homeowner could try the setup.
 
Last edited:
Oct 19, 2024 at 10:53 AM Post #168,587 of 182,625
"Personally I would use one turntable and switch over to each phono preamp under test. Levels would have to be adjusted of course…" (quoting the post above in full)
I predict that the differences between Mani, Skoll, and Stjarna (and particularly Mani/Skoll and Stjarna) will not require any blind testing at all. 🤣
 
Oct 19, 2024 at 11:07 AM Post #168,588 of 182,625
I predict that the differences between Mani, Skoll, and Stjarna (and particularly Mani/Skoll and Stjarna) will not require any blind testing at all. 🤣
Those may be the results.🤪 I already have two of the three, but if I go to any trouble I may as well throw in a few more phono preamps and get, say, a half dozen people as test subjects, since it may be in my home. Drinks to be served after, never before or during.🤣 I could swap each device into my main system, have the group score each one using our system and test music, then leave the room as I set up the next gear. Maybe even blindfolds for actual blind testing.🤔
 
Oct 19, 2024 at 12:29 PM Post #168,589 of 182,625
At the IBM semiconductor plant, I used many scopes... these were the favorites:

[attached photos: two favorite scopes]

But our overall workhorse favorite was the venerable 465..

[attached photo: the 465]

Then we had a dozen of these rack-mount beauties, used to measure rise and fall times of write pulses into memory chips and modules. Nanosecond stuff. And timing relationships when the signals were strobed into the buffers and memory... millions of reads and writes, with all kinds of mathematically dictated patterns for memory tests....

[attached photo: rack-mount scope]
 
Last edited:
Oct 19, 2024 at 1:46 PM Post #168,590 of 182,625
You get one guess as to who this belongs to... ;)

[attached photo]
 
Oct 19, 2024 at 1:47 PM Post #168,591 of 182,625
Oct 19, 2024 at 2:06 PM Post #168,592 of 182,625
Oct 19, 2024 at 2:31 PM Post #168,594 of 182,625
I stumbled upon a 58 sec Youtube video about coffee enthusiasts.
Compare to audio enthusiasts

Why Are Coffee People So Pretentious?

[I don't believe they are necessarily pretentious]
 
Oct 19, 2024 at 2:41 PM Post #168,595 of 182,625
"I stumbled upon a 58 sec Youtube video about coffee enthusiasts. Compare to audio enthusiasts…" (quoting the post above)
hehe - quite simply, pretentiousness is in the eye of the beholder... whether because one shares the interest or can trace parallels to what one does, it depends on the people! :D
 
Oct 19, 2024 at 2:49 PM Post #168,596 of 182,625
I've been mostly offline for a while due to Hurricanes Helene & Milton ... but am now caught up again, and I'm kinda surprised there hasn't been a lot of discussion on this invitation from our host. Brainstorming about OUR Schiitr, V2.0.

Lots of storms in the Doug brain. Some thoughts:

(1) Never been to one, but By FAR the biggest Canjam etc complaint I've seen over the years is +/- "I thought I liked XYZ, but it was so ******g noisy I can't be sure enough to buy it."

Which is a problem when I suspect a huge motivation for people to visit the Schiitr nowadays is to audition the 2-channel amps. A portion of the product line that is presumably much more important now than it was when the Schiitr O.G. was established.

Managing the noise levels seems like a major issue.

(2) I never got to visit the S.O.G. I'm in Florida, so it would not be a day-trip. My combat radius is around 4 hours to target + 2 to 4 hours on-site. People who can do day trips to the Schiitr are very lucky. I suspect day-trippers would especially like to "buy & carry" if they're primarily auditioning the bigger speaker amps. And I'm guessing they're likely to largely arrive on weekends around 10-11 and leave 2-3, contributing to mid-day congestion. Do you have a reservation/scheduling system?

(3) If/when I make a visit from long range, it will be because I want to audition some gear that falls into at least one of these categories:
* high-cost
* heavy, so high-cost to return
* high-nuisance value if I get it wrong ... which for me and I suspect most people would be amps for a primary 2-channel system
* accessories that I'm not sure will offer benefit of significant value to me in terms of sound quality ... e.g., Loki Max vs Lokius.
* (possibly) the Syn experience. Although there the pitch is attractive enough for me personally that I probably don't need an audition to make a decision. (He says, peering across the room at his broken major-consumer-brand 7.2 AVR.)
* novel products like The Gadget and The Big Thing.

As a repeat buyer and quasi-fanboy, I'm not visiting with the intent to buy mid-range or low cost products. I have enough trust to just buy those the normal way. But you should have some feel for this from your experience at Schiitr O.G. I might listen to some while I was there, just for the experience, time permitting.

My intent may, or may not, be typical for other visitors from long range. Of which I see two main types: (A) those who come to San Antonio specifically to audition stuff in person, and (B) those who are visiting San Antonio for some other reason ... possibly a family vacation ... and can fit in a visit to the Schiitr. Type A people will be very annoyed if they arrive and the place is so full they barely get to listen to their primary targets. Do you have a reservation/scheduling system?

(4) tombarnard1 sagely commented:
* "There is a MASSIVE lack of places to test drive headphones."
* "Partnerships are interesting, spreads the risk, and keeps you[DD edit: r finances] focused on what you really want to be: a producer of things that aren't headphones."

I suspect one of the things that helped make Schiitr-CA successful is its simplicity. Per my understanding, it was normally staffed by just one person. If you need 2 people on-site, that gets a lot more expensive to operate. You may not need the Schiitr to be hugely profitable, but it's not going to be sustainable if it doesn't at least break-even.

Partnering in some fashion with selected companies that primarily make and sell headphones could be very synergistic, likely would increase visitor counts, and unlike speaker auditioning rooms, h/p listening stations don't take a lot of floor space. But ... you gotta have a way of having the use of your space & your gear pay its own way. Maybe visitors who are auditioning headphones supplied by other companies pay $10-$25 per hour to the house for auditioning time. (Which also disincentivizes them from dawdling all day.) While having sellable inventory of headphones on-hand for buy&carry would be ideal for a fun experience, it may not be absolutely necessary. OTOH, some international travelers returning home may have some "no import duties up to $xxx" regulations that save them taxes for stuff they're carrying in their luggage.

Speakers ... seem a lot more difficult unless you stick to just a few models. Not my area of expertise so that's all I will say.

But IMO co-sharing space with a local coffee/food/liquor/etc company adds complexity & risk with little additive synergy. If you own and they rent, you're in the landlord business. If they own and you rent, all incompatible uses will be resolved in the landlord's favor. I've been to a few places that serve alcohol ... yes, it's true, although I have not been banned anywhere yet ... and three things I have noticed are that the patrons tend to be very loud, to be very annoying to people wanting quiet, and to stay a long time. Is that crowd going to help the business? Plus, I don't know about Texas, but in some states serving alcohol has a LOT of regulations and bureaucratic overhead. None of which is fundamental to the Schiitr's purpose. IMO, it'd be an unnecessary dilution of focus.

(5) (automated) double-blind testing ... yes please! DACs and amps for sure. (My use of "double-blind" in this context means you'd know which 2 sets of gear are in the test, but some automated process pseudo-randomly feeds you A or B, and you don't know which one was Sample #7 until the end, when you can mark up your scorecard/notes, and discover you can, or cannot, tell the difference. Or that the results are statistically ambiguous.)

(5+) ... super-bonus funland ... same conceptual type of double-blind testing, but for this one it's bring-your-own TUBES. The A & B amps are a matched pair of Schiit Lyr 3s or Freyas or whatever. That would need to be a "by the hour" dealie. Bring 2 or 3 friends and a bunch of tubes, have a party. Could be popular.

(6) vinyl as a source: less than zero interest for me. One of my personal quirks.

(7) there are apparently some people out there who have not yet tried, or have not yet had a good experience with, streamed music. Some of them might want to try it with a "known to be good" setup.

(8) misc other operational things:

* I started my work life in retail ... groceries. People will shoplift a bottle of ketchup if they think they can get away with it. That made me paranoid for life. With multiple rooms, you'd be very vulnerable. Prominent security cameras are a good deterrent. Transparent room walls like recording studios, where feasible. I'd put in a set of complimentary lockers and not allow opaque backpacks etc into the audio rooms. If that wasn't enough, you could require photo IDs to enter. You don't want to give an impression of being unfriendly to potential customers, but you might have to, with a lot of expensive and highly portable gear on hand.

* When I take my car to my local trusty small-town repair shop, oddball parts they need can be delivered to them in an hour. Find some local delivery service that can deliver inventory from your main San Antonio facility to the Schiitr quickly with short notice, as a contingency. It would be inefficient to use Schiit staff to do that. If your reservation/scheduling system captures what people think they're most interested in, that could help make sure the Schiitr is stocked appropriately for the day/weekend.

* have a "you're visiting the Schiitr ... great, here's an FAQ" page on the Schiit website describing how people can bring/connect their own music. As a practical issue, it takes a lot longer to form a definitive opinion when one has to listen to unfamiliar music. Time is money. Or more on-task auditioning per hour of clock time. Help people know how to connect their music/sources, before they arrive.

TL;DR? I don't blame you. Probably a wise choice.
Excellent post and ideas which made me consider two things:

1. Retail as an experience - one of the reasons Apple 🍎 stores are amongst the most profitable retailers per square foot is they provide excellent customer experiences from start to finish (I do not like Apple, but one has to respect good practices).

2. Side note, my headphone set-up seems to be more 'present' when I turn on my Loki Mini, even with tone controls defeated. Perhaps it is providing extra voltage over the switcher I'm using before it.
 
Last edited:
Oct 19, 2024 at 2:55 PM Post #168,597 of 182,625
“(5) (automated) double-blind testing ... yes please! DACs and amps for sure. (My use of "double-blind" in this context means you'd know which 2 sets of gear are in the test, but some automated process pseudo-randomly feeds you A or B, and you don't know which one was Sample #7 until the end, when you can mark up your scorecard/notes, and discover you can, or cannot, tell the difference. Or that the results are statistically ambiguous.)”

This is an interesting concept, yet the definition of “double blind” testing is that neither the recipient nor the provider knows what is being tested till the results are in. I set up such a test for Schiit. Tubes were concealed and only had a random number on the outside; after the test was completed, a sealed letter was opened to tell them which tubes were which.🤪

My group does single blind testing, the test subjects know the gear after the results are in.

I do like the idea of automating but of course you have to know which gear was chosen at a specific time as it randomly switches. 🤪

Warning; geeky. Double-blind testing ("DBT"). Going through the weeds in search of rabbit holes to dive into.

[edit: I've tweaked and/or added a few words since I first posted this.]
--------------------
I'd argue that having a non-human "test administrator" doing the switching fully meets the INTENT of double-blind testing, if not the standard wording that tries to implement that intent, given one added set-up requirement. Therefore "the way I would do it", as described below, does qualify as double-blind testing.

The intent of double-blind testing is to eliminate opportunities for human biases to confound the results.


IMO this process I'm describing would work for substitutions of most of the gear in the signal path, but NOT:
* headphones and/or the cables connecting them
* the music source, unless those can be time-synched
* speakers. (May be possible with a lot of work. Not going there today.)

Disclaimer: As a person who has worked as data analyst for almost 40 years, I am counter-intuitively very comfortable with decision-making based on (partially) subjective factors. That said, there's a lot of power/confidence to be gained by more rigorous/objective/repeatable evaluations. With the exception of Paladin79's group, audio consumers don't do DBTs very often because it's so difficult. But IMO the Schiitr could have some gear set-up to make DBT of selected components pretty darn easy, and my blue sky dream is that the Schiit community could and should take advantage of that opportunity. Jason did ask for blue sky dreams.

The "one added set-up requirement" is that the A/B switch is the LAST piece in the signal path before the transducers of choice, and that the signal is always running in parallel through the entirety of both the A-path and the B-path. The listeners can see all the gear, but cannot visually detect whether what they are hearing is from Path A or from Path B, because in fact both paths are always active, at some set volume. Tubes are glowing, VU meters are moving, the DAC/s are signaling 24/96 bits or whatever, etc. The switchbox itself cannot have a visible indicator of A vs B.

Here's the NIH National Cancer Institute's online definition of a double-blind study: "A type of clinical trial in which neither the participants nor the researcher knows which treatment or intervention participants are receiving until the clinical trial is over. This makes results of the study less likely to be biased. This means that the results are less likely to be affected by factors that are not related to the treatment or intervention being tested."

--------------------

One of the most important of the biases is confirmation bias, where one's pre-test expectations tilt the evaluations. For example, More Expensive is More Better. The expectations of the person administering the test, if s/he knows which of the alternatives is being tested ... say Bifrost vs Yggy ... can be manifested in subtle facial expressions or body language. And humans are exquisitely evolved to detect body language in others. So the test admin's feeling of "I really like this Yggy that's hiding behind the sheets" can be inadvertently communicated ... aka leaked ... to the listener. Maybe not to every listener. Probably many listeners would not be consciously aware. But even a subtle "this other person likes this choice better" becomes a factor when the listener's brain compiles all of its multi-dimensional inputs AND PRIOR KNOWLEDGE into a reduced binary choice on that particular sample, i.e., is it A or is it B.

Let's call my automated test administrator Mr. Robo Schiitr. A dedicated public servant. Let's assume in this experiment that Yggy is "Effect A" and BiFrost is "Effect B". While both pieces of gear are in plain sight of the human participants, Mr Robo doesn't recognize a Bifrost, doesn't recognize an Yggy, has no knowledge of what they do, has no personal impression of either, has never read a review or talked to anyone about either of them. The listener/s can't ascertain Mr Robo's preconceptions because (a) Mr Robo has none (b) Mr Robo has no body or indicators to use for non-verbal communications and (c) Mr Robo's process & timing is identical on every iteration. There is no communications path between Mr Robo's subjective mind and the listeners; the listeners are completely blinded in regards to Mr Robo's knowledge. Unlike when a human does this part.

Mr Robo's job has these phases:
1 - when the "start the listening session" button is pushed, he runs some process that equalizes the final output signal strength in electronics-path-A and electronics-path-B. We can't use this version of double-blind on headphones or speakers, so the sensitivity of whatever's being listened to is a constant, and no additional adjustments are needed here to balance perceived loudness. This deals with another well-known/well-studied contributor to listener bias.
2 - display "A", and play the music selection through path A as a reference.
3 - display "B", and play the music selection through path B as a reference.
4 - then consult a pre-computed random numbers table, read the next entry which will say either A or B, set the output signal path accordingly, display "sample #1", and after a small gap start the music selection
5 - do phase 4 nine more times.
6 - When that's done, when they are ready, the listener/s hit a "give us the sequence" button, and Mr Robo confesses that the sequence used for this specific test was AABABBBABA.
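The six phases above can be sketched in a few lines of Python. This is purely illustrative: the function names (`robo_schiitr_session`, `run_session`) and the `play` callback are my own inventions, not anything that exists at Schiit; `play` stands in for whatever actually flips the relay and starts the music.

```python
import random

def robo_schiitr_session(n_trials: int = 10, balanced: bool = True) -> list[str]:
    """Generate Mr Robo's hidden A/B play sequence for one session.

    With balanced=True, exactly half the trials use each path,
    mirroring the forced 5x'A'/5x'B' scheme discussed below.
    """
    if balanced:
        sequence = ["A"] * (n_trials // 2) + ["B"] * (n_trials - n_trials // 2)
        random.shuffle(sequence)  # the pre-computed "random numbers table"
    else:
        sequence = [random.choice("AB") for _ in range(n_trials)]
    return sequence

def run_session(play, n_trials: int = 10) -> list[str]:
    """Phases 2-6: play both references, then the hidden sequence, then confess."""
    play("A", label="reference A")                # phase 2
    play("B", label="reference B")                # phase 3
    sequence = robo_schiitr_session(n_trials)
    for i, path in enumerate(sequence, start=1):  # phases 4-5
        play(path, label=f"sample #{i}")          # listener sees only the sample number
    return sequence                               # phase 6: reveal, e.g. AABABBBABA
```

Phase 1 (output-level equalization) is deliberately left out; that step belongs in hardware, not here.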

The listeners then score themselves. "I recognized A or B correctly 6 of 10 times" is relatively easy to score and interpret, using a handy-dandy poster on the wall. If you're doing more elaborate scoring on a variety of sound attributes like the Paladin79 semi-secret society, bring a pre-built spreadsheet that will do the calculations.
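For the handy-dandy poster on the wall, the interpretation of a score like "6 of 10" is a plain binomial calculation: how often would a pure guesser do at least that well? A stdlib-only sketch (the function name is mine):

```python
from math import comb

def p_value_at_least(correct: int, trials: int) -> float:
    """One-sided probability of scoring `correct` or more out of `trials`
    by pure coin-flip guessing (binomial with p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(round(p_value_at_least(6, 10), 3))   # 0.377 -- 6/10 is consistent with guessing
print(round(p_value_at_least(9, 10), 3))   # 0.011 -- 9/10 would be fairly convincing
```

Which is exactly why "statistically ambiguous" is a real possible outcome: 6 of 10 happens by chance more than a third of the time.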

-----------
Nuances:

* Q: why test sequences of 10 plays?
A: That's arbitrary. You have to find a happy balance between sample size (more is always better, at least up to some point) vs how long the test/s will take.

* Q: why balance 5x 'A' and 5x 'B'? It's supposed to be a random draw. Random draws of an extended binary series usually do not balance exactly, so forcing a 5-5 balance means it's not fully random.
A: True. But if you allow pure random, sooner or later you'll get an AAAAAAAAAA sequence, which isn't going to be helpful. If forced balance bothers you, you need to consult with a better statistician than me.

Q: Same piece of music played 10 times, or 5 different music selections each played twice?
A: That's tricky. With 5 musical selections, I think you'd have to play them in order. So now the experiment changes from "is this 1 of 10 soundbites Path A or Path B", to "is this 1 of 2 soundbites Path A or Path B", replicated 5 times. Because if Sample #3 is B, then Sample #4 must be A. To me that additional constraint means (loosely stated) there are fewer degrees of freedom, and therefore my gut is that same piece * 10 yields more information than the 5 tests of pairs. But OTOH, if it's bring-your-own music, you may need multiple soundbites to hear all the kinds of things you are listening for. Again, maybe an actual statistician should be consulted. Or mimic an existing A-B test protocol designed by a statistician.

Q: How long before listener fatigue sets in, and that starts contaminating the results?
A: Personally, I have no idea. But that will absolutely be a factor at some point. I'm sure that's been studied. Google is your friend.

Q: In doing A/B on different tubes, you're assuming that say Valhalla #1 is identical to Valhalla #2, and so that any differences you think you hear are purely caused by the different tubes. But there will inevitably be some unit-to-unit variations between the Valhallas.
A: Yep. Can't control for everything. You have to assume that 2 working Valhallas introduce sound variations that are much smaller than the difference between the tubes you are testing. But consider that you are also assuming that the Sylvania tubes you happen to have in hand are good representatives of what's typical for their class, and the KenRads you happen to have are likewise typical of their class. Note also that all the tubes are somewhat used, and all are marching towards the end of their life cycles. (With enough gear and time, you could measure both Valhallas to ensure they are operating close to design specs.) If those are assumptions you are not willing to make, you should state your published or internal-to-you conclusions in precise language to prevent generalizations you believe are inappropriate, as in "On 6-July-2025 at the Schiitr, I preferred these specific 4 used KenRad tubes in Valhalla3 #123456 over these other specific 4 used Sylvania tubes in Valhalla3 #123501 ... YMMV."

TL;DR? Your wisdom & discipline is impressive.
Sorry for any grammatical errors I missed.
 
Last edited:
Oct 19, 2024 at 3:13 PM Post #168,598 of 182,625
...
TGIF!!! ... one of the most renowned hops producing regions in the world. Thus, our fresh hop beers are among the most direct-to-kettle available anywhere.
....
We live near Hopkins Farm, which is a farm+brewery with the best name ever.
 
Oct 19, 2024 at 3:32 PM Post #168,599 of 182,625
I find your lack of faith disturbing.

"Warning; geeky. Double-blind testing ("DBT"). Going through the weeds in search of rabbit holes to dive into…" (quoting post #168,597 above in full)
 
Oct 19, 2024 at 3:35 PM Post #168,600 of 182,625
2024 Chapter 10
Schiitr 2: The Sequel


...
What’s Your Ideal Schiitr?
...
Hmm, thinking about the 2ch/speakers part of the upcoming Schiitr.

Salk is no more (he retired, didn't find anyone he was willing to sell it to) so those are out.
Tyler Acoustics is comparable but has a long wait.
I don't like the British sound (B&W, Harbeth, LS3/5A, etc.).
Maggies are fun and "cheap" but need a lot of space.
I do like my Philharmonics and they'd certainly do Schiit bass/Moffat Bass some true justice.

Are there any TX speaker makers? (looking for such I came across the interestingly named companies:
Playmate Enterprises and Skynet Security)
 
