Testing audiophile claims and myths
Feb 10, 2024 at 6:41 PM Post #17,206 of 17,336
There is nothing forced about an ABX. The test subject can switch back and forth as many times as he wants before making a decision, and he can listen to as long a sample as he wants. I think the articles linked in the first post say that, don’t they?

If you want to claim you can reliably hear differences that can’t be measured, you need to prove that in a test that eliminates expectation bias and perceptual error. That is a double blind test. If you don’t do that, we can just say that consciously or unconsciously, you’re peeking and the difference might be in you, not the equipment.

There is a process to follow for the scientific method. You don’t start guessing why something exists before you can prove it does exist. Audiophoolery is full of solutions to problems that don’t really exist. Manufacturers employ whole advertising departments to do just that. There are enough audible ways for sound fidelity to be messed up to not waste time worrying about inaudible ones.

If you aren’t going to accept controls on tests, you can pretty much “prove” any mumbo jumbo that you want. The mind and subjective preferences are fully capable of distorting reality like a funhouse mirror. What you think you hear may not be what you are actually hearing. Blind listening tests with level matching and direct A/B switching between samples are the best tool we have for finding out what you actually hear.
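Level matching, since it comes up repeatedly in this thread, is mechanically simple. As an illustrative sketch (the function names are mine, not from any post or article), matching two clips by RMS before comparison looks like this:

```python
import math

def rms(samples):
    """Root-mean-square level of a list of sample values."""
    return math.sqrt(sum(v * v for v in samples) / len(samples))

def level_match(ref, other):
    """Scale `other` so its RMS level equals that of `ref`.
    Returns the scaled copy and the applied gain in dB."""
    gain = rms(ref) / rms(other)
    return [v * gain for v in other], 20 * math.log10(gain)

# Two copies of a 1 kHz tone, the second about 0.5 dB louder
tone = [math.sin(2 * math.pi * 1000 * n / 48000) for n in range(4800)]
louder = [v * 10 ** (0.5 / 20) for v in tone]

matched, gain_db = level_match(tone, louder)
# gain_db comes out at -0.5 dB, and the two RMS levels now agree
```

Level differences well under 1 dB are known to bias listeners toward the louder sample, which is why matching is treated as a prerequisite for any comparison.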
 
Feb 10, 2024 at 7:54 PM Post #17,207 of 17,336
@bigshot
You don’t start guessing why something exists before you can prove it does exist.
There is a little bit of that in hypothesis formation, you can't design a proper test and control for confounding variables if you aren't clear on what the objective of the study is and thus what the operational parameters are. The trick is to control for operator bias, hence the use of DBT.
 
Feb 10, 2024 at 8:35 PM Post #17,208 of 17,336
There is nothing forced about an ABX. The test subject can switch back and forth as many times as he wants before making a decision…
“Forced choice” is what the style of question is called. An ABX is an example of a forced choice test.
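For what it's worth, the statistics behind a forced-choice ABX are just the binomial distribution. A minimal sketch (the function name is mine) of the chance of a given score arising from pure guessing:

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided p-value: probability of getting `correct` or more
    answers right out of `trials` by coin-flip guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 of 16 is unlikely by chance (p ≈ 0.038, below the usual 0.05
# criterion), while 9 of 16 is entirely unremarkable (p ≈ 0.40).
print(abx_p_value(12, 16))
```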

The article makes no guesses as to why something exists. It starts with an observation of a phenomenon and asks an expert outside the field to figure out a measurement method that might indicate what particular equipment might be doing. That external expert is a sceptic, too.

To say the process of observed phenomena - questioning - experiment design - experiment - results isn’t the scientific method is plainly inaccurate.

I came here in the hope of learning something and some have been helpful, but you’re being dismissive, aggressive and dogmatic, while not being willing to open a door to discuss the article at hand. Your latest comment is one at which I’m sure many would take offence, and its failure to directly address the article makes me suspect (apologies if I’m wrong) that you’ve not taken the time to read it all and consider what it’s saying. Terms like “audiophoolery” and “mumbo jumbo” have no place in a scientific discussion. And when you make assertions about advertising departments you’re making accusations of fraud. That’s pretty serious, and also not part of a scientific discussion. This emotive style of writing doesn’t serve to convince me of anything, nor does it help my genuinely sought-after understanding.

At its core, the article details a process that measures the effect that two different power cables, an isolation platform and a power conditioner have on the signal output by a CD player, both individually and as a system. It finds repeatable results that indicate these devices have a real effect on the final analogue output. Those results were repeatable in different environments and at different locations, and seem to be well above the threshold that could reliably be attributed to faults or self-noise of the measuring equipment itself.

Yes, the question of whether or not these changes are audible is not addressed, which indicates future experimentation is needed in order to understand what is happening and whether it can validate the original hypothesis.

I didn’t mention anything at all about claiming anyone could hear something that “can’t” be measured. That implies an impossibility that a scientist would never lock themselves into. “That claim can’t be verified by any known measuring methodology” is the scientific response. The inquisitive mind wants to know for sure either way and so delves deeper in pursuit of an answer.

Part of science is being able to say, “I don’t know”. It’s essential.
 
Feb 10, 2024 at 10:56 PM Post #17,209 of 17,336
There's no point arguing the usefulness of blind testing. It's an accepted part of science and it's used in things that are much more important than stereo equipment, like medicine. This forum was established to allow a place in Head-Fi where controlled scientific tests could be discussed. If you don't "believe" in them, that is fine, but that gives me the right to dismiss you from the conversation with me. You're in the wrong forum to be barking up that tree.

The first step in the scientific method is verified, repeatable observation of a particular phenomenon. We don't skip proving ghosts exist and start out discussing what they have for lunch. It also helps to limit your variables, so we can't just say that what is being measured is caused by the test procedure (i.e. the ADC capture). And finally, when it comes to recorded sound, abstract numbers on a page mean nothing unless they are related to the thresholds of human perception.
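Relating "abstract numbers on a page" to perception can be as simple as mapping a measured residual in dBFS onto playback SPL. A toy calculation (the 105 dB SPL peak figure is my illustrative assumption, not a number from the thread or article):

```python
def artifact_spl(residual_dbfs, playback_peak_spl):
    """SPL at which a residual measured in dBFS would be reproduced,
    assuming 0 dBFS corresponds to the system's peak playback SPL."""
    return playback_peak_spl + residual_dbfs

# A -120 dBFS null residual on a system peaking at a loud 105 dB SPL
# lands at -15 dB SPL, below the nominal 0 dB SPL threshold of hearing.
print(artifact_spl(-120, 105))
```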
 
Feb 10, 2024 at 11:59 PM Post #17,210 of 17,336
There's no point arguing the usefulness of blind testing. It's an accepted part of science and it's used in things that are much more important than stereo equipment, like medicine…
Yeah, you’ve not read it, you don’t address anything I’ve actually asked about, and you’re back with the same message. I get it, you’ve closed that door and that’s fine. It’s a good thing for me that I don’t need your validation and I’m not at all moved by your dismissal. What’s that Pink Floyd line… “banging your heart against some mad bugger’s wall”.

You don’t know me, my background, my training, my research, my qualifications, my professional competencies or my publishing history, yet you attacked. I don’t know yours, either, so I tried to be polite and engage with what you said. I was hoping you might do the same, but clearly my expectations were ill-founded.

“This forum was established to allow a place in Head-Fi where controlled scientific tests could be discussed” is awfully specific. The title of the forum doesn’t say that, it merely says “sound science”. Your statement precludes questions, novel thinking and new avenues of inquiry. Your posts to me are brutally closed to anything that doesn’t fit your understanding and, worse, your pre-biased thinking. It’s not scientific at all, it’s the cultish scientism that is too often hidden behind in order to justify one’s own world view.

Thanks for making someone with a genuine enquiry feel so unwelcome. Big shot indeed.

(That’s not the first step in the scientific method - you’re conflating steps that aren’t even next to each other in the process)

(You misquoted me. I didn’t say I didn’t believe in ABX testing, merely questioned its validity when trying to ascertain very small differences. That’s an accepted problem with forced choice methods. I’m not an outlier in that.)

(Advances are made reasonably often in what humans can perceive. The idea of the five senses being the be all and end all is behind us. Certainly neuroscience and physics are quite uncertain about how the brain processes a great many things, hearing being one of them. It’s absolutely the case that we can perceive a far greater frequency range than our ears alone can detect, and much work is being done on how it is we “have a feeling” when we walk into a room and the mood is different from where we’ve just been, something with more behind it than learning to read body language. There’s so much to be uncertain about that holding fast to concepts proven long ago might not get us to deeper, fuller understandings.)

(Medicine isn’t listening to sounds on a hifi system. I’ve never seen a placebo study that has anything to do with aural acuity, but I’ll admit I haven’t looked too hard)
 
Feb 11, 2024 at 2:41 AM Post #17,211 of 17,336
I’m happy to admit that I’m not sure it’s a good thing to do forced choice testing for differences that are so small/subtle, under conditions that can produce changes to participants’ psychological and physiological states - we all know how much our musical experiences are affected by mood, sleep, alcohol, stress, time pressure, diet…

But returning to the interesting idea in the article, the measurements and discussion in the second half pertaining to the addition of the full suite of tweaks and also each tweak individually seem plausible. I can see that it doesn’t, however, give any evidence as to whether those variances would be audible.

However, the scope of the article wasn’t to determine that. It was merely asking: “I hear differences that the usual measurements I see don’t account for, so is it possible to find a different system of measurement that might start to explain what I hear?” (I’m obviously paraphrasing.)

Viewed from that perspective it seems interesting and has piqued my intellectual curiosity. It certainly seems to suggest more investigation would be worthwhile to follow the thesis through to a point where a solid conclusion could be drawn.

Thanks to those who tried to help my understanding!
The current measurement methodologies do not take physiological or cognitive psychological variances into account, so I think it stands to reason that a far more involved study would be required to test such long term changes, and utilizing an invasive technology measuring neurological impulses would be necessary to gather sufficient data.

Cognitive psychology was not my specific field of expertise, but from what I saw of this field, it is still relatively undeveloped due to the difficulties in causally linking electrical neural activity to specific cognitive processes involved with sensory perception and interpretation. The ongoing development of Neuralink indicates to me that we may be on the brink of a breakthrough in the field, a simultaneously daunting and promising prospect.
 
Feb 11, 2024 at 3:38 AM Post #17,212 of 17,336
If you won't acknowledge the necessity of applying controls to listening tests, and isolating the element being tested, there really isn't much point discussing it. These things are fundamental.

I don't have any interest in discussing feelings or psychology. I'm only interested in fidelity. You can feel free to discuss that, but it's off topic for this particular thread. This thread is about controlled tests to determine audible fidelity, not impressions that affect how you subjectively feel about sound. You'll get more support for that stuff in other forums in Head-Fi.
 
Feb 11, 2024 at 4:15 AM Post #17,213 of 17,336
Great, so the premise is workable. How about the conclusions drawn? It seems very much:

1. We hear something not explained by the usual measurements
2. Let’s ask some measurement experts in a different field to tackle the problem
3. Looks like we can correlate what we hear to what they’ve measured, and it’s all repeatable, not random

The fact that it’s power cables and equipment supports is interesting, too.
I am with bigshot on the importance of ABX in this discussion, and on how the starting premise of "I hear things, therefore I need to find a measurement that explains it" falls apart if one has not first isolated pure perception of the sound (the physical, acoustic signal that reaches your ears before your brain processes it and integrates all the other information available to it). The assumption is that all the audible differences reported came from sighted, and perhaps not volume-matched, listening, not to mention listening subject to the variable of room comb filtering (I haven't read into whether headphones have been considered "superior" for ABX of systems in certain contexts because they remove the FR variations from comb filtering as one's head moves).

If one has not first removed the possibility of perceptual bias (or other uncontrolled factors) having influenced one's perception of the given signal, then what is the point of skipping ahead to checking whether the signal itself changed? If, under controlled listening such as ABX/DBT, the differences are rendered inaudible, then any measured differences, as in the article's null tests with all the controls they applied toward making high-quality nulls, no matter how consistent, can be ruled inaudible (under the conditions of ABX listening) or below the threshold of audibility.

If we make it a requirement that the listener be able to see the gear and know what they are listening to, not to mention forgo volume matching among other controls, then we are no longer purely studying how the gear's supposed modifications of the signal affect audio perception, and gear engineering would have to extend away from electronics engineering toward psychology (I don't know whether more rigorous research into how to market a DAC or cable so that it consistently sounds a certain way to the most people under sighted listening could fall under "psychoacoustics"...).

Now, I don't know how one would test whether sighted listening could actually enhance listening acuity for measurable differences. If you take two cables that produce a virtually perfect null against one another, how would you know that differences in sighted acuity for test signals through either are due to an enhancement of the preferred choice rather than a suppression of the non-preferred one? I suppose if the thresholds are measured in terms of loudness or percentage distortion, the sighted thresholds could be compared to the ABX/DBT thresholds. But we at least have examples such as the McGurk effect for how extrasonic stimuli can influence our perception of truly identical signals.

As for null tests showing improvements inaudible in controlled listening, to me this at least shows measurements of diminishing returns that can be appreciated just as much as paying for a COSC-certified Swiss watch. It is an extrasonic pleasure and satisfaction, like the distortion performance of my EQed Meze Elite, where I can barely if at all hear the measurably much worse distortion of my subpar (overall, particularly for multitone distortion, and due to November QC issues) HE1000se unit.
 
Feb 11, 2024 at 5:04 AM Post #17,214 of 17,336
@AussieMick
Just got to reading the article. It seems to me that the main point of the article was to contend that a signal processing chain utilizing their products (power cable and vertex racks) resulted in measurable differences in a null test between the control and experimental setups, relative to the source file as reference, thus explaining the perception of improved sound in a listening test.

This brings up two points in my mind.
1: The measured differences are essentially interpolation errors or straight-up calculation errors by the DAC when processing the digital signal into an analog signal. The contention here is that the cable and racks caused a relative increase in the processing accuracy of the DAC vs the standard equipment, which is a distinction that is way out of my expertise (seems like something an electrical engineer will have to chime in on).
2: The absence of any mention of how the listening tests were performed throws the contentions into question. I read the opening blurb waiting for any mention of a controlled test sample and a proper DBT setup, but the lack of any such mention leads me to believe there were no such controls. This is a huge problem because of how susceptible people are to subliminal cues and sighted biases, so it calls for a properly conducted audibility test to see if such slight interpolation errors cause a perceptual difference.

I suppose I might be interested in why their equipment causes an improvement to the DAC's performance in terms of accuracy, but first audibility has to be proven without question before causality can be assigned to their products.
 
Feb 11, 2024 at 5:08 AM Post #17,215 of 17,336
I am with bigshot on the importance of ABX in this discussion and how the starting premise of "I hear things, therefore I need to find a measurement that explains it" falls apart if one has not first isolated pure perception of the sound…
Excellent, thank you. I’m starting to figure out what it is that doesn’t sit right with me with what’s being proposed. Bear with me…

It’s clear from the tests that the two power cables, isolation platform and power conditioner have a measurable effect on the output of the CD player. This doesn’t seem under dispute at all. Correct?

The question is whether or not those effects are audible to a listener, for which we’d need an ABX type scenario to determine. Yes?

If the ABX came out with 95% positive, how would we then know that the measured effects were the cause of the heard effects? I can’t see where that’s been addressed, but I assume that if there’s an absence of any other factors that it could be said that the two effects are correlated. Is that right?

If we’re in agreement about those three things, great. If not, I’m up for hearing why.



In relation to whether an ABX is needed or not, what percentage of people would need to hear and describe similar/same aural differences for us to agree that we could forgo the ABX?
As an extreme example, we can all look into the sky and see clouds. We might then wonder what they are, what they’re made of, how far away they are, and develop experiments to discover this information. We’d never think to formulate a test to know if we’re not being fooled by bias; we all just accept there are clouds in the sky. It’s self-evident.
So what’s the threshold for sighted listening comparisons? If we get 20 people in a room and all 20 hear broadly the same outcomes, is that enough? What about 15 of the 20? And what if the 5 who don’t hear it can be taught to hear it in ways that are, on the face of it, plausible? “From my listening position, the saxophone is an inch inside the left speaker, but when the devices are added it’s more like four inches inside the left speaker.”

It goes to the reason we have the ABX in the first place and why we’re worried about confirmation bias, etc.
 
Feb 11, 2024 at 5:10 AM Post #17,216 of 17,336
@AussieMick
Just got to reading the article. It seems to me that the main point of the article was to make the contention that a signal processing chain utilizing their product resulted in measurable differences in a null test…
Cheers. That makes perfect sense, thank you.
 
Feb 11, 2024 at 6:14 AM Post #17,217 of 17,336
Cheers. That makes perfect sense, thank you.
Subsequent thought: has anyone null tested their Quantum Qx4 power purifier unit against a range of general purpose UPSs? Seems like the more pertinent test here.
In relation to whether an ABX is needed or not, what percentage of people would need to hear and describe similar/same aural differences for us to agree that we could forgo the ABX?
This falls into argumentum ad populum. Intrinsic and extrinsic bias is a universal phenomenon, so DBT is the only acceptable standard, both scientifically and legally (in the US anyway).

One of the great sins of my field is recovered memory therapy. There is a book called Making Monsters that details what happened due to clinical psychology's failure as a profession to acknowledge the vulnerability of human memory and perception to extrinsic bias. It's a poignant reminder to take human perception with a grain of salt when trying to consider it as evidence.
 
Feb 11, 2024 at 7:21 AM Post #17,218 of 17,336
In relation to whether an ABX is needed or not, what percentage of people would need to hear and describe similar/same aural differences for us to agree that we could forgo the ABX?
No amount of people having a belief should turn that belief into an objective fact. You will find groups of people believing in just about anything, including clearly impossible stuff. We need more than a bunch of people and a common belief to establish facts.
About clouds in the sky, you're thinking about it as someone today with some knowledge of what clouds are. There was a time when it was clear to almost everybody that the sun and the moon were moving around us. It's knowledge, not self-evident observation, that later made most people believe the moon is indeed going around us while we go around the sun and rotate on our own axis at the same time (something we don't actually feel, not as a rotation at least, even though it happens).
I think it's hard to have a group of people clean of all knowledge and preconceptions about a particular subject. And harder still to know, without extra data, when the conclusions of such a group are valid or not.
There is also the issue of group dynamics, which have a significant impact on what some people will or won't say. There can be a clear difference between a group seemingly agreeing and how many people in that group actually agree in their heads rather than just going with the flow for various reasons (fear of looking foolish, desire to fit in).

And on top of it all, when it comes to listening tests, we know we can convince people of changes without any existing in the sound. Like by showing visibly different devices, telling people about the price difference, the differences in tech and design, maybe going as far as to prime them on what differences they should look for. Some people are so skilled at this, they should count as mentalists. Then many people will "hear" differences. And among those who don't think they did, you'll still get a few to say they did for reasons previously mentioned.



If you have an idea, and we can set up a test to try and disprove it, then disproving it means the idea was wrong, while failing to disprove it, if the test is solid enough, might strongly suggest that the idea was right (until more evidence comes either way).
On the other hand, if you have an idea and just pick whatever explanation you think best explains it, how do we know it's the one correct explanation? Trying to find what agrees with us is simply not productive when it comes to facts. It's trying to destroy a testable idea and seeing it stand strong against our efforts that makes the idea strong.
Science and experimentation at large work based on that very concept.


The confidence people have in their beliefs is not a measure of the quality of evidence but of the coherence of the story the mind has managed to construct.
– Daniel Kahneman
 
Feb 11, 2024 at 7:23 AM Post #17,219 of 17,336
Just wondering what people think? Please don’t reply until you’ve read all of it. Some of it makes sense, some of it is a tad sensationalist in the writing. But the premise seems sound.
https://www.nordost.com/downloads/NewApproachesToAudioMeasurement.pdf
What I think is that it’s just another typical example of a certain type of audiophile marketing. The type that goes beyond simple assertions, impressions and cherry-picked testimonials and presents the marketing as scientifically valid. We see this in audiophile manufacturers’ “white papers” and even very occasionally as actual “scientific papers”, although in most cases just presented as an “article”, as in this case.

Much of the problem with the article can be summed up in this extract: “… as big and as obvious and as musically important as the differences we’d just been demonstrating were, no one had yet managed to measure them successfully – a stunning indictment of the current state of audio measurement, as well as its focus.” There are only two options: either those big, obvious and important differences were just imaginary and didn’t actually exist, in which case they obviously can’t be measured, or they were actually real and could be measured.

The article describes a “null test” (as others have mentioned), which as far as I’m aware dates back to well before WWII and was widely used by the 1960s. Certainly when I got into the industry in the early 1990s it was already a standard basic test taught to all audio engineering apprentices/students, and had been for at least two decades or more. What truly is “a stunning indictment”, in fact a truly shocking indictment, is that a company which specialises in products to transfer audio signals has never heard of an ancient, standard basic test for audio signals that even an apprentice engineer should know, and thinks it’s some sort of novel “New Approach”!! Can this really be true or is it just marketing BS? If it really is true, would you buy high priced products from a company that knows less about audio signalling and testing than an average apprentice?

The rest of the article doesn’t detail the exact methodology or how it addresses the various potential pitfalls when null testing a DAC/ADC loopback. For example, just a single sample offset can result in a relatively huge difference file, and there are many processes occurring in such a loopback chain: anti-image filter, reconstruction filter, anti-alias filter, decimation filter, noise-shaped dither, TPDF dither, gain adjustment, upsampling, downsampling, bit reduction, etc. Even if everything is accounted for/eliminated, there certainly are conditions under which adding a power conditioner to the chain could cause audible differences, for example a poorly implemented PSU in the DAC or a particularly poor quality mains power supply (outside the range a decent PSU would be expected to deal with). However, as mentioned, none of the graphs had readable scales and no reliable evidence was provided about any of the claimed differences being audible, which leads on to this:
I’m happy to admit that I’m not sure it’s a good thing to do forced choice testing for differences that are so small/subtle, under conditions that can produce changes to participants’ psychological and physiological states …
Compared to what? Sighted testing also “produces changes to participants psychological and physiological states” because the subjects are still testing/comparing rather than just casually listening, and are focusing their listening. So in addition to all the same potential problems with ABX, we've got a whole bunch of additional (cognitive/bias) problems introduced by sighted testing/comparison! And, while the potential problems with ABX can’t be completely fixed, they can be very significantly mitigated, even to the point of insignificance (with sample size).
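The "mitigated with sample size" point can be made concrete with a quick power calculation. A sketch (the function names are mine), assuming for illustration a marginal listener whose true hit rate is 60%:

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

def abx_power(trials, true_hit_rate, alpha=0.05):
    """Probability that a listener with the given true hit rate reaches
    the smallest pass mark whose pure-guessing probability is <= alpha."""
    pass_mark = next(k for k in range(trials + 1)
                     if binom_tail(trials, k, 0.5) <= alpha)
    return binom_tail(trials, pass_mark, true_hit_rate)

# A marginal 60% listener passes a 16-trial ABX under 20% of the time,
# but passes a 100-trial ABX well over half the time.
print(abx_power(16, 0.6), abx_power(100, 0.6))
```

This is why a small trial count failing to reach significance says little about a subtle difference, while a large trial count can detect it reliably.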

G
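On the sample-offset pitfall mentioned above: a one-sample timing error between two otherwise identical captures produces a large difference file, which alignment before subtraction eliminates. A toy sketch (the helper names are mine, and real captures would also need fractional-sample alignment):

```python
import math

def rms_db(samples):
    """RMS level in dB relative to full scale (1.0), floored at -240 dB."""
    rms = math.sqrt(sum(v * v for v in samples) / len(samples))
    return 20 * math.log10(max(rms, 1e-12))

def best_lag(a, b, max_lag=4):
    """Integer lag of `b` relative to `a` that maximises their correlation."""
    def corr(lag):
        return sum(a[i] * b[i + lag] for i in range(len(a) - max_lag))
    return max(range(max_lag + 1), key=corr)

# A 1 kHz tone at 48 kHz, and a "capture" of it delayed by one sample
n = 4800
sig = [math.sin(2 * math.pi * 1000 * i / 48000) for i in range(n)]
cap = [0.0] + sig[:-1]

naive_null = [s - c for s, c in zip(sig, cap)]           # no alignment
lag = best_lag(sig, cap)                                 # finds lag = 1
aligned_null = [sig[i] - cap[i + lag] for i in range(n - lag)]

# The unaligned residual sits around -21 dBFS; aligned, it nulls completely.
```

So a deep null is only meaningful if the methodology states how the two captures were time-aligned (and gain-matched) before subtraction.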
 
Feb 11, 2024 at 9:17 AM Post #17,220 of 17,336
@AussieMick
There is also the issue of group, which has some significant impact on what some people will say or not say. There can be a clear difference between a group seemingly agreeing and how many people in that group actually agree in their head or just go with the flow for various reasons(fear of looking foolish, desire to fit in).
This is the main reason why argumentum ad populum is a fallacy. People who score high in agreeableness are averse to conflict, and thus tend not to speak up and disagree with a preponderant opinion even when the opinion is obviously incorrect (see Asch's conformity experiments; Milgram's experiment is the classic example of conformity to authority).

@gregorio
So these power conditioner things... are these just fancy UPSs? Has anyone done comparative tests on these?
 
