pinnahertz
Headphoneus Supremus
- Joined
- Mar 11, 2016
- Posts
- 2,072
- Likes
- 739
If you've been reading the thread.... In interpretive qualitative research, the concept of bias is not used. Bias belongs to positivist research and is not part of the interpretive tradition. In fact, objecting to how bias is traditionally used in the natural sciences, rejecting it completely, is so common that a large number of papers deal with it. For really good reasons. Well-founded, well-argued reasoning.
You might have discerned that the kind of testing we're talking about takes two basic forms: the statistical analysis of subjective opinion with regard to the basic question, "Is there an audible difference between A and B?", and, if that analysis returns a statistically significant trend greater than random guessing, a second phase: the statistical analysis of subjective opinion with regard to the question, "Which is better, A or B?" The data structure in each case is a simple binary response; it does not include the broader spectrum of data that might be generated by interpretive qualitative analysis, where the data consists of description and opinion and, of course, is not directly or easily analyzed by simple statistics. My error may have been using the term "qualitative testing", which you then interpreted as "interpretive qualitative research", which I was not referring to.
In this particular case, the first question, "Is there an audible difference?", can only be answered by statistical analysis of binary data. Without statistical analysis of a significant quantity of data, if you were testing "Can a person predict the future?" and your test was a subject's ability to "call" a coin flip in advance, you could take a single correct guess as evidence that the subject can predict the future.

The notion that everything needs to be statistically significant is also a highly contested argument. Also for very good reasons. There are plenty of papers, by highly regarded scientists, that argue against the need for statistical significance.
In the coin-flip example, you have a single subject and a single coin-flip test with a single binary answer. The resolution of that data is poor: it includes a degree of "noise" equal to the data itself, because a correct result could be a random lucky guess OR genuine prediction of the future. With a noise level equivalent to the data "signal", the result is meaningless with regard to the question "Can a person predict the future?"; from a single correct response the conclusion could only be "yes". Statistical analysis of a quantity of data from many tests, however, increases the signal-to-noise ratio of the result by averaging the responses, returning a number rather than a binary result. If we test many subjects many times, the degree to which the results differ from random noise (guessing) indicates the probability that someone can predict the future. In other words, the results of the test are a ratio of subject responses to random noise, which yields a probability figure, not a binary answer.

As for meaningless, well, what do you mean by "in fact completely meaningless"? When did meaning become a fact? How do you factually prove meaning?
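The coin-flip arithmetic above can be sketched in a few lines of Python (stdlib only). The function name and the trial counts are my own illustration, not from any actual test protocol; the point is just that a single trial cannot rise above the guessing noise floor, while many trials can:

```python
# Exact binomial probability of getting at least k correct calls out of
# n fair coin flips purely by chance (one-sided p-value).
from math import comb

def p_value_at_least(k, n, p=0.5):
    """Probability of >= k successes in n trials with success chance p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# One flip, one correct call: chance alone explains this half the time,
# so the single result is pure noise.
print(p_value_at_least(1, 1))  # 0.5

# 60 correct out of 100: chance alone manages this only about 2.8% of
# the time, so the result starts to separate from random guessing.
print(round(p_value_at_least(60, 100), 3))
```

The same calculation applies to the listening test: replace "call the flip" with "identify A vs B" and the p-value tells you how likely the observed score is under pure guessing.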
This kind of testing is valid and used in scientific research all the time, most notably in drug-efficacy testing. The first question, "Does use of the drug return a result different from the placebo?", could be followed by "Does the drug result in improvement in patient condition?" Both require statistical analysis of binary data. That might then be followed by full interpretive qualitative analysis of patient impressions and side effects.
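The drug-vs-placebo comparison is the same binary-data statistics in two-group form. A minimal sketch, assuming invented counts (the 70/100 vs 50/100 figures are purely hypothetical) and using a standard two-proportion z-test rather than any specific trial's method:

```python
# Compare binary outcomes (improved / not improved) between a drug group
# and a placebo group with a one-sided two-proportion z-test.
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return (z, p): is group A's success rate higher than group B's?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # upper-tail normal CDF
    return z, p_value

# Hypothetical trial: 70 of 100 improved on the drug, 50 of 100 on placebo.
z, p = two_proportion_z(70, 100, 50, 100)
print(round(z, 2), round(p, 4))
```

A small p-value here answers only the binary question "different from placebo?"; the qualitative follow-up on patient impressions and side effects is a separate, non-statistical exercise.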