My understanding is that beyond 1/6th, detail stops being all that perceptually relevant, but this could also depend on the individual. It may be that as the sample size increases you lose that granularity in the process, and certainly there are bound to be some narrowband peaks that are more perceptually relevant than others even when they don't look particularly egregious smoothed to 1/6th. So I typically go for 1/12th or 1/24th because it's a good balance of readability and completeness of data representation.
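For anyone who wants the mechanics, here's a rough sketch of what 1/n-octave smoothing amounts to (Python, with made-up function and variable names; real measurement tools do this more carefully, e.g. with energy averaging and different windowing choices):

```python
import numpy as np

def fractional_octave_smooth(freqs, mag_db, n=12):
    """Very rough 1/n-octave smoothing of a magnitude response in dB.

    freqs:  ascending 1-D array of frequencies in Hz
    mag_db: magnitude response in dB on that grid
    n:      smoothing denominator (6, 12, 24, ...)
    """
    half = 2 ** (1.0 / (2 * n))                 # half-window as a frequency ratio
    out = np.empty_like(mag_db, dtype=float)
    for i, f in enumerate(freqs):
        window = (freqs >= f / half) & (freqs <= f * half)
        out[i] = mag_db[window].mean()          # average everything within 1/n octave of f
    return out
```

The point is just that the window gets narrower as n gets larger, so 1/24th keeps much more of the narrowband detail than 1/6th does.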
To answer your question, I don't think we should be smoothing the data beyond 1/12th, because I think it matters that all the data be visible. But when showing measurements against the Harman target, it makes sense to also show them smoothed to the same degree so people can see the apples-to-apples result, or as I like to call it, the "sound signature". It's also bound to be a more useful jumping-off point for people getting into EQ than trying to do all the fine-grained adjustments to match a target. I see this all the time: folks will hate on a given target because they tried to EQ to match it, when in reality any EQ above 4-5 kHz needs to be done by ear anyway.
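To make that apples-to-apples comparison concrete, here's how it might look in code, building on the hypothetical fractional_octave_smooth sketch above and assuming the target curve is already on the measurement's frequency grid:

```python
# freqs, measurement_db, target_db: 1-D arrays on the same frequency grid.
# (If they aren't, interpolate first, e.g. np.interp over log-spaced frequencies.)
measurement_12 = fractional_octave_smooth(freqs, measurement_db, n=12)
target_12      = fractional_octave_smooth(freqs, target_db, n=12)

# The residual is the "sound signature" relative to the target:
# positive values are above target, negative values are below it.
signature_db = measurement_12 - target_12
```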
The other reason not to smooth the data beyond 1/12th, though, is for the benefit of seeing it relative to the ear transfer function, which is a more fine-grained reference point. Now I should be clear that there's some debate about the usefulness of some of those fine-grained features, but in my testing so far with the B&K 5128, for over-ear headphones it does yield some interesting results. I'm not sure how well-known this is, but the calculated DF HRTF for the 5128 (not the one supplied with it) is the clearest ear transfer function out of all the rigs we use, since the one for the GRAS KEMAR is based on the large-format 0065 pinna and the 4128's is smoothed. We do have the data for the KB5000, thanks to Oratory1990's work on that, but the calculation is still in progress.
Blaine goes into more detail on these here.
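For what it's worth, "seeing a measurement relative to the ear transfer function" really just means subtracting the rig's DF HRTF from the raw measurement in dB. A minimal sketch, assuming you have both curves on hand (the names here are made up for illustration):

```python
import numpy as np

def relative_to_df_hrtf(freqs, raw_db, hrtf_freqs, hrtf_db):
    """Express a raw rig measurement relative to the rig's diffuse-field HRTF.

    The DF HRTF is interpolated onto the measurement's frequency grid
    (interpolating over log-frequency), then subtracted in dB.
    A flat 0 dB result would mean the headphone measures like the diffuse field.
    """
    hrtf_on_grid = np.interp(np.log10(freqs), np.log10(hrtf_freqs), hrtf_db)
    return raw_db - hrtf_on_grid
```

The fine-grained features in question are exactly the narrow wiggles in that compensated curve, which is why smoothing the measurement too aggressively works against this view of the data.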
With all of that said, it can be difficult to really determine the benefits of fine-grained FR vs. a fine-grained ear transfer function on a person-specific basis, since each headphone is bound to behave slightly differently on each head. At the moment I'm erring on the side of "more data is better", and I'm quite confident we can learn a lot more about the subjective characteristics that people love in headphones by improving the analysis of the data - in fact I think we already have, certainly in the case of in-ear headphones (it turns out acoustic impedance really matters).
All of that is to say that if we were content with the older paradigm of showing a fine-grained result against a coarse-grained target, we wouldn't need to bother with all of this.
There is something else, though, that I may as well put here, since some of this is a discussion about Harman after all, and I saw it come up in that thread on ASR too: the circle of confusion. This is just my personal opinion, and not necessarily backed up by anything in particular, but I strongly suspect the current discourse surrounding the topic would be very different if Dr. Olive hadn't been as focused on solving that problem.
What I mean by this is that if you look at how the preference research is done, it doesn't really line up with the idea some folks are running with - that Harman OE 2018 is the one true curve that everything must match. In fact it lines up a lot better with the kind of cluster analysis that gets done in preference research in general. Simply put, if we discard the goal of solving the circle of confusion, the established preference groupings would be better reflected as 'targets' rather than 'target'. This is how it is with preference research in other industries as well.
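Just to illustrate what I mean by cluster analysis (this is not the actual Harman methodology, and the data here is fake), the idea is to group listeners by their rating patterns instead of collapsing everyone into one average:

```python
import numpy as np
from sklearn.cluster import KMeans

# Fake data: rows are listeners, columns are preference ratings for a set
# of differently-tuned headphones or EQ conditions.
rng = np.random.default_rng(0)
ratings = rng.normal(size=(120, 8))

# Group listeners into preference clusters rather than averaging them
# into a single "one true curve".
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(ratings)
for k in range(3):
    group = ratings[km.labels_ == k]
    print(f"group {k}: {len(group)} listeners, mean ratings {group.mean(axis=0).round(2)}")
```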
While my own viewpoint on 'fidelity' is slightly different from Blaine's, I suspect we're aligned on this point, and it's one of the reasons we want to encourage the use of preference boundaries or ranges to provide a more complete picture of a headphone's performance relative to known segments. When you go to the grocery store you don't typically see just one flavor of anything, and that's precisely because there are different preference groups that the market is serving. And when you look into the Harman research beyond the headlines, or beyond what some folks say on ASR, you realize that while yes, there is a dominant group, there's a good case for having the other preference groups accounted for as well.
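And to show what I mean by preference boundaries or ranges (the bounds below are placeholders, not any published tolerance), the presentation could be as simple as checking how much of a target-relative response stays inside a band for a given segment:

```python
import numpy as np

def fraction_within_bounds(response_db, lower_db, upper_db):
    """Fraction of frequency points where a target-relative response stays
    inside a preference range (lower/upper tolerance curves on the same grid)."""
    inside = (response_db >= lower_db) & (response_db <= upper_db)
    return inside.mean()

# Placeholder example: a +/-3 dB band around the segment's preferred curve.
freqs = np.geomspace(20, 20000, 200)
response_db = np.zeros_like(freqs)   # stand-in for signature_db from earlier
print(fraction_within_bounds(response_db, -3 * np.ones_like(freqs), 3 * np.ones_like(freqs)))
```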
The paper on segmentation in particular is a useful starting point for anyone wanting to get into that.
I'm also encouraged to see that Dr. Olive is working with the 5128 on some new research, so it'll be interesting to see if anything changes based on that. Either way, new preference research is bound to be conducted (or maybe we'll get a different manufacturer to make their preference research public?), and we can always incorporate that into what we're doing too.
TL;DR - Targets, not target - but maybe we'll get there with new research.