Objectivists board room
Apr 9, 2017 at 7:16 AM Post #3,481 of 4,545
  @Yuri Korzunov
 
Without wishing to interfere in the discussion, for the sake of clarity let me try to summarize my understanding:
1. Your wish or suggestion is to have distribution in the native resolution of the DAW's project,
2. Your goal is to develop your own DAW, or improve existing DAWs, for pro and consumer end users.
 
My maths years are quite far behind me, but I do understand rounding, floating-point, etc. issues.
Nevertheless, what I would be more interested in is the added value or benefits of such a scheme.
Can you estimate the advantages you are expecting?
Rgds.


Hi @Arpiben,
 
> Your wish or suggestion is to have distribution in the native resolution of the DAW's project,
 
Yes. Though maybe some copyright holders won't want to do it, for probable piracy reasons.
But technically, there are no issues.
 
> Your goal is to develop your own DAW, or improve existing DAWs, for pro and consumer end users
 
I have thought about it. Currently we are not ready to create a serious DAW (MIDI + audio + automation + multi-hardware compatibility) due to limited resources.
However, during development of our converter software we arrived at what I think are successful implementations of technologies for 1-bit audio, which may be scaled into other products.
 
I hope this year we can show our new software (though only as a free demo). I won't yet say what kind of software it is (surprise :). At the end of March 2017 it began working in a first approximation.
 
 
> Nevertheless, what I would be more interested in is the added value or benefits of such a scheme.
> Can you estimate the advantages you are expecting?
 
In the studio workflow we have:
1. Recording
2. Mixing
3. Post-production.
 
The first stage is captured as integer, of course. But it can easily be converted to floating point for further calculations.
 
The last two stages may pass through several DAWs.
Calculations are easier to implement (more transparent for the programmer) in floating-point format.
You need worry about nothing: neither rounding nor overload (especially with 64-bit float).
At any time you can scale the signal to the range you need and lose almost nothing.
You can apply many processing steps, and the noise level remains around -170 ... -200 dB.
Some plugins may still work in integer format, for example. But, probably, with time they will be replaced by new plugins with floating-point interfaces and processing.
 
Hence, to better preserve quality, transferring whole recordings between DAWs may be implemented in floating point.
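The "scale and lose almost nothing" claim can be illustrated with a small sketch (my own toy example in Python/NumPy, not anyone's product code): a round trip of 120 dB attenuation and restoration is nearly lossless in float64, while the same trip on a 24-bit integer grid destroys the low-level detail.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 10000)       # full-scale test signal

# Float64 round trip: attenuate by 120 dB, then restore. The exponent
# absorbs the scaling, so almost nothing is lost.
y = (x * 1e-6) * 1e6
float_err = np.max(np.abs(y - x))       # near machine epsilon

# Same round trip on a 24-bit integer grid: the attenuated signal spans
# only a handful of integer steps, and the detail is gone for good.
q = 2 ** 23
xi = np.round(np.round(x * q) * 1e-6) * 1e6 / q
int_err = np.max(np.abs(xi - x))        # a sizeable fraction of full scale

print(float_err, int_err)
```

The float error sits around 1e-16, the integer error around 6e-2: the point is the gap between them, not the exact figures.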
 
 
I constantly collect information on what can be done with audio files.
 
Typical end-user DSP includes:
1. Resampling / bit-depth conversion / PCM-DSD conversion.
2. Room correction.
3. Sound enhancement.
 
I'd like to keep studio quality (noise level) all the way to the DAC input.
 
 
Summary: for further end-user processing, I'd prefer floating-point audio file distribution.
 
Apr 9, 2017 at 7:23 AM Post #3,482 of 4,545
   
The advantage of a DAW's resolution over, say, 24-bit resolution is truncation or rounding errors down well below the -300dB level instead of at around -138dB. That's nonsense of course as far as the consumer is concerned, as even truncation 100 times higher in level (than -138dB) is typically inaudible. However, this (-138dB) level of truncation/rounding error is a potential consideration when cumulatively summing together hundreds of such truncation/rounding processes, which can occur during the processing/mixing of a recording.
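The figures quoted above can be checked with a few lines of arithmetic (a sketch, assuming the usual convention that the error level is one LSB relative to a full scale of ±1.0):

```python
import math

def lsb_dbfs(bits):
    """One LSB of an N-bit signed format, relative to +/-1.0 full scale, in dB."""
    return 20 * math.log10(2.0 ** (1 - bits))

print(round(lsb_dbfs(24), 1))    # -138.5: the ~-138 dB figure above
print(round(lsb_dbfs(16), 1))    # -90.3
# float64 carries a 52-bit mantissa, so its rounding errors sit near:
print(round(20 * math.log10(2.0 ** -52), 1))   # -313.1, i.e. "well below -300 dB"
```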
 
G


What about multiplying a 24-bit integer by 0.9957367252, and the same in a pure integer environment?
1000 times.
For an FIR filter, for example?
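The proposed experiment can be sketched directly (my own toy numbers, not from either poster): repeat the multiply 1000 times, re-quantizing to the 24-bit grid after every step, versus staying in float64 throughout.

```python
import numpy as np

rng = np.random.default_rng(0)
gain = 0.9957367252
x = rng.uniform(-1.0, 1.0, 10000)      # full-scale test signal

# Integer path: round back to the 24-bit grid after every multiply.
xi = np.round(x * 2 ** 23)
for _ in range(1000):
    xi = np.round(xi * gain)
int_out = xi / 2 ** 23

# Float path: no intermediate re-quantization.
xf = x.copy()
for _ in range(1000):
    xf = xf * gain

exact = x * gain ** 1000               # reference result
err_int = np.sqrt(np.mean((int_out - exact) ** 2))
err_flt = np.sqrt(np.mean((xf - exact) ** 2))
print("int24   error: %6.1f dBFS" % (20 * np.log10(err_int)))
print("float64 error: %6.1f dBFS" % (20 * np.log10(err_flt)))
```

The integer path accumulates rounding error on the order of -120 dBFS, while the float64 path stays near -300 dBFS; note also that 1000 such multiplies decay the signal itself by about 37 dB, so the integer error creeps up toward the shrinking signal.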
 
Apr 9, 2017 at 7:47 AM Post #3,483 of 4,545
What about multiplying a 24-bit integer by 0.9957367252, and the same in a pure integer environment?
1000 times.
For an FIR filter, for example?

 
You're joking, right? How can you not know this? Do that 1000 times at, say, 56-bit or even 48-bit integer and the error is where, somewhere down around -200dBFS? About 1,000 times below the error from truncating to 24-bit, which is itself another 100 or so times lower than audibility. That's just ridiculous!
 
Quote:
Summary: for further end-user processing, I'd prefer floating-point audio file distribution.

 
Honestly, that's a beginner's misunderstanding, let alone for a developer with 20 years' experience! What's going on here? Is this an attempt at obfuscation leading up to marketing your product?
 
The response to this, for others: there is some benefit to higher integer or floating-point processing capability if the end user is going to apply further processing. HOWEVER, this is an issue of the processing environment, NOT of the distribution format! This is the same basic issue as in the studio: we record at 16 or 24-bit and then mix in a much higher bit-depth environment, although in the studio we often apply hundreds of different processor instances (and therefore thousands or many tens of thousands of calculation steps) rather than just one or a few processors.
 
G
 
Apr 9, 2017 at 8:23 AM Post #3,484 of 4,545
> You're joking, right? How can you not know this? Do that 1000 times at, say, 56-bit or even 48-bit integer and the error is where, somewhere down around -200dBFS? About 1,000 times below the error from truncating to 24-bit, which is itself another 100 or so times lower than audibility. That's just ridiculous!
 
56-bit and 48-bit integers are not standard formats in PC programming languages.
Also, a PC's CPU/FPU works with standard 32/64-bit floats, doesn't it?
 
Do you know of real alternatives between 24-bit integer and 32/64-bit float that offer the same overload and precision advantages?
 
 
Also:
 
The precision of FIR coefficient multiplication affects the exactness of the filter's characteristics. It is not only a noise matter.
 
For example, for a DSD modulator filter (an IIR filter, as a rule), precision matters greatly for ensuring stability.
Of course, it can be realized with integer calculations. But that is harder in design effort, or you need to increase the headroom reserve against input-level overload. Why is that needed?
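The stability point can be illustrated with a toy example (my own made-up numbers, not any real modulator design): a high-Q resonator whose coefficients, when quantized to a coarse fixed-point grid, land its poles on the unit circle, after which the filter no longer rings down.

```python
import numpy as np

def impulse_response(a1, a2, n):
    """Second-order IIR: y[i] = x[i] - a1*y[i-1] - a2*y[i-2], unit impulse in."""
    y = np.zeros(n)
    y[0] = 1.0
    y[1] = -a1 * y[0]
    for i in range(2, n):
        y[i] = -a1 * y[i - 1] - a2 * y[i - 2]
    return y

# High-Q resonator: complex poles at radius r, just inside the unit circle.
r, theta = 0.9999, 0.001
a1, a2 = -2 * r * np.cos(theta), r * r

# Quantize the coefficients to a 10-bit fractional grid, as a careless
# fixed-point implementation might. They snap to exactly -2.0 and 1.0,
# i.e. poles ON the unit circle.
step = 2.0 ** -10
a1q, a2q = np.round(a1 / step) * step, np.round(a2 / step) * step

n = 20000
exact = impulse_response(a1, a2, n)
quant = impulse_response(a1q, a2q, n)
print(np.max(np.abs(exact[-1000:])))   # ringing down
print(np.max(np.abs(quant[-1000:])))   # growing without bound
```

In floating point the same coefficients carry 52 bits of mantissa, which is why the design effort largely disappears.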
 
 
Quote:
   
Honestly, that's a beginner's misunderstanding, let alone for a developer with 20 years' experience! What's going on here? Is this an attempt at obfuscation leading up to marketing your product?
 
The response to this, for others: there is some benefit to higher integer or floating-point processing capability if the end user is going to apply further processing. HOWEVER, this is an issue of the processing environment, NOT of the distribution format! This is the same basic issue as in the studio: we record at 16 or 24-bit and then mix in a much higher bit-depth environment, although in the studio we often apply hundreds of different processor instances (and therefore thousands or many tens of thousands of calculation steps) rather than just one or a few processors.
 
G

 
> Honestly, that's a beginner's misunderstanding, let alone for a developer with 20 years' experience!
 
What do you develop?
 
 
> What's going on here? Is this an attempt at obfuscation leading up to marketing your product?
 
"Obfuscation" and "marketing": are these technical arguments? :)
 
I don't have a label. I don't sell records. For my business it doesn't matter whether the format is 24-bit, 32-bit float, or something else.
Give me 16-bit and I will process it :wink:
 
 
 
> We record at 16 or 24-bit and then mix in a much higher bit-depth environment, although in the studio we often apply hundreds of different processor instances (and therefore thousands or many tens of thousands of calculation steps) rather than just one or a few processors.
 
You wrote "hundreds". That is so.
Many algorithms are mathematically non-integer.
 
These are the reasons for using floating point and not worrying about problems with:
 
1. implementing non-integer math as integer,
2. precision, and
3. overload.
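The overload point in the list above can be sketched with a toy example of mine (the 12 dB boost and the 24-bit grid are arbitrary illustrative choices): intermediate values above full scale survive in float but clip irreversibly on an integer grid.

```python
import numpy as np

x = np.array([0.9, -0.8, 0.95])        # near-full-scale samples

# Floating point: boost by 12 dB (x4), then undo it. Values above 1.0
# are perfectly representable, so the round trip is exact.
restored = (x * 4.0) / 4.0

# 24-bit integer: the boosted values exceed full scale, clip, and the
# damage is permanent.
q = 2 ** 23
boosted_i = np.clip(np.round(x * q) * 4, -q, q - 1)
restored_i = (boosted_i / 4) / q

print(np.max(np.abs(restored - x)))     # 0.0: lossless
print(np.max(np.abs(restored_i - x)))   # large: clipping destroyed the peaks
```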
 
Apr 9, 2017 at 9:09 AM Post #3,485 of 4,545
   
[1] > Honestly, that's a beginner's misunderstanding, let alone for a developer with 20 years' experience! What do you develop?
 
[2] "Obfuscation" and "marketing": are these technical arguments? :)  
[3] You wrote "hundreds". That is so. Many algorithms are mathematically non-integer.
 
These are the reasons for using floating point and not worrying about problems with:
 
1. implementing non-integer math as integer,
2. precision, and
3. overload.

 
1. Hundreds of commercial music, TV and film products, and you? Not that it matters though; this is all basic digital audio signal processing, something even audio engineering students are taught.
2. Exactly!!
3. I can't believe you still are not getting it. Is it even possible for a developer not to know? Those are all valid reasons for the processing environment to be, say, 32 or 64-bit float, NOT the audio distribution format!!
 
G
 
Apr 9, 2017 at 9:26 AM Post #3,486 of 4,545
   
1. Hundreds of commercial music, TV and film products, and you? Not that it matters though; this is all basic digital audio signal processing, something even audio engineering students are taught.
2. Exactly!!
3. I can't believe you still are not getting it. Is it even possible for a developer not to know? Those are all valid reasons for the processing environment to be, say, 32 or 64-bit float, NOT the audio distribution format!!
 
G

 
> Although in the studio we often apply hundreds of different processor instances (and therefore thousands or many tens of thousands of calculation steps) rather than just one or a few processors.
 
I'm talking about the number of processing steps applied.
 
> Hundreds of commercial music, TV and film products, and you?
 
What do you mean?
 
> Is it even possible for a developer not to know?
 
Is that also a technical argument?
 
> Honestly, that's a beginner's misunderstanding, let alone for a developer with 20 years' experience!
 
What do you develop?
 
Apr 9, 2017 at 9:43 AM Post #3,487 of 4,545
 
[1] I'm talking about the number of processing steps applied.
[2] Is that also a technical argument? [3] What do you develop?


1. Why? Do you think a consumer is going to apply more processors than a mastering engineer or re-recording mixer?
2. No, it's not possible to have a technical argument with someone who doesn't know, or is trying to obfuscate, the technicalities. That's my point!
3. Hundreds of commercial music, TV and film products.
What have you developed? And what has what I've developed, or what you've developed, got to do with knowing the basics of DAW digital signal processing?
 
G
 
Apr 9, 2017 at 11:03 AM Post #3,488 of 4,545
Sounds like this is another attempt to fondle some esoteric concepts that lack any practical application and couldn't possibly add value to the human experience. All I see here is an endless technical argument that eschews any real application. I think we should move on to some other topic.
 
Apr 9, 2017 at 11:03 AM Post #3,489 of 4,545
 
1. Why? Do you think a consumer is going to apply more processors than a mastering engineer or re-recording mixer?
2. No, it's not possible to have a technical argument with someone who doesn't know, or is trying to obfuscate, the technicalities. That's my point!
3. Hundreds of commercial music, TV and film products.
What have you developed? And what has what I've developed, or what you've developed, got to do with knowing the basics of DAW digital signal processing?
 
G

 
 
> No, it's not possible to have a technical argument with someone who doesn't know or is trying to obfuscate the technicalities, that's my point!
 
Interestingly, what would be my profit from "obfuscation"?
 
 
 
> Do you think a consumer is going to apply more processors than a mastering engineer or re-recording mixer?
 
See here.
 
 
 
> Honestly, that's a beginner's misunderstanding, let alone for a developer with 20 years' experience!
> Hundreds of commercial music, TV and film products.
 
What is your job called?
 
 
 
> And what has what I've developed, or what you've developed, got to do with knowing the basics of DAW digital signal processing?
 
Does a DAW use some special kind of digital signal processing?
 
 
You wrote:
> What's going on here? Is this an attempt at obfuscation leading up to marketing your product?
> What have you developed?
 
You don't know what my business is? Then why do you write "obfuscation leading up to marketing your product"?
 
I develop audio conversion software. Go to the link in my signature and check my skills.
 
Unfortunately, you can't download my previous work in the radio communications field from the Internet and install it on a PC :wink:
 
Apr 9, 2017 at 1:33 PM Post #3,490 of 4,545
[1] Interestingly, what would be my profit from "obfuscation"?
[2] "Do you think a consumer is going to apply more processors than a mastering engineer or re-recording mixer?" See here.
[3] Does a DAW use some special kind of digital signal processing?
[4] You don't know what my business is? Then why do you write "obfuscation leading up to marketing your product"?

 
1. You're joking, right? It's a common audiophile marketing tactic to obfuscate the facts and solve "problems" which are way beyond audibility.
2. So that would be a "no" then. You also wrote in your referenced post: "I'd like keep studio quality (noise level) until DAC input." - The noise floor of the recording after mastering would typically be -60dBFS or higher, therefore a dynamic range of 60dB. What would you gain exactly by distributing this 60dB dynamic range in say a 64bit float file instead of a 16bit file? And don't keep saying "truncation or rounding error when processing", because you can simply load that 16bit file into a 32 or 64bit container for processing and keep truncation or rounding error way below what's even possible to reproduce!
3. Because unless there's an unbelievable lack of knowledge/understanding here, that's the only explanation which makes sense!
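The container point in #2 can be sketched as follows (a toy example of mine; the samples are synthetic, no real file I/O is involved):

```python
import numpy as np

# Stand-in for samples read from a 16-bit file.
rng = np.random.default_rng(2)
pcm16 = rng.integers(-32768, 32768, 1000, dtype=np.int16)

# Load into a float64 "container": every 16-bit value is exactly
# representable in float64, so the conversion itself loses nothing.
x = pcm16.astype(np.float64) / 32768.0

# Process at float64 precision (here: a -6 dB gain, then its inverse).
y = (x * 0.5) * 2.0

# Converting back reproduces the file bit-exactly: the processing
# container, not the distribution format, is what kept the error down.
back = np.round(y * 32768.0).astype(np.int16)
print(np.array_equal(back, pcm16))   # True
```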
 
G
 
Apr 9, 2017 at 1:58 PM Post #3,491 of 4,545
   
1. You're joking, right? It's a common audiophile marketing tactic to obfuscate the facts and solve "problems" which are way beyond audibility.
2. So that would be a "no" then. You also wrote in your referenced post: "I'd like keep studio quality (noise level) until DAC input." - The noise floor of the recording after mastering would typically be -60dBFS or higher, therefore a dynamic range of 60dB. What would you gain exactly by distributing this 60dB dynamic range in say a 64bit float file instead of a 16bit file? And don't keep saying "truncation or rounding error when processing", because you can simply load that 16bit file into a 32 or 64bit container for processing and keep truncation or rounding error way below what's even possible to reproduce!
3. Because unless there's an unbelievable lack of knowledge/understanding here, that's the only explanation which makes sense!
 
G

 
Sorry, but this is very interesting:
> Honestly, that's a beginner's misunderstanding, let alone for a developer with 20 years' experience!
> Hundreds of commercial music, TV and film products.
 
What is your job called?
 
Apr 10, 2017 at 4:05 AM Post #3,492 of 4,545
   
2. By definition, EDM is mostly constructed with synths and samplers; it's very heavily processed and, in addition, is just about the most highly compressed music genre. So fewer bits are required for the limited dynamic range, and there aren't all the natural harmonics to worry about, so no benefit of high sample rates either. In practice, no one can tell Hi-Res from CD in blind testing anyway, so Hi-Res is only really of theoretical benefit for any music genre.
 
3. IMO, subjective auditory benefits are the only benefits of Hi-Res; there aren't any objective auditory benefits.
 
4. In general, "objectivists" are quick to explain what people think they hear as "perceptual biases" caused by various forms of marketing/suggestion, because that's what ALL the reliable evidence indicates and there is no objective measurement which accounts for many of the reported perceived differences! The problem with any argument counter to this is a massive logical hole, which is simply: if we can't objectively measure it, we can't record it. This has been true of audio recording since it was invented in the 19th century, right up to the latest digital technology. Tape recorders, ADCs, etc. contain no magic spells; they are entirely science/engineering-based devices which effectively measure/convert energy that is stored and then replayed. Therefore, if you are hearing something which cannot be objectively measured, then it cannot be anything other than a trick of your perception ("perceptual biases")! For example, have a look at this brief video: there is not, nor can there ever be, an objective audio measurement of the difference between "baa" and "faa".
 

2. I still don't understand why you say EDM benefits the least from Hi-Res. You seem to be of the opinion that Hi-Res sources with sampling rates above 44.1 kHz don't provide ANY perceivable auditory benefit, and I don't necessarily disagree. If that's the case, then genre should make no difference. Whether the music is created electronically or acoustically should make no difference. Digital processing, dynamic range, number of bits: all of that should make no difference. Sampling rates above 44.1 kHz either have a perceivable auditory benefit or they don't, regardless of music genre.
 
4. So my basic argument here was that the PM-3 made the fast synths in some Skrillex songs sound hyper-detailed, probably due, at least in part, to the planar magnetic design. You were quick to offer up that what I heard was probably "perceptual biases" from marketing. Why would you jump to that conclusion? I think it's more likely that the headphones themselves had something to do with what I heard. Physical characteristics of the headphones would obviously be an objective thing, not subjective. Why jump to a subjective explanation without offering possible objective explanations?
 
"If we can't objectively measure it, we can't record it."  This is probably true.  However, we are not talking about recording.  We are talking about hearing.  Recording is limited by the capabilities of the instruments and equipment used to record.  Measuring equipment is also limited in its capabilities.  Is it possible that human hearing may have capabilities that today's instrumentation cannot objectively measure?  Is it possible that science hasn't discovered the necessary methodology yet to accurately and objectively measure certain aspects of human hearing?  Are you comfortable in saying that science has discovered everything there is to know about accurately and objectively being able to measure and explain human hearing?
 
Not the McGurk effect again. I'm not saying perceptual biases don't exist. "Baa" and "faa" is easily explained: years and years of conditioning to associate specific lip and mouth movements with specific sounds. How did Oppo, the maker of the PM-3 headphone, or even the headphone industry as a whole, precondition me to associate something (what, I don't know) with the incredible detail I heard in Skrillex riffs? Or is it more likely that some objective aspect of the headphones themselves was responsible for what I heard? You can't tell me that all headphones have the same ability to reproduce detail and that all the variations in detail that people hear are due to subjective biases.
 
My question to all you objectivists is: can the planar magnetic driver design provide a faster and more detailed (better-resolving) headphone than a dynamic driver design? Also, which objective measurements of a headphone would show the level of detail and resolution that a headphone is capable of?
 
Apr 10, 2017 at 4:21 AM Post #3,493 of 4,545
My question to all you objectivists is: can the planar magnetic driver design provide a faster and more detailed (better-resolving) headphone than a dynamic driver design? Also, which objective measurements of a headphone would show the level of detail and resolution that a headphone is capable of?

 
1. Details may be considered as spectral components.
 
2. Distortion and noise may mask these details.
 
3. Distortion measurement may be covered by measuring the input/output amplitude response for each frequency and level in the operating band.
An SPL meter and a generator may be used for these measurements.
 
4. The operating band has a different unevenness at each level value. This is another projection of point #3.
 
I consider low unevenness across the 0 ... 20 kHz band to be good.
 
5. The main issue when considering measurements is that the results published in manuals may be aggregated or fragmentary.
However, comparing detailed results (points #3 and #4) may also cause comparison issues,
because one tested headphone may work better at higher levels but the other at lower levels, for example.
So we cannot establish an absolute advantage of one headphone over another.
We also need to refer to the ear's frequency sensitivity curve.
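Point #3 can be sketched in code. Everything here is my own illustration: `measure_response` mimics a generator-plus-SPL-meter rig, and `toy_headphone` is a made-up one-pole filter standing in for a real device under test, not a model of any actual headphone.

```python
import numpy as np

def measure_response(system, freqs, fs=48000, dur=0.5):
    """Drive `system` with one sine at a time and log the steady-state
    output amplitude in dB, as a generator-plus-SPL-meter rig would."""
    levels = []
    for f in freqs:
        t = np.arange(int(fs * dur)) / fs
        y = system(np.sin(2 * np.pi * f * t))
        levels.append(20 * np.log10(np.max(np.abs(y[len(y) // 2:]))))
    return np.array(levels)

def toy_headphone(x):
    """Hypothetical transducer: a one-pole low-pass, i.e. a treble roll-off."""
    y = np.empty_like(x)
    y[0] = x[0]
    a = 0.3
    for i in range(1, len(x)):
        y[i] = a * x[i] + (1 - a) * y[i - 1]
    return y

freqs = [100, 1000, 5000, 10000]
resp = measure_response(toy_headphone, freqs)
print("unevenness over the band: %.1f dB" % (resp.max() - resp.min()))
```

Repeating the run at several drive levels would give the level-dependent unevenness of point #4.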
 
Apr 10, 2017 at 4:23 AM Post #3,494 of 4,545
Sounds like this is another attempt to fondle some esoteric concepts that lack any practical application and couldn't possibly add value to the human experience. All I see here is an endless technical argument that eschews any real application. I think we should move on to some other topic.


While I personally don't mind them discussing whatever topic they desire, as long as it's not entirely off-topic, I would also much more appreciate discussion linking objective measurements and concepts to some real-world perceivable experiences.
 
With that said, maybe someone can answer my question: which objective measurements of a headphone quantify the perceivable detail/resolution we hear? I know frequency response is one of them. Harmonic distortion, another? What else? Also, does the existing gamut of measurements tell us the whole story about the detail/resolution of a headphone, or is it possible that our ears are objectively picking up certain aspects of the sound that existing equipment and testing methods do not completely capture? Please don't get into subjectivity and how the brain perceives sound. Let's try to stay "objective", shall we?
 
Apr 10, 2017 at 4:37 AM Post #3,495 of 4,545
How is the frequency response of a headphone typically measured? Is it a tone sweep, with only a specific frequency being generated at a time? If so, then you do not have a wide range of frequencies (bass and treble) interacting with each other in the earcup chamber as you do with music, which would affect the sound, no? So a frequency response test of a headphone is not a real-world scenario, because with music playing you have a bunch of varying frequencies being generated very quickly and interacting with one another.
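That worry can be tested directly. This is a toy sketch of mine (`linear` and `nonlinear` are stand-ins, not headphone models): for any linear system, superposition guarantees that one-tone-at-a-time sweeps tell the whole story; tones only "interact" through nonlinearity (intermodulation), which is why distortion measurements complement the sweep.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
bass = 0.5 * np.sin(2 * np.pi * 60 * t)
treble = 0.5 * np.sin(2 * np.pi * 8000 * t)

def linear(x):
    """Any linear system; here a simple two-sample average."""
    return 0.5 * (x + np.roll(x, 1))

def nonlinear(x):
    """Soft clipping: a stand-in for transducer nonlinearity."""
    return np.tanh(2 * x)

# Linear: the output for the mix equals the sum of the one-tone outputs,
# so a frequency-by-frequency sweep captures everything.
lin_gap = np.max(np.abs(linear(bass + treble) - (linear(bass) + linear(treble))))

# Nonlinear: the tones interact (intermodulation), and a pure sweep
# alone cannot reveal that.
nl_gap = np.max(np.abs(nonlinear(bass + treble) - (nonlinear(bass) + nonlinear(treble))))

print(lin_gap)   # ~0
print(nl_gap)    # clearly nonzero
```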
 
