> But the NYT article is going over more advanced modeling of one location: full reconstruction of each area in the cathedral based on materials and dimensions.

No, that’s pretty standard procedure; they’ve been doing that for many years. Modern concert halls are designed that way: computer modelling of materials and surfaces, with calculations for different parts of the auditorium to optimise the listening experience for all audience members. I remember spending nearly an hour testing the acoustics in a new venue designed this way (about 20 years ago) and was amazed that the acoustics seemed to be the same wherever you went in the auditorium. I’d never experienced that before; even the world-famous acoustic venues have at least some areas of the auditorium that sound significantly different.
> I just find it amazing that they also have an acoustics team that can help dictate how the reconstruction should go.

Nope, that’s also standard procedure and has been for several decades. Russell Johnson was the pioneer in this field and a world leader almost until his death. I was very fortunate to spend a couple of days with him when I was working with an artist giving the opening performances in a major new concert venue (nearly 30 years ago).
> I'm still skeptical that they would be able to model those venues at the same level as what's being done with Notre Dame now.

Yes they can, and have been able to for many years, likely to an extent better than what’s being done with Notre Dame, because you can actually go to those venues, record/measure the acoustics and then “convolve” an output modelled to that specific venue/location.
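To illustrate the “convolve” step mentioned above: once a venue’s impulse response (IR) has been recorded, placing any dry signal in that acoustic is a single convolution. A minimal sketch, using synthetic stand-ins for both the dry recording and the measured IR (a real workflow would load actual audio files):

```python
import numpy as np

sample_rate = 48_000  # Hz

# Hypothetical dry signal: a short 440 Hz tone burst.
t = np.arange(0, 0.25, 1.0 / sample_rate)
dry = np.sin(2 * np.pi * 440.0 * t)

# Hypothetical measured impulse response: exponentially decaying noise,
# a crude stand-in for a real hall measurement.
ir_t = np.arange(0, 1.5, 1.0 / sample_rate)
rng = np.random.default_rng(0)
impulse_response = rng.standard_normal(ir_t.size) * np.exp(-3.0 * ir_t)

# Convolution reverb: every sample of the dry signal excites the IR,
# and the sum of all those scaled, delayed copies is the "wet" output.
wet = np.convolve(dry, impulse_response)

# Full convolution is as long as both inputs combined, minus one sample.
assert wet.size == dry.size + impulse_response.size - 1
```

Because real IRs run to seconds at high sample rates, production convolution reverbs do this in the frequency domain (FFT-based, partitioned for low latency) rather than with a direct time-domain convolution as here.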
What seems unusual in this case (although I can’t read the article) is that they’re trying to reconstruct a specific acoustic without knowing what it was originally. E.g. using computer modelling to predict what the acoustics would have been, and then modelling methods/materials to recreate those acoustics, rather than being able to go there and just record the impulse response/s. In other words, the whole thing would be effectively algorithmic as opposed to partially “convolution”: the impulse response would be calculated/modelled rather than recorded.
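A toy example of that “calculated rather than recorded” approach: the simplest way to predict an acoustic from materials and dimensions is Sabine’s formula, RT60 = 0.161 · V / A, where V is the room volume in m³ and A the total absorption in m² sabins (sum of each surface area times its absorption coefficient). The dimensions and coefficients below are purely illustrative, not actual Notre-Dame figures, and real acoustic modelling goes far beyond this single statistic (ray tracing, frequency-dependent absorption, scattering):

```python
def sabine_rt60(volume_m3: float, surfaces: list[tuple[float, float]]) -> float:
    """Sabine reverberation time estimate.

    surfaces: list of (area in m^2, absorption coefficient 0..1) per material.
    """
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical stone hall, 50 m x 20 m x 30 m high.
volume = 50 * 20 * 30  # 30,000 m^3
surfaces = [
    (50 * 20, 0.30),      # floor: partly absorbent (audience, furnishings)
    (50 * 20, 0.04),      # ceiling: bare stone vaulting
    (50 * 30 * 2, 0.05),  # long walls: stone
    (20 * 30 * 2, 0.05),  # end walls: stone
]

rt60 = sabine_rt60(volume, surfaces)
print(f"Estimated RT60: {rt60:.1f} s")  # about 8.8 s for these made-up figures
```

From an estimate like this, an algorithmic reverb can then synthesise a matching impulse response (e.g. noise with the predicted decay rate per frequency band), which is the sense in which the whole chain is modelled end to end rather than measured.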
> Especially if your DSP won't let you set specific areas at The Berlin Philharmonic hall.

There wouldn’t be much point in that for consumers. If you could choose any seating location in a world-class concert hall, why would you choose a poor position (thereby rendering it a non-world-class acoustic)? However, although you don’t find it in consumer products, you do commonly find it in professional products (and have for many years). Its use is usually restricted to film/TV though, where different locations within an acoustic space can be important.