Not sure if you saw it earlier, but I primarily listen to Spotify (Premium account with "HiDef" streaming, which I believe is 320 kbps for most of the catalog, though not all of it). As far as I know that's fairly good, but it isn't FLAC. I have a few albums in FLAC, but I don't usually listen to them since Spotify is just easier most of the time.
You didn't bother to digest what I'm saying - file compression and recording quality are different things. You can be listening to FLAC, but if the material was improperly recorded, then FLAC vs 320 kbps doesn't matter as far as soundstage is concerned (and it can be pointless in every other aspect too). At the very least there won't be any imaging in it to appreciate, and the gain can be too high. Spotify Premium isn't properly recorded just because it's labeled "Premium" - that's like assuming a gold sticker with the letters "HD" on something magically makes it measure flat from 20 Hz to 20 kHz. Nope.
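To make the compression-vs-recording distinction concrete, here's a minimal sketch using Python's `zlib` as a stand-in for any lossless codec like FLAC. "Lossless" only guarantees the decoded bytes match the original bit for bit - it says nothing about how well the original was recorded. A badly imaged recording stays badly imaged after a lossless round trip. The byte pattern below is a hypothetical stand-in for PCM audio, not real music data.

```python
import zlib

# Hypothetical stand-in for raw PCM audio bytes of a recording.
original = bytes(range(256)) * 64

compressed = zlib.compress(original)    # lossless, like FLAC
restored = zlib.decompress(compressed)  # decodes back to the exact same bytes

assert restored == original  # bit-identical: nothing gained, nothing lost
```

The point: the codec faithfully reproduces whatever went in, soundstage flaws included, so the "Premium" label on the delivery format can't fix the recording.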
Also, genre can affect how music is recorded, something I also mentioned. Take two extremes: dance music and classical. Classical music played live has a quartet arranged in a line, or a symphony with distinct sections, so when it's properly recorded you should hear where the cello is vs the violins, or where the brass sits relative to the strings. Dance music, by contrast, is predominantly played on electronic synthesizers, which won't image as well (more on this below), and when played live it's usually meant to be loud with speakers all over the place - equally loud throughout the club so everyone keeps buying cocktails, or bass everywhere so everybody keeps buying ecstasy instead of talking and basically renting seats on the cheap.
Now, as an extreme example of electronic vs acoustic instruments: a mic setup on a grand piano will capture panning depending on which keys are pressed, because the location of each hammer is different, while a speaker playing a synth patch isn't going to reproduce the sound of one key two feet from the sound of another. It's just not going to happen. Even symphonic metal, with its complex layering of guitars, symphony, lead vocals, backup vocals, and choir, won't have that panning piano effect if they use a synth instead of a grand piano when they record - which is why it's not surprising when some bands use a real piano in the studio even though they tour with a synth (which also means they don't need to hire and rehearse with a symphony for the entire tour, something only feasible if their audiences paid the kind of money people pay in Vienna or Dresden).
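The left-right placement described above can be sketched with a constant-power pan law, which is how a mixing engineer (or a multi-mic piano recording, effectively) places a mono source in the stereo field. This is an illustrative sketch, not any specific studio's method; the pan values for "low key" and "high key" are made up.

```python
import math

def constant_power_pan(sample: float, pan: float) -> tuple[float, float]:
    """Place a mono sample in the stereo field.
    pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right."""
    angle = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    # cos/sin keep total power (L^2 + R^2) constant across the pan range
    return sample * math.cos(angle), sample * math.sin(angle)

# A low piano key miked slightly left, a high key slightly right:
left_low, right_low = constant_power_pan(1.0, -0.4)
left_high, right_high = constant_power_pan(1.0, +0.4)
```

A synth patch played through one speaker collapses all keys to the same pan position; a miked grand gives each hammer its own spot on that left-right axis.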
And on top of that, even if your music did have sufficient imaging info as recorded, whatever is there is hampered by headphone physics, which you can't totally overcome with DSP.
As for the type of music I listen to, if that matters any: I primarily listen to Mumford & Sons, John Mayer, Daft Punk, Imagine Dragons, Eminem, and Kenny Chesney. I have started listening to jazz (after watching Whiplash), enjoy it while programming, and will listen to that style more frequently.
For the ones I'm familiar enough with:
Eminem wouldn't really have imaging and soundstage (it's mostly his voice, a drum machine, and a synth) - even when there's more than one rapper, it's rare for any vocal to image anywhere but dead center. Even BTnH didn't do much of that, save for vocals that are clearly in the background at any given point. Contrast this with how vocals are done on some Therion albums or tracks, where the two female leads are imaged slightly off-center (left and right) even when the lead male isn't singing. Or take proper recordings of Broadway shows: the original cast recording of CATS (vs the TV/direct-to-video version, and the audio-only CDs released with each) images each voice as it would be positioned on stage relative to the others, and in Phantom there's a scene where Erik and Christine start on opposite sides of the stage and sing while moving closer to each other - on the original, properly done recording you can hear their voices moving toward the center of the soundstage. You can test this on YouTube; even though it isn't lossless, it still reaps the benefits of proper recording when it comes to imaging presentation.
Note: when they don't do it "properly," it doesn't necessarily mean it's totally "wrong" - more that they made it sound like a static studio recording instead of imaging it like a theater. It may not image well enough to wow the listener, but that doesn't mean every other aspect is improperly done.
Another vocal example is The Corrs - I love their Unplugged recording because, while it doesn't spread their voices out like a proper string quartet recording would, there's more of a left-right pan to their voices than on the studio albums. In reality I wouldn't be surprised if this is more of an accident, with Andrea's mic picking up her sisters' voices to her flanks, although in one duet without her, her sisters are vaguely off-center left and right.
John Mayer, on the other hand, should have more separate sound sources, but from what I remember the guitars image in the center along with his voice - so while instrument separation matters tonally, it isn't so clear as far as spatial soundstage characteristics are concerned. The drums and other instruments, like the second guitar when there is one, should image somewhat more clearly along the left-right plane.
Daft Punk should sound good tonally and have some imaging information, but note this is still electronic, so it's about as good as how some metal is recorded - don't expect it to be like what can be done with acoustic instruments. How much better it sounds than gaming headsets still depends on the response of the headphones in question: if the gaming headsets aren't totally horrible in the midrange, a better headphone won't be easy to appreciate for those who aren't as sensitive to the differences. Note that in some cases "sensitive" is more like "an inflated perception of variances," or at least that the small improvement is exactly how the listener prefers it to sound. In my case I have three amps that generally sound the same in the midrange, but two have a more audible "thump" from the bass drum and less grating cymbals. Not a lot, but being able to crank up my favorite tracks without the cymbals scraping my ears is important to me.
Quote:
Originally Posted by Saxi
As for gaming, I do not use Dolby or any of the enhanced features. From what I gathered reading through these forums two years ago, when I originally started really looking for better sound quality, no Dolby would be better. I do use the third-party Xonar drivers, which are supposed to have less latency and better sound (haven't noticed a huge difference myself). But Asus is pretty stale with their own drivers anyway.
I use headphones 8+ hours a day. I'd love to have a really nice speaker setup, but I just can't do it in my environment. I haven't in a while, but I did mess around with all the settings, turning Dolby on and off, and settled on no Dolby, a flat EQ, and keeping everything off. I do like SVN mode on the Xonar, but it adds a ton of static, especially in silent spots, so I don't use it.
Even without the DSP features, gaming audio is still different from how music is recorded. The mere fact that music is intended for 2-channel speaker playback, while some soundcards or even the games themselves already take headphones into account, can mean more precise directionality - at least the general direction, if not realistic distance. And then there's the question of whether your music has enough spatial info to begin with.
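A rough sketch of what "taking headphones into account" can mean: the simplest direction cues a game engine can render per ear are the interaural time difference (ITD, sound reaches the near ear first) and interaural level difference (ILD, it's louder there too). The constants and gain formula below are illustrative approximations, not a real HRTF or any specific engine's code.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.0875    # m, rough average head radius (assumed value)

def simple_cues(azimuth_deg: float):
    """Return (itd_seconds, left_gain, right_gain) for a source at the
    given horizontal angle: 0 = straight ahead, +90 = hard right."""
    az = math.radians(azimuth_deg)
    # Woodworth-style approximation of the interaural time difference
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (az + math.sin(az))
    # Crude level difference: louder toward the near ear
    right_gain = 0.5 * (1.0 + math.sin(az))
    left_gain = 0.5 * (1.0 - math.sin(az))
    return itd, left_gain, right_gain

# A footstep 90 degrees to the right arrives at the right ear first and louder:
itd, left_gain, right_gain = simple_cues(90.0)
```

Ordinary 2-channel music mixes carry no such per-ear rendering - whatever spatial cues they have were baked in at the mixing desk for speakers, which is why games can sound more precisely directional on the same headphones.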
My gaming PC with all the gizmos on is a heck of a lot better for games and movies than what I use for 2-channel music. But turn them all off, and my HD600 still sounds more precise than my HD330 - though note the differences are only large as far as headphones go. It will not be like the difference between a BT speaker with drivers a quarter of a meter apart, placed on a table against a wall, and a pair of proper monitors spaced 2 m apart and 1.5 m away from the walls in all directions.