I think this is a good point. Why do lossless files cost more? Surely they aren't putting a dollar value on bytes? It's not a commodity in any real sense. I suspect it's exactly what you say: because they can. It's just become accepted that lossless files will cost more and people rarely think to question it.
Lossless files are larger but require one less step of processing. So they could charge more, they could charge less, they could charge the same. I see arguments on all sides. That's really up to the marketplace.
I know this -- audio coming from the artist/producer is 24/88 at least. That's over 4,000 kbps of raw data in mp3-bitrate terms. The mastering engineer always requests the highest resolution you are able to record/mix at. The mastering engineer then upsamples it to the highest multiple of that rate his system works at (in this example 24/176).
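To put a number on that comparison, here's a quick back-of-the-envelope calculation of the raw PCM bitrate of a 24-bit/88.2 kHz stereo master (the figures are just the standard PCM arithmetic, not anything from the post):

```python
# Raw PCM bitrate of a 24-bit / 88.2 kHz stereo master.
bit_depth = 24        # bits per sample
sample_rate = 88_200  # samples per second (88.2 kHz)
channels = 2          # stereo

bitrate_bps = bit_depth * sample_rate * channels
bitrate_kbps = bitrate_bps / 1000
print(f"{bitrate_kbps:.0f} kbps")  # prints "4234 kbps"
```

So a 320k mp3 is carrying well under a tenth of the raw data rate of the hi-res master, which is the "10% of the data" figure mentioned further down.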
After doing his mastering work (mainly EQ, volume, sequencing), he is paid to deliver the files in whatever format is needed for distribution. From 1985 to 2010 or so this meant 16/44 lossless PCM. Since he was usually working above 16/44, he has to downsample and dither to get back to 16/44.
My mastering engineer would apply different dithers to different sets of files to let me preview them and choose the one which most closely matched my original mix. None of them would 'match' exactly (dither does that), but I would pick my favorite, and from there he'd encode lossy versions at 320k mp3, 192k mp3, and 256k AAC, then ID and tag them for distro to the various store aggregators.
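For anyone curious what the dither step actually does: before truncating 24-bit (or higher) audio to 16-bit, a tiny amount of noise is added so quantization error becomes benign noise instead of correlated distortion. A minimal sketch of one common flavor, TPDF (triangular probability density function) dither, assuming float samples in the -1.0 to 1.0 range (the function name and structure here are my own illustration, not any specific mastering tool):

```python
import random

def tpdf_dither_to_16bit(samples, seed=0):
    """Quantize float samples in [-1.0, 1.0] to 16-bit ints with TPDF dither.

    Illustrative only: real mastering dithers are often noise-shaped
    (pushing the noise into less audible frequency bands), which is part
    of why different dithers sound different on the same mix.
    """
    rng = random.Random(seed)
    lsb = 1.0 / 32768.0  # one 16-bit quantization step
    out = []
    for s in samples:
        # TPDF dither: sum of two uniform draws, spanning +/- 1 LSB total.
        d = (rng.uniform(-0.5, 0.5) + rng.uniform(-0.5, 0.5)) * lsb
        q = round((s + d) * 32767)
        out.append(max(-32768, min(32767, q)))  # clamp to 16-bit range
    return out
```

Because the added noise differs sample by sample, no dithered file can be bit-identical to the original mix, which is why the preview files above could only ever be "closest," never a match.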
When Apple launched their "mastered for iTunes" program a few years back they recommended all music be submitted as 24bit and Apple would apply the lossy encoding and store both the master copy and the AAC, but only sell/stream the AAC. Apple is sitting on lots of 24bit masters and doing nothing with them.
Lossy coding attacks the mix of the music, and since I mix music, I take that personally. Not only are they selling us 10% of the data (art) and charging full price, they are destroying critical elements in the music and the mix. The layering and interplay of instruments and voices is what makes music tick.
It's called 'perceptual coding' because it is a method based on the inability to "reliably perceive" various types of sounds, and by that they mean people must describe them the same way and identify them consistently. I see how it works. You can't even get 3 people on head-fi to agree on terms and measurements for what they hear in mixed music. Science has to fall back on test tones, which don't help, since they have no mix, no interplay of instruments, no layers of sounds, no musical anything. You can't do much science with music; music is still too complicated and emotional for science.
Music is the magic here. Science doesn't really know how it's made, or computers would have written all the amazing melodies already. They have written none. No computer has ever played a stringed instrument or sung anywhere close to a human. No computer has ever composed a good pop song, much less a symphony, on its own. Even percussion, which computers can be decent at, is still far better when played by a human than by a computer. Of course they are tools to be used by humans.