I hadn't really thought out how I'd implement this when I first wrote it,
but the basic idea is that you have a somewhat large set of training data like this:
character data → waifu? (true / false)

where "character data" is either:
- a set of tags from a database
or:
- vocal signature
- image(s)
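for the tag-based option, the input could just be a multi-hot vector over the tag vocabulary — a minimal sketch, where the tags and characters are all made up:

```python
# hypothetical tag vocabulary; the real one would come from the database
TAGS = ["tsundere", "long_hair", "glasses", "kuudere", "twin_tails"]

def encode(tags):
    # multi-hot vector: 1.0 if the character has the tag, else 0.0
    return [1.0 if t in tags else 0.0 for t in TAGS]

print(encode({"tsundere", "glasses"}))  # [1.0, 0.0, 1.0, 0.0, 0.0]
```

the vocal-signature / image option would need a real feature extractor instead, which is the hard part.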
the point is that the output (waifu?) is some complex function of the input which we don't know.
the latter input option would be much more accurate if it could be done (I don't know how... yet)
by training a neural network on a large set of known mappings, it can learn that complex function.
after it has been trained, I can input arbitrary character data and have it determine whether that's waifu material or not.
apply this test to all known characters and you now have an exhaustive list of all your waifus in existence.... ideally.
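the whole train-then-score loop can be sketched with a single logistic neuron standing in for the network — everything below (tags, characters, labels) is invented for illustration:

```python
import math, random

random.seed(0)

# hypothetical tag vocabulary; the real one would come from the database
TAGS = ["tsundere", "long_hair", "glasses", "kuudere", "twin_tails"]

def encode(tags):
    # multi-hot vector over the tag vocabulary
    return [1.0 if t in tags else 0.0 for t in TAGS]

# hypothetical known mappings: (tag set, waifu?)
TRAIN = [
    ({"tsundere", "long_hair"}, True),
    ({"tsundere", "twin_tails"}, True),
    ({"kuudere"}, False),
    ({"glasses", "kuudere"}, False),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# single logistic neuron: the smallest possible stand-in for a real network
w = [0.0] * len(TAGS)
b = 0.0
lr = 0.5
for _ in range(1000):
    for tags, label in TRAIN:
        x = encode(tags)
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        err = p - (1.0 if label else 0.0)  # gradient of the log loss
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

def waifu_score(tags):
    x = encode(tags)
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# apply the trained test to every known character and rank them
candidates = [{"tsundere", "glasses"}, {"kuudere"}, {"long_hair", "twin_tails"}]
for tags in sorted(candidates, key=waifu_score, reverse=True):
    print(sorted(tags), round(waifu_score(tags), 3))
```

a real version would swap the single neuron for a deeper network and the toy tag sets for the full database, but the train/score structure stays the same.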
sometimes I run CUDA apps like waifu2x, but that's not really a real-time demand for compute power.
getting a flagship CPU is much more important than a GPU for me.
It would help when you have a lot of numbers to crunch tho.