Good point, but remember this is 'your' own ChatGPT. Much like your own child, you can decide where it gets its knowledge; you are in total control. This means you can vet, and even train it to vet, information according to your own standards, and even beyond them so as not to introduce bias. You can therefore train it to research a subject and vet references according to certain standards, and even have it get your approval before, during, or after. For example, you could tell it to always trust information from MIT, never trust information from the X platform, and always cross-reference information until it reaches a trusted source, much the same as you might do yourself (a rough sketch of the kind of rules I mean follows below).
The point is, this is your ChatGPT, in every way.
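For what it's worth, spelled out explicitly such rules might look something like this. The domain names and categories are just examples, and this is not an actual ChatGPT feature or API; it is only the vetting policy written down as code:

# Rough sketch only: the source-vetting rules described above, spelled out
# as a simple policy. Domains and categories are illustrative examples,
# not a real ChatGPT API.

from urllib.parse import urlparse

TRUSTED_DOMAINS = {"mit.edu"}                 # "always trust MIT"
BLOCKED_DOMAINS = {"x.com", "twitter.com"}    # "never trust the X platform"

def vet_source(url: str) -> str:
    """Classify a source as trusted, blocked, or needing cross-referencing."""
    host = (urlparse(url).hostname or "").lower()

    def matches(domains):
        return any(host == d or host.endswith("." + d) for d in domains)

    if matches(TRUSTED_DOMAINS):
        return "trusted"
    if matches(BLOCKED_DOMAINS):
        return "blocked"
    # Everything else must be cross-referenced until the chain of sources
    # reaches a trusted one, or gets the user's explicit approval.
    return "needs cross-referencing"

if __name__ == "__main__":
    for url in ("https://news.mit.edu/2023/some-study",
                "https://x.com/some-viral-thread",
                "https://randomblog.example/claim"):
        print(url, "->", vet_source(url))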
Maybe...
But having to tell a ChatGPT which sources I deem trustworthy and which I don't is a laborious exercise. It assumes that a) I have the time to do that handholding of a ChatGPT during its training and b) I know a large number of reliable sources already.
This means that for any particular piece of information I need to do the hard work (i.e. the verification) mostly myself anyway. It is really not as simple as cross-referencing until one reaches a trusted source, because that requires a level of semantic understanding and contextual knowledge that LLMs just don't have yet, although research in that area is ongoing. If it were that easy, then anyone could be an academic or university research associate.
Re. a personal ChatGPT, I would rather do the research and learn myself, and have the information in my head instead of in the memory banks of a ChatGPT. That's the whole point of researching & learning. Delegating the research & learning process to a machine rather than your own brain is a dubious trend IMO. You end up with people with great ChatGPT skills but little actual knowledge and even less understanding. In effect, by using ChatGPT you think you are speeding up your ability to navigate the forest of information out there, whereas in practice you are also limiting your opportunities to learn.
Re. current ChatGPT solutions: there is a place for AI in research itself, to help speed up medical tests, identify trends in data, etc. But I fear that ChatGPT-type LLM implementations are being rushed out prematurely, without much thought for the consequences of injecting powerful actors like that into the worldwide flow of information. Extensive systems-stability research (in this case, worldwide information stability) is needed before rushing ahead. From a physics/mathematical perspective, they need to identify the feedback loops, attenuation and poles of the system first, otherwise it may blow up in the AI researchers' faces (and ours).
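To make the control-theory analogy concrete: a feedback loop with forward gain G(s) and feedback F(s) has the closed-loop response H(s) = G(s) / (1 + G(s)F(s)), and it is stable only if every pole of H(s) lies in the left half-plane (Re(s) < 0); a single pole in the right half-plane means any small disturbance grows without bound. That is the kind of characterisation nobody seems to have done for the information system ChatGPT is being wired into.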
To be fair, misinformation feedback loops have existed for centuries already, likely millennia. But ChatGPT solutions will speed the process up so much that we won't have time to mitigate the consequences. So we need to build in the stability constraints beforehand, and as far as I can tell that isn't happening, or only half-heartedly.