Dec 29, 2024 at 1:30 AM Thread Starter Post #1 of 11

JotaroKujo
So how long before each one of us has our own personal ChatGPT?

Your very own personal ChatGPT, keyed to you and only you via bio-scan or other methods.

How would you like your own AI assistant that knows as much or as little about you as you want: saving you money, making you money, planning your trips and diets, researching for you while you sleep... doing all the things we know AI does.

Would you want one?
 
Dec 29, 2024 at 6:48 AM Post #2 of 11
I use it a lot for research; the more detailed the question, the better the answer it gives. They're integrating it into most things, so personalized results are only going to get more targeted. It's a huge time saver for detailed questions, but I wouldn't trust these assistants with a credit card to make purchases. Buy a new phone and they all have personalized AI assistants linked to your account history. I turn the smart features off in Gmail, though; they're pretty intrusive.
 
Dec 29, 2024 at 11:31 AM Post #4 of 11
So how long before each one of us has our own personal ChatGPT?

Your very own personal ChatGPT, keyed to you and only you via bio-scan or other methods.

How would you like your own AI assistant that knows as much or as little about you as you want: saving you money, making you money, planning your trips and diets, researching for you while you sleep... doing all the things we know AI does.

Would you want one?
In my case that would never happen, at least not until LLMs specify the sources and references for their information. Until then it is just noise.

Judging by what I have seen LLMs produce so far, I will ignore them. I have seen them spout so much utter, if not dangerous, nonsense on subjects I am knowledgeable about myself that I wouldn't trust the information they offer on any unfamiliar subject. Information without references to sources has little to no value to me, and they never provide those.
 
Dec 29, 2024 at 7:56 PM Post #5 of 11
In my case that would never happen, at least not until LLMs specify the sources and references for their information. Until then it is just noise.

Judging by what I have seen LLMs produce so far, I will ignore them. I have seen them spout so much utter, if not dangerous, nonsense on subjects I am knowledgeable about myself that I wouldn't trust the information they offer on any unfamiliar subject. Information without references to sources has little to no value to me, and they never provide those.

Good point, but remember, this is your own ChatGPT. Much like your own child, you decide where it gets its knowledge; you are in total control. This means you can vet information yourself, and even train it to vet information according to your own standards, and beyond that, so as not to introduce bias. You could train it to research a subject and vet references against certain criteria, and have it get your approval before, during, or after. For example, you could tell it to always trust information from MIT, never trust information from the X platform, and always cross-reference a claim until it reaches a trusted source... much the same as you might do yourself. (A toy sketch of that rule is below.)

The point is, this is your ChatGPT, in every way.
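
To make that concrete, here is a minimal sketch (Python) of what such a rule might look like. Everything in it is hypothetical: the domain lists, the citation-chain lookup, and the function names are illustrations, not anything an actual assistant exposes.

# Toy source-vetting policy: always trust MIT, never trust X,
# otherwise chase the citation chain until a trusted source appears.
# All names, domains, and data here are hypothetical illustrations.
from urllib.parse import urlparse

TRUSTED = {"mit.edu"}    # always accept
BLOCKED = {"x.com"}      # always reject
MAX_HOPS = 5             # how far to chase citations before giving up

def domain(url: str) -> str:
    host = urlparse(url).netloc.lower()
    return ".".join(host.split(".")[-2:])   # crude "registered domain"

def vet(url: str, cited_by: dict, hops: int = 0) -> bool:
    """Follow the citation chain until a trusted or blocked source is hit."""
    d = domain(url)
    if d in BLOCKED:
        return False
    if d in TRUSTED:
        return True
    if hops >= MAX_HOPS or url not in cited_by:
        return False   # ran out of references: punt back to the user
    return vet(cited_by[url], cited_by, hops + 1)

# Example: a blog post whose only reference is an MIT page.
chain = {"https://blog.example/post": "https://news.mit.edu/2024/study"}
print(vet("https://blog.example/post", chain))   # True

The "get your approval before, during, or after" part would just be the assistant pausing and asking you whenever vet() returns False.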
 
Dec 29, 2024 at 8:23 PM Post #7 of 11
It has made quick "google" searching much less annoying. I just wonder how long it will be 'til it becomes corrupted and starts giving sponsored answers. A few so-called experts on YT have said that if you want unbiased news, ask ChatGPT. The idea of blindly trusting it because it's AI, without knowing when or if it will start being fed info with ulterior motives, is kinda scary. I dunno. It is interesting to think about giving it a task that would take some time, though. It's really a trip how quickly it answers basically any question.
 
Dec 29, 2024 at 11:51 PM Post #8 of 11
Let's see what Eric Schmidt, the former CEO of Google, has to say:

 
Dec 30, 2024 at 7:25 AM Post #9 of 11
Good point, but remember, this is your own ChatGPT. Much like your own child, you decide where it gets its knowledge; you are in total control. This means you can vet information yourself, and even train it to vet information according to your own standards, and beyond that, so as not to introduce bias. You could train it to research a subject and vet references against certain criteria, and have it get your approval before, during, or after. For example, you could tell it to always trust information from MIT, never trust information from the X platform, and always cross-reference a claim until it reaches a trusted source... much the same as you might do yourself.

The point is, this is your ChatGPT, in every way.
Maybe...

But having to tell a ChatGPT which sources I deem trustworthy and which I don't is a laborious exercise. It assumes that a) I have the time to do that hand-holding of a ChatGPT during its training, and b) I already know a large number of reliable sources.

This means that for a particular piece of information I need to do most of the hard work (i.e. the verification) myself anyway. It is really not as simple as cross-referencing until one reaches a trusted source, because that requires a level of semantic understanding and contextual knowledge that LLMs currently just don't have, although research in that area is ongoing. If it were that easy, then anyone could be an academic or a university research associate.

Re. a personal ChatGPT: I would rather do the research and learn myself, and have the information in my head instead of in the memory banks of a ChatGPT. That's the whole point of researching and learning. Delegating the research and learning process to a machine rather than your own brain is a dubious trend IMO. You end up with people with great ChatGPT skills but little actual knowledge and even less understanding. By using ChatGPT you think you are speeding up your ability to navigate the forest of information out there, whereas in practice you are also limiting your opportunities to learn.

Re. current ChatGPT solutions: there is a place for AI in research itself, to help speed up medical tests, identify trends in data, etc. But I fear that ChatGPT-type LLM implementations are being rushed out prematurely, without much thought re. the consequences of injecting powerful actors like that into the worldwide flow of information. They need to do extensive systems-stability research (in this case, worldwide information stability) before rushing ahead. From a physics/mathematical perspective they need to identify the feedback loops, the attenuation and the poles of the system first, otherwise it may blow up in the AI researchers' faces (and ours). A toy illustration of what I mean is at the end of this post.

To be fair, misinformation feedback loops have existed for centuries already, likely millennia. But ChatGPT solutions will speed up the process so much that we won't have time to mitigate the consequences. So we need to build in the stability constraints beforehand, and as far as I can tell that isn't happening, or only half-heartedly.
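
Here's the toy illustration (Python). Obviously the information ecosystem is not a single linear feedback loop; this is just the simplest possible caricature of the stability point, with every number made up.

# Caricature of a misinformation feedback loop: x[n+1] = g*x[n] + u.
# Stable only if |g| < 1; the pole crossing 1 is the danger point.
def simulate(gain: float, steps: int = 20, inflow: float = 0.1) -> float:
    x = 1.0   # circulating misinformation at step 0 (arbitrary units)
    for _ in range(steps):
        x = gain * x + inflow   # each cycle re-amplifies what's already out there
    return x

print(simulate(0.8))   # converges toward inflow / (1 - gain) = 0.5: stable
print(simulate(1.2))   # grows without bound: unstable

The worry is that LLM output feeding back into the web, and from there into LLM training, pushes the effective gain toward 1, and nobody seems to be checking where that pole sits before deploying.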
 
Dec 30, 2024 at 8:53 PM Post #11 of 11
Maybe...

But having to tell a ChatGPT which sources I deem trustworthy and which I don't is a laborious exercise. It assumes that a) I have the time to do that hand-holding of a ChatGPT during its training, and b) I already know a large number of reliable sources.

This means that for a particular piece of information I need to do most of the hard work (i.e. the verification) myself anyway. It is really not as simple as cross-referencing until one reaches a trusted source, because that requires a level of semantic understanding and contextual knowledge that LLMs currently just don't have, although research in that area is ongoing. If it were that easy, then anyone could be an academic or a university research associate.

Re. a personal ChatGPT: I would rather do the research and learn myself, and have the information in my head instead of in the memory banks of a ChatGPT. That's the whole point of researching and learning. Delegating the research and learning process to a machine rather than your own brain is a dubious trend IMO. You end up with people with great ChatGPT skills but little actual knowledge and even less understanding. By using ChatGPT you think you are speeding up your ability to navigate the forest of information out there, whereas in practice you are also limiting your opportunities to learn.

Re. current ChatGPT solutions: there is a place for AI in research itself, to help speed up medical tests, identify trends in data, etc. But I fear that ChatGPT-type LLM implementations are being rushed out prematurely, without much thought re. the consequences of injecting powerful actors like that into the worldwide flow of information. They need to do extensive systems-stability research (in this case, worldwide information stability) before rushing ahead. From a physics/mathematical perspective they need to identify the feedback loops, the attenuation and the poles of the system first, otherwise it may blow up in the AI researchers' faces (and ours).

To be fair, misinformation feedback loops have existed for centuries already, likely millennia. But ChatGPT solutions will speed up the process so much that we won't have time to mitigate the consequences. So we need to build in the stability constraints beforehand, and as far as I can tell that isn't happening, or only half-heartedly.
Okay then, thank you for your opinion.

I take it you're not on board.

J.
 
