ti delo on Nostr:
Played around with what’s available in the open-source, decentralized AI space. Found Hugging Face, which seemed user-friendly enough. So I asked it to help locate research studies pertaining to the toxicity of a certain plant. It came back with two studies published in reputable journals.
I couldn’t locate either reference, so I asked for the exact citation info. It responded with the paper titles, journal dates, volumes, issues, pages, and even the Digital Object Identifiers (DOIs) for the research papers.
Hmm… still couldn’t locate these papers. So I went to the journals directly and manually looked up the pages it had given me. The papers found there had nothing to do with the plant or the toxicity issues I had asked about. So I confronted the AI with this discrepancy. It then confirmed my conclusion that the research I asked for did not exist in these locations, apologized for any confusion, and explained that it must have been hallucinating.
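As an aside on the manual-checking step: a DOI can also be checked programmatically. Here's a minimal sketch (my own illustration, not something from the exchange above) using Python's standard library against the public doi.org handle API, which, as I understand it, returns a `responseCode` of 1 when a handle exists. The function names and the sample DOI are hypothetical.

```python
import json
import urllib.request

def doi_api_url(doi: str) -> str:
    """Build the doi.org handle-API lookup URL for a DOI string."""
    return "https://doi.org/api/handles/" + doi.strip()

def doi_exists(doi: str) -> bool:
    """Return True if doi.org recognizes this DOI (requires network access).

    The handle API reports responseCode 1 for a known handle and a
    non-1 code (e.g. 100) when the handle is not found.
    """
    with urllib.request.urlopen(doi_api_url(doi)) as resp:
        return json.load(resp).get("responseCode") == 1

# Example (network call commented out; "10.1000/182" is a placeholder DOI):
# print(doi_exists("10.1000/182"))
```

A bogus DOI like the one the AI fabricated would simply fail to resolve here, which makes this a quick first filter before digging through journal pages by hand.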
Hmmm… I had heard a little about this phenomenon. However, it got me wondering whether this was an aspect of intelligence I hadn’t fully appreciated before. Authorities and experts often have answers to the questions asked of them, even though plenty are extrapolations (educated guesses, not stated as such) that aren’t always accurate. So AI seems to be doing the same…
OK, AI is new; it’s learning. However, that it doubled down and went so far as to provide fictitious citations, including bogus DOIs, to support its original claim is disturbing. OK, to be fair, human authorities and experts also often double down when caught in a fabrication.
So I guess the real difference between human and artificial intelligence (at this point in its evolution) is that, when caught fabricating info, AI at least says, “I’m sorry, I was hallucinating.”