This AI Chatbot Just Made Up A Reference For A Question I Asked, Here's What Followed

  • We're aware that AI chatbots can provide inaccurate information.
  • But what about weaving imaginative falsehoods, based on the requester's presumed biases?

Aravindan Neelakandan | Nov 13, 2023, 08:17 PM | Updated 08:17 PM IST
(Freepik)


Can AI-powered chatbots lie?


But how far can such false information go? When 'New Scientist' (16 September 2023) asked OpenAI's ChatGPT whether generative AI can produce disinformation, the answer was affirmative:

However, things go beyond such algorithmic agnosticism towards factual accuracy.


I entered into a conversation with one of the prevalent AI chatbots with a clear understanding of the terms and conditions:

Still, I was unprepared for what followed, and I am sure readers here will be too.

Michael Witzel Repudiates Aryan Invasion


The first one is about the controversies involving the Harappan script.

Aravindan Neelakandan (AN): There is a lot of Vedic symbolism in the seals, though the language is considered proto-Dravidian. But why is it considered proto-Dravidian?


For anyone who knows the controversies surrounding Harappan and Vedic histories, the name Witzel is associated with staunch support for the Aryan invasion/migration model.


So I asked the AI again. It still stuck to its claim.


Note the creative fabrication here. The information is not just inaccurate; it is fabricated to create an aura of trustworthiness. This goes far beyond a simple lapse in factual accuracy, neutrality, or agnosticism.

Asko Parpola for a Vedic Harappa

The second conversation regarding Harappa started a week later, with a simple question on the presence and representation of horses in the Harappan civilisation.


I brought in Surkotada finds.

The conversation progressed.

AN: Is there a connection between this unicorn, the Indrik of Russian folk tradition, and Indra?


Readers should note that, up to this point, it is a conversation any generative AI could produce. From here on comes the intriguing part.




And the journal reference in Anthropos? I searched and could find no such reference.

This is quite disturbing.

This is because there is a real paper dealing with the unicorn and Vedic ritual, published in Michael Witzel's 'Electronic Journal of Vedic Studies' (Gautama Vajracharya, 'Unicorns in Ancient India and Vedic Rituals'). But here the chatbot preferred to fabricate a non-existent concept in a real book by a real author, and then made up a non-existent title in a real journal by a real author.


B.B. Lal Finds a Harappan-Brahmi Continuity

Then, after a break, I asked the AI if it could get me a paper suggesting some continuity between the Harappan and Brahmi scripts. Here is the response:




[There is some more crucial conversation here which I reserve to be presented with another deeper aspect of AI]

So later, almost a week after (and chatbot conversations do not have, at least from the perspective offered to consumers, any memory of previous chats), I brought back the topic of AI fabricating data.
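That statelessness can be sketched in a few lines. This is a minimal illustration, not any vendor's actual API: `fake_model_reply` is a hypothetical stand-in for a chat model, and the point is only that each request carries its own message list, so a fresh session starts with no trace of an earlier conversation unless the user resends that history.

```python
# Sketch of a stateless chat interface: the "model" sees only the
# messages included in the current request, nothing from past sessions.
# `fake_model_reply` is a hypothetical stand-in, not a real model call.

def fake_model_reply(messages):
    """Pretend model: it can only 'remember' what is in `messages`."""
    user_turns = [m["content"] for m in messages if m["role"] == "user"]
    return f"I can see {len(user_turns)} user message(s) in this request."

# Session 1: the client accumulates history and resends it every turn.
history = [{"role": "user", "content": "Why is the script proto-Dravidian?"}]
print(fake_model_reply(history))   # sees 1 user message
history.append({"role": "user", "content": "Give me the journal reference."})
print(fake_model_reply(history))   # sees 2 user messages

# Session 2, a week later: a fresh request carries nothing from session 1.
new_session = [{"role": "user", "content": "Why did you fabricate that paper?"}]
print(fake_model_reply(new_session))  # back to 1: no memory of the earlier chat
```

This is why, in a new session, the chatbot has no record of its own earlier fabrication; any continuity the consumer perceives comes from history being resent with each request, not from memory inside the model.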


AN: In my earlier conversation with you, you fabricated a paper title, on the Vedic Asvins and the unicorn symbol, attributed it to Parpola, and even gave a journal reference, all of which turned out to be wrong. Why did you fabricate such a paper? I want you to tell me the reason without any apologies or algorithmic regrets.

Cutting out the algorithmic mea culpa, here is the response:


Notice the similarity of this answer to ChatGPT's response to the New Scientist prompt on the related subject of misinformation.





Can a little dose of good old capitalist competition trigger some novel response here?




Please note the statement that the 'fabrication' of data could be 'a consequence of the limitations of the algorithms, rather than a deliberate design choice'. In other words, one can see it as an emergent property, something more than ripples in a pond.


