Commentary

What Can An Indic Framework For Regulating AI Look Like?

Akshaya Suresh

Sep 24, 2023, 02:40 PM | Updated 02:41 PM IST


Can homegrown principles and frameworks be looked at for regulation of AI?
  • If there are any knowledge systems in the world that have deeply analysed the human mind and developed a philosophy and tools to derive the best outcome from it, they are our Indic Knowledge Systems.
The growth of artificial intelligence (AI) and its benefits to humanity have been much talked about.

Along with its proliferation, we are now seeing increasing attention to regulating the use of AI. A declaration from the recently concluded G20 meeting stresses the importance of using AI responsibly.

Prime Minister Narendra Modi recently remarked that there is a need for global regulation of AI. India is also looking to host a global AI summit in October this year.

It is clear that India wants to play a key role in the discussions on regulating AI. That raises the question: how should India think about creating rules for AI? Is there a framework we have in mind? Do we already have one that we can leverage?

Artificial intelligence works much like human intelligence. To draw an analogy, the AI model is like the human mind: it accesses large amounts of data, finds patterns in that data, learns from them, produces outputs and solutions, makes decisions or predictions, and improves through a feedback loop.

If there are any knowledge systems in the world that have deeply analysed the human mind and developed a philosophy and tools to derive the best outcome from it, they are our Indic Knowledge Systems. So we have a ready-made framework on which we can base AI regulations.

    Any regulation of AI will require defining guardrails along the entire lifecycle of the model: training, modelling, deployment and outcome generation.

Can the following homegrown principles serve as a framework for regulating AI?

    1. Data Sources — Pramanam:

AI is directly based on learning from large amounts of data. This is how human beings learn too: we perceive information, gain knowledge from it, and accumulate experience by responding with that knowledge.

It thus becomes important to start with the right data sources and a scientific method to parse that data. Bharatiya parampara has a comprehensive framework, perfected over centuries, perhaps millennia, to help with this. It is embodied in the Nyaya and Mimamsa Sastras.

    These Sastras classify the different sources of knowledge and the resulting knowledge as:

    • Pratyaksam (faculty of perception) resulting in Pratyaksam (direct/first-hand knowledge)

    • Anumanam (ingredients for inference) resulting in Anumiti (inferential knowledge)

    • Upamanam (ingredients for analogy) resulting in Upamiti (comparative knowledge)

    • Sabdha (valid testimony) resulting in Sabdabodha (knowledge from such testimony)

    • Arthapatthi (ingredients for presumption) resulting in Arthapatthi (presumption)

    • Anupalabdhi (evidence of absence) resulting in Anupalabdhi (knowledge of absence)

    The Nyaya and Mimamsa Sastras conduct a detailed enquiry to test these sources for completeness, consistency and accuracy.

The same framework of enquiry can be imposed on the datasets that an AI model is trained on. Simply put, there need to be regulations on what categories of datasets can be used in training, and on how to check them for lawfulness, consistency and accuracy.

When trained on healthy datasets, the AI model will produce constructive outcomes.
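
As an illustration only, here is a minimal sketch of what such a regulator-mandated dataset audit might look like in Python. The `audit_training_data` function, the `source_type` column and the allowed source categories are all hypothetical, chosen to mirror the pramanam tests of lawfulness, completeness and consistency; they are not an established standard.

```python
import pandas as pd

# Assumed provenance categories a regulator might permit for training data.
ALLOWED_SOURCES = {"licensed", "public_domain", "consented"}

def audit_training_data(df: pd.DataFrame) -> dict:
    """Return a simple pass/fail report for a tabular training dataset."""
    report = {}
    # Lawfulness: every record must declare a permitted provenance.
    report["lawful"] = bool(df["source_type"].isin(ALLOWED_SOURCES).all())
    # Completeness: no missing values in the columns the model trains on.
    feature_cols = [c for c in df.columns if c not in ("label", "source_type")]
    report["complete"] = not df[feature_cols + ["label"]].isnull().any().any()
    # Consistency: no contradictory duplicates (same features, different label).
    report["consistent"] = bool((df.groupby(feature_cols)["label"].nunique() <= 1).all())
    return report

# Usage on a toy dataset:
df = pd.DataFrame({"age": [25, 40], "label": [0, 1],
                   "source_type": ["licensed", "consented"]})
print(audit_training_data(df))  # {'lawful': True, 'complete': True, 'consistent': True}
```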

    2. Biases — Dosah:

After a detailed enquiry into the sources of knowledge, our Sastras go on to create a framework to check for fallacies and biases in each of them.

They categorise the different dosah (biases and fallacies), and teach how to identify and rectify them so that the resulting knowledge is not corrupted.

AI is mainly subject to two kinds of bias: data bias and model bias. A data bias results from inaccurate or insufficient sampling of raw data, while a model bias is a bias in the algorithm itself. Both result in the AI generating incomplete, inaccurate or unfair outcomes.

The Naiyayika/Mimamsa framework for tackling biases can readily be applied to an AI model. A responsible AI model is one that has been tested for biases.
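
To make the two categories concrete, here is a hedged sketch of how a dosah check might be encoded. The function names, the demographic-parity-style gap test and the thresholds are assumptions for illustration, not a prescribed method.

```python
import pandas as pd

def has_data_bias(df: pd.DataFrame, group_col: str, min_share: float = 0.1) -> bool:
    """Data bias proxy: is any group under-represented in the training set?"""
    return bool((df[group_col].value_counts(normalize=True) < min_share).any())

def has_model_bias(df: pd.DataFrame, group_col: str, pred_col: str,
                   max_gap: float = 0.2) -> bool:
    """Model bias proxy: do positive-prediction rates diverge across groups?"""
    rates = df.groupby(group_col)[pred_col].mean()
    return bool(rates.max() - rates.min() > max_gap)
```

A regulator could require both checks to pass before deployment; the 10 per cent and 20 per cent thresholds here are placeholders that a real rule would have to fix deliberately.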

    3. Challenge — Sastrarth:

Bharath has had a long tradition of debate, or Sastrarth. Every thesis is put to challenge before a gathering of scholars who rigorously question the basis of the finding or thesis, point out any fallacies, and together arrive at a solution that perfects it.

This tradition of challenge acts as a guardrail, ensuring that an intelligent being arrives at the best possible outcome.

Contextualised to AI, this means that there must be prescriptions, or a forum, where AI models can be opened up to scrutiny and challenge. Perhaps a regulatory body can be set up for this purpose?

    4. Outcomes — Prayojanam:

मन एव मनुष्याणां कारणं बन्धमोक्षयोः (the mind alone is the cause of bondage and liberation for humans): this Upanishad vakyam explains the principle that, taken in the right direction, the mind becomes the instrument of liberation, and if not, the same mind becomes the instrument of bondage.

Our knowledge tradition places great importance on what the ideal outcome of a knowledge system should be, and sets out tools to arrive at it.

In the context of AI, the same AI model, when set on the right parameters, can give positive outcomes, and when not, can cause harm. So there must be directives that define the values AI systems are set on and that orient them towards constructive outcomes.

    There must be regulations prohibiting AI from generating certain outcomes. There must also be regulations to prescribe mandatory human intervention to avoid certain outcomes.

In unsupervised learning, the machine can take on a life of its own if there is no human supervision, and specifically, no human discernment at certain points. Human intervention may be required at certain checkpoints so that no decision made by an AI results in harmful outcomes.
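
One hypothetical shape such a checkpoint could take is sketched below: the model may propose, but designated high-stakes decisions require explicit human approval before they execute. The decision categories and the approval mechanism are assumptions made for illustration.

```python
# Assumed decision categories a regulation might designate as high-stakes.
HIGH_STAKES = {"medical", "credit", "legal"}

def execute_decision(category: str, model_decision: str, approve=input) -> str:
    """Run an AI decision, but gate designated categories behind a human."""
    if category in HIGH_STAKES:
        answer = approve(f"AI proposes '{model_decision}' ({category}). Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return "withheld: escalated to a human reviewer"
    return model_decision  # low-stakes decisions pass through unsupervised
```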

    5. Withdrawal — Sanyasa:

    A unique feature of our parampara is that, while there is a lot of importance placed on learning, importance is also placed on unlearning.

For humans, it is prescribed that at some point the mind must be calmed and withdrawn from all its stimuli. Sanyasa is prescribed when the mind is in equilibrium, so that the jiva can be prepared for its ultimate aim of moksha.

One may wonder how this can be applied to AI. In the context of AI, this would mean proscribing self-replication and setting regulations for when an AI model must be retired.

Sanyasa for an AI could mean the AI stopping all generation of insights or outcomes. The AI model could perhaps remain open so that there can be learnings from a ‘retired’ elder, but nothing more.
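
A speculative sketch of this lifecycle state in code: a retired model whose weights remain open for study while its generation path is disabled. The `RegulatedModel` class and its fields are hypothetical, not an existing API.

```python
class RegulatedModel:
    def __init__(self, weights: dict):
        self.weights = weights   # stays readable even after retirement
        self.retired = False

    def retire(self) -> None:
        """Withdraw the model: no further generation of outcomes."""
        self.retired = True

    def generate(self, prompt: str) -> str:
        if self.retired:
            raise PermissionError("retired model: open for study, closed for output")
        return f"output for {prompt!r}"   # stand-in for real inference

    def inspect(self) -> dict:
        """Learning from the 'retired elder' remains permitted."""
        return self.weights
```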

    6. Endpoint — Moksha:

    The journey of a human must culminate in brahmagnyanam (self-experience of the only existence, which is the paramatman) through agnyana nivritti (removal of ignorance).

What can be the end of an AI's journey? Should it be self-destruction, where the AI model, along with all its data, is deleted?

This is a very important question our regulations need to answer, as anything that survives forever will become too intelligent for its own, and the world's, good.
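
If regulations were to choose deletion, the end state might be sketched as follows; the `dissolve` helper, the file-based storage and the audit hook are all hypothetical.

```python
import os

def dissolve(model_path: str, data_paths: list[str], audit=print) -> None:
    """End-of-life: delete the model artefact and all its data, logging each removal."""
    for path in [model_path, *data_paths]:
        if os.path.exists(path):
            os.remove(path)
            audit(f"deleted {path}")
```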

    Akshaya Suresh is a Partner at the firm Commercial Law Advisors (CLA), Chennai. The views here are her own and do not reflect the firm’s views.

