


Artificial Intelligence: Can AI Ever Become Conscious, And Other Burning Questions

  • Can AI systems ever become “conscious”? In other words, can an AI system ever wonder, ‘Who am I?’
  • A look at this and other important questions from the world of artificial intelligence.

Arnab BhattacharyaMar 14, 2022, 08:26 PM | Updated 10:30 PM IST

Photo by Tara Winstead from Pexels


This is Part 2 of the two-part article covering the basics of artificial intelligence. Read Part 1 here.

We will round off the paradigms of artificial intelligence (AI) with important application areas.

AI techniques have shown significantly better performance on many natural language processing tasks, including machine translation and question answering, than hand-coded rules written by linguists or other computer science methods.

Unfortunately, the above is true only for resource-rich languages such as English and other Western European languages. Research is under way for Indian languages, which lack large amounts of annotated data (that is, data with tags).

In computer vision, most sub-tasks such as object identification and background removal are routinely done by AI systems now. Robotics uses a lot of AI, from video processing to planning to reasoning to speech generation. It is this use of AI in robotics that has fuelled popular imagination in the form of science-fiction books and movies.

Issues in AI

We next discuss what AI can or cannot do and should or should not do.

Can AI systems ever become “conscious”? In other words, can an AI system ever wonder, ‘Who am I?’, the eternal philosophical question that has tormented human beings since time immemorial?

Intelligence and consciousness are different, and a system can act or think humanly or rationally without the need for being conscious. Many think that this is the crucial difference between an AI system and a human being, and why an AI system can never be “human.”

Close to this issue are the questions of whether an AI system can “feel”, have “emotions”, or really “think”. It can certainly act emotionally, but that is no more than simulation, like an actor playing a part in a drama.

These questions, however, arise from our inherent tendency to equate AI systems with humans. As already discussed, that may not be the only way to evaluate AI systems. Our inability to clearly define the various terms needed for an objective evaluation (for example, what does “thinking” really mean) only adds to the debate.

The ethics of AI systems is another burning issue. While there can be some general guidelines, ethics is notoriously difficult to specify unambiguously and varies across cultures and eras.

A particularly striking case is that of lethal autonomous weapons, which kill or injure human subjects without human supervision. In principle, the technology to develop such machines is already in place. While it is understandable that nations may refrain from deploying such systems, non-state actors and terrorists may not have any such qualms.

While, on the one hand, the development and use of such machines for warfare seems unethical, on the other hand, it may be argued that their use reduces the need for human soldiers and, thus, saves lives.

The question is, thus, of ethics: whose lives are more important and who controls that decision.

Fairness of AI systems is yet another hotly debated topic.

Consider a loan-approval AI system that analyses the different attributes of a person and decides whether the person should get a loan or not. Intuitively, the system should be fair with respect to individuals and groups. A group can be based on race, skin colour, religion, or gender and ideally should not affect the loan decision process.

Fairness has several aspects:

  • Individual fairness: two individuals belonging to different groups should receive the same decision if their non-group attributes are the same.
  • Group fairness: two groups should be treated similarly.
  • Equal outcome: two groups should have roughly the same percentage of loan approvals.
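As a toy illustration of checking group fairness (the applicants, incomes, and decision rule below are all made up for the example, not drawn from any real lending system), one can simply compare approval rates across groups:

```python
# Toy check of group fairness for a loan decider.
# All data and the decision rule are hypothetical.

def decide(applicant):
    """A stand-in loan rule: approve if income is at least 50."""
    return applicant["income"] >= 50

applicants = [
    {"group": "A", "income": 60},
    {"group": "A", "income": 40},
    {"group": "A", "income": 70},
    {"group": "B", "income": 55},
    {"group": "B", "income": 30},
    {"group": "B", "income": 45},
]

def approval_rate(group):
    """Fraction of a group's applicants that the rule approves."""
    members = [a for a in applicants if a["group"] == group]
    return sum(decide(a) for a in members) / len(members)

rate_a = approval_rate("A")  # 2/3: two of three group-A applicants approved
rate_b = approval_rate("B")  # 1/3: one of three group-B applicants approved
# Group fairness (in the "equal outcome" sense) asks these rates to be
# roughly equal; the gap here signals a potential fairness violation,
# even though the rule never looks at the group attribute directly.
```

Note that the rule above never reads the `group` field, yet the outcomes still differ across groups, which is why fairness must be measured on outcomes rather than assumed from the absence of the group attribute in the model.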

Additionally, the aspects of equal opportunity and equal impact try to correct the biases in the training data by imposing overall global constraints. An interesting case of bias is data bias where different categories have different sample sizes, thereby nudging the AI system to optimise its performance by concentrating on the larger-sized category.
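The data-bias effect can be sketched with made-up numbers: if one category has 90 samples and the other only 10, a degenerate model that always predicts the larger category already scores 90 per cent accuracy while getting every minority sample wrong:

```python
# Hypothetical imbalanced dataset: 90 "majority" labels, 10 "minority".
labels = ["majority"] * 90 + ["minority"] * 10

# A degenerate model that ignores its input and always predicts
# the larger category.
predictions = ["majority"] * len(labels)

# Overall accuracy looks impressive...
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# ...but the model recovers none of the minority samples.
minority_recall = sum(
    p == y for p, y in zip(predictions, labels) if y == "minority"
) / 10

print(accuracy)         # 0.9
print(minority_recall)  # 0.0
```

This is why an AI system optimised only for overall accuracy is nudged toward the larger-sized category: the score it is chasing barely registers the smaller one.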

Researchers have repeatedly exposed such biases in AI systems. For example, a well-known face recognition system performed quite accurately for white males but poorly for black females.

Fairness leads to another important dimension, that of explainability or interpretability of an AI system.

When a loan is refused to a person, can the system explain to them why it was refused? This is especially important in legal AI systems. No society in the world would like a black-box AI system to sentence a person without proper explanation of which laws and statutes are violated and how.

Deep neural networks, while quite accurate, suffer the most from this problem; they are notoriously hard to explain. A large section of current AI research is, thus, focused on building systems that are both accurate and explainable.
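As a toy contrast to such black boxes (the rules and thresholds below are invented for illustration, not a real legal or lending system), a rule-based decider can name the exact rule behind each refusal:

```python
# Toy explainable loan decider: every refusal cites the rule it triggered.
# The rules and thresholds are hypothetical.
RULES = [
    ("income below 30", lambda a: a["income"] < 30),
    ("existing debt above 80", lambda a: a["debt"] > 80),
]

def decide_with_explanation(applicant):
    """Return (approved, reasons): approved only if no rule is violated."""
    reasons = [name for name, violated in RULES if violated(applicant)]
    approved = not reasons
    return approved, reasons

approved, reasons = decide_with_explanation({"income": 25, "debt": 90})
print(approved)  # False
print(reasons)   # ['income below 30', 'existing debt above 80']
```

A deep network offers no such ready-made list of reasons for its output, which is exactly the gap that explainability research tries to close.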

Future of AI

We end this small introduction to artificial intelligence by speculating on its future. Given the current pace and amount of monetary and other resources put into AI research, it is conceivable that in the near future almost all human activities will be influenced by artificial intelligence.

This includes not only unskilled and semi-skilled jobs but also highly skilled occupations. IBM Watson, for example, can help doctors and, in many straightforward cases, can act as the doctor itself.

AI systems have started doing tasks that were thought to be innately human, such as drawing pictures, composing music, writing poems and stories, and even designing other AI systems. Where does all this lead?

An immediate question is that of jobs. More intelligent automation systems will lead to massive job disruptions across the globe. Organisations and nations are grappling with how to tackle this change. An AI tax and a minimum basic income are on the cards.

A relevant sociological question is: if humans lose jobs, what will they do with all the time on their hands, assuming a minimum basic income takes care of their basic needs? Preventing an idle mind from becoming a devil’s workshop is indeed a challenge.

A different view, though, argues that AI systems will raise the demand for more skilled jobs including building and controlling AI systems themselves.

Finally, if humans can build AI systems that are more intelligent than humans themselves (Deep Blue and AlphaGo, for example, defeated the best human players at a game), those AI systems can, in turn, build even more intelligent AI systems. This could spiral into ever more intelligent AI systems that start controlling humans instead of the other way round. Many books and movies feature this science-fiction theme.

How different is this from the ultimate omniscient, omnipresent Ishwara? We leave the readers to ponder.

This concludes the two-part article on artificial intelligence. Read Part 1 if you haven't already.

This article has been published as part of Swasti 22, the Swarajya Science and Technology Initiative 2022. We are inviting submissions towards the initiative.

