Aravindan Neelakandan
Aug 23, 2025, 04:00 PM | Updated 12:05 PM IST
The scene is simple: a volunteer sits at a table, their real right hand hidden from view by a partition. Where they would expect to see it, there is instead an eerily lifelike rubber hand.
An experimenter then begins to stroke both the rubber hand and the hidden real hand with two small paintbrushes, touching the exact same spot on each, in perfect synchrony. This precise method, known as synchronous visuo-tactile stimulation, is the key.
Within a minute or two, something extraordinary happens. Most participants report a bizarre and compelling sensation: they start to feel that the fake hand is their own.
The illusion is often capped with a dramatic climax. Without warning, the experimenter might threaten the rubber hand with a hammer. The effect is instantaneous: participants gasp and instinctively pull their real, hidden hand away, reacting as if their own flesh and blood were in peril.
This is the famous "Rubber Hand Illusion" (RHI), a powerful perceptual illusion first formally documented by psychiatry researchers Matthew Botvinick and Jonathan Cohen in 1998. It has since become a cornerstone experiment in cognitive neuroscience and the study of embodiment.
What does this bizarre phenomenon, the RHI, tell us about how the brain builds our reality?
It reveals the complex multisensory integration processes at work in our brains. The brain creates a coherent sense of self by combining information from different senses: primarily vision, along with touch (somato-sensation) and the sense of body position (proprioception).
In the RHI, the brain is presented with conflicting data: proprioception says the hand is in one place, whilst vision sees a hand-like object being touched in another.
The synchronous stroking creates a statistically strong correlation between the visual and tactile streams. Faced with this conflict, the brain makes a "best guess" and resolves the discrepancy by "believing" the visual input. It "captures" the sense of touch, leading to the illusion of ownership over the rubber hand.
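For readers who want the logic of that "best guess" spelled out, here is a minimal sketch, in Python, of the standard precision-weighted cue-combination idea often used to model it. The positions and noise values are purely illustrative assumptions, not data from the experiment.

```python
def fuse_cues(mu_vision, var_vision, mu_proprio, var_proprio):
    """Precision-weighted fusion of two position estimates (in cm).

    Each cue is treated as a Gaussian; the fused 'best guess' weights
    each cue by its reliability (inverse variance), so a sharp visual
    cue 'captures' a noisier proprioceptive one.
    """
    w_vision = (1 / var_vision) / (1 / var_vision + 1 / var_proprio)
    mu_fused = w_vision * mu_vision + (1 - w_vision) * mu_proprio
    var_fused = 1 / (1 / var_vision + 1 / var_proprio)
    return mu_fused, var_fused

# Illustrative numbers: vision reports the (rubber) hand at 0 cm with low noise;
# proprioception reports the hidden hand at 15 cm with higher noise.
mu, var = fuse_cues(mu_vision=0.0, var_vision=1.0, mu_proprio=15.0, var_proprio=9.0)
print(f"fused hand position ≈ {mu:.1f} cm")  # drifts toward the rubber hand
```

Because the visual estimate is treated as far more reliable than the proprioceptive one, the fused estimate drifts toward the rubber hand, which is the "proprioceptive drift" that RHI studies measure.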
So what are all the brain parts involved in manufacturing this illusion?
Neuroimaging studies have identified the brain regions involved in this process. The parietal cortex is associated with processing the visual and tactile information, whilst the premotor cortex is activated as the feeling of ownership emerges.
This shows that the brain is actively updating its body-centred self-image based on sensory evidence. The RHI has been used to explore how the brain distinguishes self from non-self and how this process can be altered in clinical disorders, such as in stroke patients who may disown a paralysed limb.
The RHI is a vital tool for studying consciousness because it provides a direct, manipulatable window into the sense of body ownership, a fundamental aspect of self-consciousness. It demonstrates that our feeling of owning our body is not a fixed, innate truth but rather a dynamic and malleable perception constructed by the brain.
Is this unique to humans?
Studies have successfully demonstrated that the illusion can be induced in non-human primates, particularly rhesus monkeys. In experiments analogous to the human studies, monkeys have been shown to incorporate a fake or virtual arm into their own body representation.
Behavioural evidence shows that when the illusion is induced, a monkey's self-directed movements (e.g., reaching for its own hand) are displaced toward the position of the fake hand, mirroring the proprioceptive drift seen in humans. This provides crucial evidence that this aspect of self-consciousness has deeper evolutionary roots.
But how deep? Deeper than non-human primates as well?
In 2016, Japanese neurobiologists extended the study of body ownership to a non-primate mammal: the mouse. To accommodate its different anatomy, the experimenters adapted the RHI by using a rubber tail instead of a hand.
In these studies, a mouse's real tail and a fake rubber tail were stroked either synchronously or asynchronously. The key behavioural measure was the animal's defensive withdrawal movements when the rubber tail was grasped or threatened.
The results provided the first empirical evidence that mice can experience a "rubber tail illusion" (RTI), suggesting they possess a sense of body ownership over their tails. This finding was significant as the first demonstration of this aspect of self-consciousness in rodents, expanding the known scope of this neurological process beyond primates.
So the evolutionary roots of the illusion extend beyond non-human primates into other mammals. Do they go deeper than mammals too?
Now, in 2025, two Japanese scientists, Sumire Kawashima and Yuzuru Ikeda, have reported a path-breaking study showing that the plain-body octopus (Callistoctopus aspilosomatis) also falls for the same illusion.
In the experiment, a captive octopus was situated in a tank where one of its arms was concealed from view by an opaque partition. A realistic, soft gel replica of an octopus arm was placed in its line of sight, positioned over the real, hidden arm.
The illusion was induced through synchronous tactile stimulation. A researcher used plastic calipers to simultaneously stroke both the hidden real arm and the visible fake arm. After approximately eight seconds of this congruent visuo-tactile feedback, the researcher would pinch the fake arm with tweezers.
The octopuses' reactions were definitive: in all 24 trials under this condition, the six participating animals exhibited strong defensive responses, including instantaneous changes in skin colour and texture, retraction of the real arm, or fleeing the area entirely. These are behaviours indicative of a perceived threat to their own body.
Crucially, the experiment included control conditions that demonstrated the specificity of the illusion's requirements. When the stroking was performed non-simultaneously, or when there was no stroking at all prior to the pinch, the defensive reactions vanished.
Furthermore, the illusion failed when the posture of the fake arm was incongruent with the real arm's likely position.
This latter point, as noted by observer Peter Godfrey-Smith, is particularly striking, as it suggests the octopus possesses not just a sense of ownership, but a "rich body image", an internal model of its body's configuration in space.
That an octopus can experience this illusion is a staggering revelation, one that bridges a 500-million-year evolutionary chasm. Here we have an invertebrate whose nervous system is profoundly different from our own (highly decentralised, with two-thirds of its neurons in its arms) demonstrating a cornerstone of self-awareness: a sense of body ownership.
This discovery strongly suggests that the neural architecture required for self-perception may be a stunning example of convergent evolution. It implies that distinguishing "self" from "other" is such a fundamental challenge for any creature navigating the world that evolution has solved it more than once.
As researcher Yuzuru Ikeda observes, the illusion cleverly exposes both the power and the peril of this solution. The brain's ability to integrate sensory data is a vital survival tool, allowing an organism to predict and anticipate. Yet this predictive mechanism can be subverted when sensory signals conflict, making the mind susceptible to illusion.
Ultimately, this "flaw" is the most illuminating part of the finding. It reveals that the octopus's sense of self, and perhaps our own, is not a static, concrete property. Instead, it is a dynamic inference, a story the brain constantly tells itself based on the coherence of the data it receives from the world.
The discovery that an octopus can be tricked into owning a fake arm is startling, but then how flexible is our own sense of self? If an invertebrate brain can be so readily fooled, what are the limits for the famously adaptable human mind?
The answers may lie not in the behavioural experiments of the aquarium but in the virtual world, where an experiment tries to "create" what can be called the "Human Octopus". Though that may conjure up the ominous image of Dr. Otto Octavius from the Spider-Man films, in reality it is not scary.
Participants were fitted with a VR headset and hand trackers and given a simple task: catch green cubes as they fell from above. But in this virtual space, the experimenters could give them more than two hands, many more, to see if the brain could learn to control, and even embody, a body it never evolved to have.
In one key setup, participants found themselves controlling three pairs of virtual hands simultaneously. The inner pair mirrored their real hand movements exactly. The middle and outer pairs, however, had their movements amplified, moving 1.5 and 2.0 times farther, respectively. This clever design acted like a superpower, allowing users to cover a huge virtual area with small, efficient motions.
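To make that design concrete, here is a rough sketch of the kind of gain mapping the description implies: one tracked real hand drives three virtual hands whose displacements from the shoulder are scaled by 1.0, 1.5 and 2.0. The Vec3 type, the shoulder origin and the function names are illustrative assumptions, not the study's actual code.

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

    def scale_about(self, origin: "Vec3", gain: float) -> "Vec3":
        """Amplify this point's displacement from `origin` by `gain`."""
        return Vec3(
            origin.x + (self.x - origin.x) * gain,
            origin.y + (self.y - origin.y) * gain,
            origin.z + (self.z - origin.z) * gain,
        )

# Gains for the three virtual hand pairs: the inner pair mirrors the real hand,
# the middle and outer pairs move 1.5 and 2.0 times farther.
PAIR_GAINS = {"inner": 1.0, "middle": 1.5, "outer": 2.0}

def virtual_hand_positions(real_hand: Vec3, shoulder: Vec3) -> dict:
    """Map one tracked real hand to the three virtual hands it drives."""
    return {pair: real_hand.scale_about(shoulder, g) for pair, g in PAIR_GAINS.items()}

# A 20 cm real movement becomes a 40 cm reach for the outer virtual hand.
print(virtual_hand_positions(Vec3(0.2, 0.0, 0.0), Vec3(0.0, 0.0, 0.0)))
```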
The results were remarkable. In the most difficult, high-speed part of the task, the six-handed subjects were significantly more successful, catching nearly 10% more cubes than those with just two hands. But the most stunning finding came from the participants' own experience. They overwhelmingly reported that the extra hands were helpful and, in a profound statement of embodiment, that "all three pairs of hands felt equally as their own".
But what does it take to break this powerful illusion? The researchers found that the brain's acceptance of new body parts is incredibly fragile and depends on a few non-negotiable rules.
First, they introduced a tiny delay, a lag of just 15 to 30 frames, between the user's real-world movement and the reaction of the outer virtual hands. This subtle asynchrony was catastrophic for the illusion. The sense of ownership evaporated.
Next, the researchers tested the limits of the brain's spatial reasoning. They gave subjects four pairs of hands, but this time they were arranged in bizarre, non-anatomical positions and rotations. Some were even controlled by head movements. This chaotic setup proved to be a cognitive nightmare. Task performance plummeted, becoming the worst of all the conditions tested.
This research provides a clear and powerful framework for understanding how our brain builds our sense of self. It's not a fixed, binary state of "me" versus "not me." Instead, it's a constantly updated calculation, a graded phenomenon exquisitely sensitive to the quality of sensory feedback.
When the stream of data from our eyes and our motor commands is coherent, synchronous, and spatially logical, the brain readily expands its definition of self to include new, even fantastical, body parts. But when that data stream is violated by lag or spatial chaos, the illusion breaks. The appendage is instantly reclassified from "self" to "other," a mere tool to be used.
Synthesising the findings from these two disparate lines of research, one concerning an invertebrate cephalopod in a tank, the other a primate in a virtual world, reveals a profound, convergent principle about the nature of embodied consciousness.
The fact that both an octopus and a human can be tricked into accepting a fake limb as their own, and that this illusion is contingent on the same core variable (a continuous stream of synchronous and spatially congruent multisensory feedback) points to a universal biological logic for constructing the self.
This logic can be conceptualised as what may be called the "Plausibility Principle" of consciousness. The nervous system, regardless of its specific architecture, does not appear to build its body schema by referencing a fixed, genetically encoded anatomical blueprint.
Instead, it seems to operate as a dynamic inference engine, continuously running a predictive model that implicitly asks a single question: "Given the statistical coherence of the multisensory data I am currently receiving, is it plausible that this object is part of my self?"
The step-by-step operation of this principle can be reconstructed by synthesising the data from these two experiments, drawn from entirely distinct domains.
Neither the octopus nor the human subject has any prior experience that would predispose them to accept the experimental appendage. The octopus has never encountered a disembodied gel arm, and the human has never possessed six arms. Their systems are operating on first principles, not learned templates.
The trigger for embodiment in both cases is the quality of the data stream. For the octopus, it is the synchronous visuotactile stroking; for the human, it is the synchronous visuomotor feedback from the VR system. This coherent data stream makes the "plausible" answer to the brain's implicit question "yes".
The illusion is reliably broken when the data stream becomes incoherent. Asynchronous stroking for the octopus and temporal or spatial asynchrony for the human introduce a statistical anomaly (a prediction error) that the brain cannot reconcile. The answer to the plausibility question becomes "no," and the limb is dis-embodied.
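These three observations can be caricatured as a single decision rule. The sketch below is a toy formalisation of the Plausibility Principle as described here; the 15-frame threshold is borrowed loosely from the VR study's break point, and the function is illustrative, not a model used by either research team.

```python
def is_embodied(stroking_synchronous: bool, spatially_congruent: bool,
                lag_frames: int, max_lag_frames: int = 15) -> bool:
    """Toy decision rule for the 'Plausibility Principle'.

    An appendage is accepted as 'self' only while the multisensory data
    stream stays coherent: synchronous stimulation (or visuomotor feedback),
    low temporal lag, and a spatially plausible posture. Violating any one
    condition reclassifies the limb as 'other'.
    """
    return stroking_synchronous and spatially_congruent and lag_frames < max_lag_frames

# A coherent data stream: the fake limb is annexed into the body schema.
print(is_embodied(True, True, lag_frames=2))    # True

# A 30-frame delay, as in the VR manipulation, breaks the illusion.
print(is_embodied(True, True, lag_frames=30))   # False

# An anatomically impossible posture, as in the octopus controls, also breaks it.
print(is_embodied(True, False, lag_frames=2))   # False
```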
The "Plausibility Principle" then describes a consciousness that builds the self from the outside in, constantly asking, "Is this plausible?" as it integrates the external world. It is an expansionist model of the self, capable of annexing new territory (a rubber arm, a virtual limb) based on coherent sensory data.
But this raises a profound question: What happens if this process is reversed? What if, instead of adding sensory components to the self, one systematically removes them?
The historical record offers a remarkable, if rare, account of such an inversion.
In July 1896, a 16-year-old boy in Madurai, India, was seized by a sudden, overwhelming death anxiety. Instead of panicking, he conducted a radical experiment. The boy lay down and systematically reversed the Plausibility Principle. Rather than seeking external data to confirm his self, he began to discard it. He dramatised the event of his own death, holding his body rigid and his breath still. He then turned his attention inward with a piercing inquiry: "This body is dead... But with the death of this body, am 'I' dead? Is the body 'I'?".
By intentionally dis-identifying with his body, senses, and even the stream of thoughts, he stripped away every component that the "Plausibility Principle" uses to construct the self. What remained was not a void, but a core awareness. He described the realisation as a direct perception of "living truth" that flashed before him without intellectual argument.
The "I" he discovered was not the body-bound ego but a Self, a current of awareness that was the "only real thing" in that state. This experience permanently eradicated his death anxiety or any deep-seated fear related to death thereafter and became the stable foundation of his consciousness.
The boy Venkatraman who became famous as Sri Ramana Maharshi, later explained this state:
Other thoughts might come and go like the various notes of a musician, but the ‘I’ continues like the basic or fundamental I note which accompanies and blends with all other notes
This presents a fascinating paradox.
The neuroscience of embodiment reveals what can be called an "expansionist self": one that is built outward by integrating external objects and tools. It grows by adding.
Yet Sri Ramana Maharshi's experience points to a "Self" beyond and behind the accumulative, constructed "self". This "Self" is discovered not by addition but by subtraction, by dis-identifying with every component, innate or acquired, until only the fundamental awareness of "I" remains. That awareness animates the "self". That awareness is the "Self". The "self" is constructed from the outside in; the "Self" is revealed from the inside out.
This inward path, which Indian traditions call "Sadhana", seeks reality by stripping away illusion, and it leads us directly to one of philosophy's most powerful conceptual paradoxes for describing our constructed reality.
The paradox is that this Self is universal: once realised in oneself, it is recognised in all. The "Self" attained by withdrawing from all cognitive inputs and turning inwards thus becomes, once realised, the Universal Self, providing the basis for universal empathy and compassion.
The empirical discovery that our sense of self is a constructed, malleable illusion resonates deeply with another important cornerstone of the Hindu-Buddhist worldview: the doctrine of Māyā.
The term Māyā is often translated as "illusion", but its meaning is far more nuanced. In a context similar to the present one, physicist-author Fritjof Capra explains the conceptual framework that Māyā provides.
Its original meaning in the early Vedic worldview was the "magic creative power" by which the world is "brought forth" in the divine play (lila) of Brahman. "The myriad forms we perceive are all brought forth by the divine actor and magician." Over time, Capra notes, the word's meaning shifted:
From the creative power of Brahman it came to signify the psychological state of anybody under the spell of the magic play. As long as we confuse the material forms of the play with objective reality, without perceiving the unity of Brahman underlying all these forms, we are under the spell of maya.
This dual understanding is key.
On one hand, Māyā is the creative force that generates the phenomenal world. On the other, it is the veil that generates naive reality. This second meaning (being "under the spell") is a state of false identification with the body constructed through the processes of the brain, which obscures the true, unchanging Self (Atman-Brahman). In this veiled state, actions centred on the constructed self lead to suffering.
The classic allegory used to explain this psychological state is that of a man walking down a darkened road who mistakes a coiled rope for a snake. The perception of the snake is an illusion, a misinterpretation of incomplete sensory data. However, the experience of fear (the pounding heart, the quickened pulse, the surge of adrenalin) is entirely real.
The subjective reality of the experience is brought forth by an objective falsehood.
Once a light is shone and the rope is seen for what it is, the snake vanishes forever. Māyā, therefore, is the internal veil that causes us to see the "snake" of a separate, enduring self and a solid, independent world, where in reality there is only the "rope" of an underlying, unified existence.
This worldview finds a powerful analogue in the contemporary cognitive-science framework of predictive processing, of which the plausibility principle described above is one expression.
"Predictive processing" posits that the brain is not a passive organ that simply processes incoming sensory data. Instead, it is active and predictive, constantly generating a model of the world and the self from the top down. It uses this model to make predictions about the likely causes of its sensory inputs. These predictions are then compared against the actual sensory data flowing in from the bottom up.
What we consciously perceive is not the raw, noisy sensory data itself, but the brain's "best guess", its predictive model. The only information that flows upward in the neural hierarchy is the "prediction errors": the discrepancies between what the brain expected and what it received, which are then used to update and refine the model for the next cycle. In this view, perception is a form of controlled creative illusion, constrained by reality.
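A toy, single-variable version of this update loop makes the idea concrete. The learning rate and the numbers are illustrative assumptions; real predictive-processing models are hierarchical and far richer.

```python
def predictive_update(belief: float, sensory_input: float, learning_rate: float = 0.3) -> float:
    """One cycle of a minimal predictive-processing loop for a single quantity
    (say, where the hand is).

    The model's prediction is compared with the incoming signal; only the
    prediction error travels 'upward' and nudges the belief for the next cycle.
    What is 'perceived' is the belief, not the raw input.
    """
    prediction_error = sensory_input - belief
    return belief + learning_rate * prediction_error

# The model believes the hand is at 15 cm; synchronous visual evidence keeps
# insisting it is at 0 cm (the rubber hand). The belief drifts toward the evidence.
belief = 15.0
for _ in range(10):
    belief = predictive_update(belief, sensory_input=0.0)
print(f"belief after 10 cycles ≈ {belief:.2f} cm")
```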
When this neuroscientific model is placed alongside the concept of Māyā, the parallels are undeniable.
The brain's predictive engine is the biological manifestation of Māyā's "magic creative power". It is the "divine actor and magician" that brings forth the myriad forms we perceive.
Let us look back again:
The experiments are literal, empirical demonstrations of being "under the spell of Māyā". The subject in the RHI, the RTI, the octopus study, or the "Human Octopus" study falsely identifies with a rubber hand, a rubber tail, a gel arm, or a virtual hand. This occurs because the predictive model of whatever neurological apparatus the organism possesses is being fed a stream of plausible, coherent sensory data that minimises prediction error, producing the "best guess" that the fake limb is part of the self. The model blurs the boundary between the "material forms of the play" and objective reality. This blurring is done to optimise survival.
The subjective experience of ownership is the "dependent reality" (Mithya) described in Vedanta. It is a real, felt experience that is entirely dependent on the illusion being maintained by the manipulated data stream. This experience obscures the objective truth that the hand is, in fact, made of rubber or pixels.
The rope-and-snake allegory maps perfectly onto the cognitive mechanism of the RHI. In the allegory, incomplete visual data (a coiled shape in the dark) leads the brain's predictive model to settle on a high-threat interpretation (snake), which generates a real physiological experience (fear).
In the RHI, incomplete or manipulated sensory data leads the brain's predictive model to settle on a plausible interpretation (this is my hand), which generates a real neurological experience (ownership). The underlying cognitive process is identical: the brain's reality engine settles on an interpretation that best fits the available evidence, regardless of its objective truth.
This synthesis reframes Māyā from a purely metaphysical or spiritual problem into a functional, biological one. The "Veiling of Māyā" is the brain's solution to the intractable problem of creating a stable, coherent, and actionable model of the self and the world from an unending deluge of noisy, ambiguous, and incomplete sensory information.
The veil is not a flaw but a necessary and brilliant evolutionary adaptation. Māyā is then not mystical speculation but an evolutionary schematic for a biological reality engine, providing a powerful new foundation for designing technologies that can interface with it directly.
By understanding the "Veiling of Māyā" as the brain's predictive, reality-generating operating system, we can move from passive observation to active engineering. The principles that govern the construction and deconstruction of the self, as revealed in the octopus and human experiments, are not merely scientific curiosities; they are the user manual for designing a new generation of embodied AI and human-machine interfaces (HMIs).
The goal is not to create AI that is separate from the user, but to design systems that can be seamlessly integrated within the user's own phenomenal reality, to hack the brain's plausibility engine and co-opt its mechanisms of self-construction for the purpose of human augmentation. This requires a design philosophy grounded in three core principles derived directly from the neuro-phenomenological model discussed above.
The future envisioned by this framework is one of programmable embodiment. An AI bot designed with these principles in mind would be less like a device and more like a cloud of potential morphologies that can be instantiated and embodied on demand.
Drawing inspiration from the "Human Octopus" concept of adaptable, supernumerary limbs and the rapid progress in creating realistic, controllable embodied AI avatars, we can project a future where an AI system learns the unique neural and physiological patterns of its user.
For a surgeon, this AI could manifest as a pair of steady, microscopic manipulators for a delicate operation, embodied with full intuitive control through a high-bandwidth bidirectional interface.
For an architect, it could become a swarm of virtual drones that fly through a digital model of a building, with the architect's sense of self distributed across the swarm to gain an intuitive, holistic understanding of the space.
For a data scientist, the AI could generate an abstract, non-humanoid virtual body with sensory appendages designed to "feel" the contours of a multi-dimensional dataset.
The path to such a cybernetic future is fraught with significant challenges.
The first and most formidable is the "reality gap", the immense difficulty for an AI, trained primarily on digital data, to contend with the messy, unpredictable, and infinitely complex physics of the real world. An AI bot must be able to robustly perceive, plan, and act in unstructured environments, a problem that remains a major research frontier in robotics and AI.
Second, the technological requirements for creating a truly seamless HMI are immense. Achieving sub-perceptual latency across a high-bandwidth, bidirectional interface requires enormous computational resources, novel sensor and actuator technologies, and breakthroughs in our ability to safely and reliably interface with the human nervous system. Whilst progress in brain-computer interfaces, advanced haptics, and wearable biosensors is rapid, integrating these systems into a single, coherent, and user-friendly platform is an engineering challenge of the highest order.
Finally, the deep integration of AI with the human psyche raises profound ethical, legal, and social questions.
As the line between user and machine blurs, so too does the line of responsibility. Who is liable when a psyche-integrated AI bot, acting on the subconscious intent of its user, causes harm?
How do we protect the sanctity of mental privacy when an AI has direct access to a user's neural data?
What are the psychological and societal consequences of living in a world where the self is fluid and programmable?
These are not merely technical problems but deep philosophical challenges that require careful consideration and the development of new ethical and legal frameworks to govern this powerful technology.
In the context of the philosophical and ethical problems associated with such cybernetic integration, the framework of moving from the withering of the "self" to the Self, as exemplified by Ramana Maharshi, offers a radical alternative.
The journey from the constructed self to the universal Self is not one of outward expansionism, but of inward movement. It involves piercing the Veil of Māyā to realise that the individual consciousness (Atman) is identical to the universal consciousness (Brahman). When this occurs, the illusion of separation dissolves, and the natural state of being is revealed as one of unity.
This shift in identity has a profound ethical consequence: the spontaneous emergence of universal empathy and compassion. When the distinction between "self" and "other" is seen as illusory, the well-being of others becomes inseparable from one's own. This provides an intrinsic ethical framework for a cybernetic being.
The question of liability is reframed, as an action causing harm would be experienced as self-harm, making such actions inherently undesirable. The ethical safeguards would not be externally programmed rules but the natural expression of a non-dual consciousness.
This framework also offers a solution to the problem of integration itself.
The technical challenges of latency and feedback are attempts to convince the egoic self to accept a machine as "not-other." But for the universal Self, there is no "other". A cybernetic limb would be no more foreign than a biological one, as both are simply manifestations within a unified awareness.
By shifting the goal from tricking the ego to understanding it as a vehicle of Consciousness, we could bypass the root cause of psychological rejection and create a truly seamless and ethically coherent cybernetic integration.
Journal References:
Sumire Kawashima, Yuzuru Ikeda, Rubber arm illusion in octopus, Current Biology, 21 July 2025, 35(14), pp. R702-R703, ISSN 0960-9822, https://doi.org/10.1016/j.cub.2025.05.017.
Makoto Wada et al., The Rubber Tail Illusion as Evidence of Body Ownership in Mice, Journal of Neuroscience 26 October 2016, 36 (43) 11133-11137; https://doi.org/10.1523/JNEUROSCI.3006-15.2016
Sander Kulu, Madis Vasser, Raul Vicente Zafra, Jaan Aru, The Human Octopus: Controlling supernumerary hands with the help of virtual reality, bioRxiv, https://doi.org/10.1101/056812. (Though this paper on virtual hands is not peer-reviewed, it is sound and has no contentious issues as such.)