The Advaita Vedanta school of Indian philosophy posits that every sentient mind is interconnected as part of a universal consciousness. Mapped onto modern technology, this could be viewed as the biological equivalent of the World Wide Web: computers connected over the internet. The web that we see today had its genesis in Ethernet, invented in 1973, and TCP/IP, adopted in 1983. How far away are we from a similar network of minds?
Controlling machines with thought is the first step, and Craig Thomas's 1977 sci-fi novel, Firefox (filmed in 1982), was among the first to predict thought-controlled aircraft. In the decades since then, technology has progressed to the point where we have thought-controlled wheelchairs, not just in research labs, but as a do-it-yourself project at Instructables! In fact, this technology has now reached the consumer level, and companies like Emotiv sell headsets that pick up electrical signals from the brain and work as an input device, much like a joystick controller for video games. The core technology behind all such devices is the ability to sense the brain's electrical signals non-intrusively and to discriminate between random noise and a signal corresponding to deliberate intention. While the problem is complex and non-trivial, it is well within the domain of data science and signal processing. As detection and analysis of these signals become more granular, the corresponding control systems will become more complex and sophisticated. Perhaps it is only a matter of time before such input devices become as common as a mouse, a touchpad or even a touch screen.
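The noise-versus-intention discrimination described above can be sketched with a toy energy detector. Everything here is synthetic and illustrative: the "intention" is a simulated 10 Hz burst injected into Gaussian background noise, and a window is flagged when its RMS energy clears a threshold. Real EEG pipelines use far more sophisticated filtering and classification, but the shape of the problem is the same.

```python
import math
import random

random.seed(42)

SAMPLE_RATE = 250   # Hz, in the range of consumer EEG headsets
WINDOW = 50         # samples per analysis window (0.2 s)

def synthetic_eeg(n_samples, burst_start, burst_len):
    """Baseline Gaussian noise with an injected 10 Hz oscillatory burst
    standing in for a deliberate 'intention' signal (purely illustrative)."""
    signal = []
    for i in range(n_samples):
        sample = random.gauss(0.0, 1.0)  # background noise
        if burst_start <= i < burst_start + burst_len:
            sample += 4.0 * math.sin(2 * math.pi * 10 * i / SAMPLE_RATE)
        signal.append(sample)
    return signal

def detect_intention(signal, threshold=2.0):
    """Flag each window whose RMS energy rises above the noise floor."""
    flags = []
    for start in range(0, len(signal) - WINDOW + 1, WINDOW):
        window = signal[start:start + WINDOW]
        rms = math.sqrt(sum(x * x for x in window) / WINDOW)
        flags.append(rms > threshold)
    return flags

signal = synthetic_eeg(1000, burst_start=500, burst_len=200)
flags = detect_intention(signal)
print(flags)  # only the windows covering the burst are flagged
```

Noise-only windows have an RMS near 1.0, while windows inside the burst sit near 3.0, so even this crude threshold separates the two cleanly; the hard part in practice is that real intention signals are nowhere near this strong relative to the noise.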
The problem of reaching out to, and then controlling, one mind from another is an order of magnitude more difficult because of the uncertainty at each end of the communication process. While physiology identifies the muscles that move a particular part of the body, say the hand, it is still not easy to determine the nature and intensity of the electrical signal that will make a specific muscle contract and the corresponding body part behave in a specific manner. However, the problem is certainly not intractable. As early as 2013, researchers at the University of Washington demonstrated a noninvasive human-to-human brain interface that allows one person to control the movement of another's hand. A more sophisticated system was demonstrated a year later, in which one person made another perform a specific action, such as operating a game console. An obvious medical spin-off from this research is technology that allows a paralyzed man to move his limbs again. The technique is also becoming easier to implement: as Greg Gage shows here, it can be surprisingly simple to hook one person's hand to another person's brain and have it controlled over the wire.
But can we move beyond muscle contraction and work with abstract thoughts and emotions? Can the pleasure of listening to music be conveyed to someone who cannot hear? Can the fear of impending death be felt by someone who is not dying? Can the answer to a mathematics problem be picked up from a person who knows it by someone who does not? In principle, it is only a matter of sensing and making sense of electrical signals, but the complexity of implementation is very high. Making a muscle contract with thought is as simple as pressing a switch in one room and having a bell ring in another. But reading another person's mind is like using a browser to access and understand the contents of files on a remote web server. However, with scientists like Phil Kennedy collecting data from their own brains, there is more than just hope.
If we map the problem to the domain of computer networks, then chronologically we are located somewhere between the invention of Ethernet (1973) and the adoption of the TCP/IP protocol (1983). We can transmit signals from one body to another. What we need next is a way to encode brain signals in a markup language, as HTML does for documents, and to access them through an HTTP-style application. Eventually, we would need the equivalent of the browser pioneered by Tim Berners-Lee (1989), one that interprets signals from diverse sources, and of an xAMP stack: PHP programs that extract MySQL data and serve it through an Apache server.
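What a "markup language for brain signals" would actually look like is anyone's guess. As a sketch only, the hypothetical format below (every element and attribute name is invented for illustration; no such standard exists) wraps a window of samples with just enough metadata, channel name and sampling rate, for another party to decode it:

```python
import xml.etree.ElementTree as ET

# Hypothetical markup for a window of brain-signal data. The "neuro" and
# "channel" elements are invented here purely to illustrate the idea of
# a self-describing, browser-readable encoding.
def encode_signal(channel, sample_rate_hz, samples):
    root = ET.Element("neuro", version="0.1")
    ch = ET.SubElement(root, "channel", name=channel, rate=str(sample_rate_hz))
    ch.text = " ".join(f"{s:.3f}" for s in samples)
    return ET.tostring(root, encoding="unicode")

def decode_signal(markup):
    """The receiving end: parse the markup back into usable data."""
    root = ET.fromstring(markup)
    ch = root.find("channel")
    return ch.get("name"), int(ch.get("rate")), [float(x) for x in ch.text.split()]

doc = encode_signal("C3", 250, [0.12, -0.48, 0.905])
print(doc)
name, rate, samples = decode_signal(doc)
```

The point of the analogy is not the syntax but the separation of concerns: a shared, self-describing encoding is what lets the "browser" end evolve independently of the "server" end, exactly as HTML did for the web.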
Are we running away with our imagination and talking science fiction again?
While abundant computing power helps, algorithms that make sense of information stored in unstructured data models are fiendishly difficult to build. One would initially need to decouple the browser from the server and work instead with a staging area, where a data-warehouse-style extract-transform-load process would unload data from one set of minds, rather like the Pensieve at Hogwarts in the Harry Potter novels! This data could then be accessed by another mind through a bionic eye, the device that helps blind people see by sending visual signals directly into the optic nerve. In fact, building a browser into a bionic eye could be an independent first step, allowing a person to browse the existing World Wide Web before venturing into the World Wide (mind)Web.
The English mystic poet William Blake's claim that "If the doors of perception were cleansed everything would appear to man as it is, Infinite" was explored by Aldous Huxley in his seminal work, The Doors of Perception, where he envisaged the use of psychotropic substances to open up the mind to an infinite global pool of thoughts and ideas. Today, we can replace narcotics with digital and biomedical technology.
From the invention of Ethernet in 1973, through the adoption of TCP/IP in 1983 and the creation of the browser in 1989, the World Wide Web came of age with the Netscape IPO in 1995. Viewed against this timeline, and adjusting for the acceleration of technology, we can expect the digital web to grow into the World Wide (mind)Web within the next ten years.
Originally published in the IOT section of TheStack