26 Jan 2016

Android as an IOT platform

The term “platform” is used loosely by programmers and consultants in the information technology business. Under various circumstances, TCP/IP, Linux, Oracle and Java are all referred to as platforms even though they are neither similar nor comparable to each other. Even within the narrower domain of IOT we can look at platforms from at least three directions. First, we have device-level frameworks backed by hardware vendors, like Qualcomm’s AllJoyn, Intel’s IoTivity, Apple’s HomeKit and Android/Brillo from Google. Second, we have data transport protocols: XMPP, used in instant messaging (IM); MQTT, a publish/subscribe model for messages; DDS, another pub/sub model for data distribution services; and AMQP, the Advanced Message Queuing Protocol. Finally, we have integrated, cloud-based platforms from big and small companies -- IBM Bluemix, Carriots, n.io, thethings.io, ThingWorx and many others -- that claim to provide end-to-end solutions for transferring information from one machine to another.

All this is very confusing for any programmer who has built traditional, multi-tier applications with a human user at the front and an RDBMS at the back. How does such a programmer get into the exciting world of IOT? What are the components to understand and work with?

Let’s break up the problem into four components. First, we need a sensor A that will detect a physical property like temperature, air pollution or blood sugar and generate a digital signal. Second, we need a mechanism B that supports data transfer. Third, we need a device C, a processor with an OS, that supports a device driver for A and a programming language that can interface with B. Finally, we need a program D to receive the data and store it in a persistent database. It is now our job to identify the components A, B, C and D.

A, the sensor, would be very domain specific and would have to be sourced from specialist hardware vendors. Many kinds of sensors are readily available along with their device drivers for Linux, Windows and Android.

An important requirement of component C is that it should be small, portable and low on power consumption, yet provide continuous data connectivity. While dedicated devices can always be built to meet these requirements, IOT enthusiasts often begin with Arduino or Raspberry Pi. But an inexpensive Android device is perhaps a better option because not only does it meet these basic requirements, it also supports a wide range of sensors natively. Moreover, given the frenzy around Android development, there are many tutorials available, and android-tagged questions on Stack Overflow are answered quickly! So for the choice of C, Android is a good option.

Moving on to B, the data transport mechanism, the two simplest options are XMPP and MQTT. Both are available as open-source implementations and work comfortably on the IP networks that an Android device routinely connects to using either WiFi or 3G/4G data service. XMPP, formerly known as Jabber, is the basis of many instant messaging or “chat” services, like Google Chat, and there are free servers and services available for building and testing applications. XMPP also offers the security of a login and a password along with the concept of authorised “friends” or “contacts”, but this can become a challenge if we have to allocate and manage IDs for each and every machine in the IOT network. MQTT -- originally MQ Telemetry Transport, developed at IBM before being released as an open standard -- offers a convenient publish/subscribe model where any device can publish messages under a topic to a central broker, like Mosquitto or HiveMQ, and those messages are delivered to any other device that subscribes to the same topic on the same broker. Both XMPP and MQTT have Java and Python libraries that allow applications written in either language to transmit and receive data as text strings. These libraries are available not only for Windows and Linux but can also be used in Android apps.
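The publish/subscribe model described above can be sketched in a few lines of Python. What follows is a minimal, in-memory stand-in for a broker like Mosquitto, meant only to illustrate topic-based routing; the topic names and payloads are invented for the example, and a real application would use an MQTT client library against an actual broker.

```python
# Minimal in-memory illustration of MQTT-style topic routing.
# A stand-in for a real broker such as Mosquitto; topics are made up.

from collections import defaultdict

class TinyBroker:
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # Deliver the payload to every subscriber of this topic
        for callback in self.subscribers[topic]:
            callback(topic, payload)

broker = TinyBroker()
received = []

# Component D subscribes to a topic...
broker.subscribe("home/kitchen/temperature",
                 lambda t, p: received.append((t, p)))

# ...and component C publishes sensor readings to it.
broker.publish("home/kitchen/temperature", "23.5")
broker.publish("home/garage/humidity", "61")  # no subscriber, silently dropped

print(received)  # [('home/kitchen/temperature', '23.5')]
```

The key property this captures is that publisher and subscriber never know each other’s address -- they only agree on a topic string, which is precisely what makes the model convenient for machine-to-machine networks.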

With these libraries it is now entirely possible to write applications on component C that will transmit data. Corresponding applications on component D, which could be a standard Linux server running Python, Java and either SQLite or MySQL, would receive the data, decode and process it with business logic and store it in a persistent database for subsequent analysis and display.
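The receive-decode-store step on component D can be sketched with nothing more than the Python standard library. The comma-separated message format and device names below are assumptions for illustration; in a real deployment this function would be called from the on-message callback of the XMPP or MQTT library rather than invoked directly.

```python
# Sketch of component D: decode an incoming text payload and persist it.
# The "device_id,sensor,value" message format is assumed for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")  # a file path would be used in production
conn.execute(
    "CREATE TABLE readings (device_id TEXT, sensor TEXT, value REAL, "
    "received_at TEXT DEFAULT CURRENT_TIMESTAMP)"
)

def store_reading(message):
    """Decode 'device_id,sensor,value' and insert it into the database."""
    device_id, sensor, value = message.split(",")
    conn.execute(
        "INSERT INTO readings (device_id, sensor, value) VALUES (?, ?, ?)",
        (device_id, sensor, float(value)),
    )
    conn.commit()

# In practice these calls would come from the transport library's
# message callback; here we feed in two sample payloads directly.
store_reading("android-07,temperature,23.5")
store_reading("android-07,humidity,61")

rows = conn.execute("SELECT device_id, sensor, value FROM readings").fetchall()
print(rows)  # [('android-07', 'temperature', 23.5), ('android-07', 'humidity', 61.0)]
```

Swapping SQLite for MySQL, or adding business logic before the insert, changes nothing in the shape of this pipeline -- which is why a standard Linux server with Python and a relational database is all that component D requires.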


If we leave aside the integrated, proprietary, cloud-based platforms, then the Android platform with applications developed with either XMPP or MQTT libraries is a very viable option. Why do we say so? First, Android devices are dirt cheap, available at retailers like Walmart for US$10; remove the unnecessary audio and video components and wholesale prices could drop to half of that. Second, the gigantic and ever-expanding community of Android app developers represents a huge pool of talent and expertise that can be leveraged inexpensively. Third, Android already has native support for all popular data transfer mechanisms -- 3G/4G, WiFi, Bluetooth, NFC -- and is likely to support anything else that comes in the future. Fourth, Android devices routinely support many kinds of sensors for gathering biometric, motion, position and environmental data, and, thanks to pressure from app builders, there are many third-party sensors in the market for biomedical and additional environmental data, all of which are very useful for IOT applications.

But the fifth and most important reason to bet on an Android-based IOT is the massive ecosystem around it. Google has not only invested US$3.2 billion in acquiring the home automation company Nest but has also thrown its weight behind a brand new product, Brillo, that is based on Android and extends it with Weave, a data communication platform for IOT. Of course, not all Google products are equally successful -- Google+ and Google Wave are nowhere near as popular as its search engine, Gmail or YouTube -- but with an 80+% market share Android is miles ahead of the competition among the 2+ billion smartphones in the market today.

This article was originally published in theStack

21 Jan 2016

Building the World Wide (mind)Web

The Advaita Vedanta school of Indian philosophy posits that every sentient mind is interconnected as part of a universal consciousness. Mapped into modern technology, this could be viewed as the biological equivalent of the World Wide Web consisting of computers connected over the internet. The web that we see today had its genesis in Ethernet, invented in 1973, and TCP/IP, adopted in 1983. How far away are we from a similar network of minds?

Controlling machines with thought is the first step, and Craig Thomas’s 1977 sci-fi novel, Firefox, was among the first to imagine a thought-controlled aircraft. In the four decades since then, technology has progressed to the point where we have thought-controlled wheelchairs, not in research labs, but as a do-it-yourself project at Instructables! In fact, this technology has now reached the consumer level, and companies like Emotiv sell headsets that pick up electrical signals from the brain and work as input devices, similar to joystick controllers for video games. The core technology behind all such devices is the ability to sense electrical activity in the brain non-intrusively and to discriminate between random noise and signals corresponding to deliberate intention. While the problem is complex and non-trivial, it is well within the domain of data science and signal processing. As detection and analysis of these signals become more granular, the corresponding control systems will become more complex and sophisticated. Perhaps it is only a matter of time before such input devices become as common as a mouse, a touchpad or even a touch screen.

The problem of reaching out to, and then controlling, one mind from another is an order of magnitude more difficult because of the uncertainty at each end of the communication process. While physiology identifies the muscles that influence a particular part of the body, say the hand, it is still not easy to determine the nature and intensity of the electrical signal that will cause a specific muscle to contract and make the corresponding body part behave in a specific manner. However, the problem is certainly not intractable. As early as 2013, researchers at the University of Washington demonstrated a noninvasive human-to-human brain interface that allowed one person to control the movement of another person’s hand. A more sophisticated system was demonstrated a year later, in which one person made another perform a specific action, like operating a game console. An obvious medical spin-off from this research is technology that allows a paralysed man to move his limbs again. The technology is also becoming easier to implement: as Greg Gage shows, it can be surprisingly simple to hook one person’s arm up to signals from another person’s brain and have it controlled over the wire.

But can we move beyond muscle contraction and work with abstract thoughts and emotions? Can the pleasure of listening to music be conveyed to someone who cannot hear? Can the fear of impending death be felt by someone who is not dying? Can the answer to a mathematics problem be picked up by someone who does not know the answer from another person who does? In principle, it is only a matter of sensing and making sense of electrical signals but the complexity of  implementation is very high. Making a muscle contract with thought is as simple as pressing a switch in one room and having a bell ring in another. But reading another person’s mind is like using a browser to access and understand the contents of files on a remote web server. However, with scientists like Phil Kennedy collecting data from their own brains, there is more than just hope.

If we map the problem to the domain of computer networks, then chronologically we are located somewhere between the invention of the Ethernet (1973) and the adoption of the TCP/IP protocol (1983). We can transmit signals from one body to another. What we need next is to encode brain signals with a markup language like HTML and access them through an HTTP application. Eventually, we would need a browser, pioneered by Tim Berners-Lee (1989), that interprets signals from diverse sources and also the equivalent of an xAMP stack -- PHP programs that extract MySQL data and serve it through an Apache server.

Are we running away with our imagination and talking science fiction again?

While abundant computing power helps, algorithms to make sense of information stored as unstructured data models are fiendishly difficult to build. One would initially need to decouple the browser from the server and work instead with a staging area, where a data-warehouse-style extract-transform-load process would unload data from one set of minds, like the Pensieve in the Harry Potter novels! This data could then be accessed by another mind through a bionic eye, a device that already helps blind people see by sending visual signals directly to the optic nerve. In fact, building a browser into a bionic eye could be an independent first step, allowing a person to browse the existing World Wide Web before venturing into the World Wide (mind)Web.

The English mystic poet William Blake’s claim that “If the doors of perception were cleansed everything would appear to man as it is, Infinite” was explored by Aldous Huxley in his seminal work, The Doors of Perception, in which he envisaged the use of psychotropic substances to open the mind to an infinite global pool of thoughts and ideas. Today, we can replace narcotics with digital and biomedical technology.

From the invention of Ethernet in 1973, through the adoption of TCP/IP in 1983 and the creation of the browser in 1989, the World Wide Web came of age with the Netscape IPO in 1995. Viewed against this timeline, and adjusting for the acceleration of technology, we can expect the digital web to be cast into the World Wide (mind)Web within the next ten years.

Originally published in the IOT section of TheStack