DC/OS as a platform for communication bridges

10 May 2016
By Rafael Zubairov, Senior Architect

Today IoT is expanding its boundaries and creeping into our daily lives. Devices are everywhere, ranging in size from tiny sensors to monstrous automated machinery. The internal architecture and design of each device constrain its behavior: the protocol, operational cycle, and timing are often dictated by batteries, the underlying hardware, and existing libraries. Given this variety of protocols and message types, a typical solution to unite them involves implementing bridge-adapters capable of transforming data into a common format.

A bridge is often seen as a pipe that accepts messages and passes them on to the next collector or adapter-bridge, with the central business logic sitting in the middle. That central logic is responsible for accepting and handling all the messages; replies are routed back through the same channels to the device or client. A typical solution requires several types of adapters to be deployed, and as the number of bridges grows, fault-tolerance requirements still have to be met. At this point, cluster maintenance and monitoring become an important task.

Deploying adapters should include automated installation, monitoring, and restarts in case of failure, which normally means the devops team has to spend time building the infrastructure and scripts. One possible solution is to use a DC/OS based cluster, which provides most of the required functionality out of the box: cluster monitoring, simplified deployment, application health checks, and scaling.
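On DC/OS, a bridge is described to Marathon as an app definition with a health check, and Marathon restarts the task when the check fails. Below is a minimal sketch of registering such an app through Marathon's REST API; the Marathon URL, app id, Docker image, and health-check values are illustrative placeholders, not the configuration used in our examples.

```python
# Minimal sketch: register a bridge as a Marathon app with a health check so
# that DC/OS restarts the task automatically on failure. The Marathon URL,
# app id, Docker image and health-check values are placeholders.
import requests

MARATHON_URL = "http://marathon.mesos:8080/v2/apps"  # assumed Marathon endpoint

bridge_app = {
    "id": "/iot/streamapn-bridge",                    # hypothetical app id
    "cpus": 0.5,
    "mem": 256,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "example/streamapn-bridge:latest"},  # placeholder image
    },
    "healthChecks": [{
        "protocol": "TCP",                            # probe the bridge's listening port
        "gracePeriodSeconds": 60,
        "intervalSeconds": 30,
        "maxConsecutiveFailures": 3,
    }],
}

response = requests.post(MARATHON_URL, json=bridge_app)
response.raise_for_status()
print("Deployment started for", response.json()["id"])
```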

Let’s look at a couple of examples of how one can leverage the power of a DC/OS based cluster and use its application deployment and monitoring for simplicity. We assume you already have a DC/OS based cluster; if not, you can find instructions on how to set one up on our blog or on the official DC/OS website.

In our examples we’ll connect scriptr.io with two data sources: streamed device data via IoT-X (streamapn) and IBM IoT Cloud. The first is a WebSocket-based data connector that provides data to any connected client. The bridge in this case connects to streamapn, consumes the data sent by devices, and sends messages to the scriptr platform.
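A minimal sketch of such a bridge is shown below, assuming the Python websocket-client and requests libraries; the streamapn feed URL, the scriptr endpoint, and the token are placeholders you would replace with your own values.

```python
# Bridge sketch: consume device messages from a WebSocket feed and forward
# them to scriptr. The feed URL, scriptr endpoint and token are placeholders.
import json
import requests
import websocket  # pip install websocket-client

STREAM_URL = "wss://streamapn.example.com/feed"            # hypothetical feed URL
SCRIPTR_URL = "https://api.scriptrapps.io/myBridgeScript"  # hypothetical script endpoint
SCRIPTR_TOKEN = "YOUR_SCRIPTR_TOKEN"

def on_message(ws, message):
    # Wrap the raw device payload into a common format and push it to scriptr.
    payload = {"source": "streamapn", "data": json.loads(message)}
    requests.post(
        SCRIPTR_URL,
        json=payload,
        headers={"Authorization": "bearer " + SCRIPTR_TOKEN},
    )

ws = websocket.WebSocketApp(STREAM_URL, on_message=on_message)
ws.run_forever()  # reconnection and error handling omitted for brevity
```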

The source code is available in the GitHub repository; here let’s highlight a couple of features. The bridge accepts commands from scriptr clients, such as message flow control and requests for stats, and it uses a ZooKeeper instance to store its current state. A Docker image is built and then published to Amazon EC2 Container Registry (ECR).
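The snippet below sketches how a bridge might keep that state in ZooKeeper using the kazoo client; the connection string and znode path are assumptions for illustration, not the values used in the repository.

```python
# Sketch of persisting bridge state in ZooKeeper with kazoo, so a restarted
# instance can resume where it left off. The connection string and znode
# path are placeholders.
import json
from kazoo.client import KazooClient  # pip install kazoo

zk = KazooClient(hosts="master.mesos:2181")  # assumed ZooKeeper address on DC/OS
zk.start()

STATE_PATH = "/bridges/streamapn/state"      # hypothetical znode
zk.ensure_path(STATE_PATH)

def save_state(state):
    zk.set(STATE_PATH, json.dumps(state).encode("utf-8"))

def load_state():
    data, _stat = zk.get(STATE_PATH)
    return json.loads(data.decode("utf-8")) if data else {}

# e.g. pause the message flow on a "stop" command from scriptr and record stats
save_state({"paused": True, "messages_forwarded": 1024})
print(load_state())
```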

The second example consists of IBM IoT Cloud devices, Node-RED (which consumes data from the devices and publishes it to a Mosquitto MQTT broker), a bridge deployed on DC/OS, and scriptr. In order to reach the Mosquitto broker deployed to DC/OS from outside the cluster, you will need to enhance your cluster with the marathon-lb package.
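The MQTT side of this bridge could look like the sketch below, assuming the paho-mqtt client; the broker host (resolved through the DC/OS internal DNS discussed next), the topic, and the scriptr details are placeholders.

```python
# Sketch of the second bridge: subscribe to the Mosquitto topic that Node-RED
# publishes to and forward each message to scriptr. Broker address, topic and
# scriptr details are placeholders.
import requests
import paho.mqtt.client as mqtt  # pip install paho-mqtt

BROKER_HOST = "mosquitto.marathon.mesos"                   # assumed internal DNS name
TOPIC = "iot/devices/#"                                    # hypothetical topic
SCRIPTR_URL = "https://api.scriptrapps.io/myBridgeScript"  # hypothetical endpoint
SCRIPTR_TOKEN = "YOUR_SCRIPTR_TOKEN"

def on_message(client, userdata, msg):
    # Forward the raw MQTT payload to the scriptr script.
    requests.post(
        SCRIPTR_URL,
        data=msg.payload,
        headers={"Authorization": "bearer " + SCRIPTR_TOKEN},
    )

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER_HOST, 1883)
client.subscribe(TOPIC)
client.loop_forever()
```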

When installed, the marathon-lb package deploys HAProxy to a public agent node; HAProxy then routes connections from the outside world to services running inside the cluster. The Mosquitto Marathon file contains a special label, causing Marathon to deploy it to the public agent and expose it through HAProxy. The business logic is deployed as a second container that uses the DC/OS internal DNS service to discover the IP and port of the Mosquitto broker.
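For illustration, the relevant parts of such a Marathon app definition might look like the sketch below: in a typical setup the "acceptedResourceRoles" field pins the task to a public agent and the HAPROXY_GROUP label tells marathon-lb to expose it, while internal clients resolve the broker through Mesos-DNS. All values here are assumptions, not the exact configuration from the example repository.

```python
# Sketch of the relevant parts of a Mosquitto Marathon app definition.
# Values are illustrative, not the exact configuration from the repository.
mosquitto_app = {
    "id": "/mosquitto",
    "cpus": 0.5,
    "mem": 128,
    "instances": 1,
    "acceptedResourceRoles": ["slave_public"],   # run the task on a public agent node
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "eclipse-mosquitto",        # placeholder broker image
            "network": "BRIDGE",
            "portMappings": [
                {"containerPort": 1883, "servicePort": 10883},  # assumed service port
            ],
        },
    },
    "labels": {"HAPROXY_GROUP": "external"},     # picked up by marathon-lb / HAProxy
}

# Inside the cluster the bridge container can bypass HAProxy and resolve the
# broker directly through Mesos-DNS, e.g. "mosquitto.marathon.mesos".
```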

Example source code is available on GitHub.

