DeviceHive Predictive Maintenance Application with SAP

13 July 2016
By Gleb Popov, Senior Java Developer at DataArt

The DeviceHive project development team has explored the possibility of integrating the DeviceHive data platform with SAP HANA DB. DeviceHive lets third-party IoT device developers concentrate on business tasks and the distinctive features of a project by shifting data and device management to the platform. SAP HANA DB provides analytical tools integrated with data storage.

This approach can be used as a reference architecture solution based on devices that use Ubuntu Core. In our example, the devices collect data from the sensors and transmit it to the cloud for further analysis.

The DeviceHive platform allows us to collect data sent by devices in various ways. One of the most convenient, available immediately after server installation, is the data stream exposed through an Apache Kafka server. The data flow available from Kafka topics contains device notifications. Aggregating and analyzing the sensor data on the fly makes it possible to build a real-time monitoring system.
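As a toy illustration of such on-the-fly aggregation, a sliding-window average over incoming sensor readings might look like this. This is a minimal sketch independent of any Kafka client; the class and method names are ours, not part of DeviceHive:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Maintains a moving average over the last N sensor readings --
// the kind of aggregation a real-time monitor would compute
// on the notification stream as values arrive.
class SlidingAverage {
    private final int windowSize;
    private final Deque<Double> window = new ArrayDeque<>();
    private double sum;

    SlidingAverage(int windowSize) {
        this.windowSize = windowSize;
    }

    // Add one reading and return the current window average.
    double add(double value) {
        window.addLast(value);
        sum += value;
        if (window.size() > windowSize) {
            sum -= window.removeFirst();
        }
        return sum / window.size();
    }
}
```

A monitor built on top of this would compare each returned average against a threshold and raise an alert when it is exceeded.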

[Diagram: DeviceHive_scheme1]

We discovered several possible ways to integrate DeviceHive with SAP:

  1. Use SAP HANA Smart Data Streaming and implement a custom data adapter reading from Kafka;
  2. Use Spark as a bridge between Kafka and SAP HANA;
  3. Write a custom application for the SAP Cloud Platform that reads directly from the remote Kafka queue and stores data in the local SAP HANA DB.

SAP SDS

In the case of SAP HANA, the solution can use one of several available data-ingestion mechanisms. The SDS package is native to the SAP platform and allows developing a custom data adapter. This solution makes it possible to create a comprehensive real-time analytics system using the rich set of features provided by Smart Data Streaming.

This approach seemed to work best, as it requires only one integration point (class). The implemented adapter would be configurable and reusable, and threshold processing would be performed inside Smart Data Streaming.

This approach has one possible disadvantage: using Smart Data Streaming requires a license. If the user already has a license, this approach is certainly the best.

Spark Streaming

An alternative solution is Spark Streaming, which offers functionality similar to SDS. Spark Streaming makes it possible to use general-purpose programming languages as well as built-in libraries. Because those libraries are available, a Spark-based solution allows complex analytical computations and integration of the data flow with other systems.

Architecturally, HANA’s Smart Data Access component connects to Spark SQL through the ODBC driver and issues SQL queries for retrieving data. The resulting data set is treated as a remote table in HANA, and it can be used as the input for all advanced analytics functionality in HANA, including modeling views, SQLScript procedures, and predictive and text analytics.

Custom Application

This is the simplest approach and does not require any additional software, access, or changes on the SAP cloud platform. In this case, a Kafka connector is part of a web application that visualizes device notifications and alert data.

We use the simplest approach as our integration example, leaving the alternative connectivity options described above as directions for further research and development.

Implementation

[Diagram: DeviceHive_scheme2]

For our R&D we chose the custom application approach, as it is based on open-source solutions and gives more integration freedom. We didn’t have any specific requirements limiting us in this case, so this option appeared to be faster and more universal. However, we encourage our users to consider the other options as project requirements dictate.

The above diagram shows DeviceHive sending sensor diagnostic information to the Kafka Notifications queue. Spark processes the Notifications queue data and puts the results back into the Alerts queue. Two Kafka connectors on the SAP cloud platform listen to the Notification and Alert queues and put the data into HANA DB tables. The Web UI retrieves data from those tables through Ajax calls to UiServlet and displays charts using the D3 JS library.
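Stripped of the Spark plumbing, the processing step boils down to a per-record rule: compare each notification’s value against a threshold and emit an alert when it is exceeded. The sketch below illustrates that rule in plain Java; the class names and the alert message format are ours, and the real job would read from and write to Kafka instead of lists:

```java
import java.util.ArrayList;
import java.util.List;

class ThresholdRule {
    // Minimal stand-in for a device notification record (hypothetical).
    static class Reading {
        final String deviceGuid;
        final double value;
        Reading(String deviceGuid, double value) {
            this.deviceGuid = deviceGuid;
            this.value = value;
        }
    }

    // Emit one alert line per reading whose value exceeds the threshold --
    // conceptually the per-record rule applied to the Notifications stream
    // before the results land in the Alerts queue.
    static List<String> apply(List<Reading> readings, double threshold) {
        List<String> alerts = new ArrayList<>();
        for (Reading r : readings) {
            if (r.value > threshold) {
                alerts.add(r.deviceGuid + " exceeded " + threshold + " with " + r.value);
            }
        }
        return alerts;
    }
}
```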

Hibernate was used as the DB interaction layer with two Entity classes:

The Notification class maps device diagnostic information; the “LatestNotifications” named query filters the notifications of a device with the given GUID within the requested time period for display on the UI.

    @Entity
    @Table(name = "Notification")
    @NamedQueries({
        @NamedQuery(name = "AllNotifications", query = "select n from Notification n"),
        @NamedQuery(name = "LatestNotifications",
            query = "select n from Notification n " +
                    " where n.timestamp > :fromTime " +
                    "   and n.deviceGuid = :deviceId " +
                    " order by n.timestamp")
    })
    public class Notification {
        @Id
        @GeneratedValue
        private Long id;

        @Basic
        private String deviceGuid;
        @Basic
        private String notification;
        @Basic
        private Timestamp timestamp;
        @Basic
        private String mac;
        @Basic
        private String uuid;
        @Basic
        private Double value;
        ...
    }

The Alert class maps alert/threshold data; the “LatestAlerts” named query filters out alerts for a device with the given GUID within the requested time period.

    @Entity
    @Table(name = "Alert")
    @NamedQueries({
        @NamedQuery(name = "AllAlerts", query = "select a from Alert a"),
        @NamedQuery(name = "LatestAlerts",
            query = "select a from Alert a " +
                    " where a.timestamp > :fromTime " +
                    "   and a.deviceGuid = :deviceId " +
                    " order by a.timestamp")
    })

    public class Alert {
        @Id
        @GeneratedValue
        private Long id;

        @Basic
        private String deviceGuid;

        @Basic
        private Timestamp timestamp;
        ...
    }

Don’t forget to bind the Java application to the DB schema via a connection to HANA DB:

    public class LocalEntityManagerFactory {
        private static EntityManagerFactory emf;

        public static void init() {
            if (emf == null) {
                try {
                    // Look up the default HANA data source provided by the platform
                    InitialContext ctx = new InitialContext();
                    DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/DefaultDB");

                    Map<String, Object> properties = new HashMap<>();
                    properties.put(PersistenceUnitProperties.NON_JTA_DATASOURCE, ds);
                    emf = Persistence.createEntityManagerFactory("kafka-consumer", properties);
                    System.out.println("EMF created");
                } catch (NamingException e) {
                    e.printStackTrace();
                }
            }
        }
    }
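The “kafka-consumer” persistence unit referenced above also has to be declared in `META-INF/persistence.xml`. A minimal sketch of that file might look as follows; the EclipseLink provider is assumed (since `PersistenceUnitProperties` comes from EclipseLink), and the entity class names are given without packages, as in the snippets above:

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
    <persistence-unit name="kafka-consumer" transaction-type="RESOURCE_LOCAL">
        <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
        <!-- The two entity classes mapped in this post -->
        <class>Notification</class>
        <class>Alert</class>
    </persistence-unit>
</persistence>
```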

Message consumption from Kafka:

    public class ConsumerGroup {
        ...

        public ConsumerGroup(String zookeeper, String groupId, String notificationTopic,
                             String alertTopic) {
            notificationConsumer = Consumer.createJavaConsumerConnector(
                createConsumerConfig(zookeeper, groupId));
            alertConsumer = Consumer.createJavaConsumerConnector(
                createConsumerConfig(zookeeper, groupId));
            this.notificationTopic = notificationTopic;
            this.alertTopic = alertTopic;
        }

        ...

        private static ConsumerConfig createConsumerConfig(String zookeeper, String groupId) {
            Properties props = new Properties();
            props.put("zookeeper.connect", zookeeper);
            props.put("group.id", groupId);
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());
            return new ConsumerConfig(props);
        }
    }


