Concurrency in IoT: Don’t Try to Framework it Away

frank · Reactive Blocks

With modern programming languages, it’s no problem to handle lots of code and be productive. But there’s one exception that gets even the most seasoned programmer to sweat: concurrency! Luckily, for some applications, concurrency is merely an option to make code run faster. For IoT applications, however, concurrency is an inherent property, not an option, and critical for the whole system.

Let’s have a look at the challenges of IoT applications, especially those parts running on gateways. These are the points in a system that connect local sensors and actuators to the cloud. (Such gateways are also called concentrators, service gateways, or clients.) Examples are roadside stations, onboard units during transport, or home automation gateways.

Even if an application “only forwards data” between sensors, actuators and cloud services, several things are going on in parallel. IoT application gateways need to weave together streams of events that happen in their environment and that arrive at their interfaces. The environment produces data and requires output at its own speed. Another source of concurrency is that IoT application gateways need to operate autonomously. Without a user interface directly attached, they are controlled remotely, usually via the same network that is used to send application data. That’s another source of events that has to be handled.

Therefore, IoT application gateways are hubs of concurrent behavior. The main problem is: in IoT and M2M applications, concurrency is not something that we can just “framework away”. That is, it’s not a good idea to treat concurrency issues the same way for all applications, simply because applications are very different. For instance, suppose one application sends messages and the connection goes down. The system, however, does not stop producing more data. Should you start buffering all unsent messages? Only some? Only those of a certain importance, and only for a certain maximum time? It depends!

Other simplifications are usually not desirable, either. For instance, we can try to reduce the complexity of concurrency by building our application in a more sequential way, trying to do only one thing at a time. This, however, slows down the application’s response times. Instead, we should make it obvious how our application handles concurrency. That’s one of the reasons we built Reactive Blocks. Reactive Blocks makes it straightforward to express concurrency and build IoT and M2M applications.

So let’s have a look at how Reactive Blocks complements Java programming. Within Java methods, you can do everything that is possible in Java. When these methods are called, however, is determined by the graphical model. That makes it easier to describe how methods relate to each other. And it’s possible to do concurrent programming without coordinating many threads.

Simple Delays

Have a look at these two Java methods. They switch an LED on or off. To blink the LED, we can run them with a delay in between them. For that, we just add a timer:

[Image: concurrency-delay]

If we programmed this delay in code, we would need to twiddle with the thread that executes the methods. Nothing wrong with that, but it makes our application harder to understand. Plus, we would need to care about threads to begin with.
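To see what the timer block saves us from, here is a minimal sketch of the same blink in plain Java. The `ledOn` and `ledOff` methods are hypothetical stand-ins for the two LED methods from the example; in the sketch they just record their calls.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class BlinkSketch {
    static final List<String> log = new CopyOnWriteArrayList<>();

    // Hypothetical stand-ins for the two LED methods from the example.
    static void ledOn()  { log.add("on");  }
    static void ledOff() { log.add("off"); }

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        ledOn();
        // The delay between the two methods, expressed by hand instead of a timer block:
        scheduler.schedule(BlinkSketch::ledOff, 200, TimeUnit.MILLISECONDS);
        scheduler.shutdown();
        scheduler.awaitTermination(2, TimeUnit.SECONDS);
        System.out.println(log);   // [on, off]
    }
}
```

Even this tiny sketch drags in an executor, a shutdown protocol, and a wait for termination; the timer block in the graphical model hides exactly this machinery.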

Periodic Tasks

Similarly, if we have a periodic task that we want to execute, we can use a dedicated building block that triggers our method and all the behavior that comes after it. Here, for instance, we periodically check the temperature.

[Image: concurrency-periodic-timer]

Reactive Blocks also ensures that Java methods are executed one at a time, so that they do not accidentally affect each other.
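In plain Java, the periodic trigger roughly corresponds to a fixed-rate schedule on an executor. The following sketch assumes a hypothetical `checkTemperature` method; here it just counts how often it was triggered.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PeriodicSketch {
    static final AtomicInteger readings = new AtomicInteger();

    // Hypothetical stand-in for the temperature-reading method.
    static void checkTemperature() { readings.incrementAndGet(); }

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Trigger the method every 100 ms, like the periodic timer block:
        ScheduledFuture<?> task = scheduler.scheduleAtFixedRate(
                PeriodicSketch::checkTemperature, 0, 100, TimeUnit.MILLISECONDS);
        Thread.sleep(350);   // let a few periods pass
        task.cancel(false);
        scheduler.shutdown();
        System.out.println("readings: " + readings.get());
    }
}
```

Note that the single-threaded scheduler also gives the one-at-a-time guarantee mentioned above: two invocations of `checkTemperature` can never overlap.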

Joining

Then there’s joining: often we need to acquire data from several sources and combine it before proceeding. In the example below, we combine data from two sensors, temperature and humidity. The operation combine is called after both values arrive, no matter in which order. This way, we don’t need to check manually whether both values have arrived, and we don’t need to synchronize any threads either.

[Image: concurrency-joining]
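The same join pattern can be sketched in plain Java with `CompletableFuture.thenCombine`. The sensor reads below are hypothetical placeholders that simply return fixed values; in a real gateway they would block on actual hardware.

```java
import java.util.concurrent.CompletableFuture;

public class JoinSketch {
    static String combineReadings() {
        // Hypothetical sensor reads, each running on its own thread:
        CompletableFuture<Double> temperature =
                CompletableFuture.supplyAsync(() -> 21.5);
        CompletableFuture<Double> humidity =
                CompletableFuture.supplyAsync(() -> 48.0);

        // The combine step runs only after BOTH values have arrived, in either order:
        return temperature
                .thenCombine(humidity, (t, h) -> "T=" + t + " H=" + h)
                .join();
    }

    public static void main(String[] args) {
        System.out.println(combineReadings());   // T=21.5 H=48.0
    }
}
```

The graphical join block plays the role of `thenCombine` here: it remembers which branch has already delivered and fires the combining operation exactly once.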

Buffering

Another useful block is the buffer. It contains a standard Java data structure that stores incoming jobs so that they can queue up. It also keeps track of the state of the downstream process and provides it with a new job whenever it signals that it is ready for the next one. In the example below, the buffer on the right accepts messages to send. It hands them over to the serialization and sending blocks, giving them the time they need to send one message after the other.

[Image: concurrency-buffer]
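Under the hood, this is a classic producer–consumer arrangement. A minimal sketch with a `BlockingQueue`, assuming a hypothetical sender thread that records what it “sent” (a `STOP` sentinel ends the sketch):

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;

public class BufferSketch {
    static final List<String> sent = new CopyOnWriteArrayList<>();

    public static void main(String[] args) throws Exception {
        // Incoming messages queue up here while the sender is busy:
        BlockingQueue<String> buffer = new ArrayBlockingQueue<>(16);

        // Downstream sender: takes one job at a time, at its own pace.
        Thread sender = new Thread(() -> {
            try {
                String msg;
                while (!(msg = buffer.take()).equals("STOP")) {
                    sent.add(msg);   // hypothetical stand-in for serialize + send
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        sender.start();

        // Upstream producer hands over messages; they queue up if the sender is busy.
        buffer.put("temperature=21.5");
        buffer.put("humidity=48");
        buffer.put("STOP");   // sentinel to end the sketch
        sender.join();
        System.out.println(sent);
    }
}
```

The buffer block adds what this sketch leaves out: it tracks the downstream block’s readiness explicitly in the model, instead of relying on a thread blocking on `take()`.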

Limiting Traffic

The limiter is another block that takes time into account. It acts as a filter that observes the signals at its input and limits how many are forwarded. This is useful when a sensor can produce readings at an arbitrary rate, but your application should only process them at a certain rate. The example below is taken from our intruder detection example. It compares pictures taken with a camera to detect movements. The limiter prevents us from sending many SMS messages within a short time just because many pictures changed in quick succession.

[Image: concurrency-limiter]
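The core of such a limiter fits in a few lines: remember when the last signal was forwarded, and drop anything that arrives too soon after it. This is a hedged sketch of the idea, not the block’s actual implementation; timestamps are passed in explicitly to keep it testable.

```java
public class LimiterSketch {
    private final long minIntervalMillis;
    private long lastForwarded = Long.MIN_VALUE;

    LimiterSketch(long minIntervalMillis) {
        this.minIntervalMillis = minIntervalMillis;
    }

    /** Returns true if a signal arriving at time 'now' should be forwarded. */
    synchronized boolean tryForward(long now) {
        if (lastForwarded == Long.MIN_VALUE || now - lastForwarded >= minIntervalMillis) {
            lastForwarded = now;   // remember when we last let a signal through
            return true;
        }
        return false;   // too soon after the previous one: drop it
    }

    public static void main(String[] args) {
        LimiterSketch limiter = new LimiterSketch(60_000);   // at most one SMS per minute
        System.out.println(limiter.tryForward(0));        // true
        System.out.println(limiter.tryForward(10_000));   // false, too soon
        System.out.println(limiter.tryForward(70_000));   // true
    }
}
```

In the intruder detection example, `tryForward` would guard the block that triggers the SMS, so a burst of changed pictures produces at most one message per interval.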

These were some examples of how building blocks coordinate Java methods to handle concurrency. If you’d like to try it for yourself, install Reactive Blocks and get started!

