
OpsLog (Operations Log)

The OpsLog sample is a set of operations tools for a JSON message based logging system, the same logging that Piper Push Cache itself uses. Included are a simple operations console for real-time monitoring of errors and warnings on a web page, as well as sample code in three languages for tools that write and process JSON based log files. A discussion of the sample can be viewed at Building Realtime, Live-Updating Displays with JavaScript and Piper, a JSON Push Cache.

As with Piper, the logging system is message based and uses NATS messaging. Applications log fatal, error, warning, info, and debug messages by publishing a JSON message. Messages are collected by a log writer and by Piper Push Cache. The log writer writes all log messages into a running file for processing and analysis. Piper makes log entries available for viewing on a web page.

As applications such as the mockapp generate log events, they are published on NATS and received by Piper. Piper writes all the log messages to a change log file as it pushes errors and warnings to the operations console. The mill application can be used to process the change log, filtering and formatting the output. Examples of using mill include filtering for just errors and warnings or simply converting the log records into a classic format.

Log Messages

Applications are expected to publish log messages as a JSON object with the following fields.

kind Always set to 'Log', indicating the JSON is a log entry. The sole purpose is to make it easy to identify the JSON and make filtering more efficient.
level A fixnum corresponding to the severity of the log message. Values map to severity with 0 for fatal, 1 for error, 2 for warning, 3 for info, and 4 for debug.
when Seconds since epoch GMT. Up to 9 decimal places can be used for additional precision.
who A string identifying the application that produced the log message.
where A string indicating where the log entry was produced within an application. This could be simply the module name or a file and line number. Piper entries show the feature the error occurred in.
what A string describing the reason for the log message.
tid An optional string that can be used for tracking across a set of processing steps on the same entity. 'tid' stands for Tracking Identifier. Use is similar to a package tracking-id in a delivery service. Piper log records include this field to associate a connection with the log entry.
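A log entry with the fields above can be built and serialized in a few lines of Ruby. This is a minimal sketch: the field names come from the list above, while the example values (the 'mockapp' name, the 'worker.rb:42' location, and the 'order-1234' tracking identifier) are invented for illustration.

```ruby
require 'json'

# Build a log entry using the fields described above.
entry = {
  'kind'  => 'Log',                # always 'Log' for log entries
  'level' => 2,                    # 2 maps to a warning
  'when'  => Time.now.to_f,        # seconds since epoch, fractional digits for precision
  'who'   => 'mockapp',            # hypothetical application name
  'where' => 'worker.rb:42',       # hypothetical file and line number
  'what'  => 'queue depth approaching limit',
  'tid'   => 'order-1234'          # optional tracking identifier
}
msg = JSON.generate(entry)
puts msg
```

The resulting string is what an application would publish on a NATS subject such as 'log.mock' for Piper and the log writer to collect.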

Log File

The log file consists of multiple log messages written as separate JSON documents, one after another in the file. Some languages such as Java or Python do not handle this well, as their built-in JSON parsers don't support more than one JSON document in a file, but Ruby, Go, and C can deal with a stream of separate JSON documents easily.
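Reading such a file in Ruby can be sketched as below. This assumes each JSON document sits on its own line, which the sample does not guarantee; a writer that pretty-prints entries across multiple lines would need a streaming parser instead.

```ruby
require 'json'

# Iterate over a log file that is a sequence of separate JSON documents,
# assuming one document per line. Blank lines are skipped.
def each_log_entry(path)
  File.foreach(path) do |line|
    line = line.strip
    next if line.empty?
    yield JSON.parse(line)
  end
end
```

Processing one entry at a time this way keeps memory use flat no matter how large the log file grows.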

Piper Push Cache has several options for managing groups of records. Since log records are like any other record, they can be written to the change log. The change log format includes not only the message but a record identifier. The piper mill application can be used to extract the log messages from the change log. It is an expanded version of the Ruby, Go, and C tools in this sample, with many more options, and is faster due to optimizations made over the sample versions.


There are two aspects to logging. Log entries are produced, and after that they must be available for analysis. Analysis is done more easily if all the log entries are in one place, hence the desire for centralized logging. Entries with structure also make analysis easier. Relatively unstructured syslog entries are much more difficult to extract information from than structured entries. JSON is a well understood and supported format for structured data.

As logs can become large, it is more efficient to process them one entry at a time. Having a log file that is a sequence of separate JSON entries allows each entry to be read separately. An alternative would be to make the log file one single JSON array, but that approach is less fault tolerant and requires huge amounts of memory, as the whole log must be pulled into memory in order to process the file.


The OpsLog sample is a demonstration of using messages for logging and collecting those messages in a centralized application where they can be viewed using a web browser and by inspecting log files. The messaging system used is NATS. The gnatsd server must be installed before running the OpsLog sample.


The gnatsd server can be started in the background using the default settings. This will allow applications to connect to it on localhost:4222.

$> gnatsd &

Next, Piper Push Cache is started. Start it from the downloaded piper folder.

$> bin/piper -c samples/opslog/opslog_cfg.json

This will start piper listening on port 7660 for HTTP GET requests and on the NATS subject 'log.>'. The opslog_cfg.json file specifies these parameters, and it contains several elements key to this sample.

The first notable section is the 'nats/subscriptions' element. It contains the only subscription Piper is interested in for this sample: Piper will listen for messages on subjects starting with 'log.'. When it receives a JSON message on such a subject it will assign it a record identifier starting with 'log-' and ending in a unique, incrementing pseudo time stamp. The 'id_key' value indicates that the record identifier should be added to the incoming JSON in a field named 'id'.

The next section of interest is the 'groups' element. This section assigns a group to the incoming log messages. Each group has a different set of behaviors. The error-warn group keeps a maximum of 100 records, writes those records to the change log, and caches them so they can be delivered to a web page. After that there is the default group that handles any other message, such as info or debug entries. These are written to the change log but not cached.

The last notable section is the 'change_log' section. Logging is active and will write to the file 'log/piper-changes.json'. This is the log file that will be analyzed later in the run.
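Putting the three sections together, the shape of the configuration can be sketched as follows. The element names ('nats/subscriptions', 'id_key', 'groups', 'change_log'), the subject, the port, the group behaviors, and the file path all come from the discussion above, but the exact key names and nesting are assumptions; the authoritative version is samples/opslog/opslog_cfg.json in the download.

```json
{
  "http": { "port": 7660 },
  "nats": {
    "subscriptions": [
      { "subject": "log.>", "id_prefix": "log-", "id_key": "id" }
    ]
  },
  "groups": [
    { "name": "error-warn", "max_records": 100, "change_log": true, "cache": true },
    { "name": "default", "change_log": true, "cache": false }
  ],
  "change_log": { "active": true, "file": "log/piper-changes.json" }
}
```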


Seeing error, warning, and fatal log entries immediately is important for an operator. In this sample a simple web page is provided for that purpose. In a browser of your choice open the location localhost:7660. A page with an empty list should be presented.

What has happened is that the JavaScript behind the page has made a subscription to all JSON records with a 'kind' field with a value of "Log". An onmessage() function in the connect.js file handles the incoming records, which are placed on a cached list. The list is cached in a hidden HTML element, and changes to the list are drawn on the list canvas. The JavaScript code is self contained and does not pull in any other libraries.

Mock Application

The system is ready to start processing at this point. All that is left is to start up one or more applications that use message based logging. In this sample the mock application is written in Ruby but many other languages support sending NATS messages and could be used as well.

Before the mockapp provided can be used, all the opslog sources should be compiled or moved to the bin directory. This is accomplished by changing to the 'sample/opslog' directory and running 'make'.

$> cd sample/opslog
$> make

Running 'make' copies the Ruby mockapp and millr into the 'sample/opslog/bin' directory. If your environment is able to compile the C or Go sources, the C and Go versions of mill can be built in the millc and millg directories. There is some setup involved with each, described in the README.md in each directory.

The mockapp will generate log messages. It does not make an attempt to randomize log messages or types but does have a few options to control the rate at which messages are generated. Before attempting to run the mockapp the 'piper-ruby' gem must be installed.

$> gem install piper-ruby

Now generate some log entries. The directory should still be 'sample/opslog'.

$> bin/mockapp -n "log.mock" -r 10 -m 1000

This will publish 1000 entries at a rate of 10 per second. Try other values and see how the system responds. Notice that warnings, errors, and fatals are displayed on the web page. Try a rate of one million a second and watch the display.


After running the mockapp the change log should have many entries in it that can be used to analyze the system, or could if something more useful than a mock application were used.

The change log includes not only the log entries but also the assigned record identifier. The mill application provided with piper will be used to process the change log file. To start, change directories back to the piper directory.

$> cd ../..

Piper can be killed at this point in the demonstration. It is not needed during the log file analysis.

First unwrap the change log entries and filter out any records that are not log entries. Place the result in 'log.json'.

$> bin/mill log/piper-changes.json -o log.json

Note that the new file contains separate JSON log entries of all levels and there are many more info and debug entries than any other. Explore the mill options and try filtering the log file in various ways. For example, filter out all but the error and warning entries and display them with color in classic format.

$> bin/mill -s error -s warn -c --classic log.json

It is easy to write your own log file tools. Three sample apps are included in Ruby, C, and Go: millr, millc, and millg respectively. All three support output in classic mode as well as back out to JSON again. Try some benchmarks if you like. C ends up the fastest of the set, with Go beating out Ruby. All are fast enough for even large files.

$> sample/opslog/bin/millr -c log.json
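The core of such a tool fits in a few lines of Ruby. This sketch filters entries down to fatal, error, and warning levels and prints them in a single-line "classic" style; the exact classic layout used by mill is not specified here, so the time/level/who/where/what layout below is an assumption for illustration.

```ruby
require 'json'
require 'time'

# Severity names indexed by the numeric 'level' field.
LEVELS = %w[FATAL ERROR WARN INFO DEBUG]

# Convert one parsed log entry to a single classic-style line.
# The layout (time, level, who, where, what) is an assumed format.
def classic_line(entry)
  t = Time.at(entry['when']).utc.iso8601(3)
  level = LEVELS[entry['level']] || 'UNKNOWN'
  format('%s %-5s %s %s: %s', t, level, entry['who'], entry['where'], entry['what'])
end

# Keep only fatal, error, and warning entries (levels 0-2),
# similar in spirit to `mill -s error -s warn --classic`.
def filter_errors_and_warnings(lines)
  lines.map { |l| JSON.parse(l) }
       .select { |e| e['kind'] == 'Log' && e['level'] <= 2 }
       .map { |e| classic_line(e) }
end
```

Feeding it lines from a log.json file produced by mill would print only the serious entries, one formatted line each.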

For really large files, the piper mill used to generate the log.json file is faster than even the C version here due to additional optimizations.

What's Next?

Try using message based logging in your own applications. This OpsLog sample is a reasonable starting point for experimenting with the infrastructure necessary to build a robust centralized message based logging system.