Today’s distributed cloud applications run on more platforms and operating systems than ever before. Serverless takes it even further: we don’t know — or care — where our code is running, so long as it keeps chugging along.
Good logging practices are essential to understanding what’s happening inside a serverless function. Without logs, you’re blind to what happens during a function’s execution. Logging becomes even more important when your application is spread across multiple platforms: serverless, Docker containers, or infrastructure running IIS. Aggregating the data from all those sources makes it possible to easily cross-reference the logs from platform to platform.
Enter the Elastic Stack, previously known as the ELK stack. It’s composed of three products: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search engine that moonlights as a centralized logging system, and a good one at that. Its ability to consume data via Logstash and expose its search engine results through Kibana make it ideal for making sense of large amounts of data.
But wait! Both Lambda and Functions integrate with their respective built-in logging frameworks, CloudWatch and Application Insights. Are the integrated logging solutions provided by AWS and Azure sufficient for understanding what’s happening inside your serverless functions? Is there any value to ingesting serverless logs into an Elasticsearch cluster? It’s with those types of questions in mind that I set off to figure out what Elasticsearch has to offer.
A quick note about Application Insights. It’s much more than a logging engine: it’s a full-blown Application Performance Management solution. It’s a bit unfair to judge Application Insights solely on its logging functionality, but that’s what I’ve chosen to do for simplicity’s sake.
Serverless Logs Explained
There are two types of logs that are output from a serverless function: platform and custom log messages.
Platform logs are those generated by the serverless host itself. These logs provide details on executions, including successes, failures, duration, memory usage, and more.
```
2019-08-03T10:16:21.948 Function started (Id=ea26bcc2-fb2d-4841-9d1c-98ac68a2877a)
2019-08-03T10:16:21.969 Function completed (Success, Id=ea26bcc2-fb2d-4841-9d1c-98ac68a2877a, Duration=21ms)
```
Custom log messages are those generated by the function’s code, written by you, the developer of the function. These logs typically indicate that an action was performed, an exception was encountered, or output some information for later use.
```
2019-08-03T10:16:21.950 Order received with id 12345
2019-08-03T10:16:21.955 Sending order to supplier
2019-08-03T10:16:24.960 Failed to process order with id 12345
```
Both types of logs are needed to understand the full context of a function’s execution.
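As a rough sketch, custom messages like the ones above could come from a handler along these lines (the handler name, event fields, and `send_to_supplier` helper are all hypothetical):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def send_to_supplier(order_id):
    # Hypothetical downstream call; stubbed out for this sketch.
    pass

def handler(event, context=None):
    # Custom log messages: written by you, and interleaved at runtime
    # with the platform's "Function started/completed" entries.
    order_id = event.get("order_id", "unknown")
    logger.info("Order received with id %s", order_id)
    logger.info("Sending order to supplier")
    try:
        send_to_supplier(order_id)
    except Exception:
        logger.error("Failed to process order with id %s", order_id)
        raise
    return {"status": "processed", "order_id": order_id}
```

Anything written through the standard logging facilities ends up in CloudWatch (Lambda) or Application Insights (Functions) automatically, which is what makes the shipping options below possible.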
Elastic’s Search Engine Rocks
It shouldn’t come as a surprise that Elastic’s search syntax is much more flexible than the search options available in either CloudWatch or Application Insights. Elasticsearch’s front-end, Kibana, makes it easy to build custom dashboards to monitor the state of your serverless applications, set up machine learning to detect anomalies, or configure alerts on message properties.
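To give a taste of that flexibility, here’s a sketch of an Elasticsearch Query DSL body that finds failed executions from the last 24 hours. The field names (`message`, `@timestamp`) are assumptions; they depend on how your logs are shipped and mapped.

```python
import json

# Query DSL: full-text match on the message, filtered to a time window,
# newest first. You'd POST this to /<index>/_search, or paste it into
# Kibana's Dev Tools console.
query = {
    "query": {
        "bool": {
            "must": [{"match_phrase": {"message": "Failed to process"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-24h"}}}],
        }
    },
    "sort": [{"@timestamp": {"order": "desc"}}],
}

body = json.dumps(query)
```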
That’s not to say you can’t reproduce some of this in CloudWatch or Application Insights, so make sure to experiment and see what works best for you. Still, I’ve found the search tooling on the Elastic side to be far superior to either CloudWatch or Application Insights.
CloudWatch and Application Insights are managed Backend-as-a-Service platforms with no operational overhead on your part. The only way to achieve a similar level of laissez-faire with the Elastic Stack, short of hiring one of their employees, is by way of a managed environment. Lucky for us, there are two turn-key solutions to spin up a managed Elasticsearch cluster.
Elastic.co, the company behind Elasticsearch, sells “Elastic as a service”, allowing you to provision a cluster on AWS or GCP. These clusters are configurable, support a variety of plugins, and best of all, the people who work at Elastic.co are there to manage the operational aspects of the cluster. The downside, of course, is that a cluster on Elastic.co costs more than a similar cluster you create yourself. It’s the price to pay for out-of-the-box functionality.
The alternative managed hosting option is AWS Elasticsearch. It comes in cheaper than Elastic’s offering for a cluster of the same size, but also offers fewer options in terms of configurability.
There are no out-of-the-box managed Elasticsearch environments on Azure. That said, nothing prevents you from running a managed cluster on AWS or GCP and accessing it from your Azure resources. After all, Elasticsearch is exposed via an HTTP API that is available from anywhere.
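As a minimal illustration of that point, indexing a document is a single HTTP request, so a function running on Azure can write to a cluster hosted on AWS or GCP. The cluster URL and index name below are placeholders:

```python
import json
import urllib.request

# A single log entry to index; field names are assumptions.
doc = {
    "@timestamp": "2019-08-03T10:16:21.950Z",
    "message": "Order received with id 12345",
}

# POST the document to the cluster's _doc endpoint.
req = urllib.request.Request(
    url="https://my-cluster.example.com:9243/serverless-logs/_doc",
    data=json.dumps(doc).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would actually send it; skipped here
# because the cluster URL is a placeholder.
```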
The managed environments I’ve just described make it relatively painless to get an Elasticsearch cluster up and running. It’s not quite as turn-key as Application Insights or CloudWatch, but once everything is configured, it should be mostly hands-off.
The next step is to get your serverless logs from Lambda or Functions into Elasticsearch.
Plugging AWS Lambda Logs Into Elasticsearch
The ideal scenario for feeding Lambda’s logs into Elasticsearch is to use a managed AWS Elasticsearch cluster. This is by far the easiest case to support, since AWS ES has built-in support for streaming log data to the cluster. Done and dusted.
Things are slightly more complicated if you’re using an Elastic.co managed cluster. You’ll need to deploy a tool called Functionbeat, developed by Elastic, to ship logs from CloudWatch over to Elasticsearch. Functionbeat is itself a Lambda function that you configure to ship specific log groups. Its configurability makes it a compelling option.
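A minimal `functionbeat.yml` might look something like this. The function name, log group, deploy bucket, and credentials are all placeholders; consult Functionbeat’s reference configuration for the full option set.

```yaml
# S3 bucket Functionbeat uses to stage its own deployment artifacts.
functionbeat.provider.aws.deploy_bucket: "my-functionbeat-artifacts"

functionbeat.provider.aws.functions:
  # A Lambda function, deployed by Functionbeat, that ships the
  # given CloudWatch log group to your cluster.
  - name: cloudwatch-shipper
    enabled: true
    type: cloudwatch_logs
    triggers:
      - log_group_name: "/aws/lambda/my-function"

# Credentials for an Elastic.co-hosted cluster (placeholders).
cloud.id: "my-deployment:<cloud-id>"
cloud.auth: "elastic:<password>"
```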
Overall, it isn’t too difficult to stream logs from Lambda to an Elasticsearch cluster. I’ll wrap up this section with some questions to ponder and direct your decision making when it comes to choosing Elasticsearch for your logs.
- Do you already use an AWS Elasticsearch cluster for logging in the other components of your stack?
- Yes: It’s a no-brainer to also stream Lambda logs to it.
- No: Streaming Lambda logs to AWS ES could provide you with insights that might be hard to catch on CloudWatch.
- Do you have a custom, or Elastic.co managed Elasticsearch cluster for logging in the other components of your stack?
- Yes: Streaming Lambda logs to your cluster via Functionbeat could provide you with insights that might be hard to catch on CloudWatch.
- No: Stick with CloudWatch for now.
Plugging Azure Functions Logs Into Elastic
There is only one viable option for streaming log data from Azure Functions to Elasticsearch.
It requires three things to work:
- Application Insights must be enabled on your Function App,
- an Event Hub to stream the logs from Application Insights,
- and the Azure Event Hubs plugin to index the events into your cluster.
That’s a fair amount of work, and you might be wondering why you wouldn’t just use Application Insights instead. One key factor to keep in mind is that Application Insights stores your log data for 90 days. After that, it’s gone forever. So you’ll need to look at alternatives to it if you have any retention requirements whatsoever.
The approach outlined above will only work on an Elastic.co cluster, since it’s the only one that supports the installation of custom plugins. Taking all this into account, here’s what I would consider before embarking on the road to getting your Function app logs into Elasticsearch:
- Do you have an Elasticsearch cluster on which you can install plugins?
- Yes: Streaming your Function app logs to Elasticsearch could allow you to find trends that are harder to catch on Application Insights due to its 90-day limit.
- No: Continue using Application Insights.
- Do you want to retain your logs longer than 90 days?
- Yes: The Elasticsearch cluster is a great way to do this. You’ll be able to perform all kinds of analysis on the data you collect.
- No: Continue using Application Insights.
- Do you have an Elasticsearch cluster that collects the logs for the other components of your stack?
- Yes: Consolidating your Function app logs in a single place makes it easier to trace operations from start to finish across components.
- No: Continue using Application Insights.
Life With Or Without Elasticsearch
Elasticsearch and its suite of products are needed more than ever to make sense of the huge amount of logs that our applications spit out on a daily basis.
As you saw, there are some huge advantages to doing so. A single source of truth, uniformity, and archivability are all benefits. On the downside, the tooling hasn’t entirely caught up. There is a fair bit of work that needs to be done to get logs into Elasticsearch, and for that reason, it’s hard to unequivocally say that yes, it is worth it to ship your serverless logs to Elasticsearch.