Log & Metric Experiences Matter for Streaming Data

Conduit is an open-source project that helps you stream data from any of your production data stores to the places you need it in your infrastructure. This post covers the principles behind Conduit's logging and metrics capabilities and why following them makes life better for developers moving data into systems like Apache Kafka.

The opportunity to delight someone using your tool can happen at any time. While fancy web UIs and mobile apps tend to get the limelight, developer experience applies to even the most mundane needs: logging and metrics. I’ll use logging and metrics somewhat interchangeably throughout this post, but where the difference matters, I’ll call that out.

Principles

Send everything to the same place. Create consistency and reduce the decision overhead.

Building connectors in Conduit is fairly straightforward. We made it easy for any developer who wants to build a connector to do so without tightly coupling their work to Conduit itself. (I highly recommend you read our post on how we use Buf to make that experience possible.) One of the main benefits of loose coupling is that a developer can build a connector at their own pace in a separate repository. That said, loose coupling has a potential drawback: a connector can create its own experience, decoupled from the main Conduit experience. You can see how this plays out in the Kafka Connect ecosystem, where connectors can emit logs anywhere they choose, and you’re required to set logging configuration on a per-connector basis.

Conduit encourages good connector logging experiences from the get-go via the Conduit Connector SDK, which has facilities for logging built in. A connector developer could, arguably, emit logs to a place of their own choosing and use the Conduit Connector Configuration to control it, but that would require more effort than going down the happy path.


The Conduit Connector SDK also brings structure to what’s emitted on each log line. Every log line always carries the same set of information in the same order. Structure and consistency matter because developers can come to rely on the information always having the same shape. Without consistency, systems accumulate implicit behaviors, and implicit behaviors frustrate developers because, unless they’re fully documented, extra work is needed to build around them.

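Here is a sketch of why that consistent shape pays off. The `level`, `time`, and `message` field names are zerolog's defaults; the `connector_id` field and its value are made up for illustration. Because every line is structured the same way, tooling can rely on the same keys always being present.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// sampleLine is an illustrative structured log line in the shape zerolog
// produces; the connector_id field is hypothetical.
const sampleLine = `{"level":"info","time":"2023-01-01T00:00:00Z","message":"record written","connector_id":"pg-to-kafka"}`

// parseLogLine decodes a structured log line into its fields. Consistent
// structure is what makes this kind of mechanical processing reliable.
func parseLogLine(line string) (map[string]any, error) {
	var fields map[string]any
	err := json.Unmarshal([]byte(line), &fields)
	return fields, err
}

func main() {
	fields, err := parseLogLine(sampleLine)
	if err != nil {
		panic(err)
	}
	fmt.Println(fields["level"], fields["message"])
}
```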

Conduit is even bringing this experience to Kafka Connect connectors themselves! Conduit can run Kafka Connect connectors via the wrapper we built, and we recognize how painful their logging can be. We’re actively working on a fix so that your Kafka Connect connectors can emit their logs to the same place as the Conduit logs. No extra work needed!

What are you asking the developer or operator to learn?

One of the biggest gripes developers have is being forced to adopt yet another tool just to understand the tool they’re actually trying to operate. In development, having to use another tool can be a deal breaker. In Conduit, if something needs to be communicated to the developer, we do it via the logs, and that includes metrics. You might worry that this makes the Conduit logs overly verbose, but this is where log levels are critical. The Conduit Connector SDK can mark logs at many different levels, courtesy of the zerolog package in Go. As a user of Conduit, you can then filter out levels based on your needs. The benefit of all of this is that it’s text-based, so any developer, coming from any programming language ecosystem, can quickly get the information they need to debug what’s happening.

One of the biggest gripes the Conduit team hears from developers about Kafka Connect is that they have to use JMX to understand what’s happening under the hood. We don’t hear this from developers with Java backgrounds; it comes from developers whose primary language isn’t Java (e.g. JavaScript, Python, Go). Arguably, this disincentivizes developers from those other language ecosystems. Conduit, by contrast, exposes all the metrics for what’s happening under the hood on a metrics endpoint (e.g. `/metrics`). Nothing fancy is needed beyond `curl` in your terminal or a web browser. The benefit of this approach is that a developer can quickly see what’s happening on their own machine, while the same endpoint can be used to connect data collection tools like Prometheus or Datadog.


Principles Matter for Backend Systems

The principles outlined in this blog post are just a few of those the Conduit team abides by, applied specifically to metrics and logs. Principles are important because they improve decision-making, not only for the team but also for how we guide open-source contributions in the community. Principles also ensure consistency in the product experience across the board.


Give Conduit a try! If you like what you see, follow us on Twitter @conduitIO or join us on Discord to share your experiences and how we could make it better.

Topics: Conduit