The previous JanusGraph nuts and bolts post discussed write performance tuning. Today we’ll switch over to the read side of the house and look at the new Cassandra CQL storage adapter and one configuration option, query.batch, that can help reduce query latencies.

The Cassandra Compatible Storage Adapters

Apache Cassandra has been a popular choice for Janus deployments since the days of Janus’s predecessor, Titan. The bundled “legacy” adapters provide two ways to connect to Cassandra, or to a Cassandra-compatible store like ScyllaDB: the Cassandra Thrift driver and Netflix’s Astyanax driver (also Thrift-based, albeit with more bells and whistles). These adapters are still popular, but Thrift has been deprecated and will be officially removed as of Cassandra 4. Luckily, thanks to the hard work of JanusGraph community members, a new CQL storage adapter was added to Janus as of the 0.2.0 release. This new CQL adapter uses the much more advanced and up-to-date open source DataStax Java Cassandra CQL driver, which is compatible with Apache Cassandra and ScyllaDB (it may also work with Microsoft Cosmos DB’s Cassandra API, but I don’t know of anyone who has tried that yet). Below, we’ll compare the legacy Thrift adapter and the new CQL adapter, looking at their default behavior and the effect of the query.batch parameter on their execution. It is important to note that in either case the data is stored exactly the same way in your Cassandra cluster, so you can switch back and forth between adapters with a Janus restart.

Setup

Before we get to the fun stuff, let’s set up our test environment. If you’d like to follow along, you’ll need to grab the latest version of JanusGraph, the Cassandra Cluster Manager (CCM), and a copy of the YourKit Java profiler.

We’ll look at four Janus configurations to demonstrate the behavior of the Thrift and CQL storage adapters, with query.batch flipped on and off for each. If you are following along, you can switch between the adapters by referencing the appropriate config file when you grab your graph instance:

graph = JanusGraphFactory.open('conf/janusgraph-cassandra.properties')

or

graph = JanusGraphFactory.open('conf/janusgraph-cql.properties')

You’ll also need to update the properties files: first, disable the db-cache for all test cases, then flip between query.batch=true and query.batch=false depending on the test. Disabling the db-cache turns off Janus’s single-instance caching behavior, which is usually how you’ll want to run a multi-Janus setup anyway unless you can live with stale data. In our case, we’re interested in the storage adapter traffic patterns, so we do not want to short-circuit them with a layer of caching.
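For reference, the relevant lines in the two properties files end up looking roughly like this (storage.hostname points at the local CCM cluster we’ll start in a moment; adjust for your environment):

# conf/janusgraph-cassandra.properties (Thrift adapter)
storage.backend=cassandrathrift
storage.hostname=127.0.0.1
cache.db-cache=false
# flip between false and true for the batching tests
query.batch=false

# conf/janusgraph-cql.properties (CQL adapter)
storage.backend=cql
storage.hostname=127.0.0.1
cache.db-cache=false
# flip between false and true for the batching tests
query.batch=false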

Starting the Cluster

Much of what I’ll be walking through could be demonstrated with Janus hitting a single Cassandra node, but one very important CQL driver feature only becomes obvious if we have an actual Cassandra cluster. You can easily stand up a local test cluster using the Cassandra Cluster Manager tool. Follow the instructions at https://github.com/riptano/ccm to set up CCM, then run the following CCM command to start a three-node cluster running Cassandra 2.2.11, the latest stable release on the 2.2 line as of the writing of this post. Janus comes bundled with an older version of Cassandra, 2.1.18, so you’ll likely want to target a more recent stable release for a production deployment. Janus is also compatible with the Cassandra 3.x line.

ccm create test -v 2.2.11 -n 3 -s

That’ll start up three nodes, each running in its own JVM, on the following loopback addresses: 127.0.0.1, 127.0.0.2, and 127.0.0.3.
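If you want to double-check that all three nodes came up before moving on, CCM can report the state of the cluster:

ccm status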

Now, let’s start our testing with Janus’s original Thrift adapter and query.batch = false. This is the default setup if you just start things up out of the box, except that we have turned the database cache off. Navigate to your JanusGraph 0.2.0 download directory and start the Gremlin console.

./bin/gremlin.sh

Next, we’ll instantiate a JanusGraph instance using the Thrift adapter and perform a one-time schema setup and small data load. Our schema consists of a “Person” vertex label and a “name” property. Person vertices are indexed by name using a built-in Janus composite index, which is the preferred and most performant option for basic equality lookups.
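The original dataset isn’t reproduced here, but a minimal load that matches the shape of the experiments looks something like the following. The adjacent vertex names, the knows edge label, and the byName index name are illustrative choices of mine; the management calls are the standard JanusGraph schema API:

graph = JanusGraphFactory.open('conf/janusgraph-cassandra.properties')

// one-time schema setup: a Person label, a name property, and a composite index on name
mgmt = graph.openManagement()
mgmt.makeVertexLabel('Person').make()
name = mgmt.makePropertyKey('name').dataType(String.class).make()
mgmt.buildIndex('byName', Vertex.class).addKey(name).buildCompositeIndex()
mgmt.commit()

// small data load: Amy plus five adjacent Person vertices
amy = graph.addVertex(T.label, 'Person', 'name', 'Amy')
['Bob', 'Carol', 'Dan', 'Eve', 'Frank'].each { n ->
    v = graph.addVertex(T.label, 'Person', 'name', n)
    amy.addEdge('knows', v)
}
graph.tx().commit()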

The experiment dataset

Data is loaded, so we’re ready to go. Start up YourKit if you have it and connect to the Janus JVM. If you’re testing things with the Gremlin console, as I show here, the Janus JVM is the same as the Gremlin console JVM. We’ll focus our attention on the “Events” tab for this round of experimentation. If you’re new to YourKit, the Events tab is a great place to see how your application is making use of networking, the filesystem, and database connections. We can easily capture socket read and write events and, in newer versions of YourKit, human-readable Cassandra CQL query events. If you do not have YourKit, you could also use a command-line tool like tcpdump, or Wireshark, to monitor the same traffic that we’ll be looking at. Events include stack trace and calling thread information so that you can track down where the read/write was kicked off. You could figure out these call patterns by tracing through the code, but I like this view because diving into the codebase is an initially intimidating endeavor.
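If you go the tcpdump route instead of YourKit, a filter along these lines should capture the relevant traffic. This assumes the default ports (9160 for Thrift, 9042 for the CQL native protocol); the loopback interface may be named lo or lo0 depending on your OS:

sudo tcpdump -i lo port 9160 or port 9042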

The YourKit events tab

Experiments

Time for the experiments. You should have 3 nodes up and running along with a Janus instance accessible in your Gremlin console. We’ll be using this simple query for our experimentation:

g.V().has('Person', 'name', 'Amy').out().values()

Regardless of the adapter, the logical plan for our query looks like this (see the sketch after the list for a quick way to watch this plan in action):

  1. Consult index to find the “Person” vertex (or vertices) with name = Amy.
  2. Retrieve the edges on the Amy vertex.
  3. Retrieve the properties from each adjacent “Person” vertex.
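Before we pull out the profiler, a quick way to get a feel for how Janus executes this plan is Gremlin’s profile() step, which appends per-step timing and backend call information to the traversal (output omitted here since it varies by environment):

g.V().has('Person', 'name', 'Amy').out().values().profile()

The socket-level view we capture with YourKit below tells the same story, but with the actual network calls visible.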

Thrift with query.batch = false

Go ahead and start the capture session in YourKit and run the test query in the Gremlin console. You can then stop the capture. You should see something similar to the screenshot below. I’ve filtered out some extraneous traffic by only showing the events with “CassandraThrift*” in their stack trace. This cuts down the noise by removing things like driver keepalives and socket traffic unrelated to our query.

Socket reads showing the default behavior of one lookup per adjacent vertex

There are a few things I’d like to point out here that we’ll come back to later on. First, there are 7 socket reads related to the query (ignore the first read in the list, as it is unrelated): one read for the initial index lookup, one to retrieve the “Amy” vertex’s edges, and then 5 reads, one for each adjacent vertex, to retrieve the name properties from those vertices (triggered by the values() step in our query). These all occur on the “main” thread and are executed sequentially. Lastly, if you look at the details on the right, all reads are made against a single node: 127.0.0.1.

Thrift with query.batch = true

Now let’s see what happens if we flip query.batch to true. Go ahead and update your janusgraph-cassandra.properties config and start a new JanusGraph instance. Rerun the test query, capturing the events.

The number of backend calls is reduced due to batching performed by the Thrift storage adapter

Ok, that looks different. Again, ignore pool-24-thread-1, as it is not related to our query. Putting that aside, we’re down to 3 socket reads: one for the index lookup, one to grab the “Amy” vertex’s edges, and a single “batched” call retrieving the 5 adjacent vertices. Now this whole “batch” thing is starting to make a bit more sense. The Thrift adapter groups backend queries when it can, so step 3 of our logical query plan (retrieve all adjacent vertex properties after we have Amy’s edge list) is executed with a single Thrift call to 127.0.0.1:9160. In many cases this batching can make a big performance difference, even when you’re running Janus co-located with a Cassandra instance. The reason is that each socket read roundtrip comes with its own overhead, irrespective of the time it takes Cassandra to put the results together. This can lead to a death-by-a-thousand-paper-cuts situation where the sequential socket read overheads greatly degrade query performance and lead to unacceptable latencies. The single batched call drops our sequential call count from 5 to 1 (excluding the initial index lookup and edge query). This is a win, but not ideal: we’re still funneling all of our traffic through a single Cassandra node, which then acts as the coordinator for the batched call. As you’ll see below, there is an even better way to do this, more in line with the recommended access patterns for Cassandra.

CQL with query.batch = false

It’s time to switch over to the CQL adapter. Close your existing Janus instance and start it up again, this time with the janusgraph-cql.properties config. Make sure you’ve turned the db-cache off and set query.batch = false.

The CQL adapter uses the open source DataStax Java driver. Notably, this driver provides a token-aware and datacenter-aware load balancing policy. This means the driver can use the topology of your cluster, including token ranges, to intelligently route your queries to the nodes that own the data you are querying for, without driving all traffic through a single coordinator.
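Janus handles this for you; the CQL adapter builds its driver session internally, so there is nothing to configure here. But to make the idea concrete, here is a rough sketch of what a token-aware session looks like if you build it with the DataStax Java driver (3.x series) directly, using our local CCM node as the contact point:

import com.datastax.driver.core.Cluster
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy
import com.datastax.driver.core.policies.TokenAwarePolicy

// token awareness wraps a datacenter-aware round-robin policy; each request
// is sent to a replica that actually owns the partition being read
policy = new TokenAwarePolicy(DCAwareRoundRobinPolicy.builder().build())
cluster = Cluster.builder().addContactPoint('127.0.0.1').withLoadBalancingPolicy(policy).build()
session = cluster.connect()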

Rerun the test query and capture a new set of events. Since we’re using the CQL driver, you can see these events in the Cassandra events section, not just in the lower-level socket events section. As with the first Thrift adapter test, we see 7 sequential calls from the main thread to Cassandra.

The CQL adapter call count matches the un-batched Thrift pattern.

But wait, if we look a bit deeper, we’ll see the aforementioned token-range-aware routing in action. I’ve added the “channel read” events back into the event table in the screenshot below because the YourKit Cassandra events do not show the remote address, so you can’t tease out which Cassandra instances the query traffic is going to unless you switch over to the socket view. For the most part, you can see Cassandra events paired up with their socket channel reads. Take a look at the remote addresses in the leftmost column. Note that we’re hitting 127.0.0.1, 127.0.0.2, and 127.0.0.3. Our calls are being sent to the nodes that own the vertex data instead of being funneled through a single coordinator. The calls are still sequential, but this is already a big step in the right direction: we are spreading the read load more evenly around our cluster and cutting out extra hops by going directly to the nodes we need.

CQL query.batch = false

CQL with query.batch = true

Last test. Same as before, we’ll use the CQL adapter, this time with query.batch = true. Run your test and you’ll see something similar to the following:

CQL storage adapter queries being executed in parallel

Hmm, you might ask, why do I still see 7 calls? That’s a good question; I thought we turned this “batching” thing on. Recall that previously it was hinted that bundling the adjacent vertex retrievals into a single call might be better in some cases, but not ideal. Something else did change: we have the same number of calls, but they are being executed by different threads. This time we’re not batching, but we are firing off parallel queries. Like the previous example, each query will target the relevant node (or, more likely, nodes plural in a production scenario where the replication factor should be at least 3) thanks to the token range routing of the driver.
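As with the routing policy, this parallelism lives inside the CQL adapter, but the underlying idea is the same as firing asynchronous queries with the DataStax driver and only blocking once everything is in flight. A rough sketch, reusing the session from the earlier driver example; the keyspace, table, and statements below are hypothetical stand-ins, not what Janus actually issues:

// hypothetical statements standing in for the five per-vertex property lookups
stmts = (1..5).collect { 'SELECT * FROM demo_ks.demo_table WHERE pk = ' + it }
// fire them all off without waiting on each other
futures = stmts.collect { s -> session.executeAsync(s) }
// collect the results; the total wait is bounded by the slowest query, not the sum
rows = futures.collect { f -> f.getUninterruptibly() }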

Latency in the previous Thrift and first CQL examples can be calculated as follows:

adapter latency = Cassandra Query 1 + … + Cassandra Query 5

Storage adapter latency will equal the summation of the latencies of each separate Cassandra query.

Now, with the CQL adapter and query.batch turned on, the total latency can be calculated like this:

adapter latency = max(Cassandra Query 1, …, Cassandra Query 5)

Or, the total latency will be equal to the single longest running query.

All our calls are started at the same time so the upper limit on our latency is the latency of the longest running call.
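To put rough, hypothetical numbers on it: if each of the five property lookups takes about 2 ms, the sequential pattern pays roughly 2 ms + 2 ms + 2 ms + 2 ms + 2 ms = 10 ms for that step, while the parallel pattern pays roughly max(2 ms, …, 2 ms) = 2 ms plus a bit of scheduling overhead. The more adjacent vertices a query touches, the wider that gap becomes.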

Cassandra events interleaved with the low level socket events make it easier to see the destination of each read call

In Summary

What is the takeaway from our experiments? In most cases, if you are running JanusGraph against an Apache Cassandra or ScyllaDB cluster, you will likely get the lowest query latencies and the most even Cassandra cluster utilization out of a setup that uses the new CQL storage adapter and has query batching turned on. This will give you Cassandra-friendly adapter query patterns, including token-range-aware query routing and parallel, per-partition queries. Having said that, there are a few caveats. Note that query.batch works not only against Cassandra and Cassandra-compatible stores, but also against Apache HBase and Google Cloud Bigtable; if you’re using one of those, the driver will be different, but you can still experiment with query.batch.

This all brings up a good question: why isn’t this the default, out-of-the-box setup for JanusGraph? By out-of-the-box here, I’m referring to the bundled properties files and the configuration that comes up if one downloads Janus and simply runs ./bin/janusgraph.sh. The default in this case is caching turned on, the Cassandra Thrift adapter, and query batching off. First, the CQL adapter is brand new as of 0.2.0. The Janus adapter test suite is solid, but the old standby Thrift adapter has seen orders of magnitude more production hours at this point. As far as query batching goes, there is an even better reason this isn’t the default, namely an outstanding bug that causes issues with queries that make use of certain Gremlin steps. This bug should be remedied shortly, hopefully with a fix making its way into JanusGraph 0.2.1, which will make more general-purpose usage of query batching possible. After that happens, and the community kicks the tires on the CQL adapter a bit more, I think it will be appropriate for the JanusGraph project to evaluate whether making these settings the default makes sense. Until then, I’d recommend trying them out and seeing if you meet with success in the form of greatly reduced query response times and a more even load across your storage backends.
