DSE Graph Partitioning Part 1: Custom Vertex Ids

DataStax Enterprise 5.0 contains much more than a version bump of its component pieces. In particular, DSE 5.0 includes the new graph database, DSE Graph. DSE Graph is an Apache TinkerPop enabled distributed property graph database that uses Apache Cassandra for storage, DSE Search for full text and geospatial search, and DSE Analytics for analytical graph processing. This series of blog posts will dive into the graph partitioning capabilities of DSE Graph, discussing how they work and when they may be helpful to you. Today we will focus on graph partitioning with custom vertex ids, but first we’ll walk through a few general graph partitioning examples and techniques.

Graph Partitioning

DSE Graph is a horizontally scalable database. As you insert vertices and edges, they are stored across your cluster using a method that distributes a relatively equal amount of data to each node. This allows your storage to scale out as your graph grows and minimizes the chance of hotspots developing. From a data ingest perspective this is great: your writes can scale smoothly as new nodes are added. Reads can be a different story. As your graph spreads across an increasing number of nodes, latencies can rise, depending on your query patterns, because more network hops are required to satisfy any given query. An edge whose endpoint vertices are not colocated on the same machine must cross a physical partition boundary; such an edge is called an “edge cut”.

There are a number of different approaches to minimizing the number of edge cuts present in a graph cluster. We will discuss a few of them below and finish with an example of how DSE Graph approaches partitioning and how you can exert some control over how your data is organized around your cluster.

Vertical Scaling

Edge cuts are introduced when a graph is partitioned, or split into pieces. In light of this, one could forgo distributing the graph horizontally across more than one node and instead vertically scale the machine the graph lives on by adding more compute, memory, and storage. All queries would be answerable without reaching out to any other nodes, and latencies would be unaffected by network round trips. Depending on the growth, overall size, and read/write workload, this may be a reasonable solution. It would require that your DSE Graph replication factor equal the number of nodes in your cluster so that each node maintains a full copy of the graph. This approach is not in the sweet spot for DSE Graph’s underlying storage system and will ultimately limit how large your graph can grow.

User-Defined Partitioning

Imagine you are developing a multitenant application that stores data for five different companies, where each company can only see its own data. This lends itself to partitioning the graph by company: each company’s vertices and edges are kept together so that queries against a particular company’s data hit a single partition and never reach across a network boundary. If you have sharded a relational database, the idea is similar. This seems reasonable, but it can become challenging to maintain if one or more of your tenants grows past the bounds of a single machine.

In addition, tenants may share certain reference (“master data”) vertices that do not fit cleanly into one partition or another. Other graphs may not even have the clean, single-dimension partition identifier, like “company”, that the multitenant scenario does. Take a social network: there are countless ways it could be partitioned (country, region, and age, to name a few), but it is unlikely that you would identify a scheme that minimizes edge cuts for all common query scenarios. Let’s now take a look at how DSE Graph stores vertex and edge data in Cassandra and then discuss DSE Graph’s twist on graph partitioning.

DSE Graph Storage

DSE Graph stores your vertex and edge data in Cassandra, mapping vertices and edges to rows in Cassandra tables. Cassandra stores rows of data in partitions distributed around your cluster. Partitions can have a variable number of rows, and it is possible (and recommended) to efficiently look up individual rows, or ranges of rows, within a partition by filtering in your query against one or more columns identified as clustering keys. Note that when you query DSE Graph, you will express your traversals in Gremlin; we discuss the underlying Cassandra storage format because it is instructive when it comes to tuning the performance and scalability of your graph model. If you are interested in learning more about how Apache Cassandra distributes data, you can refer to this tutorial on data modeling or use this post as a nice jumping off point.

DSE Graph Automatic Partitioning

Out of the box, DSE Graph will perform automatic partitioning, grouping vertex additions that occur within the same transaction and that share the same vertex label into the same Cassandra partition. The idea here is that vertices created within the same transaction probably have a higher likelihood of being related to one another than vertices created in separate transactions. This is a good starter approach and is probably adequate for graphs that do not grow into the tens of billions of vertices and edges.
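
To make this concrete, here is a minimal sketch of two sensors created and linked within a single Gremlin console session; under the automatic scheme, both vertex additions in the same transaction would be grouped into the same Cassandra partition. The name property and linkedTo edge label are illustrative, not part of any predefined schema:

v1 = graph.addVertex(label, 'Sensor', 'name', 'temp-01')
v2 = graph.addVertex(label, 'Sensor', 'name', 'temp-02')
v1.addEdge('linkedTo', v2)  // relate the two sensors
graph.tx().commit()         // same transaction, same label: grouped together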

DSE Graph Custom Vertex Ids

Building upon the concept of user-defined partitioning, DSE Graph supports creating custom vertex ids. Custom vertex ids give us explicit control over which Cassandra partition a vertex is created in and, going one step further, how rows are sorted within that partition, by also allowing us to define the clustering key. Custom vertex ids must be set up when a vertex label is created. Building on an example from the DSE documentation, we’ll use a simple IoT model for demonstration purposes. The graph will store information about sensors that are distributed around various factories; sensors will be connected to one another and form a network. In this sample snippet, we create a new custom vertex id for Sensor vertices:

schema().vertexLabel('Sensor').partitionKey('factory_id').clusteringKey('sensor_id').create()
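
For the snippet above to succeed, the factory_id and sensor_id property keys must already exist in the schema. A minimal sketch, assuming integer identifiers (run before creating the vertex label, and adjust the types to fit your model):

schema().propertyKey('factory_id').Int().create()
schema().propertyKey('sensor_id').Int().create()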

In our example, the factory_id partition key determines the partition, so sensors within the same factory are stored together, and the sensor_id clustering key instructs Cassandra to sort the rows within each partition in ascending order by sensor_id. You can imagine that a factory monitoring application will issue a large number of queries that operate on the sensors within a single factory, not across factories. Sensors that are likely to be queried together will live on the same nodes in your cluster, which means that only a fraction of your cluster’s nodes will be required to answer most queries. Note that it is possible to define a custom vertex id by specifying only the partition key, but because a vertex id must be unique, the partition key would then have to be unique per vertex: each new vertex would be mapped to its own partition, and we would not get the grouping of sensors by factory that we are going after.
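
For example, lookups like the following can be served from a single partition, since the predicate on the partition key pins the query to one replica set. This is a sketch assuming the schema above; the literal values are placeholders:

g.V().has('Sensor', 'factory_id', 17).has('sensor_id', 42)       // one sensor in factory 17
g.V().has('Sensor', 'factory_id', 17).has('sensor_id', lt(100))  // a sensor_id range within the same factory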

This approach to partitioning is powerful and gives users a lot of flexibility, but it will not handle all cases seamlessly. First and foremost, there are practical limits to the number of rows (or vertices and edges, in DSE Graph terms) that can be stored in an individual partition. Though there is no hard and fast rule, a few hundred to a few thousand vertices per partition may be fine, as long as they do not have large numbers of edges. As with any modeling decision, it is highly advisable to have a performance and load testing suite set up so that you can quickly test and iterate on your initial model at a scale representative of where you expect your graph application to be in the next year (or possibly even further out).

In addition to performance and operational considerations, the custom vertex id is not editable: once it is set for a vertex label, you cannot change it. If you do need to change your partitioning scheme, you will need to create a new vertex label with the new custom vertex id and copy your existing data over. If you choose to use a custom vertex id, it is advisable to think through what a migration would involve if you eventually need to move back to a non-custom vertex id, or to one with a different partition key and clustering key.
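
To give a rough idea of what such a migration involves, here is a sketch that copies Sensor vertices into a hypothetical SensorV2 label defined with a different key scheme. The SensorV2 label is illustrative only; edges would need to be copied as well, and a real migration would batch its commits rather than commit once at the end:

g.V().hasLabel('Sensor').forEachRemaining { v ->
    // re-create each vertex under the new label, carrying its properties over
    graph.addVertex(label, 'SensorV2',
                    'factory_id', v.value('factory_id'),
                    'sensor_id', v.value('sensor_id'))
}
graph.tx().commit()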

Conclusion

DSE Graph’s architecture provides a solid base for growing your graph. In many cases the out-of-the-box partitioning will work just fine, but in certain scenarios, particularly with larger graphs or queries that touch large numbers of vertices, it may be worthwhile to use custom vertex ids so that you have more control over data placement. In the next post, we will look at the facilities DSE Graph provides to help lessen the burden of maintaining vertices that have large numbers of inbound and/or outbound edges, the infamous “supernodes”.
