Description and Responsibilities
Fabric connectors make a collection of DBMS servers look like a single server. The fabric connector presents what appears to be a data service API to applications. It routes each request to an appropriate physical server for whatever task the application is performing, hiding the fact that a data fabric can consist of dozens or even hundreds of servers. Applications cannot tell the difference between talking to the fabric connector and talking to a real DBMS server. We call this property transparency.
Here are the responsibilities of a fabric connector. I will use the phrase proxying to refer to the first of these, and routing responsibilities to refer to the remaining three.
- Expose a data service interface to applications.
- Route each application query to an appropriate DBMS server.
- Balance load by distributing queries across multiple replicas, if available.
- Switch to another server following a failure or if the DBMS becomes unavailable due to maintenance.
Fabric connectors contain two logical components. The proxy is responsible for routing queries and responses between applications and underlying data services. This can be a library layer, a separate server process, or a TCP/IP load balancer--anything that provides a transparent indirection layer. The directory information contains rules to route SQL queries correctly to the actual location of data. There is a notification protocol that permits connectors to receive updates about the fabric topology and confirm that they have responded to them.
Motivation
Connecting to data is a problem in large systems. Sharded data sets spread data across multiple services. Data services have different roles, such as master or slave. Services fail or go offline for maintenance. Services change roles, such as a master switching to a slave. Shards move between services to balance load and use storage more efficiently. Within short periods of time there may be significant variations in load across data services. Adding routing logic directly to applications in these cases adds complexity and can lead to a tangled mess for administrators.
The fabric connector design pattern encapsulates the logic needed to route connections from the application to DBMS servers. Hiding connection logic helps keep applications simple. It also allows independent testing and tuning of the connection rules, so you have some assurance the logic actually works. Finally, you can modify fabric behavior without modifying applications, for example to redistribute load more evenly across replicas.
Related Design Patterns
The fabric connector design pattern manages single application connections to data services, for example a transactional data service. Transparency is the leitmotif of this pattern. It provides needed encapsulation for other data fabric design patterns and is particularly critical for sharded as well as fault tolerant data services. These will be covered in future articles on data fabric design.
There are also other design patterns for data access. Here are two that should not be confused with fabric connectors.
- Federated query. Federated query splits a SQL query into sub-queries that it routes to multiple underlying data services, then returns the results. Sharding products like DbShards and shard-query implement this pattern. It requires complex parsing, query optimization, and aggregation logic to do correctly and has varying levels of transparency.
- MapReduce. MapReduce is a procedure for breaking queries into pieces that can run in parallel across large numbers of hosts by splitting the query into map operations to fetch data followed by reduce operations to aggregate results. It can work on any distributed data set, not just SQL. MapReduce implementations often eschew SQL features like joins and also can have a very different programming model from SQL. Their use is often non-transparent to SQL applications.
I hope to add more complete descriptions of these design patterns at some point in the future. For the current article, we will stick to simple connectivity.
Detailed Behavior
Fabric connectors are conceptually simple: route requests from the application to a server, then transfer results back. Actual behavior can be quite complex. To give some perspective on the problem, here is a short Perl program for a SaaS application that logs order detail information in a table named sale, then reads the same data back. We will use this sample program to illustrate the responsibilities of the design pattern in detail.
use DBI;
# Connect to server.
$dbh = DBI->connect("DBI:mysql:test;host=prodg23", "app", "s3cr3t5")
    || die "Could not connect to database: $DBI::errstr";

# Insert order using a transaction.
$dbh->{'AutoCommit'} = 0;
$dbh->do("INSERT INTO sale(order_id, cust_id, sku, amount) \
          VALUES(2331, 9959, 353009, 24.99)");
$dbh->do("INSERT INTO sale(order_id, cust_id, sku, amount) \
          VALUES(2331, 9959, 268122, 59.05)");
$dbh->commit();

# Select order back with an auto-commit read.
$dbh->{'AutoCommit'} = 1;
$sth = $dbh->prepare("SELECT * FROM sale WHERE order_id=2331");
$sth->execute();
while( $href = $sth->fetchrow_hashref ) {
  print "id      : $$href{id} \n";
  print "order_id: $$href{order_id} \n";
  print "cust_id : $$href{cust_id} \n";
  print "sku     : $$href{sku} \n";
  print "amount  : $$href{amount} \n";
}

# Disconnect from server.
$dbh->disconnect();
To remain transparent to this program, the fabric connector must do the following:
- Implement the DBMS connection protocol fully or pass it through transparently to an underlying server. This includes handling authentication handshakes as well as setting base session context, such as client character sets.
- Handle all standard features of query invocation and response, including submitting queries, returning automatically generated keys, and handling all supported datatypes in results.
- Respect transaction boundaries, so that the INSERT statements on the sale table are enclosed in a transaction in the DBMS and the SELECT statement runs auto-commit (i.e., as a single-statement transaction).
- Read back the data just written to the sale table.
In addition to handling API protocols, fabric connectors need to avoid slowing down transaction processing as a result of proxying. Properly written connectors add minimal overhead for the most part, but there are at least two cases where this may not hold for some implementations (such as network proxies). The first is establishing connections, a relatively expensive operation that occurs constantly in languages like PHP that do not use connection pools. The second is short primary-key lookup queries on small data sets, which tend to be memory-resident in the server; such queries are so fast that even a small proxying cost is proportionally large.
One common reaction is to see such overhead as a serious problem and avoid the whole fabric connector approach. Yet the "tax" applications pay for proxying is not the whole story on performance. Fabric connectors can boost throughput by an order of magnitude by distributing load intelligently across replicas. To understand the real application overhead of a connector you therefore need to measure with a properly sized data set and take into account load-balancing effects. Test results on small data sets that reside fully in memory with no load balancing tend to be very misleading.
The remaining fabric connector design pattern responsibilities are closely related: route requests accurately to the correct service, load-balance queries across replicas within a service, and route around replicas that are down due to maintenance or failure. We call these routing responsibilities. They require information about the fabric topology, which is maintained in the connector's directory information. Here is a diagram of typical directory organization.
(Diagram: Fabric Directory Service Organization)
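The diagram is not reproduced here, but the contents are easy to picture. Below is a minimal sketch, in Perl, of the kind of directory information a connector might cache locally. The service names, fields, and values are illustrative assumptions, not the format of any particular product; a real connector would refresh this structure through the notification protocol rather than hard-coding it.

use strict;
use warnings;

# Illustrative directory information a fabric connector might cache.
# Each data service lists its replicas with role, status, and latency;
# a separate shard map locates each customer on a service and schema.
my %directory = (
    services => {
        group2 => {
            replicas => {
                'prodg23:3306' => { role => 'master', status => 'online',  latency => 0 },
                'prodg21:3306' => { role => 'slave',  status => 'online',  latency => 1 },
                'prodg22:3306' => { role => 'slave',  status => 'offline', latency => undef },
            },
        },
    },
    shards => {
        walmart => { service => 'group2', schema => 'cust_walmart' },
        acme    => { service => 'group2', schema => 'cust_acme' },
    },
);

# Example lookup: which master currently serves customer "walmart"?
my $service  = $directory{shards}{walmart}{service};
my $replicas = $directory{services}{$service}{replicas};
my ($master) = grep { $replicas->{$_}{role} eq 'master' } keys %$replicas;
print "walmart data lives on $master\n";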
Let's start with the responsibility to route requests to data services. A simple fabric connector implementation allows connections using a logical server name, such as group2, which the connector translates to an actual DBMS server and port, such as prodg23:3306. A better fabric connector would let applications use a customer name like "walmart" that matches what the application is doing. The connector would look up the location of the customer's data and connect automatically to the right server, and even the right DBMS schema. This is especially handy for SaaS applications, which often shard data by customer name or some other simple identifier.
We could then change our program as follows to connect to the local host and look for the "walmart" schema. Under the covers, the fabric connector will connect to the prodg23 server and use the actual schema for that customer's data.
use DBI;

# Connect to customer data.
$dbh = DBI->connect("DBI:mysql:walmart;host=localhost", "app", "s3cr3t5")
    || die "Could not connect to database: $DBI::errstr";
This is a modest change that is very easy to explain and implement. It is a small price to pay for omitting complex logic to locate the correct server and schema that contains the data for this customer.
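To make the idea concrete, here is a rough sketch of what such a lookup could look like inside a connector library. The connect_for_customer function and the hard-coded shard map are hypothetical; a real connector would consult the fabric's directory service and keep the map current via notifications.

use strict;
use warnings;
use DBI;

# Hypothetical shard map; a real connector would fetch this from the
# fabric's directory service and refresh it via notifications.
my %shard_map = (
    walmart => { host => 'prodg23', port => 3306, schema => 'cust_walmart' },
);

# Translate a logical customer name into a physical DBI connection.
sub connect_for_customer {
    my ($customer, $user, $password) = @_;
    my $loc = $shard_map{$customer}
        or die "No shard location known for customer '$customer'";
    my $dsn = sprintf("DBI:mysql:%s;host=%s;port=%d",
                      $loc->{schema}, $loc->{host}, $loc->{port});
    return DBI->connect($dsn, $user, $password)
        || die "Could not connect to database: $DBI::errstr";
}

# The application names only the customer it is working on.
my $dbh = connect_for_customer("walmart", "app", "s3cr3t5");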
The next responsibility is to distribute queries across replicas. This requires additional directory information, such as the DBMS server role (master vs. slave), current status (online or offline), and other relevant information like slave latency or log position. There are many ways to use this information effectively. Here are a few of the more interesting things we can do.
- Slave load balancing. Allow applications to request a read-only connection, then route it to the most up-to-date slave. This works well for applications such as Drupal 7, a website content management system that is slave-enabled: it can use a separate connection for read-only queries, which can then run on a replica. Many applications tuned to work with MySQL have similar features.
- Session load balancing. Track the log position for each application session and dispatch reads to slaves once they have caught up with the last write of the session. This is a good technique for SaaS applications that have large numbers of users spread across many schemas, and it is one of the most effective scaling algorithms for master/slave topologies (see the sketch following this list).
- Partitioning. Split requests by schema across a number of multi-master data services. SQL requests for schema 1 go to server 1, requests for schema 2 to server 2, etc. Besides distributing load across replicas this technique also helps avoid deadlocks, which can become common in multi-master topologies if applications simultaneously update a small set of tables across multiple replicas.
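Session load balancing is the least obvious of the three, so here is a rough sketch of the routing decision, assuming the connector tracks the log position of each session's last write and learns each replica's applied position from the directory. The field names and single-integer positions are simplifications for illustration.

use strict;
use warnings;

# Choose a replica for a read, given the log position of the session's
# last write and the position each replica has applied.
sub route_read {
    my ($session, $replicas) = @_;
    my @caught_up = grep {
        $_->{role} eq 'slave'
            && $_->{status} eq 'online'
            && $_->{applied_position} >= $session->{last_write_position}
    } @$replicas;

    # Prefer the least-lagged caught-up slave; otherwise fall back to the master.
    return (sort { $a->{latency} <=> $b->{latency} } @caught_up)[0]
        // (grep { $_->{role} eq 'master' } @$replicas)[0];
}

my $session  = { last_write_position => 1042 };
my $replicas = [
    { name => 'prodg23', role => 'master', status => 'online', applied_position => 1042, latency => 0 },
    { name => 'prodg21', role => 'slave',  status => 'online', applied_position => 1042, latency => 1 },
    { name => 'prodg22', role => 'slave',  status => 'online', applied_position =>  990, latency => 4 },
];
print "read goes to ", route_read($session, $replicas)->{name}, "\n";

The key point is the fallback: if no slave has caught up with the session's last write, the read goes to the master and correctness is preserved.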
Recalling our sample program, we could imagine a connector using session load balancing to write the sale table transaction to the master DBMS server, then sending the SELECT to a slave if that slave happened to be caught up for customer "walmart." No program changes are required for this behavior.
The final responsibility is to route traffic around offline replicas. This gets a bit complicated. We need not only state information but an actual state model for DBMS servers. There also needs to be a procedure to tell fabric connectors about a pending change as well as wait for them to reconfigure themselves. Returning to our sample program, it should be possible to execute the following transaction:
$dbh->{'AutoCommit'} = 0;
$dbh->do("INSERT INTO sale(order_id, cust_id, sku, amount) \
          VALUES(2331, 9959, 353009, 24.99)");
$dbh->do("INSERT INTO sale(order_id, cust_id, sku, amount) \
          VALUES(2331, 9959, 268122, 59.05)");
$dbh->commit();
then fail over to a new master and execute:
$dbh->{'AutoCommit'} = 1;
$sth = $dbh->prepare("SELECT * FROM sale WHERE order_id=2331");
$sth->execute();
...
To do this properly we need to ensure that the connector responds to updates in a timely fashion. We would not want to change the fabric topology or take a DBMS server offline while connectors were still using it. The notification protocol that updates connector directory information therefore has to ensure that reconfiguration does not proceed until connectors are ready.
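The details of such a protocol vary by implementation, but the shape is roughly the same: the fabric announces a pending change, each connector stops using the affected server, drains its in-flight work, and acknowledges; only then does the change proceed. Here is a sketch of the connector side of that handshake; the message fields and helper subroutines are invented for illustration.

use strict;
use warnings;

# Sketch of the connector side of a topology-change handshake.
# Message fields and helper subroutines are invented for illustration.
my %online = ( 'prodg23:3306' => 1, 'prodg21:3306' => 1 );

sub mark_offline      { my ($server) = @_; delete $online{$server}; }
sub drain_connections { my ($server) = @_; print "draining sessions on $server\n"; }
sub send_ack          { my ($change) = @_; print "acknowledged change $change\n"; }

sub handle_notification {
    my ($msg) = @_;
    if ($msg->{type} eq 'prepare_failover') {
        # Stop routing new work to the outgoing master, let in-flight
        # transactions finish, then acknowledge so the fabric knows it
        # is safe to proceed with the reconfiguration.
        mark_offline($msg->{old_master});
        drain_connections($msg->{old_master});
        send_ack($msg->{change_id});
    }
    elsif ($msg->{type} eq 'commit_failover') {
        # Adopt the new topology and start routing to the new master.
        $online{ $msg->{new_master} } = 1;
    }
}

# Example: a planned failover from prodg23 to prodg21.
handle_notification({ type => 'prepare_failover', old_master => 'prodg23:3306', change_id => 42 });
handle_notification({ type => 'commit_failover',  new_master => 'prodg21:3306' });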
Does every fabric connector have to work exactly this way? Not at all. So far, we have only been talking about responsibilities. There are many ways to implement them. To start with, fabric connectors do not even need to handle SQL. This is interesting in two ways.
First, you can skip using the Perl DBI completely and use a specialized interface to connect to the fabric. We will see an example of this shortly. Second, the underlying store does not even need to be a SQL database at all. You can use the fabric connector design pattern for other types of stores, such as key-value stores that use the memcached protocol. This series of articles focuses on SQL databases, but the fabric connector design pattern is very general.
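As a small thought experiment, here is the same routing idea applied to a key-value store. The store and its wire protocol are left abstract; the point is only that the connector maps a key to a shard and a shard to a live server, just as it does for SQL. The hashing scheme and server names are assumptions for illustration.

use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

# Illustrative shard layout for a key-value fabric: each shard has a
# preferred server and a fallback replica.
my @shards = (
    { primary => 'kv01:11211', fallback => 'kv02:11211' },
    { primary => 'kv03:11211', fallback => 'kv04:11211' },
);

# Map a key to a shard by hashing, then pick a live server for it.
sub server_for_key {
    my ($key, $down) = @_;
    my $shard = $shards[ hex(substr(md5_hex($key), 0, 8)) % @shards ];
    return $down->{ $shard->{primary} } ? $shard->{fallback} : $shard->{primary};
}

# Example: route two keys while one server is marked down.
my %down = ( 'kv01:11211' => 1 );
print "user:9959  -> ", server_for_key('user:9959',  \%down), "\n";
print "order:2331 -> ", server_for_key('order:2331', \%down), "\n";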
Implementations
Here are a couple of off-the-shelf implementations that illustrate quite different ways to implement the fabric connector design pattern.
1. Tungsten Connector. Tungsten Connector is a Java proxy developed by Continuent that sits between applications and clusters of MySQL or PostgreSQL servers. It implements the MySQL and PostgreSQL network protocols faithfully, so that it appears to applications like a DBMS server.
Tungsten Connector gets directory information from Tungsten clusters. Tungsten clusters use a simple distributed consensus algorithm to keep directory data consistent across nodes even when there are failures or network outages. Connectors can receive topology updates from any node in the cluster, through a protocol that also ensures each connector acts on the update when the cluster reconfigures itself. In this sense, Tungsten clusters implement the fabric directory service described earlier.
The directory information allows the connector to switch connections transparently between servers in the event of planned or even some unplanned failovers. It can also load balance reads automatically using a variety of policies, including the slave load balancing and session load balancing techniques described above.
The big advantage of the network proxy approach is the high level of transparency for all applications. Here is a sample session with the out-of-the-box mysql utility that is part of MySQL distributions. In this sample, we check the DBMS host name using the MySQL show variables command. Meanwhile, a planned cluster failover occurs, followed by an unplanned failover.
mysql> show variables like 'hostname';
+---------------+---------+
| Variable_name | Value   |
+---------------+---------+
| hostname      | prodg23 |
+---------------+---------+
1 row in set (0.00 sec)
(Planned failover to prodg21 to permit upgrade on prodg23)

mysql> show variables like 'hostname';
+---------------+---------+
| Variable_name | Value   |
+---------------+---------+
| hostname      | prodg21 |
+---------------+---------+
1 row in set (0.01 sec)
(Unplanned failover to prodg22)

mysql> show variables like 'hostname';
+---------------+---------+
| Variable_name | Value   |
+---------------+---------+
| hostname      | prodg22 |
+---------------+---------+
1 row in set (4.82 sec)

As this example shows, the session continues uninterrupted as the location of the server switches, whether the failover is planned or unplanned. These changes occur transparently to applications. The downside is some network overhead from the extra hop through the Tungsten Connector, though load balancing of reads can more than repay the added latency. Also, this type of connector is hard to build, because of the complexity of the MySQL network API as well as the logic to transfer connections seamlessly between servers.
2. Gizzard. Gizzard is open-source sharding software developed by Twitter to manage links between Twitter users. The proxy part of the design pattern is implemented by middleware servers, which accept requests from clients using Thrift, a language-independent set of tools for building distributed services. For more on a particular application built on Gizzard, look at descriptions of Twitter's FlockDB service. Gizzard servers give applications a simple API for data services, which fulfills the proxy responsibility of the fabric connector design pattern.
Gizzard servers get directory information using gizzmo. Gizzmo is a simple command line tool that maintains persistent copies of the Gizzard cluster topology and takes care of propagating changes out to individual Gizzard servers. For more on how Gizzmo works, look here. Using this information, Gizzard servers can locate data, route around down servers, and handle distribution of queries to replicas, which are the final three responsibilities of the fabric connector design pattern.
The Gizzard architecture lacks the generality of the Tungsten Connector, because it requires clients to use a specific interface rather than general-purpose SQL APIs. It also introduces an extra network hop. On the other hand, it works extremely well for its intended use case of tracking relationships between Twitter users, because Gizzard deals with a simplified problem and can scale out across many Gizzard servers. As with the Tungsten Connector, the expense of the network hop pays for itself through the ability to load-balance across multiple replicas.
Gizzard is a nice example of how the fabric connector design pattern does not have to be specifically about SQL. Gizzard clients do not use SQL, so the underlying store could be anything. Gizzard is specifically designed to work with a range of DBMS types.
Fabric Connector Implementation Trade-Offs
General-purpose fabric connectors like the one used for Tungsten are hard to implement for a variety of reasons. This approach is really only practical if you have a lot of resources at your disposal or are doing it as a business venture like Continuent. You can still roll your own implementations. The Gizzard architecture nicely illustrates some of the trade-offs necessary to do so.
1. General vs. particular data service interfaces. Implementing a simple data service interface, for example using Thrift, eliminates the complexity of DBMS interfaces like those of MySQL or PostgreSQL. Rather than a Thrift server, you can also use a library embedded within applications themselves, which removes the network hop entirely.
2. Automatic vs. manual failover. Automatic failover requires connectors to respond to fabric topology changes in real time, which is a hard problem with a lot of corner cases. (Look here if you disagree.) You can simplify things considerably by minimizing automated administration and instead orchestrate changes through scripts.
3. Generic vs. application-specific semantics. Focusing on a particular application allows you to add features that are important for specific use cases; Gizzard, for example, supports shard migration. To make that tractable to implement, Gizzard requires a simple update model in which transactions can be applied in any order.
These and other simplifications make the fabric connector design pattern much easier to implement correctly. You can make the same sort of trade-offs for most applications.
Implementations to Avoid
Here are a couple of implementations for the fabric connector design pattern that you should avoid or at least approach warily.
1. Virtual IP Addresses (VIPs). VIPs allow hosts to listen for traffic on multiple IP addresses. They are commonly used in failover schemes, for example with programs like heartbeat. VIPs do not have the intelligence to fulfill fabric connector responsibilities like load-balancing queries across replicas. They are subject to nasty split-brains, a subject I covered in detail in an earlier article on this blog. Finally, VIPs are not available in Amazon and other popular cloud environments. VIPs do not seem like a good implementation choice for data fabrics.
2. SQL proxies. There are a number of software packages that solve the problem of proxying SQL queries, such as PgBouncer or MySQL Proxy. Many of them do this quite well, which means that they fulfill the first responsibility of the fabric connector design pattern. The problem is that they do not have a directory service. This means they do not fulfill the next three responsibilities to route queries effectively, at least out of the box.
Unlike VIPs, SQL proxies can be a good starting point for fabric implementations. You need to add the directory information and notification protocol to make them work. It is definitely quite doable for specific cases, especially if you make the sort of trade-offs that Gizzard illustrates.
Conclusion and Takeaways
The fabric connector design pattern reduces the complexity of applications by encapsulating the logic required to connect to servers in a data fabric. There is a tremendous benefit to putting this logic in a separate layer that you can test and tune independently. Fabric connectors are more common than they appear at first because many applications implement the responsibilities within libraries or middleware servers that include embedded session management. Fabric connectors do not have to expose SQL interfaces or any other DBMS-specific interface, for that matter.
Fault-tolerant and sharded data service design patterns depend on fabric connectors to work properly and avoid polluting applications with complex logic to locate data. Products that implement these design patterns commonly include fabric connector implementations as well. You can evaluate them by finding out how well they fulfill the design pattern responsibilities.
One final point about fabric connectors. Automated failover can make fabric connectors harder to implement and increase the risk that the fabric connector may write to the wrong replica. The difficulty of managing connectivity is one of the reasons many data management experts are very cautious about automatic failover. This problem is tractable in my opinion, but it is definitely a practical consideration in system design.
My next article on data fabrics will cover the fault-tolerant data service design pattern. This design pattern depends on the fabric connector design pattern to hide replicas. I hope you will continue reading to find out about it.