Intro
The open source Tungsten Replicator for MySQL enables multi-master, parallel, and even heterogeneous replication, along with zero-downtime schema upgrades, transaction filtering and more. Yet none of these features have been available for the other elephant in the room, PostgreSQL. At least, that was true until now.
This article series presents my recent extension to Tungsten Replicator, which enables logical replication for PostgreSQL and, effectively, opens up all the advanced features described above. But first, some background...
Background
For a few years now I have been yearning for a user-friendly and flexible replication mechanism for PostgreSQL. Watching Tungsten grow and enhance MySQL replication month after month, PostgreSQL development in this field seemed relatively limited by comparison.
PostgreSQL 9 introduced Streaming Replication and Hot Standby support, which was a big step forward. After implementing support for these features in Tungsten, the transparent connectivity and management features already developed for MySQL opened up for PostgreSQL too. Tungsten Replicator supports plugins, so the major work I did was in Ruby scripts that automatically manage WAL Shipping and Streaming Replication. They connected PostgreSQL to Tungsten features like transparent automatic failover, smart read load balancing, zero-data-loss master/slave switching and everything else that can work on a physical replication pair.
Nevertheless, to unleash the full power of DBMS clustering, you need logical replication.
Physical vs. Logical Replication
To make things simple:
- Logical replication - data from one DBMS is copied to another as transactions. One can actually see each transaction as SQL statements or row-change structures. This is where MySQL's statement-based and row-based replication fall in.
- Physical replication - data from one DBMS is copied to another as differences in the file system or table spaces. In other words, these events do not carry enough transactional context to reconstruct the original statement or row-change structure, so they can only be applied to a slave DBMS that is a bit-for-bit copy of the master. This is where PostgreSQL's WAL Shipping and Streaming Replication fall in.

With physical replication it is therefore impossible to:
- Have different data and/or schema on the slave. In fact, one can't change data on a slave at all (e.g. a standby PostgreSQL server is read-only if you use Streaming Replication).
- Filter (transform) events on the fly. In practice, one can't even see what is actually being replicated via physical replication...
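The distinction above can be made concrete with a small sketch. The structures below are purely illustrative (they are not actual Tungsten or PostgreSQL types): a logical event carries enough named context to inspect and transform, while a physical event is just opaque bytes that only make sense on an identical copy of the master.

```python
from dataclasses import dataclass

# Hypothetical structures for illustration only -- not actual Tungsten types.

@dataclass
class LogicalEvent:
    """A logical replication event: enough context to rebuild the change."""
    schema: str
    table: str
    operation: str   # e.g. "INSERT", "UPDATE", "DELETE"
    row: dict        # column name -> new value

@dataclass
class PhysicalEvent:
    """A physical replication event: opaque WAL bytes at some offset."""
    wal_offset: int
    payload: bytes   # meaningless outside a bit-for-bit copy of the master

logical = LogicalEvent("public", "accounts", "UPDATE", {"id": 7, "balance": 120})
physical = PhysicalEvent(0x16B2C38, b"\x00\x1f\x8a")

# A filter can inspect and transform the logical event...
assert logical.table == "accounts" and logical.operation == "UPDATE"
# ...but a physical payload can only be passed through untouched.
```

This is exactly why features like filtering and heterogeneous replication require the logical form.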
Introducing Tungsten PostgreSQL Extractor
For MySQL, Tungsten Replicator extracts directly from the MySQL binary log files. This is possible because the binary logs contain enough transactional information. Unfortunately, the same cannot be said about PostgreSQL XLOG files. The solution? Revert to good old trigger-based replication; specifically, the time-proven Slony.
One might say that "Slony is complex" or "hard to administer". But, in fact, the only thing I will use are the Slony logs. That is, there won't be any Slony replication daemons set up or running - only the triggers on the master will be reused. Here's what the complete flow looks like:
From a transaction's perspective:
- A transaction is completed on the master.
- Slony triggers fire and save the row changes in the sl_log_1 or sl_log_2 tables. This is where Slony's responsibilities end.
- Tungsten's PostgreSQLSlonyExtractor reads from the log tables and structures each change as a DBMSEvent. From here on, generic Tungsten components do the remaining work.
- The event passes filtering and is saved in the THL (Transaction History Log).
- The slave Tungsten Replicator extracts the event remotely and passes it through its pipeline down to the PostgreSQLApplier, which executes the statement against the slave DBMS.
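The steps above can be sketched in miniature. This is a simplified, hypothetical model of the flow, not real Tungsten code: Slony triggers have already written row changes into sl_log_1/sl_log_2, so we mimic reading them, wrapping them into generic events, filtering, storing them in a THL-like list, and applying them on a "slave". The column names loosely follow Slony's log tables; the filter rule is invented for illustration.

```python
# What a SELECT from sl_log_1 might conceptually return (illustrative rows).
slony_log = [
    {"log_txid": 101, "tablename": "accounts",
     "cmdtype": "I", "cmddata": "(id,balance) values ('7','100')"},
    {"log_txid": 102, "tablename": "audit_tmp",
     "cmdtype": "I", "cmddata": "(id) values ('1')"},
]

def extract(rows):
    """PostgreSQLSlonyExtractor stand-in: Slony log rows -> generic events."""
    for r in rows:
        yield {"seqno": r["log_txid"], "table": r["tablename"],
               "type": r["cmdtype"], "data": r["cmddata"]}

def keep(event):
    """Example filter stage: drop changes to scratch tables."""
    return not event["table"].endswith("_tmp")

# Events pass filtering and land in a THL-like store...
thl = [e for e in extract(slony_log) if keep(e)]

# ...from which a slave-side applier stand-in executes them in order.
for event in thl:
    print(f"apply seqno={event['seqno']} table={event['table']}")
```

Running this prints one `apply` line for the `accounts` change; the `audit_tmp` change is filtered out before it ever reaches the THL, which is the kind of per-transaction control physical replication cannot offer.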
The new component here is the PostgreSQLSlonyExtractor, whose initial implementation you may find at our code.google.com homepage. What is really nice is that this approach stacks transparently into the whole Tungsten Replicator architecture and connects to all the components that are available, and still being developed, for MySQL.
PostgreSQLSlonyExtractor vs. Streaming Replication
As mentioned above, Tungsten can already manage Streaming Replication, which provides many features, but having each distinct transaction go through the Replicator engine opens up a whole new dimension. To give you a feel for the potential, below is a matrix comparing the features that are possible to implement:
| Feature | Physical Replication (Streaming Replication) | Logical Replication (PostgreSQLSlonyExtractor) |
| --- | --- | --- |
| High availability | + | + |
| Read performance scaling | + | + |
| Zero-downtime upgrade | ± (H/W, OS only) | + (PostgreSQL & schema too) |
| Cross-version replication | - | + |
| Heterogeneous replication | - | + |
| Multi-master replication | - | + |
| Parallel replication | - | + |
| Write performance scaling (sharding) | - | + |
Conclusion - To Be Continued
We have briefly reviewed the new logical replication method in Tungsten Replicator for PostgreSQL, based on Slony logs. The real details and a setup example are left for your enjoyment in the second part of this article. You are more than welcome to stay tuned!
Please define:
multi-master replication
parallel replication
I've never used Tungsten before. Can you talk more about the multi-master replication? That sounds really interesting.
@boxman, @Aditya.S.Mukherjee:
Parallel replication, in simplified terms, is a technique in which the apply stage is spread across multiple threads.
Use case example: a SaaS database where each customer has its own independent schema. Parallel replication makes it possible to commit each customer's transactions independently of, and in parallel with, the others'.
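A minimal sketch of that idea, under assumed setup (this is not Tungsten's actual implementation): transactions are routed to per-schema queues, so each customer's stream is applied in order by its own thread, while different customers' streams apply in parallel.

```python
import queue
import threading

# Incoming replication stream: (shard/schema, transaction id), in commit order.
transactions = [
    ("customer_a", "tx1"), ("customer_b", "tx2"),
    ("customer_a", "tx3"), ("customer_c", "tx4"),
]

applied = []                 # records apply order across all workers
lock = threading.Lock()

def worker(q):
    """One apply thread per shard: drains its queue in FIFO order."""
    while True:
        tx = q.get()
        if tx is None:       # shutdown signal
            break
        with lock:
            applied.append(tx)   # stand-in for "apply and commit on the slave"

queues, threads = {}, []
for schema, tx in transactions:
    if schema not in queues:     # lazily create one queue + thread per shard
        queues[schema] = queue.Queue()
        t = threading.Thread(target=worker, args=(queues[schema],))
        t.start()
        threads.append(t)
    queues[schema].put((schema, tx))

for q in queues.values():
    q.put(None)                  # tell each worker to finish
for t in threads:
    t.join()

# Per-shard commit order is preserved: tx1 applies before tx3 for customer_a,
# even though customer_b and customer_c proceed in parallel.
assert applied.index(("customer_a", "tx1")) < applied.index(("customer_a", "tx3"))
```

The key design point is that ordering is only guaranteed within a shard, which is exactly why the independent-schema SaaS case is such a good fit.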
Please find more detailed articles by my colleague Robert Hodges below:
Parallel Replication Using Shards Is the Only Workable Approach for SQL
Tuning Tungsten Parallel Replication Performance
Multi-master replication, on the other hand, enables multiple writable masters in the same cluster. Each master accepts writes into a particular data set; when a transaction completes, it is replicated out to the other nodes, which are slaves from this transaction's point of view.
Use case example: geographically distributed active database replicas, where each site accepts writes into its own managed data set and replicates them out to the other sites.
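The data-set ownership idea can be shown with a toy sketch (hypothetical, not Tungsten code): each site is the master for its own data set, commits locally, and replicates completed transactions to every other site, where they are applied as slave events.

```python
sites = {"eu": {}, "us": {}}              # site name -> local copy of all data
ownership = {"eu_customers": "eu",        # which site masters which data set
             "us_customers": "us"}

def write(site, data_set, key, value):
    """Accept a write only on the owning master, then replicate it out."""
    if ownership[data_set] != site:
        raise ValueError(f"{site} is not the master for {data_set}")
    sites[site].setdefault(data_set, {})[key] = value    # local commit
    for other, store in sites.items():                   # replicate out
        if other != site:
            store.setdefault(data_set, {})[key] = value  # slave-side apply

write("eu", "eu_customers", 1, "Alice")
write("us", "us_customers", 9, "Bob")

# Both sites converge on the full data set, with no conflicting writes
# because every data set has exactly one writable master.
assert sites["eu"] == sites["us"]
```

Partitioning writes by ownership like this sidesteps conflict resolution entirely; overlapping writable data sets would need the conflict-handling approaches discussed in the links below.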
More details at:
MySQL replication for demanding users by Giuseppe Maxia
Slouching towards Multi-Master Conflict Resolution by Robert Hodges