Migrating SQL Server HierarchyID to PostgreSQL LTREE using AWS DMS (Option 2: Ongoing Migration)
Alexander Witte

Introduction
In the previous blog post an approach was presented for a full load transformation of the HierarchyID to LTREE data type between Microsoft SQL Server and PostgreSQL using AWS Database Migration Service. As DMS doesn’t currently support this type of transformation, some custom SQL had to be written to convert HierarchyID to LTREE syntax and to cast the data.
We may be in a situation, however, where after our full load migration, additional changes to the source system need to be replicated to the target. This is a CDC (change data capture) style of migration, and DMS supports it as an "ongoing migration".
If your business is facing this challenge, the first approaches to evaluate would be running the CDC job from a materialized view on the source system, or using a transformation expression in a DMS task as described in this blog post. There may be cases, however, where we don't want to (or can't) make alterations to the source system. This blog post presents an alternative way of handling this by chaining DMS tasks and using a database trigger.
Quick HierarchyID and LTREE recap
In a nutshell, HierarchyID is a hierarchical data type within Microsoft SQL Server. It is stored as a binary-encoded value but has a slash-delimited numerical notation, e.g. /1/2/2/6/1/, where / is the root, /1 is the root's first child, and so on.
LTREE is its comparable data type in PostgreSQL. It uses a different notation, delimited by periods instead of slashes. The labels can also be any string, not just numbers, so for example disneyland.mickeymouse.human.heatstroke is completely valid.
We're using a numerical notation for our LTREE paths (e.g. 0.1.2.3.4) to keep them consistent with the HierarchyID values, but as noted above they could be any collection of strings.
So when we look at converting a HierarchyID to an LTREE we first need to convert the string syntax and then cast the data type; it’s a two step process.
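To make that concrete, here's a minimal sketch of both steps in plain PostgreSQL, assuming the ltree extension is installed and the HierarchyID value has already arrived as text:

-- Step 1: rewrite the slash syntax into LTREE syntax, rooting the path at '0'
-- Step 2: cast the resulting string to the ltree type
select ('0' || replace(trim(trailing '/' from '/1/2/2/6/1/'), '/', '.'))::ltree;
-- returns '0.1.2.2.6.1'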
Our Challenges
Setting up a continuous replication task with DMS between tables with incompatible data types presents several challenges:
DMS Task Challenge 1
DMS does not support the LTREE as a target data type.
When we perform a heterogeneous migration with DMS, it converts data through an intermediate set of supported data types between the source and target. As AWS puts it:
“AWS DMS maintains data types when you do a homogeneous database migration where both source and target use the same engine type. When you do a heterogeneous migration, where you migrate from one database engine type to a different database engine, data types are converted to an intermediate data type.”
What DMS will do in our case is convert the HierarchyID data type to a STRING, but a) this string is not in the correct LTREE string syntax and b) DMS cannot cast this back into an LTREE data type for us. So we’ll need to do some tomfoolery to get this to work.
DMS Task Challenge 2
Using multiple SQLite expressions in a DMS task
As mentioned in the previous blog post, Step 1 of this process is to convert our HierarchyID string syntax (i.e. /1/2/3/) into our LTREE string syntax (0.1.2.3). We could perform this simple string transformation as a SQLite expression within DMS. That would certainly be an efficient and cool way of doing it (I have tested this and it worked fine), and I would recommend that approach if there is only one HierarchyID column in your MSSQL table. The problem I found when trying to migrate a table with multiple HierarchyID columns, however, is that the multiple SQLite statements seemed to concatenate into one and caused me problems. So for this article we'll tackle that part of the migration with a database trigger. I may circle back and play around with SQLite some more, and if I get different results I'll update this blog.
DMS Task Challenge 3
No Support for the update-column action
Another gotcha, which we won't actually run into but which relates to using SQLite so I'll mention it: DMS cannot transform the contents of a column and then write the output to a target column with the same name as the source (which is what we'd need when transforming the HierarchyID syntax /1/ to the LTREE syntax 0.1). For example, transforming the contents of a column while outputting to the same column name like this won't work:
“ColumnA” (source) -> DMS transforms the contents -> “ColumnA” (target)
When we transform the contents of a column it needs to be written to a new column (through the “add-column” DMS rule action). If we want our target to have the same column names as our source (which we do in this case) we’ll need to introduce a staging area.
Now, we won't be using SQLite in this blog (see DMS Task Challenge #2), so this problem doesn't apply to us. However, since we won't be doing the string conversion in DMS (or in the source database), we'll need to do it in the target database (PostgreSQL). So we'll need a staging table anyway in order to perform the string conversion (using a PostgreSQL trigger). This will become clearer in the next section.
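For illustration only (since we're taking the trigger route), a single-column version of the SQLite/add-column approach might look something like the rule below. The new column name node_ltree is hypothetical, and the expression mirrors the string conversion our trigger will perform; treat this as a sketch rather than a tested rule:

{
  "rule-type": "transformation",
  "rule-id": "3",
  "rule-name": "addltreecolumn",
  "rule-target": "column",
  "object-locator": {
    "schema-name": "dbo",
    "table-name": "earth"
  },
  "rule-action": "add-column",
  "value": "node_ltree",
  "expression": "'0' || replace(substr($node, 1, length($node)-1), '/', '.')",
  "data-type": {
    "type": "string",
    "length": 50
  }
}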
PostgreSQL Challenge
PostgreSQL trigger limitations
PostgreSQL supports BEFORE triggers that fire before the data is written to the table, right? Can't we just have our DMS task send the output (in HierarchyID string format) to our PostgreSQL table and have a BEFORE trigger fix the string format and cast it to an LTREE before the row is written?
Nope. PostgreSQL coerces incoming values to each column's declared type before a BEFORE trigger ever fires, so a HierarchyID-formatted string aimed at an LTREE column is rejected by the ltree input parser before our trigger code could touch it. So this will need to be a two-step process: Step 1) have a staging table reformat the string, and Step 2) have a secondary DMS task send the LTREE-string-formatted data to a final destination table with the LTREE data type.
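You can see the limitation directly. Assuming a table with an LTREE column named node (like the dbo.earth table we create later), the insert below never reaches the trigger:

-- Rejected by the ltree input parser before any BEFORE trigger fires,
-- failing with an ltree syntax error:
insert into dbo.earth (node, name) values ('/1/2/', 'neptune');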
Interestingly, the concept of creating a staging table is also presented elsewhere to handle use cases like masking PII data during a CDC migration.
Migration Architecture
Phew! Taking the challenges above into consideration, here is the approach we'll take for the migration:
- Use three tables: our source MSSQL table, our target PostgreSQL table and a staging table that will convert the HierarchyID string format to LTREE string format using a PostgreSQL BEFORE trigger.
- Use two DMS tasks: one to replicate changes from our source MSSQL table to the PostgreSQL staging table, and a second to replicate changes from the staging table to our PostgreSQL target table.
For this blog we will be migrating the earth table in MSSQL through to the earth_staging table in PostgreSQL (on AWS Aurora). The source table contains a column called "node" of the HierarchyID data type. The earth_staging table in PostgreSQL carries a database trigger that converts the string syntax. Once a row is written to earth_staging, a second DMS CDC task migrates it through to the earth table, where it is written as an LTREE. The tables live within the dbo schema in both databases.
We are explicitly not including a full load migration in this architecture. That would be performed separately, as we wouldn't want triggers firing on a full load of potentially millions of records. AWS recommends enabling triggers only after a full load migration has occurred. I have also found that the trigger does not fire during a full load migration; I'm not exactly sure why.
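If you do need to pair a full load with this design, one option (a sketch I haven't battle-tested) is to disable the trigger for the duration of the load, then convert the loaded rows in a single pass and re-enable it:

-- Disable the trigger while the full load task runs
alter table dbo.earth_staging disable trigger hierarchy_ltree_conversion_trigger;

-- ...run the full load task...

-- Convert any rows still in HierarchyID syntax
update dbo.earth_staging
set node = case
             when node = '/' then '0'
             else '0' || replace(left(node, length(node) - 1), '/', '.')
           end
where node like '/%';

alter table dbo.earth_staging enable trigger hierarchy_ltree_conversion_trigger;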
Walkthrough
Enable PGLOGICAL for PostgreSQL
In my environment I’ve used the PostgreSQL variant of AWS Aurora. To enable PostgreSQL as an ongoing replication source we’ll need to use the PGLOGICAL plugin and enable replication. In Aurora this can be done via parameter groups.
Within our cluster parameter group we need to set rds.logical_replication to 1.
rds.logical_replication = 1
(this is a static parameter, so it takes effect only after the instances in the cluster reboot)
We can add the PGLOGICAL plugin to our Aurora Writer instance by adding pglogical to the shared_preload_libraries parameter in our DB parameter group:
shared_preload_libraries = pglogical
Finally, we need to create the extension by running a SQL command:
create extension pglogical;
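Once the instances have rebooted, it's worth verifying that the settings took effect:

show rds.logical_replication;   -- should return "on"
show shared_preload_libraries;  -- the list should include pglogical
select extname, extversion from pg_extension where extname = 'pglogical';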
Source Table and MS-CDC
Let's create our MSSQL earth table if it wasn't created in the previous blog. We'll also need to set up MSSQL for ongoing replication. We have a couple of options here: MS-Replication and MS-CDC. Making changes to MS-Replication requires sysadmin privileges, which we don't get in an AWS RDS environment, so we get to use MS-CDC. We also need to change the retention period for transactions in the T-Log. More info can be found here. Setting all this up is fairly straightforward. See the SQL below:
create database migrationdemo;
GO

use migrationdemo
GO

create table earth (
    id int identity(1,1) primary key,
    node hierarchyid,
    name varchar(20)
)
GO

exec msdb.dbo.rds_cdc_enable_db 'migrationdemo'
GO

exec sys.sp_cdc_enable_table
@source_schema = N'dbo',
@source_name = N'earth',
@role_name = NULL,
@supports_net_changes = 1
GO

EXEC sys.sp_cdc_change_job @job_type = 'capture', @pollinginterval = 3599;
GO
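Before wiring up DMS, a couple of quick checks confirm that CDC is enabled at both the database and the table level:

select name, is_cdc_enabled from sys.databases where name = 'migrationdemo';
GO

select name, is_tracked_by_cdc from sys.tables where name = 'earth';
GO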
Create PostgreSQL tables
Now, in our PostgreSQL (Aurora) system, we'll create the LTREE extension, the dbo schema, and the earth_staging and earth tables:
create schema dbo;

create extension ltree schema public;

create table dbo.earth (
    id serial primary key,
    node ltree,
    name character varying(20)
);

create table dbo.earth_staging (
    id serial primary key,
    node character varying(20),
    name character varying(20)
);
Create PLPGSQL Trigger
Ok, now that our staging table has been created, we need to create our trigger:
CREATE OR REPLACE FUNCTION dbo.hierarchy_ltree_conversion()
RETURNS TRIGGER
LANGUAGE plpgsql
AS
$$
BEGIN
  IF NEW.node IS NOT NULL THEN
    -- Rewrite HierarchyID syntax (e.g. '/1/2/') into LTREE string syntax
    -- ('0.1.2'), mapping the root '/' to '0'.
    IF (TG_OP = 'INSERT') THEN
      NEW.node =
        CASE
          WHEN NEW.node = '/' THEN '0'
          ELSE '0' || REPLACE(LEFT(NEW.node, LENGTH(NEW.node) - 1), '/', '.')
        END;
    -- On updates, only rewrite when the node value actually changed.
    ELSIF (TG_OP = 'UPDATE' AND NEW.node <> OLD.node) THEN
      NEW.node =
        CASE
          WHEN NEW.node = '/' THEN '0'
          ELSE '0' || REPLACE(LEFT(NEW.node, LENGTH(NEW.node) - 1), '/', '.')
        END;
    END IF;
  END IF;

  RETURN NEW;
END;
$$;

CREATE TRIGGER hierarchy_ltree_conversion_trigger
BEFORE INSERT OR UPDATE ON dbo.earth_staging
FOR EACH ROW
EXECUTE FUNCTION dbo.hierarchy_ltree_conversion();
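Before starting any DMS tasks, it's worth sanity-checking the trigger with a throwaway row (deleting it afterwards so it doesn't replicate once Task 2 is running):

insert into dbo.earth_staging (node, name) values ('/1/2/', 'test');
select node from dbo.earth_staging where name = 'test';  -- expect '0.1.2'
delete from dbo.earth_staging where name = 'test';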
Create DMS Tasks
Ok, we're almost there! This blog assumes you have already stood up DMS along with the source and target endpoints. We'll need a source endpoint for MSSQL, and both a source AND a target endpoint for PostgreSQL. Check the accompanying CloudFormation template if you need clarification on this. Once the connections are established, we just need to create both tasks:
Task 1: (source table to staging table)
Type: CDC
Mappings:
{
  "rules": [
    {
      "rule-type": "transformation",
      "rule-id": "2",
      "rule-name": "tablerename",
      "rule-target": "table",
      "object-locator": {
        "schema-name": "%dbo",
        "table-name": "%earth"
      },
      "rule-action": "rename",
      "value": "earth_staging",
      "old-value": null
    },
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "tableselect",
      "object-locator": {
        "schema-name": "%dbo",
        "table-name": "%earth"
      },
      "rule-action": "include",
      "filters": []
    }
  ]
}
Source Endpoint: MSSQL
Target Endpoint: Aurora (PostgreSQL) Target
Everything else is left as default.
Start the task.
Task 2: (staging table to destination table)
Type: CDC
Mappings:
{
  "rules": [
    {
      "rule-type": "transformation",
      "rule-id": "2",
      "rule-name": "tablerename",
      "rule-target": "table",
      "object-locator": {
        "schema-name": "%dbo",
        "table-name": "%earth_staging"
      },
      "rule-action": "rename",
      "value": "earth",
      "old-value": null
    },
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "tableselect",
      "object-locator": {
        "schema-name": "%dbo",
        "table-name": "%earth_staging"
      },
      "rule-action": "include",
      "filters": []
    }
  ]
}
Source Endpoint: Aurora (PostgreSQL) Source
Target Endpoint: Aurora (PostgreSQL) Target
Everything else is left as default.
Start the task.
Test
OK, let's take this for a spin after the infrastructure deployment and table creation.
First, let's insert a row into our MSSQL table and check that it's there:
insert into dbo.earth (node, name)
VALUES ('/1/2/', 'neptune')
GO

select * from dbo.earth
GO
id | node | name |
---|---|---|
1 | 0x5B40 | neptune |
Now let’s pop over to our PostgreSQL earth table and check that we have the record as an LTREE:
select * from dbo.earth;
id | node | name |
---|---|---|
1 | 0.1.2 | neptune |
Looks good!
Now let’s head back to MSSQL and update our row:
UPDATE dbo.earth
SET node = '/1/4/6/'
WHERE id = 1
GO

select * from earth
GO
id | node | name |
---|---|---|
1 | 0x5C3280 | neptune |
And check in PostgreSQL:
select * from dbo.earth;
id | node | name |
---|---|---|
1 | 0.1.4.6 | neptune |
And finally let’s delete this item we’ve created:
DELETE FROM dbo.earth WHERE id = 1
GO

select * from dbo.earth
GO
We should now have an empty table.
And confirm that it isn’t there in our destination table:
select * from dbo.earth;
Awesome! If you run into problems, keep an eye on the earth_staging table as well, to see whether records are making it that far. Another place to watch is the state of your DMS tasks: if the DB replication settings aren't configured correctly, the tasks will fail. Task logging helps here. I could honestly write a completely separate blog on DMS troubleshooting…
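On the PostgreSQL side, one more thing worth checking when Task 2 misbehaves is whether DMS managed to create (and keep active) a logical replication slot on the Aurora source:

-- An inactive or missing slot points at replication/endpoint configuration issues
select slot_name, plugin, slot_type, active from pg_replication_slots;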
Conclusion
I’ve found that performing any type of database migration with DMS is a minefield of “gotchas!”. It’s very important to thoroughly review the AWS documentation because different databases have varying characteristics and the devil is certainly in the details.
Here we've presented a way to migrate the HierarchyID data type to the LTREE data type using triggers and multiple DMS tasks. This is a fairly complex migration process, and I would again encourage you to investigate the simpler alternatives if your circumstances allow it. However, if your source database is close to untouchable or those other methods don't work, hopefully this blog has presented you with another way to take on this type of migration.
Using The Included CloudFormation Template
There is an included CloudFormation template I have been using to test out this migration. It installs the requisite infrastructure only. Specifically, this stack will deploy:
- An RDS SQL Server database, Aurora PostgreSQL database, parameter groups
- All networking: VPC, subnets, RDS subnet groups, route tables, security groups etc
- All DMS resources: DMS instance, DMS endpoints, tasks
Some things of note if you decide to use this:
- This deploys SQL Server Standard edition (required for MS-CDC) and it is not cheap!
- There is no management server instance included containing SQL clients (e.g. SSMS). You're on your own for that.
- This stack is certainly not hardened for security. I have some wide open security group rules. Use with caution.
- All the DDL and other SQL commands need to be applied manually. I was way too lazy to include custom resources in CloudFormation. With this in mind, the DMS tasks will be created pointing at databases (e.g. "migrationdemo") that don't exist until you run the DDL. After deploying the CF template and running the DDL, run a "test connection" on the DMS endpoints via the DMS console.
- Stop the DMS tasks before you delete the stack.