I’ve been “fortunate” enough to be working a great deal with Change Tracking recently and have run into a limitation that it’s worth keeping an eye out for on your servers.
If you have a database with Change Tracking enabled, and the total number of transactions across all of the tracked tables in that database exceeds 14.4 million a day, then you are going to be in for some hurt.
The problem is that the process that cleans up the CT data in the (internal) syscommittab table only runs once a minute, and only removes a maximum of 10,000 rows on each run.
10,000 (rows) x 60 (minutes an hour) x 24 (hours a day) = 14,400,000 rows a day
Given that cleanup cannot clear rows any faster than that, the internal table will continue to grow over time, and that growth will slow down other cleanup processes as well as your CT queries. This is definitely an issue in SQL 2012; I have not yet checked other versions.
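One rough way to gauge whether your workload is over that ceiling (this is my own sketch, not a supported check) is to count the commits recorded in sys.dm_tran_commit_table over the last 24 hours and compare the result to 14,400,000:

```sql
-- Sketch: count commits recorded in the last 24 hours and compare
-- against the ~14.4 million/day cleanup ceiling.
-- Assumes commit_time is UTC, hence the comparison to getutcdate().
select count(*) as CommitsLast24h,
       case when count(*) > 14400000
            then 'Cleanup cannot keep up'
            else 'Within cleanup capacity'
       end as Assessment
from sys.dm_tran_commit_table
where commit_time > dateadd(hour, -24, getutcdate());
```

Note that this only approximates the inflow; rows already backed up in syscommittab from earlier days make the situation worse than the daily count alone suggests.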
Here’s a quick screen grab where you can see the cleanup and the number of rows cleaned up:
You can see whether you are running into this limitation by checking the age of the earliest syscommittab row and seeing if it exceeds your retention time:
select datediff(hour, commit_time, getutcdate())
from sys.dm_tran_commit_table
where commit_ts = (select min(min_valid_version) from sys.change_tracking_tables)
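To compare that age against what you have actually configured, you can pull the retention settings from the sys.change_tracking_databases catalog view (a quick sketch; if the oldest row's age in hours exceeds the configured retention, cleanup is falling behind):

```sql
-- Retention settings for every database with Change Tracking enabled.
select db_name(database_id) as DatabaseName,
       is_auto_cleanup_on,
       retention_period,
       retention_period_units_desc
from sys.change_tracking_databases;
```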
You can also check the size and row count of the syscommittab table to gauge whether it's something you should be concerned about:
select t.name as CTName
     , sum(p.rows) as ChangeTrackingRows
     , convert(numeric(10,2), (sum(u.total_pages) * 8) / 1024.0) as SizeMB
from sys.internal_tables t
join sys.partitions p on t.object_id = p.object_id
join sys.allocation_units u on (p.hobt_id = u.container_id or p.partition_id = u.container_id)
left join sys.change_tracking_tables c on t.parent_object_id = c.object_id
where t.name = 'syscommittab'
group by t.name
I have a whole bunch more stuff coming on Change Tracking (including an undocumented workaround for this issue), but for now this is important to know.