
# Implementing checkpointing in long-running updates

*2017-05-10*

Occasionally we need to run updates or inserts on large numbers of records, something which has a nasty habit of taking out locks that prevent other sessions from updating or inserting in the same tables. Also, if we terminate such an operation, we have to start all over again from the beginning, wasting a lot of time.

Luckily, with Read Committed Snapshot Isolation, which is supported by AX, readers are not blocked by writers, but writes are still blocked by other writes. Once SQL Server escalates the row-level locks to table-level locks, things start to go downhill rather quickly, and as we're pretty much guaranteed not to have an outage window in which to run the update operation, someone is going to be on the phone soon. Quite soon...

The solution is to break the operation into smaller transactions.

### Introducing checkpointing

Checkpointing is a process by which we establish a point from which a long-running job can be resumed, **without losing any of the work done prior to the checkpoint**.

This technique can only be used if it is not important that the entire set of changes be handled as a single transaction: for example, a mass update to initialise the value of a new field, when that field is not yet in use.

What we need is to break the update into smaller transactions that can be committed individually.
Each transaction must be small enough that we don't take out too many or too broad locks before committing, and small enough that the transaction tracking overhead in SQL Server doesn't cause any problems (for massive transactions, that overhead can become significant). The transactions still need to be large enough that we don't incur additional overhead from starting and committing new transactions too often. In the example job below, I use 500 records as the chunk size.

```
static void EB_InitWorkBasketPaymentDate(Args _args)
{
    EB_TransactionTable trans;
    int                 i;

    ttsBegin;

    // Include a where-clause to exclude any records we have already updated.
    // This makes the process resumable.
    while select forUpdate trans
        where trans.FieldToUpdate == ""
    {
        // Perform the field updates needed.
        // I call a calculation method here, but use whatever way you prefer.
        trans.FieldToUpdate = trans.calcFieldToUpdate();

        // This test is optional, of course, but it is beneficial to only call
        // update() when the value has actually changed.  It speeds up the
        // operation as it results in fewer database calls.  It also causes
        // fewer locks.
        if (trans.FieldToUpdate != trans.orig().FieldToUpdate)
        {
            trans.doUpdate(); // Or update(), depending on your requirements

            i++; // Increase the counter
            if ((i mod 500) == 0) // Check if we have an even 500 updates
            {
                // This may look counter-intuitive, as ttsCommit comes before
                // ttsBegin, but we're committing the current transaction and
                // then starting a new one.
                ttsCommit;
                ttsBegin;
                // If you're not sure how it works, step through it in the
                // debugger while monitoring the results in the database with
                // SSMS or similar.
            }
        }
    }

    // Final commit.
    ttsCommit;
}
```

The trick is in the ttsCommit/ttsBegin pair in the middle of that code.

If the job is terminated part way through, all the work done up until the last ttsCommit has already been committed to the database, and does not have to be done again.
In the example, I have included a where-clause to exclude any records that have already been updated.

No XPO this time, as you should be able to just copy/paste and adapt the example to your needs.
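The same pattern translates outside X++ as well. Below is a minimal sketch in Python with SQLite (the `trans` table, `field_to_update` column, and the calculated value are made up for illustration; in AX the transaction boundaries would be ttsBegin/ttsCommit instead of explicit commits):

```python
import sqlite3

def checkpointed_update(conn, chunk_size=500):
    """Update rows in chunks, committing after every chunk.

    The WHERE clause skips rows that are already updated, so the job can
    safely be re-run after an interruption and resumes where it left off.
    """
    cur = conn.cursor()
    updated = 0
    while True:
        # Fetch one chunk of not-yet-updated rows.
        rows = cur.execute(
            "SELECT id FROM trans WHERE field_to_update = '' LIMIT ?",
            (chunk_size,),
        ).fetchall()
        if not rows:
            break
        for (row_id,) in rows:
            # Stand-in for the real calculation of the new field value.
            cur.execute(
                "UPDATE trans SET field_to_update = 'calculated-' || id"
                " WHERE id = ?",
                (row_id,),
            )
            updated += 1
        conn.commit()  # checkpoint: everything up to here is now durable
    return updated

# Demo with an in-memory database and 1200 rows (chunks of 500, 500, 200).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE trans (id INTEGER PRIMARY KEY, field_to_update TEXT DEFAULT '')"
)
conn.executemany("INSERT INTO trans (id) VALUES (?)", [(i,) for i in range(1, 1201)])
conn.commit()
print(checkpointed_update(conn))  # 1200
```

Because the selection criterion excludes already-updated rows, a second run after a crash simply picks up the remaining rows; once everything is done, a re-run finds nothing to do.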