Saturday, 29 March 2025

Introduction to Dynamics 365 CE Data Migration using ADF


Dynamics 365 CE Data Migration using ADF can be necessary for various reasons, such as archiving historical data, integrating with other systems, or performing data analysis in a different environment. However, extracting and transforming data from Dynamics 365 Customer Engagement (CE) can be challenging due to its complex data model and security considerations.

Azure Data Factory (ADF) offers a robust solution with its Metadata-Driven Pipeline feature. This approach enables you to define your data extraction process using metadata, including source entities, data fields, and transformations, stored in a control table rather than hard-coded into the pipeline.

By leveraging this approach, you can:

  • Streamline data extraction: Automate the process of identifying and retrieving data from Dynamics 365 CE.
  • Improve data quality: Ensure data consistency and accuracy during the extraction process.
  • Enhance security: Implement robust security measures to protect sensitive data during migration.
  • Increase efficiency: Accelerate the data extraction process and reduce manual effort.

In this blog, we will move Case records from Dataverse to an Azure Storage Account container as a JSON blob file.

READ – Filtering Dynamics 365 CE in Canvas Apps with Option-Sets Tutorial

Steps for Dynamics 365 CE Data Migration using ADF

Step 1:

Visit your Azure Resource group and create a Data Factory resource. Once deployed, launch the studio.

Azure Resource Group
Create Data Factory Resource

Step 2:

In the ADF studio, create Linked Services for your Dynamics 365 CE (Dataverse) environment and for the target Azure Storage Account blob container.

Dataverse Linked Service:

Dataverse Linked Service

It is highly recommended to choose Service Principal as the Authentication Type. Note that the service principal must first be added as an application user with a suitable security role in your Dataverse environment. Once you have entered the values for the Application (client) ID and the secret key, test the connection.

Service Principal as Authentication Type
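Under the hood, an ADF linked service is just a JSON definition. For reference, a rough sketch of what the Dataverse linked service above amounts to, with the service URI, Application ID, and secret as placeholders (in practice, keep the secret in Azure Key Vault):

{
  "name": "DataverseLinkedService",
  "properties": {
    "type": "CommonDataServiceForApps",
    "typeProperties": {
      "deploymentType": "Online",
      "serviceUri": "https://<yourorg>.crm.dynamics.com",
      "authenticationType": "AADServicePrincipal",
      "servicePrincipalCredentialType": "ServicePrincipalKey",
      "servicePrincipalId": "<application-id>",
      "servicePrincipalCredential": {
        "type": "SecureString",
        "value": "<secret-key>"
      }
    }
  }
}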

Azure Storage Account Linked Service:

Azure Storage Account New Linked Service
Azure Storage Account Linked Service
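The storage-side linked service is similarly small. A minimal sketch assuming account-key authentication (the account name and key are placeholders; a Key Vault reference is preferable in practice):

{
  "name": "AzureBlobStorageLinkedService",
  "properties": {
    "type": "AzureBlobStorage",
    "typeProperties": {
      "connectionString": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"
    }
  }
}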

Step 3:

Once both Linked Services are created, navigate back to your ADF studio, click on the Ingest option, and choose the Metadata-driven copy task. From there, we will set up the Control Table.

Ingest Option
Metadata-driven Copy Task

Step 4:

As a prerequisite, have an Azure SQL Server and database ready; this is where the Control Table will live. Now, select Type as Azure SQL Database, choose your Azure Subscription, Server Name, and Database Name, and set Authentication Type = System Assigned Managed Identity.

SQL Server and DB as Prerequisite

Step 5:

Here comes the most important bit. We need to configure the system-assigned managed identity by running a T-SQL script in the SSMS (SQL Server Management Studio) tool. The script creates a database user for the managed identity (which carries the Data Factory's name) and adds it to the db_owner role.

Running T-SQL Script in SSMS
CREATE USER [<Managed Identity Name>] FROM EXTERNAL PROVIDER;
ALTER ROLE db_owner ADD MEMBER [<Managed Identity Name>];
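For reference, here is a slightly fuller sketch of the same script, assuming a Data Factory named adf-d365-migration (a placeholder; the system-assigned managed identity carries your factory's actual name), along with a quick verification query:

-- Run in SSMS against the control-table database, connected as a Microsoft Entra admin.
-- 'adf-d365-migration' is a placeholder for your Data Factory's name.
CREATE USER [adf-d365-migration] FROM EXTERNAL PROVIDER;
ALTER ROLE db_owner ADD MEMBER [adf-d365-migration];

-- Verify that the external user was created.
SELECT name, type_desc
FROM sys.database_principals
WHERE name = 'adf-d365-migration';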

Step 6:

Test connection to the Linked Service.

Test Connection to Linked Service

Step 7:

Next, specify your data source as your Dataverse Linked Service.

Specify Data Source

Step 8:

We select the first table from which we wish to fetch data. In our instance, we are selecting the Case (Incident) table.

Select Case (Incident) Table
Choose Loading Behavior

Step 9:

In the next step, select the Azure Blob Linked Service as the Destination Data Store, specifying the target file path and name.

Destination Data Store

Step 10:

Then, we select the File format as JSON and its File pattern as “Array of objects”, so that all records are written into a single JSON array.

File Format Settings
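With the “Array of objects” pattern, the entire output file is one top-level JSON array containing one object per copied record. Illustratively, the Incident output takes roughly this shape (the column names shown are standard Incident attributes; the actual file contains whichever columns get mapped in the next step, and all values below are made up):

[
  {
    "incidentid": "a1b2c3d4-0000-0000-0000-000000000001",
    "ticketnumber": "CAS-01001-B2C3D4",
    "title": "Defective item delivered",
    "createdon": "2025-03-01T10:15:00Z"
  },
  {
    "incidentid": "a1b2c3d4-0000-0000-0000-000000000002",
    "ticketnumber": "CAS-01002-F6A7B8",
    "title": "Contact information requested",
    "createdon": "2025-03-02T08:40:00Z"
  }
]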

Step 11:

In the next step, we ensure that the system has auto-mapped the source and destination fields correctly.

Schema Mapping

Step 12:

We give a custom name to our ADF Copy Task.

Custom Name to ADF Copy Task

Step 13:

Finally, we check the summary of this entire configuration before we click Finish.

Check the Summary

Step 14:

On completion of the deployment, an SQL script is generated for us. We execute it in the SSMS tool as a new query; this script creates the Control Table and populates it with the copy metadata for the selected tables.

Deployment Complete
SQL Script Generated
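Before moving on, you can sanity-check the result in SSMS. Assuming the Control Table was named dbo.MainControlTable in the wizard (use whatever name appears in your generated script), a simple query confirms that it was created and seeded:

-- One row per table selected in the Copy Task (so far, just Case/Incident).
SELECT *
FROM dbo.MainControlTable;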

Step 15:

After the successful execution of the script, we finish setting up the entire configuration on the Azure portal. We will see that under our pipeline, 3 pipeline components are created which will drive our integration: Top Level (Control Table), Middle Level & Bottom Level.

Setting up Configuration on Azure Portal

READ – The Ultimate Dynamics 365 CRM Guide

Unit Testing

  • We execute the pipeline by clicking the Debug button and verify the execution of the 3 tasks.
Pipeline Debug Run
  • We confirm that the Incident JSON blob is created as expected.
Incident JSON Blob in the Storage Container
  • Resultant JSON array is as follows:
Resultant JSON Array
  • We also add the “Subject” table to the list of tables as follows.
Add the Subject Table
Updated Table List
  • We get an updated T-SQL script, which we execute again in the SSMS tool.
Updated T-SQL Script
Execute the Updated Script in SSMS
  • Now our Control Table has 2 rows: one for Case and another for Subject (see the verification query after this list).
Control Table with Two Rows
  • Finally, when we execute our pipeline again, we confirm that JSON blobs are now created for both the Case and Subject tables.
JSON Blobs for the Case and Subject Tables
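To double-check the updated metadata from the database side, a quick count against the Control Table (again assuming the dbo.MainControlTable name from earlier; adjust to yours) should now return two:

-- Expected result: 2 (one row for Case/Incident, one for Subject).
SELECT COUNT(*) AS ObjectCount
FROM dbo.MainControlTable;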

Conclusion – Dynamics 365 CE Data Migration using ADF

Hence, we learnt in this blog how we can implement Dynamics 365 CE Data Migration using ADF metadata-driven pipelines, and saw how easy the approach is to configure for multiple Dataverse tables.
