Synchronize Excel or Common Data Service tables with Microsoft Power Automate
Add and/or delete rows in one table to reflect changes in another
This automation is for you if you want to:
- Maintain a copy of a table each time it is modified, or on a set schedule;
- Standardize and scale your automations (for example through the Common Data Service);
- Validate and/or normalize the data in an Excel table;
- Make data from an Excel spreadsheet available to other services (Power BI, Dynamics 365, another Excel spreadsheet, third-party applications, etc.);
- Copy data between two Excel files.
How does it work with Power Automate?
Follow the instructions below to set up your own flow, and/or download the template as a solution to import into your Power Automate account.
Identification and extraction of data from source and target tables
First, we identify the file and table the data comes from, and the target table we want to keep synchronized with the source file.
Standardization of the naming of data from tables
We then normalize the column names of the source table and of the database so that they can be matched when the data is processed in the Power Automate flow. This step helps us avoid errors and ensures clear, consistent naming throughout the flow.
To do this, we add a "Select" task and write the names we want to use throughout the flow. We then browse the list of available dynamic content and select the values to assign to each new name.
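As a sketch only (the original column names below are assumptions, not taken from the actual source file), a "Select" mapping could look like this, where the item() function references the current row's original columns:

```
"Employee ID" : @{item()?['EmpID']}
"Name"        : @{item()?['FullName']}
"Department"  : @{item()?['Dept']}
```

The keys on the left become the normalized column names used for the rest of the flow.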
Retrieving the data to be processed and excluding unprocessable rows
In this third step we filter out the values we do not want to synchronize. This step is optional, but it shows how to exclude rows based on conditions of your choosing.
We add a "Filter array" task and pass it the output of the previous task as input.
To apply a condition, we have to use the expression editor. For the field on the left, we entered the following:
And for the field on the right, we simply entered:
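For illustration only (the exact condition and column name depend on your data), a filter that keeps only rows with a non-empty key could be configured like this:

```
Left field:  @{item()?['Employee ID']}
Condition:   is not equal to
Right field: (an empty string)
```

In advanced mode, the same condition can be written as the single expression @not(equals(item()?['Employee ID'], '')).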
Creation of variables to compare the rows of the two tables
Then, taking advantage of the ability to create parallel branches in a Power Automate flow, we create one variable per branch so that we can compare the rows of the source table with the rows of the target table.
These variables, named Source Key and Target Key, represent the unique keys that match a row already present in the target table with its counterpart in the source table. They will therefore allow us to distinguish new rows from rows to be deleted.
Feeding variables with comparison keys
We feed each variable with an "Append to array variable" task placed inside an "Apply to each" task. As the input of each "Apply to each" task, we pass the body of the filter task (for the source table) or of the "Select" task (for the target table). This lets us iterate over each row of the tables, extract the keys, and add them to the lists of keys maintained by each variable.
To populate the variables, we retrieve the Employee ID field, which serves as the unique key for both tables.
In the "Value" field of the task that appends the key to the Source Key variable, we write the following function in the expression editor:
And for the task that appends the key to the Target Key variable:
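A hedged sketch, assuming the normalized key column is named Employee ID: both "Append to array variable" tasks can use the same expression, since item() refers to the current row of the enclosing "Apply to each" loop:

```
item()?['Employee ID']
```

Each branch therefore ends up with an array of key values, one per row of its table.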
Creating variables to group the rows to be processed by type
We then create two more Power Automate variables, Keys of new rows and Keys of rows to be deleted. They will make it easier to group rows by type.
Comparison of the two tables to feed the row grouping variables
To distinguish new rows from rows to be deleted, we compare the two variables defined above, Source Key and Target Key. Each contains the list of all the row keys of the table that fed it. We can therefore check whether each key is present in the opposite list.
For example, we can check whether the first key in the Source Key list is present in the Target Key list. If it is not, we know it is a new row whose key should be added to the Keys of new rows list. Conversely, if the tenth key of the Target Key list is not present in the Source Key list, it means the source row has been deleted and that this key belongs in Keys of rows to be deleted.
We capture all the keys of the new rows and the keys of the rows to be deleted in the variables Keys of new rows and Keys of rows to be deleted, created in a previous step.
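The comparison above can be sketched as follows (the variable names come from this article; the contains() function tests whether an array contains a given value):

```
Apply to each over variables('Source Key'):
  Condition: @contains(variables('Target Key'), item())
    If no → append item() to "Keys of new rows"

Apply to each over variables('Target Key'):
  Condition: @contains(variables('Source Key'), item())
    If no → append item() to "Keys of rows to be deleted"
```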
Display the contents of the variables to check that the comparison has been carried out correctly
At this stage, these variables are exposed through Power Automate "Compose" tasks so that we can inspect their content and make sure it matches our expectations.
Collect all the data for each row according to the keys present in the variables
Now that we have grouped the row keys, we need to filter the normalized tables by those keys in order to retrieve the rows in their entirety and perform operations on the target table. We do this by adding an "Apply to each" task on each side that loops over the row keys. Within each loop, we place a filter task that returns the full rows.
Sometimes, specific data does not appear in the dynamic content on offer. To target the right data, we again need to go through the expression editor. We validate the following entry:
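As an illustration (the loop name "Apply_to_each" is Power Automate's default and the column name is an assumption), the filter condition inside the loop could compare each row's key against the key currently being processed by the enclosing loop:

```
@equals(item()?['Employee ID'], items('Apply_to_each'))
```

Here item() refers to the row being tested by the filter, while items('Apply_to_each') refers to the current key of the named outer loop.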
Add rows to the target table
The filter tasks return the one and only row that corresponds to the key processed in the current loop iteration. We can therefore simply place the "Add a row" task in the loop that processes the new row keys. The add task must be configured to explicitly specify each field to be added to the target table.
Note: the filter task returns an array containing a single object. To access the data (properties) contained in that object, we need to use the first function, which returns the first object of an array. Since we know the filter will always return exactly one object, we can safely rely on the data extracted by first.
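For example, assuming a filter task named "Filter_array" and a column named Name (both names are illustrative), a field of the "Add a row" task could be filled in with:

```
first(body('Filter_array'))?['Name']
```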
Delete rows from the target table
As with adding rows, we can place the row deletion task directly in the loop that processes the keys of rows to be deleted. Configuring this task is simpler: choose the target table and fill in the unique identifier of the row to delete.
We included this identifier at the beginning of the flow and retrieved it later in the last filter task. As with adding rows, we need to use the first function to access the result inside the array returned by the filter task.
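A sketch with illustrative names: the identifier passed to the delete task could be retrieved with an expression such as:

```
first(body('Filter_rows_to_delete'))?['Employee ID']
```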
Bravo 🎉 👏
You can now save and run your flow!
To download the template as a solution, click 👉 here