Comments posted to this topic are about the item Using Redgate API to compare data in the same table
If you didn't have Redgate, you could use an ORM like Entity Framework or Dapper to read each pair of rows in at a time, giving you two instances of a row as objects.
Serialise them and compare the two strings.
If you needed to know which field was different, use reflection to get a list of public properties and iterate through their values, calling ToString() on each to get a consistent value, and compare those.
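A minimal sketch of that reflection idea, assuming a generic helper (the method and its name are illustrative, not from the article):

```csharp
using System;
using System.Linq;
using System.Reflection;

public static class RowComparer
{
    // Returns the names of the public properties whose ToString() values
    // differ between two instances of the same type.
    public static string[] DifferingProperties<T>(T left, T right)
    {
        PropertyInfo[] props = typeof(T).GetProperties(); // fetched once per type
        return props
            .Where(p => (p.GetValue(left)?.ToString() ?? "")
                     != (p.GetValue(right)?.ToString() ?? ""))
            .Select(p => p.Name)
            .ToArray();
    }
}
```

Called with two rows of the same entity type, it reports which columns disagree rather than just whether the rows match.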
andyo.dev - Tuesday, October 16, 2018 2:06 AM
Do you have a solution working? It would be interesting to see an example. I have done things similar to what you are suggesting in the past; they tend to be a lot more work than the solution I outlined in my article. What I really like about my solution is that most of the heavy lifting is done by the Redgate API.
Thanks for your post.
I'm not so sure there would be significantly more code, unless you wanted to generate your types on the fly.
An ORM like Entity Framework can generate the class for each table you're interested in.
Most apps will be reading that data for some purpose anyway, so maybe you already have that class.
Generically serialising an object to a string is:
public static string SerializeToString<T>(this T source)
{
    // Requires System.IO and System.Xml.Serialization.
    XmlSerializer xmlSerializer = new XmlSerializer(source.GetType());
    using (StringWriter textWriter = new StringWriter())
    {
        xmlSerializer.Serialize(textWriter, source);
        return textWriter.ToString();
    }
}
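For completeness, a self-contained sketch of the serialize-and-compare approach; the `Widget` type is an illustrative stand-in for an EF-generated entity, and the helper mirrors the snippet above:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public class Widget
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class Program
{
    public static string SerializeToString<T>(T source)
    {
        var serializer = new XmlSerializer(source.GetType());
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, source);
            return writer.ToString();
        }
    }

    public static void Main()
    {
        var a = new Widget { Id = 1, Name = "bolt" };
        var b = new Widget { Id = 1, Name = "nut" };
        // The serialized XML differs, so the rows differ.
        Console.WriteLine(SerializeToString(a) == SerializeToString(b)); // False
    }
}
```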
If you instead wanted to see which property had a different value:
Entity Framework database first makes reading an entire table's worth of data into a List pretty simple.
GetType obtains a type from an object generically.
Type.GetProperties obtains a list of the properties the type has.
You can iterate through them, using .GetValue on each property to generically obtain a value, which you then .ToString().
Reflection is relatively costly, but you can obtain the list of properties just once and re-use it for each record.
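A sketch of that caching pattern, paying the reflection cost once per run rather than once per row (`Order` is an illustrative stand-in for a database-first entity, and the lists are assumed to be aligned by key):

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

public class Order
{
    public int OrderId { get; set; }
    public decimal Amount { get; set; }
}

public static class TableComparer
{
    public static void CompareRows(IList<Order> left, IList<Order> right)
    {
        // Reflection lookup done once, then reused for every record.
        PropertyInfo[] props = typeof(Order).GetProperties();
        for (int i = 0; i < left.Count; i++)
        {
            foreach (PropertyInfo p in props)
            {
                string a = p.GetValue(left[i])?.ToString() ?? "";
                string b = p.GetValue(right[i])?.ToString() ?? "";
                if (a != b)
                    Console.WriteLine($"Row {i}: {p.Name} differs ({a} vs {b})");
            }
        }
    }
}
```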
andyo.dev - Tuesday, October 16, 2018 5:06 AM
Yes, that sounds like it would work. Have you actually tried it? The complications always come in the details. I have done things similar to what you are suggesting before.
The nice thing about the solution I have presented is that if your schema changes, the compare code doesn't change. In the solution you are proposing, there are a number of changes that would need to happen to keep things in sync and to be able to compare the new columns.
In this solution, the API tells me which rows don't match and then which column doesn't match. That is really nice. I have done quite a bit with reflection; it is slow and can be a bit complicated at times to ensure you have proper type matching. If you get your type matching wrong, you end up thinking you have a match when you don't, or no match when you do.
Anyway, I appreciate your comments.
bkubicek - Tuesday, October 16, 2018 5:45 AM
I'm not really seeing the problems.
I've not done this task specifically but I have done similar enough to understand the overheads.
I would probably generate the class the row is read into using EF.
I've never seen that generate the wrong class.
It's just a click of a button and the files are re-generated using the T4 templates; a save is all it takes, and it's very simple to do.
Other than that, there is no need to ensure proper type matching; you already have the type.
If the table changes, the only change you would need is to re-generate the class and paste it into your app.
The type drives everything in my suggested approach.
Reflection is only slow at the point you use it, and here that would be once per run rather than once per row.
Even with a table that had many columns, that overhead would be trivial.
I have a fair bit of code where I use this technique of iterating properties to copy the values from one type to another.
This is for desktop apps that copy data out of a data layer into a viewmodel and back again to commit changes.
I have live apps which do this with hundreds of records routinely, and I've tested with thousands of records.
I just do the reflection for each record in my standard routine, because the overhead isn't worth coding around.
Static data is usually just that.
I would expect changes to table structure to be pretty rare by the time you have a live test and dev database.
But let's say that isn't the case.
You can of course get sql server to tell you about a schema change to a table.
Maybe this isn't a static table and is changing very dynamically.
You could possibly dynamically generate the type.
But there's presumably still going to be some code somewhere which is reading and writing that data.
With a particularly dynamic table you're best advised to use code first rather than database first EF.
Once you do that, you have a class that defines the database table.
Just put that in a common dll and reference it.
You then never need change the code which compares the two tables.
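A minimal sketch of such a code-first entity in a shared assembly; the class and property names are illustrative assumptions:

```csharp
using System.ComponentModel.DataAnnotations;

// Lives in a common class library referenced both by the app that
// reads/writes the table and by the code that compares the two copies.
public class ReferenceItem
{
    [Key]
    public int ReferenceItemId { get; set; }

    [MaxLength(50)]
    public string Code { get; set; }

    public string Description { get; set; }
}
```

Because both sides reference the same type, a schema change means updating this one class; the comparison code itself never changes.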
Is it possible to still make use of a WHERE clause on a table when using the API?
jp-1154930 - Tuesday, October 23, 2018 2:57 AM
Yes, that is part of why this solution worked for me. I was able to use a WHERE clause for each table; I used it to set the headerID, which was different for each set of data.