Possibly a dumb question, but do you need those WHERE clauses? If the data is the same, setting the value "10" to "10" is not going to hurt too badly if the table isn't HUGE and if there are not a lot of indexes. The only case you want to be careful of is your first condition - if staging has NULL then you want to keep the final value, but that can be handled with a CASE expression.
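Something like this is what I mean - just a rough sketch, assuming the two tables are dbo.FinalTable and dbo.StagingTable joined on an ID column, with SomeValue as the column being updated (those names are placeholders for whatever you actually have):

UPDATE f
SET    f.SomeValue = CASE
                         WHEN s.SomeValue IS NULL THEN f.SomeValue  -- staging is NULL: keep the existing final value
                         ELSE s.SomeValue
                     END
FROM   dbo.FinalTable   AS f
JOIN   dbo.StagingTable AS s
       ON s.ID = f.ID;

COALESCE(s.SomeValue, f.SomeValue) would do the same thing in one call if you prefer that.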
Now, if that is billions of rows OR there are a lot of nonclustered indexes, you will benefit from only updating a subset of the data, in which case those WHERE clauses are worth keeping. Or if you have a computed column or a trigger keeping track of a modified date, then you will want to make sure you only update rows that have actually changed.
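If you go that route, one NULL-safe way to write "only rows that changed" in T-SQL is the EXCEPT trick, since EXCEPT treats two NULLs as equal - again just a sketch with placeholder table and column names:

UPDATE f
SET    f.SomeValue  = s.SomeValue,
       f.OtherValue = s.OtherValue
FROM   dbo.FinalTable   AS f
JOIN   dbo.StagingTable AS s
       ON s.ID = f.ID
WHERE  EXISTS (SELECT s.SomeValue, s.OtherValue   -- row qualifies only if at least one column differs,
               EXCEPT                             -- with NULLs compared as equal
               SELECT f.SomeValue, f.OtherValue);

You would combine that with the CASE/COALESCE handling above if a staging NULL is supposed to keep the final value rather than overwrite it.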
I think the other thing to ask is: what problem are you trying to solve? If you are trying to reduce the time that the "Final Table" is unusable, my approach would be to reduce the dataset in the Staging Table that gets written to the Final Table. What I mean is: do all of the work in the Staging Table so you have a smaller data set to push across to the final table.

You could add a column to the Staging Table to track data status, plus a data load datestamp. First, remove the truncate step. Then, when you pull the data from the feed into the staging table, the status starts as unprocessed. Once the data has been pulled in, you look through the staging table ONLY for data that changed from the previous run, which you can determine using the data load datestamp field. From there you update the status to "data changed" or "no change", depending on the logic that determines what should be pulled across to the final table. You could even handle the NULL cases by updating the value in the staging table so that if it is NULL, it becomes the previous value.

Then you can remove the previous load's data, as it is no longer needed (unless you want to keep it to review the process, but the staging table will grow pretty quickly if you don't remove historical data), and push across any rows where the data has changed to the final table.

The advantage of this approach is that the Final Table sees reduced blocking, and your UPDATE statement's WHERE clause logic shrinks to a single condition - where status is "data changed". The disadvantage is that the overall process may be slower, you will have more blocking on the staging table, and you will use more disk long term.
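Roughly, the per-load flow could look something like this. Very much a sketch: it assumes a Status TINYINT column (0 = unprocessed, 1 = data changed, 2 = no change), a LoadDate column, an ID key, and a dbo.FeedSource feed - all of those names and codes are made up for the example:

DECLARE @ThisLoad DATETIME2(0) = SYSDATETIME();

-- 1. Land the new feed in staging as unprocessed
INSERT INTO dbo.StagingTable (ID, SomeValue, Status, LoadDate)
SELECT ID, SomeValue, 0, @ThisLoad
FROM   dbo.FeedSource;

-- 2. If the new load has NULL, carry the previous load's value forward
UPDATE new
SET    new.SomeValue = prev.SomeValue
FROM   dbo.StagingTable AS new
JOIN   dbo.StagingTable AS prev
       ON prev.ID = new.ID
      AND prev.LoadDate < @ThisLoad
WHERE  new.LoadDate = @ThisLoad
  AND  new.SomeValue IS NULL;

-- 3. Flag what actually changed compared to the previous load
UPDATE new
SET    new.Status = CASE WHEN EXISTS (SELECT new.SomeValue EXCEPT SELECT prev.SomeValue)
                         THEN 1      -- data changed
                         ELSE 2 END  -- no change
FROM   dbo.StagingTable AS new
JOIN   dbo.StagingTable AS prev
       ON prev.ID = new.ID
      AND prev.LoadDate < @ThisLoad
WHERE  new.LoadDate = @ThisLoad;

-- 4. Previous load is no longer needed
DELETE FROM dbo.StagingTable WHERE LoadDate < @ThisLoad;

-- 5. Single, simple WHERE condition against the final table
UPDATE f
SET    f.SomeValue = s.SomeValue
FROM   dbo.FinalTable   AS f
JOIN   dbo.StagingTable AS s
       ON s.ID = f.ID
WHERE  s.Status = 1;   -- data changed

Brand-new IDs never get flagged 1 or 2 here, so a leftover status of 0 after the compare could double as your "needs inserting" marker if that matters for your feed.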
I would make the Status field a TINYINT datatype, since you may want more statuses in the future if logic needs to be applied differently when moving to the final table. But my approach with ETL is to do all the work in the staging table so I have only a small bit of logic and as small a data set as possible to push across to the final table. It MAY put more strain on the SQL instance (or SSIS), but it means more uptime for my final tables.
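The columns the sketch above leans on could be added with something like this (again, names and codes are just examples):

ALTER TABLE dbo.StagingTable
    ADD Status   TINYINT      NOT NULL CONSTRAINT DF_StagingTable_Status   DEFAULT (0),
        -- 0 = unprocessed, 1 = data changed, 2 = no change; TINYINT leaves room for ~250 more codes later
        LoadDate DATETIME2(0) NOT NULL CONSTRAINT DF_StagingTable_LoadDate DEFAULT (SYSDATETIME());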
Now, if you have a downtime window, and you are not exceeding that window, there may not be much benefit in working to improve the process. If you do have a lot of nonclustered indexes, you may benefit from disabling them before the data load and rebuilding them afterwards.
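That is just the standard disable/rebuild pattern - sketch below, with IX_FinalTable_SomeValue as a made-up index name:

ALTER INDEX IX_FinalTable_SomeValue ON dbo.FinalTable DISABLE;

-- ... run the data load here ...

ALTER INDEX IX_FinalTable_SomeValue ON dbo.FinalTable REBUILD;

Just be careful to only disable the nonclustered ones - disabling the clustered index takes the whole table offline until it is rebuilt.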
Just my 2 cents!