My two cents here, which overlap with the answers above, plus a few more ideas.
Approach 1: Follow an INSERT-only model.
Insert all incoming rows, with a flag marking the latest versions as the current ones.
Then scan the table, and if any of the newly inserted records existed before, mark the older copies as inactive.
The disadvantage: during the insertions there can be two versions of the same record (one old and one newly inserted). This can be handled with an ACTIVE/VALID flag, but the reporting application has to know to filter on that flag.
You may also need a separate maintenance job to purge the OLD/INACTIVE records from the table.
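A minimal sketch of the insert-only model, in Python with SQLite. The table and column names (`customer`, `cust_id`, `load_batch`, `is_active`) are my own assumptions for illustration, not from the question:

```python
import sqlite3

# Hypothetical schema: rows keyed by cust_id, versioned by load_batch.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customer (
        cust_id    INTEGER,
        name       TEXT,
        load_batch INTEGER,
        is_active  INTEGER DEFAULT 1   -- 1 = current version, 0 = superseded
    )
""")

def load_rows(conn, batch_no, rows):
    # Step 1: INSERT every incoming row as active -- no UPDATEs in the load path.
    conn.executemany(
        "INSERT INTO customer (cust_id, name, load_batch, is_active) VALUES (?, ?, ?, 1)",
        [(cid, name, batch_no) for cid, name in rows],
    )
    # Step 2: mark any older version of a just-inserted key as inactive.
    conn.execute("""
        UPDATE customer SET is_active = 0
        WHERE load_batch < ?
          AND cust_id IN (SELECT cust_id FROM customer WHERE load_batch = ?)
    """, (batch_no, batch_no))
    conn.commit()

def purge_inactive(conn):
    # The separate maintenance job: drop superseded versions.
    conn.execute("DELETE FROM customer WHERE is_active = 0")
    conn.commit()

load_rows(conn, 1, [(1, "alice"), (2, "bob")])
load_rows(conn, 2, [(2, "bobby")])   # an update to cust 2 arrives as a plain insert

active = conn.execute(
    "SELECT cust_id, name FROM customer WHERE is_active = 1 ORDER BY cust_id"
).fetchall()
# active now holds one current row per key; the old 'bob' row stays with is_active = 0
```

Between step 1 and step 2 of a load, a report that ignores `is_active` would see both versions of cust 2, which is exactly the window the ACTIVE/VALID flag guards against.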
Approach 2: Use a TEMP/staging table.
Similar to the discussion above, do a lookup on the TARGET table in the ETL and set a flag on each record to determine whether it is an INSERT or an UPDATE. INSERT the new records into the main table, but ROUTE the UPDATE records to a STAGE table.
Then UPDATE the main table from this STAGE table using a batched, looped UPDATE procedure, so each transaction stays small.
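A sketch of the route-then-batch idea, again in Python with SQLite. The table names (`main_t`, `stage_t`) and the batch size are illustrative assumptions; a real warehouse would do the lookup and batching inside the database or ETL tool:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE main_t  (cust_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE stage_t (cust_id INTEGER, name TEXT);
    INSERT INTO main_t VALUES (1, 'alice'), (2, 'bob');
""")

incoming = [(2, "bobby"), (3, "carol")]

# ETL lookup: route each incoming row to a direct INSERT (new key)
# or to the stage table (existing key, i.e. an UPDATE).
existing = {r[0] for r in conn.execute("SELECT cust_id FROM main_t")}
inserts = [r for r in incoming if r[0] not in existing]
updates = [r for r in incoming if r[0] in existing]
conn.executemany("INSERT INTO main_t VALUES (?, ?)", inserts)
conn.executemany("INSERT INTO stage_t VALUES (?, ?)", updates)

# Batched, looped UPDATE from stage: a fixed number of rows per transaction
# keeps locks short and the transaction log small.
BATCH = 1000
while True:
    batch = conn.execute(
        "SELECT cust_id, name FROM stage_t LIMIT ?", (BATCH,)
    ).fetchall()
    if not batch:
        break
    conn.executemany("UPDATE main_t SET name = ? WHERE cust_id = ?",
                     [(name, cid) for cid, name in batch])
    conn.executemany("DELETE FROM stage_t WHERE cust_id = ?",
                     [(cid,) for cid, _ in batch])
    conn.commit()
```

After the loop, the main table holds the merged result and the stage table is empty, ready for the next load.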
If you do not have a key readily available for comparison, a HASH of the comparable columns can serve as one, subject to the table's size and data types.
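One common way to build such a hash key, sketched in Python (the delimiter choice and MD5 are my assumptions; any stable hash works, as long as you account for NULLs and a delimiter that cannot appear in the data):

```python
import hashlib

def row_hash(*cols):
    # Join the comparable columns with a unit-separator delimiter (unlikely to
    # appear in real data), normalize NULLs, then hash. Comparing one hash
    # replaces comparing every column individually.
    joined = "\x1f".join("" if c is None else str(c) for c in cols)
    return hashlib.md5(joined.encode("utf-8")).hexdigest()

old     = row_hash("alice", "NYC", "2020-01-01")
new     = row_hash("alice", "NYC", "2020-01-01")
changed = row_hash("alice", "LA", "2020-01-01")
# old == new (no change, skip the row); changed differs, so that row needs an UPDATE
```

Store the hash alongside each row in the target table, and the lookup step becomes a single-column comparison.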
Approach 3: SPLIT the tables.
How frequently do you receive the updates? Would you be able to split the table, for example into the ORIGINAL table and a NEW RECORD table? You could then sync the NEW RECORD table into the ORIGINAL table as a weekly/EOD job. A view over these two tables can present the complete set, except that you have to take the UPDATED records from the NEW RECORD table.
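A sketch of the split-table layout with such a view, in Python with SQLite. Table and view names (`original_t`, `new_record_t`, `combined`) are illustrative assumptions; `INSERT OR REPLACE` stands in for whatever upsert your database offers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE original_t   (cust_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE new_record_t (cust_id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO original_t   VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO new_record_t VALUES (2, 'bobby'), (3, 'carol');  -- one update, one insert

    -- The view prefers NEW RECORD rows: an updated key shadows its original row.
    CREATE VIEW combined AS
        SELECT cust_id, name FROM new_record_t
        UNION ALL
        SELECT cust_id, name FROM original_t
        WHERE cust_id NOT IN (SELECT cust_id FROM new_record_t);
""")

rows = conn.execute("SELECT cust_id, name FROM combined ORDER BY cust_id").fetchall()
# the view serves the complete, current set without touching original_t

def weekly_sync(conn):
    # The weekly/EOD job: fold NEW RECORD rows into ORIGINAL, then empty the small table.
    conn.executescript("""
        INSERT OR REPLACE INTO original_t SELECT cust_id, name FROM new_record_t;
        DELETE FROM new_record_t;
    """)
```

Daily loads only ever write the small NEW RECORD table, so the big ORIGINAL table is touched just once per sync window.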