• As others have said, you haven't given enough information, but you should first minimize the number of UPDATEs on bigger tables: if the FROM/JOIN clause is the same across several UPDATE statements, you can fold them into one by using CASE expressions in your SET lines, spending a little more CPU on a single pass through the table(s) rather than making multiple passes.
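
    For example (a sketch with hypothetical table and column names), two UPDATEs like these:

        UPDATE t SET col_a = s.val_a
        FROM dbo.BigTable AS t
        JOIN dbo.Source AS s ON s.id = t.id
        WHERE s.kind = 1;

        UPDATE t SET col_b = s.val_b
        FROM dbo.BigTable AS t
        JOIN dbo.Source AS s ON s.id = t.id
        WHERE s.kind = 2;

    can be folded into a single pass:

        UPDATE t SET
            col_a = CASE WHEN s.kind = 1 THEN s.val_a ELSE t.col_a END,
            col_b = CASE WHEN s.kind = 2 THEN s.val_b ELSE t.col_b END
        FROM dbo.BigTable AS t
        JOIN dbo.Source AS s ON s.id = t.id
        WHERE s.kind IN (1, 2);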

    After that, you can experiment with indexes - the usual rules apply: index the columns used in your FROM/JOIN and WHERE clauses, and of those, generally put the most selective (highest-cardinality) column leftmost in the key, etc.
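
    For instance (hypothetical names; the right choice depends on your actual predicates):

        -- Query joins on customer_id (high cardinality) and filters on
        -- status (low cardinality), so customer_id goes leftmost.
        CREATE INDEX IX_Orders_Customer_Status
            ON dbo.Orders (customer_id, status);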

    Change where in the batch you build the index (before or after the table is populated and updated), and see what happens!

    On bigger #temp tables, if you're updating columns that are in the index, build the index after the updates. Or drop and rebuild it afterwards! A sketch follows.
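
    Here's a sketch with a made-up #temp table, populating and updating first so the UPDATE doesn't have to maintain the index:

        SELECT id, amount, CAST(NULL AS int) AS bucket
        INTO #work
        FROM dbo.Staging;

        UPDATE #work SET bucket = amount / 100;   -- no index to churn yet

        CREATE INDEX IX_work_bucket ON #work (bucket);   -- built afterwards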

    Use Profiler, primarily on the SQL:BatchCompleted event, watching the CPU, Reads, Writes, and Duration columns; you want to see the aggregate effect on the entire batch, i.e. do the index(es) cost more than they save? You'll also need to know whether you're bottlenecked on CPU, Reads, or Writes (most systems I see are IO-bottlenecked, so I weight Reads and Writes much more heavily than CPU; exceptions exist, and every environment is different).
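
    If you're on a version with Extended Events (the successor to Profiler), the equivalent batch-level metrics come from the sqlserver.sql_batch_completed event, which carries cpu_time, logical_reads, writes, and duration. A minimal sketch, with a placeholder session name and file path:

        CREATE EVENT SESSION BatchCost ON SERVER
        ADD EVENT sqlserver.sql_batch_completed
            (ACTION (sqlserver.sql_text))
        ADD TARGET package0.event_file (SET filename = N'BatchCost.xel');

        ALTER EVENT SESSION BatchCost ON SERVER STATE = START;
        -- ... run the batch with and without the index(es), compare totals ...
        ALTER EVENT SESSION BatchCost ON SERVER STATE = STOP;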