Just to confirm: you have around 1 billion rows across 5 or 6 tables that change frequently, and the column needs to be updated in each of them?
There are always tricks you can use to optimize this, such as removing the loops. Looping is very rarely the "fastest" way to attack a SQL Server problem.
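To illustrate the difference (using a made-up dbo.Sales table, since I don't know your actual schema), this is roughly what "removing the loops" means:

```sql
-- Hypothetical table dbo.Sales (SaleID int, Amount money) -- not your schema.

-- Row-by-row loop: one UPDATE per row, one pass through the plan each time.
DECLARE @id int = 1, @max int;
SELECT @max = MAX(SaleID) FROM dbo.Sales;
WHILE @id <= @max
BEGIN
    UPDATE dbo.Sales SET Amount = Amount * 1.05 WHERE SaleID = @id;
    SET @id += 1;
END;

-- Set-based: one statement, one pass, which is what the optimizer is built for.
UPDATE dbo.Sales SET Amount = Amount * 1.05;
```

At a billion rows, the loop version does a billion separate statements; the set-based version does one.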
How are you currently updating that column, and how are you populating the column and the table? Is the data coming from a flat file that you load, or is the data already sitting in the table and the client is asking you to run a calculation on the column(s) or change the column(s)?
When they update, are they updating the entire column or a subset of the data? If it is a subset, there may be better indexes that you can filter on. If you are doing an update on the entire table with some calculation on a column, doing that in a single statement (i.e. no loops, no cursors, just an UPDATE statement) will likely give the best bang for your buck.

If it is an SSIS package that updates the data, and you are truncating and repopulating the tables with each change the client requests, then reducing the data set you are changing may be a good step. What I mean here is to break the data up into more tables: tables that hold infrequently changing data and tables that hold frequently changing data. For example, customer names may not change often, but sales to a customer may change quite frequently. Since customer names change infrequently, have an SSIS package that pulls them across once per month and on demand, and pull the sales across once per day or on demand.
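For the subset case, a filtered index is one option. Purely as a sketch (the Status column and values are assumptions, not something from your post):

```sql
-- Assumption: only rows with Status = 'Pending' ever get updated.
CREATE NONCLUSTERED INDEX IX_Sales_Pending
    ON dbo.Sales (SaleID)
    INCLUDE (Amount)
    WHERE Status = 'Pending';

-- The filtered index lets this touch only the changing subset
-- instead of scanning the whole billion-row table.
UPDATE dbo.Sales
SET Amount = Amount * 1.05
WHERE Status = 'Pending';
```

The index only covers the rows matching its WHERE clause, so it stays small and cheap to maintain even on a very large table.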
In your scenario, I would suggest putting the columns your clients change infrequently into table A (a sort of "Finalized" table) and the columns that change frequently into a second table. My thought process here is that your bottleneck MAY not be the changing data but the data volume: moving 1 billion rows across to 5 or 6 tables is simply a lot of data, and the bottleneck MAY be network I/O (presuming SSIS).
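A minimal sketch of that split, with invented table and column names (adjust to your actual model):

```sql
-- Slowly changing attributes: refresh monthly or on demand.
CREATE TABLE dbo.Customer_Finalized
(
    CustomerID   int           NOT NULL PRIMARY KEY,
    CustomerName nvarchar(200) NOT NULL
);

-- Frequently changing data: refresh daily or on demand.
CREATE TABLE dbo.Customer_Sales
(
    SaleID     bigint NOT NULL PRIMARY KEY,
    CustomerID int    NOT NULL
        REFERENCES dbo.Customer_Finalized (CustomerID),
    Amount     money  NOT NULL
);
```

That way each SSIS refresh only moves the table whose data actually changed, rather than the full billion rows every time.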
If it is a literal UPDATE statement that is slow, I would check for blocking and the waits it generates, as well as the server resources and the estimated vs actual execution plans.
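While the UPDATE is running, these two DMV queries will show who is blocking it and what it is waiting on:

```sql
-- Current requests: wait type and blocker for each active session.
SELECT session_id,
       blocking_session_id,
       wait_type,
       wait_time,
       status
FROM sys.dm_exec_requests
WHERE session_id > 50;  -- skip most system sessions

-- Cumulative waits on the instance since the stats were last cleared.
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```

A nonzero blocking_session_id points you at the blocker; a dominant wait type (e.g. lock waits vs I/O waits) tells you where to dig next.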
99% of query tuning starts with the execution plans. Bad estimates, tempdb spills, insufficient memory grants, etc. can all lead to a poorly performing query.
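If you are not running the statement from SSMS (where you can just click "Include Actual Execution Plan"), you can capture the actual plan, with runtime row counts, like this (the UPDATE shown is a placeholder for your real statement):

```sql
-- Returns the actual execution plan as XML alongside the results.
SET STATISTICS XML ON;

UPDATE dbo.Sales          -- your real statement goes here
SET Amount = Amount * 1.05
WHERE Status = 'Pending';

SET STATISTICS XML OFF;
```

Comparing the estimated vs actual row counts in that plan is usually the fastest way to spot the bad estimates mentioned above.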