One solution may be to reduce the number of NCIs (non-clustered indexes).
But before looking at that, it wouldn't hurt to grab an execution plan and see what is actually happening when you run that update. Generally, I do not remove indexes, as someone created them for a reason. It might not be a good reason, or it may be that one of them turns a 20-minute process into a 20-second process.
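As a sketch of how you might capture the actual plan and runtime statistics from a query window (the table, column, and variable names below are placeholders; substitute your own):

```sql
-- Placeholders for illustration only.
DECLARE @Id int = 42, @NewValue varchar(50) = 'example';

-- Return I/O, timing, and the actual execution plan with the results.
SET STATISTICS IO, TIME ON;
SET STATISTICS XML ON;

UPDATE dbo.YourTable
SET    SomeColumn = @NewValue
WHERE  Id = @Id;

SET STATISTICS XML OFF;
SET STATISTICS IO, TIME OFF;
```

In SSMS you can get the same plan by toggling "Include Actual Execution Plan" (Ctrl+M) before running the statement.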
Can you reproduce this on a test system? If so, you can have more fun testing it without breaking production. My first step after reviewing the execution plan would be to rule out the non-clustered indexes: disable all NCIs on that table and try the update that takes 5 seconds. Did this make it substantially faster, or is it only a minor improvement?
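One way to disable just the non-clustered indexes (on a test system only!) is something like the following; `dbo.YourTable` is a placeholder. Disabling keeps the index definition around, so a rebuild brings everything back:

```sql
-- Build and run ALTER INDEX ... DISABLE for every NCI on the table.
-- Filtering on NONCLUSTERED matters: disabling the clustered index
-- would make the table itself inaccessible.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'ALTER INDEX ' + QUOTENAME(i.name)
             + N' ON dbo.YourTable DISABLE;' + CHAR(13)
FROM   sys.indexes AS i
WHERE  i.object_id = OBJECT_ID(N'dbo.YourTable')
  AND  i.type_desc = 'NONCLUSTERED';

EXEC sys.sp_executesql @sql;

-- ...run the slow UPDATE here and compare timings...

-- Re-enable everything by rebuilding:
ALTER INDEX ALL ON dbo.YourTable REBUILD;
```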
If you can't reproduce it on test, check your execution plan on live. You should also check for blocking. It might be that it isn't the UPDATE itself that is slow, but that multiple concurrent UPDATEs are causing blocking on the row, page, or table, so UPDATE 1 needs to finish before UPDATE 2 can start. If you have a long list of these updates, that could be what is causing your bottleneck.
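A quick way to spot blocking while the updates are running is to query the requests DMV for anything waiting on another session:

```sql
-- List sessions currently blocked by another session,
-- with what they are waiting on and the statement text.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_sql
FROM   sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE  r.blocking_session_id <> 0;
```

If that returns rows pointing at your UPDATE sessions, the problem is concurrency rather than the statement itself. `sp_who2` gives a cruder but quicker view of the same thing.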
The above is all just my opinion on what you should do.
As with all advice you find on a random internet forum, you shouldn't blindly follow it. Always try it on a test server to see if there are negative side effects before making changes to live!
I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.