It depends on what the job is doing. Since memory and CPU don't appear to be the bottleneck (their utilization is low), the way to make the job faster is to find the actual bottleneck and improve on it. That said, it could still be a memory bottleneck: a low average utilization number can hide a short spike, and memory pressure often shows up as paging rather than as raw usage.
Now this could be disk I/O, network I/O, page file utilization (if a large contiguous memory request won't fit in physical memory, it may get dumped into the page file instead), or the job itself.
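If you want a quick read on which of those it is, something along these lines can help, assuming the job runs through a SQL Server instance you can query (the excluded wait types below are only an example of benign waits, not an exhaustive list). PAGEIOLATCH_* waits point at disk I/O, ASYNC_NETWORK_IO at the network/client, and RESOURCE_SEMAPHORE at memory grants; the counters are cumulative since the last restart, so compare a snapshot taken before the job with one taken after.

```sql
-- Top waits on the instance, cumulative since the last restart.
SELECT TOP (10)
       wait_type,
       wait_time_ms / 1000.0 AS wait_time_sec,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'BROKER_TASK_STOP')  -- example benign waits only
ORDER BY wait_time_ms DESC;
```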
What I mean by the job itself is how the work is written: if your query processes rows one at a time in a loop (a cursor, for example) instead of as a set, it will be slow regardless of the hardware.
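As a made-up illustration (the table and column names are placeholders), here is the same update written as a row-by-row cursor and as a single set-based statement; the set-based version is almost always dramatically faster:

```sql
-- Slow: process one row at a time with a cursor.
DECLARE @Id INT;
DECLARE cur CURSOR FOR
    SELECT Id FROM dbo.Orders WHERE Status = 'Pending';
OPEN cur;
FETCH NEXT FROM cur INTO @Id;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders SET Status = 'Processed' WHERE Id = @Id;
    FETCH NEXT FROM cur INTO @Id;
END;
CLOSE cur;
DEALLOCATE cur;

-- Fast: do the same work as one set-based statement.
UPDATE dbo.Orders SET Status = 'Processed' WHERE Status = 'Pending';
```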
Now, going from 5 hours to 9 hours, my guess is this is an SSIS package being run on a server that doesn't have enough free memory for the operations it needs to perform. The change that happened over the weekend was probably adding some data that put it at the tipping point where the memory requested for the SSIS package exceeds the available memory, and the work gets paged to disk.
There are a few ways to verify that this is the problem: reduce the amount of memory being used by the rest of the system and re-run the package (may not be realistic), migrate SSIS to its own server with more memory and see if it is still slow, or rewrite the package to use less memory.
A server sitting at 5% RAM and CPU utilization doesn't really tell you much on its own, either. A server with 128 GB of RAM at 5% utilization sounds over-provisioned on memory, but if it has 2 GB of memory and 5% is being used while that package runs, you are very likely going to need more memory. The same logic applies to the CPU: a 32-core server at 5% CPU is very different from a single-core server at 5% CPU.
I would check page file utilization first. If it jumps around the time your package is running, then you either need more memory, need to allocate memory on the server differently, or need to rewrite your package to use less memory. If it is unchanged, my guess is that the performance degradation comes from an increase in data volume.
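If the package runs on the database server itself, Performance Monitor counters (Paging File % Usage, Memory\Pages/sec) are the most direct check, but you can get a rough read from SQL Server's side with something like the sketch below; note these DMVs describe the OS-level totals and the SQL Server process, not the SSIS process specifically.

```sql
-- OS-level physical memory and page file headroom.
SELECT total_physical_memory_kb / 1024     AS total_physical_mb,
       available_physical_memory_kb / 1024 AS available_physical_mb,
       total_page_file_kb / 1024           AS total_page_file_mb,
       available_page_file_kb / 1024       AS available_page_file_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;

-- Whether the SQL Server process itself is under memory pressure.
SELECT physical_memory_in_use_kb / 1024 AS sql_memory_in_use_mb,
       page_fault_count,
       process_physical_memory_low,
       process_virtual_memory_low
FROM sys.dm_os_process_memory;
```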
Now, if this isn't an SSIS package, that is a different beast. My approach here would be to grab an estimated execution plan (actual if you can, but waiting 9 hours for it to complete is a bit of a pain) and look for where it can be optimized and for anything that looks odd, such as a table/view/index where you expect 1 million rows but the plan estimates 1 row. Updating statistics (if they are out of date) and reducing cross-database queries and linked server usage can help. If you can't remove the cross-database and linked server usage, pulling those tables into table variables or temp tables can sometimes improve performance.
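As a rough sketch of those last two suggestions (the object, column, and linked server names are placeholders): check how stale the statistics are and refresh them, and pull just the remote rows you need into a temp table once instead of joining to the linked server inside the big query.

```sql
-- How stale are the statistics on the big table?
SELECT s.name AS stat_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID(N'dbo.BigTable');

UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;

-- Pull the remote rows into a local temp table once, then join locally.
SELECT OrderId, CustomerId, OrderDate
INTO #RemoteOrders
FROM OPENQUERY(RemoteServer,
     'SELECT OrderId, CustomerId, OrderDate
        FROM Sales.dbo.Orders
       WHERE OrderDate >= ''2024-01-01''');

CREATE CLUSTERED INDEX IX_RemoteOrders ON #RemoteOrders (OrderId);
```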