I know this is an old thread, but I just wanted to add some recent experience.
I have a load process that, under normal operating conditions, loads roughly a million rows (sometimes a bit under, sometimes a bit over). It creates a staging table, fills it, then applies the PK. From there, it merges the staging table into the main table. Typically, this works fine.
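For anyone unfamiliar with the pattern, here's a rough sketch of what that process looks like. All table and column names here are hypothetical, just to illustrate the shape of it:

```sql
-- Staging table created without a PK; rows are bulk-loaded in whatever
-- order the source provides them.
CREATE TABLE dbo.Staging_Sales (
    SaleID BIGINT        NOT NULL,
    Amount DECIMAL(18,2) NOT NULL
);

-- ...bulk load into dbo.Staging_Sales happens here...

-- PK (clustered) applied AFTER the load. On a very large load, this step
-- forces a full sort/rebuild of the table.
ALTER TABLE dbo.Staging_Sales
    ADD CONSTRAINT PK_Staging_Sales PRIMARY KEY CLUSTERED (SaleID);

-- Finally, merge staging into the main table.
MERGE dbo.Sales AS tgt
USING dbo.Staging_Sales AS src
    ON tgt.SaleID = src.SaleID
WHEN MATCHED THEN
    UPDATE SET tgt.Amount = src.Amount
WHEN NOT MATCHED THEN
    INSERT (SaleID, Amount)
    VALUES (src.SaleID, src.Amount);
```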
Over the weekend, we wanted to refresh a much larger range of history in the main table (about 12 months' worth). The result was something on the order of a quarter of a billion rows. The step to create the primary key ran for about 3 hours before ultimately causing so much disk thrashing that the entire server became disk-bound. Website connections were failing, and GUI functions in SSMS were failing. I couldn't even stop the responsible Agent job through the GUI; I had to use the T-SQL command for it.
As far as I can tell, the takeaway is that if you have an exceedingly fast disk array, maybe you can pull off clustered index/PK creation after the table load even for extremely large tables. But in this instance, it probably would have worked better had I created the PK first. While that would slow down the load itself, it would have prevented the performance-destroying disk thrashing at the end.
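The create-PK-first variant is a one-line change to the sketch above (again, hypothetical names): define the clustered PK in the CREATE TABLE, so each inserted row is placed in key order as it arrives, instead of sorting a quarter-billion rows at the end.

```sql
-- PK defined up front: the clustered index is maintained incrementally
-- during the load, trading per-row insert cost for no giant sort afterward.
CREATE TABLE dbo.Staging_Sales (
    SaleID BIGINT        NOT NULL,
    Amount DECIMAL(18,2) NOT NULL,
    CONSTRAINT PK_Staging_Sales PRIMARY KEY CLUSTERED (SaleID)
);

-- ...bulk load proceeds as before; no ALTER TABLE step needed at the end...
```

The insert cost is worst when rows arrive in random key order; if the source can feed rows already sorted by the key, much of that penalty goes away.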