If you'll (almost) always look up by certain key value(s), and those keys are inherently terrible for clustering (such as a GUID), then cluster on all of them anyway. Yes, in theory that makes every nonclustered index wider, but (1) you won't come in by other key values that often, and (2) you might have had to INCLUDE one or more of those keys in the nonclustered index anyway.
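A rough sketch of what I mean (table and column names are made up for illustration):

```sql
-- Cluster on the GUID keys you'll (almost) always seek by,
-- even though GUIDs are wide and random.
CREATE TABLE dbo.OrderItems (
    OrderGuid uniqueidentifier NOT NULL,
    ItemGuid  uniqueidentifier NOT NULL,
    Qty       int              NOT NULL
);

CREATE UNIQUE CLUSTERED INDEX CL_OrderItems
    ON dbo.OrderItems ( OrderGuid, ItemGuid );

-- Every nonclustered index now implicitly carries both GUIDs
-- (32 bytes) as its row locator -- but if your queries would have
-- needed those keys INCLUDEd anyway, the net extra cost is small.
CREATE NONCLUSTERED INDEX IX_OrderItems_Qty
    ON dbo.OrderItems ( Qty );
```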
As to uniqueness, the potential issue with leaving the clustered index non-unique is that it seems to lead to ghost rows in the table (which, for some reason, the MS docs insist on calling ghost "records").
When the clustered key is not unique, SQL has to add a hidden "counter" (the uniquifier) to duplicate key values to make them unique. But SQL needs a starting point for that counter. While rows exist, SQL can use the last row to "count" from. But if it allowed all rows to be deleted, SQL would lose the counter, so I suspect it must keep a ghost row (or rows) to preserve it. I don't believe SQL can simply start back at 1, or some lower value, because of potential rollbacks (or forward recovery?) of the db.
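If you want to see this for yourself, something like the following (a sketch -- table name is hypothetical) creates duplicate clustered keys, deletes everything, and then checks the ghost counts; keep in mind the background ghost-cleanup task removes ghosts over time, so the counts are transient:

```sql
-- Non-unique clustered index: duplicate key values get a
-- hidden 4-byte uniquifier added by SQL Server.
CREATE TABLE dbo.Dups ( KeyCol int NOT NULL, Payload char(100) NOT NULL );
CREATE CLUSTERED INDEX CL_Dups ON dbo.Dups ( KeyCol );  -- non-unique

INSERT INTO dbo.Dups VALUES (1, 'a'), (1, 'b'), (1, 'c');
DELETE FROM dbo.Dups;

-- Ghost counts are only populated in SAMPLED/DETAILED mode.
SELECT index_level, ghost_record_count, version_ghost_record_count
FROM sys.dm_db_index_physical_stats
     ( DB_ID(), OBJECT_ID('dbo.Dups'), 1, NULL, 'DETAILED' );
```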
SQL DBA, SQL Server MVP (07, 08, 09)

Prosecutor James Blackburn, in closing argument in the Fatal Vision murders trial: "If in the future, you should cry a tear, cry one for them [the murder victims]. If in the future, you should say a prayer, say one for them. And if in the future, you should light a candle, light one for them."