1) Dimensional modelling means denormalized dimensions and normalized facts, so the duplicate-data argument does not hold: redundancy is deliberately confined to the relatively small dimension tables, while the large fact tables stay normalized.
2) Treat Date and Time as separate dimensions. The Date dimension covers day up to year, plus any attributes derived from them (holidays etc.), and has a Date_key that requires no lookup when a date is present in your facts. The Time dimension covers, for example, seconds up to hour, plus derived attributes such as specific time frames, and has a Time_key that likewise requires no lookup: from a time in your fact (for instance a timestamp) you can calculate a unique key in advance (14:32:21 gives 14 x 3600 + 32 x 60 + 21 = 52341) and store it in your fact table. The Time dimension itself can be precalculated with the same method, yielding 86,400 rows, one per second in a day; see the sketch below.
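A minimal Python sketch of both smart keys and the precomputed Time dimension. The YYYYMMDD format for Date_key and the trading_session attribute are illustrative assumptions, not prescribed by the above:

```python
from datetime import datetime

def date_key(ts: datetime) -> int:
    # Assumed smart-key format YYYYMMDD, e.g. 2024-03-15 -> 20240315;
    # derivable directly from the source timestamp, so no dimension lookup.
    return ts.year * 10000 + ts.month * 100 + ts.day

def time_key(ts: datetime) -> int:
    # Seconds since midnight, e.g. 14:32:21 -> 14*3600 + 32*60 + 21 = 52341.
    return ts.hour * 3600 + ts.minute * 60 + ts.second

def build_time_dimension() -> list[dict]:
    # Precompute all 86,400 rows (seconds in a day) with the same formula,
    # adding derived attributes such as named intra-day time frames.
    rows = []
    for key in range(86_400):
        hour, rest = divmod(key, 3600)
        minute, second = divmod(rest, 60)
        rows.append({
            "time_key": key,
            "hour": hour,
            "minute": minute,
            "second": second,
            # Hypothetical derived attribute for intra-day analysis.
            "trading_session": "core" if 9 <= hour < 17 else "off-hours",
        })
    return rows

ts = datetime(2024, 3, 15, 14, 32, 21)
print(date_key(ts), time_key(ts))  # 20240315 52341
```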
Since dimensions are in principle used for analysis, the specific second in a Time dimension is usually not interesting, unless one wants to discover predictive patterns based on intra-day values. If one needs to know what happened in the last 45 minutes (say, how many transactions there were for a specific call option on some stock with an exercise price of X and an expiration of Y), one uses the transaction timestamp that is included in the fact table, as in the sketch below.
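A sketch of such a rolling-window query, using the timestamp stored on the fact row. The fact_rows structure, option_id values, and column names are hypothetical stand-ins for the real fact table:

```python
from datetime import datetime, timedelta

# Hypothetical fact rows; in practice this would be a query against the fact table.
fact_rows = [
    {"transaction_ts": datetime(2024, 3, 15, 14, 32, 21),
     "option_id": "XYZ_C_100_JUN", "quantity": 5},
    {"transaction_ts": datetime(2024, 3, 15, 13, 40, 0),
     "option_id": "XYZ_C_100_JUN", "quantity": 3},
]

def transactions_last_45_minutes(rows: list[dict], option_id: str, now: datetime) -> list[dict]:
    # Filter on the fact-level timestamp, not the Time dimension:
    # the window crosses second boundaries, so the raw timestamp is what matters.
    cutoff = now - timedelta(minutes=45)
    return [r for r in rows
            if r["option_id"] == option_id and cutoff <= r["transaction_ts"] <= now]

now = datetime(2024, 3, 15, 14, 45, 0)
recent = transactions_last_45_minutes(fact_rows, "XYZ_C_100_JUN", now)
print(sum(r["quantity"] for r in recent))  # 5: only the 14:32:21 trade falls in the window
```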
3) Stored procedures can sometimes be necessary (for example, to calculate a value based on spot prices in the stock market during a day). However, since dimensional modelling already involves ETL and a data model, you want to keep the query side as simple as possible; introducing stored procedures there as well decreases maintainability.
4) Query performance is only an issue when you query on the fly. With a cube you can pre-calculate the values, so this should not be an issue in the first place; a pre-aggregation sketch follows below.
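To illustrate the pre-calculation idea, here is a sketch that rolls fact rows up to a (date_key, hour) grain once during ETL, so queries read the small aggregate instead of scanning raw facts. The grain and column names are assumptions for illustration:

```python
from collections import defaultdict

# Hypothetical fact rows keyed by the smart keys from point 2.
facts = [
    {"date_key": 20240315, "time_key": 52341, "quantity": 5},
    {"date_key": 20240315, "time_key": 52400, "quantity": 3},
    {"date_key": 20240315, "time_key": 36000, "quantity": 7},
]

# Pre-aggregate once; at query time only this small table is read.
aggregate: dict[tuple[int, int], int] = defaultdict(int)
for row in facts:
    hour = row["time_key"] // 3600          # derive the hour from the time key
    aggregate[(row["date_key"], hour)] += row["quantity"]

print(dict(aggregate))
# {(20240315, 14): 8, (20240315, 10): 7}
```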