• SQL Server is going to do disk I/O in 64K chunks.  Forget anything else you've read about NTFS; use 64K clusters, and try to keep everything except SQL files off the SQL disks.  SQL Server's unit of storage is the 8K page, allocated in 64K extents of eight pages.  If there is some reason you can't use 64K clusters, use the largest size you can, down to a minimum of 8K.

    Beyond the cluster size, the I/O traffic is different for data and log files.  Both should use a 64K cluster size, but data access tends to be random while log writes are sequential, and they differ in cache lifetime and other characteristics.  You will see better performance if you don't mix them on the same physical drives.  In this context, indexes and tables are data: anything in MDF or NDF files should be on separate disks from LDF files.  Keeping indexes and tables on separate drives may also be a good idea, but that's a higher-level optimization.
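
    For example, a minimal sketch of that split at CREATE DATABASE time (the Sales database name, the file paths, and the D: and L: drive letters are all hypothetical):

        -- Data (MDF) on one physical drive, log (LDF) on another
        CREATE DATABASE Sales
        ON PRIMARY
            ( NAME = Sales_Data,
              FILENAME = 'D:\SQLData\Sales_Data.mdf',
              SIZE = 512MB, FILEGROWTH = 256MB )
        LOG ON
            ( NAME = Sales_Log,
              FILENAME = 'L:\SQLLogs\Sales_Log.ldf',
              SIZE = 256MB, FILEGROWTH = 128MB );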

    For an external drive array, I would consult the vendor about stripe size.  Make sure they understand that all I/O requests will be 64K and that they are not just giving a generic recommendation.  If no guidance is available, making the stripe size work out to a multiple of 64K across all drives is probably not a bad idea.

    Forget about "BACKUP ... WITH BLOCKSIZE 64K"; that only applies to tape block size.  The backup is written in pages and extents and will be restored that way, so the underlying cluster size is irrelevant.
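
    For reference, a plain disk backup looks like this (database name and path are hypothetical); the block size is simply left at its default:

        -- No BLOCKSIZE option needed; the cluster size of the target drive doesn't matter
        BACKUP DATABASE Sales
        TO DISK = 'E:\SQLBackups\Sales_full.bak'
        WITH INIT, NAME = 'Sales full backup';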

    SQL Server doesn't really care how many different cluster sizes you use on different drives; it allocates data in 64K extents regardless.  It can create & drop files and make them grow & shrink, but it lets the operating system keep track of the physical disk layout (unless you're using raw partitions).
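
    If you want to control how a file grows, a quick sketch (database and logical file names are hypothetical):

        -- Grow the data file in fixed 256MB increments instead of a percentage
        ALTER DATABASE Sales
        MODIFY FILE ( NAME = Sales_Data, FILEGROWTH = 256MB );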

    General guidelines:

    • All SQL drives should have a 64K cluster size.  Period.
    • Set permissions so that only the SQL Server service account has access to the database (MDF, NDF, LDF) drives.  React violently if anyone suggests putting a file share on there since you've got all that space.  Tell them to store their MP3 collection somewhere else.
    • Put data (MDF and NDF) files on separate physical drives from LDF files.  A partitioned RAID set with multiple drive letters is still only one set of physical drives.  This also applies to LUNs in a SAN, even if they are assigned to different hosts (at least this is the advice I got for my EMC SAN, ymmv).
    • Put the tempdb MDF on an isolated drive if possible; the fastest drive available would be ideal.  There are good reasons for creating two data files (or more for multi-CPU servers) for tempdb in the PRIMARY filegroup, but they can be on the same drive unless you have an extra drive available (see the tempdb sketch after this list).  The tempdb log can be placed with the other log files, again unless you have the luxury of a separate drive for this purpose.
    • If you're backing up to disk, the backups should not be on the same disks as the database files.
    • If you still have multiple drive letters available for data files, feel free to separate databases or to create multiple filegroups per database and manually place tables & indexes (see the filegroup sketch after this list).  Multiple filegroups per database may give you better performance if you have the time and expertise to place things just right for your structure & workload, but it takes considerably more effort from the DBA to get right.
    • There is usually no good reason to have more than one LDF file per database, but separating the LDF files for active databases (if you've got drives to spare) is a good idea.

    I've never had a server with enough drive letters to do everything on that list.
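
    Here is the tempdb sketch mentioned above: adding a second data file to the PRIMARY filegroup.  The file name, path, and sizes are hypothetical.

        -- Second tempdb data file; tempdb is rebuilt from these file definitions
        -- at every restart, so set an explicit SIZE rather than relying on autogrow
        ALTER DATABASE tempdb
        ADD FILE ( NAME = tempdev2,
                   FILENAME = 'T:\SQLTempDB\tempdev2.ndf',
                   SIZE = 512MB, FILEGROWTH = 256MB );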
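
    And the filegroup sketch: a filegroup on its own drive with one busy table placed on it.  All names, paths, and sizes are hypothetical.

        -- Create the filegroup, give it a file on its own drive, then put the hot table on it
        ALTER DATABASE Sales ADD FILEGROUP FG_Orders;

        ALTER DATABASE Sales
        ADD FILE ( NAME = Sales_Orders,
                   FILENAME = 'E:\SQLData\Sales_Orders.ndf',
                   SIZE = 1024MB, FILEGROWTH = 512MB )
        TO FILEGROUP FG_Orders;

        CREATE TABLE dbo.Orders
        ( OrderID    int IDENTITY NOT NULL PRIMARY KEY,
          CustomerID int NOT NULL,
          OrderDate  datetime NOT NULL )
        ON FG_Orders;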

    SQL 2005 considerations:

    • tempdb may require more space, especially if you use snapshot isolation (see the sketch after this list).
    • The Enterprise edition can use filegroups on separate drives for partitioning large tables (see the partitioning sketch below).
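
    The snapshot isolation sketch mentioned above (database name is hypothetical); the row versions these options generate are what consume the extra tempdb space:

        -- Row versions for both options are kept in tempdb
        ALTER DATABASE Sales SET ALLOW_SNAPSHOT_ISOLATION ON;
        -- Optional, also version-based; typically needs the database free of other connections
        ALTER DATABASE Sales SET READ_COMMITTED_SNAPSHOT ON;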
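
    And a bare-bones partitioning sketch; the filegroups FG2005, FG2006, and FG2007 are hypothetical and would each need files on their own drives first:

        -- Map date ranges to filegroups, then build the table on the partition scheme
        CREATE PARTITION FUNCTION pfOrderDate (datetime)
        AS RANGE RIGHT FOR VALUES ('20060101', '20070101');

        CREATE PARTITION SCHEME psOrderDate
        AS PARTITION pfOrderDate TO (FG2005, FG2006, FG2007);

        CREATE TABLE dbo.OrdersByYear
        ( OrderID   int      NOT NULL,
          OrderDate datetime NOT NULL )
        ON psOrderDate (OrderDate);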