• NTFS can still fragment the file at the file-system level - which is independent of how the underlying blocks are laid out on the SAN - so you should still check the fragmentation level of the file and defragment it if it has accumulated a lot of fragments.

    Each file fragment boundary can turn a single logical read into a split I/O against the SAN, which can affect overall performance.

    Take for instance a database file or log file that was created at the default sizes, with the default auto-growth settings. For a data file the default growth increment is 1MB, and for log files it is 10%. Every time SQL Server has to grow a file, a new file fragment can be created.
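    As a rough sketch of why the 10% log default is a problem (the 1MB starting size is an assumption, taken from the old model-database default): percentage growth compounds, so a log that ends up large has been through a surprising number of growth events, each of which is a chance to add another file fragment.

    ```python
    # Count how many 10% auto-growth events a log file needs to
    # reach 10 GB from an assumed 1 MB starting size. Each growth
    # event can create another NTFS file fragment.
    size_mb = 1.0            # assumed starting size (old model default)
    target_mb = 10 * 1024    # 10 GB
    events = 0
    while size_mb < target_mb:
        size_mb *= 1.10      # default 10% FILEGROWTH
        events += 1
    print(events)            # 97 growth events
    ```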

    If your data file is now 10GB - and it grew to that size in 1MB increments - it is possible that you have thousands of file fragments. That increases the number of split I/Os needed to read the data from the SAN volume - and can seriously affect your performance.
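    To put a number on it, here is the worst-case arithmetic for the data-file scenario above (sizes are the ones already assumed in this example):

    ```python
    # Worst case: a data file that reached 10 GB purely through
    # 1 MB auto-growth events, each of which may leave behind a
    # new NTFS fragment.
    target_gb = 10
    growth_mb = 1

    growth_events = target_gb * 1024 // growth_mb
    print(growth_events)  # 10240 potential file fragments
    ```

    Even if only a fraction of those growths land in non-contiguous clusters, you are still looking at thousands of fragments.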

    I would definitely review the files and defragment them if they have a lot of fragments - and as Gail mentioned, do this with SQL Server shut down so you avoid any possible corruption of the files.

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll
