(1) is this doable? ie can i install multiple instances on a cluster, and have them use specific cpu cores? eg let's say i have installed 3 instances on a cluster whose nodes have 3 x 4-core cpus giving 12 cores in each node. instance a1 uses cores 1-4, instance b2 uses core 5-8 and instance c3 uses cores 9-12. this means i only have to purchase 3 sql 2008 standard licenses yea?
Yes, this is workable using affinity masking in SQL Server (an option set through sp_configure), which restricts an instance to specific CPUs. For 3 active instances you will need 3 licenses, but since SQL Server 2008 Standard can also be licensed per physical CPU (socket), you may end up spending less: one quad-core CPU needs only one per-CPU license.
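A minimal sketch of the affinity masking described above, assuming the 12-core layout from your question (the mask values are hypothetical for that layout; note that bit positions in the mask are zero-based, so "cores 1-4" means bits 0-3):

```sql
-- Run against instance a1: cores 1-4 = bits 0-3 = 0x00F = 15
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'affinity mask', 15;
RECONFIGURE;

-- Against instance b2: cores 5-8  = bits 4-7  = 0x0F0 = 240
-- EXEC sp_configure 'affinity mask', 240;

-- Against instance c3: cores 9-12 = bits 8-11 = 0xF00 = 3840
-- EXEC sp_configure 'affinity mask', 3840;
```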
(2) is the cluster smart enough to know that when it fails an instance over, it must keep using only the cores allocated to it? ie instance a1 uses cores 1-4, instance b2 uses core 5-8 and instance c3 uses cores 9-12
Yes. The changes made through sp_configure are stored in the master database, and all the databases sit on the shared drives, so after a failover the same settings will prevail.
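If you want to confirm this after a failover, you can query the catalog view that exposes the sp_configure settings (available from SQL Server 2005 onwards):

```sql
-- value is the configured setting; value_in_use is what the
-- instance is actually running with right now.
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'affinity mask';
```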
(3) i understand that i can have a maximum of 16 instances on the same node. that's fine. however, i must be careful with the way the drives are designated as there are only 25 letters to use. unless i use mount points. um, what's a mount point in this context? my research has only found info that is either too generic or assumes too much knowledge of network storage devices.
Well, you will probably not reach the 16-instance limit. Microsoft's NTFS 3.0 supports volume mount points through the use of NTFS reparse points, which allow a volume to be mounted at an arbitrary folder in the file system instead of (or in addition to) a standard drive letter (e.g. C:, E:). Windows clustering supports mount points, but there may be better options that others can offer.
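To make that concrete: from SQL Server's side a mount point is just a folder path, so a clustered disk mounted at a folder on an existing drive consumes no extra drive letter. A sketch, with a hypothetical mount folder `D:\Mounts\SQLData1`:

```sql
-- The clustered disk is mounted at D:\Mounts\SQLData1 rather
-- than taking its own drive letter; SQL Server just sees a path.
CREATE DATABASE Sales
ON PRIMARY
    (NAME = Sales_data,
     FILENAME = 'D:\Mounts\SQLData1\Sales.mdf')
LOG ON
    (NAME = Sales_log,
     FILENAME = 'D:\Mounts\SQLData1\Sales_log.ldf');
```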
(4) let's say i choose not to use mount points and stick with drive letters. with network storage being what it is, is it the case i do not really need to put my data, log & tempdb files on different 'drives' since it's the mirroring & striping of the network storage itself that provides the safety & avoids disk contention?
Putting the data, log, backup and tempdb files on different physical drives will still give you better performance, even on SAN storage, because it separates workloads with different I/O patterns (random data-file reads versus sequential log writes).
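For example, tempdb can be relocated to its own drive after installation (the drive letter `T:` here is hypothetical; `tempdev` and `templog` are the default logical file names):

```sql
-- Move tempdb's data and log files to a dedicated drive.
-- The change takes effect at the next service restart.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');
```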
(5) my solution must be compatible with an as-yet unknown future disaster recovery project. my thinking is that the most direct way to do this will be to physically separate the cluster nodes. to make the most of this, i should also separate the disk mirrors. is this doable? ie is there such a thing as a network storage device that allows you to have one set of its disks in one place and the mirror disks in another place?
For disaster recovery you would typically have the RAID drives on a SAN, but you can also use options like log shipping, database mirroring, and replication to create a DR site (in case both nodes go down).
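Of those options, database mirroring is probably the most direct fit for a remote DR site. A minimal sketch, assuming hypothetical server names and the default mirroring endpoint port 5022 (the database must first be restored WITH NORECOVERY on the DR server, and mirroring endpoints must exist on both sides):

```sql
-- On the mirror (DR) server: point it at the principal.
ALTER DATABASE Sales
SET PARTNER = 'TCP://prodserver.example.com:5022';

-- Then on the principal: point it at the DR server.
ALTER DATABASE Sales
SET PARTNER = 'TCP://drserver.example.com:5022';
```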
Sr. DBA (MCITP/MCTS: SQL Server 2005/2008, OCP 9i)