There are no simple answers for a number of reasons:
DBAs often do not participate in hardware or vendor selection; they work with what they have.
Corporations will often have a corporate standard for server hardware, which may or may not be influenced by DBA input.
Capacity calculations often depend on information that is not available: how many users will use the system, how many IOPS are required, how fast data volumes will grow, how much network bandwidth the application will consume and how fast that will grow, and so on.
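Even when those inputs are only guesses, a back-of-envelope calculation can at least make the assumptions explicit. The sketch below is purely illustrative: the user counts, IOPS-per-user figure, growth rate and headroom factor are invented placeholders, exactly the kind of numbers that are hard to pin down in practice.

```python
# Back-of-envelope capacity estimates. All inputs (user counts,
# IOPS per user, growth rate, headroom) are hypothetical placeholders.

def projected_data_gb(current_gb: float, annual_growth_rate: float, years: int) -> float:
    """Compound-growth projection for data volume."""
    return current_gb * (1 + annual_growth_rate) ** years

def required_iops(concurrent_users: int, iops_per_user: float, headroom: float = 0.3) -> float:
    """Peak IOPS estimate with a safety headroom on top."""
    return concurrent_users * iops_per_user * (1 + headroom)

# Assumed inputs: 500 GB today, 40% annual growth, 3-year horizon.
print(round(projected_data_gb(500, 0.40, 3)))  # ~1372 GB
# Assumed inputs: 200 concurrent users at 5 IOPS each, 30% headroom.
print(required_iops(200, 5))                   # 1300.0 IOPS
```

The point is not the specific formula but that writing the assumptions down makes them visible and revisable once real numbers arrive.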
Cost is a major component of determining hardware standards. Since hardware costs are not predictable (see last year's hard drive shortages and the attendant unanticipated price increases), companies will often buy as much hardware as they can get... for price X. Anything more than price X requires justification.
Now add virtualization into the mix. You've got the underlying hardware plus the hypervisor/virtualization layer to consider. Let's say you have a 16-core server running 4 VMs. Do you provision 1, 2, 4, 8 or all 16 cores to each virtual machine?
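One way to frame that provisioning question is the vCPU oversubscription ratio: total vCPUs handed out across all VMs divided by physical cores. A minimal sketch, assuming the 16-core / 4-VM example above; what ratio is acceptable varies by hypervisor and workload, and the figures here are illustrative only.

```python
# Sketch of the vCPU oversubscription arithmetic for the 16-core,
# 4-VM example. Acceptable ratios are workload-dependent; these
# numbers are illustrative, not a vendor recommendation.

def oversubscription_ratio(physical_cores: int, vcpus_per_vm: list[int]) -> float:
    """Total provisioned vCPUs divided by physical cores."""
    return sum(vcpus_per_vm) / physical_cores

# 16-core host, 4 VMs with 8 vCPUs each: provisioned 2x over.
print(oversubscription_ratio(16, [8, 8, 8, 8]))  # 2.0

# Same host, 4 VMs with 4 vCPUs each: no oversubscription.
print(oversubscription_ratio(16, [4, 4, 4, 4]))  # 1.0
```

A ratio above 1.0 is not automatically wrong, but it means the VMs can contend for CPU under simultaneous load, which is exactly the behavior that is hard to reproduce outside production.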
At the end of the day, it comes down to testing in production. Yes, this is a bad idea. But testing in a lab environment often doesn't yield accurate results. If you see processes that are processor bound, add cores. If you see processes that are memory bound, add RAM. If you see processes that are network bound, add network bandwidth (if you can). If you see processes that are disk bound, look at your storage configuration (this last one is often the hardest, especially in a production environment).
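That triage logic can be sketched as a simple decision chain. The metric names and thresholds below are invented for illustration; in practice you would read real counters from your monitoring tooling (perfmon, vmstat, database wait statistics) rather than a hand-built dictionary.

```python
# Toy version of the "which resource is the bottleneck?" triage
# described above. Metric names and thresholds are assumptions for
# illustration, not measured guidance.

def diagnose(metrics: dict) -> str:
    """Return a remediation hint based on crude utilization thresholds."""
    if metrics.get("cpu_pct", 0) > 90:
        return "processor bound: add cores"
    if metrics.get("mem_pct", 0) > 90:
        return "memory bound: add RAM"
    if metrics.get("net_pct", 0) > 90:
        return "network bound: add bandwidth"
    if metrics.get("disk_queue", 0) > 2:
        return "disk bound: review storage configuration"
    return "no obvious bottleneck"

print(diagnose({"cpu_pct": 95}))     # processor bound: add cores
print(diagnose({"disk_queue": 10}))  # disk bound: review storage configuration
```

The disk case is last on purpose: as the answer notes, storage is the hardest to fix in place, so it's worth ruling out the cheaper remedies first.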
commented on Aug 30 2012 8:54AM