On 04/04/13 21:12, Rob Beard wrote:
>> Why? Depends on the hardware, I run an * server on a VM.
>
> It's a PCI card, as far as I'm aware it won't detect the card in a VM.
> Otherwise we'd probably have done a VM.

IOMMU virtualization is what you're looking for: VT-d in Intel land, AMD-Vi otherwise (really big Unix iron has had this forever, particularly the IBM stuff and their big z/OS mainframes). It's generally only available on very high-end desktop CPUs, but common on server-class iron.

You can use this tech to bind specific hardware on the host box to individual VMs or groups of VMs as appropriate - I use this daily. The most common use case is to bind the multiple HBA (host bus adaptor) PCIe cards in your quad-socket/256GB RAM server to individual VMs, or groups of them, to physically and logically separate out their iSCSI/FCoE/other storage connections, mostly for optimisation purposes (i.e. maximum throughput from the SAN).

It used to be used extensively to map individual 10G Ethernet or multiple 1G Ethernet physical network ports on the host iron to individual VMs as well, but everyone is going SDN (software-defined networking) these days, so the switches are commonly virtualized as well.

It's VMs all the way down my friends, all the way down...

Cheers
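
P.S. If you want to check whether a given box can actually do this before committing to it, here's a rough sketch in Python (assuming a Linux host with the usual sysfs/procfs layout - the paths and the intel_iommu=on / amd_iommu=on boot options are the stock kernel ones, nothing exotic). It just reports whether the kernel was booted with the IOMMU switched on and lists the IOMMU groups it has built; no groups means no passthrough.

#!/usr/bin/env python3
# Rough check for working IOMMU (VT-d / AMD-Vi) support on a Linux host.
# Assumes the standard sysfs/procfs paths; run it on the box that will
# host the VMs.

import glob
import os

def kernel_cmdline_has_iommu():
    # intel_iommu=on is needed on most Intel boxes; amd_iommu is often on
    # by default, so a False here is not conclusive on AMD kit.
    with open("/proc/cmdline") as f:
        cmdline = f.read()
    return "intel_iommu=on" in cmdline or "amd_iommu=on" in cmdline

def iommu_groups():
    # The kernel only populates /sys/kernel/iommu_groups once it has found
    # and enabled an IOMMU; each group is the smallest unit of hardware you
    # can hand to a single VM.
    return sorted(glob.glob("/sys/kernel/iommu_groups/*"),
                  key=lambda p: int(os.path.basename(p)))

if __name__ == "__main__":
    print("IOMMU forced on via kernel cmdline:", kernel_cmdline_has_iommu())
    groups = iommu_groups()
    print("IOMMU groups found:", len(groups))
    for group in groups:
        devices = os.listdir(os.path.join(group, "devices"))
        print("  group %s: %s" % (os.path.basename(group), ", ".join(devices)))

Once that reports some groups, the next step (on KVM at least) is handing the card's PCI address over to the hypervisor - pci-stub or vfio-pci on the host, then a hostdev entry in the guest definition - but that's a longer mail.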