On Jul 21, hvjunk wrote:
In the sharing environment:
- You have a desktop: you’ll use Parallels/VMware/VirtualBox (Vagrant for headless
VirtualBox). (Okay, you could use libvirt + KVM, but you are already on a GUI and
want GUI interaction, etc.)
- You have a Linux/DebOps server with spare capacity and need to run VMs: libvirt
with KVM. (On FreeBSD: bhyve in place of KVM.)
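(As an illustration of the libvirt + KVM route above, here is a minimal sketch using
the libvirt Python bindings; the connection URI is the standard local system one, and
the VM name "testvm" is only a placeholder.)

    import libvirt

    # Connect to the local system-level QEMU/KVM hypervisor.
    conn = libvirt.open("qemu:///system")

    # List all domains (VMs) known to libvirt, running or not.
    for dom in conn.listAllDomains():
        state, _reason = dom.state()
        print(dom.name(),
              "running" if state == libvirt.VIR_DOMAIN_RUNNING else "stopped")

    # Start a previously defined but currently stopped VM ("testvm" is a placeholder).
    dom = conn.lookupByName("testvm")
    if not dom.isActive():
        dom.create()

    conn.close()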
Now for dedicated hypervisor(s) on physical baremetals:
- Simple GUI, only a single server: ProxMox / ESXi (the free edition, which doesn’t do
clustering afaik)
- fun “niche” option: SmartOS
- a few clustered servers with a GUI, no auto-allocation: ProxMox with clustering /
VMware
- medium size, auto-allocation: Ganeti / VMware
- medium to massive scale, auto-allocation, users with their own rights and billing:
OpenStack / OpenNebula / VMware
The “fun” with OpenStack/OpenNebula/VMware/etc. is that while you might be
installing some of the controllers on a Debian-type OS, the actual storage
and compute physicals are not necessarily a “distro” installation, but
their own “thing” that talks to the master(s). I might be wrong about this,
but that was the information I saw the last time I investigated them.
Very nice breakdown of the various virtualization models. The SAN/NAS storage
can definitely be something else, like Synology or NetApp devices, which you can
connect to your hypervisor infrastructure using NFS, iSCSI or similar
technologies. On the other side, your hypervisors can then be thin blade
servers with just enough storage to boot the hypervisor OS - after all,
virtual machine storage is hosted elsewhere. That's where the compute/storage
split comes from.
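(To make the compute/storage split concrete: a sketch of registering an NFS export
from a NAS as a libvirt storage pool on a hypervisor, again using the libvirt Python
bindings; the hostname and paths are made-up examples.)

    import libvirt

    # VM disks live on a NAS export, mounted on the hypervisor as a "netfs" (NFS)
    # storage pool. The hostname and paths below are placeholders.
    pool_xml = """
    <pool type='netfs'>
      <name>nas-vmstore</name>
      <source>
        <host name='nas.example.org'/>
        <dir path='/export/vmstore'/>
        <format type='nfs'/>
      </source>
      <target>
        <path>/var/lib/libvirt/images/nas-vmstore</path>
      </target>
    </pool>
    """

    conn = libvirt.open("qemu:///system")
    pool = conn.storagePoolDefineXML(pool_xml, 0)  # register the pool persistently
    pool.setAutostart(1)   # mount it again after the hypervisor reboots
    pool.create(0)         # mount it now (the target directory must already exist)
    pool.refresh(0)
    print(pool.listVolumes())  # VM disk images that actually live on the NAS
    conn.close()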
It’s auto-allocation, with the controllers etc. deciding what runs where,
and you typically only have block storage that you write to… the challenge
here: BIG FAT back-end network links, as every disk I/O goes out over the
network.
Using 10 Gbit/s or faster Fibre Channel connections definitely helps. In that
case your hypervisors usually have separate network cards that connect to the
storage network, and a different set of cards used for normal VM traffic. Add
to that separate VLANs, or even full software-defined networking such as VXLAN
overlays, to keep your tenants in their own separate networks, and we can
start talking about offering cloud services to third parties... Kind of like
our own AWS/OVH/Hetzner/DigitalOcean cloud. Suddenly your IT department can
become a source of revenue instead of being only a cost center. But that's a
topic for another discussion. :)
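(As a small illustration of the tenant-separation idea: a libvirt network bound to an
Open vSwitch bridge with a VLAN tag, so guests attached to it stay in their own
segment. The bridge and network names are invented, and full VXLAN overlays would
normally be handled by OVS/OVN or by the cloud platform itself rather than by plain
libvirt.)

    import libvirt

    # A libvirt network on an Open vSwitch bridge, tagging all guest traffic with
    # VLAN 1042 so this tenant stays in its own L2 segment. Names are examples.
    net_xml = """
    <network>
      <name>tenant-blue</name>
      <forward mode='bridge'/>
      <bridge name='ovsbr0'/>
      <virtualport type='openvswitch'/>
      <vlan>
        <tag id='1042'/>
      </vlan>
    </network>
    """

    conn = libvirt.open("qemu:///system")
    net = conn.networkDefineXML(net_xml)  # register the network persistently
    net.setAutostart(1)
    net.create()  # make it available to guests now
    conn.close()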
> Are these specific packages needed to split processes across the
> OpenStack components? Nova etc... (I'm guessing here)
I won’t be surprised if you’ll have to compile lots of the stuff yourself
too … no need yet to investigate myself ;(
I'm not sure about the current state of OpenStack in Debian, you can consult
the wiki[2] page about it. OpenNebula provides their own APT repository with
various packages for each release[3], so manual compilation might not be
necessary.
[2]: https://wiki.debian.org/OpenStack
[3]: https://docs.opennebula.io/5.12/deployment/opennebula_installation/fronte...
>> In the end, I think that DebOps as a solid base to deploy OpenStack,
>> OpenNebula or Kubernetes in a production environment is a good target to aim
>> for. Many of the underlying components are shared, which should make things
>> a bit easier.
I do not disagree; I would rather advise that this be a bit of a sideline
project instead of a full-blown DebOps path, unless there are real needs
and people with the budgets for such equipment “sponsoring” this part of
the project ;)
I know that DebOps is used in very diverse environments, from cloud providers
to local installations with their own hardware. I want to use it to manage the
on-premises infrastructure of a large university I work at, so a focus on
on-premises and clustered environments is definitely on point for me. The
complexity of such infrastructure is very high, so I want to do it slowly and
methodically to have a stable base. We will get there, eventually. :)
PS: Ansible has ProxMox VM creation stuff lately, and ProxMox does have an API
etc. to control it ;)
That's great to hear, but I'd like to be able to use vanilla Debian as a base.
The infrastructure I'm working on is used by tens of thousands of people
daily, and its stability and reliability are important enough to carefully
consider the vendors. We're not talking about a WordPress blog here, although
a few of those are also in the mix.
Of course, if somebody wants to add roles for Proxmox deployment to DebOps, go
ahead and show me what you've got.
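(For reference, talking to the Proxmox VE REST API directly is fairly simple; here is
a rough sketch using plain requests. The host and credentials are obviously
placeholders, and certificate verification is disabled only because a default Proxmox
install ships a self-signed certificate. The Ansible modules mentioned above, proxmox
and proxmox_kvm in the community.general collection, wrap this same API.)

    import requests

    PVE = "https://pve.example.org:8006/api2/json"  # placeholder host

    # Authenticate: exchange username/password for a ticket cookie and a CSRF token.
    r = requests.post(f"{PVE}/access/ticket",
                      data={"username": "root@pam", "password": "secret"},
                      verify=False)  # self-signed certificate on a default install
    r.raise_for_status()
    data = r.json()["data"]
    cookies = {"PVEAuthCookie": data["ticket"]}
    headers = {"CSRFPreventionToken": data["CSRFPreventionToken"]}
    # The CSRF token header is required for write requests; the GET calls below
    # only need the ticket cookie.

    # List cluster nodes and the VMs defined on each of them (read-only calls).
    nodes = requests.get(f"{PVE}/nodes", cookies=cookies, verify=False).json()["data"]
    for node in nodes:
        vms = requests.get(f"{PVE}/nodes/{node['node']}/qemu",
                           cookies=cookies, verify=False).json()["data"]
        print(node["node"], [vm["name"] for vm in vms])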
Cheers,
Maciej