On Mar 29, listerin wrote:
On 29.03.21 01:04, hvjunk wrote:
> As I have a proper FortiGate-VM in play, I can do proper limiting of
> outgoing traffic and SSL deep inspection of outgoing traffic so that way
> I also force as much as possible DNS/apt-caching/etc. to internal servers,
> and the devs need to help me specify the specific outside resources they
> need to access.
Sounds great. The location of jump hosts is still confusing to me, though.
It sounds like you have:
WAN -- FW -- jump host -- LAN1
                       |_ LAN3, etc.
one single jump host that has NICs in every subnet / VLAN? Where is the
Ansible controller then? And how do you prevent that jump host from being
a single point of failure?
First of all, due to SSH forwarding, you can detach "Ansible Controller" from
a specific host on your network. DebOps is designed to not require any special
facilities on the controller host, apart from Python, Ansible and some other
tools. If you put your project directory in a git repository, you can pick
a host, install Ansible and DebOps on it, clone the project directory and get
back to work. Or, in other words, make a laptop your Ansible Controller and
keep it offline when not using it to modify your infrastructure. There's no
constantly-on entry point, so everything is even more secure.
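As a rough sketch, rebuilding a controller from scratch could look like this
(the git URL is a placeholder, and the package names assume a Debian-based
host):

  sudo apt install python3-pip git encfs
  pip3 install --user 'debops[ansible]'
  git clone git@git.example.org:ops/project-dir.git ~/project-dir
  cd ~/project-dir && debops common -l somehost

After that, the laptop can go offline again until the next change.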
You can create as many jump hosts into your network as you want. If one jump
host is down, you can just switch your ~/.ssh/config options to the second
one. On the remote host side, you can specify either multiple IP addresses
or entire subnets as "Ansible Controllers", using the
'core__ansible_controllers' variable. The firewall and TCP Wrappers roles will
take them into account and allow SSH connections from them.
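For example (hostnames and addresses below are invented), a jump host in
~/.ssh/config plus the corresponding inventory entry could look like:

  # ~/.ssh/config - point ProxyJump at jump2.example.org if jump1 is down
  Host *.internal.example.org
      ProxyJump jump1.example.org

  # ansible/inventory/group_vars/all/core.yml
  core__ansible_controllers: [ '192.0.2.10', '198.51.100.0/24' ]

Switching jump hosts is then a one-line change on the controller side.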
I've used debops-padlock so far and committed the encrypted secrets, but
found then that the organization of my own GPG / SSH keys is really lacking
(securely synchronizing between my clients, number and separation of keys,
etc.). Ordered a Nitrokey, maybe this will help me become more organised in
that regard.
Nitrokey is a good choice; having two of them, for backup or redundancy,
could be even better. I'm currently using a YubiKey NEO for this purpose. I even
have a script that automatically forwards the GPG and SSH agents to remote
hosts, so that I can sign code using GPG on remote hosts via my local Yubikey.
When you set everything up, check that the 'system_users' role correctly picks
the key from the SSH agent.
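As an illustration of such agent forwarding (the socket paths assume uid 1000
and systemd-managed gpg-agent sockets on both ends; the hostname is invented):

  # ~/.ssh/config
  Host build.example.org
      ForwardAgent yes
      RemoteForward /run/user/1000/gnupg/S.gpg-agent /run/user/1000/gnupg/S.gpg-agent.extra

For this to work reliably, the remote sshd also needs
'StreamLocalBindUnlink yes' so that stale agent sockets get replaced on
reconnect.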
> yeah well, beware of over automation ;)
Oh, I can identify :D
"Automation" lured me with the promise of getting rid of
crafted snowflake servers, but I stayed for the possibility of having a
huuuuge (well-configured) cluster of services I'll never have the time to
really use :D
Sometimes people ask me why I want to automate even single hosts, of course
with DebOps. The point is that even if you automate just one host, using
public code, then multiple people can use the same code to automate their own
single hosts. Suddenly your single-host automation code manages thousands of
discrete machines, and can be improved by hundreds of people over time. It's
a very large amplification potential.
On 29.03.21 10:49, Maciej Delmanowski wrote:
> With a completely new environment, I would try and find all the quirks the
> provider has - do the hosts have the proper DNS PTR records available, are
> 'ansible_hostname' and 'ansible_fqdn' variables resolved properly,
> and so on.
> When you have the bootstrap.yml playbook working as expected, it's all pretty
> easy from there. I usually apply the common.yml playbook first and then
> inspect the host to see if basics are set up correctly - PKI realms, firewall,
> expected UNIX groups and user accounts. Afterwards, it all depends on the
> purpose of a given host.
Do you test for these quirks automatically or manually?
At the moment I check manually, but perhaps there could be a 'sanity.yml'
playbook and role added to the project which contains a set of assert tasks
with common problems. I'll see what we can do with this, and whether it can
be written in a way that lets people extend it; perhaps similar to how the
'ldap' role manages an extensible set of LDAP tasks.
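A hypothetical sketch of such a playbook, using the same facts mentioned
above (this playbook does not exist in DebOps yet):

  # playbooks/sanity.yml
  - name: Run basic environment sanity checks
    hosts: 'all'
    tasks:

      - name: Assert that hostname and FQDN look sane
        assert:
          that:
            - ansible_fqdn != 'localhost'
            - ansible_hostname in ansible_fqdn
          fail_msg: 'Check DNS PTR records and /etc/hosts entries'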
> It all depends on the available resources. Do you have access to
> a beefy hardware machine? Just cram everything in there; most DebOps roles are
> designed in such a way that there shouldn't be conflicts. If you can set up as
> many VMs or containers as you want, then it might be a good idea to separate
> different applications into different hosts, or even create multiple instances
> of a given application with proper redundancy. Some roles like slapd are
> designed with this kind of operation in mind, check their documentation.
What I am doing right now is designing an ideal lab environment on a Proxmox
host with sufficient resources (for testing). In it I try to find a balance
between secure separation of services and efficient use of each VM's
resources. So far I have:
opnsense: DHCP / DNS / FW
Host1: Controller / PKI / Jump Host (?)
Host2: PXE / Preseed / Apt-Cache
Host5: Monitoring / Logserver
Hopefully, this blueprint makes it easier to adapt to not-so-ideal
circumstances, but at least it's a good lab to learn debops.
That's a good set. You can also try out the 'rsnapshot' role and make a
server that pulls data from all the other hosts and makes daily/weekly/monthly
backups.
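A minimal retention sketch for such a server (note that rsnapshot.conf
requires tabs, not spaces, between fields; the host and paths are examples):

  # /etc/rsnapshot.conf
  retain	daily	7
  retain	weekly	4
  retain	monthly	6
  backup	root@host1.example.org:/etc/	host1/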
> The 'site.yml' DebOps playbook has the order of various
> roles designed pretty
> well, from setting up basic host services like firewall, SSH, networking, then
> further to various databases and backend services, finishing on end-user
> applications like GitLab and Nextcloud. If you plan to write your own roles
> to deploy applications, it's a good practice to check the playbooks of similar
> software stack included in DebOps to see what might be needed for your
> software deployment.
Should I try to find the right position for my custom roles, or is appending
them to a site.yml in my inventory enough? If not, how do I specify a position
without editing the DebOps code?
Unfortunately, Ansible is a bit inflexible in this part - you cannot define
a non-existent playbook which will be silently skipped if not found. So you
cannot add a custom playbook anywhere without modifying the DebOps code, which
breaks the ability to get updates via git... We managed to add a 'task_src'
custom Ansible module that lets us "inject" additional tasks into specific
roles using external files, but it's a brittle mechanism which I'm trying to
phase out.
For now, what you can do is create your own 'site.yml' playbook and include
the DebOps playbooks you want to use via the 'import_playbook' keyword. Also
remember that you can specify multiple playbooks to be executed on the command
line. For example, when I'm working on, say, NetBox, after a new LXC container
is created I run:
debops bootstrap-ldap -l container -u root \
&& debops common service/postgresql_server service/redis_server -l container \
&& debops service/netbox -l container
This gives me about 10 minutes to do other things until all the playbooks
finish.
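The same chain can live in a custom 'site.yml' in the project directory, for
example (the import paths are a sketch; adjust them to wherever the DebOps
playbooks are available in your setup):

  # ansible/playbooks/site.yml
  - import_playbook: 'common.yml'
  - import_playbook: 'service/postgresql_server.yml'
  - import_playbook: 'service/redis_server.yml'
  - import_playbook: 'service/netbox.yml'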
> Jump Hosts
> The SSH protocol is pretty versatile here. [...]
Sorry, this is getting redundant, but in your example, where is the ansible
controller located? Do you have one in your "home environment" that controls
all of your different debops environments or one controller in each debops
environment to which you (manually) connect through a jump host?
I have all my project directories in their own git repositories. So, when I'm
at home I work with the project directory checked out locally, and at work
I use the project directory checked out locally as well. Thanks to SSH jump
hosts, both workstations connect via the same bastion host, and the remote
hosts are none the wiser. Git provides synchronization and backups.
You can definitely create different jump hosts for different parts of your
network, if you wish to separate them completely. If you stick to subdomains,
then in the ~/.ssh/config file it's easy to specify different jump hosts for
each of them.
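For instance (all names invented), two separated environments could each get
their own jump host:

  # ~/.ssh/config
  Host *.lab.example.org
      ProxyJump jump.lab.example.org

  Host *.prod.example.org
      ProxyJump jump.prod.example.org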
> Secrets are handled by the 'debops.secret' role with
> either EncFS or git-crypt
> to provide encryption at rest. Of course it's best if you use that on top of
> an encrypted filesystem to not leave traces on the hardware, and don't publish
> repositories with secrets on public websites like GitHub to minimize exposure.
Having only just started to use git, this makes me super anxious about using
public repositories, but finding out about debops-padlock eased that a bit.
Since DebOps currently uses EncFS for encrypting secrets, you might want to
read its security audit. The conclusions are that EncFS is mildly secure as
long as you don't replace any encrypted file contents; otherwise encryption
details might leak out. Keeping the repositories accessible to specific people
only and not available publicly should mitigate the risks somewhat.
> It's a chicken-and-egg problem [...]
Thanks for the detailed walkthrough. I only found out about bootstrap-ldap
via the newly added sssd role documentation; bootstrap-sssd seems to work
similarly. How do I know which tasks to skip? Probably
by knowing the code, right? :)
Well, if you want to know which tasks to skip during the LDAP bootstrapping
process, run the playbooks without any --skip-tags arguments. You will see
when the playbook execution breaks and what roles should be avoided.
> Over the years of developing DebOps and helping people debug issues in their
> infrastructure I can say that each person's environment is different. It all
> depends on the purpose of the infrastructure - web applications will have
> different requirements than a HPC cluster, which will have different
> requirements than a backup storage array. The recent trend of going into the
> Cloud can cover probably 70%-80% of common use cases pretty easily, especially
> when you just get prepared VMs for your applications akin to Heroku. But
> handling your own infrastructure properly from the ground up is still a skill
> which you acquire over years of practice. So don't get discouraged if you
> stumble on a roadblock - everything can be either fixed or redesigned if needed.
Thanks for the encouragement and the support! As strange as it may sound, I'm
having great fun learning debops and its multitude of possibilities. Just
feel overwhelmed by it at times, but that keeps me motivated.
I'm glad you like it. Using DebOps as a tutorial definitely requires some
determination, since there's not much hand-holding and explanation involved,
like in proper tutorials. The upside is that this is code really used in
production environments all over the world, so you can learn best practices
from it.