|
My current deployment of non-transport servers looks like this:

xmpp-prod host:
    chat.fennell.dev
    chat.koyo.haus
    prod turn server

xmpp-nonprod host:
    chat.continuous.nonprod.fennell.dev
    chat.continuous.nonprod.koyo.haus
    nonprod turn server
So, for each environment, there are two XMPP servers and only a single turn
server.
Therefore, within each environment, all XMPP servers need to point to the same
turn server. I decided arbitrarily that that server would be defined for the
koyo.haus domain.
I used to have a variable defined for each host to manually point the turn
server to that domain. However, we can prevent some duplication of information
in the playbook if we just define the turn_domain (i.e. koyo.haus) in the
inventory, and then derive the full path for that environment from that.
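A minimal sketch of the idea (variable and group_vars layout are illustrative,
not necessarily what the playbook uses): define turn_domain once per
environment, and build the TURN hostname from it wherever it is needed.

    # group_vars/prod.yaml (sketch)
    turn_domain: koyo.haus

    # group_vars/nonprod.yaml (sketch)
    turn_domain: continuous.nonprod.koyo.haus

    # derived wherever the TURN server address is needed
    turn_host: "chat.{{ turn_domain }}"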
|
|
I have two different kinds of servers - transport servers (which connect to
legacy networks and have s2s disabled) and non-transport servers (which are
XMPP-only and have s2s enabled).
I previously had an is_transport_server boolean defined for each host in the
inventory - however, this is duplicated information that can be derived from
the length of the transports value (which lists the legacy networks to
transport to).
Transport servers have a non-empty transports list, while non-transport servers
do not define the variable at all. So, handle this case in the playbook by
deriving an empty list if the value is not present.
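For example, the derivation can be a one-liner with a default filter (sketch;
the exact variable names are assumed):

    # a host is a transport server iff it lists any transports
    is_transport_server: "{{ (transports | default([])) | length > 0 }}"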
|
|
I previously had separate inventories for each environment: prod, transport and
staging, with each inventory having a single xmpp_server group.
I want to start adopting group_vars so that I can share common variables
between hosts, so I've moved all hosts into a common hosts.yaml file with
groups for each environment.
This means there is no longer an xmpp_server group, and all hosts are in a
single inventory. Adjust the playbook to account for this.
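Roughly, the combined inventory looks like this (sketch; group names assumed):

    # hosts.yaml
    all:
      children:
        prod:
          hosts:
            chat.fennell.dev:
            chat.koyo.haus:
        nonprod:
          hosts:
            chat.continuous.nonprod.fennell.dev:
            chat.continuous.nonprod.koyo.haus:

The play then targets hosts: all (or a specific environment group) rather than
the old xmpp_server group.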
|
|
I use the playbook to deploy to three different domains. Before this commit,
some instances were deployed to the root domain (e.g. example.org) and others
were deployed to a subdomain (e.g. chat.example.org), so that other
services/hosts could easily live at the root.
I would now like to enforce that all instances live under the chat. subdomain.
There is no real benefit to having this difference in deployments. Having more
consistency will make reasoning about the different instances easier and allow
me to delete some extra variables, and it will also let me deploy separate
services to the root domains in the future if needed.
|
|
When I first made this playbook, I was a little sceptical of -or-later
licenses. However, I've come around to the idea over time.
|
|
All hosts I previously used had unattended-upgrades already installed, but a
standard debian install doesn't have it installed by default. So, make sure it
is installed.
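A minimal task for this (sketch):

    - name: Ensure unattended-upgrades is installed
      ansible.builtin.apt:
        name: unattended-upgrades
        state: present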
|
|
Prosody falls back to a legacy DNS module and also logs warnings if lua-unbound
is not installed.
|
|
We do not need s2s modules or config for a single-user, transport-oriented
server.
Likewise, we do not need admin or abuse contacts if s2s is disabled. No
messages can escape, and it would be impossible to contact them regardless!
|
|
Invites are not needed on a single-user transport-only server. Therefore, place
this functionality behind a flag.
|
|
I am planning on deploying a new single-user server, without s2s connections or
other features, specifically for transports.
This necessitates splitting off some functionality behind a flag, so that it is
only enabled for non-transport ("standard") servers.
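In the prosody config template, that amounts to wrapping the relevant blocks
in a conditional, roughly like this (sketch; the option shown is just an
example of a standard-server-only setting):

    {% if not is_transport_server %}
    -- config that only applies to standard (non-transport) servers
    s2s_secure_auth = true
    {% endif %}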
|
|
I added python3-pexpect to the dependency list in
de867dadbcc3c69d97acf96bf3e86d11295eea39, to use the pexpect ansible module for
a reason that is lost to the sands of time. This module is no longer used, so
the dependency can be removed.
|
|
I found these variables a bit confusing after having to interact with
them again. It is useful to have some context now that I have forgotten all
about the DS record setup!
|
|
It is useful to jump to different variables using the {} keys in vim, and
the rest of the playbook has similar whitespace.
|
|
domain_with_ds is checked against the empty string when checking whether
we should define ds_subname.
When no parent_domain was found, we set domain_with_ds to None, which in
Ansible 10 (correctly) failed the domain_with_ds != "" check. However, in
Ansible 12, None now passes that check, meaning that Ansible tried to
evaluate ds_subname even when domain_with_ds was None,
resulting in a type conversion failure.
Therefore, make sure that domain_with_ds is always a string, even if
parent_domain is undefined, and use the empty string to represent this,
as expected in the playbook itself.
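The shape of the fix is roughly this (sketch; how domain_with_ds is actually
computed in the playbook will differ):

    # always a string; empty when no parent domain with a DS record exists
    - ansible.builtin.set_fact:
        domain_with_ds: "{{ parent_domain | default('') }}"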
|
|
Some services, such as munin, read the hostname from the system, and
don't allow "virtual host" configuration like prosody. For such
services, we want to make sure the hostname is set correctly.
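For example (sketch, assuming the inventory name is the desired hostname):

    - name: Set the system hostname
      ansible.builtin.hostname:
        name: "{{ inventory_hostname }}"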
|
|
I want ansible to take full control of managing /etc/hosts, hostname
etc. I think it is most convenient to disable cloud-init entirely, to
prevent contention between ansible and cloud-init.
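Disabling it can be as simple as creating the marker file that cloud-init
checks for on boot (sketch):

    - name: Disable cloud-init entirely
      ansible.builtin.file:
        path: /etc/cloud/cloud-init.disabled
        state: touch
        access_time: preserve
        modification_time: preserve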
|
|
db and database have been deprecated, and replaced with login_db.
|
|
This is now enforced by ansible-lint.
|
|
This was originally intended for motoristic, but is no longer needed by
any domain.
|
|
This was only ever enabled for testing purposes, and is no longer
needed.
|
|
I made a mistake in the original configuration - I tried to give each
virtual host a separate turnserver on its own subdomain. However, since
koyo.haus and fennell.dev (and likewise in nonprod) share a virtual
machine, they can only have one turnserver between them (in the
turnserver.conf, there can only be a single realm).
Therefore, always point to koyo.haus for the turnserver in each
environment.
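In other words, coturn has a single realm per instance, and every virtual host
on that machine has to share it (sketch; hostnames illustrative, and assuming
Prosody's mod_turn_external is used on the XMPP side):

    # turnserver.conf: only one realm per coturn instance
    realm=koyo.haus

    -- prosody.cfg.lua: both virtual hosts point at the same TURN host
    turn_external_host = "chat.koyo.haus"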
|
|
I used to have a dedicated server for cert renewals; now I just run it
from my laptop, with an increased cron frequency. This is simpler,
especially when there is a power cut, and I'll certainly use my laptop
every 30 days.
|
|
These steps were not idempotent, because there was no way to check if the
password was correct. So, they would run again each time.
The playbook gets run infrequently enough, and the task is simple enough, that
adding users manually is fine.
|
|
Backups are now handled outside of the playbook.
|
|
The main way the config varies from Debian's default is that we make sure to
reboot after each upgrade.
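Concretely, that is one option in the unattended-upgrades apt configuration
(sketch of the relevant line):

    // /etc/apt/apt.conf.d/50unattended-upgrades
    Unattended-Upgrade::Automatic-Reboot "true";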
|
|
This is useful for two reasons:
* To test clients that render roster groups provided by the server
* To evaluate whether it is worth enabling this flag in production
|
|
This is not always installed by default on all hosts. We encountered an issue
where this package was not installed, and it was causing the system time to
gradually drift.
|
|
These vary significantly from deployment to deployment, and running this
playbook previously caused issues on fennell.dev deployments, where I need to
be able to deploy certificates by other means.
|
|
This will primarily be used for motoristic.
|
|
Although this playbook originally installed certificates to the server, this
turned out to be a bad idea, because the playbook could in some circumstances
(if the acme project had already renewed the certs) have installed a different
certificate to the remote server.
By fully delegating responsibility to the acme server, this should prevent any
such issues, as well as potential DANE misconfigurations.
|
|
|
|
The AAAA record should be created by the libcloud bootstrap process instead, so
that the playbook can ssh using the hostname as normal.
|
|
The playbook initially deleted the public keys from root's authorized_keys
after copying them to admin, but this prevents the playbook from running the
"Ensure admin account is created" commands in subsequent runs. Therefore, we
shouldn't delete them.
In the long term, I would like to find a way to only attempt to run the root
commands if it's not possible to ssh as admin. This is because I don't like the
idea
of root having direct ssh access.
|
|
Initially, I used AWS Lightsail for deployment. However, I am now using Vultr
via libcloud, which does not create a user named "admin" by default. Therefore,
this commit aims to ensure that such an account is created, even on providers
that don't create it by default.
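A sketch of what such a task looks like (the group and shell are assumptions):

    - name: Ensure admin account is created
      ansible.builtin.user:
        name: admin
        groups: sudo
        append: true
        shell: /bin/bash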
|
|
I have moved DNS configuration for all of my servers to deSEC, thanks to its
easy-to-use REST interface. This allows me to configure DNS records as part of
the playbook, instead of having to add them manually for each new server I'd
like to create. The consequence of this is that the playbook now has a hard
dependency on deSEC.
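For illustration, creating a record through deSEC's REST API from Ansible
might look like this (sketch; the token variable, zone variable and record
values are assumptions):

    - name: Create A record for the new host via deSEC
      ansible.builtin.uri:
        url: "https://desec.io/api/v1/domains/{{ parent_domain }}/rrsets/"
        method: POST
        headers:
          Authorization: "Token {{ desec_token }}"
        body_format: json
        body:
          subname: chat
          type: A
          ttl: 3600
          records: ["192.0.2.1"]
        status_code: [201]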
|
|
This makes it easier to navigate through the playbook, and jump to the part
that you're interested in editing, using the { and } keys in vim.
|
|
I would like certificate renewal to be handled centrally across all of my
deployed services. Therefore, responsibility for certificate renewal no longer
belongs in this playbook.
|
|
I tried to create a fresh nonprod deployment today on
continuous.staging.nonprod.chat.fennell.dev. However, the first step failed
because the apt command could not find borgmatic.
The solution was to run apt update before running apt install. Unfortunately,
ansible's package module does not have an option for this. Therefore, although
I would have liked to stick with "package" (to keep it general and away from
the specifics of using "apt" as a package manager), I have switched back to
using the apt module so that the step can succeed without any manual
intervention on a fresh install.
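With the apt module, refreshing the cache is a single extra option (sketch):

    - name: Install borgmatic
      ansible.builtin.apt:
        name: borgmatic
        state: present
        update_cache: true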
|
|
|
|
This commit adds support for XEPs 0065 and 0363 - i.e. sending files from one
account to another.
|
|
This commit enables SOCKS5 Bytestreams, allowing users to send and receive
files.
|
|
Previously, the playbook would fail if it needed to install packages, as this
(in the case of apt) requires sudo.
|
|
This directory is created by a user command, not as part of the package
installation process. Therefore, it may not exist if the user has not yet
configured borgmatic on the host.
|
|
This commit uses the simpler, more standard validate feature of template
instead of triggering a handler. The feature is there - may as well use it!
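Roughly, the task becomes (sketch; the validate command is an assumption, any
command that syntax-checks the rendered Lua file would work):

    - name: Install prosody configuration
      ansible.builtin.template:
        src: prosody.cfg.lua.j2
        dest: /etc/prosody/prosody.cfg.lua
        validate: luac -p %s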
|
|
This commit adds borgmatic, to provide automated backups.
|
|
The playbook previously assigned the prosody config files to the root group.
With root as the owner and group, and permissions of 0640, prosody was not
able to read the files. This commit fixes this.
|
|
This commit ensures certificates are installed, via Let's Encrypt.
|
|
There is no sense reloading prosody if none of its configuration files have
changed. Therefore, this commit moves the reload to a handler that only gets
triggered in this situation.
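This is the standard notify/handler pattern (sketch):

    - name: Install prosody configuration
      ansible.builtin.template:
        src: prosody.cfg.lua.j2
        dest: /etc/prosody/prosody.cfg.lua
      notify: Reload prosody

    # handlers
    - name: Reload prosody
      ansible.builtin.service:
        name: prosody
        state: reloaded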
|
|
This commit uses the new per-host virtual_host variable to create the necessary
prosody host-specific cfg files.
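Something along these lines (sketch; template and destination names assumed):

    - name: Install host-specific prosody config
      ansible.builtin.template:
        src: virtualhost.cfg.lua.j2
        dest: "/etc/prosody/conf.d/{{ virtual_host }}.cfg.lua"
      notify: Reload prosody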
|
|
This commit adds a prosody configuration file that can be installed on the
remote hosts. This lets me make the configuration locally, deploy it to staging
environments, and then to prod, without having to directly login to the hosts.
|