|
While looking through the list of available prosody-modules, these
seemed useful.
|
|
While looking through the list of available prosody-modules, these two
seemed useful.
|
|
Thankfully, the servers I manage have not seen any spam; nevertheless,
I'd rather set up some kind of mitigation now, before it becomes a
problem.
|
|
Whoops! mod_component is not supposed to be loaded directly; instead, it
is loaded indirectly as a result of the relevant component definitions.
|
|
I took the opportunity to look through the module list and add some
extra ones that were missing before.
|
|
These are newly available in Trixie. I believe Monal will soon start
loudly warning if they are not used.
|
|
According to prosodyctl check, this module is no longer used or needed.
|
|
This is not available in prosody-modules
0.0~hg20250402.f315edc39f3d+dfsg-2.
|
|
I found these variables a bit confusing after having to interact with
them again. It is useful to have some context now that I have forgotten
all about the DS record setup!
|
|
It is useful to jump to different variables using the {} keys in vim, and
the rest of the playbook has similar whitespace.
|
|
domain_with_ds is checked against the empty string when checking whether
we should define ds_subname.
When no parent_domain was found, we set domain_with_ds to None, which
in Ansible 10 (correctly) failed the domain_with_ds != "" check.
However, in Ansible 12, None now passes that check, meaning that
Ansible tried to evaluate ds_subname even when domain_with_ds was None,
resulting in a type conversion failure.
Therefore, make sure that domain_with_ds is always a string, even if
parent_domain is undefined, and use the empty string to represent this,
as expected in the playbook itself.
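A minimal sketch of the kind of fix described (the variable names come from this message; the task layout and the second argument to the default filter are illustrative):

```yaml
# Coerce domain_with_ds to a string: passing true as the second
# argument makes default() also replace None with the fallback.
- set_fact:
    domain_with_ds: "{{ parent_domain | default('', true) }}"

# The existing guard then behaves the same on Ansible 10 and 12,
# skipping ds_subname entirely when there is no parent domain.
- set_fact:
    ds_subname: "{{ domain | regex_replace('\\.' ~ domain_with_ds ~ '$', '') }}"
  when: domain_with_ds != ""
```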
|
|
Some services, such as munin, read the hostname from the system, and
don't allow "virtual host" configuration like prosody. For such
services, we want to make sure the hostname is set correctly.
|
|
I want ansible to take full control of managing /etc/hosts, hostname
etc. I think it is most convenient to disable cloud-init entirely, to
prevent contention between ansible and cloud-init.
|
|
db and database have been deprecated, and replaced with login_db.
|
|
I don't need to specify the exact interpreter through ansible, as I can
do this from the host itself.
|
|
This is now enforced by ansible-lint.
|
|
There's no need to jump back to 2 GiB yet, but I was finding 10 MiB too
restrictive.
|
|
Debug logging was historically enabled in nonprod. This would let me
test interactions between the client and the server by checking exactly
what was sent and received.
However, this will soon no longer be needed, as prosody 13 supports
prosodyctl shell watch log, allowing me to "dip in" to debug logs
whenever needed.
|
|
This was originally intended for motoristic, but is no longer needed by
any domain.
|
|
This was only ever enabled for testing purposes, and is no longer
needed.
|
|
I made a mistake in the original configuration - I tried to give each
virtual host a separate turnserver on its own subdomain. However, since
koyo.haus and fennell.dev (and likewise in nonprod) share a virtual
machine, they can only have one turnserver between them (in the
turnserver.conf, there can only be a single realm).
Therefore, always point to koyo.haus for the turnserver in each
environment.
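The constraint described above comes from coturn reading exactly one realm per daemon from turnserver.conf, so two XMPP hosts sharing a VM must share it; for example:

```
# turnserver.conf: a single realm for the whole daemon,
# shared by every virtual host on this machine
realm=koyo.haus
```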
|
|
|
|
I used to have a dedicated server for cert renewals; now I just run it
from my laptop, with an increased cron frequency. This is simpler,
especially when there is a power cut, and I'll certainly use my laptop
every 30 days.
|
|
It's too time-consuming, especially when making multiple commits in one
go after having already tested those changes by manually running make
staging.
|
|
These steps were not idempotent, because there was no way to check if
the password was correct. So, they would run again each time.
The playbook gets run infrequently enough, and it is simple enough, to
add users manually.
|
|
|
|
This makes it easier to debug why a step is unexpectedly not idempotent.
|
|
Backups are now handled outside of the playbook.
|
|
This was quite generous, and if everyone used it at the same time, the
host would fall over!
|
|
|
|
|
|
The main way the config varies from Debian's default is that we make sure to
reboot after each upgrade.
|
|
This is useful for two reasons:
* To test clients that render roster groups provided by the server
* To evaluate whether it is worth enabling this flag in production
|
|
This lets us log each individual stanza from a server perspective, which can be
useful when debugging client behaviour.
|
|
This is to test how clients handle downloading large files.
|
|
This is not always installed by default on all hosts. We encountered an issue
where this package was not installed, and it was causing the system time to
gradually drift.
|
|
These vary significantly from deployment to deployment, and running this
playbook previously caused issues on fennell.dev deployments, where I need to
be able to deploy certificates by other means.
|
|
This is in order to debug an issue I was seeing with group chats previously. I
don't believe it actually had an impact, but I can't remember for sure now. I
should debug this at some point and remove if necessary.
|
|
This is so that I can test sending a relatively large APK in order to debug an
issue in Dino.
|
|
In the README section for acme account information, I had incorrectly referred
to the CAA records as TLSA records (which do not need this information at all).
This commit fixes that mistake.
|
|
This commit updates the README to include config lines that are being used as
of previous commits.
|
|
I am rolling out a Matrix bot that will auto-reply to contacts in bridged
conversations, encouraging people to reach out to me on XMPP.
The bot will send them an invite link, retrieved from this API.
|
|
This will primarily be used for motoristic.
|
|
Although this playbook originally installed certificates to the server, this
turned out to be a bad idea, because the playbook could in some circumstances
(if the acme project had already renewed the certs) have installed a different
certificate to the remote server.
By delegating responsibility to the acme server fully, this should prevent any
such issues, as well as potential DANE misconfigurations.
|
|
|
|
The naming scheme I'm using for prod and nonprod environments has changed,
so this commit also updates the documentation to match.
|
|
These references were out of date with what was needed from the playbook.
|
|
The AAAA record should be created by the libcloud bootstrap process instead, so
that the playbook can ssh using the hostname as normal.
|
|
The playbook initially deleted the public keys from root's authorized_keys
after copying them to admin, but this prevents the playbook from running the
"Ensure admin account is created" commands in subsequent runs. Therefore, we
shouldn't delete them.
In the long term, I would like to find a way to only attempt to run the root
commands if it's not possible to ssh as admin. This is as I don't like the idea
of root having direct ssh access.
|
|
Initially, I used AWS Lightsail for deployment. However, I am now using Vultr
via libcloud, which does not create a user named "admin" by default. Therefore,
this commit aims to ensure that such an account is created, even on providers
that don't create it by default.
|