| author | Matthew Fennell <matthew@fennell.dev> | 2026-01-02 00:39:00 +0000 |
|---|---|---|
| committer | Matthew Fennell <matthew@fennell.dev> | 2026-01-02 01:03:57 +0000 |
| commit | 9535fc83e22cc9624535c84c3e8ddfa52e44b6ab | |
| tree | 6faee9c54b6e6862f8be015b491d3ea241782352 | |
| parent | 51d48e2af6b4890fa9b58b5f2c036a6394f7e596 | |
Create script for Mythic Beasts DNS API requests
I am moving DNS provider from deSEC to Mythic Beasts. As part of this change, I
need to use the Mythic Beasts DNS API [1] in the playbook.
I want to reduce the number of API operations by grouping several records
together. To do this, I can use the "Identifying records to replace" method
from their DNS tutorial. [2] This provides a way to specify which existing
records should be replaced by the new records that you PUT to the endpoint.
To use it, you specify those records via a URL-encoded series of select
queries, combined into a disjunction of conjunctions like so:
?select=type%3DA%26host%3Dchat&select=type%3DAAAA
This gets split into two separate queries which are then decoded into:
type=A&host=chat
type=AAAA
Any records matching these queries are then replaced by whichever records are
specified in the PUT request.
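As a rough illustration (not how the Mythic Beasts server actually implements
it), the same round trip can be sketched with Python's urllib.parse, the module
the new script relies on; parse_qs here only shows how the repeated select
parameters split apart:

    import urllib.parse

    # Encode each conjunction as its own "select" parameter.
    selects = ["type=A&host=chat", "type=AAAA"]
    query = "&".join(urllib.parse.urlencode({"select": s}) for s in selects)
    print(query)  # select=type%3DA%26host%3Dchat&select=type%3DAAAA

    # Splitting the repeated "select" parameters recovers the original queries.
    print(urllib.parse.parse_qs(query)["select"])  # ['type=A&host=chat', 'type=AAAA']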
It's painful to write these query strings by hand, so write a script to
generate them automatically. Its output should then be pasted into the playbook
whenever the desired records change. If this happens often, we should make the
playbook call the script to get the values directly.
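For instance, for a single select such as host=chatENVSUFFIX&type=A, the
script's template_url helper produces a URL along these lines (a sketch of its
approach: encode first, then substitute the Ansible variables so the double
braces are never percent-encoded):

    import urllib.parse

    select = "host=chatENVSUFFIX&type=A"
    url = "https://api.mythic-beasts.com/dns/v2/zones/{{ domain }}/records?"
    url += urllib.parse.urlencode({"select": select})
    # Substitute the placeholder after encoding so "{{ env_suffix }}" keeps its
    # braces and space intact.
    url = url.replace("ENVSUFFIX", "{{ env_suffix }}")
    print(url)
    # https://api.mythic-beasts.com/dns/v2/zones/{{ domain }}/records?select=host%3Dchat{{ env_suffix }}%26type%3DA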
As an additional benefit, the script definitively states which records are
"owned" by the playbook. This is because the records specified in the script
are the ones that will be replaced each time the playbook is run.
Finally, since this adds Python to the playbook for the first time, add the
black linter to keep the code style in check.
[1] https://www.mythic-beasts.com/support/api/dnsv2
[2] https://www.mythic-beasts.com/support/api/dnsv2/tutorial
| -rw-r--r-- | .precious.toml | 7 |
|---|---|---|
| -rwxr-xr-x | scripts/generate-dns-http-queries.py | 63 |
2 files changed, 70 insertions, 0 deletions
diff --git a/.precious.toml b/.precious.toml
index 1b7bee2..c9965b9 100644
--- a/.precious.toml
+++ b/.precious.toml
@@ -10,6 +10,13 @@
 ok_exit_codes = [0]
 path_args = "none"
 type = "lint"
+[commands.black]
+cmd = ["black", "--quiet", "--check"]
+include = ["*.py"]
+invoke = "once"
+ok_exit_codes = 0
+type = "lint"
+
 [commands.gitlint]
 cmd = ["gitlint"]
 include = "*"
diff --git a/scripts/generate-dns-http-queries.py b/scripts/generate-dns-http-queries.py
new file mode 100755
index 0000000..6ff1ba9
--- /dev/null
+++ b/scripts/generate-dns-http-queries.py
@@ -0,0 +1,63 @@
+#!/usr/bin/env python3
+# SPDX-FileCopyrightText: 2026 Matthew Fennell <matthew@fennell.dev>
+#
+# SPDX-License-Identifier: AGPL-3.0-or-later
+
+import urllib.parse
+
+
+# We ultimately want the variables to be specified in the double-brace format
+# recognised by ansible. However, we don't want those double braces or spaces
+# to be encoded, so search for the placeholders ENVSUFFIX and DANEHASH from the
+# original selects to replace them after encoding is complete.
+def template_url(selects):
+    url = "&".join(
+        map(lambda select: urllib.parse.urlencode({"select": select}), selects)
+    )
+    url = "https://api.mythic-beasts.com/dns/v2/zones/{{ domain }}/records?" + url
+    url = url.replace("ENVSUFFIX", "{{ env_suffix }}")
+    url = url.replace("DANEHASH", "{{ dane_hash }}")
+    return url
+
+
+# These select queries specify the records that will be replaced whenever we
+# PUT new records to the endpoint.
+# For most records, we only specify the host and type. For instance,
+# host=chat&type=A will select any A record on the chat subdomain for
+# replacement.
+# For TLSA records, we additionally specify the data (which for these records
+# is the hash of the cert).
+# This is crucial to rollover new certs properly: when requesting a new cert
+# with a different TLSA hash, we have to first add the new TLSA record, wait
+# for propagation, only then update the cert, and finally delete the old
+# record. While waiting for propagation, both the old and new TLSA records need
+# to be present.
+# Therefore, specifying the data prevents us from replacing the TLSA hash of
+# the existing cert if we run the playbook while waiting for propagation. It
+# simply ensures that a TLSA record with this hash exists, and leaves any
+# others alone for manual cleanup.
+common_selects = [
+    "host=chatENVSUFFIX&type=A",
+    "host=chatENVSUFFIX&type=AAAA",
+    "host=conferenceENVSUFFIX&type=CNAME",
+    "host=uploadENVSUFFIX&type=CNAME",
+    "host=_xmpp-client._tcpENVSUFFIX&type=SRV",
+    "host=_xmpps-client._tcpENVSUFFIX&type=SRV",
+    "host=_5222._tcp.chatENVSUFFIX&type=TLSA&data=DANEHASH",
+    "host=_5223._tcp.chatENVSUFFIX&type=TLSA&data=DANEHASH",
+]
+
+non_transport_selects = [
+    "host=_xmpp-server._tcpENVSUFFIX&type=SRV",
+    "host=_xmpps-server._tcpENVSUFFIX&type=SRV",
+    "host=_xmpps-server._tcp.conferenceENVSUFFIX&type=SRV",
+    "host=_xmpps-server._tcp.uploadENVSUFFIX&type=SRV",
+    "host=_5269._tcp.chatENVSUFFIX&type=TLSA&data=DANEHASH",
+    "host=_5270._tcp.chatENVSUFFIX&type=TLSA&data=DANEHASH",
+]
+
+common_url = template_url(common_selects)
+non_transport_url = template_url(non_transport_selects)
+
+print(common_url)
+print(non_transport_url)
