
Encrypt all the things

xkcd #538: Security
Went into the Blogger settings and enabled TLS on my custom-domain Blogger blog, so it is now finally https://blog.surgut.co.uk. However, I do use FeedBurner to syndicate it to the planets, and I am not sure that gives end-to-end TLS connections, so I will look into removing FeedBurner from between my blog and the Ubuntu/Debian planets. My experience with changing feeds in the planets is that I end up spamming everyone. I wonder if I should make a new tag, and add both the old and the new feeds to the planet config, to avoid re-spamming old posts.

Next up, went into the Gandi LiveDNS platform and enabled DNSSEC on my domain. It propagated quite quickly, and I believe my domain is now correctly signed with DNSSEC. Next up, I guess, is fixing DNSSEC with captive portals. What we really want on "wifi"-like devices is to first connect to wifi without setting it as the default route, perform the captive portal check, potentially with reduced DNS server capabilities (i.e. no EDNS, no DNSSEC, etc.), and route only the captive portal traffic through it to authenticate. Once past the captive portal, test and upgrade connectivity to have DNSSEC on. In the cloud, and on wired connections, I'd expect DNSSEC to just work, and if it does we should be enforcing DNSSEC validation by default.
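
To sanity-check the signing from any machine, here is a minimal sketch using dig and delv (delv ships with the bind9 tooling; the resolver address is just an example):

    # the "ad" flag in the response header means the resolver validated the answer
    dig +dnssec @1.1.1.1 blog.surgut.co.uk A

    # delv performs DNSSEC validation itself and prints "; fully validated" on success
    delv blog.surgut.co.uk A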

So I'll start enforcing DNSSEC on my laptop, I think, and will start reporting issues to all of the UK banks that dare not to have DNSSEC. If I managed to do it on my own domain, so can they!
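
A minimal sketch of enforcing it with systemd-resolved, assuming that is the active resolver on the laptop (DNSSEC=yes fails hard on non-validating paths, which is rather the point):

    sudo mkdir -p /etc/systemd/resolved.conf.d
    printf '[Resolve]\nDNSSEC=yes\n' | sudo tee /etc/systemd/resolved.conf.d/dnssec.conf
    sudo systemctl restart systemd-resolved
    resolvectl query surgut.co.uk   # look for "authenticated: yes"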

Now I need to publish CAA records to indicate that my sites are supposed to be protected by Let's Encrypt certificates only, to prevent anybody else from issuing certificates for my sites and clients from trusting them.
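
The records themselves are one-liners in the zone; illustrative entries below (the iodef mailbox is hypothetical), plus a dig query to confirm what actually gets published:

    # zone entries, illustrative:
    #   surgut.co.uk.  3600  IN  CAA  0 issue "letsencrypt.org"
    #   surgut.co.uk.  3600  IN  CAA  0 iodef "mailto:hostmaster@surgut.co.uk"

    # confirm what resolvers see:
    dig +short CAA surgut.co.uk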

I think I want to publish SSHFP records for the servers I care about, such that I could potentially use those to trust the host key fingerprints. Also, at the FOSDEM getdns talk it was mentioned that openssh might not be verifying these by default and/or might need additional settings pointing at the trust anchor. I will need to dig into that to see if I need to change anything; it did sound odd.
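
From what I can tell, OpenSSH does default VerifyHostKeyDNS to "no", and it only trusts SSHFP answers when the local resolver sets the AD bit, i.e. it leans on the system doing DNSSEC validation. A sketch with a hypothetical hostname:

    # on the server: print SSHFP records for the host keys, ready for the zone file
    ssh-keygen -r server1.surgut.co.uk

    # on the client: opt in to checking host keys against DNS
    ssh -o VerifyHostKeyDNS=yes server1.surgut.co.uk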

Generated 4096-bit RSA subkeys for my main key. Previously I was using 2048-bit RSA keys, but since I got a new YubiKey that supports 4096-bit keys I am upgrading to that. I use the YubiKey's OpenPGP applet for my signing, encryption, and authentication subkeys, meaning for ssh too. For that I had to remember to use `gpg --with-keygrip -k` to add the right keygrip to the `~/.gnupg/sshcontrol` file, to make the new subkey available in the ssh agent. It also seems that the order of keygrips in the sshcontrol file matters. Updating the new ssh key in all the places is not fun; I think I have done GitHub, Salsa, and Launchpad so far, but I still need to push the key onto many of the installed systems.
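
For future reference, roughly the dance required (the keygrip value below is a placeholder):

    # find the keygrip of the new authentication subkey
    gpg --with-keygrip -k

    # expose it to gpg-agent's ssh-agent emulation; the order of lines matters
    echo 0123456789ABCDEF0123456789ABCDEF01234567 >> ~/.gnupg/sshcontrol

    # refresh the agent and confirm the key is now offered
    gpg-connect-agent updatestartuptty /bye
    ssh-add -L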

Tried to use FIDO2 passwordless login for Windows 10, only to find out that my Dell XPS appears to be incompatible with it, as it seems my laptop does not have a TPM. Oh well, I guess I need to upgrade my laptop to one with a TPM2 chip, such that I can have self-unlocking encrypted drives, an OTP token displayed on boot, and the like, as was presented in this FOSDEM talk.
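
(For what it's worth, a quick way to check whether the kernel sees a TPM at all; TPM 2.0 devices typically also expose /dev/tpmrm0:)

    # no output here suggests no TPM is visible to the kernel
    ls -l /dev/tpm* /sys/class/tpm/ 2>/dev/null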

Now that cryptsetup 2.1.0 is out and in Debian and Ubuntu, I guess it's time to reinstall and re-encrypt my laptop to migrate from LUKS1 to LUKS2. It has a bigger header, so obviously it is so much better!
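
As I understand it, cryptsetup can also convert the header in place, which would avoid the full reinstall; a sketch with a hypothetical device path, to be run with the volume closed (e.g. from a live USB) and only after backing up the header:

    # keep a copy of the LUKS1 header somewhere safe first
    sudo cryptsetup luksHeaderBackup /dev/nvme0n1p3 --header-backup-file luks1-header.img

    # convert the header in place from LUKS1 to LUKS2
    sudo cryptsetup convert --type luks2 /dev/nvme0n1p3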

Changing phone soon, so I will need to regenerate all of the OTP tokens. *sigh* Does anyone back up all the QR codes for them, to quickly re-enroll all the things?

BTW, I gave a talk about systemd-resolved at FOSDEM. People didn't like that we do not enable or enforce DNS-over-TLS, DNS-over-HTTPS, or DNSSEC by default. At least people seemed happy that we do not leak queries, but then unhappy again about caching.
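
Anyone who wants DNS-over-TLS today can opt in per system; a sketch assuming a recent systemd-resolved (the upstream resolver is just an example, and opportunistic mode falls back to plain DNS):

    sudo mkdir -p /etc/systemd/resolved.conf.d
    printf '[Resolve]\nDNS=1.1.1.1\nDNSOverTLS=opportunistic\n' | sudo tee /etc/systemd/resolved.conf.d/dot.conf
    sudo systemctl restart systemd-resolved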

I feel safe.

P.S. Funny how xkcd uses 2048-bit RSA, not 4096-bit.

Comments

  1. Howdy!

    I back up all of my QR codes as plain-text files in a dedicated folder on my encrypted drive, which contains ba(sh) scripts for generating my OTPs without the need for my physical keys, i.e.:

    #!/bin/bash
    ## Code for: github.com
    oathtool --totp -b "supersecrettextcode"

    Thus, I have the codes ready for use or reassignment anywhere I am, by rsync'ing them between my devices as needed.

  2. I just back up the app with Titanium.

  3. Regarding "Does anyone backup all the QR codes for them, to quickly re-enroll all the things?" Do you know Bitwarden or Latch?


