
Blogging about Let's Encrypt over HTTP

So the Let's Encrypt thing has started. It can do challenges over HTTP (serving text files) and over DNS (serving TXT records).

My "infrastructure" is fairly modest. I've seen too many of my email accounts get swamped with spam, and/or the companies behind them go bust, so I got my own domain name, surgut.co.uk. However, I don't have the money or time to run my own services, so I signed up for a Google Apps account for my domain to do email, blogging, etc.

Later I got the libnih.la domain to host the API docs for that library. In the world of .io startups, I thought it was an incredibly funny domain name.

But I also have a VPS to host static files on an ad-hoc basis, run a VPN, and run an IRC bouncer. My IRC bouncer is ZNC, and I used a self-signed certificate there, thus I had to "ignore" SSL errors in all of my IRC clients... which kind of defeats the purpose somewhat.

I run my VPS on i386 (to save on memory usage) with Ubuntu 14.04 LTS, managed with Landscape. My little services there are just configured by hand (not using Juju).

My first attempt at getting on the Let's Encrypt bandwagon was to use the official client, by fetching the debs from xenial and installing them on the LTS. But that package/script is huge, supports things I don't need, and wants dependencies I don't have on 14.04 LTS.

However, I found letsencrypt.sh, a minimalist implementation written in shell on top of openssl and curl. It was trivial to get the dependencies for and to configure: I specified a domains text file, and that was pretty much it. Well, I also added symlinks in my NGINX config to serve the challenges directory, a hook to deploy the certificate to ZNC and restart it, and a cronjob to renew the certs (see the sketch below). Thinking about it, this is not complete, as I'm not sure whether NGINX will pick up a certificate change or whether it needs to be reloaded. I shall test that once my cert expires.
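A rough sketch of how those pieces fit together, with illustrative paths: the hook argument order follows the letsencrypt.sh convention of passing the handler name first (check your version), and the ZNC location, user and restart method are assumptions from my setup.

#!/bin/sh
# hook.sh -- called by letsencrypt.sh; we only care about the deploy_cert stage
set -e
if [ "$1" = "deploy_cert" ]; then
    KEYFILE="$3"; FULLCHAINFILE="$5"
    # ZNC expects the private key and the certificate chain concatenated into one PEM
    cat "$KEYFILE" "$FULLCHAINFILE" > /home/znc/.znc/znc.pem
    chown znc:znc /home/znc/.znc/znc.pem
    # restart ZNC however it runs on your box; here assuming an init script
    service znc restart
    # reload NGINX so it also picks up the renewed certificate
    service nginx reload
fi

The cronjob just runs the client regularly; letsencrypt.sh only actually renews certificates that are close to expiry:

0 4 * * 1 /opt/letsencrypt.sh/letsencrypt.sh --cron >> /var/log/letsencrypt-renew.log 2>&1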

Tweaking the NGINX config was easy. Then I thought, let's see how good it actually is. I pointed https://www.ssllabs.com/ssltest/ at my https://x4d.surgut.co.uk/ and got a "C" rating: no forward secrecy, vulnerable to downgrade attacks, BEAST, POODLE and the like. I went googling for all sorts of NGINX configs and eventually found a "best known practices" site, https://cipherli.st/. However, even that only got me to a "B" rating, as it still includes Diffie-Hellman bits that ssltest caps at a "B". So I disabled those too. I've ended up with this gibberish:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+AESGCM:AES256+EECDH";
ssl_ecdh_curve secp384r1; # Requires nginx >= 1.1.0
ssl_session_cache shared:SSL:10m;
#ssl_session_tickets off; # Requires nginx >= 1.5.9
ssl_stapling on; # Requires nginx >= 1.3.7
ssl_stapling_verify on; # Requires nginx >= 1.3.7
#resolver $DNS-IP-1 $DNS-IP-2 valid=300s;
#resolver_timeout 5s;
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;

I call it gibberish because, IMHO, I shouldn't need to specify any of the above... Anyway, I got my A+ rating.
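To apply config changes like the above and sanity-check them from the command line, something along these lines works (service name and URL as per my setup):

nginx -t                       # make sure the new configuration parses
service nginx reload           # pick up the TLS settings without dropping connections
# confirm the HSTS header is actually being sent
curl -sI https://x4d.surgut.co.uk/ | grep -i strict-transport-security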

However, security is only as strong as its weakest link. I'm still serving things over HTTP; maybe I should disable that. I have also yet to check how "good" the TLS on my ZNC is, or whether I need to further harden my sshd configuration (a quick check is sketched below).
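A quick way to eyeball both from the command line (port 6697 for ZNC is an assumption; substitute whatever your bouncer listens on):

# what protocol and cipher does ZNC actually negotiate?
openssl s_client -connect x4d.surgut.co.uk:6697 </dev/null 2>/dev/null | grep -E 'Protocol|Cipher'
# what does sshd currently offer? (needs root; dumps the effective configuration)
sshd -T | grep -E '^(ciphers|kexalgorithms|macs)'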

This has filled a big gap in my infrastructure. However, a few things remain served over HTTP only.

http://blog.surgut.co.uk is hosted by Alphabet's / Google's Blogger service, which I would want to be served over HTTPS.

http://libnih.la is hosted by a GitHub Inc. service, which I would want to be served over HTTPS.

I do not want to manage those services, or to deal with the load, spammers, DDoS attacks, etc. But I am happy to sign CSRs with Let's Encrypt and deploy the certs over to those companies, or to allow them to obtain certificates from Let's Encrypt on my behalf. I use gandi.net as my domain name provider, which offers an RPC API to manage domains and their zone files, so e.g. I could also generate an API token for those companies to respond to a dns-01 challenge from Let's Encrypt (verifying such a record is sketched below).
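For reference, a dns-01 challenge ultimately boils down to publishing a TXT record under the _acme-challenge label, so whatever the API token is used to create can be checked with a simple query once the record is in place (domain and value here are purely illustrative):

# Let's Encrypt looks for the challenge token in a TXT record like this
dig +short TXT _acme-challenge.blog.surgut.co.uk
# expected: a single quoted base64url string, e.g. "gfj9Xq...Rg85nM"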

One step at a time I guess.

The postings on this site are my own and don't necessarily represent any past/present/future employers' positions, strategies, or opinions.

