
avahi + apt-cacher-ng + sbuild ?!

A laptop joins the WiFi network and decides it wants to build some packages using sbuild. On the same network there is an apt-cacher-ng instance running, with most of the packages already cached. Meanwhile, there is a project called squid-deb-proxy which provides yet another apt proxy, but with the added bonus of avahi discovery.

Can we throw all of this stuff together and make it work? Well, let's try =)

On Ubuntu:

$ sudo apt-get install apt-cacher-ng squid-deb-proxy-client

On Debian:

squid-deb-proxy-client is not packaged, so just fetch and install it by hand. It's really just one Python script and one conffile. Alternatively, I have published the Python script and the config as part of this post's gist.
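
A minimal by-hand installation could look like this (the file names and target paths are assumptions modelled on what the Ubuntu package ships; take the actual files from the gist linked at the end of this post):

$ sudo install -D -m 0755 apt-avahi-discover /usr/share/squid-deb-proxy-client/apt-avahi-discover
$ sudo install -m 0644 30autoproxy /etc/apt/apt.conf.d/30autoproxy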

All of the avahi magic really is just publishing a service file, letting the Python script from the squid-deb-proxy-client package find it, and adding an apt.conf.d snippet which calls the above-mentioned script to generate the correct proxy line.
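
For reference, that apt.conf.d snippet is essentially a one-liner pointing apt at the discovery script, something along these lines (the snippet's file name, 30autoproxy, is an assumption here, and the ProxyAutoDetect option name may differ between apt versions):

// /etc/apt/apt.conf.d/30autoproxy
Acquire::http::ProxyAutoDetect "/usr/share/squid-deb-proxy-client/apt-avahi-discover";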

But we are running apt-cacher-ng! So, on the server where apt-cacher-ng is running, drop in this file:

/etc/avahi/services/apt-cacher-ng.service

<?xml version="1.0" standalone='no'?>
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
 <name replace-wildcards="yes">apt-cacher-ng proxy on %h</name>
 <service protocol="ipv4">
  <type>_apt_proxy._tcp</type>
  <port>3142</port>
 </service>
</service-group>
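
To check that the service is actually being announced, browse for the _apt_proxy._tcp type from any machine on the network (avahi-browse ships in the avahi-utils package); it should resolve to your apt-cacher-ng host on port 3142:

$ avahi-browse -tr _apt_proxy._tcp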

Now, on any host where you want automatic apt proxy discovery, simply install squid-deb-proxy-client and your apt-cacher-ng will be auto-discovered.
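
You can also verify the discovery end to end by running the script by hand; if it finds an advertised proxy, it prints the proxy URL for apt to use:

$ /usr/share/squid-deb-proxy-client/apt-avahi-discover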

What about sbuild? Well, you don't really want to install avahi-daemon and start it inside the chroot, so we can cheat by creating an executable hook:

/etc/schroot/setup.d/51aptproxy
#!/bin/sh
# Discover an avahi-advertised apt proxy on the host and point apt
# inside the chroot at it.
set -e

. "$SETUP_DATA_DIR/common-data"
. "$SETUP_DATA_DIR/common-functions"
. "$SETUP_DATA_DIR/common-config"

APT_AVAHI_DISCOVER=/usr/share/squid-deb-proxy-client/apt-avahi-discover

if [ "$STAGE" = "setup-start" ] || [ "$STAGE" = "setup-recover" ]; then
    # prefer a copy of apt-avahi-discover on $PATH, if there is one
    if type apt-avahi-discover >/dev/null 2>&1; then
        APT_AVAHI_DISCOVER=apt-avahi-discover
    fi
    # don't abort the whole chroot setup if discovery fails
    APT_PROXY=$($APT_AVAHI_DISCOVER) || true
    if [ -n "$APT_PROXY" ]; then
        info "Setting apt-proxy to avahi proxy at ${APT_PROXY}"
        printf 'Acquire::http { Proxy "%s"; };\n' "${APT_PROXY}" \
            > "${CHROOT_PATH}/etc/apt/apt.conf.d/30schrootautoproxy"
    fi
fi
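
Remember to make the hook executable, otherwise schroot will not run it:

$ sudo chmod +x /etc/schroot/setup.d/51aptproxy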

Tadah. Now I'm using my apt-cacher-ng whenever I'm on my home WiFi =)

Please advertise your apt-cache proxies!

Now to integrate this properly:
  • apt-cacher-ng should auto-create the avahi service file in its init script (a sketch of this follows below)
  • apt-get or some other package should ship the apt-avahi-discover script (there is also a C implementation available)
  • apt-get or some other package should ship the apt.conf.d snippet
  • sbuild / pbuilder / cowbuilder / mk-sbuild / [p|cow]builder-dist should have hooks to retrieve the host's apt proxy settings and use them in the chroots
For full sources see lp:squid-deb-proxy and my gist with all the other config snippets.
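
As a rough sketch of the first point above, the init script could publish the service while the daemon runs and withdraw it on stop; avahi-daemon watches /etc/avahi/services and picks up changes automatically. The helper names here are hypothetical:

# Hypothetical helpers for the apt-cacher-ng init script: call
# publish_avahi_service from "start" and unpublish_avahi_service
# from "stop". Port 3142 is apt-cacher-ng's default.
AVAHI_SERVICE=/etc/avahi/services/apt-cacher-ng.service

publish_avahi_service() {
    cat > "$AVAHI_SERVICE" <<'EOF'
<?xml version="1.0" standalone='no'?>
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
 <name replace-wildcards="yes">apt-cacher-ng proxy on %h</name>
 <service protocol="ipv4">
  <type>_apt_proxy._tcp</type>
  <port>3142</port>
 </service>
</service-group>
EOF
}

unpublish_avahi_service() {
    rm -f "$AVAHI_SERVICE"
}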

Comments

  1. I do something fairly similar:


    ~/Canonical/Mir/mir/build ⮀ cat /etc/schroot/chroot.d/sbuild-raring-amd64
    [raring-amd64]
    description=raring-amd64
    groups=sbuild,root
    root-groups=sbuild,root
    # Uncomment these lines to allow members of these groups to access
    # the -source chroots directly (useful for automated updates, etc).
    #source-root-users=sbuild,root
    #source-root-groups=sbuild,root
    apt.proxy.detect=/usr/share/squid-deb-proxy-client/apt-avahi-discover
    type=btrfs-snapshot
    command-prefix=eatmydata
    btrfs-source-subvolume=/var/lib/schroot/chroots/raring-amd64
    btrfs-snapshot-directory=/var/lib/schroot/snapshots

    [raring-local-amd64]
    description=raring-local-amd64
    groups=sbuild,root
    root-groups=sbuild,root
    source-clone=false
    # Uncomment these lines to allow members of these groups to access
    # the -source chroots directly (useful for automated updates, etc).
    #source-root-users=sbuild,root
    #source-root-groups=sbuild,root
    apt.proxy.detect=/usr/share/squid-deb-proxy-client/apt-avahi-discover
    local.repository=/home/chris/Builds
    type=btrfs-snapshot
    command-prefix=eatmydata
    btrfs-source-subvolume=/var/lib/schroot/chroots/raring-amd64
    btrfs-snapshot-directory=/var/lib/schroot/snapshots

    And:

    ~/Canonical/Mir/mir/build ⮀ cat /etc/schroot/setup.d/60apt-proxy
    #!/bin/sh
    # /etc/schroot/setup.d/60apt-proxy

    . "$SETUP_DATA_DIR/common-data"
    . "$SETUP_DATA_DIR/common-functions"
    . "$SETUP_DATA_DIR/common-config"

    if [ "$1" = "setup-start" ] || [ "$1" = "setup-recover" ]; then
        if [ -n "$APT_PROXY_DETECT" ] && [ -x "$APT_PROXY_DETECT" ]; then
            PROXY=$($APT_PROXY_DETECT)
            info "Setting apt proxy to ${PROXY} (autodetected)"
            echo "Acquire::http::Proxy \"${PROXY}\";" > \
                "${CHROOT_PATH}/etc/apt/apt.conf.d/99sbuild_proxy"
        fi
    fi

