
My Debian Activities: Multiarch & upstart on kFreeBSD hacking

Looks like there is a recent trend of publicizing one's Debian activities, so I thought I should join in =) Not so sure the MDA acronym would work for these types of posts, though...

Multiarch

I have been working on multiarching additional libraries, in particular the Boost libraries. At the moment multiarched boost1.53 libraries are uploaded to experimental, and I am still working out the quirks with those. At the same time I have submitted a patch to multiarch libicu-dev, which has been accepted by the maintainer and is currently waiting in the NEW queue. This brings us closer to multiarching all of the Boost libraries. But as was pointed out at the Multiarch BoF at DebConf13, one doesn't have to wait for dependencies (be it libs or lib-devs) to be multiarched before multiarching one's own library. If you do it right (put all, or at least the arch-specific, headers in the multiarch location, place the libraries in the multiarch location, and split utilities into a separate package), you can multiarch your libraries ahead of time. Maybe it's time for dh-make to default to debhelper 9, and to multiarch locations for all libraries?
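For illustration, here is a minimal sketch of what that can look like for a hypothetical libfoo package (package names and file lists are made up; the essential bits are the Multi-Arch: same marker and the triplet-qualified install paths):

    # debian/control (excerpt), hypothetical libfoo packaging
    Package: libfoo1
    Architecture: any
    Multi-Arch: same
    Pre-Depends: ${misc:Pre-Depends}

    # debian/libfoo1.install: the shared library in the multiarch location
    usr/lib/*/libfoo.so.*

    # debian/libfoo-dev.install: arch-specific header under the triplet,
    # generic headers stay in /usr/include
    usr/lib/*/libfoo.so
    usr/include/*/foo/config.h
    usr/include/foo

    # debian/foo-utils.install: utilities split into a separate package
    usr/bin/foo-tool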

Boost also has libraries which depend on MPI, and at the moment those are not multiarched. I'm not sure whether there are additional considerations there, given the support for alternative MPI implementations in Debian.

Boost also depends on libpython. At the moment it ships libboost_python-py2.7.so & libboost_python-py3.3.so, with a libboost_python.so symlink pointing to libboost_python-py2.7.so. I do wonder whether that's a sensible approach, or whether the boost_python library should simply be shipped in arch-specific & Python-implementation-specific paths, e.g. /usr/lib/pythonX.Y/config-X.Y$(abi_tag)-$(DEB_HOST_MULTIARCH)/; that way we'd also be able to support the Python debug interpreter in boost_python. I guess I should seek the expert opinion of other multiarch wizards.
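To make the idea concrete, under such a scheme the installed layout might look something like this (hypothetical, instantiating the template above for amd64):

    /usr/lib/python2.7/config-2.7-x86_64-linux-gnu/libboost_python.so
    /usr/lib/python3.3/config-3.3m-x86_64-linux-gnu/libboost_python.so
    /usr/lib/python3.3/config-3.3dm-x86_64-linux-gnu/libboost_python.so  (debug interpreter)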

In addition to Boost, I've now started looking at qmake and how it handles multiarch. At the moment, it doesn't handle it much: all paths are statically copied around and hard-coded in each module qmake provides. I have been abusing qmake's $$system to call into dpkg-architecture, to choose the native or cross compiler and to adjust settings and paths to the Qt libraries. There are a couple of bugs to fix, but I should be able to propose a "debian-multiarch" qmake spec soon, such that it calls dpkg-architecture and properly adjusts compilers and locations for the DEB_HOST architecture. qmake, by the way, has its quirks. For example, all variable assignments seem to default to "by value" rather than "by reference" (i.e. upon assignment all variables on the right-hand side get expanded and the literal string is stored, instead of storing $(prefix)/lib/$(arch) and evaluating it later). Hence my current qmake spec is less elegant than I hoped.
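As a taste of the approach, a fragment of such a spec could look roughly like this (a sketch only, not the actual proposed spec; variable and path choices are illustrative):

    # query dpkg-architecture once, at spec-load time
    DEB_HOST_MULTIARCH = $$system(dpkg-architecture -qDEB_HOST_MULTIARCH)
    DEB_HOST_GNU_TYPE = $$system(dpkg-architecture -qDEB_HOST_GNU_TYPE)

    # select the matching (cross-)toolchain for DEB_HOST
    QMAKE_CC = $${DEB_HOST_GNU_TYPE}-gcc
    QMAKE_CXX = $${DEB_HOST_GNU_TYPE}-g++

    # point library lookups at the multiarch location; because qmake
    # assigns "by value", the expanded triplet gets baked in right here
    QMAKE_LIBDIR = /usr/lib/$${DEB_HOST_MULTIARCH}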

upstart on kFreeBSD

FreeBSD 9.2 and up implement the wait6 system call, and with it waitid is also exposed. At the moment there are some differences between glibc's and FreeBSD's waitid behaviour. I am not well versed in glibc, FreeBSD and POSIX, but the bug at hand & the POSIX reference lead me to believe that POSIX is slightly ambiguous about it.
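For reference, the POSIX-style usage in question looks like this (a minimal standalone sketch; the interesting part is the siginfo_t-based interface whose corner cases differ between implementations):

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0)
            _exit(42); /* child */

        siginfo_t info;
        /* wait for the child to terminate, filling in a siginfo_t
         * instead of the classic int status word */
        if (waitid(P_PID, pid, &info, WEXITED) == -1) {
            perror("waitid");
            return 1;
        }
        printf("child %d exited with status %d\n",
               (int)info.si_pid, info.si_status);
        return 0;
    }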

Why is waitid support interesting? Well, the kFreeBSD libc maintainer in Debian is working on exposing it with/after (e)glibc 2.18 is in Debian, which will remove one more blocker from porting upstart to kFreeBSD. To that end, I have in the meantime set up FreeBSD & kFreeBSD virtual machines and got libnih to compile on both (sans waitid on kFreeBSD).

One of upstart's optional features is to use inotify to spot & reload configuration files. At the moment I haven't ported that functionality to use native kevent/kqueue (EVFILT_VNODE); instead I have cheated and packaged for Debian the Google Summer of Code 2011 project libinotify-kevent, which provides an inotify-compatible API implemented with kevent. That is kind of the reverse of the libkqueue and/or libevent strategy. It would be interesting to see kernels and/or those libraries unify/standardise on common APIs. Unfortunately, it still seems like using the native API of each kernel is best, to avoid dependencies and to leverage all features and semantics. Nonetheless I did package it for now.

To progress further with actually porting upstart, I'd like to get my hands on (e)glibc 2.18 compiled for kFreeBSD with the waitid patches. I got pointers to the patches, but I haven't yet succeeded at compiling (e)glibc with them applied. So no upstart booting a kFreeBSD VM yet, but rather work in progress. It's quite fun. It's been years since I last touched a FreeBSD box, and it's my first time using kFreeBSD. The semantics, differences and benefits of each do shine, and it's a promising platform.
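Coming back to the inotify port for a moment, this is the API that libinotify-kevent emulates on top of kevent; a minimal sketch watching upstart's job directory (path and event mask chosen for illustration):

    #include <sys/inotify.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        /* on Linux this is a kernel facility; on (k)FreeBSD,
         * libinotify-kevent provides the same calls via kevent */
        int fd = inotify_init();
        if (fd == -1) {
            perror("inotify_init");
            return 1;
        }
        if (inotify_add_watch(fd, "/etc/init",
                              IN_CREATE | IN_MODIFY | IN_DELETE) == -1) {
            perror("inotify_add_watch");
            return 1;
        }
        /* blocks until something changes in the watched directory */
        ssize_t len = read(fd, buf, sizeof(buf));
        if (len > 0) {
            struct inotify_event *ev = (struct inotify_event *)buf;
            printf("event mask 0x%x on wd %d\n", (unsigned)ev->mask, ev->wd);
        }
        close(fd);
        return 0;
    }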

Multiarch on kFreeBSD?

Given the excellent multiarch implementation in Debian and the sophisticated Linux emulation layer in the FreeBSD kernel, I was half expecting the following to work:
  • on kFreeBSD, enable a Linux i386/amd64 repository
  • apt-get install hello:i386
  • hello
But unfortunately this confused kFreeBSD: the kernel didn't find the ELF branding tags it expected, and the linker didn't seem to load the right libraries any more =( I'm not sure if this issue has been raised before, but I think it would be awesome if the FreeBSD Linux emulation layer worked transparently on a Debian/kFreeBSD multiarch system.
