
cross-compile go code, including cgo

Granted, cross-compiling a new language/stack is never going to be pretty, but it didn't turn out that bad.

A few weeks back, I was told that go code which uses cgo (that is, code utilising C API calls into shared libraries exporting a C interface) cannot be cross-compiled. Well, if it's just calling out to a C compiler, it should totally be easy to cross-compile, since so much of our platform is.

So there we go: first I picked a moderately small project which only does a couple of cgo calls, and checked that it compiles natively:

$ sudo apt-get build-dep ubuntu-push-client
$ go get launchpad.net/ubuntu-push/...
$ cd $GOPATH/src/launchpad.net/ubuntu-push/
$ go build ubuntu-push-client.go
Well, when your gcc is native, all is easy.
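
For anyone who wants a minimal cgo smoke test instead of a full project, something along these lines exercises the same code path (hello.go is a hypothetical stand-alone file, not part of ubuntu-push):

// hello.go - a minimal cgo smoke test (hypothetical example)
package main

/*
#include <stdio.h>

static void greet(void) {
    puts("hello from C");
}
*/
import "C"

func main() {
    // this call goes through cgo, so it requires a working C toolchain
    C.greet()
}

A plain `go build hello.go` should succeed natively; the same file is a handy canary for the cross-compilation steps below.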

I didn't want to pollute my system, so I quickly created a chroot with go, the build-dependencies for the armhf architecture, and a cross-compiler:

# Get a chroot with the build-dependencies installed; I am basing this on top of a click chroot,
# but one should be able to use any chroot which is armhf multiarch enabled.
$ sudo click chroot -aarmhf -fubuntu-sdk-14.04 -s utopic create
$ sudo click chroot -aarmhf -fubuntu-sdk-14.04 -s utopic maint apt-get install golang-go golang-go-linux-arm golang-go-dbus-dev golang-go-xdg-dev golang-gocheck-dev golang-gosqlite-dev golang-uuid-dev libgcrypt11-dev:armhf libglib2.0-dev:armhf libwhoopsie-dev:armhf libubuntuoneauth-2.0-dev:armhf libdbus-1-dev:armhf libnih-dbus-dev:armhf libsqlite3-dev:armhf crossbuild-essential-armhf
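Before going further, it's worth sanity-checking that the cross-toolchain is actually present in the chroot; something along these lines should print the cross-compiler's version (exact output depends on the toolchain packages installed):

$ click chroot -aarmhf -fubuntu-sdk-14.04 -s utopic run arm-linux-gnueabihf-gcc --version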
After that, the tricky bit was convincing the go tool to cross-compile:

$ click chroot -aarmhf -fubuntu-sdk-14.04 -s utopic run CGO_ENABLED=1 GOARCH=arm GOARM=7 PKG_CONFIG_LIBDIR=/usr/lib/arm-linux-gnueabihf/pkgconfig:/usr/lib/pkgconfig:/usr/share/pkgconfig GOPATH=/usr/share/gocode/:~/go CC=arm-linux-gnueabihf-gcc go build -ldflags '-extld=arm-linux-gnueabihf-gcc' ubuntu-push-client.go
Ignoring the click chroot wrapper:
  • CGO_ENABLED=1 - by default cgo is disabled when cross-compiling, but it really shouldn't be, since the cross-compiler names are standard $(GNU_TRIPLET)-prefixed tools
  • GOARCH=arm - set the target arch
  • GOARM=7 - set ABI level
  • PKG_CONFIG_LIBDIR - the ugly beast that tells pkg-config where to search for .pc files. With autoconf one simply sets the PKG_CONFIG environment variable to point at a cross pkg-config, $(GNU_TRIPLET)-pkg-config, but the go tool doesn't support that. I've raised a merge proposal to get it added: https://codereview.appspot.com/104960043/
  • Next I just set GOPATH to where my packages are, and CC to the cross-compiler to use
  • The last piece of the puzzle was to pass "-ldflags '-extld=$CC'", because the linker tool (5l) doesn't use the CC environment variable and simply defaults to gcc. I'll raise a merge proposal for this as well. All of these settings are collected in the wrapper sketch below.
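To avoid retyping that incantation, the whole thing can be wrapped in a small shell script and invoked through the click chroot run wrapper shown above. This is just a sketch under my own assumptions (the script name and GOPATH layout are my choices); adjust to taste:

#!/bin/sh
# go-build-armhf.sh - hypothetical wrapper around the cross-build incantation
# usage: go-build-armhf.sh <file-or-package> [extra go build args...]
set -e

export CGO_ENABLED=1   # cgo is disabled by default when cross-compiling
export GOARCH=arm      # target architecture
export GOARM=7         # target ABI level
export CC=arm-linux-gnueabihf-gcc
# point pkg-config at the armhf .pc files
export PKG_CONFIG_LIBDIR=/usr/lib/arm-linux-gnueabihf/pkgconfig:/usr/lib/pkgconfig:/usr/share/pkgconfig
export GOPATH=/usr/share/gocode/:"$HOME"/go

# the linker (5l) ignores $CC, so the external linker has to be passed explicitly
exec go build -ldflags "-extld=$CC" "$@"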
Overall, that's it. Given that all of the above can be refactored into standard variables (e.g. use a $GNU_TRIPLET prefix, and offer to override it), I see no reason why cross-compilation in go with cgo cannot eventually become a simple
GOARCH=arm go build 
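Until then, it is at least easy to confirm that the build produced an ARM binary; file should report something along these lines (exact output varies with the toolchain):

$ file ubuntu-push-client
ubuntu-push-client: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked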
