Borg, Kopia, Restic: going further

In this final article, I'm going to talk about the little extras that can't be included in my comparison as a measurable criterion.

I'm going to talk briefly about graphical interfaces, because for some users, this can make a big difference. Application sizes are given for the macOS version.
Of the three programs, Restic has the richest, yet least mature, ecosystem of graphical user interfaces. There are several of them, but they are all independent projects, more or less in the alpha or beta stage, and they don't all offer the same level of functionality.


Borg, Kopia, Restic: restoration and maintenance

[This is the English translation by DeepL]
[Version originale en français]


From the previous article, we know how the backup process works in Borg, Kopia and Restic. Now we'll take a look at restoration and maintenance.
Restoration is a stage in the life of a backup which, when all goes well, is never used. So it's understandable that the benchmark for this stage isn't overly interesting or representative. Instrumentation was added to the original backup script, but restoration was not performed every day.
The test consisted of restoring a 39 MB zip archive from the oldest snapshot available in the archives, as well as restoring a Library/Preferences directory from the oldest of the last 10 snapshots. The size of the directory varies from one backup to the next, but is generally around 290 MB for 951 directories and ~15K files.


Running Splunk forwarder on a FreeBSD 14 host

A few months ago I discovered that Splunk did not bother updating its forwarder to support FreeBSD 14. It's a real PITA for many users, including myself. After asking around for support about that problem and seeing Splunk quietly ignore the voice of its users, I decided to try running the Linux version on FreeBSD.

Executive summary: it works great on both FreeBSD 14 and 13, but with some limitations.

A user like me has a few options:

  1. (re)check whether you really need a local log forwarder (for everything that is not handled by syslog); if you don't, just ditch the Splunk forwarder and tune syslogd to send logs directly to a Splunk indexer
  2. find an alternative solution that suits you: very hard if you have a full Splunk ecosystem or if, like me, you really are addicted to Splunk
  3. run the Linux version on FreeBSD: needs some skills but works great so far

Obviously, I'm fine with the last one.


You will run a proprietary Linux binary in a totally unsupported environment: you are on your own, and it can break at any time, either because of FreeBSD or because of Splunk.

You will run the Splunk forwarder inside a chroot environment: your log files will have to be available inside the chroot, or Splunk won't be able to read them. Also, no ACL residing on your FreeBSD filesystem will be visible to the Linux chroot, so you must not rely on ACLs to grant Splunk access to your log files. This last statement is only partially true: you can rely on FreeBSD ACLs, but it might require some tweaks on the user/group side.
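Those user/group tweaks boil down to numeric ids: the host only sees the uid stored in an ACL entry, so if you go that route, create the chroot's splunkfwd user with a fixed uid and reference that uid from the host side. A minimal sketch, assuming the devuan01 chroot used in this guide and a hypothetical uid of 1501 on a ZFS (NFSv4 ACL) filesystem:

```shell
# create the Splunk user inside the chroot with a fixed, known uid
chroot /compat/devuan01 useradd -m -u 1501 splunkfwd

# on the FreeBSD host, grant that numeric uid read access via an NFSv4 ACL
# (hypothetical log file; adjust to whatever Splunk must read)
setfacl -m u:1501:rx::allow /var/log/some.log
```

The -u flag simply pins the uid; the ACL entry then matches whatever name the Linux side maps to that id.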

How to

Below you'll find a quick-and-dirty step-by-step guide that worked for me. Not everything will be detailed or explained, and YMMV.
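One prerequisite on the FreeBSD side: the Linux binary compatibility layer must be loaded, and debootstrap must be available. A minimal sketch, assuming the stock linux.ko and the sysutils/debootstrap package:

```shell
# enable the Linux compatibility layer at boot and load it now
sysrc linux_enable="YES"
service linux start

# install debootstrap, used below to populate the chroot
pkg install -y debootstrap
```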

First step is to install a Linux environment. You must activate the Linux compatibility feature. I’ve used both Debian and Devuan successfully. Here is what I’ve done for Devuan:

# create a dedicated ZFS dataset for the chroot
zfs create -o mountpoint=/compat/devuan01 sas/compat_devuan01
# fetch the "ceres" debootstrap script and install it under the "daedalus" name
curl -OL
mv ceres /usr/local/share/debootstrap/scripts/daedalus
# fetch the Devuan archive keyring and make it visible where debootstrap looks for it
curl -OL
mv devuan-archive-keyring.gpg /usr/local/share/keyrings/
ln -s /usr/local/share/keyrings /usr/share/keyrings
debootstrap daedalus /compat/devuan01

This last step should fail; it seems this is to be expected. Following that same guide:

chroot /compat/devuan01 /bin/bash
dpkg --force-depends -i /var/cache/apt/archives/*.deb
echo "APT::Cache-Start 251658240;" > /etc/apt/apt.conf.d/00chroot

Back on the host, add what you need to /etc/fstab:

# Device        Mountpoint              FStype          Options                      Dump    Pass#
devfs           /compat/devuan01/dev      devfs           rw,late                      0       0
tmpfs           /compat/devuan01/dev/shm  tmpfs           rw,late,size=1g,mode=1777    0       0
fdescfs         /compat/devuan01/dev/fd   fdescfs         rw,late,linrdlnk             0       0
linprocfs       /compat/devuan01/proc     linprocfs       rw,late                      0       0
linsysfs        /compat/devuan01/sys      linsysfs        rw,late                      0       0

and mount all, then finish install:

mount -al
chroot /compat/devuan01 /bin/bash
apt update
apt install openrc

Make your log files available inside the chroot:

mkdir -p /compat/devuan01/var/hostnamedlog
mount_nullfs /var/named/var/log /compat/devuan01/var/hostnamedlog
mkdir -p /compat/devuan01/var/hostlog
mount_nullfs /var/log /compat/devuan01/var/hostlog

Note: /var/named/var/log and /var/log are ZFS filesystems. You’ll have to make the nullfs mounts permanent by adding them in /etc/fstab.
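Making those nullfs mounts permanent can be sketched as two extra /etc/fstab lines, in the same format as the compat mounts above (paths assume the devuan01 chroot used throughout this guide):

```
# Device              Mountpoint                          FStype   Options   Dump   Pass#
/var/named/var/log    /compat/devuan01/var/hostnamedlog   nullfs   rw,late   0      0
/var/log              /compat/devuan01/var/hostlog        nullfs   rw,late   0      0
```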

Now you can install the Splunk forwarder:

chroot /compat/devuan01 /bin/bash
ln -sf /usr/share/zoneinfo/Europe/Paris /etc/localtime
useradd -m splunkfwd
export SPLUNK_HOME="/opt/splunkforwarder"
echo /opt/splunkforwarder/lib >/etc/ 
apt install curl
dpkg -i splunkforwarder_package_name.deb
/opt/splunkforwarder/bin/splunk enable boot-start -systemd-managed 0 -user splunkfwd

Note: splunk enable boot-start -systemd-managed 0 activates the Splunk service as an old-school init.d service. systemd is not available in the context of a Linux chroot on FreeBSD.

Now from the host, grab your config files and copy them in your Linux chroot:

cp /opt/splunkforwarder/etc/system/local/{inputs,limits,outputs,props,transforms}.conf /compat/devuan01/opt/splunkforwarder/etc/system/local/

Then edit /compat/devuan01/opt/splunkforwarder/etc/system/local/inputs.conf accordingly: in my case that means replacing /var/log with /var/hostlog and /var/named/var/log with /var/hostnamedlog.
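That substitution can be scripted. A small sketch with sed, working on a throwaway sample file here (on a real host, point it at the inputs.conf path above); note that the longer /var/named/var/log pattern must be rewritten before the shorter /var/log one, otherwise the first would never match:

```shell
# throwaway sample standing in for inputs.conf (hypothetical monitor stanzas)
printf '%s\n' \
  '[monitor:///var/named/var/log/named.log]' \
  '[monitor:///var/log/messages]' > inputs.conf

# rewrite the longer prefix first so /var/log does not also match inside /var/named/var/log
sed -e 's|/var/named/var/log|/var/hostnamedlog|g' \
    -e 's|/var/log|/var/hostlog|g' inputs.conf > inputs.conf.new \
  && mv inputs.conf.new inputs.conf

cat inputs.conf
```

On FreeBSD you could use sed -i '' instead of the temp-file dance; the portable form above behaves the same.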

Go back into your Devuan chroot and start Splunk:

chroot /compat/devuan01 /bin/bash
service splunk start

Startup script

It's best if your Linux Splunk starts automatically when your FreeBSD host boots. That can be achieved with a quick modification of the native Splunk rc script for FreeBSD (/etc/rc.d/splunk). Here is what I'm using:


#!/bin/sh

# PROVIDE: splunkd
# KEYWORD: shutdown

# /etc/rc.d/splunk
# init script for Splunk,
# generated by 'splunk enable boot-start' and modified to run inside the chroot.

. /etc/rc.subr

name="splunk"
rcvar="splunk_enable"

eval "${rcvar}=\${${rcvar}:-'NO'}"

start_cmd="splunk_start"
stop_cmd="splunk_stop"
restart_cmd="splunk_restart"
status_cmd="splunk_status"

splunk_start() {
	chroot /compat/devuan01 "${splunk_home:-/opt/splunkforwarder}/bin/splunk" start --no-prompt --answer-yes "$@"
}

splunk_stop() {
	chroot /compat/devuan01 "${splunk_home:-/opt/splunkforwarder}/bin/splunk" stop "$@"
}

splunk_restart() {
	chroot /compat/devuan01 "${splunk_home:-/opt/splunkforwarder}/bin/splunk" restart "$@"
}

splunk_status() {
	chroot /compat/devuan01 "${splunk_home:-/opt/splunkforwarder}/bin/splunk" status "$@"
}

load_rc_config $name
run_rc_command "$@"

Borg, Kopia, Restic: backup and resource utilization

[This is the English translation by DeepL]
[Version originale en français]

In this article, I'm going to take a closer look at backup-related metrics, in particular those that are easy to measure: backup execution time and network transfer volumes. CPU consumption is not easy to measure on the test platform and, in my context, is of little importance. Measuring I/O on storage could have been interesting, but as the backup destination disk is shared with other uses, it wasn't a metric that could be collected during my tests.
