Minimalistic NAS with SSHFS

A few days ago I installed Nextcloudpi on my Raspberry Pi 4B. It worked fine, but it struck me that downloading and installing the software took what felt like a quarter of an hour. All I actually wanted was to keep a few files in one central place.

That made me look around the net for alternatives. Installing Openmediavault was a similar experience: a lot of software was downloaded, and in the end it was all about managing users and storage.

With OMV, for example, I could provide NFS shares. But I can do that myself, can't I? No sooner said than done. NFS didn't convince me, though: by default it has no encryption, and adding it slows things down. Inside the house I don't need encryption either, but it would be nice to simply use the same technology over the Internet that I use at home. That doesn't work out of the box with NFS or Samba.

Finally, I came across sshfs, a file system that uses SSH as its underlying transport. From what I read, it should play in a similar speed league to unencrypted NFS. So I just tried it out.

How can I configure a NAS with minimal software effort? Well, an SSH server ships with every Linux distribution. You only have to turn it on.
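On a systemd-based distribution, turning it on is usually a one-liner. The unit name is an assumption here, since it differs between distributions:

```shell
# Enable and start the OpenSSH server right away.
# The unit is called "ssh" on Debian/Ubuntu and "sshd" on
# Arch/Fedora; adjust the name accordingly.
sudo systemctl enable --now sshd
```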

On the accessing client it still has to be installed, like this on Debian or Arch Linux:

apt install sshfs      # Debian/Ubuntu

pacman -S sshfs        # Arch Linux

Measure Performance

I’ve now done some testing with a benchmark script by Michał Turecki, which measures how fast two devices can exchange data over SSH with different ciphers, i.e. encryption algorithms. Everything plays a role here: what hardware support client and server have, what the network path looks like, the maximum transmission unit (MTU) set on the network interface, whether WLAN, LAN or PowerLAN is in use, and so on. For the script to work properly, public-key authentication must be possible on the opposite SSH server, and we should use the ssh-agent first to preload our private key. On the client, that is typically:

eval "$(ssh-agent -s)"
ssh-add

Now our key is in memory and we are not asked for the passphrase over and over, which would distort our measurements. So here is the script; please replace user@sshserver:

for i in $(ssh -Q cipher); do dd if=/dev/zero bs=1M count=100 2> /dev/null \
  | ssh -c "$i" user@sshserver "(time -p cat) > /dev/null" 2>&1 \
  | grep real | awk '{print "'$i': "100 / $2" MB/s" }'; done
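To see what the final awk stage computes: time -p prints the elapsed wall-clock seconds on a line beginning with "real", and the script divides the 100 MB pushed through the pipe by that number. A minimal sketch with a made-up timing line:

```shell
# Simulate the "time -p" output for a transfer that took 24 seconds.
# awk divides 100 (MB) by the seconds found in the second field.
echo "real 24.00" | grep real | awk '{print 100 / $2 " MB/s"}'
# prints: 4.16667 MB/s
```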

I made the following measurements with it:


First, the rather disappointing route from my notebook via WLAN to the router, from there over PowerLAN across my ancient 1960s power lines down to the basement, and into the Raspberry Pi 4B:

aes128-ctr: 4 MB/s
aes192-ctr: 4.16667 MB/s
aes256-ctr: 4.34783 MB/s
aes128-gcm@openssh.com: 4.16667 MB/s
aes256-gcm@openssh.com: 4.16667 MB/s
chacha20-poly1305@openssh.com: 4.34783 MB/s

With the Raspberry Pi 3B in the cellar it was lower by another third! That's really bad, even with the 4B. The next step is to go down to the basement and connect the notebook via LAN directly to the switch built into the PowerLAN adapter:

aes128-ctr: 50 MB/s
aes192-ctr: 50 MB/s
aes256-ctr: 50 MB/s
aes128-gcm@openssh.com: 33.3333 MB/s
aes256-gcm@openssh.com: 33.3333 MB/s
chacha20-poly1305@openssh.com: 50 MB/s

Boom, that looks different. So the PowerLAN is the bottleneck.

Now I'm back upstairs and have connected my notebook via WLAN to my second Raspberry Pi 3B in the living room, which normally only hosts my Pleroma instance:

aes128-ctr: 9.09091 MB/s
aes192-ctr: 9.09091 MB/s
aes256-ctr: 10 MB/s
aes128-gcm@openssh.com: 10 MB/s
aes256-gcm@openssh.com: 10 MB/s
chacha20-poly1305@openssh.com: 10 MB/s

The same again over a LAN cable; this time both the Raspberry Pi 3B and my notebook are plugged directly into the router:

aes128-ctr: 12.5 MB/s
aes192-ctr: 12.5 MB/s
aes256-ctr: 12.5 MB/s
aes128-gcm@openssh.com: 12.5 MB/s
aes256-gcm@openssh.com: 12.5 MB/s
chacha20-poly1305@openssh.com: 12.5 MB/s

Well, the Raspberry Pi 3B just isn't all that fast. Then I prefer the one in the cellar.

Conclusion

So the bottleneck is definitely the PowerLAN over the long, old wiring here in the house. Using LAN instead of WLAN also increased throughput a little. Switching off compression with

-o compression=no

had no influence on the throughput.

In general, the ChaCha20 cipher seems to work well for the combination of ARM and x64; AES128 obviously brings no advantage.

Use

To mount the remote directory, I put together the following small script. As mentioned above, load the private key into the ssh-agent first:


opts="-o auto_cache -o idmap=user \
-o cache_timeout=115200 -o attr_timeout=115200 -o entry_timeout=1200 \
-o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3"

sshfs user@sshserver: ~/Cloud $opts

The auto_cache option causes the cached data in the client to be invalidated as soon as the file size or the modification time changes on the server side.

With idmap=user, all files appear with our local user's ownership and permissions, even though on the server they belong to a user with a different user ID.

The cache options ensure that files, once loaded into the local RAM cache, are not evicted again too quickly.

The last option takes care of reconnecting if no response has arrived from the server for 3x15 seconds.
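For completeness: the share is detached again with the usual FUSE unmount command. On some systems the binary is called fusermount3 instead:

```shell
# Unmount the SSHFS share mounted at ~/Cloud
fusermount -u ~/Cloud
```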