QNAP compatible encrypted disks


I’ve previously written about encryption on the QNAP TS-431P NAS and basic cryptsetup usage. Since then, encryption standards and defaults have changed, and it is now easy to create an external encrypted disk which cannot be mounted by a QNAP NAS. The following shows how to work around the issues with cipher and ext4 journaling settings.

The first issue has to do with the default cipher algorithms on Ubuntu and QNAP. As of Ubuntu 16.10, the default cipher is Advanced Encryption Standard (AES) in xts-plain64 mode with a SHA256 hash. The default and supported encryption on the QNAP NAS is also AES, using “128-bit block size, with key sizes of 128, 192 or 256 bits”. However, the supported mode is cbc-essiv:sha256 with hash spec SHA1, as with older Ubuntu and Debian distributions. When trying to decrypt the drive on the NAS, you might see errors like “Failed to setup dm-crypt key mapping for device /dev/sdc1. Check that kernel supports aes-xts-plain64 cipher (check syslog for more info)” and in /var/log/storage_lib.log an error like “crypt: IV mechanism required”.

It should be noted that this is most likely not an issue with volumes created by the NAS itself on its internal drives, unless you start moving drives from one NAS box to another, which is probably not recommended in the first place.

To see the supported ciphers, both on a normal GNU/Linux distribution and the QNAP, use the following commands:

cat /proc/crypto
cryptsetup --help

Also, to see the cipher currently used on a LUKS formatted volume, use the luksDump command:

cryptsetup luksDump /dev/sdX1
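On a LUKS1 volume, the relevant fields in the luksDump output are the cipher name, cipher mode and hash spec. A minimal sketch for filtering them out, here run against sample text mimicking typical output (on a real disk, pipe `cryptsetup luksDump /dev/sdX1` into the same grep):

```shell
# Hedged sketch: the sample below mimics typical LUKS1 luksDump output;
# on a real volume, use: cryptsetup luksDump /dev/sdX1 | grep -E 'Cipher name|Cipher mode|Hash spec'
luksdump_sample='Version:        1
Cipher name:    aes
Cipher mode:    xts-plain64
Hash spec:      sha256'
echo "$luksdump_sample" | grep -E 'Cipher name|Cipher mode|Hash spec'
```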


If you are starting from scratch, the incompatibility is easy to work around. Follow the instructions in the cryptsetup basics article, but add the following options for the cipher and hash function to the luksFormat command. Note that this will format the partition and erase all data on it.

cryptsetup luksFormat --cipher aes-cbc-essiv:sha256 --hash sha1 /dev/sdX1

You might also consider ext3 over ext4, since the former seems better supported by the QNAP NAS at the time of writing. See below for further details.
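Putting the pieces together, the from-scratch workflow might look like the function below. This is a hedged sketch: the function name, the extdisk mapping name and the /dev/sdc1 example are illustrative assumptions, and luksFormat is destructive.

```shell
# Hedged sketch: create a QNAP-compatible encrypted external disk from scratch.
# WARNING: luksFormat erases all data on the partition.
setup_qnap_compatible_disk() {
    dev="$1"  # e.g. /dev/sdc1 (placeholder)
    cryptsetup luksFormat --cipher aes-cbc-essiv:sha256 --hash sha1 "$dev"
    cryptsetup luksOpen "$dev" extdisk
    mkfs.ext3 /dev/mapper/extdisk  # ext3 seems better supported by the NAS
    cryptsetup luksClose extdisk
}
```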

Changing the cipher

If, on the other hand, you discover the incompatibility a bit too late, and have already filled the external disk with a lot of content, you're not out of luck. You'll just have to decide which way you'd like to spend your time: You can transfer it all over to another disk, reformat, and then transfer back. That will take a few hours, and a bit of work. Or, you can change the encryption cipher on the existing volume, using the cryptsetup-reencrypt tool. However, you'll probably have to wait multiple days while the whole disk is re-encrypted. On a 2 TB external disk over USB 2.0, it took about 35 hours to complete.

cryptsetup-reencrypt --cipher aes-cbc-essiv:sha256 --hash sha1 --key-file /tmp/keyfile --key-slot 0 /dev/sdX1

Notice that the command uses the same cipher and hash arguments as above. However, it adds arguments for a key file to unlock the volume, and the key slot that file is linked to. This is necessary to avoid being prompted for the password of each and every key slot. Of course, if you have only added a single password-based key slot, these arguments can be skipped, and you'll just have to type that password once.
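The key file in this case simply holds the passphrase for key slot 0. A hedged sketch of preparing and cleaning it up (the passphrase string and path are placeholders):

```shell
# Hedged sketch: store the slot-0 passphrase in a temporary key file, so
# cryptsetup-reencrypt does not prompt for every key slot.
# printf '%s' avoids a trailing newline, matching the interactive passphrase.
printf '%s' 'slot-0-passphrase' > /tmp/keyfile   # placeholder passphrase
chmod 600 /tmp/keyfile
# ... run the cryptsetup-reencrypt command above, then remove the key file:
shred -u /tmp/keyfile
```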

ext4 journaling compatibility

Once the encrypted volume can be opened, there might still be hurdles. The default settings for the ext4 journaling might also not be compatible with the QNAP NAS. At this point, I have to admit I lost interest in researching exactly what the cause was, and fired off multiple changes at once. The error when using the mount command was “mount: wrong fs type, bad option, bad superblock on /dev/mapper/sdc1, missing codepage or other error”.

The default feature set on the ext4 partition created under Ubuntu 16.10 was:

dumpe2fs /dev/mapper/sdc1
Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent 64bit flex_bg sparse_super large_file huge_file dir_nlink extra_isize metadata_csum

The following commands removed a few of the features, ran a forced file system check, and then converted the volume from 64-bit to 32-bit:

tune2fs -O ^huge_file /dev/mapper/sdc1
tune2fs -O ^dir_nlink /dev/mapper/sdc1
tune2fs -O ^extra_isize /dev/mapper/sdc1
tune2fs -O ^metadata_csum /dev/mapper/sdc1
e2fsck -f /dev/mapper/sdc1
tune2fs -O ^64bit /dev/mapper/sdc1
resize2fs -s /dev/mapper/sdc1

In the end, the following features remained, and the volume mounted.

Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file uninit_bg
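To verify which features remain before retrying the mount on the NAS, a small helper like this can be used (the function name and device path are illustrative):

```shell
# Hedged sketch: print the ext feature list from the superblock only (-h),
# without walking the whole file system.
check_ext_features() {
    dev="$1"  # e.g. /dev/mapper/sdc1 (placeholder)
    dumpe2fs -h "$dev" 2>/dev/null | grep -i 'features'
}
```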


Linux and disk fragmentation


A wind of nostalgia blew past, and for some reason I remembered rainy nights in the early 90s, waiting for Norton Speed Disk to defragment a 30 MB FAT16 drive. But what happened to the defrag tools? Well, on Windows it seems they are all alive and well, with Windows 7 apparently doing automatic daily defrag in the background. In other words, on modern NTFS file systems it is still considered necessary. What I’d like to see benchmarks on is how much of a difference it makes. Is it really worth it?

On most Linux file systems the story is different. They typically do not require defrag, since they don’t suffer from fragmentation in the first place. In fact, ext based systems will intentionally scatter files so there is room for them to grow without splitting up. For a great and easy to understand explanation, see the OneAndOneIs2 blog.
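To see this in practice on an ext based system, per-file and per-tree fragmentation can be inspected with the e2fsprogs tools; a hedged sketch (the function name and paths are placeholders):

```shell
# Hedged sketch: inspect fragmentation on ext4 with e2fsprogs tools.
show_fragmentation() {
    path="$1"  # a file or directory on an ext file system
    filefrag "$path"       # number of extents for a single file
    # e4defrag -c "$path"  # fragmentation score for a whole tree (may need root)
}
```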

For a comparison between file systems, see the Ubuntu forums, and Wikipedia for notes on defrag approaches and tools.

As for staring at those fancy looking progress and status screens of the defrag tools, it seems it’s a thing of the past across all OSes. It was a nice way to kill time; a bit like watching the washing machine tumble the clothes, I guess. Well, there’s always Bittorrent chunks. They actually look a bit similar when only part of the torrent is downloaded.


Choosing an SSD


With most technology, I choose to be a late adopter. Letting other people do the first rounds of QA has saved me loads of money. Waiting for the prices of new gadgets to drop to reasonable levels has saved even more. So now that “everybody” has gotten an SSD, I’m thinking it’s time to look into it.

I expect to use the drive as a boot drive for Fedora, so it should excel at the random read/write tests. I have an older motherboard which only supports SATA 3.0 Gb/s, so the high-end SSDs are not interesting at this point. Finally, I’m running a fan-less, water-cooled system with a Silverstone Nightjar fanless PSU, and have therefore also opted for the quiet WD Green drives (5400–7200 RPM). It means switching to an SSD will be a very significant improvement, while also removing the last noise from the back-scatter of disk-bound OS work.

In Anandtech’s review from November 2010, the Corsair Force drives are on top. Furthermore, he stresses the SandForce controllers as “the sensible choice” for OS and applications. At 180 Euros, the F120 is a bit pricey, while the F40 and F60 are almost the same at 98 and 105 Euros respectively. Although the F60 was not included in Anandtech’s review, it seems like a safe bet. 60 GB should also be plenty of space for the OS, swap, and basic user files (documents, e-mail, but not images or video).

As for compatibility, the Fedora 14 documentation mentions that “ext4 is the only fully-supported file system that supports TRIM”. Furthermore, to enable the TRIM command (which is disabled by default), the drive should be mounted with the discard option. Finally, the docs state that the swap partition will use TRIM by default. In other words, everything is ready to go.
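A hedged sketch of what such a mount entry in /etc/fstab might look like (the UUID is a placeholder; relatime is the Fedora default, and discard enables TRIM):

```
# /etc/fstab entry for an SSD root partition (UUID is a placeholder)
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  defaults,relatime,discard  1 1
```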

Robert Penz goes into detail to bust some of the myths around SSDs. He concludes that on a normal user system, you don’t need to take special consideration when switching from spinning to solid state drives. Only his advice to use “noatime” seems incorrect, as challenged in this thread: “noatime is not necessary. Fedora defaults to relatime, which is a better choice: it reduces disk access almost as much as noatime, but preserves enough atime info for practical purposes”.