Archive for the “linux” Category

Upgrading Debian Wheezy 7 to Jessie 8


Upgrading from Debian 7 to 8 is reasonably straightforward if you follow the official instructions. These shorter summaries are also useful references.

Very briefly then: first, make sure you have a backup.
dpkg --get-selections "*" > dpkg_selections.txt
tar zcvf upgrade_backup.tar.gz /etc /var/lib/dpkg /var/lib/apt/extended_states /etc/mysql/my.cnf /etc/fuse.conf /etc/ssh/ssh_config

Update /etc/apt/sources.list, replacing all occurrences of wheezy with jessie.
sed -i 's/wheezy/jessie/g' /etc/apt/sources.list

If VirtualBox is installed, update to the new key:
wget -q -O - http://download.virtualbox.org/virtualbox/debian/oracle_vbox_2016.asc | sudo apt-key add -

Then comes the upgrade dance, with a few prompts, warnings, questions.

apt-get update
apt-get upgrade
apt-get dist-upgrade

After the upgrade, it is recommended to purge unused packages:
apt-get purge $(dpkg -l | awk '/^rc/ { print $2 }')
apt-get autoremove

It is also recommended to install the linux-image-* metapackage, e.g. for the 64-bit x86 (amd64) architecture:
apt-get install linux-image-amd64

Finally, cross your fingers and reboot.

Add-on development for Kodi


On the heels of the QNAP NAS setup notes, here’s a fun integration with my home automation system for living room lights.

The idea is to send the same commands from the Kodi app as the custom Android app does to the Arduino controlled relays. Before the movie starts, the lights go off. I’ll skip the details of that code, but point to a few useful pages to get started. It’s simple.

The Kodi Add-on documentation is good. To get started, you need at least two files: the addon.xml configuration, and your Python script, e.g. myaddon.py. These have to be in a directory of the format script.name (more in the linked documentation), zipped into a ZIP file which does not use compression, as seen below. This zip file can then be copied to the NAS, and installed from Kodi.
zip -0 -r myaddon.zip script.myaddon

One gotcha is that the addon.xml file cannot contain a final newline; at least some people have reported that causing an install error.
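A quick way to check for (and strip) such a trailing newline, sketched with standard tools (the truncate step assumes GNU coreutils):

```shell
# Print the last byte of addon.xml in hex; "0a" means a trailing newline
last=$(tail -c1 addon.xml | od -An -tx1 | tr -d ' \n')

# Strip a single trailing newline, if present
if [ "$last" = "0a" ]; then
    truncate -s -1 addon.xml
fi
```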

For an easy way to get started, look at the Hello World add-on example, as well as its source code. It doesn’t get easier than that.


Securing a Postfix mail server – TLS transport encryption


I previously discussed SPF and DKIM setup for the Postfix mail server. Here are some notes on TLS transport encryption. (Although maybe those articles should have come in the opposite order.)

Using a self-signed certificate (which should be fine for small-scale usage), setup is rather easy and straightforward. Creating the keys and certificates boils down to these instructions, copied from here. (Similar instructions here).

openssl genrsa -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -days 1024 -out rootCA.pem
openssl genrsa -out device.key 2048
openssl req -new -key device.key -out device.csr
openssl x509 -req -in device.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out device.crt -days 500

Modifying /etc/postfix/main.cf, you might end up with something like this, assuming you’ve copied the keys as indicated by the linked article.
smtp_use_tls = yes
smtpd_use_tls = yes
 
smtp_tls_note_starttls_offer = yes
 
smtpd_tls_security_level = may
smtpd_tls_ask_ccert = yes
smtpd_tls_loglevel = 1
smtpd_tls_received_header = yes
smtpd_tls_session_cache_timeout = 3600s
tls_random_source = dev:/dev/urandom
 
smtpd_tls_key_file = /usr/share/ssl/certs/postfix/device.key
smtpd_tls_cert_file = /usr/share/ssl/certs/postfix/device.crt
smtpd_tls_CAfile = /usr/share/ssl/certs/postfix/rootCA.pem

Once all the changes are made, restart postfix:
service postfix restart

Now you can verify the setup with telnet:
telnet mail.example.com 25
 
EHLO example.com
STARTTLS

This should yield:
220 Ready to start TLS
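As an alternative to the telnet dance, openssl can negotiate STARTTLS itself and dump the certificate chain and negotiated cipher; replace the host name with your own:

```shell
# Connect, upgrade to TLS via STARTTLS, and show certificate details
openssl s_client -starttls smtp -connect mail.example.com:25 </dev/null
```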

Another way to confirm the setup is to send an email to a gmail.com account, and observe the lock status icon on the header field drop-down, explained in detail here.

Finally, the official Postfix documentation and notes on authentication (older doc) might come in handy.


QNAP NAS and autofs auto mount


After considering multiple options to cover an HTPC and a NAS, I finally went with the combined “living room” QNAP HS-251+ NAS earlier this year. I’ll leave the reviews to other sites, and just summarize the main features:

  • 2 bay 3.5″ or 2.5″ HDD or SSD
  • Intel Celeron 2GHz Quad core; 2 GB DDR3 RAM
  • 2x 1Gb RJ-45 ports; 2x USB 2.0; 2x USB 3.0
  • 1x HDMI
  • Fan-less
  • Simple remote control
  • Multiple in-house and external apps
  • Good support for Kodi (aka. XBMC)
  • Linux based 32-bit OS, with most common tools and network services available, including SSHD, NFS, SMB, FTPS, rsync.

NFS

Setting up NFS shares on the NAS side is straightforward through the web-based UI under “Control Panel”. You probably want to create one or more users which match your own client (e.g. laptop) users, and possibly also a related group. All this can be achieved through the UI; however, for setting specific user IDs, SSH into the NAS (using the admin account) and edit /etc/passwd and /etc/group. If the IDs are changed, you’ll also have to update /mnt/HDA_ROOT/.config/nfssetting.

/etc/passwd
david:x:1001:8008:Linux User,,david,:/share/homes/david:/bin/sh
john:x:1000:8008:Linux User,,john,:/share/homes/john:/bin/sh

/etc/group
foobar:x:8008:david,john

The reason for changing the user or group IDs manually might be to match existing IDs on the client machines. In that case, you might also have to provide this option, to make sure those IDs are actually used by the NAS. This setting is not permanent, so if the NAS is restarted frequently, you might consider a start-up script solution.
echo N > /sys/module/nfs/parameters/nfs4_disable_idmapping
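A minimal sketch of such a start-up script; how to hook it into the QNAP boot sequence varies between models and firmware versions, so that part is left out:

```shell
#!/bin/sh
# Re-apply the NFSv4 idmapping setting after boot.
# The sysfs entry only exists once the nfs kernel module is loaded.
param=/sys/module/nfs/parameters/nfs4_disable_idmapping
if [ -f "$param" ]; then
    echo N > "$param"
fi
```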

The two relevant configuration files for the NFS setup on the NAS are /etc/exports and /mnt/HDA_ROOT/.config/nfssetting. They will be automatically configured by the UI; however, some manual tweaking might be needed. I ended up with something like this, for two machines (with DNS names “laptop” and “desktop” – you can also use IP addresses) and two shares (“pictures”, “video”). The user (UID) and group (GID) IDs match what’s seen in the /etc/passwd and /etc/group files above.

/etc/exports

"/share/CACHEDEV1_DATA/pictures" laptop(rw,async,no_subtree_check,insecure,no_root_squash) desktop(rw,async,no_subtree_check,insecure,no_root_squash)
"/share/CACHEDEV1_DATA/video" laptop(rw,async,no_subtree_check,insecure,no_root_squash) desktop(rw,async,no_subtree_check,insecure,no_root_squash)

/mnt/HDA_ROOT/.config/nfssetting
"/share/CACHEDEV1_DATA/Public" *(rw,async,no_root_squash,insecure)
[Global]
Version = 4.2.0
[Access]
/share/CACHEDEV1_DATA/Public = FALSE
/share/CACHEDEV1_DATA/pictures = TRUE
/share/CACHEDEV1_DATA/video = TRUE
[AllowIP]
/share/CACHEDEV1_DATA/Public = *
/share/CACHEDEV1_DATA/pictures = laptop,desktop
/share/CACHEDEV1_DATA/video = laptop,desktop
[Permission]
/share/CACHEDEV1_DATA/Public = rw
/share/CACHEDEV1_DATA/pictures = rw,rw
/share/CACHEDEV1_DATA/video = rw,rw
[SquashOption]
/share/CACHEDEV1_DATA/Public = no_root_squash
/share/CACHEDEV1_DATA/pictures = no_root_squash,no_root_squash
/share/CACHEDEV1_DATA/video = no_root_squash,no_root_squash
[AnonUID]
/share/CACHEDEV1_DATA/Public = 65534
/share/CACHEDEV1_DATA/pictures = 1001,1000
/share/CACHEDEV1_DATA/video = 1001,1000
[AnonGID]
/share/CACHEDEV1_DATA/Public = 65534
/share/CACHEDEV1_DATA/pictures = 8008,8008
/share/CACHEDEV1_DATA/video = 8008,8008

After making any changes to the NFS config, restart the service:
/etc/init.d/nfs restart

Client side and autofs

On the client, e.g. laptop or desktop, you’d want to point your NFS mount configuration to the shares created above. However, since either the NAS or, more likely, the personal machine will be rebooted now and then, it is useful to configure this through autofs instead of the traditional /etc/fstab config. That way, the shares will be mounted and re-mounted on demand. It also avoids long waits at boot and shutdown of the client machines.

First, make sure the NFS and autofs packages are installed:
apt-get install portmap nfs-common autofs cifs-utils

Edit /etc/auto.master and add the following line, which specifies the local mount point and the share-specific configuration file. Note that these have to match your setup, so you might want to change the names here. As long as the /mnt directory and config file match, you can use whatever names you like.

/etc/auto.master
/mnt/qnap /etc/auto.qnap

The share specific configuration is then added in the file referenced above. It assumes you’ve named the shares on the NAS “pictures” and “video”. It also assumes the DNS name of the NAS is “qnap” (or you can use an IP here). Finally, it assumes that the shared group is called “foobar”, which should match the GID 8008 above. That GID should also be present on the client machine.

/etc/auto.qnap
pictures -fstype=nfs,rw,soft,tcp,nolock,gid=foobar qnap:/pictures
video -fstype=nfs,rw,soft,tcp,nolock,gid=foobar qnap:/video

Finally, after making changes to the NFS / autofs config, restart the service:
/etc/init.d/autofs restart


Let’s Encrypt TLS certificate setup for Apache on Debian 7


Through Let’s Encrypt, anybody can now easily obtain and install a free TLS (or SSL) certificate on their web site. The basic use case for a single host is very simple and straightforward to set up, as seen here. For multiple virtual hosts, it is simply a case of rinse and repeat.

On older distributions, a bit more effort is required. E.g. on Debian 7 (Wheezy), the required version of the Augeas library (libaugeas0, augeas-lenses) is not available, so the edits to the Apache config files have to be managed by hand. Furthermore, when transitioning from an old HTTP-based server, you need to configure redirects for any old links which might still hard-code “http” in the URL. Finally, there are some security decisions to consider when selecting which encryption protocols and ciphers to support.

Installation and setup

Because the installer has only been packaged for newer distributions so far, a manual download is required. The initial execution of the letsencrypt-auto binary will install further dependencies.

sudo apt-get install git
git clone https://github.com/letsencrypt/letsencrypt /usr/local/letsencrypt
 
cd /usr/local/letsencrypt
./letsencrypt-auto --help

To acquire the certificates independently of the running Apache web server, first shut it down, and use the stand-alone option for letsencrypt-auto. Replace the email and domain name options with the correct values.

apache2ctl stop
 
./letsencrypt-auto certonly --standalone --email johndoe@example.com -d example.com -d www.example.com

Unless specified on the command line as above, there will be a prompt to enter a contact email, and to agree to the terms of service. Afterwards, four new files will be created:

/etc/letsencrypt/live/example.com/cert.pem
/etc/letsencrypt/live/example.com/chain.pem
/etc/letsencrypt/live/example.com/fullchain.pem
/etc/letsencrypt/live/example.com/privkey.pem

If you don’t have automated regular backup of /etc, now is a good time to at least backup /etc/letsencrypt and /etc/apache2.
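For example, a one-off snapshot of both directories (the file name is just a suggestion):

```shell
# Back up the Let's Encrypt state and the Apache configuration
tar zcvf letsencrypt_apache_backup_$(date +%Y%m%d).tar.gz /etc/letsencrypt /etc/apache2
```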

In the Apache config for the virtual host, add a new section (or a new file) for the TLS/SSL port 443. The important new lines in the HTTPS section use the files created above. Please note, this example is for an older Apache version, typically available on Debian 7 Wheezy. See these notes for newer versions.

# This will change when Apache is upgraded to >2.4.8
# See https://letsencrypt.readthedocs.org/en/latest/using.html#where-are-my-certificates
 
SSLEngine on
 
SSLCertificateFile /etc/letsencrypt/live/example.com/cert.pem
SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
SSLCertificateChainFile /etc/letsencrypt/live/example.com/chain.pem
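For reference, on Apache 2.4.8 and newer, SSLCertificateChainFile is deprecated and the chain is served from the combined fullchain.pem instead, along these lines:

```
SSLEngine on

SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
```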

To automatically redirect links which have hard-coded http, add something like this to the old port *:80 section.

# Redirect from http to https
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R,L]

While editing the virtual site configuration, it can be useful to watch out for the logging format string. Typically the logging formatter “combined” is used. However, this does not indicate which protocol was used to serve the page. To show the port number used (which implies the protocol), change to “vhost_combined” instead. For example:

CustomLog ${APACHE_LOG_DIR}/example_com-access.log vhost_combined

To finish, optionally edit /etc/apache2/ports.conf, and add the following lines to the SSL section. This enables multiple named virtual hosts over SSL, but will not work on old Windows XP systems. Tough luck.

<IfModule mod_ssl.c>
  NameVirtualHost *:443
  Listen 443
</IfModule>

Finally, restart Apache to activate all the changes.

apache2ctl restart

Verification and encryption ciphers

SSL Labs has an excellent and comprehensive online tool to verify your certificate setup. Fill in the domain name field there, or replace your site name in the following URL, and wait a couple of minutes for the report to generate. It will give you a detailed overview of your setup, what works, and what is recommended to change.

https://www.ssllabs.com/ssltest/analyze.html?d=example.com

Ideally, you’ll get a grade A. However, a few more adjustments might be required to get there. It typically has to do with the protocols and ciphers the web server is configured to accept and use. This is of course a moving target, as security and cryptography research and attacks evolve. Right now, there are two main considerations: all the old SSL protocol versions are broken and obsolete, so they should be disabled. Secondly, there are known attacks on the RC4 cipher; keeping RC4 enabled was once a compromise between its own weaknesses and mitigating the older “BEAST” attack, but by now disabling RC4 seems to be preferred.

Taking all this into account, the recommended configuration for Apache and OpenSSL as it stands excludes all SSL versions, as well as RC4. This should result in a forward-secrecy configuration. Again, this is a moving target, so it will have to be updated in the future.

To make these changes, edit the Apache SSL mod file /etc/apache2/mods-available/ssl.conf directly, or update the relevant virtual host site config file with the following lines.


SSLHonorCipherOrder on
 
SSLCipherSuite "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS !RC4 !ECDHE-RSA-RC4-SHA"
 
SSLProtocol all -SSLv2 -SSLv3

Restart Apache, and regenerate the SSL Labs report. Hopefully, it will give you a grade A.



Final considerations

Even with all the configuration above in place, the all-green TLS/SSL security lock icon in the browser URL bar might be elusive; instead, a yellow warning might show. This could stem from legacy URLs which have hard-coded the http protocol, both for internal site links and for external resources like images and scripts. It’s a matter of either using relative links (excluding the protocol and host altogether), absolute site links, inferring the protocol by not specifying it, or hard-coding it. Examples:

<img src="blog_pics/ssl_secure.png">
 
<img src="/blog_pics/ssl_secure.png">
 
<img src="//i.creativecommons.org/l/by-sa/3.0/88x31.png">
 
<img src="https://i.creativecommons.org/l/by-sa/3.0/88x31.png">

On a blog like this, it certainly makes sense to put in some effort to update static pages, and make sure that new articles are formatted correctly. However, going through all the hundreds of old articles might not be worth it. When they roll off the main page, the green icon will also show here.



SPF and DKIM on Postfix


A recent post by Jody Ribton laments the fact that DIY mail servers are having a hard time not getting blocked or rejected in today’s email landscape. The ensuing Slashdot discussion dissected the problem, and came up with a few good pieces of advice, also seen in this DigitalOcean guide:

  • Make sure the server is not an open mail relay.
  • Verify that the sender and server IP addresses are not blacklisted.
  • Use a Fully Qualified Domain Name (FQDN), with the host name matching the PTR record.
  • Set a Sender Policy Framework (SPF) DNS record.
  • Configure DomainKeys Identified Mail (DKIM) on the sending server and DNS.

Sender Policy Framework (SPF)

“Sender Policy Framework (SPF) is a simple email-validation system designed to detect email spoofing by providing a mechanism to allow receiving mail exchangers to check that incoming mail from a domain comes from a host authorized by that domain’s administrators”. [Wikipedia]. It is configured through a special TXT DNS record, and no further setup on the sending side is required.

This guide outlines the parameters, and the easiest way to get started is actually this Microsoft-provided online wizard. Given a domain, it will guide you through the settings and present you with the DNS record to add at the end. If the domain already has an SPF record, it will verify it, and also take the current settings into account through the steps.
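As an illustration only (the right mechanisms depend entirely on your mail setup), a minimal record for a domain that sends mail from its own MX and A hosts could look like this in the zone:

```
; Hypothetical SPF policy: allow the domain's MX and A hosts, reject the rest
example.com.  IN  TXT  "v=spf1 mx a -all"
```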

DomainKeys Identified Mail (DKIM) on Postfix

DKIM offers similar email-spoofing protection, and adds simple content signing. From Wikipedia: “DomainKeys Identified Mail (DKIM) is an email validation system designed to detect email spoofing by providing a mechanism to allow receiving mail exchangers to check that incoming mail from a domain is authorized by that domain’s administrators and that the email (including attachments) has not been modified during transport. A digital signature included with the message can be validated by the recipient using the signer’s public key published in the DNS.”

Configuration is quite straightforward on Postfix, and this guide shows a typical setup and some common pitfalls. If the same email server caters for multiple domains, an alternative configuration is required; this guide covers those details. Another DNS TXT record on the domain is also required. Finally, once the setup is complete, this tool can be used to verify the DNS record.

Verify the configuration

For both SPF and DKIM, the setup can also be verified by sending an email to check-auth@verifier.port25.com. In addition, an email can be sent to any Gmail account, and by viewing the original message and headers, an extra Authentication-Results header can be seen. See the last guide for further details.



Manual wifi config in Debian


Most modern GUI based distros handle setup and management of Wifi connections very well these days. However, sometimes you need to go the way of the command line. The following outlines the basics in Debian, plus some useful commands.

Driver
First, the wifi device I had lying around was a Realtek-based USB dongle similar to this. The driver for it is in the non-free repository, so I added the contrib and non-free parts to my /etc/apt/sources.list:

deb http://ftp.ch.debian.org/debian/ wheezy main contrib non-free
deb http://ftp.ch.debian.org/debian/ wheezy-updates main contrib non-free

I could then install the driver:
apt-get update
apt-get install firmware-realtek

Config
There are two config files to handle: the basic network configuration (/etc/network/interfaces), which also includes wired networks and the loopback, and the WPA wifi-specific configuration (/etc/wpa_supplicant/wpa_supplicant.conf). Although it is also possible to specify wifi parameters in the network interfaces file, it is better handled by the WPA supplicant config, because there you can configure settings for multiple networks (e.g. home and work), as seen below.

/etc/network/interfaces contains the following:

# The loopback network interface
auto lo
iface lo inet loopback

# Wired ethernet
auto eth0
iface eth0 inet dhcp

# The primary network interface
auto wlan0
allow-hotplug wlan0
iface wlan0 inet dhcp
      wpa-driver nl80211
      wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

The loopback lo interface is configured, along with the wired eth0 port and the wlan0 wifi. All interfaces are set to come up automatically, the last two use DHCP to get their address, and the nl80211 driver is specified, as well as a reference to the WPA supplicant config.

/etc/wpa_supplicant/wpa_supplicant.conf contains:

ctrl_interface=/var/run/wpa_supplicant
update_config=1

network={
    ssid="my_home_network"
    key_mgmt=WPA-PSK
    psk="wifi passphrase"
}

network={
    ssid="my_work_network"
    key_mgmt=NONE
}

Here two networks are configured: A home network with WPA encryption and its passphrase, and an open network for work.

To bring the wifi network up, simply run the following. When iterating on the configuration, the interface has to be brought down first, hence the ifdown.

ifdown wlan0 && ifup wlan0

Useful commands
Other useful commands while debugging this include:

For general network configuration and status:

ifconfig

iwconfig

For listing all available networks and their parameters. This works even before you have connected to a specific network, so it’s a good test to see whether the wifi device is working at all:
iwlist wlan0 scan

For starting the wpa supplicant manually and checking the wifi configuration. Notice the specific driver and interface name:
wpa_supplicant -B -Dnl80211 -iwlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf


Debian 7 – netinst


In search of a small, simple GNU/Linux server setup, I started with a Debian 7 installation through a network-based install – netinst. Using that image is simple: write it either to a CD, or directly to a USB drive or memory card:
(Replace X with your flash drive, but be careful; everything will be overwritten, without any recovery option).

sudo dd if=debian-7.8.0-i386-netinst.iso of=/dev/sdX

The installation was straightforward, but it has to be hand-held, since there are multiple prompts from various parts of the installation throughout. Unfortunately, the final step of writing out the GRUB configuration failed, since the install medium, the USB flash reader, was included in the GRUB device map. Removing it from /boot/grub/device.map fixed that, and a little rescue operation resolved the rest.

Once booted, there was a problem with the start-stop-daemon: for some reason, it was set to a fake mock implementation, which caused all services not to start. Swapping in the real implementation took care of that:

mv /sbin/start-stop-daemon /sbin/start-stop-daemon.FAKE
ln -s /sbin/start-stop-daemon.REAL /sbin/start-stop-daemon

Finally, some essentials are always missing:

apt-get install emacs atop htop iftop iotop tree git tig sudo autossh iptables-persistent wpasupplicant cryptsetup smartmontools


Open a Firefox image in a VirtualBox guest OS


In my recent Ubuntu and Windows install for my father, a requirement was to open images shown in Firefox (which runs on the host Ubuntu OS) in Corel Paint Shop Pro, which runs in the VirtualBox guest Windows 7 OS.

The command to execute an application from the host in the guest is vboxmanage guestcontrol [...] exec, documented here. It also needs the (guest OS) user credentials, the application to run, and its arguments. Also note that backslashes have to be escaped, and since the path contains spaces, it has to be quoted.

The following assumes that the VirtualBox machine is named “win7”, that the application to run is PSP, and shows example values for the username, password and filename.

Note that it also assumes that the Windows drive S:\ is a shared mount point between the host and guest OS, and that downloaded images are saved to that location.

vboxmanage guestcontrol win7 exec --image "C:\\Program Files (x86)\\Corel\\Corel PaintShop Pro X5\\Corel PaintShop Pro.exe" --username my_guest_user --password my_guest_password -- "s:\\image_file.png"

So far so good. However, executing this for any image in Firefox proved a bit tricky. It does not seem to be possible to associate an Open action from the right-click menu with a given application in Firefox. The plugin “Open With Photoshop” came to the rescue: it adds an extra menu item when right-clicking on any image, and, most usefully, it lets you choose the executable to run.

I took the command above, and created my own script, e.g. “corel_psh.sh”, which looks like this:

#!/bin/bash
 
WINUSER=my_win_user
PASSWORD=my_win_password
 
f=$1
b=`basename $f`
 
vboxmanage guestcontrol win7 exec --image "C:\\Program Files (x86)\\Corel\\Corel PaintShop Pro X5\\Corel PaintShop Pro.exe" --username $WINUSER --password $PASSWORD -- "s:\\$b" &> /tmp/open_with.log

I suppose a valuable addition would be to move the downloaded file to the correct location if necessary.
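That addition could look something like this sketch, assuming the S:\ share is also mounted on the host at a path like /mnt/winshare (a hypothetical mount point):

```shell
# Hypothetical addition to corel_psh.sh: ensure the image is in the shared
# folder (assumed mounted host-side at /mnt/winshare) before launching PSP.
copy_to_share() {
    SHARED_DIR=${SHARED_DIR:-/mnt/winshare}
    f=$1
    b=$(basename "$f")
    # Copy only if not already present in the share
    [ -e "$SHARED_DIR/$b" ] || cp "$f" "$SHARED_DIR/$b"
}
```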


Default PDF viewer in Debian


By some strange logic, the primary and default application for viewing PDFs in Debian is Gimp. If you want to edit the PDF, that might make sense, but it is not the most common use case. There is a bug report and discussion about this, but unfortunately, in somebody’s stubborn opinion, “it is not a bug”, and it was closed many years ago.

Luckily it is easy to fix. The default setting can be found in the file /usr/share/applications/mimeinfo.cache which contains this line:
application/pdf=gimp.desktop;gimp.desktop;epdfview.desktop;evince.desktop;

Notice how Gimp is listed first, while the PDF viewers ePDFView and Evince are last in the list. You can edit that file (as root), or, if you prefer, override it with a user-local setting in /home/$USER/.local/share/applications/mimeinfo.cache, inserting something like

application/pdf=epdfview.desktop;evince.desktop;

The change should take effect immediately, across all applications and browsers, unless the default is overridden there. E.g. Firefox and Chrome have their own internal PDF viewers; however, the default MIME applications will be available for selection when a file is downloaded.
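On desktops following the freedesktop.org MIME spec, the same per-user change can likely be made with xdg-mime (from xdg-utils) instead of hand-editing the cache:

```shell
# Make Evince the default PDF handler for the current user
xdg-mime default evince.desktop application/pdf

# Confirm the association
xdg-mime query default application/pdf
```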


Git branch in zsh prompt


When working in a git directory, I would like to see the current branch as part of the Zsh prompt. There are more advanced use cases out there, but I’ll stick with the branch name for now.

The following lines in ~/.zshrc take care of the prompt. There are a few gotchas, though: the git command will fail when not in a git-controlled directory, so we have to hide that failure message. Then, for Zsh to execute the function rather than printing its verbatim name, the prompt_subst option has to be set. Finally, it is important to use single quotes for the PROMPT line; if double quotes are used, the first output of the function is baked in, and the function is never called again.

function get-git-branch {
  b="`git symbolic-ref --short HEAD 2> /dev/null`" && echo "($b)" || echo ""
}
 
setopt prompt_subst
 
PROMPT='%n@%2m %~ $(get-git-branch) %# '


Key mappings in Zsh


Z-Shell is a powerful alternative to Bash, but some of the details can take time to get used to, and some things just have to be changed. For example, the key bindings for cursor and other special keys: using CTRL + arrow keys to skip words might give funny characters like “;5D” and “;5C” instead. As pointed out by Luke Wilde, these keys have to be set up manually. In my case, I had to include the semi-colon in the bindkey command as well.

These should go in ~/.zshrc.

bindkey ';5D' emacs-backward-word
bindkey ';5C' emacs-forward-word
 
export WORDCHARS=''

The only funny thing about setting it up like this is that if the actual character sequence “;5D” is pasted into the terminal, it will be interpreted as if CTRL+LEFT was pressed. I’m not aware of a work-around for that.

The Zsh wiki lists a few other possible key bindings, including for the Home and End keys:

bindkey "${terminfo[khome]}" beginning-of-line
bindkey "${terminfo[kend]}" end-of-line


Multiplexed SSH sessions for quicker connection


If you need to open multiple SSH connections to the same host, it can get tedious to re-authenticate for every one. And even with public-key authentication and no password, the extra handshake eats a bit of time and bandwidth. The solution is multiplexed SSH sessions: authenticate once, and subsequent connections to the same host go over the same session. It’s dead easy to set up:

In your ~/.ssh/config file, add the following lines. (Make sure that file has user-only permissions, i.e. 600.)

Host *
   ControlMaster auto
   ControlPath ~/.ssh/master-%r@%h:%p

It takes effect immediately. SSH twice to the same host to verify.
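You can also ask the client about the master connection explicitly; with a session open to a host, something like:

```shell
# Ask the control master whether it is running for this host
ssh -O check user@example.com

# Cleanly shut the master (and all multiplexed sessions) down
ssh -O exit user@example.com
```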


chroot to ARM


chroot allows you to “run a command or interactive shell with special root directory”, as the man page says. However, it is assumed that the second-level root directory is built for the same CPU architecture. This causes a problem if you want to chroot into an ARM-based image, for the Raspberry Pi, let’s say. qemu-arm-static, some “voodoo” and several tricks come to the rescue. The process is well documented at Sentry’s Tech Blog, and the original seems to be by Darrin Hodges.

After downloading and unzipping the image, it has to be mounted. There are a few ways to go about this, but I found the easiest was to use plain old mount with an offset. Note that the typical RPi image file is a full disk image, as opposed to a single partition or ISO. We are after the second partition, which in this case starts at sector 122880. (See this discussion for how to find the correct starting sector using fdisk.)
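Briefly, fdisk with sector units shows the start sector of each partition, and the mount offset is that sector number times the 512-byte sector size:

```shell
# fdisk -lu <image> lists the partitions with the "Start" column in
# 512-byte sectors. For a second partition starting at sector 122880,
# the byte offset for mount's -o offset option is:
echo $(( 512 * 122880 ))   # 62914560
```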

mkdir /mnt/rpi
mount -o loop,offset=$(( 512 * 122880 )) 2014-01-07-wheezy-raspbian.img /mnt/rpi

Next we’ll copy a statically built QEMU binary for ARM to the mounted image. You might need to install QEMU on the host system first. Furthermore, we need to mount or bind the special system directories from the host to the chroot.

apt-get install qemu-user-static
cp /usr/bin/qemu-arm-static /mnt/rpi/usr/bin/

mount -o bind /dev /mnt/rpi/dev
mount -o bind /proc /mnt/rpi/proc
mount -o bind /sys /mnt/rpi/sys

Next comes the magic. This registers the ARM executable format with the QEMU static binary. Thus, the path to qemu-arm-static has to match where it is located on the host and slave systems (as far as I understand).

echo ':arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-arm-static:' > /proc/sys/fs/binfmt_misc/register

Finally, it’s time for the moment of truth:

chroot /mnt/rpi

uname -a
Linux hrb 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 armv7l GNU/Linux

In some cases, the error “qemu: uncaught target signal 4 (Illegal instruction) – core dumped” occurs. User kinsa notes here that the lines of the file ld.so.preload (i.e. on the slave, /mnt/rpi/etc/ld.so.preload) have to be commented out (with a # in front).

Congratulations, you now have an ARM-based chroot. What to do with it? Maybe install a few “missing” packages before copying over to one or more SD cards, set up users, modify passwords, etc. Or take advantage of the CPU and memory of the host system to compile from source.

apt-get install htop tree ipython ipython3 gnuplot

As a final note, when done, you want to clean up the mount points.

umount /mnt/rpi/dev
umount /mnt/rpi/proc
umount /mnt/rpi/sys
umount /mnt/rpi


Fedora 20 released


Fedora 20 was released a few days ago. J.A. Watson at ZDNet has a brief overview of the different desktops available, and concludes that for the most part they run just fine on any hardware, including “sub-notebooks”. Furthermore, even though each desktop’s “spin” has specialised in its own applications, there are always plenty more to choose from in the main Fedora repositories.

The Anaconda installer was rewritten back in release 18, and FedUp (FEDora UPgrader) is now the main system upgrade tool. It is not quite clear whether it is preferable to perform the upgrade on a running system, though, as opposed to booting from an installer image.

Thus, the following links still apply, even for existing installations:
