Archive for the 'linux' Category

Default PDF viewer in Debian

By some strange logic, the primary and default application for viewing PDFs in Debian is GIMP. If you want to edit a PDF, that might make sense, but it is hardly the most common use case. There is a bug report and discussion about this, but unfortunately, in somebody's stubborn opinion, "it is not a bug", and the report was closed many years ago.

Luckily it is easy to fix. The default setting can be found in the file /usr/share/applications/mimeinfo.cache which contains this line:
application/pdf=gimp.desktop;gimp.desktop;epdfview.desktop;evince.desktop;

Notice how GIMP is listed first, while the actual PDF viewers, ePDFView and Evince, come last. You can edit that file (as root). Or, if you prefer, you can override it per user in /home/$USER/.local/share/applications/mimeinfo.cache, inserting something like

application/pdf=epdfview.desktop;evince.desktop;

The change should take effect immediately, across all applications and browsers, unless the default is overridden there. Firefox and Chrome, for example, have their own internal PDF viewers; however, the default MIME applications will still be offered when a file is downloaded.
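
Alternatively, if the xdg-utils package is installed, the per-user default can be set without editing any files by hand; a quick sketch, assuming Evince is the viewer you want:

xdg-mime default evince.desktop application/pdf
xdg-mime query default application/pdf

The query should now print evince.desktop.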

Git branch in zsh prompt

When working in a git directory, I would like to see the current branch as part of the Zsh prompt. There are more advanced use cases out there, but I’ll stick with the branch name for now.

The following lines in ~/.zshrc take care of the prompt. There are a few gotchas, though: the git command will fail when not in a git-controlled directory, so we'll have to hide that failure message. Then, for Zsh to execute the function rather than printing its verbatim name, the prompt_subst option has to be set. Finally, it is important to use single quotes for the PROMPT line; if double quotes are used, the function is expanded once at assignment time and never called again.

function get-git-branch {
  # Print the current branch in parentheses; the error git emits
  # outside a repository is silenced.
  b="`git symbolic-ref --short HEAD 2> /dev/null`" && echo "($b)" || echo ""
}
 
setopt prompt_subst
 
PROMPT='%n@%2m %~ $(get-git-branch) %# '
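
For comparison, Zsh also ships with the vcs_info helper, which can do the same job (and much more). A minimal sketch, relying on the same prompt_subst option as above:

autoload -Uz vcs_info
precmd() { vcs_info }
zstyle ':vcs_info:git:*' formats '(%b)'
PROMPT='%n@%2m %~ ${vcs_info_msg_0_} %# '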

Key mappings in Zsh

Z-Shell is a powerful alternative to Bash, but some of the details can take time to get used to, and some things just have to be changed. One example is the key bindings for cursor and other special keys: using CTRL + arrow keys to skip words might print odd characters like ";5D" and ";5C" instead. As pointed out by Luke Wilde, these keys have to be set up manually. In my case, I had to include the semi-colon in the bindkey command as well.

These should go in ~/.zshrc.

bindkey ';5D' emacs-backward-word
bindkey ';5C' emacs-forward-word
 
export WORDCHARS=''

The only odd thing about setting it up like this is that if the literal character sequence ";5D" is pasted into the terminal, it is treated as if CTRL+LEFT had been pressed. I'm not aware of a work-around for that.

The Zsh wiki lists a few other possible key bindings, including for the Home and End keys:

bindkey "${terminfo[khome]}" beginning-of-line
bindkey "${terminfo[kend]}" end-of-line

Multiplexed SSH sessions for quicker connection

If you need to open multiple SSH connections to the same host, it can get tedious to re-authenticate for every one. And even with public key authentication and no password, each extra channel eats a bit of bandwidth. The solution is multiplexed SSH sessions: authenticate once, and subsequent connections to the same host go over the same session. It's dead easy to set up.

In your ~/.ssh/config file, add the following lines (make sure that file has user-only permissions, i.e. mode 600):

Host *
   ControlMaster auto
   ControlPath ~/.ssh/master-%r@%h:%p

It takes effect immediately. SSH twice to the same host to verify.
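
The control socket also lets you inspect and tear down the shared session explicitly (example.com is just a placeholder host):

ssh -O check example.com
ssh -O exit example.com

The first prints "Master running" if a master connection exists; the second shuts it down. Newer OpenSSH versions also support a ControlPersist option, which keeps the master alive in the background for a while after the last session exits.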

chroot to ARM

chroot allows you to "run a command or interactive shell with special root directory", as the man page says. However, it is assumed that the new root directory is built for the same CPU architecture. This is a problem if you want to chroot into an ARM-based image, say for the Raspberry Pi. qemu-arm-static, some "voodoo" and several tricks come to the rescue. The process is documented well at Sentry's Tech Blog, and the original seems to be by Darrin Hodges.

After downloading and unzipping the image, you have to mount it. There are a few ways to go about this, but I found the easiest was plain old mount with an offset: the typical RPi image file is a full disk image, as opposed to a single partition or an ISO, and we are after the second partition, which in our case starts at sector 122880. (See this discussion for how to find the correct starting sector using fdisk.)
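
For reference, fdisk can read the partition table straight out of the image file; look for the start sector of the second (Linux) partition in its output, and multiply by the sector size (normally 512):

fdisk -l 2014-01-07-wheezy-raspbian.img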

mkdir /mnt/rpi
mount -o loop,offset=$(( 512 * 122880 )) 2014-01-07-wheezy-raspbian.img /mnt/rpi

Next, we'll copy a statically built QEMU binary for ARM onto the mounted image; you might need to install QEMU on the host system first. Furthermore, we need to bind-mount the special system directories from the host into the chroot.

apt-get install qemu-user-static
cp /usr/bin/qemu-arm-static /mnt/rpi/usr/bin/

mount -o bind /dev /mnt/rpi/dev
mount -o bind /proc /mnt/rpi/proc
mount -o bind /sys /mnt/rpi/sys

Next comes the magic. This registers the ARM executable format with the kernel, pointing it at the static QEMU binary. Note that the path to qemu-arm-static has to match its location on both the host and the slave system (as far as I understand).

echo ':arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-arm-static:' > /proc/sys/fs/binfmt_misc/register
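
If the registration succeeded, the kernel exposes a new entry named after the :arm: tag in the string above, which makes for a quick sanity check:

cat /proc/sys/fs/binfmt_misc/arm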

Finally, it’s time for the moment of truth:

chroot /mnt/rpi

uname -a
Linux hrb 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 armv7l GNU/Linux

In some cases, the error "qemu: uncaught target signal 4 (Illegal instruction) – core dumped" occurs. User kinsa notes here that the lines of the file ld.so.preload (i.e. on the slave, /mnt/rpi/etc/ld.so.preload) have to be commented out (with a # in front).

Congratulations, you now have an ARM-based chroot. What to do with it? Maybe install a few "missing" packages before copying it over to one or more SD cards, set up users, change passwords, etc. Or take advantage of the CPU and memory of the host system and compile something from source.

apt-get install htop tree ipython ipython3 gnuplot

As a final note, when you are done, you will want to clean up.
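
If the image is headed for an SD card, you probably don't want the emulator binary left behind, so remove the copy we made earlier:

rm /mnt/rpi/usr/bin/qemu-arm-static

Then release the mount points: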

umount /mnt/rpi/dev
umount /mnt/rpi/proc
umount /mnt/rpi/sys
umount /mnt/rpi

Fedora 20 released

Fedora 20 was released a few days ago. J.A. Watson at ZDNet has a brief overview of the different desktops available, and concludes that for the most part they run just fine on any hardware, including "sub-notebooks". Furthermore, even though each desktop "spin" has specialised in its own applications, there is always plenty more to choose from in the main Fedora repositories.

The Anaconda installer was rewritten back in release 18, and FedUp (FEDora UPgrader) is now the main system upgrade tool. It is not quite clear whether running it on a live system is preferred over booting from an installer image, though.
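
For the record, the network upgrade path with FedUp boils down to something like the following; a sketch based on the Fedora documentation of the time, run from the release you are upgrading, followed by a reboot:

sudo yum install fedup
sudo fedup --network 20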

Thus, the following links still apply, even for existing installations:

Building XBMC on the RPi

Some notes on building XBMC from source on the Raspberry Pi: I started with the Raspbian 2013-09-25-wheezy image from here. After basic setup, I switched to CLI only, set the GPU memory to 16 MB, logged in over SSH, and started a screen session. A remote session is preferred, since there will be a lot of copying back and forth between the RPi and your desktop.

For the most part, I followed these instructions, with a few modifications. First, the boot files used to set GPU memory are not present in the Raspbian distribution I had installed; instead, I used raspi-config to set the memory split. Secondly, the large one-liner apt-get install of all the dev packages (step 4 in the instructions) did not work very well: it gave dependency conflicts with the mesa packages. I found myself splitting that line into many smaller chunks, which then worked fine. Finally, a few packages were still missing, and I had to run configure several times to figure out which. In the end, I also installed these:

apt-get install dh-autoreconf gawk gperf zip ccache
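
As for the memory split, raspi-config simply records it in the firmware config file, so it can also be set directly; a one-liner, assuming a standard Raspbian boot partition:

echo 'gpu_mem=16' | sudo tee -a /boot/config.txt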

For a successful build, I had to modify the search path of a header file. There are a few ways to go about that, as discussed here. I used this solution:

sudo sed -i 's/#include "vchost_config.h"/#include "linux\/vchost_config.h"/' /usr/include/interface/vmcs_host/vcgencmd.h

That took me as far as a working XBMC setup; however, videos do not play. MPlayer has no problem with the same files, but XBMC just gives a black screen. I will have to investigate further.

There’s a similar set of instructions here.

Firefox plug-ins and fixes

Firefox wants to be like Chrome. I'd rather just keep my old Firefox. Here's my list of fixes to restore old behaviour and remove some of their blunders:

Disable the "fancy" stuff. These settings can be reached from the special about:config page. I disable these (and persist them in the user.js sketch after the list):

  • browser.tabs.animate
  • browser.tabs.insertRelatedAfterCurrent
  • keyword.enabled
  • browser.fixup.alternate.enabled
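
The same settings can be persisted in a user.js file in the Firefox profile directory, which reapplies them on every start (the profile path below is a placeholder):

// ~/.mozilla/firefox/<profile>/user.js
user_pref("browser.tabs.animate", false);
user_pref("browser.tabs.insertRelatedAfterCurrent", false);
user_pref("keyword.enabled", false);
user_pref("browser.fixup.alternate.enabled", false);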

Then some plug-ins:

perceptualdiff – compare images perceptually

I recently found myself needing to compare bitmap images, to see if they were about the same. The images were graphs generated by Gnuplot, and I wanted to check whether subtle changes in the data had introduced unexpected changes to the plots. A simple binary diff told me there were indeed some differences; however, spotting them manually was not possible.

Enter the handy tool perceptualdiff, which lets you compare TIFF and PNG images based on a perceptual metric. It goes beyond a simple bitwise pixel diff, comparing instead based on a model of the human visual system. Consider the examples below, and it is clear that it is a useful tool: it makes it easy to see where the small differences were introduced. (As it turns out, the differences were only the result of the two plots being generated on different machines, with different versions of Gnuplot and possibly different available fonts.)

The tool is in the Fedora repository, so a simple yum is enough:
yum install perceptualdiff

To generate an output diff image, this command does the job:
perceptualdiff -output diff.ppm image1.png image2.png
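
To vet a whole directory of plots in one go, a small loop does the job; a sketch which assumes perceptualdiff signals a perceptible difference through a non-zero exit status:

# Compare each new plot against its counterpart in old/.
for f in new/*.png; do
    perceptualdiff "old/${f##*/}" "$f" > /dev/null ||
        echo "differs: ${f##*/}"
done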

Fonts for Gnuplot

After struggling with fonts in Gnuplot 4.6 (on Fedora 17), getting the not-so-useful error "gdImageStringFT: Could not find/open font while printing string", I found tonica's post on debugging the issue. Although helpful, it did not give the full solution to my problem. It turns out many of the old fonts are not available in Fedora 17 at all.

I wanted a sans-serif font, and in the end I went for DejaVu Sans. After installing the font packages, I explicitly exported their path for use with Gnuplot:

sudo yum install dejavu-sans-fonts dejavu-fonts-common
 
export GDFONTPATH=/usr/share/fonts/dejavu

Then I can use that font in Gnuplot by addressing it specifically:

set title 'Test' font 'DejaVuSans,21'
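
Putting the pieces together, a minimal end-to-end test; the png terminal is the gd-based one which consults GDFONTPATH:

export GDFONTPATH=/usr/share/fonts/dejavu
gnuplot <<'EOF'
set terminal png font 'DejaVuSans,12'
set output 'test.png'
set title 'Test' font 'DejaVuSans,21'
plot sin(x)
EOF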

Bodhi Linux on Nexus 7

Bodhi Linux is a Debian-based distribution using the Enlightenment window manager. They have made the effort to build a Nexus 7 image (ARM HF), and have gone for the very simple approach also used by the Ubuntu folks: simply flash the boot and userdata partitions, and you're ready to go.

The images can be found here. (Note that there are actually two different kinds, boot and root, with different versions of the latter.) After unpacking, it boils down to:
sudo fastboot erase boot
sudo fastboot erase userdata
sudo fastboot flash boot boot.img
sudo fastboot flash userdata rootfs.img
sudo fastboot reboot

With the Enlightenment WM, they have actually managed to come closer to a UI which suits a small touch screen than the Ubuntu and KDE Plasma Active UIs have. The top panel features large buttons, and overall the interaction feels snappy. Of course, they have not managed to cover all the upstream applications, which is the next frontier for all of these distributions.

As it is Debian-based, it uses the Debian repositories, and just like in the Ubuntu ones, a lot of applications have already been cross-compiled for ARM HF (hard floating point). As a crude test, Eclipse JDT installed and started up fine, while LibreOffice Writer was missing a package. glxgears installs, but does not start, possibly a driver issue. USB OTG with a hub, keyboard and mouse works out of the box.

So far, this is the best alternative GNU/Linux-based distribution for the Nexus 7 I've tested. However, as mentioned before, these are still early days, and there will be a lot of work for both upstream application maintainers and distributions to create a great UI experience suited to a touch screen.

KDE Plasma Active for Nexus 7

On the heels of running Ubuntu on the Nexus 7, I thought I'd try KDE's Mer-based (partly derived from MeeGo) Plasma Active as well. As their documentation states: "Even though very much already works reasonably well, there are still some glitches. So, please don't expect a 100% working system." And indeed, it is a bit more than small glitches that have to be fixed before it is a usable system.

Using Ruediger Gad's instructions, I downloaded the boot and userdata images and proceeded with the installation steps. Although not one-click like Ubuntu's, it is reasonably straightforward. However, once it is time to boot the new OS, there are some problems: I only saw a "dead" Android on its back with a red exclamation mark. (This is of course Google's fault; they have hidden any useful information one might get a further clue from, in a kind of "IT for Dummies" mode.) It seems Gad had anticipated problems, though, since he provides a helpful fastboot command to load the boot image dynamically. This works, with the caveat that MOSLO (MeeGo OS Loader) will go into a USB slave mode if a USB cable is detected. Therefore, I had to issue the fastboot command and then quickly yank out the cable, and the OS would boot. (Skipping MOSLO altogether failed to boot at all, stuck on "Waiting for root device /dev/mmcblk0pX".)

Once finally in the UI, I did not have the same luck as Gad. On the first try, even the most basic clicks and moves left the whole screen hanging for up to 30 seconds, and it failed to render the application icons seen on his blog. Trying to boot a second time, I got as far as opening the browser and terminal. Typing in the URL bar with the on-screen keyboard did not work, since the bar loses focus once another part of the screen is touched. Looking at dmesg in the terminal, I could see that my USB OTG adapter, USB hub, keyboard and mouse were detected correctly; however, there seem to be drivers missing, since nothing happened when I moved the mouse or typed. So yeah, some glitches, which hopefully will be ironed out in a new release.

What's a bit more worrying is the impression of the overall UI and its usefulness on a small touch screen. Just like the Ubuntu UI, Plasma Active is still stuck in a desktop-centric view: small icons and buttons, difficult to interact with. Setting up the WiFi dumped me right into an old desktop dialogue, complete with small text fields and OK/Cancel buttons in the far bottom corner. This was probably the most disappointing bit, since I had expected the Plasma Active interface to be designed specifically for small touch screens. Clearly I was wrong.

Overall then, KDE Plasma Active is an interesting initiative, and one to watch in the future. However, just like Ubuntu, these are still very early days for new alternative OSes on tablets and phones. Given some more time, things will look a lot more promising, for sure.

The Do-It-Yourself Cloud

“In the cloud”

The buzzword “cloud” seems to be here to stay for quite a lot longer. The problem is that it is rather ill-defined, and sometimes it is used to mean “on the Internet”, regardless of how or where a particular service or content is hosted.

Only when we pick up further buzzwords can we add some meaning to the term. Although there are even more terms in use, I would like to focus on two of them. First, Infrastructure as a Service (IaaS), or what has traditionally been called "hosting": virtual or dedicated machines which you can install and operate at OS root level with little or no oversight. Examples include your local hosting provider, and global businesses like Amazon EC2 and Rackspace.

Secondly, Software as a Service (SaaS), where you don't write the software or maintain the system yourself. All it takes is to sign up for a service and start using it. Think Google Apps, which includes GMail, Docs, Calendar, Sites and much more; or Salesforce, Microsoft Office 365, etc. Often these services are billed as "free", with no financial cost to private users, and the development and operating costs of the provider are financed through various advertisement programs.

Black Clouds

The problem with the latter model, Software as a Service, is that it can put many constraints on the user, including what you are allowed to do or say, and it can even make it difficult for you to move to another provider. In his 2011 essay "It's the end of the web as we know it", Adrian Short likens users of such services to tenants: if you merely rent your home, there are many things you will not be allowed to do, or which you do not have control over. Short focuses on web hosting, where using a service like Blogger will not let you control how links are redirected, or, were you to move in the future, take those page-clicks with you onto your new site. The same goes for e-mail: if AOL decides that their e-mail service is not worthwhile tomorrow, many people will lose e-mails with no chance to redirect. Or look at all the storage services which collapsed in the wake of the raid on MegaUpload; a lot of users are still waiting for the FBI to return their files.

More recently, the security expert Bruce Schneier wrote about the same problem, but from a security perspective. We are not only tenants, he claims, but serfs in a feudal system, where the service providers take care of all the security issues for us, but in return our eyeballs are sold to the highest bidder, and again it is difficult to move out. For example, once you've invested in music or movies from Apple iTunes, it is not trivial to move to Amazon's MP3 store; and if you've put all your contacts into Facebook, it is almost impossible to move to MySpace.

In early December, Julian Assange surfaced to warn about complete surveillance, and governments fighting to curb free speech. His style of writing is not always as straight to the point as one could wish for, but in between there is a clear message: Encrypt everything! This has spurred interesting discussion all over the Internet, with a common refrain: Move away from centralized services, build your own.

Finally, Karsten Gerloff, president of the Free Software Foundation Europe (FSFE), touched on the same theme in his talk at LinuxCon Europe in Barcelona, in November 2012. He highlighted the same problems with centralised control as discussed above, and also mentioned a few examples of free software alternatives which distribute various services. More about those below.

Free Software

The stage is set then, and DIY is ready to come into vogue again. But where do you start? What do you need? If not GMail or Hotmail, who will host your e-mail, chat, and the other services you've come to depend on? Well, it is tempting to cut the answer short and say: "You". However, that does not mean that every man, woman and child has to build their own stack. It makes sense to share, but within smaller groups and communities. For example, it is useful to have a family domain which every family member can hinge their e-mail address off. A community could share the rent of a virtual machine and use it for multiple domains, one for each individual group; think the local youth club, etc. The French Data Network (FDN) has a similar business model for their ISP service, where each customer is an owner of a local branch.

For the software to provide the services we need in our own stack, we find ourselves in the very fortunate situation that it is all already available for free. And it is not only gratis; it is free from the control of any authority or corporation, free to be distributed, modified, and developed. I'm of course talking about Free and Open Source Software (FOSS), which has Richard Stallman to thank for much of its core values, as defined in the GPL. ("There isn't a lawyer on earth who would have drafted the GPL the way it is," says Eben Moglen ["Continuing the Fight"].) We may take it for granted now; however, we could very easily have ended up in a shareware world, where utilities of all kinds would still be available, but every function would come with a price tag, and only the original developers would have access to the source code and be able to make modifications. Many Windows users will probably recognize this world.

Assuming one of the popular GNU/Linux distributions, most of the software below should already be available in the main repositories. Thus it is a matter of a one-line command or a few clicks to install. This is again a major advantage of free software: not only is it gratis, it is usually refreshingly simple to install. The typical procedure for most proprietary software would include surfing around an unknown web site for a download link, downloading a binary, and trusting (gambling, really) that it has not been tampered with. Next, an "Install Wizard" of dubious usefulness and quality gives you a spectacular progress bar, sometimes complete with ads.

The DIY Cloud

The following is a list of some of the most common and widely used free and open source solutions to typical Internet services, including e-mail, web sites and blogging, chat, voice and video calls, online calendars, file sharing and social networks. There are of course many other alternatives, and this is not meant to be an exhaustive list. It should be plenty to get good personal or community services started, though.

  • The Apache HTTP web server is the most widely used web server on the Internet, powering just shy of 60% of web sites (October 2012). It usually comes as a standard package in most distributions, and is easy to start up and configure. For the multi-host use case, it is trivial to use the same server for multiple domains (see the sketch after this list).
  • If you are publishing through a blog like this one, the open source WordPress project is a natural companion to the Apache web server. It too is available through the standard repositories; however, you might want to download the latest source and do a custom install, both for the security updates and to make custom tweaks.
  • For e-mail, Postfix is a typical choice, offering easy setup, multi-user and multi-domain features, and it integrates well with other must-have tools. Those include SpamAssassin (another Apache Foundation project) and Postgrey to handle unwanted mail, and Dovecot for IMAP and POP3 login. For a web frontend, SquirrelMail offers a no-frills, fully featured e-mail client. All of these are available through repository install.
  • Moving into slightly less used software, but still very common services, we find the XMPP (aka Jabber) servers ejabberd and Apache Vysper, with more to choose from. Here, a clear best-of-breed has yet to emerge, and furthermore, it will require a bit more effort on the admin and user side to configure and use. As an alternative, there is of course always IRC, with plenty of software in place.
  • Taking instant chat one step further, a Voice-over-IP server like Asterisk is worth considering. Here, setup and installation might be tricky, and again, signing up or switching over users might require more effort. Once installed, though, there are plenty of FOSS clients to choose from, both on the desktop and on mobile.
  • Moving on to more business-oriented software, an online calendar through the Apache CalDAV module is worth exploring. As an alternative, the Radicale server is reported to be easy to install and use.
  • A closely related standard protocol, WebDAV, offers file sharing and versioning (if plain old FTP is not an option). Again, there is an Apache module, mod_dav, which is relatively easy to set up and access in various ways, including from OS X and Windows.
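
To illustrate the multi-domain point from the Apache item above, name-based virtual hosts take only a few lines per domain (the names and paths below are placeholders):

<VirtualHost *:80>
    ServerName family.example.org
    DocumentRoot /var/www/family.example.org
</VirtualHost>

<VirtualHost *:80>
    ServerName youthclub.example.org
    DocumentRoot /var/www/youthclub.example.org
</VirtualHost>

(On Apache 2.2 you also need a NameVirtualHost *:80 line; on 2.4 it is no longer required.)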
DIY Internet

That list should cover the basics, and a bit more. To round it off, there are a number of experimental or niche services which are worth considering over their proprietary, closed alternatives. For search, the distributed YaCy project looks promising. GNU Social and Diaspora aim to take on the heavyweights in social networking. Finally, GNUNet and ownCloud are peer-to-peer file-sharing alternatives.

The future lies in distributed services, with content at the end-nodes rather than at the hubs; in other words, a random network rather than a scale-free one. Taking that characteristic back to the physical layer (which traditionally has always been scale-free), there are "dark nets" or mesh nets, which aim to build an alternative physical infrastructure based on off-the-shelf WiFi equipment. Currently this is at a very early, experimental stage, but the trend is clear: local, distributed, and controlled by individuals rather than large corporations.

Ubuntu on the Nexus 7

I recently got my hands on an Asus Nexus 7 tablet. In itself maybe not a groundbreaking device, were it not for the fact that Canonical will use it as their reference device for running Ubuntu on tablets and dual- (or more) core mobile phones. Just to be clear, this is no dual-boot, emulator, or "chroot" trick: the OS boots natively and brings up the standard Ubuntu Unity desktop. The kernel is copied from (or based on) Google's Android 4.1 kernel for the Nexus 7, which includes several non-committed changes, as well as binary drivers and firmware. See here for more information.

A decent proof-of-concept build of Ubuntu 13.04 is already available, and it runs fine on the Nexus 7. If you're running Ubuntu on your desktop, a pre-packaged installer is available from a repository. Alternatively, download the boot and userdata images and install using fastboot yourself. (All commands below need sudo.)

fastboot devices
fastboot erase boot
fastboot erase userdata
fastboot flash boot raring-preinstalled-desktop-armhf+nexus7.bootimg
fastboot flash userdata raring-preinstalled-desktop-armhf+nexus7.img
fastboot reboot

Now, I said proof-of-concept, and what you get with this image is not really that handy on a tablet. So far, it just starts up a desktop window manager, which is not too comfortable with a touch screen. However, with a USB On-the-Go (OTG) adapter, you can plug in a USB hub, keyboard and mouse, and it becomes usable like any other desktop. I got one of these compact adapters from Deal Extreme; however, due to the rounded shape of the Asus Nexus 7, I had to chisel off a few millimetres to make it fit. The version with a wire would probably have worked better. Also interesting to try would be an HDMI adapter (I'm not sure if that particular one works). Finally, the missing bit for a fully functional docking setup would be charging while the OTG cable is connected. The Ubuntu FAQ mentions that this will be enabled, but you'll probably need yet another special adapter cable to piece it all together.

What’s impressive about the current offering is that most, if not all, packages have already been compiled for the ARM architecture and are available in the Ubuntu repositories. This is very welcome, as it frees the tablet from the Android markets, and brings in an enormous selection of free and open source software. Not all of it is immediately suited for a small touch screen on a slow CPU, but that will change over time.

On a whim, I tried apt-get install emacs and eclipse. Both downloaded and worked fine; however, even with a four-core CPU, ARM is not quite up to Eclipse yet. It should also be noted that the desktop UI has some unnecessary features which notably slow down the experience, for example eye-candy like fading transitions when Alt-Tabbing between windows.

In conclusion, this is a very interesting first move from Canonical, and more GNU/Linux distributions will surely follow. With more alternatives and variety in this space, it will hopefully open people's eyes to the fact that the mobile phones and tablets they carry around are full-fledged computers in themselves, with no reason to remain restricted to a single OS from a single vendor. Maybe it will eventually help overturn the stupid laws which make it illegal to hack and experiment on these devices.

USB Smart Card Reader

Just got another shipment from Deal Extreme, this time a USB smart card reader. It's for reading my soon-to-arrive Free Software Foundation Europe fellowship card. However, all I had to try it out with so far were old, redundant bank cards, and that seemed to work without problems.

The reader came with a cute mini-CD with Linux drivers, but they were not required to get it running. Following the page at FSFE, I had the reader up in minutes. Confirm that these two are installed; in my case they were:

yum install libusb pcsc-lite

Add the udev rules and group as indicated by the FSF howto, and restart X if necessary.

I also installed pcsc-tools (yum install pcsc-tools), and could detect cards by running

pcsc_scan
