Posts tagged: linux

PC build: Silent yet powerful


It’s been a long time since I’ve had the chance to put together a machine. The one I’m typing on right now has a more than five-year-old AMD Athlon 64 X2 5050e, and one of its HDDs reports 47220 Power_On_Hours, or 5.4 years. It was fun to look at some new hardware.

This build is not for me, though. My father’s current machine is from 2005, and its AMD Sempron 2600 1.6GHz has kept up well, but it would not be a good fit for the new requirements: a silent build which can handle a modern Ubuntu distribution plus Windows 7 in a VM. After good advice from Redditors on r/buildapc, I got the following components.

(Component lists: Main, Storage, The rest.)

Requirements and reasoning

At € 1055 (in June 2014), it’s not a cheap build, and I could definitely have saved a bit here and there. However, that was not my main concern; my father deserved something top-notch. I wanted something powerful enough that it would last many years to come without upgrading, yet silent enough for the living room. That’s why some of the components are somewhat over-provisioned: the fanless 460W PSU, while I expect the peak draw to be less than 150W; 16 GB RAM, a 256 GB SSD, and a 4 TB HDD.

For the CPU, I went for the four core Intel Core i5 4570 (LGA 1150, 3.20GHz), based on redMarllboro’s advice. It is indeed more powerful than the AMD A10-6700 I had originally planned for, and furthermore, the virtual cores would not benefit the VM much.

With the CPU fixed, I narrowed down my search for an Asus motherboard to the ASUS Maximus VII Ranger (Z97). That was based on the following criteria: more than 4 SATA ports, an Intel Ethernet controller (I try to keep away from Realtek based on this issue, even if that was WiFi related), 4 DIMM slots, and an onboard DVI and/or VGA port. It turns out that really narrows it down; about the only other contender was the ASUS Sabertooth Z97 Mark 2, but that only has HDMI and DisplayPort embedded.

Now, one could argue that both of those MBs are overkill for what I’m building. However, most of the boards I’d be looking at would be in the €100-150 range anyway, and as price was really not a main issue here, why not go for the latest chipset? Furthermore, I find the “Republic of Gamers (ROG)” marketing from Asus somewhat misleading. The Maximus board looks aggressive in black and red, but surely it is the hardware specifications which matter. For example, the 10K Black Metallic Capacitors are welcome when cooling is an issue. Also, some of the ROG “features” in the form of software are dubious at best: how is a RAM disk a feature of the MB? On most GNU/Linux distributions, it’s there by default under /dev/shm.
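For instance, checking for the built-in one and creating a dedicated RAM disk are one-liners (a minimal sketch; the mount point and size are arbitrary examples):

# tmpfs is already mounted by default on most distributions
df -h /dev/shm

# and a dedicated RAM disk is one mount away
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=2G tmpfs /mnt/ramdisk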

For storage, an SSD is a no-brainer these days, and the only questions are: how large? And is additional storage required? 128 GB might have been just enough, but with ~50 GB for the Ubuntu host OS, ~40 GB for the VM, and ~30 GB for swap it would have been very tight. (In fact, post install, only 70 GB is left on the 256 GB disk.) Doubling to 256 GB is less than double the price. More storage space will be required, so I added the 4 TB spinning disk. When it comes to WD Red over Green, it’s only about a €10 difference, so another no-brainer.

As the VM will be running Windows, my plan is to back it up frequently, in the hope of recovering from certain problems of that OS. Now, several people on the r/buildapc thread advised against this. I suppose they are mostly right; it might be possible to lock down a Windows installation to the point where malware and adware are not a problem. The problem is that I’d have to spend a lot of time learning how, and I would not be very interested. And why should I? A restricted install, with no direct user access to system binaries and most applications delivered from a trusted, cryptographically signed source, has been the norm on most GNU/Linux distributions for more than a decade. It takes no effort at all, so why go with something inferior? If this machine and setup can save my father from spending hundreds of bucks at PC repair shops every year, it will pay for itself quickly and be a success.

 

Silent and cool

The most important requirement for this build was to make it silent. The fanless Seasonic P-460 achieves that without breaking a sweat. At normal load, which is 35 to 50 W at the power socket (220 V, in the EU), I’ve measured the PSU temperature at 31 C. Also, the modular cable system is very nice, as it means no loose cables hanging around. In fact, there are no cables crossing the motherboard at all, as seen in this picture.

For the CPU, I had wished for passive water cooling, but most solutions on the market today are downright ugly. If the Zalman Reserator tower was still around, I would have gotten that. The compromise was therefore the over-sized Noctua NH-U14S. Again, it is probably a bit of an overkill, but the benefit is that it’s not pushing the limit of the cooling, so it remains silent and cool. CPU temperatures at load are around 30 C, and peak at 45 C, when the case fans kick in. The part which gets warmest is the Z97 chipset heat-sink, at around 36 C.

One of the features I appreciated most with the ASUS Maximus VII Ranger motherboard was the fan control. Five fans can be controlled individually based on temperature. Both PWM (Pulse Width Modulation) and DC (voltage) regulation are possible, depending on fan type. As seen in the pictures below, the two case fans are off when they are not needed, and kick in slowly when it gets hot. At low to normal load the CPU fan spins at 350 RPM, and can barely be heard even if you put your ear right next to the case.

Finally, the only other moving part in the machine is the Western Digital 4 TB Red HDD. At a maximum rotation speed of 5400 RPM it is not dead silent, but quiet enough.

 

Building

Building this machine was a lot of fun! The Fractal Design case was pure joy to work with. All aspects are well thought out: easy access to the left and right sides (back of the MB), excellent cable management, easy disk mounting slots, two large (and quiet) fans. Gone are the days of scratched and bleeding hands from sharp edges around the case. And the fact that there are no cables criss-crossing the motherboard not only looks good, but also makes for good airflow. If I were to say anything against the case, it would have to be that it is a big, heavy beast.

The other components were also top notch, and caused no problems. In particular, the modular Seasonic PSU and cable system is very welcome. You only have to plug in the cables you actually need, so there are no loose ends hanging around. The fact that the PSU comes in a pouch which competes with expensive cologne is also a nice touch.

The Noctua NH-U14S is a massive cooler, and it was another reason why I ended up with the Define R4 case; it was one of the few cases with enough clearance for the cooling block. With a 14 cm fan it keeps the CPU nice and cool. The initial boot was without the fan, and temperatures went up to about 45 C in the BIOS. With the fan at its lowest speed (about 350 RPM), it sits at around 35 C (still without having applied thermal paste; I’ll wait till it’s shipped). The only concern I had was with fan direction. Its default orientation was to blow air from the RAM side backwards over the cooler. Currently, I’ve put it on the other side, so it sucks air over the block and blows it right out at the rear fan. I might experiment with the difference of direction and position.

Here are a few pictures while building, followed by a couple of BIOS screen shots.


 
 

Software

As mentioned above, the goal was to have an Ubuntu installation, with Windows 7 in a VM. I chose Ubuntu 14.04 (aka “Trusty Tahr”), since it is a Long Term Support (LTS) release, and figured this would be the right balance between stability, supported hardware and packages. Other distributions I am currently using include Fedora and Debian, but for this build I figured hitting the middle-ground would be OK, thus Ubuntu. Since my father is used to Windows, I went for the simple Xfce 4 desktop, with a familiar taskbar, window icons and SHIFT+TAB application switching. As seen in the screen-shots below, it blends nicely with the seamless VirtualBox integration.

I tried and installed both the alternative Xubuntu ISO and the main Ubuntu ISO. The main difference is the default desktop, which is Xfce in the former. However, Xubuntu had boot problems with Secure Boot, even after I enabled “Other OS” in the BIOS. It would install fine, but not find the boot image afterwards. It was possible to repair that by refreshing Grub, but it gave me a bad feeling at the start. The main Ubuntu ISO had no boot issues, and changing the desktop is just a matter of installing a package and selecting a different option at log-in. (The Ubuntu variations are really a bit redundant in that regard, especially when other basic functionality, like boot, fails.)

Apart from the default ISO packages, I added the following. There you can see xfce4, the VirtualBox packages, various utilities, and a few benchmarking tools. Nothing much came out of the latter. Instead, see the CPU graphs below, which show calm and moderate load while running Windows in the VM.

apt-get install autossh bonnie++ conky cpuburn dbus dos2unix elementary-icon-theme emacs evince fancontrol feh geeqie gimp git gitk gnome-icon-theme-extras gnome-icon-theme-full gnome-icon-theme-symbolic gnome-terminal gnupg gparted gthumb htop iftop imagemagick iotop k3b kdiff3 libnss-myhostname lmbench mencoder mplayer mtr nmap openssh-server parcellite policykit-1 policykit-1-gnome policykit-desktop-privileges screen smart-notifier sysbench sysstat tango-icon-theme tor tree usbutils virtualbox virtualbox-guest-additions-iso vlc wireshark xfce4 xsensors xubuntu-icon-theme

The installation of Windows in the VM is very simple. One important option to notice is the Intel Virtualization Technology (VT-x) setting in the BIOS, as seen here. Once that is enabled, the rest is a breeze. VirtualBox comes with a brief but useful “wizard” which guides you through creating the image. I opted for a 40 GB disk, 2 CPU cores, and 8 GB RAM. After that, add the install medium (physical CD or ISO), and boot. Windows 7 will reboot about ten times, just as in the old days, but eventually it will leave you with a full-fledged install. Right after installation, it’s useful to add the VirtualBox Guest Additions, which amongst other things enable the seamless mode. Also, a shared mount-point is useful, and can easily be enabled through the VirtualBox settings. It automatically appears in Windows.
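For reference, roughly the same image can be created from the host shell instead of the wizard. This is only a sketch; the VM name “win7”, the disk path and the ISO path are hypothetical examples:

VBoxManage createvm --name "win7" --ostype Windows7_64 --register
VBoxManage modifyvm "win7" --memory 8192 --cpus 2
# 40 GB disk (size is given in MB)
VBoxManage createhd --filename ~/"VirtualBox VMs"/win7/win7.vdi --size 40960
VBoxManage storagectl "win7" --name "SATA" --add sata --controller IntelAHCI
VBoxManage storageattach "win7" --storagectl "SATA" --port 0 --device 0 --type hdd --medium ~/"VirtualBox VMs"/win7/win7.vdi
VBoxManage storageattach "win7" --storagectl "SATA" --port 1 --device 0 --type dvddrive --medium ~/isos/windows7.iso
VBoxManage startvm "win7"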

The CD/DVD drives are passed through, and the physical drives were mapped to similar drives in the VM. For shared directories and drives, I wanted to make sure they were mounted to the same Windows drive letter all the time, regardless of other mount points. Thus, the VirtualBox setting does not use auto-mount; instead the directory was manually mounted, as seen in the Dropbox example below.

Installing Dropbox was a matter of downloading and installing this package, and starting it as an unprivileged user. Then, in order to make that available in the Windows image as well, the top Dropbox directory was shared as a drive. (Note: the Windows VM is intentionally not connected to the network.) Finally, a requirement was to have it fixed at C:\Dropbox, which was achieved with a symbolic link in Windows. The following lines have to be executed in a shell run “as Administrator”:

rem Map the VirtualBox share to a drive letter
net use x: \\vboxsvr\Dropbox
rem Then expose it at a fixed path (link first, then target)
mklink /D C:\Dropbox x:\
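On the host side, the share itself can also be defined from the command line rather than the GUI. A sketch, again assuming the VM is registered as “win7” and Dropbox lives in the home directory:

# No --automount, so the guest controls where (and whether) it is mounted
VBoxManage sharedfolder add "win7" --name Dropbox --hostpath ~/Dropbox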

One of the few special applications which requires Windows was Corel Paint Shop Pro (PSP). The usage pattern for this is typically to download something from the web, and then process it. To make this easy and seamless, I added a Firefox plug-in so every image gets an extra right-click menu item which opens the image in PSP inside the VM. Details for this are explained here.

Finally, another special Windows-only application was the genealogy program Aldfaer. The requirement here was that it could be updated over the web. To make this work, the main install is on Ubuntu, with an option to run and update it from Wine. However, it runs better inside the VM, so the application folder is mapped to Windows through another shared folder in VirtualBox. I will go into detail regarding this setup in a later post.

Writing this a few months after the machine was delivered, I’ll declare it a success. Raw performance is at a very different level from what my father was used to. The machine is silent, and in fact it is turned on most of the time (as opposed to the old one, which he hardly used because of fan noise). The split Ubuntu / VM setup is slightly complicated, but seems to work out well. As expected, the Windows install has already regressed, but it is easy to go back to a previous snapshot instead of re-installing everything again. This machine will definitely last a long time.
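For reference, snapshots can be taken and rolled back from the host shell as well. A sketch; the VM and snapshot names are hypothetical:

# Record the current, known-good state
VBoxManage snapshot "win7" take "clean-install"

# Later, with the VM powered off, roll back to it
VBoxManage snapshot "win7" restore "clean-install"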



Making an ARM Linux based computer from scratch


Over at Henrik Forstén’s blog, he has a write-up of his very impressive project where he designed, assembled, soldered and installed a BGA (Ball Grid Array) ARM based board from scratch.

He discusses board design challenges with a four-layered PCB, considerations with traces for DDR2 RAM, CPU, and three voltage supplies. There are many pictures showing the soldering process. His summary is: “Many people say that soldering BGAs is hard but based on this experience I can’t agree. Maybe I just got lucky but I didn’t have any problems with them.”

Once the board is all put together, he goes on to boot Linux. That also proves somewhat tricky, and he ends up with a three-phase boot using an ARM bootloader, U-boot, and finally a custom built kernel.

He says, “I don’t really care about the usefulness of the board and this whole project is more of a learning experience”. Clearly it was a great success.


chroot to ARM


chroot allows you to “run a command or interactive shell with special root directory”, as the man page says. However, it is assumed that the second level root directory is built for the same CPU architecture. This causes a problem if you want to chroot into an ARM based image, for the Raspberry Pi, let’s say. qemu-arm-static, some “voodoo” and several tricks come to the rescue. The process is documented well at Sentry’s Tech Blog, and the original seems to be by Darrin Hodges.

After downloading and unzipping the image, it has to be mounted. There are a few ways to go about this, but I found the easiest was to use plain old mount with an offset. The typical RPi image file is a full disk image, as opposed to a single partition or ISO though. We are after the second partition, which in our case starts at sector 122880. (See this discussion for how to find the correct starting sector using fdisk).
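For completeness, the start sector can be read straight off the image file; the number will differ between Raspbian releases, so treat the one here as an example:

# The 'Start' column of the second (Linux) partition is the sector
# to plug into the offset calculation below
fdisk -l 2014-01-07-wheezy-raspbian.img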

mkdir /mnt/rpi
mount -o loop,offset=$(( 512 * 122880 )) 2014-01-07-wheezy-raspbian.img /mnt/rpi

Next we’ll copy a statically built QEMU binary for ARM to the mounted image. You might need to install QEMU on the host system first. Furthermore, we need to mount or bind the special system directories from the host to the chroot.

apt-get install qemu-user-static
cp /usr/bin/qemu-arm-static /mnt/rpi/usr/bin/

mount -o bind /dev /mnt/rpi/dev
mount -o bind /proc /mnt/rpi/proc
mount -o bind /sys /mnt/rpi/sys

Next comes the magic. This registers the ARM executable format with the QEMU static binary. Thus, the path to qemu-arm-static has to match where it is located on the host and slave systems (as far as I understand).

echo ':arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-arm-static:' > /proc/sys/fs/binfmt_misc/register
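If the registration succeeded, an entry named after the :arm: field shows up under binfmt_misc, reporting the interpreter path and that it is enabled:

cat /proc/sys/fs/binfmt_misc/arm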

Finally, it’s time for the moment of truth:

chroot /mnt/rpi

uname -a
Linux hrb 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 armv7l GNU/Linux

In some cases, the error “qemu: uncaught target signal 4 (Illegal instruction) – core dumped” occurs. User kinsa notes here that the lines of the file ld.so.preload (i.e. on the slave, /mnt/rpi/etc/ld.so.preload) have to be commented out (with a # in front).
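A quick (if blunt) way to do that, assuming the same image path as above, is to prefix every line with a #:

sed -i 's/^/#/' /mnt/rpi/etc/ld.so.preload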

Congratulations, you now have an ARM based chroot. What to do with it? Maybe install a few “missing” packages before copying it over to one or more SD cards, set up the users, modify passwords, etc. Or take advantage of the CPU and memory of the host system to compile from source.

apt-get install htop tree ipython ipython3 gnuplot

As a final note, when done, you want to clean up the mount points.

umount /mnt/rpi/dev
umount /mnt/rpi/proc
umount /mnt/rpi/sys
umount /mnt/rpi


iodine – IP over DNS


A recent stay in a couple of German hotels revealed a few things: First, American cultural imperialism has spun out of control, to the point where hotel receptionists are now footsoldiers for those who claim ownership of music and movie content. One hotel owner told us he had been fined two thousand Euros for MP3s downloaded by guests. In another hotel I was hard pressed to get a second access code for their WiFi, and was not allowed to sign for it on behalf of my wife. No wonder the German Pirate Party has wind in its sails.

Secondly, even without these surveillance tactics in place, connecting to the abundance of half-open WiFi networks without authenticating can be useful. They are open in the sense that WiFi encryption is not used, and you can acquire a local IP without a password. Most of the time, these networks are set up with a local log-in page, which grants access for a specific device (MAC based), typically for a fixed amount of time. However, before the access code and password are entered, some traffic is let through: DNS requests have to work to get to the log-in page and the local hotel page. This is the basis of several IP-over-DNS protocols. I chose iodine, and successfully used the hotel network without logging in.

iodine is a bespoke server/client protocol which lets you tunnel IPv4 data over DNS requests and responses. It works by setting up an extra network interface (a TUN/TAP device) on both server and client, so that any traffic can be tunnelled. It takes care of a lot of the nitty-gritty settings itself, and probes for the best settings. Finally, iodine is available for most popular platforms, including GNU/Linux (in the default repositories of Fedora and Ubuntu), *BSD, and Android. However, make sure the same version is running on both client and server, as the author states that compatibility between versions is not a project goal.
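To give an idea of what that looks like in practice, here is a minimal sketch; the password, the 10.0.0.1 tunnel address and the delegated subdomain t1.example.com are all placeholders:

# On the server (public IP, UDP port 53 reachable)
iodined -f -c -P secretpassword 10.0.0.1 t1.example.com

# On the client, behind the captive portal
iodine -f -P secretpassword t1.example.com

# If the tunnel comes up, the server is reachable on its tunnel address,
# e.g. as a SOCKS proxy over SSH
ssh -D 8080 user@10.0.0.1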

Detailed setup is covered by several people, including the project’s own HowTo and README; a CentOS compile example; and one for Debian. Thus, I won’t repeat those details, and will only cover some of the gotchas I stumbled upon and lessons I learnt:

  1. Start small and expand: The client/server can be brought up on the same machine, so make sure to try that first. Then try on the same local network, or remote but open networks, and finally on a semi-open network.
  2. Watch your firewall! The default DNS port, 53, is typically blocked, so you’ll have to punch through and forward that. Also make sure you open for UDP on that port! Use nmap from different locations to confirm that the port is open throughout. nc (Netcat) is useful in debugging the connection, but again make sure it’s UDP.
  3. Make sure the DNS entries for your domain are correct. You need two entries (an example pair is sketched after this list), and with some providers, it might not be obvious how to fill in their web-form to achieve the exact settings. I found this example most helpful.
  4. Debug the DNS setup using the CLI command dig, and the DNS web-tool by MXTools. For dig usage, this comment was useful.
  5. Use the test page provided by the author of iodine. It gives detailed and useful error reports on how far you’ve come with your setup.
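To make point 3 concrete, the two records typically look like the following sketch (hypothetical names and IP), and dig from point 4 can be used to verify them:

# At the DNS provider: an A record for the tunnel host, and an NS record
# delegating the tunnel subdomain to it
#   tunnelhost.example.com.   IN  A   203.0.113.10
#   t1.example.com.           IN  NS  tunnelhost.example.com.

# Verify from a couple of different networks
dig +short A tunnelhost.example.com
dig +short NS t1.example.com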

With some luck, you’ll have a working setup, and will now be prepared for the next time the hotel receptionist does not give you enough WiFi vouchers for all your devices. Having said that, it does not really replace full access, as the connection will be “modem-slow”, or even worse. However, you do get access, which is sometimes what counts.

A client is also available for Android from iodine, and Marcel goes into details on how to compile and run it. I’ve not tried it yet, and it seems there’s room for an easy-to-install F-Droid package there. More about that later.

The Do-It-Yourself Cloud


“In the cloud”

The buzzword “cloud” seems to be here to stay for quite a lot longer. The problem is that it is rather ill-defined, and sometimes it is used to mean “on the Internet”, regardless of how or where a particular service or content is hosted.

It is only when we pick up further buzzwords that we can add some meaning to the term. Although there are even more terms in use, I would like to focus on two of them. First, Infrastructure as a Service (IaaS), or what traditionally has been called “hosting”: virtual or dedicated machines which you can install and operate at OS root level with little or no oversight. Examples include your local hosting provider, and global businesses like Amazon EC2 and Rackspace.

Secondly, Software as a Service (SaaS), where you don’t write the software or maintain the system yourself. All it takes is to sign up for a service and start using it. Think Google Apps, which includes GMail, Docs, Calendar, Sites and much more; or Salesforce, Microsoft Office 365, etc. Often these services are billed as “free”, with no financial cost to private users, and the development and operating costs of the provider are financed through various advertisement programs.

Black Clouds

The problem with the latter model, Software as a Service, is that it can put many constraints on the user, including what you are allowed to do or say, and it can even make it difficult for you to move to another provider. In his 2011 essay “It’s the end of the web as we know it”, Adrian Short likens users of this model to tenants: if you merely rent your home, there are many things you will not be allowed to do, or which you do not have control over. Short focuses on web hosting, where using a service like Blogger will not let you control how links are redirected, or, were you to move in the future, take those page-clicks with you onto your new site. The same goes for e-mail: if AOL decides that their e-mail service is not worthwhile tomorrow, many people will lose e-mails with no chance to redirect. Or look at all the storage services which collapsed in the wake of the raid on MegaUpload. A lot of users are still waiting for the FBI to return their files.

More recently, the security expert Bruce Schneier wrote about the same problem, but from a security perspective. We are not only tenants, he claims, but serfs in a feudal system, where the service providers take care of all the security issues for us, but in return our eye-balls are sold to the highest bidder, and again it is difficult to move out. For example, once you’ve invested in music or movies from Apple iTunes, it is not trivial to move to Amazon’s MP3 store; and if you’ve put all your contacts into Facebook, it is almost impossible to move to MySpace.

In early December, Julian Assange surfaced to warn about complete surveillance, and governments fighting to curb free speech. His style of writing is not always as straight to the point as one could wish for, but in between there is a clear message: Encrypt everything! This has spurred interesting discussion all over the Internet, with a common refrain: Move away from centralized services, build your own.

Finally, Karsten Gerloff, president of the Free Software Foundation Europe (FSFE), touched on the same theme in his talk at LinuxCon Europe in Barcelona, in November 2012. He highlighted the same problems with centralised control as discussed above, and also mentioned a few examples of free software alternatives which distribute various services. More about those below.

Free Software

The stage is set, then, and DIY is ready to come into vogue again. But where do you start, and what do you need? If not GMail or Hotmail, who will host your e-mail, chat, and the other services you’ve come to depend on? Well, it is tempting to cut the answer short and say: “You”. However, that does not mean that every man, woman and child has to build their own stack. It makes sense to share, but within smaller groups and communities. For example, it is useful to have a family domain, which every family member can hinge their e-mail address off. A community could share the rent of a virtual machine, and use it for multiple domains, one for each individual group; think the local youth club, etc. The French Data Network (FDN) has a similar business model for their ISP service, where each customer is an owner of a local branch.

For the software to provide the services we need in our own stack, we find ourselves in the very fortunate situation that it is all already available for free. And it is not only gratis, it is free from control by any authority or corporation, free to be distributed, modified, and developed. I’m of course talking about Free and Open Source Software (FOSS), which has Richard Stallman to thank for much of its core values, defined in the GPL. (“There isn’t a lawyer on earth who would have drafted the GPL the way it is,” says Eben Moglen. [“Continuing the Fight”]) We may take it for granted now; however, we could very easily have ended up in a shareware world, where utilities of all kinds would still be available, but every function would come with a price tag, and only the original developers would have access to the source code and be able to make modifications. Many Windows users will probably recognize this world.

Assuming one of the popular GNU/Linux distributions, most of the software below should already be available in the main repositories. Thus it is a matter of a one-line command, or a few clicks, to install. Again, a major advantage of free software: not only is it gratis, it is usually refreshingly simple to install. The typical procedure for most proprietary software would include surfing around on an unknown web site for a download link, downloading a binary, and trusting (gambling, really) that it has not been tampered with. Next, an “Install Wizard” of dubious usefulness and quality gives you a spectacular progress bar, sometimes complete with ads.

The DIY Cloud

The following is a list of some of the most common and widely used free and open source solutions for typical Internet services, including e-mail, web sites and blogging, chat, voice and video calls, online calendars, file sharing and social networks. There are of course many other alternatives, and this is not meant to be an exhaustive list. It should be plenty to get a good personal or community service started, though.

  • The Apache HTTP web server is the most widely used web server on the Internet, powering just shy of 60% of web sites (October 2012). It usually comes as a standard package in most distributions, and is easy to start up and configure. For the multi-host use-case, it is trivial to use the same server for multiple domains (see the sketch after this list).
  • If you are publishing through a blog like this one, the open source WordPress project is a natural companion to the Apache web server. It too is available through standard repositories; however, you might want to download the latest source and do a custom install, both for the security updates and to do custom tweaks.
  • For e-mail, Postfix is a typical choice, and offers easy setup, multi-user and multi-domain features, and integrates well with other must-have tools. That includes SpamAssassin (another Apache Foundation project) and Postgrey to handle unwanted mail, and Dovecot for IMAP and POP3 login. For a web front-end, SquirrelMail offers a no-frills, fully featured e-mail client. All of these are available through a repository install.
  • Moving into slightly less used software, but still very common services, we find the XMPP (aka Jabber) servers ejabberd and Apache Vysper, with more to choose from. Here, a clear best-of-breed has yet to emerge, and furthermore, it will require a bit more effort on the admin and user side to configure and use. As an alternative, there is of course always IRC, with plenty of software in place.
  • Taking instant chat one step further, a Voice-over-IP server like Asterisk is worth considering. However, here setup and install might be tricky, and again, signing up or switching over users might require more effort. Once installed, though, there are plenty of FOSS clients to choose from, both on the desktop and on mobile.
  • Moving on to more business oriented software, online calendar through the Apache caldav module is worth exploring. As an alternative the Radicale server is reported to be easy to install and use.
  • A closely related standard protocol, WebDAV, offers file sharing and versioning (if plain old FTP is not an option). Again, there is an Apache module, mod_dav, which is relatively easy to set up and access in various ways, including from OS X and Windows.
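As a sketch of the multi-domain point above (Debian/Ubuntu file layout and Apache 2.4 naming assumed; the domain and paths are examples), adding a second site is little more than:

cat > /etc/apache2/sites-available/example.org.conf <<'EOF'
<VirtualHost *:80>
    ServerName example.org
    ServerAlias www.example.org
    DocumentRoot /var/www/example.org
</VirtualHost>
EOF

a2ensite example.org
service apache2 reload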
That list should cover the basics, and a bit more. To round it off, there are a number of experimental or niche services which are worth considering over their proprietary and closed alternatives. For search, the distributed YaCy project looks promising. GNU Social and Diaspora aim to take on the heavyweights in social networking. Finally, GNUnet (peer-to-peer) and ownCloud are file-sharing alternatives.

DIY Internet

The future lies in distributed services, with content at the end-nodes rather than the hubs; in other words, a random network rather than a scale-free one. Taking that characteristic back to the physical layer (which traditionally has always been scale-free), there are “dark nets” or mesh nets, which aim to build an alternative physical infrastructure based on off-the-shelf WiFi equipment. Currently, this is at a very early experimental stage, but the trend is clear: local, distributed and controlled by individuals rather than large corporations.

Cool Linux games on Fedora


Linux might not be famous for its games, but there are still plenty around. You will not find the latest Call of Duty, though. Rather, there is a long list of classics and small, fun games: from the SCUMM-based offerings from Revolution, to remakes of classics like Freeciv, LinCity, and Ultimate Stunts.

Fedora offers a dedicated “spin” installation for games, which includes more than a hundred small and big games. Below is a random pick of a few favourites, along with their RPM package names.

As far as I understand, many of them are OpenGL based, or require a properly configured graphics card to run.

  • Beneath a Steel Sky – beneath-a-steel-sky-cd
  • Lure of the Temptress – lure
  • Flight of the Amazon Queen – flight-of-the-amazon-queen-cd
  • Freeciv – freeciv
  • Glaxium – glaxium
  • Mania Drive – maniadrive
  • Ultimate Stunts – ultimatestunts
  • Tremulous – tremulous
  • Abuse – abuse
  • LinCity – lincity-ng

And to install them all:

yum install beneath-a-steel-sky-cd lure flight-of-the-amazon-queen-cd freeciv glaxium maniadrive ultimatestunts tremulous abuse lincity-ng

Mobile OS


In the world of OSes for mobile phones, there have been a lot of changes lately, with some going away and others joining the race. A while back, Intel announced that they would drop MeeGo, which means that it is dead since there is nobody else to support it if the community can’t keep it going. But at the same time, they said the code would be merged with another mobile OS. Intel and the Linux Foundation will be steering the OS with the very unfortunate name Tizen (it can easily be mistaken for meaning penis in some of the Scandinavian languages).

Meanwhile, over at Nokia they are betting on Windows Mobile (and making many of their employees disgruntled), while at the same time releasing the already defunct MeeGo OS in their N9 phone. However, since these are all OSes for high-end smart phones, they also need something for their so-called “feature phones”, which are not powerful enough (or have different user groups) to drive all the complex functionality. Enter Meltemi, ironically enough a Linux based OS to replace the Symbian S40 series.

The story does not end there, though. Amongst the free mobile OSes, KDE is entering the race; not with a complete separate OS, but rather a UX platform, Plasma Active, with an API for phones, tablets, set-top boxes, home automation, and so on. Plasma Active has to run on top of some OS, and currently they are using MeeGo and the openSUSE-based Balsam Professional.

It is refreshing to see a lot of movement in this area, and hopefully it will lead to a free alternative. However, at the moment it is still looking somewhat bleak for truly free mobile phone OSes. The firmware and driver issue seems to be never-ending, and not even OpenMoko can escape it.


Fedora on Raspberry Pi


Chris Tyler has published a video demonstrating Fedora running on the ARM based Raspberry Pi. This looks very promising, and the Fedora project is working actively to support several ARM based systems.

Here are general instructions on how to install Fedora from a USB stick, and here are minimal Xfce-based spins. (I am not sure if these instructions apply to the Raspberry Pi.)


MeeGo (CE) and the FreeSmartphone.Org Distributions


Timo Jyrinki has an interesting write-up about free software on mobile phones, mentioning FreeSmartphone.Org (FSO), Openmoko, Debian’s FSO group, SHR, QtMoko, and MeeGo.

He highlights the promising combination of GNU/Linux + Qt in MeeGo, and also hopes for further development in FSO, SHR, and QtMoko. However, he concludes that getting the community to take over the MeeGo project after Nokia leaves might be a difficult task.

Raspberry Pi: A €30 Computer


A few days ago,  Raspberry Pi announced that they had gotten Quake 3 running on their ARM computer. Furthermore, their FAQ estimates the networked model will cost $35 and be released at the end of this year. There is also an interview in the Guardian.

Provisional specification

  • 700MHz ARM11
  • 128MB or 256MB of SDRAM
  • OpenGL ES 2.0
  • 1080p30 H.264 high-profile decode
  • Composite and HDMI video output
  • USB 2.0
  • SD/MMC/SDIO memory card slot
  • General-purpose I/O
  • Optional integrated 2-port USB hub and 10/100 Ethernet controller
  • Open software (Ubuntu, Iceweasel, KOffice, Python)
  • The device is powered by an external AC adapter, and the Model A consumes around 1W at full load.
  • The device should run well off 4xAA cells.


Arduino on Ubuntu 8.04


A few days ago I set up the Arduino development kit on Fedora 11. Here follow the steps for Ubuntu 8.04, based on this. I was using a slightly custom setup, so the Java install is assumed, and the extra repository suggested here (Problems Ubuntu 8.04 amd64) was not added. Instead I downloaded the packages directly. Furthermore, I found that it was better to go for a setup in my home directory, as you frequently have to tweak libraries and other files.

Finally, this includes a new library for DHCP by Jordan Terrell.


sudo apt-get install gcc-avr avr-libc binutils-avr avrdude uisp

# Add your personal user to these groups: dialout, uucp
sudo emacs /etc/group
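# (alternatively, and untested here: sudo usermod -a -G dialout,uucp $USER)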

cd ~
mkdir arduino
cd arduino

# Download stuff for manual installation.
wget http://arduino.googlecode.com/files/arduino-0017.tgz
tar zxvf arduino-0017.tgz

wget http://rxtx.qbang.org/pub/rxtx/rxtx-2.2pre2-bins.zip
unzip rxtx-2.2pre2-bins.zip

wget http://www.thepotterproject.net/NewSoftSerial%20JL.zip
unzip "NewSoftSerial JL.zip"

wget http://www.thepotterproject.net/Picaso.zip
unzip Picaso.zip -d Picaso

wget http://blog.jordanterrell.com/public/Arduino-DHCPv0.4.zip
unzip Arduino-DHCPv0.4.zip -d dhcp

# Use the new avrdude
cd arduino-0017/hardware/tools; rm avrdude avrdude.conf; ln -s /usr/bin/avrdude; ln -s /etc/avrdude.conf; ll; cd -

# Use the new RXTX
cd arduino-0017/lib; rm librxtxSerial.so RXTXcomm.jar; ln -s ../../rxtx-2.2pre2-bins/x86_64-unknown-linux-gnu/librxtxSerial.so; ln -s ../../rxtx-2.2pre2-bins/RXTXcomm.jar; ll; cd -

# Make NewSoftSerial, Picaso, and Dhcp libraries available
cd arduino-0017/hardware/libraries; ln -s ../../../NewSoftSerial; ln -s ../../../Picaso; ln -s ../../../dhcp; ll; cd -

That was the basic setup, which should hopefully work in most cases. However, for gcc version 4.2.2 there is a special issue with the gcc-avr package. I’ll download it and update manually.


[back in 5...]


Ethernet2VGA (Arduino w/ethernet -> microVGA PICASO)


I recently bought an Arduino starter kit along with the Ethernet "shield". In addition, I got a uVGA-PICASO-MD1 Graphics Controller chip, which attaches onto the PICASO Universal Base Board. The total price was around 130 Euros. And the goal: to create a device which takes Ethernet input and gives VGA output. The use case would be typical demo or dashboard screens, which need no user interaction, and to avoid the 3GHz/4GB RAM laptop or desktop which usually drives them.

The software installation on Fedora 11, 64 bit, was relatively painless. There are a few steps to follow, and also some special tricks for 64 bit. The gist of it went something like this:

# Install the RPMs available from Fedora repositories.
yum install java-1.6.0-openjdk avr-gcc avr-binutils avr-libc avr-libc-docs avr-gcc-c++ avrdude rxtx uisp

cd /usr/local
mkdir arduino
cd arduino

# Download stuff for manual installation.
wget http://arduino.googlecode.com/files/arduino-0017.tgz
tar zxvf arduino-0017.tgz

wget http://rxtx.qbang.org/pub/rxtx/rxtx-2.2pre2-bins.zip
unzip rxtx-2.2pre2-bins.zip

wget http://www.thepotterproject.net/NewSoftSerial%20JL.zip
unzip "NewSoftSerial JL.zip"

wget http://www.thepotterproject.net/Picaso.zip
unzip Picaso.zip -d Picaso

# Use the new avrdude
cd arduino-0017/hardware/tools; rm avrdude avrdude.conf; ln -s /usr/bin/avrdude; ln -s /etc/avrdude/avrdude.conf; ll; cd -

# Use the new RXTX
cd arduino-0017/lib; rm librxtxSerial.so RXTXcomm.jar; ln -s ../../rxtx-2.2pre2-bins/x86_64-unknown-linux-gnu/librxtxSerial.so; ln -s ../../rxtx-2.2pre2-bins/RXTXcomm.jar; ll; cd -

# Own the examples dir, for compiling as user
chown -R myuser:myuser arduino-0017/examples

# Make NewSoftSerial and Picaso libraries available
cd arduino-0017/hardware/libraries; ln -s ../../../NewSoftSerial; ln -s ../../../Picaso; ll; cd -

# Add your personal user to these groups: dialout, uucp, lock
emacs /etc/group

And that’s that… Thanks to Sebastian Tomczak for his blog entry on the same topic, and also thanks to Jonathan Laloz for "The Potter Project", where he provides the NewSoftSerial and Picaso libraries. Without them, I would have struggled a lot more. Thanks to them, I now have a working network server on the Arduino which prints the input text through the VGA chip. Using nc, it becomes a "remote screen".

nc 192.168.2.150 9090
Hello World!![ENTER]

Here’s my program as it looks tonight:

#include <Picaso.h>
#include <NewSoftSerial.h>
#include <Ethernet.h>

//Define the Picaso object
Picaso VGAOut;

// Set MAC, IP, and start server on port.
byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };
byte ip[] = { 192, 168, 2, 150 };
Server server(9090);

void setup() {
  // LEDs are attached to digital pins 4 and 5
  pinMode(4, OUTPUT); pinMode(5, OUTPUT);

  // Initialise the uVGA device, set the resolution to 640 x 480
  VGAOut.Init(); VGAOut.SetResolution(1);

  // Init Ethernet shield
  Ethernet.begin(mac, ip); server.begin();
}

// Blink the LEDs attached to digital pins 4 and 5
void blinkTwo() {
  digitalWrite(4, HIGH); digitalWrite(5, HIGH); delay(50);
  digitalWrite(4, LOW); digitalWrite(5, LOW); delay(50);
}

void loop() {
  // Run the demo once, just to make sure there's something which works.
  VGAOut.Demo(); VGAOut.Clear();

  // Listen for incoming text over Ethernet.
  Client client = server.available();
  if (client) {
    blinkTwo(); blinkTwo();

    int x = 10; int y = 10;
    while (client.connected()) {
      if (client.available()) {
        digitalWrite(5, LOW);

        // Draw character by character from the input stream.
        char c[2] = {(char)client.read(), '\0'};
        VGAOut.DrawText(x, y, 1, c, VGAOut.GetRGB(255, 255, 255));

        delay(30); digitalWrite(5, HIGH); delay(30);

        // Wrap the lines. (I guess I can fit a bit more here...)
        x++;
        if (x > 40) { x = 0; y++; }
        if (y > 40) { y = 0; }
      }
    }
    client.stop();
  } else {
    // No client connected; turn on the red light.
    digitalWrite(4, LOW); digitalWrite(5, HIGH);
  }
  delay(5000);
}

Slashdot net


In the spirit of Upside-Down-Ternet, I thought I’d play some pranks on all the neighbours squatting on my WiFi. I recently installed OpenWrt on my Linksys; it is a very nice Linux distro, with all the features you’d expect of a WiFi router, plus the best Linux tools: SSH, a package and repository system with comprehensive tools, and of course iptables.

And here we go:

# Accept my machines
iptables -t nat -A PREROUTING -m mac --mac-source 00:12:34:56:78:90 -j ACCEPT
iptables -t nat -A PREROUTING -m mac --mac-source 00:ab:cd:ef:01:23 -j ACCEPT

# Everybody else gets Slashdot for HTTP
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 216.34.181.45

# List
iptables -t nat -L PREROUTING


You will notice that in the end I opted to only forward port 80. There were some issues with redirecting everything, presumably because some DHCP or DNS traffic was affected. But at least my neighbours can read Slashdot!


Restoring MBR on Fedora Linux


Yesterday I found myself in a situation which I’ve never seen before on a Linux system: the MBR (Master Boot Record) got wiped. I’m still clueless as to the cause. I had just upgraded to a new kernel through yum, and after a boot where I attached a new SATA drive, a blank screen appeared after the BIOS, accompanied by an endless beep. Neither change seems a likely cause, so I still don’t know what happened.

When I started to fix it, I realised that I had never needed to reinstall the MBR in my last 10 years of Linux, so I was a bit at a loss. Luckily, I had the Fedora install CD, which has a rescue mode. It gives you a shell, and from there I was able to find this excellent article.

And the command to restore the MBR? Very simple:

grub-install /dev/sda

(This is assuming you already did chroot /mnt/sysimage in the rescue shell, and that you have a SATA disk at slot 1.)
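Put together, the whole rescue session is roughly the following sketch (assuming the rescue environment found and mounted the installed system under /mnt/sysimage):

# From the Fedora rescue shell
chroot /mnt/sysimage
grub-install /dev/sda
exit
reboot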


Undelete files on FAT with Linux


It happens to everybody: You make a small mistake, and there go all your holiday pictures. Luckily, all memory cards and cameras share a few features: JPG files on FAT/FAT32. This makes it especially easy to recover deleted files using standard tools on Linux.

Key points to remember:

  • Once the disaster is a reality, keep the card away from both computer and camera.
  • Do not work with the card directly; instead make an image copy. See below.
  • If you don’t have it already, install the forensic tool foremost.
    yum install foremost
  • Before you start, make sure the card is not mounted.

The commands:

umount /media/disk
dd if=/dev/sdb1 of=/tmp/card.img bs=4096 conv=sync,notrunc
mkdir /tmp/recover
foremost -v -t jpg -i /tmp/card.img -o /tmp/recover
