Wednesday, March 10, 2010

4K alignment for disks: IMPORTANT!!!

This is the type of performance you can get when fdisk from util-linux-ng 2.17.1 or later is PROPERLY used to align a partition along 4K boundaries on an SSD (a Micron/Crucial C300 in this case):


time tar -xzf ./linux-kernel-tarball.tar.gz
real 0m6.682s
user 0m5.783s
sys 0m1.680s


And this is what happens on a different partition of the exact same SSD that I forgot to manually align to 4K boundaries:


time tar -xzf ./linux-kernel-tarball.tar.gz
real 0m13.317s
user 0m5.806s
sys 0m1.673s


Sure, it only sounds like 6.6 seconds, but look more carefully: if you factor out the CPU time (which is nearly identical in both runs), the I/O portion of the untar is running WELL OVER 10 TIMES FASTER on the aligned partition!

Here's the problem: While the newest version of fdisk will align the FIRST partition for you with the -c option, all SUBSEQUENT partitions have to be HAND ALIGNED to the correct boundaries. The first test you saw was from the first partition, /dev/sdb1... the second was from a logical partition that I did NOT properly align.

Lesson #1: EVERY partition has to be aligned (I'm still working out how this affects the extended partition + logical sub-partitions, but I'm guessing both have to be aligned correctly).

So: Even though I knew about the 4K alignment problem, I was only 1/2 clever, not clever enough!

Lesson #2: Linux NEEDS BETTER TOOLS THAT WORK WITH 4K BY DEFAULT AT ALL TIMES (INCLUDING PESKY EXTENDED PARTITIONS)! Oh, and believing what the drive tells you is out, because drives will lie about having 512 byte sectors even when they are designed for 4K underneath! MAKE A MODE THAT IGNORES THE DRIVE AND ALIGNS EACH AND EVERY PARTITION ON 4K BOUNDARIES, NO FUSS, NO MUSS, WE DON'T CARE ABOUT DOS COMPATIBILITY.




UPDATE



I managed to move the two problematic partitions to a conventional drive, re-partition and re-format the misaligned partitions, and then move everything back: ALL the partitions are now operating at the same (incredibly fast) speed that the C300 is capable of providing.

The technique to actually get to the 4K alignment on every partition is tricky, but here's the end result:



>fdisk -ucl /dev/sdb
Disk /dev/sdb: 128.0 GB, 128035676160 bytes
255 heads, 63 sectors/track, 15566 cylinders, total 250069680 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xd510b84e

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *         2048    19538582     9768267+  83  Linux
/dev/sdb2         19539968   250069679   115264856    5  Extended
/dev/sdb5         19542016   204163072    92310528+  83  Linux
/dev/sdb6        204167168   243240236    19536534+  83  Linux
/dev/sdb7        243243008   250069679     3413336   82  Linux swap / Solaris


/dev/sdb1 is the primary partition, and it was the one partition that fdisk originally aligned properly. With the new version of fdisk, the -c option is absolutely critical to having fdisk align at least that first partition correctly. The other important option is -u, which displays units in sectors... not particularly human readable, but useful.

A little more discussion of the above partition table follows. First, note that every number in the "Start" column is evenly divisible by the magic number 2048 needed to generate the appropriate alignment for a 4K-sector drive. The 2048 multiple aligns every partition start to a 1024KB boundary (which is 4K aligned too, 1024KB being 256 * 4KB). This may be larger than what is strictly needed, but fdisk defaults to this value and the effect on wasted disk space is minimal. The vital thing is that every "Start" sector be a number divisible by 2048. This includes the extended partition /dev/sdb2, which acts as a container for the logical partitions sdb5 - sdb7... no exceptions. The "End" sector does not have to be divisible by 2048, although ending just short of a boundary minimizes the number of sectors wasted between partitions (and the amount of wasted space is tiny on a modern drive anyway). Incidentally, the "Blocks" column is just 1/2 the sector count of each partition (End - Start + 1 == Blocks * 2, with a trailing "+" marking an odd half block).
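If you want to sanity-check a disk the same way, the divisibility test is easy to script. Here's a quick sketch (assuming the same fdisk output format shown above) that flags any partition whose start sector is not a multiple of 2048:

fdisk -ucl /dev/sdb | awk '/^\/dev\// {
    start = ($2 == "*") ? $3 : $2          # skip the boot-flag column when present
    printf "%-10s start=%-11s %s\n", $1, start, (start % 2048 ? "MISALIGNED" : "aligned")
}'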

In the new "-c" mode, fdisk does attempt to provide 2048 sectors between partitions, but there is NO guarantee that it will start SUBSEQUENT partitions at a sector number that is evenly divisible by 2048... I had to enter all the numbers you see above by hand after doing the math myself. I'm not saying that this is rocket science, but it is WAY beyond what I would expect even an experienced user to have to go through in order to actually enjoy the benefits of technologies that are actually supported by the Linux kernel.... sorry guys, but until this becomes automatic, Windows 7 and even Vista are clearly better than Linux in this area.

I'm a Linux veteran, but I was used to doing partitioning with the (slightly) easier to use cfdisk instead of the old-school fdisk utility. Unfortunately, cfdisk has not been updated with this new option. I've heard that parted & gParted are semi-smart about this, but my tests showed they have issues as well. The problem with parted is that it is too smart for its own good... it queries the device to determine whether the hardware has 512 byte or 4096 byte sectors. Sounds great, until you actually use a 4K device in early 2010, when the devices actively lie in order to let Windows XP use the drive (albeit with big performance drops). The parted guys need the automatic option I requested above: IGNORE what the drive says and align EACH AND EVERY partition to 4K. I think this would even be safe on older 512 byte drives, since a 4K boundary is just 8 of the old 512 byte sectors. I know DOS might have issues, but unless a DeLorean shows up and kidnaps me back to 1985, I'm not too worried about that.

Monday, February 15, 2010

Howto (mostly) use a WINS server from a Linux client

The Back Story


Just set up a VPN to the office using OpenVPN. So far I'm really happy with OpenVPN, but OpenVPN (or any VPN for that matter) only serves to bring a remote machine into a LAN... the rest of the configuration builds on top of the VPN.



In my case I'm joining a small office network that offers the standard NT services, including a PDC for NT domain authentication, WINS, and file sharing. We also have network printers, but as I've recently found out, they are not going through any centralized print server, which may be why several client machines inside the LAN have problems printing. Oh... did I mention that the PDC and Windows file server isn't running Windows at all, but is actually Samba?


Yup... trying to get a Linux client to talk to a Linux server using Windows protocols. We truly live in a bizarre world, but I'm not the only one in this situation. This blog post will be the first in a series of HowTo reports on getting stuff working in a sane manner. For reference, as of this writing I'm using Samba 3.4.5, and the server is running an older 3.0.x series install.



Name Resolution using NORMAL Linux tools


This won't be a revelation for the Samba experts out there, but to be blunt, while Samba is a very powerful software package, the documentation and interfaces are lacking when it comes to doing anything even remotely complicated. I'm not even talking about a cute GUI, I'm talking about missing docs for simple use cases like: I'm a Linux client querying a WINS server... how can I get normal programs to use the WINS server for name resolution? I'm not talking about using a specialized utility like nmblookup... I want it to "just work" for normal programs.



The good news is that I found a partial solution after hunting around. Before beginning, make sure you have at least the client packages for Samba installed. I am using Arch Linux, so your config file paths may vary slightly on other distros.


  • First: edit /etc/samba/smb.conf and add the IP address of your WINS server. For example: wins server = 172.16.42.1

  • Next: update the config file called "nsswitch.conf". I have been using Linux for 10 years and had never messed with this file before, but it basically lets you tell the name resolution system how to try to resolve names. It goes way beyond the simple task of resolving host names to IP addresses that we're dealing with here, but for our purposes the fix is simple. Add an entry for "wins" to the hosts line like so:
    hosts: files dns wins


That's it for the basic configuration. The final entry on the hosts line tells the resolver to fall back to WINS resolution if files (e.g. /etc/hosts) and a standard DNS query cannot resolve a name. The WINS server configured in smb.conf in step 1 ensures there is a valid server to query.
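An easy way to verify the change is to query through the resolver itself rather than a DNS-only tool: getent walks the hosts line of nsswitch.conf in order, so a name that only the WINS server knows should now come back with an address (the machine name here is made up):

getent hosts officeserver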



What does & does not work


So after the nsswitch.conf tweak described above, any program that is set up to use proper name resolution will automagically work with the WINS server in addition to the existing DNS setup! This includes (but is not limited to): ssh, ping, wget, CUPS (specifying a printer by its WINS name), and konqueror & dolphin (the smb:// protocol). Even a 2-line Python program can use WINS once you do the configuration:



import socket
socket.gethostbyname("WINS_name_or_DNS_name_it_does_not_matter")



Unfortunately, some utilities and programs do not use /etc/nsswitch.conf properly. Some are network-specific utilities like "host" & nslookup that are specifically designed to query DNS. More notoriously... a certain web browser named after an ignited carnivorous quadruped also fails to resolve names properly. Browsers that do work properly under Linux include Arora and Chromium, if you like a Google-browser experience.



In summary: while Linux has a very robust and flexible system for using different services to resolve names... not all software on Linux actually wants to do things the easy way. However, for the purposes of the LAN at work, I can now use WINS to resolve names. This is useful not only because names are easier than typing dotted-quads, but also because DHCP means those dotted-quads are not necessarily stable, while names are. I've already gotten network shares to mount, and I'm looking forward to getting my home PC set up even better than some of the local machines on our LAN, while staying secure at the same time.

Wednesday, October 28, 2009

DROID Coming Soon

Looks like my stubborn refusal to drop Verizon for cutesy phones with subpar networks has finally been rewarded. I'm getting out there early, and we'll see if this IS the droid I'm looking for. Look for a real review later; until then, try Engadget.

Saturday, September 5, 2009

NFSv4 & Gigabit Ethernet: A Semi-Functional Combination

First: Dig up Old Box. Second: Off With Its Head!


So I had an old P4 desktop that served me well from 2003 until the beginning of 2008, when I built my current Core2 box. It had been sitting around collecting dust until a few days ago, when I grabbed 2GB of DDR400 RAM and slapped Arch Linux on it. It's an old Northwood-core P4 that came with one onboard gigabit card, and I slapped a second one in to make it a dual-homed host. Finally, after the OS was installed and networking came up automatically at boot... I pulled the video card, both to save power and because I wanted to see how a normal desktop PC would behave with zero graphics capabilities.



So I'm running headless, which is fine for right now since I got the installation right enough that it gets onto the network and runs SSH. One of the reasons I have two network cards is that the (better) one is hooked directly to my main PC and has a static IP address. It's there for performance, since I run NFS over that interface, and also for reliability: assuming the headless box can boot, it can bring that static interface up with a minimum of hassle. Of course... that assumes the machine can boot...



Right now my main concern is that this headless box is only fine as long as I can get it on a network. That's fine for normal operations, and this thing has a very stripped-down set of software that does not need frequent updates, but you never know when an update will produce a system that does not want to reboot without manual intervention. Since this box does not have normal serial ports, my only current option is to slap a video card back in and reboot... but I've asked some other people if they have solutions for that problem... we'll see.




NFSv4: FTW?


The next step is to actually do something with this nifty headless box, and I chose to make it an NFS server. While I've used NFS as a client in larger settings like school, this is the first time I have ever set up a box to serve NFS. I chose NFS partly to experiment, and partly because I've heard it performs better than something like Samba in a pure-Linux environment, which is what I'm running. This isn't some crazy storage array; right now I have only one disk in the server, with about 150GB free for the actual NFS export. I'm doing this mostly to get experience for the 4TB RAID array I intend to build at some undetermined future date, when buying a bunch of 2TB disks is even cheaper than it is now ;-)



The setup was actually pretty easy, although I didn't bother with the genuinely complicated part, which is security. I did make the /usr/sbin/rpc.mountd server listen only on my dedicated network port, so my desktop (the client) is the only machine physically capable of trying to mount the server. So when I say that NFSv4 is relatively "simple" to set up, I'm skipping all the Kerberos stuff that is necessary in the real world but is also a real PITA to configure. I'm not saying it's as easy as right-clicking a folder and selecting "share this", but the Arch Linux wiki has pretty clear instructions, and it doesn't take too long. In fact, the Arch wiki is informative enough that I'm not going to bother repeating it... but I will share some of my own specific issues below.
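For the curious, the server side boils down to a couple of lines in /etc/exports. This is just a sketch of the NFSv4-style layout, not my actual file; the paths and the client address are placeholders:

# /etc/exports -- NFSv4 exports everything under a single pseudo-root (fsid=0)
/srv/nfs4        192.168.0.2(rw,fsid=0,no_subtree_check)
/srv/nfs4/data   192.168.0.2(rw,no_subtree_check)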




Teething Issues


I wasn't expecting the setup to be perfect, but I definitely understand why some people really hate NFS now (even though I think it's OK). When it is working, it's as if the mounted partition were on your own machine; when something goes wrong, you'll have a mounted partition that hangs every single application that so much as tries to list the NFS directory. Fortunately, umount -f was invented just for these situations, but with the NFS client's default configuration, if your server flakes out, expect your client machine to hang pretty badly, without even a timeout. These problems rear their heads most often when I reboot the client computer and try to re-establish my NFS mount. Even if I cleanly unmounted before the reboot, I've noticed that my NFS share will often be in a semi-comatose state where I can mount it, but any writes to the share start to hang. See below for why I think some of this may be hardware related, but it would be great if the NFS client were a little more cognizant that Bad Things Happen (TM) and did not assume a perfect NFS server 100% of the time.
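One mitigation I've seen suggested (not something the defaults give you) is to mount "soft" with an explicit timeout, so a flaked-out server eventually returns I/O errors instead of hanging every process that touches the mount. A sketch with placeholder names; note that soft mounts trade the hangs for possible write errors:

mount -t nfs4 -o soft,timeo=100,retrans=2 server:/ /mnt/nfs   # timeo is in tenths of a second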



The underlying issue: Flaky Gigabit cards


I've noticed that most of my problems boiled down to flaky gigabit ethernet cards. Now, these cards "work" in Linux in as much as there are drivers and you can configure the interfaces, but I need more than that for what I want to do. To get as close to real gigabit speeds as possible, I went to enable jumbo frames... which causes all sorts of problems with low-grade hardware. The good news: the Marvell Yukon2 PCIe controllers in my new machine can jump to 9000 byte frames without any issues, and the old Intel gigabit controller built into the motherboard of my old machine does 9000 byte frames too. However, before I got to this point I had to go through cards using the r8169 and skge drivers... and it was not pretty. The r8169 card worked with jumbo frames up to 7200 bytes (it refused to use the more common 9000 byte size), but after a few hours the card dropped off the face of the earth and could not be reached, even after I rebooted the machine. Even worse, the card with the skge driver would come up fine and let you set an MTU of 9000 bytes, but then any real packet activity immediately froze the card and any pending NFS operations.
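For reference, bumping the MTU is a one-line operation, and the insidious part is that it succeeded on the bad cards too (the interface name is whatever your gigabit card happens to get):

ip link set dev eth1 mtu 9000    # or the old-school way: ifconfig eth1 mtu 9000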



The main moral of my story: be VERY selective about the ethernet cards you use if you want to do anything more intensive than a simple home network. In my case, the Intel ethernet controllers are outstanding (they may be $5 more on Newegg, but you get what you pay for), and the Marvell 88E8056 controllers built into my X38 motherboard are also quite good. Be warned: just because a card accepts a jumbo-frame MTU does not mean it will actually work under real stress. By the way, performance with jumbo frames is quite a bit better than without, especially because my P4 box is not exactly a heavyweight in the CPU category and all that extra packet processing slows things down. Just about anything made this century can handle a saturated 100Mbit connection, but once you get above that, you need to make sure the parameters are tweaked to get the performance listed on the box.



How It's Working Now:



The good news first: on a large continuous write to my network share, I'm averaging a little over 70 megabytes per second, and from the saw-tooth graph I get in gkrellm, the network itself is no longer the bottleneck: it can easily exceed 100 megabytes/sec in bursts, then drops down when the old disk in the server can't keep up. By the way, for those of you who like SCP (and I do too), I get about 40 megabytes per second across the wire with jumbo frames, and 30 megabytes per second with normal frames. With NFS, my P4 box spends a bunch of time in the IOWAIT state since the disk is the bottleneck; with SCP, the CPU overhead becomes dominant since all that cryptography takes its toll at high speeds.
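For anybody who wants to reproduce that kind of number, a plain sequential write is enough. Something like this works (the path is a placeholder); conv=fdatasync makes dd wait for the data to actually land before it reports a rate:

dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=2048 conv=fdatasync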



So I've got a nifty NFS server setup, and a fun platform to mess around with too. I eventually intend to turn my current desktop machine into a RAID file server as well as a home-network server providing DHCP/DNS caching/Printer/Firewall/Routing services, so this is a good first step to try things out.

Saturday, July 11, 2009

KDE 4.3: More Ranting, Review & Comment

KDE turns 4.3


I've already ranted about how awesome KDE 4.2 was, so this is an update on what I'm currently doing with KDE 4.3RC2. So far I'm liking a lot of it, but I also want to share my own grievances. At this point I think KDE has the greatest potential of any desktop system I've seen (be it Linux/Mac/Windows) to really be dominant, and even to make the jump from conventional desktops to mobile devices without having to be completely rewritten. However, I'm tempering all of this with some pleas on how KDE may be better off with fewer features, as long as those features work 100%.




How I Customized KDE 4.3 to My Own Tastes:


Here's my basic desktop, note the lack of a standard panel:



A few words on how this desktop is laid out. I'm using the new AIR theme right now, which is the new default for KDE 4.3. A lot less black than the previous Oxygen, and I'm finally happy with a default KDE theme choice (although there are still plenty of additional themes available). I do not use any of the "normal" KDE program launchers, and I don't have a panel with a taskbar either. So how do I get anything done? That takes a little explanation.



For managing windows, since I don't have a taskbar, I re-mapped the "scale window" keyboard shortcuts to the F9 and F10 keys: F9 selects all windows on my current desktop, and F10 all available windows. These effects also restore minimized windows, so no need to worry about minimized windows disappearing forever. The ALT+TAB combo still works too, and I can traverse windows in a list that way. I actually find these tricks a good bit faster than hunting for the right window on a taskbar: I need far less mouse interaction to jump between windows than with the conventional workflow. Once windows are presented, filtering can be done by typing in the window's title (a feature adapted from Compiz) EDIT: (a feature that was copied by Compiz, thanks to sebas for pointing this out).






To actually run applications, I have two major methods from the GUI that have made things like the "K" launcher superfluous. First, along the bottom you see my Daisy quick-launcher. Daisy is a very nice application-launcher plasmoid that I've configured to act as a simple program launcher. It can be configured similarly to the OS X dock, but I only use it to quick-launch common applications.



So what do I do if an application is not on the quick-launch bar? I use the amazing krunner, which is my new Swiss-Army chainsaw for running things from the desktop. Krunner does not require you to type the exact name of a command: it can look at any application with a .desktop file and use words to associate the application with what you want to run. Some of these features were already present in KDE 4.2, but krunner in 4.3 appears to be much more stable & versatile:



Krunner has a plethora of other functions, too numerous to fully outline here: it can also act as a calculator, currency converter, and even a shortcut to Google or Wikipedia searches:




Things That Need Attention


So, as with every release, things are improved but not perfect. While a project like KDE can never truly be "finished", a few things could use some polish. These requests are not really for new functionality, just to get the currently advertised functionality working properly.



First: Nepomuk needs a massive amount of attention. I won't lie: I keep the service turned off, but for experimentation I turned it on and let the Strigi indexer churn through my hard drive for 15 minutes. After that was done, I opened a simple Nepomuk search and tried a single-word query ("Major") which, being a common word, should have turned up matches in several documents. Unfortunately, the search ran for 45 minutes at 100% CPU and produced no results. I killed it at that point, and I'm hoping that my (rather modest) collection of files isn't too much for Nepomuk to handle. The theory behind Nepomuk is interesting, but if it is going to deserve a place on my desktop, it had better work right, and not hog too much RAM in the process.


Second: While I don't use kmail or the KDE PIM suite, I've heard horror stories about Akonadi and the need to have a full-blown MySQL server running just to look at email addresses. Not cool guys, get the bloat out. Any address book so complex that it needs a MySQL database is going to live on a centralized LDAP server to begin with anyway.



Third: Work on the uncool stuff, even if it isn't as much fun as the latest plasmoid. Let me explain by example: the kio_http utility had a bug that caused it to spin at 100% CPU usage when invalid gzipped data came down the wire. This affected many higher-level applications, including my favorite yAWP weather plasmoid, which I much prefer to the KDE default.



So what is the problem with this bug? Well... it was a basic coding oversight: a failure to check for a well-known error condition, with a pretty obvious negative impact on the system. The bug was first reported on November 24, 2008... and the original bug report pointed out the SPECIFIC CODE that was broken! Here's the problem: the fix didn't go in until July 5, 2009... that's over seven months for a bug that was widely reported (see all the duplicates of the original) and that boiled down to not checking return values. This type of programming isn't necessarily "fun", but it is vital to making sure people actually want to use KDE. See below for more on how I'm afraid people are too focused on the newest social-networking plasmoid and not enough on making stuff work.



Another basic problem, which I fortunately have a work-around for, is that the KDE network-manager plasmoid is STILL in a state of disarray. I know the developers are working on major updates for it, but in my experience the plasmoid has actually regressed since the KDE 4.2 days. Using the current SVN tree, the plasmoid runs and detects wireless networks OK... unfortunately, it absolutely refuses to accept my WPA passphrases and actually connect to those networks. I have kwallet open, I have 74 prompt screens asking for the passphrase over & over again, and I have no working connections. I have to use the GNOME nm-applet program right now, which works just fine and even allows proper secure caching of WPA passphrases so I can autoconnect. Getting on a wireless network is a basic task that anyone with a laptop needs working, and when it comes to priorities, this should be at the top of the list.



Finally: RESOURCES, RESOURCES, RESOURCES. I'm not exactly what you'd call a tree-hugging hippie, but when it comes to my system RAM, I'm all about conservation! Most of my complaints here are not directed at KDE itself, but at the combination of Qt and X.org, which seem to enjoy frittering away memory and never releasing it (short of me killing the X server entirely). Unfortunately, since KDE is based on Qt, it inherits all of these problems. However, there are some things the KDE developers could do to minimize resource usage:

  • Kill off those annoying kio processes! I know they are supposed to time out on their own, but I suspect that when I suspend/resume my system the timeouts fail, and I end up with a bunch of kio_http slaves lying around doing nothing. These things should open, do a specific task, and then bail. The same goes for kio_file, kio_thumbnail, and kio_trash processes... get the job done and then EXIT properly (see the blunt workaround sketched after this list).

  • Here's a minor but annoying one: kwrited... a background daemon using 18MB of resident memory to catch and give a fancy GUI to the old-school "write" command on the terminal. OK, I guess it's cute for the systems that still bother with this stuff... but allow me to TURN IT OFF! As it stands, I kill it and delete the executable every time I upgrade KDE now....

  • Shared memory hogging: I use the (very common) binary Nvidia graphics drivers, which are not compiled with -fPIC, meaning every Qt app I run loads roughly 10MB of non-shared code pulled in from the Nvidia drivers. Yes, I wish Nvidia would fix this, and I'd even buy a faster card if it really hurt performance (Nvidia, you're losing money)! Since that is apparently not in the cards, I have heard it could be possible to have a parent launcher program take the memory hit one time and then fork processes that share that memory. This would be great for the little programs like klipper, kmix, etc. that are all very small but still suck down 16-20MB each... that usage adds up quickly.
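As for the lingering kio slaves, the blunt workaround is simply to reap them by hand after a resume. This is only a sketch, and it assumes the slaves show up under these names in the process list:

pkill -f 'kio_(http|file|thumbnail|trash)'    # -f matches against the full command line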




My Personal Plea for KDE 4.4: Quality NOT Quantity with the Features!


When KDE 4 first appeared, the two major criticisms were that it was different from the 3 series, and that it lacked features of the 3 series. KDE 4 is going to continue to be different from 3 in many ways, but more and more people have come around to realizing that its differences are mostly for the better. On the feature front, most of the important features from KDE 3 have already been brought over, and KDE 4.3 adds in a few more that people were missing. There might be a few features left out that some people find important, but at this stage all the basics seem to be implemented... however, the quality of the implementation is another matter that I think needs addressing. I think the developers have been in a headlong rush to develop new features, and it is coming at a cost to system stability that is starting to take a toll.



I won't repeat my criticisms from above about specific bugs that are annoying me and have gone unfixed for far too long. The point here is that KDE 4 is actually ready from a feature perspective; it's the quality of what KDE already has that needs to be honed to be faster and more reliable.



I'm not advocating that there should be no new features in KDE 4.4! I'm just saying that at this point, the marginal gains from improving speed and stability and bugfixing plasmoids so they "just work" will make a more positive impact on the user experience than rushing off after some new plasmoid or huge database project that leaks memory. One good example I hate to point out is the new social-networking plasmoid that has gotten a good bit of attention lately. I'm not saying it shouldn't be developed, but I am saying that when basic plasmoids like the weather forecast have longstanding bugs and are lackluster compared to third-party plasmoids like yAWP (which can't seem to get included in the KDE default branch), there is an issue with developer priorities. While making existing features work right is not as much fun, it is probably a better way to win new users than yet another social network that happens to have a plasmoid.



Another one of these half-done features deals with plasma workspaces and how they do or do not map to virtual desktops. I (think I) get the notion that plasma workspaces are supposed to host independent sets of plasmoids, while virtual desktops are for organizing regular windows. Unfortunately, I've never seen any value in using the cashew to (awkwardly) zoom in and out of workspaces that often appear to pop into existence at random. I'm sure this is useful for developers who want to debug plasma, but as an end-user I don't want two separate and confusing paradigms for moving between workspaces; I want one that works right. I've messed around with the separate-plasma-workspace-on-each-desktop option, but it has never really behaved correctly. This feature could be useful if done right, but it's actually a hindrance to have it at all if it is done wrong.



Incidentally, I'm not only criticizing KDE here; other projects have the same issues. Look at my screenshots and you'll note that I like music, which is why I'm using JuK and not Amarok. When the Amarok developers think it is more important to use 1GB of RAM to play music while simultaneously not even having a basic equalizer plugin, I give up. I'm not saying JuK couldn't use more features, but I'm willing to bet $100 that the vast majority of users would take a solid and reliable equalizer (with custom profiles for individual tracks) over the ability to add a Facebook plugin any day of the week. KDE's JuK wins with me because it is easy to use and reliable at what it does.



I also want to point out that several new features are very helpful. KDE 4.3 has some nice improvements that DO focus on making the desktop experience better, and the folder-view plasmoid is an excellent example. A folder within a folder view can now be browsed without opening the full file manager, and executables on the desktop now come with a red exclamation mark and a warning before being run for the first time. These features have a direct impact on making the desktop both more functional and more secure... this is exactly the type of innovation that directly helps the desktop experience, and KDE 4.4 just needs more stuff like this to really shine.





KDE: The OS X of Open Source



For anybody who thinks I'm being too harsh, I want to pay KDE 4 a huge compliment: I think the 4 series now has OS X-like status... and no, I don't mean it's overpriced for hipsters :-) What I really mean is that OS X started in a situation similar to KDE 4's: it had an entrenched userbase who loved its predecessor, and while the early versions of OS X had a lot of promise, they also lacked many things that userbase was used to. As time went by, OS X's feature set filled in, and as of 10.5 the biggest issue most people have is that there are too many features that don't fully "just work". Hence the Snow Leopard release, which is focused not on the latest whiz-bang changes, but on refining the existing features and making it all faster.


I think KDE 4.4 can be a Snow Leopard-esque release for the K developers: one that works on refining things to be smoother for the end-user without focusing on huge changes. KDE 4.3 is looking better (always a positive), and if 4.4 focuses on making KDE leaner, meaner, and more refined, it will be the type of desktop nobody can say no to!

Wednesday, May 6, 2009

Jack Bauer Approved: KDE 4 on 24!

For those of you who are fans of 24 and watched the 4:00 AM - 5:00 AM episode this week, you got to see a little KDE 4 action on screen. It turns out the terrorists are using a KDE 4 desktop to help further their evil plots. While I'd prefer to see Chloe doing some data mining with Nepomuk, any publicity is good publicity ;-)



This is not the first time KDE has been seen on 24 or other shows like Alias, but it is the first time I think I have seen a KDE 4 variant on TV. Also, look very carefully at the photos below and you will see what looks like the picture-frame plasmoid and the SVG analog clock plasmoid on the desktop. The analog clock is very appropriate for a show like 24! Please see this episode on Hulu to check out all the details.



First screenshot of KDE 4 in action with the terrorists


Another shot with KDE 4.2 on a nice widescreen monitor.. the terrorists know how to buy equipment


This is a screenshot of a different laptop that the terrorists had in their van. I'm not sure if it's KDE or a custom job done for the show. Sorry for the blur, the screen looks clearer when viewed in the video.

Thursday, April 9, 2009

Arch Linux: For Good or For Awesome?

Back to the Future with Arch Linux


Back in the day, when I was an actual engineer getting my Master's degree from Carnegie Mellon, I had some decent development skills. My thesis was based on a 3Kloc project that got into the internals of the Linux kernel: a mandatory access control scheme using the Linux Security Modules infrastructure. At the time (in 2004), the 2.6 kernel was just being released, and I was on the bleeding edge of development using the then new & trendy Gentoo distro. As a source-based distro, Gentoo was excellent for my development purposes, since I never had to worry about installing the "dev" packages that were always missing on other distros, and it was generally easier to integrate my own source code since everything had to be compiled anyway. Having a full-blown development environment with all the dependencies is a big help when your day job involves poking at the kernel ad nauseam.



Well... all good things come to an end. I got out of grad school and still use Linux exclusively, but Gentoo got to be a bit tedious. I didn't have the time to put into maintaining the system, and Gentoo went from being an excellent and dynamic distro to one with community infighting and all sorts of portage problems that broke the distro. I started law school three years ago, and that meant my free time to mess with a misbehaving Linux machine dropped greatly. So... I went over to the "dark side" and started using Kubuntu, the Ubuntu flavor supporting my KDE desktop.



Kubuntu worked well enough, and I've used it throughout law school with mostly positive results. The Ubuntu guys do make it pretty stupidly simple to get a working machine, and when you need to concentrate on reading cases, a system that is easy to maintain is ideal. However, as many others have pointed out, despite efforts to the contrary, Ubuntu tends to treat KDE as a second-class citizen compared to GNOME.




On to Arch


I own two PCs: a Lenovo notebook and a custom-built desktop. The notebook is mostly for school, and I don't want to make any changes to it until after I graduate in May. The desktop, on the other hand, is more open to experimentation. I blew away my Jaunty Jackalope beta and replaced it with the current 64-bit Arch release (2009.02). Here are a few thoughts about my experiences...



Installation


Arch's installation is not cutesy and graphical like Ubuntu's, but it is straightforward. I particularly enjoyed the fact that the Arch site has images for flash media like SD cards and USB sticks, so on modern systems that can boot from these drives, an install does not require a burned CD. Arch also uses a "rolling release" system that updates packages incrementally. Thus, even though I downloaded the 2009.02 release from the Arch download page, that download is just a (minimal) snapshot of the basic kernel, networking, and command-line utilities needed to pull & install the rest of the system. Individual packages get updated when they are ready, so there is no huge point where the entire system is radically upgraded all at once.



My recommendation: if doing things like running cfdisk and editing several files under /etc are old hat to you, or if you want to learn and can follow directions, then you too can install Arch. I REALLY like the /etc/rc.conf file, which consolidates a bunch of different system settings into one (well documented) file.
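To give a flavor of it, here are the kinds of entries that live in /etc/rc.conf (the values are illustrative, not my actual config):

HOSTNAME="archbox"
MODULES=()                                      # extra kernel modules to load at boot
eth0="dhcp"                                     # or static: "eth0 192.168.1.10 netmask 255.255.255.0"
INTERFACES=(eth0)
DAEMONS=(syslog-ng network netfs crond sshd)    # background services, started in order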



The default system



In 2.5 words: Lean & Mean. After the initial installer runs, you get a system that boots to a command prompt with networking support... and that is about it. If you are afraid of the command line, Arch is probably not for you. However, if you are either experienced with the command line or want to learn how to set up a custom distro, Arch's configuration is not particularly complicated, and there is excellent documentation to help you along.



Things that Rock!


Customization



I REALLY appreciate the customizability Arch gives you that was missing with Ubuntu. I will use the NetworkManager package as an example of how much better Arch makes things for those of us who like to get the details right. NetworkManager is the popular package for managing network connections on a Linux system. Originally developed by Red Hat and used in Fedora, it has spread to become the default way most distros configure networking, especially on notebooks with wireless connections. I use NetworkManager on my laptop, and I'd probably never get onto a wireless network without it; I remember the bad old days of trying to get the Orinoco 802.11b drivers working. Linux wireless networking has come a very long way in a few years, and NetworkManager is great. BUT... I really have no need for it on my desktop, which has a normal gigabit ethernet connection that just uses DHCP to access my LAN.


The problem with Kubuntu was that it was very difficult to remove or even just customize NetworkManager: you get the kitchen sink, or a bunch of broken dependencies. One thing that drove me nuts was that the wpa_supplicant process was running on my desktop machine... the one that does not even have a wireless card installed. I eventually deleted the wpa_supplicant binary by hand and then tweaked my logging options so that my log files were not filled with "error could not run wpa_supplicant" messages from NetworkManager, but this was not an optimal solution.


The Arch solution? Arch certainly allows you to install NetworkManager if necessary, but the default is a much simpler setup that lets you configure an ethernet interface statically or with DHCP... done & done! Arch gives me the flexibility that Ubuntu hides in the name of ease of use.



One bonus of all the customization I've mentioned is that the boot time on my desktop is... fast. After the BIOS finally clears out of the way, it takes about 14 seconds to have a KDM prompt running for login. The machine now spends more time running BIOS checks than actually loading the OS... not too bad.



Pacman System for Package Management


While it isn't perfect (see below), Pacman is the fastest & leanest package management system I've found. Pacman does the standard dependency tracking and even has some features I haven't seen in other package systems, even apt-get. For example, installing a larger "meta package" like pacman -S kde, where "kde" is just an alias for many sub-packages, is a very common thing across Linux distros and package managers. Where Pacman improves upon the others is that it will actually let you pick & choose which elements of the meta-build you want, and then install only those elements (along with necessary dependencies). For example, when I installed KDE after getting to my initial command line, I was able to skip certain parts of KDE that I don't use, like kdeedu, while still getting a fully-functional KDE desktop.
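In concrete terms, the operations described above look like this (kde and kdeedu being the group and sub-package names I dealt with):

pacman -Syu          # refresh the package lists and upgrade the whole (rolling) system
pacman -S kde        # install the kde group; pacman asks which members you actually want
pacman -Rs kdeedu    # remove a package along with its no-longer-needed dependencies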



Pacman gets even better with the AUR system, which allows you to build packages from source. As somebody who has installed hand-compiled packages on Ubuntu, I am a real fan of this process: you keep the flexibility of compiling software from source, but the end result is a package that pacman will install, track dependencies for, and also remove cleanly. The process is usually pretty easy, and it has already allowed me to (for example) get the newest versions of WINE running on my x86-64 box, even though Arch does not officially package WINE for 64-bit machines.
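The AUR workflow is essentially: grab the tarball containing the PKGBUILD, build with makepkg, and hand the result to pacman (the package name here is just an example):

tar xzf wine.tar.gz && cd wine
makepkg -s                       # build the package; -s pulls in build-time deps via pacman
pacman -U wine-*.pkg.tar.gz      # install the freshly built package (as root)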



Pacman is also comparable to apt-get and other package managers in that you can add extra package repositories to your system. For example, there are separate repositories for the specialized Kdemod4 packages that tweak KDE4; they are enabled by adding entries like these to /etc/pacman.conf:

[kdemod-core]
Server = http://chakra-project.org/repo/core/x86_64

[kdemod-extragear]
Server = http://chakra-project.org/repo/extragear/x86_64

[kdemod-playground]
Server = http://chakra-project.org/repo/playground/x86_64




Things that Lack Rock:



You can have my .vimrc when you pry it from my cold, dead hands!


So nothing is perfect, and Arch is no exception. While I like the ability to tweak things, it comes at the expense of some packages not simply "just working" as you might expect. The worst offender here was the vi package. Yes kids, I fight for truth, justice, and lean & mean text editors, so I'm a vi (specifically vim) guy. Like many other distros, the Arch default install includes what I call a "quasi-vim" editor that looks like vim but doesn't have all the features. The real vim needs to be installed later, which is not my major problem. Instead, the major problem was that my carefully crafted .vimrc file was not respected when I ran vi. Instead, the system default files in /etc/virc and /etc/vimrc were used, and they did not allow my personal ~/.vimrc settings to take precedence (as they had on earlier distros like Ubuntu and Gentoo). This led to bad highlighting, no autowrap, and (even worse) the middle mouse button could no longer be used to paste text into a vim session in insert mode!!! OK, that rant may sound a little over the top to outsiders, but you vim users out there know what I'm talkin' 'bout: you don't mess with a man's vimrc! Fortunately I was able to fix the problem by completely removing the system files, and all of a sudden my normal .vimrc was respected... I'm glad I found a solution, or Arch might have had a short shelf-life.



SSH-Agent: Double Oh Configuration


Another issue I had was with the ssh-agent program, a nifty service that caches the keys used for authenticating with other systems when you have RSA keys configured. Ubuntu has a nifty system that starts one (and only one) ssh-agent instance when you log in, and from there it was easy for me to add startup scripts that loaded my ssh keys so I could log in to my backup system (provided by rsync.net) without needing passwords.


Arch does not have the same feature, but fortunately I was able to add a small shell script to ~/.kde/env/ that runs ssh-agent in a way where every program in my KDE session can use it. See here for more details. It would be nice if more stuff worked "out of the box", but Arch's support forums helped me fix the problem without much trouble.
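For reference, the script itself is tiny. Something along these lines in ~/.kde/env/ (a sketch of the approach, not my exact file) gets an agent whose environment reaches everything KDE starts:

#!/bin/sh
# ~/.kde/env/ssh-agent.sh -- KDE sources *.sh files in this directory at login,
# so the SSH_AUTH_SOCK and SSH_AGENT_PID exported here reach the whole session
eval $(ssh-agent -s)
# a matching one-liner in ~/.kde/shutdown/ can run 'ssh-agent -k' to clean up on logout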




I'm liking Arch Right Now


I've been running Arch for just under a month, and so far I think I'm sold. The distro itself is well put together, and I've enjoyed having a system that conforms to what I want. I really appreciate the Arch user community, which is friendly and shares knowledge well, particularly through the Arch wiki.


Over the years I've used Mandrake, SuSE, Debian, Gentoo, Ubuntu/Kubuntu, and now Arch. I'm getting good at figuring out what I want, and I'm still amazed at how much better Linux is today than when I started with it back in 2000. There's always more to learn, which is where all the fun is. For right now, Arch is where I'm most comfortable, so Arch is for AWESOME.