Thursday, 12 December 2013

Fix FontConfig Warning in LMDE

I found another bug in Mint Debian relating to how fonts are set up.

I originally found the issue while playing about in ImageMagick, which produced an error like this:

"Fontconfig warning: "/etc/fonts/conf.d/53-monospace-lcd-filter.conf", line 10: Having multiple values in isn't supported and may not work as expected"

You can reproduce the error using this command:

fc-match sans

So, I opened up the file referenced in the error and found it was an XML file.

In the <test name="family"> element there were two fonts configured; in my case these were "DejaVu Sans Mono" and "Bitstream Vera Sans Mono".

Since the warning was complaining about multiple values being present, I decided to remove one. I removed the second one.
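For reference, the offending block looked roughly like this (reconstructed from memory rather than copied verbatim, so treat the surrounding markup as approximate):

<test name="family">
	<string>DejaVu Sans Mono</string>
	<string>Bitstream Vera Sans Mono</string>
</test>

Deleting the second <string> line leaves a single value, which is what fontconfig wants.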

After doing that, things behaved in a much more polite way:

fc-match sans
DejaVuSans.ttf: "DejaVu Sans" "Book"

Tuesday, 10 December 2013

Problems connecting to libvirtd (KVM) on remote hosts

I ran into this annoying bug while trying to connect over SSH (key auth) to libvirtd (running on CentOS 6) from an LMDE host.

The error I received was unhelpful:

Unable to connect to libvirt.

Cannot recv data: Value too large for defined data type

Verify that the 'libvirtd' daemon is running
on the remote host.


I was pretty sure that the problem was not with the server running libvirtd because it had been working the day before and was unchanged since then. On the other hand my LMDE install was completely fresh.

To cut to the chase, I don't know what the proper fix is (it seems to be a bug).

If you read to the end of the bug thread, it seems you can work around the problem by using the bare hostname instead of the FQDN.

For this to work, of course, you need to be able to resolve the target IP address using just the hostname. Since I was on the same domain as the libvirt server, this was simply a matter of defining the domain in /etc/resolv.conf on the client:

domain tuxnetworks.net
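With the domain set, the bare hostname resolves and you can connect with a URI built on it; something like this (the hostname "kvmhost" is just an example):

virsh -c qemu+ssh://root@kvmhost/system list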

If that is not a practical solution (because your client and server are on different domains) I reckon you could probably configure the server hostname as an individual entry in your /etc/hosts file too (see the example below), although I have not tried that. Let me know in the comments if that works for you!
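A hypothetical /etc/hosts entry would look like this (IP address and hostname are placeholders):

10.1.1.20   kvmhost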

Sunday, 8 December 2013

Set the Number of Workspaces in Cinnamon

After adding the "Workspaces" applet to a Cinnamon taskbar, you will find it only has two workspaces configured by default.

The trouble is, you want four.

gsettings set org.cinnamon number-workspaces 4
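You can check the current value before and after with:

gsettings get org.cinnamon number-workspaces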

You will need to log out and in again for the change to take effect.

Saturday, 7 December 2013

Multiple Hassles Installing LMDE

I had a hell of a time installing Linux Mint Debian Edition on a particular PC.

The PC was running Ubuntu 13.04 fine. It is just a generic Asus Mobo with AMD Quad Core CPU and a Geforce550Ti graphics card. Nothing special in there at all.

So I grabbed the LMDE CD that I used to install the PC I am typing this blog entry on, stuck it in the CD tray and booted the PC up.

It simply stopped booting at some point and just sat there.

I rebooted and this time chose "compatibility mode" on the grub boot menu.

Among other things, compatibility mode does not hide all the startup messages, so you can see what is happening as the system boots.

My system got as far as "udev starting version 175" and then just stopped. Eventually I decided that it was not going to go any further.

So, thinking that my CD might have gotten scratched, I grabbed a USB key and used dd to write the original ISO I had downloaded onto it.
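If you haven't done this before, it is a one-liner like the below, where the ISO name is whatever you downloaded and /dev/sdX is the whole USB key (not a partition), so double-check the device before pressing enter:

dd if=linuxmint-lmde.iso of=/dev/sdX bs=4M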

Once that was done I plugged the key in and repeated the above process with the exact same results.

Hmmm. So I resort to Google and discover that this has apparently never happened to anyone else. OK.

Back in the day, one of the first things we would do when troubleshooting something like this was to remove all peripheral hardware. In my case that was pretty much just the graphics card, so I pulled it out.

I booted the PC up again and, what do you know, I got to the Mint desktop!

So, problem 1 is resolved.

Next, I click the installer and repartition /dev/sda, which is where I will be installing LMDE.

While playing about in gparted I received a warning that /dev/sdc had some problem. I ignored it because /dev/sdc was just the USB key I was installing from.

The install process went smoothly, all the files copied over fine, and we eventually got to the "localizing packages" stage.

While the install was progressing I had been mucking about on this PC, glancing over at the installation occasionally.

At some point I noticed that "localizing packages" was taking a mighty long time. I waited a bit longer before deciding that no, it was not progressing. I consulted top and sure enough there was no noticeable activity showing at all.

I rebooted and tried again with the same result.

Then I remembered the partition warning from earlier and tried again, this time going back to the original CD.

This time everything went well. The installer finished and suggested that I reboot, which I did, only to get an error message from grub:


error: file '/boot/grub/i386-pc/normal.mod' not found

Goddamn it. Is this PC cursed?

So, first thing to do obviously is just to try re-installing grub.

Once again I boot the live CD; this time I go to a command prompt as root.

First, mount the drive

mkdir /root/tmp
mount /dev/sda1 /root/tmp

Then install grub

grub-install /dev/sda --root-directory=/root/tmp

Reboot and finally see the blessed desktop in all its Cinnamon glory.

To recap:

To fix the udev hang, remove your NVIDIA graphics card

To fix the "localizing packages" hang, use a CD rather than a USB key

To fix the grub error, re-install grub

Saturday, 30 November 2013

Steam + LMDE + AMD64 + NVidia

To install Steam on 64 bit LMDE with NVidia drivers (already installed) use this command;

sudo apt-get install steam libgl1-mesa-glx:i386 libgl1-nvidia-glx:i386

Friday, 29 November 2013

gksu on LMDE

I just installed Linux Mint Debian Edition (LMDE) for a friend and came across a small problem with launching the Mint Software Centre application.

Because the Mint Software Centre allows users to add and remove software it sensibly requires root privileges before you can proceed. To do this it utilises gksu.

Specifically, the launcher in the menu looks like this;

gksu mintinstall

The problem is that when you try to launch the MSC it asks for a password, and entering your user password (which has sudo privileges) does not work. This is because gksu expects the root user's password, which with a default install does not exist.

The way to fix this is to tell gksu to use sudo for privilege escalation:

gksu-properties

This will pop up a small dialog. Change the "Authentication Mode" from "su" to "sudo" and you are done!

Tuesday, 12 November 2013

Using DD to copy a failing drive

I have posted before about copying a hard disk using DD.

This post expands on that concept a bit for situations where you need to do an emergency copy of a failing hard disk.

The scenario:

You boot up your system only to receive a SMART warning that one of  your hard disks is failing and should be replaced.

I currently have such a situation.

The drive in question has a bootable Windows partition as well as non-bootable ext4 partition.

Fortunately, I boot my Ubuntu system from a separate SSD, but if you don't have that option then a LiveCD is the way to go.

So, now I have purchased a replacement hard disk and I want to copy the old drive over to the new one, to save having to re-install Windows/Steam.

I am aware of the potential for file corruption and if I see evidence that corruption has adversely affected things I will go to PLAN B, a complete Windows re-install. I hope I won't have to go that route.

So, I plug the new disk into my system and determine that the failing disk is /dev/sdb and the replacement is /dev/sdc

To copy the disk over I want to use the 'dd' command

dd if=/dev/sdb of=/dev/sdc

This works, but after a bit I hit a bad sector, at which point dd stops copying data. This is not what I want.

To make dd ignore read errors and keep its position on the target device in sync (by padding failed reads), we can expand our dd command like this:

dd if=/dev/sdb of=/dev/sdc conv=noerror,sync

That's great. Don't you love Unix?

Anyway, after a while of dd chugging along I decide to check on progress. I open another terminal (you can also background the process and use the same terminal) and enter:

kill -SIGUSR1 `pidof dd`

This produces the following output in the terminal window that dd is executing in:

7382+0 records in
7381+0 records out
56787277824 bytes (57 GB) copied, 1369.35 s, 4.1 MB/s

Good lord. At 4.1MB/s this 1TB drive will take an estimated 2 days to complete!

This is obviously unacceptable.

The problem is that dd uses a tiny default block size (512 bytes). Of course, this being Unix, we can control that.

I stop the dd process and re-issue the command, this time specifying a 4M block size. (One trade-off to be aware of: with conv=noerror,sync, a failed read now causes a whole 4M block to be zero-padded rather than just 512 bytes, so you can lose more data around bad sectors.)

dd if=/dev/sdb of=/dev/sdc conv=noerror,sync bs=4M

Now, when I check the progress I see a much more satisfying result:

37382+0 records in
37381+0 records out
156787277824 bytes (157 GB) copied, 1369.35 s, 114 MB/s

114 MB/s, that's more like what I want. The re-image should now take a matter of hours rather than days.

Thursday, 19 September 2013

Disable DNSMASQ on KVM host

I have a fleet of servers with bridged, static IPs running as KVM guests. These servers do not require DHCP, yet KVM starts up dnsmasq by default regardless.

Normally this is not an issue, but I happened to need dnsmasq for DNS on one of the KVM hosts, and it refused to start because libvirt had already invoked its own instance.

You can't just disable the libvirt dnsmasq, because it seems to be required for any virtual network that is active. You can, however, disable the unused virtual network, which has the same effect.

# virsh net-destroy default
# virsh net-autostart --disable default
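To confirm it worked, list the virtual networks; the default network should now show as inactive with autostart disabled:

# virsh net-list --all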

Then you can configure dnsmasq by editing /etc/dnsmasq.conf and it should work normally.

Wednesday, 18 September 2013

SOLVED: nss_getpwnam errors in CentOS 6

I've been getting an annoying error in the system logs on newly installed CentOS 6 servers with NFSv4 configured:

rpc.idmapd[6004]: nss_getpwnam: name 'root@localdomain' does not map into domain 'mydomain.net'

This error doesn't appear to cause any issues, but I don't like that sort of thing constantly spamming my logs, so I wanted to fix it. It turns out that RHEL6/CentOS6 ships a dodgy default configuration for the rpcidmapd service that you need to fix.

Edit this file:

 # vi /etc/idmapd.conf

Find the setting for "Domain", which is incorrectly set to an example .edu domain, and change it so that it looks like this:

 Domain = localdomain

Restart the service:

 # service rpcidmapd restart

After this you should no longer get the above error in your system log.

Tuesday, 17 September 2013

HOWTO: Send mail via a mailhub in CentOS

To do this I use ssmtp, which is far easier to configure for a simple task than sendmail or even postfix. If you are intending to build a full mail server then this is not the correct way to do it; I only want my CentOS server to be able to send out mail alerts, not act as a full-on mail hub.

So, let's get down to it. I will be starting from a "minimal" install of CentOS 6. You must also ensure that your host has an FQDN on your network, otherwise your mail hub will refuse to relay any emails from it.

Firstly, as is often the case, the standard CentOS repos are a lot sparser than those in Debian/Ubuntu land, so we need to add the EPEL repository from the Fedora project:

rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

Note: The above is for CentOS 6 x64. If you are using CentOS 5 or a 32-bit OS, modify the line accordingly. For example, for CentOS 5/32-bit you would browse to http://download.fedoraproject.org/pub/epel/5/i386 and find the epel package at that address, e.g. epel-release-5-4.noarch.rpm


We will need to remove postfix

# yum remove postfix

Now install SSMTP and mailx

# yum install ssmtp mailx

Backup the default ssmtp config file

# cp /etc/ssmtp/ssmtp.conf /etc/ssmtp/ssmtp.conf.default
# vi /etc/ssmtp/ssmtp.conf


Find the mailhub=mail line and change it to the hostname of your main mail server, e.g.:

mailhub=mail.tuxnetworks.com
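For reference, a complete minimal ssmtp.conf ends up looking something like this (the values are examples for my network; rewriteDomain and FromLineOverride are optional extras rather than strict requirements):

root=postmaster
mailhub=mail.tuxnetworks.com
rewriteDomain=tuxnetworks.com
hostname=myhost.tuxnetworks.com
FromLineOverride=YES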

You can now send a test email:

mail -s TEST me@mydomain.com
(type in some text and press ctrl+d when done)

You can check how things went in your mail log:

# tail -f /var/log/maillog

If all went well you should see something like this:

Sep 17 09:19:20 myhost sSMTP[31558]: Sent mail for root@myhost.tuxnetworks.com (221 2.0.0 Service closing transmission channel) uid=0 username=root outbytes=510

If you see that and your email arrives then congratulations, you are done.

Tuesday, 10 September 2013

CentOS 6 minimal does not install cron

To install it:

# yum install crontabs

Don't forget to start it and set it to start up on reboot:

# service crond start
# chkconfig crond on


Here are some other things you might want to install after doing a CentOS "minimal" install.

Wednesday, 4 September 2013

I Like Pi

So, I finally got myself a Raspberry Pi to play around with and here are some things that I've found.

The first thing I discovered is that there is no point powering the Pi up without a properly prepared SD card. I had ordered mine thinking I had a spare SD card kicking around, but when the Pi arrived I couldn't find one. I figured I could at least hook everything else up and get a POST screen, but no: without a properly imaged SD card you get nothing but a red LED and a blank screen.

So, I went out and bought a 4GB card for a whopping $6 and downloaded the standard "Raspbian" image based on Debian "Wheezy".

A quick dd later and the card was prepared, inserted and booted up.

Now, what I really want is to run XBMC (Xbox Media Center), so I jumped straight in and tried apt-get install xbmc, but got an unresolvable dependency error installing xbmc-bin.

Not to worry, I already knew that the proper way to do this is to download the Raspbmc image which has all that stuff already set up for you, so off to their website I went.

On the Raspbmc site you are given a choice: 1) download a small netinstall image which will "download the most up to date version of xbmc", or 2) download the full image, which they don't recommend.

So, I took option 1, re-imaged my SD card and booted the Pi up again. The installer script then went to work, downloading and installing a bunch of things. This took some time because I have a bad Internet connection.

Eventually it finished and the Pi rebooted, but it got stuck in a crash/restart loop, showing the message "Relax, xbmc will restart shortly" over and over and over and over.

So, off to Google I went, and I found somebody who had the same problem and "fixed it" by downloading the full image and installing that instead. I'm usually not a fan of net installers, so I found myself asking why I'd gone that route in the first place. Oh well, no matter.

So, the next step is to download the full image instead.

Thursday, 4 July 2013

Adding a new disk to an existing LVM Volume

In a previous post, I described how to build an LVM based NAS using Ubuntu.

Now I've reached the stage where that NAS is full, so I want to add a new drive. The existing Volume Group consisted of 2 x 2TB hard drives, and I'm adding a third.

The system is Ubuntu 13.04 "raring" but the following should work equally well on any Debian (or indeed Redhat) based system.

Here is what I did.

The first and most important step is to back up all the data on your drive! When we play around with disks and partitioning, we can destroy terabytes of data with just one mistake. I used a 4TB external USB drive. You have been warned.

The second thing I did, naturally, was install the new drive, which in my case also required installing a new SATA interface in the server. That went surprisingly smoothly.

OK, with all the preliminaries in place, we can proceed to making use of the new hard drive.

I used fdisk -l to determine that my new drive had been detected by the system and was device /dev/sdf.

I used parted to partition it.

Also in parted, I set the LVM flag on the new partition:

set 1 lvm on
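For completeness, the whole parted session went something like this (reconstructed; adjust the device name, and use mklabel msdos instead if you want an MBR disk):

# parted /dev/sdf
(parted) mklabel gpt
(parted) mkpart primary 0% 100%
(parted) set 1 lvm on
(parted) quit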

Back to the bash console, I added the drive as a new physical volume:  

# pvcreate /dev/sdf1

Confirm that this went OK.

 # pvdisplay 
  --- Physical volume ---
  PV Name               /dev/sdc1
  VG Name               store
  PV Size               1.82 TiB / not usable 4.09 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              476931
  Free PE               0
  Allocated PE          476931
  PV UUID               2dvFBg-uAFn-kudl-JW14-UkgE-EvPq-G5Ye5Q
  
  --- Physical volume ---
  PV Name               /dev/sde1
  VG Name               store
  PV Size               1.82 TiB / not usable 4.09 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              476931
  Free PE               0
  Allocated PE          476931
  PV UUID               BH8Osp-7UR7-6YVh-sbVe-zvCh-6WC9-hQkozl


 "/dev/sdf1" is a new physical volume of "1.82 TiB"
 --- NEW Physical volume ---
  PV Name               /dev/sdf1
  VG Name     
  PV Size               1.82 TiB          
  Allocatable           NO 
  PE Size               0 
  Total PE              0
  Free PE               0 
  Allocated PE          0 
  PV UUID               29ZZUV-qg34-Vcm9-Gw4M-iGau-eTiD-E0geXp

My VG (Volume Group) is named store, you can determine this with the vgdisplay command:  

# vgdisplay
  --- Volume group ---
  VG Name               store
  [...]
  VG Size               3.64 TiB
  [...]

Note: I have cut out the info that we are not interested in from this output, as indicated by [...]

Now, extend the VG "store" onto my new drive:  

# vgextend store /dev/sdf1

Check the VG again to find that my VG has grown to 5.46T:

# vgdisplay
 --- Volume group --- 
 VG Name              store
 [...]
 VG Size              5.46 TiB 
 [...] 
 Alloc PE / Size      953862 / 3.64 TiB 
 Free  PE / Size      476931 / 1.82 TiB 
 [...]

I want to use all the space for my existing LV (logical volume), so I must extend it with the lvextend command.

Before we do that, let's have a look at the existing LV details:  

# lvdisplay 
  --- Logical volume --- 
  LV Path                /dev/store/library
  LV Name                library
  VG Name                store
  LV UUID                Jo8uag-AkXk-8p7s-4x01-IFS9-FuWz-3MMSQH
  LV Write Access        read/write
  LV Creation host, time jupiter, 2013-05-17 17:25:01 +1000
  LV Status              available
  # open                 1
  LV Size                3.64 TiB
  Current LE             953862 
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0


I can see from that output that my LV is named "library". From the output of the above two commands we can assemble a command to extend the LV. Note that the earlier vgdisplay command gave us this information:

Alloc PE / Size 953862 / 3.64 TiB 
Free  PE / Size 476931 / 1.82 TiB 

If we take those two numbers and add them together (953862 + 476931) we can get the total number of "extents" in the volume group (1430793) and we can use that to extend the LV to take up all the free space:

 # lvextend -l1430793 /dev/store/library 
   Extending logical volume library to 5.46 TiB 
   Logical volume library successfully resized 

In the comments, Andrew suggests an easier way to allocate all the free space: 

# lvextend -l +100%FREE /dev/store/library

Check that everything is OK:

# lvdisplay
 --- Logical volume ---
  LV Path                /dev/store/library
  LV Name                library
  VG Name                store
  LV UUID                Jo8uag-AkXk-8p7s-4x01-IFS9-FuWz-3MMSQH
  LV Write Access        read/write
  LV Creation host, time jupiter, 2013-05-17 17:25:01 +1000
  LV Status              available
  # open                 1
  LV Size                5.46 TiB
  Current LE             1430793
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0


Everything looks OK, so we proceed to the final part, which is to resize the actual file system. This is also the most dangerous part. You did make a backup, right?

Before we can resize the file system, you will need to unmount it:

# umount /dev/mapper/store-library

Just to be safe, make sure there are no file system errors:  

# e2fsck -f /dev/mapper/store-library

Finally, resize the file system:  

# resize2fs -p /dev/mapper/store-library

The last two commands may take some time to complete, but once they are done (and assuming there were no problems, of course) we can remount our LV and check out all that lovely new free space.
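Remounting is just the reverse of the earlier umount (the mount point here comes from my setup; yours may differ):

# mount /dev/mapper/store-library /store/library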

# df -h 
Filesystem                 Size  Used Avail Use% Mounted on
[..]
/dev/mapper/store-library  5.4T  3.6T  1.8T  67% /store/library


yay!

Partition a hard drive with 4096 byte sectors


When partitioning a drive in parted, you might get this warning:

Warning: The resulting partition is not properly aligned for best performance.

This is most likely because you are using one of the newer large-capacity hard drives that have 4096 byte sectors.

To fix it, issue this command in parted to create a single partition using the entire disk:

mkpart primary 4096s 100%
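You can then ask parted to confirm that the new partition is properly aligned (the trailing 1 is the partition number); it should report "1 aligned":

align-check optimal 1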

If you still have problems, you might find this blog post helpful. 

Astrotek AT-CPES6A

Just bought an el-cheapo Astrotek AT-CPES6A PCI-E SATA card and am pleased to say I just plugged it in to my Ubuntu 13.04 box and it worked straight away without trouble.

No need to install drivers or anything. Yay.

Saturday, 15 June 2013

Fix Window Manager Crash When Switching Desktops

There is a bug that causes the window manager to crash if you configure multiple desktops in Gnome fallback with effects turned on.

Selecting a desktop will leave you with just the wallpaper showing and no way out other than to restart the window manager.

The fix for this is to install compizconfig-settings-manager:

sudo apt-get install compizconfig-settings-manager

We will use this to configure multiple workspaces instead of the obvious way, which is to right-click the workspace switcher applet on your desktop toolbar.

Important Note! If you have already configured multiple desktops in the obvious manner, make sure you undo that change and go back to a single workspace before continuing.

Now, to configure multiple desktops, open Compiz Settings Manager under Applications > System Tools > Preferences.

Next, navigate to General Options and open the Desktop Size tab.

Set up your workspaces how you want them there.

Updated 19/06/2013
If you still have trouble, ensure that the "obvious" place is configured with "Show only the current workspace" selected and "1" for "Number of workspaces". In compizconfig-settings-manager, "Horizontal Virtual Size" should be "4" and the others "1".

Close the Compiz settings manager and your workspaces should behave properly again.

Install and Use Nemo as default file manager in Ubuntu

After I upgraded to Mint 15 I had a bunch of issues on both my machines. Considering I had already resorted to using the Gnome fallback session instead of MATE or Cinnamon, I decided to reinstall Ubuntu, which will hopefully be less buggy.

One of the things I like better about Mint is the Nemo file manager, which is streets ahead of the default Gnome file manager, the imaginatively named "Files".

Googling for "install nemo ubuntu" or similar returns a bunch of people suggesting you install the "noobslab" repository like this:

sudo add-apt-repository ppa:noobslab/nemo

You can do that if you want to, but if you prefer to minimise the number of third-party repositories on your system, you can simply enable the ubuntu-backports repo in your sources list instead.

Whichever way you choose, simply install Nemo like this:

sudo apt-get install nemo

Now that Nemo is installed, we should make it the default file manager too. Enter this into the terminal:

xdg-mime default nemo.desktop inode/directory application/x-gnome-saved-search
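To confirm the change took, you can query the default handler for directories, which should now report nemo.desktop:

xdg-mime query default inode/directory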

And that's all folks.

Adding a PPA

I'll use the Handbrake video encoder in this example, but this will work for any PPA provided you have the correct ID string. In this case that is "ppa:stebbins/handbrake-releases".

Add the PPA to your apt repository sources:

sudo add-apt-repository ppa:stebbins/handbrake-releases

If you do an apt-get update now you will probably get an error like this:

W: GPG error: http://ppa.launchpad.net raring Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6D975C4791E7EE5E

Add the key like this, replacing the key at the end of the command with the one from your previous key error output.

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 6D975C4791E7EE5E

You should be able to update again without errors.

Sunday, 14 April 2013

ZFS on Ubuntu

Some time back I wrote an article describing how to set up ZFS on FreeBSD.

Well, now that ZFS on Linux has been declared "production ready" I'm going to go ahead and install it on my 12.10 "Quantal" based NAS. I will keep some notes and post them here.

Note: This tutorial will install ZFS as a Linux kernel module, which is not to be confused with the zfs-fuse userland implementation which can be found in the standard Ubuntu repositories.

OK, so before we start, make sure your system is up to date by doing an apt-get update and dist-upgrade.

Now, we add the ZFS ppa to our system:

$ sudo add-apt-repository ppa:zfs-native/stable

Update again and install ZFS:

$ sudo apt-get update
$ sudo apt-get install ubuntu-zfs nfs-kernel-server

We will need some hard disks on which to create a zfs pool. In my example I will use /dev/sdc1 and /dev/sdd1. You will also want to come up with a name for the pool, which will appear as a directory in your file system root. I will use "store" for mine.

Create a mirrored zfs pool "store":

$ sudo zpool create store mirror /dev/sdc1 /dev/sdd1

or

Create a raidz zfs pool "store":

$ sudo zpool create store raidz /dev/sdc1 /dev/sdd1


You can check that this worked:

$ sudo zpool status
  pool: store
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    store       ONLINE       0     0     0
      raidz1-0  ONLINE       0     0     0
        sdc1    ONLINE       0     0     0
        sdd1    ONLINE       0     0     0

errors: No known data errors


and

$ sudo zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
store   3.62T 0G      3.62T   0%   1.00x  ONLINE  -


We can also turn on de-duplication and compression for our pool (note that dedup keeps its tables in RAM and can eat a lot of memory, so only enable it if you are sure you need it):

$ sudo zfs set compression=on store
$ sudo zfs set dedup=on store

So, at this point I have a ZFS pool, which already comes with a default filesystem; there is no need to do a mkfs. You can see that it is mounted using the df command:

# df -h
Filesystem       Size    Used   Avail Capacity  Mounted on
/dev/ada0p2       55G     21G     29G    42%    /
devfs            1.0k    1.0k      0B   100%    /dev
store            3.4T    1.0k    3.4T     0%    /store


Normally you would not just start dumping files straight onto the pool (you can if you really want to, but you lose some of the benefits of ZFS); instead, you create another filesystem to store your files in. You do this with the "zfs" command.

# zfs create store/library

Check your mounted filesystems again;

# df -h
Filesystem       Size    Used   Avail Capacity  Mounted on
/dev/sda1         55G     21G     29G    42%    /
[...]
store            3.4T    1.0k    3.4T     0%    /store
store/library    3.4T    1.0k    3.4T     0%    /store/library


Another neat thing about ZFS is how easy it is to share a filesystem using NFS. Let's share store/library;

$ sudo zfs set sharenfs=rw store/library

Unlike with "normal" NFS there is no need to restart any services after issuing this command, although you should note that is not recommended that you mix "normal" NFS (ie: /etc/exports) with ZFS controlled NFS.

In other words, keep your /etc/exports file empty.

My library filesystem is now shared, but it is open to everybody. Usually I don't care about that at home, but in other scenarios you may wish to restrict access to a specific network (10.1.1.0/24 in my case) and allow the root user full control:

$ sudo zfs set sharenfs='rw=@10.1.1.0/24,no_subtree_check,async,no_root_squash' store/library
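From a client machine, mounting the share is then plain old NFS (the server name and mount point are examples):

$ sudo mount -t nfs server:/store/library /mnt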

At this point we are pretty much done. You can start placing files in your store/library filesystem now.

Some useful commands:

zfs get all store/library 
zfs list
zpool list



Thursday, 21 March 2013

Disabling XML Validation in Eclipse 4 Juno

It seems that every time I set up a project in Eclipse PDT I run up against the problem of a bunch of XML errors being generated from deep inside some third-party library somewhere.

Searching Google shows lots of advice saying you need to go to Preferences>Validators and click "Suspend all validators".

The trouble is that there is no 'Validators' option in that location.

It seems that to turn off this unwanted validation you need to first install
"Eclipse XML Editors and Tools".

Why Eclipse tries to do the validation in the first place, without this module installed, is a question for another day.

Once you have installed that, you can follow the normal steps to "Suspend all Validators" (either globally or for individual projects) as described everywhere on the Intertubes.

One final thing though: be aware that after suspending the validators, you must right-click the project and click "Validate" to remove all the nasty red marks.

Monday, 18 March 2013

SOLVED: "Permission denied" when mounting sshfs

I've just come across an annoying bug while attempting to mount a directory using sshfs.

sshfs brettg@myserver.net:/home/brettg/test /home/brettg/test
fuse: failed to open /dev/fuse: Permission denied


The usual Google search turned up many, many hits explaining that this is due to the user account not being a member of the 'fuse' group.

Trouble is, my user account is a member of the fuse group:

$ groups
brettg adm cdrom sudo dip plugdev fuse lpadmin sambashare libvirtd


Note: To add your user to the fuse group use this command:

sudo usermod -a -G fuse brettg

The problem is that Mint 14 sets the permissions on the fuse device incorrectly, with the result that only the root user is able to use it.

You can confirm this is the case like this:

$ ls -al /dev/fuse
crw------T 1 root root 10, 229 Mar  9 10:15 /dev/fuse


There are two problems here. The first is that the fuse device is not owned by the fuse group. Fix it like this:

$ sudo chgrp fuse /dev/fuse


The next problem is that the group permissions for the fuse device are set to deny access to everyone. Fix that with:

sudo chmod 660 /dev/fuse


The fuse permissions should now look like this:

$ ls -al /dev/fuse
crw-rw---- 1 root fuse 10, 229 Mar  9 10:15 /dev/fuse


Having done this you should now be able to mount a fuse device (such as sshfs) as a normal user (who belongs to the fuse group of course).

UPDATE
Upon reboot, I noticed that the permissions on the fuse device had been partly reset:

$ ls -al /dev/fuse
crw-rw-rwT 1 root root 10, 229 Mar 18 12:03 /dev/fuse



However, this does not appear to have had an adverse effect on my ability to mount, which I find somewhat confusing.
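If the reboot reset bothers you, a udev rule should make the group and mode stick. I haven't tested this myself, so treat it as a sketch: drop a line like the following into a new rules file (e.g. /etc/udev/rules.d/99-fuse.rules) and the device should be created with the right permissions on each boot.

KERNEL=="fuse", GROUP="fuse", MODE="0660"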