Showing posts with label debian. Show all posts

Friday, 6 October 2017

Querying video file metadata with mediainfo

I am working on a script that will query media files (mp4/mkv videos) to obtain metadata that can be subsequently used to rename the file to enforce a naming convention. I use the excellent mediainfo tool (available in the standard repositories) to do this.

mediainfo has a metric tonne of options and functions that you can use for various purposes. In my case I want to know the aspect ratio, vertical height and video codec for the file. This can be done in a single command;

mediainfo --Inform="Video;%DisplayAspectRatio%,%Height%,%Format%" "${_TARGET}"

This works fine and returns something like this;

1.85,720p,AVC

When I say it works fine I mean it works fine in 99% of cases. The other 1% are made up of files that contain more than one video stream. Sometimes people package a JPEG image inside the container which is designated internally as "Video#2". In such cases the above command will also return values relating to the JPEG image producing something like this;

1.85,720p,AVC1.85,720p,JPEG

When this happens my script breaks. The workaround for that is to pipe the results through some unix tools to massage the output;

mediainfo --Inform="Video;%DisplayAspectRatio%,%Height%,%Format%\n" "${_TARGET}" | xargs | awk '{print $1;}'

Things to note in the revised command: there is a newline ("\n") at the end of the --Inform parameters, which puts the unwanted data on a new line like this;

1.85,720p,AVC
1.85,720p,JPEG

xargs will remove that line feed and replace it with a space;

1.85,720p,AVC 1.85,720p,JPEG

And finally awk will produce only the first "word" (space delimited) from the result, which produces the desired output.

1.85,720p,AVC
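An equivalent way to keep only the first line is head -n1, which avoids the xargs/awk round trip. Here is a sketch using simulated two-stream output in place of a real file;

```shell
# A file with a second (JPEG) video stream produces two lines of output.
# Simulated here; a real run would be:
#   mediainfo --Inform="Video;%DisplayAspectRatio%,%Height%,%Format%\n" "${_TARGET}"
output=$'1.85,720p,AVC\n1.85,720p,JPEG'

# head -n1 keeps only the line for the first video stream
printf '%s\n' "$output" | head -n1
```

Either approach works; head is just a little more direct about "give me the first stream only".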

Now obviously this method assumes that the first video stream in the container is the one we are interested in. I'm struggling to imagine a scenario where this would not be the case so at this point I am OK with that. If I find a file that doesn't work I might have to revise my script, but for now I will stick with this solution.

Wednesday, 2 July 2014

Libvirt/qemu/kvm as non-root user

Prerequisites:

A server with KVM

I'm going to use the qemu user that is created when you install KVM but you could use any user you like.

First, your user should belong to the kvm group:

grep kvm /etc/group
kvm:x:36:qemu

Create a libvirt group and add your user to it;

groupadd libvirt
usermod -a -G libvirt qemu


Create a new PolicyKit config to allow access to libvirtd using your user account via SSH;

vi /etc/polkit-1/localauthority/50-local.d/50-libvirt-remote-access.pkla

Add the following content:

[Remote libvirt SSH access]
Identity=unix-group:libvirt
Identity=unix-user:qemu
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes


Restart libvirt

service libvirtd restart
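With that in place you should be able to manage the host remotely over SSH. A sketch of building the connection URI (the hostname here is a placeholder; "qemu" is the user from above);

```shell
# Build the connection URI for remote management over SSH
# (kvmhost.example.com is a placeholder hostname)
libvirt_user="qemu"
libvirt_host="kvmhost.example.com"
uri="qemu+ssh://${libvirt_user}@${libvirt_host}/system"
echo "$uri"
# then, from a remote machine:
#   virsh -c "$uri" list --all
```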

Thursday, 12 December 2013

Fix FontConfig Warning in LMDE

I found another bug in Mint Debian relating to how fonts are set up.

Originally I found the issue while playing about in ImageMagick, which would produce an error like this.

"Fontconfig warning: "/etc/fonts/conf.d/53-monospace-lcd-filter.conf", line 10: Having multiple values in <test> isn't supported and may not work as expected"

You can reproduce that error using this command;

fc-match sans

So, I opened up the file referenced in the error and found it was an XML file.

In the element test name="family" there were two fonts configured; in my case these were "DejaVu Sans Mono" and "Bitstream Vera Sans Mono".

Now, considering that the error was complaining about having two values present, I decided to remove one. I removed the second one.
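For illustration, the offending element looked roughly like this (reconstructed from the description above; your exact font names may differ). Deleting the second string line is what resolves the warning;

```xml
<test name="family">
  <string>DejaVu Sans Mono</string>
  <string>Bitstream Vera Sans Mono</string>
</test>
```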

After doing that, things behaved in a much more polite way;

fc-match sans
DejaVuSans.ttf: "DejaVu Sans" "Book"

Tuesday, 10 December 2013

Problems connecting to libvirtd (KVM) on remote hosts

I ran into this annoying bug trying to connect using SSH (key auth) to libvirtd (running on CentOS6) from a LMDE host.

The error I received was unhelpful.

Unable to connect to libvirt.

Cannot recv data: Value too large for defined data type

Verify that the 'libvirtd' daemon is running
on the remote host.


I was pretty sure that the problem was not with the server running libvirtd because it had been working the day before and was unchanged since then. On the other hand my LMDE install was completely fresh.

To cut to the chase, I don't know what the fix is (it seems to be a bug).

If you read to the end of that bug thread it seems you can work around the problem by using the hostname instead of its FQDN.

For this to work of course you need to be able to resolve the target IP address using just the hostname. Since I was on the same domain as the libvirt server this was simply a matter of defining the domain in /etc/resolv.conf on the client.

domain tuxnetworks.net

If that is not a practical solution (because your client and server are on different domains) I reckon you could probably configure the server hostname as an individual entry in your /etc/hosts file too, although I have not tried that. Let me know in the comments if that works for you!
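For example, a single /etc/hosts entry on the client like this (the IP address and short hostname are invented placeholders) should let the short name resolve;

```
10.1.1.50   kvmhost
```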

Thursday, 19 September 2013

Disable DNSMASQ on KVM host

I have a fleet of servers with bridged, static IPs running as KVM guests. These servers do not require DHCP, yet KVM by default starts up dnsmasq regardless.

Normally this is not an issue but I just so happened to need dnsmasq for DNS on one of the KVM hosts and it would refuse to start due to it being already invoked by libvirt.

You can't just disable the libvirt dnsmasq because it seems to be required for any virtual network that is active. You can however disable the unused virtual network, which has the same effect.

# virsh net-destroy default
# virsh net-autostart --disable default



Then you can configure dnsmasq by editing /etc/dnsmasq.conf and it should work normally.

Saturday, 15 June 2013

Adding a PPA

I'll use the Handbrake video encoder in this example but it will work for any PPA providing you have the correct id string. In this case that is "ppa:stebbins/handbrake-releases"

Add the PPA to your apt repository sources:

sudo add-apt-repository ppa:stebbins/handbrake-releases

If you do an apt-get update now you will probably get an error like this:

W: GPG error: http://ppa.launchpad.net raring Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6D975C4791E7EE5E

Add the key like this, replacing the key at the end of the command with the one from your previous key error output.

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 6D975C4791E7EE5E

You should be able to update again without errors.
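If you want to pull the key id straight out of the error message rather than copying it by hand, a small sketch (the sample error line is the one shown above);

```shell
# Extract the missing key id from an apt-get update error line
err="W: GPG error: http://ppa.launchpad.net raring Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6D975C4791E7EE5E"
key=$(printf '%s\n' "$err" | grep -o 'NO_PUBKEY [0-9A-F]*' | awk '{print $2}')
echo "$key"
# then: sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys "$key"
```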

Monday, 18 March 2013

SOLVED: "Permission denied" when mounting sshfs

I've just come across an annoying bug while attempting to mount a directory using sshfs.

sshfs brettg@myserver.net:/home/brettg/test /home/brettg/test
fuse: failed to open /dev/fuse: Permission denied


The normal google search resulted in many, many hits explaining that this is due to the user account not being a member of the 'fuse' group.

Trouble is, my user account is a member of the fuse group:

$ groups
brettg adm cdrom sudo dip plugdev fuse lpadmin sambashare libvirtd


Note: To add your user to the fuse group use this command:

sudo usermod -a -G fuse brettg

The problem is that Mint 14 sets the user permissions on the fuse device incorrectly which results in only the root user being able to mount it.

You can confirm this is the case like this:

$ ls -al /dev/fuse
crw------T 1 root root 10, 229 Mar  9 10:15 /dev/fuse


There are two problems here. The first is that the fuse device is not owned by the fuse group. Fix it like this:

$ sudo chgrp fuse /dev/fuse


The next problem is that the group permissions for the fuse device are set to deny access to everyone. Fix that with:

sudo chmod 660 /dev/fuse


The fuse permissions should now look like this:

$ ls -al /dev/fuse
crw-rw---- 1 root fuse 10, 229 Mar  9 10:15 /dev/fuse


Having done this you should now be able to mount a fuse device (such as sshfs) as a normal user (who belongs to the fuse group of course).
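If you want to see what chmod 660 actually grants before touching the device node, here is a harmless demonstration on a scratch file (we obviously can't experiment on the real /dev/fuse);

```shell
# Demonstrate the permission fix on a scratch file standing in for /dev/fuse
f=$(mktemp)
chmod 660 "$f"      # rw for owner and group, nothing for others
stat -c '%a' "$f"   # prints the octal mode
rm -f "$f"
```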

UPDATE
Upon reboot, I noticed that the permissions on the fuse device were partly reset;

$ ls -al /dev/fuse
crw-rw-rwT 1 root root 10, 229 Mar 18 12:03 /dev/fuse



However, this does not appear to have had an adverse effect on my ability to mount, presumably because the reset permissions grant read/write access to everyone ("rw" for other), making group membership irrelevant. I still find it somewhat confusing.

Wednesday, 29 August 2012

Unmount stale NFS mounts

If you have a stale NFS mount hanging on your system it can cause various programs and utilities to fail. A typical symptom is a hang when using the 'df' command.

In such cases you can't do umount /path/to/stale/nfs because it will say "the device is busy", or words to that effect.

To fix this you can unmount it with the 'lazy' option;

umount -l /path/to/stale/nfs

If you don't expect that mount point to ever be available again (for example the nfs server was decommissioned) then make sure you adjust /etc/fstab accordingly.
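A quick way to probe a mount point without letting a stale NFS mount hang your shell is to wrap the access in a timeout (a sketch using GNU timeout; the path is the placeholder from above);

```shell
# Probe a mount point with a 2 second timeout so a stale
# NFS mount can't hang the shell the way df does
if timeout 2 ls /path/to/stale/nfs >/dev/null 2>&1; then
    echo "mount responsive"
else
    echo "mount stale or missing"
fi
```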

Sunday, 19 August 2012

Remove Subtitles and Audio tracks from MKV files

To remove unwanted audio and subtitle tracks from Matroska (mkv) files you can use mkvtoolnix;

sudo apt-get install mkvtoolnix-gui

Once it is installed, open up the GUI (Sound & Video menu) and follow these steps;

1) In the "Input files" box on the "Input" tab, browse to the mkv file you want to modify.
2) In the "Tracks, chapters and tags" box, uncheck any part you want to remove (leave the stuff you want to keep checked).
3) In the "Output filename" box, keep the default name or modify it to suit.
4) Click "Start muxing" and wait a minute or two until it completes.

Once you are done, you can delete the original file (after checking it worked, of course!) and rename the new file accordingly.

Thursday, 12 July 2012

HOWTO: Squid 3 Transparent Proxy

A lot of the stuff on the Internet describing how to do transparent proxying is outdated and does not work on more recent distros that sport Squid V3.

This guide is Google's top hit for "squid transparent proxy" but it doesn't work. If you attempt to configure Squid 3 using the "httpd_accel" directives provided in that post, squid will simply fail to start.

It seems that the developers of Squid 3 have streamlined the configuration of Squid's transparent proxy feature down to a single word.

If you find the http_port directive in your squid.conf and add the word "transparent" to the end of it then you have basically configured squid as a transparent proxy.


Find a line like this;


http_port 3128


Add "transparent" to the end so that it looks like this;

http_port 3128 transparent

Restart squid and you are done. All that is required now is to redirect traffic on your firewall to go to the proxy.

You can use your iptables firewall to redirect web traffic (port 80) to your squid proxy. Use the first (DNAT) rule if the proxy runs on a separate host to the firewall, or the second (REDIRECT) rule if squid runs on the firewall itself;

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to 10.1.1.1:3128
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128


This assumes that your LAN adapter (the adapter that your client requests are coming in on) is eth0 and that the IP address of your proxy is 10.1.1.1

You can test that your proxy is working by accessing the Internet from a network client on your LAN and monitoring squid's access log file;


tail -f /var/log/squid3/access.log

If you browse to www.tuxnetworks.com while watching the access.log file then you should see something like this;

1342076113.358      1 10.1.1.14 TCP_HIT/200 437 GET http://www.tuxnetworks.com/ - NONE/- text/html
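The fourth field of each log line carries the cache result. If you want to pull that out programmatically, a sketch (using the sample line above in place of a live tail of /var/log/squid3/access.log);

```shell
# Extract the cache result (e.g. TCP_HIT or TCP_MISS) from an access.log line
line='1342076113.358      1 10.1.1.14 TCP_HIT/200 437 GET http://www.tuxnetworks.com/ - NONE/- text/html'
printf '%s\n' "$line" | awk '{split($4,a,"/"); print a[1]}'
```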

Enjoy! 

Monday, 23 April 2012

Have CRON Log To A Separate File

Sometimes you might want to have cron events logged to a file other than the standard syslog (/var/log/syslog)

This is how you do it.

Edit this file;

vi /etc/rsyslog.d/50-default.conf

Find the line starting with #cron.* and uncomment it.

This will cause all cron events to be logged to /var/log/cron.log (unless you changed the path); however, the same events will also continue to be logged to syslog.

In the same file, find the line that looks like this;

*.*;auth,authpriv.none  -/var/log/syslog

Alter the line so that it looks like this;

*.*;auth,authpriv.none;cron.none  -/var/log/syslog
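If you prefer to script the change, a sed sketch that performs the same edit (back up the file first; the pattern matches the default line shown above);

```shell
# Append ;cron.none to the default syslog selector so cron events skip syslog
line='*.*;auth,authpriv.none  -/var/log/syslog'
printf '%s\n' "$line" | sed 's/auth,authpriv\.none/&;cron.none/'
```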

Restart the logging service;

service rsyslog restart

Now cron will log to /var/log/cron.log but not to syslog


Thursday, 8 March 2012

Remove Unwanted Audio Tracks From AVI Files

If you have downloaded videos from certain sources lately, you may have noticed that it is now possible to create a video container (AVI,MKV) that includes multiple audio channels just like on a DVD.

This is a great thing because it allows people of different languages to use the same video file. Alternately it allows the directors commentary to be included.

That said, I am an English speaker, and I have never had any interest in directors commentaries so all these extra audio tracks represent unwanted data in my movie library.

Also, some files default to playing the commentary or the non English track in some players which is also mildly annoying.

So, in such circumstances you can use avconv to remove the unnecessary tracks from an AVI file (I have not tried it for MKV; I will update this page if I do).

Things you need to install are vlc and avconv (avconv is the replacement for ffmpeg, which is now deprecated);

sudo apt-get install vlc libav-tools

Note: On RedHat based distributions you must install libav instead, i.e.;

yum install libav

You can see what audio tracks are available, and select them, by opening the video file in vlc and looking in Audio > Audio Track.

Once you have determined which track you want to keep, you can run the file through avconv to strip the unwanted tracks. In this example the first map parameter (-map 0:0) keeps the video stream and the second (-map 0:2) keeps the second audio track, dropping track 0:1;

avconv -i sourcefile.avi -map 0:0 -map 0:2 -acodec copy -vcodec copy outfile.avi

And that's it, happy Linuxing (is that a word?)

Wednesday, 15 June 2011

Managing Deluge Daemon

I use the Deluge BitTorrent client on a couple of headless machines. There's not much to it, you can learn how to set it up here

However up until now I've been manually bringing it up and down at the command line, it's not hard but I thought I'd streamline it a bit by making a script.

Download or copy+paste this script into a file called "torrents" and make it executable;

#!/bin/bash

FLAG="/tmp/torrents_on"
UPDATE_FIREWALL="/store/scripts/firewall"

# Checking for dependencies
if [ ! ${DELUGED=`which deluged`} ] ; then echo "ERROR : Can't find 'deluged' on your system, aborting" ; exit 1; fi
if [ ! ${DELUGE_WEB=`which deluge-web`} ] ; then echo "ERROR : Can't find 'deluge-web' on your system, aborting" ; exit 1; fi

DELUGED_PID=`ps ax | grep "${DELUGED}" | grep -v grep | awk '{print $1}'`
if [ "${DELUGED_PID}" = "" ] ; then DELUGED_PID=0 ; fi

DELUGE_WEB_PID=`ps ax | grep "${DELUGE_WEB}" | grep -v grep | awk '{print $1}'`
if [ "${DELUGE_WEB_PID}" = "" ] ; then DELUGE_WEB_PID=0 ; fi

case "$1" in
start)
if [ ! $DELUGED_PID -gt "0" ] ; then
deluged
nohup deluge-web > /dev/null 2>&1 &
touch $FLAG
$UPDATE_FIREWALL
exit 0
else
echo "Deluged is already running (PID $DELUGED_PID)"
exit 1
fi
;;

stop)
if [ ! $DELUGED_PID = "0" ] ; then
kill $DELUGED_PID
kill $DELUGE_WEB_PID
rm $FLAG
$UPDATE_FIREWALL
exit 0
else
echo "Deluged is not running"
exit 1
fi
;;

status)
if [ $DELUGED_PID -gt "0" ] ; then
ps ax | grep deluge | grep -v grep
exit 0
else
echo "Deluged is not running"
exit 0
fi
;;

*)
echo "Usage: torrents {start|stop|status}"
exit 1
;;
esac
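As an aside, the ps|grep|grep -v grep|awk pipeline used above to find the PID can be replaced with pgrep, which avoids the self-match dance entirely. A sketch of the equivalent logic;

```shell
# pgrep -x matches the process name exactly; print 0 when not
# running, matching the DELUGED_PID=0 convention in the script
name="deluged"
pid=$(pgrep -x "$name" | head -n1)
echo "${pid:-0}"
```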


The script will open/close ports on your firewall as required, assuming you modify the UPDATE_FIREWALL variable with the correct location of your firewall script, and modify that script to include something like this;
# Flush tables before re-applying ruleset
sudo /sbin/iptables --flush

# Bittorrent traffic
if [ -f /tmp/torrents_on ] ; then
sudo /sbin/iptables -A INPUT -p tcp --dport 58261 -j ACCEPT
sudo /sbin/iptables -A INPUT -p udp --dport 58261 -j ACCEPT
fi

#

# Drop all other traffic from WAN
sudo /sbin/iptables -A INPUT -i $WAN -j DROP
sudo /sbin/iptables -A FORWARD -i $WAN -j DROP

The above firewall script is for illustration purposes and shouldn't be used as is. Make sure you use a script that suits your own network.

Once installed, you can now use the script to control deluged and the deluge web interface from the command line using this syntax;

torrents {start|stop|status}

Enjoy!

Tuesday, 31 May 2011

FIX: 404 Errors Using apt-cacher-ng

I recently upgraded my Ubuntu server to Debian Squeeze, but not without difficulties.

The first problem was with apt-cacher-ng. Well, actually it was two problems. The first involves an apparent difference between how Ubuntu and Debian configure their apt clients to use a proxy.

On Ubuntu I have always used /etc/apt/apt.conf with a single line pointing to my proxy like so;

Acquire::http { Proxy "http://apt:3142"; };

This didn't seem to work with Debian clients; the fix is to put the same line into a file at /etc/apt/apt.conf.d/01proxy.

Actually you can call the file anything you like, even "apt.conf" but naming the file like this fits in with normal Debian conventions better, which can't be a bad thing.
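Creating that file is a one-liner. The sketch below writes to a temp file purely for illustration; the real target is /etc/apt/apt.conf.d/01proxy, written as root (and "apt" is the proxy hostname from my example above);

```shell
# Write the apt proxy setting (temp file here; the real target
# is /etc/apt/apt.conf.d/01proxy, e.g. via sudo tee)
conf=$(mktemp)
echo 'Acquire::http { Proxy "http://apt:3142"; };' > "$conf"
cat "$conf"
rm -f "$conf"
```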

The second issue I had was with Ubuntu clients using the apt-cacher-ng proxy installed on a Debian server.

I kept getting "404 Not Found" errors every time I did an apt-get update on the (lucid) clients.

Google was no help. There were a few people who were having the same problem but no answers.

Eventually I found the issue myself. There is a file in the apt-cacher-ng directory that has an error. To fix the problem you edit this file;

vi /etc/apt-cacher-ng/backends_ubuntu

and change the mirror line so that it says something like this;

http://au.archive.ubuntu.com/ubuntu/

Of course you should change it to something more local to where you are. The key here is that on my system the trailing /ubuntu/ was missing and this was causing the 404 errors.

Monday, 30 May 2011

HOWTO: Relocate LVM to a New Server

The main storage on my home server is on 2 x 2TB hard disk drives which are configured as an LVM volume.

(For a guide on configuring LVM from scratch see this article.)

My server currently runs Ubuntu 10.04 but I've decided to bite the bullet and swap it over to Debian Squeeze.

I've never moved an LVM volume to another server / OS installation so it's time to learn how to do it, I guess.

Note:
It should be obvious but I will nevertheless say it here anyway. Mucking about with file-systems is a dangerous thing to do and any misstep can lead to disastrous, catastrophic and permanent DATA LOSS! Ensure that you have adequate backups before attempting this procedure. You have been warned!

First, you need to login as root.

sudo -i

Get the details for your current LVM Volume Group(s);

vgdisplay
--- Volume group ---
VG Name store
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 3.64 TiB
PE Size 4.00 MiB
Total PE 953862
Alloc PE / Size 953862 / 3.64 TiB
Free PE / Size 0 / 0
VG UUID 9zwhOn-3Qs6-aPTo-kqQ4-RL4p-ICTA-l56Dsz


As you can see, I have a single volume group called "store".

Let's see what Logical Volumes are in the Volume Group;
lvdisplay
--- Logical volume ---
LV Name /dev/store/archive
VG Name store
LV UUID 80eFYi-n0Z7-9br1-bbfg-1GQ6-Orxf-0wENTU
LV Write Access read/write
LV Status available
# open 1
LV Size 3.64 TiB
Current LE 953862
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:0

We can see that there is a single volume 'archive' in the group.

Check your fstab for the line pertaining to your LVM volume;

cat /etc/fstab

The relevant line in my case is this;

UUID=057272e5-8b66-461a-ad18-c1c198c8dcdd /store/archive ext3 errors=remount-ro 0 1

Make sure you keep this info at hand for later on.

It so happens that I am sharing this volume using NFS so I need to stop my NFS server;

service nfs-kernel-server stop

so that I can unmount it.

umount /store/archive/

Now I need to mark the VG as inactive;
vgchange -an store
0 logical volume(s) in volume group "store" now active

Next I prepare the volume group to be moved by "exporting" it;
vgexport store
Volume group "store" successfully exported

Let's take another look at the volume group details;
vgdisplay
Volume group store is exported
--- Volume group ---
VG Name store
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 5
VG Access read/write
VG Status exported/resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 3.64 TiB
PE Size 4.00 MiB
Total PE 953862
Alloc PE / Size 953862 / 3.64 TiB
Free PE / Size 0 / 0
VG UUID 9zwhOn-3Qs6-aPTo-kqQ4-RL4p-ICTA-l56Dsz

As you can see, the VG Status has changed to "exported"

Now you can shutdown your system and relocate the drives or reinstall the OS. In my case my OS is installed on a removable Compact Flash card on which I have already pre-installed Debian Squeeze, i.e. here is one I prepared earlier!

OK, once our server has rebooted we need to install LVM and associated utils;

sudo apt-get install lvm2 dmsetup reiserfsprogs xfsprogs

At this point the volume group is still flagged as exported, and trying to activate it with vgchange will tell us as much;
vgchange -a y
Volume group "store" is exported

Import the volume group into our new system with the 'vgimport' command, then activate it;

vgimport store
vgchange -a y store

Let's have a look at our logical volumes again;
lvdisplay
--- Logical volume ---
LV Name /dev/store/archive
VG Name store
LV UUID 80eFYi-n0Z7-9br1-bbfg-1GQ6-Orxf-0wENTU
LV Write Access read/write
LV Status available
# open 0
LV Size 3.64 TiB
Current LE 953862
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:0

That looks good. It should be the same as it was in the old system and the LV status should be "available"

Take the line from the fstab file on the old server and add it to the new server;

vi /etc/fstab

Paste the line at the end;

UUID=057272e5-8b66-461a-ad18-c1c198c8dcdd /store/archive ext3 errors=remount-ro 0 1

Recreate the mountpoint if it doesn't already exist;

mkdir -p /store/archive

And finally we can mount the drive;

sudo mount /store/archive/

We can check that it is mounted OK with df;
df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 7387992 930944 6081752 14% /
tmpfs 1025604 0 1025604 0% /lib/init/rw
udev 1020856 184 1020672 1% /dev
tmpfs 1025604 0 1025604 0% /dev/shm
/dev/mapper/store-archive
3845710856 2358214040 1292145880 65% /store/archive



And that's it! Glad I didn't need to resort to my backups . . .

Wednesday, 25 May 2011

Adjust Your File System To Accommodate Flash Drives

I am using a Compact Flash card for my root partition on a small Atom based server at home. I find this to be a good way to build an inexpensive, quiet, low powered server however it does introduce a few special problems due to the absence of any wear leveling logic as used by proper SSD hard drives.

I use a slot loaded CF card which makes it easy for me to pull the card to make an image (using dd) or swap over to another OS without unscrewing the case. However, while using Compact Flash or a plain old USB thumb drive might be cheap and handy, it also brings to a head some issues related to how the system reads and writes to the card.
Flash memory is not happy about writing to the same memory cell over and over, so any cell that is treated in this manner will eventually die. Proper SSDs work around this by incorporating special logic that shifts sectors around automatically so that all the memory cells do roughly the same amount of work. This is called "wear leveling". Dumb old Compact Flash cards don't do wear leveling, but there are some steps we can take to ameliorate this problem to some degree.

These days most Linux systems use EXT3 or EXT4 file systems which are both journaling file systems. This means that every time a file is accessed the journal is updated to record that action. This makes it easier for the file-system to keep track of reads and writes and detect when an error may have occurred. Unfortunately it also means lots of writing (to the journal) which means a premature death for our CF card.

The older EXT2 FS does not do journaling which means we will have less protection but seeing that we are not using the CF to store data (just for the OS) I consider it an acceptable "risk".

To use EXT2 edit your fstab file;

vi /etc/fstab

On the line pertaining to your root (/) file-system change the "ext3" to "ext2", it's that simple.

Another thing that a normal system does is update each file's access time-stamp every time the file is read. That's another thing we want to stop happening.

On the same line, find where it says errors=remount-ro and append to that column ,noatime.

Here is an example line;
UUID=68a316e0-8071-47e3-b31d-718a7be2e498 / ext2 errors=remount-ro,noatime 0 1

Another area that gets written to quite a bit is the /tmp directory.

We can move that off the Flash drive and into RAM using tmpfs. This has the bonus of improving overall system performance by a small amount.

Add this line to your fstab;
tmpfs /tmp tmpfs size=256m,exec,nosuid 0 0

You can tweak the amount of RAM to use, but I wouldn't go below 128MB personally.

And finally, once you install a normal hard disk to your server you should relocate your swap file over to a partition located on it. For now I am simply going to disable swap altogether. To do that just stick a comment "#" at the start of the relevant line in /etc/fstab.

And that's it, these changes should allow your "dumb" Flash drive based OS installation to live a reasonably long and happy life. My current server has been going strong for about 2 years now, fingers crossed!

Modify The System-Wide PATH

For years, when I wanted to add a directory to the system path I have been adding something like PATH=$PATH:/new/path to the end of /etc/profile.

This is because this is the "answer" that many people provide when you do a Google search on how to do it.

However, in Ubuntu the default path is not set in this file so I have always had the feeling that this is not the "right" way to do this. There must be somewhere else that holds this info?

Well I just discovered it.

The system path is set in /etc/environment.

Here is the default path;

brettg@jupiter:~$ cat /etc/environment
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"


To add a path simply stick a colon at the end followed by your new path;

PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/my/new/path"

Easy peasy! (When you know how)
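If you want to try a new path entry before committing it to /etc/environment, you can make the same append for the current shell session only (/my/new/path is the placeholder from the example above);

```shell
# Append a directory to PATH for the current session only
export PATH="$PATH:/my/new/path"
# confirm it landed at the end of PATH
echo "$PATH" | grep -o '/my/new/path$'
```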

Note that this does not apply to Debian (Squeeze), where the system path is set in /etc/profile, the same as on other systems.

Saturday, 21 May 2011

HOWTO: The Five Minute VPN Guide

UPDATE: 05-06-2012, latest version: VPN v1.03

I posted some time ago a guide for setting up a PPP over SSH vpn but it was a bit clunky and I was never fully happy with it.

So, I have spent some time adding all of the hard work to a fully interactive script that has the following features;

* Automatically configure your client details in a "vpn.conf" file.

* Bi-directional routing between the server and all remote nodes is automatically configured.

* Interactive functions to stop and start the VPN

* Function to check the VPN status, attempts restart if down

* Send email to a specified administrator when the state of the VPN changes.

* Automatic setup of keys for passwordless connections.

* Zero scripting skills required

Audience:
Network admins who want to connect two or more offices using an encrypted, secure VPN over the public Internet. Clients at remote sites will access the Internet directly through their local gateway while all internal traffic is automatically routed via encrypted VPN links back to the central site.

Pre-requisites:
Two Debian or Ubuntu servers configured as routers, designated as SERVER and CLIENT. Both must be connected to the Internet, and the SERVER should have a FQDN of some sort (see www.dyndns.com if you don't have your own domain). Assuming you are running a firewall on the server, you must poke a hole in it to allow SSH connections from the Internet.
Author's Note:
This script started out as a simple clean up of the old one but of course that quickly blew out to a full on rewrite until I ended up with an all singing, all dancing mega-vpn management utility that I reckon is the easiest way to setup a VPN ever. Not one line of the original script remains intact!

Anyway, enough of the chest beating, let's get on with the show!

Step 1: Setting up the Server

Clients connecting to the server will do so using a local account on the server that is specifically used for this purpose.

Create a user called "vpn"

sudo adduser --system --group vpn

The --system parameter has set the vpn user's shell to /bin/false. However, because the vpn user needs to be able to log in via ssh, we must change this to /bin/bash in the /etc/passwd file.

Edit the passwd file;

sudo vi /etc/passwd

Modify the line for the vpn user so that it ends with /bin/bash;

vpn:x:110:110::/home/vpn:/bin/bash

We also need to set a password for the "vpn" account;

sudo passwd vpn

NOTE:
The vpn account password will only be used while doing the initial configuration of your VPN clients. You should choose a reasonably complex (secure) password and not be concerned about making it easy to type or remember.

The vpn user needs to be able to bring the ppp connection up and down as well as modify the system routing table. We will configure sudo to allow access to those commands.

Edit your sudoers file;

sudo visudo

Append these lines to the end of the file;

vpn ALL=NOPASSWD: /usr/sbin/pppd
vpn ALL=NOPASSWD: /sbin/route


Finally, we need to log in as the vpn user and set up a few bits in its home folder.

Change to the vpn user;

sudo su vpn

Create an ssh directory;

mkdir .ssh

We need to seed an "active_networks" file with the subnet(s) that the VPN clients should be able to route to. This is usually the local LAN subnet that your VPN server is on, but if you have a more elaborate network there may be multiple subnets; simply add all the ones that you need remote hosts to route to. The "active_networks" file should be located in the vpn user's home folder (and owned by the vpn user).

Create an active_networks file;

vi ~/active_networks

Add a line for the servers local subnet followed by a hash and the server name.

Example:

10.1.1.0/24 # vpn-server

If you have more than one subnet then you can add them too (on separate lines)
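With multiple subnets the file would look something like this (the second subnet is an invented example);

```
10.1.1.0/24 # vpn-server
10.2.0.0/16 # vpn-server
```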

Note:
The active_networks file holds a record of all the remote subnets (VPN network nodes) that the server is connected to. As each client gateway connects to the server, it adds its own LAN subnet to the file. If new routes are added later, you can do a "vpn check" on the client gateway, which will automatically update the local routing table with routes for any other nodes that have recently been added to your network.

And that's all there is for the server part of this howto!

Yes, seriously.

Step 2: Configuring a Client

Configuring the client is even easier than the server; we just need to download and run my new VPN script.

On the client host, download my vpn client script

Put the vpn script file somewhere in your system path or make a link to wherever you decide to put it.

E.g. sudo ln -s /store/scripts/vpn /usr/sbin/vpn

Install dependencies;

sudo apt-get install ipcalc ppp

And finally, execute the script with the "setup" directive;

sudo ./vpn setup

This will create a default config for your VPN. If this will be the first or only client node to be connected to your VPN server then the only required value is the SERVER_HOSTNAME for the VPN server. This should be a FQDN that is pingable from the Internet. For 99% of scenarios the rest of the default settings will work perfectly fine.

Once you have finished the setup, you can start the vpn;
sudo ./vpn start
Starting vpn to vpn.tuxnetworks.net on port 22 - Using interface ppp0
Connect: ppp0 <--> /dev/pts/1
Deflate (15) compression enabled
local  IP address 192.168.2.2
remote IP address 192.168.2.1
Setting static routes
Added route to 10.1.1.0/24
Added route to 10.48.17.0/24

Check the status of your VPN connection;
sudo ./vpn status
---------------------------------------------------------
Local network : 10.1.3.1/255.255.255.0 [eth0]
Connected to  : vpn.tuxnetworks.net
Remote IP     : 192.168.2.1
Local IP      : 192.168.2.2
PID           : 25493

To have your client check whether the VPN is up (and automatically restart it if it's not) add an entry to your system crontab;

sudo vi /etc/crontab

Add a line like so;
*  *    * * *   root    vpn check >> /dev/null 2>&1

This will run every minute, checking the VPN status and adding routes for any new nodes that have been added since the last run.

Note:
Of course the vpn script file will need to be in your system path for this cronjob to execute. (See above)
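As a quick sanity check before relying on the cron job, you can confirm that the script resolves via the PATH. This is a generic shell sketch; it assumes the script is named vpn as above:

```shell
# Check whether 'vpn' resolves via PATH. Cron runs with a restricted
# PATH, so the script (or a symlink to it) should live in a standard
# location such as /usr/sbin.
if command -v vpn >/dev/null 2>&1; then
    echo "vpn found at $(command -v vpn)"
else
    echo "vpn NOT in PATH - create a symlink, e.g. in /usr/sbin"
fi
```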

If you also configure an administrator email address (and your system is able to send emails of course) then it will email the administrator every time the VPN link goes down, and again when it comes back up.

I've tried to make the script as simple as possible to use and I hope I have covered all the possible failure scenarios gracefully.

Give it a go and let me know in the comments or send me an email to the address contained in the script header and tell me what you think.

Any and all feedback is welcome.

Cheers!

Wednesday, 4 May 2011

Getting Up To Speed With IPv6: Basic IPv6 Setup

This is the third in a series of articles that I hope will get you on the road to IPv6 in a relatively painless fashion.

If you arrived here from a search query, you may be interested in reading the Introduction and Setting The Stage articles first.

In order to use IPv6 our router requires a globally routable IP address. This means we cannot use one of the ubiquitous "home network gateways" in its normal mode of operation as a NAT router, because these devices do not offer native support for IPv6 and we cannot do IPv6 over NAT.

Fortunately, most of these devices can be configured in "bridge" mode which will allow a Debian/Ubuntu server to take over the role of our main Internet router thereby paving the way to IPv6 goodness.

Step 1: Configure A Server As An IPv4 Internet Gateway

Follow the IPv4 router guide found here to prepare your router for IPv6. Come back here when you are finished.

OK, once you have your IPv4 router setup and running we can start adding IPv6 support.

Step 2: Basic IPv6 Functionality

Creating an account with a Tunnel Broker:
The first thing you need to do is visit Hurricane Electric's Tunnel Broker page and sign up for an IPv6 tunnel account.

Once you have created your account we need to configure the tunnel on our router.

Creating an IPv6 tunnel:
Login to your tunnel account and click on "Create Regular Tunnel"

Enter the public IP address of your server in the text field called "IPv4 Endpoint (Your side)" (i.e. the IP address of your ppp0 adapter).

Choose a tunnel server that is closest to where you are (you can use traceroute to find one with the lowest number of hops or simply go by geographical location). Going with the default server offered is probably the best bet though.

Click on "Create Tunnel"

You should now see something like this;



And that's it, your tunnel has been created!

Configuring Your Tunnel

Now we need to configure our router to use our new tunnel.

As root, edit your interfaces file;

sudo vi /etc/network/interfaces

Add the following lines replacing the parts in [italics] with the address details as provided on your Tunnel Details Page;

auto ip6tunnel
iface ip6tunnel inet6 v4tunnel
address [Client IPv6 Address]
netmask 64
ttl 64
gateway [Server IPv6 Address]
endpoint [Server IPv4 Address]
local [Client IPv4 Address]

Note: When entering addresses, don't include the trailing "/64" (or similar) as this is the netmask (prefix length) and is not part of the address. It is worth noting that, unlike IPv4, IPv6 supports only CIDR notation (bitwise) netmasks; decimal netmasks are not supported.
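As a concrete illustration, here is the same stanza filled in with example values. All of the addresses below are made up (the IPv4 ones come from the documentation ranges); substitute the values from your own Tunnel Details page:

```
auto ip6tunnel
iface ip6tunnel inet6 v4tunnel
        address 2001:470:c:2345::2
        netmask 64
        ttl 64
        gateway 2001:470:c:2345::1
        endpoint 203.0.113.1
        local 192.0.2.10
```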

Once that is done, save the file and restart your router, I'll wait here until that's finished.

OK, now let's have a look around and do some tests.

In a shell console, enter;

ifconfig ip6tunnel

ip6tunnel Link encap:IPv6-in-IPv4
inet6 addr: fe80::7b02:25c5/128 Scope:Link
inet6 addr: 2001:470:c:2345::2/64 Scope:Global
UP POINTOPOINT RUNNING NOARP MTU:1472 Metric:1
RX packets:170953 errors:0 dropped:0 overruns:0 frame:0
TX packets:168578 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:140703681 (140.7 MB) TX bytes:20945209 (20.9 MB)


Hopefully you will see similar output to above.

Let's try and ping the Hurricane Electric IPv6 DNS servers. The address for this server can be found on the tunnel details page.

IPv6 uses the ping6 command but it works exactly like "normal" ping;
ping6 -c 4 2001:470:20::2
PING 2001:470:20::2(2001:470:20::2) 56 data bytes
64 bytes from 2001:470:20::2: icmp_seq=1 ttl=64 time=437 ms
64 bytes from 2001:470:20::2: icmp_seq=2 ttl=64 time=474 ms
64 bytes from 2001:470:20::2: icmp_seq=3 ttl=64 time=441 ms
64 bytes from 2001:470:20::2: icmp_seq=4 ttl=64 time=510 ms

--- 2001:470:20::2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 437.905/465.959/510.010/29.213 ms


Awesome. Right now you are probably thinking "I can't believe it was that easy!". Well, hold on there cowboy, it's not over yet. Right now our gateway is the only host on our network that can use IPv6, and it can't even resolve names to IPv6 addresses yet.

We can fix name resolution very easily for now by simply adding an entry for HE's IPv6 DNS server to our resolv.conf file;

sudo vi /etc/resolv.conf

Add an extra nameserver line like so;

nameserver 2001:470:d:1018::1

Let's try pinging a name this time;

ping -c 4 ipv6.tuxnetworks.com
ping: unknown host ipv6.tuxnetworks.com


Oops, don't forget we need to use ping6 instead! Let's try it again;

ping6 -c 4 ipv6.tuxnetworks.com
PING ipv6.tuxnetworks.com(2001:470:c:1004::2) 56 data bytes
64 bytes from 2001:470:c:1004::2: icmp_seq=1 ttl=63 time=227 ms
64 bytes from 2001:470:c:1004::2: icmp_seq=2 ttl=63 time=332 ms
64 bytes from 2001:470:c:1004::2: icmp_seq=3 ttl=63 time=230 ms
64 bytes from 2001:470:c:1004::2: icmp_seq=4 ttl=63 time=227 ms

--- ipv6.tuxnetworks.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms


That's a whole lot better but there are still some things to be done. We will cover those in the next article.

Continue on to Step 3: LAN access and autoconfiguration

P.S. Don't forget that June 8 is IPv6 Day!

Getting Up To Speed With IPv6: Introduction

With all the news about IPv4 address exhaustion going around, not to mention that IPv6 Day is just around the corner, I thought it was time to investigate IPv4's successor, IPv6.

(If you are wondering whatever happened to IPv5 then look here)

The good old IPv4 address that we all currently know and love (and understand, I hope) is basically a 32 bit number divided into four 8 bit "octets".

That gives us a theoretical address space of 2^32 (4.3 billion) addresses.

On the other hand, IPv6 uses 128 bit addresses represented as eight groups of four hexadecimal digits, each group representing 16 bits (two octets).

This new address space supports 2^128 (340 undecillion) addresses which is more than we will ever conceivably require on Planet Earth.

Right now you are probably thinking "Yeah, I've heard that before about 640K RAM and we all know how that one worked out" so to put the sheer size of the IPv6 address space into perspective let me quote an excellent IPv6 primer over at Ars Technica, "there are currently 130 million people born each year. If this number of births remains the same until the sun goes dark in 5 billion years, and all of these people live to be 72 years old, they can all have 53 times the address space of the IPv4 Internet for every second of their lives."

Now, I'm sure you'll agree, that is a lot of addresses.

For a good overview of how IPv6 addressing works, I recommend this article.

After reading up a bit about IPv6 you could be excused for concluding that the whole idea of IPv6 is quite daunting and then push the whole damned thing into the "too hard" basket. This is what most people and organisations have been doing up to this point and explains why adoption rates are currently so low.

ISPs in particular are putting off the inevitable by hoarding blocks of the remaining IPv4 space.

Don't be put off by the apparent complexity of IPv6 though!

In practice it is not that hard to get up and running, even if you don't completely understand how it all hangs together at first. I still haven't figured it out properly, but soldier on I will!

As they say, practice makes perfect and this is the intention of the series of articles I will be posting on getting up to speed with IPv6.

Note: A word of warning, these articles are intended to be used for educational purposes only. Because we cannot use an IPv6 address range of our own we are going to be obtaining one through what is known as an "IPv6 Tunnel Broker". This is of course not an ideal situation because we are going to be relying on that broker for all of our IPv6 addresses and routing. I do not advise that you configure a production network for IPv6 connectivity using this guide as you will surely face performance penalties, possible reliability issues, and (most importantly) future migration issues when your ISP eventually starts providing you IPv6 directly. If you are intending to roll out IPv6 in a production scenario I suggest that you choose an ISP that is already providing native IPv6 to their customers.


Next up, Setting The Stage