OK, here is the scenario.
I have a Linux Mint machine (maya) that has not been booted for some time.
I started it up and decided to do some housekeeping as a prelude to upgrading it to 'Nadia'.
The first thing I did was an apt-get update, which failed with a "hash sum mismatch" error when updating the Mint list. (Sorry, I didn't keep a copy of the full error.)
Another machine on my LAN is already on Nadia and it apt-get updates fine.
I use apt-cacher-ng, so I disabled this and tried the update again. This worked, which led me to believe there was some corrupt file in the cache somewhere. I spent hours trying to nail this down and even did apt-get purge apt-cacher-ng followed by a re-install.
None of this worked.
Eventually something twigged in my brain, and I wondered if there was a compatibility problem with the version of apt on this machine. As I said, it's been some time since I updated.
Here is what I did;
1) Edited sources.list and commented out the single Mint line, leaving just the Ubuntu repositories in place. These were updated from 'precise' to 'quantal'.
2) Did an apt-get update, this worked without errors.
3) Upgraded apt (apt-get install apt); this installed about three new files.
4) Edited sources.list again, removing the comment from the Mint line and changing it from 'maya' to 'nadia' while I was there (sample lines below).
5) Another apt-get update and this time there were no errors.
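For reference, here is roughly what the relevant sources.list lines looked like when I was done (the exact mirror URLs here are illustrative; use whatever your file already has);
deb http://archive.ubuntu.com/ubuntu/ quantal main restricted universe multiverse
deb http://packages.linuxmint.com/ nadia main upstream import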
From there everything worked as expected and no more hash sum mismatches!
Tuesday, 25 December 2012
Sunday, 23 December 2012
HOWTO: Build Your Own NAS
Here is the scenario, you have a Debian/Ubuntu server and you want to share files to other clients on your LAN.
This is a very simple process.
Section 1: Setting up a directory to share
1) Create a group "nfs", this group will define the users who can read and write to the share;
sudo addgroup nfs
2) Add a user to the group;
sudo usermod -a -G nfs brett
3) Create a directory for your shared files. I will create a directory called "store"
sudo mkdir /store
4) Change its group ownership to our "nfs" group
sudo chgrp nfs /store
5) Change permissions to allow the "nfs" group full access (and deny everyone else);
sudo chmod 770 /store
Achievement check: You should be able to create files in /store while logged in as a local user who is a member of the "nfs" group.
Section 2 : Sharing with NFS server
1) Install NFS server;
sudo apt-get install nfs-kernel-server
2) Edit the exports file;
sudo vi /etc/exports
Create a simple read/write share that any network host may connect to. Add this line to the end of the file;
/store *(rw,sync,no_subtree_check,no_root_squash)
3) Restart the NFS server
sudo service nfs-kernel-server restart
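Optionally (a sanity check I'll add here), you can confirm the export is active, either on the server itself or from any machine with the NFS client utilities installed;
sudo exportfs -v
showmount -e nas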
Section 3 : Connecting with a client
To complete this section, you will need to be able to ping your server from the client using an IP address or DNS name. If you can't, then there is no point continuing. My server name is "nas".
1) On your client, confirm that you can ping your server;
ping nas
PING nas (10.1.1.1) 56(84) bytes of data.
64 bytes from nas (10.1.1.1): icmp_req=1 ttl=64 time=3.22 ms
2) Create a directory to mount the share on. We will use a directory called "nasmount" here but you can use anything you like;
sudo mkdir /nasmount
3) Edit your fstab file;
sudo vi /etc/fstab
Add this line to the end , replacing the servername (nas) with the IP or name of your server;
nas:/store /nasmount nfs rw,defaults 0 0
4) Mount the share;
sudo mount /nasmount
5) Verify that the share is mounted using df;
df -h
nas:/store 11G 11G 255M 98% /nasmount
Achievement Check: You should be able to create and delete files in the shared directory from the client*.
* You will need to use a user account on the client with the same UID as the account we set up earlier on the server. In this example the user account was "brett" with a UID of 1000. You can use something like LDAP to centrally manage user accounts across a network if you have a lot of users.
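To check that the UIDs line up, you can run the id command on both machines and compare. The output will look something like this (illustrative);
id brett
uid=1000(brett) gid=1000(brett) groups=1000(brett),1001(nfs)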
Copy a hard disk between PC's
This is a variation on this post.
I use this when I have a machine that I want to convert to a virtual machine in cases where VMWare Converter is unsuitable. eg FreeBSD
Prerequisites:
1) Ubuntu LiveCD x 2 or a CD and an ISO image
2) A pre-made virtual guest machine with a virtual HDD of sufficient size.
3) A working network connection between the two. You should also note down the IP address of the target machine.
Step 1:
Boot both the source machine and the new virtual machine off the LiveCD. Choose "Try Ubuntu without changing my computer"
Step 2:
Partition the virtual hard disk on the target PC using fdisk or gparted. Make sure you use the correct partition type ie: FreeBSD
Step 3: (logged in as root on the virtual machine)
Start netcat listening on port 5000 and pipe its output to the virtual hdd (via gunzip);
nc -l -p 5000 | gunzip | dd of=/dev/sda1
Step 4: (logged in as root on the physical machine)
Do a dd of the source drive and pipe its output to nc using port 5000 (via gzip)
dd if=/dev/sda1 | gzip -1 | nc hostip 5000
You can increase the amount of compression done by gzip to a maximum of 9, but if you are using slow CPUs and a fast network then this can actually make the whole transfer slower. Feel free to experiment yourself.
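As an optional sanity check (my addition, not strictly required), you can checksum the partition on both machines afterwards and compare the results;
md5sum /dev/sda1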
Steam Has Arrived!
Sure, it is still beta, but the beta has just been made open.
To install it simply download the deb package from the above page and install it using dpkg.
sudo dpkg -i steam_latest.deb
You may need to fix up some dependencies;
sudo apt-get install -f
If you use a 64 bit distro you may also need to install the 32 bit libraries;
sudo apt-get install ia32-libs
Once you are done, simply double click the Steam icon on your desktop or find it in the Games section of your Applications menu.
Be aware that the first time you run Steam it will need to download the bulk of the application from the Internet (steam_latest.deb simply sets up the repository and acts as an installer).
I only have 3 games that work so far but that's a start.
Saturday, 8 December 2012
Google Fu, Finding files
Search google for ubuntu iso files (or anything else);
?intitle:index.of?iso ubuntu
Thursday, 22 November 2012
Convert ape to flac
If you have a "Monkey's Audio" file (.ape), converting it is pretty simple using avconv;
avconv -i audiofile.ape audiofile.flac
To install avconv you need to install the unhelpfully named package 'libav-tools';
sudo apt-get install libav-tools
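If you have a whole directory of .ape files, a small shell loop (a sketch of my own) will convert the lot;
for f in *.ape; do avconv -i "$f" "${f%.ape}.flac"; done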
Saturday, 17 November 2012
HOWTO: Partition a drive over 2TB
I just got a 3TB hard drive but when I partitioned it the way I normally do with fdisk it was only recognised as 2TB.
Here's how to do it using parted (as root);
# parted /dev/sdb # Substitute with the drive device you are trying to partition
> mklabel gpt # A gpt partition table is needed for partitions over 2TB
> mkpart pri 1 -1 # Makes a new primary partition using the whole disk
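Optionally, you can confirm the new layout and exit parted before formatting;
> print # Displays the partition table, the new partition should span the whole disk
> quit # Exit parted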
Now you can format the partition the usual way. Here is an ext4 example;
mkfs -t ext4 /dev/sdb1
Tuesday, 25 September 2012
Kobo Touch EReader Induces RAGE FACTOR 9
WARNING: Ranting, SHOUTING and rude words ahead. Read at your own risk.
So, I went and bought my Dad a Kobo Touch and it arrived today.
I chose the Kobo over the Kindle because I figured it would be less tied into Amazon's so-called "Ecosystem", and Sony are the kings of proprietary formats. Also the Kobo is supported by Calibre, so I reckoned the odds were that it would be reasonable quality (most reviews were positive) and hopefully I wouldn't need to feck about in Windows just to use it.
This view was reinforced when I did what I always do when considering buying a product which undoubtedly would come with software that only works on Windows, which is to Google for "kobo linux" and see what comes back.
The top result was this blog which cheerfully stated that using Kobo on your Linux PC is super easy;
- Plug it in to your Ubuntu machine. It shows up as a USB storage device.
- Drag and drop books in any supported format onto it.
- Unplug, switch on, read books.
Neat! Except for one tiny thing.
It turns out that the advice given there is pretty much totally wrong.
You see, when you fire up your shiny new Kobo the first thing it presents you with is a screen that tells you that you need to "setup your ereader by browsing to www.kobosetup.com". Ummm, no, I don't reckon I should need to do that; after all, the aforementioned site says this on the subject;
The setup software is Win / Mac only, but you don’t need it. When you start the device, it insists that you run the setup software. You don’t have to. As far as I can tell, the setup does two things:
- Forces you to create a kobobooks.com account. Lame.
- Updates the software on the device.
I should note here that the above blog was written over a year ago and it is possible that the fun guys in the Kobo marketing department have spent that year "improving the user experience" of their formerly excellent product, and as part of that "improvement" they have made the setup process mandatory and "improved" it to the point that a retarded chimpanzee would be embarrassed to wipe his arse with it, so enough said of that.
Anyway, it is true that you are given a skip button, but when you choose to skip you are told that "setting up your Kobo is important" and you will be "reminded later".
When they say "reminded later" they actually mean "nagged relentlessly".
Undaunted, I ignored the nagging, figuring I could find a way to turn it off later and went straight to Calibre. Cool, straight away the device was detected and I was able to send books to it.
Browsing in Nautilus showed that the books were indeed copied onto the Kobo as expected.
However, after ejecting the device and attempting to view my library I found that there were no books on it at all. Further, when I re-connected the USB cable, Calibre had changed its mind too and was now reporting that, in fact, there were no books on the device after all.
Rinse, lather, repeat.
It turns out that you MUST download the "Kobo Desktop" application, install it, run it, allow it to "Connect to your Kobo account" (which seemed to do nothing) and then download what looked to be a massive "upgrade".
Going by the time it took to download I'm talking an iOS sized update download in the hundreds of megabytes range.
When it finally completed the upgrade it started "Checking for books".
This went on for over half an hour until I gave up and pressed "cancel", after which it congratulated me on successfully setting the thing up. WTF? Whatever dude.
Ok, cool, apparently it is done, so, I rebooted to Linux so I could try Calibre again.
While doing so I disconnected it from the USB.
The device then went straight to a screen that basically said "Updating" in 10 different languages, it sat there for a couple of minutes, restarted and went straight back to nagging me about doing the "Computer setup" thing again! Gaah!
So, reboot to bloody Windows again and the "Setup your Kobo" thing autostarts.
Now it wants me to login to my "Kobo Account". Hmmm, it did seem a tad strange that it tried connecting to my imaginary Kobo account earlier but appeared to do nothing.
Anyway, I do not have, nor do I wish to have, a fucking Kobo Account.
Apparently I can also use Facebook. Unfortunately, my negative feelings towards participating in Faceplant in any way at all pretty much means I guess I will have to open a goddam Kobo account.
I hate these fucking companies. No, really.
So I register with my dodgy email address, making sure, as a big FU to "Team Kobo" that I untick the "Spam me relentlessly with crap I don't want to buy" tick box anyway and let this god forsaken piece of crap continue on with whatever it is it needs to do.
To really ice the cake, the stupid setup app cheerfully informs me when it is finished that now "You and your friends can now share your Kobo reading activity through Facebook Timeline"
O'rlly? WHY IN GODS NAME WOULD I EVER WANT TO DO THAT?
Did I mention how much I fucking hate these companies?
OK, whatever, at least it is finished, hopefully I will never have to revisit this retarded piece of crap ever again.
Before I could manage to restart however, I note that the idiotic Kobo app had now noticed that I don't have any books yet and started presenting me with a screen full of random books with YES/NO buttons beneath. Apparently it was interested in knowing if I had read them and whether I liked them or not. I assume that this is so they can bombard me with "helpful suggestions" as to what books I might like to purchase through their stupid fucking book store.
I am not making this up.
Note To Kobo Marketing Dept: NO I HAVEN'T READ THOSE BOOKS AND I HAVE NO INTENTION OF PARTICIPATING IN YOUR INFURIATING MARKETING CAMPAIGN OR EVER BUYING A FUCKING BOOK FROM YOUR STORE.
P.S. Also, the two ereaders I was considering buying my kids for xmas will almost certainly not have the Kobo logo on them.
Anyway, now when I unplug the device it just says "Sleep mode" instead of nagging about setting it up so that's a good sign I guess.
Uhoh, nope, not a good sign after all. When the Kobo touch screen is activated it displays a "release notes" screen which you can just click past, which then returns you to the hated "Computer Setup" nag screen.
WHAT THE FUCK!
Even better, now when I start Calibre, the device is not recognised at all.
FFS. I have actually gone backwards.
Once more, back to Windows I go and into the hated "Kobo Desktop" app.
The stupid Kobo app can't see it either and the device is now permanently in "Sleep mode."
I am just about ready to pack this thing up and send it back.
But no, I decide to persist. Back to google and I discover I can do a reset by pushing a button inside the tiniest hole I have ever seen. It looks even tinier than the hole you use to eject the sim card in an iphone. I resort to using a stripped wire tie to reset the thing.
Oh, happy days, it is asking me to connect to Facebook again.
This had better be the last time it asks me that or god help me . . . .
I press the FUCK OFF button.
Now I reach an ugly web2.0 style mash-up of book covers in my "Library" section.
Oh, but hold on, what's that? Those are the books I put on there a mere 2 hours ago!
Joy!
Apparently putting the books on the Kobo had been successful all along, it's just that the idiotic thing decided it would refuse to see them until I had signed up to their abomination of a book store.
That is the work of assorted Business Diploma & Marketing 'tards I'm sure.
Anyway, finally I had some success. The trouble is that now, I'm torn on whether I should be glad my books are finally readable or fucking furious at wasting over 2 hours getting to this point.
This is where I put on my grumpy old man hat and remember a day when you could buy a piece of tat, open the box and just use it. The worst that could happen is that it is Xmas day and you forgot to buy batteries. Oops. What do you mean the shops aren't open dad?
Imagine that, you opened the box, plugged it in and it did what it was supposed to do.
If it didn't it was broken and you took it back.
So what is it with this crap these days? Every arsehole with a Degree in Marketing wanting to tie you into fucking web stores and "ecosystems" (whatever the hell they are) and then wanting you to put your shit all over the Internet via Facestab or Twitter or whatever happens to be trendy at the moment.
When did we all decide that this sort of corporate rogering sans the lube and without even a courtesy reach-around was acceptable?
I bought the Kobo in a (clearly) misguided attempt to avoid at least some of that nonsense but no, it just doesn't seem possible to buy stuff these days without being required to undergo a full rectal examination in order to even use a device that you paid for with proper money.
Fuck you Kobo, Amazon, Apple and all you other privacy leeching sociopathic shitstain companies. FU to Facebook too. And when Microsoft start pushing everything through their new Win8 appstore that will be another entry on the long list of things to hate them for too.
Oh, and get off my god damn lawn!
Yeah, I'm totally going with furious.
Epilog:
After battling for more than 2 hours with this infuriating thing I decided to go take a dump and do a bit of reading on it.
I reckoned I deserved it.
So, I settled in and fired up a book. I flipped through a page or three and something pops up at the bottom of the screen.
What is that? Hmmm, it looks like a facestab logo.
Next to the facestab logo it says, and I shit you not, "New award: Page Turner"
What?
Am I playing a game here or reading a god damned book?
And most importantly, DO YOU THINK THAT I AM 8 YEARS FUCKING OLD?
Die marketing scum, DIE NOW!
Friday, 14 September 2012
Converting wavpack files
I recently obtained a music album which was in a wavpack (.wv) file.
First thing I thought was what the hell is that?
Second thing was how do I get to the audio tracks?
I had these two files;
CDImage.cue
CDImage.wv
First thing to do is install some packages;
sudo apt-get install wavpack cuetools shntool libav-tools
Next thing is to split the file into discrete tracks;
cuebreakpoints CDImage.cue | shnsplit -o wv CDImage.wv
Then you simply need to convert the split file to another format.
avconv -i split-track01.wv test.flac
A simple script can be used to do multiple files;
#!/bin/sh
for file in *.wv; do
avconv -i "$file" "${file%.wv}.flac" # quote to handle spaces; strip the .wv extension
done
Thursday, 30 August 2012
Install scp, rsync and other tools on CentOS
When you do a "minimal" install of CentOS it doesn't install things like rsync and scp by default.
To install them do;
yum install openssh-clients rsync
Wednesday, 29 August 2012
Unmount stale NFS mounts
If you have a stale NFS mount hanging on your system it can cause various programs and utilities to fail.
A typical symptom is a hang when using the 'df' command.
In such cases you can't do umount /path/to/stale/nfs because it will say "the device is busy", or words to that effect.
To fix this you can unmount it with the 'lazy' option;
umount -l /path/to/stale/nfs
If you don't expect that mount point to ever be available again (for example the nfs server was decommissioned) then make sure you adjust /etc/fstab accordingly.
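For example (a hypothetical entry), the fstab line to comment out or remove would look something like this;
# oldserver:/export /path/to/stale/nfs nfs defaults 0 0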
Sunday, 19 August 2012
Remove Subtitles and Audio tracks from MKV files
To remove unwanted audio and subtitle tracks from Matroska (mkv) files you can use the MKVToolNix GUI;
sudo apt-get install mkvtoolnix-gui
Once it is installed then open up the gui (Sound & Video menu) and follow these steps;
1) In the "Input files" box on the "Input" tab browse to the mkv file you want to modify.
2) In the "Tracks, chapters and tags" box uncheck any part you want to remove (leave the stuff you want to keep checked)
3) In the "Output filename" box keep the default name or modify to suit.
4) Click "Start muxing" and wait a minute or two until it completes.
Once you are done, you can delete the original file (after checking it worked of course!) and rename the new file accordingly.
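If you would rather skip the GUI, the same package provides the mkvmerge command line tool. A rough equivalent of the steps above (the track IDs here are illustrative; list the real ones first with the -i switch);
mkvmerge -i input.mkv
mkvmerge -o output.mkv --audio-tracks 1 --no-subtitles input.mkv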
Wednesday, 18 July 2012
Valve Launches Steam For Linux Blog
Still no launch date but at least we have somewhere to watch for news now!
http://blogs.valvesoftware.com/linux/
Tuesday, 17 July 2012
Managing MYSQL users
These are a few commands I use in mysql to manage users and grants. I do this infrequently so I put them here to save having to google them when I need them.
Grant a user 'dev' all privileges on a database called "test";
mysql> GRANT ALL PRIVILEGES ON `test`.* TO 'dev'@'localhost' IDENTIFIED BY 'devtest';
Query OK, 0 rows affected (0.02 sec)
See full syntax for GRANT command
To see what privileges a user has been granted;
mysql> SHOW GRANTS FOR 'dev'@'localhost';
+-----------------------------------------------------------------------------------------+
| Grants for dev@localhost |
+-----------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'dev'@'localhost' IDENTIFIED BY PASSWORD '*D98YCCE724CCT7BFA48E1' |
| GRANT ALL PRIVILEGES ON `test`.* TO 'dev'@'localhost' |
+-----------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)
Sometimes I need to list all the users that have had permissions granted to them;
mysql> SELECT CONCAT('SHOW GRANTS FOR \'', user,'\'@\'', host, '\';') AS mygrants FROM mysql.user ORDER BY mygrants;
+-------------------------------------------------+
| mygrants |
+-------------------------------------------------+
| SHOW GRANTS FOR ''@'localhost'; |
| SHOW GRANTS FOR 'debian-sys-maint'@'localhost'; |
| SHOW GRANTS FOR 'dev'@'192.168.4.2'; |
| SHOW GRANTS FOR 'dev'@'localhost'; |
| SHOW GRANTS FOR 'root'@'127.0.0.1'; |
| SHOW GRANTS FOR 'root'@'::1'; |
| SHOW GRANTS FOR 'root'@'localhost'; |
+-------------------------------------------------+
From that table you can copy-paste the relevant line to see the grants for a particular user.
Revoke a grant
mysql> REVOKE ALL PRIVILEGES ON `test`.* FROM 'dev'@'localhost';
Query OK, 0 rows affected (0.02 sec)
After revoking a user's privileges, you will notice that the user still shows up with USAGE rights. To make a user go away completely you need to "drop" them;
mysql> drop user 'dev'@'localhost';
Query OK, 0 rows affected (0.00 sec)
Thursday, 12 July 2012
HOWTO: Squid 3 Transparent Proxy
A lot of the stuff on the Internet describing how to do transparent proxy is outdated and does not work on more recent distros that sport Squid V3.
This guide is Google's top hit for "squid transparent proxy", but it doesn't work. If you attempt to configure Squid 3 using the "httpd_accel" directives provided in that post, squid will simply fail to start.
It seems that the developers of Squid 3 have streamlined the configuration of squid's transparent proxy feature down to a single word.
If you find the http_port directive in your squid.conf and add the word "transparent" to the end of it then you have basically configured squid as a transparent proxy.
Find a line like this;
http_port 3128
Add "transparent" to the end so that it looks like this;
http_port 3128 transparent
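Note: on Squid 3.2 and later the keyword was renamed, so if "transparent" is rejected on your version try "intercept" instead;
http_port 3128 intercept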
Restart squid and you are done. All that is required now is to redirect traffic on your firewall to go to the proxy.
You can use your iptables firewall to redirect web traffic (port 80) to your squid proxy with one of these commands (use DNAT when squid runs on a different host to the firewall, or REDIRECT when squid runs on the firewall itself);
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to 10.1.1.1:3128
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
This assumes that your LAN adaptor (the adapter that your client requests are coming in on) is eth0 and that the IP address of your proxy is 10.1.1.1
You can test that your proxy is working by accessing the Internet from a network client on your LAN and monitoring squids access log file;
tail -f /var/log/squid3/access.log
If you browse to www.tuxnetworks.com while watching the access.log file then you should see something like this;
1342076113.358 1 10.1.1.14 TCP_HIT/200 437 GET http://www.tuxnetworks.com/ - NONE/- text/html
Enjoy!
Tuesday, 10 July 2012
HOWTO: nvidia-173 on Mint 13 (and Ubuntu 12.04 Precise)
I tried to install Mint 13 on an ancient PC with a Geforce 6200 graphics card.
It didn't work.
The symptom was that Cinnamon was missing all panels and window borders; all that was visible on the desktop was the wallpaper and default icons.
It was possible to right-click the desktop and open a shell.
I then installed Mate desktop which worked, but was horribly slow.
I determined that the problem was with the nvidia-current driver, and that for the older 6200 adapter I needed to use the legacy nvidia 173 driver.
I couldn't install that due to an unresolvable dependency error. AAARGH!
I downloaded the binary from the nvidia website but that refused to build the kernel modules without providing any useful error feedback. AAAARGH again!
Eventually I found some clues on the 'net suggesting downgrading to the version of X from the oneiric repository.
This is how you do that.
Add this repository to your sources list file.
deb http://archive.ubuntu.com/ubuntu/ oneiric main
Edit your apt preferences file;
# vi /etc/apt/preferences
Add a section as follows;
Package: xorg xserver-xorg*
Pin: release a=oneiric
Pin-Priority: 1050
This will instruct your package manager to always use the oneiric repository for xorg and xserver* packages.
Update your sources and do an upgrade;
apt-get update && apt-get upgrade
Explicitly install the x server packages along with the nvidia-173 legacy package.
sudo apt-get install xorg xserver-xorg-input-all xserver-xorg-video-all nvidia-173 nvidia-settings
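To double check that the pin has taken effect, apt-cache policy should show the oneiric version installed with a pin priority of 1050;
apt-cache policy xserver-xorg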
Update: If you take a look at which driver you are using in the "Additional Drivers" utility it may report that "This driver is activated but not currently in use". This is an error in jockey, which is not reporting the driver status properly.
Tuesday, 26 June 2012
HOWTO: Simple Git Repository
This is by no means a comprehensive guide to the Git version control system but is rather a few of the basic operations that I use to maintain a few small projects and scripts.
It is also worth understanding that unlike svn, git is a peer to peer based version control system where every "client" is a repository in its own right which can also push and pull from a "master" repository as required.
The first thing to do of course is to install git on your server. In my case this will be a PC named "jupiter". You will also require ssh server so we will also install that;
For Debian/Ubuntu/Mint;
$ sudo apt-get install git openssh-server
or for Redhat/CentOS/Mandriva/Suse
$ sudo yum install git openssh-server
I will be assuming that the user account that you logged in as will be the account that has access to the repository. Fine grained user control is not covered in this document.
Before we begin building a repository, we should configure git for our user account (change the following commands to suit your own user details).
$ git config --global user.name "brett"
$ git config --global user.email "brett_AT_tuxnetworks.com"
$ git config --global core.autocrlf input
$ git config --global core.safecrlf true
This will create a ".gitconfig" file in your home directory. Edit this file;
$ vi ~/.gitconfig
Add some aliases to the end of the file;
[alias]
co = checkout
ci = commit
st = status
br = branch
hist = log --pretty=format:\"%h %ad | %s%d [%an]\" --graph --date=short
type = cat-file -t
dump = cat-file -p
Let's create a test project directory for our base repository and change to it;
$ mkdir -p ~/repository/project
$ cd ~/repository/project
OK, now we will create a "bare" repository. You can think of this as the master "shared" repository.
$ git --bare init
Initialized empty Git repository in /home/brett/repository/project/.git/
Do a directory listing and you will see this;
$ ls -al
total 40
drwxrwxr-x 7 brett brett 4096 Jun 26 11:24 .
drwxrwxr-x 3 brett brett 4096 Jun 26 11:24 ..
drwxrwxr-x 2 brett brett 4096 Jun 26 11:24 branches
-rw-rw-r-- 1 brett brett 66 Jun 26 11:24 config
-rw-rw-r-- 1 brett brett 73 Jun 26 11:24 description
-rw-rw-r-- 1 brett brett 23 Jun 26 11:24 HEAD
drwxrwxr-x 2 brett brett 4096 Jun 26 11:24 hooks
drwxrwxr-x 2 brett brett 4096 Jun 26 11:24 info
drwxrwxr-x 4 brett brett 4096 Jun 26 11:24 objects
drwxrwxr-x 4 brett brett 4096 Jun 26 11:24 refs
Surprisingly, that is all that is required on the master repository.
Now, we move to the client which will do the initial code push to the repository.
Note: This can be the same host as the server or a different one altogether. In this case I will log on to a different host which is configured with the same user account details as well as ssh key authorization.
To save configuring git again on this host, you can simply copy the git config file over from my server "jupiter";
scp jupiter:.gitconfig ~
Create a folder for your project in your home directory;
$ mkdir ~/project
$ cd ~/project
Initialize a local repository, this time without the "--bare" option
$ git init
Initialized empty Git repository in /home/brett/project/.git/
This time if we take a look we will see the directory is totally different than before (on the master);
$ ls -al
total 12
drwxrwxr-x 3 brett brett 4096 Jun 26 11:23 .
drwxr-xr-x 16 brett brett 4096 Jun 26 11:24 ..
drwxrwxr-x 7 brett brett 4096 Jun 26 11:27 .git
Now we tell our local repository to track the master repository on jupiter;
$ git remote add --track master repo brett@jupiter:repository/project
Note: This command tells git that the branch is the master, and assigns the repository the name "repo". Other guides and examples you come across will generally use the name "origin" in place of "repo" by convention. I use "repo" but you can also have multiple remotes and give them all different names like "staging", "testing", "production" etc.
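As an aside, if the bare repository already exists you can achieve the same local setup in one step with git clone, which creates a remote named "origin" with tracking already configured (we won't use that shortcut here);
$ git clone brett@jupiter:repository/project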
Of course we will need to have a file to be version managed by git. Create one now;
$ echo "This is my source file" > source.file
Tell git to manage the source file;
$ git add source.file
The "status" command can be used to see what state our local repository is in;
$ git status
# On branch master
#
# Initial commit
#
# Changes to be committed:
# (use "git rm --cached ..." to unstage)
#
# new file: source.file
We can see that source.file is a new, uncommitted file.
Let's commit it to the local repository now;
$ git commit -m "This is my first commit"
[master (root-commit) 4714b45] This is my first commit
1 file changed, 1 insertion(+)
create mode 100644 source.file
At the moment our file is only commited to the local repository. We need to push it to the master;
$ git push --tags repo master
Counting objects: 3, done.
Writing objects: 100% (3/3), 250 bytes, done.
Total 3 (delta 0), reused 0 (delta 0)
To jupiter:repository/project
* [new branch] master -> master
Note: For subsequent pushes and pulls, you may omit the "master" parameter as we configured the repository to track the master branch using "--track master" when we set it up. It is required for the first push though I'm not sure why.
We can take a look at the history using the "hist" alias (we configured that in .gitconfig earlier)
$ git hist
* 4714b45 2012-06-26 | This is my first commit (HEAD, repo/master, master) [brett]
Cool. If we want to "pull" the source out of the repository then use the "pull" command instead of "push"
$ git pull repo
So there you have it, a rudimentary git setup.
Here are a few other commands that are useful;
Cancel an uncommitted change;
$ git reset HEAD
$ git checkout
Add a tag;
$ git tag "v1.0"
Push tags;
$ git push --tags repo
Total 0 (delta 0), reused 0 (delta 0)
To jupiter:repository/project
* [new tag] v1.0 -> v1.0
Remove a tag;
$ git tag -d "v1.0"
Deleted tag 'v1.0' (was 4714b45)
Push deleted tags;
$ git push repo :refs/tags/"v1.0"
To jupiter:repository/project
- [deleted] v1.0
Thursday, 21 June 2012
HOWTO: Subversion 1.7.x on Centos 6
Prerequisites:
* Minimal CentOS 6 installation
* SELinux & Firewall disabled
First, I'm going to start off with a bit of a mini-rant.
As usual, installing stuff on RHEL/CentOS is much harder than it is on Debian based systems. For one, the repositories are far more limited and what packages there are are hopelessly outdated. Of course I understand that the RHEL philosophy is to freeze packages for a particular major version (6 in this case) and only provide security and bug fixes to these packages because this makes sense when running servers in a business environment, which is, of course, their target market.
This is a good thing.
However sometimes you want a newer version of something for whatever reason. Debian manages this by having a backports repository which can be optionally enabled to allow easy access to newer packages from the Debian testing branch. From what I can tell RHEL/CentOS do not have an equivalent option. Of course there are third party repositories that provide access to newer packages (to a degree), but coverage is sporadic at best.
In this case we will be required to resort to manually downloading third party RPM packages from WANDisco because they are not provided via any repository I could find. Apparently you are meant to fill out some sort of webform where you have to provide them with your personal details and "request" the packages, along with an installer script that will allegedly install all the dependencies.
Feel free to go and fill out that form, however, I wasn't willing to go that route, it evoked too many memories of when I used to be a Windows user, where everything comes with strings attached.
The good news is that the RPMs are available without having to request them by going directly to here. The bad news is that these packages are for RHEL/CentOS 5 and I have been unable to find the equivalent packages for CentOS 6. This crucial difference will cause us to briefly flirt with the dreaded dependency hell later on.
So, the first step is to obtain the following packages (64 bit links to my site below);
subversion-1.7.5-1.x86_64.rpm
mod_dav_svn-1.7.5-1.x86_64.rpm
neon-0.25.5-10.el5_4.1.x86_64.rpm
Note: If you don't want to trust my linked packages (and why should you?) or you require 32 bit versions then the first two are available from the WANDisco website mentioned above, the third I found at rpm.pbone.net.
OK, with the three files in hand, we probably should ensure our system is up to date before we proceed.
yum update
Subversion requires Apache web server, let's install it now;
yum install httpd
You probably want Apache to start automatically after a reboot;
chkconfig httpd on
We should also start it now;
service httpd start
These dependencies are required for SVN 1.7.x and thankfully they are all available in the standard CentOS repository;
yum install openssl098e compat-db43 compat-expat1 compat-openldap
Tip: In cases like this you can use something like "yum whatprovides */libldap-2.3.so.0" to find where your dependencies live
Here's where we hit a minor snag. If we try and install the subversion rpm at this point it will complain that it depends on neon-0.25 but the CentOS6 repository provides v0.29.
Another rant: This is another bugbear I have with the yum/rpm system. It seems to be much more finicky than Debian and backwards compatibility is often non-existent. In this case the WANDisco RPM has been built to explicitly require neon 0.25 even though I am pretty sure that v 0.29 is fully backwards compatible and would work. It's a stupid situation and one that I honestly can't remember finding in Debian/Ubuntu over nearly 10 years of using that distro. Maybe that is because you are not forced to rely on dodgy third party compiled packages on a regular basis I suppose.
Anyway, luckily for us neon does not have any onerous dependency requirements of its own, so we can go ahead and install the older version manually without falling into dependency hell, which is a bit of luck.
rpm -i neon-0.25.5-10.el5_4.1.x86_64.rpm
With neon v0.25 installed, we can go ahead and install subversion and mod_dav_svn;
rpm -i subversion-1.7.5-1.x86_64.rpm
rpm -i mod_dav_svn-1.7.5-1.x86_64.rpm
Note: If you see something like this;
warning: mod_dav_svn-1.7.5-1.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 3bbf077a: NOKEY
you can ignore it, unless you want to obtain and configure the appropriate validation keys for these files which is outside the scope of this document.
We can confirm that subversion is now installed;
# rpm -qa | grep subversion
subversion-1.7.5-1.x86_64
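If you also want to confirm that Apache has picked up the Subversion module (assuming the package dropped its config into /etc/httpd/conf.d as expected), you can list the loaded modules;
# httpd -M 2>/dev/null | grep dav_svn
dav_svn_module (shared)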
OK, all is good, right? Well, yes and no. Everything is OK right now but if you try and do a yum update, it will fail like this;
--> Finished Dependency Resolution
Error: Package: subversion-1.7.5-1.x86_64 (installed)
Requires: libneon.so.25()(64bit)
Removing: neon-0.25.5-10.el5_4.1.x86_64 (installed)
libneon.so.25()(64bit)
Updated By: neon-0.29.3-1.2.el6.x86_64 (base)
Not found
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
Aaargh!
To workaround this, we can exclude neon from being updated;
vi /etc/yum.conf
Add this line somewhere in the file;
exclude=neon*
Now our yum update won't try and upgrade neon and therefore complain about dependency problems;
# yum update
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: ftp.swin.edu.au
* extras: ftp.swin.edu.au
* updates: ftp.swin.edu.au
base
extras
updates
Setting up Update Process
No Packages marked for Update
And that's it, go grab yourself a beverage for a job well done!
Tuesday, 5 June 2012
Updated VPN script v1.03
I have updated my 5 minute VPN script to make key transfers work better.
Monday, 4 June 2012
Passwordless SSH login fails
If you are attempting to log in to an openssh server using public key authorisation and it keeps asking for your password anyway then check the permissions on the ssh directory for the user account you are trying to log in as;
ls -al ~/.ssh
drwx------ 2 brett brett 4096 Jun 4 13:40 .
drwx------ 6 brett brett 4096 Jun 4 13:37 ..
-rwx------ 1 brett brett 398 Jun 4 13:40 authorized_keys
If the permissions are anything other than those shown above then you need to fix that;
chmod 700 ~/.ssh
chmod 700 ~/.ssh/*
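If it still prompts for a password after that, running the client in verbose mode will usually tell you why the key was skipped;
ssh -v user@host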
Friday, 1 June 2012
Steam is coming to Linux
Wow! Native Steam is coming to Linux within months.
http://www.escapistmagazine.com/news/view/116943-Steam-Coming-to-Linux-Soon
Git Aliases "cannot exec"
I'm playing around with git and following the tutorial here.
I ran into a problem when attempting to use git aliases.
[git@git hello]$ git hist
fatal: cannot exec 'git-hist': Permission denied
Turns out the issue was that I had logged in as user "git" via the su command, but had neglected to use the - (minus) option.
When I logged in directly as user git, or from root using 'su -l git', everything worked as advertised.
I got the clue to the problem from running strace.
[git@git hello]$ strace -f -e execve git hist
execve("/usr/bin/git", ["git", "hist"], [/* 21 vars */]) = 0
Process 1560 attached
[pid 1560] execve("/usr/libexec/git-core/git-hist", ["git-hist"], [/* 21 vars */]) = -1 ENOENT (No such file or directory)
[pid 1560] execve("/usr/local/sbin/git-hist", ["git-hist"], [/* 21 vars */]) = -1 ENOENT (No such file or directory)
[pid 1560] execve("/usr/local/bin/git-hist", ["git-hist"], [/* 21 vars */]) = -1 ENOENT (No such file or directory)
[pid 1560] execve("/sbin/git-hist", ["git-hist"], [/* 21 vars */]) = -1 ENOENT (No such file or directory)
[pid 1560] execve("/bin/git-hist", ["git-hist"], [/* 21 vars */]) = -1 ENOENT (No such file or directory)
[pid 1560] execve("/usr/sbin/git-hist", ["git-hist"], [/* 21 vars */]) = -1 ENOENT (No such file or directory)
[pid 1560] execve("/usr/bin/git-hist", ["git-hist"], [/* 21 vars */]) = -1 ENOENT (No such file or directory)
[pid 1560] execve("/root/bin/git-hist", ["git-hist"], [/* 21 vars */]) = -1 EACCES (Permission denied)
fatal: cannot exec 'git-hist': Permission denied
The fact that it was trying to execute the command in /root/bin (wtf is that? Some sort of hardcoded path?) indicated there was some sort of user identity problem. In hindsight it makes sense: su without the - keeps root's environment, including a $PATH containing /root/bin, while the process itself runs as git, which is not allowed to read root's home directory. Hence the EACCES.
Monday, 28 May 2012
Rip a DVD Using The Command Line
Use this command to rip a DVD using the command line.
Mount a DVD disc or iso to a convenient place. I will use an iso in this example;
mkdir /tmp/dvd
sudo mount -o loop /path/to/dvd.iso /tmp/dvd
Issue this command to rip a DVD to a file in your home directory;
cat ./VIDEO_TS/VTS_01_1.VOB | nice avconv -i - -s 512x384 -vcodec libtheora -acodec libvorbis ~/dvd_rip.mp4
If you prefer to use the closed h264 and mp3 codecs, you can install them from the multiverse repository;
sudo apt-get install libavcodec-extra-53
(Use libavcodec-extra-52 for maverick and earlier.)
Change these parameters;
-vcodec libx264
-acodec libmp3lame
The default bitrate is about 800kb/s. To change this, use these parameters;
-b 1200000 : Video in bps
-ab 128000 : Audio in bps
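Putting those pieces together, a complete h264/mp3 rip at the higher bitrates would look something like this (same input and scaling as the earlier example);
cat ./VIDEO_TS/VTS_01_1.VOB | nice avconv -i - -s 512x384 -vcodec libx264 -acodec libmp3lame -b 1200000 -ab 128000 ~/dvd_rip.mp4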
Sometimes you don't want to re-encode the streams, in such cases you copy the stream using -c:v copy, -c:a copy and -c:s copy (for subtitles)
Example: Transcode video but copy audio;
cat ./VIDEO_TS/VTS_01_1.VOB | nice avconv -i - -vcodec libx264 -c:a copy -b 1200000 ~/dvd_rip.mp4
Note: The '-i -' means take input from stdin.
Thursday, 24 May 2012
Tuesday, 22 May 2012
CentOS 6 Bridged Networking
If you are intending to run KVM under Centos, you will most likely want to use bridged networking.
I am starting with a standard CentOS 6 "minimal" install but the same process applies to RHEL and CentOS all versions.
First, install the bridge-utils package;
yum install bridge-utils
Create/edit these two files, substituting the IP address and other details as applicable;
# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
NM_CONTROLLED="no"
ONBOOT=yes
HWADDR=FF:FF:FF:FF:FF:FF # Use the actual hardware address for your NIC
TYPE=Ethernet
BRIDGE=br0
# cat /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE="br0"
TYPE=Bridge
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.0.0.1
PREFIX=24
GATEWAY=10.0.0.254 # You can put this in /etc/sysconfig/network if you prefer
DNS1=10.0.0.2
DOMAIN=example.net
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System br0"
Restart your server and you should now have a bridge adapter called "br0";
# ifconfig br0
br0 Link encap:Ethernet HWaddr FF:FF:FF:FF:FF:FF
inet addr:10.0.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
inet6 addr: fe80::21a:64ff:fe78:3f44/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:9092 errors:0 dropped:0 overruns:0 frame:0
TX packets:4424 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:9175506 (8.7 MiB) TX bytes:369549 (360.8 KiB)
Confirm the bridge;
#brctl show
bridge name bridge id STP enabled interfaces
br0 8000.001a64783f44 no eth0
virbr0 8000.525400badaa9 yes virbr0-nic
Sunday, 13 May 2012
The Games Industry
I'm old enough to have been there at the birth of what we now know as the multi-billion dollar "games industry" and I've been along for the ride ever since.
Over the last few years I have noticed a disturbing trend, and that is tying games to the Internet, even when the game has no online aspect at all (or virtually none).
I don't count idiotic "achievements" as online play.
This weekend on Steam, an entry for "Tropico 4" appeared in my library and the news section indicated that it is one of those "free weekend play" offers where you get to play the game for the weekend and then decide whether to buy it once the deal ends.
I've seen Tropico in the past and thought I might take a look. It took some time to download it on my slow adsl but eventually it was all done and installed.
So, this morning the weather is crappy so I decide to spend a few hours trying it out.
I fire up the game and lo, what am I met with but an "enter your email address and password" screen.
What? Yes, like so many modern games it requires that I must register the game before I can start it.
This wouldn't be so bad if the game at least used my existing Steam credentials but no, it is not that clever. I must provide details to "Kalypso Media" before I can play.
Apparently the benefits to me of this mandatory registration are the usual bunk that has been the unappetising bait of registration seekers since the dawn of time, namely "game news", "updates", "premium support" and "community features".
Right. Nothing I want then. The only benefit of any value, which is not mentioned in that list, is the ability to play the game in the first place.
Thanks but no thanks, I think I'll pass.
This is not the worst offender in the dubious registration stakes though, that accolade goes to Grand Theft Auto 4, which requires a grand total of three separate accounts (Steam, Rockstar Social Club and Windows Live) before you can play it.
Unbelievable.
It's this sort of nonsense which drives people to pirating games.
Obviously if you aren't playing it in Steam it only requires two registrations but that is still two too many.
I didn't register for Tropico 4 and have already deleted the game. That's one potential sale lost due to game industry stupidity.
Nice one marketing cretins.
Monday, 7 May 2012
Friday, 4 May 2012
HOWTO: Upgrade from Lucid to Precise
UPDATED 12/06/2012. I have had reason to attempt this on two more systems and both times it was successful.
The Ubuntu distribution continues its rapid decline with the Precise release.
The Internet is teeming with examples of people who have discovered that upgrading to Precise is difficult at best, and near impossible at worst.
It doesn't appear that upgrading from the last LTS, 10.04 Lucid is possible at all.
Well, not easily anyway.
Attempting to upgrade a server from Lucid to Precise will most likely result in an error;
E: Could not perform immediate configuration on 'python-minimal'.
Please see man 5 apt.conf under APT::Immediate-Configure for details. (2)
Searching the Internet might lead you to a suggested fix such as this one;
sudo apt-get install -o APT::Immediate-Configure=false -f python-minimal
Apparently sometimes that doesn't work either; a forum post suggests adding apt to that command;
sudo apt-get install -o APT::Immediate-Configure=false -f python-minimal apt
Having got that far, I received another error;
E: Couldn't configure pre-depend multiarch-support for libnih-dbus1, probably a dependency cycle
Joy.
No help was forthcoming from the Internet on that one.
So, I tried a desperate move.
I decided to remove the offending package (libnih-dbus1) and re-install it.
Now, before I continue, I should make it absolutely clear that what follows is capital N Nasty.
The server I was working on was a scratch virtual machine that I would not care about if I accidentally toasted it.
It is entirely possible that doing this on your server may completely trash it!
You have been warned.
OK, with that out of the way, what I did was this;
apt-get remove libnih-dbus1
Apt went away and calculated a whole lot of dependencies that would be removed which resulted in it giving me a nasty warning;
You are about to do something potentially harmful.
To continue type in the phrase 'Yes, do as I say!'
Undaunted, I copy-pasted the list of packages being removed into a text editor (just in case) and typed the "Yes, do as I say!" phrase as requested.
After a while apt was finished.
Note: If you are following this "procedure", do not reboot your system now!
OK, I was afraid my SSH session or network (or something) may have been broken causing me to lose my connection (yes, I was doing this remotely) but the server still seemed to be working, which was good.
So I installed everything back.
apt-get install ubuntu-minimal
This returned no errors.
Now, when we did the nasty remove of libnih-dbus1 and its dependents earlier, one of the things that was removed was the Linux kernel.
Without being too dramatic, it is fair to say that this is an extremely important package. Another important thing that was removed was openssh-server.
Install them now;
apt-get install linux-image-server openssh-server
The final thing to do is to reboot and to make sure everything is truly OK
The server rebooted without problems and finally I have managed to upgrade from Lucid to Precise.
Yay, I suppose, but it really shouldn't be that hard.
Canonical should spend less time working on horrible user interfaces and more time getting the basics right.
A final note: Check your saved list of removed packages for anything else you had installed, and manually re-install anything you need.
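If you didn't save the list, apt keeps its own record of transactions, so something like this should show what the big remove took with it (standard apt log location on Ubuntu);
grep 'Remove:' /var/log/apt/history.log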
Tuesday, 1 May 2012
Socks5 Proxy using SSH
ssh -f -N -D 0.0.0.0:1080 localhost
Notes;
-f
run as a daemon
-N
stay idle and don't execute commands on localhost
-D
dynamic port forwarding on port 1080
You can test it using curl;
curl --socks5 localhost:1080 www.google.com
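Tunnelling through localhost as above is only useful for testing. In practice you would point ssh at a remote machine you have an account on, and your traffic then exits to the Internet from there. The hostname here is illustrative; curl's --socks5-hostname variant also pushes DNS lookups through the proxy;
ssh -f -N -D 1080 user@gateway.example.com
curl --socks5-hostname localhost:1080 www.google.com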
Friday, 27 April 2012
Simple Monitoring of VPN links
I use this script to check the status of non-critical vpn links, but you could use it for anything where a simple ping test is sufficient and you couldn't be bothered setting up SNMP.
Of course SNMP + Nagios is a better choice for mission critical monitoring than simply pinging something like I do here.
Here is the script;
#!/bin/bash
TIMEOUT=5            # Number of failed attempts before sending email
STATUSDIR=/tmp/vpn   # Status files are written here
ADDRESS=$1           # The target address to be monitored
EMAIL=$2             # Optional email address to send alerts to

if [ ! -d $STATUSDIR ] ; then
    mkdir -p $STATUSDIR
fi
if [ ! -f $STATUSDIR/$ADDRESS ] ; then
    echo "0" > $STATUSDIR/$ADDRESS
fi
EXPECTEDCOUNT=`cat $STATUSDIR/$ADDRESS`
if ping -c 1 -w 5 "$ADDRESS" &>/dev/null ; then
    DOWNCOUNT=0
else
    DOWNCOUNT=$(($EXPECTEDCOUNT+1))
fi
echo "Expected count is :"$EXPECTEDCOUNT
echo "Down count is :"$DOWNCOUNT
# Something has changed
if [ ! $EXPECTEDCOUNT = $DOWNCOUNT ] ; then
    if [ $DOWNCOUNT = 0 ] ; then
        STATUS="UP"
    else
        STATUS="DOWN"
    fi
    MSG="vpn-link: "$ADDRESS" is "$STATUS" (count="$DOWNCOUNT")"
    logger $MSG
    # If the change was to 0 or TIMEOUT then trigger an email
    if [ $DOWNCOUNT = 0 -o $DOWNCOUNT = $TIMEOUT ] ; then
        # If the expected count has not reached the timeout setting
        # then we don't want to send email
        EXPECTEDCOUNT=$(($EXPECTEDCOUNT+1))
        if [ $EXPECTEDCOUNT -ge $TIMEOUT -a -n "$EMAIL" ] ; then
            echo $ADDRESS" is "$STATUS": Sending email"
            mail -s "$MSG" $EMAIL < /dev/null > /dev/null
        fi
    fi
    echo $DOWNCOUNT > $STATUSDIR/$ADDRESS
fi

To use it, simply place the script in a convenient location such as /usr/sbin and create an entry in your system crontab like this;
* * * * * brett /usr/sbin/vpn-mon 192.168.1.2 brett@example.com >> /dev/null 2>&1
This will ping test to 192.168.1.2 as user "brett" every minute and send email alerts to brett@example.com
By default the script will send an email after the test has failed 5 consecutive times. This can be changed by editing the script and changing the TIMEOUT variable.
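For completeness, deployment is just a matter of copying the script into place, making it executable, and running it once by hand to check the output (using the same names and paths as above);
sudo cp vpn-mon /usr/sbin/vpn-mon
sudo chmod 755 /usr/sbin/vpn-mon
/usr/sbin/vpn-mon 192.168.1.2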
Monday, 23 April 2012
Have CRON Log To A Separate File
Sometimes you might want to have cron events logged to a file other than the standard syslog (/var/log/syslog)
This is how you do it.
Edit this file;
vi /etc/rsyslog.d/50-default.conf
Find the line starting with
#cron.*
and uncomment it.
This will cause all cron events to be logged to /var/log/cron.log (unless you changed the path), however the same events will also continue to be logged to syslog.
In the same file, find the line that looks like this;
*.*;auth,authpriv.none -/var/log/syslog
Alter the line so that it looks like this;
*.*;auth,authpriv.none;cron.none -/var/log/syslog
Restart the logging service;
service rsyslog restart
Now cron will log to /var/log/cron.log but not to syslog
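You can confirm the new arrangement by logging a test message at the cron facility and checking that it lands in the new file but not in syslog;
logger -p cron.info "cron logging test"
tail -n 1 /var/log/cron.log
grep "cron logging test" /var/log/syslog || echo "not in syslog - good"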
Tuesday, 17 April 2012
Coding WTF
I got asked to help on a small PHP project that has already been started by another coder.
The very first thing I was confronted with was this;
if($variable == false)
{
}
else
{
//do something
}
Ummm, WTF?
Wednesday, 11 April 2012
TMDB Scraper Fails in XBMC
A recent update to Xbox Media Centre (XBMC) has (temporarily) broken the ability to scrape The Movie Database (TMDB), returning an error "Unable to connect to remote server".
A bug report has been lodged.
The bugged version is git20120229 compiled April 7 2012 and is currently in the XBMC PPA.
You can check your version of XBMC on the System->System Info screen within XBMC.
The offending version is shown as;
"XBMC 11.0 Git:Unknown (Compiled : Apr 7 2012)"
You can wait for a fix to come down the pipeline, or if you are impatient (like me) you can revert to the older version.
You will need to download these files;
xbmc_11.0~git20120321.14feb09-0ubuntu1~ppa1~oneiric_all.deb
xbmc-bin_11.0~git20120321.14feb09-0ubuntu1~ppa1~oneiric_amd64.deb
Next, uninstall the bugged version;
sudo apt-get remove xbmc
Now, install the previous XBMC release from the files you downloaded;
sudo dpkg -i xbmc-bin_11.0~git20120321.14feb09-0ubuntu1~ppa1~oneiric_amd64.deb
sudo dpkg -i xbmc_11.0~git20120321.14feb09-0ubuntu1~ppa1~oneiric_all.deb
Now, you should be able to scrape TMDB without connection issues.
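One extra step worth considering: the PPA still carries the bugged version, so a routine apt-get upgrade will put it straight back. Until the fix arrives you can mark the packages as held using standard dpkg holds;
echo xbmc hold | sudo dpkg --set-selections
echo xbmc-bin hold | sudo dpkg --set-selections
(Use "install" in place of "hold" later to release them.)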
Monday, 26 March 2012
On Adolescent Coders
Warning: Rant mode engaged
It appears to me that the Open Source Software (OSS) mantle has been passed on to a gaggle of post adolescent coders with some sort of group ADD disability.
Or something.
It started with the total fiasco that was KDE4, and all the idiocy that was apparently introduced on its release.
I didn't use KDE though, so although I was aware of all the angst, it didn't directly affect me.
Now we have Gnome 3, or more accurately, Gnome Shell, which is part of the new Gnome 3.
What an abomination. You can understand why Canonical balked at presenting their users with the steaming pile that is Gnome Shell, and it is almost understandable that they decided to promote their Unity shell from the netbook remix (where it belonged, and was actually quite useful in that context), but in the age of HD, multi-monitor setups Unity too is totally underwhelming.
Alternatives such as Mate and Cinnamon are worthwhile projects which will hopefully mature to become usable desktops, but right now the state of user interfaces in the OSS sphere is totally unsatisfactory (and yes, I have tried XFCE and KDE4 and I don't like them either). I like my menu bar at the top, my task bar at the bottom, and the freedom to put icons wherever the hell I please, thank you.
However, as much as it may seem otherwise, this rant is not about the sorry state of OSS desktops at all.
It is about the state of OSS video players. Specifically, VLC, Totem and XBMC.
You see, back in the old days you would get an AVI file, or maybe an MPG file, and you could open that file in one of the afore-mentioned players and life was good.
However, if the movie was in a language you were not familiar with, you had a problem. To solve that problem some bright spark came up with the idea of having a subtitles file (.srt), which was OK, if a bit cumbersome.
Or, occasionally some mentally challenged dingbat would hard code subtitles into the actual video stream and not actually mention that fact when presenting the resulting file to the general public.
I've done my fair share of shouting spittle laden expletives at people who do that I assure you.
It would be better if somehow the subtitles and the audio/video could all be encapsulated in one file, right?
Clearly this is a good idea.
While you are at it, maybe we could include separate audio tracks for other languages. You could even put in the commentary track! Just like a DVD!
So the ever resourceful guys in OSS land went to work, and gave us "containers" (initially MKV but now also the venerable AVI files) that held all this extra data in one convenient file.
Life, you would think, is good.
But no.
Having spent all this time very cleverly making all this stuff work it seems the adolescents at the wheel have decided that we must all have the results of their handiwork shoved in our faces every time we watch a movie.
This is why, when you play a movie these days, a movie that has a whole lot of nice subtitles conveniently bundled up into the container file, it plays with the subtitles ON by default.
Oh, I can hear you now. "Well, if it is such a problem for you, then you can just turn them off!"
If it were only that simple.
Yes, you can turn them off every time you start a movie and it is not that hard. It is ctrl+t if I recall.
However, I don't want to have to do that every time I watch a movie, and if I am on my media centre I don't want to have to find where my keyboard is under a cushion on the couch (or where ever) just so I can turn the goddamn subtitles off.
Again!
I just want to watch the goddam movie.
Without subtitles.
I also wouldn't mind so much if it were a simple thing to turn them off globally. I would still contend that having them on by default is borderline retarded, and only required because the ADD sufferers who implemented the subtitle functionality wanted to make damn sure you got to experience the wonder of their coding prowess, but given a method to easily and permanently turn off subtitles I could forgive them and give them their 15 seconds of fame.
After all, they earned it by contributing to the OSS pool, right?
Unfortunately, the reality is somewhat different. The sad fact is that it is far from easy to turn subtitles off by default in VLC and to a lesser degree XBMC. I have not tried it with Totem because I don't use it much, but I have noticed it behaves this way as well.
Try doing a google search for "permanently disable subtitles vlc" and you will find vast numbers of people asking how to do it with a number of proposed, convoluted, obfuscated solutions, none of which seem to work on the latest version of VLC.
I spent half an hour trying to do that without any success.
I've found it is easier to download mkvtoolnix and actually remove the subtitle track than it is to fucking disable it permanently in VLC.
In Xbox Media Centre things are slightly better. You can't (apparently) disable subtitles globally in XBMC via the front end, but it is, at least, possible to go and edit a configuration file in a text editor to turn them off.
How very user friendly.
The fact is there is absolutely no reason to have subtitles on by default, certainly not when the default language is set to English. Sure, if my OS was configured to use Spanish by default then you could make a case to set subtitles to on, on the assumption that I wanted to watch the typical idiocy inducing rubbish that pours out of Hollywood with only an English soundtrack.
But I am not Spanish.
The unfortunate truth here is that this is the ADD adolescents telling users "we spent all this time making this shit work, and you are damn well going to use it, whether you like it or not."
If it weren't for the god-awful mess that MS appears to be making with Windows 8, it would almost be enough to push me back to the dark side, I swear.
Wednesday, 21 March 2012
PHP: Create variable names from XML element names
I needed to convert an XML file into an array. You can use an element's name to name a PHP variable, as this example shows.
$string = "<postcode>5253 </postcode>";
$xml = new SimpleXMLElement($string);
$varname = $xml->getName();
${$varname} = trim($xml);
echo($varname." is ".${$varname}."\n");
echo($varname." is ".$postcode."\n");
Output:
postcode is 5253
postcode is 5253
Thursday, 8 March 2012
Remove Unwanted Audio Tracks From AVI Files
If you have downloaded videos from certain sources lately, you may have noticed that it is now possible to create a video container (AVI, MKV) that includes multiple audio channels, just like on a DVD.
This is a great thing because it allows people of different languages to use the same video file. Alternately it allows the directors commentary to be included.
That said, I am an English speaker, and I have never had any interest in directors commentaries so all these extra audio tracks represent unwanted data in my movie library.
Also, some files default to playing the commentary or the non-English track in some players, which is also mildly annoying.
So, in such circumstances you can use avconv to remove the unnecessary tracks from an AVI file (I have not tried it for MKV; I will update this page if I do).
Things you need to install are vlc and avconv (avconv is the replacement for ffmpeg, which is now deprecated);
sudo apt-get install vlc libav-tools
Note: On RedHat based distributions you must install libav instead. ie;
yum install libav
You can see what audio tracks are available, and select them, by opening the video file in vlc and looking in Audio > Audio Track.
Once you have determined which track you want to keep, you can run the file through avconv to strip the unwanted tracks. In this example I use the second map parameter to keep track 2 (ie lose track 1);
avconv -i sourcefile.avi -map 0:0 -map 0:2 -acodec copy -vcodec copy outfile.avi
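If you prefer the command line to VLC for identifying track numbers, running avconv with only an input file prints the stream list and then complains about a missing output file, which you can ignore;
avconv -i sourcefile.avi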
And that's it, happy Linuxing (is that a word?)
Internet Mail With Postfix
Setting up an Internet mail server with postfix is pretty easy.
Choose "internet site" when asked.
You will require fully qualified domain name (FQDN) and with properly configured MX record.
You can use the "dig" command to verify your MX record. The output will include the following lines;
;; ANSWER SECTION:
mydomain.com. 2857 IN MX 10 mail.mydomain.com.
;; ADDITIONAL SECTION:
mail.mydomain.com. 2862 IN A 123.2.101.184
Your MX record should point to the A record for your mail server as it does in the above example.
The mail server must of course be contactable from the Internet.
Now, we need to configure a few things in postfix. Edit mail.cf
Modify the following lines with the details shown as bold
If you are running behind NAT/proxy you will need to ensure that port 25 is forwarded from your router and advise postfix it is operating behind a NAT/firewall.
Add this line to the main.cf file;
Install the packages;
apt-get install postfix mailutils alpine
Choose "internet site" when asked.
You will require a fully qualified domain name (FQDN) with a properly configured MX record.
You can use the "dig" command to verify your MX record. The output will include the following lines;
dig mydomain.com mx
;; ANSWER SECTION:
mydomain.com. 2857 IN MX 10 mail.mydomain.com.
;; ADDITIONAL SECTION:
mail.mydomain.com. 2862 IN A 123.2.101.184
Your MX record should point to the A record for your mail server as it does in the above example.
The mail server must of course be contactable from the Internet.
Now, we need to configure a few things in postfix. Edit main.cf;
sudo vi /etc/postfix/main.cf
Modify the following lines, substituting your own domain details;
myhostname = mydomain.com
mydestination = mydomain.com, localhost.localdomain, localhost
If you are running behind NAT/proxy you will need to ensure that port 25 is forwarded from your router and advise postfix it is operating behind a NAT/firewall.
Add this line to the main.cf file;
proxy_interfaces = 123.2.101.184
The address must be the same as the mail server IP shown in the ADDITIONAL SECTION of our earlier dig output.
Restart postfix;
sudo service postfix restart
Try sending an email to an external address;
mail -s Test me@gmail.com
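Run like that, mail waits for you to type a message body, finished with a . on a line by itself (or Ctrl-D). For a quick non-interactive test you can pipe the body in instead;
echo "Test body" | mail -s Test me@gmail.com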
Assuming you receive the mail, reply to it so that delivery in the other direction gets tested too.
You can check whether the reply arrives using alpine;
alpine
Select "Folder List" ("L" key) to see any messages.