Showing posts with label linux. Show all posts

Tuesday, 19 March 2019

autofs: keep devices permanently mounted

I have some ISO images which I use autofs to mount as loop devices.

For reasons that are not important I want them to stay mounted permanently.

I couldn't find any information online on how to do this so I poked around in the related autofs man pages.

I noticed that there is a timeout option which is set by default to 600 seconds.

I wondered what would happen if I set that to 0 seconds, so I tried it.

So far the devices in question have stayed mounted for 15 minutes.

Here's how to do it:

/etc/auto.master
/mnt/loop /etc/auto.loops -t 0

/etc/auto.loops
* -fstype=iso9660,loop     :/store/ISO.archives/&.iso


The -t 0 is where we set the timeout to 0 (infinite).
   
In case you are wondering, the * at the beginning and the &.iso at the end of auto.loops will mount all of the ISO files found in the /store/ISO.archives/ directory.

Monday, 3 September 2018

Steam controller doesn't work in Ubuntu 18.04

After upgrading to or doing a fresh install of Ubuntu 18.04, your previously working Steam controller will no longer be detected.

To fix this you must install a new package;

sudo apt install steam-devices

Friday, 6 October 2017

Querying video file metadata with mediainfo

I am working on a script that will query media files (mp4/mkv videos) to obtain metadata that can be subsequently used to rename the file to enforce a naming convention. I use the excellent mediainfo tool (available in the standard repositories) to do this.

mediainfo has a metric tonne of options and functions that you can use for various purposes. In my case I want to know the aspect ratio, vertical height and video codec for the file. This can be done in a single command;

mediainfo --Inform="Video;%DisplayAspectRatio%,%Height%,%Format%" "${_TARGET}"

This works fine and returns something like this;

1.85,720p,AVC

When I say it works fine I mean it works fine in 99% of cases. The other 1% are made up of files that contain more than one video stream. Sometimes people package a JPEG image inside the container which is designated internally as "Video#2". In such cases the above command will also return values relating to the JPEG image producing something like this;

1.85,720p,AVC1.85,720p,JPEG

When this happens my script breaks. The workaround for that is to pipe the results through some unix tools to massage the output;

mediainfo --Inform="Video;%DisplayAspectRatio%,%Height%,%Format%\n" "${_TARGET}" | xargs | awk '{print $1;}'

Things to note in the revised command: there is a newline ("\n") at the end of the --Inform parameters, which will put the unwanted data on a new line like this;

1.85,720p,AVC
1.85,720p,JPEG

xargs will remove that line feed and replace it with a space;

1.85,720p,AVC 1.85,720p,JPEG

And finally awk will produce only the first "word" (space delimited) from the result, which produces the desired output.
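If you want to see the massaging stage in isolation, you can feed a simulated two-stream result through the same pipeline (the printf here just stands in for mediainfo's output):

```shell
# Simulate mediainfo output for a file carrying a bonus JPEG "video" stream,
# then join the lines with xargs and keep only the first word with awk
printf '1.85,720p,AVC\n1.85,720p,JPEG\n' | xargs | awk '{print $1;}'
```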

1.85,720p,AVC

Now obviously this method assumes that the first video stream in the container is the one we are interested in. I'm struggling to imagine a scenario where this would not be the case so at this point I am OK with that. If I find a file that doesn't work I might have to revise my script, but for now I will stick with this solution.

Tuesday, 24 January 2017

Using H.265 (HEVC) on Ubuntu

If you Google how to install H.265 on Ubuntu you get a bunch of posts that describe how to add a PPA for the necessary files.

However, the repository hasn't been updated since 2015 (Vivid Vervet).

If you try to use the vivid repo then things fail because of dependency issues.

But not to worry as it seems that H.265 is now included in the standard xenial repository.

apt-cache search x265
libx265-79 - H.265/HEVC video stream encoder (shared library)
libx265-dev - H.265/HEVC video stream encoder (development files)
libx265-doc - H.265/HEVC video stream encoder (documentation)
x265 - H.265/HEVC video stream encoder

So, all you need to do is;

sudo apt-get install x265

and you are good to go.





Thursday, 9 April 2015

Configuration management: Salt vs Puppet

I've spent some time experimenting with puppet and I have come to the conclusion that I don't much like it.

There are several reasons why this is the case but the main ones are the module system that it uses and the way it abstracts away the configuration details on the client.

Note: I gave up on Puppet after only a couple of days, having reached a point where an error somewhere was impossible to understand because it had no logical behaviour. I don't profess to be any kind of expert on Puppet, so any comments from here on are merely my opinion and will definitely include liberal amounts of misunderstanding of how Puppet works. That was in fact one of the problems I had with Puppet: the errors it produces are cryptic, and there is so much going on inside the box of smoke and mirrors that it is extremely hard to comprehend and debug things. Especially as a n00b.

From what I have managed to partially understand, Puppet ships with a bunch of pre-installed modules for common applications (such as apache) so you don't need to find and download the appropriate module.

This is all good of course until you need to do something that is not included by default. In that case you need to get on the Internet and find the module you need and install it. Sometimes there will be an "official" puppetlabs module and sometimes you need to trawl through many community provided modules of varying degrees of quality to find one that looks like it might do what you want. This in itself is not terrible but it does involve a lot more mucking about than I think should be required.

Once you have all your modules in order you need to start configuring stuff and this is where the frustration really kicks in.

Puppet uses a bunch of "manifest" files that must be created for each service, host, or groups of hosts. Manifests can include other manifests and everything must be carefully declared in the right places or else things just don't work, and most of the time you will get an obtuse error message that doesn't help a great deal in tracking things down.

Circular dependencies can be common without due care being taken.

For example, the straw that broke the proverbial camel's back and turned me away from Puppet was when I had two similar hosts with identical manifests that would nonetheless produce different results when I tried to update them.

One would fail with "Error: Failed to apply catalog: Could not find dependency File[/etc/postfix/main.cf]" while the other one worked fine. This was strange to me because

    a) the file it was complaining about existed on both hosts and

    b) nowhere was I even trying to manage postfix.

I even went to the length of creating two completely empty manifests for the two hosts that still produced the same dissimilar results.

Somewhere in the "box of smoke and mirrors" something was going on but I could not for the life of me figure it out. I even posted a question on serverfault (I rarely need to go to that extreme) but the kind folks there were unable to help either.

So I just gave up because life is too short for that crap.

There are other reasons that I dislike puppet.

A major one is how it abstracts out configuration details for the hosts it manages.

To explain what I mean I will return to the example of apache from before. Now, I am fairly familiar with apache, and I know my way around the apache config files reasonably well. Consider this snippet:

ServerName webserver01.tuxnetworks.com

<VirtualHost first.example.com:80>
    ServerAdmin brett@mydomain.com
    DocumentRoot /var/www/first
</VirtualHost>

That is a pretty simple config for a global server name and a virtual host in apache.

Now, let's look at how you configure that in puppet.

class { 'apache':
  default_mods => true,
}

apache::servername { 'webserver01.tuxnetworks.com': }

apache::vhost { 'first.example.com':
  port    => '80',
  docroot => '/var/www/first',
}

Now this is just a really simple example, but you will note that the syntax you are required to learn and use is totally different to how the target application would normally be configured. Things get even more excitingly confusing when you are trying to configure a service that you are not familiar with. In such cases you not only have to figure out what the native settings in the config file should be, you must then also figure out the correct Puppet syntax for getting there.

I had to do that just yesterday to add another domain to a haproxy server.

You will also note that the ServerAdmin variable has not been set in the Puppet example, because the 5 minutes I spent searching the Internet for how to do it came up with nothing. I'm sure with more time I could find the answer (and there is a good chance that just adding "serveradmin => 'brett@mydomain.com'" in there would work) but the point is that I really don't want to have to deal with this level of abstraction for something that I already know how to do.

So, given that you think the concept of configuration management systems is great but you are not prepared to devote the rest of your life to wrestling with Puppet, what do you do?

Well, another popular system is Chef, but after doing some research I concluded that Chef had almost as many idiosyncrasies as Puppet, so I have decided to explore the possibilities of SaltStack.

So far I've managed to do most of what I want to do with Salt, and it has been far easier to get my head around than Puppet was, although there are still some conceptual things to understand as well as some gotchas that can stall things early on.

Come back later for a post on getting things up and running if you are interested.

The first good thing is that Salt does not require modules.

Another good thing is that Salt does not abstract away configuration details for most use cases.

Note: Salt has something called "pillars" that I have little understanding of. From what I have read these can get a bit more complex, but for my purposes (installing packages & pushing configuration files) I have not yet needed to use them. If my requirements get more complex then I may have to visit them, but the point is that as a new Salt user you don't need to.

Anyway, on a Salt server, you would simply copy a working apache.conf over to a directory on the salt-master and the minions will just use it as is. Simples!

But what if you need to set some of those details on a per host basis I hear you ask? For example maybe I want to set the ServerName directive to be whatever the hostname of the server is?

In that case all I need to do is modify it like so:

ServerName {{servername}}

<VirtualHost first.example.com:80>
    ServerAdmin brett@mydomain.com
    DocumentRoot /var/www/first
</VirtualHost>


What I have done there is simply replace the server name string with a variable placeholder, which the Jinja templating engine that Salt uses can fill with the proper data whenever the minion is updated.

I don't need to learn a whole new syntax because I can just use the standard Apache conf layout that I am already familiar with and make certain parts of it "variable" as required.

Salt really is, in my humble opinion, much simpler to get up and running (but that may just be because I'm a bit thick, of course). No offence intended to the folks who love Puppet; I know it is widely used out there, and I still have an inherited Puppet system here at work. The bottom line is that OSS is all about choice, after all.

That is not to say it is perfect however. The documentation leaves a lot to be desired for instance, which is a pretty big problem in itself.

Anyway, stay tuned for a quick start guide to SaltStack at a later date.

Wednesday, 2 July 2014

Libvirt/qemu/kvm as non-root user

Prerequisites:

A server with KVM

I'm going to use the qemu user that is created when you install KVM but you could use any user you like.

First, your user should belong to the kvm group:

grep kvm /etc/group
kvm:x:36:qemu

Create a libvirt group and add your user to it

groupadd libvirt
usermod -a -G libvirt qemu


Create a new PolicyKit config to allow access to libvirtd using your user account via SSH

vi /etc/polkit-1/localauthority/50-local.d/50-libvirt-remote-access.pkla

Add the following content:

[Remote libvirt SSH access]
Identity=unix-group:libvirt;unix-user:qemu
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes


Restart libvirt

service libvirtd restart

Thursday, 12 December 2013

Fix FontConfig Warning in LMDE

I found another bug in Mint Debian relating to how fonts are set up.

Originally I found the issue while playing about in ImageMagick, which would produce an error like this.

"Fontconfig warning: "/etc/fonts/conf.d/53-monospace-lcd-filter.conf", line 10: Having multiple values in <test> isn't supported and may not work as expected"

You can reproduce that error using this command;

fc-match sans

So, I opened up the file referenced in the error and found it was an XML file.

In the <test name="family"> element there were two fonts configured; in my case these were "DejaVu Sans Mono" and "Bitstream Vera Sans Mono".

Now, considering that the error was complaining about not liking having two values present, I decided to remove one. I removed the second one.
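For reference, after the edit the element in question looked something like this (the exact surrounding structure and font names in your conf.d file may differ):

```xml
<test name="family">
    <string>DejaVu Sans Mono</string>
</test>
```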

After doing that, things behaved in a much more polite way;

fc-match sans
DejaVuSans.ttf: "DejaVu Sans" "Book"

Tuesday, 10 December 2013

Problems connecting to libvirtd (KVM) on remote hosts

I ran into this annoying bug trying to connect using SSH (key auth) to libvirtd (running on CentOS6) from a LMDE host.

The error I received was unhelpful.

Unable to connect to libvirt.

Cannot recv data: Value too large for defined data type

Verify that the 'libvirtd' daemon is running
on the remote host.


I was pretty sure that the problem was not with the server running libvirtd because it had been working the day before and was unchanged since then. On the other hand my LMDE install was completely fresh.

To cut to the chase, I don't know what the fix is (it seems to be a bug).

If you read to the end of that bug thread it seems you can work around the problem by using the hostname instead of its FQDN.

For this to work, of course, you need to be able to resolve the target IP address using just the hostname. Since I was on the same domain as the libvirt server, this was simply a matter of defining the domain in /etc/resolv.conf on the client.

domain tuxnetworks.net

If that is not a practical solution (because your client and server are on different domains) I reckon you could probably configure the server hostname as an individual entry in your /etc/hosts file too, although I have not tried that. Let me know in the comments if that works for you!
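For the /etc/hosts approach, a hypothetical entry (the IP and hostname here are made up) would look like this:

```
10.1.1.50   kvmhost
```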

Thursday, 19 September 2013

Disable DNSMASQ on KVM host

I have a fleet of servers with bridged, static IPs running as KVM guests. These servers do not require DHCP, yet KVM by default starts up dnsmasq regardless.

Normally this is not an issue but I just so happened to need dnsmasq for DNS on one of the KVM hosts and it would refuse to start due to it being already invoked by libvirt.

You can't just disable the libvirt dnsmasq because it seems to be required for any virtual network that is active. You can, however, disable the unused virtual network, which has the same effect.

# virsh net-destroy default
# virsh net-autostart --disable default



Then you can configure dnsmasq by editing /etc/dnsmasq.conf and it should work normally.

Wednesday, 29 August 2012

Unmount stale NFS mounts

If you have a stale NFS mount hanging on your system it can cause various programs and utilities to fail. A typical symptom is a hang when using the 'df' command.

In such cases you can't do umount /path/to/stale/nfs because it will say "the device is busy", or words to that effect.

To fix this you can unmount it with the 'lazy' option;

umount -l /path/to/stale/nfs

If you don't expect that mount point to ever be available again (for example the nfs server was decommissioned) then make sure you adjust /etc/fstab accordingly.
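As a sketch, assuming a decommissioned server called oldserver, the stale /etc/fstab entry would be commented out (or deleted) like so:

```
# oldserver is gone; disable its NFS mount
#oldserver:/export/data  /path/to/stale/nfs  nfs  defaults  0  0
```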

Sunday, 19 August 2012

Remove Subtitles and Audio tracks from MKV files

To remove unwanted audio and subtitle tracks from Matroska (mkv) files you use mkvtoolnix;

sudo apt-get install mkvtoolnix-gui

Once it is installed, open up the GUI (Sound & Video menu) and follow these steps;

1) In the "Input files" box on the "Input" tab browse to the mkv file you want to modify.
2) In the "Tracks, chapters and tags" box uncheck any part you want to remove (leave the stuff you want to keep checked).
3) In the "Output filename" box keep the default name or modify it to suit.
4) Click "Start muxing" and wait a minute or two until it completes.

Once you are done, you can delete the original file (after checking it worked of course!) and rename the new file accordingly.

Wednesday, 4 May 2011

Getting Up To Speed With IPv6: Introduction

With all the news about IPv4 address exhaustion going around, not to mention that IPv6 Day is just around the corner, I thought it was time to investigate IPv4's successor, IPv6.

(If you are wondering whatever happened to IPv5 then look here)

The good old IPv4 address that we all currently know and love (and understand, I hope) is basically a 32 bit number divided into four 8 bit "octets".

That gives us a theoretical address space of 2^32 (4.3 billion) addresses.

On the other hand, IPv6 uses 128 bit addresses represented as eight groups of four hexadecimal digits, each group representing 16 bits (two octets).

This new address space supports 2^128 (340 undecillion) addresses which is more than we will ever conceivably require on Planet Earth.
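You can eyeball the difference between the two address spaces with a one-liner (awk's floating point is plenty for an order-of-magnitude comparison):

```shell
# Compare IPv4 and IPv6 address space sizes
awk 'BEGIN { printf "IPv4: %.1e addresses\nIPv6: %.1e addresses\n", 2^32, 2^128 }'
```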

Right now you are probably thinking "Yeah, I've heard that before about 640K RAM and we all know how that one worked out" so to put the sheer size of the IPv6 address space into perspective let me quote an excellent IPv6 primer over at Ars Technica, "there are currently 130 million people born each year. If this number of births remains the same until the sun goes dark in 5 billion years, and all of these people live to be 72 years old, they can all have 53 times the address space of the IPv4 Internet for every second of their lives."

Now, I'm sure you'll agree, that is a lot of addresses.

For a good overview of how IPv6 addressing works, I recommend this article.

After reading up a bit about IPv6 you could be excused for concluding that the whole idea of IPv6 is quite daunting and then push the whole damned thing into the "too hard" basket. This is what most people and organisations have been doing up to this point and explains why adoption rates are currently so low.

ISPs in particular are putting off the inevitable by hoarding blocks of the remaining IPv4 space.

Don't be put off by the apparent complexity of IPv6 though!

In practice it is in fact not that hard to get up and running, even if you don't completely understand how it all hangs together at first. I still haven't figured it out properly, but soldier on I will!

As they say, practice makes perfect and this is the intention of the series of articles I will be posting on getting up to speed with IPv6.

Note: A word of warning, these articles are intended to be used for educational purposes only. Because we cannot use an IPv6 address range of our own we are going to be obtaining one through what is known as an "IPv6 Tunnel Broker". This is of course not an ideal situation because we are going to be relying on that broker for all of our IPv6 addresses and routing. I do not advise that you configure a production network for IPv6 connectivity using this guide as you will surely face performance penalties, possible reliability issues, and (most importantly) future migration issues when your ISP eventually starts providing you IPv6 directly. If you are intending to roll out IPv6 in a production scenario I suggest that you choose an ISP that is already providing native IPv6 to their customers.


Next up, Setting The Stage

Thursday, 27 January 2011

HOWTO: Caching Authoritative Nameserver

OK, so you have built yourself a Linux Router and now you want to add DNS support for extra goodness.

Here's what to do.

I'll be working on Debian 6.0.1 "Squeeze" but any Debian based distro should be fine. I've also tested on Ubuntu 10.04 without problems. I am using 10.1.1.0/24 as my local network and tuxnetworks.net as my local domain. Where you see references in italics to these you should change them to suit your own network.

First we will log in as root

As root, install Bind v9 from the standard repositories;

apt-get install bind9

Now we need to tell Bind to listen for queries on our IPv4 LAN.

Open this file for editing;

vi /etc/bind/named.conf.options

add the following lines inside the "options { };" section;

listen-on { any; };
allow-recursion { 127.0.0.1; 10.1.1.0/24; };


Note:
"Recursion" here means the server performing full lookups on behalf of clients, so "allow-recursion" effectively controls which clients can query our server. Basically, you want to enter the loopback interface (127.0.0.1) and the subnet of your LAN. You can add any other hosts or subnets as required. Also, if you are not intending to use IPv6 you can comment out or remove the "listen-on-v6" line if you like.

Now we will create a zone file for our local zone;

Note:
I prefer to keep the configuration details for zones in separate files, so I am going to create a file in which I will define the details for the "tuxnetworks.net" zone. If you don't want to do this, just add the following code section to named.conf.local instead.

vi /etc/bind/named.conf.tuxnetworks-net

Adjust the following code to suit your own network and then add it to the zone file;
# This is the zone declaration for our local zone.
zone "tuxnetworks.net" {
type master;
file "/etc/bind/db.tuxnetworks.net";
};

# This is the zone declaration for reverse DNS
zone "1.1.10.in-addr.arpa" {
type master;
file "/etc/bind/db.1.1.10.in-addr.arpa";
};

Now that we have created our zone declaration file, we have to tell bind to load this file when it starts. We do this by adding an "include" line to named.conf

Edit named.conf;

vi /etc/bind/named.conf

Append this line to the end;

include "/etc/bind/named.conf.tuxnetworks-net";

Now we need to make a zone file for our domain. This is where we define the names and attributes of our zone.

Create a zone file;

vi /etc/bind/db.tuxnetworks.net

Modify these details and save them to your zone file;
$TTL    3600
tuxnetworks.net. IN SOA ns.tuxnetworks.net. jupiter.tuxnetworks.net. (
2011051701 ; Serial
86400 ; Refresh
7200 ; Retry
3600000 ; Expire
172800 ) ; Minimum TTL

; DNS Servers
tuxnetworks.net. IN NS ns.tuxnetworks.net.

tuxnetworks.net. IN MX 10 mail.tuxnetworks.net.

@ IN A 10.1.1.1
ns IN A 10.1.1.1
www IN A 10.1.1.1
mail IN A 10.1.1.1
jupiter IN A 10.1.1.1

Note: Take note that several names in this file have a trailing period (full stop). Make sure that your file matches the format of this one exactly, only changing the names and IP addresses, and where present keeping the periods intact.
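The serial in the SOA record above follows the common YYYYMMDDnn convention (date plus a two-digit revision). If you script your zone edits, today's first serial can be generated like this:

```shell
# Build a YYYYMMDDnn style zone serial; nn is the revision number for the day
serial=$(date +%Y%m%d01)
echo "$serial"
```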

We also need to make a reverse zone file. Create a new file;

vi /etc/bind/db.1.1.10.in-addr.arpa

Add this code (modified to suit your network of course);
;
; BIND reverse data file for local network
;
$TTL 604800
@ IN SOA ns.tuxnetworks.net. jupiter.tuxnetworks.net. (
1 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
;
@ IN NS ns.tuxnetworks.net.
1 IN PTR ns.tuxnetworks.net.

Finally, there is one more thing to do if we are going to be good netizens, and that is to ensure that we play by the rules according to RFC 1918.

After running your server for a while you might take a look at /var/log/syslog and notice lots of bind related errors saying something like "RFC 1918 response from Internet for . . . .".

If you do then this means that your server is not playing nice with the root servers and is forwarding on requests for private IP ranges. That is not something we want to do.

To fix it edit your named.conf file;

vi /etc/bind/named.conf

Add the following line to the end;

include "/etc/bind/zones.rfc1918";

This stops any requests for private IP addresses (192.168 etc) from being passed on to the Internet root servers. One wonders why Bind isn't configured like this by default . . .

And that's it. Of course we need to restart bind before our changes take effect;

service bind9 restart

Also, before we can query our new DNS server, we need to configure the client to look to this server for name resolution.

To do that, edit your resolv.conf file;

vi /etc/resolv.conf

Remove any existing lines and add these (modified to suit your own network of course);

nameserver 10.1.1.1
domain tuxnetworks.net


Optional:
If you don't wish to query against the Internet root servers, edit your named.conf.options file and configure a forwarder or two.

vi /etc/bind/named.conf.options

Uncomment the "forwarders {" section and replace the 0.0.0.0 with the IP address of the upstream DNS server(s) you would like to use.
forwarders {
    208.67.222.222;
    208.67.220.220;
};

Now let's do some tests to see if it is all good with our new server;

Let's dig our domain;

dig tuxnetworks.net

; <<>> DiG 9.7.0-P1 <<>> tuxnetworks.net
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 28277
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1

;; QUESTION SECTION:
;tuxnetworks.net. IN A

;; ANSWER SECTION:
tuxnetworks.net. 3600 IN A 10.1.1.1

;; AUTHORITY SECTION:
tuxnetworks.net. 3600 IN NS ns.tuxnetworks.net.

;; ADDITIONAL SECTION:
ns.tuxnetworks.net. 3600 IN A 10.1.1.1

;; Query time: 1 msec
;; SERVER: 10.1.1.1#53(10.1.1.1)
;; WHEN: Tue May 17 12:09:08 2011
;; MSG SIZE rcvd: 82


We can see from this that we received an answer (ANSWER: 1) along with some other details about our domain. Also note the AUTHORITY: 1
which lets us know that this server is acting as the authority for the domain.

We can also do some ping tests.

Do a test ping to an internal name (we don't need to provide the domain name because the "domain" directive in resolv.conf automatically appends our domain to any query that doesn't include one. Neat!);

ping -c 4 www
PING www.tuxnetworks.net (10.1.1.1) 56(84) bytes of data.
64 bytes from jupiter.tuxnetworks.net (10.1.1.1): icmp_seq=1 ttl=64 time=0.130 ms
64 bytes from jupiter.tuxnetworks.net (10.1.1.1): icmp_seq=2 ttl=64 time=0.089 ms
64 bytes from jupiter.tuxnetworks.net (10.1.1.1): icmp_seq=3 ttl=64 time=0.090 ms
64 bytes from jupiter.tuxnetworks.net (10.1.1.1): icmp_seq=4 ttl=64 time=0.091 ms

--- www.tuxnetworks.net ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2998ms
rtt min/avg/max/mdev = 0.089/0.100/0.130/0.017 ms


Our Bind server will forward any requests that it isn't an authority for out to the Internet's DNS root servers, which means we no longer need to rely on our dodgy ISP's DNS servers, where they are quite likely engaged in DNS hijacking shenanigans.

Do a test ping to the outside world;

ping -c 4 www.google.com
PING www.l.google.com (74.125.237.84) 56(84) bytes of data.
64 bytes from 74.125.237.84: icmp_seq=1 ttl=57 time=57 ms
64 bytes from 74.125.237.84: icmp_seq=2 ttl=57 time=40.3 ms
64 bytes from 74.125.237.84: icmp_seq=3 ttl=57 time=59 ms
64 bytes from 74.125.237.84: icmp_seq=4 ttl=57 time=45 ms

--- www.l.google.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 90.384/137.973/159.187/28.004 ms


So, our DNS server is up and running; it is acting as authority for tuxnetworks.net and passing any other requests out to the Internet. It is also caching queries to reduce network traffic!

OK, we are all done now, grab yourself a beverage and relax!

Now that you have built yourself a DNS server, why not build a second one and configure it as a SLAVE?

Friday, 17 December 2010

Intellinet 150n USB WiFi key

I had some trouble getting my new "Intellinet 150n Wireless LAN Adapter" working in Ubuntu 10.04 (Lucid).

The problem is that one of the other Realtek drivers (rt2800usb) conflicts with the driver we need (rt2870sta).

To fix it you need to stop the rt2800usb driver from loading by blacklisting it.

sudo vi /etc/modprobe.d/blacklist.conf

Add this line;

blacklist rt2800usb

Reboot and you should be up and running.

Tuesday, 2 November 2010

Stop cron jobs from sending email

If you have a cronjob that is constantly spamming you with emails, then append the following to the offending line in your crontab;

>> /dev/null 2>&1

Example;

* * * * * root /root/checkvpn.cron >> /dev/null 2>&1

Voila! No more annoying emails!
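You can convince yourself that the redirection swallows both stdout and stderr by trying it in a shell first (the echo commands stand in for whatever your cron job prints):

```shell
# Nothing is printed: stdout goes to /dev/null and stderr follows it there
sh -c 'echo "normal output"; echo "error output" >&2' >> /dev/null 2>&1
```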

Friday, 4 June 2010

USB Devices in VirtualBox guests

Prerequisites:
    Oracle version of Virtualbox (The OSE version found in the standard repository does not support USB)
    Virtualbox extension pack
    Virtualbox guest tools
    Member of the vboxusers group

Note: It is usually not necessary to add the USB device from the command line as described here. As long as the above prerequisites are met you can usually use the gui to add device filters.

I want to connect an external HDD to a virtualbox guest. The unit is a WD "My Book" and the guest is named "Windows". Make sure your VM is powered off before you start!

First, check that your user is a member of the vboxusers group:

groups | grep vboxusers
myuser adm disk cdrom sudo dip plugdev lpadmin sambashare vboxusers libvirt

Note: If you aren't a member of the vboxusers group you must add yourself, logout and login again for the changes to take.

On the host running VirtualBox run the following command:

sudo VBoxManage list usbhost

Find the section that relates to the device you want to use. In my case it looks like this:

UUID:               f61de8f1-9c92-4781-92c5-d091705a0b79
VendorId:           0x1058 (1058)
ProductId:          0x1100 (1100)
Revision:           1.117 (01117)
Manufacturer:       Western Digital 
Product:            My Book         
SerialNumber:       57442D574341565930303934373837
Address:            sysfs:/sys/devices/pci0000:00/0000:00:1d.7/usb1/1-4//device:/dev/bus/usb/001/003
Current State:      Busy

Add a usb filter using the device details gleaned from the previous command:

VBoxManage usbfilter add 0 --target Windows --vendorid 1058 --productid 1100 --name "2TbExt" --active yes

The number after the "add" is the index number, if this is not the first device on the guest then adjust to the next "free" index. You can see the devices currently associated with a guest using this command:

VBoxManage showvminfo Windows

Finally enable usb for the guest:

VBoxManage modifyvm Windows --usb on

Now, after starting your VM, you should be able to see that the usb device is present in the virtual machine.

Note: If you have this working and it suddenly stops working, possibly after an update, check that you still belong to the vboxusers group. If you don't belong to that group then no devices will show, but there will be no "permission denied" errors that might help to explain why.

UPDATED: November 2018

Wednesday, 24 March 2010

Network lockup during heavy load

Since Karmic, the Realtek drivers for the 8169 card have become borked. If you try to copy large amounts of data your machine will hang and require a hard reset.

I got this solution from here;

1) Check to see if the r8169 module is loaded
lsmod | grep r816
r8169 41104 0

lspci -v
01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 03)
Subsystem: ASRock Incorporation Device 8168
Kernel driver in use: r8169
Kernel modules: r8169


2) Download the official Realtek driver
Realtek RTL8111/RTL8168

Update: I'm using the 8.017 driver but the current driver on that website is 8.018. It is possible that this version contains the same regression fault that causes this lock up behaviour as I experienced trouble after installing it. If you have trouble, download the older driver from here and add a comment to this post.

3) Remove the r8169 module
rmmod r8169
mv /lib/modules/`uname -r`/kernel/drivers/net/r8169.ko ~/r8169.ko.backup
(Note: the ` is a backtick, it is not an apostrophe or single quote )

4) Build the new r8168 module for the kernel
bzip2 -d r8168-8.009.00.tar.bz2
tar -xf r8168-8.009.00.tar
cd r8168-8.009.00
make clean modules
make install


5) Rebuild the kernel module dependencies
depmod -a
insmod ./src/r8168.ko


6) Remove the r8169 module from initrd
mv /initrd.img ~/initrd.img.backup
mkinitramfs -o /boot/initrd.img-`uname -r` `uname -r`


7) Add r8168 module to /etc/modules
echo "r8168" >> /etc/modules


Reboot, You are done!

Thursday, 12 March 2009

Disappearing network interfaces in Ubuntu Server

If you change network cards on Ubuntu Server you will find that the new cards no longer come up. This also occurs when you copy a VMware virtual machine.

To fix this, edit the persistent-net-rules file

sudo vi /etc/udev/rules.d/70-persistent-net.rules

You should see a line (plus a commented heading) for the affected interface(s) like this;
# PCI device 0x14e4:0x1659 (r8169)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:1c:c0:db:32:e7", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

(Forgive the formatting but blogger.com has a ridiculously narrow content column and long lines tend to disappear behind crap on the RHS of the page or wrap around and look like multiple lines)

Anyway, all you need to do is delete the line for the affected interface and reboot your system.
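If you'd rather script the deletion than edit the file by hand, a sed one-liner will do it; here is a sketch against a scratch copy (the MAC address is the example one from above):

```shell
# Create a scratch copy of a persistent-net rules file
cat > /tmp/70-persistent-net.rules <<'EOF'
# PCI device 0x14e4:0x1659 (r8169)
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:1c:c0:db:32:e7", NAME="eth0"
EOF

# Delete the rule line that mentions the old MAC address
sed -i '/00:1c:c0:db:32:e7/d' /tmp/70-persistent-net.rules
```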

Once the system has rebooted, the persistent-net-rules file will be repopulated with the details for the new interface and your Ethernet adapter will be working once again.