Ubuntu 8.04 “Hardy Heron” announced by Jono

On his jonobacon@home blog, Jono has announced the next release of Ubuntu, which is going to be the 2nd LTS release :-)

I am delighted to have the pleasure of announcing the Hardy Heron (Ubuntu 8.04), the next version of Ubuntu that will succeed Gutsy Gibbon (Ubuntu 7.10, due for release in October 2007). Not only will the Ubuntu community continue to do what it does best, produce an easy-to-use, reliable, free software platform, but this release will proudly wear the badge of Long Term Support (LTS) and be supported with security updates for five years on the server and three years on the desktop. We look forward to releasing the Hardy Heron in April 2008.

Read the full blog entry here

Incorrect resolution with usplash

If your monitor ever goes out of sync when you are meant to see usplash, here is how to fix the problem. In my case, my 19″ widescreen LCD can only handle a maximum resolution of 1440×900.

Edit the usplash config file:

sudo nano /etc/usplash.conf

and change the resolution to one your screen can handle, then save the file.
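For example, for my 1440×900 panel the file would contain the following (these values are just my panel's maximum; substitute whatever your screen can do):

```
xres=1440
yres=900
```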

To activate the changes:

sudo update-initramfs -u

All done :-)

Now when you next reboot you should see usplash on your screen.

Bad hard drive noise on shutdown (HDD Park)

This should only be used as a temporary workaround. It has been tested to work on an Acer Aspire 5601AWLMi laptop with an 80GB IDE hard drive, but this workaround may not work for everyone.

Create file:

sudo nano /etc/rc0.d/S00hdd-shutdown-workaround

which contains these two lines:

#!/bin/sh
echo 1 > /sys/class/scsi_disk/0\:0\:0\:0/stop_on_shutdown

Then make it executable:

sudo chmod +x /etc/rc0.d/S00hdd-shutdown-workaround

On the next shutdown the hard drive heads should be parked correctly.

If you have more than one directory under "/sys/class/scsi_disk/", add another line for each of them to the S00hdd-shutdown-workaround file.
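With several disks, the same idea can also be written as a loop instead of one line per disk. This is only a sketch: it assumes every disk exposes the same stop_on_shutdown attribute, and the helper takes an optional directory argument (my own addition) so it can be tried against a dummy tree first.

```shell
#!/bin/sh
# Set the stop-on-shutdown flag for every SCSI disk found.
# $1 optionally overrides the sysfs directory (default: /sys/class/scsi_disk).
stop_all_disks() {
    dir="${1:-/sys/class/scsi_disk}"
    for attr in "$dir"/*/stop_on_shutdown; do
        [ -e "$attr" ] || continue   # the glob matched nothing; skip
        echo 1 > "$attr"
    done
}
```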

Cloned desktop on i945 using the i810 driver

Here is what I have added to my xorg.conf file to get a cloned desktop working on my laptop.

In the
Section "Device"

these are the extra lines I added:

	Option		"MonitorLayout"		"CRT,LFP"
	Option		"Clone"			"True"
	Option		"DevicePresence"	"True"
	Option		"VBERestore"		"True"
	BusID		"PCI:0:2:0"
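Put together, the whole section looks roughly like this (the Identifier string and the i810 Driver line below are assumptions based on my setup; keep whatever identifier and driver your file already has):

```
Section "Device"
	Identifier	"Intel Corporation Mobile 945GM"
	Driver		"i810"
	BusID		"PCI:0:2:0"
	Option		"MonitorLayout"		"CRT,LFP"
	Option		"Clone"			"True"
	Option		"DevicePresence"	"True"
	Option		"VBERestore"		"True"
EndSection
```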

Remember to back up your xorg.conf file before playing with it:

sudo cp /etc/X11/xorg.conf /etc/X11/xorg.conf.backup


Generating DSA or RSA SSH Keys

From: original link
Generating Keys
The first step involves the generation of a set of DSA or RSA keys for use in authentication. Typically, you would do this on the machine you intend to use for logging into all other machines, but this does not matter too much, as you can always move the keys around to other machines as needed.

To generate a set of DSA or RSA public/private keys, use the following command:

ssh-keygen -t rsa


or

ssh-keygen -t dsa

You will be prompted for a location for saving the keys, and a passphrase for the keys. When choosing the passphrase, pick a very strong one, and remember it, or note it down in a secure place. This passphrase will be required every time you use the keys to log in to a key-based system:

Generating public/private rsa key pair.
Enter file in which to save the key (/home/username/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/username/.ssh/id_rsa.
Your public key has been saved in /home/username/.ssh/id_rsa.pub.
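As a side note, ssh-keygen can also be run non-interactively, which is handy in scripts: -f names the output file and -N sets the passphrase. The sketch below writes into a temporary directory so nothing in ~/.ssh is touched; the passphrase is a placeholder.

```shell
#!/bin/sh
# Generate a key pair without prompts; -q keeps ssh-keygen quiet.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -f "$keydir/id_rsa" -N 'a strong passphrase'
ls "$keydir"    # id_rsa and id_rsa.pub
```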

Locating the Keys on Remote Computers
Assuming the remote computers you wish to use the keys for are already running SSH daemons, placing the public portion of the key pair on those machines is quite simple. For example, if you’d like to begin using key-based logins as user username on a remote machine named host, and host is running sshd and reachable by name on your network, simply use the ssh-copy-id command to put your key in place:

ssh-copy-id -i ~/.ssh/id_rsa.pub username@host


or

ssh-copy-id -i ~/.ssh/id_dsa.pub username@host

Testing the Login
Next, you need to test the login, by attempting a connection to the machine and using your passphrase to unlock the key:

ssh username@host

You will be prompted for the passphrase for your key:

Enter passphrase for key '/home/username/.ssh/id_rsa':

Enter your passphrase, and provided host is configured to allow key-based logins, you should then be logged in as usual.
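Once key-based logins work, a Host alias in ~/.ssh/config saves typing the full user and host every time. A sketch (the alias name myserver is made up; username and host are the same placeholders as above):

```
Host myserver
    HostName host
    User username
    IdentityFile ~/.ssh/id_rsa
```

After that, ssh myserver does the same as ssh username@host.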

Mount a remote filesystem via sshfs

sshfs is a file system client based on the SSH File Transfer Protocol. Since most SSH servers already support this protocol it is very easy to set up: i.e. on the server side there’s nothing to do. On the client side mounting the file system is as easy as logging into the server with ssh.

Install SSH Server:

sudo apt-get update
sudo apt-get install ssh

NOTE: Throughout this part of the tutorial, always replace username with your server’s username, and host with the IP Address or domain of your server.

Test your SSH connection to the server:

ssh username@host

If your connection was successful, move on to the next step.

Install sshfs:

sudo apt-get update
sudo apt-get install sshfs
sudo modprobe fuse

Configure your user to be a member of the FUSE group:

sudo adduser username fuse
sudo chown root:fuse /dev/fuse
sudo chmod g+rw /dev/fuse

Because your user has been added to a new group, you must now log out and back into the system for the change to take effect. A reboot is not required.

When you have logged back in, create a mount point within your home folder. It is important to note that the mount point must be inside a folder owned by your user, so the safest place for it is your home directory.

mkdir ~/what_ever_you_like_to_call_this_directory

Let’s mount and test the remote file system:

sshfs username@host:/remote/dir/to/mount ~/what_ever_you_like_to_call_this_directory/

Now, if all was successful, your remote directory should be mounted. You should be able to type ls -lg in a terminal, or use your favorite file manager, such as Nautilus, to view the remote server mount point.

To unmount the remote file system:

fusermount -u ~/what_ever_you_like_to_call_this_directory/
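If you want the mount to come back without retyping the command, an sshfs entry can also live in /etc/fstab. This is only a sketch: username, host, and the paths are the same placeholders as above, and it assumes key-based logins are already set up so no password prompt gets in the way.

```
sshfs#username@host:/remote/dir/to/mount /home/username/what_ever_you_like_to_call_this_directory fuse user,noauto 0 0
```

With the user,noauto options, you can then mount it as your own user with a plain mount ~/what_ever_you_like_to_call_this_directory.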