Wednesday, November 7, 2007

Citrix su Command

Citrix states in their article: CTX753098

CTX753098 - 'su - ' Command Does not Bring Necessary $DISPLAY Variable with it

This document was published at: http://support.citrix.com/kb/entry.jspa?externalID=CTX753098


Document ID: CTX753098, Created on: Mar 30, 2001, Updated: Apr 23, 2003

Products: Citrix MetaFrame 1.1 for UNIX

MetaFrame for UNIX requires a $DISPLAY value to be in the format of unix:x.0, where x is the display number. This value is set initially at session login.
When user1 issues the command "su - user2", a new shell starts up as if the new user had initiated a new login session; the environment is deleted and reset to the login state. As a result, after the command "su - user2" is issued, $DISPLAY is incorrect.

To fix this, we need to create a couple of scripts that will:

1. Add user2 to the ACL of user1's X display.

2. Execute "su" with additional arguments.

3. Check for the current $DISPLAY variable and pass it to the new shell that the "su - user2" invokes.

Here is one possible solution:

Create a script called ctxsu and place it in /opt/CTXSmf/bin.

#!/bin/ksh
# Set the shell for this script
# Add su'd user to ACL of su-er's X display
/usr/openwin/bin/xhost +local:$1
# Print output of current $DISPLAY
echo $DISPLAY
# Invoke su with added arguments
su - $1 -c "/opt/CTXSmf/bin/ctxsudisp.sh $DISPLAY"

Make this Read and Execute for everyone.

Then create a script called ctxsudisp.sh and place it in /opt/CTXSmf/bin.

#!/bin/ksh
# Set the shell for this script
# Set and export the current display for su'd user
DISPLAY=$1;export DISPLAY
# Use the su'd user's own shell as a new shell
# This will allow 'exit' to user's prior shell
exec $SHELL

Make this Read and Execute for everyone.
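One way to set those permissions (a minimal sketch, assuming both scripts live in /opt/CTXSmf/bin as described above):

chmod 555 /opt/CTXSmf/bin/ctxsu /opt/CTXSmf/bin/ctxsudisp.sh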

Now, when you want to issue the command "su - user," call ctxsu as such:

ctxsu user

This gives the proper $DISPLAY along with login environment variables to the new shell.

Tuesday, November 6, 2007

Playing asx videos with mplayer

asx videos are just containers. You can download them using wget to see what the actual video file is (the URL XML tag will tell you) or you can use mplayer :

mplayer -playlist http://my.video.url/my/path/myfile.asx
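If you would rather inspect the container first, something along these lines should reveal the real stream URL (the file name and exact tag layout vary from site to site):

wget -O myfile.asx http://my.video.url/my/path/myfile.asx
grep -i 'href' myfile.asx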

Thursday, November 1, 2007

Solaris BSM Auditing

Hal Flynn 2000-11-27
(original page : http://www.securityfocus.com/infocus/1362)

# Introduction

When considering the security of a system we need to be concerned not only with which features and tools we use to implement the access restrictions, but also with what logging of access we do.

Logging is important for two main reasons: regular analysis of our logs gives us an early warning of suspicious activity and, if stored securely, they can provide the evidence required to find out what went wrong when a breach of the security policy occurs. Logging helps in other areas as well, such as verifying that our security policies are correctly implemented, and debugging, since auditing can report information pertinent to our security model. Solaris provides a rich logging system available as part of the core OS in the form of SunSHIELD BSM Auditing. This is one of the most powerful security features that Solaris provides out of the box, yet it is probably the least understood and least used.

This article will give an overview of what Solaris BSM auditing can do and will give some examples of implementing common auditing policies. In a future article we will cover how to use auditing as an application and kernel developer.
# What is BSM Auditing?

So, what is BSM auditing? First, it is not syslog. The audit trail is written to binary files on the local system (or NFS mounts). The system provides two utilities for filtering (auditreduce) and viewing (praudit). Audit records are initiated from two distinct places in Solaris: privileged user land programs (such as login) and the Solaris kernel. All security sensitive kernel system calls will generate an audit record when BSM auditing is enabled.

The following user land programs in Solaris can write audit records:


/bin/login (rlogind, rshd, telnetd)
/usr/bin/su
/usr/bin/newgrp
/usr/dt/bin/dtlogin
/usr/sbin/in.ftpd
/usr/sbin/rexd
/usr/sbin/in.uucpd
/usr/bin/passwd
/usr/sbin/allocate
/usr/sbin/deallocate
/usr/sbin/mountd
/usr/sbin/crond
/usr/sbin/init
/usr/sbin/halt
/usr/sbin/uadmin

Adminsuite v2.3 and v3.0 also do auditing via BSM for account and host maintenance.
# Basic Configuration of Auditing

The first step in using BSM auditing is enabling the kernel support and ensuring auditd is started at boot time. To do this you need to run /etc/security/bsmconv (either as root or a user that has been given the Audit Control RBAC profile in Solaris 8) since auditing is not enabled in the default Solaris installation. Running bsmconv not only enables auditing but also sets up device allocation (which disables vold(1M)) and disables Stop-A. It disables Stop-A by putting "set abort_enable = 0" into /etc/system. If you don't wish Stop-A to be disabled or if you have already done this by updating /etc/default/kbd (Solaris 7 and above), you can remove this line from /etc/system. If you don't want to use device allocation and want vold(1M) to continue to run then move /etc/security/audit/spool/S92volmgt back to /etc/rc2.d/.

After bsmconv has been run the system needs to be rebooted so that the c2audit module is properly loaded and the internal audit settings and structures are set up. Before rebooting it is wise to set up the /etc/security/audit_control file to say what auditing we want - this file can be updated without a reboot, but it is good practice to set it now, before the first reboot.

The main configuration file for auditing is /etc/security/audit_control; in this file we set which classes of events should generate audit records and where those records should go.
# Example: "Login" Events

To record the "login" events for all users add the class `lo` to the "flags:" line of /etc/security/audit_control - don't worry about the other lines in there just now we will come back to those later. The login events are created by login (telnet, rsh, rlogin), dtlogin, in.ftpd, su, rexd, in.uucpd. For example:


dir:/var/audit
flags: lo
minfree: 20
naflags: lo

An example successful event for a remote login from hepcat:


header,81,2,login - rlogin,,Wed Aug 27 09:46:53 1997, + 511485295 msec
subject,darrenm,darrenm,techies,darrenm,techies,10100,10100,24 5 hepcat
text,successful login

An example failed login event when coming in via ftp from netwon:


header,77,2,ftp access,,Wed Sep 03 16:56:30 1997, + 712178483 msec
subject,darrenm,darrenm,techies,darrenm,techies,1200,1200,0 20 newton
text,bad password
return,failure,1

Including `lo` on the flags line will log these events regardless of whether they succeeded or failed; if we only want to log failures then, as with all classes, we put a - in front of the class name.
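For example, to record only failed login events the flags line would read:

flags: -lo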

It is good practice to have both success and failure login events in the audit trail regardless of what we audit, as this will help to provide context to us humans for everything else we look at in the audit trail.
# Example: Logging All Commands

Some sites require as part of their security policy that all of the commands run by a user are logged. There are many insecure solutions touted for this requirement that involve using the shell .history file, putting it in a place the user can't see, etc. The only sure way to do this is to intercept the execve(2) call and log at that point; this is what BSM auditing does when we turn on the class `ex`. The events get logged by the kernel implementation of execve(2), so no changing of LD_LIBRARY_PATH or other user configurables can bypass this.

An example audit record for the class `ex`:


header,103,2,execve(2),,Thu Jun 25 11:39:32 1998, + 52420844 msec
path,/usr/bin/ls
attribute,100555,bin,bin,8388608,0,0
subject,darrenm,root,other,root,other,8722,408,0 0 braveheart
return,success,0

This shows that the user darrenm ran /usr/bin/ls as root on the host braveheart on 25 Jun 1998.

By default only the command is logged to the BSM audit trail. If you wish to have the command arguments logged as well, you need to change a policy in the audit system. To do this, run:


# auditconfig -setpolicy +argv

This instructs auditing to log the arguments to commands. It takes effect immediately, but to ensure that it is set on each system boot add the same line to /etc/security/audit_startup. The environment variables in effect at the time can also be logged by adding the +arge policy, though this is less useful (see auditconfig(1M) for more details on audit policies). Note that this is the command line as seen by the execve(2) system call and may not reflect exactly what the user typed on the command line because of shell matching and globbing.
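To make the argv policy survive a reboot, the line added to /etc/security/audit_startup is simply the same auditconfig call, for example:

/usr/sbin/auditconfig -setpolicy +argv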

Often sites wish to log all of the commands that the root user runs, believing that this gives them a trail of everything that went on. There is a big caveat here: if you have root access to a host, you can turn off auditing - doing so does generate an event, but the logs can always be modified or destroyed. If this is an important part of your site security policy, you should look into using write-once media for storing the audit log files. You should also consider having a warning system external to Solaris that detects when the write-once media is disconnected from the OS.

The /etc/security/audit_event file and the audit_control(4) man page describe the other audit classes available. Turning on classes such as fr for file reads will generate a lot of audit data even on a system with low usage. It is not possible to audit access to only specific files in Solaris, but auditreduce can filter the audit trail to show only the files you are interested in.

A recommended minimum set of classes is lo, ad, na: login/logout events (lo), admin events (ad) such as filesystem mounts and creation of users, and non-attributable events (na). The latter include Stop-A, which we can't attribute to any particular user.
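Put together, an audit_control implementing that recommended minimum might look like this (the directory and minfree values are just illustrative):

dir:/var/audit
flags: lo,ad
naflags: lo,na
minfree: 20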
# Configuring Auditing On a Per User Basis

So far we have dealt with auditing at the system level so all users are audited equally. For reasons of disk space or organization policy it is sometimes necessary to have a different policy for particular users. Setting flags in the audit_control file applies them for all users on the system. To set audit flags for selected users we use the /etc/security/audit_user file. The file has the following format:


username:always audit flags:never audit flags
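For example, to always audit alice's logins and command executions but never her file reads, a (purely illustrative) entry would be:

alice:lo,ex:fr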


As of Solaris 8, audit_user(4) can be stored in the nameservice - note that audit_user is normally not world-readable, so storing it in a nameservice may reveal important system policy information that would not be exposed when using files. The audit_user source is not listed explicitly in nsswitch.conf but follows the same search order as used for the passwd entry.
# Audit trail analysis

To convert the binary audit trail into human readable ASCII, the praudit(1M) command is used. praudit uses the information from the audit_event and audit_class files together with data from the nameservice (via getXbyY() calls) to turn raw events, uids, IP addresses etc into text. It is important to remember that the binary audit trail stores raw uids, gids and IP addresses, so if uids or IP addresses are later reused for different users or hosts, some further context from your own change-administration records may be required to identify the correct user or host. In general, I recommend never to reuse uids or gids. There is plenty of space for unique values for everyone who ever comes through your organization.

praudit has a few basic options that determine single or multi-line display and delimiters but provides no mechanism for choosing which events get displayed. Choosing the events is done using auditreduce(1M). The auditreduce(1M) command is often thought of as performing two distinct functions: 1) audit record selection and 2) audit trail management. In actuality, these are one and the same: auditreduce takes binary audit trail(s) as its input and generates a new binary audit trail as the output.

If we are using auditreduce to get a selection of audit records, such as the commands run by a user during a given time period, and we want to display those records, then we would probably use the output of auditreduce and pipe it directly to praudit. For example, to find all of the login events for user alice in October 2000:


# auditreduce -a 20001001 -b +31d -u alice -c lo | praudit


This tells auditreduce to start processing records from 1 October 2000 and to stop 31 days later (so until the end of 31 October); this forms a range. We then specify the username using -u and finally the audit class of login events, `lo`. If we didn't pipe this to praudit we would get a binary audit trail on standard output, or in the file specified using the -O flag.

So where did the input data come from? Unless instructed otherwise, auditreduce will read all of the audit files under /etc/security/audit and process each one looking for records that match the required selection criteria.

Using auditreduce multiple times to drill down can be very effective: instead of translating the output with praudit, keep the binary output of each search and feed it back into auditreduce. Rather than give lots of further examples, I suggest reading over the auditreduce(1M) man pages to get a feel for what you can filter on. The important thing to understand at this point is that auditreduce conducts selection of records and praudit simply displays them in a human readable form.
# Managing the Audit Trails

Audit records are actually written to files by the kernel rather than directly by the process being audited. There is a userland daemon, auditd(1M), that tells the kernel which file to write to and does basic management to ensure that there is space to write the audit records.

The name of the current audit file is in /etc/security/audit_data. This file has two ":"-separated fields: the first is the PID of auditd, the second is the full path name of the active audit file. For example:


# cat /etc/security/audit_data
431:/etc/security/audit/talisker.0/files/20001031192753.not_terminated.talisker


The location of the files is determined by the "dir" entries in /etc/security/audit_control. For example:


dir:/etc/security/audit/talisker.0/files
dir:/etc/security/audit/talisker.1/files
minfree: 20
flags: lo,-ex,-ad
naflags: lo,ex,ad


We mentioned in the previous section that auditreduce(1M) looks for audit trail files under /etc/security/audit if no files are given on the command line. For this reason it is recommended to have all filesystems that are used for audit files mounted under /etc/security/audit. Using the name of the host followed by a number is the normal practice, but anything that means something to you can be used. Note that it is possible to use NFS-mounted directories, but they will have to be shared with root access for the client doing the auditing, since auditd(1M) runs as uid 0. This is different from the audit system in SunOS 4.x, where auditd ran as the audit user; the reason for the change was to support Secure NFS-mounted filesystems for auditing (there is no way to ensure that the key for the audit user would be in keyserv at startup, so root is used instead).

On startup, or when instructed to start a new audit file by running audit -n, auditd(1M) will create the file in the first listed directory with at least minfree percentage space available. Timestamps are of the form %Y%m%d%H%M%S (as defined in date(1)). The phrase "not_terminated" in place of an end timestamp means that auditd has not closed this file. In normal operation, there is only one "not_terminated" file per host, but if the machine should panic or lose power it is unlikely that auditd would have a chance to close and rename the file.

The following example shows the audit files for the host talisker between October 30th and November 19th. Note that the timestamps are in GMT, not local time, so you may see "future" dates for the time zone you live in.


20001030225810.20001031192753.talisker
20001031192753.20001119043210.talisker
20001119043210.not_terminated.talisker


Best practice dictates that each directory listed in the audit_control file should be a separate filesystem and should be used only for audit records. Since it is a normal UFS filesystem, you can use logical volume management software to mirror the filesystems and protect your audit data and your normal backup software to preserve and archive the data.

Once a directory has less than minfree percentage space remaining, auditd will start a new audit file in the next directory that still has more than minfree percent free. On doing this it runs the /etc/security/audit_warn script, which sends an email to the members of the audit_warn alias. When each filesystem listed has reached minfree it will then start back at the first and fill it until no space is available - a further warning will be sent by audit_warn saying that the hard limit has been reached.

Once all filesystems have been filled another audit policy comes into play. At this point one of two things can be done. By default, the audit records that can't be written are dropped, but a count is kept, and when space is available an entry saying how many audit records were lost is written. For some sites this is not acceptable and it is better to stop the system from functioning until space is available. To change this policy from the default of counting to suspending processing, run:


# auditconfig -setpolicy -cnt


To make this the default, remove the line in /etc/security/audit_startup that has -setpolicy +cnt.

Note that auditconfig(1M) says that suspending is the default. This is true; however, the configuration setup by running bsmconv adds an entry to audit_startup that sets the count policy so the default for Solaris is actually to count rather than suspend.
# Summary

This article has discussed the basic setup of SunSHIELD BSM auditing and basic analysis and management of the audit trail. The configuration of BSM, details of a working configuration, and management of the configuration were covered. And finally, links were provided to further the knowledge and sharpen the learning curve of the reader.

Wednesday, October 31, 2007

Hook up a serial port and configure minicom

find a linux box close by and hook the db9(m) to db25(m) null modem cable up to the port marked 'A' on the back of the machine
run minicom on the linux box
press ^A, then z
press o
-> Serial port setup
you will most likely have to change the serial port to /dev/ttyS0, so press a and do so
press e and make the bps/par/bits 9600 8n1
enter
-> Save setup as dfl
^A then q, to quit without reset
yes
run minicom again
^A then t, press b to make backspace send DEL

press enter, you should see a prompt
type ^A, f to send a break, you should see the solaris "{0} ok" prompt
ensure you have backed up your data, and type:
boot cdrom
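For reference, the "Save setup as dfl" step above writes minicom's defaults (typically /etc/minirc.dfl or ~/.minirc.dfl); the relevant lines should look roughly like this:

pu port             /dev/ttyS0
pu baudrate         9600
pu bits             8
pu parity           N
pu stopbits         1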


(key words : minicom, linux, solaris, serial, port, console, terminal)

Thursday, October 25, 2007

Converting Realvideo files to avi

In this article, I am going to show how to convert a rmvb file to an avi file (mpeg4 video + mp3 audio)

Tools
1. mplayer
2. mencoder
3. essential codecs for mplayer

note: to install the above tools, please take a look at http://stanton-finley.net/fedora_co...tion_notes.html

File
1. rmvb file: in.rmvb
2. avi file: out.avi

Information for the avi file
Video
format: mpeg4
bitrate:1200 kb/s
fps: 25 fps
Audio
format: mp3
bitrate: 128 kb/s

Command
mencoder in.rmvb -oac mp3lame -lameopts preset=128 -ovc lavc -lavcopts vcodec=mpeg4:vbitrate=1200 -ofps 25 -of avi -o out.avi

Explanation
-oac: output audio codec
mp3lame: library used for audio encoding
-lameopts: options used along with lame
preset: values for audio bitrate, you can set 64, 128, 224, etc
-ovc: output video codec
lavc: library used for video encoding
-lavcopts: options used along with lavc
vcodec: video codec, you can use mpeg1video, mpeg4, etc
vbitrate: video bitrate, you can set 600, 1000, 1200, etc
-ofps: output frames per second (fps)
-of: output file container type
-o: output filename

Mencoder is a powerful tool for converting multimedia; as in the example above, we can use it to convert rmvb to avi. With suitable libraries and codecs, we can even use it to convert formats like rm, wmv, etc.
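For instance, converting a wmv file with the same settings would just mean changing the input name (an untested sketch; your mencoder build needs the matching codecs):

mencoder in.wmv -oac mp3lame -lameopts preset=128 -ovc lavc -lavcopts vcodec=mpeg4:vbitrate=1200 -ofps 25 -of avi -o out.avi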

Tuesday, September 18, 2007

Adding a perl module

Adding a perl module from the command line:


perl -MCPAN -e "install Unicode::MapUTF8"
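A quick (illustrative) way to check that the module is now usable:

perl -MUnicode::MapUTF8 -e 'print "ok\n"'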

Can’t create home directory Solaris 9

Can’t create home directory

On a fresh install of Solaris, I got this error when I was trying to manually create a user’s home directory :
# cd /home
# mkdir dirabc
mkdir: Failed to make directory “dirabc”; Operation not applicable
I can't create the /home/dirabc directory because by default the automounter manages the /home directory. To resolve this, just comment out any line regarding /home in these two files :

/etc/auto_home
/etc/auto_master

Restart automounter (or reboot the system!) and you should be able to manually control the /home directory.
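For reference, the line to comment out in /etc/auto_master usually looks like this:

#/home          auto_home       -nobrowse

After editing, running /usr/sbin/automount as root applies the change without a reboot.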

Monday, August 27, 2007

Removing all blank space in filenames

Here is a simple script to remove blank space in filenames in a directory tree. One could definitely use that script to change other characters too.
You can use it as a script or a single command line.

IFS=$'\n';for files in `find . -type f`; do NewFile=`echo $files | sed 's/ /_/g'`; mv "$files" $NewFile; done;unset IFS

Tuesday, August 21, 2007

Recovering from a hard disk crash on Sun servers

A server was set up with a mirror of 2 internal disks. This mirror was created using Solaris 9's built-in disk management tool: DiskSuite.
Following a reboot, one of the 2 disks died and it was the primary boot disk ( of course ! ). So the server was not able to boot. We had to find a way of booting from the second disk.
At the ok prompt we created a new alias :
nvalias disk2 /pci@1f,4000/scsi@3/disk@0,0
then
boot disk2
and the server booted. The challenge here was to find the correct syntax for the device. So we typed in :
show-devs
We saw 2 disks. From another identical server, we typed in /usr/sbin/prtconf -vp :
Node 0xf002ce38
screen: /pci@1f,2000/TSI,gfxp@1
net: /pci@1f,4000/network@1,1
disk: /pci@1f,4000/scsi@3/disk@0,0
cdrom: /pci@1f,4000/scsi@3/disk@6,0:f
tape: /pci@1f,4000/scsi@3,1/tape@4,0
tape1: /pci@1f,4000/scsi@3,1/tape@5,0
tape0: /pci@1f,4000/scsi@3,1/tape@4,0
disk6: /pci@1f,4000/scsi@3/disk@6,0
disk5: /pci@1f,4000/scsi@3/disk@5,0
disk4: /pci@1f,4000/scsi@3/disk@4,0
disk3: /pci@1f,4000/scsi@3/disk@3,0
disk2: /pci@1f,4000/scsi@3/disk@2,0
disk1: /pci@1f,4000/scsi@3/disk@1,0
disk0: /pci@1f,4000/scsi@3/disk@0,0
scsi: /pci@1f,4000/scsi@3
floppy: /pci@1f,4000/ebus@1/fdthree
ttyb: /pci@1f,4000/ebus@1/se:b
ttya: /pci@1f,4000/ebus@1/se:a
keyboard!: /pci@1f,4000/ebus@1/su@14,3083f8:forcemode
keyboard: /pci@1f,4000/ebus@1/su@14,3083f8
mouse: /pci@1f,4000/ebus@1/su@14,3062f8
name: 'aliases'

This is where we found the string for the nvalias command.

A nice link for all openboot commands is: http://www.adminschoice.com/docs/open_boot.htm

Thursday, August 9, 2007

websm on Fedora core 6

I recently installed FC6 on my linux workstation. Everything was fine until I tried running IBM websm management console.
I got the following error message :
java full version "J2RE 1.4.2 IBM build cxia32142-20050609"
+ java -Xbootclasspath/p:auiml/xerces.jar -Xms20m -Xmine4m -Xmx512m -DWEBSM_NO_REMOTE_CLASS_LOADING=false -DWEBSM_NO_SECURITY_MANAGER=false -Djava.security.policy=../config/websm.policy '-Dawt.appletWarning=Remote class Window' -DWEBSM_ALL_PERMISSIONS_FOR_SECURE=true -DWSMDIR=/opt/websm com.ibm.websm.console.WConsole
java.lang.ExceptionInInitializerError
at com.ibm.websm.etc.EImageCache._init(EImageCache.java:164)
at com.ibm.websm.etc.EImageCache.init(EImageCache.java:255)
at com.ibm.websm.etc.EImageCache.(EImageCache.java:152)
at com.ibm.websm.bridge.WSessionMgr.(WSessionMgr.java:185)
at com.ibm.websm.bridge.WSessionMgr.(WSessionMgr.java:217)
at com.ibm.websm.bridge.WSessionMgr.getSessionMgr(WSessionMgr.java:241)
at com.ibm.websm.gevent.GEventSupport.(GEventSupport.java:34)
at com.ibm.websm.gevent.GEventSupport.doSetup(GEventSupport.java:97)
at com.ibm.websm.gevent.GEventSupport.addEventListener(GEventSupport.java:115)
at com.ibm.websm.diagnostics.IDebug.Setup(IDebug.java:789)
at com.ibm.websm.diagnostics.IDebug.enabled(IDebug.java:1110)
at com.ibm.websm.diagnostics.Diag.(Diag.java:53)
at com.ibm.websm.console.WConsole.main(WConsole.java:1641)
Caused by: java.lang.NullPointerException
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:2171)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:2006)
at java.lang.Runtime.loadLibrary0(Runtime.java:824)
at java.lang.System.loadLibrary(System.java:908)
at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:76)
at java.security.AccessController.doPrivileged1(Native Method)
at java.security.AccessController.doPrivileged(AccessController.java:287)
at java.awt.Toolkit.loadLibraries(Toolkit.java:1488)
at java.awt.Toolkit.(Toolkit.java:1511)
... 13 more


I don't know exactly what was wrong but it had something to do with my java installation. So I edited the file /opt/websm/bin/wsm and did the change below :

BEFORE
#Set new path to WEBSM
export PATH=../_jvm/bin:../bin:/bin:/usr/bin:/opt/kde3/bin
echo PATH = $PATH

AFTER
#Set new path to WEBSM
export PATH=/usr/java/jre/bin:../_jvm/bin:../bin:/bin:/usr/bin:/opt/kde3/bin
echo PATH = $PATH

Then I ran it: ./wsm and got the following message:
USING JAVA:
java full version "1.5.0_10-b03"
+ java -Xbootclasspath/p:auiml/xerces.jar -Xms20m -Xmine4m -Xmx512m -DWEBSM_NO_REMOTE_CLASS_LOADING=false -DWEBSM_NO_SECURITY_MANAGER=false -Djava.security.policy=../config/websm.policy '-Dawt.appletWarning=Remote class Window' -DWEBSM_ALL_PERMISSIONS_FOR_SECURE=true -DWSMDIR=/opt/websm com.ibm.websm.console.WConsole
Unrecognized option: -Xmine4m
Could not create the Java virtual machine.


So I edited the file again and took out the offending JVM option:
BEFORE
java ${BOOTPATH} -Xms$W_HEAP_MIN_SIZE -Xmine$W_HEAP_INC_SIZE -Xmx$W_HEAP_MAX_SIZE $ACCESSIBLE -DWEBSM_NO_REMOTE_CLASS_LOADING=$WNRCL -DWEBSM_NO_SECURITY_MANAGER=$WNSM $WSMSSL -Djava.security.policy=../config/websm.policy -Dawt.appletWarning="Remote class Window" -DWEBSM_ALL_PERMISSIONS_FOR_SECURE=true -DWSMDIR="$WSMDIR" com.ibm.websm.console.WConsole

AFTER
java ${BOOTPATH} -Xms$W_HEAP_MIN_SIZE -Xmx$W_HEAP_MAX_SIZE $ACCESSIBLE -DWEBSM_NO_REMOTE_CLASS_LOADING=$WNRCL -DWEBSM_NO_SECURITY_MANAGER=$WNSM $WSMSSL -Djava.security.policy=../config/websm.policy -Dawt.appletWarning="Remote class Window" -DWEBSM_ALL_PERMISSIONS_FOR_SECURE=true -DWSMDIR="$WSMDIR" com.ibm.websm.console.WConsole

I can now run websm on my workstation.

Tuesday, July 17, 2007

Configuring kvpnc

Here are the configuration options I use. Everything else should be left unselected:

Profile
- General
-- Advanced
---- Enable advanced settings
---- Perfect Forward secrecy (PFS): server

- Authenticate
-- User data
---- Username : put your username
---- Password : put your password
---- Save user password
-- PSK
---- Save PSK
---- Pre shared key: Company pre-shared key

-- Network
--- General
---- Use connection status check
------ Interval : 1
------ Success count : 4
---- Use specified address to ping : put your workstation IP
--- Routes
---- Replace default route
--- NAT
---- Use UDP (NAT-T)

I sometimes get a timeout on the VPN server, but if I retry it works.

Debugging LIRCD

Here is some information I gathered on a working machine:

lirc stuff works like this :
Kernel module gets IR stuff -> /dev/lirc
lircd reads /dev/lirc and uses /etc/lircd.conf to get /dev/lircd
irw attaches to /dev/lircd
mode2 attaches to /dev/lirc
The problem is that, I think, only one program can attach to /dev/lirc, but thankfully
lircd will accept multiple connections

[root@moon dev]# ls -l lirc*
lrwxrwxrwx 1 root root 5 Mar 18 10:11 lirc -> lirc0
crw------- 1 root root 61, 0 Mar 18 10:11 lirc0
srw-rw-rw- 1 root root 0 Mar 18 14:29 lircd
prw-r--r-- 1 root root 0 Mar 18 10:11 lircm

[root@moon dev]# lsmod | grep -i lirc
lirc_pvr150 19136 5
lirc_dev 12708 1 lirc_pvr150
ivtv 175760 5 lirc_pvr150
i2c_core 22209 10 lirc_pvr150,wm8775,cx25840,tda9887,tuner,ivtv,i2c_algo_bit,tveeprom,nvidia,i2c_i801

[root@moon dev]# mode2
code: 0x1794
code: 0x1f95
code: 0x1797
code: 0x1f96
code: 0x1f96
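Similarly, once lircd is running, irw attaches to /dev/lircd and prints one line per decoded key press (scancode, repeat count, key name, remote name). The names below are purely illustrative; they come from whatever is defined in your lircd.conf:

[root@moon dev]# irw
0000000000001794 00 Play my_remote
0000000000001795 00 Up my_remote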

an interesting discussion can be found here: http://www.nabble.com/No--dev-lirc-when-upgrading-from-FC6-from-FC5.-t3167625.html

Compiling Cisco vpnclient-linux-x86_64-4.8 on Fedora Core 6 x86_64 kernel 2.6.18/2.6.19

Thanks a lot to Amit who found the following solution ( http://blog.360.yahoo.com/blog-.WURHFYwdq8.zfEosWC6j8jQ?p=55 )

Unfortunately the cisco vpnclient-linux-x86_64-4.8 will not compile with the kernel (2.6.18) that comes with Fedora Core 6, nor with 2.6.19 (latest stable from http://kernel.org), on either x86 or x86_64.

Error(s) you get while compiling

#./vpn_install
Cisco Systems VPN Client Version 4.8.00 (0490) Linux Installer
Copyright (C) 1998-2005 Cisco Systems, Inc. All Rights Reserved.
By installing this product you agree that you have read the
license.txt file (The VPN Client license) and will comply with
its terms.
Directory where binaries will be installed [/usr/local/bin]
Automatically start the VPN service at boot time [yes]
In order to build the VPN kernel module, you must have the
kernel headers for the version of the kernel you are running.
Directory containing linux kernel source code [/lib/modules/2.6.19-1.meaks/build]
* Binaries will be installed in "/usr/local/bin".
* Modules will be installed in "/lib/modules/2.6.19-1.meaks/CiscoVPN".
* The VPN service will be started AUTOMATICALLY at boot time.
* Kernel source from "/lib/modules/2.6.19-1.meaks/build" will be used to build the module.
Is the above correct [y]
Making module
make -C /lib/modules/2.6.19-1.meaks/build SUBDIRS=/home/amitkr/setups/vpnclient modules
make[1]: Entering directory `/home/amitkr/kernel/linux-2.6.19-1.meaks/build'
make[1]: Warning: File `Makefile' has modification time 1e+04 s in the future
make -C /home/amitkr/kernel/linux-2.6.19-1.meaks O=/home/amitkr/kernel/linux-2.6.19-1.meaks/build modules
CC [M] /home/amitkr/setups/vpnclient/interceptor.o
In file included from /home/amitkr/setups/vpnclient/Cniapi.h:15,
from /home/amitkr/setups/vpnclient/interceptor.c:30:
/home/amitkr/setups/vpnclient/GenDefs.h:110:2: warning: #warning 64 bit
/home/amitkr/setups/vpnclient/interceptor.c: In function handle_vpnup:
/home/amitkr/setups/vpnclient/interceptor.c:310: warning: assignment from incompatible pointer type
/home/amitkr/setups/vpnclient/interceptor.c:334: warning: assignment from incompatible pointer type
/home/amitkr/setups/vpnclient/interceptor.c:335: warning: assignment from incompatible pointer type
/home/amitkr/setups/vpnclient/interceptor.c: In function do_cleanup:
/home/amitkr/setups/vpnclient/interceptor.c:378: warning: assignment from incompatible pointer type
/home/amitkr/setups/vpnclient/interceptor.c: In function recv_ip_packet_handler:
/home/amitkr/setups/vpnclient/interceptor.c:553: error: CHECKSUM_HW undeclared (first use in this function)
/home/amitkr/setups/vpnclient/interceptor.c:553: error: (Each undeclared identifier is reported only once
/home/amitkr/setups/vpnclient/interceptor.c:553: error: for each function it appears in.)
/home/amitkr/setups/vpnclient/interceptor.c:557: error: too many arguments to function skb_checksum_help
/home/amitkr/setups/vpnclient/interceptor.c: In function do_cni_send:
/home/amitkr/setups/vpnclient/interceptor.c:680: error: CHECKSUM_HW undeclared (first use in this function)
/home/amitkr/setups/vpnclient/interceptor.c:683: error: too many arguments to function skb_checksum_help
make[4]: *** [/home/amitkr/setups/vpnclient/interceptor.o] Error 1
make[3]: *** [_module_/home/amitkr/setups/vpnclient] Error 2
make[2]: *** [modules] Error 2
make[1]: *** [modules] Error 2
make[1]: Leaving directory `/home/amitkr/kernel/linux-2.6.19-1.meaks/build'
make: *** [default] Error 2
Failed to make module "cisco_ipsec.ko".


What is the Problem?


This is because of the following:

[1] linux/config.h no longer exists; with old kernels it used to be generated as ${KSRCPATH}/build/include/linux/config.h after doing a make O=build menuconfig.

[2] For kernel 2.6.19 things are even worse: the CHECKSUM_HW macro does not exist any more in the kernel headers.

[3] The function skb_checksum_help(skb), as declared and defined in the kernel source, has been changed to take only a single argument.

Solution

So, to compile the cisco vpn 4.8 x86_64 you need to:
[1] make a symlink to autoconfig.h as config.h
$ cd ${KSRCPATH}/build/include/linux
$ ln -s autoconfig.h config.h
This will solve the first problem

[2] the macro CHECKSUM_HW needs to be replaced with CHECKSUM_COMPLETE in the file ${VPNCLIENT}/interceptor.c

[3] edit the files calling skb_checksum_help(), look for the LINUX_VERSION_CODE check that matches your kernel version (2.6.19) and remove the second argument from skb_checksum_help() so that the call looks like skb_checksum_help(skb).
The macro looks something like this

KERNEL_VERSION(2,6,10)

You can find the changes between kernel releases in the kernel ChangeLog.
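For step [2], a one-liner along these lines should apply the macro replacement (a hedged sketch; run it from the vpnclient source directory and keep a backup of interceptor.c first):

cp interceptor.c interceptor.c.orig
sed -i 's/CHECKSUM_HW/CHECKSUM_COMPLETE/g' interceptor.c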

Create a dedicated user for mysql backup script

Here is how you create a backup user that will be used in backup scripts :

[root@MyServer /]# mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 89061 to server version: 4.1.12

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> GRANT SELECT, SHOW DATABASES, LOCK TABLES ON *.* TO 'backup'@'localhost' IDENTIFIED BY 'MyPasswd';
Query OK, 0 rows affected (0.00 sec)


mysql> exit
Bye
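That account can then be used non-interactively in a backup script, for example (the output path is just a placeholder):

mysqldump -u backup -pMyPasswd --all-databases --lock-tables > /backup/mysql_$(date +%Y%m%d).sql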

Changing a user's passwd with mysql command line

Using PHPMyAdmin greatly simplifies MySQL database administration, but there are times when this software is not available...

So to change a user's password from the commandline, follow this procedure :
[root@MyServer /]# mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 89057 to server version: 4.1.12

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> set password for 'root'@'localhost' = PASSWORD('MyNewPassword');
Query OK, 0 rows affected (0.05 sec)

Converting video_ts and audio_ts to an iso image

You have your DVD structure on your hard disk and you want to convert it to an iso image file, here is what you do :
mkisofs -pad -J -R -o YourDVD.iso -graft-points "/AUDIO_TS=/path/to/AUDIO_TS" "/VIDEO_TS=/path/to/VIDEO_TS"

Mount Bin/Cue files in Linux

It’s really easy to convert bin/cue files into an ISO file, and then mount the ISO in linux.

yum install bchunk

The syntax from bchunk is as follows:
bchunk [-v] [-p] [-r] [-w] [-s] <image.bin> <image.cue> <basename>

So if I wanted to convert image.bin and image.cue into image.iso, I'd run the command:
bchunk image.bin image.cue image.iso

Then to mount the ISO in linux you run the command:
mount -o loop -t iso9660 image.iso /mnt/image, where image.iso is the image that you want to mount and /mnt/image is the mount directory.

NX client and linux

In order to install nxclient for linux we need :
yum install compat-libstdc++-296

Setting up NFS on Solaris

Setup the server:

Edit /etc/dfs/dfstab :

share -F nfs -o root=client_nfs.domain.com -d "Backup FS" /raid5

Note : By default root is assigned nobody permissions on an NFS share. To enable root permission on the share, we need to specify root= in the share options.

Start the nfs server :

# /etc/init.d/nfs.server start

#

Check if the share is effective :

# showmount -e

export list for server_nfs:

/raid5 @204.19.29.0

#

To unshare :

# unshare /raid5

Setup the client:

mount server_nfs:/raid5 /nfs

Check the mount :

root@client_nfs:nfsstat -m

/nfs from server_nfs:/raid5

Flags: vers=3,proto=tcp,sec=sys,hard,intr,link,symlink,acl,rsize=32768,wsize=32768,retrans=5

root@client_nfs:

Setup automatic mount at boot time :

Edit /etc/vfstab and add the following line :

columbia:/raid5 - /nfs nfs - yes -

Availability tests

Case 1 :

Initial state : NFS Server is UP, NFS client has a mounted share.

Observation : Everything works perfectly.

Failure : NFS Server is DOWN, NFS client still has a mounted share.

Observation : The NFS client still works fine. The share is not accessible; when we try to access it, the command hangs. No impact on the rest of the system.

Recovery : NFS Server is brought back UP.

NFS client automatically finds the share. We can use it. Everything is automatic.

Case 2 :

Initial state : NFS Server is UP, NFS client has a mounted share.

Observation : Everything works perfectly.

Failure : Client is DOWN

Recovery : Client is brought back UP .

Observation : The client is able to use the share. Everything is back to normal.

Case 3 :

Initial state : NFS Server is UP, NFS client has a mounted share.

Observation : Everything works perfectly.

Failure : NFS Server AND NFS client shutdown due to a power loss.

Recovery : NFS client goes back up first, while the NFS Server is still down.

NFS client goes back UP and is ready. We can access the shared directory but nothing is there.

NFS Server is brought back UP. The NFS client automatically remounts the share and can access it.

Everything is back to normal.

Monday, July 16, 2007

Preparing DVD from RHEL 4 installation CDs

Hi,

Ever wondered how to create a DVD from the 4 RHEL 4 installation CDs? Well, Chris Kloiber seems to have put together a script that works very well on Fedora Core 5.

Requirements :
The installation ISO CD images of RHEL (but I suspect this would work with any distribution)

Here is the script :

#!/bin/bash

# by Chris Kloiber <ckloiber@redhat.com>

# A quick hack that will create a bootable DVD iso of a Red Hat Linux
# Distribution. Feed it either a directory containing the downloaded
# iso files of a distribution, or point it at a directory containing
# the "RedHat", "isolinux", and "images" directories.

# This version only works with "isolinux" based Red Hat Linux versions.

# Lots of disk space required to work, 3X the distribution size at least.

# GPL version 2 applies. No warranties, yadda, yadda. Have fun.


if [ $# -lt 2 ]; then
echo "Usage: `basename $0` source /destination/DVD.iso"
echo ""
echo " The 'source' can be either a directory containing a single"
echo " set of isos, or an exploded tree like an ftp site."
exit 1
fi

cleanup() {
[ ${LOOP:=/tmp/loop} = "/" ] && echo "LOOP mount point = \/, dying!" && exit
[ -d $LOOP ] && rm -rf $LOOP
[ ${DVD:=~/mkrhdvd} = "/" ] && echo "DVD data location is \/, dying!" && exit
[ -d $DVD ] && rm -rf $DVD
}

cleanup
mkdir -p $LOOP
mkdir -p $DVD

if [ !`ls $1/*.iso 2>&1>/dev/null ; echo $?` ]; then
echo "Found ISO CD images..."
CDS=`expr 0`
DISKS="1"

for f in `ls $1/*.iso`; do
mount -o loop $f $LOOP
cp -av $LOOP/* $DVD
if [ -f $LOOP/.discinfo ]; then
cp -av $LOOP/.discinfo $DVD
CDS=`expr $CDS + 1`
if [ $CDS != 1 ] ; then
DISKS=`echo ${DISKS},${CDS}`
fi
fi
umount $LOOP
done
if [ -e $DVD/.discinfo ]; then
awk '{ if ( NR == 4 ) { print disks } else { print ; } }' disks="$DISKS" $DVD/.discinfo > $DVD/.discinfo.new
mv $DVD/.discinfo.new $DVD/.discinfo
fi
else
echo "Found FTP-like tree..."
rsync -avP --exclude SRPMS $1/* $DVD
# cp -av $1/* $DVD
[ -e $1/.discinfo ] && cp -av $1/.discinfo $DVD
fi

rm -rf $DVD/isolinux/boot.cat
find $DVD -name TRANS.TBL | xargs rm -f

# My thanks to Mubashir Cheema for suggesting this fix.
# cd $DVD
mkisofs -J -R -v -T -o $2 -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 8 -boot-info-table $DVD

/usr/lib/anaconda-runtime/implantisomd5 --force $2
# Don't like forced mediacheck? Try this instead.
# /usr/lib/anaconda-runtime/implantisomd5 --supported-iso --force $2

cleanup
echo ""
echo "Process Complete!"
echo ""

Monitoring RAID status

When using hardware RAID on a linux server, we need a way to find out if a disk has failed. If you use a Compaq/HP server, chances are that you are using the CCISS driver to access the raid device.

In that case, you can install an HP package that will work on fedora and other flavours of linux. You can get it here: ftp://ftp.compaq.com/pub/products/servers/supportsoftware/linux/hpacucli-7.20-16.linux.rpm

Then you can use the following 2 scripts :

Here is "hwraidinfo":
-----cut here----------
#!/bin/sh
SLOTLIST=$(hpacucli ctrl all show | \
grep Slot | sed -e 's/^.*Slot //g' -e 's/ .*$//g')

for i in $SLOTLIST
do
echo
hpacucli ctrl slot=$i show | grep -v "^$"
echo
hpacucli ctrl slot=$i ld all show | grep -v "^$"
hpacucli ctrl slot=$i pd all show | grep -v "^$"
done
echo
---------cut here--------------
And "hwraidstatus":
---------cut here--------------
#!/bin/sh
SLOTLIST=$(hpacucli ctrl all show | \
grep Slot | sed -e 's/^.*Slot //g' -e 's/ .*$//g')

for i in $SLOTLIST
do
echo
hpacucli ctrl slot=$i show status | grep -v "^$"
echo
hpacucli ctrl slot=$i ld all show status | grep -v "^$"
hpacucli ctrl slot=$i pd all show status | grep -v "^$"
done
echo
-------------cut here------------

This was posted by Matti Kurkela.
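To actually get warned when something degrades, these scripts can be run from cron and the output mailed, for example (the schedule, path and address are just placeholders):

# root's crontab: check the arrays every morning at 06:00
0 6 * * * /usr/local/bin/hwraidstatus 2>&1 | mail -s "RAID status" root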

Getting VMware-player to work on Fedora Core 5

This is an extract from this page: http://clunixchit.blogspot.com/2006_03_01_clunixchit_archive.html

Getting VMware-player-1.0.1-19317.i386.rpm to work on Fedora Core 5 'Bordeaux':
1. Get the rpm of the free VMWare Player
2. Download the linux image e.g VMware-player-1.0.1-19317.i386.rpm
3. Install it with rpm -ivh VMware-player-1.0.1-19317.i386.rpm
4. Then its time to download the patch: wget -c http://ftp.cvut.cz/vmware/vmware-any-any-update104.tar.gz
5. Extract the archive: tar xvf vmware-any-any-update104.tar.gz
6. cd vmware-any-any-update104
7. Complete the installation by: su -c "./runme.pl"

Adressing StorEdge L280 connected to SUN E450 running Solaris 9 in 64 bits mode with MTX and sst

BEFORE YOU EVEN CONSIDER GETTING A SUN StorEdge L280:

Get the proper SUN Part. Be aware that the L280 won't connect through the standard SCSI port on the E450. It is single ended whereas the L280 requires a differential SCSI bus. Therefore you will need a differential SCSI host adapter (you can get it around 700 CAD$) and the proper cables. This is SUN option number X6541A (which is also SUN Part number 375-0006).

Introduction
When you buy a Sun StorEdge L280, it consists of a tape drive and a robotic device. You can address the tape drive as you do a standard tape drive (/dev/rmt), but you won't be able to address the robotic device. Therefore you will have to load tapes manually or buy commercial backup software that recognises and is able to use the robot.
There is a third solution, which is the purpose of this document. You can use open source software to address the robotic device, and it works!

Preparing the installation
Power up your L280. Make sure the L280 is in random mode and write down your tape drive and your library SCSI IDs (you will need them later on).
Begin by installing gcc 3.2. You can get the package gcc-3.2-sol9-sparc-local.gz from www.sunfreeware.com. Note that this is actually a Sun package, so when you unzip it, type pkgadd -d gcc-3.2-sol9-sparc-local to install it.
Make sure your path includes /usr/local/bin:/usr/ccs/bin.
To be able to address your library, you need a SCSI driver (it is not provided by SUN when you buy the StorEdge L280). This SCSI driver is sst. You can download it from www.arkeia.com (http://www.arkeia.com/download.html) and the mtx software from mtx.badtux.net (http://mtx.badtux.net/).

Installing the software and connecting the devices
Untar the sst driver and follow the procedure described in sst64/installsst.txt.
My SCSI tape ID was 4 and the library SCSI ID was 5. So I edited /usr/kernel/drv/sst.conf and left the file as it was (it was already set up for a library on SCSI ID 5).
I did not find any reference to the L280 in /kernel/drv/st.conf so I just added the following line in the tape-config-list section:
"SUN L280", "SUN L280 Library", "DLT-data";
Follow the rest of the procedure, it is straightforward.
Unzip and untar the mtx software and follow the instructions to install it (./configure, make, make install).
Halt your server, insert the host adapter and connect your cables.
Power on the E450. At the OK prompt, try probe-scsi-all. You should be able to see both the tape drive and the library.

Finishing the installation and testing
Boot your server: boot -r
Once the system is up and running, look through your /var/adm/messages file. You should see the string "changer found". If so, run the following command to get your device name:
# ls -l /dev | grep sst
lrwxrwxrwx 1 root other 48 Nov 13 14:53 rsst5 -> ../devices/pci@4,4000/scsi@3,1/sst@5,0:character
#
You can now try to issue mtx commands. IMPORTANT: make sure to always specify the mtx option nobarcode.
Here are a few examples of mtx commands with the L280 library:
We begin by issuing an inventory check:
# mtx -f /dev/rsst5 nobarcode inventory
Then (and it does not have to be after the inventory check), we ask for a status of the library:
# mtx -f /dev/rsst5 nobarcode status
Storage Changer /dev/rsst5:1 Drives, 8 Slots ( 0 Import/Export )
Data Transfer Element 0:Empty
Storage Element 1:Empty
Storage Element 2:Empty
Storage Element 3:Empty
Storage Element 4:Empty
Storage Element 5:Empty
Storage Element 6:Empty
Storage Element 7:Full
Storage Element 8:Full
#
We load the slot 8 tape into the tape drive:
# mtx -f /dev/rsst5 nobarcode load 8
The status has changed:
# mtx -f /dev/rsst5 nobarcode status
Storage Changer /dev/rsst5:1 Drives, 8 Slots ( 0 Import/Export )
Data Transfer Element 0:Full (Storage Element 8 Loaded)
Storage Element 1:Empty
Storage Element 2:Empty
Storage Element 3:Empty
Storage Element 4:Empty
Storage Element 5:Empty
Storage Element 6:Empty
Storage Element 7:Full
Storage Element 8:Empty
#
We unload the tape:
# mtx -f /dev/rsst5 nobarcode unload
Unloading Data Transfer Element into Storage Element 8...done
#
NOTE: I was not able to unload the tape to a slot of my choice. It seems that this function does not work on my library. So I can only unload a tape and put it back in its original slot.
Anyway, that's great. Congratulations to everybody involved in the development of the sst and mtx software. Good job!
The only thing left is making your own scripts using these commands, or using Amanda.

Changing the expiry date of a gpg key

# gpg --list-keys
/root/.gnupg/pubring.gpg
------------------------
pub 1024D/0638309A 2005-06-29 Groupe Unix (no comment) <xxxx@xxxx.com>
sub 2048g/A50FAD76 2005-06-29 [expires: 2006-06-29]

# gpg --edit-key "Groupe Unix"
gpg (GnuPG) 1.2.1; Copyright (C) 2002 Free Software Foundation, Inc.
This program comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it
under certain conditions. See the file COPYING for details.

Secret key is available.

pub 1024D/0638309A created: 2005-06-29 expires: 2006-06-29 trust: -/e
sub 2048g/A50FAD76 created: 2005-06-29 expires: 2006-06-29
(1). Groupe Unix (no comment) <xxxx@xxxx.com>

Command> expire
Changing expiration time for the primary key.
Please specify how long the key should be valid.
0 = key does not expire
<n> = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (0)
Key does not expire at all
Is this correct (y/n)? y

You need a passphrase to unlock the secret key for
user: "Groupe Unix (no comment) <xxxx@xxxx.com>"
1024-bit DSA key, ID 0638309A, created 2005-06-29


pub 1024D/0638309A created: 2005-06-29 expires: never trust: -/-
sub 2048g/A50FAD76 created: 2005-06-29 expires: 2006-06-29
(1). Groupe Unix (no comment) <xxxx@xxxx.com>

Command> key 1

pub 1024D/0638309A created: 2005-06-29 expires: never trust: -/-
sub* 2048g/A50FAD76 created: 2005-06-29 expires: 2006-06-29
(1). Groupe Unix (no comment) <xxxx@xxxx.com>

Command> expire
Changing expiration time for a secondary key.
Please specify how long the key should be valid.
0 = key does not expire
<n> = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (0)
Key does not expire at all
Is this correct (y/n)? y

You need a passphrase to unlock the secret key for
user: "Groupe Unix (no comment) <xxxx@xxxx.com>"
1024-bit DSA key, ID 0638309A, created 2005-06-29


pub 1024D/0638309A created: 2005-06-29 expires: never trust: -/-
sub* 2048g/A50FAD76 created: 2005-06-29 expires: never
(1). Groupe Unix (no comment) <xxxx@xxxx.com>

Command> quit
Save changes? yes
root@spco1dpt1:/root/.gnupg
# gpg --list-keys
/root/.gnupg/pubring.gpg
------------------------
pub 1024D/0638309A 2005-06-29 Groupe Unix (no comment) <xxxx@xxxx.com>
sub 2048g/A50FAD76 2005-06-29

Converting video files with ffmpeg

This is a quick reference for converting video files with ffmpeg :

From any format to dvd :
ffmpeg -i <source_file> -target dvd <target-file>

VOB to Divx using ffmpeg

I was so fed up with my children's DVDs being damaged after a few weeks that I decided to convert them to DivX and store them on my computer. So, when a DVD is no longer playable, I play it from the computer.
I would also like to set up a nice TV interface in the future, to select them from my living room.

Anyway, to convert a DVD to DivX, you can use the DVD O'matic software or follow the procedure below :
1 ) extract the contents to your hard disk
2 ) use the following command line to do a 2 pass encoding :
date;ffmpeg -i filename.VOB -pass 1 -passlogfile vts.log -qscale 2 -vcodec xvid VTS_02_temp.avi;date;ffmpeg -i filename.VOB -f avi -vcodec xvid -pass 2 -passlogfile vts.log -b 800 -g 300 -bf 2 -acodec mp3 -ab 128 output_filename.avi;date
or for a single pass encoding :
ffmpeg -i filename.VOB -f avi -vcodec xvid -b 800 -g 300 -bf 2 -acodec mp3 -ab 128 output_filename.avi

NOTE : If your video sequence is split in 2 VOB files, you can do a simple >> to create a single VOB file :
cp VTS_01_1.VOB filename.VOB
cat VTS_01_2.VOB >> filename.VOB

RPM commands

RPM is a great packager, but it might be difficult to interrogate. So here is a summary of the commands I use most:

Find out which package contains a file :
$ rpm -qf /usr/bin/vim
vim-enhanced-6.3.086-0.fc4

List files in an installed package :
$ rpm -ql vim-enhanced-6.3.086-0.fc4
/etc/profile.d/vim.csh
/etc/profile.d/vim.sh
/usr/bin/ex
/usr/bin/rvim
/usr/bin/vim
/usr/bin/vimdiff
/usr/bin/vimtutor
/usr/share/man/man1/rvim.1.gz
/usr/share/man/man1/vimdiff.1.gz
/usr/share/man/man1/vimtutor.1.gz

Find which files a package file (not yet installed) contains :
$ ls vim-enh*
vim-enhanced-6.3.071-3.i386.rpm
$ rpm -qpl vim-enhanced-6.3.071-3.i386.rpm
/etc/profile.d/vim.csh
/etc/profile.d/vim.sh
/usr/bin/ex
/usr/bin/rvim
/usr/bin/vim
/usr/bin/vimdiff
/usr/bin/vimtutor
/usr/share/man/man1/rvim.1.gz
/usr/share/man/man1/vimdiff.1.gz
/usr/share/man/man1/vimtutor.1.gz

Multimedia keyboard on Fedora Core 4

Here is how to set up your multimedia keyboard if you are using Fedora Core 4 and KDE.

1 ) Get the keycode of your multimedia keys

Start xev. This program traps keyboard events and displays information about them. Among this information you will find the keycode, which will be needed later.
A sample of the output is :
KeyRelease event, serial 28, synthetic NO, window 0x1200001,
root 0x48, subw 0x0, time 1985516936, (163,-13), root:(166,60),
state 0x10, keycode 162 (keysym 0x1008ff14, XF86AudioPlay), same_screen YES,
XLookupString gives 0 bytes:
...

KeyRelease event, serial 31, synthetic NO, window 0x1200001,
root 0x48, subw 0x0, time 1985521875, (163,-13), root:(166,60),
state 0x10, keycode 234 (keysym 0x1008ff16, XF86AudioPrev), same_screen YES,
XLookupString gives 0 bytes:
...

KeyRelease event, serial 31, synthetic NO, window 0x1200001,
root 0x48, subw 0x0, time 1985522575, (163,-13), root:(166,60),
state 0x10, keycode 233 (keysym 0x1008ff17, XF86AudioNext), same_screen YES,
XLookupString gives 0 bytes:

Here I pressed the Play/Pause key, then next and previous key on my keyboard.
From the output we see that Play/Pause Key has a keycode of 162.

Do the same for the +, - and mute buttons.

2 ) Creating the .Xmodmap file
Create the file and populate it with the keycode found previously :
keycode 234 = XF86AudioPrev
keycode 162 = XF86AudioPlay
keycode 233 = XF86AudioNext
keycode 174 = XF86AudioLowerVolume
keycode 176 = XF86AudioRaiseVolume
keycode 160 = XF86AudioMute

For other XF86 references, look <a href="http://wiki.linuxquestions.org/wiki/XF86_keyboard_symbols">here</a>.

Once this file is created, load it: xmodmap ~/.Xmodmap, and don't forget to add this to your rc files so it will be loaded automatically when you log on.

3 ) Configure keyboard in Control Center
We have 2 categories of command to define here. The first ones will concern the player. We will do it for Amarok but it works with other players (xmms, noatun...). The second ones will control the volume. This will be effective system wide.


3.1 ) Controling the Player
Open Control Center and click on "Regional and Accessibility", then "Input Actions"
Create a new group ( I called mine Multimedia keys ). Then create a new action.
Give it a name and select Keyboard shortcut -> DCOP call (simple) as the action type.
Click on keyboard shortcut, then map the key you want by pressing it.
Finally click on DCOP call Settings. Remote application will be amarok, Remote object will be player, Call function will be playPause. If you don't know these variables, you can click on "run KDCOP" and browse remote application, object and function.
Map as many multimedia keys as you wish then click apply.
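The same DCOP call can be tested from a shell before wiring it to a key, which is a handy way to confirm the application/object/function names:

dcop amarok player playPause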


[Screenshot: the Input Actions DCOP call settings (xmodmap_khotkeys.jpg)]

3.2 ) Controlling the sound
The sound is controlled via the amixer command. It is extremely simple. To mute, you can type amixer set Master toggle. To lower the volume, type amixer set Master 1- and to raise the volume, amixer set Master 1+.
So what we need to do is to add these keys/actions in khotkeys.
Go back to Control Center and add a new action in "Input Actions".
Give it a name and select "Keyboard shortcut -> Command/URL (simple)"
Map your key in Keyboard shortcut
Then type the command in "Command/URL to execute".

[Screenshot: mapping a key to an amixer command (xmodmap_khotkeys2.jpg)]

Hardening your linux box

This is a typical iptables configuration:

#!/bin/ksh


# Flush the tables
iptables -F

# Open the CUPS port for local requests only
iptables -A INPUT -s localhost -p tcp --destination-port 631 -j ACCEPT
# Allow colleagues' workstations
iptables -A INPUT -s xx.xx.xx.xx -p tcp --destination-port 22 -j ACCEPT
iptables -A INPUT -s xx.xx.xx.xx -p tcp --destination-port 22 -j ACCEPT

# Allow connections to the VNC/RemoteDesktop port 3389
iptables -A INPUT -p tcp --destination-port 3389 -j ACCEPT

# Drop everything else
iptables -A INPUT -p tcp --syn -j DROP

# List the rules
iptables -L
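Note that rules set this way only last until the next reboot; on Fedora one way to make them persistent (assuming the iptables init script is in use) is:

/sbin/service iptables save
/sbin/chkconfig iptables on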

Running Windows XP on Fedora Core 4 with qemu

Installing QEMU with Accelerator Module on Fedora Core 4

Once again, this is a doc I found on the internet. You can see the original page at this link: http://www.brandonhutchinson.com/Installing_QEMU_with_Accelerator_Module_on_Fedora_Core.html

The following are step-by-step instructions for installing QEMU with the Accelerator Module (kqemu) on Fedora Core 4. I also provide instructions for installing a Windows XP Professional guest system for use with QEMU.

1. Download and extract QEMU.
wget http://fabrice.bellard.free.fr/qemu/qemu-0.7.0.tar.gz
tar zxvf qemu-0.7.0.tar.gz

2. Download and extract the Accelerator Module in the QEMU directory.
cd qemu-0.7.0
wget http://fabrice.bellard.free.fr/qemu/kqemu-0.6.2-1.tar.gz
tar zxvf kqemu-0.6.2-1.tar.gz

3. Configure QEMU to build with Accelerator Module support using gcc 3.2.3. QEMU 0.7.0 does not compile with Fedora Core 4's gcc 4.0.0. If you do not have gcc32, install it with the yum -y install compat-gcc-32 command as root.

In this example, I will install QEMU in /usr/local/qemu instead of its default location.
./configure --prefix=/usr/local/qemu --cc=gcc32

If you see:
SDL support no
SDL static link no

you must install SDL-devel and its dependencies with the yum -y install SDL-devel command as root.

4. Build QEMU.
make

5. Install QEMU.
make install (as root)

Installing a Windows XP Professional guest system
In this example, I have a bootable Windows XP Professional CD in /dev/cdrom, the logical path for my CD-ROM drive.

1. Create a disk image for the guest system. I'll create a 10 GB disk image named hd.img in my home directory for the guest system.
/usr/local/qemu/bin/qemu-img create hd.img 10G

2. Install the guest system from the bootable Windows XP Professional CD.
/usr/local/qemu/bin/qemu -boot d -hda ~/hd.img -localtime

3. After the system is installed, turn off the guest system, and boot the guest system from the disk image. I use the -user-net parameter to enable userspace networking.
/usr/local/qemu/bin/qemu -boot c -hda ~/hd.img -localtime -user-net

Creating a Samba user (a side note, unrelated to the QEMU steps above):
smbpasswd -a jboismar

Replacing a disk in a mirrored rootvg

In the following example, an RS6000 has 3 disks, 2 of which have the AIX
filesystems mirrored on them. The bootlist contains both hdisk0 and hdisk1.
There are no logical volumes in rootvg other than the AIX system
logical volumes. hdisk0 has failed and needs replacing; both hdisk0 and hdisk1
are in "Hot Swap" carriers, so the machine does not need to be shut down.

lspv

hdisk0 00522d5f22e3b29d rootvg
hdisk1 00522d5f90e66fd2 rootvg
hdisk2 00522df586d454c3 datavg

lsvg -l rootvg

rootvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
hd6 paging 4 8 2 open/syncd N/A
hd5 boot 1 2 2 closed/syncd N/A
hd8 jfslog 1 2 2 open/syncd N/A
hd4 jfs 1 2 2 open/syncd /
hd2 jfs 12 24 2 open/syncd /usr
hd9var jfs 1 2 2 open/syncd /var
hd3 jfs 2 4 2 open/syncd /tmp
hd1 jfs 1 2 2 open/syncd /home



1, Reduce the logical volume copies from both disks to hdisk1 only :-

rmlvcopy hd6 1 hdisk0
rmlvcopy hd5 1 hdisk0
rmlvcopy hd8 1 hdisk0
rmlvcopy hd4 1 hdisk0
rmlvcopy hd2 1 hdisk0
rmlvcopy hd9var 1 hdisk0
rmlvcopy hd3 1 hdisk0
rmlvcopy hd1 1 hdisk0
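The same thing can be done with a small loop over all the logical volumes in rootvg (a sketch; double-check the LV list before running it):

# Remove the copy held on hdisk0 for every LV in rootvg
for LV in `lsvg -l rootvg | awk 'NR > 2 { print $1 }'`
do
    rmlvcopy $LV 1 hdisk0
done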

2, Check that no logical volumes are left on hdisk0 :-

lspv -p hdisk0

hdisk0:
PP RANGE STATE REGION LV ID TYPE MOUNT POINT
1-101 free outer edge
102-201 free outer middle
202-301 free center
302-401 free inner middle
402-501 free inner edge

3, Remove the volume group from hdisk0

reducevg -df rootvg hdisk0

4, Recreate the boot logical volume on hdisk1, and reset bootlist:-

bosboot -a -d /dev/hdisk1
bootlist -m normal rmt0 cd0 hdisk1

5, Check that everything has been removed from hdisk0 :-

lspv

hdisk0 00522d5f22e3b29d None
hdisk1 00522d5f90e66fd2 rootvg
hdisk2 00522df586d454c3 datavg

6, Delete hdisk0 :-

rmdev -l hdisk0 -d

7, Remove the failed hard drive and replace with a new hard drive.

8, Configure the new disk drive :-

cfgmgr

9, Check new hard drive is present :-

lspv

10, Include the new hdisk in root volume group :-

extendvg rootvg hdisk? (where hdisk? is the new hard disk)

11, Re-create the mirror :-

mirrorvg rootvg hdisk? (where hdisk? is the new hard disk)

12, Synchronise the mirror :-

syncvg -v rootvg

13, Reset the bootlist :-

bootlist -m normal rmt0 cd0 hdisk0 hdisk1

14, Turn off Quorum checking on rootvg :-

chvg -Q n rootvg
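Once the resynchronisation has finished, a quick check that nothing is left stale (no output means the mirror is clean):

lsvg -l rootvg | grep stale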

Enabling command history search

On Fedora Core 4, the command history search function is not bound to any key. Therefore, if you want to find a command you typed earlier, you have to go through them all one by one.

A solution is to create a binding for this search function. Edit the file /etc/inputrc and add the following 2 lines :
"\C-f": history-search-backward
"\C-g": history-search-forward

Log out and log in again and voila!
Type the beginning of the command you want to find and press CTRL-f or CTRL-g.
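If you prefer a per-user setting, or do not want to log out, the same two lines can go in ~/.inputrc and be reloaded in the current bash session (a sketch):

# ~/.inputrc -- per-user readline bindings
"\C-f": history-search-backward
"\C-g": history-search-forward

Then reload it in the running shell with:

bind -f ~/.inputrc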

Finding out how long ago a daemon wrote to a log file

In my day-to-day job, I often have to write monitoring scripts. Recently we had to write a script that checks whether a daemon has written to its log in the last 5 minutes. This particular daemon had a habit of freezing: you would see it in the process list, but it would not do anything.
Therefore, to ensure that this process is alive and working, we check the last entry it wrote in the log file and see whether it is newer than 5 minutes (it is supposed to write at least 4 entries per minute). Here is the little Perl script that does that check:

##########################################
# New check for DaemonA
# Note: this is an extract from a larger script; the &debug subroutine and the
# $message, $summary, $color, $cdate and $cheure variables are defined elsewhere.
use Time::Local;   # provides timelocal(); normally placed at the top of the script

&debug ( "=-=-=-=-= Checking DaemonA =-=-=-=-= " );
$last_Order = `grep "DaemonA" /var/logs/efixeng/enginelog4j.log | tail -1`;
chomp($last_Order);
if ( $last_Order eq "" )
{
    &debug ( "We did not find any log entry for DaemonA");
    $ORDER_TIME = 0;
} else {
    &debug ( "last log entry was $last_Order" );
    ($order_date,$order_heure,$trash) = split (/ /,$last_Order);
    &debug ( "date is $order_date, time is $order_heure");
    # Build an epoch timestamp from the "year.month.day hour:min:sec" log prefix
    ($ohours,$ominutes,$osecondes) = split(/:/,$order_heure);
    ($oyear,$omonth,$oday) = split(/\./,$order_date);
    # timelocal() expects the month as 0-11, hence the "- 1"
    $ORDER_TIME = timelocal($osecondes, $ominutes, $ohours, $oday, $omonth - 1, $oyear);
    &debug ( "Epoch time for the last occurrence of DaemonA is $ORDER_TIME");
}

# The current time is already an epoch value in Perl, no need to shell out to date
$CURRENT_TIME = time;
&debug ( "Current Epoch time is $CURRENT_TIME");

$diff = $CURRENT_TIME - $ORDER_TIME;
if ( $diff < 300 )
{
    &debug ( "Last occurrence of DaemonA was $diff seconds ago. Everything is OK");
    $message .= "\nThe DaemonA daemon wrote a log entry $diff seconds ago. Everything is OK.\n";
} else {
    &debug ( "[ERROR!] Last occurrence of DaemonA was $diff seconds ago.");
    $summary .= "\n[ERROR!] The DaemonA daemon is not running ($cdate $cheure) !!";
    $summary .= "\n[ERROR!] The last write to the log was $diff seconds ago !!";
    $color = "red";
};
# end of the DaemonA check
##########################################

NIC speed on Solaris

The following script will give you Sun Solaris network interface information such as link speed, duplex, and autonegotiation.

#!/bin/sh

# Only the root user can run the ndd commands
if [ "`/usr/bin/id | /usr/bin/cut -c1-5`" != "uid=0" ] ; then
echo "You must be the root user to run `basename $0`."
exit 1
fi

# Print column header information
/usr/bin/echo "Interface\tSpeed\t\tDuplex\t\tAutoneg"
/usr/bin/echo "---------\t-----\t\t------\t\t-------"

# Determine the speed and duplex for each live NIC on the system
for INTERFACE in `/usr/bin/netstat -i | /usr/bin/egrep -v "^Name|^lo0" \
| /usr/bin/awk '{print $1}' | /usr/bin/sort | /usr/bin/uniq`
do
# Special handling for "ce" interfaces
if [ "`/usr/bin/echo $INTERFACE \
| /usr/bin/awk '/^ce[0-9]+/ { print }'`" ] ; then
# Determine the ce interface number
INSTANCE=`/usr/bin/echo $INTERFACE | cut -c 3-`
DUPLEX=`/usr/bin/kstat ce:$INSTANCE | /usr/bin/grep link_duplex \
| /usr/bin/awk '{ print $2 }'`
case "$DUPLEX" in
1) DUPLEX="half" ;;
2) DUPLEX="full" ;;
esac
SPEED=`/usr/bin/kstat ce:$INSTANCE | /usr/bin/grep link_speed \
| /usr/bin/awk '{ print $2 }'`
case "$SPEED" in
10) SPEED="10 Mbit/s" ;;
100) SPEED="100 Mbit/s" ;;
1000) SPEED="1 Gbit/s" ;;
esac
AUTONEG=`/usr/bin/kstat ce:$INSTANCE | /usr/bin/grep adv_cap_autoneg \
| /usr/bin/awk '{ print $2 }'`
case "$AUTONEG" in
0) AUTONEG="NO" ;;
1) AUTONEG="YES" ;;
esac

# Special handling for "bge" interfaces
elif [ "`/usr/bin/echo $INTERFACE \
| /usr/bin/awk '/^bge[0-9]+/ { print }'`" ] ; then
BGE_INT_LINE_NO=`/usr/bin/kstat bge | /usr/bin/grep -n $INTERFACE \
| /usr/bin/awk -F: '{print $1}'`
BGE_INT_DUPLEX_LINE_NO=`/usr/bin/expr $BGE_INT_LINE_NO + 9`
BGE_INT_SPEED_LINE_NO=`/usr/bin/expr $BGE_INT_LINE_NO + 14`
DUPLEX=`/usr/bin/kstat bge | /usr/bin/awk 'NR == LINE { print $2 }' \
LINE=$BGE_INT_DUPLEX_LINE_NO`
SPEED=`/usr/bin/kstat bge | /usr/bin/awk 'NR == LINE { print $2 }' \
LINE=$BGE_INT_SPEED_LINE_NO`
case "$SPEED" in
10000000) SPEED="10 Mbit/s" ;;
100000000) SPEED="100 Mbit/s" ;;
1000000000) SPEED="1 Gbit/s" ;;
esac
# All other interfaces
else
INTERFACE_TYPE=`/usr/bin/echo $INTERFACE | /usr/bin/sed -e "s/[0-9]*$//"`
INSTANCE=`/usr/bin/echo $INTERFACE | /usr/bin/sed -e "s/^[a-z]*//"`
/usr/sbin/ndd -set /dev/$INTERFACE_TYPE instance $INSTANCE
SPEED=`/usr/sbin/ndd -get /dev/$INTERFACE_TYPE link_speed`
case "$SPEED" in
0) SPEED="10 Mbit/s" ;;
1) SPEED="100 Mbit/s" ;;
1000) SPEED="1 Gbit/s" ;;
esac
DUPLEX=`/usr/sbin/ndd -get /dev/$INTERFACE_TYPE link_mode`
case "$DUPLEX" in
0) DUPLEX="half" ;;
1) DUPLEX="full" ;;
*) DUPLEX="" ;;
esac
AUTONEG=`/usr/sbin/ndd -get /dev/$INTERFACE_TYPE adv_autoneg_cap`
case "$AUTONEG" in
0) AUTONEG="NO" ;;
1) AUTONEG="YES" ;;
esac
fi
/usr/bin/echo "$INTERFACE\t\t$SPEED\t$DUPLEX\t\t$AUTONEG"
done

Looking for information about a stale partition

When you have mirrors for your data, it might be a little difficult to find out which disk has the stale partition. The following few commands can help you find out quickly where the stale partition is located physically.

1 ) List of volumes groups
root@myhost:/root# lsvg
rootvg
datavg

2 ) Which volume group has a stale partition ?
root@myhost:/root# lsvg -l rootvg | grep stale
hd2 jfs 108 216 2 open/stale /usr
root@myhost:/root# lsvg -l datavg | grep stale

3 ) Which physical volumes are defined in that VG ?
root@myhost:/root# lsvg -p rootvg
rootvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk0 active 542 81 00..00..00..00..81
hdisk1 active 542 97 08..02..00..00..87

4 ) Which physical volume has the stale partition ?
root@myhost:/root# lspv -p hdisk1 | grep stale
300-300 stale center hd2 jfs /usr
root@myhost:/root# lspv -p hdisk0 | grep stale

Other commands you might find useful :
root@myhost:/root# lsvg -l rootvg
rootvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
hd5 boot 1 2 2 closed/syncd N/A
hd6 paging 32 64 2 open/syncd N/A
hd8 jfslog 1 2 2 open/syncd N/A
hd4 jfs 1 2 2 open/syncd /
<b>hd2 jfs 108 216 2 open/stale /usr</b>
hd9var jfs 128 256 2 open/syncd /var
hd3 jfs 8 16 2 open/syncd /tmp
hd1 jfs 10 20 2 open/syncd /home
hd7 dump 16 16 1 open/syncd N/A

To see a mapping of this logical volume on disk :
root@myhost:/root# lslv -m hd2
hd2:/usr
LP PP1 PV1 PP2 PV2 PP3 PV3
0001 0220 hdisk0 0220 hdisk1
0002 0221 hdisk0 0221 hdisk1
0003 0222 hdisk0 0222 hdisk1
0004 0223 hdisk0 0223 hdisk1
0005 0224 hdisk0 0224 hdisk1
0006 0225 hdisk0 0225 hdisk1
0007 0226 hdisk0 0226 hdisk1
0008 0227 hdisk0 0227 hdisk1
0009 0228 hdisk0 0228 hdisk1
...

To see which part of the mirror has the stale partition :
root@myhost:/root# lslv -p hdisk0
hdisk0:::
USED USED USED USED USED USED USED USED USED USED 1-10
USED USED USED USED USED USED USED USED USED USED 11-20
USED USED USED USED USED USED USED USED USED USED 21-30
USED USED USED USED USED USED USED USED USED USED 31-40
USED USED USED USED USED USED USED USED USED USED 41-50
USED USED USED USED USED USED USED USED USED USED 51-60
USED USED USED USED USED USED USED USED USED USED 61-70
...

root@myhost:/root# lslv -p hdisk1
hdisk1:::
USED FREE FREE FREE FREE FREE FREE FREE FREE USED 1-10
USED USED USED USED USED USED USED USED USED USED 11-20
...
USED USED USED USED USED USED USED USED USED USED 278-287
USED USED USED USED USED USED USED USED USED USED 288-297
USED USED <b>STALE</b> USED USED USED USED USED USED USED 298-307
USED USED USED USED USED USED USED USED USED USED 308-317
...
So the stale partition is on hdisk1.

Another command that can be useful is :
lspv -p hdisk1
hdisk1:
PP RANGE STATE REGION LV NAME TYPE MOUNT POINT
1-1 used outer edge hd5 boot N/A
2-9 free outer edge
10-15 used outer edge lvlogsihs4 jfs /var/logs/ihs4
16-45 used outer edge lvarchive jfs /var/archive
46-52 used outer edge lvdomidata2 jfs /usr/domino/data2
53-106 used outer edge paging00 paging N/A
107-109 used outer edge lvdomilogs jfs /var/logs/domino
110-141 used outer middle hd6 paging N/A
142-145 used outer middle lvwaslogs jfs /var/logs/was
146-147 used outer middle lvhttplogs jfs /var/logs/ihs
148-187 used outer middle lvdomidata jfs /usr/domino/data1
188-197 used outer middle lvdomibin jfs /usr/domino/bin
198-199 free outer middle
200-209 used outer middle hd1 jfs /home
210-217 used outer middle hd3 jfs /tmp
218-218 used center hd8 jfslog N/A
219-219 used center hd4 jfs /
220-299 used center hd2 jfs /usr
<b>300-300 stale center hd2 jfs /usr</b>
301-325 used center hd2 jfs /usr
326-327 used inner middle hd2 jfs /usr
328-433 used inner middle hd9var jfs /var
434-455 used inner edge hd9var jfs /var
456-542 free inner edge
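Once the stale copy has been located, it can usually be resynchronised with syncvg, provided the disk itself is healthy (a sketch):

# Resynchronise only the logical volume that has the stale copy
syncvg -l hd2
# or resynchronise every partition on the suspect physical volume
syncvg -p hdisk1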

Image borders in HTML

There seems to be a bug in Firefox when trying to display images side by side. I wrote a PHP application to show digital pictures to friends and family, and I use a group of images set up side by side to simulate a picture frame around each picture :
<center><img width='230' height='201' border='0' hspace='5' src='/blog/uploads/meimei_borber_ok.jpg' alt='' /></center>

Firefox does not want to display 2 images side by side without space. This is how my page is displayed with Firefox :


<center><img width='232' height='239' border='0' hspace='5' src='/blog/uploads/meimei_border_nonok.jpg' alt='' /></center>

After looking everywhere on the web, I resigned myself to changing the display. If the browser is not Internet Explorer, I do not display these nice picture frames; I display simple image borders instead. I found a very good page about setting up borders in CSS: <a href="http://www.mandarindesign.com/boxes.html">click here</a>.

So until I find a fix, I'll be playing with image borders in CSS.

Nicer fonts with Fedora 4

This is a reproduction of a page I found on the internet. Here is the <a href="http://cri.ch/linux/docs/sk0017.html">original page</a>.

Nicer fonts for Fedora Core 4

Author: Sven Knispel
Updated: 01-11-2005
Feedback welcome: linux@cri.ch
Free service provided by: www.cri.ch

Ever since I started using Fedora Core I have been complaining about the poor rendering of fonts: they look ugly and unsharp.
One week ago I switched my desktop PC from Windows to Linux and I really couldn't stand it, so I started a little research.

The reason for the poor rendering is that FreeType is compiled by default with the bytecode interpreter switched off. I didn't find the real reason for that, but it seems related to patent issues.
Fortunately it is quite simple to turn the bytecode interpreter on by recompiling freetype after a slight change.
This article is about recompiling freetype with the bytecode interpreter switched on:

* get the source rpm
* make the required changes
* recompile and install the modified freetype version


1. Getting the sources

First we must check for the current version of freetype:
# rpm --query freetype
freetype-2.1.9-2
Then download the sources, e.g. from rpm.pbone: http://rpm.pbone.net/index.php3/stat/4/idpl/1981137/com/freetype-2.1.9-2.i386.rpm.html
(I suggest downloading the binary rpm as well in case of damage, as we will freshen the installed one)
Install the sources:
sudo rpm -ivh ./Desktop/freetype-2.1.9-2.src.rpm
2. Enabling the bytecode interpreter

Change to the directory /usr/src/redhat/SPECS and edit freetype.spec.
What we are looking for is: %define without_bytecode_interpreter 1 and we want to replace it by: %define without_bytecode_interpreter 0 in order to enable the bytecode interpreter.
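If you prefer doing the change from the command line, a one-line sed edit does the same thing (a sketch, assuming the stock spec file location):

cd /usr/src/redhat/SPECS
# turn the bytecode interpreter on in the spec file
sed -i 's/%define without_bytecode_interpreter 1/%define without_bytecode_interpreter 0/' freetype.spec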
3. Rebuild and install

After having changed the code we need to rebuild the RPMs:
rpmbuild -bb freetype.spec (if you get errors make sure that XFree86-devel is installed as freetype depends on it)
If the build was successful, the result can be found in /usr/src/redhat/RPMS/i386/:
[~]$ cd /usr/src/redhat/RPMS/i386/
[i386]$ ls -l
total 3268
-rw-r--r-- 1 root root 824001 Oct 21 15:28 freetype-2.1.9-2.i386.rpm
-rw-r--r-- 1 root root 1781435 Oct 21 15:28 freetype-debuginfo-2.1.9-2.i386.rpm
-rw-r--r-- 1 root root 105464 Oct 21 15:28 freetype-demos-2.1.9-2.i386.rpm
-rw-r--r-- 1 root root 585133 Oct 21 15:28 freetype-devel-2.1.9-2.i386.rpm
-rw-r--r-- 1 root root 25476 Oct 21 15:28 freetype-utils-2.1.9-2.i386.rpm
Now check for the ones you need to reinstall with rpm --query <package-name> (e.g. rpm --query freetype-devel) and reinstall whatever is required with the force option, as they are already installed in the original version:
rpm -Uvh --force freetype-2.1.9-2.i386.rpm
rpm -Uvh --force freetype-devel-2.1.9-2.i386.rpm
That's it!
Now just restart X and enjoy...
4. Settings

If you use a TFT I recommend deactivating anti-aliasing for even crisper fonts.
5. Results

And here is the comparison before and after:
[before / after font rendering screenshots]
6. Consistent font for all applications (updated 31.10.2005)

If you are using KDE you may have noticed that the KDE settings are not inherited by non-KDE (GTK) applications. In particular, Firefox kept showing bigger fonts, not corresponding to my KDE font settings.

After some research I found out how to force the fonts for non-KDE applications:

* the gnome font settings must be changed. This can be done by executing gnome-font-properties and selecting e.g. Tahoma 8 as the default font
* under KDE the gnome-settings-daemon must be started: this can be done e.g. by adding a symlink to /usr/libexec/gnome-settings-daemon from ~/.kde/Autostart (ln -s /usr/libexec/gnome-settings-daemon ~/.kde/Autostart/gnome-settings-daemon)


After these changes all my apps have neat and consistent fonts...

Capturing DV video from a camcorder

After spending a few weeks in Taiwan at my in-laws' place, I had to transfer some videos I took with my brother's miniDV camcorder. I found very useful information on how to set up Linux <a href="https://www.redhat.com/archives/fedora-list/2004-December/msg06925.html">here</a>. To make sure this information won't go away, I extracted what I used from that link and present it below:

As root:
$ cd /dev/
$ ./MAKEDEV raw1394
$ mknod -m 666 /dev/dv1394 c 171 32
$ chmod 666 raw1394

Then add DAG to your yum repository list:
$ cat /etc/yum.repos.d/dag.repo
[dag]
name=Dag RPM Repository for Fedora Core
baseurl=http://apt.sw.be/fedora/$releasever/en/$basearch/dag

Then install kino and associated tools:
$ yum install kino
$ yum install dvgrab

Then plug in the card and the camcorder, set the camera to
"play" mode, and load the necessary modules:
$ modprobe ieee1394
$ modprobe ohci1394
$ modprobe raw1394
$ modprobe dv1394

Finally, as a normal user:
$ dvgrab file001

and you should start getting file transfers from the camcorder.

More info is available at the following places:

http://kino.schirmacher.de/
http://www.linux1394.org/
http://www.syba.com/us/en/product/43/05/04/index.html
http://www.linux1394.org/hcl.php?class_id=3
http://www.linux1394.org/hcl.php?class_id=1

Exec statement in KDE

A very nice feature of the KDE environment is personalized service menus (right-click in Konqueror). You can define an action to run, but I was looking for the arguments that can be passed to the command. I found them <a href="http://standards.freedesktop.org/desktop-entry-spec/desktop-entry-spec-0.9.4.html">here</a> (see the example after the list).

Recognized fields are as follows:
%f A single file name, even if multiple files are selected. The system reading the desktop entry should recognize that the program in question cannot handle multiple file arguments, and it should probably spawn and execute multiple copies of a program for each selected file if the program is not able to handle additional file arguments. If files are not on the local file system (i.e. are on HTTP or FTP locations), the files will be copied to the local file system and %f will be expanded to point at the temporary file. Used for programs that do not understand the URL syntax.
%F A list of files. Use for apps that can open several local files at once.
%u A single URL.
%U A list of URLs.
%d Directory containing the file that would be passed in a %f field.
%D List of directories containing the files that would be passed in to a %F field.
%n A single filename (without path).
%N A list of filenames (without paths).
%i The Icon field of the desktop entry expanded as two parameters, first --icon and then the contents of the Icon field. Should not expand as any parameters if the Icon field is empty or missing.
%c The translated Name field associated with the desktop entry.
%k The location of the desktop file as either a URI (if for example gotten from the vfolder system) or a local filename or empty if no location is known.
%v The name of the Device entry in the desktop file.
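As an example, here is a hypothetical KDE 3 service menu entry (the file name, path and the gedit application are placeholders) that adds an "Open with gedit" action for the selected files; %U expands to the list of selected URLs:

# Create a service menu entry for the current user
cat > ~/.kde/share/apps/konqueror/servicemenus/open-with-gedit.desktop << 'EOF'
[Desktop Entry]
ServiceTypes=all/allfiles
Actions=OpenWithGedit

[Desktop Action OpenWithGedit]
Name=Open with gedit
Exec=gedit %U
EOF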

a tree view of directories

Working in a production environment, we often need to find out which directories take up all the space. At first I was using the command du -k . | sort -n, but the output is not very easy to read, so I decided to create a small Perl script that displays the size of all the directories in the current location. Here is an example:

<img width='450' height='379' border='0' hspace='5' src='/blog/uploads/jbdu01.gif' alt='' />

You can also call the script with a depth argument. If you want to have the size of 2 subdir levels (current level and one sub level), call the script with 2 :

<img width='499' height='464' border='0' hspace='5' src='/blog/uploads/jbdu02.gif' alt='' />


This is a very simple script but it helps me a lot. That's why I decided to publish it on my blog. Use it as you wish !!! It works as is on Linux (Fedora core 3 and 4), AIX 4.X and 5.X, and Solaris 8 and 9.

<a href="/blog/uploads/jbdu" title="jbdu" target="_blank">You can download the code from here.</a>
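For the curious, the core idea can be sketched in a couple of lines of shell with du and awk (this is not the actual jbdu script, and it assumes a reasonably modern awk):

#!/bin/sh
# Show the size (in KB) of each directory under the current one,
# down to DEPTH levels, smallest first.
DEPTH=${1:-1}
du -k . | awk -F/ -v depth="$DEPTH" 'NF <= depth + 1' | sort -n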

How to run Lotus Notes with Wine

1. Download and install the 20041019 versions of the wine, libwine, and wine-utils binaries from this Sourceforge page (newer versions may have regression bug #2660).

2. Download and install the latest version of winesetuptk using apt-get or Synaptic.

3. Run winesetup (as you, not root) and accept the defaults.

4. Copy the Notes folder and all subfolders from a working copy of the Lotus Notes [6.5.1 or later] client on Windows to the equivalent directory in your wine installation (for example, if it was "c:\lotus\notes" under Windows, it should be copied to "~/.wine/c/lotus/notes" on Linux).

Make sure that your notes.ini file is in your Notes program directory (the same directory that nlnotes.exe is in). If it's not, you should find it (probably in the Windows directory) and copy it there.

Also, while you're looking at the notes.ini file, make sure that any AddinMenus= or EXTMGR_ADDINS= settings are commented out. They may not hurt anything if they're there, but you should definitely test without them first.

5. Copy mfc42.dll and msvcp60.dll from your "c:\windows\system32" directory in Windows to "~/.wine/c/windows/system" on Linux.

6. Add the following sections to your ~/.wine/config file:

[AppDefaults\\nlnotes.exe\\Version]
"Windows" = "win98"
[AppDefaults\\nlnotes.exe\\x11drv]
"Managed" = "Y"
"DesktopDoubleBuffered" = "Y"

If there are any other sections that were already there for [AppDefaults\\nlnotes.exe], [AppDefaults\\notes.exe], or [AppDefaults\\nhldaemn.exe], just comment them out. We don't need to override any DLLs or set screen resolution or anything with this setup.

Along those lines, make sure any global "Resolution=" settings are commented out. You don't need that either.

7. Try to run Notes. From a terminal window, type:

wine "c:\lotus\notes\nlnotes.exe"

Obviously, you should use whatever path you copied the Notes installation to (c:\lotus\notes, c:\Program Files\lotus\notes, whatever).

Using ssh keys with Filezilla

One of the best FTP clients I have used under Windows is FileZilla. It is simple but works very well. You can download it <a href="http://filezilla.sourceforge.net/">here.</a>
It is great for connecting to plain FTP sites and can now be used to connect to sites using SFTP (FTP over SSH). It is pretty straightforward if you log in with user/password authentication, but it is trickier if you want to use public/private SSH keys.
The trick is that FileZilla uses modules from PuTTY (the best free telnet/SSH client for Windows) to authenticate using keys.

Here is the complete (I hope !) configuration of FileZilla and putty to use ssh keys.

1 ) download the components
Go to the <a href="http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html">putty download page</a> and download either putty.zip (which contains everything needed) or the following components :
- PuTTY : The telnet/ssh client itself
- PuTTYgen : The ssh key generator
- Pageant : The ssh agent to handle passphrase ( I will explain it later ).

2 ) Generating your keys
The first step is to create the public/private key pair that you will use to access sites. We will only consider RSA SSH keys in this document. Open PuTTYgen.<BR>


<img width='' height='' border='0' hspace='5' align='left' src='http://moon.homeunix.com:8080/blog/uploads/putty01.jpg' alt='' /><BR>


Click on "Generate". You will have to move your mouse to generate random numbers. When it's finished you will see the following screen :<BR>


<img width='' height='' border='0' hspace='5' align='left' src='http://moon.homeunix.com:8080/blog/uploads/putty02.jpg' alt='' /><BR>


You can modify the Key Comment with whatever you want. It is a good idea to put something like "My Windows XP key". This description is for you; it is not parsed by the SSH system.
The next field is extremely important: <b>please, USE A PASSPHRASE</b>.
When everything is done, click on "Save public key". It is a good idea to name the file id_rsa.pub. Although this is not mandatory (you can give it any name you want), this respects the standard name for an RSA key file.
Then click on "Save private key". You can name this file id_rsa.
A quick explanation of these keys: the two keys you generated work together. You always keep your private key for yourself; you will never have to send it to anybody. It is your property, so keep it secure ;-) ! Your public key will have to be sent to every single system you access.

The key generation process is finished. You can close PuTTYgen.
Now is the right time to add your key to the destination account you want to access. Let's say you want to access host1 as account1. You (or the host administrator) need to add your public key to the file account1@host1:~/.ssh/authorized_keys2. Note that the file written by "Save public key" is in PuTTY's own format; for an OpenSSH server, copy the one-line public key shown at the top of the PuTTYgen window instead.
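On the server side, appending the key usually looks like this (a sketch, assuming the OpenSSH-format line from PuTTYgen has been copied to a file called id_rsa.pub on the host):

# As account1 on host1: append the public key and tighten permissions
mkdir -p ~/.ssh
cat id_rsa.pub >> ~/.ssh/authorized_keys2
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys2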

3 ) configuring PuTTY
Open PuTTY and configure the host you want to access. Click on ssh->auth and browse to get your private key file :<BR>


<img width='' height='' border='0' hspace='5' align='left' src='http://moon.homeunix.com:8080/blog/uploads/putty04.jpg' alt='' /><BR>



Now click back to session, give it a name and save it.

4) Configuring Pageant
If you try to connect to your host using PuTTY and your key, the system will ask for your passphrase. Unfortunately, FileZilla does not ask for a passphrase, so we need Pageant to provide it to FileZilla for you. Pageant is what is called an ssh-agent: it stores your key and passphrase and hands them to clients like PuTTY and FileZilla. This is very convenient if you work with a lot of servers: you enter your key and passphrase once, and you can log in to any host without typing them again. Each time you start a new session, PuTTY will get its authentication info directly from Pageant.
Let's configure Pageant. Run it and look for it in your taskbar. It is the little icon of a computer wearing a hat:
<BR>


<img width='' height='' border='0' hspace='5' align='left' src='http://moon.homeunix.com:8080/blog/uploads/Pageant01.jpg' alt='' />
<BR>


Right-click on it and choose "Add Key". Browse to select your private key. It will then ask you for your passphrase.

5 ) Using FileZilla to connect to your site

Finally, you can open FileZilla and configure your session. You should be able to connect to your host using SFTP (FTP over SSH2).