
This blog is owned by Amey Palyekar, a software engineer by profession.



Linux Commands References III

Apr 21, 2007
More File Utilities

cat filename1
prints the contents of the file.

cut -f 1 index
extracts a column from a file; here we extract the first column from the file index.
prints to standard output,
but can be redirected to some other file as shown below:
cut -f 2 index > works
doesn't alter the original file; columns are regarded as separated by tabs by default.
cut -d , -f 1 index
use the -d option to change the column delimiter, in this case ',' (comma).
-d ' ' = single space delimiter
cut -d ' ' -f 1-3 index
extracts three columns at a time, delimited by spaces.
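The cut examples above can be sketched like this; the file name 'index' and its contents are invented here for illustration.

```shell
# invented sample data for the 'index' file
printf 'alpha,1\nbeta,2\ngamma,3\n' > index
cut -d , -f 1 index           # prints the first comma-separated column, one entry per line
cut -d , -f 1 index > names   # same output redirected to a file; index itself is unchanged
```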

paste poets works
merges the contents of two files on screen.
The maximum number of files you can merge with a single paste command depends on the operating system (traditionally as few as 12).
contents appear on screen in the order of the files specified in the command, with entries separated by tabs.
paste -d / chapter poets works
tells the paste command to use '/' as the column separator
paste -s chapter poets
output does not appear in columns but horizontally, one line per file
paste chapter poets works > file1
redirects the output of the paste command to file1 instead of standard output.
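A quick sketch of paste; the 'poets' and 'works' files and their contents are made up for illustration.

```shell
printf 'Keats\nShelley\n' > poets
printf 'Odes\nOzymandias\n' > works
paste poets works        # two tab-separated columns, side by side
paste -d / poets works   # '/' as the column separator
paste -s poets works     # each file's lines laid out horizontally on one line
```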

tr (translate) changes characters throughout a file.
tr '[:lower:]' '[:upper:]'
this command changes all characters read from standard input from lowercase to uppercase and displays the output on standard output.
tr '[:lower:]' '[:upper:]' < file1 > file3
reads from file1 and redirects the output to file3 instead of standard output.
tr 'chapter' 'page'
translates character by character, not word by word: 'c' becomes 'p', 'h' becomes 'a', 'a' becomes 'g', 'p' becomes 'e', and on most implementations the remaining characters 't', 'e', 'r' are mapped to the last character of the second set.
so tr cannot replace the word "chapter" with "page"; use sed for word substitution.
tr -d ' ' < file1
deletes space characters from the file.
tr -s l < file1 > file2
replaces repeated instances of 'l' with a single instance and writes the output to file2
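A few runnable tr one-liners matching the cases above; the input strings are invented.

```shell
echo 'chapter one' | tr '[:lower:]' '[:upper:]'   # CHAPTER ONE
echo 'chapter' | tr 'chapter' 'page'              # maps character by character, not words
echo 'hello' | tr -s l                            # squeezes the repeated l: helo
echo 'a b c' | tr -d ' '                          # abc
```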

uniq - finds repeated lines & filters them out.
uniq writers
only considers lines repeated if they are adjacent to each other
uniq -c writers
prefixes each line with how many times it appears consecutively.
uniq -u writers
prints only lines that are not repeated, i.e. lines that appear only once in a row.
uniq -d writers
prints only the repeated lines from the file writers
uniq writers writers.uniq
saves the output of running uniq on the file writers to the file writers.uniq
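A sketch of uniq's adjacency rule; the contents of 'writers' are invented here.

```shell
printf 'poe\npoe\ntwain\npoe\n' > writers
uniq writers         # poe / twain / poe - only adjacent duplicates collapse
uniq -d writers      # poe - the line that repeats consecutively
uniq -u writers      # twain / poe - lines appearing only once in a row
sort writers | uniq  # sort first to catch non-adjacent repeats: poe / twain
```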

wc - reports the number of lines, words or characters in a file
wc -l file1
reports the number of lines
wc -w file1
reports the number of words
wc -c file1
reports the number of characters.
In an ASCII file each character occupies one byte, so the number of characters equals the number of bytes.
wc file1
gives the number of lines, words, and characters in the file together
wc file1 file2
wc can be used on more than one file
wc *
prints the number of lines, words, and characters in all files of the present working directory.
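The counts above, sketched with an invented two-line file:

```shell
printf 'one two\nthree\n' > file1
wc -l < file1   # 2 lines
wc -w < file1   # 3 words
wc -c < file1   # 14 bytes - one byte per ASCII character, newlines included
```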

sort filename
sorts the lines in the file in a certain order:
blank lines appear first, followed by
lines that start with special characters.
when the first word in multiple lines is the same, the second word is compared, and so on.
lines that begin with lowercase appear after lines with uppercase
sort -f filename
ignores case
sort -f -r filename
sorts in reverse alphabetical order with the -r option
sort +1 filename
sorts alphabetically starting from the second word in every line (the historic +N syntax; modern sort spells this -k 2).
first word = +0
second word = +1
third word = +2 and so on
grep a * | sort
returns the output from the grep command in sorted order
ls -l | sort +4
sorts the output from the ls command based on the 5th column (equivalent to sort -k 5).
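Field-based sorting can be sketched as below; the 'names' file is invented, and -k is used since the +N syntax is removed from modern sort.

```shell
printf 'b x\na z\nc y\n' > names
sort names        # a z / b x / c y - sorted on the whole line
sort -r names     # reverse order
sort -k 2 names   # b x / c y / a z - sorted on the second field
```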



Apr 20, 2007
The awk language is a very powerful component of the Unix system.
It is used for data manipulation tasks, such as extracting fields from
a line of input.

The awk language makes assumptions about the format of its input,
which let you write simple programs. It
assumes the input is ASCII text, that input can be organized into lines
or records, and that records can be organized into fields. Awk processes a file
line by line; each line is treated individually and is considered to be
a record. Each record is a collection of fields that are separated by whitespace.

awk '{print $1}' dogs-passage
This command prints out the first field of every line in the file dogs-passage.
In the action statement {print $1}, you refer to a field using $n, where n is the field number
you are interested in.

awk '{print $1,$2}' dogs-passage
This prints the first two fields; the comma puts the output field separator (a space by default) between them.

Programs in awk contain pattern-action statements in one of these formats:
  • BEGIN {action}
  • END {action}
  • pattern {action}
awk 'BEGIN {total=5}'
the BEGIN {} format allows you to specify an action for awk to perform
before it reads any data.

awk 'END {print total}'
the END {} format allows you to specify actions that are performed after all
data has been read.

awk '/rule/ {print}'
when you use the pattern {action} format, awk executes the action whenever the pattern
is matched.

You can specify more than one pattern if you want to specify a range of values:
pattern1, pattern2 {action}

you can exclude either the pattern or the action. If you exclude the pattern part, the specified
action is executed on every line
awk '{print $1}'

If you exclude the action part, awk simply prints out each record within the pattern range; the default action is
awk '{print}'

If the pattern does not match, awk does not execute the action; the action part tells
awk what to carry out.

awk '/hear/' dogs-passage
finds lines in the file 'dogs-passage' that contain the particular word 'hear'; the pattern syntax is /pattern/

awk '/hear/{print}' dogs-passage
prints the lines containing the word hear from the file 'dogs-passage'

awk 'BEGIN {total=5}
{total = total + $1}
END {print "Total=" total}' file-numbers

output: Total=794

the above command first initializes total to 5 in BEGIN, then adds the numbers from the
first field of every line in the file 'file-numbers' to total, and at last prints the
total in END.
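The same program can be run against invented data (so the total here differs from the post's 794):

```shell
# two made-up values in file-numbers
printf '100\n200\n' > file-numbers
awk 'BEGIN {total=5}
     {total = total + $1}
     END {print "Total=" total}' file-numbers   # Total=305 (5 + 100 + 200)
```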



Links allow you to create alternate filenames for your files, by associating multiple directory
entries with the same inode.

One of the fields in the inode, called the link count field, is used to keep track of the number of
entries that refer to the same inode. Links can also be useful to provide an alternative
name in order to simplify typing a long path name.

There are two kinds of links:
  • Hard links
  • Soft Links
-> Hard Links

With hard links all the different directory entries point to the same inode.
You can't create a hard link to a directory, so a directory can never have more than
one name, and each of the filenames must be within the same filesystem.

When the link count for a file equals zero, it means that there are no more directory
entries pointing to the inode. At this point the inode's data blocks are removed.
e.g: ln bellini /home/rule/compose/bellinih1
the above command creates a hard link to the file 'bellini' in the /home/rule/compose directory with the
name 'bellinih1'.

-> Soft links

are also known as symbolic links.
With symbolic links the two files are different. One file contains the data and the other
file contains the name of the original file and acts as a pointer or link to it.

One can also make symbolic links to directories, and they can cross filesystem boundaries.

Every symbolic link has its own inode, and each symbolic link uses a small amount
of disk space.
If you delete a symbolic link, the original file is not affected.
But if the original file is deleted, the link remains but no longer points to any data.
e.g: ln -s opera /export/home/nikkij/opera1
creates a soft link to the file opera with the name opera1, using the -s option

ls -l
this command gives the number of links for a particular directory or file in column 2.

rm /home/rule/compose/bellinih1
to remove a link you use the rm command followed by the link name
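The hard/soft link behaviour above can be sketched with throwaway files (the names are invented):

```shell
echo data > bellini
ln bellini bellinih1       # hard link: both names share one inode, link count becomes 2
ln -s bellini bellinisl    # symbolic link: its own inode, pointing at the name 'bellini'
ls -li bellini bellinih1   # the i-numbers in column 1 match for the hard-linked names
rm bellinih1               # link count drops back to 1; the data survives
```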



Every file that is created in Unix has a corresponding inode (information node).
An inode is a data structure on disk that contains information about a file or directory.

It indicates the physical location of the data blocks. It does not contain the name of the file, but
contains ownership information & permission details, including file type, file size,
when the file was last accessed and modified, and when the inode itself was last modified. When a file is
created, an inode is allocated to it.

A partially allocated inode means the inode isn't formatted correctly; this may happen
because of hardware failure.

The number of files in a filesystem can't exceed its number of inodes. To modify this
number you have to reinitialize the filesystem. A filesystem can run out of inodes
if there are too many small files in it.

Every inode has an identifying number called the i-number. To display a file's i-number use
the "ls -i" command.
The inode number leads to the data block addresses for the file.
-> ls -i filename1 = inode number of filename1
-> ls -il = full directory listing including each file's i-number.

An i-number is unique within a given filesystem,
so the same i-number can appear on different devices.

Every directory entry keeps track of two pieces of information about a file: the filename
and the inode number associated with that file.


Linux Commands References - II

Apr 9, 2007
File Utilities

cmp - compares two files
cmp file1 folder1/file2
no command output means the files are identical
also compares punctuation marks
reports only the first difference found within the files.

diff - reports detailed differences between two files
diff path/file1 path2/file2
no command output means the files are identical
sample output legends:
1c2 - line 1 of file1 changed to line 2 of file2
0a1 - add line 1 of file2 after line 0 (above line 1) of file1
5d5 - delete line 5 of file1; it would have followed line 5 of file2
the diff command can tell which file is more complete by showing the new lines within that file.
the diff command is case sensitive by default
diff -i file1 file2 : disables case sensitivity.
diff -b file1 file2 : ignores spaces, blanks, and tabs while comparing
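A small sketch of diff output with two invented files:

```shell
printf 'apple\nbanana\n' > f1
printf 'apple\ncherry\n' > f2
diff f1 f2 || true   # prints: 2c2 / < banana / --- / > cherry (diff exits non-zero when files differ)
diff f1 f1           # identical files: no output
```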

rm path/file1
removes a file

compress filename1
command to compress large files;
files that are not used often are good candidates for compression.
when compressed, a '.Z' extension is added to the file.
once compressed you can't access the file by its old name.
the ownership and modes of the original file are preserved when the file is compressed.
can compress more than one file at a time,
e.g: compress file1 file2
compress -v *w*
the above command gives verbose information about how much each file with 'w' in its name was compressed.
this command may not compress some files, e.g. if a file is too small or contains encrypted data.
the compression algorithm uses like patterns in files as the basis for compression.
compress -v -f file1
the -f switch can be used to force compression of file1;
forcing compression with -f can sometimes increase the size of the file after compression.

uncompress compfile.Z
to uncompress a compressed file;
using the extension in the command is optional.
the wildcard * is useful to uncompress all files in a directory,
e.g: uncompress dir/*.Z
uncompress -c filename1
the above command writes the uncompressed contents to standard output; the file itself isn't altered and remains compressed.
this can also be done using the zcat command,
e.g: zcat filename1.Z
but zcat needs the '.Z' extension to be specified explicitly.

some files are written in binary code; to send such a file by mail you have to encode it first.
this can be done using the following commands:
uuencode file1 file1 - encodes the binary file file1 and writes the encoded text to standard output (the second argument is the filename used when decoding)
uudecode file2 - decodes the encoded file file2


Linux Commands References I

Apr 7, 2007
Basic File/Text Manipulation

cat - to ‘concatenate’ files and then print on the standard output, i.e. cat file1 file2 will print out the contents of ‘file1’ followed by ‘file2’

ls [path] - to list the contents of the directory specified by ‘path’. If no path is specified, lists the contents of the current directory.

less file - to open ‘file’ in a primitive text reader that allows you to scroll through the file using the arrow keys.

pwd - displays the current working directory

ps - to show the currently running processes on the system.
options: -l process status in a second column
-f show owner
-e old and current processes
ps -ef | grep Amey - processes with owner Amey
ps aux | grep process - we print the entire process list and pipe standard output, using the ‘|’, to the standard input of ‘grep’.

bc - simple command line calculator.

bzip2 / bunzip2 - file compression/uncompression using the bzip2 standard.

chmod / chown - change file permission modes / file ownership.

cp - copy a file

mv - move a file

grep - text matching using regular expressions (regexes).

echo - print a stream to standard output.

Simple text editor

More advanced text editor

We can redirect the standard output and standard error streams to a file.
Example: echo “test” will print “test” to standard output, which in this case is the screen. echo “test” > outputfile will redirect that standard output into a file called “outputfile”, which will be created if it doesn’t currently exist. Just as > redirects standard output, 2> redirects the standard error stream. Example: ./program > output_file 2> error_file
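The redirection examples above, as a runnable sketch (the filenames are the illustrative ones from the text, and no-such-file is assumed not to exist):

```shell
echo test > outputfile                 # stdout goes into outputfile
ls no-such-file 2> error_file || true  # the error message goes into error_file, not the screen
cat outputfile                         # prints: test
```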

Forking into the background

ctrl Z - To suspend current process in foreground.

bg - to send the currently suspended process to the background

jobs - to get the list of jobs running in the background
a + sign at the start of a job in the list denotes the default (current) background job.
a - sign denotes the job that becomes the default after the current default job finishes.

fg %[jobno]
brings background job [jobno] to the foreground.
fg without a job number will bring the default job to the foreground.

We can fork any process into the background using ‘&’. Example: ./process & And this will ‘daemonize’ the process.

We can then find this process’s PID (Process ID): ps aux will show all processes, and we can then use kill -9 [pid] to terminate the process; -9 kills the process urgently.
we can also use the kill command in the following ways:
kill [pid] - pid is the process identifier uniquely assigned by the unix kernel; you can get the pid from the ps command. here too we can use kill -9 [pid].
killall [pname] - here pname is the process name; but beware, this may also kill some other process with the same name, so kill [pid] is more often used.
for a background job - kill %[jobno]

nohup - continue a command even after the session has logged out,
e.g: nohup lp speech &
this command keeps the lp command running in the background even after you log out.

wait [pid]
makes the current shell wait until the process with that pid has finished.

at 1:00
to schedule a job at a specified time; one can use the 24-hour clock or 1pm, 1:00pm, 7pm
more e.g.'s: at 1:00 oct 23
at now +1 year
now, noon, midnight, today, tomorrow.
at 1:00 then lp speech at the at> prompt - specifies the lp command to run at the said time.
ctrl d - to exit out of the at command
at -l = to see all jobs scheduled with the at command.
at -r [jobno] - to cancel the job with that jobno.

crontab - to schedule a job to run repeatedly at a fixed interval.
the schedules live in the directory /var/spool/cron/crontabs; in this directory there are files for different users, and your crontab entries go in the file with your username (edit it with the crontab command rather than directly).
crontab [-e] [-l] [-r]
-e : edit
-l : list
-r : remove

We can separate commands on the command line with either a ‘;’ or a ‘&&’. Using a semicolon means that each command will be executed in order regardless of the result of the last. Using a double ampersand means that the next command will only be executed if the last one succeeded.
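The ';' versus '&&' distinction can be seen directly (false and true are just commands that fail and succeed):

```shell
false ; echo 'runs regardless'          # ';' ignores the previous exit status
false && echo 'never printed' || true   # '&&' skips the command after a failure
true && echo 'runs after success'
```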

Help Commands

you can use the man command at any time to get complete information about a command.
e.g.: man ps - will give complete help on the ps command with the various switches used with it.

The ability to switch processes between the foreground and background is called job control.