Awk.Info

"Cause a little auk awk
goes a long way."

About awk.info
 »  table of contents
 »  featured topics
 »  page tags


About Awk
 »  advocacy
 »  learning
 »  history
 »  Wikipedia entry
 »  mascot
Implementations
 »  Awk (rarely used)
 »  Nawk (the-one-true, old)
 »  Gawk (widely used)
 »  Mawk
 »  Xgawk (gawk + xml + ...)
 »  Spawk (SQL + awk)
 »  Jawk (Awk in Java JVM)
 »  QTawk (extensions to gawk)
 »  Runawk (a runtime tool)
 »  platform support
Coding
 »  one-liners
 »  ten-liners
 »  tips
 »  the Awk 100
Community
 »  read our blog
 »  read/write the awk wiki
 »  discussion news group

Libraries
 »  Gawk
 »  Xgawk
 »  the Lawker library
Online doc
 »  reference card
 »  cheat sheet
 »  manual pages
 »  FAQ

Reading
 »  articles
 »  books:

WHAT'S NEW?

Mar 01: Michael Sanders demos an X-windows GUI for AWK.

Mar 01: Awk100#24: A. Lahm and E. de Rinaldis' patent search, in AWK

Feb 28: Tim Menzies asks this community to write an AWK cookbook.

Feb 28: Arnold Robbins announces a new debugger for GAWK.

Feb 28: Awk100#23: Premysl Janouch offers an IRC bot, in AWK

Feb 28: Updated: the AWK FAQ

Feb 28: Tim Menzies offers a tiny content management system, in Awk.

Jan 31: Comment system added to awk.info. For example, see the discussion at the bottom of ?keys2awk

Jan 31: Martin Cohen shows that Gawk can handle massively long strings (300 million characters).

Jan 31: The AWK FAQ is being updated. For comments, corrections, or extensions, please mail tim@menzies.us

Jan 31: Martin Cohen finds Awk on the Android platform.

Jan 31: Aleksey Cheusov released a new version of runawk.

Jan 31: Hirofumi Saito contributes a candidate Awk mascot.

Jan 31: Michael Sanders shows how to quickly build an AWK GUI for Windows.

Jan 31: Hyung-Hwan Chung offers QSE, an embeddable Awk Interpreter.

[More ...]


categories: Learn,Jan,2009,Admin

Learning Awk

Short Overviews

The following list is sorted by newbie-ness (so best to start at the top):

Longer Tutorials

The following list is sorted by the number of times this material is tagged at delicious.com (most tagged at top):

Other Stuff


categories: Learn,Jan,2009,Ronl

Teaching Awk

(For tutorial material on Awk, see Learning Awk page.)

R. Loui (loui@ai.wustl.edu) is Associate Professor of Computer Science at Washington University in St. Louis. He has published in AI Journal, Computational Intelligence, ACM SIGART, AI Magazine, AI and Law, the ACM Computing Surveys Symposium on AI, Cognitive Science, Minds and Machines, and the Journal of Philosophy.

Whenever Ronald Loui teaches GAWK, he gives the students the choice of learning PERL instead. Ninety percent will choose GAWK after looking at a few simple examples of each language (samples shown below). Those who choose PERL do so because someone told them to learn PERL.

After one laboratory, more than half of the GAWK students are confident with their GAWK skills and can begin designing. Almost no student can become confident in PERL that quickly.

After a week, 90% of those who have attempted GAWK have mastered it, while fewer than 50% of the PERL students attain similar facility with the language (it would be unfair to require anyone to `master' PERL).

By the end of the semester, over 90% who have attempted GAWK have succeeded, and about two-thirds of those who have attempted PERL have succeeded.

To be fair, within a year, half of the GAWK programmers have also studied PERL. Most are doing so in order to read PERL and will not switch to writing PERL. No one who learns PERL migrates to GAWK.

PERL and GAWK appear to have similar programming, development, and debugging cycle times.

Finally, after a year there seems to be a small advantage for GAWK over PERL in the programmers' willingness to begin a new program. That is, both GAWK and PERL programmers tend to enjoy writing a lot of programs, but GAWK has the slight edge here.


categories: Learn,Jan,2009,Timm

Four Keys to Gawk

by T. Menzies

Imagine Gawk as a kind of a cut-down C language with four tricks:

  1. self-initializing variables
  2. pattern-based programming
  3. regular expressions
  4. associative arrays.

What do all these do? Well....

Self-initializing variables.

You don't need to define variables; they appear as you use them.

There are only three types: strings, numbers, and arrays.

To ensure a number is a number, add zero to it.

x=x+0

To ensure a string is a string, add an empty string to it.

x= x "" "the string you really want to add"

To ensure your variables aren't global, use them within a function and add extra names to the function's parameter list. For example, if a function is passed two variables, define it with those two PLUS the local variables:

 function haslocals(passed1,passed2,         local1,local2,local3) {
        passed1=passed1+1  # a copy of the caller's value (awk passes scalars by value)
        local1=7           # only changed locally
 }

Note that it's good practice to add extra white space between the passed and local variables.

Pattern-based programming

Gawk programs can contain functions AND pattern/action pairs.

If the pattern is satisfied, the action is called.

 /^\.P1/ { if (p != 0) print ".P1 after .P1, line", NR;
           p = 1;
         }
 /^\.P2/ { if (p != 1) print ".P2 with no preceding .P1, line", NR;
           p = 0;
         }
 END     { if (p != 0) print "missing .P2 at end" }

Two magic patterns are BEGIN and END. These are true before and after all the input files are read. Use END for end actions (e.g. final reports) and BEGIN for start-up actions such as initializing default variables, setting the field separator, or resetting the seed of the random number generator:

 BEGIN {
        while (getline < "Usr.Dict.Words") #slurp in dictionary 
                dict[$0] = 1
        FS=",";                            #set field seperator
        srand();                           #reset random seed
        Round=10;                          #always start globals with U.C.
 }

The default action is {print $0}; i.e. print the whole line.

The default pattern is 1; i.e. true.

Patterns are checked, top to bottom, in source-code order.
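
For instance, here is a minimal sketch (not from the original text; the /error/ pattern and the lines counter are made up) that leans on both defaults: the first rule has a pattern but no action, the second has an action but no pattern:

 /error/             # no action: the default { print $0 } prints matching lines
 { lines++ }         # no pattern: the default pattern 1 is true, so this runs on every line
 END { print lines, "lines read" }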

Patterns can contain regular expressions. In the above example /^\.P1/ means "front of line followed by a full stop followed by P1". Regular expressions are important enough for their own section.

A Small Example

Ok, so now we know enough to explain a simple report script. How does hist.awk work in the following?

 
% cat /etc/passwd | grep -v \# | cut -d: -f 6 | sort |
                    uniq -c | sort -r -n | gawk -f hist.awk

              **************************  26 /var/empty
                                      **   2 /var/virusmails
                                      **   2 /var/root
                                       *   1 /var/xgrid/controller
                                       *   1 /var/xgrid/agent
                                       *   1 /var/teamsserver
                                       *   1 /var/spool/uucp
                                       *   1 /var/spool/postfix
                                       *   1 /var/spool/cups
                                       *   1 /var/pcast/server
                                       *   1 /var/pcast/agent
                                       *   1 /var/imap
                                       *   1 /Library/WebServer

hist.awk reads the largest count from line one (when NR==1) and computes a scale factor so that no bar is wider than Width. For each line, it then prints some padding, a row of stars, and the line itself ($0).

NR==1  { Width = Width ? Width : 40   # sets Width if it is missing
         Scale = $1 > Width ? Width / $1 : 1
       }
       { Stars=int($1*Scale);
         print str(Width - Stars," ") str(Stars,"*") $0
       }

# note that, in the following "tmp" is a local variable
function str(n,c, tmp) { # returns a string, size "n", of all  "c" 
    while((n--) > 0 ) tmp= c tmp 
    return tmp 
}

Regular Expressions

Do you know what these mean?

  • /^[ \t\n]*/
  • /[ \t\n]*$/
  • /^[+-]?([0-9]+[.]?[0-9]*|[.][0-9]+)([eE][+-]?[0-9]+)?$/

Well, the first two are leading and trailing blank spaces on a line and the last one is the definition of an IEEE-standard number written as a regular expression. Once we know that, we can do a bunch of common tasks like trimming away white space around a string:

 function trim(s,     t) {
    t=s;
    sub(/^[ \t\n]*/,"",t);
    sub(/[ \t\n]*$/,"",t);
    return t
 }

or recognize something that isn't a number:

if ( $i !~ /^[+-]?([0-9]+[.]?[0-9]*|[.][0-9]+)([eE][+-]?[0-9]+)?$/ )
    {print "ERROR: " $i " not a number"}

Regular expressions are an astonishingly useful tool supported by many languages (e.g. Awk, Perl, Python, Java). The following notes review the basics. For full details, see http://www.gnu.org/manual/Gawk-3.1.1/html_node/Regexp.html#Regexp.

Syntax: Here are the basic building blocks of regular expressions:

c
matches the character c (assuming c is a character with no special meaning in regexps).

\c
matches the literal character c; e.g. tabs and newlines are \t and \n respectively.

.
matches any character except newline.

^
matches the beginning of a line or a string.

$
matches the end of a line or a string.

[abc...]
matches any of the characters abc... (character class).

[^abc...]
matches any character except abc... and newline (negated character class).

r*
matches zero or more r's.

And that's enough to understand our trim function shown above. The regular expression /[ \t\n]*$/ means trailing white space; i.e. zero-or-more spaces, tabs, or newlines followed by the end of the line.
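
As a quick check (the string below is made up), the same trailing-whitespace pattern can be exercised on a single hard-coded value:

 BEGIN { s = "hello   \t "
         sub(/[ \t\n]*$/, "", s)
         print "[" s "]" }        # prints [hello]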

More Syntax:

But that's only the start of regular expressions. There's lots more. For example:

r+
matches one or more r's.

r?
matches zero or one r's.

r1|r2
matches either r1 or r2 (alternation).

r1r2
matches r1, and then r2 (concatenation).

(r)
matches r (grouping).

Now we can read ^[+-]?([0-9]+[.]?[0-9]*|[.][0-9]+)([eE][+-]?[0-9]+)?$ like this:

^[+-]? ...
Numbers begin with zero or one plus or minus signs.

...[0-9]+...
Simple numbers are just one or more digits.

...[.]?[0-9]*...
which may be followed by a decimal point and zero or more digits.

...|[.][0-9]+...
Alternatively, a number can have no leading digits and just start with a decimal point.

.... ([eE]...)?$
Also, an exponent may be added,

...[+-]?[0-9]+)?$
and that exponent is an optionally signed run of digits.
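
Putting the pieces together, here is a small sketch (the test strings are invented) that checks a few values against the full pattern:

 BEGIN { re = "^[+-]?([0-9]+[.]?[0-9]*|[.][0-9]+)([eE][+-]?[0-9]+)?$"
         n  = split("3 -3.14 .5 1e10 +2E-3 abc 1.2.3", tests, " ")
         for (i=1; i<=n; i++)
             print tests[i], (tests[i] ~ re ? "is a number" : "is NOT a number")
 }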

Associative arrays

Gawk has arrays, but they are only indexed by strings. This can be very useful, but it can also be annoying. For example, we can count the frequency of words in a document (ignoring the icky part about printing them out):

gawk '{for(i=1; i<=NF; i++) freq[$i]++ }' filename

The array will hold an integer value for each word that occurred in the file. Unfortunately, this treats "foo", "Foo", and "foo," as different words. Oh well. How do we print out these frequencies? Gawk has a special "for" construct that loops over the indices of an array. This script is longer than most command lines, so it will be expressed as an executable script:

 #!/usr/bin/awk -f
 { for(i=1; i<=NF; i++) freq[$i]++ }
 END { for(word in freq) print word, freq[word] }

You can find out if an element exists in an array at a certain index with the expression:

index in array

This expression tests whether or not the particular index exists, without the side effect of creating that element if it is not present.

You can remove an individual element of an array using the delete statement:

delete array[index]

It is not an error to delete an element which does not exist.
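
As a small illustration (the array name and keys below are invented), both constructs can be exercised in one BEGIN block:

 BEGIN { freq["awk"] = 2
         if ("awk" in freq)     print "awk seen", freq["awk"], "times"
         if (!("perl" in freq)) print "perl never seen"   # the test creates no element
         delete freq["awk"]
         delete freq["perl"]    # deleting a missing element is not an error
         if (!("awk" in freq))  print "awk entry removed"
 }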

Gawk has a special kind of for statement for scanning an array:

 for (var in array)
        body

This loop executes body once for each different value that your program has previously used as an index in array, with the variable var set to that index.

The order in which the array is scanned is not defined.

To scan an array in some numeric order, you need to use the keys 1,2,3,... and store the array's length somewhere (say, in position 0); an ordinary counting for loop can then visit the keys in order. Here are some useful array functions that follow exactly that convention. We begin with the usual stack stuff. These stacks have items 1,2,3,... and position 0 is reserved for the size of the stack:

 function top(a)        {return a[a[0]]}
 function push(a,x,  i) {i=++a[0]; a[i]=x; return i}
 function pop(a,   x,i) {
   i=a[0]--;  
   if (!i) {return ""} else {x=a[i]; delete a[i]; return x}}

The pop function can be used in the usual way:

 BEGIN {push(a,1); push(a,2); push(a,3)
        while (x=pop(a)) print x }

which prints:

 3
 2
 1

We can collect everything in an array into a string:

 function a2s(a,  i,s) {
        s=""; 
        for (i in a) {s=s " " i "= [" a[i]"]\n"}; 
        return s}

 BEGIN {push(L,1); push(L,2); push(L,3)
        print a2s(L) }

which prints:

  0= [3]
  1= [1]
  2= [2]
  3= [3]

And we can go the other way and convert a string into an array using the built-in split function. These pod files were built using a recursive include function that seeks patterns of the form:

^=include file

This function splits lines on space characters into the array `a', then looks for =include in a[1]. If found, it calls itself recursively on a[2]. Otherwise, it just prints the line:

 function rinclude (line,    x,a) {
   split(line,a,/ /);
   if ( a[1] ~ /^\=include/ ) { 
     while ( ( getline x < a[2] ) > 0) rinclude(x);
     close(a[2])}
   else {print line}
 }

Note that the third argument of the split function can be any regular expression.
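
For example, here is a small sketch (the input string is invented) that splits on either commas or semicolons, each optionally followed by spaces:

 BEGIN { n = split("a, b; c,d", parts, /[,;][ ]*/)
         for (i=1; i<=n; i++) print i, parts[i] }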

By the way, here's a nice trick with arrays. To print the lines of a file in a random order:

 BEGIN {srand()}
       {Array[rand()]=$0}
 END   {for(I in Array) print Array[I]}

Short, eh? This is not a perfect solution. Gawk can only generate 1,000,000 different random numbers, so the birthday problem cautions that there is a small chance that lines will be lost when different lines are written to the same randomly selected location. After some experiments, I can report that you lose around one item after 1,000 inserts and 10 to 12 items after 10,000 random inserts. Nothing to write home about, really. But for larger item sets, the above three-liner is not what you want to use. For example, 10,000 to 12,000 items (more than 10%) are lost after 100,000 random inserts. Not good!
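
One way around those collisions (a sketch, not from the original text) is to make every key unique by appending the line number; the unspecified order of for (I in Array) still does the shuffling, but no line can be lost to a duplicate key:

 BEGIN {srand()}
       {Array[rand() "-" NR]=$0}
 END   {for(I in Array) print Array[I]}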


categories: OneLiners,Learn,Jan,2009,Admin

Awk one-liners

Awk is famous for how much it can do in one line.

This site has many samples of that capability. And if you have any more to add, please send them in.


categories: OneLiners,Learn,Jan,2009,EricP

Handy One-Liners For Awk (v0.22)

Eric Pement
pemente@northpark.edu

Latest version of this file is usually at:
http://www.student.northpark.edu/pemente/awk/awk1line.txt

USAGE

Unix:     awk '/pattern/ {print "$1"}'    # standard Unix shells
DOS/Win:  awk '/pattern/ {print "$1"}'    # okay for DJGPP compiled
          awk "/pattern/ {print \"$1\"}"  # required for Mingw32

Most of my experience comes from versions of GNU awk (gawk) compiled for Win32. Note in particular that DJGPP compilations permit the awk script to follow Unix quoting syntax '/like/ {"this"}'. However, the user must know that single quotes under DOS/Windows do not protect the redirection arrows (<, >) nor do they protect pipes (|). Both are special symbols for the DOS/CMD command shell and their special meaning is ignored only if they are placed within "double quotes." Likewise, DOS/Win users must remember that the percent sign (%) is used to mark DOS/Win environment variables, so it must be doubled (%%) to yield a single percent sign visible to awk.

If I am sure that a script will NOT need to be quoted in Unix, DOS, or CMD, then I normally omit the quote marks. If an example is peculiar to GNU awk, the command 'gawk' will be used. Please notify me if you find errors or new commands to add to this list (total length under 65 characters). I usually try to put the shortest script first.

File Spacing

Double space a file

 awk '1;{print ""}'
 awk 'BEGIN{ORS="\n\n"};1'

Double space a file which already has blank lines in it. Output file should contain no more than one blank line between lines of text. NOTE: On Unix systems, DOS lines which have only CRLF (\r\n) are often treated as non-blank, and thus 'NF' alone will return TRUE.

awk 'NF{print $0 "\n"}'

Triple space a file

awk '1;{print "\n"}'

Numbering and Calculations

Precede each line by its line number FOR THAT FILE (left alignment). Using a tab (\t) instead of space will preserve margins.

awk '{print FNR "\t" $0}' files*

Precede each line by its line number FOR ALL FILES TOGETHER, with tab.

awk '{print NR "\t" $0}' files*

Number each line of a file (number on left, right-aligned) Double the percent signs if typing from the DOS command prompt.

awk '{printf("%5d : %s\n", NR,$0)}'

Number each line of file, but only print numbers if line is not blank. Remember caveats about Unix treatment of \r (mentioned above).

awk 'NF{$0=++a " :" $0};{print}'
awk '{print (NF? ++a " :" :"") $0}'

Count lines (emulates "wc -l")

awk 'END{print NR}'

Print the sums of the fields of every line

awk '{s=0; for (i=1; i<=NF; i++) s=s+$i; print s}'

Add all fields in all lines and print the sum

awk '{for (i=1; i<=NF; i++) s=s+$i}; END{print s}'

Print every line after replacing each field with its absolute value

 awk '{for (i=1; i<=NF; i++) if ($i < 0) $i = -$i; print }'
 awk '{for (i=1; i<=NF; i++) $i = ($i < 0) ? -$i : $i; print }'

Print the total number of fields ("words") in all lines

 awk '{ total = total + NF }; END {print total}' file

Print the total number of lines that contain "Beth"

 awk '/Beth/{n++}; END {print n+0}' file

Print the largest first field and the line that contains it (intended for finding the longest string in field #1)

awk '$1 > max {max=$1; maxline=$0}; END{ print max, maxline}'

Print the number of fields in each line, followed by the line

awk '{ print NF ":" $0 } '

Print the last field of each line

awk '{ print $NF }'

Print the last field of the last line

awk '{ field = $NF }; END{ print field }'

Print every line with more than 4 fields

awk 'NF > 4'

Print every line where the value of the last field is > 4

awk '$NF > 4'

Text Conversion and Substitution

IN UNIX ENVIRONMENT: convert DOS newlines (CR/LF) to Unix format

awk '{sub(/\r$/,"");print}'   # assumes EACH line ends with Ctrl-M

IN UNIX ENVIRONMENT: convert Unix newlines (LF) to DOS format

awk '{sub(/$/,"\r");print}'

IN DOS ENVIRONMENT: convert Unix newlines (LF) to DOS format

awk 1

IN DOS ENVIRONMENT: convert DOS newlines (CR/LF) to Unix format Cannot be done with DOS versions of awk, other than gawk:

gawk -v BINMODE="w" '1' infile >outfile

Use "tr" instead.

 tr -d '\r' <infile >outfile      # GNU tr version 1.22 or higher

Delete leading whitespace (spaces, tabs) from the front of each line (aligns all text flush left)

awk '{sub(/^[ \t]+/, ""); print}'

Delete trailing whitespace (spaces, tabs) from end of each line

awk '{sub(/[ \t]+$/, "");print}'

Delete BOTH leading and trailing whitespace from each line

awk '{gsub(/^[ \t]+|[ \t]+$/,"");print}'
awk '{$1=$1;print}'           # also removes extra space between fields

Insert 5 blank spaces at beginning of each line (make page offset)

awk '{sub(/^/, "     ");print}'

Align all text flush right on a 79-column width

awk '{printf "%79s\n", $0}' file*

Center all text on a 79-character width

awk '{l=length();s=int((79-l)/2); printf "%"(s+l)"s\n",$0}' file*

Substitute (find and replace) "foo" with "bar" on each line

awk '{sub(/foo/,"bar");print}'           # replaces only 1st instance
gawk '{$0=gensub(/foo/,"bar",4);print}'  # replaces only 4th instance
awk '{gsub(/foo/,"bar");print}'          # replaces ALL instances in a line

Substitute "foo" with "bar" ONLY for lines which contain "baz"

awk '/baz/{gsub(/foo/, "bar")};{print}'

Substitute "foo" with "bar" EXCEPT for lines which contain "baz"

awk '!/baz/{gsub(/foo/, "bar")};{print}'

Change "scarlet" or "ruby" or "puce" to "red"

awk '{gsub(/scarlet|ruby|puce/, "red"); print}'

Reverse order of lines (emulates "tac")

awk '{a[i++]=$0} END {for (j=i-1; j>=0;) print a[j--] }' file*

If a line ends with a backslash, append the next line to it (fails if there are multiple lines ending with backslash...)

awk '/\\$/ {sub(/\\$/,""); getline t; print $0 t; next}; 1' file*

Print and sort the login names of all users

awk -F ":" '{ print $1 | "sort" }' /etc/passwd

Print the first 2 fields, in opposite order, of every line

awk '{print $2, $1}' file

Switch the first 2 fields of every line

awk '{temp = $1; $1 = $2; $2 = temp}' file

Print every line, deleting the second field of that line

awk '{ $2 = ""; print }'

Print in reverse order the fields of every line

awk '{for (i=NF; i>0; i--) printf("%s ",$i); printf("\n")}' file

Remove duplicate, consecutive lines (emulates "uniq")

awk 'a !~ $0; {a=$0}'

Remove duplicate, nonconsecutive lines

awk '! a[$0]++'                     # most concise script
awk '!($0 in a) {a[$0];print}'      # most efficient script

Concatenate every 5 lines of input, using a comma separator between fields

awk 'ORS=NR%5?",":"\n"' file

Selective Printing of Certain Lines

Print first 10 lines of file (emulates behavior of "head")

awk 'NR < 11'

Print first line of file (emulates "head -1")

awk 'NR>1{exit};1'

Print the last 2 lines of a file (emulates "tail -2")

awk '{y=x "\n" $0; x=$0};END{print y}'

Print the last line of a file (emulates "tail -1")

awk 'END{print}'

Print only lines which match regular expression (emulates "grep")

awk '/regex/'

Print only lines which do NOT match regex (emulates "grep -v")

awk '!/regex/'

Print the line immediately before a regex, but not the line containing the regex

awk '/regex/{print x};{x=$0}'
 awk '/regex/{print (x=="" ? "match on line 1" : x)};{x=$0}'

Print the line immediately after a regex, but not the line containing the regex

awk '/regex/{getline;print}'

Grep for AAA and BBB and CCC (in any order)

awk '/AAA/ && /BBB/ && /CCC/'

Grep for AAA and BBB and CCC (in that order)

awk '/AAA.*BBB.*CCC/'

Print only lines of 65 characters or longer

awk 'length > 64'

Print only lines of less than 65 characters

awk 'length < 65'

Print section of file from regular expression to end of file

awk '/regex/,0'
awk '/regex/,EOF'

Print section of file based on line numbers (lines 8-12, inclusive)

awk 'NR==8,NR==12'

Print line number 52

awk 'NR==52'
awk 'NR==52 {print;exit}'          # more efficient on large files

Print section of file between two regular expressions (inclusive)

awk '/Iowa/,/Montana/'             # case sensitive

Selective Deletion of Certain Lines:

Delete ALL blank lines from a file (same as "grep '.' ")

awk NF
awk '/./'

Credits and Thanks

Special thanks to Peter S. Tillier for helping me with the first release of this FAQ file.

For additional syntax instructions, including the way to apply editing commands from a disk file instead of the command line, consult:

  • "sed & awk, 2nd Edition," by Dale Dougherty and Arnold Robbins O'Reilly, 1997
  • "UNIX Text Processing," by Dale Dougherty and Tim O'Reilly Hayden Books, 1987
  • "Effective awk Programming, 3rd Edition." by Arnold Robbins O'Reilly, 2001

To fully exploit the power of awk, one must understand "regular expressions." For detailed discussion of regular expressions, see

  • "Mastering Regular Expressions, 2d edition" by Jeffrey Friedl (O'Reilly, 2002).

The manual ("man") pages on Unix systems may be helpful (try "man awk", "man nawk", "man regexp", or the section on regular expressions in "man ed"), but man pages are notoriously difficult. They are not written to teach awk use or regexps to first-time users, but as a reference text for those already acquainted with these tools.

USE OF '\t' IN awk SCRIPTS: For clarity in documentation, we have used the expression '\t' to indicate a tab character (0x09) in the scripts. All versions of awk, even the original Version 7 UNIX awk, should recognize the '\t' abbreviation.


categories: OneLiners,Learn,Jan,2009,Admin

Explaining Pement's One-Liners

Peteris Krumins explains Eric Pement's Awk one-liners:


categories: TenLiners,Learn,Jan,2009,Admin

Awk ten-liners

Awk is famous for how much it can do in (around) ten lines. Here are some samples of that capability.

(And if you have any more to add, please send them in.)


categories: TenLiners,Learn,Jan,2009,Ronl

Some Gawk (and PERL) Samples

by R. Loui

Here are a few short programs that do the same thing in each language. When reading these examples, the question to ask is `how many language features do I need to understand in order to understand the syntax of these examples'.

Some of these are longer than they need to be, since they don't exploit (for example) a command-line trick to wrap the code in a "for each line do X" loop. And that is the point: for teachability, the preferred language is the one you need to know LESS about before you can be useful in it.

hello world

PERL:

 print "hello world\n"

GAWK:

 BEGIN { print "hello world" }

One plus one

PERL

 $x= $x+1;

GAWK

 x= x+1

Printing

PERL

 print $x, $y, $z;

GAWK

 print x,y,z

Printing the first field in a file

PERL

 while (<>) { 
   split(/ /);
   print "@_[0]\n" 
 }

GAWK

 { print $1 }

Printing lines, reversing fields

PERL

 while (<>) { 
  split(/ /);
  print "@_[1] @_[0]\n" 
 }

GAWK

 { print $2, $1 }

Concatenation of variables

PERL

 command = "cat $fname1 $fname2 > $fname3"

GAWK

 command = "cat " fname1 " " fname2 " > " fname3

Looping

PERL:

 for (1..10) { print $_,"\n" }

GAWK:

 BEGIN { 
  for (i=1; i<=10; i++) print i
 }

Pairs of numbers

PERL:

 for (1..10) { print "$_ ",$_-1 }
 print "\n"

GAWK:

 BEGIN { 
  for (i=1; i<=10; i++) printf i " " i-1
  print ""
 }

List of words into a hash

PERL

  foreach $x ( split(/ /,"this is not stored linearly") ) 
  { print "$x\n" }

GAWK

 BEGIN { 
  split("this is not stored linearly",temp)
  for (i in temp) print temp[i]
 }

Printing a hash in some key order

PERL

 $n = split(/ /,"this is not stored linearly");
 for $i (0..$n-1) { print "$i @_[$i]\n" }
 print "\n";
 for $i (@_) { print ++$j," ",$i,"\n" }

GAWK

 BEGIN { 
  n = split("this is not stored linearly",temp)
  for (i=1; i<=n; i++) print i, temp[i]
  print ""
  for (i in temp) print i, temp[i]
 }

Printing all lines in a file

PERL

 open file,"/etc/passwd";
 while (<file>) { print $_ }

GAWK

 BEGIN {
   while (getline < "/etc/passwd") print
 }

Printing a string

PERL

 $x = "this " . "that " . "\n";
 print $x

GAWK

 BEGIN {
  x = "this " "that " "\n" ; printf x
 }

Building and printing an array

PERL

 $assoc{"this"} = 4;
 $assoc{"that"} = 4;
 $assoc{"the other thing"} = 15;
 for $i (keys %assoc) { print "$i $assoc{$i}\n" }

GAWK

 BEGIN {
   assoc["this"] = 4
   assoc["that"] = 4
   assoc["the other thing"] = 15
   for (i in assoc) print i,assoc[i]
 }

Sorting an array

PERL

 split(/ /,"this will be sorted once in an array");
 foreach $i (sort @_) { print "$i\n" }

GAWK

 BEGIN {
  split("this will be sorted once in an array",temp," ")
  for (i in temp) print temp[i] | "sort"
  while ("sort" | getline) print
 }

Sorting an array (#2)

GAWK

 BEGIN {
  split("this will be sorted once in an array",temp," ")
  n=asort(temp)
  for (i=1;i<=n;i++) print temp[i] 
 }

Print all lines, vowels changed to stars

PERL

 while (<STDIN>) {
  s/[aeiou]/*/g;
  print $_
 }

GAWK

 {gsub(/[aeiou]/,"*"); print }

Report from file

PERL

 #!/pkg/gnu/bin/perl
 # this is a comment
 #
 open(stream1,"w | ");
 while ($line = <stream1>) {
   ($user, $tty, $login, $junk) = split(/ +/, $line, 4);
   print "$user $login ",substr($line,49)
 }

GAWK

#!/pkg/gnu/bin/gawk -f
 # this is a comment
 #
 BEGIN {
   while ("w" | getline) {
     user = $1; tty = $2; login = $3
     print user, login, substr($0,49)
   }
 }

Web Slurping

PERL

 open(stream1,"lynx -dump 'cs.wustl.edu/~loui' | ");
 while ($line = <stream1>) {
   if ($flag && $line =~ /[0-9]/) { print $line }
   if ($line =~ /References/) { $flag = 1 }
 }

GAWK

 BEGIN {
  com = "lynx -dump 'cs.wustl.edu/~loui' &> /dev/stdout"
  while (com | getline line) {
    if (flag && line ~ /[0-9]/) { print line }
    if (line ~ /References/) { flag = 1 }
  }
 }