
Reading inputs from a file (line by line or by fields)


Always use a while loop to read input lines from a text file. A for loop is an alternative that some people use, but it is not always reliable. We will come to those points after discussing the usage of while loops.

Note: I purposely ignored awk here, as it is a different tool. I will add another post exclusively for it. In the meantime, you can find some examples in this post.

while read aline
do

echo "Input Line: $aline"

done < input.txt

Here is how you can write the above script in a single line.

while read aline; do echo "Input Line: $aline"; done < input.txt
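By default, `read` trims leading whitespace and treats backslashes as escape characters. When the input may contain either, a safer pattern is `IFS= read -r`. Here is a minimal sketch (the file name input.txt and its contents are only illustrative):

```shell
#!/bin/sh
# Throwaway sample file: one indented line, one line containing a backslash
printf '  indented\nkeep\\this\n' > input.txt

# IFS= keeps leading/trailing whitespace; -r stops backslash interpretation;
# printf is used instead of echo because some shells' echo mangles backslashes
while IFS= read -r aline; do
    printf 'Input Line: %s\n' "$aline"
done < input.txt
```

This prints the indented line with its leading spaces intact and the backslash line unmodified.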

Now, what if the input file has multiple fields and you want to work with those fields?

Here is an example where the input file has multiple fields with different data and you want to find the largest value of a specific field, then print the matching line. This sample file has 4 fields, and we want to find the largest value in the 3rd field.

Notes: The input file doesn't contain headings; I included them below only for your understanding. If headings are present in your input file, you have to remove them before the actual processing (using sed, head, tail, etc.). Also, the input file is a regular text file, so the fields are separated by spaces or tabs.

Name Matches Runs Wickets
Sachin 450 18000 150
Ganguly 300 11000 100
Azharuddin 334 9300 12
Dravid 344 10880 4



largest=0
while read name matches runs wickets; do

if [ "$largest" -lt "$runs" ]; then

largest=$runs
player=$name

fi

done < file.txt

echo "highest runs = $largest"

echo "player name = $player"
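As a cross-check, the same answer can be obtained without a loop by sorting numerically on the third field and taking the last line. This is a sketch using standard sort options; the sample data is recreated here so the snippet stands alone:

```shell
#!/bin/sh
# Recreate the sample data (no header row, space-separated fields)
cat > file.txt <<'EOF'
Sachin 450 18000 150
Ganguly 300 11000 100
Azharuddin 334 9300 12
Dravid 344 10880 4
EOF

# Sort numerically (-n) on field 3 only (-k3,3); the last line has the maximum
sort -n -k3,3 file.txt | tail -n 1
```

This prints the full matching line: Sachin 450 18000 150.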

Why you shouldn’t read lines using a for loop

Here are a few drawbacks of using for loops.

  1. A for loop always ignores blank lines.
  2. When using the simple method (without setting IFS), the input lines are split into words.
  3. The shell may expand any glob present in your input file, resulting in unexpected output. You can avoid this with the "set -f" option.
  4. A while loop reads one line at a time from the stream, but with $(<input.txt), the for loop reads the entire file into memory. So performance will be poor when working with huge files.
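Points 1 and 2 are easy to demonstrate. With the naive for loop, the blank line disappears and every whitespace-separated word becomes its own iteration (input.txt here is a throwaway sample):

```shell
#!/bin/bash
# Sample file: two words on the first line, a blank line, then one more word
printf 'one two\n\nthree\n' > input.txt

# Naive for loop: the default IFS splits on any whitespace, so the blank
# line is dropped and "one two" is split into two separate iterations
for i in $(<input.txt); do
    echo "got: $i"
done
```

The three-line file yields three single-word iterations: one, two, three.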

Here is an example usage and output:

bash$ IFS=$'\n'; set -f; for i in $(<input.txt); do echo "$i"; done; set +f; unset IFS

sample input line




