Programming: This forum is for all programming questions.
The question does not have to be directly related to Linux, and any language is fair game.
I am using the following code to read a file line by line; however, my problem is that $value is 0 at the end of the loop, possibly because bash has created a subshell for the loop or something similar. How can I solve this?
Code:
value=0
while read line
do
    value=`expr $value + 1`
    echo $value
done < "myfile"
echo $value
Note: this example just counts the number of lines; I actually want to do more complex processing than this, so 'wc' is not an alternative, nor is perl, I'm afraid.
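The counter is lost only when the loop sits on the right-hand side of a pipe, because each element of a pipeline runs in a subshell; the redirected form shown above keeps the loop in the current shell. A minimal sketch contrasting the two (it creates a throwaway file named myfile for the demonstration):

```shell
#!/bin/bash
# Make a small three-line sample file.
printf 'a\nb\nc\n' > myfile

# Piped form: the while loop runs in a subshell, so the
# increments are discarded when the pipeline ends.
value=0
cat myfile | while read -r line; do
    value=$((value + 1))
done
echo "after pipe: $value"      # still 0

# Redirected form: the loop runs in the current shell,
# so the final value survives.
value=0
while read -r line; do
    value=$((value + 1))
done < myfile
echo "after redirect: $value"  # 3
```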
Thanks Darren.
Most of the problems I need to solve don't involve the use of loops at all; sometimes while loops can be slower than just using tools like awk.
If you want to do more complex processing, why not try awk?
Code:
awk '{
# some processing with file
}
END { print "number of lines is " NR }' file
I am trying to write a loop; the main purpose of the loop is to search multiple text files stored in tmp2 for the pattern /remote-host and output the matching lines to tmp3.
The code that I have is the following:
for i in $( tmp2 ) ; do
sed -n '/remote-host/p' tmp2 >> tmp3
done
and obviously it is not working; I would appreciate any help.
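Two problems stand out: `$( tmp2 )` tries to execute tmp2 as a command, and the loop body never uses `$i`, so it reads the same argument every iteration. Assuming tmp2 is a directory containing the text files (the post doesn't say), a working sketch would be:

```shell
#!/bin/bash
# Assumption: tmp2 is a directory of text files; matches go to tmp3.
: > tmp3                          # start with an empty output file
for i in tmp2/*; do
    sed -n '/remote-host/p' "$i" >> tmp3
done
```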
You are digging up an old thread; check the date of the post next time. Also, there is no need to use cat here; it's useless.
The only reason I posted is that such an old thread was the top search result on Google and had no answers that a new user could, IMHO, understand.
Using cat rather than a redirect here (again, IMO) makes the answer crystal clear and will hopefully educate rather than confuse.
Code:
(
while read line
do
echo process $line
done
) < file
does work, but it isn't clear, because you start reading the code without realising that, potentially many lines later, a file will be redirected into the subshell.
If you are after an optimised solution, feel free to use exec with a numeric fd redirect and read -u too:
Code:
exec 9<file
while read -u9 line
do
    echo process "$line"
done
exec 9<&-    # close the descriptor when done
Regarding "Bash can sometimes start a subshell in a PIPED 'while-read' loop": what are the circumstances under which bash does not do so?
Regards
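To answer that question directly: bash 4.2 and later can run the last element of a pipeline in the current shell when the lastpipe shell option is set and job control is off, which is the default in non-interactive shells (i.e. scripts). A sketch:

```shell
#!/bin/bash
# Requires bash >= 4.2. lastpipe only takes effect when job control
# is off, the default when running a script rather than an
# interactive shell.
shopt -s lastpipe

value=0
printf 'a\nb\nc\n' | while read -r line; do
    value=$((value + 1))
done
echo "$value"   # 3, because the loop ran in the current shell
```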
Code:
FILENAME=$1
...
done < $FILENAME
That will fail if $1 includes embedded IFS characters; safer to use double quotes: done < "$FILENAME".
Regarding "exec 3<&0 Now all of the keyboard and mouse input is going to our new file descriptor 3": it is only the keyboard input, not the mouse input.
Regarding "while read LINE Using File Descriptor": an alternative that does not require saving fd 0 and restoring it is to assign the file to, say, fd 3 and use read's -u option to read from fd 3.
Regarding typos: "The file descriptors for stdin,stdout, and stderr are 0,1, and 2, respectively" would more correctly be "The file descriptors for stdin, stdout, and stderr are 0, 1, and 2, respectively", that is, with a space after the first comma in each list.
The test timings are very useful, showing how much faster awk is at processing a large file, but it would be nice to see the other case: processing a single line. You would have to process the same line many times to get a measurable timing, and of course this would buffer both the input file and awk in RAM, but it would still be interesting. I understand that the shell's fork-exec to run awk uses a lot of resources, which will more than offset awk's much greater file I/O and string-handling efficiency.
Besides the quoting issue that catkin mentioned, you should also use read -r in all the examples; you almost never want to omit -r. Also, use printf instead of echo -e; the exact behavior of echo -e varies a lot from system to system.
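A small sketch of what both suggestions change, using a made-up line containing backslashes:

```shell
#!/bin/bash
# Without -r, read treats backslashes as escape characters and
# strips them; with -r the line comes through untouched.
printf 'C:\\temp\\file\n' > sample
read line < sample
read -r rawline < sample
echo "$line"     # C:tempfile   (backslashes eaten)
echo "$rawline"  # C:\temp\file (preserved)

# printf has specified, portable behavior; echo -e does not.
printf 'tab:\there\n'   # prints an actual tab between the words
```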
Okay, first off, I realize this thread is old... very old. However, the complexity these guys are bringing to it is driving me nuts, so I have to post this to enlighten them (and anyone else searching for this solution).
Reading a file in shell is extremely simple.
Let's make a simple file called mycat and let its contents be...
Code:
#!/bin/bash
while read -r line; do
    echo "$line"
done
That will read from stdin so you can read a file by simply...
Code:
chmod 755 ./mycat
./mycat < somefiletoread
The other thing is that this guy is basically going about it the hard way, because this can easily be done with a one-liner.
Quote:
Originally Posted by dpoper1
hi,
I am trying to write a loop; the main purpose of the loop is to search multiple text files stored in tmp2 for the pattern /remote-host and output the matching lines to tmp3.
The code that I have is the following:
for i in $( tmp2 ) ; do
sed -n '/remote-host/p' tmp2 >> tmp3
done
and obviously it is not working; I would appreciate any help.
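The one-liner alluded to here isn't actually shown in the post; assuming tmp2 is a directory holding the text files (the question doesn't say), it could be something like:

```shell
# Assumption: tmp2 is a directory. sed accepts multiple files, so no
# loop is needed: print every line matching /remote-host/ into tmp3.
sed -n '/remote-host/p' tmp2/* > tmp3
```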