# Bash
- Split large text file into smaller files with equal number of lines
- Loop through lines of file
- Use grep to find URLs from HTML file
- Use Awk to print the first line of `ps aux` output followed by each grepped line
## Split large text file into smaller files with equal number of lines

```bash
split -l 60 bigfile.txt prefix-
```
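As a quick sketch (using the same `bigfile.txt` and 60-line chunk size as above; `bigfile-restored.txt` is just an illustrative name), `split` names the pieces `prefix-aa`, `prefix-ab`, and so on, and the chunks can be reassembled with `cat`:

```bash
# Split into 60-line chunks named prefix-aa, prefix-ab, ...
split -l 60 bigfile.txt prefix-

# Check the line count of each chunk
wc -l prefix-*

# Reassemble the chunks (glob order matches split's naming order)
cat prefix-* > bigfile-restored.txt
```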
Loop through lines of file
while read line; do
echo "$line";
done </path/to/file.txt
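A slightly more defensive variant (my addition, not part of the original note) clears `IFS` and passes `-r` to `read`, so leading whitespace and backslashes in each line survive intact:

```bash
# IFS= preserves leading/trailing whitespace; -r keeps backslashes literal
while IFS= read -r line; do
    echo "$line"
done < /path/to/file.txt
```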
## Use grep to find URLs from HTML file

```bash
cat urls.html | grep -Eo "(http|https)://[a-zA-Z0-9./?=_%:-]*"
```
- `grep -E` : egrep
- `grep -o` : only output what has been grepped
- `(http|https)` : either http OR https
- `a-zA-Z0-9` : match all lowercase, uppercase, and digits
- `.` : match period
- `/` : match slash
- `?` : match ?
- `=` : match =
- `_` : match underscore
- `%` : match percent
- `:` : match colon
- `-` : match dash
- `*` : repeat the `[…]` group any number of times
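As a usage sketch (the `curl` step and `https://example.com` are my own additions, not part of the original note), the same pattern can list the unique URLs found in a page fetched over the network:

```bash
# Download a page and list the unique http/https URLs it contains
curl -s https://example.com \
    | grep -Eo "(http|https)://[a-zA-Z0-9./?=_%:-]*" \
    | sort -u
```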
## Use Awk to print the first line of `ps aux` output followed by each grepped line

To find all cron processes with `ps aux`:

```bash
ps aux | awk 'NR<2{print $0;next}{print $0 | "grep cron"}' | grep -v "awk"
```
- `ps aux` : equivalent to `ps -aux`. `-a` displays info about other users' processes as well as the current user's. `-u` displays info associated with the keywords `user`, `pid`, `%cpu`, `%mem`, `vsz`, `rss`, `tt`, `state`, `start`, `time`, and `command`. `-x` includes processes which do not have a controlling terminal. See `man 1 ps`.
- `awk 'NR<2{print $0;next}{print $0 | "grep cron"}' | grep -v "awk"` : for the first input record (`NR` less than 2), `print` the input record (`$0`) and use `next` to skip to the next record, so the header line passes through untouched. Every following record falls through to the second, pattern-less block, `{print $0 | "grep cron"}`, which pipes the record through the `"grep cron"` command, so only matching lines are printed. This prints the first line of the `ps aux` output, which consists of the column labels, followed by only what you want to grep for (e.g. "cron" processes).
- `grep -v "awk"` : avoids printing the line containing the awk command itself.
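A shorter alternative (my own sketch, not from the original note) keeps the header by matching on the record number instead of piping through grep; the `psgrep` function name is made up for illustration:

```bash
# Print the ps aux header plus any line matching the given pattern,
# skipping the line for the awk process in this pipeline itself.
psgrep() {
    ps aux | awk -v pat="$1" 'NR == 1 || ($0 ~ pat && $0 !~ /awk/)'
}

# Example: show the column labels and all cron processes
psgrep cron
```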