automated terminal push
All checks were successful
code.softwareshinobi.com-learn/docker.softwareshinobi.com/pipeline/head This commit looks good

This commit is contained in:
2025-06-04 11:50:30 -04:00
parent 5770800032
commit f1997cab0f
195 changed files with 12169 additions and 0 deletions

11
.dockerignore Normal file
View File

@@ -0,0 +1,11 @@
.git
.pristine
.trash
.recycle
.backup
.template

0
.gitignore vendored Normal file
View File

15
Dockerfile Normal file
View File

@@ -0,0 +1,15 @@
FROM titom73/mkdocs AS MKDOCS_BUILD
RUN pip install markupsafe==2.0.1
RUN pip install mkdocs-blog-plugin
WORKDIR /docs
COPY . .
RUN mkdocs build
FROM mengzyou/bbhttpd:1.35
COPY --from=MKDOCS_BUILD --chown=www:www /docs/site /home/www/html

56
Jenkinsfile vendored Normal file
View File

@@ -0,0 +1,56 @@
pipeline {
    agent none
    options {
        disableConcurrentBuilds(abortPrevious: true)
        buildDiscarder(logRotator(numToKeepStr: '10'))
    }
    stages {
        stage('docker compose build') {
            agent {
                label "sian"
            }
            steps {
                dir('.') {
                    sh 'docker compose build'
                }
            }
        }
        stage('docker compose push') {
            agent {
                label "sian"
            }
            steps {
                dir('.') {
                    sh 'docker compose push'
                }
            }
        }
    }
}

19
compose.bash Executable file
View File

@@ -0,0 +1,19 @@
#!/bin/bash
##
reset
clear
##
set -e
set -x
##
docker compose down --remove-orphans
docker compose up --build -d

25
compose.yaml Normal file
View File

@@ -0,0 +1,25 @@
services:
  docker.softwareshinobi.com:
    container_name: docker.softwareshinobi.com
    image: softwareshinobi/docker.softwareshinobi.com
    build:
      context: .
      dockerfile: Dockerfile
    restart: unless-stopped
    ports:
      - 8000:8000
    volumes:
      - ./docs:/docs/docs
      - ./mkdocs.yml:/docs/mkdocs.yml

View File

@@ -0,0 +1,91 @@
# About the book
* **This version was published on Oct 30 2023**
This is an open-source introduction to Bash scripting that will help you learn the basics of Bash scripting and start writing awesome Bash scripts to automate your daily SysOps, DevOps, and Dev tasks. No matter if you are a DevOps/SysOps engineer, developer, or just a Linux enthusiast, you can use Bash scripts to combine different Linux commands and automate tedious and repetitive daily tasks so that you can focus on more productive and fun things.
The guide is suitable for anyone working as a developer, system administrator, or DevOps engineer who wants to learn the basics of Bash scripting.
The first 13 chapters focus purely on building solid Bash scripting foundations; the rest of the chapters give you some real-life examples and scripts.
## About the author
My name is Bobby Iliev, and I have been working as a Linux DevOps Engineer since 2014. I am an avid Linux lover and supporter of the open-source movement philosophy. I am always doing that which I cannot do in order that I may learn how to do it, and I believe in sharing knowledge.
I think it's essential always to keep professional and surround yourself with good people, work hard, and be nice to everyone. You have to perform at a consistently higher level than others. That's the mark of a true professional.
For more information, please visit my blog at [https://bobbyiliev.com](https://bobbyiliev.com), follow me on Twitter [@bobbyiliev_](https://twitter.com/bobbyiliev_) and [YouTube](https://www.youtube.com/channel/UCQWmdHTeAO0UvaNqve9udRw).
## Sponsors
This book is made possible thanks to these fantastic companies!
### Materialize
The Streaming Database for Real-time Analytics.
[Materialize](https://materialize.com/) is a reactive database that delivers incremental view updates. Materialize helps developers easily build with streaming data using standard SQL.
### DigitalOcean
DigitalOcean is a cloud services platform delivering the simplicity developers love and businesses trust to run production applications at scale.
It provides highly available, secure, and scalable compute, storage, and networking solutions that help developers build great software faster.
Founded in 2012 with offices in New York and Cambridge, MA, DigitalOcean offers transparent and affordable pricing, an elegant user interface, and one of the largest libraries of open source resources available.
For more information, please visit [https://www.digitalocean.com](https://www.digitalocean.com) or follow [@digitalocean](https://twitter.com/digitalocean) on Twitter.
If you are new to DigitalOcean, you can get a free $200 credit and spin up your own servers via this referral link here:
[Free $200 Credit For DigitalOcean](https://m.do.co/c/2a9bba940f39)
### DevDojo
The DevDojo is a resource to learn all things web development and web design. Learn on your lunch break or wake up and enjoy a cup of coffee with us to learn something new.
Join this developer community, and we can all learn together, build together, and grow together.
[Join DevDojo](https://devdojo.com?ref=bobbyiliev)
For more information, please visit [https://www.devdojo.com](https://www.devdojo.com?ref=bobbyiliev) or follow [@thedevdojo](https://twitter.com/thedevdojo) on Twitter.
## Ebook PDF Generation Tool
This ebook was generated by [Ibis](https://github.com/themsaid/ibis/) developed by [Mohamed Said](https://github.com/themsaid).
Ibis is a PHP tool that helps you write eBooks in markdown.
## Ebook ePub Generation Tool
The ePub version was generated by [Pandoc](https://pandoc.org/).
## Book Cover
The cover for this ebook was created with [Canva.com](https://www.canva.com/join/determined-cork-learn).
If you ever need to create a graphic, poster, invitation, logo, presentation or anything that looks good — give Canva a go.
## License
MIT License
Copyright (c) 2020 Bobby Iliev
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -0,0 +1,11 @@
# Introduction to Bash scripting
Welcome to this Bash basics training guide! In this **bash crash course**, you will learn the **Bash basics** so that you can start writing your own Bash scripts and automate your daily tasks.
Bash is a Unix shell and command language. It is widely available on various operating systems, and it is also the default command interpreter on most Linux systems.
Bash stands for Bourne-Again SHell. As with other shells, you can use Bash interactively directly in your terminal, and also, you can use Bash like any other programming language to write scripts. This book will help you learn the basics of Bash scripting including Bash Variables, User Input, Comments, Arguments, Arrays, Conditional Expressions, Conditionals, Loops, Functions, Debugging, and testing.
Bash scripts are great for automating repetitive workloads and can help you save time considerably. For example, imagine working with a group of five developers on a project that requires a tedious environment setup. In order for the program to work correctly, each developer has to manually set up the environment. That's the same and very long task (setting up the environment) repeated five times at least. This is where you and Bash scripts come to the rescue! So instead, you create a simple text file containing all the necessary instructions and share it with your teammates. And now, all they have to do is execute the Bash script and everything will be created for them.
In order to write Bash scripts, you just need a UNIX terminal and a text editor like Sublime Text, VS Code, or a terminal-based editor like vim or nano.

View File

@@ -0,0 +1,32 @@
# Bash Structure
Let's start by creating a new file with a `.sh` extension. As an example, we could create a file called `devdojo.sh`.
To create that file, you can use the `touch` command:
```bash
touch devdojo.sh
```
Or you can use your text editor instead:
```bash
nano devdojo.sh
```
In order to execute/run a bash script file with the bash shell interpreter, the first line of a script file must indicate the absolute path to the bash executable:
```bash
#!/bin/bash
```
This is also called a [Shebang](https://en.wikipedia.org/wiki/Shebang_(Unix)).
All that the shebang does is to instruct the operating system to run the script with the `/bin/bash` executable.
However, bash is not always located at `/bin/bash`, particularly on non-Linux systems or when it was installed as an optional package. Thus, you may want to use:
```bash
#!/usr/bin/env bash
```
This searches for the bash executable in the directories listed in the `PATH` environment variable.
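If you are not sure where bash actually lives on your system, you can check before choosing a shebang; the exact paths will differ from system to system:
```bash
# Print the first bash found in your PATH
command -v bash
# List every bash that appears in your PATH (type is a bash built-in)
type -a bash
```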

View File

@@ -0,0 +1,41 @@
# Bash Hello World
Once we have our `devdojo.sh` file created and we've specified the bash shebang on the very first line, we are ready to create our first `Hello World` bash script.
To do that, open the `devdojo.sh` file again and add the following after the `#!/bin/bash` line:
```bash
#!/bin/bash
echo "Hello World!"
```
Save the file and exit.
After that make the script executable by running:
```bash
chmod +x devdojo.sh
```
After that execute the file:
```bash
./devdojo.sh
```
You will see a "Hello World" message on the screen.
Another way to run the script would be:
```bash
bash devdojo.sh
```
As bash can be used interactively, you could run the following command directly in your terminal and you would get the same result:
```bash
echo "Hello DevDojo!"
```
Putting commands into a script becomes useful once you need to combine multiple commands.

View File

@@ -0,0 +1,131 @@
# Bash Variables
As in any other programming language, you can use variables in Bash Scripting as well. However, there are no data types, and a variable in Bash can contain numbers as well as characters.
To assign a value to a variable, all you need to do is use the `=` sign:
```bash
name="DevDojo"
```
>{notice} As an important note, you cannot have spaces before or after the `=` sign.
After that, to access the variable, you have to use the `$` and reference it as shown below:
```bash
echo $name
```
Wrapping the variable name between curly brackets is not required, but is considered a good practice, and I would advise you to use them whenever you can:
```bash
echo ${name}
```
The above code would output: `DevDojo` as this is the value of our `name` variable.
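The curly brackets become important as soon as the variable name is directly followed by other characters that could be part of a variable name; a small example using the `name` variable from above:
```bash
#!/bin/bash
name="DevDojo"
# Without braces, bash looks for a variable called "name_backup", which is unset
echo "$name_backup"
# With braces, bash expands "name" and then appends the literal "_backup"
echo "${name}_backup"
```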
Next, let's update our `devdojo.sh` script and include a variable in it.
Again, you can open the file `devdojo.sh` with your favorite text editor, I'm using nano here to open the file:
```bash
nano devdojo.sh
```
Adding our `name` variable here in the file, with a welcome message. Our file now looks like this:
```bash
#!/bin/bash
name="DevDojo"
echo "Hi there $name"
```
Save it and run the file using the command below:
```bash
./devdojo.sh
```
You would see the following output on your screen:
```bash
Hi there DevDojo
```
Here is a rundown of the script written in the file:
* `#!/bin/bash` - At first, we specified our shebang.
* `name="DevDojo"` - Then, we defined a variable called `name` and assigned a value to it.
* `echo "Hi there $name"` - Finally, we output the content of the variable on the screen as a welcome message by using `echo`
You can also add multiple variables in the file as shown below:
```bash
#!/bin/bash
name="DevDojo"
greeting="Hello"
echo "$greeting $name"
```
Save the file and run it again:
```bash
./devdojo.sh
```
You would see the following output on your screen:
```bash
Hello DevDojo
```
Note that you don't necessarily need to add a semicolon `;` at the end of each line. It works both ways, a bit like other programming languages such as JavaScript!
You can also add variables in the Command Line outside the Bash script and they can be read as parameters:
```bash
./devdojo.sh Bobby buddy!
```
This script takes in two parameters, `Bobby` and `buddy!`, separated by a space. In the `devdojo.sh` file we have the following:
```bash
#!/bin/bash
echo "Hello there" $1
```
`$1` is the first input (`Bobby`) on the Command Line. Similarly, there could be more inputs, and they are all referenced with the `$` sign and their respective order of input. This means that `buddy!` is referenced using `$2`. Another useful parameter is `$@`, which expands to all of the inputs.
So now let's change the `devdojo.sh` file to better understand:
```bash
#!/bin/bash
echo "Hello there" $1
# $1 : first parameter
echo "Hello there" $2
# $2 : second parameter
echo "Hello there" $@
# $@ : all
```
The output for:
```bash
./devdojo.sh Bobby buddy!
```
Would be the following:
```bash
Hello there Bobby
Hello there buddy!
Hello there Bobby buddy!
```

View File

@@ -0,0 +1,54 @@
# Bash User Input
With the previous script, we defined a variable, and we output the value of the variable on the screen with the `echo $name`.
Now let's go ahead and ask the user for input instead. To do that again, open the file with your favorite text editor and update the script as follows:
```bash
#!/bin/bash
echo "What is your name?"
read name
echo "Hi there $name"
echo "Welcome to DevDojo!"
```
The above will prompt the user for input and then store that input as a string/text in a variable.
We can then use the variable and print a message back to them.
The output of the above script would be:
* First run the script:
```bash
./devdojo.sh
```
* Then, you would be prompted to enter your name:
```
What is your name?
Bobby
```
* Once you've typed your name, just hit enter, and you will get the following output:
```
Hi there Bobby
Welcome to DevDojo!
```
To reduce the code, we could replace the first `echo` statement with `read -p`; the `read` command used with the `-p` flag will print a message before prompting the user for their input:
```bash
#!/bin/bash
read -p "What is your name? " name
echo "Hi there $name"
echo "Welcome to DevDojo!"
```
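As a side note, `read` supports a few more flags that are often handy in scripts, for example `-s` to hide the input (useful for passwords) and `-t` to give up after a timeout; a short sketch:
```bash
#!/bin/bash
# -s hides the typed characters, -p prints a prompt first
read -s -p "Enter your password: " password
echo ""
echo "Your password is ${#password} characters long"
# -t 5 gives the user at most 5 seconds to answer
if read -t 5 -p "What is your name? " name
then
    echo "Hi there $name"
else
    echo ""
    echo "Too slow!"
fi
```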
Make sure to test this out yourself as well!

View File

@@ -0,0 +1,27 @@
# Bash Comments
As with any other programming language, you can add comments to your script. Comments are used to leave yourself notes through your code.
To do that in Bash, you need to add the `#` symbol at the beginning of the line. Comments will never be rendered on the screen.
Here is an example of a comment:
```bash
# This is a comment and will not be rendered on the screen
```
Let's go ahead and add some comments to our script:
```bash
#!/bin/bash
# Ask the user for their name
read -p "What is your name? " name
# Greet the user
echo "Hi there $name"
echo "Welcome to DevDojo!"
```
Comments are a great way to describe some of the more complex functionality directly in your scripts so that other people could find their way around your code with ease.
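Bash has no dedicated multi-line comment syntax, but a common workaround is to feed a quoted here-document to the no-op `:` command, for example:
```bash
#!/bin/bash
: <<'COMMENT'
Everything between the two COMMENT markers is ignored by the script.
This is a common workaround for multi-line comments in Bash.
COMMENT
echo "The script continues here"
```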

View File

@@ -0,0 +1,81 @@
# Bash Arguments
You can pass arguments to your shell script when you execute it. To pass an argument, you just need to write it right after the name of your script. For example:
```bash
./devdojo.sh your_argument
```
In the script, we can then use `$1` in order to reference the first argument that we specified.
If we pass a second argument, it would be available as `$2` and so on.
Let's create a short script called `arguments.sh` as an example:
```bash
#!/bin/bash
echo "Argument one is $1"
echo "Argument two is $2"
echo "Argument three is $3"
```
Save the file and make it executable:
```bash
chmod +x arguments.sh
```
Then run the file and pass **3** arguments:
```bash
./arguments.sh dog cat bird
```
The output that you would get would be:
```bash
Argument one is dog
Argument two is cat
Argument three is bird
```
To reference all arguments, you can use `$@`:
```bash
#!/bin/bash
echo "All arguments: $@"
```
If you run the script again:
```bash
./arguments.sh dog cat bird
```
You will get the following output:
```
All arguments: dog cat bird
```
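Closely related to `$@` is `$#`, which expands to the number of arguments passed to the script; this is handy for a quick sanity check before doing any work, for example:
```bash
#!/bin/bash
# $# holds the number of arguments the script was called with
echo "You passed $# arguments"
if [[ $# -lt 3 ]]
then
    echo "Usage: $0 arg1 arg2 arg3"
    exit 1
fi
```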
Another thing that you need to keep in mind is that `$0` is used to reference the script itself.
This is an excellent way to make the file self-destruct if you need to, or just to get the name of the script.
For example, let's create a script that prints out the name of the file and deletes the file after that:
```bash
#!/bin/bash
echo "The name of the file is: $0 and it is going to be self-deleted."
rm -f $0
```
You need to be careful with the self deletion and ensure that you have your script backed up before you self-delete it.

View File

@@ -0,0 +1,112 @@
# Bash Arrays
If you have ever done any programming, you are probably already familiar with arrays.
But just in case you are not a developer, the main thing that you need to know is that unlike variables, arrays can hold several values under one name.
You can initialize an array by assigning values divided by space and enclosed in `()`. Example:
```bash
my_array=("value 1" "value 2" "value 3" "value 4")
```
To access the elements in the array, you need to reference them by their numeric index.
>{notice} keep in mind that you need to use curly brackets.
* Access a single element, this would output: `value 2`
```bash
echo ${my_array[1]}
```
* This would return the last element: `value 4`
```bash
echo ${my_array[-1]}
```
* As with command line arguments, using `@` will return all elements of the array, as follows: `value 1 value 2 value 3 value 4`
```bash
echo ${my_array[@]}
```
* Prepending the array with a hash sign (`#`) would output the total number of elements in the array, in our case it is `4`:
```bash
echo ${#my_array[@]}
```
Make sure to test this and practice it at your end with different values.
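You can also grow an array after it has been created and loop over all of its elements; here is a short example using the same kind of array:
```bash
#!/bin/bash
my_array=("value 1" "value 2" "value 3" "value 4")
# Append a new element to the end of the array
my_array+=("value 5")
# Loop over every element; the quotes keep elements with spaces intact
for item in "${my_array[@]}"
do
    echo "${item}"
done
```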
## Substring in Bash :: Slicing
Let's review the following example of slicing a string in Bash:
```bash
#!/bin/bash
letters=( "A""B""C""D""E" )
echo ${letters[@]}
```
Because the quoted letters are written without spaces between them, Bash concatenates them into a single array element, so this prints the one element `ABCDE`.
Output:
```bash
$ ABCDE
```
Let's see a few more examples:
- Example 1
```bash
#!/bin/bash
letters=( "A""B""C""D""E" )
b=${letters:0:2}
echo "${b}"
```
This takes a substring of that string starting at index 0 with a length of 2 (index 2 is exclusive), which gives us `AB`.
```bash
$ AB
```
- Example 2
```bash
#!/bin/bash
letters=( "A""B""C""D""E" )
b=${letters::5}
echo "${b}"
```
This prints the first 5 characters (index 5 is exclusive); when the starting index is omitted, it defaults to 0.
```bash
$ ABCDE
```
- Example 3
```bash
#!/bin/bash
letters=( "A""B""C""D""E" )
b=${letters:3}
echo "${b}"
```
This prints everything from index 3 to the end of the string.
```bash
$ DE
```
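The slicing examples above therefore operate on the string stored in the first element (`letters` with no index refers to `letters[0]`). To slice the array itself, element by element, you can use the `${array[@]:offset:count}` form; a short sketch:
```bash
#!/bin/bash
fruits=("apple" "banana" "cherry" "date" "elderberry")
# Two elements starting at index 1: banana cherry
echo "${fruits[@]:1:2}"
# Everything from index 2 onwards: cherry date elderberry
echo "${fruits[@]:2}"
```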

View File

@@ -0,0 +1,186 @@
# Bash Conditional Expressions
In computer science, conditional statements, conditional expressions, and conditional constructs are features of a programming language, which perform different computations or actions depending on whether a programmer-specified boolean condition evaluates to true or false.
In Bash, conditional expressions are used by the `[[` compound command and the `test` and `[` built-in commands to test file attributes and perform string and arithmetic comparisons.
Here is a list of the most popular Bash conditional expressions. You do not have to memorize them by heart. You can simply refer back to this list whenever you need it!
## File expressions
* True if file exists.
```bash
[[ -a ${file} ]]
```
* True if file exists and is a block special file.
```bash
[[ -b ${file} ]]
```
* True if file exists and is a character special file.
```bash
[[ -c ${file} ]]
```
* True if file exists and is a directory.
```bash
[[ -d ${file} ]]
```
* True if file exists.
```bash
[[ -e ${file} ]]
```
* True if file exists and is a regular file.
```bash
[[ -f ${file} ]]
```
* True if file exists and is a symbolic link.
```bash
[[ -h ${file} ]]
```
* True if file exists and is readable.
```bash
[[ -r ${file} ]]
```
* True if file exists and has a size greater than zero.
```bash
[[ -s ${file} ]]
```
* True if file exists and is writable.
```bash
[[ -w ${file} ]]
```
* True if file exists and is executable.
```bash
[[ -x ${file} ]]
```
* True if file exists and is a symbolic link.
```bash
[[ -L ${file} ]]
```
## String expressions
* True if the shell variable varname is set (has been assigned a value).
```bash
[[ -v ${varname} ]]
```
* True if the length of the string is zero.
```bash
[[ -z ${string} ]]
```
* True if the length of the string is non-zero.
```bash
[[ -n ${string} ]]
```
* True if the strings are equal. `=` should be used with the test command for POSIX conformance. When used with the `[[` command, this performs pattern matching as described above (Compound Commands).
```bash
[[ ${string1} == ${string2} ]]
```
* True if the strings are not equal.
```bash
[[ ${string1} != ${string2} ]]
```
* True if string1 sorts before string2 lexicographically.
```bash
[[ ${string1} < ${string2} ]]
```
* True if string1 sorts after string2 lexicographically.
```bash
[[ ${string1} > ${string2} ]]
```
## Arithmetic operators
* Returns true if the numbers are **equal**
```bash
[[ ${arg1} -eq ${arg2} ]]
```
* Returns true if the numbers are **not equal**
```bash
[[ ${arg1} -ne ${arg2} ]]
```
* Returns true if arg1 is **less than** arg2
```bash
[[ ${arg1} -lt ${arg2} ]]
```
* Returns true if arg1 is **less than or equal to** arg2
```bash
[[ ${arg1} -le ${arg2} ]]
```
* Returns true if arg1 is **greater than** arg2
```bash
[[ ${arg1} -gt ${arg2} ]]
```
* Returns true if arg1 is **greater than or equal to** arg2
```bash
[[ ${arg1} -ge ${arg2} ]]
```
As a side note, arg1 and arg2 may be positive or negative integers.
As with other programming languages you can use `AND` & `OR` conditions:
```bash
[[ test_case_1 ]] && [[ test_case_2 ]] # And
[[ test_case_1 ]] || [[ test_case_2 ]] # Or
```
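With the `[[` compound command, you can also combine several tests inside a single pair of brackets, for example:
```bash
#!/bin/bash
file="/etc/passwd"
# True only if the file exists AND is readable
if [[ -f ${file} && -r ${file} ]]
then
    echo "${file} exists and is readable"
fi
```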
## Exit status operators
* Returns true if the command was successful without any errors
```bash
[[ $? -eq 0 ]]
```
* Returns true if the command was not successful or had errors
```bash
[[ $? -gt 0 ]]
```

View File

@@ -0,0 +1,187 @@
# Bash Conditionals
In the last section, we covered some of the most popular conditional expressions. We can now use them with standard conditional statements like `if`, `if-else` and `switch case` statements.
## If statement
The format of an `if` statement in Bash is as follows:
```bash
if [[ some_test ]]
then
<commands>
fi
```
Here is a quick example which would ask you to enter your name in case that you've left it empty:
```bash
#!/bin/bash
# Bash if statement example
read -p "What is your name? " name
if [[ -z ${name} ]]
then
echo "Please enter your name!"
fi
```
## If Else statement
With an `if-else` statement, you can specify an action in case that the condition in the `if` statement does not match. We can combine this with the conditional expressions from the previous section as follows:
```bash
#!/bin/bash
# Bash if statement example
read -p "What is your name? " name
if [[ -z ${name} ]]
then
echo "Please enter your name!"
else
echo "Hi there ${name}"
fi
```
You can use the above if statement with all of the conditional expressions from the previous chapters:
```bash
#!/bin/bash
admin="devdojo"
read -p "Enter your username? " username
# Check if the username provided is the admin
if [[ "${username}" == "${admin}" ]] ; then
echo "You are the admin user!"
else
echo "You are NOT the admin user!"
fi
```
Here is another example of an `if` statement which would check your current `User ID` and would not allow you to run the script as the `root` user:
```bash
#!/bin/bash
if (( $EUID == 0 )); then
echo "Please do not run as root"
exit
fi
```
If you put this on top of your script it would exit in case that the EUID is 0 and would not execute the rest of the script. This was discussed on [the DigitalOcean community forum](https://www.digitalocean.com/community/questions/how-to-check-if-running-as-root-in-a-bash-script).
You can also test multiple conditions with an `if` statement. In this example, we want to make sure that the user is neither the admin user nor the root user, so that the script is incapable of causing too much damage. We'll use the `or` operator in this example, noted by `||`. This means that only one of the conditions needs to be true. If we used the `and` operator, `&&`, then both conditions would need to be true.
```bash
#!/bin/bash
admin="devdojo"
read -p "Enter your username? " username
# Check if the username provided is the admin
if [[ "${username}" != "${admin}" ]] || [[ $EUID != 0 ]] ; then
echo "You are not the admin or root user, but please be safe!"
else
echo "You are the admin user! This could be very destructive!"
fi
```
If you have multiple conditions and scenarios, you can use the `elif` statement together with `if` and `else` statements.
```bash
#!/bin/bash
read -p "Enter a number: " num
if [[ $num -gt 0 ]] ; then
echo "The number is positive"
elif [[ $num -lt 0 ]] ; then
echo "The number is negative"
else
echo "The number is 0"
fi
```
## Switch case statements
As in other programming languages, you can use a `case` statement to simplify complex conditionals when there are multiple different choices. So rather than using a few `if`, and `if-else` statements, you could use a single `case` statement.
The Bash `case` statement syntax looks like this:
```bash
case $some_variable in
pattern_1)
commands
;;
pattern_2| pattern_3)
commands
;;
*)
default commands
;;
esac
```
A quick rundown of the structure:
* All `case` statements start with the `case` keyword.
* On the same line as the `case` keyword, you need to specify a variable or an expression followed by the `in` keyword.
* After that, you have your `case` patterns, where you need to use `)` to identify the end of the pattern.
* You can specify multiple patterns divided by a pipe: `|`.
* After the pattern, you specify the commands that you would like to be executed in case that the pattern matches the variable or the expression that you've specified.
* All clauses have to be terminated by adding `;;` at the end.
* You can have a default statement by adding a `*` as the pattern.
* To close the `case` statement, use the `esac` (case typed backwards) keyword.
Here is an example of a Bash `case` statement:
```bash
#!/bin/bash
read -p "Enter the name of your car brand: " car
case $car in
Tesla)
echo -n "${car}'s car factory is in the USA."
;;
BMW | Mercedes | Audi | Porsche)
echo -n "${car}'s car factory is in Germany."
;;
Toyota | Mazda | Mitsubishi | Subaru)
echo -n "${car}'s car factory is in Japan."
;;
*)
echo -n "${car} is an unknown car brand"
;;
esac
```
With this script, we ask the user to input the name of a car brand like Tesla, BMW, Mercedes, etc.
Then, with a `case` statement, we check whether the brand name matches any of our patterns, and if so, we print out the factory's location.
If the brand name does not match any of our `case` statements, we print out a default message: `an unknown car brand`.
## Conclusion
I would advise you to try and modify the script and play with it a bit so that you could practice what you've just learned in the last two chapters!
For more examples of Bash `case` statements, make sure to check chapter 16, where we create an interactive menu in Bash using a `case` statement to process the user input.

View File

@@ -0,0 +1,197 @@
# Bash Loops
As with any other language, loops are very convenient. With Bash you can use `for` loops, `while` loops, and `until` loops.
## For loops
Here is the structure of a for loop:
```bash
for var in ${list}
do
your_commands
done
```
Example:
```bash
#!/bin/bash
users="devdojo bobby tony"
for user in ${users}
do
echo "${user}"
done
```
A quick rundown of the example:
* First, we specify a list of users and store the value in a variable called `$users`.
* After that, we start our `for` loop with the `for` keyword.
* Then we define a new variable which would represent each item from the list that we give. In our case, we define a variable called `user`, which would represent each user from the `$users` variable.
* Then we specify the `in` keyword followed by our list that we will loop through.
* On the next line, we use the `do` keyword, which indicates what we will do for each iteration of the loop.
* Then we specify the commands that we want to run.
* Finally, we close the loop with the `done` keyword.
You can also use `for` to process a series of numbers. For example here is one way to loop through from 1 to 10:
```bash
#!/bin/bash
for num in {1..10}
do
echo ${num}
done
```
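If you need more control over the start, end, and step of the sequence, Bash also supports a C-style `for` loop, which is equivalent to the example above:
```bash
#!/bin/bash
# C-style for loop: start at 1, run while num <= 10, increment by 1
for (( num = 1; num <= 10; num++ ))
do
    echo ${num}
done
```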
## While loops
The structure of a while loop is quite similar to the `for` loop:
```bash
while [ your_condition ]
do
your_commands
done
```
Here is an example of a `while` loop:
```bash
#!/bin/bash
counter=1
while [[ $counter -le 10 ]]
do
echo $counter
((counter++))
done
```
First, we specified a counter variable and set it to `1`; then, inside the loop, we incremented the counter using this statement: `((counter++))`. That way, we make sure that the loop will run 10 times only and will not run forever. The loop completes as soon as the counter becomes greater than 10, as this is what we've set as the condition: `while [[ $counter -le 10 ]]`.
Let's create a script that asks the user for their name and does not allow empty input:
```bash
#!/bin/bash
read -p "What is your name? " name
while [[ -z ${name} ]]
do
echo "Your name can not be blank. Please enter a valid name!"
read -p "Enter your name again? " name
done
echo "Hi there ${name}"
```
Now, if you run the above and just press enter without providing input, the loop would run again and ask you for your name again and again until you actually provide some input.
## Until Loops
The difference between `until` and `while` loops is that the `until` loop will run the commands within the loop until the condition becomes true.
Structure:
```bash
until [[ your_condition ]]
do
your_commands
done
```
Example:
```bash
#!/bin/bash
count=1
until [[ $count -gt 10 ]]
do
echo $count
((count++))
done
```
## Continue and Break
As with other languages, you can use `continue` and `break` with your bash scripts as well:
* `continue` tells your bash script to stop the current iteration of the loop and start the next iteration.
The syntax of the continue statement is as follows:
```bash
continue [n]
```
The `[n]` argument is optional and must be greater than or equal to 1. When `[n]` is given, the n-th enclosing loop is resumed. `continue 1` is equivalent to `continue`.
```bash
#!/bin/bash
for i in 1 2 3 4 5
do
    if [[ $i -eq 2 ]]
    then
        echo "skipping number 2"
        continue
    fi
    echo "i is equal to $i"
done
```
We can also use the `continue` command in a similar way to the `break` command to control multiple loops.
* `break` tells your bash script to end the loop straight away.
The syntax of the break statement takes the following form:
```bash
break [n]
```
`[n]` is an optional argument and must be greater than or equal to 1. When `[n]` is provided, the n-th enclosing loop is exited. `break 1` is equivalent to `break`.
Example:
```bash
#!/bin/bash
num=1
while [[ $num -lt 10 ]]
do
    if [[ $num -eq 5 ]]
    then
        break
    fi
    ((num++))
done
echo "Loop completed"
```
We can also use the `break` command with multiple loops. If we want to exit the current working loop, whether inner or outer, we simply use `break`, but if we are in the inner loop and want to exit out of the outer loop as well, we use `break 2`.
Example:
```bash
#!/bin/bash
for (( a = 1; a < 10; a++ ))
do
    echo "outer loop: $a"
    for (( b = 1; b < 100; b++ ))
    do
        if [[ $b -gt 5 ]]
        then
            break 2
        fi
        echo "Inner loop: $b "
    done
done
```
The bash script will begin with `a=1` and move into the inner loop; once `b` exceeds 5, `break 2` exits both loops.
We can use `break` instead of `break 2` to break only the inner loop and see how it affects the output.

View File

@@ -0,0 +1,66 @@
# Bash Functions
Functions are a great way to reuse code. The structure of a function in bash is quite similar to most languages:
```bash
function function_name() {
your_commands
}
```
You can also omit the `function` keyword at the beginning, which would also work:
```bash
function_name() {
your_commands
}
```
I prefer putting it there for better readability. But it is a matter of personal preference.
Example of a "Hello World!" function:
```bash
#!/bin/bash
function hello() {
echo "Hello World Function!"
}
hello
```
>{notice} One thing to keep in mind is that you should not add the parenthesis when you call the function.
Passing arguments to a function works in the same way as passing arguments to a script:
```bash
#!/bin/bash
function hello() {
echo "Hello $1!"
}
hello DevDojo
```
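Bash functions do not return values the way functions in other languages do. A common pattern is to `echo` the result and capture it with command substitution, keeping helper variables `local` so they do not leak into the rest of the script; here is a small sketch:
```bash
#!/bin/bash
function greet() {
    # local keeps the variable scoped to the function
    local message="Hello $1!"
    echo "${message}"
}
# Capture the function's output in a variable
greeting=$(greet DevDojo)
echo "The function returned: ${greeting}"
```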
Functions should have comments mentioning the description, global variables, arguments, outputs, and returned values, if applicable:
```bash
#######################################
# Description: Hello function
# Globals:
# None
# Arguments:
# Single input argument
# Outputs:
# Value of input argument
# Returns:
# 0 if successful, non-zero on error.
#######################################
function hello() {
echo "Hello $1!"
}
```
In the next few chapters we will be using functions a lot!

View File

@@ -0,0 +1,83 @@
# Debugging, testing and shortcuts
In order to debug your bash scripts, you can use `-x` when executing your scripts:
```bash
bash -x ./your_script.sh
```
Or you can add `set -x` before the specific line that you want to debug; `set -x` enables a mode of the shell in which all executed commands are printed to the terminal.
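You can also turn tracing on only around the part of the script you are interested in and switch it off again with `set +x`, for example:
```bash
#!/bin/bash
echo "This part is not traced"
set -x   # start printing each command before it is executed
name="DevDojo"
echo "Hi there ${name}"
set +x   # stop tracing
echo "Tracing is off again"
```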
Another way to test your scripts is to use this fantastic tool here:
[https://www.shellcheck.net/](https://www.shellcheck.net/)
Just copy and paste your code into the textbox, and the tool will give you some suggestions on how you can improve your script.
You can also run the tool directly in your terminal:
[https://github.com/koalaman/shellcheck](https://github.com/koalaman/shellcheck)
If you like the tool, make sure to star it on GitHub and contribute!
As a SysAdmin/DevOps, I spend a lot of my day in the terminal. Here are my favorite shortcuts that help me do tasks quicker while writing Bash scripts or just while working in the terminal.
The below two are particularly useful if you have a very long command.
* Delete everything from the cursor to the end of the line:
```
Ctrl + k
```
* Delete everything from the cursor to the start of the line:
```
Ctrl + u
```
* Delete one word backward from cursor:
```
Ctrl + w
```
* Search your history backward. This is probably the one that I use the most. It is really handy and speeds up my work-flow a lot:
```
Ctrl + r
```
* Clear the screen, I use this instead of typing the `clear` command:
```
Ctrl + l
```
* Stops the output to the screen:
```
Ctrl + s
```
* Resume the output to the screen in case it was previously stopped by `Ctrl + s`:
```
Ctrl + q
```
* Terminate the current command:
```
Ctrl + c
```
* Suspend the current command (you can later resume it in the background with `bg` or in the foreground with `fg`):
```
Ctrl + z
```
I use those every day, and they save me a lot of time.
If you think that I've missed any feel free to join the discussion on [the DigitalOcean community forum](https://www.digitalocean.com/community/questions/what-are-your-favorite-bash-shortcuts)!

View File

@@ -0,0 +1,83 @@
# Creating custom bash commands
As a developer or system administrator, you might have to spend a lot of time in your terminal. I always try to look for ways to optimize any repetitive tasks.
One way to do that is to either write short bash scripts or create custom commands also known as aliases. For example, rather than typing a really long command every time you could just create a shortcut for it.
## Example
Let's start with the following scenario, as a system admin, you might have to check the connections to your web server quite often, so I will use the `netstat` command as an example.
What I would usually do when I access a server that is having issues with the connections to port 80 or 443 is to check if there are any services listening on those ports and the number of connections to the ports.
The following `netstat` command would show us how many TCP connections on port 80 and 443 we currently have:
```bash
netstat -plant | grep '80\|443' | grep -v LISTEN | wc -l
```
This is quite a lengthy command, so typing it every time might be time-consuming in the long run, especially when you want to get that information quickly.
To avoid that, we can create an alias, so rather than typing the whole command, we could just type a short command instead. For example, let's say that we wanted to be able to type `conn` (short for connections) and get the same information. All we need to do in this case is to run the following command:
```bash
alias conn="netstat -plant | grep '80\|443' | grep -v LISTEN | wc -l"
```
That way we are creating an alias called `conn` which would essentially be a 'shortcut' for our long `netstat` command. Now if you run just `conn`:
```bash
conn
```
You would get the same output as the long `netstat` command.
You can get even more creative and add some info messages like this one here:
```bash
alias conn="echo 'Total connections on port 80 and 443:' ; netstat -plant | grep '80\|443' | grep -v LISTEN | wc -l"
```
Now if you run `conn` you would get the following output:
```bash
Total connections on port 80 and 443:
12
```
Now if you log out and log back in, your alias would be lost. In the next step you will see how to make this persistent.
## Making the change persistent
In order to make the change persistent, we need to add the `alias` command in our shell profile file.
By default, on Ubuntu this would be the `~/.bashrc` file; on other operating systems it might be `~/.bash_profile`. Open the file with your favorite text editor:
```bash
nano ~/.bashrc
```
Go to the bottom and add the following:
```bash
alias conn="echo 'Total connections on port 80 and 443:' ; netstat -plant | grep '80\|443' | grep -v LISTEN | wc -l"
```
Save and then exit.
That way, even if you log out and log back in again, your change will persist, and you will be able to run your custom bash command.
## Listing all of the available aliases
To list all of the available aliases for your current shell, you have to just run the following command:
```bash
alias
```
This is handy in case you are seeing some weird behavior with certain commands.
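One limitation of aliases is that they cannot take arguments in the middle of the command. When you need that, a small shell function in your `~/.bashrc` works as a drop-in replacement; here is a sketch (the `conn_port` name is just an example, not part of the original setup):
```bash
# Count connections on a port passed as the first argument, e.g.: conn_port 443
conn_port() {
    echo "Total connections on port $1:"
    netstat -plant | grep "$1" | grep -v LISTEN | wc -l
}
```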
## Conclusion
This is one way of creating custom bash commands or bash aliases.
Of course, you could actually write a bash script and add the script inside your `/usr/bin` folder, but this would not work if you don't have root or sudo access, whereas with aliases you can do it without the need for root access.
>{notice} This was initially posted on [DevDojo.com](https://devdojo.com/bobbyiliev/how-to-create-custom-bash-commands)

View File

@@ -0,0 +1,180 @@
# Write your first Bash script
Let's try to put together what we've learned so far and create our first Bash script!
## Planning the script
As an example, we will write a script that would gather some useful information about our server like:
* Current Disk usage
* Current CPU usage
* Current RAM usage
* Check the exact Kernel version
Feel free to adjust the script by adding or removing functionality so that it matches your needs.
## Writing the script
The first thing that you need to do is to create a new file with a `.sh` extension. I will create a file called `status.sh`, as the script that we will create will give us the status of our server.
Once you've created the file, open it with your favorite text editor.
As we've learned in chapter 1, on the very first line of our Bash script we need to specify the so-called [Shebang](https://en.wikipedia.org/wiki/Shebang_(Unix)):
```bash
#!/bin/bash
```
All that the shebang does is to instruct the operating system to run the script with the /bin/bash executable.
## Adding comments
Next, as discussed in chapter 6, let's start by adding some comments so that people could easily figure out what the script is used for. To do that right after the shebang you can just add the following:
```bash
#!/bin/bash
# Script that returns the current server status
```
## Adding your first variable
Then let's go ahead and apply what we've learned in chapter 4 and add some variables which we might want to use throughout the script.
To assign a value to a variable in bash, you just have to use the `=` sign. For example, let's store the hostname of our server in a variable so that we could use it later:
```bash
server_name=$(hostname)
```
By using `$()` we tell bash to actually interpret the command and then assign the value to our variable.
Now if we were to echo out the variable we would see the current hostname:
```bash
echo $server_name
```
## Adding your first function
As you already know after reading chapter 12, in order to create a function in bash you need to use the following structure:
```bash
function function_name() {
your_commands
}
```
Let's create a function that returns the current memory usage on our server:
```bash
function memory_check() {
echo ""
echo "The current memory usage on ${server_name} is: "
free -h
echo ""
}
```
Quick rundown of the function:
* `function memory_check() {` - this is how we define the function
* `echo ""` - here we just print a new line
* `echo "The current memory usage on ${server_name} is: "` - here we print a small message and the `${server_name}` variable
* `free -h` - this is the command that prints the current memory usage in a human-readable format
* `}` - finally, this is how we close the function
Then once the function has been defined, in order to call it, just use the name of the function:
```bash
# Define the function
function memory_check() {
echo ""
echo "The current memory usage on ${server_name} is: "
free -h
echo ""
}
# Call the function
memory_check
```
## Adding more functions challenge
Before checking out the solution, I would challenge you to use the function from above and write a few functions by yourself.
The functions should do the following:
* Current Disk usage
* Current CPU usage
* Current RAM usage
* Check the exact Kernel version
Feel free to use Google if you are not sure what commands you need to use in order to get that information.
Once you are ready, feel free to scroll down and check how we've done it and compare the results!
Note that there are multiple correct ways of doing it!
## The sample script
Here's what the end result would look like:
```bash
#!/bin/bash
##
# BASH script that checks:
# - Memory usage
# - CPU load
# - Number of TCP connections
# - Kernel version
##
server_name=$(hostname)
function memory_check() {
echo ""
echo "Memory usage on ${server_name} is: "
free -h
echo ""
}
function cpu_check() {
echo ""
echo "CPU load on ${server_name} is: "
echo ""
uptime
echo ""
}
function tcp_check() {
echo ""
echo "TCP connections on ${server_name}: "
echo ""
cat /proc/net/tcp | wc -l
echo ""
}
function kernel_check() {
echo ""
echo "Kernel version on ${server_name} is: "
echo ""
uname -r
echo ""
}
function all_checks() {
memory_check
cpu_check
tcp_check
kernel_check
}
all_checks
```
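The plan at the start of this chapter also mentioned the current disk usage, which the sample script above does not cover. As a sketch, you could add one more function along these lines and call it from `all_checks`:
```bash
function disk_check() {
    echo ""
    echo "Disk usage on ${server_name} is: "
    df -h
    echo ""
}
```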
## Conclusion
Bash scripting is awesome! No matter if you are a DevOps/SysOps engineer, developer, or just a Linux enthusiast, you can use Bash scripts to combine different Linux commands and automate boring and repetitive daily tasks, so that you can focus on more productive and fun things!
>{notice} This was initially posted on [DevDojo.com](https://devdojo.com/bobbyiliev/introduction-to-bash-scripting)

View File

@@ -0,0 +1,305 @@
# Creating an interactive menu in Bash
In this tutorial, I will show you how to create a multiple-choice menu in Bash so that your users can choose which action should be executed!
We will reuse some of the code from the previous chapter, so if you have not read it yet, make sure to do so.
## Planning the functionality
Let's start again by going over the main functionality of the script:
* Checks the current Disk usage
* Checks the current CPU usage
* Checks the current RAM usage
* Checks the exact Kernel version
In case that you don't have it on hand, here is the script itself:
```bash
#!/bin/bash
##
# BASH menu script that checks:
# - Memory usage
# - CPU load
# - Number of TCP connections
# - Kernel version
##
server_name=$(hostname)
function memory_check() {
echo ""
echo "Memory usage on ${server_name} is: "
free -h
echo ""
}
function cpu_check() {
echo ""
echo "CPU load on ${server_name} is: "
echo ""
uptime
echo ""
}
function tcp_check() {
echo ""
echo "TCP connections on ${server_name}: "
echo ""
cat /proc/net/tcp | wc -l
echo ""
}
function kernel_check() {
echo ""
echo "Kernel version on ${server_name} is: "
echo ""
uname -r
echo ""
}
function all_checks() {
memory_check
cpu_check
tcp_check
kernel_check
}
```
We will then build a menu that allows the user to choose which function to be executed.
Of course, you can adjust the function or add new ones depending on your needs.
## Adding some colors
In order to make the menu a bit more 'readable' and easy to grasp at first glance, we will add some color functions.
At the beginning of your script add the following color functions:
```bash
##
# Color Variables
##
green='\e[32m'
blue='\e[34m'
red='\e[31m'
clear='\e[0m'
##
# Color Functions
##
ColorGreen(){
echo -ne $green$1$clear
}
ColorBlue(){
echo -ne $blue$1$clear
}
```
You can use the color functions as follows:
```bash
echo -ne $(ColorBlue 'Some text here')
```
The above would output the `Some text here` string and it would be blue!
## Adding the menu
Finally, to add our menu, we will create a separate function with a case switch for our menu options:
```bash
menu(){
echo -ne "
My First Menu
$(ColorGreen '1)') Memory usage
$(ColorGreen '2)') CPU load
$(ColorGreen '3)') Number of TCP connections
$(ColorGreen '4)') Kernel version
$(ColorGreen '5)') Check All
$(ColorGreen '0)') Exit
$(ColorBlue 'Choose an option:') "
read a
case $a in
1) memory_check ; menu ;;
2) cpu_check ; menu ;;
3) tcp_check ; menu ;;
4) kernel_check ; menu ;;
5) all_checks ; menu ;;
0) exit 0 ;;
*) echo -e $red"Wrong option."$clear; menu;;
esac
}
```
### A quick rundown of the code
First we just echo out the menu options with some color:
```
echo -ne "
My First Menu
$(ColorGreen '1)') Memory usage
$(ColorGreen '2)') CPU load
$(ColorGreen '3)') Number of TCP connections
$(ColorGreen '4)') Kernel version
$(ColorGreen '5)') Check All
$(ColorGreen '0)') Exit
$(ColorBlue 'Choose an option:') "
```
Then we read the answer of the user and store it in a variable called `$a`:
```bash
read a
```
Finally, we have a switch case which triggers a different function depending on the value of `$a`:
```bash
case $a in
1) memory_check ; menu ;;
2) cpu_check ; menu ;;
3) tcp_check ; menu ;;
4) kernel_check ; menu ;;
5) all_checks ; menu ;;
0) exit 0 ;;
*) echo -e $red"Wrong option."$clear; menu;;
esac
```
At the end we need to call the menu function to actually print out the menu:
```bash
# Call the menu function
menu
```
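As a side note, Bash also ships a built-in `select` construct that generates a numbered menu for you. It is less flexible than the hand-rolled menu above, but it can be handy for simple cases; a minimal sketch:
```bash
#!/bin/bash
PS3="Choose an option: "
select option in "Memory usage" "CPU load" "Exit"
do
    case $option in
        "Memory usage") free -h ;;
        "CPU load") uptime ;;
        "Exit") break ;;
        *) echo "Wrong option." ;;
    esac
done
```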
## Testing the script
In the end, your script will look like this:
```bash
#!/bin/bash
##
# BASH menu script that checks:
# - Memory usage
# - CPU load
# - Number of TCP connections
# - Kernel version
##
server_name=$(hostname)
function memory_check() {
echo ""
echo "Memory usage on ${server_name} is: "
free -h
echo ""
}
function cpu_check() {
echo ""
echo "CPU load on ${server_name} is: "
echo ""
uptime
echo ""
}
function tcp_check() {
echo ""
echo "TCP connections on ${server_name}: "
echo ""
cat /proc/net/tcp | wc -l
echo ""
}
function kernel_check() {
echo ""
echo "Kernel version on ${server_name} is: "
echo ""
uname -r
echo ""
}
function all_checks() {
memory_check
cpu_check
tcp_check
kernel_check
}
##
# Color Variables
##
green='\e[32m'
blue='\e[34m'
red='\e[31m'
clear='\e[0m'
##
# Color Functions
##
ColorGreen(){
echo -ne $green$1$clear
}
ColorBlue(){
echo -ne $blue$1$clear
}
menu(){
echo -ne "
My First Menu
$(ColorGreen '1)') Memory usage
$(ColorGreen '2)') CPU load
$(ColorGreen '3)') Number of TCP connections
$(ColorGreen '4)') Kernel version
$(ColorGreen '5)') Check All
$(ColorGreen '0)') Exit
$(ColorBlue 'Choose an option:') "
read a
case $a in
1) memory_check ; menu ;;
2) cpu_check ; menu ;;
3) tcp_check ; menu ;;
4) kernel_check ; menu ;;
5) all_checks ; menu ;;
0) exit 0 ;;
*) echo -e $red"Wrong option."$clear; menu;;
esac
}
# Call the menu function
menu
```
To test the script, create a new file with a `.sh` extension, for example `menu.sh`, and then run it:
```bash
bash menu.sh
```
The output that you would get will look like this:
```bash
My First Menu
1) Memory usage
2) CPU load
3) Number of TCP connections
4) Kernel version
5) Check All
0) Exit
Choose an option:
```
You will be able to choose a different option from the list and each number will call a different function from the script:
![Nice Bash interactive menu](https://imgur.com/8EgxX4m.png)
## Conclusion
You now know how to create a Bash menu and implement it in your scripts so that users could select different values!
>{notice} This content was initially posted on [DevDojo.com](https://devdojo.com/bobbyiliev/how-to-work-with-json-in-bash-using-jq)

View File

@@ -0,0 +1,129 @@
# Executing BASH scripts on Multiple Remote Servers
Any command that you can run from the command line can be used in a bash script. Scripts are used to run a series of commands. Bash is available by default on Linux and macOS operating systems.
Let's have a hypothetical scenario where you need to execute a BASH script on multiple remote servers, but you don't want to manually copy the script to each server, log in to each server individually, and only then execute the script.
Of course, you could use a tool like Ansible, but let's learn how to do that with Bash!
## Prerequisites
For this example I will use 3 remote Ubuntu servers deployed on DigitalOcean. If you don't have a Digital Ocean account yet, you can sign up for DigitalOcean and get $100 free credit via this referral link here:
[https://m.do.co/c/2a9bba940f39](https://m.do.co/c/2a9bba940f39)
Once you have your Digital Ocean account ready go ahead and deploy 3 droplets.
I've gone ahead and created 3 Ubuntu servers:
![DigitalOcean Ubuntu servers](https://imgur.com/09xmq41.png)
I'll put those servers' IPs in a `servers.txt` file, which we will use to loop through with our Bash script.
If you are new to DigitalOcean you can follow the steps on how to create a Droplet here:
* [How to Create a Droplet from the DigitalOcean Control Panel](https://www.digitalocean.com/docs/droplets/how-to/create/)
You can also follow the steps from this video here on how to do your initial server setup:
* [How to do your Initial Server Setup with Ubuntu](https://youtu.be/7NL2_4HIgKU)
Or even better, you can follow this article here on how to automate your initial server setup with Bash:
[Automating Initial Server Setup with Ubuntu 18.04 with Bash](https://www.digitalocean.com/community/tutorials/automating-initial-server-setup-with-ubuntu-18-04)
With the 3 new servers in place, we can go ahead and focus on running our Bash script on all of them with a single command!
## The BASH Script
I will reuse the demo script from the previous chapter with some slight changes. It simply executes a few checks like the current memory usage, the current CPU usage, the number of TCP connections and the version of the kernel.
```bash
#!/bin/bash
##
# BASH script that checks the following:
# - Memory usage
# - CPU load
# - Number of TCP connections
# - Kernel version
##
##
# Memory check
##
server_name=$(hostname)
function memory_check() {
echo "#######"
echo "The current memory usage on ${server_name} is: "
free -h
echo "#######"
}
function cpu_check() {
echo "#######"
echo "The current CPU load on ${server_name} is: "
echo ""
uptime
echo "#######"
}
function tcp_check() {
echo "#######"
echo "Total TCP connections on ${server_name}: "
echo ""
cat /proc/net/tcp | wc -l
echo "#######"
}
function kernel_check() {
echo "#######"
echo "The exact Kernel version on ${server_name} is: "
echo ""
uname -r
echo "#######"
}
function all_checks() {
memory_check
cpu_check
tcp_check
kernel_check
}
all_checks
```
Copy the code above and add it to a file called `remote_check.sh`. You can also get the script from [here](https://devdojo.com/bobbyiliev/executing-bash-script-on-multiple-remote-server).
## Running the Script on all Servers
Now that we have the script and the servers ready, and we've added those servers to our `servers.txt` file, we can run the following command to loop through all servers and execute the script remotely, without having to copy the script to each server or connect to each server individually.
```bash
for server in $(cat servers.txt) ; do ssh your_user@${server} 'bash -s' < ./remote_check.sh ; done
```
What this `for` loop does is go through each server in the `servers.txt` file and run the following command for each item in the list:
```bash
ssh your_user@the_server_ip 'bash -s' < ./remote_check.sh
```
You would get the following output:
![Running bash script on multiple remote servers](https://imgur.com/B1AmhUP.png)
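If you prefer a more readable multi-line version of that one-liner, a `while read` loop over `servers.txt` does the same thing; here is a sketch (replace `your_user` with your actual user):
```bash
#!/bin/bash
while read -r server
do
    echo "Running checks on ${server}..."
    # ssh reads the script on stdin thanks to 'bash -s'
    ssh "your_user@${server}" 'bash -s' < ./remote_check.sh
done < servers.txt
```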
## Conclusion
This is just a really simple example of how to execute a script on multiple servers without having to copy the script to each server and without having to access the servers individually.
Of course, you could run a much more complex script on many more servers.
If you are interested in automation, I would recommend checking out the Ansible resources page on the DigitalOcean website:
[Ansible Resources](https://www.digitalocean.com/community/tags/ansible)
>{notice} This content was initially posted on [DevDojo](https://devdojo.com/bobbyiliev/bash-script-to-summarize-your-nginx-and-apache-access-logs)

View File

@@ -0,0 +1,225 @@
# Work with JSON in BASH using jq
The `jq` command-line tool is a lightweight and flexible command-line **JSON** processor. It is great for parsing JSON output in BASH.
One of the great things about `jq` is that it is written in portable C, and it has zero runtime dependencies. All you need to do is to download a single binary or use a package manager like apt and install it with a single command.
## Planning the script
For the demo in this tutorial, I will use an external REST API that returns simple JSON output, called the [QuizAPI](https://quizapi.io/):
> [https://quizapi.io/](https://quizapi.io/)
If you want to follow along make sure to get a free API key here:
> [https://quizapi.io/clientarea/settings/token](https://quizapi.io/clientarea/settings/token)
The QuizAPI is free for developers.
## Installing jq
There are many ways to install `jq` on your system. One of the most straightforward ways to do so is to use your OS's package manager.
Here is a list of the commands that you would need to use depending on your OS:
* Install jq on Ubuntu/Debian:
```bash
sudo apt-get install jq
```
* Install jq on Fedora:
```bash
sudo dnf install jq
```
* Install jq on openSUSE:
```bash
sudo zypper install jq
```
- Install jq on Arch:
```bash
sudo pacman -S jq
```
* Installing on Mac with Homebrew:
```bash
brew install jq
```
* Install on Mac with MacPorts:
```bash
port install jq
```
If you are using other OS, I would recommend taking a look at the official documentation here for more information:
> [https://stedolan.github.io/jq/download/](https://stedolan.github.io/jq/download/)
Once you have jq installed you can check your current version by running this command:
```bash
jq --version
```
## Parsing JSON with jq
Once you have `jq` installed and your QuizAPI API Key, you can parse the JSON output of the QuizAPI directly in your terminal.
First, create a variable that stores your API Key:
```bash
API_KEY=YOUR_API_KEY_HERE
```
In order to get some output from one of the endpoints of the QuizAPI you can use the curl command:
```bash
curl "https://quizapi.io/api/v1/questions?apiKey=${API_KEY}&limit=10"
```
For a more specific output, you can use the QuizAPI URL Generator here:
> [https://quizapi.io/api-config](https://quizapi.io/api-config)
After running the curl command, the output which you would get would look like this:
![Raw Json output](https://imgur.com/KghOfzj.png)
This could be quite hard to read, but thanks to the jq command-line tool, all we need to do is pipe the curl command to jq and we would see a nice formatted JSON output:
```bash
curl "https://quizapi.io/api/v1/questions?apiKey=${API_KEY}&limit=10" | jq
```
> Note the `| jq` at the end.
In this case the output that you would get would look something like this:
![bash jq formatting](https://imgur.com/ebdTtVf.png)
Now, this looks much nicer! The jq command-line tool formatted the output for us and added some nice coloring!
## Getting the first element with jq
Let's say that we only wanted to get the first element from the JSON output; in order to do that, we just have to specify the index that we want to see with the following syntax:
```bash
jq .[0]
```
Now, if we run the curl command again and pipe the output to jq .[0] like this:
```bash
curl "https://quizapi.io/api/v1/questions?apiKey=${API_KEY}&limit=10" | jq.[0]
```
You will only get the first element and the output will look like this:
![jq get first element only](https://imgur.com/h9bFMAL.png)
## Getting a value only for a specific key
Sometimes you might want to get only the value of a specific key. Let's say in our example the QuizAPI returns a list of questions along with the answers, description, and so on, but what if you wanted to get the questions only, without the additional information?
This is quite straightforward with `jq`: all you need to do is add the key after the jq command, so it would look something like this:
```bash
jq .[].question
```
We have to add `.[]` because the QuizAPI returns an array; by specifying `.[]`, we tell jq that we want the .question value for all of the elements in the array.
The output that you would get would look like this:
![jq get a value only for specific key](https://imgur.com/0701wHD.png)
As you can see we now only get the questions without the rest of the values.
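One handy flag to know here is `-r` (raw output), which makes jq print string values without the surrounding double quotes. This is especially useful when you want to store a value in a Bash variable. A quick example, using the same API key variable as above:

```bash
# Print the questions as plain strings, without the JSON double quotes
curl "https://quizapi.io/api/v1/questions?apiKey=${API_KEY}&limit=10" | jq -r '.[].question'
```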
## Using jq in a BASH script
Let's go ahead and create a small bash script which should output the following information for us:
* Get only the first question from the output
* Get all of the answers for that question
* Assign the answers to variables
* Print the question and the answers
To do that, I've put together the following script:
>{notice} make sure to replace the API_KEY part with your actual QuizAPI key:
```bash
#!/bin/bash
##
# Make an API call to QuizAPI and store the output in a variable
##
output=$(curl 'https://quizapi.io/api/v1/questions?apiKey=API_KEY&limit=10' 2>/dev/null)
##
# Get only the first question
##
output=$(echo "$output" | jq .[0])
##
# Get the question
##
question=$(echo "$output" | jq .question)
##
# Get the answers
##
answer_a=$(echo "$output" | jq .answers.answer_a)
answer_b=$(echo "$output" | jq .answers.answer_b)
answer_c=$(echo "$output" | jq .answers.answer_c)
answer_d=$(echo "$output" | jq .answers.answer_d)
##
# Output the question
##
echo "
Question: ${question}
A) ${answer_a}
B) ${answer_b}
C) ${answer_c}
D) ${answer_d}
"
```
If you run the script you would get the following output:
![Using jq in a bash script](https://imgur.com/LKEsrbq.png)
We can even go further by making this interactive so that we could actually choose the answer directly in our terminal.
There is already a bash script that does this by using the QuizAPI and `jq`:
You can take a look at that script here:
* [https://github.com/QuizApi/QuizAPI-BASH/blob/master/quiz.sh](https://github.com/QuizApi/QuizAPI-BASH/blob/master/quiz.sh)
## Conclusion
The `jq` command-line tool is an amazing tool that gives you the power to work with JSON directly in your BASH terminal.
That way you can easily interact with all kinds of different REST APIs with BASH.
For more information, you could take a look at the official documentation here:
* [https://stedolan.github.io/jq/manual/](https://stedolan.github.io/jq/manual/)
And for more information on the **QuizAPI**, you could take a look at the official documentation here:
* [https://quizapi.io/docs/1.0/overview](https://quizapi.io/docs/1.0/overview)
>{notice} This content was initially posted on [DevDojo.com](https://devdojo.com/bobbyiliev/how-to-work-with-json-in-bash-using-jq)

View File

@@ -0,0 +1,104 @@
# Working with Cloudflare API with Bash
I host all of my websites on **DigitalOcean** Droplets and I also use Cloudflare as my CDN provider. One of the benefits of using Cloudflare is that it reduces the overall traffic that reaches your server and also hides your actual server IP address behind their CDN.
My personal favorite Cloudflare feature is their free DDoS protection. It has saved my servers multiple times from different DDoS attacks. They have a cool API that you could use to enable and disable their DDoS protection easily.
This chapter is going to be an exercise! I challenge you to go ahead and write a short bash script that would enable and disable the Cloudflare DDoS protection for your server automatically if needed!
## Prerequisites
Before following this guide here, please set up your Cloudflare account and get your website ready. If you are not sure how to do that you can follow these steps here: [Create a Cloudflare account and add a website](https://support.cloudflare.com/hc/en-us/articles/201720164-Step-2-Create-a-Cloudflare-account-and-add-a-website).
Once you have your Cloudflare account, make sure to obtain the following information:
* A Cloudflare account
* Cloudflare API key
* Cloudflare Zone ID
Also, make sure curl is installed on your server:
```bash
curl --version
```
If curl is not installed you need to run the following:
* For RedHat/CentOS:
```bash
yum install curl
```
* For Debian/Ubuntu
```bash
apt-get install curl
```
## Challenge - Script requirements
The script needs to monitor the CPU usage on your server, and if the CPU usage gets high relative to the number of vCPUs, it would enable the Cloudflare DDoS protection automatically via the Cloudflare API.
The main features of the script should be:
* Checks the CPU load on the server
* In case of a CPU spike the script triggers an API call to Cloudflare and enables the DDoS protection feature for the specified zone
* After the CPU load is back to normal, the script disables the "I'm Under Attack" option and sets the security level back to normal (see the sketch below)
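Before jumping to the example script, here is a minimal sketch of the two building blocks such a script needs: a CPU load check and the Cloudflare API call that toggles the security level. This is only an illustration and not the actual example script; the zone ID, email, and API key values are placeholders, and the threshold logic is intentionally simplistic.

```bash
#!/bin/bash

# Placeholders - replace with your actual Cloudflare details
CF_ZONE_ID=YOUR_CF_ZONE_ID
CF_EMAIL_ADDRESS=YOUR_CF_EMAIL_ADDRESS
CF_API_KEY=YOUR_CF_API_KEY

# Compare the 1-minute load average against the number of vCPUs
cpu_count=$(nproc)
load_average=$(awk '{print $1}' /proc/loadavg)

# Toggle the Cloudflare security level: "under_attack" or "high" (normal operation)
set_security_level() {
    curl -s -X PATCH "https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/settings/security_level" \
        -H "X-Auth-Email: ${CF_EMAIL_ADDRESS}" \
        -H "X-Auth-Key: ${CF_API_KEY}" \
        -H "Content-Type: application/json" \
        --data "{\"value\":\"$1\"}"
}

# Enable the protection when the load exceeds the vCPU count, disable it otherwise
if (( $(echo "$load_average > $cpu_count" | bc -l) )); then
    set_security_level "under_attack"
else
    set_security_level "high"
fi
```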
## Example script
I already have prepared a demo script which you could use as a reference. But I encourage you to try and write the script yourself first and only then take a look at my script!
To download the script just run the following command:
```bash
wget https://raw.githubusercontent.com/bobbyiliev/cloudflare-ddos-protection/main/protection.sh
```
Open the script with your favorite text editor:
```bash
nano protection.sh
```
And update the following details with your Cloudflare details:
```bash
CF_ZONE_ID=YOUR_CF_ZONE_ID
CF_EMAIL_ADDRESS=YOUR_CF_EMAIL_ADDRESS
CF_API_KEY=YOUR_CF_API_KEY
```
After that make the script executable:
```bash
chmod +x ~/protection.sh
```
Finally, set up two cron jobs so that the script runs every 30 seconds. To edit your crontab run:
```bash
crontab -e
```
And add the following content:
```bash
* * * * * /path-to-the-script/cloudflare/protection.sh
* * * * * ( sleep 30 ; /path-to-the-script/cloudflare/protection.sh )
```
Note that you need to replace the path in the cron entries with the actual path where you've stored the script.
## Conclusion
This is a quite straightforward and budget-friendly solution. One of the downsides of the script is that if your server becomes unresponsive during an attack, the script might not be triggered at all.
Of course, a better approach would be to use a monitoring system like Nagios and trigger the script based on the statistics from the monitoring system, but this script challenge could be a good learning experience!
Here is another great resource on how to use the Discord API and send notifications to your Discord Channel with a Bash script:
[How To Use Discord Webhooks to Get Notifications for Your Website Status on Ubuntu 18.04](https://www.digitalocean.com/community/tutorials/how-to-use-discord-webhooks-to-get-notifications-for-your-website-status-on-ubuntu-18-04)
>{notice} This content was initially posted on [DevDojo](https://devdojo.com/bobbyiliev/bash-script-to-automatically-enable-cloudflare-ddos-protection)

View File

@@ -0,0 +1,83 @@
# BASH Script parser to Summarize Your NGINX and Apache Access Logs
One of the first things I usually do when I notice high CPU usage on one of my Linux servers is to check the process list with either top or htop. If I see a lot of Apache or Nginx processes, I quickly check my access logs to determine what has caused or is causing the CPU spike, or to figure out if anything malicious is going on.
Sometimes reading the logs can be quite intimidating, as the log might be huge and going through it manually could take a lot of time. Also, the raw log format can be confusing for people with less experience.
Just like the previous chapter, this chapter is going to be a challenge! You need to write a short bash script that would summarize the whole access log for you without the need of installing any additional software.
## Script requirements
This BASH script needs to parse and summarize your access logs and provide you with very useful information like:
* The 20 top pages with the most POST requests
* The 20 top pages with the most GET requests
* Top 20 IP addresses and their geo-location
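To give you a rough idea of the kind of parsing involved, here is a hedged one-liner that counts the top 20 pages hit with GET requests, assuming the common/combined log format in which the request method and path are the sixth and seventh fields:

```bash
awk '$6 == "\"GET" { print $7 }' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20
```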
## Example script
I already have prepared a demo script which you could use as a reference. But I encourage you to try and write the script yourself first and only then take a look at my script!
In order to download the script, you can either clone the repository with the following command:
```bash
git clone https://github.com/bobbyiliev/quick_access_logs_summary.git
```
Or run the following command which would download the script in your current directory:
```bash
wget https://raw.githubusercontent.com/bobbyiliev/quick_access_logs_summary/master/spike_check
```
The script does not make any changes to your system, it only reads the content of your access log and summarizes it for you, however, once you've downloaded the file, make sure to review the content yourself.
## Running the script
All that you have to do once the script has been downloaded is to make it executable and run it.
To do that run the following command to make the script executable:
```bash
chmod +x spike_check
```
Then run the script:
```bash
./spike_check /path/to/your/access_log
```
Make sure to change the path to the file with the actual path to your access log. For example if you are using Apache on an Ubuntu server, the exact command would look like this:
```bash
./spike_check /var/log/apache2/access.log
```
If you are using Nginx the exact command would be almost the same, but with the path to the Nginx access log:
```bash
./spike_check /var/log/nginx/access.log
```
## Understanding the output
Once you run the script, it might take a while depending on the size of the log.
The output that you would see should look like this:
![Summarized access log](https://imgur.com/WWHVMrj.png)
Essentially what we can tell in this case is that we've received 16 POST requests to our xmlrpc.php file which is often used by attackers to try and exploit WordPress websites by using various username and password combinations.
In this specific case, this was not a huge brute force attack, but it gives us an early indication and we can take action to prevent a larger attack in the future.
We can also see that there were a couple of Russian IP addresses accessing our site, so in case that you do not expect any traffic from Russia, you might want to block those IP addresses as well.
## Conclusion
This is an example of a simple BASH script that allows you to quickly summarize your access logs and determine if anything malicious is going on.
Of course, you might want to also manually go through the logs as well but it is a good challenge to try and automate this with Bash!
>{notice} This content was initially posted on [DevDojo](https://devdojo.com/bobbyiliev/bash-script-to-summarize-your-nginx-and-apache-access-logs)

View File

@@ -0,0 +1,95 @@
# Sending emails with Bash and SSMTP
SSMTP is a tool that delivers emails from a computer or a server to a configured mail host.
SSMTP is not an email server itself and does not receive emails or manage a queue.
One of its primary uses is for forwarding automated email (like system alerts) off your machine and to an external email address.
## Prerequisites
You would need the following things in order to be able to complete this tutorial successfully:
* Access to an Ubuntu 18.04 server as a non-root user with sudo privileges and an active firewall installed on your server. To set these up, please refer to our [Initial Server Setup Guide for Ubuntu 18.04](https://www.digitalocean.com/community/tutorials/initial-server-setup-with-ubuntu-18-04)
* An SMTP server along with SMTP username and password, this would also work with Gmail's SMTP server, or you could set up your own SMTP server by following the steps from this tutorial on [How to Install and Configure Postfix as a Send-Only SMTP Server on Ubuntu 16.04](https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-postfix-as-a-send-only-smtp-server-on-ubuntu-16-04)
## Installing SSMTP
In order to install SSMTP, you'll need to first update your apt cache with:
```bash
sudo apt update
```
Then run the following command to install SSMTP:
```bash
sudo apt install ssmtp
```
Another thing that you would need to install is `mailutils`, to do that run the following command:
```bash
sudo apt install mailutils
```
## Configuring SSMTP
Now that you have `ssmtp` installed, in order to configure it to use your SMTP server when sending emails, you need to edit the SSMTP configuration file.
Use your favourite text editor to open the `/etc/ssmtp/ssmtp.conf` file:
```bash
sudo nano /etc/ssmtp/ssmtp.conf
```
You need to include your SMTP configuration:
```
root=postmaster
mailhub=your_smtp_host.com:587
hostname=your_hostname
AuthUser=your_gmail_username@your_smtp_host.com
AuthPass=your_gmail_password
FromLineOverride=YES
UseSTARTTLS=YES
```
Save the file and exit.
## Sending emails with SSMTP
Once your configuration is done, in order to send an email just run the following command:
```bash
echo "<^>Here add your email body<^>" | mail -s "<^>Here specify your email subject<^>" <^>your_recepient_email@yourdomain.com<^>
```
You can run this directly in your terminal or include it in your bash scripts.
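For example, here is a small, hypothetical sketch of a disk usage alert that reuses the same `mail` command; the recipient address and the threshold are placeholders you would adjust:

```bash
#!/bin/bash
# Email an alert when the root filesystem usage crosses a threshold
RECIPIENT="your_recipient_email@yourdomain.com"
THRESHOLD=90

# Current usage of the root filesystem as a bare number (e.g. 42)
usage=$(df / --output=pcent | tail -1 | tr -dc '0-9')

if [ "$usage" -ge "$THRESHOLD" ]; then
    echo "Disk usage on $(hostname) is at ${usage}%." | mail -s "Disk usage alert" "$RECIPIENT"
fi
```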
## Sending A File with SSMTP (optional)
If you need to send files as attachments, you can use `mpack`.
To install `mpack` run the following command:
```bash
sudo apt install mpack
```
Next, in order to send an email with a file attached, run the following command.
```bash
mpack -s "<^>Your Subject here<^>" your_file.zip <^>your_recepient_email@yourdomain.com<^>
```
The above command would send an email to `your_recipient_email@yourdomain.com` with `your_file.zip` attached.
## Conclusion
SSMTP is a great and reliable way to implement SMTP email functionality directly in bash scripts.
For more information about SSMTP I would recommend checking the official documentation [here](https://wiki.archlinux.org/index.php/SSMTP).
>{notice} This content was initially posted on the [DigitalOcean community forum](https://www.digitalocean.com/community/questions/how-to-send-emails-from-a-bash-script-using-ssmtp).

View File

@@ -0,0 +1,126 @@
# Password Generator Bash Script
It's not an uncommon situation to need a randomly generated password, whether for a software installation or when you sign up for a website.
There are a lot of options in order to achieve this. You can use a password manager/vault where you often have the option to randomly generate a password or to use a website that can generate the password on your behalf.
You can also use Bash in your terminal (command line) to generate a password that you can quickly use. There are a lot of ways to achieve that; I will cover a few of them and leave it up to you to choose which option is most suitable for your needs.
## :warning: Security
**This script is intended to practice your bash scripting skills. You can have fun while doing simple projects with BASH, but security is not a joke, so please make sure you do not save your passwords in plain text in a local file or write them down by hand on a piece of paper.**
**I will highly recommend everyone to use secure and trusted providers to generate and save the passwords.**
## Script summary
Let me first do a quick summary of what our script is going to do:
1. We will have the option to choose the password character length when the script is executed.
2. The script will then generate 10 random passwords with the length that was specified in step 1.
## Prerequisites
You would need a bash terminal and a text editor. You can use any text editor like vi, vim, nano or Visual Studio Code.
I'm running the script locally on my Linux laptop but if you're using Windows PC you can ssh to any server of your choice and execute the script there.
Although the script is pretty simple, having some basic BASH scripting knowledge will help you to better understand the script and how it's working.
## Generate a random password
One of the great benefits of Linux is that you can do a lot of things using different methods. Generating a random string of characters is no different.
You can use several commands to generate a random string of characters. I will cover a few of them and provide some examples.
- Using the ```date``` command.
The date command will output the current date and time. However, we can further manipulate the output in order to use it as a randomly generated password. We can hash the date using md5, sha, or just run it through base64. Here are a few examples:
```
date | md5sum
94cb1cdecfed0699e2d98acd9a7b8f6d -
```
using sha256sum:
```
date | sha256sum
30a0c6091e194c8c7785f0d7bb6e1eac9b76c0528f02213d1b6a5fbcc76ceff4 -
```
using base64:
```
date | base64
0YHQsSDRj9C90YMgMzAgMTk6NTE6NDggRUVUIDIwMjEK
```
- We can also use openssl in order to generate pseudo-random bytes and run the output through base64. An example output will be:
```
openssl rand -base64 10
9+soM9bt8mhdcw==
```
Keep in mind that openssl might not be installed on your system so it's likely that you will need to install it first in order to use it.
- The preferred way is to use the pseudorandom number generator /dev/urandom,
since it is intended for most cryptographic purposes. We would also need to manipulate the output using ```tr``` in order to translate it. An example command is:
```
tr -cd '[:alnum:]' < /dev/urandom | fold -w10 | head -n 1
```
With this command we take the output from /dev/urandom and translate it with ```tr``` while using all letters and digits and print the desired number of characters.
## The script
First we begin the script with the shebang. We use it to tell the operating system which interpreter to use to parse the rest of the file.
```
#!/bin/bash
```
We can then continue and ask the user for some input. In this case we would like to know how many characters the password needs to be:
```
# Ask user for password length
clear
printf "\n"
read -p "How many characters you would like the password to have? " pass_length
printf "\n"
```
Generate the passwords and then print them so the user can use them.
```
# This is where the magic happens!
# Generate a list of 10 strings, each cut to the desired length provided by the user
pass_output=$(for i in {1..10}; do tr -cd '[:alnum:]' < /dev/urandom | fold -w"${pass_length}" | head -n 1; done)
# Print the strings
printf "%s\n" "$pass_output"
printf "Goodbye, %s\n" "${USER}"
```
## The full script:
```
#!/bin/bash
#=======================================
# Password generator
#=======================================
# Ask user for the string length
clear
printf "\n"
read -p "How many characters you would like the password to have? " pass_length
printf "\n"
# This is where the magic happens!
# Generate a list of 10 strings, each cut to the desired length provided by the user
pass_output=$(for i in {1..10}; do tr -cd '[:alnum:]' < /dev/urandom | fold -w"${pass_length}" | head -n 1; done)
# Print the strings
printf "%s\n" "$pass_output"
printf "Goodbye, %s\n" "${USER}"
```
## Conclusion
This is pretty much how you can use a simple bash script to generate random passwords.
:warning: **As already mentioned, please make sure to use strong passwords in order to make sure your account is protected. Also whenever is possible use 2 factor authentication as this will provide additional layer of security for your account.**
While the script works fine, it expects that the user will provide the requested input. To prevent any issues, you would need to do some more advanced checks on the user input to make sure the script keeps working even if the provided input does not match our needs.
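As a starting point, here is a small sketch of what such a check could look like; it simply verifies that the provided length is a positive integer and falls back to a default value otherwise:

```
# Validate the password length before generating anything
read -p "How many characters would you like the password to have? " pass_length

# Accept only positive integers; fall back to a default of 12 otherwise
if ! [[ "$pass_length" =~ ^[0-9]+$ ]] || [ "$pass_length" -eq 0 ]; then
    echo "Invalid length provided, defaulting to 12 characters."
    pass_length=12
fi
```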
## Contributed by
[Alex Georgiev](https://twitter.com/alexgeorgiev17)

View File

@@ -0,0 +1,228 @@
# Redirection in Bash
A Linux superuser must have a good knowledge of pipes and redirection in Bash. It is an essential component of the system and is often helpful in the field of Linux System Administration.
When you run a command like ``ls``, ``cat``, etc, you get some output on the terminal. If you write a wrong command, pass a wrong flag or a wrong command-line argument, you get error output on the terminal.
In both cases, you are given some text. It may seem like "just text" to you, but the system handles each of these streams through a separate identifier; this identifier is known as a File Descriptor (fd).
In Linux, there are 3 File Descriptors, **STDIN** (0); **STDOUT** (1) and **STDERR** (2).
* **STDIN** (fd: 0): Manages the input in the terminal.
* **STDOUT** (fd: 1): Manages the output text in the terminal.
* **STDERR** (fd: 2): Manages the error text in the terminal.
# Difference between Pipes and Redirections
Both *pipes* and *redirections* redirect the streams `(file descriptors)` of the process being executed. The main difference is that *redirections* deal with files: they either send the output stream of the process to a file or send the content of a given file to the input stream of the process.
On the other hand, a pipe connects two commands by sending the output stream of the first one to the input stream of the second one, without any files involved.
# Redirection in Bash
## STDIN (Standard Input)
When you enter some input text for a command that asks for it, you are actually entering the text to the **STDIN** file descriptor. Run the ``cat`` command without any command-line arguments.
It may seem that the process has paused but in fact it's ``cat`` asking for **STDIN**. ``cat`` is a simple program and will print the text passed to **STDIN**. However, you can extend the use case by redirecting the input to the commands that take **STDIN**.
Example with ``cat``:
```
cat << EOF
Hello World!
How are you?
EOF
```
This will simply print the provided text on the terminal screen:
```
Hello World!
How are you?
```
The same can be done with other commands that take input via STDIN. Like, ``wc``:
```
wc -l << EOF
Hello World!
How are you?
EOF
```
The ``-l`` flag with ``wc`` counts the number of lines.
This block of bash code will print the number of lines to the terminal screen:
```
2
```
## STDOUT (Standard Output)
The normal non-error text on your terminal screen is printed via the **STDOUT** file descriptor. The **STDOUT** of a command can be redirected into a file, in such a way that the output of the command is written to a file instead of being printed on the terminal screen.
This is done simply with the help of ``>`` and ``>>`` operators.
Example:
```
echo "Hello World!" > file.txt
```
The above command will not print "Hello World" on the terminal screen, it will instead create a file called ``file.txt`` and will write the "Hello World" string to it.
This can be verified by running the ``cat`` command on the ``file.txt`` file.
```
cat file.txt
```
However, every time you redirect the **STDOUT** of a command to the same file, it will remove the existing contents of the file before writing the new ones.
Example:
```
echo "Hello World!" > file.txt
echo "How are you?" > file.txt
```
On running ``cat`` on ``file.txt`` file:
```
cat file.txt
```
You will only get the "How are you?" string printed.
```
How are you?
```
This is because the "Hello World" string has been overwritten.
This behaviour can be avoided using the ``>>`` operator.
The above example can be written as:
```
echo "Hello World!" > file.txt
echo "How are you?" >> file.txt
```
On running ``cat`` on the ``file.txt`` file, you will get the desired result.
```
Hello World!
How are you?
```
Alternatively, the redirection operator for **STDOUT** can also be written as ``1>``. Like,
```
echo "Hello World!" 1> file.txt
```
## STDERR (Standard Error)
The error text on the terminal screen is printed via the **STDERR** of the command. For example:
```
ls --hello
```
would give an error message. This error message is the **STDERR** of the command.
**STDERR** can be redirected using the ``2>`` operator.
```
ls --hello 2> error.txt
```
This command will redirect the error message to the ``error.txt`` file and write it to it. This can be verified by running the ``cat`` command on the ``error.txt`` file.
You can also use the ``2>>`` operator for **STDERR** just like you used ``>>`` for **STDOUT**.
Error messages in Bash Scripts can be undesirable sometimes. You can choose to ignore them by redirecting the error message to the ``/dev/null`` file.
``/dev/null`` is a pseudo-device that takes in text and immediately discards it.
The above example can be written as follows to ignore the error text completely:
```
ls --hello 2> /dev/null
```
Of course, you can redirect both **STDOUT** and **STDERR** for the same command or script.
```
./install_package.sh > output.txt 2> error.txt
```
Both of them can be redirected to the same file as well.
```
./install_package.sh > file.txt 2> file.txt
```
There is also a shorter way to do this.
```
./install_package.sh > file.txt 2>&1
```
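Keep in mind that the order matters here: `2>&1` duplicates **STDERR** onto wherever **STDOUT** currently points, so it has to come after the `> file.txt` redirection. A quick illustration:
```
# Correct: STDOUT goes to file.txt first, then STDERR is duplicated onto it
./install_package.sh > file.txt 2>&1

# Not equivalent: STDERR is duplicated onto the terminal first,
# and only then is STDOUT redirected to file.txt
./install_package.sh 2>&1 > file.txt
```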
# Piping
So far we have seen how to redirect **STDOUT**, **STDIN**, and **STDERR** to and from a file.
To pass the output of one program *(command)* as the input of another program *(command)*, you can use a vertical bar `|`.
Example:
```
ls | grep ".txt"
```
This command will list the files in the current directory and pass the output to the *`grep`* command, which then filters it to only show the files that contain the string ".txt".
Syntax:
```
[time [-p]] [!] command1 [ | or |& command2 ] …
```
You can also build arbitrary chains of commands by piping them together to achieve a powerful result.
This example creates a listing of every user which owns a file in a given directory as well as how many files and directories they own:
```
ls -l /projects/bash_scripts | tail -n +2 | sed 's/\s\s*/ /g' | cut -d ' ' -f 3 | sort | uniq -c
```
Output:
```
8 anne
34 harry
37 tina
18 ryan
```
# HereDocument
The symbol `<<` can be used to create a temporary file [heredoc] and redirect from it at the command line.
```
COMMAND << EOF
ContentOfDocument
...
...
EOF
```
Note here that `EOF` represents the delimiter (end of file) of the heredoc. In fact, we can use any alphanumeric word in its place to signify the start and the end of the file. For instance, this is a valid heredoc:
```
cat << randomword1
This script will print these lines on the terminal.
Note that cat can read from standard input. Using this heredoc, we can
create a temporary file with these lines as its content and pipe that
into cat.
randomword1
```
Effectively it will appear as if the contents of the heredoc are piped into the command. This can make the script very clean if multiple lines need to be piped into a program.
Further, we can attach more pipes as shown:
```
cat << randomword1 | wc
This script will print these lines on the terminal.
Note that cat can read from standard input. Using this heredoc, we can
create a temporary file with these lines as its content and pipe that
into cat.
randomword1
```
Also, pre-defined variables can be used inside the heredocs.
# HereString
Herestrings are quite similar to heredocs but use `<<<`. These are used for single-line strings that have to be piped into some program. This looks cleaner than heredocs as we don't have to specify the delimiter.
```
wc <<<"this is an easy way of passing strings to the stdin of a program (here wc)"
```
Just like heredocs, herestrings can contain variables.
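For example, the string in a herestring is expanded by the shell before it is passed to the program, so variables work just as you would expect:
```
user="bobby"
# Counts the words of the expanded string "Hello, bobby, how are you?"
wc -w <<< "Hello, $user, how are you?"
```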
## Summary
|**Operator** |**Description** |
|:---|:---|
|`>`|`Save output to a file`|
|`>>`|`Append output to a file`|
|`<`|`Read input from a file`|
|`2>`|`Redirect error messages`|
|`\|`|`Send the output from one program as input to another program`|
|`<<`|`Pipe multiple lines into a program cleanly`|
|`<<<`|`Pipe a single line into a program cleanly`|

View File

@@ -0,0 +1,336 @@
# Automatic WordPress on LAMP installation with BASH
Here is an example of a full LAMP and WordPress installation that works on any Debian-based machine.
# Prerequisites
- A Debian-based machine (Ubuntu, Debian, Linux Mint, etc.)
# Planning the functionality
Let's start again by going over the main functionality of the script:
**Lamp Installation**
* Update the package manager
* Install a firewall (ufw)
* Allow SSH, HTTP and HTTPS traffic
* Install Apache2
* Install & Configure MariaDB
* Install PHP and required plugins
* Enable all required Apache2 mods
**Apache Virtual Host Setup**
* Create a directory in `/var/www`
* Configure permissions to the directory
* Create the `$DOMAIN` file under `/etc/apache2/sites-available` and append the required Virtualhost content
* Enable the site
* Restart Apache2
**SSL Config**
* Generate the OpenSSL certificate
* Append the SSL certificate to the `ssl-params.conf` file
* Append the SSL config to the Virtualhost file
* Enable SSL
* Reload Apache2
**Database Config**
* Create a database
* Create a user
* Flush Privileges
**WordPress Config**
* Install required WordPress PHP plugins
* Install WordPress
* Append the required information to `wp-config.php` file
Without further ado, let's start writing the script.
# The script
We start by setting our variables and asking the user to input their domain:
```bash
echo 'Please enter your domain of preference without www:'
read DOMAIN
echo "Please enter your Database username:"
read DBUSERNAME
echo "Please enter your Database password:"
read DBPASSWORD
echo "Please enter your Database name:"
read DBNAME
ip=`hostname -I | cut -f1 -d' '`
```
We are now ready to start writing our functions. Start by creating the `lamp_install()` function. Inside of it, we are going to update the system, install ufw, allow SSH, HTTP and HTTPS traffic, install Apache2, install MariaDB and PHP. We are also going to enable all required Apache2 mods.
```bash
lamp_install () {
apt update -y
apt install ufw
ufw enable
ufw allow OpenSSH
ufw allow in "WWW Full"
apt install apache2 -y
apt install mariadb-server
mysql_secure_installation -y
apt install php libapache2-mod-php php-mysql -y
sed -i "2d" /etc/apache2/mods-enabled/dir.conf
sed -i "2i\\\tDirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm" /etc/apache2/mods-enabled/dir.conf
systemctl reload apache2
}
```
Next, we are going to create the `apache_virtual_host_setup()` function. Inside of it, we are going to create a directory in `/var/www`, configure permissions to the directory, create the `$DOMAIN` file under `/etc/apache2/sites-available` and append the required Virtualhost content, enable the site and restart Apache2.
```bash
apache_virtual_host_setup () {
mkdir /var/www/$DOMAIN
chown -R $USER:$USER /var/www/$DOMAIN
echo "<VirtualHost *:80>" >> /etc/apache2/sites-available/$DOMAIN.conf
echo -e "\tServerName $DOMAIN" >> /etc/apache2/sites-available/$DOMAIN.conf
echo -e "\tServerAlias www.$DOMAIN" >> /etc/apache2/sites-available/$DOMAIN.conf
echo -e "\tServerAdmin webmaster@localhost" >> /etc/apache2/sites-available/$DOMAIN.conf
echo -e "\tDocumentRoot /var/www/$DOMAIN" >> /etc/apache2/sites-available/$DOMAIN.conf
echo -e '\tErrorLog ${APACHE_LOG_DIR}/error.log' >> /etc/apache2/sites-available/$DOMAIN.conf
echo -e '\tCustomLog ${APACHE_LOG_DIR}/access.log combined' >> /etc/apache2/sites-available/$DOMAIN.conf
echo "</VirtualHost>" >> /etc/apache2/sites-available/$DOMAIN.conf
a2ensite $DOMAIN
a2dissite 000-default
systemctl reload apache2
}
```
Next, we are going to create the `ssl_config()` function. Inside of it, we are going to generate the OpenSSL certificate, append the SSL certificate to the `ssl-params.conf` file, append the SSL config to the Virtualhost file, enable SSL and reload Apache2.
```bash
ssl_config () {
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/apache-selfsigned.key -out /etc/ssl/certs/apache-selfsigned.crt
echo "SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH" >> /etc/apache2/conf-available/ssl-params.conf
echo "SSLProtocol All -SSLv2 -SSLv3 -TLSv1 -TLSv1.1" >> /etc/apache2/conf-available/ssl-params.conf
echo "SSLHonorCipherOrder On" >> /etc/apache2/conf-available/ssl-params.conf
echo "Header always set X-Frame-Options DENY" >> /etc/apache2/conf-available/ssl-params.conf
echo "Header always set X-Content-Type-Options nosniff" >> /etc/apache2/conf-available/ssl-params.conf
echo "SSLCompression off" >> /etc/apache2/conf-available/ssl-params.conf
echo "SSLUseStapling on" >> /etc/apache2/conf-available/ssl-params.conf
echo "SSLStaplingCache \"shmcb:logs/stapling-cache(150000)\"" >> /etc/apache2/conf-available/ssl-params.conf
echo "SSLSessionTickets Off" >> /etc/apache2/conf-available/ssl-params.conf
cp /etc/apache2/sites-available/default-ssl.conf /etc/apache2/sites-available/default-ssl.conf.bak
sed -i "s/var\/www\/html/var\/www\/$DOMAIN/1" /etc/apache2/sites-available/default-ssl.conf
sed -i "s/etc\/ssl\/certs\/ssl-cert-snakeoil.pem/etc\/ssl\/certs\/apache-selfsigned.crt/1" /etc/apache2/sites-available/default-ssl.conf
sed -i "s/etc\/ssl\/private\/ssl-cert-snakeoil.key/etc\/ssl\/private\/apache-selfsigned.key/1" /etc/apache2/sites-available/default-ssl.conf
sed -i "4i\\\t\tServerName $ip" /etc/apache2/sites-available/default-ssl.conf
sed -i "22i\\\tRedirect permanent \"/\" \"https://$ip/\"" /etc/apache2/sites-available/000-default.conf
a2enmod ssl
a2enmod headers
a2ensite default-ssl
a2enconf ssl-params
systemctl reload apache2
}
```
Next, we are going to create the `db_config()` function. Inside of it, we are going to create the database, create the user and grant all privileges to the user.
```bash
db_config () {
mysql -e "CREATE DATABASE $DBNAME;"
mysql -e "GRANT ALL ON $DBNAME.* TO '$DBUSERNAME'@'localhost' IDENTIFIED BY '$DBPASSWORD' WITH GRANT OPTION;"
mysql -e "FLUSH PRIVILEGES;"
}
```
Next, we are going to create the `wordpress_config()` function. Inside of it, we are going to download the latest version of WordPress, extract it to the `/var/www/$DOMAIN` directory, create the `wp-config.php` file and append the required content to it.
```bash
wordpress_config () {
db_config
apt install php-curl php-gd php-mbstring php-xml php-xmlrpc php-soap php-intl php-zip -y
systemctl restart apache2
sed -i "8i\\\t<Directory /var/www/$DOMAIN/>" /etc/apache2/sites-available/$DOMAIN.conf
sed -i "9i\\\t\tAllowOverride All" /etc/apache2/sites-available/$DOMAIN.conf
sed -i "10i\\\t</Directory>" /etc/apache2/sites-available/$DOMAIN.conf
a2enmod rewrite
systemctl restart apache2
apt install curl
cd /tmp
curl -O https://wordpress.org/latest.tar.gz
tar xzvf latest.tar.gz
touch /tmp/wordpress/.htaccess
cp /tmp/wordpress/wp-config-sample.php /tmp/wordpress/wp-config.php
mkdir /tmp/wordpress/wp-content/upgrade
cp -a /tmp/wordpress/. /var/www/$DOMAIN
chown -R www-data:www-data /var/www/$DOMAIN
find /var/www/$DOMAIN/ -type d -exec chmod 750 {} \;
find /var/www/$DOMAIN/ -type f -exec chmod 640 {} \;
curl -s https://api.wordpress.org/secret-key/1.1/salt/ >> /var/www/$DOMAIN/wp-config.php
echo "define('FS_METHOD', 'direct');" >> /var/www/$DOMAIN/wp-config.php
sed -i "51,58d" /var/www/$DOMAIN/wp-config.php
sed -i "s/database_name_here/$DBNAME/1" /var/www/$DOMAIN/wp-config.php
sed -i "s/username_here/$DBUSERNAME/1" /var/www/$DOMAIN/wp-config.php
sed -i "s/password_here/$DBPASSWORD/1" /var/www/$DOMAIN/wp-config.php
}
```
And finally, we are going to create the `execute()` function. Inside of it, we are going to call all the functions we created above.
```bash
execute () {
lamp_install
apache_virtual_host_setup
ssl_config
wordpress_config
}
```
With this, the script is almost ready; just make sure to call the `execute` function at the very end so that everything actually runs. If you need the full script, you can find it in the next section.
# The full script
```bash
#!/bin/bash
echo 'Please enter your domain of preference without www:'
read DOMAIN
echo "Please enter your Database username:"
read DBUSERNAME
echo "Please enter your Database password:"
read DBPASSWORD
echo "Please enter your Database name:"
read DBNAME
ip=`hostname -I | cut -f1 -d' '`
lamp_install () {
apt update -y
apt install ufw
ufw enable
ufw allow OpenSSH
ufw allow in "WWW Full"
apt install apache2 -y
apt install mariadb-server
mysql_secure_installation -y
apt install php libapache2-mod-php php-mysql -y
sed -i "2d" /etc/apache2/mods-enabled/dir.conf
sed -i "2i\\\tDirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm" /etc/apache2/mods-enabled/dir.conf
systemctl reload apache2
}
apache_virtual_host_setup () {
mkdir /var/www/$DOMAIN
chown -R $USER:$USER /var/www/$DOMAIN
echo "<VirtualHost *:80>" >> /etc/apache2/sites-available/$DOMAIN.conf
echo -e "\tServerName $DOMAIN" >> /etc/apache2/sites-available/$DOMAIN.conf
echo -e "\tServerAlias www.$DOMAIN" >> /etc/apache2/sites-available/$DOMAIN.conf
echo -e "\tServerAdmin webmaster@localhost" >> /etc/apache2/sites-available/$DOMAIN.conf
echo -e "\tDocumentRoot /var/www/$DOMAIN" >> /etc/apache2/sites-available/$DOMAIN.conf
echo -e '\tErrorLog ${APACHE_LOG_DIR}/error.log' >> /etc/apache2/sites-available/$DOMAIN.conf
echo -e '\tCustomLog ${APACHE_LOG_DIR}/access.log combined' >> /etc/apache2/sites-available/$DOMAIN.conf
echo "</VirtualHost>" >> /etc/apache2/sites-available/$DOMAIN.conf
a2ensite $DOMAIN
a2dissite 000-default
systemctl reload apache2
}
ssl_config () {
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/apache-selfsigned.key -out /etc/ssl/certs/apache-selfsigned.crt
echo "SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH" >> /etc/apache2/conf-available/ssl-params.conf
echo "SSLProtocol All -SSLv2 -SSLv3 -TLSv1 -TLSv1.1" >> /etc/apache2/conf-available/ssl-params.conf
echo "SSLHonorCipherOrder On" >> /etc/apache2/conf-available/ssl-params.conf
echo "Header always set X-Frame-Options DENY" >> /etc/apache2/conf-available/ssl-params.conf
echo "Header always set X-Content-Type-Options nosniff" >> /etc/apache2/conf-available/ssl-params.conf
echo "SSLCompression off" >> /etc/apache2/conf-available/ssl-params.conf
echo "SSLUseStapling on" >> /etc/apache2/conf-available/ssl-params.conf
echo "SSLStaplingCache \"shmcb:logs/stapling-cache(150000)\"" >> /etc/apache2/conf-available/ssl-params.conf
echo "SSLSessionTickets Off" >> /etc/apache2/conf-available/ssl-params.conf
cp /etc/apache2/sites-available/default-ssl.conf /etc/apache2/sites-available/default-ssl.conf.bak
sed -i "s/var\/www\/html/var\/www\/$DOMAIN/1" /etc/apache2/sites-available/default-ssl.conf
sed -i "s/etc\/ssl\/certs\/ssl-cert-snakeoil.pem/etc\/ssl\/certs\/apache-selfsigned.crt/1" /etc/apache2/sites-available/default-ssl.conf
sed -i "s/etc\/ssl\/private\/ssl-cert-snakeoil.key/etc\/ssl\/private\/apache-selfsigned.key/1" /etc/apache2/sites-available/default-ssl.conf
sed -i "4i\\\t\tServerName $ip" /etc/apache2/sites-available/default-ssl.conf
sed -i "22i\\\tRedirect permanent \"/\" \"https://$ip/\"" /etc/apache2/sites-available/000-default.conf
a2enmod ssl
a2enmod headers
a2ensite default-ssl
a2enconf ssl-params
systemctl reload apache2
}
db_config () {
mysql -e "CREATE DATABASE $DBNAME;"
mysql -e "GRANT ALL ON $DBNAME.* TO '$DBUSERNAME'@'localhost' IDENTIFIED BY '$DBPASSWORD' WITH GRANT OPTION;"
mysql -e "FLUSH PRIVILEGES;"
}
wordpress_config () {
db_config
apt install php-curl php-gd php-mbstring php-xml php-xmlrpc php-soap php-intl php-zip -y
systemctl restart apache2
sed -i "8i\\\t<Directory /var/www/$DOMAIN/>" /etc/apache2/sites-available/$DOMAIN.conf
sed -i "9i\\\t\tAllowOverride All" /etc/apache2/sites-available/$DOMAIN.conf
sed -i "10i\\\t</Directory>" /etc/apache2/sites-available/$DOMAIN.conf
a2enmod rewrite
systemctl restart apache2
apt install curl
cd /tmp
curl -O https://wordpress.org/latest.tar.gz
tar xzvf latest.tar.gz
touch /tmp/wordpress/.htaccess
cp /tmp/wordpress/wp-config-sample.php /tmp/wordpress/wp-config.php
mkdir /tmp/wordpress/wp-content/upgrade
cp -a /tmp/wordpress/. /var/www/$DOMAIN
chown -R www-data:www-data /var/www/$DOMAIN
find /var/www/$DOMAIN/ -type d -exec chmod 750 {} \;
find /var/www/$DOMAIN/ -type f -exec chmod 640 {} \;
curl -s https://api.wordpress.org/secret-key/1.1/salt/ >> /var/www/$DOMAIN/wp-config.php
echo "define('FS_METHOD', 'direct');" >> /var/www/$DOMAIN/wp-config.php
sed -i "51,58d" /var/www/$DOMAIN/wp-config.php
sed -i "s/database_name_here/$DBNAME/1" /var/www/$DOMAIN/wp-config.php
sed -i "s/username_here/$DBUSERNAME/1" /var/www/$DOMAIN/wp-config.php
sed -i "s/password_here/$DBPASSWORD/1" /var/www/$DOMAIN/wp-config.php
}
execute () {
lamp_install
apache_virtual_host_setup
ssl_config
wordpress_config
}
execute
```
## Summary
The script does the following:
* Install LAMP
* Create a virtual host
* Configure SSL
* Install WordPress
* Configure WordPress
With this being said, I hope you enjoyed this example. If you have any questions, please feel free to ask me directly at [@denctl](https://twitter.com/denctl).

View File

@@ -0,0 +1,15 @@
# Wrap Up
Congratulations! You have just completed the Bash basics guide!
If you found this useful, be sure to star the project on [GitHub](https://github.com/bobbyiliev/introduction-to-bash-scripting)!
If you have any suggestions for improvements, make sure to contribute pull requests or open issues.
In this introduction to Bash scripting book, we just covered the basics, but you still have enough under your belt to start writing some awesome scripts and automating daily tasks!
As a next step try writing your own script and share it with the world! This is the best way to learn any new programming or scripting language!
In case this book inspired you to write some cool Bash scripts, make sure to tweet about it and tag [@bobbyiliev_](https://twitter.com) so that we could check it out!
Congrats again on completing this book!

82
docs/.recycle/extras.md Executable file
View File

@@ -0,0 +1,82 @@
# A Linux Learning Playground Situation
!["Shinobi Academy Linux"](/cover.png)
Want to play with the application in this picture? Connect to the server using the instructions below and run the command `hollywood`.
## Introduction:
Welcome, aspiring Linux ninjas! This tutorial will guide you through accessing Shinobi Academy Linux, a custom-built server designed to provide a safe and engaging environment for you to learn and experiment with Linux. Brought to you by Softwareshinobi ([https://softwareshinobi.digital/](https://softwareshinobi.digital/)), this server is your gateway to the exciting world of open-source exploration.
## What You'll Learn:
* Connecting to a Linux server (using SSH)
* Basic Linux commands (navigation, listing files, etc.)
* Exploring pre-installed tools like cmatrix and hollywood
## What You'll Need:
* A computer with internet access
* An SSH client (built-in on most Linux and macOS systems, downloadable for Windows)
## About Shinobi Academy:
Shinobi Academy is the online learning platform brought to you by Softwareshinobi!
Designed to empower aspiring tech enthusiasts, Shinobi Academy offers a comprehensive range of courses and resources to equip you with the skills you need to excel in the ever-evolving world of technology.
## Connecting to Shinobi Academy Linux:
1. Open your SSH client.
2. Enter the following command (including the port number):
```
ssh -p 2222 shinobi@linux.softwareshinobi.digital
```
3. When prompted, enter the password "shinobi".
```
username / shinobi
```
```
password / shinobi
```
**Congratulations!** You're now connected to Shinobi Academy Linux.
## Exploring the Server:
Once connected, you can use basic Linux commands to navigate the system and explore its features. Here are a few examples:
* **`ls`:** Lists files and directories in the current directory.
* **`cd`:** Changes directory. For example, `cd Desktop` will move you to the Desktop directory (if it exists).
* **`pwd`:** Shows the current working directory.
* **`man` followed by a command name:** Provides detailed information on a specific command (e.g., `man ls`).
## Pre-installed Goodies:
Shinobi Academy Linux comes pre-installed with some interesting tools to enhance your learning experience:
* **`cmatrix`:** Simulates the iconic falling code effect from the movie "The Matrix".
* **`hollywood`:** Creates a variety of dynamic text effects on your terminal.
**Experimenting with these tools is a great way to explore the possibilities of Linux.**
## Conclusion:
By following these steps, you've successfully connected to Shinobi Academy Linux and begun your journey into the world of Linux. Use this platform to explore, experiment, and build your Linux skills!
A big thanks to Gemini for putting together these awesome docs!
## Master Linux Like a Pro: 1-on-1 Tutoring:
**Tired of fumbling in the terminal?** Imagine wielding Linux commands with ease and managing servers like a ninja, just like in my government and corporate gigs.
**1-on-1 tutoring unlocks your potential:**
* **Terminal mastery:** Conquer the command line and automate tasks like a pro.
* **Become a command jedi:** Craft commands with lightning speed, streamlining your workflow.
**Ready to transform your skills?** [Learn More!](https://tutor.softwareshinobi.digital/linux)


View File

@@ -0,0 +1,59 @@
# The `cal` Command
The `cal` command displays a formatted calendar in the terminal. If no options are specified, cal displays the current month, with the current day highlighted.
### Syntax:
```
cal [general options] [-jy] [[month] year]
```
### Options:
|**Option**|**Description**|
|:--|:--|
|`-h`|Don't highlight today's date.|
|`-m month`|Specify a month to display. The month specifier is a full month name (e.g., February), a month abbreviation of at least three letters (e.g., Feb), or a number (e.g., 2). If you specify a number followed by the letter "f" or "p", the month of the following or previous year, respectively, is displayed. For instance, `-m 2f` displays February of next year.|
|`-y year`|Specify a year to display. For example, `-y 1970` displays the entire calendar of the year 1970.|
|`-3`|Display last month, this month, and next month.|
|`-1`|Display only this month. This is the default.|
|`-A num`|Display num months occurring after any months already specified. For example, `-3 -A 3` displays last month, this month, and four months after this one; and `-y 1970 -A 2` displays every month in 1970, and the first two months of 1971.|
|`-B num`|Display num months occurring before any months already specified. For example, `-3 -B 2` displays the previous three months, this month, and next month.|
|`-d YYYY-MM`|Operate as if the current month is number MM of year YYYY.|
### Examples:
1. Display the calendar for this month, with today highlighted.
```
cal
```
2. Same as the previous command, but do not highlight today.
```
cal -h
```
3. Display last month, this month, and next month.
```
cal -3
```
4. Display this entire year's calendar.
```
cal -y
```
5. Display the entire year 2000 calendar.
```
cal -y 2000
```
6. Same as the previous command.
```
cal 2000
```
7. Display the calendar for December of this year.
```
cal -m [December, Dec, or 12]
```
8. Display the calendar for December 2000.
```
cal 12 2000
```

View File

@@ -0,0 +1,94 @@
# The `bc` command
The `bc` command provides the functionality of being able to perform mathematical calculations through the command line.
### Examples:
1. Arithmetic:
```
Input : $ echo "11+5" | bc
Output : 16
```
2. Increment:
- var++ : Post increment operator, the result of the variable is used first and then the variable is incremented.
- ++var : Pre increment operator, the variable is increased first and then the result of the variable is stored.
```
Input: $ echo "var=3;++var" | bc
Output: 4
```
3. Decrement:
- var-- : Post decrement operator, the result of the variable is used first and then the variable is decremented.
- --var : Pre decrement operator, the variable is decreased first and then the result of the variable is stored.
```
Input: $ echo "var=3;--var" | bc
Output: 2
```
4. Assignment:
- var = value : Assign the value to the variable
- var += value : similar to var = var + value
- var -= value : similar to var = var - value
- var *= value : similar to var = var * value
- var /= value : similar to var = var / value
- var ^= value : similar to var = var ^ value
- var %= value : similar to var = var % value
```
Input: $ echo "var=4;var" | bc
Output: 4
```
5. Comparison or Relational:
- If the comparison is true, then the result is 1. Otherwise,(false), returns 0
- expr1<expr2 : Result is 1, if expr1 is strictly less than expr2.
- expr1<=expr2 : Result is 1, if expr1 is less than or equal to expr2.
- expr1>expr2 : Result is 1, if expr1 is strictly greater than expr2.
- expr1>=expr2 : Result is 1, if expr1 is greater than or equal to expr2.
- expr1==expr2 : Result is 1, if expr1 is equal to expr2.
- expr1!=expr2 : Result is 1, if expr1 is not equal to expr2.
```
Input: $ echo "6<4" | bc
Output: 0
```
```
Input: $ echo "2==2" | bc
Output: 1
```
6. Logical or Boolean:
- expr1 && expr2 : Result is 1, if both expressions are non-zero.
- expr1 || expr2 : Result is 1, if either expression is non-zero.
- ! expr : Result is 1, if expr is 0.
```
Input: $ echo "! 1" | bc
Output: 0
Input: $ echo "10 && 5" | bc
Output: 1
```
### Syntax:
```
bc [ -hlwsqv ] [long-options] [ file ... ]
```
### Additional Flags and their Functionalities:
*Note: This does not include an exhaustive list of options.*
|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-i`|`--interactive`|Force interactive mode|
|`-l`|`--mathlib`|Use the predefined math routines|
|`-q`|`--quiet`|Opens the interactive mode for bc without printing the header|
|`-s`|`--standard`|Treat non-standard bc constructs as errors|
|`-w`|`--warn`|Provides a warning if non-standard bc constructs are used|
### Notes:
1. The capabilities of `bc` can be further appreciated if used within a script (see the sketch after these notes). Aside from basic arithmetic operations, `bc` supports increments/decrements, complex calculations, logical comparisons, etc.
2. Two of the flags in `bc` refer to non-standard constructs. If you evaluate `echo "100>50" | bc -w`, for example, you will get a warning. According to the POSIX page for bc, relational operators are only valid if used within an `if`, `while`, or `for` statement.
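As a small illustration of the first note, here is a hedged sketch of using `bc` from a Bash script to do floating-point math, which Bash cannot do natively; the numbers are arbitrary examples:
```
#!/bin/bash
# Average two numbers with two decimal places using bc's math library (-l)
a=7
b=2
average=$(echo "scale=2; ($a + $b) / 2" | bc -l)
echo "Average: $average"
```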

View File

@@ -0,0 +1,31 @@
# The `help` command
The `help` command displays information about shell builtin commands.
If a `PATTERN` is specified, it gives detailed help on all commands matching the `PATTERN`; otherwise, the list of available help topics is printed.
## Syntax
```bash
$ help [-dms] [PATTERN ...]
```
## Options
|**Option**|**Description**|
|:--|:--|
|`-d`|Output short description for each topic.|
|`-m`|Display usage in pseudo-manpage format.|
|`-s`|Output only a short usage synopsis for each topic matching the provided `PATTERN`.|
## Examples of uses:
1. We get the complete information about the `cd` command
```bash
$ help cd
```
2. We get a short description about the `pwd` command
```bash
$ help -d pwd
```
3. We get the syntax of the `cd` command
```bash
$ help -s cd
```

View File

@@ -0,0 +1,29 @@
# The `factor` command
The `factor` command prints the prime factors of each specified integer `NUMBER`. If none are specified on the command line, it will read them from the standard input.
## Syntax
```bash
$ factor [NUMBER]...
```
OR:
```bash
$ factor OPTION
```
## Options
|**Option**|**Description**|
|:--|:--|
|`--help`|Display a help message and exit.|
|`--version`|Output version information and exit.|
## Examples
1. Print the prime factors of the number 50.
```bash
$ factor 50
```
2. Print the prime factors of the number 75.
```bash
$ factor 75
```
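Since `factor` reads from standard input when no number is given on the command line, you can also pipe numbers into it, for example:
```bash
$ echo 100 | factor
```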

View File

@@ -0,0 +1,32 @@
# The `whatis` command
The `whatis` command is used to display one-line manual page descriptions for commands.
It can be used to get a basic understanding of what an (unknown) command is used for.
### Examples of uses:
1. To display what `ls` is used for:
```
whatis ls
```
2. To display the use of all commands which start with `make`, execute the following:
```
whatis -w make*
```
### Syntax:
```
whatis [-OPTION] [KEYWORD]
```
### Additional Flags and their Functionalities:
|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-d`|`--debug`|Show debugging messages|
|`-r`|`--regex`|Interpret each keyword as a regex|
|`-w`|`--wildcard`|The keyword(s) contain wildcards|

View File

@@ -0,0 +1,33 @@
# The `who` command
The `who` command lets you print out a list of logged-in users, the current run level of the system and the time of last system boot.
### Examples
1. Print out all details of currently logged-in users
```
who -a
```
2. Print out the list of all dead processes
```
who -d -H
```
### Syntax:
```
who [options] [filename]
```
### Additional Flags and their Functionalities
|**Short Flag** |**Description** |
|---|---|
| `-r` |print the current runlevel |
| `-d` |print all the dead processes |
|`-q`|print all the login names and total number of logged on users |
|`-h`|print the heading of the columns displayed |
|`-b`|print the time of last system boot |

View File

@@ -0,0 +1,33 @@
# The `free` command
The `free` command in Linux/Unix is used to show memory (RAM/SWAP) information.
# Usage
## Show memory usage
**Action:**
--- Output the memory usage - available and used, as well as swap
**Details:**
--- Outputted values are not human-readable (are in bytes)
**Command:**
```
free
```
## Show memory usage in human-readable form
**Action:**
--- Output the memory usage - available and used, as well as swap
**Details:**
--- Outputted values ARE human-readable (are in GB / MB)
**Command:**
```
free -h
```

View File

@@ -0,0 +1,19 @@
# The `sl` command
The `sl` command in Linux is a humorous program that runs a steam locomotive(sl) across your terminal.
![image](https://i.imgur.com/CInBHak.png)
## Installation
Install the package before running.
```
sudo apt install sl
```
## Syntax
```
sl
```

View File

@@ -0,0 +1,76 @@
# The `finger` command
The `finger` command displays information about the system users.
### Examples:
1. View detail about a particular user.
```
finger abc
```
*Output*
```
Login: abc Name: (null)
Directory: /home/abc Shell: /bin/bash
On since Mon Nov 1 18:45 (IST) on :0 (messages off)
On since Mon Nov 1 18:46 (IST) on pts/0 from :0.0
New mail received Fri May 7 10:33 2013 (IST)
Unread since Sat Jun 7 12:59 2003 (IST)
No Plan.
```
2. View login details and idle status of a user
```
finger -s root
```
*Output*
```
Login Name Tty Idle Login Time Office Office Phone
root root *1 19d Wed 17:45
root root *2 3d Fri 16:53
root root *3 Mon 20:20
root root *ta 2 Tue 15:43
root root *tb 2 Tue 15:44
```
### Syntax:
```
finger [-l] [-m] [-p] [-s] [username]
```
### Additional Flags and their Functionalities:
|**Flag** |**Description** |
|:---|:---|
|`-l`|Force long output format.|
|`-m`|Match arguments only on user name (not first or last name).|
|`-p`|Suppress printing of the .plan file in a long format printout.|
|`-s`|Force short output format.|
### Additional Information
**Default Format**

The default format includes the following items:

* Login name
* Full username
* Terminal name
* Write status (an * (asterisk) before the terminal name indicates that write permission is denied)

For each user on the host, the default information list also includes, if known, the following items:

* Idle time (minutes if it is a single integer, hours and minutes if a : (colon) is present, or days and hours if a “d” is present)
* Login time
* Site-specific information

**Longer Format**

A longer format is used by the finger command whenever a list of user names is given. (Account names as well as first and last names of users are accepted.) This format is multiline, and includes all the information described above along with the following:

* User's $HOME directory
* User's login shell
* Contents of the .plan file in the user's $HOME directory
* Contents of the .project file in the user's $HOME directory

View File

@@ -0,0 +1,56 @@
# The `w` command
The `w` command displays information about the users that are currently active on the machine and their [processes](https://www.computerhope.com/jargon/p/process.htm).
### Examples:
1. Running the `w` command without [arguments](https://www.computerhope.com/jargon/a/argument.htm) shows a list of logged on users and their processes.
```
w
```
2. Show information for the user named *hope*.
```
w hope
```
### Syntax:
```
w [options] [username]
```
### Additional Flags and their Functionalities:
|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-h`|`--no-header`|Don't print the header.|
|`-u`|`--no-current`|Ignores the username while figuring out the current process and cpu times. *(To see an example of this, switch to the root user with `su` and then run both `w` and `w -u`.)*|
|`-s`|`--short`|Display abbreviated output *(don't print the login time, JCPU or PCPU times).*|
|`-f`|`--from`|Toggle printing the from *(remote hostname)* field. The default as released is for the from field to not be printed, although your system administrator or distribution maintainer may have compiled a version where the from field is shown by default.|
|`--help`|<center>-</center>|Display a help message, and exit.|
|`-V`|`--version`|Display version information, and exit.|
|`-o`|`--old-style`|Old style output *(prints blank space for idle times less than one minute)*.|
|*`user`*|<center>-</center>|Show information about the specified user only.|
### Additional Information
The [header](https://www.computerhope.com/jargon/h/header.htm) of the output shows (in this order): the current time, how long the system has been running, how many users are currently logged on, and the system [load](https://www.computerhope.com/jargon/l/load.htm) averages for the past 1, 5, and 15 minutes.
The following entries are displayed for each user:
- login name
- the [tty](https://www.computerhope.com/jargon/t/tty.htm) name
- the [remote](https://www.computerhope.com/jargon/r/remote.htm) [host](https://www.computerhope.com/jargon/h/hostcomp.htm) they are logged in from
- the amount of time they have been logged in
- their [idle](https://www.computerhope.com/jargon/i/idle.htm) time
- JCPU
- PCPU
- the [command line](https://www.computerhope.com/jargon/c/commandi.htm) of their current process
The JCPU time is the time used by all processes attached to the tty. It does not include past background jobs, but does include currently running background jobs.
The PCPU time is the time used by the current process, named in the "what" field.

View File

@@ -0,0 +1,28 @@
# The `login` Command
The `login` command initiates a user session.
## Syntax
```bash
$ login [-p] [-h host] [-H] [-f username|username]
```
## Flags and their functionalities
|**Short Flag** |**Description** |
|---|---|
| `-f` |Used to skip a login authentication. This option is usually used by the getty(8) autologin feature. |
| `-h` | Used by other servers (such as telnetd(8)) to pass the name of the remote host to login so that it can be placed in utmp and wtmp. Only the superuser is allowed to use this option. |
|`-p`|Used by getty(8) to tell login to preserve the environment. |
|`-H`|Used by other servers (for example, telnetd(8)) to tell login that printing the hostname should be suppressed in the login: prompt. |
|`--help`|Display help text and exit.|
|`-v`|Display version information and exit.|
## Examples
To log in to the system as user abhishek, enter the following at the login prompt:
```bash
$ login: abhishek
```
If a password is defined, the password prompt appears. Enter your password at this prompt.

View File

@@ -0,0 +1,52 @@
# `lscpu` command
`lscpu` in Linux/Unix is used to display CPU Architecture info. `lscpu` gathers CPU architecture information from `sysfs` and `/proc/cpuinfo` files.
For example :
```
manish@godsmack:~$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 142
Model name: Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz
Stepping: 9
CPU MHz: 700.024
CPU max MHz: 3100.0000
CPU min MHz: 400.0000
BogoMIPS: 5399.81
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 3072K
NUMA node0 CPU(s): 0-3
```
## Options
`-a, --all`
Include lines for online and offline CPUs in the output (default for -e). This option may only be specified together with option -e or -p.
For example: `lscpu -a`
`-b, --online`
Limit the output to online CPUs (default for -p). This option may only be specified together with option -e or -p.
For example: `lscpu -b`
`-c, --offline`
Limit the output to offline CPUs. This option may only be specified together with option -e or -p.
`-e, --extended [=list]`
Display the CPU information in human-readable format.
For example: `lscpu -e`
For more info: use `man lscpu` or `lscpu --help`

View File

@@ -0,0 +1,37 @@
# The `printenv` command
The `printenv` command prints the values of the specified [environment _VARIABLE(s)_](https://www.computerhope.com/jargon/e/envivari.htm). If no [_VARIABLE_](https://www.computerhope.com/jargon/v/variable.htm) is specified, it prints name and value pairs for all of them.
### Examples:
1. Display the values of all environment variables.
```
printenv
```
2. Display the location of the current user's [home directory](https://www.computerhope.com/jargon/h/homedir.htm).
```
printenv HOME
```
3. To use a NUL byte rather than a newline as the terminating character between output entries, use the `--null` option.
```
printenv --null SHELL HOME
```
*NOTE: By default, the* `printenv` *command uses newline as the terminating character between output entries.*
### Syntax:
```
printenv [OPTION]... PATTERN...
```
### Additional Flags and their Functionalities:
|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-0`|`--null`|End each output line with a NUL (**0**) byte rather than a [newline](https://www.computerhope.com/jargon/n/newline.htm).|
|`--help`|<center>-</center>|Display a help message, and exit.|

View File

@@ -0,0 +1,39 @@
# The `ip` command
The `ip` command, part of the iproute2 suite, is used for performing several network administration tasks. IP stands for Internet Protocol. This command is used to show or manipulate routing, devices, and tunnels. It can perform tasks like configuring and modifying the default and static routing, setting up tunnels over IP, listing IP addresses and property information, modifying the status of an interface, and assigning, deleting, and setting up IP addresses and routes.
### Examples:
1. To assign an IP Address to a specific interface (eth1) :
```
ip addr add 192.168.50.5 dev eth1
```
2. To show detailed information about network interfaces like IP Address, MAC Address information etc. :
```
ip addr show
```
### Syntax:
```
ip [ OPTIONS ] OBJECT { COMMAND | help }
```
### Additional Flags and their Functionalities:
|**Object / Option** |**Description** |
|:---|:---|
|`addr` (`a`)| Display and modify IP addresses |
|`link` (`l`)|Display and modify network interfaces |
|`route` (`r`)|Display and alter the routing table|
|`neigh` (`n`)|Display and manipulate neighbor objects (ARP table) |
|`rule` (`ru`)|Display and manipulate rules in the routing policy database|
|`-s`|Output more information. If the option appears twice or more, the amount of information increases |
|`-f`|Specifies the protocol family to use|
|`-r`|Use the system's name resolver to print DNS names instead of host addresses|
|`-c`|Configure color output |
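As a sketch of the object-based syntax in practice (the interface name and addresses are only placeholders), bringing a link up and adding a default route looks like this:
```
ip link set dev eth1 up
ip route add default via 192.168.50.1 dev eth1
```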

View File

@@ -0,0 +1,23 @@
# The `last` command
This command shows you a list of all the users that have logged in and out since the creation of the `/var/log/wtmp` file. There are also some parameters you can add which will show you, for example, when a certain user has logged in and how long they were logged in for.
If you want to see the last 5 logs, just add `-5` to the command like this:
```
last -5
```
And if you want to see the last 10, add `-10`.
Another cool thing you can do is if you add `-F` you can see the login and logout time including the dates.
```
last -F
```
There is quite a lot you can view with this command. If you need to find out more about it, you can run:
```
last --help
```

View File

@@ -0,0 +1,93 @@
# The `locate` command
The `locate` command searches the file system for files and directories whose name matches a given pattern through a database file that is generated by the `updatedb` command.
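Because `locate` reads from this prebuilt database, files created after the last database update will not show up until the database is refreshed, typically with (root privileges required):
```
sudo updatedb
```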
### Examples:
1. Running the `locate` command to search for a file named `.bashrc`.
```
locate .bashrc
```
*Output*
```
/etc/bash.bashrc
/etc/skel/.bashrc
/home/linuxize/.bashrc
/usr/share/base-files/dot.bashrc
/usr/share/doc/adduser/examples/adduser.local.conf.examples/bash.bashrc
/usr/share/doc/adduser/examples/adduser.local.conf.examples/skel/dot.bashrc
```
The `/root/.bashrc` file will not be shown because we ran the command as a normal user that doesn't have access permissions to the `/root` directory.
If the result list is long, for better readability, you can pipe the output to the [`less`](https://linuxize.com/post/less-command-in-linux/) command:
```
locate .bashrc | less
```
2. To search for all `.md` files on the system
```
locate *.md
```
3. To search all `.py` files and display only 10 results
```
locate -n 10 *.py
```
4. To perform a case-insensitive search.
```
locate -i readme.md
```
*Output*
```
/home/linuxize/p1/readme.md
/home/linuxize/p2/README.md
/home/linuxize/p3/ReadMe.md
```
5. To return the number of all files containing `.bashrc` in their name.
```
locate -c .bashrc
```
*Output*
```
6
```
6. The following would return only the existing `.json` files on the file system.
```
locate -e *.json
```
7. To run a more complex search the `-r` (`--regexp`) option is used.
To search for all `.mp4` and `.avi` files on your system and ignore case.
```
locate --regex -i "(\.mp4|\.avi)"
```
### Syntax:
```
locate [OPTION]... PATTERN...
```
### Additional Flags and their Functionalities:
|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-A`|`--all`|It is used to display only entries that match all PATTERNs instead of requiring only one of them to match.|
|`-b`|`--basename`|It is used to match only the base name against the specified patterns.|
|`-c`|`--count`|It is used for writing the number matching entries instead of writing file names on standard output.|
|`-d`|`--database DBPATH`|It is used to replace the default database with DBPATH.|
|`-e`|`--existing`|It is used to display only entries that refer to files existing at the time the command is executed.|
|`-L`|`--follow`|If the `--existing` option is specified, it is used for checking whether files exist and following trailing symbolic links. It will omit broken symbolic links from the output. This is the default behavior. The opposite behavior can be specified using the `--nofollow` option.|
|`-h`|`--help`|It is used to display the help documentation that contains a summary of the available options.|
|`-i`|`--ignore-case`|It is used to ignore case sensitivity of the specified patterns.|
|`-p`|`--ignore-spaces`|It is used to ignore punctuation and spaces when matching patterns.|
|`-t`|`--transliterate`|It is used to ignore accents using iconv transliteration when matching patterns.|
|`-l`|`--limit, -n LIMIT`|If this option is specified, the command exits successfully after finding LIMIT entries.|
|`-m`|`--mmap`|It is used to ignore the compatibility with BSD, and GNU locate.|
|`-0`|`--null`|It is used to separate the entries on output using the ASCII NUL character instead of writing each entry on a separate line.|
|`-S`|`--statistics`|It is used to write statistics about each read database to standard output instead of searching for files.|
|`-r`|`--regexp REGEXP`|It is used for searching a basic regexp REGEXP.|
|`--regex`|<center>-</center>|It is used to describe all PATTERNs as extended regular expressions.|
|`-V`|`--version`|It is used to display the version and license information.|
|`-w`|` --wholename`|It is used for matching only the whole path name in specified patterns.|

View File

@@ -0,0 +1,47 @@
# The `iostat` command
The `iostat` command in Linux is used for monitoring system input/output statistics for devices and partitions. It monitors system input/output by observing the time the devices are active in relation to their average transfer rates. The reports produced by iostat can be used to tune the system configuration to better balance the input/output load between the physical disks. iostat is included in the sysstat package. If you don't have it, you need to install it first.
### Syntax:
```[linux]
iostat [ -c ] [ -d ] [ -h ] [ -N ] [ -k | -m ] [ -t ] [ -V ] [ -x ]
[ -z ] [ [ [ -T ] -g group_name ] { device [...] | ALL } ]
[ -p [ device [,...] | ALL ] ] [ interval [ count ] ]
```
### Examples:
1. Display a continuous device report at two-second intervals (running `iostat` with no arguments instead gives a single history-since-boot report for all CPUs and devices):
```[linux]
iostat -d 2
```
2. Display six device reports at two-second intervals for all devices:
```[linux]
iostat -d 2 6
```
3. Display six extended-statistics reports at two-second intervals for devices sda and sdb:
```[linux]
iostat -x sda sdb 2 6
```
4. Display six reports at two-second intervals for device sda and all its partitions:
```[linux]
iostat -p sda 2 6
```
### Additional Flags and their Functionalities:
| **Short Flag** | **Description** |
| :------------------------------ | :--------------------------------------------------------- |
| `-x` | Show more details statistics information. |
| `-c` | Show only the cpu statistic. |
| `-d` | Display only the device report |
| `-xd` | Show extended I/O statistics for devices only. |
| `-k` | Capture the statistics in kilobytes or megabytes. |
| `-k23` | Display cpu and device statistics with delay. |
| `-j ID mmcbkl0 sda6 -x -m 2 2` | Display persistent device name statistics. |
| `-p ` | Display statistics for block devices. |
| `-N ` | Display lvm2 statistic information. |

View File

@@ -0,0 +1,77 @@
# The `sort` command
The `sort` command is used to sort a file, arranging the records in a particular order. By default, the sort command sorts a file assuming the contents are ASCII. Using options, the sort command can also sort numerically.
### Examples:
Suppose you create a data file with name file.txt:
```
Command :
$ cat > file.txt
abhishek
chitransh
satish
rajan
naveen
divyam
harsh
```
Sorting a file: Now use the sort command
Syntax :
```
sort filename.txt
```
```
Command:
$ sort file.txt
Output :
abhishek
chitransh
divyam
harsh
naveen
rajan
satish
```
Note: This command does not actually change the input file, i.e. file.txt.
### The sort function on a file with mixed case content
i.e. uppercase and lowercase: when we have a mixed file with both uppercase and lowercase letters, the uppercase letters are sorted first, followed by the lowercase letters.
Example:
Create a file mix.txt
```
Command :
$ cat > mix.txt
abc
apple
BALL
Abc
bat
```
Now use the sort command
```
Command :
$ sort mix.txt
Output :
Abc
BALL
abc
apple
bat
```
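As noted in the introduction, options allow numeric sorting as well; a brief sketch (the file name is hypothetical):
```
sort -n numbers.txt     # numeric sort, smallest value first
sort -nr numbers.txt    # numeric sort, largest value first
```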

View File

@@ -0,0 +1,33 @@
# The `paste` command
The `paste` command writes lines of two or more files, sequentially and separated by TABs, to the standard output
### Syntax:
```[linux]
paste [OPTIONS]... [FILE]...
```
### Examples:
1. To paste two files
```[linux]
paste file1 file2
```
2. To paste two files using new line as delimiter
```[linux]
paste -d '\n' file1 file2
```
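3. As a rough illustration of the default behaviour (the file contents are hypothetical), each output line joins the corresponding lines of the inputs with a TAB:
```[linux]
$ cat file1
1
2
$ cat file2
a
b
$ paste file1 file2
1	a
2	b
```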
### Additional Flags and their Functionalities:
| **Short Flag** | **Long Flag** | **Description** |
| :----------------- | :-------------------------- | :-------------------------------------------------------------------------------------------------------------------------------- |
| `-d` | `--delimiter` | use the specified character instead of TAB |
| `-s` | `--serial` | paste one file at a time instead of in parallel |
| `-z` | `--zero-terminated` | set line delimiter to NUL, not newline |
| | `--help` | print command help |
| | `--version` | print version information |

View File

@@ -0,0 +1,24 @@
# The `iptables` Command
The `iptables` command is used to set up and maintain tables for the Netfilter firewall for IPv4, included in the Linux kernel. The firewall matches packets with rules defined in these tables and then takes the specified action on a possible match.
### Syntax:
```
iptables --table TABLE -A/-C/-D... CHAIN rule --jump Target
```
### Example and Explanation:
*This command will append to the chain provided in parameters:*
```
iptables [-t table] --append [chain] [parameters]
```
*This command drops all the traffic coming on any port:*
```
iptables -t filter --append INPUT -j DROP
```
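*As a further sketch (rule order matters, and port 22 is just the usual SSH example), a rule accepting inbound SSH traffic would typically be appended before the blanket drop rule:*
```
iptables -t filter --append INPUT -p tcp --dport 22 -j ACCEPT
```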
### Flags and their Functionalities:
|Flag|Description|
|:---|:---|
|`-C`|Check if a rule is present in the chain or not. It returns 0 if the rule exists and returns 1 if it does not.|
|`-A`|Append to the chain provided in parameters.|

View File

@@ -0,0 +1,50 @@
# The `lsof` command
The `lsof` command shows **file information** for all the files opened by running processes. Its name is derived from its function: list open files > `lsof`
An open file may be a regular file, a directory, a block special file, a character special file, an executing text reference, a library , a stream or a network file (Internet socket, NFS file or UNIX domain socket). A specific file or all the files in a file system may be selected by path.
### Syntax:
```
lsof [-OPTION] [USER_NAME]
```
### Examples:
1. To show all the files opened by all active processes:
```
lsof
```
2. To show the files opened by a particular user:
```
lsof -u [USER_NAME]
```
3. To list the processes with opened files under a specified directory:
```
lsof +d [PATH_TO_DIR]
```
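4. A common troubleshooting pattern (the port number here is arbitrary) is finding which process is listening on a given TCP port:
```
lsof -i :8080
```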
### Options and their Functionalities:
|**Option** |**Additional Options** |**Description** |
|:---|:---|:---|
|`-i`|`tcp`/ `udp`/ `:port`|List all network connections running, Additionally, on udp/tcp or on specified port.|
|`-i4`|<center>-</center>|List all processes with ipv4 connections.|
|`-i6`|<center>-</center>|List all processes with ipv6 connections.|
|`-c`|`[PROCESS_NAME]`|List all the files of a particular process with given name.|
|`-p`|`[PROCESS_ID]`|List all the files opened by a specified process id.|
|`-p`|`^[PROCESS_ID]`|List all the files that are not opened by a specified process id.|
|`+d`|`[PATH]`|List the processes with opened files under a specified directory|
|`+R`|<center>-</center>|List the files opened by parent process Id.|
### Help Command
Run below command to view the complete guide to `lsof` command.
```
man lsof
```

View File

@@ -0,0 +1,57 @@
# The `bzip2` command
The `bzip2` command lets you compress and decompress files, i.e. it reduces the amount of storage space a file takes up compared to the original.
### Syntax:
```
bzip2 [OPTIONS] filenames ...
```
#### Note : Each file is replaced by a compressed version of itself, with the original name of the file followed by the extension `.bz2`.
### Options and their Functionalities:
|**Option** |**Alias** |**Description** |
|:---|:---|:---|
|`-d`|`--decompress`|to decompress compressed file|
|`-f`|`--force`|to force overwrite an existing output file|
|`-h`|`--help`|to display the help message and exit|
|`-k`|`--keep`|to enable file compression without deleting the original input file|
|`-L`|`--license`|to display the license terms and conditions|
|`-q`|`--quiet`|to suppress non-essential warning messages|
|`-t`|`--test`|to check integrity of the specified .bz2 file, but don't want to decompress them|
|`-v`|`--verbose`|to display details for each compression operation|
|`-V`|`--version`|to display the software version|
|`-z`|`--compress`|to enable file compression, but deletes the original input file|
> #### By default, when bzip2 compresses a file, it deletes the original (or input) file. However, if you don't want that to happen, use the -k command line option.
### Examples:
1. To force compression:
```
bzip2 -z input.txt
```
**Note: This option deletes the original file also**
2. To force compression and also retain original input file:
```
bzip2 -k input.txt
```
3. To force decompression:
```
bzip2 -d input.txt.bz2
```
4. To test integrity of compressed file:
```
bzip2 -t input.txt.bz2
```
5. To show the compression ratio for each file processed:
```
bzip2 -v input.txt
```

View File

@@ -0,0 +1,30 @@
# The `service` command
`service` runs a System V init script in as predictable an environment as possible, removing most environment variables and with the current working directory set to /.
The SCRIPT parameter specifies a System V init script, located in /etc/init.d/SCRIPT. The supported values of COMMAND depend on the invoked script; service passes COMMAND and OPTIONS to the init script unmodified. All scripts should support at least the start and stop commands. As a special case, if COMMAND is --full-restart, the script is run twice, first with the stop command, then with the start command.
The COMMAND can be at least start, stop, status, and restart.
`service --status-all` runs all init scripts, in alphabetical order, with the `status` command.
### Examples:
1. To check the status of all the running services:
```
service --status-all
```
2. To run a script
```
service SCRIPT-Name start
```
3. A more generalized command:
```
service [SCRIPT] [COMMAND] [OPTIONS]
```
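For instance, assuming an `ssh` init script exists under /etc/init.d/, a typical invocation would be:
```
service ssh restart
```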

View File

@@ -0,0 +1,25 @@
# The `vmstat` command
The `vmstat` command lets you monitor the performance of your system. It shows you information about your memory, disk, processes, CPU scheduling, paging, and block IO. This command is also referred to as **virtual memory statistic report**.
The very first report that is produced shows you the average details since the last reboot and after that, other reports are made which report over time.
### `vmstat`
![vmstat](https://imgur.com/9HZgBRN.png)
As you can see, it is a pretty useful little command. The most important columns above are `free`, which shows the amount of memory that is not being used, `si`, which shows how much memory is swapped in from disk every second in kB, and `so`, which shows how much memory is swapped out to disk each second in kB.
### `vmstat -a`
If we run `vmstat -a`, it will show us the active and inactive memory of the system running.
![vmstat -a](https://imgur.com/LjL4tRh.png)
### `vmstat -d`
The `vmstat -d` command shows us all the disk statistics.
![vmstat -d](https://imgur.com/y3L0pNN.png)
As you can see, this is a pretty useful little command that shows you different statistics about your virtual memory.
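`vmstat` can also produce repeated reports at a fixed interval. For example, the following (interval and count chosen arbitrarily) prints a new report every 2 seconds, 5 times in total:
```
vmstat 2 5
```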

View File

@@ -0,0 +1,57 @@
# The `mpstat` command
The `mpstat` command is used to report processor related statistics. It accurately displays the statistics of the CPU usage of the system and information about CPU utilization and performance.
### Syntax:
```
mpstat [options] [<interval> [<count>]]
```
#### Note : It initializes the first processor with CPU 0, the second one with CPU 1, and so on.
### Options and their Functionalities:
|**Option** |**Description** |
|-------------|----------------------------------------------------------------------|
|`-A` |to display all the detailed statistics |
|`-h` |to display mpstat help |
|`-I` |to display detailed interrupts statistics |
|`-n` |to report summary CPU statistics based on NUMA node placement |
|`-N` |to indicate the NUMA nodes for which statistics are to be reported |
|`-P` |to indicate the processors for which statistics are to be reported |
|`-o` |to display the statistics in JSON (Javascript Object Notation) format |
|`-T` |to display topology elements in the CPU report |
|`-u` |to report CPU utilization |
|`-v` |to display utilization statistics at the virtual processor level |
|`-V` |to display mpstat version |
|`-ALL` |to display detailed statistics about all CPUs |
### Examples:
1. To display processor and CPU statistics:
```
mpstat
```
2. To display processor number of all CPUs:
```
mpstat -P ALL
```
3. To get all the information which the tool may collect:
```
mpstat -A
```
4. To display CPU utilization by a specific processor:
```
mpstat -P 0
```
5. To display CPU usage with a time interval:
```
mpstat 1 5
```
**Note: This command will print 5 reports with 1 second time interval**

View File

@@ -0,0 +1,36 @@
# The `ncdu` Command
`ncdu` (NCurses Disk Usage) is a curses-based version of the well-known `du` command. It provides a fast way to see what directories are using your disk space.
## Example
1. Quiet Mode
```
ncdu -q
```
2. Omit mounted directories
```
ncdu -q -x
```
## Syntax
```
ncdu [-hqvx] [--exclude PATTERN] [-X FILE] dir
```
## Additional Flags and their Functionalities:
|Short Flag | Long Flag | Description|
|---|---|---|
| `-h`| - |Print a small help message|
| `-q`| - |Quiet mode. While calculating disk space, ncdu will update the screen 10 times a second by default, this will be decreased to once every 2 seconds in quiet mode. Use this feature to save bandwidth over remote connections.|
| `-v`| - |Print version.|
| `-x`| - |Only count files and directories on the same filesystem as the specified dir.|
| - | `--exclude PATTERN`|Exclude files that match PATTERN. This argument can be added multiple times to add more patterns.|
| `-X FILE`| `--exclude-from FILE`| Exclude files that match any pattern in FILE. Patterns should be separated by a newline.|

View File

@@ -0,0 +1,69 @@
# The `uniq` command
The `uniq` command in Linux is a command line utility that reports or filters out the repeated lines in a file.
In simple words, `uniq` is the tool that helps you detect adjacent duplicate lines and delete them. It filters out the adjacent matching lines from the input file (which is required as an argument) and writes the filtered data to the output file.
### Examples:
In order to omit the repeated lines from a file, the syntax would be the following:
```
uniq kt.txt
```
In order to tell the number of times a line was repeated, the syntax would be the following:
```
uniq -c kt.txt
```
In order to print repeated lines, the syntax would be the following:
```
uniq -d kt.txt
```
In order to print unique lines, the syntax would be the following:
```
uniq -u kt.txt
```
In order to allow N fields to be skipped while comparing the uniqueness of the lines, the syntax would be the following:
```
uniq -f 2 kt.txt
```
In order to allow N characters to be skipped while comparing the uniqueness of the lines, the syntax would be the following:
```
uniq -s 5 kt.txt
```
In order to make the comparison case-insensitive, the syntax would be the following:
```
uniq -i kt.txt
```
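Because `uniq` only collapses *adjacent* duplicate lines, it is commonly combined with `sort` so that identical lines sit next to each other first, for example:
```
sort kt.txt | uniq -c
```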
### Syntax:
```
uniq [OPTION] [INPUT[OUTPUT]]
```
### Possible options:
|**Flag** |**Description** |**Params** |
|:---|:---|:---|
|`-c`|It tells how many times a line was repeated by displaying a number as a prefix with the line.|-|
|`-d`|It only prints the repeated lines and not the lines which aren't repeated.|-|
|`-i`|By default, comparisons done are case sensitive but with this option case insensitive comparisons can be made.|-|
|`-f`|It allows you to skip N fields(a field is a group of characters, delimited by whitespace) of a line before determining uniqueness of a line.|N|
|`-s`|It doesn't compare the first N characters of each line while determining uniqueness. This is like the -f option, but it skips individual characters rather than fields.|N|
|`-u`|It allows you to print only unique lines.|-|
|`-z`|It will make a line end with 0 byte(NULL), instead of a newline.|-|
|`-w`|It only compares N characters in a line.|N|
|`--help`|It displays a help message and exit.|-|
|`--version`|It displays version information and exit.|-|

View File

@@ -0,0 +1,103 @@
# The `RPM` command
`rpm` - RPM Package Manager
`rpm` is a powerful __Package Manager__, which can be used to build, install, query, verify, update, and erase individual software packages. A __package__ consists of an archive of files and meta-data used to install and erase the archive files. The meta-data includes helper scripts, file attributes, and descriptive information about the package. Packages come in two varieties: binary packages, used to encapsulate software to be installed, and source packages, containing the source code and recipe necessary to produce binary packages.
One of the following basic modes must be selected: __Query, Verify, Signature Check, Install/Upgrade/Freshen, Uninstall, Initialize Database, Rebuild Database, Resign, Add Signature, Set Owners/Groups, Show Querytags, and Show Configuration.__
**General Options**
These options can be used in all the different modes.
|Short Flag| Long Flag| Description|
|---|---|---|
| -? | --help| Print a longer usage message than normal.|
| - |--version |Print a single line containing the version number of rpm being used.|
| - | --quiet | Print as little as possible - normally only error messages will be displayed.|
| -v | - | Print verbose information - normally routine progress messages will be displayed.|
| -vv | - | Print lots of ugly debugging information.|
| - | --rcfile FILELIST | Each of the files in the colon separated FILELIST is read sequentially by rpm for configuration information. Only the first file in the list must exist, and tildes will be expanded to the value of $HOME. The default FILELIST is /usr/lib/rpm/rpmrc:/usr/lib/rpm/redhat/rpmrc:/etc/rpmrc:~/.rpmrc. |
| - | --pipe CMD | Pipes the output of rpm to the command CMD. |
| - | --dbpath DIRECTORY | Use the database in DIRECTORY rather than the default path /var/lib/rpm |
| - | --root DIRECTORY | Use the file system tree rooted at DIRECTORY for all operations. Note that this means the database within DIRECTORY will be used for dependency checks and any scriptlet(s) (e.g. %post if installing, or %prep if building, a package) will be run after a chroot(2) to DIRECTORY. |
| -D | --define='MACRO EXPR' | Defines MACRO with value EXPR.|
| -E | --eval='EXPR' | Prints macro expansion of EXPR. |
# Synopsis
## Querying and Verifying Packages:
```
rpm {-q|--query} [select-options] [query-options]
rpm {-V|--verify} [select-options] [verify-options]
rpm --import PUBKEY ...
rpm {-K|--checksig} [--nosignature] [--nodigest] PACKAGE_FILE ...
```
## Installing, Upgrading, and Removing Packages:
```
rpm {-i|--install} [install-options] PACKAGE_FILE ...
rpm {-U|--upgrade} [install-options] PACKAGE_FILE ...
rpm {-F|--freshen} [install-options] PACKAGE_FILE ...
rpm {-e|--erase} [--allmatches] [--nodeps] [--noscripts] [--notriggers] [--test] PACKAGE_NAME ...
```
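As a quick sketch of these modes in practice (the package and file names are hypothetical):
```
rpm -ivh nginx-1.20.1-1.el8.x86_64.rpm   # install, verbose, with hash-mark progress
rpm -qa | grep nginx                     # query all installed packages matching a name
rpm -e nginx                             # erase (uninstall) the package
```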
## Miscellaneous:
```
rpm {--initdb|--rebuilddb}
rpm {--addsign|--resign} PACKAGE_FILE...
rpm {--querytags|--showrc}
rpm {--setperms|--setugids} PACKAGE_NAME ...
```
### query-options
```
[--changelog] [-c,--configfiles] [-d,--docfiles] [--dump]
[--filesbypkg] [-i,--info] [--last] [-l,--list]
[--provides] [--qf,--queryformat QUERYFMT]
[-R,--requires] [--scripts] [-s,--state]
[--triggers,--triggerscripts]
```
### verify-options
```
[--nodeps] [--nofiles] [--noscripts]
[--nodigest] [--nosignature]
[--nolinkto] [--nofiledigest] [--nosize] [--nouser]
[--nogroup] [--nomtime] [--nomode] [--nordev]
[--nocaps]
```
### install-options
```
[--aid] [--allfiles] [--badreloc] [--excludepath OLDPATH]
[--excludedocs] [--force] [-h,--hash]
[--ignoresize] [--ignorearch] [--ignoreos]
[--includedocs] [--justdb] [--nodeps]
[--nodigest] [--nosignature] [--nosuggest]
[--noorder] [--noscripts] [--notriggers]
[--oldpackage] [--percent] [--prefix NEWPATH]
[--relocate OLDPATH=NEWPATH]
[--replacefiles] [--replacepkgs]
[--test]
```

View File

@@ -0,0 +1,69 @@
# The `scp` command
SCP (secure copy) is a command-line utility that allows you to securely copy files and directories between two locations.
Both the files and passwords are encrypted so that anyone snooping on the traffic doesn't get anything sensitive.
### Different ways to copy a file or directory:
- From local system to a remote system.
- From a remote system to a local system.
- Between two remote systems from the local system.
### Examples:
1. To copy the files from a local system to a remote system:
```
scp /home/documents/local-file root@{remote-ip-address}:/home/
```
2. To copy the files from a remote system to the local system:
```
scp root@{remote-ip-address}:/home/remote-file /home/documents/
```
3. To copy the files between two remote systems from the local system.
```
scp root@{remote1-ip-address}:/home/remote-file root@{remote2-ip-address}:/home/
```
4. To copy a file through a jump host server.
```
scp -o ProxyJump=<jump-host-ip> /home/documents/local-file root@{remote-ip-address}:/home/
```
On newer versions of scp you can use the `-J` flag instead.
```
scp -J <jump-host-ip> /home/documents/local-file root@{remote-ip-address}:/home/
```
### Syntax:
```
scp [OPTION] [[user@]SRC_HOST:]file1 [[user@]DEST_HOST:]file2
```
- `OPTION` - scp options such as cipher, ssh configuration, ssh port, limit, recursive copy, etc.
- `[[user@]SRC_HOST:]file1` - Source file
- `[[user@]DEST_HOST:]file2` - Destination file
Local files should be specified using an absolute or relative path, while remote file names should include a user and host specification.
scp provides several options that control every aspect of its behaviour. The most widely used options are:
|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-P`|<center>-</center>|Specifies the remote host ssh port.|
|`-p`|<center>-</center>|Preserves files modification and access times.|
|`-q`|<center>-</center>|Use this option if you want to suppress the progress meter and non-error messages.|
|`-C`|<center>-</center>|This option forces scp to compress the data as it is sent to the destination machine.|
|`-r`|<center>-</center>|This option tells scp to copy directories recursively.|
### Before you begin
The `scp` command relies on `ssh` for data transfer, so it requires an `ssh key` or `password` to authenticate on the remote systems.
The `colon (:)` is how scp distinguish between local and remote locations.
To be able to copy files, you must have at least read permissions on the source file and write permission on the target system.
Be careful when copying files that share the same name and location on both systems, `scp` will overwrite files without warning.
When transferring large files, it is recommended to run the scp command inside a `screen` or `tmux` session.

View File

@@ -0,0 +1,76 @@
# The `split` command
The `split` command in Linux is used to split a file into smaller files.
### Examples
1. Split a file into a smaller file using file name
```
split filename.txt
```
2. Split a file named filename into segments of 200 lines beginning with prefix file
```
split -l 200 filename file
```
This will create files of the name fileaa, fileab, fileac, filead, etc. of 200 lines.
3. Split a file named filename into segments of 40 bytes with prefix file
```
split -b 40 filename file
```
This will create files of the name fileaa, fileab, fileac, filead, etc. of 40 bytes.
4. Split a file using --verbose to see the files being created.
```
split filename.txt --verbose
```
### Syntax:
```
split [options] filename [prefix]
```
### Additional Flags and their Functionalities
|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-a`|`--suffix-length=N`|Generate suffixes of length N (default 2)|
||`--additional-suffix=SUFFIX`|Append an additional SUFFIX to file names|
|`-b`|`--bytes=SIZE`|Put SIZE bytes per output file|
|`-C`|`--line-bytes=SIZE`|Put at most SIZE bytes of records per output file|
|`-d`| |Use numeric suffixes starting at 0, not alphabetic|
||`--numeric-suffixes[=FROM]`|Same as -d, but allow setting the start value|
|`-x`||Use hex suffixes starting at 0, not alphabetic|
||`--hex-suffixes[=FROM]`|Same as -x, but allow setting the start value|
|`-e`|`--elide-empty-files`|Do not generate empty output files with '-n'|
||`--filter=COMMAND`|Write to shell COMMAND;<br>file name is $FILE|
|`-l`|`--lines=NUMBER`|Put NUMBER lines/records per output file|
|`-n`|`--number=CHUNKS`|Generate CHUNKS output files;<br>see explanation below|
|`-t`|`--separator=SEP`|Use SEP instead of newline as the record separator;<br>'\0' (zero) specifies the NUL character|
|`-u`|`--unbuffered`|Immediately copy input to output with '-n r/...'|
||`--verbose`|Print a diagnostic just before each<br>output file is opened|
||`--help`|Display this help and exit|
||`--version`|Output version information and exit|
The SIZE argument is an integer and optional unit (example: 10K is 10*1024).
Units are K,M,G,T,P,E,Z,Y (powers of 1024) or KB,MB,... (powers of 1000).
CHUNKS may be:
|**CHUNKS** |**Description** |
|:---|:---|
|`N`|Split into N files based on size of input|
|`K/N`|Output Kth of N to stdout|
|`l/N`|Split into N files without splitting lines/records|
|`l/K/N`|Output Kth of N to stdout without splitting lines/records|
|`r/N`|Like 'l' but use round robin distribution|
|`r/K/N`|Likewise but only output Kth of N to stdout|

View File

@@ -0,0 +1,61 @@
# The `stat` command
The `stat` command lets you display file or file system status. It gives you useful information about the file (or directory) on which you use it.
### Examples:
1. Basic command usage
```
stat file.txt
```
2. Use the `-c` (or `--format`) argument to only display information you want to see (here, the total size, in bytes)
```
stat file.txt -c %s
```
### Syntax:
```
stat [OPTION] [FILE]
```
### Additional Flags and their Functionalities:
| Short Flag | Long Flag | Description |
| ---------- | ----------------- | ----------------------------------------------------------------------------- |
| `-L` | `--dereference` | Follow links |
| `-f` | `--file-system` | Display file system status instead of file status |
| `-c` | `--format=FORMAT` | Specify the format (see below) |
| `-t` | `--terse` | Print the information in terse form |
| - | `--cached=MODE` | Specify how to use cached attributes. Can be: `always`, `never`, or `default` |
| - | `--printf=FORMAT` | Like `--format`, but interpret backslash escapes (`\n`, `\t`, ...) |
| - | `--help` | Display the help and exit |
| - | `--version` | Output version information and exit |
### Example of Valid Format Sequences for Files:
| Format | Description |
| ------ | ---------------------------------------------------- |
| `%a` | Permission bits in octal |
| `%A` | Permission bits and file type in human readable form |
| `%d` | Device number in decimal |
| `%D` | Device number in hex |
| `%F` | File type |
| `%g` | Group ID of owner |
| `%G` | Group name of owner |
| `%h` | Number of hard links |
| `%i` | Inode number |
| `%m` | Mount point |
| `%n` | File name |
| `%N` | Quoted file name with dereference if symbolic link |
| `%s` | Total size, in bytes |
| `%u` | User ID of owner |
| `%U` | User name of owner |
| `%w` | Time of file birth, human-readable; - if unknown |
| `%x` | Time of last access, human-readable |
| `%y` | Time of last data modification, human-readable |
| `%z` | Time of last status change, human-readable |
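Several of the sequences above can be combined with `--printf`; for instance (the file name is arbitrary):
```
stat --printf='%n: %s bytes, owner %U, permissions %a\n' file.txt
```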

View File

@@ -0,0 +1,93 @@
# The `ionice` command
The `ionice` command is used to set or get process I/O scheduling class and priority.
If no arguments are given , `ionice` will query the current I/O scheduling class and priority for that process.
## Usage
```
ionice [options] -p <pid>
```
```
ionice [options] -P <pgid>
```
```
ionice [options] -u <uid>
```
```
ionice [options] <command>
```
## A process can be of three scheduling classes:
- ### Idle
A program with idle I/O priority will only get disk time when `no other program has asked for disk I/O for a defined grace period`.
The impact of idle processes on normal system activity should be `zero`.
This scheduling class `doesn't take a priority` argument.
Presently this scheduling class is permitted for an `ordinary user (since kernel 2.6.25)`.
- ### Best Effort
This is the `effective` scheduling class for any process that has `not asked for a specific I/O priority`.
This class `takes a priority argument from 0-7`, with a `lower` number being `higher priority`.
Programs running at the same best-effort priority are served in `round-robin fashion`.
Note that before kernel 2.6.26, a process that has not asked for an I/O priority formally uses “None” as its scheduling class, but the I/O scheduler will treat such processes as if they were in the best-effort class.
The priority within the best-effort class is dynamically derived from the CPU nice level of the process: io_priority = (cpu_nice + 20) / 5.
For kernels after 2.6.26 with the CFQ I/O scheduler, a process that has not asked for an I/O priority inherits its CPU scheduling class.
`The I/O priority is derived from the CPU nice level of the process` (same as before kernel 2.6.26).
- ### Real Time
The real-time scheduler class is `given first access to the disk, regardless of what else is going on in the system`.
Thus the real-time class needs to be used with some care, as it can starve other processes.
As with the best effort class, `8 priority levels are defined denoting how big a time slice a given process will receive on each scheduling window`.
This scheduling class is `not permitted for an ordinary user(non-root)`.
## Options
| Options | Description |
|---|---|
| -c, --class <class> | name or number of scheduling class, 0: none, 1: realtime, 2: best-effort, 3: idle|
| -n, --classdata <num> | priority (0..7) in the specified scheduling class,only for the realtime and best-effort classes|
| -p, --pid <pid>... | act on these already running processes|
| -P, --pgid <pgrp>... | act on already running processes in these groups|
| -t, --ignore | ignore failures|
| -u, --uid <uid>... | act on already running processes owned by these users|
| -h, --help | display this help|
| -V, --version | display version|
For more details see ionice(1).
## Examples
| Command | O/P |Explanation|
|---|---|---|
|`$ ionice` |*none: prio 4*|Running alone `ionice` will give the class and priority of current process |
|`$ ionice -p 101`|*none : prio 4*|Give the details(*class : priority*) of the process specified by given process id|
|`$ ionice -p 2` |*none: prio 4*| Check the class and priority of process with pid 2 it is none and 4 resp.|
|`$ ionice -c2 -n0 -p2`|2 ( best-effort ) priority 0 process 2 | Now lets set process(pid) 2 as a best-effort program with highest priority|
|$ `ionice` -p 2|best-effort : prio 0| Now if I check details of Process 2 you can see the updated one|
|$ `ionice` /bin/ls||get priority and class info of bin/ls |
|$ `ionice` -n4 -p2||set priority 4 of process with pid 2 |
|$ `ionice` -p 2| best-effort: prio 4| Now observe the difference between the command ran above and this one we have changed priority from 0 to 4|
|$ `ionice` -c0 -n4 -p2|ionice: ignoring given class data for none class|(Note that before kernel 2.6.26 a process that has not asked for an I/O priority formally uses “None” as scheduling class , |
|||but the io schedular will treat such processes as if it were in the best effort class. )|
|||-t option : ignore failure|
|$ `ionice` -c0 -n4 -p2 -t| | For ignoring the warning shown above we can use -t option so it will ignore failure |
## Conclusion
Thus we have successfully learnt about `ionice` command.

View File

@@ -0,0 +1,85 @@
# The `rsync` command
The `rsync` command is probably one of the most used commands out there. It is used to securely copy files from one server to another over SSH.
Compared to the `scp` command, which does a similar thing, `rsync` makes the transfer a lot faster, and in case of an interruption, you could restore/resume the transfer process.
In this tutorial, I will show you how to use the `rsync` command and copy files from one server to another and also share a few useful tips!
Before you get started, you would need to have 2 Linux servers. I will be using DigitalOcean for the demo and deploy 2 Ubuntu servers.
You can use my referral link to get a free $100 credit that you could use to deploy your virtual machines and test the guide yourself on a few DigitalOcean servers:
**[DigitalOcean $100 Free Credit](https://m.do.co/c/2a9bba940f39)**
## Transfer Files from local server to remote
This is one of the most common use cases. Essentially, this is how you would copy the files from the server that you are currently on (the source server) to a remote/destination server.
What you need to do is SSH to the server that is holding your files, cd to the directory that you would like to transfer over:
```
cd /var/www/html
```
And then run:
```
rsync -avz . user@your-remote-server.com:/home/user/dir/
```
The above command would copy all the files and directories from the current folder on your server to your remote server.
Rundown of the command:
* `-a`: is used to specify that you want recursion and want to preserve the file permissions and etc.
* `-v`: is verbose mode, it increases the amount of information you are given during the transfer.
* `-z`: with this option, rsync compresses the file data as it is sent to the destination machine, which reduces the amount of data being transmitted -- something that is useful over a slow connection.
I recommend having a look at the following website which explains the commands and the arguments very nicely:
[https://explainshell.com/explain?cmd=rsync+-avz](https://explainshell.com/explain?cmd=rsync+-avz)
In case that the SSH service on the remote server is not running on the standard `22` port, you could use `rsync` with a special SSH port:
```
rsync -avz -e 'ssh -p 1234' . user@your-remote-server.com:/home/user/dir/
```
## Transfer Files remote server to local
In some cases you might want to transfer files from your remote server to your local server, in this case, you would need to use the following syntax:
```
rsync -avz your-user@your-remote-server.com:/home/user/dir/ /home/user/local-dir/
```
Again, in case that you have a non-standard SSH port, you can use the following command:
```
rsync -avz -e 'ssh -p 2510' your-user@your-remote-server.com:/home/user/dir/ /home/user/local-dir/
```
## Transfer only missing files
If you would like to transfer only the missing files you could use the `--ignore-existing` flag.
This is very useful for final sync in order to ensure that there are no missing files after a website or a server migration.
Basically the commands would be the same apart from the appended `--ignore-existing` flag:
```
rsync -avz --ignore-existing . user@your-remote-server.com:/home/user/dir/
```
## Conclusion
Using `rsync` is a great way to quickly transfer some files from one machine over to another in a secure way over SSH.
For more cool Linux networking tools, I would recommend checking out this tutorial here:
[Top 15 Linux Networking tools that you should know!](https://devdojo.com/serverenthusiast/top-15-linux-networking-tools-that-you-should-know)
Hope that this helps!
Initially posted here: [How to Transfer Files from One Linux Server to Another Using rsync](https://devdojo.com/bobbyiliev/how-to-transfer-files-from-one-linux-server-to-another-using-rsync)

View File

@@ -0,0 +1,133 @@
# The `dig` command
dig - DNS lookup utility
The `dig` command is a flexible tool for interrogating DNS name servers. It performs DNS lookups and displays the answers that are returned from the name server(s) that were queried.
### Examples:
1. Perform a basic DNS lookup for a domain (dig is a network administration command-line tool for querying the Domain Name System):
```
dig google.com
```
2. List all google.com DNS records that are found, along with the IP addresses:
```
dig google.com ANY
```
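3. Perform a reverse DNS lookup with `-x` (the address below is only a placeholder), or print just the answer with `+short`:
```
dig -x 8.8.8.8
dig +short google.com
```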
### Syntax:
```
dig [server] [name] [type] [q-type] [q-class] {q-opt}
{global-d-opt} host [@local-server] {local-d-opt}
[ host [@local-server] {local-d-opt} [...]]
```
### Additional Flags and their Functionalities:
```bash
domain is in the Domain Name System
q-class is one of (in,hs,ch,...) [default: in]
q-type is one of (a,any,mx,ns,soa,hinfo,axfr,txt,...) [default:a]
(Use ixfr=version for type ixfr)
q-opt is one of:
-4 (use IPv4 query transport only)
-6 (use IPv6 query transport only)
-b address[#port] (bind to source address/port)
-c class (specify query class)
-f filename (batch mode)
-k keyfile (specify tsig key file)
-m (enable memory usage debugging)
-p port (specify port number)
-q name (specify query name)
-r (do not read ~/.digrc)
-t type (specify query type)
-u (display times in usec instead of msec)
-x dot-notation (shortcut for reverse lookups)
-y [hmac:]name:key (specify named base64 tsig key)
d-opt is of the form +keyword[=value], where keyword is:
+[no]aaflag (Set AA flag in query (+[no]aaflag))
+[no]aaonly (Set AA flag in query (+[no]aaflag))
+[no]additional (Control display of additional section)
+[no]adflag (Set AD flag in query (default on))
+[no]all (Set or clear all display flags)
+[no]answer (Control display of answer section)
+[no]authority (Control display of authority section)
+[no]badcookie (Retry BADCOOKIE responses)
+[no]besteffort (Try to parse even illegal messages)
+bufsize[=###] (Set EDNS0 Max UDP packet size)
+[no]cdflag (Set checking disabled flag in query)
+[no]class (Control display of class in records)
+[no]cmd (Control display of command line -
global option)
+[no]comments (Control display of packet header
and section name comments)
+[no]cookie (Add a COOKIE option to the request)
+[no]crypto (Control display of cryptographic
fields in records)
+[no]defname (Use search list (+[no]search))
+[no]dnssec (Request DNSSEC records)
+domain=### (Set default domainname)
+[no]dscp[=###] (Set the DSCP value to ### [0..63])
+[no]edns[=###] (Set EDNS version) [0]
+ednsflags=### (Set EDNS flag bits)
+[no]ednsnegotiation (Set EDNS version negotiation)
+ednsopt=###[:value] (Send specified EDNS option)
+noednsopt (Clear list of +ednsopt options)
+[no]expandaaaa (Expand AAAA records)
+[no]expire (Request time to expire)
+[no]fail (Don't try next server on SERVFAIL)
+[no]header-only (Send query without a question section)
+[no]identify (ID responders in short answers)
+[no]idnin (Parse IDN names [default=on on tty])
+[no]idnout (Convert IDN response [default=on on tty])
+[no]ignore (Don't revert to TCP for TC responses.)
+[no]keepalive (Request EDNS TCP keepalive)
+[no]keepopen (Keep the TCP socket open between queries)
+[no]mapped (Allow mapped IPv4 over IPv6)
+[no]multiline (Print records in an expanded format)
+ndots=### (Set search NDOTS value)
+[no]nsid (Request Name Server ID)
+[no]nssearch (Search all authoritative nameservers)
+[no]onesoa (AXFR prints only one soa record)
+[no]opcode=### (Set the opcode of the request)
+padding=### (Set padding block size [0])
+[no]qr (Print question before sending)
+[no]question (Control display of question section)
+[no]raflag (Set RA flag in query (+[no]raflag))
+[no]rdflag (Recursive mode (+[no]recurse))
+[no]recurse (Recursive mode (+[no]rdflag))
+retry=### (Set number of UDP retries) [2]
+[no]rrcomments (Control display of per-record comments)
+[no]search (Set whether to use searchlist)
+[no]short (Display nothing except short
form of answers - global option)
+[no]showsearch (Search with intermediate results)
+[no]split=## (Split hex/base64 fields into chunks)
+[no]stats (Control display of statistics)
+subnet=addr (Set edns-client-subnet option)
+[no]tcflag (Set TC flag in query (+[no]tcflag))
+[no]tcp (TCP mode (+[no]vc))
+timeout=### (Set query timeout) [5]
+[no]trace (Trace delegation down from root [+dnssec])
+tries=### (Set number of UDP attempts) [3]
+[no]ttlid (Control display of ttls in records)
+[no]ttlunits (Display TTLs in human-readable units)
+[no]unexpected (Print replies from unexpected sources
default=off)
+[no]unknownformat (Print RDATA in RFC 3597 "unknown" format)
+[no]vc (TCP mode (+[no]tcp))
+[no]yaml (Present the results as YAML)
+[no]zflag (Set Z flag in query)
global d-opts and servers (before host name) affect all queries.
local d-opts and servers (after host name) affect only that lookup.
-h (print help and exit)
-v (print version and exit)
```

View File

@@ -0,0 +1,66 @@
# The `whois` command
The `whois` command in Linux is used to find out information about a domain, such as the owner of the domain, the owner's contact information, and the nameservers that the domain is using.
### Examples:
1. Performs a whois query for the domain name:
```
whois {Domain_name}
```
2. The `-H` option omits the lengthy legal disclaimers that many domain registries deliver along with the domain information.
```
whois -H {Domain_name}
```
### Syntax:
```
whois [ -h HOST ] [ -p PORT ] [ -aCFHlLMmrRSVx ] [ -g SOURCE:FIRST-LAST ]
[ -i ATTR ] [ -S SOURCE ] [ -T TYPE ] object
```
```
whois -t TYPE
```
```
whois -v TYPE
```
```
whois -q keyword
```
### Additional Flags and their Functionalities:
|**Flag** |**Description** |
|:---|:---|
|`-h HOST`, `--host HOST`|Connect to HOST.|
|`-H`|Do not display the legal disclaimers some registries like to show you.|
|`-p`, `--port PORT`|Connect to PORT.|
|`--verbose`|Be verbose.|
|`--help`|Display online help.|
|`--version`|Display client version information. Other options are flags understood by whois.ripe.net and some other RIPE-like servers.|
|`-a`|Also search all the mirrored databases.|
|`-b`|Return brief IP address ranges with abuse contact.|
|`-B`|Disable object filtering *(show the e-mail addresses)*|
|`-c`|Return the smallest IP address range with a reference to an irt object.|
|`-d`|Return the reverse DNS delegation object too.|
|`-g SOURCE:FIRST-LAST`|Search updates from SOURCE database between FIRST and LAST update serial number. It's useful to obtain Near Real Time Mirroring stream.|
|`-G`|Disable grouping of associated objects.|
|`-i ATTR[,ATTR]...`|Search objects having associated attributes. ATTR is attribute name. Attribute value is positional OBJECT argument.|
|`-K`|Return primary key attributes only. Exception is members attribute of set object which is always returned. Another exceptions are all attributes of objects organisation, person, and role that are never returned.|
|`-l`|Return the one level less specific object.|
|`-L`|Return all levels of less specific objects.|
|`-m`|Return all one level more specific objects.|
|`-M`|Return all levels of more specific objects.|
|`-q KEYWORD`|Return list of keywords supported by server. KEYWORD can be version for server version, sources for list of source databases, or types for object types.|
|`-r`|Disable recursive look-up for contact information.|
|`-R`|Disable following referrals and force showing the object from the local copy in the server.|
|`-s SOURCE[,SOURCE]...`|Request the server to search for objects mirrored from SOURCES. Sources are delimited by comma and the order is significant. Use `-q` sources option to obtain list of valid sources.|
|`-t TYPE`|Return the template for a object of TYPE.|
|`-T TYPE[,TYPE]...`|Restrict the search to objects of TYPE. Multiple types are separated by a comma.|
|`-v TYPE`|Return the verbose template for a object of TYPE.|
|`-x`|Search for only exact match on network address prefix.|

View File

@@ -0,0 +1,90 @@
# The `awk` command
Awk is a general-purpose scripting language designed for advanced text processing. It is mostly used as a reporting and analysis tool.
#### WHAT CAN WE DO WITH AWK?
1. AWK Operations:
(a) Scans a file line by line
(b) Splits each input line into fields
(c) Compares input line/fields to pattern
(d) Performs action(s) on matched lines
2. Useful For:
(a) Transform data files
(b) Produce formatted reports
3. Programming Constructs:
(a) Format output lines
(b) Arithmetic and string operations
(c) Conditionals and loops
#### Syntax
```
awk options 'selection _criteria {action }' input-file > output-file
```
#### Example
Consider the following text file as the input file for below example:
```
$cat > employee.txt
```
```
ajay manager account 45000
sunil clerk account 25000
varun manager sales 50000
amit manager account 47000
tarun peon sales 15000
```
1. Default behavior of Awk: By default Awk prints every line of data from the specified file.
```
$ awk '{print}' employee.txt
```
```
ajay manager account 45000
sunil clerk account 25000
varun manager sales 50000
amit manager account 47000
tarun peon sales 15000
```
In the above example, no pattern is given. So the actions are applicable to all the lines. Action print without any argument prints the whole line by default, so it prints all the lines of the file without failure.
2. Print the lines which match the given pattern.
```
awk '/manager/ {print}' employee.txt
```
```
ajay manager account 45000
varun manager sales 50000
amit manager account 47000
```
In the above example, the awk command prints all the line which matches with the manager.
3. Splitting a Line Into Fields : For each record i.e line, the awk command splits the record delimited by whitespace character by default and stores it in the $n variables. If the line has 4 words, it will be stored in $1, $2, $3 and $4 respectively. Also, $0 represents the whole line.
```
$ awk '{print $1,$4}' employee.txt
```
```
ajay 45000
sunil 25000
varun 50000
amit 47000
tarun 15000
```
#### Built-In Variables In Awk
Awk's built-in variables include the field variables—$1, $2, $3, and so on ($0 is the entire line) — that break a line of text into individual words or pieces called fields.
NR: NR command keeps a current count of the number of input records. Remember that records are usually lines. Awk command performs the pattern/action statements once for each record in a file.
NF: NF command keeps a count of the number of fields within the current input record.
FS: FS command contains the field separator character which is used to divide fields on the input line. The default is “white space”, meaning space and tab characters. FS can be reassigned to another character (typically in BEGIN) to change the field separator.
RS: RS command stores the current record separator character. Since, by default, an input line is the input record, the default record separator character is a newline.
OFS: OFS command stores the output field separator, which separates the fields when Awk prints them. The default is a blank space. Whenever print has several parameters separated with commas, it will print the value of OFS in between each parameter.
ORS: ORS command stores the output record separator, which separates the output lines when Awk prints them. The default is a newline character. print automatically outputs the contents of ORS at the end of whatever it is given to print.
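A short illustration of these variables using the `employee.txt` file from above (a minimal sketch; the output shown is what is expected for that sample data):
```
$ awk '{print NR, $1, NF}' employee.txt
```
```
1 ajay 4
2 sunil 4
3 varun 4
4 amit 4
5 tarun 4
```
Here `NR` prints the record (line) number, `$1` the first field, and `NF` the number of fields on each line.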

View File

@@ -0,0 +1,69 @@
# The `pstree` command
The `pstree` command is similar to `ps`, but instead of listing the running processes, it shows them as a tree. The tree-like format is sometimes a more suitable way to display the process hierarchy and makes it much simpler to visualize running processes. The root of the tree is either init or the process with the given PID.
### Examples
1. To display a hierarchical tree structure of all running processes:
```
pstree
```
2. To display a tree with the given process as the root of the tree:
```
pstree [pid]
```
3. To show only those processes that have been started by a user:
```
pstree [USER]
```
4. To show the parent processes of the given process:
```
pstree -s [PID]
```
5. To view the output one page at a time, pipe it to the `less` command:
```
pstree | less
```
### Syntax
`pstree [OPTIONS] [USER or PID]`
### Additional Flags and their Functionalities
|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-a`|`--arguments`|Show command line arguments|
|`-A`|`--ascii`|Use ASCII line drawing characters|
|`-c`|`--compact`|Don't compact identical subtrees|
|`-h`|`--highlight-all`|Highlight current process and its ancestors|
|`-H PID`|`--highlight-pid=PID`|Highlight the given process and its ancestors|
|`-g`|`--show-pgids`|Show process group IDs; implies `-c`|
|`-G`|`--vt100`|Use VT100 line drawing characters|
|`-l`|`--long`|Don't truncate long lines|
|`-n`|`--numeric-sort`|Sort output by PID|
|`-N type`|`--ns-sort=type`|Sort by namespace type (cgroup, ipc, mnt, net, pid, user, uts)|
|`-p`|`--show-pids`|Show PIDs; implies `-c`|
|`-s`|`--show-parents`|Show parents of the selected process|
|`-S`|`--ns-changes`|Show namespace transitions|
|`-t`|`--thread-names`|Show full thread names|
|`-T`|`--hide-threads`|Hide threads, show only processes|
|`-u`|`--uid-changes`|Show uid transitions|
|`-U`|`--unicode`|Use UTF-8 (Unicode) line drawing characters|
|`-V`|`--version`|Display version information|
|`-Z`|`--security-context`|Show SELinux security contexts|
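Several of these flags can be combined. For instance (a typical combination, not taken from the examples above), the following shows PIDs and command-line arguments while paging the output:
```
pstree -p -a | less
```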

View File

@@ -0,0 +1,46 @@
# The `tree` command
The `tree` command in Linux recursively lists directories as tree structures. Each listing is indented according to its depth relative to the root of the tree.
### Examples:
1. Show a tree representation of the current directory.
```
tree
```
2. Limit the depth of recursion with `-L NUMBER` to avoid displaying very deep trees:
```
tree -L 2 /
```
### Syntax:
```
tree [-acdfghilnpqrstuvxACDFQNSUX] [-L level [-R]] [-H baseHREF] [-T title]
[-o filename] [--nolinks] [-P pattern] [-I pattern] [--inodes]
[--device] [--noreport] [--dirsfirst] [--version] [--help] [--filelimit #]
[--si] [--prune] [--du] [--timefmt format] [--matchdirs] [--from-file]
[--] [directory ...]
```
### Additional Flags and their Functionalities:
|**Flag** |**Description** |
|:---|:---|
|`-a`|Print all files, including hidden ones.|
|`-d`|Only list directories.|
|`-l`|Follow symbolic links into directories.|
|`-f`|Print the full path to each listing, not just its basename.|
|`-x`|Do not move across file-systems.|
|`-L #`|Limit recursion depth to #.|
|`-P REGEX`|Recurse, but only list files that match the REGEX.|
|`-I REGEX`|Recurse, but do not list files that match the REGEX.|
|`--ignore-case`|Ignore case while pattern-matching.|
|`--prune`|Prune empty directories from output.|
|`--filelimit #`|Omit directories that contain more than # files.|
|`-o FILE`|Redirect STDOUT output to FILE.|
|`-i`|Do not output indentation.|
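These flags can be combined. For example (a small sketch where `docs/` and `tree.txt` are placeholder names), list only files matching `*.md` up to two levels deep and write the result to a file:
```
tree -L 2 -P '*.md' -o tree.txt docs/
```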

View File

@@ -0,0 +1,184 @@
# The `printf` command
This command lets you print the value of a variable by formatting it using rules. It is pretty similar to printf in the C language.
### Syntax:
```
printf [-v variable_name] format [arguments]
```
### Options:
| OPTION | Description |
| --- | --- |
| `FORMAT` | FORMAT controls the output, and defines the way that the ARGUMENTs will be expressed in the output |
| `ARGUMENT` | An ARGUMENT will be inserted into the formatted output according to the definition of FORMAT |
| `--help` | Display help and exit |
| `--version` | Output version information and exit |
### Formats:
The anatomy of the FORMAT string can be extracted into three different parts,
- _ordinary characters_, which are copied verbatim to the output.
- _interpreted character_ sequences, which are escaped with a backslash ("\\").
- _conversion specifications_, this one will define the way the ARGUMENTs will be expressed as part of the output.
You can see those parts in this example,
```
printf " %s is where over %d million developers shape \"the future of sofware.\" " Github 65
```
The output:
```
Github is where over 65 million developers shape "the future of software."
```
There are two conversion specifications, `%s` and `%d`, and there are two escaped characters, which are the opening and closing double-quotes wrapping the words _the future of software_. Everything else consists of ordinary characters.
### Conversion Specifications:
Each conversion specification begins with a `%` and ends with a `conversion character`. Between the `%` and the `conversion character` there may be, in order:
| | |
| --- | --- |
| `-` | A minus sign. This tells printf to left-adjust the conversion of the argument |
| _number_ | An integer that specifies field width; printf prints a conversion of ARGUMENT in a field at least number characters wide. If necessary it will be padded on the left (or right, if left-adjustment is called for) to make up the field width |
| `.` | A period, which separates the field width from the precision |
| _number_ | An integer, the precision, which specifies the maximum number of characters to be printed from a string, or the number of digits after the decimal point of a floating-point value, or the minimum number of digits for an integer |
| `h` or `l` | These differentiate between a short and a long integer, respectively, and are generally only needed for computer programming |
The conversion characters tell `printf` what kind of argument to print out, are as follows:
| Conversion char | Argument type |
| --- | --- |
| `s` | A string |
| `c` | An integer, expressed as the character corresponding to its ASCII code |
| `d, i` | An integer as a decimal number |
| `o` | An integer as an unsigned octal number |
| `x, X` | An integer as an unsigned hexadecimal number |
| `u` | An integer as an unsigned decimal number |
| `f` | A floating-point number with a default precision of 6 |
| `e, E` | A floating-point number in scientific notation |
| `p` | A memory address pointer |
| `%` | No conversion |
Here are some examples of how `printf` formats an ARGUMENT. Any word can be used; here we use the word `linuxcommand` and enclose the output in quotes so the padding relative to the whitespace is easier to see.
| FORMAT string | ARGUMENT string | Output string |
| --- | --- | --- |
| `"%s"` | `"linuxcommand"` | "linuxcommand" |
| `"%5s"` | `"linuxcommand"` | "linuxcommand" |
| `"%.5s"` | `"linuxcommand"` | "linux" |
| `"%-8s"` | `"linuxcommand"` | "linuxcommand" |
| `"%-15s"` | `"linuxcommand"` | "linuxcommand " |
| `"%12.5s"` | `"linuxcommand"` | " linux" |
| `"%-12.5"` | `"linuxcommand"` | "linux " |
| `"%-12.4"` | `"linuxcommand"` | "linu " |
Notes:
- `printf` requires the number of conversion strings to match the number of ARGUMENTs
- `printf` maps the conversion strings one-to-one, and expects to find exactly one ARGUMENT for each conversion string
- Conversion strings are always interpreted from left to right.
Here's the example:
The input
```
printf "We know %f is %s %d" 12.07 "larger than" 12
```
The output:
```
We know 12.070000 is larger than 12
```
The example above shows 3 arguments, _12.07_, _larger than_, and _12_. Each of them is interpreted from left to right, one-to-one with the given 3 conversion strings (`%f`, `%s`, `%d`).
Character sequences which are interpreted as special characters by `printf`:
| Escaped char | Description |
| --- | --- |
| `\a` | issues an alert (plays a bell). Usually ASCII BEL characters |
| `\b` | prints a backspace |
| `\c` | instructs `printf` to produce no further output |
| `\e` | prints an escape character (ASCII code 27) |
| `\f` | prints a form feed |
| `\n` | prints a newline |
| `\r` | prints a carriage return |
| `\t` | prints a horizontal tab |
| `\v` | prints a vertical tab |
| `\"` | prints a double-quote (") |
| `\\` | prints a backslash (\) |
| `\NNN` | prints a byte with octal value `NNN` (1 to 3 digits) |
| `\xHH` | prints a byte with hexadecimal value `HH` (1 to 2 digits) |
| `\uHHHH`| prints the unicode character with hexadecimal value `HHHH` (4 digits) |
| `\UHHHHHHHH` | prints the unicode character with hexadecimal value `HHHHHHHH` (8 digits) |
| `%b` | prints ARGUMENT as a string with "\\" escapes interpreted as listed above, with the exception that octal escapes take the form `\0` or `\0NN` |
### Examples:
The format specifiers usually used with printf are stated in the examples below:
- %s
```
$printf "%s\n" "Printf command documentation!"
```
This will print `Printf command documentation!` in the shell.
### Other important attributes of printf command:
- `%b` - Prints arguments by expanding backslash escape sequences.
- `%q` - Prints arguments in a shell-quoted format which is reusable as input.
- `%d` , `%i` - Prints arguments in the format of signed decimal integers.
- `%u` - Prints arguments in the format of unsigned decimal integers.
- `%o` - Prints arguments in the format of unsigned octal(base 8) integers.
- `%x`, `%X` - Prints arguments in the format of unsigned hexadecimal(base 16) integers. %x prints lower-case letters and %X prints upper-case letters.
- `%e`, `%E` - Prints arguments in the format of floating-point numbers in exponential notation. %e prints lower-case letters and %E prints upper-case.
- `%a`, `%A` - Prints arguments in the format of floating-point numbers in hexadecimal(base 16) fractional notation. %a prints lower-case letters and %A prints upper-case.
- `%g`, `%G` - Prints arguments in the format of floating-point numbers in normal or exponential notation, whichever is more appropriate for the given value and precision. %g prints lower-case letters and %G prints upper-case.
- `%c` - Prints arguments as single characters.
- `%f` - Prints arguments as floating-point numbers.
- `%s` - Prints arguments as strings.
- `%%` - Prints a "%" symbol.
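As a quick illustration of a couple of these specifiers (a minimal sketch; the exact `%q` quoting may differ slightly between shells):
```
printf '%q\n' 'hello world'   # prints: hello\ world
printf '%x\n' 255             # prints: ff
```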
#### More Examples:
The input:
```
printf 'Hello\nyoung\nman!'
```
The output:
```
Hello
young
man!
```
The two `\n` sequences break the sentence into three lines.
The input:
```
printf "%f\n" 2.5 5.75
```
The output
```
2.500000
5.750000
```
The `%f` specifier combined with `\n` prints the two arguments as floating-point numbers, each on a separate new line.

View File

@@ -0,0 +1,42 @@
# The `cut` command
The `cut` command lets you remove sections from each line of files. Print selected parts of lines from each FILE to standard output. With no FILE, or when FILE is -, read standard input.
### Usage and Examples:
1. Selecting specific fields in a file
```
cut -d "delimiter" -f (field number) file.txt
```
2. Selecting specific characters:
```
cut -c [(k)-(n)/(k),(n)/(n)] filename
```
Here, **k** denotes the starting position and **n** the ending position of the characters to select in each line when _k_ and _n_ are separated by "-"; otherwise, each number is simply an individual character position to select from each line of the input file.
3. Selecting specific bytes:
```
cut -b 1,2,3 filename //select bytes 1,2 and 3
cut -b 1-4 filename //select bytes 1 through 4
cut -b 1- filename //select bytes 1 through the end of file
cut -b -4 filename //select bytes from the beginning till the 4th byte
```
**Tabs and backspaces** are treated as characters of 1 byte.
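4. A practical combination of `-d` and `-f` (a small additional example): print the first field (the username) from `/etc/passwd`, which uses `:` as its field delimiter.
```
cut -d ':' -f 1 /etc/passwd
```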
### Syntax:
```
cut OPTION... [FILE]...
```
### Additional Flags and their Functionalities:
|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-b`|`--bytes=LIST`|select only these bytes|
|`-c`|`--characters=LIST`|select only these characters|
|`-d`|`--delimiter=DELIM`|use DELIM instead of TAB for field delimiter|
|`-f`|`--fields`|select only these fields; also print any line that contains no delimiter character, unless the -s option is specified|
|`-s`|`--only-delimited`|do not print lines not containing delimiters|
|`-z`|`--zero-terminated`|line delimiter is NUL, not newline|

View File

@@ -0,0 +1,53 @@
# The `sed` command
`sed` command stands for stream editor. A stream editor is used to perform basic text transformations on an input stream (a file or input from a pipeline). For instance, it can perform lots of functions on files like searching, find and replace, insertion or deletion. While in some ways it is similar to an editor which permits scripted edits (such as `ed`), `sed` works by making only one pass over the input(s), and is consequently more efficient. But it is sed's ability to filter text in a pipeline that particularly distinguishes it from other types of editors.
The most common use of the `sed` command is for substitution, or find and replace. By using sed you can edit files even without opening them, which is a much quicker way to find and replace something in a file. It supports basic and extended regular expressions that allow you to match complex patterns. Most Linux distributions come with GNU `sed` pre-installed by default.
### Examples:
1. To Find and Replace String with `sed`
```
sed -i 's/{search_regex}/{replace_value}/g' input-file
```
2. For Recursive Find and Replace *(along with `find`)*
> Sometimes you may want to recursively search directories for files containing a string and replace the string in all files. This can be done using commands such as find to recursively find files in the directory and piping the file names to `sed`.
The following command will recursively search for files in the current working directory and pass the file names to `sed`.
```
find . -type f -exec sed -i 's/{search_regex}/{replace_value}/g' {} +
```
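3. Another common use (a small additional sketch, not from the original examples) is printing only a range of lines by combining `-n` with the `p` command:
```
sed -n '2,4p' input-file
```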
### Syntax:
```
sed [OPTION]... {script-only-if-no-other-script} [INPUT-FILE]...
```
- `OPTION` - sed options in-place, silent, follow-symlinks, line-length, null-data ...etc.
- `{script-only-if-no-other-script}` - Add the script to command if available.
- `INPUT-FILE` - Input Stream, A file or input from a pipeline.
If no option is given, then the first non-option argument is taken as the sed script to interpret. All remaining arguments are names of input files; if no input files are specified, then the standard input is read.
GNU sed home page: [http://www.gnu.org/software/sed/](http://www.gnu.org/software/sed/)
|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-i[SUFFIX]`|<center>--in-place[=SUFFIX]</center>|Edit files in place (makes backup if SUFFIX supplied).|
|`-n`|<center>--quiet, --silent</center>|Suppress automatic printing of pattern space.|
|`-e script`|<center>--expression=script</center>|Add the script to the commands to be executed.|
|`-f script-file`|<center>--file=script-file</center>|Add the contents of script-file to the commands to be executed.|
|`-l N`|<center>--line-length=N</center>|Specify the desired line-wrap length for the `l` command.|
|`-r`|<center>--regexp-extended</center>|Use extended regular expressions in the script.|
|`-s`|<center>--separate</center>|Consider files as separate rather than as a single continuous long stream.|
|`-u`|<center>--unbuffered</center>|Load minimal amounts of data from the input files and flush the output buffers more often.|
|`-z`|<center>--null-data</center>|Separate lines by NULL characters.|
### Before you begin
It may seem complicated and complex at first, but searching and replacing text in files with sed is very simple.
To find out more: [https://www.gnu.org/software/sed/manual/sed.html](https://www.gnu.org/software/sed/manual/sed.html)

View File

@@ -0,0 +1,29 @@
# The `rmdir` command
The **rmdir** command is used to remove empty directories from the filesystem in Linux. The rmdir command removes each and every directory specified in the command line only if these directories are empty.
### Usage and Examples:
1. remove directory and its ancestors
```
rmdir -p a/b/c // is similar to 'rmdir a/b/c a/b a'
```
2. remove multiple directories
```
rmdir a b c // removes empty directories a,b and c
```
### Syntax:
```
rmdir [OPTION]... DIRECTORY...
```
### Additional Flags and their Functionalities:
|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-`|`--ignore-fail-on-non-empty`|ignore each failure that is solely because a directory is non-empty|
|`-p`|`--parents`|remove DIRECTORY and its ancestors|
|`-v`|`--verbose`|output a diagnostic for every directory processed|

View File

@@ -0,0 +1,45 @@
# The `screen` command
`screen` - With screen you can start a screen session and then open any number of windows (virtual terminals) inside that session.
Processes running in Screen will continue to run when their window is not visible, even if you get disconnected. This is very
handy for running long sessions, such as bash scripts that take a long time to complete.
To start a screen session you type `screen`, this will open a new screen session with a virtual terminal open.
Below are some most common commands for managing Linux Screen Windows:
|**Command** |**Description** |
|:---|:---|
|`Ctrl+a`+ `c`|Create a new window (with shell).|
|`Ctrl+a`+ `"`|List all windows.
|`Ctrl+a`+ `0`|Switch to window 0 (by number).
|`Ctrl+a`+ `A`|Rename the current window.
|`Ctrl+a`+ `S`|Split current region horizontally into two regions.
|`Ctrl+a`+ `'`|Split current region vertically into two regions.
|`Ctrl+a`+ `tab`|Switch the input focus to the next region.
|`Ctrl+a`+ `Ctrl+a`|Toggle between the current and previous windows
|`Ctrl+a`+ `Q`|Close all regions but the current one.
|`Ctrl+a`+ `X`|Close the current region.
## Restore a Linux Screen
To restore a screen session, type `screen -r`. If you have more than one open screen session, you have to add the
session ID to the command to connect to the right session.
## Listing all open screen sessions
To find the session ID you can list the current running screen sessions with:
`screen -ls`
There are screens on:
```
18787.pts-0.your-server (Detached)
15454.pts-0.your-server (Detached)
2 Sockets in /run/screens/S-yourserver.
```
If you want to restore screen 18787.pts-0, then type the following command:
`screen -r 18787`
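## Named screen sessions
You can also give sessions explicit names, which makes restoring easier (a small sketch; `mysession` is a placeholder name):
```
screen -S mysession     # start a session named "mysession"
# detach with Ctrl+a d, then later:
screen -r mysession     # reattach to it by name
```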

View File

@@ -0,0 +1,86 @@
# The `nc` command
The `nc` (or netcat) command is used to perform any operation involving TCP (Transmission Control Protocol, connection oriented), UDP (User Datagram Protocol, connection-less, no guarantee of data delivery) or UNIX-domain sockets. It can be thought of as swiss-army knife for communication protocol utilities.
### Syntax:
```
nc [options] [ip] [port]
```
### Examples:
#### 1. Open a TCP connection to port 80 of host, using port 1337 as source port with timeout of 5s:
```bash
$ nc -p 1337 -w 5 host.ip 80
```
#### 2. Open a UDP connection to port 80 on host:
```bash
$ nc -u host.ip 80
```
#### 3. Create and listen on UNIX-domain stream socket:
```bash
$ nc -lU /var/tmp/dsocket
```
#### 4. Create a basic server/client model:
This creates a connection, with no specific server/client sides with respect to nc, once the connection is established.
```bash
$ nc -l 1234 # in one console
$ nc 127.0.0.1 1234 # in another console
```
#### 5. Build a basic data transfer model:
After the file has been transferred, sequentially, the connection closes automatically
```bash
$ nc -l 1234 > filename.out # to start listening in one console and collect data
$ nc host.ip 1234 < filename.in
```
#### 6. Talk to servers:
Basic example of retrieving the homepage of the host, along with headers.
```bash
$ printf "GET / HTTP/1.0\r\n\r\n" | nc host.ip 80
```
#### 7. Port scanning:
Checking which ports are open and running services on target machines. `-z` flag commands to inform about those rather than initiate a connection.
```bash
$ nc -zv host.ip 20-2000 # range of ports to check for
```
### Flags and their Functionalities:
| **Short Flag** | **Description** |
| -------------- | ----------------------------------------------------------------- |
| `-4` | Forces nc to use IPv4 addresses |
| `-6` | Forces nc to use IPv6 addresses |
| `-b` | Allow broadcast |
| `-D` | Enable debugging on the socket |
| `-i` | Specify time interval delay between lines sent and received |
| `-k` | Stay listening for another connection after current is over |
| `-l` | Listen for incoming connection instead of initiate one to remote |
| `-T`           | Specify the IP Type of Service (TOS) for the connection           |
| `-p` | Specify source port to be used |
| `-r` | Specify source and/or destination ports randomly |
| `-s` | Specify IP of interface which is used to send the packets |
| `-U` | Use UNIX-domain sockets |
| `-u` | Use UDP instead of TCP as protocol |
| `-w` | Declare a timeout threshold for idle or unestablished connections |
| `-x` | Should use specified protocol when talking to proxy server |
| `-z` | Specify to scan for listening daemons, without sending any data |

View File

@@ -0,0 +1,48 @@
# The `make` command
The `make` command is used to automate the reuse of multiple commands in a certain directory structure.
An example for that would be the use of `terraform init`, `terraform plan`, and `terraform validate` while having to change different subscriptions in Azure. This is usually done in the following steps:
```
az account set --subscription "Subscription - Name"
terraform init
```
The `make` command can help us by automating all of that in just one go:
```make tf-init```
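A minimal sketch of what the corresponding `Makefile` target could look like (the subscription name is the same placeholder used above; note that recipe lines in a Makefile must be indented with a tab):
```
tf-init:
	az account set --subscription "Subscription - Name"
	terraform init
```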
### Syntax:
```
make [ -f makefile ] [ options ] ... [ targets ] ...
```
### Example use (guide):
#### 1. Create `Makefile` in your guide directory
#### 2. Include the following in your `Makefile` :
```
hello-world:
	echo "Hello, World!"
hello-bobby:
	echo "Hello, Bobby!"
touch-letter:
	echo "This is a text that is being inputted into our letter!" > letter.txt
clean-letter:
	rm letter.txt
```
#### 3. Execute ```make hello-world``` - this echoes "Hello, World" in our terminal.
#### 4. Execute ```make hello-bobby``` - this echoes "Hello, Bobby!" in our terminal.
#### 5. Execute ```make touch-letter``` - This creates a text file named `letter.txt` and populates a line in it.
#### 6. Execute ```make clean-letter```
### References to lengthier and more detailed tutorials:
[linoxide - Linux make command examples](https://linoxide.com/linux-make-command-examples/)
[makefiletutorial.com - the name itself gives it away](https://makefiletutorial.com/)

View File

@@ -0,0 +1,90 @@
# The `basename` command
The `basename` command is a command-line utility that strips the directory part from given file names. Optionally, it can also remove any trailing suffix. It is a simple command that accepts only a few options.
### Examples
The most basic example is to print the file name with the leading directories removed:
```bash
basename /etc/bar/foo.txt
```
The output will include the file name:
```bash
foo.txt
```
If you run basename on a path string that points to a directory, you will get the last segment of the path. In this example, /etc/bar is a directory.
```bash
basename /etc/bar
```
Output
```bash
bar
```
The basename command removes any trailing `/` characters:
```bash
basename /etc/bar/foo.txt/
```
Output
```bash
foo.txt
```
### Options
1. By default, each output line ends in a newline character. To end the lines with NUL, use the -z (--zero) option.
```bash
$ basename -z /etc/bar/foo.txt
foo.txt$
```
2. The `basename` command can accept multiple names as arguments. To do so, invoke the command with the `-a` (`--multiple`) option, followed by the list of files separated by space. For example, to get the file names of `/etc/bar/foo.txt` and `/etc/spam/eggs.docx` you would run:
```bash
basename -a /etc/bar/foo.txt /etc/spam/eggs.docx
```
```bash
foo.txt
eggs.docx
```
### Syntax
The basename command supports two syntax formats:
```bash
basename NAME [SUFFIX]
basename OPTION... NAME...
```
### Additional functionalities
**Removing a Trailing Suffix**: To remove any trailing suffix from the file name, pass the suffix as a second argument:
```bash
basename /etc/hostname name
host
```
Generally, this feature is used to strip file extensions.
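For example, stripping a `.txt` extension:
```bash
basename /etc/bar/foo.txt .txt
foo
```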
### Help Command
Run the following command to view the complete guide to `basename` command.
```bash
man basename
```

View File

@@ -0,0 +1,33 @@
# The `banner` command
The `banner` command writes ASCII character Strings to standard output in large letters. Each line in the output can be up to 10 uppercase or lowercase characters in length. On output, all characters appear in uppercase, with the lowercase input characters appearing smaller than the uppercase input characters.
### Examples :
1. To display a banner at the workstation, enter:
```
banner LINUX!
```
2. To display more than one word on a line, enclose the text in quotation marks, as follows:
```
banner "Intro to" Linux
```
> This displays Intro to on one line and Linux on the next
3. Printing “101LinuxCommands” in large letters.
```
banner 101LinuxCommands
```
> It will print only 101LinuxCo as banner has a default capacity of 10
---

View File

@@ -0,0 +1,64 @@
# The `which` command
The `which` command identifies the executable binary that launches when you issue a command to the shell.
If you have different versions of the same program on your computer, you can use which to find out which one the shell will use.
It has 3 return status as follows:
0 : If all specified commands are found and executable.
1 : If one or more specified commands is nonexistent or not executable.
2 : If an invalid option is specified.
### Examples
1. To find the full path of the ls command, type the following:
```
which ls
```
2. We can provide more than one arguments to the which command:
```
which netcat uptime ping
```
The which command searches from left to right, and if more than one matches are found in the directories listed in the PATH path variable, which will print only the first one.
3. To display all the paths for the specified command:
```
which -a [filename]
```
4. To display the path of node executable files, execute the command:
```
which node
```
5. To display the path of Java executable files, execute:
```
which java
```
### Syntax
```
which [filename1] [filename2] ...
```
You can pass multiple programs and commands to which, and it will check them in order.
For example:
```which ping cat uptime date head```
### Options
-a : List all instances of executables found (instead of just the first
one of each).
-s : No output, just return 0 if all the executables are found, or 1
if some were not found

View File

@@ -0,0 +1,35 @@
# The `nice/renice` command
The `nice`/`renice` commands are used to modify the priority of the program to be executed.
The priority range is between -20 and 19 where 19 is the lowest priority.
### Examples:
1. Running cc command in the background with a lower priority than default (slower):
```
nice -n 15 cc -c *.c &
```
2. Increase the priority of all processes belonging to the group "test":
```
renice -n -20 -g test
```
### Syntax:
```
nice [ -Increment| -n Increment ] Command [ Argument ... ]
```
### Flags :
|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-Increment`|<center>-</center>|Increment is the value of priority you want to assign.|
|`-n Increment`|<center>-</center>|Same as `-Increment`|

View File

@@ -0,0 +1,42 @@
# The `wc` command
The `wc` command stands for word count. It's used to count the number of lines, words, and bytes *(characters)* in a file or standard input, and then prints the result to the standard output.
### Examples:
1. To count the number of lines, words and characters in a file in order:
```
wc file.txt
```
2. To count the number of directories in a directory:
```
ls -F | grep / | wc -l
```
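3. To count the number of words coming from standard input (a small additional example):
```
echo "Hello Linux user" | wc -w
```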
### Syntax:
```bash
wc [OPTION]... [FILE]...
```
### Additional Flags and their Functionalities:
|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-c` | `--bytes` | print the byte counts|
|`-m` | `--chars` | print the character counts|
|`-l` | `--lines` | print the newline counts|
|<center>-</center> | `--files0-from=F` | read input from the files specified by NUL-terminated names in file F. If F is `-` then read names from standard input|
|`-L` | `--max-line-length` | print the maximum display width|
|`-w` | `--words` | print the word counts|
### Additional Notes:
* Passing more than one file to the `wc` command prints the counts for each file plus the total counts of all of them.
* You can combine more than one flag to print exactly the counts you want.

View File

@@ -0,0 +1,65 @@
# The `tr` command
The tr command in UNIX is a command line utility for translating or deleting characters.
It supports a range of transformations including uppercase to lowercase, squeezing repeating characters, deleting specific characters and basic find and replace.
It can be used with UNIX pipes to support more complex translation. tr stands for translate.
### Examples:
1. Convert all lowercase letters in file1 to uppercase.
```
$ cat file1
foo
bar
baz
$ tr a-z A-Z < file1
FOO
BAR
BAZ
```
2. Make consecutive line breaks into one.
```
$ cat file1
foo
bar
baz
$ tr -s "\n" < file1
foo
bar
baz
```
3. Remove the newline code.
```
$ cat file1
foo
bar
baz
$ tr -d "\n" < file1
foobarbaz%
```
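4. Convert lowercase letters to uppercase on piped input (a small additional example using POSIX character classes).
```
$ echo "hello 123" | tr '[:lower:]' '[:upper:]'
HELLO 123
```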
### Syntax:
The general syntax for the tr command is as follows:
```
tr [options] string1 [string2]
```
### Additional Flags and their Functionalities:
| **Short Flag** | **Long Flag** | **Description** |
| :------------- | :------------ | :------------------------------------------------------------------------------------------------------------ |
| `-C` | | Complement the set of characters in string1, that is `-C ab` includes every character except for `a` and `b`. |
| `-c` | | Same as -C. |
| `-d` | | Delete characters in string1 from the input. |
| `-s` | | If there is a sequence of characters in string1, combine them into one. |

View File

@@ -0,0 +1,27 @@
# The `wait` command
The `wait` command waits for a running process with the given process ID to complete. If no process ID is given, it waits for all current child processes to complete.
## Example
This example shows how the `wait` command works : <br />
**Step-1**:
Create a file named "wait_example.sh" and add the following script to it.
```
#!/bin/bash
echo "Wait command" &
process_id=$!
wait $process_id
echo "Exited with status $?"
```
**Step-2**:
Run the file with bash command.
```
$ bash wait_example.sh
```
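When run, the script should produce output similar to the following (the exact interleaving can vary slightly because the `echo` runs in the background):
```
Wait command
Exited with status 0
```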

View File

@@ -0,0 +1,30 @@
# The `zcat` command
The `zcat` command allows you to look at the contents of a compressed file.
### Examples:
1. To view the content of a compressed file:
```
~$ zcat test.txt.gz
Hello World
```
2. It also works with multiple files:
```
~$ zcat test2.txt.gz test.txt.gz
hello
Hello world
```
### Syntax:
The general syntax for the `zcat` command is as follows:
```
zcat [ -n ] [ -V ] [ File ... ]
```

View File

@@ -0,0 +1,55 @@
# The `fold` command
The `fold` command in Linux wraps each line in an input file to fit a specified width and prints it to the standard output.
By default, it wraps lines at a maximum width of 80 columns but this is configurable.
To fold input using the fold command pass a file or standard input to the command.
### Syntax:
```
fold [OPTION]... [FILE]...
```
### Options
**-w** : By using this option in the fold command, we can limit the width of the output to a given number of columns,
changing it from the default width of 80.
Syntax:
```
fold -w[n] [FILE]
```
Example: wrap the lines of file1.txt to a width of 60 columns
```
fold -w60 file1.txt
```
**-b** : This option of the fold command limits the width of the output by the number of bytes rather than the number of columns,
enforcing the output width in bytes.
```
fold -b[n] [FILE]
```
Example: limit the output width of the file to 40 bytes; the command breaks the output at 40 bytes.
```
fold -b40 file1.txt
```
**-s** : This option is used to break the lines on spaces so that words are not broken.
If a segment of the line contains a blank character within the first width column positions, break the line after the last such blank character meeting the width constraints.
```
fold -w[n] -s [FILE]
```
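For example (using the same file1.txt as in the earlier examples), wrap the lines at 30 columns without breaking words:
```
fold -w30 -s file1.txt
```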

View File

@@ -0,0 +1,28 @@
# The `quota` command
The `quota` command displays disk usage and limits.
### Installation:
You can simply go ahead and install quota on ubuntu systems by running:
```
sudo apt-get install quota
```
For Debian, run the install command as root without sudo:
```
apt-get install quota
```
### Syntax:
The general syntax for the `quota` command is as follows:
```
quota [ -u [ User ] ] [ -g [ Group ] ] [ -v | -q ]
```

View File

@@ -0,0 +1,55 @@
# The `aplay` command
`aplay` is a command-line audio player for ALSA (Advanced Linux Sound Architecture) sound card drivers. It supports several file formats and multiple soundcards with multiple devices. It is basically used to play audio from the command-line interface. aplay is much the same as arecord, only it plays instead of recording. For supported soundfile formats, the sampling rate, bit depth, and so forth can be automatically determined from the soundfile header.
## Syntax:
```
$ aplay [flags] [filename [filename]] ...
```
## Options:
```
-h, --help : Show the help information.
-d, --duration=# : Interrupt after # seconds.
-r, --rate=# : Sampling rate in Hertz. The default rate is 8000 Hertz.
    --version : Print current version.
-l, --list-devices : List all soundcards and digital audio devices.
-L, --list-pcms : List all PCMs (Pulse Code Modulation) defined.
-D, --device=NAME : Select PCM by name.
```
Note: This command contains various other options that we normally don't need. If you want to know more about them, you can simply run the following command in your terminal.
```
aplay --help
```
## Examples :
1. To play audio for only 10 seconds at a 2500 Hz sampling rate.
```
$ aplay -d 10 -r 2500 sample.mp3
```
> Plays the sample.mp3 file for only 10 seconds at a 2500 Hz sampling rate.
2. To play the full audio clip at a 2500 Hz sampling rate.
```
$ aplay -r 2500 sample.mp3
```
> Plays the sample.mp3 file at a 2500 Hz sampling rate.
3. To Display version information.
```
$ aplay --version
```
> Displays version information. For me it shows aplay: version 1.1.0
---

View File

@@ -0,0 +1,82 @@
# The `spd-say` command
`spd-say` sends a text-to-speech output request to the speech-dispatcher process, which handles it
and ideally outputs the result to the audio system.
## Syntax:
```
$ spd-say [options] "some text"
```
## Options:
```
-r, --rate
Set the rate of the speech (between -100 and +100, default: 0)
-p, --pitch
Set the pitch of the speech (between -100 and +100, default: 0)
-i, --volume
Set the volume (intensity) of the speech (between -100 and +100, default: 0)
-o, --output-module
Set the output module
-l, --language
Set the language (iso code)
-t, --voice-type
Set the preferred voice type (male1, male2, male3, female1, female2, female3,
child_male, child_female)
-m, --punctuation-mode
Set the punctuation mode (none, some, all)
-s, --spelling
Spell the message
-x, --ssml
Set SSML mode on (default: off)
-e, --pipe-mode
Pipe from stdin to stdout plus Speech Dispatcher
-P, --priority
Set priority of the message (important, message, text, notification, progress;
default: text)
-N, --application-name
Set the application name used to establish the connection to specified string value
(default: spd-say)
-n, --connection-name
Set the connection name used to establish the connection to specified string value
(default: main)
-w, --wait
Wait till the message is spoken or discarded
-S, --stop
Stop speaking the message being spoken in Speech Dispatcher
-C, --cancel
Cancel all messages in Speech Dispatcher
-v, --version
Print version and copyright info
-h, --help
Print this info
```
## Examples :
1. To Play the given text as the sound.
```
$ spd-say "Hello"
```
>Plays "Hello" in sound.

View File

@@ -0,0 +1,15 @@
# The `xeyes` command
Xeyes is a graphical user interface program that creates a set of eyes on the desktop that follow the movement of the mouse cursor. It may seem more of a fun command than a useful one, but being fun is useful in its own way.
### Syntax:
```
xeyes
```
### What is the purpose of xeyes?
`xeyes` is not just for fun. The purpose of this program is to help you follow the mouse pointer, which is sometimes hard to see. It is very useful on multi-headed computers, where monitors are separated by some distance: if someone (say, a teacher at school) wants to present something on the screen, the others can easily follow the mouse on their own monitors with `xeyes`.

View File

@@ -0,0 +1,32 @@
# The `nl` command
The `nl` command numbers the lines in a file. As a different way of viewing the contents of a file, the `nl` command can be very useful for many tasks.
## Syntax
```
nl [ -b Type ] [ -f Type ] [ -h Type ] [ -l Number ] [ -d Delimiter ] [ -i Number ] [ -n Format ] [ -v Number ] [ -w Number ] [ -p ] [ -s Separator ] [ File ]
```
## Examples:
1. To number all lines:
```
nl -ba chap1
```
2. Displays all the text lines:
```
[server@ssh ~]$ nl states
1 Alabama
2 Alaska
3 Arizona
4 Arkansas
5 California
6 Colorado
7 Connecticut.
8 Delaware
```
3. Specify a different line number format
```
nl -i10 -nrz -s:: -v10 -w4 chap1
```
You can name only one file on the command line. You can list the flags and the file name in any order.

View File

@@ -0,0 +1,68 @@
# The `pidof` command
`pidof` is a command-line utility that allows you to find the process ID of a running program.
## Syntax
```
pidof [OPTIONS] PROGRAM_NAME
```
To view the help message and all options of the command:
```
[user@home ~]$ pidof -h
-c Return PIDs with the same root directory
-d <sep> Use the provided character as output separator
-h Display this help text
-n Avoid using stat system function on network shares
-o <pid> Omit results with a given PID
-q Quiet mode. Do not display output
-s Only return one PID
-x Return PIDs of shells running scripts with a matching name
-z List zombie and I/O waiting processes. May cause pidof to hang.
```
## Examples:
To find the PID of the SSH server, you would run:
```
pidof sshd
```
If there are running processes with names matching `sshd`, their PIDs will be displayed on the screen. If no matches are found, the output will be empty.
```
# Output
4382 4368 811
```
`pidof` returns `0` when at least one running program matches with the requested name. Otherwise, the exit code is `1`. This can be useful when writing shell scripts.
To be sure that only the PIDs of the program you are searching for are displayed, use the full pathname to the program as an argument. For example, if you have two running programs with the same name located in two different directories pidof will show PIDs of both running programs.
By default, all PIDs of the matching running programs are displayed. Use the `-s` option to force pidof to display only one PID:
```
pidof -s program_name
```
The `-o` option allows you to exclude a process with a given PID from the command output:
```
pidof -o pid program_name
```
When pidof is invoked with the `-o` option, you can use a special PID named %PPID that represents the calling shell or shell script.
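For example (a typical usage sketch), to list the PIDs of all `bash` processes except the shell that runs the command itself:
```
pidof -o %PPID bash
```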
To return only the PIDs of the processes that are running with the same root directory, use the `-c` option.
This option works only when pidof is run as the `root` user or via `sudo`:
```
pidof -c program_name
```
## Conclusion
The `pidof` command is used to find out the PIDs of a specific running program.
`pidof` is a simple command that doesn't have a lot of options. Typically you will invoke pidof only with the name of the program you are searching for.

View File

@@ -0,0 +1,172 @@
# The `shuf` command
The `shuf` command in Linux writes a random permutation of the input lines to standard output. It pseudo randomizes an input in the same way as the cards are shuffled. It is a part of GNU Coreutils and is not a part of POSIX. This command reads either from a file or standard input in bash and randomizes those input lines and displays the output.
## Syntax
```
# file shuf
shuf [OPTION] [FILE]
# list shuf
shuf -e [OPTION]... [ARG]
# range shuf
shuf -i LO-HI [OPTION]
```
Like other Linux commands, the `shuf` command comes with a `--help` option:
```
[user@home ~]$ shuf --help
Usage: shuf [OPTION]... [FILE]
or: shuf -e [OPTION]... [ARG]...
or: shuf -i LO-HI [OPTION]...
Write a random permutation of the input lines to standard output.
With no FILE, or when FILE is -, read standard input.
Mandatory arguments to long options are mandatory for short options too.
-e, --echo treat each ARG as an input line
-i, --input-range=LO-HI treat each number LO through HI as an input line
-n, --head-count=COUNT output at most COUNT lines
-o, --output=FILE write result to FILE instead of standard output
--random-source=FILE get random bytes from FILE
-r, --repeat output lines can be repeated
-z, --zero-terminated line delimiter is NUL, not newline
```
## Examples:
### shuf command without any option or argument.
```
shuf
```
When `shuf` command is used without any argument in the command line, it takes input from the user until `CTRL-D` is entered to terminate the set of inputs. It displays the input lines in a shuffled form. If `1, 2, 3, 4 and 5` are entered as input lines, then it generates `1, 2, 3, 4 and 5` in random order in the output as seen in the illustration below:
```
[user@home ~]$ shuf
1
2
3
4
5
4
5
1
2
3
```
Consider an example where Input is taken from the pipe:
```
{
seq 5 | shuf
}
```
`seq 5` returns the integers sequentially from `1` to `5` while the `shuf` command takes it as input and shuffles the content i.e, the integers from `1` to `5`. Hence, `1` to `5` is displayed as output in random order.
```
[user@home ~]$ {
> seq 5 | shuf
> }
5
4
2
3
1
```
### File shuf
When `shuf` command is used without `-e` or `-i` option, then it operates as a file shuf i.e, it shuffles the contents of the file. The `<file_name>` is the last parameter of the `shuf` command and if it is not given, then input has to be provided from the shell or pipe.
Consider an example where input is taken from a file:
```
shuf file.txt
```
Suppose file.txt contains 6 lines, then the shuf command displays the input lines in random order as output.
```
[user@home ~]$ cat file.txt
line-1
line-2
line-3
line-4
line-5
[user@home ~]$ shuf file.txt
line-5
line-4
line-1
line-3
line-2
```
Any number of lines can be randomized by using `-n` option.
```
shuf -n 2 file.txt
```
This will display any two random lines from the file.
```
line-5
line-2
```
### List shuf
When `-e` option is used with shuf command, it works as a list shuf. The arguments of the command are taken as the input line for the shuf.
Consider an example:
```
shuf -e A B C D E
```
It will take `A, B, C, D, E` as input lines, and will shuffle them to display the output.
```
A
C
B
D
E
```
Any number of input lines can be displayed using the `-n` option along with `-e` option.
```
shuf -e -n 2 A B C D E
```
This will display any two of the inputs.
```
E
A
```
### Range shuf
When the `-i` option is used along with the `shuf` command, it acts as a `range shuf`. It requires a range `LO-HI` as input, where `LO` is the lower bound and `HI` is the upper bound. It displays the integers from `LO` to `HI` in shuffled form.
```
[user@home ~]$ shuf -i 1-5
4
1
3
2
5
```
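Combining `-i` with `-n` and `-r` can, for example, simulate ten dice rolls (the output will differ on every run):
```
shuf -i 1-6 -n 10 -r
```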
## Conclusion
The `shuf` command helps you randomize input lines. And there are features to limit the number of output lines, repeat lines and even generate random positive integers. Once you're done practicing whatever we've discussed here, head to the tool's [man page](https://linux.die.net/man/1/shuf) to know more about it.

View File

@@ -0,0 +1,68 @@
# The `cmp` command
The `cmp` command is used to compare the two files byte by byte.
Example:
```
cmp file1.txt file2.txt
```
Syntax:
```
cmp [option] File1 File2
```
## Few Examples :
1. ### Comparison of two files:
Perform a simple comparison of the two files to check out if they differ from each other or not.
Example:
```
cmp File1 File2
```
2. ### Comparing Files after Skipping a Specified Number of Bytes:
Compare two files after skipping a certain number of bytes
Example:
```
cmp -i 2 list.txt list2.txt
```
Here, the number after `-i` represents the number of bytes to be skipped.
3. ### Display the Differing Bytes of the Files in the Output:
Example:
```
cmp -b list.txt list1.txt
```
4. ### Display Byte Numbers and Differing Byte Values of the Files in the Output:
Example:
```
cmp -l list.txt list1.txt
```
5. ### Comparing the First “n” Number of Bytes of the Files:
Example:
```
cmp -n 10 list.txt list2.txt
```
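6. ### Comparing Files Silently in a Script:
The `-s` flag suppresses all output, so the exit status alone tells whether the files are identical (a small sketch using the same file names as above):
Example:
```
if cmp -s list.txt list1.txt; then
    echo "Files are identical"
else
    echo "Files differ"
fi
```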
### Additional Flags and their Functionalities
|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-b`|`--print-bytes`|print differing bytes|
|`-i`|`--ignore-initial=SKIP`|skip first SKIP bytes of both inputs|
|`-i`|`--ignore-initial=SKIP1:SKIP2`|skip first SKIP1 bytes of FILE1 and first SKIP2 bytes of FILE2|
|`-l`|`--verbose`|output byte numbers and differing byte values|
|`-n`|`--bytes=LIMIT`|compare at most LIMIT bytes|
|`-s`|`--quiet, --silent`|suppress all normal output|
|`-v`|`--version`|output version information and exit|
||`--help`|Display this help and exit|

View File

@@ -0,0 +1,48 @@
# The `expr` command
The `expr` command evaluates a given expression and displays its corresponding output. It is used for basic operations on integers like addition, subtraction, multiplication, division, and modulus, as well as for evaluating regular expressions and performing string operations such as extracting substrings and finding string lengths.
## Syntax
```
expr expression
```
## Few Examples:
1. ### Perform basic arithmetic operations using expr command
```
expr 7 + 14
expr 7 \* 8
```
2. ### Comparing two expressions
```
x=10
y=20
res=`expr $x = $y`
echo $res
```
3. ### Match the number of characters in two strings
```
expr alphabet : alpha
```
4. ### Find the modulus value
```
expr 20 % 30
```
5. ### Extract the substring
```
a=HelloWorld
b=`expr substr $a 6 10`
echo $b
```
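6. ### Find the length of a string
(An additional example; `length` is a standard `expr` string operator.)
```
expr length "HelloWorld"
```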
### Additional Flags and their Functionalities
|**Flag** |**Description** |
:---|:---|
|`--version`|output version information and exit|
|`--help`|Display this help and exit|
For more details: [Expr on Wikipedia](https://en.wikipedia.org/wiki/Expr)
