# About the book

* **This version was published on Oct 30 2023**

This is an open-source introduction to Bash scripting that will help you learn the basics of Bash scripting and start writing awesome Bash scripts to automate your daily SysOps, DevOps, and Dev tasks. Whether you are a DevOps/SysOps engineer, a developer, or just a Linux enthusiast, you can use Bash scripts to combine different Linux commands and automate tedious and repetitive daily tasks, so that you can focus on more productive and fun things.

The guide is suitable for anyone working as a developer, system administrator, or DevOps engineer who wants to learn the basics of Bash scripting.

The first 13 chapters are focused on building solid Bash scripting foundations; the rest of the chapters give you real-life examples and scripts.

## About the author

My name is Bobby Iliev, and I have been working as a Linux DevOps Engineer since 2014. I am an avid Linux lover and supporter of the open-source movement philosophy. I am always doing that which I cannot do in order that I may learn how to do it, and I believe in sharing knowledge.

I think it's essential to always stay professional, surround yourself with good people, work hard, and be nice to everyone. You have to perform at a consistently higher level than others. That's the mark of a true professional.

For more information, please visit my blog at [https://bobbyiliev.com](https://bobbyiliev.com), follow me on Twitter [@bobbyiliev_](https://twitter.com/bobbyiliev_) and [YouTube](https://www.youtube.com/channel/UCQWmdHTeAO0UvaNqve9udRw).

## Sponsors

This book is made possible thanks to these fantastic companies!

### Materialize

The Streaming Database for Real-time Analytics.

[Materialize](https://materialize.com/) is a reactive database that delivers incremental view updates. Materialize helps developers easily build with streaming data using standard SQL.

### DigitalOcean

DigitalOcean is a cloud services platform delivering the simplicity developers love and businesses trust to run production applications at scale.

It provides highly available, secure, and scalable compute, storage, and networking solutions that help developers build great software faster.

Founded in 2012 with offices in New York and Cambridge, MA, DigitalOcean offers transparent and affordable pricing, an elegant user interface, and one of the largest libraries of open source resources available.

For more information, please visit [https://www.digitalocean.com](https://www.digitalocean.com) or follow [@digitalocean](https://twitter.com/digitalocean) on Twitter.

If you are new to DigitalOcean, you can get a free $200 credit and spin up your own servers via this referral link:

[Free $200 Credit For DigitalOcean](https://m.do.co/c/2a9bba940f39)

### DevDojo

The DevDojo is a resource to learn all things web development and web design. Learn on your lunch break or wake up and enjoy a cup of coffee with us to learn something new.

Join this developer community, and we can all learn together, build together, and grow together.

[Join DevDojo](https://devdojo.com?ref=bobbyiliev)

For more information, please visit [https://www.devdojo.com](https://www.devdojo.com?ref=bobbyiliev) or follow [@thedevdojo](https://twitter.com/thedevdojo) on Twitter.

## Ebook PDF Generation Tool

This ebook was generated by [Ibis](https://github.com/themsaid/ibis/), developed by [Mohamed Said](https://github.com/themsaid).

Ibis is a PHP tool that helps you write eBooks in markdown.

## Ebook ePub Generation Tool

The ePub version was generated by [Pandoc](https://pandoc.org/).

## Book Cover

The cover for this ebook was created with [Canva.com](https://www.canva.com/join/determined-cork-learn).

If you ever need to create a graphic, poster, invitation, logo, presentation, or anything that looks good, give Canva a go.

## License

MIT License

Copyright (c) 2020 Bobby Iliev

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
# Introduction to Bash scripting

Welcome to this Bash basics training guide! In this **bash crash course**, you will learn the **Bash basics** so that you can start writing your own Bash scripts and automating your daily tasks.

Bash is a Unix shell and command language. It is widely available on various operating systems, and it is also the default command interpreter on most Linux systems.

Bash stands for Bourne-Again SHell. As with other shells, you can use Bash interactively directly in your terminal, and you can also use Bash like any other programming language to write scripts. This book will help you learn the basics of Bash scripting, including Bash Variables, User Input, Comments, Arguments, Arrays, Conditional Expressions, Conditionals, Loops, Functions, Debugging, and testing.

Bash scripts are great for automating repetitive workloads and can save you considerable time. For example, imagine working with a group of five developers on a project that requires a tedious environment setup. For the program to work correctly, each developer has to set up the environment manually: the same long task repeated at least five times. This is where you and Bash scripts come to the rescue! Instead, you create a simple text file containing all the necessary instructions and share it with your teammates. Now all they have to do is execute the Bash script, and everything is set up for them.

In order to write Bash scripts, you just need a UNIX terminal and a text editor like Sublime Text or VS Code, or a terminal-based editor like vim or nano.
# Bash Structure

Let's start by creating a new file with a `.sh` extension. As an example, we could create a file called `devdojo.sh`.

To create that file, you can use the `touch` command:

```bash
touch devdojo.sh
```

Or you can use your text editor instead:

```bash
nano devdojo.sh
```

In order to execute/run a bash script file directly (e.g. `./devdojo.sh`), the first line of the script file must indicate the absolute path to the bash executable:

```bash
#!/bin/bash
```

This is also called a [Shebang](https://en.wikipedia.org/wiki/Shebang_(Unix)).

All the shebang does is instruct the operating system to run the script with the `/bin/bash` executable.

However, bash is not always located at `/bin/bash`, particularly on non-Linux systems or when installed as an optional package. Thus, you may want to use:

```bash
#!/usr/bin/env bash
```

This searches for the bash executable in the directories listed in the `PATH` environment variable.
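To see which bash executable the `env` shebang would pick up on your system, you can perform the same `PATH` lookup yourself with the `command -v` built-in (a quick check; the exact path it prints depends on your system):

```shell
# Ask the shell which bash executable is first in your PATH.
# This is the same lookup that "#!/usr/bin/env bash" performs.
command -v bash
```

On most Linux systems this prints `/bin/bash` or `/usr/bin/bash`.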
# Bash Hello World

Once we have our `devdojo.sh` file created and we've specified the bash shebang on the very first line, we are ready to create our first `Hello World` bash script.

To do that, open the `devdojo.sh` file again and add the following after the `#!/bin/bash` line:

```bash
#!/bin/bash

echo "Hello World!"
```

Save the file and exit.

After that, make the script executable by running:

```bash
chmod +x devdojo.sh
```

Then execute the file:

```bash
./devdojo.sh
```

You will see a "Hello World!" message on the screen.

Another way to run the script would be:

```bash
bash devdojo.sh
```

As bash can be used interactively, you could run the following command directly in your terminal and you would get the same result:

```bash
echo "Hello DevDojo!"
```

Putting commands in a script becomes useful once you need to combine multiple commands.
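To illustrate that last point, here is a small sketch of a script that chains two commands; the second command (`date`) is just an arbitrary example:

```shell
#!/bin/bash

# First command: print a greeting
echo "Hello DevDojo!"

# Second command: print the current date and time
date
```

Running it executes both commands in order, which is exactly what makes scripts handy for repetitive multi-step tasks.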
# Bash Variables

As in any other programming language, you can use variables in Bash scripting as well. However, there are no data types, and a variable in Bash can contain numbers as well as characters.

To assign a value to a variable, all you need to do is use the `=` sign:

```bash
name="DevDojo"
```

>{notice} As an important note, you cannot have spaces before or after the `=` sign.
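To see why the spaces matter: with spaces around `=`, Bash parses the first word as a command name instead of an assignment. A quick demonstration (the exact error text may vary slightly between shell versions):

```shell
name = "DevDojo"  # Bash looks for a command called "name"
                  # and passes "=" and "DevDojo" to it as arguments,
                  # typically failing with: name: command not found
```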
After that, to access the variable, you have to use the `$` sign and reference it as shown below:

```bash
echo $name
```

Wrapping the variable name between curly brackets is not required, but it is considered good practice, and I would advise you to use them whenever you can:

```bash
echo ${name}
```

The above code would output `DevDojo`, as this is the value of our `name` variable.

Next, let's update our `devdojo.sh` script and include a variable in it.

Again, you can open the file `devdojo.sh` with your favorite text editor; I'm using nano here to open the file:

```bash
nano devdojo.sh
```

Add the `name` variable to the file, together with a welcome message. Our file now looks like this:

```bash
#!/bin/bash

name="DevDojo"

echo "Hi there $name"
```

Save it and run the file using the command below:

```bash
./devdojo.sh
```

You would see the following output on your screen:

```
Hi there DevDojo
```

Here is a rundown of the script written in the file:

* `#!/bin/bash` - First, we specified our shebang.
* `name="DevDojo"` - Then, we defined a variable called `name` and assigned a value to it.
* `echo "Hi there $name"` - Finally, we output the content of the variable on the screen as a welcome message by using `echo`.

You can also add multiple variables to the file, as shown below:

```bash
#!/bin/bash

name="DevDojo"
greeting="Hello"

echo "$greeting $name"
```

Save the file and run it again:

```bash
./devdojo.sh
```

You would see the following output on your screen:

```
Hello DevDojo
```

Note that you don't necessarily need to add a semicolon `;` at the end of each line. It works both ways, a bit like other programming languages such as JavaScript!

You can also pass values to the Bash script from the command line, and they can be read as parameters:

```bash
./devdojo.sh Bobby buddy!
```

This passes two parameters, `Bobby` and `buddy!`, separated by a space. In the `devdojo.sh` file we have the following:

```bash
#!/bin/bash

echo "Hello there" $1
```

`$1` is the first input (`Bobby`) on the command line. Similarly, there can be more inputs, and they are all referenced by the `$` sign and their respective order of input. This means that `buddy!` is referenced by `$2`. Another useful variable is `$@`, which expands to all inputs.

So now let's change the `devdojo.sh` file to better understand this:

```bash
#!/bin/bash

echo "Hello there" $1
# $1 : first parameter

echo "Hello there" $2
# $2 : second parameter

echo "Hello there" $@
# $@ : all parameters
```

The output for:

```bash
./devdojo.sh Bobby buddy!
```

would be the following:

```
Hello there Bobby
Hello there buddy!
Hello there Bobby buddy!
```
# Bash User Input

With the previous script, we defined a variable, and we output the value of the variable on the screen with `echo $name`.

Now let's go ahead and ask the user for input instead. To do that, open the file with your favorite text editor again and update the script as follows:

```bash
#!/bin/bash

echo "What is your name?"
read name

echo "Hi there $name"
echo "Welcome to DevDojo!"
```

The above will prompt the user for input and then store that input as a string/text in a variable.

We can then use the variable and print a message back to them.

The output of the above script would be:

* First, run the script:

```bash
./devdojo.sh
```

* Then, you would be prompted to enter your name:

```
What is your name?
Bobby
```

* Once you've typed your name, just hit enter, and you will get the following output:

```
Hi there Bobby
Welcome to DevDojo!
```

To reduce the code, we could replace the first `echo` statement with `read -p`; the `read` command used with the `-p` flag will print a message before prompting the user for their input:

```bash
#!/bin/bash

read -p "What is your name? " name

echo "Hi there $name"
echo "Welcome to DevDojo!"
```
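`read` can also capture more than one value at a time: pass several variable names, and the input is split on whitespace. A short sketch (the prompt text is arbitrary):

```shell
#!/bin/bash

# Each whitespace-separated word is assigned to the next variable;
# any extra words all go into the last variable.
read -p "What is your first and last name? " first last

echo "Hi there ${first} ${last}"
```

Typing `Bobby Iliev` at the prompt would print `Hi there Bobby Iliev`.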
Make sure to test this out yourself as well!
# Bash Comments

As with any other programming language, you can add comments to your script. Comments are used to leave notes for yourself throughout your code.

To write one in Bash, you add the `#` symbol at the beginning of the line. Comments will never be rendered on the screen.

Here is an example of a comment:

```bash
# This is a comment and will not be rendered on the screen
```
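Comments don't have to start at the beginning of a line. You can also append one after a command, and Bash ignores everything from the `#` to the end of the line:

```shell
echo "Hello DevDojo!" # This inline comment is ignored by Bash
```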
Let's go ahead and add some comments to our script:

```bash
#!/bin/bash

# Ask the user for their name
read -p "What is your name? " name

# Greet the user
echo "Hi there $name"
echo "Welcome to DevDojo!"
```

Comments are a great way to describe some of the more complex functionality directly in your scripts so that other people can find their way around your code with ease.
# Bash Arguments

You can pass arguments to your shell script when you execute it. To pass an argument, you just need to write it right after the name of your script. For example:

```bash
./devdojo.sh your_argument
```

In the script, we can then use `$1` in order to reference the first argument that we specified.

If we pass a second argument, it will be available as `$2`, and so on.

Let's create a short script called `arguments.sh` as an example:

```bash
#!/bin/bash

echo "Argument one is $1"
echo "Argument two is $2"
echo "Argument three is $3"
```

Save the file and make it executable:

```bash
chmod +x arguments.sh
```

Then run the file and pass **3** arguments:

```bash
./arguments.sh dog cat bird
```

The output that you would get would be:

```
Argument one is dog
Argument two is cat
Argument three is bird
```

To reference all arguments, you can use `$@`:

```bash
#!/bin/bash

echo "All arguments: $@"
```

If you run the script again:

```bash
./arguments.sh dog cat bird
```

You will get the following output:

```
All arguments: dog cat bird
```
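Alongside `$@`, Bash also provides `$#`, which expands to the number of arguments passed to the script. A small sketch:

```shell
#!/bin/bash

# $# holds the count of the positional arguments
echo "You passed $# arguments"
```

Running it as `./arguments.sh dog cat bird` would print `You passed 3 arguments`.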
Another thing that you need to keep in mind is that `$0` is used to reference the script itself.

This is an excellent way to make a script self-destruct if you need to, or just to get the name of the script.

For example, let's create a script that prints out the name of the file and deletes the file after that:

```bash
#!/bin/bash

echo "The name of the file is: $0 and it is going to be self-deleted."

rm -f "$0"
```

You need to be careful with self-deletion and ensure that you have your script backed up before you self-delete it.
# Bash Arrays

If you have ever done any programming, you are probably already familiar with arrays.

But just in case you are not a developer, the main thing that you need to know is that, unlike variables, arrays can hold several values under one name.

You can initialize an array by assigning values separated by spaces and enclosed in `()`. Example:

```bash
my_array=("value 1" "value 2" "value 3" "value 4")
```

To access the elements in the array, you need to reference them by their numeric index.

>{notice} Keep in mind that you need to use curly brackets.

* Access a single element; this would output `value 2`:

```bash
echo ${my_array[1]}
```

* This would return the last element, `value 4`:

```bash
echo ${my_array[-1]}
```

* As with command line arguments, using `@` will return all elements in the array, as follows: `value 1 value 2 value 3 value 4`

```bash
echo ${my_array[@]}
```

* Prepending the array name with a hash sign (`#`) would output the total number of elements in the array; in our case it is `4`:

```bash
echo ${#my_array[@]}
```
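The same `@` syntax also supports slicing, so you can grab a subset of the elements with `${array[@]:start:count}`. A quick sketch using the array from above:

```shell
my_array=("value 1" "value 2" "value 3" "value 4")

# Take 2 elements, starting at index 1
echo ${my_array[@]:1:2}
```

This would output: `value 2 value 3`.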
Make sure to test this and practice it at your end with different values.
## Substring in Bash :: Slicing

Let's review the following example of slicing a string in Bash:

```bash
#!/bin/bash

letters=( "A""B""C""D""E" )
echo ${letters[@]}
```

Note that the adjacent quoted strings are concatenated, so this array actually holds a single element: the string `ABCDE`. The slicing examples below therefore operate on that string.

This command will print all the elements of the array.

Output:

```
ABCDE
```

Let's see a few more examples:

- Example 1

```bash
#!/bin/bash

letters=( "A""B""C""D""E" )
b=${letters:0:2}
echo "${b}"
```

This command will slice the string from starting index 0 up to index 2, where 2 is exclusive:

```
AB
```

- Example 2

```bash
#!/bin/bash

letters=( "A""B""C""D""E" )
b=${letters::5}
echo "${b}"
```

This command will slice from index 0 up to 5, where 5 is exclusive and the starting index defaults to 0:

```
ABCDE
```

- Example 3

```bash
#!/bin/bash

letters=( "A""B""C""D""E" )
b=${letters:3}
echo "${b}"
```

This command will slice from starting index 3 to the end of the string, inclusive:

```
DE
```
# Bash Conditional Expressions

In computer science, conditional statements, conditional expressions, and conditional constructs are features of a programming language which perform different computations or actions depending on whether a programmer-specified boolean condition evaluates to true or false.

In Bash, conditional expressions are used by the `[[` compound command and the `[` built-in command to test file attributes and perform string and arithmetic comparisons.

Here is a list of the most popular Bash conditional expressions. You do not have to memorize them by heart. You can simply refer back to this list whenever you need it!

## File expressions

* True if file exists.

```bash
[[ -a ${file} ]]
```

* True if file exists and is a block special file.

```bash
[[ -b ${file} ]]
```

* True if file exists and is a character special file.

```bash
[[ -c ${file} ]]
```

* True if file exists and is a directory.

```bash
[[ -d ${file} ]]
```

* True if file exists.

```bash
[[ -e ${file} ]]
```

* True if file exists and is a regular file.

```bash
[[ -f ${file} ]]
```

* True if file exists and is a symbolic link.

```bash
[[ -h ${file} ]]
```

* True if file exists and is readable.

```bash
[[ -r ${file} ]]
```

* True if file exists and has a size greater than zero.

```bash
[[ -s ${file} ]]
```

* True if file exists and is writable.

```bash
[[ -w ${file} ]]
```

* True if file exists and is executable.

```bash
[[ -x ${file} ]]
```

* True if file exists and is a symbolic link.

```bash
[[ -L ${file} ]]
```

## String expressions

* True if the shell variable `varname` is set (has been assigned a value). Note that `-v` takes the variable name itself, not its value:

```bash
[[ -v varname ]]
```

* True if the length of the string is zero.

```bash
[[ -z ${string} ]]
```

* True if the length of the string is non-zero.

```bash
[[ -n ${string} ]]
```

* True if the strings are equal. `=` should be used with the `test` command for POSIX conformance. When used with the `[[` command, this performs pattern matching as described in the bash manual (Compound Commands).

```bash
[[ ${string1} == ${string2} ]]
```

* True if the strings are not equal.

```bash
[[ ${string1} != ${string2} ]]
```

* True if string1 sorts before string2 lexicographically.

```bash
[[ ${string1} < ${string2} ]]
```

* True if string1 sorts after string2 lexicographically.

```bash
[[ ${string1} > ${string2} ]]
```

## Arithmetic operators

* Returns true if the numbers are **equal**.

```bash
[[ ${arg1} -eq ${arg2} ]]
```

* Returns true if the numbers are **not equal**.

```bash
[[ ${arg1} -ne ${arg2} ]]
```

* Returns true if arg1 is **less than** arg2.

```bash
[[ ${arg1} -lt ${arg2} ]]
```

* Returns true if arg1 is **less than or equal to** arg2.

```bash
[[ ${arg1} -le ${arg2} ]]
```

* Returns true if arg1 is **greater than** arg2.

```bash
[[ ${arg1} -gt ${arg2} ]]
```

* Returns true if arg1 is **greater than or equal to** arg2.

```bash
[[ ${arg1} -ge ${arg2} ]]
```

As a side note, arg1 and arg2 may be positive or negative integers.

As with other programming languages, you can use `AND` and `OR` conditions:

```bash
[[ test_case_1 ]] && [[ test_case_2 ]] # And
[[ test_case_1 ]] || [[ test_case_2 ]] # Or
```
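For instance, you could combine two of the file expressions above to check that a file both exists and is readable before acting on it. A minimal sketch (the temporary file is created only for the demonstration):

```shell
#!/bin/bash

file="$(mktemp)"  # create a throwaway file for the demonstration

# Both conditions must be true for the branch to run
if [[ -f ${file} ]] && [[ -r ${file} ]]; then
    echo "The file exists and is readable"
fi

rm -f "${file}"
```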
## Exit status operators

* Returns true if the command was successful without any errors.

```bash
[[ $? -eq 0 ]]
```

* Returns true if the command was not successful or had errors.

```bash
[[ $? -gt 0 ]]
```
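To see the exit status operators in action, here is a short sketch using the built-in `true` and `false` commands, which succeed and fail respectively:

```shell
#!/bin/bash

true  # a command that always succeeds

if [[ $? -eq 0 ]]; then
    echo "The command was successful"
fi

false  # a command that always fails

if [[ $? -gt 0 ]]; then
    echo "The command had errors"
fi
```

Running this prints both messages, one from each branch.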
# Bash Conditionals
|
||||
|
||||
In the last section, we covered some of the most popular conditional expressions. We can now use them with standard conditional statements like `if`, `if-else` and `switch case` statements.
|
||||
|
||||
## If statement
|
||||
|
||||
The format of an `if` statement in Bash is as follows:
|
||||
|
||||
```bash
|
||||
if [[ some_test ]]
|
||||
then
|
||||
<commands>
|
||||
fi
|
||||
```
|
||||
|
||||
Here is a quick example which would ask you to enter your name in case that you've left it empty:
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
|
||||
# Bash if statement example
|
||||
|
||||
read -p "What is your name? " name
|
||||
|
||||
if [[ -z ${name} ]]
|
||||
then
|
||||
echo "Please enter your name!"
|
||||
fi
|
||||
```
|
||||
|
||||
## If Else statement
|
||||
|
||||
With an `if-else` statement, you can specify an action in case that the condition in the `if` statement does not match. We can combine this with the conditional expressions from the previous section as follows:
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
|
||||
# Bash if statement example
|
||||
|
||||
read -p "What is your name? " name
|
||||
|
||||
if [[ -z ${name} ]]
|
||||
then
|
||||
echo "Please enter your name!"
|
||||
else
|
||||
echo "Hi there ${name}"
|
||||
fi
|
||||
```
|
||||
|
||||
You can use the above if statement with all of the conditional expressions from the previous chapters:
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
|
||||
admin="devdojo"
|
||||
|
||||
read -p "Enter your username? " username
|
||||
|
||||
# Check if the username provided is the admin
|
||||
|
||||
if [[ "${username}" == "${admin}" ]] ; then
|
||||
echo "You are the admin user!"
|
||||
else
|
||||
echo "You are NOT the admin user!"
|
||||
fi
|
||||
```
|
||||
|
||||
Here is another example of an `if` statement which would check your current `User ID` and would not allow you to run the script as the `root` user:
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
|
||||
if (( $EUID == 0 )); then
|
||||
echo "Please do not run as root"
|
||||
exit
|
||||
fi
|
||||
```
|
||||
|
||||
If you put this on top of your script it would exit in case that the EUID is 0 and would not execute the rest of the script. This was discussed on [the DigitalOcean community forum](https://www.digitalocean.com/community/questions/how-to-check-if-running-as-root-in-a-bash-script).
|
||||
|
||||
You can also test multiple conditions with an `if` statement. In this example we want to make sure that the user is neither the admin user nor the root user to ensure the script is incapable of causing too much damage. We'll use the `or` operator in this example, noted by `||`. This means that either of the conditions needs to be true. If we used the `and` operator of `&&` then both conditions would need to be true.
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
|
||||
admin="devdojo"
|
||||
|
||||
read -p "Enter your username? " username
|
||||
|
||||
# Check if the username provided is the admin
|
||||
|
||||
if [[ "${username}" != "${admin}" ]] || [[ $EUID != 0 ]] ; then
|
||||
echo "You are not the admin or root user, but please be safe!"
|
||||
else
|
||||
echo "You are the admin user! This could be very destructive!"
|
||||
fi
|
||||
```
|
||||
|
||||
If you have multiple conditions and scenarios, then can use `elif` statement with `if` and `else` statements.
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
|
||||
read -p "Enter a number: " num
|
||||
|
||||
if [[ $num -gt 0 ]] ; then
|
||||
echo "The number is positive"
|
||||
elif [[ $num -lt 0 ]] ; then
|
||||
echo "The number is negative"
|
||||
else
|
||||
echo "The number is 0"
|
||||
fi
|
||||
```
|
||||
|
||||
## Switch case statements

As in other programming languages, you can use a `case` statement to simplify complex conditionals when there are multiple different choices. So rather than using a few `if` and `if-else` statements, you could use a single `case` statement.

The Bash `case` statement syntax looks like this:

```bash
case $some_variable in

  pattern_1)
    commands
    ;;

  pattern_2 | pattern_3)
    commands
    ;;

  *)
    default commands
    ;;

esac
```

A quick rundown of the structure:

* All `case` statements start with the `case` keyword.
* On the same line as the `case` keyword, you need to specify a variable or an expression followed by the `in` keyword.
* After that, you have your `case` patterns, where you need to use `)` to identify the end of each pattern.
* You can specify multiple patterns separated by a pipe: `|`.
* After the pattern, you specify the commands that you would like to be executed if the pattern matches the variable or the expression that you've specified.
* All clauses have to be terminated by adding `;;` at the end.
* You can add a default clause by using `*` as the pattern.
* To close the `case` statement, use the `esac` (`case` spelled backwards) keyword.

Here is an example of a Bash `case` statement:

```bash
#!/bin/bash

read -p "Enter the name of your car brand: " car

case $car in

  Tesla)
    echo -n "${car}'s car factory is in the USA."
    ;;

  BMW | Mercedes | Audi | Porsche)
    echo -n "${car}'s car factory is in Germany."
    ;;

  Toyota | Mazda | Mitsubishi | Subaru)
    echo -n "${car}'s car factory is in Japan."
    ;;

  *)
    echo -n "${car} is an unknown car brand"
    ;;

esac
```

With this script, we ask the user to input the name of a car brand like Tesla, BMW, or Mercedes.

Then the `case` statement checks the brand name against our patterns, and if one of them matches, it prints out the factory's location.

If the brand name does not match any of our patterns, we print out a default message: `an unknown car brand`.

## Conclusion

I would advise you to modify the script and play with it a bit so that you can practice what you've learned in the last two chapters!

For more examples of Bash `case` statements, make sure to check chapter 16, where we will create an interactive menu in Bash and use a `case` statement to process the user's input.
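Note that `case` patterns are matched as shell globs, not just literal strings, so you can also use wildcards like `*` and `?` and character classes like `[Yy]`. Here is a minimal sketch that matches a yes/no answer regardless of capitalization (the `answer` value is just an example):

```bash
#!/bin/bash

# Case patterns are shell globs, so character classes like [Yy]
# let one pattern match several spellings of the same answer.
answer="YES"

case $answer in
  [Yy] | [Yy][Ee][Ss])
    echo "You agreed."
    ;;
  [Nn] | [Nn][Oo])
    echo "You declined."
    ;;
  *)
    echo "Please answer yes or no."
    ;;
esac
```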
# Bash Loops

As with any other language, loops are very convenient. With Bash you can use `for` loops, `while` loops, and `until` loops.

## For loops

Here is the structure of a `for` loop:

```bash
for var in ${list}
do
    your_commands
done
```

Example:

```bash
#!/bin/bash

users="devdojo bobby tony"

for user in ${users}
do
    echo "${user}"
done
```

A quick rundown of the example:

* First, we specify a list of users and store it in a variable called `users`.
* After that, we start our `for` loop with the `for` keyword.
* Then we define a new variable which represents each item from the list. In our case, we define a variable called `user`, which represents each user from the `users` variable.
* Then we specify the `in` keyword followed by the list that we will loop through.
* On the next line, we use the `do` keyword, which indicates what we will do for each iteration of the loop.
* Then we specify the commands that we want to run.
* Finally, we close the loop with the `done` keyword.

You can also use `for` to process a series of numbers. For example, here is one way to loop from 1 to 10:

```bash
#!/bin/bash

for num in {1..10}
do
    echo ${num}
done
```
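Bash also supports a C-style `for` loop with an initializer, a condition, and a step expression, which is handy when you need more control over the counter. A short sketch:

```bash
#!/bin/bash

# C-style for loop: initialize i, keep looping while the
# condition holds, and increment i after every iteration.
for (( i = 1; i <= 5; i++ ))
do
    echo "Iteration: ${i}"
done
```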
## While loops

The structure of a `while` loop is quite similar to the `for` loop:

```bash
while [ your_condition ]
do
    your_commands
done
```

Here is an example of a `while` loop:

```bash
#!/bin/bash

counter=1
while [[ $counter -le 10 ]]
do
    echo $counter
    ((counter++))
done
```

First, we specified a counter variable and set it to `1`. Inside the loop, we increment the counter with this statement: `((counter++))`. That way, we make sure that the loop runs only 10 times and does not run forever. The loop completes as soon as the counter becomes greater than 10, since the condition `while [[ $counter -le 10 ]]` keeps it running only while the counter is less than or equal to 10.
Let's create a script that asks the user for their name and does not allow empty input:

```bash
#!/bin/bash

read -p "What is your name? " name

while [[ -z ${name} ]]
do
    echo "Your name can not be blank. Please enter a valid name!"
    read -p "Enter your name again? " name
done

echo "Hi there ${name}"
```

Now, if you run the above and just press enter without providing input, the loop runs again and asks for your name over and over until you actually provide some input.
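Another very common `while` pattern is reading a file line by line. Here is a sketch, assuming a hypothetical file called `users.txt` with one entry per line:

```bash
#!/bin/bash

# Read the (hypothetical) users.txt file line by line.
# IFS= preserves leading whitespace and -r keeps backslashes literal.
while IFS= read -r line
do
    echo "Line: ${line}"
done < users.txt
```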
## Until Loops

The difference between `until` and `while` loops is that the `until` loop will run the commands within the loop until the condition becomes true.

Structure:

```bash
until [[ your_condition ]]
do
    your_commands
done
```

Example:

```bash
#!/bin/bash

count=1
until [[ $count -gt 10 ]]
do
    echo $count
    ((count++))
done
```
## Continue and Break

As with other languages, you can use `continue` and `break` in your Bash scripts as well:

* `continue` tells your Bash script to stop the current iteration of the loop and start the next iteration.

The syntax of the `continue` statement is as follows:

```bash
continue [n]
```

The `[n]` argument is optional and can be greater than or equal to 1. When `[n]` is given, the n-th enclosing loop is resumed. `continue 1` is equivalent to `continue`.

```bash
#!/bin/bash

for i in 1 2 3 4 5
do
    if [[ $i -eq 2 ]]
    then
        echo "skipping number 2"
        continue
    fi
    echo "i is equal to $i"
done
```

We can also use the `continue` command with an argument, in a similar way to the `break` command, for controlling multiple nested loops.
* `break` tells your Bash script to end the loop straight away.

The syntax of the `break` statement takes the following form:

```bash
break [n]
```

`[n]` is an optional argument and must be greater than or equal to 1. When `[n]` is provided, the n-th enclosing loop is exited. `break 1` is equivalent to `break`.

Example:

```bash
#!/bin/bash

num=1
while [[ $num -lt 10 ]]
do
    if [[ $num -eq 5 ]]
    then
        break
    fi
    ((num++))
done
echo "Loop completed"
```

We can also use the `break` command with nested loops. If we want to exit the current loop, whether inner or outer, we simply use `break`, but if we are in the inner loop and want to exit the outer loop as well, we use `break 2`.

Example:

```bash
#!/bin/bash

for (( a = 1; a < 10; a++ ))
do
    echo "outer loop: $a"
    for (( b = 1; b < 100; b++ ))
    do
        if [[ $b -gt 5 ]]
        then
            break 2
        fi
        echo "Inner loop: $b "
    done
done
```

The script starts with `a=1`, moves into the inner loop, and as soon as `b` becomes greater than 5, it breaks out of both loops. Try using `break` instead of `break 2` to exit only the inner loop and see how it affects the output.
# Bash Functions

Functions are a great way to reuse code. The structure of a function in Bash is quite similar to most languages:

```bash
function function_name() {
    your_commands
}
```

You can also omit the `function` keyword at the beginning, which would also work:

```bash
function_name() {
    your_commands
}
```

I prefer putting it there for better readability, but it is a matter of personal preference.

Example of a "Hello World!" function:

```bash
#!/bin/bash

function hello() {
    echo "Hello World Function!"
}

hello
```

>{notice} One thing to keep in mind is that you should not add the parentheses when you call the function.

Passing arguments to a function works in the same way as passing arguments to a script:

```bash
#!/bin/bash

function hello() {
    echo "Hello $1!"
}

hello DevDojo
```
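Bash functions do not return values the way functions in other languages do; the `return` statement only sets an exit status. A common pattern, sketched below, is to `echo` the result and capture it with command substitution, using `local` to keep helper variables scoped to the function:

```bash
#!/bin/bash

function sum() {
    # local keeps the variable from leaking into the global scope
    local result=$(( $1 + $2 ))
    echo "${result}"
}

# Capture the function's output with command substitution
total=$(sum 3 4)
echo "The sum is: ${total}"   # The sum is: 7
```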
Functions should have comments describing their purpose, global variables, arguments, outputs, and return values, if applicable:

```bash
#######################################
# Description: Hello function
# Globals:
# None
# Arguments:
# Single input argument
# Outputs:
# Value of input argument
# Returns:
# 0 if successful, non-zero on error.
#######################################
function hello() {
    echo "Hello $1!"
}
```

In the next few chapters we will be using functions a lot!
# Debugging, testing and shortcuts

In order to debug your Bash scripts, you can use the `-x` flag when executing them:

```bash
bash -x ./your_script.sh
```

Or you can add `set -x` before the specific line that you want to debug. `set -x` enables a mode of the shell in which all executed commands are printed to the terminal.

Another way to test your scripts is to use this fantastic tool:

[https://www.shellcheck.net/](https://www.shellcheck.net/)

Just copy and paste your code into the textbox, and the tool will give you suggestions on how you can improve your script.

You can also run the tool directly in your terminal:

[https://github.com/koalaman/shellcheck](https://github.com/koalaman/shellcheck)

If you like the tool, make sure to star it on GitHub and contribute!
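To turn tracing back off after the section you are inspecting, use `set +x`. A small sketch combining this with a few other `set` flags that are commonly used to surface errors early:

```bash
#!/bin/bash

# Print each command before it runs, but only for this section:
set -x
echo "debugging this part"
set +x

# Commonly used "strict mode" flags:
#   -e           exit immediately if a command fails
#   -u           treat unset variables as an error
#   -o pipefail  make a pipeline fail if any command in it fails
set -euo pipefail
```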
As a SysAdmin/DevOps, I spend a lot of my day in the terminal. Here are my favorite shortcuts that help me do tasks quicker while writing Bash scripts or just while working in the terminal.

The two below are particularly useful if you have a very long command.

* Delete everything from the cursor to the end of the line:

```
Ctrl + k
```

* Delete everything from the cursor to the start of the line:

```
Ctrl + u
```

* Delete one word backward from the cursor:

```
Ctrl + w
```

* Search your history backward. This is probably the one that I use the most. It is really handy and speeds up my workflow a lot:

```
Ctrl + r
```

* Clear the screen. I use this instead of typing the `clear` command:

```
Ctrl + l
```

* Stop the output to the screen:

```
Ctrl + s
```

* Re-enable the output to the screen in case it was previously stopped with `Ctrl + s`:

```
Ctrl + q
```

* Terminate the current command:

```
Ctrl + c
```

* Suspend the current command and send it to the background:

```
Ctrl + z
```

I use these every day, and they save me a lot of time.

If you think that I've missed any, feel free to join the discussion on [the DigitalOcean community forum](https://www.digitalocean.com/community/questions/what-are-your-favorite-bash-shortcuts)!
# Creating custom bash commands

As a developer or system administrator, you might have to spend a lot of time in your terminal. I always try to look for ways to optimize any repetitive tasks.

One way to do that is to either write short Bash scripts or create custom commands, also known as aliases. For example, rather than typing a really long command every time, you could just create a shortcut for it.

## Example

Let's start with the following scenario: as a system admin, you might have to check the connections to your web server quite often, so I will use the `netstat` command as an example.

What I usually do when I access a server that is having issues with the connections to port 80 or 443 is check if there are any services listening on those ports and the number of connections to them.

The following `netstat` command shows how many TCP connections on ports 80 and 443 we currently have:

```bash
netstat -plant | grep '80\|443' | grep -v LISTEN | wc -l
```

This is quite a lengthy command, so typing it every time might be time-consuming in the long run, especially when you want to get that information quickly.

To avoid that, we can create an alias, so rather than typing the whole command, we could just type a short command instead. For example, let's say that we want to be able to type `conn` (short for connections) and get the same information. All we need to do in this case is to run the following command:

```bash
alias conn="netstat -plant | grep '80\|443' | grep -v LISTEN | wc -l"
```

That way we create an alias called `conn`, which is essentially a 'shortcut' for our long `netstat` command. Now if you run just `conn`:

```bash
conn
```

You will get the same output as the long `netstat` command.

You can get even more creative and add some info messages like this one here:

```bash
alias conn="echo 'Total connections on port 80 and 443:' ; netstat -plant | grep '80\|443' | grep -v LISTEN | wc -l"
```

Now if you run `conn`, you will get the following output:

```bash
Total connections on port 80 and 443:
12
```

Note that if you log out and log back in, your alias will be lost. In the next section you will see how to make it persistent.

## Making the change persistent

In order to make the change persistent, we need to add the `alias` command to our shell profile file.

By default on Ubuntu this is the `~/.bashrc` file; for other operating systems this might be the `~/.bash_profile` file. With your favorite text editor, open the file:

```bash
nano ~/.bashrc
```

Go to the bottom and add the following:

```bash
alias conn="echo 'Total connections on port 80 and 443:' ; netstat -plant | grep '80\|443' | grep -v LISTEN | wc -l"
```

Save and then exit. To apply the change to your current session without logging out, you can also run `source ~/.bashrc`.

That way, even if you log out and log back in again, your change will persist and you will be able to run your custom Bash command.

## Listing all of the available aliases

To list all of the available aliases for your current shell, you just have to run the following command:

```bash
alias
```

This is handy in case you are seeing some weird behavior with certain commands.
## Conclusion

This is one way of creating custom Bash commands, or Bash aliases.

Of course, you could also write a Bash script and add it to your `/usr/bin` folder, but this would not work if you don't have root or sudo access, whereas aliases can be created without root access.

>{notice} This was initially posted on [DevDojo.com](https://devdojo.com/bobbyiliev/how-to-create-custom-bash-commands)
# Write your first Bash script

Let's try to put together what we've learned so far and create our first Bash script!

## Planning the script

As an example, we will write a script that gathers some useful information about our server, like:

* Current memory usage
* Current CPU load
* Number of TCP connections
* Exact kernel version

Feel free to adjust the script by adding or removing functionality so that it matches your needs.

## Writing the script

The first thing that you need to do is create a new file with a `.sh` extension. I will create a file called `status.sh`, as the script that we will create gives us the status of our server.

Once you've created the file, open it with your favorite text editor.

As we've learned in chapter 1, on the very first line of our Bash script we need to specify the so-called [Shebang](https://en.wikipedia.org/wiki/Shebang_(Unix)):

```bash
#!/bin/bash
```

All that the shebang does is instruct the operating system to run the script with the `/bin/bash` executable.

## Adding comments

Next, as discussed in chapter 6, let's start by adding some comments so that people can easily figure out what the script is used for. To do that, right after the shebang you can just add the following:

```bash
#!/bin/bash

# Script that returns the current server status
```

## Adding your first variable

Then let's go ahead and apply what we've learned in chapter 4 and add some variables which we might want to use throughout the script.

To assign a value to a variable in Bash, you just have to use the `=` sign. For example, let's store the hostname of our server in a variable so that we can use it later:

```bash
server_name=$(hostname)
```

By using `$()` we tell Bash to actually interpret the command and then assign its output to our variable.

Now if we were to echo out the variable, we would see the current hostname:

```bash
echo $server_name
```
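One pitfall worth calling out, since it trips up almost everyone at first: Bash does not allow spaces around the `=` sign in an assignment. A short sketch contrasting a plain assignment with a command substitution (the `date` format string is just an illustration):

```bash
#!/bin/bash

# Correct: no spaces around the = sign.
greeting="hello"
# Wrong: greeting = "hello" would try to run a command called
# "greeting" with = and "hello" as its arguments.

# Plain assignment stores literal text, while $(...) runs the
# command and stores whatever it prints:
today=$(date +%A)

echo "${greeting}, today is ${today}"
```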
## Adding your first function

As you already know after reading chapter 12, in order to create a function in Bash you need to use the following structure:

```bash
function function_name() {
    your_commands
}
```

Let's create a function that returns the current memory usage on our server:

```bash
function memory_check() {
    echo ""
    echo "The current memory usage on ${server_name} is: "
    free -h
    echo ""
}
```

A quick rundown of the function:

* `function memory_check() {` - this is how we define the function.
* `echo ""` - here we just print a new line.
* `echo "The current memory usage on ${server_name} is: "` - here we print a small message together with the `$server_name` variable.
* `free -h` - the command that prints the current memory usage in a human-readable format.
* `}` - finally, this is how we close the function.

Then once the function has been defined, in order to call it, just use the name of the function:

```bash
# Define the function
function memory_check() {
    echo ""
    echo "The current memory usage on ${server_name} is: "
    free -h
    echo ""
}

# Call the function
memory_check
```

## Adding more functions challenge

Before checking out the solution, I would challenge you to use the function from above and write a few functions by yourself.

The functions should do the following:

* Show the current memory usage
* Show the current CPU load
* Show the current number of TCP connections
* Show the exact kernel version

Feel free to use Google if you are not sure which commands you need to use in order to get that information.

Once you are ready, feel free to scroll down and check how we've done it and compare the results!

Note that there are multiple correct ways of doing it!

## The sample script

Here's what the end result looks like:

```bash
#!/bin/bash

##
# BASH script that checks:
# - Memory usage
# - CPU load
# - Number of TCP connections
# - Kernel version
##

server_name=$(hostname)

function memory_check() {
    echo ""
    echo "Memory usage on ${server_name} is: "
    free -h
    echo ""
}

function cpu_check() {
    echo ""
    echo "CPU load on ${server_name} is: "
    echo ""
    uptime
    echo ""
}

function tcp_check() {
    echo ""
    echo "TCP connections on ${server_name}: "
    echo ""
    cat /proc/net/tcp | wc -l
    echo ""
}

function kernel_check() {
    echo ""
    echo "Kernel version on ${server_name} is: "
    echo ""
    uname -r
    echo ""
}

function all_checks() {
    memory_check
    cpu_check
    tcp_check
    kernel_check
}

all_checks
```

## Conclusion

Bash scripting is awesome! No matter if you are a DevOps/SysOps engineer, developer, or just a Linux enthusiast, you can use Bash scripts to combine different Linux commands and automate boring and repetitive daily tasks, so that you can focus on more productive and fun things!

>{notice} This was initially posted on [DevDojo.com](https://devdojo.com/bobbyiliev/introduction-to-bash-scripting)
# Creating an interactive menu in Bash

In this tutorial, I will show you how to create a multiple-choice menu in Bash so that your users can choose which action should be executed!

We will reuse some of the code from the previous chapter, so if you have not read it yet, make sure to do so.

## Planning the functionality

Let's start again by going over the main functionality of the script:

* Checks the current memory usage
* Checks the current CPU load
* Checks the current number of TCP connections
* Checks the exact kernel version

In case you don't have it on hand, here is the script itself:

```bash
#!/bin/bash

##
# BASH menu script that checks:
# - Memory usage
# - CPU load
# - Number of TCP connections
# - Kernel version
##

server_name=$(hostname)

function memory_check() {
    echo ""
    echo "Memory usage on ${server_name} is: "
    free -h
    echo ""
}

function cpu_check() {
    echo ""
    echo "CPU load on ${server_name} is: "
    echo ""
    uptime
    echo ""
}

function tcp_check() {
    echo ""
    echo "TCP connections on ${server_name}: "
    echo ""
    cat /proc/net/tcp | wc -l
    echo ""
}

function kernel_check() {
    echo ""
    echo "Kernel version on ${server_name} is: "
    echo ""
    uname -r
    echo ""
}

function all_checks() {
    memory_check
    cpu_check
    tcp_check
    kernel_check
}
```

We will then build a menu that allows the user to choose which function should be executed.

Of course, you can adjust the functions or add new ones depending on your needs.

## Adding some colors

In order to make the menu a bit more readable and easy to grasp at first glance, we will add some color functions.

At the beginning of your script add the following color functions:

```bash
##
# Color Variables
##
red='\e[31m'
green='\e[32m'
blue='\e[34m'
clear='\e[0m'

##
# Color Functions
##

ColorGreen(){
    echo -ne $green$1$clear
}
ColorBlue(){
    echo -ne $blue$1$clear
}
```

You can use the color functions as follows:

```bash
echo -ne $(ColorBlue 'Some text here')
```

The above would output the `Some text here` string, and it would be blue!
## Adding the menu

Finally, to add our menu, we will create a separate function with a case statement for our menu options:

```bash
menu(){
echo -ne "
My First Menu
$(ColorGreen '1)') Memory usage
$(ColorGreen '2)') CPU load
$(ColorGreen '3)') Number of TCP connections
$(ColorGreen '4)') Kernel version
$(ColorGreen '5)') Check All
$(ColorGreen '0)') Exit
$(ColorBlue 'Choose an option:') "
    read a
    case $a in
        1) memory_check ; menu ;;
        2) cpu_check ; menu ;;
        3) tcp_check ; menu ;;
        4) kernel_check ; menu ;;
        5) all_checks ; menu ;;
        0) exit 0 ;;
        *) echo -e $red"Wrong option."$clear ; menu ;;
    esac
}
```

### A quick rundown of the code

First, we just echo out the menu options with some color:

```
echo -ne "
My First Menu
$(ColorGreen '1)') Memory usage
$(ColorGreen '2)') CPU load
$(ColorGreen '3)') Number of TCP connections
$(ColorGreen '4)') Kernel version
$(ColorGreen '5)') Check All
$(ColorGreen '0)') Exit
$(ColorBlue 'Choose an option:') "
```

Then we read the answer of the user and store it in a variable called `a`:

```bash
read a
```

Finally, we have a `case` statement which triggers a different function depending on the value of `$a` and then shows the menu again; an unrecognized option prints a warning before returning to the menu:

```bash
case $a in
    1) memory_check ; menu ;;
    2) cpu_check ; menu ;;
    3) tcp_check ; menu ;;
    4) kernel_check ; menu ;;
    5) all_checks ; menu ;;
    0) exit 0 ;;
    *) echo -e $red"Wrong option."$clear ; menu ;;
esac
```

At the end, we need to call the `menu` function to actually print out the menu:

```bash
# Call the menu function
menu
```
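As an aside, Bash also has a built-in `select` construct that generates a numbered menu for you from a list of words. A minimal sketch of the same idea (the option labels are just examples):

```bash
#!/bin/bash

# select prints a numbered menu, reads the choice into $option,
# and repeats until the loop is exited with break.
PS3='Choose an option: '

select option in "Memory usage" "CPU load" "Quit"
do
    case $option in
        "Memory usage") free -h ;;
        "CPU load")     uptime ;;
        "Quit")         break ;;
        *)              echo "Invalid choice" ;;
    esac
done
```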
## Testing the script

In the end, your script will look like this:

```bash
#!/bin/bash

##
# BASH menu script that checks:
# - Memory usage
# - CPU load
# - Number of TCP connections
# - Kernel version
##

server_name=$(hostname)

function memory_check() {
    echo ""
    echo "Memory usage on ${server_name} is: "
    free -h
    echo ""
}

function cpu_check() {
    echo ""
    echo "CPU load on ${server_name} is: "
    echo ""
    uptime
    echo ""
}

function tcp_check() {
    echo ""
    echo "TCP connections on ${server_name}: "
    echo ""
    cat /proc/net/tcp | wc -l
    echo ""
}

function kernel_check() {
    echo ""
    echo "Kernel version on ${server_name} is: "
    echo ""
    uname -r
    echo ""
}

function all_checks() {
    memory_check
    cpu_check
    tcp_check
    kernel_check
}

##
# Color Variables
##
red='\e[31m'
green='\e[32m'
blue='\e[34m'
clear='\e[0m'

##
# Color Functions
##

ColorGreen(){
    echo -ne $green$1$clear
}
ColorBlue(){
    echo -ne $blue$1$clear
}

menu(){
echo -ne "
My First Menu
$(ColorGreen '1)') Memory usage
$(ColorGreen '2)') CPU load
$(ColorGreen '3)') Number of TCP connections
$(ColorGreen '4)') Kernel version
$(ColorGreen '5)') Check All
$(ColorGreen '0)') Exit
$(ColorBlue 'Choose an option:') "
    read a
    case $a in
        1) memory_check ; menu ;;
        2) cpu_check ; menu ;;
        3) tcp_check ; menu ;;
        4) kernel_check ; menu ;;
        5) all_checks ; menu ;;
        0) exit 0 ;;
        *) echo -e $red"Wrong option."$clear ; menu ;;
    esac
}

# Call the menu function
menu
```

To test the script, create a new file with a `.sh` extension, for example `menu.sh`, and then run it:

```bash
bash menu.sh
```

The output that you get will look like this:

```bash
My First Menu
1) Memory usage
2) CPU load
3) Number of TCP connections
4) Kernel version
5) Check All
0) Exit
Choose an option:
```

You will be able to choose different options from the list, and each number will call a different function from the script:



## Conclusion

You now know how to create a Bash menu and implement it in your scripts so that users can select different values!

>{notice} This content was initially posted on [DevDojo.com](https://devdojo.com/bobbyiliev/how-to-work-with-json-in-bash-using-jq)
# Executing BASH scripts on Multiple Remote Servers
|
||||
|
||||
Any command that you can run from the command line can be used in a bash script. Scripts are used to run a series of commands. Bash is available by default on Linux and macOS operating systems.
|
||||
|
||||
Let's have a hypothetical scenario where you need to execute a BASH script on multiple remote servers, but you don't want to manually copy the script to each server, then again login to each server individually and only then execute the script.
|
||||
|
||||
Of course you could use a tool like Ansible but let's learn how to do that with Bash!
|
||||
|
||||
## Prerequisites
|
||||
|
||||
For this example I will use 3 remote Ubuntu servers deployed on DigitalOcean. If you don't have a Digital Ocean account yet, you can sign up for DigitalOcean and get $100 free credit via this referral link here:
|
||||
|
||||
[https://m.do.co/c/2a9bba940f39](https://m.do.co/c/2a9bba940f39)
|
||||
|
||||
Once you have your Digital Ocean account ready go ahead and deploy 3 droplets.
|
||||
|
||||
I've gone ahead and created 3 Ubuntu servers:
|
||||
|
||||

|
||||
|
||||
I'll put a those servers IP's in a `servers.txt` file which I would use to loop though with our Bash script.
|
||||
|
||||
If you are new to DigitalOcean you can follow the steps on how to create a Droplet here:
|
||||
|
||||
* [How to Create a Droplet from the DigitalOcean Control Panel](https://www.digitalocean.com/docs/droplets/how-to/create/)
|
||||
|
||||
You can also follow the steps from this video here on how to do your initial server setup:
|
||||
|
||||
* [How to do your Initial Server Setup with Ubuntu](https://youtu.be/7NL2_4HIgKU)
|
||||
|
||||
Or even better, you can follow this article here on how to automate your initial server setup with Bash:
|
||||
|
||||
[Automating Initial Server Setup with Ubuntu 18.04 with Bash](https://www.digitalocean.com/community/tutorials/automating-initial-server-setup-with-ubuntu-18-04)
|
||||
|
||||
With the 3 new servers in place, we can go ahead and focus on running our Bash script on all of them with a single command!

## The BASH Script

I will reuse the demo script from the previous chapter with some slight changes. It simply executes a few checks, like the current memory usage, the current CPU usage, the number of TCP connections, and the version of the kernel.

```bash
#!/bin/bash

##
# BASH script that checks the following:
# - Memory usage
# - CPU load
# - Number of TCP connections
# - Kernel version
##

##
# Memory check
##
server_name=$(hostname)

function memory_check() {
    echo "#######"
    echo "The current memory usage on ${server_name} is: "
    free -h
    echo "#######"
}

function cpu_check() {
    echo "#######"
    echo "The current CPU load on ${server_name} is: "
    echo ""
    uptime
    echo "#######"
}

function tcp_check() {
    echo "#######"
    echo "Total TCP connections on ${server_name}: "
    echo ""
    cat /proc/net/tcp | wc -l
    echo "#######"
}

function kernel_check() {
    echo "#######"
    echo "The exact Kernel version on ${server_name} is: "
    echo ""
    uname -r
    echo "#######"
}

function all_checks() {
    memory_check
    cpu_check
    tcp_check
    kernel_check
}

all_checks
```

Copy the code above and add it to a file called `remote_check.sh`. You can also get the script from [here](https://devdojo.com/bobbyiliev/executing-bash-script-on-multiple-remote-server).

## Running the Script on all Servers

Now that we have the script and the servers ready, and we've added those servers to our servers.txt file, we can run the following command to loop through all of the servers and execute the script remotely, without having to copy the script to each server or connect to each server individually:

```bash
for server in $(cat servers.txt) ; do ssh your_user@${server} 'bash -s' < ./remote_check.sh ; done
```

What this for loop does is go through each server in the servers.txt file and then run the following command for each item in the list:

```bash
ssh your_user@the_server_ip 'bash -s' < ./remote_check.sh
```

You would get the following output:

![The BASH script output from the multiple servers](https://imgur.com/tHjUI1r.png)

## Conclusion

This is just a really simple example of how to execute a script on multiple servers without having to copy the script to each server and without having to access the servers individually.

Of course, you could run a much more complex script on many more servers.

If you are interested in automation, I would recommend checking out the Ansible resources page on the DigitalOcean website:

[Ansible Resources](https://www.digitalocean.com/community/tags/ansible)

>{notice} This content was initially posted on [DevDojo](https://devdojo.com/bobbyiliev/bash-script-to-summarize-your-nginx-and-apache-access-logs)
# Work with JSON in BASH using jq

The `jq` command-line tool is a lightweight and flexible command-line **JSON** processor. It is great for parsing JSON output in BASH.

One of the great things about `jq` is that it is written in portable C and has zero runtime dependencies. All you need to do is download a single binary or use a package manager like apt and install it with a single command.

## Planning the script

For the demo in this tutorial, I will use an external REST API that returns a simple JSON output, called the [QuizAPI](https://quizapi.io/):

> [https://quizapi.io/](https://quizapi.io/)

If you want to follow along, make sure to get a free API key here:

> [https://quizapi.io/clientarea/settings/token](https://quizapi.io/clientarea/settings/token)

The QuizAPI is free for developers.

## Installing jq

There are many ways to install `jq` on your system. One of the most straightforward ways to do so is to use the package manager for your OS.

Here is a list of the commands that you would need to use depending on your OS:

* Install jq on Ubuntu/Debian:

```bash
sudo apt-get install jq
```

* Install jq on Fedora:

```bash
sudo dnf install jq
```

* Install jq on openSUSE:

```bash
sudo zypper install jq
```

* Install jq on Arch:

```bash
sudo pacman -S jq
```

* Install jq on Mac with Homebrew:

```bash
brew install jq
```

* Install jq on Mac with MacPorts:

```bash
port install jq
```

If you are using another OS, I would recommend taking a look at the official documentation here for more information:

> [https://stedolan.github.io/jq/download/](https://stedolan.github.io/jq/download/)

Once you have jq installed, you can check your current version by running this command:

```bash
jq --version
```

## Parsing JSON with jq

Once you have `jq` installed and your QuizAPI API key, you can parse the JSON output of the QuizAPI directly in your terminal.

First, create a variable that stores your API key:

```bash
API_KEY="YOUR_API_KEY_HERE"
```

In order to get some output from one of the endpoints of the QuizAPI, you can use the curl command:

```bash
curl "https://quizapi.io/api/v1/questions?apiKey=${API_KEY}&limit=10"
```

For a more specific output, you can use the QuizAPI URL generator here:

> [https://quizapi.io/api-config](https://quizapi.io/api-config)

After running the curl command, the output would look like this:

![Raw JSON output](https://imgur.com/OFWSepo.png)

This could be quite hard to read, but thanks to the jq command-line tool, all we need to do is pipe the curl command to jq, and we will see nicely formatted JSON output:

```bash
curl "https://quizapi.io/api/v1/questions?apiKey=${API_KEY}&limit=10" | jq
```

> Note the `| jq` at the end.

In this case, the output that you would get would look something like this:

![JSON output formatted with jq](https://imgur.com/zpsGFMq.png)

Now, this looks much nicer! The jq command-line tool formatted the output for us and added some nice coloring!

## Getting the first element with jq

Let's say that we only wanted to get the first element from the JSON output. In order to do that, we just have to specify the index that we want to see with the following syntax:

```bash
jq '.[0]'
```

Now, if we run the curl command again and pipe the output to `jq '.[0]'` like this:

```bash
curl "https://quizapi.io/api/v1/questions?apiKey=${API_KEY}&limit=10" | jq '.[0]'
```

You will only get the first element, and the output will look like this:

![First element only from the JSON output](https://imgur.com/OWMWMxE.png)

## Getting the value of a specific key only

Sometimes you might want to get only the value of a specific key. Let's say, in our example, the QuizAPI returns a list of questions along with the answers, a description, and so on, but what if you wanted to get only the questions without the additional information?

This is going to be quite straightforward with `jq`: all you need to do is add the key after the jq command, so it would look something like this:

```bash
jq '.[].question'
```

We have to add the `.[]` because the QuizAPI returns an array, and by specifying `.[]` we tell jq that we want to get the `.question` value for all of the elements in the array.

The output that you would get would look like this:

![Getting a value only for specific key with jq](https://imgur.com/ZMNGm7D.png)

As you can see, we now only get the questions without the rest of the values.
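You can try the same filter without the API key by piping a small, made-up sample through jq:

```shell
# Print only the .question value of every element in the array
echo '[{"question": "What is Bash?"}, {"question": "What is jq?"}]' | jq '.[].question'
# "What is Bash?"
# "What is jq?"
```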

## Using jq in a BASH script

Let's go ahead and create a small bash script which does the following for us:

* Gets only the first question from the output
* Gets all of the answers for that question
* Assigns the answers to variables
* Prints the question and the answers

To do that, I've put together the following script:

>{notice} Make sure to change the API_KEY part with your actual QuizAPI key:

```bash
#!/bin/bash

##
# Make an API call to QuizAPI and store the output in a variable
##
output=$(curl 'https://quizapi.io/api/v1/questions?apiKey=API_KEY&limit=10' 2>/dev/null)

##
# Get only the first question
##
output=$(echo "$output" | jq '.[0]')

##
# Get the question
##
question=$(echo "$output" | jq '.question')

##
# Get the answers
##
answer_a=$(echo "$output" | jq '.answers.answer_a')
answer_b=$(echo "$output" | jq '.answers.answer_b')
answer_c=$(echo "$output" | jq '.answers.answer_c')
answer_d=$(echo "$output" | jq '.answers.answer_d')

##
# Output the question
##
echo "
Question: ${question}

A) ${answer_a}
B) ${answer_b}
C) ${answer_c}
D) ${answer_d}

"
```

If you run the script, you would get the following output:

![Using jq in a BASH script](https://imgur.com/JZt4wzM.png)

We can even go further by making this interactive, so that we could actually choose the answer directly in our terminal.

There is already a bash script that does this by using the QuizAPI and `jq`. You can take a look at that script here:

* [https://github.com/QuizApi/QuizAPI-BASH/blob/master/quiz.sh](https://github.com/QuizApi/QuizAPI-BASH/blob/master/quiz.sh)

## Conclusion

The `jq` command-line tool is an amazing tool that gives you the power to work with JSON directly in your BASH terminal.

That way, you can easily interact with all kinds of different REST APIs with BASH.

For more information, you could take a look at the official documentation here:

* [https://stedolan.github.io/jq/manual/](https://stedolan.github.io/jq/manual/)

And for more information on the **QuizAPI**, you could take a look at the official documentation here:

* [https://quizapi.io/docs/1.0/overview](https://quizapi.io/docs/1.0/overview)

>{notice} This content was initially posted on [DevDojo.com](https://devdojo.com/bobbyiliev/how-to-work-with-json-in-bash-using-jq)
# Working with the Cloudflare API with Bash

I host all of my websites on **DigitalOcean** Droplets, and I also use Cloudflare as my CDN provider. One of the benefits of using Cloudflare is that it reduces the overall traffic to your server and also hides your actual server IP address behind their CDN.

My personal favorite Cloudflare feature is their free DDoS protection. It has saved my servers multiple times from different DDoS attacks. They have a cool API that you could use to enable and disable their DDoS protection easily.

This chapter is going to be an exercise! I challenge you to go ahead and write a short bash script that would enable and disable the Cloudflare DDoS protection for your server automatically if needed!

## Prerequisites

Before following this guide, please set up your Cloudflare account and get your website ready. If you are not sure how to do that, you can follow these steps: [Create a Cloudflare account and add a website](https://support.cloudflare.com/hc/en-us/articles/201720164-Step-2-Create-a-Cloudflare-account-and-add-a-website).

Once you have your Cloudflare account, make sure to obtain the following information:

* A Cloudflare account
* Cloudflare API key
* Cloudflare Zone ID

Also, make sure curl is installed on your server:

```bash
curl --version
```

If curl is not installed, you need to run the following:

* For RedHat/CentOS:

```bash
yum install curl
```

* For Debian/Ubuntu:

```bash
apt-get install curl
```

## Challenge - Script requirements

The script needs to monitor the CPU usage on your server, and if the CPU usage gets high based on the number of vCPUs, it would enable the Cloudflare DDoS protection automatically via the Cloudflare API.

The main features of the script should be:

* It checks the CPU load on the server
* In case of a CPU spike, the script triggers an API call to Cloudflare and enables the DDoS protection feature for the specified zone
* After the CPU load is back to normal, the script disables the "I'm under attack" option and sets it back to normal
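As a hint, the load-comparison step can be sketched like this. This is a dry-run sketch only: the "spike" threshold (1-minute load above the vCPU count) is an assumption you may want to tune, and the echo lines stand in for the real Cloudflare API calls:

```shell
#!/bin/bash

# Dry-run sketch of the CPU-spike check. The threshold logic is a
# simple assumption: a 1-minute load above the vCPU count counts
# as a spike. Swap the echo lines for real Cloudflare API calls.

cpus=$(nproc)                          # number of vCPUs
load=$(cut -d ' ' -f 1 /proc/loadavg)  # 1-minute load average

# awk handles the floating-point comparison that [ ] cannot
spike=$(awk -v l="$load" -v c="$cpus" 'BEGIN { print (l > c) ? 1 : 0 }')

if [ "$spike" -eq 1 ]; then
    echo "Load ${load} on ${cpus} vCPUs looks like a spike: would enable DDoS protection"
else
    echo "Load ${load} on ${cpus} vCPUs is normal: would keep protection disabled"
fi
```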

## Example script

I have already prepared a demo script which you could use as a reference. But I encourage you to try and write the script yourself first, and only then take a look at my script!

To download the script, just run the following command:

```bash
wget https://raw.githubusercontent.com/bobbyiliev/cloudflare-ddos-protection/main/protection.sh
```

Open the script with your favorite text editor:

```bash
nano protection.sh
```

And update the following details with your Cloudflare details:

```bash
CF_CONE_ID=YOUR_CF_ZONE_ID
CF_EMAIL_ADDRESS=YOUR_CF_EMAIL_ADDRESS
CF_API_KEY=YOUR_CF_API_KEY
```

After that, make the script executable:

```bash
chmod +x ~/protection.sh
```

Finally, set up 2 cron jobs so that the script runs every 30 seconds. To edit your crontab, run:

```bash
crontab -e
```

And add the following content:

```bash
* * * * * /path-to-the-script/cloudflare/protection.sh
* * * * * ( sleep 30 ; /path-to-the-script/cloudflare/protection.sh )
```

Note that you need to change the path to the script to the actual path where you've stored the script.

## Conclusion

This is quite a straightforward and budget-friendly solution. One of the downsides of the script is that if your server becomes unresponsive due to an attack, the script might not be triggered at all.

Of course, a better approach would be to use a monitoring system like Nagios, and based on the statistics from the monitoring system, you could then trigger the script. Still, this script challenge could be a good learning experience!

Here is another great resource on how to use the Discord API and send notifications to your Discord channel with a Bash script:

[How To Use Discord Webhooks to Get Notifications for Your Website Status on Ubuntu 18.04](https://www.digitalocean.com/community/tutorials/how-to-use-discord-webhooks-to-get-notifications-for-your-website-status-on-ubuntu-18-04)

>{notice} This content was initially posted on [DevDojo](https://devdojo.com/bobbyiliev/bash-script-to-automatically-enable-cloudflare-ddos-protection)
# BASH Script to Summarize Your NGINX and Apache Access Logs

One of the first things that I usually do when I notice high CPU usage on one of my Linux servers is to check the process list with either top or htop. In case I notice a lot of Apache or Nginx processes, I quickly check my access logs to determine what has caused or is causing the CPU spike on my server, or to figure out if anything malicious is going on.

Sometimes reading the logs can be quite intimidating, as the log might be huge and going through it manually could take a lot of time. Also, the raw log format could be confusing for people with less experience.

Just like the previous chapter, this chapter is going to be a challenge! You need to write a short bash script that summarizes the whole access log for you, without the need to install any additional software.

## Script requirements

This BASH script needs to parse and summarize your access logs and provide you with very useful information, like:

* The top 20 pages with the most POST requests
* The top 20 pages with the most GET requests
* The top 20 IP addresses and their geo-location
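As a starting point, the GET/POST and IP counts can be sketched with standard tools. The field positions below assume the common combined log format, where the request method and path land in fields 6 and 7, so adjust them if your log format differs; the sample log lines are fabricated so the sketch is self-contained:

```shell
#!/bin/bash

# Sketch: summarize an access log with awk/sort/uniq.
# Assumes the combined log format ($6 is '"GET' or '"POST', $7 is the path).
log=access.log

# Fabricated sample lines so the sketch runs on its own
cat > "$log" <<'EOF'
203.0.113.5 - - [10/Oct/2023:13:55:36 +0000] "GET /index.php HTTP/1.1" 200 512
203.0.113.5 - - [10/Oct/2023:13:55:37 +0000] "POST /xmlrpc.php HTTP/1.1" 200 128
203.0.113.9 - - [10/Oct/2023:13:55:38 +0000] "POST /xmlrpc.php HTTP/1.1" 200 128
EOF

echo "Top pages by POST requests:"
awk '$6 == "\"POST" { print $7 }' "$log" | sort | uniq -c | sort -rn | head -n 20

echo "Top pages by GET requests:"
awk '$6 == "\"GET" { print $7 }' "$log" | sort | uniq -c | sort -rn | head -n 20

echo "Top IP addresses:"
awk '{ print $1 }' "$log" | sort | uniq -c | sort -rn | head -n 20
```

The geo-location lookup is the harder part of the challenge; the demo script below shows one way to approach it.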

## Example script

I have already prepared a demo script which you could use as a reference. But I encourage you to try and write the script yourself first, and only then take a look at my script!

In order to download the script, you can either clone the repository with the following command:

```bash
git clone https://github.com/bobbyiliev/quick_access_logs_summary.git
```

Or run the following command, which will download the script into your current directory:

```bash
wget https://raw.githubusercontent.com/bobbyiliev/quick_access_logs_summary/master/spike_check
```

The script does not make any changes to your system; it only reads the content of your access log and summarizes it for you. However, once you've downloaded the file, make sure to review the content yourself.

## Running the script

All that you have to do once the script has been downloaded is to make it executable and run it.

To do that, run the following command to make the script executable:

```bash
chmod +x spike_check
```

Then run the script:

```bash
./spike_check /path/to/your/access_log
```

Make sure to change the path to the file with the actual path to your access log. For example, if you are using Apache on an Ubuntu server, the exact command would look like this:

```bash
./spike_check /var/log/apache2/access.log
```

If you are using Nginx, the exact command would be almost the same, but with the path to the Nginx access log:

```bash
./spike_check /var/log/nginx/access.log
```

## Understanding the output

Once you run the script, it might take a while depending on the size of the log.

The output that you would see should look like this:

![BASH script access logs parser](https://imgur.com/e1zlZV4.png)

Essentially, what we can tell in this case is that we've received 16 POST requests to our xmlrpc.php file, which is often used by attackers to try and exploit WordPress websites by using various username and password combinations.

In this specific case, this was not a huge brute force attack, but it gives us an early indication, and we can take action to prevent a larger attack in the future.

We can also see that there were a couple of Russian IP addresses accessing our site, so in case you do not expect any traffic from Russia, you might want to block those IP addresses as well.

## Conclusion

This is an example of a simple BASH script that allows you to quickly summarize your access logs and determine if anything malicious is going on.

Of course, you might want to also manually go through the logs, but it is a good challenge to try and automate this with Bash!

>{notice} This content was initially posted on [DevDojo](https://devdojo.com/bobbyiliev/bash-script-to-summarize-your-nginx-and-apache-access-logs)
# Sending emails with Bash and SSMTP

SSMTP is a tool that delivers emails from a computer or a server to a configured mail host.

SSMTP is not an email server itself and does not receive emails or manage a queue.

One of its primary uses is for forwarding automated email (like system alerts) off your machine and to an external email address.

## Prerequisites

You will need the following things in order to complete this tutorial successfully:

* Access to an Ubuntu 18.04 server as a non-root user with sudo privileges, and an active firewall installed on your server. To set these up, please refer to our [Initial Server Setup Guide for Ubuntu 18.04](https://www.digitalocean.com/community/tutorials/initial-server-setup-with-ubuntu-18-04)

* An SMTP server, along with an SMTP username and password. This would also work with Gmail's SMTP server, or you could set up your own SMTP server by following the steps from this tutorial on [How to Install and Configure Postfix as a Send-Only SMTP Server on Ubuntu 16.04](https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-postfix-as-a-send-only-smtp-server-on-ubuntu-16-04)

## Installing SSMTP

In order to install SSMTP, you'll need to first update your apt cache with:

```bash
sudo apt update
```

Then run the following command to install SSMTP:

```bash
sudo apt install ssmtp
```

Another thing that you will need to install is `mailutils`. To do that, run the following command:

```bash
sudo apt install mailutils
```

## Configuring SSMTP

Now that you have `ssmtp` installed, in order to configure it to use your SMTP server when sending emails, you need to edit the SSMTP configuration file.

Use your favourite text editor to open the `/etc/ssmtp/ssmtp.conf` file:

```bash
sudo nano /etc/ssmtp/ssmtp.conf
```

You need to include your SMTP configuration:

```
root=postmaster
mailhub=<^>your_smtp_host.com<^>:587
hostname=<^>your_hostname<^>
AuthUser=<^>your_gmail_username@your_smtp_host.com<^>
AuthPass=<^>your_gmail_password<^>
FromLineOverride=YES
UseSTARTTLS=YES
```

Save the file and exit.

## Sending emails with SSMTP

Once your configuration is done, in order to send an email, just run the following command:

```bash
echo "<^>Here add your email body<^>" | mail -s "<^>Here specify your email subject<^>" <^>your_recepient_email@yourdomain.com<^>
```

You can run this directly in your terminal or include it in your bash scripts.
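For example, a monitoring script might build the subject and body dynamically. The sketch below only prints the mail command it would run, so that it is safe to try before SSMTP is configured; the recipient address is a hypothetical placeholder:

```shell
#!/bin/bash

# Hypothetical disk-usage alert; the recipient is a placeholder.
recipient="you@example.com"
subject="Disk usage report for $(hostname)"
body=$(df -h /)

# Dry run: print the command instead of executing it.
# Remove the leading 'echo' to actually send the email through ssmtp.
echo "printf '%s' \"\$body\" | mail -s \"${subject}\" ${recipient}"
```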

## Sending a file with SSMTP (optional)

If you need to send files as attachments, you can use `mpack`.

To install `mpack`, run the following command:

```bash
sudo apt install mpack
```

Next, in order to send an email with a file attached, run the following command:

```bash
mpack -s "<^>Your Subject here<^>" your_file.zip <^>your_recepient_email@yourdomain.com<^>
```

The above command sends an email to `<^>your_recepient_email@yourdomain.com<^>` with `<^>your_file.zip<^>` attached.

## Conclusion

SSMTP is a great and reliable way to implement SMTP email functionality directly in bash scripts.

For more information about SSMTP, I would recommend checking the official documentation [here](https://wiki.archlinux.org/index.php/SSMTP).

>{notice} This content was initially posted on the [DigitalOcean community forum](https://www.digitalocean.com/community/questions/how-to-send-emails-from-a-bash-script-using-ssmtp).
# Password Generator Bash Script

It's not an uncommon situation where you will need to generate a random password that you can use for a software installation or when you sign up to a website.

There are a lot of options to achieve this. You can use a password manager/vault, where you often have the option to randomly generate a password, or you can use a website that generates the password on your behalf.

You can also use Bash in your terminal (command line) to generate a password that you can quickly use. There are a lot of ways to achieve that, and I will make sure to cover a few of them and leave it up to you to choose which option is most suitable for your needs.

## :warning: Security

**This script is intended to practice your bash scripting skills. You can have fun while doing simple projects with BASH, but security is not a joke, so please make sure you do not save your passwords in plain text in a local file or write them down by hand on a piece of paper.**

**I highly recommend everyone to use secure and trusted providers to generate and save their passwords.**

## Script summary

Let me first do a quick summary of what our script is going to do:

1. We will have the option to choose the password character length when the script is executed.
2. The script will then generate 10 random passwords with the length that was specified in step 1.

## Prerequisites

You will need a bash terminal and a text editor. You can use any text editor, like vi, vim, nano, or Visual Studio Code.

I'm running the script locally on my Linux laptop, but if you're using a Windows PC, you can ssh to any server of your choice and execute the script there.

Although the script is pretty simple, having some basic BASH scripting knowledge will help you better understand the script and how it's working.

## Generate a random password

One of the great benefits of Linux is that you can do a lot of things using different methods. When it comes to generating a random string of characters, it's no different.

You can use several commands in order to generate a random string of characters. I will cover a few of them and provide some examples.

- Using the ```date``` command:

The date command will output the current date and time. However, we can further manipulate the output in order to use it as a randomly generated password. We can hash the date using md5, sha, or just run it through base64. These are a few examples:

```
date | md5sum
94cb1cdecfed0699e2d98acd9a7b8f6d -
```

Using sha256sum:

```
date | sha256sum
30a0c6091e194c8c7785f0d7bb6e1eac9b76c0528f02213d1b6a5fbcc76ceff4 -
```

Using base64:

```
date | base64
0YHQsSDRj9C90YMgMzAgMTk6NTE6NDggRUVUIDIwMjEK
```

- We can also use openssl in order to generate pseudo-random bytes and run the output through base64. An example output would be:

```
openssl rand -base64 10
9+soM9bt8mhdcw==
```

Keep in mind that openssl might not be installed on your system, so it's likely that you will need to install it first in order to use it.

- The preferred way is to use the pseudorandom number generator ```/dev/urandom```, since it is intended for most cryptographic purposes. We also need to manipulate the output using ```tr``` in order to translate it. An example command is:

```
tr -cd '[:alnum:]' < /dev/urandom | fold -w10 | head -n 1
```

With this command, we take the output from /dev/urandom and translate it with ```tr```, keeping only letters and digits, and print the desired number of characters.

## The script

First, we begin the script with the shebang. We use it to tell the operating system which interpreter to use to parse the rest of the file:

```
#!/bin/bash
```

We can then continue and ask the user for some input. In this case, we would like to know how many characters the password needs to be:

```
# Ask user for password length
clear
printf "\n"
read -p "How many characters would you like the password to have? " pass_length
printf "\n"
```

Then we generate the passwords and print them so the user can use them:

```
# This is where the magic happens!
# Generate a list of 10 strings and cut each to the desired length provided by the user
pass_output=$(for i in {1..10}; do tr -cd '[:alnum:]' < /dev/urandom | fold -w"${pass_length}" | head -n 1; done)

# Print the strings
printf "%s\n" "$pass_output"
printf "Goodbye, ${USER}\n"
```

## The full script

```
#!/bin/bash
#=======================================
# Password generator
#=======================================

# Ask user for the string length
clear
printf "\n"
read -p "How many characters would you like the password to have? " pass_length
printf "\n"

# This is where the magic happens!
# Generate a list of 10 strings and cut each to the desired length provided by the user
pass_output=$(for i in {1..10}; do tr -cd '[:alnum:]' < /dev/urandom | fold -w"${pass_length}" | head -n 1; done)

# Print the strings
printf "%s\n" "$pass_output"
printf "Goodbye, ${USER}\n"
```

## Conclusion

This is pretty much how you can use a simple bash script to generate random passwords.

:warning: **As already mentioned, please make sure to use strong passwords in order to make sure your account is protected. Also, whenever possible, use two-factor authentication, as this provides an additional layer of security for your account.**

While the script is working fine, it expects that the user will provide the requested input. In order to prevent any issues, you would need to do some more advanced checks on the user input to make sure the script will continue to work fine even if the provided input does not match our needs.
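One such check can be sketched with a small helper function. The function name is just an illustration; the regular expression simply requires the input to be a positive whole number:

```shell
#!/bin/bash

# Hypothetical helper: accept only a positive whole number as a length
is_valid_length() {
    [[ "$1" =~ ^[1-9][0-9]*$ ]]
}

# Example checks
is_valid_length "12" && echo "12 is a valid length"
is_valid_length "abc" || echo "abc is not a valid length"
```

In the script itself, you would call the helper right after the `read` and print an error message and exit if it fails, instead of passing an empty or non-numeric value to `fold`.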

## Contributed by

[Alex Georgiev](https://twitter.com/alexgeorgiev17)
# Redirection in Bash

A Linux superuser must have a good knowledge of pipes and redirection in Bash. They are essential components of the system and are often helpful in the field of Linux system administration.

When you run a command like ``ls`` or ``cat``, you get some output on the terminal. If you write a wrong command, pass a wrong flag, or pass a wrong command-line argument, you get error output on the terminal. In both cases, you are given some text. It may seem like "just text" to you, but the system treats this text differently: each stream of text carries an identifier known as a File Descriptor (fd).

In Linux, there are 3 default File Descriptors: **STDIN** (0), **STDOUT** (1), and **STDERR** (2).

* **STDIN** (fd: 0): Manages the input in the terminal.
* **STDOUT** (fd: 1): Manages the output text in the terminal.
* **STDERR** (fd: 2): Manages the error text in the terminal.

## Difference between pipes and redirections

Both *pipes* and *redirections* redirect streams `(file descriptors)` of the process being executed. The main difference is that *redirections* deal with `file streams`, sending the output stream to a file or sending the content of a given file to the input stream of the process.

On the other hand, a pipe connects two commands by sending the output stream of the first one to the input stream of the second one, without any redirections specified.
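The difference can be seen with a quick, self-contained example:

```shell
#!/bin/bash

# Redirection: send the output stream to a file
echo "one two three" > words.txt

# Pipe: send the output stream straight to another command's input
echo "one two three" | wc -w
# prints: 3

# Redirection again, this time feeding the file to a command's input
wc -w < words.txt
# prints: 3
```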

## STDIN (Standard Input)

When you enter some input text for a command that asks for it, you are actually entering the text to the **STDIN** file descriptor. Run the ``cat`` command without any command-line arguments. It may seem that the process has paused, but in fact it's ``cat`` asking for **STDIN**. ``cat`` is a simple program and will print the text passed to **STDIN**. However, you can extend the use case by redirecting the input to commands that take **STDIN**.

Example with ``cat``:

```
cat << EOF
Hello World!
How are you?
EOF
```

This will simply print the provided text on the terminal screen:

```
Hello World!
How are you?
```

The same can be done with other commands that take input via STDIN, like ``wc``:

```
wc -l << EOF
Hello World!
How are you?
EOF
```

The ``-l`` flag with ``wc`` counts the number of lines. This block of bash code will print the number of lines to the terminal screen:

```
2
```
|
||||
|
||||
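Besides heredocs, **STDIN** can also be redirected from a file with the ``<`` operator (the file name here is just an example):

```
printf 'Hello World!\nHow are you?\n' > greeting.txt
wc -l < greeting.txt   # reads the file via STDIN and prints 2
```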
## STDOUT (Standard Output)

The normal, non-error text on your terminal screen is printed via the **STDOUT** file descriptor. The **STDOUT** of a command can be redirected into a file, so that the output of the command is written to the file instead of being printed on the terminal screen.

This is done with the help of the ``>`` and ``>>`` operators.

Example:

```
echo "Hello World!" > file.txt
```

The above command will not print "Hello World!" on the terminal screen; it will instead create a file called ``file.txt`` and write the "Hello World!" string to it.

This can be verified by running the ``cat`` command on the ``file.txt`` file.

```
cat file.txt
```

However, every time you redirect the **STDOUT** of any command to the same file with ``>``, the existing contents of the file are overwritten by the new ones.

Example:

```
echo "Hello World!" > file.txt
echo "How are you?" > file.txt
```

On running ``cat`` on the ``file.txt`` file:

```
cat file.txt
```

You will only get the "How are you?" string printed:

```
How are you?
```

This is because the "Hello World!" string has been overwritten.

This behaviour can be avoided using the ``>>`` operator, which appends to the file instead of overwriting it.

The above example can be written as:

```
echo "Hello World!" > file.txt
echo "How are you?" >> file.txt
```

On running ``cat`` on the ``file.txt`` file, you will get the desired result:

```
Hello World!
How are you?
```

Alternatively, the redirection operator for **STDOUT** can also be written as ``1>``:

```
echo "Hello World!" 1> file.txt
```
## STDERR (Standard Error)

The error text on the terminal screen is printed via the **STDERR** of the command. For example:

```
ls --hello
```

would give an error message. This error message is the **STDERR** of the command.

**STDERR** can be redirected using the ``2>`` operator:

```
ls --hello 2> error.txt
```

This command will redirect the error message to the ``error.txt`` file. This can be verified by running the ``cat`` command on the ``error.txt`` file.

You can also use the ``2>>`` operator to append **STDERR** to a file, just like you used ``>>`` for **STDOUT**.

Error messages in Bash scripts can sometimes be undesirable. You can choose to discard them by redirecting the error message to the ``/dev/null`` file.

``/dev/null`` is a pseudo-device that takes in text and immediately discards it.

The above example can be written as follows to ignore the error text completely:

```
ls --hello 2> /dev/null
```

Of course, you can redirect both **STDOUT** and **STDERR** for the same command or script:

```
./install_package.sh > output.txt 2> error.txt
```

Both of them can be redirected to the same file as well:

```
./install_package.sh > file.txt 2> file.txt
```

There is also a shorter way to do this:

```
./install_package.sh > file.txt 2>&1
```
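One thing to keep in mind with ``2>&1`` is that redirections are processed from left to right, so the order matters. A small sketch:

```
# STDOUT goes to the file first, then STDERR is duplicated to
# wherever STDOUT currently points (the file):
ls --hello > file.txt 2>&1

# Different result: STDERR is duplicated to the terminal (where
# STDOUT points at that moment), and only STDOUT goes to the file:
ls --hello 2>&1 > file.txt
```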
# Piping

So far we have seen how to redirect **STDOUT**, **STDIN** and **STDERR** to and from a file.

To connect the output of one program *(command)* to the input of another program *(command)*, you can use a vertical bar `|`, known as a pipe.

Example:

```
ls | grep ".txt"
```

This command will list the files in the current directory and pass the output to the *`grep`* command, which then filters it to only show the entries that contain the string ".txt".

Syntax:

```
[time [-p]] [!] command1 [ | or |& command2 ] …
```

You can also build arbitrary chains of commands by piping them together to achieve a powerful result.

This example creates a listing of every user which owns a file in a given directory, as well as how many files and directories they own:

```
ls -l /projects/bash_scripts | tail -n +2 | sed 's/\s\s*/ /g' | cut -d ' ' -f 3 | sort | uniq -c
```

Output:

```
8 anne
34 harry
37 tina
18 ryan
```
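Pipelines like this are easiest to build incrementally, checking the output after each stage. Here is a small self-contained sketch of the same ``sort | uniq -c`` pattern (the sample names are made up):

```
printf 'tina\nanne\ntina\nharry\ntina\n' | sort | uniq -c | sort -rn
```

``sort`` groups identical lines together, ``uniq -c`` prefixes each distinct line with its count, and ``sort -rn`` puts the most frequent one first.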
# HereDocument

The symbol `<<` can be used to create a temporary file (heredoc) and redirect from it at the command line:

```
COMMAND << EOF
ContentOfDocument
...
...
EOF
```

Note here that `EOF` represents the delimiter (end of file) of the heredoc. In fact, any alphanumeric word can be used in its place to signify the start and the end of the file. For instance, this is a valid heredoc:

```
cat << randomword1
This script will print these lines on the terminal.
Note that cat can read from standard input. Using this heredoc, we can
create a temporary file with these lines as its content and pipe that
into cat.
randomword1
```

Effectively it will appear as if the contents of the heredoc are piped into the command. This can make a script very clean if multiple lines need to be piped into a program.

Further, we can attach more pipes as shown:

```
cat << randomword1 | wc
This script will print these lines on the terminal.
Note that cat can read from standard input. Using this heredoc, we can
create a temporary file with these lines as its content and pipe that
into cat.
randomword1
```

Also, pre-defined variables can be used inside heredocs.
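For example, variables are expanded inside a heredoc; quoting the delimiter (`'EOF'`) turns the expansion off:

```
name="Bobby"

cat << EOF
Hello, $name!
EOF

cat << 'EOF'
Hello, $name!
EOF
```

The first heredoc prints `Hello, Bobby!`; the second prints the line literally, dollar sign and all.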
# HereString

Herestrings are quite similar to heredocs but use `<<<`. They are used for single-line strings that have to be piped into some program. This looks cleaner than heredocs, as we don't have to specify a delimiter:

```
wc <<<"this is an easy way of passing strings to the stdin of a program (here wc)"
```

Just like heredocs, herestrings can contain variables.
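For instance, a herestring is a handy way to feed a variable's value to a command without an `echo ... |` pipe (the variable name is just an example):

```
sentence="this is an easy way of passing strings"
wc -w <<< "$sentence"   # counts the words held in the variable
```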
## Summary

|**Operator** |**Description** |
|:---|:---|
|`>`|Save output to a file|
|`>>`|Append output to a file|
|`<`|Read input from a file|
|`2>`|Redirect error messages|
|`\|`|Send the output from one program as input to another program|
|`<<`|Pipe multiple lines into a program cleanly|
|`<<<`|Pipe a single line into a program cleanly|
@@ -1,336 +0,0 @@
# Automatic WordPress on LAMP installation with BASH

Here is an example of a full LAMP and WordPress installation that works on any Debian-based machine.

# Prerequisites

- A Debian-based machine (Ubuntu, Debian, Linux Mint, etc.)

# Planning the functionality

Let's start again by going over the main functionality of the script:

**LAMP Installation**

* Update the package manager
* Install a firewall (ufw)
* Allow SSH, HTTP and HTTPS traffic
* Install Apache2
* Install & configure MariaDB
* Install PHP and required plugins
* Enable all required Apache2 mods

**Apache Virtual Host Setup**

* Create a directory in `/var/www`
* Configure permissions on the directory
* Create the `$DOMAIN` file under `/etc/apache2/sites-available` and append the required VirtualHost content
* Enable the site
* Restart Apache2

**SSL Config**

* Generate the OpenSSL certificate
* Append the SSL settings to the `ssl-params.conf` file
* Append the SSL config to the VirtualHost file
* Enable SSL
* Reload Apache2

**Database Config**

* Create a database
* Create a user
* Flush privileges

**WordPress Config**

* Install required WordPress PHP plugins
* Install WordPress
* Append the required information to the `wp-config.php` file

Without further ado, let's start writing the script.

# The script

We start by setting our variables and asking the user to input their domain:

```bash
echo 'Please enter your domain of preference without www:'
read DOMAIN
echo "Please enter your Database username:"
read DBUSERNAME
echo "Please enter your Database password:"
read DBPASSWORD
echo "Please enter your Database name:"
read DBNAME

ip=`hostname -I | cut -f1 -d' '`
```
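Note that a plain `read` echoes the password back to the terminal as it is typed. As an optional hardening (not part of the original script), Bash's `read -s` suppresses the echo:

```bash
echo "Please enter your Database password:"
read -s DBPASSWORD   # -s: silent mode, the typed input is not displayed
echo                 # print a newline, since the user's Enter key was not echoed
```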
We are now ready to start writing our functions. Start by creating the `lamp_install()` function. Inside it, we are going to update the system; install ufw; allow SSH, HTTP and HTTPS traffic; install Apache2, MariaDB and PHP; and enable all required Apache2 mods.

```bash
lamp_install () {
apt update -y
apt install ufw -y
ufw enable
ufw allow OpenSSH
ufw allow in "WWW Full"

apt install apache2 -y
apt install mariadb-server -y
mysql_secure_installation # interactive: answer the hardening prompts
apt install php libapache2-mod-php php-mysql -y
sed -i "2d" /etc/apache2/mods-enabled/dir.conf
sed -i "2i\\\tDirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm" /etc/apache2/mods-enabled/dir.conf
systemctl reload apache2
}
```
Next, we are going to create the `apache_virtual_host_setup()` function. Inside it, we are going to create a directory in `/var/www`, configure permissions on the directory, create the `$DOMAIN` file under `/etc/apache2/sites-available` and append the required VirtualHost content, enable the site, and restart Apache2.

```bash
apache_virtual_host_setup () {
mkdir /var/www/$DOMAIN
chown -R $USER:$USER /var/www/$DOMAIN

echo "<VirtualHost *:80>" >> /etc/apache2/sites-available/$DOMAIN.conf
echo -e "\tServerName $DOMAIN" >> /etc/apache2/sites-available/$DOMAIN.conf
echo -e "\tServerAlias www.$DOMAIN" >> /etc/apache2/sites-available/$DOMAIN.conf
echo -e "\tServerAdmin webmaster@localhost" >> /etc/apache2/sites-available/$DOMAIN.conf
echo -e "\tDocumentRoot /var/www/$DOMAIN" >> /etc/apache2/sites-available/$DOMAIN.conf
echo -e '\tErrorLog ${APACHE_LOG_DIR}/error.log' >> /etc/apache2/sites-available/$DOMAIN.conf
echo -e '\tCustomLog ${APACHE_LOG_DIR}/access.log combined' >> /etc/apache2/sites-available/$DOMAIN.conf
echo "</VirtualHost>" >> /etc/apache2/sites-available/$DOMAIN.conf
a2ensite $DOMAIN
a2dissite 000-default
systemctl reload apache2
}
```
Next, we are going to create the `ssl_config()` function. Inside it, we are going to generate the OpenSSL certificate, append the SSL settings to the `ssl-params.conf` file, append the SSL config to the VirtualHost file, enable SSL, and reload Apache2.

```bash
ssl_config () {
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/apache-selfsigned.key -out /etc/ssl/certs/apache-selfsigned.crt

echo "SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH" >> /etc/apache2/conf-available/ssl-params.conf
echo "SSLProtocol All -SSLv2 -SSLv3 -TLSv1 -TLSv1.1" >> /etc/apache2/conf-available/ssl-params.conf
echo "SSLHonorCipherOrder On" >> /etc/apache2/conf-available/ssl-params.conf
echo "Header always set X-Frame-Options DENY" >> /etc/apache2/conf-available/ssl-params.conf
echo "Header always set X-Content-Type-Options nosniff" >> /etc/apache2/conf-available/ssl-params.conf
echo "SSLCompression off" >> /etc/apache2/conf-available/ssl-params.conf
echo "SSLUseStapling on" >> /etc/apache2/conf-available/ssl-params.conf
echo "SSLStaplingCache \"shmcb:logs/stapling-cache(150000)\"" >> /etc/apache2/conf-available/ssl-params.conf
echo "SSLSessionTickets Off" >> /etc/apache2/conf-available/ssl-params.conf

cp /etc/apache2/sites-available/default-ssl.conf /etc/apache2/sites-available/default-ssl.conf.bak
sed -i "s/var\/www\/html/var\/www\/$DOMAIN/1" /etc/apache2/sites-available/default-ssl.conf
sed -i "s/etc\/ssl\/certs\/ssl-cert-snakeoil.pem/etc\/ssl\/certs\/apache-selfsigned.crt/1" /etc/apache2/sites-available/default-ssl.conf
sed -i "s/etc\/ssl\/private\/ssl-cert-snakeoil.key/etc\/ssl\/private\/apache-selfsigned.key/1" /etc/apache2/sites-available/default-ssl.conf
sed -i "4i\\\t\tServerName $ip" /etc/apache2/sites-available/default-ssl.conf
sed -i "22i\\\tRedirect permanent \"/\" \"https://$ip/\"" /etc/apache2/sites-available/000-default.conf
a2enmod ssl
a2enmod headers
a2ensite default-ssl
a2enconf ssl-params
systemctl reload apache2
}
```
Next, we are going to create the `db_config()` function. Inside it, we are going to create the database, create the user, grant the user all privileges on the database, and flush privileges.

```bash
db_config () {
mysql -e "CREATE DATABASE $DBNAME;"
mysql -e "GRANT ALL ON $DBNAME.* TO '$DBUSERNAME'@'localhost' IDENTIFIED BY '$DBPASSWORD' WITH GRANT OPTION;"
mysql -e "FLUSH PRIVILEGES;"
}
```
Next, we are going to create the `wordpress_config()` function. Inside it, we call `db_config`, install the required PHP extensions, allow `.htaccess` overrides for the site, download and extract the latest version of WordPress into the `/var/www/$DOMAIN` directory, set ownership and permissions, and append the required content to the `wp-config.php` file.

```bash
wordpress_config () {
db_config

apt install php-curl php-gd php-mbstring php-xml php-xmlrpc php-soap php-intl php-zip -y
systemctl restart apache2
sed -i "8i\\\t<Directory /var/www/$DOMAIN/>" /etc/apache2/sites-available/$DOMAIN.conf
sed -i "9i\\\t\tAllowOverride All" /etc/apache2/sites-available/$DOMAIN.conf
sed -i "10i\\\t</Directory>" /etc/apache2/sites-available/$DOMAIN.conf

a2enmod rewrite
systemctl restart apache2

apt install curl -y
cd /tmp
curl -O https://wordpress.org/latest.tar.gz
tar xzvf latest.tar.gz
touch /tmp/wordpress/.htaccess
cp /tmp/wordpress/wp-config-sample.php /tmp/wordpress/wp-config.php
mkdir /tmp/wordpress/wp-content/upgrade
cp -a /tmp/wordpress/. /var/www/$DOMAIN
chown -R www-data:www-data /var/www/$DOMAIN
find /var/www/$DOMAIN/ -type d -exec chmod 750 {} \;
find /var/www/$DOMAIN/ -type f -exec chmod 640 {} \;
curl -s https://api.wordpress.org/secret-key/1.1/salt/ >> /var/www/$DOMAIN/wp-config.php
echo "define('FS_METHOD', 'direct');" >> /var/www/$DOMAIN/wp-config.php
sed -i "51,58d" /var/www/$DOMAIN/wp-config.php
sed -i "s/database_name_here/$DBNAME/1" /var/www/$DOMAIN/wp-config.php
sed -i "s/username_here/$DBUSERNAME/1" /var/www/$DOMAIN/wp-config.php
sed -i "s/password_here/$DBPASSWORD/1" /var/www/$DOMAIN/wp-config.php
}
```
And finally, we are going to create the `execute()` function. Inside it, we call all the functions we created above:

```bash
execute () {
lamp_install
apache_virtual_host_setup
ssl_config
wordpress_config
}
```

With this, the script is ready to run. You can find the full script in the next section.
# The full script

```bash
#!/bin/bash

echo 'Please enter your domain of preference without www:'
read DOMAIN
echo "Please enter your Database username:"
read DBUSERNAME
echo "Please enter your Database password:"
read DBPASSWORD
echo "Please enter your Database name:"
read DBNAME

ip=`hostname -I | cut -f1 -d' '`

lamp_install () {
apt update -y
apt install ufw -y
ufw enable
ufw allow OpenSSH
ufw allow in "WWW Full"

apt install apache2 -y
apt install mariadb-server -y
mysql_secure_installation # interactive: answer the hardening prompts
apt install php libapache2-mod-php php-mysql -y
sed -i "2d" /etc/apache2/mods-enabled/dir.conf
sed -i "2i\\\tDirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm" /etc/apache2/mods-enabled/dir.conf
systemctl reload apache2
}

apache_virtual_host_setup () {
mkdir /var/www/$DOMAIN
chown -R $USER:$USER /var/www/$DOMAIN

echo "<VirtualHost *:80>" >> /etc/apache2/sites-available/$DOMAIN.conf
echo -e "\tServerName $DOMAIN" >> /etc/apache2/sites-available/$DOMAIN.conf
echo -e "\tServerAlias www.$DOMAIN" >> /etc/apache2/sites-available/$DOMAIN.conf
echo -e "\tServerAdmin webmaster@localhost" >> /etc/apache2/sites-available/$DOMAIN.conf
echo -e "\tDocumentRoot /var/www/$DOMAIN" >> /etc/apache2/sites-available/$DOMAIN.conf
echo -e '\tErrorLog ${APACHE_LOG_DIR}/error.log' >> /etc/apache2/sites-available/$DOMAIN.conf
echo -e '\tCustomLog ${APACHE_LOG_DIR}/access.log combined' >> /etc/apache2/sites-available/$DOMAIN.conf
echo "</VirtualHost>" >> /etc/apache2/sites-available/$DOMAIN.conf
a2ensite $DOMAIN
a2dissite 000-default
systemctl reload apache2
}

ssl_config () {
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/apache-selfsigned.key -out /etc/ssl/certs/apache-selfsigned.crt

echo "SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH" >> /etc/apache2/conf-available/ssl-params.conf
echo "SSLProtocol All -SSLv2 -SSLv3 -TLSv1 -TLSv1.1" >> /etc/apache2/conf-available/ssl-params.conf
echo "SSLHonorCipherOrder On" >> /etc/apache2/conf-available/ssl-params.conf
echo "Header always set X-Frame-Options DENY" >> /etc/apache2/conf-available/ssl-params.conf
echo "Header always set X-Content-Type-Options nosniff" >> /etc/apache2/conf-available/ssl-params.conf
echo "SSLCompression off" >> /etc/apache2/conf-available/ssl-params.conf
echo "SSLUseStapling on" >> /etc/apache2/conf-available/ssl-params.conf
echo "SSLStaplingCache \"shmcb:logs/stapling-cache(150000)\"" >> /etc/apache2/conf-available/ssl-params.conf
echo "SSLSessionTickets Off" >> /etc/apache2/conf-available/ssl-params.conf

cp /etc/apache2/sites-available/default-ssl.conf /etc/apache2/sites-available/default-ssl.conf.bak
sed -i "s/var\/www\/html/var\/www\/$DOMAIN/1" /etc/apache2/sites-available/default-ssl.conf
sed -i "s/etc\/ssl\/certs\/ssl-cert-snakeoil.pem/etc\/ssl\/certs\/apache-selfsigned.crt/1" /etc/apache2/sites-available/default-ssl.conf
sed -i "s/etc\/ssl\/private\/ssl-cert-snakeoil.key/etc\/ssl\/private\/apache-selfsigned.key/1" /etc/apache2/sites-available/default-ssl.conf
sed -i "4i\\\t\tServerName $ip" /etc/apache2/sites-available/default-ssl.conf
sed -i "22i\\\tRedirect permanent \"/\" \"https://$ip/\"" /etc/apache2/sites-available/000-default.conf
a2enmod ssl
a2enmod headers
a2ensite default-ssl
a2enconf ssl-params
systemctl reload apache2
}

db_config () {
mysql -e "CREATE DATABASE $DBNAME;"
mysql -e "GRANT ALL ON $DBNAME.* TO '$DBUSERNAME'@'localhost' IDENTIFIED BY '$DBPASSWORD' WITH GRANT OPTION;"
mysql -e "FLUSH PRIVILEGES;"
}

wordpress_config () {
db_config

apt install php-curl php-gd php-mbstring php-xml php-xmlrpc php-soap php-intl php-zip -y
systemctl restart apache2
sed -i "8i\\\t<Directory /var/www/$DOMAIN/>" /etc/apache2/sites-available/$DOMAIN.conf
sed -i "9i\\\t\tAllowOverride All" /etc/apache2/sites-available/$DOMAIN.conf
sed -i "10i\\\t</Directory>" /etc/apache2/sites-available/$DOMAIN.conf

a2enmod rewrite
systemctl restart apache2

apt install curl -y
cd /tmp
curl -O https://wordpress.org/latest.tar.gz
tar xzvf latest.tar.gz
touch /tmp/wordpress/.htaccess
cp /tmp/wordpress/wp-config-sample.php /tmp/wordpress/wp-config.php
mkdir /tmp/wordpress/wp-content/upgrade
cp -a /tmp/wordpress/. /var/www/$DOMAIN
chown -R www-data:www-data /var/www/$DOMAIN
find /var/www/$DOMAIN/ -type d -exec chmod 750 {} \;
find /var/www/$DOMAIN/ -type f -exec chmod 640 {} \;
curl -s https://api.wordpress.org/secret-key/1.1/salt/ >> /var/www/$DOMAIN/wp-config.php
echo "define('FS_METHOD', 'direct');" >> /var/www/$DOMAIN/wp-config.php
sed -i "51,58d" /var/www/$DOMAIN/wp-config.php
sed -i "s/database_name_here/$DBNAME/1" /var/www/$DOMAIN/wp-config.php
sed -i "s/username_here/$DBUSERNAME/1" /var/www/$DOMAIN/wp-config.php
sed -i "s/password_here/$DBPASSWORD/1" /var/www/$DOMAIN/wp-config.php
}

execute () {
lamp_install
apache_virtual_host_setup
ssl_config
wordpress_config
}

# Kick everything off
execute
```
## Summary

The script does the following:

* Installs LAMP
* Creates a virtual host
* Configures SSL
* Installs WordPress
* Configures WordPress

With that being said, I hope you enjoyed this example. If you have any questions, please feel free to ask me directly at [@denctl](https://twitter.com/denctl).
@@ -1,15 +0,0 @@
# Wrap Up

Congratulations! You have just completed the Bash basics guide!

If you found this useful, be sure to star the project on [GitHub](https://github.com/bobbyiliev/introduction-to-bash-scripting)!

If you have any suggestions for improvements, make sure to contribute pull requests or open issues.

In this introduction to Bash scripting book, we just covered the basics, but you still have enough under your belt to start writing some awesome scripts and automating daily tasks!

As a next step, try writing your own script and share it with the world! This is the best way to learn any new programming or scripting language!

In case this book inspired you to write some cool Bash scripts, make sure to tweet about it and tag [@bobbyiliev_](https://twitter.com) so that we can check it out!

Congrats again on completing this book!
@@ -1,82 +0,0 @@
# A Linux Learning Playground Situation

![A Linux Learning Playground Situation](/linux/hollywood.png)

Wanna play with the application in this picture? Connect to the server using the instructions below and run the command `hollywood`.

## Introduction:

Welcome, aspiring Linux ninjas! This tutorial will guide you through accessing Shinobi Academy Linux, a custom-built server designed to provide a safe and engaging environment for you to learn and experiment with Linux. Brought to you by Softwareshinobi ([https://softwareshinobi.digital/](https://softwareshinobi.digital/)), this server is your gateway to the exciting world of open-source exploration.

## What You'll Learn:

* Connecting to a Linux server (using SSH)
* Basic Linux commands (navigation, listing files, etc.)
* Exploring pre-installed tools like cmatrix and hollywood

## What You'll Need:

* A computer with internet access
* An SSH client (built-in on most Linux and macOS systems, downloadable for Windows)

## About Shinobi Academy:

Shinobi Academy is the online learning platform brought to you by Softwareshinobi!

Designed to empower aspiring tech enthusiasts, Shinobi Academy offers a comprehensive range of courses and resources to equip you with the skills you need to excel in the ever-evolving world of technology.

## Connecting to Shinobi Academy Linux:

1. Open your SSH client.
2. Enter the following command (including the port number):

```
ssh -p 2222 shinobi@linux.softwareshinobi.digital
```

3. When prompted, enter the password "shinobi".

```
username / shinobi
```

```
password / shinobi
```

**Congratulations!** You're now connected to Shinobi Academy Linux.

## Exploring the Server:

Once connected, you can use basic Linux commands to navigate the system and explore its features. Here are a few examples:

* **`ls`:** Lists files and directories in the current directory.
* **`cd`:** Changes directory. For example, `cd Desktop` will move you to the Desktop directory (if it exists).
* **`pwd`:** Shows the current working directory.
* **`man` followed by a command name:** Provides detailed information on a specific command (e.g., `man ls`).

## Pre-installed Goodies:

Shinobi Academy Linux comes pre-installed with some interesting tools to enhance your learning experience:

* **`cmatrix`:** Simulates the iconic falling code effect from the movie "The Matrix".
* **`hollywood`:** Creates a variety of dynamic text effects on your terminal.

**Experimenting with these tools is a great way to explore the possibilities of Linux.**

## Conclusion:

By following these steps, you've successfully connected to Shinobi Academy Linux and begun your journey into the world of Linux. Use this platform to explore, experiment, and build your Linux skills!

A big thanks to Gemini for putting together these awesome docs!

## Master Linux Like a Pro: 1-on-1 Tutoring:

**Tired of fumbling in the terminal?** Imagine wielding Linux commands with ease, managing servers like a corporate ninja – just like my government and corporate gigs.

**1-on-1 tutoring unlocks your potential:**

* **Terminal mastery:** Conquer the command line and automate tasks like a pro.
* **Become a command jedi:** Craft commands with lightning speed, streamlining your workflow.

**Ready to transform your skills?** [Learn More!](https://tutor.softwareshinobi.digital/linux)
@@ -0,0 +1,106 @@
# Introduction to Docker

## What is Docker?

Docker is an open-source platform that automates the deployment, scaling, and management of applications using containerization technology. It allows developers to package applications and their dependencies into standardized units called containers, which can run consistently across different environments.

### Key Concepts:

1. **Containerization**: A lightweight form of virtualization that packages applications and their dependencies together.
2. **Docker Engine**: The runtime that allows you to build and run containers.
3. **Docker Image**: A read-only template used to create containers.
4. **Docker Container**: A runnable instance of a Docker image.
5. **Docker Hub**: A cloud-based registry for storing and sharing Docker images.

## Why Use Docker?

Docker offers numerous advantages for developers and operations teams:

1. **Consistency**: Ensures applications run the same way in development, testing, and production environments.
2. **Isolation**: Containers are isolated from each other and the host system, improving security and reducing conflicts.
3. **Portability**: Containers can run on any system that supports Docker, regardless of the underlying infrastructure.
4. **Efficiency**: Containers share the host system's OS kernel, making them more lightweight than traditional virtual machines.
5. **Scalability**: Easy to scale applications horizontally by running multiple containers.
6. **Version Control**: Docker images can be versioned, allowing for easy rollbacks and updates.

## Docker Architecture

Docker uses a client-server architecture:

1. **Docker Client**: The primary way users interact with Docker through the command line interface (CLI).
2. **Docker Host**: The machine running the Docker daemon (dockerd).
3. **Docker Daemon**: Manages Docker objects like images, containers, networks, and volumes.
4. **Docker Registry**: Stores Docker images (e.g., Docker Hub).

Here's a simplified diagram of the Docker architecture:

```
┌─────────────┐      ┌─────────────────────────────────────┐
│ Docker CLI  │      │             Docker Host             │
│  (docker)   │◄────►│  ┌────────────┐      ┌───────────┐  │
└─────────────┘      │  │   Docker   │      │ Containers│  │
                     │  │   Daemon   │◄────►│    and    │  │
                     │  │  (dockerd) │      │   Images  │  │
                     │  └────────────┘      └───────────┘  │
                     └─────────────────────────────────────┘
                                      ▲
                                      │
                                      ▼
                        ┌─────────────────────┐
                        │   Docker Registry   │
                        │    (Docker Hub)     │
                        └─────────────────────┘
```

## Containers vs. Virtual Machines

While both containers and virtual machines (VMs) are used for isolating applications, they differ in several key aspects:

| Aspect          | Containers                            | Virtual Machines                    |
|-----------------|---------------------------------------|-------------------------------------|
| OS              | Share host OS kernel                  | Run full OS and kernel              |
| Resource Usage  | Lightweight, minimal overhead         | Higher resource usage               |
| Boot Time       | Seconds                               | Minutes                             |
| Isolation       | Process-level isolation               | Full isolation                      |
| Portability     | Highly portable across different OSes | Less portable, OS-dependent         |
| Performance     | Near-native performance               | Slight performance overhead         |
| Storage         | Typically smaller (MBs)               | Larger (GBs)                        |

## Basic Docker Workflow

1. **Build**: Create a Dockerfile that defines your application and its dependencies.
2. **Ship**: Push your Docker image to a registry like Docker Hub.
3. **Run**: Pull the image and run it as a container on any Docker-enabled host.

Here's a simple example of this workflow:

```bash
# Build an image
docker build -t myapp:v1 .

# Ship the image to Docker Hub
docker push username/myapp:v1

# Run the container
docker run -d -p 8080:80 username/myapp:v1
```

## Docker Components

1. **Dockerfile**: A text file containing instructions to build a Docker image.
2. **Docker Compose**: A tool for defining and running multi-container Docker applications.
3. **Docker Swarm**: Docker's native clustering and orchestration solution.
4. **Docker Network**: Facilitates communication between Docker containers.
5. **Docker Volume**: Provides persistent storage for container data.
|
||||
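To give a feel for Docker Compose (item 2 above), here is a minimal sketch of a `docker-compose.yml`. The service names and images are illustrative, not from a real project:

```yaml
services:
  web:
    image: nginx:alpine          # hypothetical web tier
    ports:
      - "8080:80"
  db:
    image: postgres:15           # hypothetical database tier
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db_data:/var/lib/postgresql/data

volumes:
  db_data:
```

Running `docker compose up -d` in the same directory would start both containers and place them on a shared network, where they can reach each other by service name.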
## Use Cases for Docker

1. **Microservices Architecture**: Deploy and scale individual services independently.
2. **Continuous Integration/Continuous Deployment (CI/CD)**: Streamline development and deployment processes.
3. **Development Environments**: Create consistent development environments across teams.
4. **Application Isolation**: Run multiple versions of an application on the same host.
5. **Legacy Application Migration**: Containerize legacy applications for easier management and deployment.

## Conclusion

Docker has revolutionized how applications are developed, shipped, and run. By providing a standardized way to package and deploy applications, Docker addresses many of the challenges faced in modern software development and operations. As we progress through this book, we'll dive deeper into each aspect of Docker, providing you with the knowledge and skills to leverage this powerful technology effectively.

197
docs/002-installation.md
Normal file
@@ -0,0 +1,197 @@

# Installing Docker

Installing Docker is the first step in your journey with containerization. This chapter will guide you through installing Docker on various operating systems, troubleshooting common issues, and verifying your installation.

## Docker Editions

Before we begin, it's important to understand the different Docker editions available:

1. **Docker Engine - Community**: Free, open-source Docker platform suitable for developers and small teams.
2. **Docker Engine - Enterprise**: Designed for enterprise development and IT teams building, running, and operating business-critical applications at scale.
3. **Docker Desktop**: An easy-to-install application for Mac or Windows environments that includes Docker Engine, the Docker CLI client, Docker Compose, Docker Content Trust, Kubernetes, and Credential Helper.

For most users, Docker Engine - Community or Docker Desktop will be sufficient.

## Installing Docker on Linux

Docker runs natively on Linux, making it the ideal platform for Docker containers. There are two main ways to install Docker on Linux: the convenience script, or a manual, distribution-specific installation.

### Method 1: Using the Docker Installation Script (Recommended for Quick Setup)

Docker provides a convenience script that automatically detects your Linux distribution and installs Docker for you. This method is quick and works across many Linux distributions:

1. Download and execute the Docker installation script:
```bash
wget -qO- https://get.docker.com | sh
```

2. Once the installation is complete, start the Docker service:
```bash
sudo systemctl start docker
```

3. Enable Docker to start on boot:
```bash
sudo systemctl enable docker
```

This method is ideal for quick setups and testing environments. For production environments, however, consider the manual installation method for more control over the process.

### Method 2: Manual Installation for Specific Distributions

For more control over the installation process, or if you prefer distribution-specific steps, you can install Docker manually. Here are instructions for popular Linux distributions:

### Ubuntu

1. Update your package index:
```bash
sudo apt-get update
```

2. Install prerequisites:
```bash
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
```

3. Add Docker's official GPG key:
```bash
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
```

4. Set up the stable repository:
```bash
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
```

5. Update the package index again:
```bash
sudo apt-get update
```

6. Install Docker:
```bash
sudo apt-get install docker-ce docker-ce-cli containerd.io
```

### CentOS

1. Install required packages:
```bash
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
```

2. Add the Docker repository:
```bash
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
```

3. Install Docker:
```bash
sudo yum install docker-ce docker-ce-cli containerd.io
```

4. Start and enable Docker:
```bash
sudo systemctl start docker
sudo systemctl enable docker
```

### Other Linux Distributions

For other Linux distributions, refer to the official Docker documentation: https://docs.docker.com/engine/install/

## Installing Docker on macOS

For macOS, the easiest way to install Docker is by using Docker Desktop:

1. Download Docker Desktop for Mac from the official Docker website: https://www.docker.com/products/docker-desktop
2. Double-click the downloaded `.dmg` file and drag the Docker icon to your Applications folder.
3. Open Docker from your Applications folder.
4. Follow the on-screen instructions to complete the installation.

## Installing Docker on Windows

For Windows 10 Pro, Enterprise, or Education editions, you can install Docker Desktop:

1. Download Docker Desktop for Windows from the official Docker website: https://www.docker.com/products/docker-desktop
2. Double-click the installer to run it.
3. Follow the installation wizard to complete the installation.
4. Once installed, Docker Desktop will start automatically.

For Windows 10 Home or older versions of Windows, you can use Docker Toolbox, which uses Oracle VirtualBox to run Docker:

1. Download Docker Toolbox from: https://github.com/docker/toolbox/releases
2. Run the installer and follow the installation wizard.
3. Once installed, use the Docker Quickstart Terminal to interact with Docker.

## Post-Installation Steps

After installing Docker, there are a few steps you should take:

1. Verify the installation:
```bash
docker version
docker run hello-world
```

2. Configure Docker to start on boot (Linux only):
```bash
sudo systemctl enable docker
```

3. Add your user to the `docker` group to run Docker commands without `sudo` (Linux only):
```bash
sudo usermod -aG docker $USER
```
Note: You'll need to log out and back in for this change to take effect.

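After logging back in, you can confirm whether the group change took effect with a small shell check. This is a sketch assuming a POSIX-like shell; it only inspects your group membership and does not touch the Docker daemon:

```shell
# Print whether the current user's session already includes the "docker" group
if id -nG | grep -qw docker; then
  echo "user is in the docker group"
else
  echo "user is NOT in the docker group (log out and back in after usermod)"
fi
```

If the second message appears even after `usermod`, your login session is simply still using the old group list.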
## Docker Desktop vs Docker Engine

It's important to understand the difference between Docker Desktop and Docker Engine:

- **Docker Desktop** is a user-friendly application that bundles Docker Engine, the Docker CLI client, Docker Compose, Docker Content Trust, Kubernetes, and Credential Helper. It's designed for easy installation and use on Mac and Windows.
- **Docker Engine** is the core Docker runtime, available for Linux systems. It doesn't include the additional tools bundled with Docker Desktop, but they can be installed separately.

## Troubleshooting Common Installation Issues

1. **Permission denied**: If you encounter "permission denied" errors, ensure you've added your user to the `docker` group or are using `sudo`.

2. **Docker daemon not running**: On Linux, try starting the Docker service: `sudo systemctl start docker`

3. **Conflict with VirtualBox (Windows)**: Ensure Hyper-V is enabled for Docker Desktop, or use Docker Toolbox if you need to keep using VirtualBox.

4. **Insufficient system resources**: Docker Desktop requires at least 4GB of RAM. Increase your system's or virtual machine's allocated RAM if needed.

## Updating Docker

To update Docker:

- On Linux, use your package manager (e.g., `sudo apt-get upgrade docker-ce` on Ubuntu)
- On Mac and Windows, Docker Desktop will notify you of updates automatically

## Uninstalling Docker

If you need to uninstall Docker:

- On Linux, use your package manager (e.g., `sudo apt-get purge docker-ce docker-ce-cli containerd.io` on Ubuntu)
- On Mac, remove Docker Desktop from the Applications folder
- On Windows, uninstall Docker Desktop from the Control Panel

## Conclusion

Installing Docker is generally a straightforward process, but it can vary depending on your operating system. Always refer to the official Docker documentation for the most up-to-date installation instructions for your specific system. With Docker successfully installed, you're now ready to start exploring the world of containerization!

222
docs/003-docker-containers.md
Normal file
@@ -0,0 +1,222 @@

# Working with Docker Containers

Docker containers are lightweight, standalone, and executable packages that include everything needed to run a piece of software, including the code, runtime, system tools, libraries, and settings. In this chapter, we'll explore how to work with Docker containers effectively.

## Running Your First Container

Let's start by running a simple container:

```bash
docker run hello-world
```

This command does the following:

1. Checks for the `hello-world` image locally
2. If not found, pulls the image from Docker Hub
3. Creates a container from the image
4. Runs the container, which prints a hello message
5. Exits the container

## Basic Docker Commands

Here are some essential Docker commands for working with containers:

### Listing Containers

To see all running containers:
```bash
docker ps
```

To see all containers (including stopped ones):
```bash
docker ps -a
```

### Starting and Stopping Containers

To stop a running container:
```bash
docker stop <container_id_or_name>
```

To start a stopped container:
```bash
docker start <container_id_or_name>
```

To restart a container:
```bash
docker restart <container_id_or_name>
```

### Removing Containers

To remove a stopped container:
```bash
docker rm <container_id_or_name>
```

To force remove a running container:
```bash
docker rm -f <container_id_or_name>
```

## Running Containers in Different Modes

### Detached Mode

Run a container in the background:
```bash
docker run -d <image_name>
```

### Interactive Mode

Run a container and interact with it:
```bash
docker run -it <image_name> /bin/bash
```

## Port Mapping

To map a container's port to the host:
```bash
docker run -p <host_port>:<container_port> <image_name>
```

Example:
```bash
docker run -d -p 80:80 nginx
```

## Working with Container Logs

View container logs:
```bash
docker logs <container_id_or_name>
```

Follow container logs in real-time:
```bash
docker logs -f <container_id_or_name>
```

## Executing Commands in Running Containers

To execute a command in a running container:
```bash
docker exec -it <container_id_or_name> <command>
```

Example:
```bash
docker exec -it my_container /bin/bash
```

## Practical Example: Running an Apache Container

Let's run an Apache web server container:

1. Pull the image:
```bash
docker pull httpd
```

2. Run the container:
```bash
docker run -d --name my-apache -p 8080:80 httpd
```

3. Verify it's running:
```bash
docker ps
```

4. Access the default page by opening a web browser and navigating to `http://localhost:8080`

5. Modify the default page:
```bash
docker exec -it my-apache /bin/bash
echo "<h1>Hello from my Apache container!</h1>" > /usr/local/apache2/htdocs/index.html
exit
```

6. Refresh your browser to see the changes

## Container Resource Management

### Limiting Memory

Run a container with a memory limit:
```bash
docker run -d --memory=512m <image_name>
```

### Limiting CPU

Run a container with a CPU limit:
```bash
docker run -d --cpus=0.5 <image_name>
```

## Container Networking

### Listing Networks

```bash
docker network ls
```

### Creating a Network

```bash
docker network create my_network
```

### Connecting a Container to a Network

```bash
docker run -d --network my_network --name my_container <image_name>
```

## Data Persistence with Volumes

### Creating a Volume

```bash
docker volume create my_volume
```

### Running a Container with a Volume

```bash
docker run -d -v my_volume:/path/in/container <image_name>
```

## Container Health Checks

Docker provides built-in health checking capabilities. You can define a health check in your Dockerfile:

```dockerfile
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
  CMD curl -f http://localhost/ || exit 1
```

## Cleaning Up

Remove all stopped containers:
```bash
docker container prune
```

Remove all unused resources (containers, networks, images):
```bash
docker system prune
```

## Conclusion

Working with Docker containers involves a range of operations, from basic running and stopping to more advanced topics like resource management and networking. As you become more comfortable with these operations, you'll be able to leverage Docker's full potential in your development and deployment workflows.

Remember, containers are designed to be ephemeral. Always store important data in volumes or use appropriate persistence mechanisms for your applications.

214
docs/004-docker-images.md
Normal file
@@ -0,0 +1,214 @@

# What are Docker Images

A Docker image is a lightweight, standalone, and executable package that includes everything needed to run a piece of software, including the code, runtime, system tools, libraries, and settings. Images are the building blocks of Docker containers.

## Key Concepts

1. **Layers**: Images are composed of multiple layers, each representing a set of changes to the filesystem.
2. **Base Image**: The foundation of an image, typically a minimal operating system.
3. **Parent Image**: An image that your image is built upon.
4. **Image Tags**: Labels used to version and identify images.
5. **Image ID**: A unique identifier for each image.

## Working with Docker Images

### Listing Images

To see all images on your local system:

```bash
docker images
```

Or use the more verbose command:

```bash
docker image ls
```

### Pulling Images from Docker Hub

To download an image from Docker Hub:

```bash
docker pull <image_name>:<tag>
```

Example:

```bash
docker pull ubuntu:20.04
```

If no tag is specified, Docker will pull the `latest` tag by default.

### Running Containers from Images

To run a container from an image:

```bash
docker run <image_name>:<tag>
```

Example:

```bash
docker run -it ubuntu:20.04 /bin/bash
```

### Image Information

To get detailed information about an image:

```bash
docker inspect <image_name>:<tag>
```

### Removing Images

To remove an image:

```bash
docker rmi <image_name>:<tag>
```

or

```bash
docker image rm <image_name>:<tag>
```

To remove all unused images:

```bash
docker image prune
```

## Building Custom Images

### Using a Dockerfile

1. Create a file named `Dockerfile` with no extension.
2. Define the instructions to build your image.

Example Dockerfile:

```dockerfile
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y nginx
COPY ./my-nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

3. Build the image:

```bash
docker build -t my-nginx:v1 .
```

### Building from a Running Container

1. Make changes to a running container.
2. Create a new image from the container:

```bash
docker commit <container_id> my-new-image:tag
```

## Image Tagging

To tag an existing image:

```bash
docker tag <source_image>:<tag> <target_image>:<tag>
```

Example:

```bash
docker tag my-nginx:v1 my-dockerhub-username/my-nginx:v1
```

## Pushing Images to Docker Hub

1. Log in to Docker Hub:

```bash
docker login
```

2. Push the image:

```bash
docker push my-dockerhub-username/my-nginx:v1
```

## Image Layers and Caching

Understanding layers is crucial for optimizing image builds:

1. Each instruction in a Dockerfile creates a new layer.
2. Layers are cached and reused in subsequent builds.
3. Ordering instructions from least to most frequently changing can speed up builds.

Example of leveraging caching:

```dockerfile
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y nginx
COPY ./static-files /var/www/html
COPY ./config-files /etc/nginx
```

## Multi-stage Builds

Multi-stage builds allow you to use multiple FROM statements in your Dockerfile. This is useful for creating smaller production images.

Example:

```dockerfile
# Build stage
FROM golang:1.16 AS build
WORKDIR /app
COPY . .
RUN go build -o myapp

# Production stage
FROM alpine:3.14
COPY --from=build /app/myapp /usr/local/bin/myapp
CMD ["myapp"]
```

## Image Scanning and Security

Docker provides built-in image scanning capabilities:

```bash
docker scan <image_name>:<tag>
```

This helps identify vulnerabilities in your images.

## Best Practices for Working with Images

1. Use specific tags instead of `latest` for reproducibility.
2. Keep images small by using minimal base images and multi-stage builds.
3. Use `.dockerignore` to exclude unnecessary files from the build context.
4. Leverage the build cache by ordering Dockerfile instructions effectively.
5. Regularly update base images to get security patches.
6. Scan images for vulnerabilities before deployment.

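To illustrate practice 3, here is a small example `.dockerignore`. The entries are common illustrative choices, not requirements; tailor them to your project:

```
# Version control metadata
.git

# Dependencies that will be installed inside the image
node_modules

# Local logs and build artifacts
*.log
dist
```

Everything matched here is excluded from the build context, which speeds up `docker build` and keeps secrets and clutter out of image layers.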
## Image Management and Cleanup

To manage disk space, regularly clean up unused images:

```bash
docker system prune -a
```

This removes all unused images, not just dangling ones.

## Conclusion

Docker images are a fundamental concept in containerization. They provide a consistent and portable way to package applications and their dependencies. By mastering image creation, optimization, and management, you can significantly improve your Docker workflows and application deployments.

193
docs/005-dockerfile.md
Normal file
@@ -0,0 +1,193 @@

# What is a Dockerfile

A Dockerfile is a text document that contains a series of instructions and arguments. These instructions are used to create a Docker image automatically. It's essentially a script of successive commands Docker will run to assemble an image, automating the image creation process.

## Anatomy of a Dockerfile

A Dockerfile typically consists of the following components:

1. Base image declaration
2. Metadata and label information
3. Environment setup
4. File and directory operations
5. Dependency installation
6. Application code copying
7. Execution command specification

Let's dive deep into each of these components and the instructions used to implement them.

## Dockerfile Instructions
### FROM

The `FROM` instruction initializes a new build stage and sets the base image for subsequent instructions.

```dockerfile
FROM ubuntu:20.04
```

This instruction is typically the first one in a Dockerfile. It's possible to have multiple `FROM` instructions in a single Dockerfile for multi-stage builds.

### LABEL

`LABEL` adds metadata to an image in key-value pair format.

```dockerfile
LABEL version="1.0" maintainer="john@example.com" description="This is a sample Docker image"
```

Labels are useful for image organization, licensing information, annotations, and other metadata.

### ENV

`ENV` sets environment variables in the image.

```dockerfile
ENV APP_HOME=/app NODE_ENV=production
```

These variables persist when a container is run from the resulting image.

### WORKDIR

`WORKDIR` sets the working directory for any subsequent `RUN`, `CMD`, `ENTRYPOINT`, `COPY`, and `ADD` instructions.

```dockerfile
WORKDIR /app
```

If the directory doesn't exist, it will be created.

### COPY and ADD

Both `COPY` and `ADD` instructions copy files from the host into the image.

```dockerfile
COPY package.json .
ADD https://example.com/big.tar.xz /usr/src/things/
```

`COPY` is generally preferred for its simplicity. `ADD` has some extra features like tar extraction and remote URL support, but these can make build behavior less predictable.

### RUN

`RUN` executes commands in a new layer on top of the current image and commits the results.

```dockerfile
RUN apt-get update && apt-get install -y nodejs
```

It's a best practice to chain commands with `&&` and clean up in the same `RUN` instruction to keep layers small.

### CMD

`CMD` provides defaults for an executing container. There can only be one `CMD` instruction in a Dockerfile.

```dockerfile
CMD ["node", "app.js"]
```

`CMD` can be overridden at runtime.

### ENTRYPOINT

`ENTRYPOINT` configures a container that will run as an executable.

```dockerfile
ENTRYPOINT ["nginx", "-g", "daemon off;"]
```

`ENTRYPOINT` is often used in combination with `CMD`, where `ENTRYPOINT` defines the executable and `CMD` supplies default arguments.

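As a sketch of that combination (the image and arguments here are illustrative, not from this book):

```dockerfile
FROM alpine:3.14
# ENTRYPOINT fixes the executable the container always runs
ENTRYPOINT ["ping"]
# CMD supplies default arguments, which users can override at runtime
CMD ["-c", "4", "localhost"]
```

With this image, `docker run <image>` would run `ping -c 4 localhost`, while `docker run <image> -c 2 example.com` replaces only the `CMD` portion, running `ping -c 2 example.com`.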
### EXPOSE

`EXPOSE` informs Docker that the container listens on specified network ports at runtime.

```dockerfile
EXPOSE 80 443
```

This doesn't actually publish the port; it functions as documentation between the person who builds the image and the person who runs the container.

### VOLUME

`VOLUME` creates a mount point and marks it as holding externally mounted volumes from the native host or other containers.

```dockerfile
VOLUME /data
```

This is useful for any mutable and/or user-serviceable parts of your image.

### ARG

`ARG` defines a variable that users can pass at build-time to the builder with the `docker build` command.

```dockerfile
ARG VERSION=latest
```

This allows for more flexible image builds, e.g. `docker build --build-arg VERSION=1.0 .` overrides the default value.

## Best Practices for Writing Dockerfiles

1. **Use multi-stage builds**: This helps create smaller final images by separating build-time dependencies from runtime dependencies.

```dockerfile
FROM node:14 AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
```

2. **Minimize the number of layers**: Combine commands where possible to reduce the number of layers and image size.

3. **Leverage the build cache**: Order instructions from least to most frequently changing to maximize build cache usage.

4. **Use `.dockerignore`**: Exclude files not relevant to the build, similar to `.gitignore`.

5. **Don't install unnecessary packages**: Keep the image lean and secure by only installing what's needed.

6. **Use specific tags**: Avoid the `latest` tag for base images to ensure reproducible builds.

7. **Set the `WORKDIR`**: Always use `WORKDIR` instead of proliferating instructions like `RUN cd … && do-something`.

8. **Use `COPY` instead of `ADD`**: Unless you explicitly need the extra functionality of `ADD`, use `COPY` for transparency.

9. **Use environment variables**: Especially for version numbers and paths, making the Dockerfile more flexible.

## Advanced Dockerfile Concepts
### Health Checks

You can use the `HEALTHCHECK` instruction to tell Docker how to test a container to check that it's still working.

```dockerfile
HEALTHCHECK --interval=30s --timeout=10s CMD curl -f http://localhost/ || exit 1
```

### Shell and Exec Forms

Many Dockerfile instructions can be specified in shell form or exec form:

- Shell form: `RUN apt-get install -y python3`
- Exec form: `RUN ["apt-get", "install", "-y", "python3"]`

For `CMD` and `ENTRYPOINT`, the exec form is preferred: it's more explicit and avoids issues with shell string munging, since no shell processes the arguments. For `RUN`, the shell form is more common because it supports shell features like chaining commands with `&&`.

### BuildKit

BuildKit is a newer backend for Docker builds that offers better performance, storage management, and features. On older Docker releases you can enable it by setting an environment variable (recent releases enable it by default):

```bash
export DOCKER_BUILDKIT=1
```

## Conclusion

Dockerfiles are a powerful tool for creating reproducible, version-controlled Docker images. By mastering Dockerfile instructions and best practices, you can create efficient, secure, and portable applications. Remember that writing good Dockerfiles is an iterative process – continually refine your Dockerfiles as you learn more about your application's needs and Docker's capabilities.

212
docs/006-docker-networking.md
Normal file
@@ -0,0 +1,212 @@

# Docker Networking

Docker networking allows containers to communicate with each other and with the outside world. It's a crucial aspect of Docker that enables the creation of complex, multi-container applications and microservices architectures.

## Docker Network Drivers

Docker uses a pluggable architecture for networking, offering several built-in network drivers:

1. **Bridge**: The default network driver. It's suitable for standalone containers that need to communicate.
2. **Host**: Removes network isolation between the container and the Docker host.
3. **Overlay**: Enables communication between containers across multiple Docker daemon hosts.
4. **Macvlan**: Assigns a MAC address to a container, making it appear as a physical device on the network.
5. **None**: Disables all networking for a container.
6. **Network plugins**: Allow you to use third-party network drivers.

## Working with Docker Networks
### Listing Networks

To list all networks:

```bash
docker network ls
```

This command shows the network ID, name, driver, and scope for each network.

### Inspecting Networks

To get detailed information about a network:

```bash
docker network inspect <network_name>
```

This provides information such as the network's subnet, gateway, connected containers, and configuration options.

### Creating a Network

To create a new network:

```bash
docker network create --driver <driver> <network_name>
```

Example:

```bash
docker network create --driver bridge my_custom_network
```

You can specify additional options like subnet, gateway, IP range, etc.:

```bash
docker network create --driver bridge --subnet 172.18.0.0/16 --gateway 172.18.0.1 my_custom_network
```

### Connecting Containers to Networks

When running a container, you can specify which network it should connect to:

```bash
docker run --network <network_name> <image>
```

Example:

```bash
docker run --network my_custom_network --name container1 -d nginx
```

You can also connect a running container to a network:

```bash
docker network connect <network_name> <container_name>
```

### Disconnecting Containers from Networks
|
||||
|
||||
To disconnect a container from a network:
|
||||
|
||||
```bash
|
||||
docker network disconnect <network_name> <container_name>
|
||||
```
|
||||
|
||||
### Removing Networks
|
||||
|
||||
To remove a network:
|
||||
|
||||
```bash
|
||||
docker network rm <network_name>
|
||||
```
|
||||
|
||||
## Deep Dive into Network Drivers
|
||||
|
||||
### Bridge Networks
|
||||
|
||||
Bridge networks are the most commonly used network type in Docker. They are suitable for containers running on the same Docker daemon host.
|
||||
|
||||
Key points about bridge networks:
|
||||
|
||||
- Each container connected to a bridge network is allocated a unique IP address.
|
||||
- Containers on the same bridge network can communicate with each other using IP addresses.
|
||||
- The default bridge network has some limitations, so it's often better to create custom bridge networks.
|
||||
|
||||
Example of creating and using a custom bridge network:
|
||||
|
||||
```bash
|
||||
docker network create my_bridge
|
||||
docker run --network my_bridge --name container1 -d nginx
|
||||
docker run --network my_bridge --name container2 -d nginx
|
||||
```
|
||||
|
||||
Now `container1` and `container2` can communicate with each other using their container names as hostnames.
|
||||
|
||||
### Host Networks

Host networking adds a container to the host's network stack. This offers the best networking performance but sacrifices network isolation.

Example:

```bash
docker run --network host -d nginx
```

In this case, if the container exposes port 80, it will be accessible directly on port 80 of the host machine.

### Overlay Networks

Overlay networks are used in Docker Swarm mode to enable communication between containers across multiple Docker daemon hosts.

To create an overlay network:

```bash
docker network create --driver overlay my_overlay
```

Then, when creating a service in swarm mode, you can attach it to this network:

```bash
docker service create --network my_overlay --name my_service nginx
```

### MacVLAN Networks

MacVLAN networks allow you to assign a MAC address to a container, making it appear as a physical device on your network.

Example:

```bash
docker network create -d macvlan \
  --subnet=192.168.0.0/24 \
  --gateway=192.168.0.1 \
  -o parent=eth0 my_macvlan_net
```

Then run a container on this network:

```bash
docker run --network my_macvlan_net -d nginx
```
## Network Troubleshooting

1. **Container-to-Container Communication**:
   Use the `docker exec` command to get into a container and use tools like `ping`, `curl`, or `wget` to test connectivity.

2. **Network Inspection**:
   Use `docker network inspect` to view detailed information about a network.

3. **Port Mapping**:
   Use `docker port <container>` to see the port mappings for a container.

4. **DNS Issues**:
   Check the `/etc/resolv.conf` file inside the container to verify DNS settings.

5. **Network Namespace**:
   For advanced troubleshooting, you can enter the network namespace of a container:
   ```bash
   pid=$(docker inspect -f '{{.State.Pid}}' <container_name>)
   nsenter -t $pid -n ip addr
   ```

## Best Practices

1. Use custom bridge networks instead of the default bridge network for better isolation and built-in DNS resolution.
2. Use overlay networks for multi-host communication in swarm mode.
3. Use host networking sparingly and only when high performance is required.
4. Be cautious with exposing ports; only expose what's necessary.
5. Use Docker Compose for managing multi-container applications and their networks.

## Advanced Topics

### Network Encryption

For overlay networks, you can enable encryption to secure container-to-container traffic:

```bash
docker network create --opt encrypted --driver overlay my_secure_network
```

### Network Plugins

Docker supports third-party network plugins. Popular options include Weave Net, Calico, and Flannel. These can provide additional features like advanced routing, network policies, and encryption.

### Service Discovery

Docker provides built-in service discovery for containers on the same network. Containers can reach each other using container names as hostnames. In swarm mode, there's also built-in load balancing for services.

## Conclusion

Networking is a critical component of Docker that enables complex, distributed applications. By understanding and effectively using Docker's networking capabilities, you can create secure, efficient, and scalable containerized applications. Always consider your specific use case when choosing network drivers and configurations.
# Docker Volumes

Docker volumes are the preferred mechanism for persisting data generated by and used by Docker containers. While containers can create, update, and delete files, those changes are lost when the container is removed, and all changes are isolated to that container. Volumes connect specific filesystem paths of the container back to the host machine. If a directory in the container is mounted, changes in that directory are also visible on the host machine. If we mount that same directory across container restarts, we see the same files.

## Why Use Docker Volumes?

1. **Data Persistence**: Volumes allow you to persist data even when containers are stopped or removed.
2. **Data Sharing**: Volumes can be shared and reused among multiple containers.
3. **Performance**: Volumes are stored on the host filesystem, which generally provides better I/O performance, especially for databases.
4. **Data Management**: Volumes make it easier to back up, restore, and migrate data.
5. **Decoupling**: Volumes decouple the configuration of the Docker host from the container runtime.
## Types of Docker Volumes

### 1. Named Volumes

Named volumes are the recommended way to persist data in Docker. They are explicitly created and given a name.

Creating a named volume:

```bash
docker volume create my_volume
```

Using a named volume:

```bash
docker run -d --name devtest -v my_volume:/app nginx:latest
```

### 2. Anonymous Volumes

Anonymous volumes are automatically created by Docker and given a random name. They're useful for temporary data that you don't need to persist beyond the life of the container.

Using an anonymous volume:

```bash
docker run -d --name devtest -v /app nginx:latest
```

### 3. Bind Mounts

Bind mounts map a specific path on the host machine to a path in the container. They're useful for development environments.

Using a bind mount:

```bash
docker run -d --name devtest -v /path/on/host:/app nginx:latest
```
## Working with Docker Volumes

### Listing Volumes

To list all volumes:

```bash
docker volume ls
```

### Inspecting Volumes

To get detailed information about a volume:

```bash
docker volume inspect my_volume
```

### Removing Volumes

To remove a specific volume:

```bash
docker volume rm my_volume
```

To remove all unused volumes:

```bash
docker volume prune
```

### Backing Up Volumes

To back up a volume:

```bash
docker run --rm -v my_volume:/source -v /path/on/host:/backup ubuntu tar cvf /backup/backup.tar /source
```

### Restoring Volumes

To restore a volume from a backup:

```bash
docker run --rm -v my_volume:/target -v /path/on/host:/backup ubuntu tar xvf /backup/backup.tar -C /target --strip 1
```
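The backup and restore commands are plain `tar` invocations run inside a throwaway container; the container only supplies the mount points. The same flag pattern can be sketched locally without Docker (the directory names here are arbitrary stand-ins for the volume mounts):

```shell
# Stand-ins for the /source and /target mount points used above.
mkdir -p source target
echo "important data" > source/data.txt

# Backup: archive the whole source directory
# (mirrors `tar cvf /backup/backup.tar /source`).
tar cf backup.tar source

# Restore: unpack into the target, stripping the leading "source/"
# path component (mirrors `tar xvf ... -C /target --strip 1`).
tar xf backup.tar -C target --strip 1
cat target/data.txt   # → important data
```

Without `--strip 1`, the restore would recreate the archive's leading directory inside the target, leaving the files one level deeper than expected.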
## Volume Drivers

Docker supports volume drivers, which allow you to store volumes on remote hosts or cloud providers, among other options.

Some popular volume drivers include:

- Local (default)
- NFS
- AWS EBS
- Azure File Storage

To use a specific volume driver:

```bash
docker volume create --driver <driver_name> my_volume
```

## Best Practices for Using Docker Volumes

1. **Use named volumes**: They're easier to manage and track than anonymous volumes.

2. **Don't use bind mounts in production**: They're less portable and can pose security risks.

3. **Use volumes for databases**: Databases require persistent storage and benefit from the performance of volumes.

4. **Be cautious with permissions**: Ensure the processes in your containers have the necessary permissions to read from and write to volumes.

5. **Clean up unused volumes**: Regularly use `docker volume prune` to remove unused volumes and free up space.

6. **Use volume labels**: Labels can help you organize and manage your volumes.
   ```bash
   docker volume create --label project=myapp my_volume
   ```

7. **Consider using Docker Compose**: Docker Compose makes it easier to manage volumes across multiple containers.
## Advanced Volume Concepts

### 1. Read-Only Volumes

You can mount volumes as read-only to prevent containers from modifying the data:

```bash
docker run -d --name devtest -v my_volume:/app:ro nginx:latest
```

### 2. Tmpfs Mounts

Tmpfs mounts are stored only in the host system's memory, which can be useful for storing sensitive information:

```bash
docker run -d --name tmptest --tmpfs /app nginx:latest
```

### 3. Sharing Volumes Between Containers

You can share a volume between multiple containers:

```bash
docker run -d --name container1 -v my_volume:/app nginx:latest
docker run -d --name container2 -v my_volume:/app nginx:latest
```

### 4. Volume Plugins

Docker supports third-party volume plugins that can provide additional functionality:

```bash
docker plugin install <plugin_name>
docker volume create -d <plugin_name> my_volume
```

## Troubleshooting Volume Issues

1. **Volume not persisting data**: Ensure you're using the correct volume name and mount path.

2. **Permission issues**: Check the permissions of the mounted directory both on the host and in the container.

3. **Volume not removing**: Make sure no containers are using the volume before trying to remove it.

4. **Performance issues**: If you're experiencing slow I/O, consider using a volume driver optimized for your use case.

## Conclusion

Docker volumes are a crucial component for managing data in Docker environments. They provide a flexible and efficient way to persist and share data between containers and the host system. By understanding how to create, manage, and use volumes effectively, you can build more robust and maintainable containerized applications.

Remember that the choice between different types of volumes (named volumes, bind mounts, or tmpfs mounts) depends on your specific use case. Always consider factors like persistence needs, performance requirements, and security implications when working with Docker volumes.
# Docker Compose

Docker Compose is a powerful tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services, networks, and volumes. Then, with a single command, you create and start all the services from your configuration.

> **Note**: Docker Compose is now integrated into the Docker CLI. The new command is `docker compose` instead of `docker-compose`. We'll use the new command throughout this chapter.

## Key Benefits of Docker Compose

1. **Simplicity**: Define your entire application stack in a single file.
2. **Reproducibility**: Easily share and version control your application configuration.
3. **Scalability**: Simple commands to scale services up or down.
4. **Environment Consistency**: Ensure development, staging, and production environments are identical.
5. **Workflow Improvement**: Compose can be used throughout the development cycle for testing, staging, and production.

## The docker-compose.yml File

The `docker-compose.yml` file is the core of Docker Compose. It defines all the components and configurations of your application. Here's a basic example:

```yaml
version: '3.8'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
    environment:
      FLASK_ENV: development
  redis:
    image: "redis:alpine"
```

Let's break down this example:

- `version`: Specifies the Compose file format version.
- `services`: Defines the containers that make up your app.
- `web`: A service built from the Dockerfile in the current directory.
- `redis`: A service using the public Redis image.
## Key Concepts in Docker Compose

1. **Services**: Containers that make up your application.
2. **Networks**: How your services communicate with each other.
3. **Volumes**: Where your services store and access data.

## Basic Docker Compose Commands

- `docker compose up`: Create and start containers
  ```bash
  docker compose up -d # Run in detached mode
  ```

- `docker compose down`: Stop and remove containers, networks, images, and volumes
  ```bash
  docker compose down --volumes # Also remove volumes
  ```

- `docker compose ps`: List containers
- `docker compose logs`: View output from containers
  ```bash
  docker compose logs -f web # Follow logs for the web service
  ```
## Advanced Docker Compose Features

### 1. Environment Variables

You can use `.env` files or set variables directly in the Compose file:

```yaml
version: '3.8'
services:
  web:
    image: "webapp:${TAG}"
    environment:
      - DEBUG=1
```
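Compose's variable interpolation follows shell conventions, including the `${VAR:-default}` fallback form, so an unset `TAG` can be given a default rather than producing an empty image tag. A plain-shell sketch of the same expansion:

```shell
# With TAG unset (no .env file, nothing exported),
# ${TAG:-latest} falls back to "latest".
unset TAG
echo "webapp:${TAG:-latest}"   # → webapp:latest

# With TAG set (e.g. loaded from a .env file), the real value wins.
TAG=1.2.0
echo "webapp:${TAG:-latest}"   # → webapp:1.2.0
```

Writing `image: "webapp:${TAG:-latest}"` in the Compose file therefore keeps `docker compose up` working even before a `.env` file exists.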
### 2. Extending Services

Use `extends` to share common configurations:

```yaml
version: '3.8'
services:
  web:
    extends:
      file: common-services.yml
      service: webapp
```

### 3. Healthchecks

Ensure services are ready before starting dependent services:

```yaml
version: '3.8'
services:
  web:
    image: "webapp:latest"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost"]
      interval: 1m30s
      timeout: 10s
      retries: 3
      start_period: 40s
```
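The `retries: 3` semantics above — run the probe repeatedly and only declare a state after enough attempts — can be sketched as a small shell loop. The `fake_check` function is a stand-in for the real `curl` probe and is rigged to fail twice before succeeding:

```shell
# Stand-in for the curl probe: fails on the first two attempts, then passes.
attempts=0
fake_check() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

retries=3
status=unhealthy
for i in $(seq 1 "$retries"); do
  if fake_check; then
    status=healthy
    break
  fi
done
echo "$status after $attempts attempt(s)"   # → healthy after 3 attempt(s)
```

With `retries: 2` the loop would exhaust its attempts first and the service would be reported unhealthy, which is exactly how a flaky probe plus a too-low retry count can cascade into restarts.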
## Practical Examples

### Example 1: WordPress with MySQL

```yaml
version: '3.8'
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress

volumes:
  db_data: {}
```

Let's break this down in detail:

1. **Version**:
   `version: '3.8'` specifies the version of the Compose file format. Version 3.8 is compatible with Docker Engine 19.03.0+.

2. **Services**:
   We define two services: `db` and `wordpress`.

   a. **db service**:
      - `image: mysql:5.7`: Uses the official MySQL 5.7 image.
      - `volumes`: Creates a named volume `db_data` and mounts it to `/var/lib/mysql` in the container. This ensures that the database data persists even if the container is removed.
      - `restart: always`: Ensures that the container always restarts if it stops.
      - `environment`: Sets up the MySQL environment variables:
        - `MYSQL_ROOT_PASSWORD`: Sets the root password for MySQL.
        - `MYSQL_DATABASE`: Creates a database named "wordpress".
        - `MYSQL_USER` and `MYSQL_PASSWORD`: Creates a new user with the specified password.

   b. **wordpress service**:
      - `depends_on`: Ensures that the `db` service is started before the `wordpress` service.
      - `image: wordpress:latest`: Uses the latest official WordPress image.
      - `ports`: Maps port 8000 on the host to port 80 in the container, where WordPress runs.
      - `restart: always`: Ensures the container always restarts if it stops.
      - `environment`: Sets up WordPress environment variables:
        - `WORDPRESS_DB_HOST`: Specifies the database host. Note the use of `db:3306`, where `db` is the service name of our MySQL container.
        - `WORDPRESS_DB_USER`, `WORDPRESS_DB_PASSWORD`, `WORDPRESS_DB_NAME`: These match the MySQL settings we defined in the `db` service.

3. **Volumes**:
   `db_data: {}`: This creates a named volume that Docker manages. It's used to persist the MySQL data.

To run this setup:

1. Save the above YAML in a file named `docker-compose.yml`.
2. In the same directory, run `docker compose up -d`.
3. Once the containers are running, you can access WordPress by navigating to `http://localhost:8000` in your web browser.

This setup provides a complete WordPress environment with a MySQL database, all configured and ready to use. The use of environment variables and volumes ensures that the setup is both flexible and persistent.
### Example 2: Flask App with Redis and Nginx

```yaml
version: '3.8'
services:
  flask:
    build: ./flask
    environment:
      - FLASK_ENV=development
    volumes:
      - ./flask:/code

  redis:
    image: "redis:alpine"

  nginx:
    image: "nginx:alpine"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "80:80"
    depends_on:
      - flask

networks:
  frontend:
  backend:

volumes:
  db-data:
```

Let's break this down:

1. **Version**:
   As before, we're using version 3.8 of the Compose file format.

2. **Services**:
   We define three services: `flask`, `redis`, and `nginx`.

   a. **flask service**:
      - `build: ./flask`: Tells Docker to build an image using the Dockerfile in the `./flask` directory.
      - `environment`: Sets `FLASK_ENV=development`, which enables debug mode in Flask.
      - `volumes`: Mounts the local `./flask` directory to `/code` in the container. This is useful for development, as it allows you to make changes to your code without rebuilding the container.

   b. **redis service**:
      - `image: "redis:alpine"`: Uses the official Redis image based on Alpine Linux, which is lightweight.

   c. **nginx service**:
      - `image: "nginx:alpine"`: Uses the official Nginx image based on Alpine Linux.
      - `volumes`: Mounts a local `nginx.conf` file to `/etc/nginx/nginx.conf` in the container. The `:ro` flag makes it read-only.
      - `ports`: Maps port 80 on the host to port 80 in the container.
      - `depends_on`: Ensures that the `flask` service is started before Nginx.

3. **Networks**:
   We define two networks: `frontend` and `backend`. This allows us to isolate our services. For example, we could put Nginx and Flask on the frontend network, and Flask and Redis on the backend network.

4. **Volumes**:
   `db-data`: This creates a named volume. Although it's not used in this configuration, it's available if we need persistent storage, perhaps for a database service we might add later.

To use this setup:

1. You need a Flask application in a directory named `flask`, with a Dockerfile to build it.
2. You need an `nginx.conf` file in the same directory as your `docker-compose.yml`.
3. Run `docker compose up -d` to start the services.

This configuration sets up a Flask application server, with Redis available for caching or as a message broker, and Nginx as a reverse proxy. The Flask code is mounted as a volume, allowing for easy development. Nginx handles incoming requests and forwards them to the Flask application.

The use of Alpine-based images for Redis and Nginx helps keep the overall image size small, which is beneficial for deployment and scaling.

This setup is particularly useful for developing and testing a Flask application in an environment that closely mimics production, with a proper web server (Nginx) in front of the application server (Flask) and a caching/messaging system (Redis) available.
## Best Practices for Docker Compose

1. Use version control for your docker-compose.yml file.
2. Keep development, staging, and production environments as similar as possible.
3. Use build arguments and environment variables for flexibility.
4. Leverage healthchecks to ensure service dependencies are met.
5. Use `.env` files for environment-specific variables.
6. Optimize your images to keep them small and efficient.
7. Use docker-compose.override.yml for local development settings.

## Scaling Services

Docker Compose can scale services with a single command:

```bash
docker compose up -d --scale web=3
```

This command starts 3 instances of the `web` service.

## Networking in Docker Compose

By default, Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network and discoverable by them at a hostname identical to the container name.

You can also specify custom networks:

```yaml
version: '3.8'
services:
  web:
    networks:
      - frontend
      - backend
  db:
    networks:
      - backend

networks:
  frontend:
  backend:
```

## Volumes in Docker Compose

Compose also lets you create named volumes that can be reused across multiple services:

```yaml
version: '3.8'
services:
  db:
    image: postgres
    volumes:
      - data:/var/lib/postgresql/data

volumes:
  data:
```

## Conclusion

Docker Compose simplifies the process of managing multi-container applications, making it an essential tool for developers working with Docker. By mastering Docker Compose, you can streamline your development workflow, ensure consistency across different environments, and easily manage complex applications with multiple interconnected services.

Remember to always use the newer `docker compose` command instead of the older `docker-compose`, as it's now integrated directly into the Docker CLI and offers improved functionality and performance.
# Docker Security Best Practices

Security is a critical aspect of working with Docker, especially in production environments. This chapter covers essential security practices to help you build and maintain secure Docker environments.

## 1. Keep Docker Updated

Always use the latest version of Docker to benefit from the most recent security patches.

```bash
sudo apt-get update
sudo apt-get upgrade docker-ce
```
## 2. Use Official Images

Whenever possible, use official images from Docker Hub or trusted sources. These images are regularly updated and scanned for vulnerabilities.

```yaml
version: '3.8'
services:
  web:
    image: nginx:latest # Official Nginx image
```

## 3. Scan Images for Vulnerabilities

Use tools like Docker Scout or Trivy to scan your images for known vulnerabilities.

```bash
docker scout cves <image_name>
```

## 4. Limit Container Resources

Prevent denial-of-service attacks by limiting container resources:

```yaml
version: '3.8'
services:
  web:
    image: nginx:latest
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 50M
```
## 5. Use Non-Root Users

Run containers as non-root users to limit the potential impact of a container breach:

```dockerfile
FROM node:14
RUN groupadd -r myapp && useradd -r -g myapp myuser
USER myuser
```
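A quick way to audit a build context for this practice is simply to check whether the Dockerfile ever sets a `USER`. A minimal grep-based sketch (the sample Dockerfile is written out here only so the check has something to run against):

```shell
# Sample Dockerfile, mirroring the non-root pattern above.
cat > Dockerfile.sample <<'EOF'
FROM node:14
RUN groupadd -r myapp && useradd -r -g myapp myuser
USER myuser
EOF

# Flag Dockerfiles that never switch away from root.
if grep -q '^USER ' Dockerfile.sample; then
  echo "non-root USER set"
else
  echo "WARNING: image will run as root"
fi
```

Dedicated linters such as `hadolint` perform this and many related checks automatically; the grep version is just the idea in miniature.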
## 6. Use Secret Management

For sensitive data like passwords and API keys, use Docker secrets (note that Docker secrets require Swarm mode):

```bash
echo "mysecretpassword" | docker secret create db_password -
```

Then in your docker-compose.yml:

```yaml
version: '3.8'
services:
  db:
    image: mysql
    secrets:
      - db_password

secrets:
  db_password:
    external: true
```
## 7. Enable Content Trust

Sign and verify image tags:

```bash
export DOCKER_CONTENT_TRUST=1
docker push myrepo/myimage:latest
```

## 8. Use Read-Only Containers

When possible, run containers in read-only mode:

```yaml
version: '3.8'
services:
  web:
    image: nginx
    read_only: true
    tmpfs:
      - /tmp
      - /var/cache/nginx
```

## 9. Implement Network Segmentation

Use Docker networks to isolate containers:

```yaml
version: '3.8'
services:
  frontend:
    networks:
      - frontend
  backend:
    networks:
      - backend

networks:
  frontend:
  backend:
```
## 10. Regular Security Audits

Regularly audit your Docker environment using tools like Docker Bench for Security:

```bash
docker run -it --net host --pid host --userns host --cap-add audit_control \
    -e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
    -v /var/lib:/var/lib \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /usr/lib/systemd:/usr/lib/systemd \
    -v /etc:/etc --label docker_bench_security \
    docker/docker-bench-security
```

## 11. Use Security-Enhanced Linux (SELinux) or AppArmor

These provide an additional layer of security. Ensure they're enabled and properly configured on your host system.

## 12. Implement Logging and Monitoring

Use Docker's logging capabilities and consider integrating with external monitoring tools:

```yaml
version: '3.8'
services:
  web:
    image: nginx
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "10"
```

## Conclusion

Implementing these security best practices will significantly improve the security posture of your Docker environments. Remember, security is an ongoing process, and it's important to stay informed about the latest security threats and Docker security features.
# Docker in Production: Orchestration with Kubernetes

Kubernetes (K8s) is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It works well with Docker and provides a robust set of features for running containers in production.

Kubernetes is a topic of its own, but here are some key concepts and best practices for using Kubernetes with Docker in production environments.

## Key Kubernetes Concepts

1. **Pods**: The smallest deployable units in Kubernetes, containing one or more containers.
2. **Services**: An abstract way to expose an application running on a set of Pods.
3. **Deployments**: Describe the desired state for Pods and ReplicaSets.
4. **Namespaces**: Virtual clusters within a physical cluster.
## Setting Up a Kubernetes Cluster

You can set up a local Kubernetes cluster using Minikube:

```bash
minikube start
```

For production, consider managed Kubernetes services like Google Kubernetes Engine (GKE), Amazon EKS, or Azure AKS.

## Deploying a Docker Container to Kubernetes

1. Create a Deployment YAML file (`deployment.yaml`):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
```

2. Apply the Deployment:

```bash
kubectl apply -f deployment.yaml
```

3. Create a Service to expose the Deployment (`service.yaml`):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
```

4. Apply the Service:

```bash
kubectl apply -f service.yaml
```
## Scaling in Kubernetes
|
||||
|
||||
Scale your deployment easily:
|
||||
|
||||
```bash
|
||||
kubectl scale deployment nginx-deployment --replicas=5
|
||||
```
## Rolling Updates

Update your application without downtime:

```bash
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
```

## Monitoring and Logging

1. View Pod logs:

```bash
kubectl logs <pod-name>
```

2. Use Prometheus and Grafana for monitoring:
```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm install prometheus prometheus-community/prometheus
helm install grafana grafana/grafana
```
## Kubernetes Dashboard

Enable the Kubernetes Dashboard for a GUI:

```bash
minikube addons enable dashboard
minikube dashboard
```

## Persistent Storage in Kubernetes

Use Persistent Volumes (PV) and Persistent Volume Claims (PVC):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```
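A claim is consumed by referencing it from a Pod spec. A minimal sketch (the container name and mount path are illustrative):

```yaml
spec:
  containers:
  - name: mysql
    image: mysql:8
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql   # data written here survives Pod restarts
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: mysql-pv-claim   # the PVC defined above
```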
## Kubernetes Networking

1. **ClusterIP**: Exposes the Service on a cluster-internal IP.
2. **NodePort**: Exposes the Service on each Node's IP at a static port.
3. **LoadBalancer**: Exposes the Service externally using a cloud provider's load balancer.
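The Service type is set via `spec.type` in the manifest. As an example, a NodePort variant of the earlier nginx Service (the `nodePort` value is illustrative and must fall within the cluster's NodePort range, 30000–32767 by default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80          # cluster-internal port
    targetPort: 80    # container port
    nodePort: 30080   # static port on every node
```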
## Kubernetes Secrets

Manage sensitive information:

```bash
kubectl create secret generic my-secret --from-literal=password=mysecretpassword
```
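The same secret can also be declared as a manifest; using `stringData` lets Kubernetes handle the base64 encoding for you (a sketch equivalent to the command above):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
stringData:                       # plain text; Kubernetes base64-encodes it on write
  password: mysecretpassword
```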
Use in a Pod:

```yaml
spec:
  containers:
  - name: myapp
    image: myapp
    env:
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: password
```

## Helm: The Kubernetes Package Manager

Helm simplifies deploying complex applications:

```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-release bitnami/wordpress
```

## Best Practices for Kubernetes in Production

1. Use namespaces to organize resources.
2. Implement resource requests and limits.
3. Use liveness and readiness probes.
4. Implement proper logging and monitoring.
5. Regularly update Kubernetes and your applications.
6. Use Network Policies for fine-grained network control.
7. Implement proper RBAC (Role-Based Access Control).
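As an example of point 6, a minimal NetworkPolicy that denies all ingress traffic to every Pod in a namespace, on top of which you would add more specific allow rules (note that NetworkPolicies only take effect if the cluster's network plugin supports them):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}      # empty selector matches all Pods in the namespace
  policyTypes:
  - Ingress            # no ingress rules listed, so all ingress is denied
```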
## Conclusion

Kubernetes provides a powerful platform for orchestrating Docker containers in production environments. It offers robust features for scaling, updating, and managing containerized applications. While there's a learning curve, the benefits of using Kubernetes for production Docker deployments are significant, especially for large, complex applications.

239 docs/011-docker-performance.md Normal file
@@ -0,0 +1,239 @@
# Docker Performance Optimization

Optimizing Docker performance is crucial for efficient resource utilization and improved application responsiveness. This chapter covers various techniques and best practices to enhance the performance of your Docker containers and overall Docker environment.

## 1. Optimizing Docker Images

### Use Multi-Stage Builds

Multi-stage builds can significantly reduce the size of your final Docker image:

```dockerfile
# Build stage
FROM golang:1.16 AS builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .

# Final stage
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /app/main .
CMD ["./main"]
```

### Minimize Layer Count

Combine commands to reduce the number of layers:

```dockerfile
RUN apt-get update && apt-get install -y \
    package1 \
    package2 \
    package3 \
    && rm -rf /var/lib/apt/lists/*
```

### Use .dockerignore

Create a `.dockerignore` file to exclude unnecessary files from the build context:

```
.git
*.md
*.log
```

## 2. Container Resource Management

### Set Memory and CPU Limits

```yaml
version: '3'
services:
  app:
    image: myapp
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
```

### Use --cpuset-cpus for CPU Pinning

```bash
docker run --cpuset-cpus="0,1" myapp
```

## 3. Networking Optimization

### Use Host Networking Mode

For high-performance scenarios, consider using host networking:

```bash
docker run --network host myapp
```

### Optimize DNS Resolution

If you're experiencing slow DNS resolution, you can use the `--dns` option:

```bash
docker run --dns 8.8.8.8 myapp
```

## 4. Storage Optimization

### Use Volumes Instead of Bind Mounts

Volumes generally offer better performance than bind mounts:

```yaml
version: '3'
services:
  db:
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:
```

### Consider Using tmpfs Mounts

For ephemeral data, tmpfs mounts can improve I/O performance:

```bash
docker run --tmpfs /tmp myapp
```

## 5. Logging and Monitoring

### Use the JSON-file Logging Driver with Limits

```yaml
version: '3'
services:
  app:
    image: myapp
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
```

### Implement Proper Monitoring

Use tools like Prometheus and Grafana for comprehensive monitoring:

```yaml
version: '3'
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
```
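The Compose file above mounts a `prometheus.yml` that you must supply yourself. A minimal sketch (the job name and scrape target are placeholders for your own service):

```yaml
# prometheus.yml — minimal scrape configuration
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'app'
    static_configs:
      - targets: ['app:8080']   # hostname:port of a service exposing metrics
```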
## 6. Docker Daemon Optimization

### Adjust the Storage Driver

Consider using overlay2 for better performance:

```json
{
  "storage-driver": "overlay2"
}
```

### Enable Live Restore

This allows containers to keep running even if the Docker daemon is unavailable:

```json
{
  "live-restore": true
}
```

## 7. Application-Level Optimization

### Use Alpine-Based Images

Alpine-based images are typically smaller and faster to pull:

```dockerfile
FROM alpine:3.14
RUN apk add --no-cache python3
```

### Optimize Your Application Code

Ensure your application is optimized for containerized environments:

- Implement proper caching mechanisms
- Optimize database queries
- Use asynchronous processing where appropriate

## 8. Benchmarking and Profiling

### Use Docker's Built-in Stats Command

```bash
docker stats
```

### Benchmark with Tools Like Apache Bench

```bash
ab -n 1000 -c 100 http://localhost/
```

## 9. Orchestration-Level Optimization

When using orchestration tools like Kubernetes:

### Use Horizontal Pod Autoscaler
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
```
### Implement Proper Liveness and Readiness Probes

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 3
```
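A readiness probe uses the same fields; the difference is that a failing readiness probe removes the Pod from Service endpoints instead of restarting it. A sketch (the path and port are illustrative):

```yaml
readinessProbe:
  httpGet:
    path: /ready        # endpoint your app exposes when it can serve traffic
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
```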
## Conclusion

Optimizing Docker performance is an ongoing process that involves various aspects of your Docker setup, from image building to runtime configuration and application-level optimizations. By implementing these best practices and continuously monitoring your Docker environment, you can significantly improve the performance and efficiency of your containerized applications.

242 docs/012-docker-debugging.md Normal file
@@ -0,0 +1,242 @@
# Docker Troubleshooting and Debugging

Even with careful planning and best practices, issues can arise when working with Docker. This chapter covers common problems you might encounter and provides strategies for effective troubleshooting and debugging.

## 1. Container Lifecycle Issues

### Container Won't Start

If a container fails to start, use these commands:

```bash
# View container logs
docker logs <container_id>

# Inspect container details
docker inspect <container_id>

# Check container status
docker ps -a
```

### Container Exits Immediately

For containers that exit right after starting:

```bash
# Run the container in interactive mode
docker run -it --entrypoint /bin/sh <image_name>

# Check the ENTRYPOINT and CMD in the Dockerfile
docker inspect --format='{{.Config.Entrypoint}}' <image_name>
docker inspect --format='{{.Config.Cmd}}' <image_name>
```

## 2. Networking Issues

### Container Can't Connect to Network

To troubleshoot network connectivity:

```bash
# Inspect network settings
docker network inspect <network_name>

# Check container's network settings
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_id>

# Use a network debugging container
docker run --net container:<container_id> nicolaka/netshoot
```

### Port Mapping Issues

If you can't access a container's exposed port:

```bash
# Check port mappings
docker port <container_id>

# Verify host machine's firewall settings
sudo ufw status

# Test the port directly on the container
docker exec <container_id> nc -zv localhost <port>
```

## 3. Storage and Volume Issues

### Data Persistence Problems

For issues with data not persisting:

```bash
# List volumes
docker volume ls

# Inspect a volume
docker volume inspect <volume_name>

# Check volume mounts in a container
docker inspect --format='{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' <container_id>
```

### Disk Space Issues

If you're running out of disk space:

```bash
# Check Docker disk usage
docker system df

# Remove unused data
docker system prune -a

# Identify large images
docker images --format "{{.Size}}\t{{.Repository}}:{{.Tag}}" | sort -h
```

## 4. Resource Constraints

### Container Using Too Much CPU or Memory

To identify and address resource usage issues:

```bash
# Monitor resource usage
docker stats

# Set resource limits
docker run --memory=512m --cpus=0.5 <image_name>

# Update limits for a running container
docker update --cpus=0.75 <container_id>
```

## 5. Image-related Issues

### Image Pull Failures

If you can't pull an image:
```bash
# Check Docker Hub status
curl -Is https://registry.hub.docker.com/v2/ | head -n 1

# Verify your Docker login
docker login

# Retry the pull with client debug output enabled
docker -D pull <image_name>
```
### Image Build Failures

For issues during image builds:

```bash
# Build with verbose output
docker build --progress=plain -t <image_name> .

# Check for issues in the Dockerfile
docker build --no-cache -t <image_name> .
```

## 6. Docker Daemon Issues

### Docker Daemon Won't Start

If the Docker daemon fails to start:

```bash
# Check Docker daemon status
sudo systemctl status docker

# View Docker daemon logs
sudo journalctl -u docker.service

# Restart Docker daemon
sudo systemctl restart docker
```

## 7. Debugging Techniques

### Interactive Debugging

To debug a running container interactively:

```bash
# Start an interactive shell in a running container
docker exec -it <container_id> /bin/bash

# Run a new container with a shell for debugging
docker run -it --entrypoint /bin/bash <image_name>
```

### Using Docker Events

Monitor Docker events for troubleshooting:

```bash
docker events
```

### Logging

Configure and view container logs:

```bash
# View container logs
docker logs <container_id>

# Follow log output
docker logs -f <container_id>

# Adjust logging driver
docker run --log-driver json-file --log-opt max-size=10m <image_name>
```

## 8. Performance Debugging

### Identifying Performance Bottlenecks

Use these commands to identify performance issues:

```bash
# Monitor container resource usage
docker stats

# Profile container processes
docker top <container_id>

# Use cAdvisor for more detailed metrics
docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --volume=/dev/disk/:/dev/disk:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  google/cadvisor:latest
```

## 9. Docker Compose Troubleshooting

For issues with Docker Compose:

```bash
# View logs for all services
docker-compose logs

# Rebuild and recreate containers
docker-compose up -d --build

# Check the configuration
docker-compose config
```

## Conclusion

Effective troubleshooting and debugging are essential skills for working with Docker. By understanding these techniques and tools, you can quickly identify and resolve issues in your Docker environment. Remember to always check the official Docker documentation and community forums for the most up-to-date information and solutions to common problems.

185 docs/013-docker-tips.md Normal file
@@ -0,0 +1,185 @@
# Advanced Docker Concepts and Features

As you become more proficient with Docker, you'll encounter more advanced concepts and features. This chapter explores some of these topics to help you take your Docker skills to the next level, even though some of them go beyond the scope of this introductory ebook.

## 1. Multi-stage Builds

Multi-stage builds allow you to create more efficient Dockerfiles by using multiple FROM statements in your Dockerfile.

```dockerfile
# Build stage
FROM golang:1.16 AS builder
WORKDIR /app
COPY . .
RUN go build -o main .

# Final stage
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /app/main .
CMD ["./main"]
```

This approach reduces the final image size by only including necessary artifacts from the build stage.

## 2. Docker BuildKit

BuildKit is a next-generation build engine for Docker. Enable it by setting an environment variable:

```bash
export DOCKER_BUILDKIT=1
```

BuildKit offers faster builds, better cache management, and advanced features like:

- Concurrent dependency resolution
- Efficient instruction caching
- Automatic garbage collection

## 3. Custom Bridge Networks

Create isolated network environments for your containers:

```bash
docker network create --driver bridge isolated_network
docker run --network=isolated_network --name container1 -d nginx
docker run --network=isolated_network --name container2 -d nginx
```

Containers on this network can communicate using their names as hostnames.

## 4. Docker Contexts

Manage multiple Docker environments with contexts:

```bash
# Create a new context
docker context create my-remote --docker "host=ssh://user@remote-host"

# List contexts
docker context ls

# Switch context
docker context use my-remote
```

## 5. Docker Content Trust (DCT)

DCT provides a way to verify the integrity and publisher of images:

```bash
# Enable DCT
export DOCKER_CONTENT_TRUST=1

# Push a signed image
docker push myrepo/myimage:latest
```

## 6. Docker Secrets

Manage sensitive data with Docker secrets:

```bash
# Create a secret
echo "mypassword" | docker secret create my_secret -

# Use the secret in a service
docker service create --name myservice --secret my_secret myimage
```
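In a Compose file deployed to a swarm, the same secret can be wired up declaratively (a sketch; it assumes `my_secret` was already created as shown above):

```yaml
version: '3.7'
services:
  myservice:
    image: myimage
    secrets:
      - my_secret           # grants this service access to the secret
secrets:
  my_secret:
    external: true          # secret was created outside this file
```

Inside the container, the value appears as the file `/run/secrets/my_secret`.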
## 7. Docker Health Checks

Implement custom health checks in your Dockerfile:

```dockerfile
HEALTHCHECK --interval=30s --timeout=10s CMD curl -f http://localhost/ || exit 1
```
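The same check can live in a Compose file instead of the image (the image name is a placeholder; `retries` controls how many consecutive failures mark the container unhealthy):

```yaml
services:
  web:
    image: myimage
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/"]
      interval: 30s
      timeout: 10s
      retries: 3
```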
## 8. Docker Plugins

Extend Docker's functionality with plugins:

```bash
# Install a plugin
docker plugin install vieux/sshfs

# Use the plugin
docker volume create -d vieux/sshfs -o sshcmd=user@host:/path sshvolume
```

## 9. Docker Experimental Features

Enable experimental features in your Docker daemon config (`/etc/docker/daemon.json`):

```json
{
  "experimental": true
}
```

This unlocks features like:

- Checkpoint and restore
- Rootless mode

## 10. Container Escape Protection

Use security options to prevent container escapes:

```bash
docker run --security-opt="no-new-privileges:true" --cap-drop=ALL myimage
```

## 11. Custom Dockerfile Instructions

Create custom Dockerfile instructions using ONBUILD:

```dockerfile
ONBUILD ADD . /app/src
ONBUILD RUN /usr/local/bin/python-build --dir /app/src
```

## 12. Docker Manifest

Create and push multi-architecture images:

```bash
docker manifest create myrepo/myimage myrepo/myimage:amd64 myrepo/myimage:arm64
docker manifest push myrepo/myimage
```

## 13. Docker Buildx

Buildx is a CLI plugin that extends the docker build command with full support for the features provided by BuildKit:

```bash
# Create a new builder instance
docker buildx create --name mybuilder

# Build and push multi-platform images
docker buildx build --platform linux/amd64,linux/arm64 -t myrepo/myimage:latest --push .
```

## 14. Docker Compose Profiles

Use profiles in Docker Compose to selectively start services:

```yaml
services:
  frontend:
    image: frontend
    profiles: ["frontend"]
  backend:
    image: backend
    profiles: ["backend"]
```

Start specific profiles:

```bash
docker-compose --profile frontend up -d
```

## Conclusion

These advanced Docker concepts and features provide powerful tools for optimizing your Docker workflows, improving security, and extending Docker's capabilities. As you incorporate these techniques into your projects, you'll be able to create more efficient, secure, and flexible Docker environments.

215 docs/014-docker-ci-cd.md Normal file
@@ -0,0 +1,215 @@
# Docker in CI/CD Pipelines

Integrating Docker into Continuous Integration and Continuous Deployment (CI/CD) pipelines can significantly streamline the development, testing, and deployment processes. This chapter explores how to effectively use Docker in CI/CD workflows.

## 1. Docker in Continuous Integration

### Automated Building and Testing

Use Docker to create consistent environments for building and testing your application:

```yaml
# .gitlab-ci.yml example
build_and_test:
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker build -t myapp:${CI_COMMIT_SHA} .
    - docker run myapp:${CI_COMMIT_SHA} npm test
```

### Parallel Testing

Leverage Docker to run tests in parallel:

```yaml
# GitHub Actions example
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [12.x, 14.x, 16.x]
    steps:
      - uses: actions/checkout@v2
      - name: Test with Node.js ${{ matrix.node-version }}
        run: |
          docker build -t myapp:${{ matrix.node-version }} --build-arg NODE_VERSION=${{ matrix.node-version }} .
          docker run myapp:${{ matrix.node-version }} npm test
```

## 2. Docker in Continuous Deployment

### Pushing to Docker Registry

After successful tests, push your Docker image to a registry:

```groovy
// Jenkins pipeline example
pipeline {
    agent any
    stages {
        stage('Build and Push') {
            steps {
                script {
                    docker.withRegistry('https://registry.example.com', 'credentials-id') {
                        def customImage = docker.build("my-image:${env.BUILD_ID}")
                        customImage.push()
                    }
                }
            }
        }
    }
}
```

### Deploying with Docker Swarm or Kubernetes

Use Docker Swarm or Kubernetes for orchestrating deployments:

```yaml
# Docker Swarm deployment in GitLab CI
deploy:
  stage: deploy
  script:
    - docker stack deploy -c docker-compose.yml myapp
```

For Kubernetes:

```yaml
# Kubernetes deployment in CircleCI
deployment:
  kubectl:
    command: |
      kubectl set image deployment/myapp myapp=myrepo/myapp:${CIRCLE_SHA1}
```

## 3. Docker Compose in CI/CD

Use Docker Compose to manage multi-container applications in your CI/CD pipeline:

```yaml
# Travis CI example
services:
  - docker

before_install:
  - docker-compose up -d
  - docker-compose exec -T app npm install

script:
  - docker-compose exec -T app npm test

after_success:
  - docker-compose down
```

## 4. Security Scanning

Integrate security scanning into your pipeline:

```yaml
# GitLab CI with Trivy scanner
scan:
  image: aquasec/trivy:latest
  script:
    - trivy image myapp:${CI_COMMIT_SHA}
```

## 5. Performance Testing

Incorporate performance testing using Docker:

```groovy
// Jenkins pipeline stage with Apache JMeter
stage('Performance Tests') {
    steps {
        sh 'docker run -v ${WORKSPACE}:/jmeter apache/jmeter -n -t test-plan.jmx -l results.jtl'
        perfReport 'results.jtl'
    }
}
```

## 6. Environment-Specific Configurations

Use Docker's environment variables and build arguments for environment-specific configurations:

```dockerfile
ARG CONFIG_FILE=default.conf
COPY config/${CONFIG_FILE} /app/config.conf
```

In your CI/CD pipeline:

```yaml
build:
  script:
    - docker build --build-arg CONFIG_FILE=${ENV}.conf -t myapp:${CI_COMMIT_SHA} .
```

## 7. Caching in CI/CD

Optimize build times by caching Docker layers:
```yaml
# GitHub Actions example with caching
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Cache Docker layers
        uses: actions/cache@v2
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-buildx-
      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: user/app:latest
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache
```
## 8. Blue-Green Deployments with Docker

Implement blue-green deployments using Docker:
```bash
#!/bin/bash
# Script for blue-green deployment
docker service update --image myrepo/myapp:${NEW_VERSION} myapp_blue
docker service scale myapp_blue=2 myapp_green=0
```
## 9. Monitoring and Logging in CI/CD

Integrate monitoring and logging solutions:

```yaml
# Docker Compose with ELK stack
version: '3'
services:
  app:
    image: myapp:latest
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "10"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.0
  logstash:
    image: docker.elastic.co/logstash/logstash:7.10.0
  kibana:
    image: docker.elastic.co/kibana/kibana:7.10.0
```

## Conclusion

Integrating Docker into your CI/CD pipeline can greatly enhance your development and deployment processes. It provides consistency across environments, improves testing efficiency, and streamlines deployments. By leveraging Docker in your CI/CD workflows, you can achieve faster, more reliable software delivery.

254 docs/015-docker-microservices.md Normal file
@@ -0,0 +1,254 @@
# Docker and Microservices Architecture

Microservices architecture is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms. Docker's containerization technology is an excellent fit for microservices, providing isolation, portability, and scalability.

## 1. Principles of Microservices

- Single Responsibility Principle
- Decentralized Data Management
- Failure Isolation
- Scalability
- Technology Diversity

## 2. Dockerizing Microservices

### Sample Microservice Dockerfile

```dockerfile
FROM node:14-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

### Building and Running

```bash
docker build -t my-microservice .
docker run -d -p 3000:3000 my-microservice
```

## 3. Inter-service Communication

### REST API

```javascript
// Express.js example
const express = require('express');
const app = express();

app.get('/api/data', (req, res) => {
  res.json({ message: 'Data from Microservice A' });
});

app.listen(3000, () => console.log('Microservice A listening on port 3000'));
```

### Message Queues

Using RabbitMQ:

```dockerfile
# Dockerfile
FROM node:14-alpine
RUN npm install amqplib
COPY . .
CMD ["node", "consumer.js"]
```

```javascript
// consumer.js
const amqp = require('amqplib');

async function consume() {
  const connection = await amqp.connect('amqp://rabbitmq');
  const channel = await connection.createChannel();
  await channel.assertQueue('task_queue');

  channel.consume('task_queue', (msg) => {
    console.log("Received:", msg.content.toString());
    channel.ack(msg);
  });
}

consume();
```
## 4. Service Discovery

Using Consul:

```yaml
version: '3'
services:
  consul:
    image: consul:latest
    ports:
      - "8500:8500"

  service-a:
    build: ./service-a
    environment:
      - CONSUL_HTTP_ADDR=consul:8500

  service-b:
    build: ./service-b
    environment:
      - CONSUL_HTTP_ADDR=consul:8500
```

## 5. API Gateway

Using NGINX as an API Gateway:
```nginx
# A standalone nginx.conf requires an events block, even if empty
events {}

http {
    upstream service_a {
        server service-a:3000;
    }
    upstream service_b {
        server service-b:3000;
    }

    server {
        listen 80;

        location /api/service-a {
            proxy_pass http://service_a;
        }

        location /api/service-b {
            proxy_pass http://service_b;
        }
    }
}
```
## 6. Data Management

### Database per Service

```yaml
version: '3'
services:
  service-a:
    build: ./service-a
    depends_on:
      - db-a

  db-a:
    image: postgres:13
    environment:
      POSTGRES_DB: service_a_db
      POSTGRES_PASSWORD: password

  service-b:
    build: ./service-b
    depends_on:
      - db-b

  db-b:
    image: mysql:8
    environment:
      MYSQL_DATABASE: service_b_db
      MYSQL_ROOT_PASSWORD: password
```

## 7. Monitoring Microservices

Using Prometheus and Grafana:

```yaml
version: '3'
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"

  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    depends_on:
      - prometheus
```
|
||||
|
||||
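The Compose file above mounts a `prometheus.yml` that is never shown. A minimal sketch of that file might look like the following; the scrape targets and the port `3000` are assumptions carried over from the earlier `service-a`/`service-b` examples, and assume each service exposes a `/metrics` endpoint:

```yaml
# Hypothetical prometheus.yml matching the Compose file above
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'services'
    static_configs:
      - targets: ['service-a:3000', 'service-b:3000']
```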
## 8. Scaling Microservices

Using Docker Swarm:

```bash
# Initialize swarm
docker swarm init

# Deploy stack
docker stack deploy -c docker-compose.yml myapp

# Scale a service
docker service scale myapp_service-a=3
```

## 9. Testing Microservices

### Unit Testing

```javascript
// Jest example
test('API returns correct data', async () => {
  const response = await request(app).get('/api/data');
  expect(response.statusCode).toBe(200);
  expect(response.body).toHaveProperty('message');
});
```

### Integration Testing

```yaml
version: '3'
services:
  app:
    build: .
    depends_on:
      - test-db

  test-db:
    image: postgres:13
    environment:
      POSTGRES_DB: test_db
      POSTGRES_PASSWORD: test_password

  test:
    build:
      context: .
      dockerfile: Dockerfile.test
    depends_on:
      - app
      - test-db
    command: ["npm", "run", "test"]
```

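The integration-test Compose file above references a `Dockerfile.test` that is never shown. A minimal sketch follows; the Node base image and npm layout are assumptions, chosen because the unit-test example uses Jest:

```dockerfile
# Hypothetical Dockerfile.test - assumes a Node.js service tested with npm/Jest
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
# The actual test run is driven by the `command:` entry in the Compose file
CMD ["npm", "run", "test"]
```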
## 10. Deployment Strategies

### Blue-Green Deployment

```bash
# Deploy new version (green)
docker service create --name myapp-green --replicas 2 myrepo/myapp:v2

# Switch traffic to green
docker service update --network-add proxy-network myapp-green
docker service update --network-rm proxy-network myapp-blue

# Remove old version (blue)
docker service rm myapp-blue
```

## Conclusion

Docker provides an excellent platform for developing, deploying, and managing microservices. It offers the isolation, portability, and scalability that a microservices architecture demands. By leveraging Docker's features along with complementary tools and services, you can build robust, scalable, and maintainable microservices-based applications.

docs/099-docker-swarm.md

# Docker Swarm

According to the official **Docker** docs, a swarm is a group of machines that are running **Docker** and joined into a cluster. When you run commands against a **Docker swarm**, they are executed on the cluster by a swarm manager. The machines in a swarm can be physical or virtual, and after joining a swarm they are referred to as nodes. I will do a quick demo shortly on my **DigitalOcean** account!

A **Docker Swarm** consists of **manager nodes** and **worker nodes**.

The manager nodes dispatch tasks to the worker nodes, while the worker nodes simply execute those tasks. For high availability, it is recommended to have **3** or **5** manager nodes.

## Docker Services

To deploy an application image when Docker Engine is in swarm mode, you have to create a service. A service is a group of containers based on the same `image:tag`. Services make it simple to scale your application.

In order to have **Docker services**, you must first have your **Docker swarm** and nodes ready.

![](https://cdn.devdojo.com/posts/images/June2020/docker-swarm.png)

## Building a Swarm

I'll do a really quick demo on how to build a **Docker swarm with 3 managers and 3 workers**.

For that, I'm going to deploy 6 droplets on DigitalOcean:

![](https://imgur.com/XpvMsZC.png)

Then once you've got that ready, **install docker** just as we did in the **[Introduction to Docker Part 1](https://devdojo.com/tutorials/introduction-to-docker-part-1)** and then just follow the steps here:

### Step 1

Initialize the docker swarm on your first manager node:

```
docker swarm init --advertise-addr your_droplet_ip_here
```

### Step 2

Then, to get the command that you need to join the rest of the managers, simply run this:

```
docker swarm join-token manager
```

> Note: This will provide you with the exact command that you need to run on the rest of the swarm manager nodes. Example:

![](https://imgur.com/2dhxkTK.png)

### Step 3

To get the command that you need for joining workers, just run:

```
docker swarm join-token worker
```

The command for workers would be pretty similar to the command for joining managers, but the token would be a bit different.

The output that you would get when joining a manager would look like this:

![](https://imgur.com/iUWnEAF.png)

### Step 4

Then once you have your join commands, **ssh to the rest of your nodes and join them** as workers and managers accordingly.

# Managing the cluster

After you've run the join commands on all of your workers and managers, you can use the following commands to check your cluster status:

* To list all of the available nodes, run:

```
docker node ls
```

> Note: This command can only be run from a **swarm manager**!

Output:

![](https://imgur.com/Kv1UEmC.png)

* To get information about the current state, run:

```
docker info
```

Output:

![](https://imgur.com/vk3qZxX.png)

## Promote a worker to manager

To promote a worker to a manager, run the following from **one** of your manager nodes:

```
docker node promote node_id_here
```

Also note that each manager acts as a worker too, so in your docker info output you should see 6 workers and 3 manager nodes.

## Using Services

In order to create a service, you need to use the following command:

```
docker service create --name bobby-web -p 80:80 --replicas 5 bobbyiliev/php-apache
```

Note that I already have my bobbyiliev/php-apache image pushed to the Docker Hub as described in the previous blog posts.

To get a list of your services, run:

```
docker service ls
```

Output:

![](https://imgur.com/kWzPYFw.png)

Then, in order to get a list of the running containers, you need to use the following command:

```
docker service ps name_of_your_service_here
```

Output:

![](https://imgur.com/nZJdZdT.png)

Then you can visit the IP address of any of your nodes and you should be able to see the service! We can basically visit any node from the swarm and we will still get to the service.

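The same service could also be described declaratively in a stack file for `docker stack deploy`. This is only a sketch, reusing the image, port mapping, and replica count from the `docker service create` command above:

```yaml
# Hypothetical docker-compose.yml equivalent of the service above,
# for use with: docker stack deploy -c docker-compose.yml bobby
version: '3'
services:
  bobby-web:
    image: bobbyiliev/php-apache
    ports:
      - "80:80"
    deploy:
      replicas: 5
```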
## Scaling a service

We could try shutting down one of the nodes and see how the swarm automatically spins up a new process on another node so that it matches the desired state of 5 replicas.

To do that, go to your **DigitalOcean** control panel and hit the power off button for one of your Droplets. Then head back to your terminal and run:

```
docker service ps name_of_your_service_here
```

Output:

![](https://imgur.com/KRW3tbe.png)

In the screenshot above, you can see how I've shut down the droplet called worker-2 and how the replica bobby-web.2 was instantly started again on another node called worker-01 to match the desired state of 5 replicas.

To add more replicas, run:

```
docker service scale name_of_your_service_here=7
```

Output:

![](https://imgur.com/WY55mWY.png)

This would automatically spin up 2 more containers; you can check this with the docker service ps command:

```
docker service ps name_of_your_service_here
```

Then, as a test, try starting the node that we've shut down and check if it picked up any tasks.

**Tip**: Bringing new nodes into the cluster does not automatically redistribute running tasks.

## Deleting a service

In order to delete a service, all you need to do is run the following command:

```
docker service rm name_of_your_service
```

Output:

![](https://imgur.com/kpBDsVm.png)

Now you know how to initialize and scale a docker swarm cluster! For more information, make sure to go through the official Docker documentation [here](https://docs.docker.com/engine/swarm/).

## Docker Swarm Knowledge Check

Once you've read this post, make sure to test your knowledge with this **[Docker Swarm Quiz](https://quizapi.io/predefined-quizzes/common-docker-swarm-interview-questions)**:

[https://quizapi.io/predefined-quizzes/common-docker-swarm-interview-questions](https://quizapi.io/predefined-quizzes/common-docker-swarm-interview-questions)

# The `cal` Command

The `cal` command displays a formatted calendar in the terminal. If no options are specified, `cal` displays the current month, with the current day highlighted.

### Syntax:
```
cal [general options] [-jy] [[month] year]
```

### Options:
|**Option**|**Description**|
|:--|:--|
|`-h`|Don't highlight today's date.|
|`-m month`|Specify a month to display. The month specifier is a full month name (e.g., February), a month abbreviation of at least three letters (e.g., Feb), or a number (e.g., 2). If you specify a number followed by the letter "f" or "p", the month of the following or previous year, respectively, is displayed. For instance, `-m 2f` displays February of next year.|
|`-y year`|Specify a year to display. For example, `-y 1970` displays the entire calendar of the year 1970.|
|`-3`|Display last month, this month, and next month.|
|`-1`|Display only this month. This is the default.|
|`-A num`|Display num months occurring after any months already specified. For example, `-3 -A 3` displays last month, this month, and four months after this one; and `-y 1970 -A 2` displays every month in 1970, and the first two months of 1971.|
|`-B num`|Display num months occurring before any months already specified. For example, `-3 -B 2` displays the previous three months, this month, and next month.|
|`-d YYYY-MM`|Operate as if the current month is number MM of year YYYY.|

### Examples:
1. Display the calendar for this month, with today highlighted.
```
cal
```

2. Same as the previous command, but do not highlight today.
```
cal -h
```

3. Display last month, this month, and next month.
```
cal -3
```
4. Display this entire year's calendar.
```
cal -y
```

5. Display the entire year 2000 calendar.
```
cal -y 2000
```

6. Same as the previous command.
```
cal 2000
```

7. Display the calendar for December of this year.
```
cal -m [December, Dec, or 12]
```

8. Display the calendar for December 2000.
```
cal 12 2000
```

# The `bc` command

The `bc` command provides the functionality of being able to perform mathematical calculations through the command line.

### Examples:

1. Arithmetic:

```
Input : $ echo "11+5" | bc
Output : 16
```
2. Increment:
- var++ : Post increment operator; the value of the variable is used first and then the variable is incremented.
- ++var : Pre increment operator; the variable is incremented first and then its value is used.

```
Input: $ echo "var=3;++var" | bc
Output: 4
```
3. Decrement:
- var-- : Post decrement operator; the value of the variable is used first and then the variable is decremented.
- --var : Pre decrement operator; the variable is decremented first and then its value is used.

```
Input: $ echo "var=3;--var" | bc
Output: 2
```
4. Assignment:
- var = value : Assign the value to the variable
- var += value : similar to var = var + value
- var -= value : similar to var = var - value
- var *= value : similar to var = var * value
- var /= value : similar to var = var / value
- var ^= value : similar to var = var ^ value
- var %= value : similar to var = var % value

```
Input: $ echo "var=4;var" | bc
Output: 4
```
5. Comparison or Relational:
- If the comparison is true, the result is 1; otherwise (false), the result is 0.
- expr1<expr2 : Result is 1 if expr1 is strictly less than expr2.
- expr1<=expr2 : Result is 1 if expr1 is less than or equal to expr2.
- expr1>expr2 : Result is 1 if expr1 is strictly greater than expr2.
- expr1>=expr2 : Result is 1 if expr1 is greater than or equal to expr2.
- expr1==expr2 : Result is 1 if expr1 is equal to expr2.
- expr1!=expr2 : Result is 1 if expr1 is not equal to expr2.

```
Input: $ echo "6<4" | bc
Output: 0
```
```
Input: $ echo "2==2" | bc
Output: 1
```
6. Logical or Boolean:

- expr1 && expr2 : Result is 1 if both expressions are non-zero.
- expr1 || expr2 : Result is 1 if either expression is non-zero.
- ! expr : Result is 1 if expr is 0.

```
Input: $ echo "! 1" | bc
Output: 0

Input: $ echo "10 && 5" | bc
Output: 1
```

### Syntax:

```
bc [ -hlwsqv ] [long-options] [ file ... ]
```

### Additional Flags and their Functionalities:

*Note: This does not include an exhaustive list of options.*

|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-i`|`--interactive`|Force interactive mode|
|`-l`|`--mathlib`|Use the predefined math routines|
|`-q`|`--quiet`|Opens the interactive mode for bc without printing the header|
|`-s`|`--standard`|Treat non-standard bc constructs as errors|
|`-w`|`--warn`|Provides a warning if non-standard bc constructs are used|

### Notes:

1. The capabilities of `bc` can be further appreciated if used within a script. Aside from basic arithmetic operations, `bc` supports increments/decrements, complex calculations, logical comparisons, etc.
2. Two of the flags in `bc` refer to non-standard constructs. If you evaluate `echo "100>50" | bc -w`, for example, you will get a warning. According to the POSIX page for bc, relational operators are only valid if used within an `if`, `while`, or `for` statement.

@@ -1,31 +0,0 @@
|
||||
# The `help` command
The `help` command displays information about shell builtin commands.

If a `PATTERN` is specified, this command gives detailed help on all commands matching the `PATTERN`; otherwise, the list of available help topics is printed.

## Syntax
```bash
$ help [-dms] [PATTERN ...]
```

## Options
|**Option**|**Description**|
|:--|:--|
|`-d`|Output a short description for each topic.|
|`-m`|Display usage in pseudo-manpage format.|
|`-s`|Output only a short usage synopsis for each topic matching the provided `PATTERN`.|

## Examples of uses:
1. We get the complete information about the `cd` command
```bash
$ help cd
```
2. We get a short description of the `pwd` command
```bash
$ help -d pwd
```
3. We get the syntax of the `cd` command
```bash
$ help -s cd
```
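Since `help` is a Bash builtin (not a standalone binary), it must run inside Bash. A small sketch listing one-line descriptions for several builtins at once, invoking Bash explicitly in case the surrounding script runs under another shell:

```shell
# `help` accepts multiple patterns; -d prints one description line per topic.
# Running through `bash -c` guarantees the builtin is available.
bash -c 'help -d cd pwd type'
```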
@@ -1,29 +0,0 @@
|
||||
# The `factor` command
The `factor` command prints the prime factors of each specified integer `NUMBER`. If none are specified on the command line, it will read them from the standard input.

## Syntax
```bash
$ factor [NUMBER]...
```
OR:
```bash
$ factor OPTION
```

## Options
|**Option**|**Description**|
|:--|:--|
|`--help`|Display a help message and exit.|
|`--version`|Output version information and exit.|

## Examples

1. Print the prime factors of a prime number (the output lists only the number itself).
```bash
$ factor 53
```

2. Print the prime factors of a non-prime number.
```bash
$ factor 75
```

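Because a prime's output lists exactly one factor after the colon, `factor` can double as a quick primality test in a script. This is just a sketch; `is_prime` is an illustrative helper name, not part of the command:

```shell
#!/bin/sh
# A prime's `factor` output is "N: N" - exactly one field after the colon.
is_prime() {
    count=$(factor "$1" | awk '{print NF - 1}')
    [ "$count" -eq 1 ]
}

is_prime 53 && echo "53 is prime"
is_prime 75 || echo "75 is not prime"
```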
@@ -1,32 +0,0 @@
|
||||
# The `whatis` command

The `whatis` command is used to display one-line manual page descriptions for commands.
It can be used to get a basic understanding of what an (unknown) command is used for.

### Examples of uses:

1. To display what `ls` is used for:

```
whatis ls
```

2. To display the use of all commands which start with `make` (the pattern is quoted so the shell doesn't expand the wildcard), execute the following:

```
whatis -w "make*"
```

### Syntax:

```
whatis [-OPTION] [KEYWORD]
```

### Additional Flags and their Functionalities:

|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-d`|`--debug`|Show debugging messages|
|`-r`|`--regex`|Interpret each keyword as a regex|
|`-w`|`--wildcard`|The keyword(s) contain wildcards|

# The `who` command

The `who` command lets you print out a list of logged-in users, the current run level of the system, and the time of the last system boot.

### Examples

1. Print out all details of currently logged-in users

```
who -a
```

2. Print out the list of all dead processes

```
who -d -H
```

### Syntax:

```
who [options] [filename]
```

### Additional Flags and their Functionalities

|**Short Flag** |**Description** |
|---|---|
| `-r` |Print the current runlevel |
| `-d` |Print all the dead processes |
|`-q`|Print all the login names and the total number of logged-on users |
|`-h`|Print the heading of the columns displayed |
|`-b`|Print the time of the last system boot |

# The `free` command

The `free` command in Linux/Unix is used to show memory (RAM/SWAP) information.

# Usage

## Show memory usage

**Action:**
Output the memory usage - available and used, as well as swap.

**Details:**
Outputted values are not human-readable (they are in kibibytes by default).

**Command:**
```
free
```

## Show memory usage in human-readable form

**Action:**
Output the memory usage - available and used, as well as swap.

**Details:**
Outputted values ARE human-readable (in GB / MB).

**Command:**
```
free -h
```

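A small sketch of `free` inside a script, pulling the "available" figure out of the mebibyte view with awk. The column position is an assumption based on the usual modern procps output layout, where "available" is the seventh field on the `Mem:` line:

```shell
#!/bin/sh
# -m prints values in mebibytes; on modern procps the "available"
# figure is the 7th field of the "Mem:" line.
free -m | awk '/^Mem:/ {print "Available memory: " $7 " MiB"}'
```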
# The `sl` command

The `sl` command in Linux is a humorous program that runs a steam locomotive (sl) across your terminal.

![](https://code.softwareshinobi.com/docker.softwareshinobi.com/raw/commit/35e6d94d365e78ece046d9365125dcd6b4ba92f5/images/sl.gif)

## Installation

Install the package before running.

```
sudo apt install sl
```

## Syntax

```
sl
```

# The `finger` command

The `finger` command displays information about the system users.

### Examples:

1. View details about a particular user.

```
finger abc
```
*Output*
```
Login: abc                      Name: (null)
Directory: /home/abc            Shell: /bin/bash
On since Mon Nov 1 18:45 (IST) on :0 (messages off)
On since Mon Nov 1 18:46 (IST) on pts/0 from :0.0
New mail received Fri May 7 10:33 2013 (IST)
Unread since Sat Jun 7 12:59 2003 (IST)
No Plan.
```

2. View login details and idle status of a user

```
finger -s root
```
*Output*
```
Login     Name    Tty    Idle  Login Time    Office    Office Phone
root      root    *1     19d   Wed 17:45
root      root    *2     3d    Fri 16:53
root      root    *3           Mon 20:20
root      root    *ta    2     Tue 15:43
root      root    *tb    2     Tue 15:44
```
### Syntax:

```
finger [-l] [-m] [-p] [-s] [username]
```

### Additional Flags and their Functionalities:

|**Flag** |**Description** |
|:---|:---|
|`-l`|Force long output format.|
|`-m`|Match arguments only on user name (not first or last name).|
|`-p`|Suppress printing of the .plan file in a long format printout.|
|`-s`|Force short output format.|

### Additional Information

**Default Format**

The default format includes the following items:

- Login name
- Full username
- Terminal name
- Write status (an * (asterisk) before the terminal name indicates that write permission is denied)

For each user on the host, the default information list also includes, if known, the following items:

- Idle time (idle time is minutes if it is a single integer, hours and minutes if a : (colon) is present, or days and hours if a "d" is present)
- Login time
- Site-specific information

**Longer Format**

A longer format is used by the finger command whenever a list of users' names is given. (Account names as well as first and last names of users are accepted.) This format is multiline, and includes all the information described above along with the following:

- User's $HOME directory
- User's login shell
- Contents of the .plan file in the user's $HOME directory
- Contents of the .project file in the user's $HOME directory

# The `w` command

The `w` command displays information about the users that are currently active on the machine and their [processes](https://www.computerhope.com/jargon/p/process.htm).

### Examples:

1. Running the `w` command without [arguments](https://www.computerhope.com/jargon/a/argument.htm) shows a list of logged on users and their processes.

```
w
```

2. Show information for the user named *hope*.

```
w hope
```

### Syntax:

```
w [options] [username]
```

### Additional Flags and their Functionalities:

|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-h`|`--no-header`|Don't print the header.|
|`-u`|`--no-current`|Ignores the username while figuring out the current process and cpu times. *(To see an example of this, switch to the root user with `su` and then run both `w` and `w -u`.)*|
|`-s`|`--short`|Display abbreviated output *(don't print the login time, JCPU or PCPU times).*|
|`-f`|`--from`|Toggle printing the from *(remote hostname)* field. The default as released is for the from field to not be printed, although your system administrator or distribution maintainer may have compiled a version where the from field is shown by default.|
|`--help`|<center>-</center>|Display a help message, and exit.|
|`-V`|`--version`|Display version information, and exit.|
|`-o`|`--old-style`|Old style output *(prints blank space for idle times less than one minute)*.|
|*`user`*|<center>-</center>|Show information about the specified user only.|

### Additional Information

The [header](https://www.computerhope.com/jargon/h/header.htm) of the output shows (in this order): the current time, how long the system has been running, how many users are currently logged on, and the system [load](https://www.computerhope.com/jargon/l/load.htm) averages for the past 1, 5, and 15 minutes.

The following entries are displayed for each user:
- login name
- the [tty](https://www.computerhope.com/jargon/t/tty.htm) name
- the [remote](https://www.computerhope.com/jargon/r/remote.htm) [host](https://www.computerhope.com/jargon/h/hostcomp.htm) they are logged in from
- the amount of time they have been logged in
- their [idle](https://www.computerhope.com/jargon/i/idle.htm) time
- JCPU
- PCPU
- the [command line](https://www.computerhope.com/jargon/c/commandi.htm) of their current process

The JCPU time is the time used by all processes attached to the tty. It does not include past background jobs, but does include currently running background jobs.

The PCPU time is the time used by the current process, named in the "what" field.

# The `login` Command

The `login` command initiates a user session.

## Syntax

```bash
$ login [-p] [-h host] [-H] [-f username|username]
```

## Flags and their functionalities

|**Short Flag** |**Description** |
|---|---|
| `-f` |Used to skip login authentication. This option is usually used by the getty(8) autologin feature. |
| `-h` | Used by other servers (such as telnetd(8)) to pass the name of the remote host to login so that it can be placed in utmp and wtmp. Only the superuser is allowed to use this option. |
|`-p`|Used by getty(8) to tell login to preserve the environment. |
|`-H`|Used by other servers (for example, telnetd(8)) to tell login that printing the hostname should be suppressed in the login: prompt. |
|`--help`|Display help text and exit.|
|`-v`|Display version information and exit.|

## Examples

To log in to the system as user abhishek, enter the following at the login prompt:
```bash
$ login: abhishek
```
If a password is defined, the password prompt appears. Enter your password at this prompt.

# `lscpu` command

`lscpu` in Linux/Unix is used to display CPU architecture information. `lscpu` gathers CPU architecture information from `sysfs` and the `/proc/cpuinfo` file.

For example:
```
manish@godsmack:~$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    2
Core(s) per socket:    2
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 142
Model name:            Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz
Stepping:              9
CPU MHz:               700.024
CPU max MHz:           3100.0000
CPU min MHz:           400.0000
BogoMIPS:              5399.81
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              3072K
NUMA node0 CPU(s):     0-3
```

## Options

`-a, --all`
Include lines for online and offline CPUs in the output (default for -e). This option may only be specified together with option -e or -p.
For example: `lscpu -a`

`-b, --online`
Limit the output to online CPUs (default for -p). This option may only be specified together with option -e or -p.
For example: `lscpu -b`

`-c, --offline`
Limit the output to offline CPUs. This option may only be specified together with option -e or -p.

`-e, --extended [=list]`
Display the CPU information in a human-readable extended format.
For example: `lscpu -e`

For more info: use `man lscpu` or `lscpu --help`

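A quick sketch of `lscpu` in a script: the parseable `-p` output prints one comma-separated line per online CPU (comment lines start with `#`), which makes counting CPUs easier than scraping the human-readable table:

```shell
#!/bin/sh
# -p=CPU prints one line per online CPU; header/comment lines start with '#'
cpus=$(lscpu -p=CPU | grep -cv '^#')
echo "Online CPUs: $cpus"
```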
# The `printenv` command

The `printenv` command prints the values of the specified [environment _VARIABLE(s)_](https://www.computerhope.com/jargon/e/envivari.htm). If no [_VARIABLE_](https://www.computerhope.com/jargon/v/variable.htm) is specified, it prints name and value pairs for them all.

### Examples:

1. Display the values of all environment variables.

```
printenv
```

2. Display the location of the current user's [home directory](https://www.computerhope.com/jargon/h/homedir.htm).
```
printenv HOME
```

3. To use a null byte as the terminating character between output entries, pass the `--null` option.
```
printenv --null SHELL HOME
```
*NOTE: By default, the* `printenv` *command uses a newline as the terminating character between output entries.*

### Syntax:

```
printenv [OPTION]... PATTERN...
```

### Additional Flags and their Functionalities:

|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-0`|`--null`|End each output line with a **0** (null) byte rather than a [newline](https://www.computerhope.com/jargon/n/newline.htm).|
|`--help`|<center>-</center>|Display a help message, and exit.|

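A small sketch of `printenv` inside a script: it prints nothing and exits non-zero when the variable is unset, which makes it handy for fallbacks and conditionals. The `EDITOR`-with-`vi`-default pattern here is just an illustration:

```shell
#!/bin/sh
# printenv returns non-zero for an unset variable, so it can drive a
# fallback without risking an unbound-variable error under `set -u`.
editor=$(printenv EDITOR || echo "vi")
echo "Using editor: $editor"

# ...and it works directly as a test in conditionals:
if printenv PATH > /dev/null; then
    echo "PATH is set"
fi
```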
# The `ip` command

The `ip` command is provided by the iproute2 package and is used for performing several network administration tasks. IP stands for Internet Protocol. This command is used to show or manipulate routing, devices, and tunnels. It can perform tasks like configuring and modifying the default and static routing, setting up a tunnel over IP, listing IP addresses and property information, modifying the status of an interface, and assigning, deleting, and setting up IP addresses and routes.

### Examples:

1. To assign an IP address to a specific interface (eth1):

```
ip addr add 192.168.50.5 dev eth1
```

2. To show detailed information about network interfaces like IP address, MAC address information, etc.:

```
ip addr show
```

### Syntax:

```
ip [ OPTIONS ] OBJECT { COMMAND | help }
```

### Common Objects, Flags and their Functionalities:

|**Object/Flag** |**Description** |
|:---|:---|
|`addr`| Display and modify IP addresses |
|`link`|Display and modify network interfaces |
|`route`|Display and alter the routing table|
|`neigh`|Display and manipulate neighbor objects (ARP table) |
|`rule`|Rule in the routing policy database.|
|`-s`|Output more information. If the option appears twice or more, the amount of information increases |
|`-f`|Specifies the protocol family to use|
|`-r`|Use the system's name resolver to print DNS names instead of host addresses|
|`-c`|Configure color output |

# The `last` command

This command shows you a list of all the users that have logged in and out since the creation of the `/var/log/wtmp` file. There are also some parameters you can add which will show you, for example, when a certain user logged in and how long they were logged in for.

If you want to see the last 5 logins, just add `-5` to the command like this:

```
last -5
```

And if you want to see the last 10, add `-10`.

Another cool thing you can do is that if you add `-F` you can see the login and logout times, including the dates.

```
last -F
```

There is quite a lot you can view with this command. If you need to find out more, you can run:

```
last --help
```

@@ -1,93 +0,0 @@
|
||||
# The `locate` command

The `locate` command searches the file system for files and directories whose name matches a given pattern, using a database file that is generated by the `updatedb` command.

### Examples:

1. Running the `locate` command to search for a file named `.bashrc`:

```
locate .bashrc
```

*Output*
```
/etc/bash.bashrc
/etc/skel/.bashrc
/home/linuxize/.bashrc
/usr/share/base-files/dot.bashrc
/usr/share/doc/adduser/examples/adduser.local.conf.examples/bash.bashrc
/usr/share/doc/adduser/examples/adduser.local.conf.examples/skel/dot.bashrc
```
The `/root/.bashrc` file will not be shown because we ran the command as a normal user that doesn't have access permissions to the `/root` directory.

If the result list is long, for better readability, you can pipe the output to the [`less`](https://linuxize.com/post/less-command-in-linux/) command:

```
locate .bashrc | less
```

2. To search for all `.md` files on the system:
```
locate *.md
```
3. To search for all `.py` files and display only 10 results:
```
locate -n 10 *.py
```
4. To perform a case-insensitive search:
```
locate -i readme.md
```
*Output*
```
/home/linuxize/p1/readme.md
/home/linuxize/p2/README.md
/home/linuxize/p3/ReadMe.md
```
5. To return the number of all files containing `.bashrc` in their name:
```
locate -c .bashrc
```
*Output*
```
6
```
6. The following returns only the `.json` files that still exist on the file system:
```
locate -e *.json
```
7. To run a more complex search, the `-r` (`--regexp`) option is used. To search for all `.mp4` and `.avi` files on your system, ignoring case:
```
locate --regex -i "(\.mp4|\.avi)"
```

### Syntax:

```
locate [OPTION]... PATTERN...
```

### Additional Flags and their Functionalities:

|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-A`|`--all`|Display only entries that match all PATTERNs instead of requiring only one of them to match.|
|`-b`|`--basename`|Match only the base name against the specified patterns.|
|`-c`|`--count`|Write the number of matching entries instead of the file names to standard output.|
|`-d`|`--database DBPATH`|Replace the default database with DBPATH.|
|`-e`|`--existing`|Display only entries that refer to files existing at the time the command is run.|
|`-L`|`--follow`|When checking whether files exist (with the `--existing` option), follow trailing symbolic links and omit broken symbolic links from the output. This is the default behavior. The opposite behavior can be specified using the `--nofollow` option.|
|`-h`|`--help`|Display the help documentation that contains a summary of the available options.|
|`-i`|`--ignore-case`|Ignore the case of the specified patterns.|
|`-p`|`--ignore-spaces`|Ignore punctuation and spaces when matching patterns.|
|`-t`|`--transliterate`|Ignore accents using iconv transliteration when matching patterns.|
|`-l`|`--limit, -n LIMIT`|If this option is specified, the command exits successfully after finding LIMIT entries.|
|`-m`|`--mmap`|Ignored, for compatibility with BSD and GNU locate.|
|`-0`|`--null`|Separate the entries in the output using the ASCII NUL character instead of writing each entry on a separate line.|
|`-S`|`--statistics`|Write statistics about each read database to standard output instead of searching for files.|
|`-r`|`--regexp REGEXP`|Search for a basic regexp REGEXP.|
|`--regex`|<center>-</center>|Interpret all PATTERNs as extended regular expressions.|
|`-V`|`--version`|Display the version and license information.|
|`-w`|`--wholename`|Match only the whole path name against the specified patterns.|

# The `iostat` command

The `iostat` command in Linux is used for monitoring system input/output statistics for devices and partitions. It monitors system input/output by observing the time the devices are active in relation to their average transfer rates. The reports that iostat produces can be used to change the system configuration to better balance the input/output load between the physical disks. iostat is included in the sysstat package. If you don't have it, you need to install it first.

### Syntax:

```
iostat [ -c ] [ -d ] [ -h ] [ -N ] [ -k | -m ] [ -t ] [ -V ] [ -x ]
[ -z ] [ [ [ -T ] -g group_name ] { device [...] | ALL } ]
[ -p [ device [,...] | ALL ] ] [ interval [ count ] ]
```

### Examples:

1. Display a single history-since-boot report for all CPUs and devices:
```
iostat
```

2. Display a continuous device report at two-second intervals:
```
iostat -d 2
```

3. Display six device reports at two-second intervals for all devices:
```
iostat -d 2 6
```

4. Display six extended reports at two-second intervals for devices sda and sdb:
```
iostat -x sda sdb 2 6
```

5. Display six reports at two-second intervals for device sda and all its partitions:
```
iostat -p sda 2 6
```

### Additional Flags and their Functionalities:

| **Short Flag** | **Description** |
| :------------------------------ | :--------------------------------------------------------- |
| `-x` | Show more detailed statistics information. |
| `-c` | Show only the CPU statistics. |
| `-d` | Display only the device report. |
| `-xd` | Show extended I/O statistics for devices only. |
| `-k` | Report the statistics in kilobytes (use `-m` for megabytes). |
| `-k 2 3` | Display CPU and device statistics in kilobytes, three reports at two-second intervals. |
| `-j ID mmcblk0 sda6 -x -m 2 2` | Display statistics using persistent device names. |
| `-p` | Display statistics for block devices and all their partitions. |
| `-N` | Display LVM2 statistics information. |

# The `sort` command

The `sort` command is used to sort a file, arranging the records in a particular order. By default, the sort command sorts a file assuming the contents are ASCII. Using options, the sort command can also sort numerically.

### Examples:

Suppose you create a data file with the name file.txt:

```
Command :
$ cat > file.txt
abhishek
chitransh
satish
rajan
naveen
divyam
harsh
```

Sorting a file: Now use the sort command.

Syntax:

```
sort filename.txt
```

```
Command:
$ sort file.txt

Output :
abhishek
chitransh
divyam
harsh
naveen
rajan
satish
```

Note: This command does not actually change the input file, i.e. file.txt.

### The sort function on a file with mixed case content

i.e. uppercase and lowercase: When we have a mixed file with both uppercase and lowercase letters, the uppercase letters are sorted first, followed by the lowercase letters.

Example:

Create a file mix.txt

```
Command :
$ cat > mix.txt
abc
apple
BALL
Abc
bat
```
Now use the sort command

```
Command :
$ sort mix.txt
Output :
Abc
BALL
abc
apple
bat
```

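Because the default sort is lexical, numbers are compared character by character (so `10` sorts before `2`); the `-n` flag sorts by numeric value instead. A quick sketch (the file name here is just an example):

```shell
# Create a small file of numbers
printf '10\n2\n1\n' > numbers.txt

# Lexical (default) sort: 1, 10, 2
sort numbers.txt

# Numeric sort with -n: 1, 2, 10
sort -n numbers.txt
```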
# The `paste` command

The `paste` command writes lines of two or more files, sequentially and separated by TABs, to the standard output.

### Syntax:

```
paste [OPTIONS]... [FILE]...
```

### Examples:

1. To paste two files:

```
paste file1 file2
```

2. To paste two files using newline as the delimiter:

```
paste -d '\n' file1 file2
```

### Additional Flags and their Functionalities:

| **Short Flag** | **Long Flag** | **Description** |
| :----------------- | :-------------------------- | :-------------------------------------------------------------------------------------------------------------------------------- |
| `-d` | `--delimiters=LIST` | reuse characters from LIST instead of TABs |
| `-s` | `--serial` | paste one file at a time instead of in parallel |
| `-z` | `--zero-terminated` | set line delimiter to NUL, not newline |
| | `--help` | print command help |
| | `--version` | print version information |

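As a small illustration of the `-d` flag, joining corresponding lines of two files with a comma instead of the default TAB (the file names are made up for the example):

```shell
# Two example input files with matching line counts
printf 'alice\nbob\n' > names.txt
printf '30\n25\n' > ages.txt

# Join corresponding lines with a comma instead of a TAB
paste -d ',' names.txt ages.txt
# alice,30
# bob,25
```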
# The `iptables` Command

The `iptables` command is used to set up and maintain tables for the Netfilter firewall for IPv4, included in the Linux kernel. The firewall matches packets with rules defined in these tables and then takes the specified action on a possible match.

### Syntax:
```
iptables --table TABLE -A/-C/-D... CHAIN rule --jump TARGET
```

### Example and Explanation:
*This command will append a rule to the chain provided in parameters:*
```
iptables [-t table] --append [chain] [parameters]
```

*This command drops all the traffic coming in on any port:*
```
iptables -t filter --append INPUT -j DROP
```
### Flags and their Functionalities:
|Flag|Description|
|:---|:---|
|`-C`|Check if a rule is present in the chain or not. It returns 0 if the rule exists and returns 1 if it does not.|
|`-A`|Append to the chain provided in parameters.|

# The `lsof` command

The `lsof` command shows **file information** for all the files opened by running processes. Its name comes from its function: **l**i**s**t **o**pen **f**iles.

An open file may be a regular file, a directory, a block special file, a character special file, an executing text reference, a library, a stream or a network file (Internet socket, NFS file or UNIX domain socket). A specific file or all the files in a file system may be selected by path.

### Syntax:

```
lsof [-OPTION] [USER_NAME]
```

### Examples:

1. To show all the files opened by all active processes:

```
lsof
```

2. To show the files opened by a particular user:

```
lsof -u [USER_NAME]
```

3. To list the processes with opened files under a specified directory:

```
lsof +d [PATH_TO_DIR]
```

### Options and their Functionalities:

|**Option** |**Additional Options** |**Description** |
|:---|:---|:---|
|`-i`|`tcp`/ `udp`/ `:port`|List all network connections running; additionally, restrict to udp/tcp or to a specified port.|
|`-i4`|<center>-</center>|List all processes with IPv4 connections.|
|`-i6`|<center>-</center>|List all processes with IPv6 connections.|
|`-c`|`[PROCESS_NAME]`|List all the files of a particular process with the given name.|
|`-p`|`[PROCESS_ID]`|List all the files opened by a specified process id.|
|`-p`|`^[PROCESS_ID]`|List all the files that are not opened by a specified process id.|
|`+d`|`[PATH]`|List the processes with opened files under a specified directory.|
|`+R`|<center>-</center>|List the files opened, along with the parent process id.|

### Help Command
Run the command below to view the complete guide to the `lsof` command.
```
man lsof
```

# The `bzip2` command

The `bzip2` command lets you compress and decompress files, i.e. it reduces a file to a compressed version that takes up less storage space than the original.

### Syntax:

```
bzip2 [OPTIONS] filenames ...
```

#### Note : Each file is replaced by a compressed version of itself, with the original name of the file followed by the extension `.bz2`.

### Options and their Functionalities:

|**Option** |**Alias** |**Description** |
|:---|:---|:---|
|`-d`|`--decompress`|to decompress a compressed file|
|`-f`|`--force`|to force overwrite an existing output file|
|`-h`|`--help`|to display the help message and exit|
|`-k`|`--keep`|to compress the file but keep (not delete) the original input file|
|`-L`|`--license`|to display the license terms and conditions|
|`-q`|`--quiet`|to suppress non-essential warning messages|
|`-t`|`--test`|to check the integrity of the specified .bz2 file without decompressing it|
|`-v`|`--verbose`|to display details for each compression operation|
|`-V`|`--version`|to display the software version|
|`-z`|`--compress`|to compress the file, deleting the original input file|

> #### By default, when bzip2 compresses a file, it deletes the original (or input) file. However, if you don't want that to happen, use the -k command line option.

### Examples:

1. To force compression:
```
bzip2 -z input.txt
```
**Note: This option deletes the original file as well**

2. To force compression and also retain the original input file:
```
bzip2 -k input.txt
```

3. To force decompression:
```
bzip2 -d input.txt.bz2
```

4. To test the integrity of a compressed file:
```
bzip2 -t input.txt.bz2
```

5. To show the compression ratio for each file processed:
```
bzip2 -v input.txt
```

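A typical round trip with `-k` so the original survives, assuming `bzip2` is installed (the file name is just an example; `-f` is added so re-running doesn't fail on an existing archive):

```shell
# Compress while keeping the original file
echo "hello bzip2" > sample.txt
bzip2 -kf sample.txt         # produces sample.txt.bz2, keeps sample.txt

# Decompress to standard output without touching the archive
bzip2 -dc sample.txt.bz2
# hello bzip2
```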
# The `service` command

Service runs a System V init script in as predictable an environment as possible, removing most environment variables and with the current working directory set to /.

The SCRIPT parameter specifies a System V init script, located in /etc/init.d/SCRIPT. The supported values of COMMAND depend on the invoked script; service passes COMMAND and OPTIONS to the init script unmodified. All scripts should support at least the start and stop commands. As a special case, if COMMAND is --full-restart, the script is run twice, first with the stop command, then with the start command.

The COMMAND can be at least start, stop, status, or restart.

service --status-all runs all init scripts, in alphabetical order, with the `status` command.

### Examples:

1. To check the status of all the running services:

```
service --status-all
```

2. To run a script:

```
service SCRIPT-Name start
```

3. A more generalized command:

```
service [SCRIPT] [COMMAND] [OPTIONS]
```

# The `vmstat` command

The `vmstat` command lets you monitor the performance of your system. It shows you information about your memory, disk, processes, CPU scheduling, paging, and block IO. This command is also referred to as a **virtual memory statistics report**.

The very first report that is produced shows you the averages since the last reboot; after that, further reports cover the period since the previous report.

### `vmstat`



As you can see, it is a pretty useful little command. The most important columns above are `free`, which shows us the memory that is not being used, `si`, which shows how much memory is swapped in every second in kB, and `so`, which shows how much memory is swapped out each second, also in kB.

### `vmstat -a`

If we run `vmstat -a`, it will show us the active and inactive memory of the running system.



### `vmstat -d`

The `vmstat -d` command shows us all the disk statistics.



As you can see, this is a pretty useful little command that shows you different statistics about your virtual memory.

# The `mpstat` command

The `mpstat` command is used to report processor-related statistics. It displays the statistics of the CPU usage of the system, with information about CPU utilization and performance.

### Syntax:

```
mpstat [options] [<interval> [<count>]]
```

#### Note : It numbers the first processor CPU 0, the second one CPU 1, and so on.

### Options and their Functionalities:

|**Option** |**Description** |
|-------------|----------------------------------------------------------------------|
|`-A` |to display all the detailed statistics |
|`-h` |to display mpstat help |
|`-I` |to display detailed interrupt statistics |
|`-n` |to report summary CPU statistics based on NUMA node placement |
|`-N` |to indicate the NUMA nodes for which statistics are to be reported |
|`-P` |to indicate the processors for which statistics are to be reported |
|`-o` |to display the statistics in JSON (JavaScript Object Notation) format |
|`-T` |to display topology elements in the CPU report |
|`-u` |to report CPU utilization |
|`-v` |to display utilization statistics at the virtual processor level |
|`-V` |to display the mpstat version |
|`ALL` |to display detailed statistics about all CPUs (used with `-P`) |

### Examples:

1. To display processor and CPU statistics:
```
mpstat
```

2. To display statistics for each individual CPU:
```
mpstat -P ALL
```

3. To get all the information which the tool may collect:
```
mpstat -A
```

4. To display CPU utilization for a specific processor:
```
mpstat -P 0
```

5. To display CPU usage at a time interval:
```
mpstat 1 5
```
**Note: This command will print 5 reports at a 1-second time interval**

# The `ncdu` Command

`ncdu` (NCurses Disk Usage) is a curses-based version of the well-known `du` command. It provides a fast way to see which directories are using your disk space.

## Example
1. Quiet mode:
```
ncdu -q
```

2. Omit mounted directories:
```
ncdu -q -x
```

## Syntax
```
ncdu [-hqvx] [--exclude PATTERN] [-X FILE] dir
```

## Additional Flags and their Functionalities:

|Short Flag | Long Flag | Description|
|---|---|---|
| `-h`| - |Print a small help message|
| `-q`| - |Quiet mode. While calculating disk space, ncdu will update the screen 10 times a second by default; this is decreased to once every 2 seconds in quiet mode. Use this feature to save bandwidth over remote connections.|
| `-v`| - |Print version.|
| `-x`| - |Only count files and directories on the same filesystem as the specified dir.|
| - | `--exclude PATTERN`|Exclude files that match PATTERN. This argument can be added multiple times to add more patterns.|
| `-X FILE`| `--exclude-from FILE`| Exclude files that match any pattern in FILE. Patterns should be separated by a newline.|

# The `uniq` command

The `uniq` command in Linux is a command-line utility that reports or filters out the repeated lines in a file.
In simple words, `uniq` is the tool that helps you detect adjacent duplicate lines and also delete them. It filters out the adjacent matching lines from the input file (that is required as an argument) and writes the filtered data to the output file.

### Examples:

To omit the repeated lines from a file, the syntax would be the following:

```
uniq kt.txt
```

To tell how many times a line was repeated, the syntax would be the following:

```
uniq -c kt.txt
```

To print only the repeated lines, the syntax would be the following:

```
uniq -d kt.txt
```

To print only the unique lines, the syntax would be the following:

```
uniq -u kt.txt
```

To skip N fields when comparing the uniqueness of the lines, the syntax would be the following:

```
uniq -f 2 kt.txt
```

To skip N characters when comparing the uniqueness of the lines, the syntax would be the following:

```
uniq -s 5 kt.txt
```

To make the comparison case-insensitive, the syntax would be the following:

```
uniq -i kt.txt
```

### Syntax:

```
uniq [OPTION] [INPUT [OUTPUT]]
```

### Possible options:

|**Flag** |**Description** |**Params** |
|:---|:---|:---|
|`-c`|It tells how many times a line was repeated by displaying a number as a prefix with the line.|-|
|`-d`|It only prints the repeated lines and not the lines which aren't repeated.|-|
|`-i`|By default, comparisons are case-sensitive, but with this option case-insensitive comparisons can be made.|-|
|`-f`|It allows you to skip N fields (a field is a group of characters, delimited by whitespace) of a line before determining the uniqueness of a line.|N|
|`-s`|It doesn't compare the first N characters of each line while determining uniqueness. This is like the -f option, but it skips individual characters rather than fields.|N|
|`-u`|It allows you to print only unique lines.|-|
|`-z`|It will make a line end with a 0 byte (NUL), instead of a newline.|-|
|`-w`|It only compares N characters in a line.|N|
|`--help`|It displays a help message and exits.|-|
|`--version`|It displays version information and exits.|-|

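Because `uniq` only collapses *adjacent* duplicates, input is usually piped through `sort` first. A quick sketch (the file name is just an example):

```shell
# 'apple' repeats, but not on adjacent lines, so plain uniq would keep both
printf 'apple\nbanana\napple\n' > fruits.txt

# Sort first so duplicates become adjacent, then count occurrences
sort fruits.txt | uniq -c
```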
# The `RPM` command

`rpm` - RPM Package Manager

`rpm` is a powerful __Package Manager__, which can be used to build, install, query, verify, update, and erase individual software packages. A __package__ consists of an archive of files and meta-data used to install and erase the archive files. The meta-data includes helper scripts, file attributes, and descriptive information about the package. Packages come in two varieties: binary packages, used to encapsulate software to be installed, and source packages, containing the source code and recipe necessary to produce binary packages.

One of the following basic modes must be selected: __Query, Verify, Signature Check, Install/Upgrade/Freshen, Uninstall, Initialize Database, Rebuild Database, Resign, Add Signature, Set Owners/Groups, Show Querytags, and Show Configuration.__

**General Options**

These options can be used in all the different modes.

|Short Flag| Long Flag| Description|
|---|---|---|
| -? | --help| Print a longer usage message than normal.|
| - |--version |Print a single line containing the version number of rpm being used.|
| - | --quiet | Print as little as possible - normally only error messages will be displayed.|
| -v | - | Print verbose information - normally routine progress messages will be displayed.|
| -vv | - | Print lots of ugly debugging information.|
| - | --rcfile FILELIST | Each of the files in the colon separated FILELIST is read sequentially by rpm for configuration information. Only the first file in the list must exist, and tildes will be expanded to the value of $HOME. The default FILELIST is /usr/lib/rpm/rpmrc:/usr/lib/rpm/redhat/rpmrc:/etc/rpmrc:~/.rpmrc. |
| - | --pipe CMD | Pipes the output of rpm to the command CMD. |
| - | --dbpath DIRECTORY | Use the database in DIRECTORY rather than the default path /var/lib/rpm |
| - | --root DIRECTORY | Use the file system tree rooted at DIRECTORY for all operations. Note that this means the database within DIRECTORY will be used for dependency checks and any scriptlet(s) (e.g. %post if installing, or %prep if building, a package) will be run after a chroot(2) to DIRECTORY. |
| -D | --define='MACRO EXPR' | Defines MACRO with value EXPR.|
| -E | --eval='EXPR' | Prints macro expansion of EXPR. |

# Synopsis

## Querying and Verifying Packages:

```
rpm {-q|--query} [select-options] [query-options]

rpm {-V|--verify} [select-options] [verify-options]

rpm --import PUBKEY ...

rpm {-K|--checksig} [--nosignature] [--nodigest] PACKAGE_FILE ...
```

## Installing, Upgrading, and Removing Packages:

```
rpm {-i|--install} [install-options] PACKAGE_FILE ...

rpm {-U|--upgrade} [install-options] PACKAGE_FILE ...

rpm {-F|--freshen} [install-options] PACKAGE_FILE ...

rpm {-e|--erase} [--allmatches] [--nodeps] [--noscripts] [--notriggers] [--test] PACKAGE_NAME ...
```

## Miscellaneous:

```
rpm {--initdb|--rebuilddb}

rpm {--addsign|--resign} PACKAGE_FILE...

rpm {--querytags|--showrc}

rpm {--setperms|--setugids} PACKAGE_NAME .
```

### query-options

```
[--changelog] [-c,--configfiles] [-d,--docfiles] [--dump]
[--filesbypkg] [-i,--info] [--last] [-l,--list]
[--provides] [--qf,--queryformat QUERYFMT]
[-R,--requires] [--scripts] [-s,--state]
[--triggers,--triggerscripts]
```

### verify-options

```
[--nodeps] [--nofiles] [--noscripts]
[--nodigest] [--nosignature]
[--nolinkto] [--nofiledigest] [--nosize] [--nouser]
[--nogroup] [--nomtime] [--nomode] [--nordev]
[--nocaps]
```
### install-options
```
[--aid] [--allfiles] [--badreloc] [--excludepath OLDPATH]
[--excludedocs] [--force] [-h,--hash]
[--ignoresize] [--ignorearch] [--ignoreos]
[--includedocs] [--justdb] [--nodeps]
[--nodigest] [--nosignature] [--nosuggest]
[--noorder] [--noscripts] [--notriggers]
[--oldpackage] [--percent] [--prefix NEWPATH]
[--relocate OLDPATH=NEWPATH]
[--replacefiles] [--replacepkgs]
[--test]
```

# The `scp` command

SCP (secure copy) is a command-line utility that allows you to securely copy files and directories between two locations.

Both the files and passwords are encrypted so that anyone snooping on the traffic doesn't get anything sensitive.

### Different ways to copy a file or directory:

- From a local system to a remote system.
- From a remote system to a local system.
- Between two remote systems from the local system.

### Examples:

1. To copy files from a local system to a remote system:

```
scp /home/documents/local-file root@{remote-ip-address}:/home/
```

2. To copy files from a remote system to the local system:
```
scp root@{remote-ip-address}:/home/remote-file /home/documents/
```

3. To copy files between two remote systems from the local system:
```
scp root@{remote1-ip-address}:/home/remote-file root@{remote2-ip-address}:/home/
```
4. To copy a file through a jump host server:
```
scp -o ProxyJump=<jump-host-ip> /home/documents/local-file root@{remote-ip-address}:/home/
```
On newer versions of scp on some machines you can use the `-J` flag instead:
```
scp -J <jump-host-ip> /home/documents/local-file root@{remote-ip-address}:/home/
```

### Syntax:
```
scp [OPTION] [user@]SRC_HOST:file1 [user@]DEST_HOST:file2
```
- `OPTION` - scp options such as cipher, ssh configuration, ssh port, limit, recursive copy, etc.
- `[user@]SRC_HOST:file1` - Source file
- `[user@]DEST_HOST:file2` - Destination file

Local files should be specified using an absolute or relative path, while remote file names should include a user and host specification.

scp provides several options that control every aspect of its behaviour. The most widely used options are:

|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-P`|<center>-</center>|Specifies the remote host ssh port.|
|`-p`|<center>-</center>|Preserves file modification and access times.|
|`-q`|<center>-</center>|Use this option if you want to suppress the progress meter and non-error messages.|
|`-C`|<center>-</center>|This option forces scp to compress the data as it is sent to the destination machine.|
|`-r`|<center>-</center>|This option tells scp to copy directories recursively.|

### Before you begin

The `scp` command relies on `ssh` for data transfer, so it requires an `ssh key` or `password` to authenticate on the remote systems.

The `colon (:)` is how scp distinguishes between local and remote locations.

To be able to copy files, you must have at least read permissions on the source file and write permission on the target system.

Be careful when copying files that share the same name and location on both systems; `scp` will overwrite files without warning.

When transferring large files, it is recommended to run the scp command inside a `screen` or `tmux` session.

# The `split` command

The `split` command in Linux is used to split a file into smaller files.

### Examples

1. Split a file into smaller files (by default, 1000 lines each)

```
split filename.txt
```

2. Split a file named filename into segments of 200 lines beginning with prefix file

```
split -l 200 filename file
```

This will create files named fileaa, fileab, fileac, filead, etc., each 200 lines long.

3. Split a file named filename into segments of 40 bytes with prefix file

```
split -b 40 filename file
```

This will create files named fileaa, fileab, fileac, filead, etc., each 40 bytes in size.

4. Split a file using `--verbose` to see the files being created

```
split filename.txt --verbose
```

### Syntax:

```
split [options] filename [prefix]
```

### Additional Flags and their Functionalities

|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-a`|`--suffix-length=N`|Generate suffixes of length N (default 2)|
||`--additional-suffix=SUFFIX`|Append an additional SUFFIX to file names|
|`-b`|`--bytes=SIZE`|Put SIZE bytes per output file|
|`-C`|`--line-bytes=SIZE`|Put at most SIZE bytes of records per output file|
|`-d`| |Use numeric suffixes starting at 0, not alphabetic|
||`--numeric-suffixes[=FROM]`|Same as -d, but allow setting the start value|
|`-x`||Use hex suffixes starting at 0, not alphabetic|
||`--hex-suffixes[=FROM]`|Same as -x, but allow setting the start value|
|`-e`|`--elide-empty-files`|Do not generate empty output files with '-n'|
||`--filter=COMMAND`|Write to shell COMMAND;<br>file name is $FILE|
|`-l`|`--lines=NUMBER`|Put NUMBER lines/records per output file|
|`-n`|`--number=CHUNKS`|Generate CHUNKS output files;<br>see explanation below|
|`-t`|`--separator=SEP`|Use SEP instead of newline as the record separator;<br>'\0' (zero) specifies the NUL character|
|`-u`|`--unbuffered`|Immediately copy input to output with '-n r/...'|
||`--verbose`|Print a diagnostic just before each<br>output file is opened|
||`--help`|Display this help and exit|
||`--version`|Output version information and exit|

The SIZE argument is an integer and optional unit (example: 10K is 10*1024).
Units are K,M,G,T,P,E,Z,Y (powers of 1024) or KB,MB,... (powers of 1000).

CHUNKS may be:

|**CHUNKS** |**Description** |
|:---|:---|
|`N`|Split into N files based on size of input|
|`K/N`|Output Kth of N to stdout|
|`l/N`|Split into N files without splitting lines/records|
|`l/K/N`|Output Kth of N to stdout without splitting lines/records|
|`r/N`|Like 'l' but use round robin distribution|
|`r/K/N`|Likewise but only output Kth of N to stdout|
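A quick way to convince yourself that `split` loses nothing is a split-and-reassemble round trip; the file names below are made up for the demo:

```shell
# Create a sample file of 10 numbered lines, split it into
# 3-line chunks named partaa..partad, then reassemble it.
seq 1 10 > sample.txt
split -l 3 sample.txt part
cat part* > reassembled.txt
cmp sample.txt reassembled.txt && echo "round trip OK"
# → round trip OK
```

Because the alphabetic suffixes sort in creation order, a plain `cat part*` restores the original byte-for-byte.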
# The `stat` command

The `stat` command lets you display file or file system status. It gives you useful information about the file (or directory) on which you use it.

### Examples:

1. Basic command usage

```
stat file.txt
```

2. Use the `-c` (or `--format`) argument to only display the information you want to see (here, the total size, in bytes)

```
stat -c %s file.txt
```

### Syntax:

```
stat [OPTION] [FILE]
```

### Additional Flags and their Functionalities:

| Short Flag | Long Flag | Description |
| ---------- | ----------------- | ----------------------------------------------------------------------------- |
| `-L` | `--dereference` | Follow links |
| `-f` | `--file-system` | Display file system status instead of file status |
| `-c` | `--format=FORMAT` | Specify the format (see below) |
| `-t` | `--terse` | Print the information in terse form |
| - | `--cached=MODE` | Specify how to use cached attributes. Can be: `always`, `never`, or `default` |
| - | `--printf=FORMAT` | Like `--format`, but interpret backslash escapes (`\n`, `\t`, ...) |
| - | `--help` | Display the help and exit |
| - | `--version` | Output version information and exit |

### Example of Valid Format Sequences for Files:

| Format | Description |
| ------ | ---------------------------------------------------- |
| `%a` | Permission bits in octal |
| `%A` | Permission bits and file type in human readable form |
| `%d` | Device number in decimal |
| `%D` | Device number in hex |
| `%F` | File type |
| `%g` | Group ID of owner |
| `%G` | Group name of owner |
| `%h` | Number of hard links |
| `%i` | Inode number |
| `%m` | Mount point |
| `%n` | File name |
| `%N` | Quoted file name with dereference if symbolic link |
| `%s` | Total size, in bytes |
| `%u` | User ID of owner |
| `%U` | User name of owner |
| `%w` | Time of file birth, human-readable; - if unknown |
| `%x` | Time of last access, human-readable |
| `%y` | Time of last data modification, human-readable |
| `%z` | Time of last status change, human-readable |
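Several format sequences can be combined into one line with `--printf` (GNU `stat`); `demo.txt` below is just a placeholder file created for the example:

```shell
# Create a 5-byte file, then print its name, size in bytes,
# and permission bits in octal on a single line.
printf 'hello' > demo.txt
stat --printf '%n is %s bytes, mode %a\n' demo.txt
```

The permission bits reported depend on your umask, so they are left out of the expected output here.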
# The `ionice` command

The `ionice` command is used to set or get the process I/O scheduling class and priority.

If no arguments are given, `ionice` will query the current I/O scheduling class and priority for that process.

## Usage

```
ionice [options] -p <pid>
```

```
ionice [options] -P <pgid>
```

```
ionice [options] -u <uid>
```

```
ionice [options] <command>
```

## A process can be of three scheduling classes:

- ### Idle

  A program with idle I/O priority will only get disk time when `no other program has asked for disk I/O for a defined grace period`.

  The impact of idle processes on normal system activity should be `zero`.

  This scheduling class `doesn't take a priority` argument.

  This scheduling class is permitted for an `ordinary user (since kernel 2.6.25)`.

- ### Best Effort

  This is the `effective` scheduling class for any process that has `not asked for a specific I/O priority`.

  This class `takes a priority argument from 0-7`, with a `lower` number being `higher priority`.

  Programs running at the same best-effort priority are served in a `round-robin fashion`.

  Note that before kernel 2.6.26 a process that has not asked for an I/O priority formally uses "none" as its scheduling class, but the I/O scheduler will treat such processes as if they were in the best-effort class.

  The priority within the best-effort class is dynamically derived from the CPU nice level of the process: io_priority = (cpu_nice + 20) / 5.

  For kernels after 2.6.26 with the CFQ I/O scheduler, a process that has not asked for an I/O priority inherits its CPU scheduling class.

  `The I/O priority is derived from the CPU nice level of the process` (as before kernel 2.6.26).

- ### Real Time

  The real-time scheduling class is `given first access to the disk, regardless of what else is going on in the system`.

  Thus the real-time class needs to be used with some care, as it can starve other processes.

  As with the best-effort class, `8 priority levels are defined denoting how big a time slice a given process will receive in each scheduling window`.

  This scheduling class is `not permitted for an ordinary user (non-root)`.

## Options

| Options | Description |
|---|---|
| -c, --class <class> | name or number of scheduling class, 0: none, 1: realtime, 2: best-effort, 3: idle|
| -n, --classdata <num> | priority (0..7) in the specified scheduling class, only for the realtime and best-effort classes|
| -p, --pid <pid>... | act on these already running processes|
| -P, --pgid <pgrp>... | act on already running processes in these groups|
| -t, --ignore | ignore failures|
| -u, --uid <uid>... | act on already running processes owned by these users|
| -h, --help | display this help|
| -V, --version | display version|

For more details see ionice(1).

## Examples

| Command | Output | Explanation |
|---|---|---|
|`$ ionice` |*none: prio 4*|Running `ionice` alone will give the class and priority of the current process |
|`$ ionice -p 101`|*none: prio 4*|Give the details (*class: priority*) of the process specified by the given process id|
|`$ ionice -p 2` |*none: prio 4*|Check the class and priority of the process with pid 2; it is none and 4 respectively|
|`$ ionice -c2 -n0 -p2`|2 (best-effort) priority 0 process 2 |Set process 2 as a best-effort program with the highest priority|
|`$ ionice -p 2`|best-effort: prio 0|If you now check the details of process 2, you can see the updated values|
|`$ ionice /bin/ls`||Get the priority and class info of /bin/ls |
|`$ ionice -n4 -p2`||Set priority 4 for the process with pid 2 |
|`$ ionice -p 2`|best-effort: prio 4|Compared with the command run above, the priority has changed from 0 to 4|
|`$ ionice -c0 -n4 -p2`|ionice: ignoring given class data for none class|The none class takes no priority argument, so the class data is ignored with a warning|
|`$ ionice -c0 -n4 -p2 -t`||To suppress the warning shown above, use the -t option, which ignores failures|
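Since best-effort is allowed for ordinary users, you can try the class and priority flags on a freshly launched process without root; this minimal sketch assumes `ionice` from util-linux is installed:

```shell
# Run a command in the best-effort class at the lowest
# priority (7). The child process here is ionice itself,
# which, given no target, reports its own scheduling
# class and priority.
ionice -c 2 -n 7 ionice
# → best-effort: prio 7
```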
## Conclusion

Thus we have successfully learnt about the `ionice` command.
# The `rsync` command

The `rsync` command is probably one of the most used commands out there. It is used to securely copy files from one server to another over SSH.

Compared to the `scp` command, which does a similar thing, `rsync` makes the transfer a lot faster, and in case of an interruption, you can restore/resume the transfer process.

In this tutorial, I will show you how to use the `rsync` command to copy files from one server to another, and also share a few useful tips!

Before you get started, you would need to have 2 Linux servers. I will be using DigitalOcean for the demo and deploy 2 Ubuntu servers.

You can use my referral link to get a free $100 credit that you could use to deploy your virtual machines and test the guide yourself on a few DigitalOcean servers:

**[DigitalOcean $100 Free Credit](https://m.do.co/c/2a9bba940f39)**

## Transfer files from the local server to a remote server

This is one of the most common cases. Essentially, this is how you copy the files from the server that you are currently on (the source server) to a remote/destination server.

What you need to do is SSH to the server that is holding your files and cd to the directory that you would like to transfer over:

```
cd /var/www/html
```

And then run:

```
rsync -avz . user@your-remote-server.com:/home/user/dir/
```

The above command would copy all the files and directories from the current folder on your server to your remote server.

Rundown of the command:

* `-a`: is used to specify that you want recursion and want to preserve the file permissions, among other things.
* `-v`: is verbose mode; it increases the amount of information you are given during the transfer.
* `-z`: with this option, rsync compresses the file data as it is sent to the destination machine, which reduces the amount of data being transmitted -- something that is useful over a slow connection.

I recommend having a look at the following website, which explains the commands and the arguments very nicely:

[https://explainshell.com/explain?cmd=rsync+-avz](https://explainshell.com/explain?cmd=rsync+-avz)

In case the SSH service on the remote server is not running on the standard `22` port, you could point `rsync` at a custom SSH port:

```
rsync -avz -e 'ssh -p 1234' . user@your-remote-server.com:/home/user/dir/
```

## Transfer files from a remote server to the local server

In some cases you might want to transfer files from your remote server to your local server; in this case, you would need to use the following syntax:

```
rsync -avz your-user@your-remote-server.com:/home/user/dir/ /home/user/local-dir/
```

Again, in case you have a non-standard SSH port, you can use the following command:

```
rsync -avz -e 'ssh -p 2510' your-user@your-remote-server.com:/home/user/dir/ /home/user/local-dir/
```

## Transfer only missing files

If you would like to transfer only the missing files, you could use the `--ignore-existing` flag.

This is very useful for a final sync in order to ensure that there are no missing files after a website or server migration.

Basically, the commands would be the same apart from the appended `--ignore-existing` flag:

```
rsync -avz --ignore-existing . user@your-remote-server.com:/home/user/dir/
```
## Conclusion

Using `rsync` is a great way to quickly transfer files from one machine to another in a secure way over SSH.

For more cool Linux networking tools, I would recommend checking out this tutorial here:

[Top 15 Linux Networking tools that you should know!](https://devdojo.com/serverenthusiast/top-15-linux-networking-tools-that-you-should-know)

Hope that this helps!

Initially posted here: [How to Transfer Files from One Linux Server to Another Using rsync](https://devdojo.com/bobbyiliev/how-to-transfer-files-from-one-linux-server-to-another-using-rsync)
# The `dig` command

dig - DNS lookup utility

The `dig` command is a flexible tool for interrogating DNS name servers. It performs DNS lookups and displays the answers that are returned from the name server(s) that were queried.

### Examples:

1. Perform a basic DNS lookup for a domain:

```
dig google.com
```

2. List all google.com DNS records that the name server returns, along with the IP addresses:

```
dig google.com ANY
```

### Syntax:

```
dig [server] [name] [type] [q-type] [q-class] {q-opt}
    {global-d-opt} host [@local-server] {local-d-opt}
    [ host [@local-server] {local-d-opt} [...]]
```

### Additional Flags and their Functionalities:

```bash
domain is in the Domain Name System
q-class is one of (in,hs,ch,...) [default: in]
q-type is one of (a,any,mx,ns,soa,hinfo,axfr,txt,...) [default:a]
        (Use ixfr=version for type ixfr)
q-opt is one of:
        -4 (use IPv4 query transport only)
        -6 (use IPv6 query transport only)
        -b address[#port] (bind to source address/port)
        -c class (specify query class)
        -f filename (batch mode)
        -k keyfile (specify tsig key file)
        -m (enable memory usage debugging)
        -p port (specify port number)
        -q name (specify query name)
        -r (do not read ~/.digrc)
        -t type (specify query type)
        -u (display times in usec instead of msec)
        -x dot-notation (shortcut for reverse lookups)
        -y [hmac:]name:key (specify named base64 tsig key)
d-opt is of the form +keyword[=value], where keyword is:
        +[no]aaflag (Set AA flag in query (+[no]aaflag))
        +[no]aaonly (Set AA flag in query (+[no]aaflag))
        +[no]additional (Control display of additional section)
        +[no]adflag (Set AD flag in query (default on))
        +[no]all (Set or clear all display flags)
        +[no]answer (Control display of answer section)
        +[no]authority (Control display of authority section)
        +[no]badcookie (Retry BADCOOKIE responses)
        +[no]besteffort (Try to parse even illegal messages)
        +bufsize[=###] (Set EDNS0 Max UDP packet size)
        +[no]cdflag (Set checking disabled flag in query)
        +[no]class (Control display of class in records)
        +[no]cmd (Control display of command line - global option)
        +[no]comments (Control display of packet header and section name comments)
        +[no]cookie (Add a COOKIE option to the request)
        +[no]crypto (Control display of cryptographic fields in records)
        +[no]defname (Use search list (+[no]search))
        +[no]dnssec (Request DNSSEC records)
        +domain=### (Set default domainname)
        +[no]dscp[=###] (Set the DSCP value to ### [0..63])
        +[no]edns[=###] (Set EDNS version) [0]
        +ednsflags=### (Set EDNS flag bits)
        +[no]ednsnegotiation (Set EDNS version negotiation)
        +ednsopt=###[:value] (Send specified EDNS option)
        +noednsopt (Clear list of +ednsopt options)
        +[no]expandaaaa (Expand AAAA records)
        +[no]expire (Request time to expire)
        +[no]fail (Don't try next server on SERVFAIL)
        +[no]header-only (Send query without a question section)
        +[no]identify (ID responders in short answers)
        +[no]idnin (Parse IDN names [default=on on tty])
        +[no]idnout (Convert IDN response [default=on on tty])
        +[no]ignore (Don't revert to TCP for TC responses.)
        +[no]keepalive (Request EDNS TCP keepalive)
        +[no]keepopen (Keep the TCP socket open between queries)
        +[no]mapped (Allow mapped IPv4 over IPv6)
        +[no]multiline (Print records in an expanded format)
        +ndots=### (Set search NDOTS value)
        +[no]nsid (Request Name Server ID)
        +[no]nssearch (Search all authoritative nameservers)
        +[no]onesoa (AXFR prints only one soa record)
        +[no]opcode=### (Set the opcode of the request)
        +padding=### (Set padding block size [0])
        +[no]qr (Print question before sending)
        +[no]question (Control display of question section)
        +[no]raflag (Set RA flag in query (+[no]raflag))
        +[no]rdflag (Recursive mode (+[no]recurse))
        +[no]recurse (Recursive mode (+[no]rdflag))
        +retry=### (Set number of UDP retries) [2]
        +[no]rrcomments (Control display of per-record comments)
        +[no]search (Set whether to use searchlist)
        +[no]short (Display nothing except short form of answers - global option)
        +[no]showsearch (Search with intermediate results)
        +[no]split=## (Split hex/base64 fields into chunks)
        +[no]stats (Control display of statistics)
        +subnet=addr (Set edns-client-subnet option)
        +[no]tcflag (Set TC flag in query (+[no]tcflag))
        +[no]tcp (TCP mode (+[no]vc))
        +timeout=### (Set query timeout) [5]
        +[no]trace (Trace delegation down from root [+dnssec])
        +tries=### (Set number of UDP attempts) [3]
        +[no]ttlid (Control display of ttls in records)
        +[no]ttlunits (Display TTLs in human-readable units)
        +[no]unexpected (Print replies from unexpected sources default=off)
        +[no]unknownformat (Print RDATA in RFC 3597 "unknown" format)
        +[no]vc (TCP mode (+[no]tcp))
        +[no]yaml (Present the results as YAML)
global d-opts and servers (before host name) affect all queries.
local d-opts and servers (after host name) affect only that lookup.
-h (print help and exit)
-v (print version and exit)
```
# The `whois` command

The `whois` command in Linux is used to find out information about a domain, such as the owner of the domain, the owner's contact information, and the nameservers that the domain is using.

### Examples:

1. Perform a whois query for a domain name:

```
whois {Domain_name}
```

2. The -H option omits the lengthy legal disclaimers that many domain registries deliver along with the domain information:

```
whois -H {Domain_name}
```

### Syntax:

```
whois [ -h HOST ] [ -p PORT ] [ -aCFHlLMmrRSVx ] [ -g SOURCE:FIRST-LAST ]
      [ -i ATTR ] [ -S SOURCE ] [ -T TYPE ] object
```
```
whois -t TYPE
```
```
whois -v TYPE
```
```
whois -q keyword
```

### Additional Flags and their Functionalities:

|**Flag** |**Description** |
|:---|:---|
|`-h HOST`, `--host HOST`|Connect to HOST.|
|`-H`|Do not display the legal disclaimers some registries like to show you.|
|`-p`, `--port PORT`|Connect to PORT.|
|`--verbose`|Be verbose.|
|`--help`|Display online help.|
|`--version`|Display client version information. Other options are flags understood by whois.ripe.net and some other RIPE-like servers.|
|`-a`|Also search all the mirrored databases.|
|`-b`|Return brief IP address ranges with abuse contact.|
|`-B`|Disable object filtering *(show the e-mail addresses)*.|
|`-c`|Return the smallest IP address range with a reference to an irt object.|
|`-d`|Return the reverse DNS delegation object too.|
|`-g SOURCE:FIRST-LAST`|Search updates from SOURCE database between FIRST and LAST update serial number. Useful to obtain a Near Real Time Mirroring stream.|
|`-G`|Disable grouping of associated objects.|
|`-i ATTR[,ATTR]...`|Search objects having associated attributes. ATTR is an attribute name. The attribute value is the positional OBJECT argument.|
|`-K`|Return primary key attributes only. An exception is the members attribute of set objects, which is always returned. Other exceptions are all attributes of the objects organisation, person, and role, which are never returned.|
|`-l`|Return the one level less specific object.|
|`-L`|Return all levels of less specific objects.|
|`-m`|Return all one level more specific objects.|
|`-M`|Return all levels of more specific objects.|
|`-q KEYWORD`|Return a list of keywords supported by the server. KEYWORD can be version for the server version, sources for the list of source databases, or types for object types.|
|`-r`|Disable recursive look-up for contact information.|
|`-R`|Disable following referrals and force showing the object from the local copy in the server.|
|`-s SOURCE[,SOURCE]...`|Request the server to search for objects mirrored from SOURCES. Sources are delimited by comma and the order is significant. Use the `-q sources` option to obtain a list of valid sources.|
|`-t TYPE`|Return the template for an object of TYPE.|
|`-T TYPE[,TYPE]...`|Restrict the search to objects of TYPE. Multiple types are separated by a comma.|
|`-v TYPE`|Return the verbose template for an object of TYPE.|
|`-x`|Search for only an exact match on the network address prefix.|
# The `awk` command

Awk is a general-purpose scripting language designed for advanced text processing. It is mostly used as a reporting and analysis tool.

#### WHAT CAN WE DO WITH AWK?

1. AWK Operations:
    (a) Scans a file line by line
    (b) Splits each input line into fields
    (c) Compares input lines/fields to a pattern
    (d) Performs action(s) on matched lines

2. Useful For:
    (a) Transforming data files
    (b) Producing formatted reports

3. Programming Constructs:
    (a) Formatting output lines
    (b) Arithmetic and string operations
    (c) Conditionals and loops

#### Syntax

```
awk options 'selection_criteria {action}' input-file > output-file
```

#### Example

Consider the following text file as the input file for the examples below:

```
$ cat > employee.txt
```
```
ajay manager account 45000
sunil clerk account 25000
varun manager sales 50000
amit manager account 47000
tarun peon sales 15000
```

1. Default behavior of Awk: by default, Awk prints every line of data from the specified file.

```
$ awk '{print}' employee.txt
```
```
ajay manager account 45000
sunil clerk account 25000
varun manager sales 50000
amit manager account 47000
tarun peon sales 15000
```

In the above example, no pattern is given, so the action applies to every line. The `print` action without any argument prints the whole line by default, so it prints all the lines of the file.

2. Print the lines which match the given pattern.

```
$ awk '/manager/ {print}' employee.txt
```
```
ajay manager account 45000
varun manager sales 50000
amit manager account 47000
```

In the above example, the awk command prints all the lines that match the pattern 'manager'.

3. Splitting a line into fields: for each record, i.e. line, the awk command splits the record, delimited by whitespace characters by default, and stores it in the $n variables. If the line has 4 words, they will be stored in $1, $2, $3 and $4 respectively. Also, $0 represents the whole line.

```
$ awk '{print $1,$4}' employee.txt
```
```
ajay 45000
sunil 25000
varun 50000
amit 47000
tarun 15000
```
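Patterns are not limited to regular expressions: a condition on a field also selects lines. A small sketch, recreating the sample employee file and selecting the employees earning more than 40000:

```shell
# Recreate the sample file, then print the name (field 1) of
# every record whose 4th field exceeds 40000.
printf 'ajay manager account 45000\nsunil clerk account 25000\nvarun manager sales 50000\namit manager account 47000\ntarun peon sales 15000\n' > employee.txt
awk '$4 > 40000 { print $1 }' employee.txt
# → ajay
# → varun
# → amit
```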
#### Built-In Variables In Awk

Awk's built-in variables include the field variables $1, $2, $3, and so on ($0 is the entire line), which break a line of text into individual words or pieces called fields.

NR: NR keeps a current count of the number of input records. Remember that records are usually lines. The awk command performs the pattern/action statements once for each record in a file.

NF: NF keeps a count of the number of fields within the current input record.

FS: FS contains the field separator character which is used to divide fields on the input line. The default is "white space", meaning space and tab characters. FS can be reassigned to another character (typically in BEGIN) to change the field separator.

RS: RS stores the current record separator character. Since, by default, an input line is the input record, the default record separator character is a newline.

OFS: OFS stores the output field separator, which separates the fields when Awk prints them. The default is a blank space. Whenever print has several parameters separated with commas, it will print the value of OFS in between each parameter.

ORS: ORS stores the output record separator, which separates the output lines when Awk prints them. The default is a newline character. print automatically outputs the contents of ORS at the end of whatever it is given to print.
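The built-in variables above can be combined in a single program; a small sketch using NR, NF, and OFS on two lines of the sample data fed via stdin:

```shell
# OFS replaces the space that print normally puts between
# comma-separated parameters; NR and NF are the record number
# and per-record field count.
printf 'ajay manager account 45000\nsunil clerk account 25000\n' |
    awk 'BEGIN { OFS="," } { print NR, NF, $1, $4 }'
# → 1,4,ajay,45000
# → 2,4,sunil,25000
```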
# The `pstree` command

The `pstree` command is similar to `ps`, but instead of listing the running processes, it shows them as a tree. The tree-like format is sometimes a more suitable way to display the process hierarchy, making it much simpler to visualize running processes. The root of the tree is either init or the process with the given pid.

### Examples

1. To display a hierarchical tree structure of all running processes:

```
pstree
```

2. To display a tree with the given process as the root of the tree:

```
pstree [pid]
```

3. To show only those processes that have been started by a user:

```
pstree [USER]
```

4. To show the parent processes of the given process:

```
pstree -s [PID]
```

5. To view the output one page at a time, pipe it to the `less` command:

```
pstree | less
```

### Syntax

`pstree [OPTIONS] [USER or PID]`

### Additional Flags and their Functionalities

|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-a`|`--arguments`|Show command line arguments|
|`-A`|`--ascii`|Use ASCII line drawing characters|
|`-c`|`--compact`|Don't compact identical subtrees|
|`-h`|`--highlight-all`|Highlight the current process and its ancestors|
|`-H PID`|`--highlight-pid=PID`|Highlight this process and its ancestors|
|`-g`|`--show-pgids`|Show process group ids; implies `-c`|
|`-G`|`--vt100`|Use VT100 line drawing characters|
|`-l`|`--long`|Don't truncate long lines|
|`-n`|`--numeric-sort`|Sort output by PID|
|`-N type`|`--ns-sort=type`|Sort by namespace type (cgroup, ipc, mnt, net, pid, user, uts)|
|`-p`|`--show-pids`|Show PIDs; implies `-c`|
|`-s`|`--show-parents`|Show parents of the selected process|
|`-S`|`--ns-changes`|Show namespace transitions|
|`-t`|`--thread-names`|Show full thread names|
|`-T`|`--hide-threads`|Hide threads, show only processes|
|`-u`|`--uid-changes`|Show uid transitions|
|`-U`|`--unicode`|Use UTF-8 (Unicode) line drawing characters|
|`-V`|`--version`|Display version information|
|`-Z`|`--security-context`|Show SELinux security contexts|
# The `tree` command

The `tree` command in Linux recursively lists directories as tree structures. Each listing is indented according to its depth relative to the root of the tree.

### Examples:

1. Show a tree representation of the current directory.

```
tree
```

2. `-L NUMBER` limits the depth of recursion, to avoid displaying very deep trees.

```
tree -L 2 /
```

### Syntax:

```
tree [-acdfghilnpqrstuvxACDFQNSUX] [-L level [-R]] [-H baseHREF] [-T title]
     [-o filename] [--nolinks] [-P pattern] [-I pattern] [--inodes]
     [--device] [--noreport] [--dirsfirst] [--version] [--help] [--filelimit #]
     [--si] [--prune] [--du] [--timefmt format] [--matchdirs] [--from-file]
     [--] [directory ...]
```

### Additional Flags and their Functionalities:

|**Flag** |**Description** |
|:---|:---|
|`-a`|Print all files, including hidden ones.|
|`-d`|Only list directories.|
|`-l`|Follow symbolic links into directories.|
|`-f`|Print the full path to each listing, not just its basename.|
|`-x`|Do not move across file-systems.|
|`-L #`|Limit recursion depth to #.|
|`-P REGEX`|Recurse, but only list files that match the REGEX.|
|`-I REGEX`|Recurse, but do not list files that match the REGEX.|
|`--ignore-case`|Ignore case while pattern-matching.|
|`--prune`|Prune empty directories from the output.|
|`--filelimit #`|Omit directories that contain more than # files.|
|`-o FILE`|Redirect STDOUT output to FILE.|
|`-i`|Do not output indentation.|
# The `printf` command

This command lets you print the value of a variable by formatting it using rules. It is quite similar to `printf` in the C language.

### Syntax:

```
printf [-v variable_name] format [arguments]
```

### Options:

| OPTION | Description |
| --- | --- |
| `FORMAT` | FORMAT controls the output, and defines the way that the ARGUMENTs will be expressed in the output |
| `ARGUMENT` | An ARGUMENT will be inserted into the formatted output according to the definition of FORMAT |
| `--help` | Display help and exit |
| `--version` | Output version information and exit |

### Formats:

The anatomy of the FORMAT string can be broken down into three different parts:

- _ordinary characters_, which are copied to the output exactly as they appear in the FORMAT string.
- _interpreted character sequences_, which are escaped with a backslash ("\\").
- _conversion specifications_, which define the way the ARGUMENTs will be expressed as part of the output.

You can see those parts in this example:

```
printf " %s is where over %d million developers shape \"the future of software.\" " Github 65
```

The output:

```
Github is where over 65 million developers shape "the future of software."
```

There are two conversion specifications, `%s` and `%d`, and there are two escaped characters: the opening and closing double quotes wrapping the words _the future of software_. Everything else consists of ordinary characters.

### Conversion Specifications:

Each conversion specification begins with a `%` and ends with a `conversion character`. Between the `%` and the `conversion character` there may be, in order:

| | |
| --- | --- |
| `-` | A minus sign. This tells printf to left-adjust the conversion of the argument |
| _number_ | An integer that specifies field width; printf prints a conversion of ARGUMENT in a field at least number characters wide. If necessary it will be padded on the left (or right, if left-adjustment is called for) to make up the field width |
| `.` | A period, which separates the field width from the precision |
| _number_ | An integer, the precision, which specifies the maximum number of characters to be printed from a string, or the number of digits after the decimal point of a floating-point value, or the minimum number of digits for an integer |
| `h` or `l` | These differentiate between a short and a long integer, respectively, and are generally only needed for computer programming |

The conversion characters, which tell `printf` what kind of argument to print out, are as follows:

| Conversion char | Argument type |
| --- | --- |
| `s` | A string |
| `c` | An integer, expressed as the character corresponding to its ASCII code |
| `d`, `i` | An integer as a decimal number |
| `o` | An integer as an unsigned octal number |
| `x`, `X` | An integer as an unsigned hexadecimal number |
| `u` | An integer as an unsigned decimal number |
| `f` | A floating-point number with a default precision of 6 |
| `e`, `E` | A floating-point number in scientific notation |
| `p` | A memory address pointer |
| `%` | No conversion |

Here is a list of examples showing how `printf` outputs the ARGUMENT. We could use any word, but here we use `linuxcommand` and enclose the output in quotes so it is easier to see its position relative to the whitespace.

| FORMAT string | ARGUMENT string | Output string |
| --- | --- | --- |
| `"%s"` | `"linuxcommand"` | "linuxcommand" |
| `"%5s"` | `"linuxcommand"` | "linuxcommand" |
| `"%.5s"` | `"linuxcommand"` | "linux" |
| `"%-8s"` | `"linuxcommand"` | "linuxcommand" |
| `"%-15s"` | `"linuxcommand"` | "linuxcommand   " |
| `"%12.5s"` | `"linuxcommand"` | "       linux" |
| `"%-12.5s"` | `"linuxcommand"` | "linux       " |
| `"%-12.4s"` | `"linuxcommand"` | "linu        " |

Notes:

- `printf` requires the number of conversion strings to match the number of ARGUMENTs
- `printf` maps the conversion strings one-to-one, and expects to find exactly one ARGUMENT for each conversion string
- Conversion strings are always interpreted from left to right.

Here's an example:

The input:

```
printf "We know %f is %s %d" 12.07 "larger than" 12
```

The output:

```
We know 12.070000 is larger than 12
```

The example above shows 3 arguments: _12.07_, _larger than_, and _12_. Each of them is interpreted from left to right, one-to-one with the 3 given conversion strings (`%f`, `%s`, `%d`).

Character sequences which are interpreted as special characters by `printf`:

| Escaped char | Description |
| --- | --- |
| `\a` | issues an alert (plays a bell). Usually the ASCII BEL character |
| `\b` | prints a backspace |
| `\c` | instructs `printf` to produce no further output |
| `\e` | prints an escape character (ASCII code 27) |
| `\f` | prints a form feed |
| `\n` | prints a newline |
| `\r` | prints a carriage return |
| `\t` | prints a horizontal tab |
| `\v` | prints a vertical tab |
| `\"` | prints a double-quote (") |
| `\\` | prints a backslash (\) |
| `\NNN` | prints a byte with octal value `NNN` (1 to 3 digits) |
| `\xHH` | prints a byte with hexadecimal value `HH` (1 to 2 digits) |
| `\uHHHH` | prints the unicode character with hexadecimal value `HHHH` (4 digits) |
| `\UHHHHHHHH` | prints the unicode character with hexadecimal value `HHHHHHHH` (8 digits) |
| `%b` | prints ARGUMENT as a string with "\\" escapes interpreted as listed above, with the exception that octal escapes take the form `\0` or `\0NN` |

### Examples:

The format specifiers usually used with printf are shown in the examples below:

- %s

```
printf "%s\n" "Printf command documentation!"
```

This will print `Printf command documentation!` in the shell.

### Other important attributes of the printf command:

- `%b` - Prints arguments by expanding backslash escape sequences.
- `%q` - Prints arguments in a shell-quoted format which is reusable as input.
- `%d`, `%i` - Prints arguments in the format of signed decimal integers.
- `%u` - Prints arguments in the format of unsigned decimal integers.
- `%o` - Prints arguments in the format of unsigned octal (base 8) integers.
- `%x`, `%X` - Prints arguments in the format of unsigned hexadecimal (base 16) integers. %x prints lower-case letters and %X prints upper-case letters.
- `%e`, `%E` - Prints arguments in the format of floating-point numbers in exponential notation. %e prints lower-case letters and %E prints upper-case.
- `%a`, `%A` - Prints arguments in the format of floating-point numbers in hexadecimal (base 16) fractional notation. %a prints lower-case letters and %A prints upper-case.
- `%g`, `%G` - Prints arguments in the format of floating-point numbers in normal or exponential notation, whichever is more appropriate for the given value and precision. %g prints lower-case letters and %G prints upper-case.
- `%c` - Prints arguments as single characters.
- `%f` - Prints arguments as floating-point numbers.
- `%s` - Prints arguments as strings.
- `%%` - Prints a "%" symbol.

#### More Examples:

The input:

```
printf 'Hello\nyoung\nman!'
```

The output:

```
Hello
young
man!
```

The two `\n` break the sentence into 3 lines.

The input:

```
printf "%f\n" 2.5 5.75
```

The output:

```
2.500000
5.750000
```

The `%f` specifier combined with `\n` prints the two arguments as floating-point numbers on separate lines.
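As a quick, runnable illustration of field width and precision working together (the sample word `linuxcommand` is arbitrary, and the brackets are added only to make the padding visible):

```bash
# Width 12, precision 5: keep the first 5 characters, pad to 12, right-aligned
printf '[%12.5s]\n' linuxcommand   # [       linux]

# Same, but with '-' for left alignment
printf '[%-12.5s]\n' linuxcommand  # [linux       ]
```

Width controls the minimum total size of the field; precision controls how much of the string survives truncation.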
# The `cut` command

The `cut` command lets you remove sections from each line of files. It prints selected parts of lines from each FILE to standard output. With no FILE, or when FILE is `-`, it reads standard input.

### Usage and Examples:

1. Selecting specific fields in a file:

```
cut -d "delimiter" -f (field number) file.txt
```

2. Selecting specific characters:

```
cut -c [(k)-(n)/(k),(n)/(n)] filename
```

Here, **k** denotes the starting position and **n** the ending position of the characters in each line, if _k_ and _n_ are separated by "-"; otherwise they are single character positions in each line of the input file.

3. Selecting specific bytes:

```
cut -b 1,2,3 filename   # select bytes 1, 2 and 3
cut -b 1-4 filename     # select bytes 1 through 4
cut -b 1- filename      # select bytes 1 through the end of each line
cut -b -4 filename      # select bytes from the beginning up to the 4th byte
```

**Tabs and backspaces** are treated as characters of 1 byte.

### Syntax:

```
cut OPTION... [FILE]...
```

### Additional Flags and their Functionalities:

|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-b`|`--bytes=LIST`|select only these bytes|
|`-c`|`--characters=LIST`|select only these characters|
|`-d`|`--delimiter=DELIM`|use DELIM instead of TAB for the field delimiter|
|`-f`|`--fields=LIST`|select only these fields; also print any line that contains no delimiter character, unless the `-s` option is specified|
|`-s`|`--only-delimited`|do not print lines not containing delimiters|
|`-z`|`--zero-terminated`|line delimiter is NUL, not newline|
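A concrete field-selection example, using a `/etc/passwd`-style sample line fed through a pipe (so no file is needed):

```bash
# ':' is the delimiter; field 1 is the user name
echo 'root:x:0:0:root:/root:/bin/bash' | cut -d ':' -f 1   # root
```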
# The `sed` command

The `sed` command stands for stream editor. A stream editor is used to perform basic text transformations on an input stream (a file or input from a pipeline). For instance, it can perform many functions on files, such as searching, find and replace, insertion, or deletion. While in some ways it is similar to an editor which permits scripted edits (such as `ed`), `sed` works by making only one pass over the input(s), and is consequently more efficient. But it is sed's ability to filter text in a pipeline that particularly distinguishes it from other types of editors.

The most common use of the `sed` command is for substitution, or find and replace. By using sed you can edit files even without opening them, which is a much quicker way to find and replace something in a file. It supports basic and extended regular expressions that allow you to match complex patterns. Most Linux distributions come with GNU `sed` pre-installed by default.

### Examples:

1. To find and replace a string with `sed`:

```
sed -i 's/{search_regex}/{replace_value}/g' input-file
```

2. For recursive find and replace *(along with `find`)*:

> Sometimes you may want to recursively search directories for files containing a string and replace the string in all files. This can be done using commands such as `find` to recursively find files in the directory and piping the file names to `sed`.

The following command will recursively search for files in the current working directory and pass the file names to `sed`:

```
find . -type f -exec sed -i 's/{search_regex}/{replace_value}/g' {} +
```

### Syntax:

```
sed [OPTION]... {script-only-if-no-other-script} [INPUT-FILE]...
```

- `OPTION` - sed options: in-place, silent, follow-symlinks, line-length, null-data, etc.
- `{script-only-if-no-other-script}` - Add the script to the command if available.
- `INPUT-FILE` - Input stream: a file or input from a pipeline.

If no option is given, then the first non-option argument is taken as the sed script to interpret. All remaining arguments are names of input files; if no input files are specified, then the standard input is read.

GNU sed home page: [http://www.gnu.org/software/sed/](http://www.gnu.org/software/sed/)

|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-i[SUFFIX]`|<center>--in-place[=SUFFIX]</center>|Edit files in place (makes a backup if SUFFIX is supplied).|
|`-n`|<center>--quiet, --silent</center>|Suppress automatic printing of pattern space.|
|`-e script`|<center>--expression=script</center>|Add the script to the commands to be executed.|
|`-f script-file`|<center>--file=script-file</center>|Add the contents of script-file to the commands to be executed.|
|`-l N`|<center>--line-length=N</center>|Specify the desired line-wrap length for the `l` command.|
|`-r`|<center>--regexp-extended</center>|Use extended regular expressions in the script.|
|`-s`|<center>--separate</center>|Consider files as separate rather than as a single continuous long stream.|
|`-u`|<center>--unbuffered</center>|Load minimal amounts of data from the input files and flush the output buffers more often.|
|`-z`|<center>--null-data</center>|Separate lines by NUL characters.|

### Before you begin

It may seem complicated and complex at first, but searching and replacing text in files with sed is very simple.

To find out more: [https://www.gnu.org/software/sed/manual/sed.html](https://www.gnu.org/software/sed/manual/sed.html)
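To see the substitution form in action without touching any file, you can pipe a sample string into `sed` (the words `foo`/`bar` here are arbitrary placeholders):

```bash
# Replace every occurrence of 'foo' with 'bar' on each input line
echo 'foo one foo two' | sed 's/foo/bar/g'   # bar one bar two
```

Without the trailing `g`, only the first match on each line would be replaced.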
# The `rmdir` command

The `rmdir` command is used to remove empty directories from the filesystem in Linux. It removes each directory specified on the command line, but only if that directory is empty.

### Usage and Examples:

1. Remove a directory and its ancestors:

```
rmdir -p a/b/c   # is similar to 'rmdir a/b/c a/b a'
```

2. Remove multiple directories:

```
rmdir a b c   # removes the empty directories a, b and c
```

### Syntax:

```
rmdir [OPTION]... DIRECTORY...
```

### Additional Flags and their Functionalities:

|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|<center>-</center>|`--ignore-fail-on-non-empty`|ignore each failure that is solely because a directory is non-empty|
|`-p`|`--parents`|remove DIRECTORY and its ancestors|
|`-v`|`--verbose`|output a diagnostic for every directory processed|
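The `-p` behaviour is easy to verify end to end in a throwaway directory (created here with `mktemp -d` so nothing real is touched):

```bash
cd "$(mktemp -d)"   # work in a fresh temporary directory
mkdir -p a/b/c      # create the nested chain
rmdir -p a/b/c      # removes a/b/c, then a/b, then a
test ! -d a && echo "removed"   # removed
```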
# The `screen` command

With `screen` you can start a screen session and then open any number of windows (virtual terminals) inside that session.
Processes running in screen will continue to run when their window is not visible, even if you get disconnected. This is very handy for long-running jobs, such as bash scripts that take a very long time.

To start a screen session, type `screen`; this will open a new screen session with a virtual terminal open.

Below are some of the most common commands for managing Linux screen windows:

|**Command** |**Description** |
|:---|:---|
|`Ctrl+a` + `c`|Create a new window (with shell).|
|`Ctrl+a` + `"`|List all windows.|
|`Ctrl+a` + `0`|Switch to window 0 (by number).|
|`Ctrl+a` + `A`|Rename the current window.|
|`Ctrl+a` + `S`|Split current region horizontally into two regions.|
|`Ctrl+a` + `\|`|Split current region vertically into two regions.|
|`Ctrl+a` + `tab`|Switch the input focus to the next region.|
|`Ctrl+a` + `Ctrl+a`|Toggle between the current and previous windows.|
|`Ctrl+a` + `Q`|Close all regions but the current one.|
|`Ctrl+a` + `X`|Close the current region.|

## Restore a Linux Screen

To restore a screen session, type `screen -r`. If you have more than one open screen session, you have to add the session ID to the command to connect to the right session.

## Listing all open screen sessions

To find the session ID, you can list the currently running screen sessions with:

`screen -ls`

```
There are screens on:
    18787.pts-0.your-server   (Detached)
    15454.pts-0.your-server   (Detached)
2 Sockets in /run/screens/S-yourserver.
```

If you want to restore screen 18787.pts-0, then type the following command:

`screen -r 18787`
# The `nc` command

The `nc` (or netcat) command is used to perform any operation involving TCP (Transmission Control Protocol, connection-oriented), UDP (User Datagram Protocol, connectionless, no guarantee of data delivery) or UNIX-domain sockets. It can be thought of as a Swiss Army knife for communication protocol utilities.

### Syntax:

```
nc [options] [ip] [port]
```

### Examples:

#### 1. Open a TCP connection to port 80 of host, using port 1337 as the source port, with a timeout of 5s:

```bash
$ nc -p 1337 -w 5 host.ip 80
```

#### 2. Open a UDP connection to port 80 on host:

```bash
$ nc -u host.ip 80
```

#### 3. Create and listen on a UNIX-domain stream socket:

```bash
$ nc -lU /var/tmp/dsocket
```

#### 4. Create a basic server/client model:

This creates a connection, with no specific server/client sides with respect to nc, once the connection is established.

```bash
$ nc -l 1234            # in one console

$ nc 127.0.0.1 1234     # in another console
```

#### 5. Build a basic data transfer model:

Once the file has been transferred, the connection closes automatically.

```bash
$ nc -l 1234 > filename.out   # start listening in one console and collect data

$ nc host.ip 1234 < filename.in
```

#### 6. Talk to servers:

A basic example of retrieving the home page of the host, along with headers:

```bash
$ printf "GET / HTTP/1.0\r\n\r\n" | nc host.ip 80
```

#### 7. Port scanning:

Checking which ports are open and running services on target machines. The `-z` flag tells nc to report open ports rather than initiate a connection.

```bash
$ nc -zv host.ip 20-2000   # range of ports to check
```

### Flags and their Functionalities:

| **Short Flag** | **Description** |
| -------------- | ----------------------------------------------------------------- |
| `-4` | Forces nc to use IPv4 addresses |
| `-6` | Forces nc to use IPv6 addresses |
| `-b` | Allow broadcast |
| `-D` | Enable debugging on the socket |
| `-i` | Specify a time interval delay between lines sent and received |
| `-k` | Stay listening for another connection after the current one is over |
| `-l` | Listen for an incoming connection instead of initiating one to a remote host |
| `-T` | Specify the IP Type of Service (TOS) for the connection |
| `-p` | Specify the source port to be used |
| `-r` | Choose source and/or destination ports randomly |
| `-s` | Specify the IP of the interface which is used to send the packets |
| `-U` | Use UNIX-domain sockets |
| `-u` | Use UDP instead of TCP as the protocol |
| `-w` | Declare a timeout threshold for idle or unestablished connections |
| `-x` | Use the specified protocol when talking to a proxy server |
| `-z` | Scan for listening daemons, without sending any data |
# The `make` command

The `make` command is used to automate the reuse of multiple commands in a certain directory structure.

An example would be the use of `terraform init`, `terraform plan`, and `terraform validate` while having to change between different subscriptions in Azure. This is usually done in the following steps:

```
az account set --subscription "Subscription - Name"
terraform init
```

The `make` command can automate all of that in just one go:

```
make tf-init
```

### Syntax:

```
make [ -f makefile ] [ options ] ... [ targets ] ...
```

### Example use (guide):

#### 1. Create a `Makefile` in your guide directory
#### 2. Include the following in your `Makefile` (note that each recipe line must be indented with a tab, not spaces):

```
hello-world:
	echo "Hello, World!"

hello-bobby:
	echo "Hello, Bobby!"

touch-letter:
	echo "This is a text that is being inputted into our letter!" > letter.txt

clean-letter:
	rm letter.txt
```

#### 3. Execute `make hello-world` - this echoes "Hello, World!" in our terminal.
#### 4. Execute `make hello-bobby` - this echoes "Hello, Bobby!" in our terminal.
#### 5. Execute `make touch-letter` - this creates a text file named `letter.txt` and populates a line in it.
#### 6. Execute `make clean-letter` - this deletes `letter.txt`.

### References to lengthier and more detailed tutorials:

[linoxide - linux make command examples](https://linoxide.com/linux-make-command-examples/)

[makefiletutorial.com - the name itself gives it away](https://makefiletutorial.com/)
# The `basename` command

`basename` is a command-line utility that strips the directory portion from given file names. Optionally, it can also remove any trailing suffix. It is a simple command that accepts only a few options.

### Examples

The most basic example is to print the file name with the leading directories removed:

```bash
basename /etc/bar/foo.txt
```

The output will be the file name:

```bash
foo.txt
```

If you run basename on a path string that points to a directory, you will get the last segment of the path. In this example, /etc/bar is a directory.

```bash
basename /etc/bar
```

Output:

```bash
bar
```

The basename command removes any trailing `/` characters:

```bash
basename /etc/bar/foo.txt/
```

Output:

```bash
foo.txt
```

### Options

1. By default, each output line ends in a newline character. To end the lines with NUL, use the `-z` (`--zero`) option.

```bash
$ basename -z /etc/bar/foo.txt
foo.txt$
```

2. The `basename` command can accept multiple names as arguments. To do so, invoke the command with the `-a` (`--multiple`) option, followed by the list of files separated by spaces. For example, to get the file names of `/etc/bar/foo.txt` and `/etc/spam/eggs.docx` you would run:

```bash
basename -a /etc/bar/foo.txt /etc/spam/eggs.docx
```

```bash
foo.txt
eggs.docx
```

### Syntax

The basename command supports two syntax formats:

```bash
basename NAME [SUFFIX]
basename OPTION... NAME...
```

### Additional functionalities

**Removing a Trailing Suffix**: To remove any trailing suffix from the file name, pass the suffix as a second argument:

```bash
basename /etc/hostname name
host
```

Generally, this feature is used to strip file extensions.

### Help Command

Run the following command to view the complete guide to the `basename` command:

```bash
man basename
```
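Combining the two behaviours, a single call can strip both the directory and the file extension (the path here is just a sample and does not need to exist, since basename only manipulates the string):

```bash
# Drop the leading directories and the '.txt' suffix in one call
basename /etc/bar/foo.txt .txt   # foo
```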
# The `banner` command

The `banner` command writes ASCII character strings to standard output in large letters. Each line in the output can be up to 10 uppercase or lowercase characters in length. On output, all characters appear in uppercase, with the lowercase input characters appearing smaller than the uppercase input characters.

### Examples:

1. To display a banner at the workstation, enter:

```
banner LINUX!
```

2. To display more than one word on a line, enclose the text in quotation marks, as follows:

```
banner "Intro to" Linux
```

> This displays "Intro to" on one line and "Linux" on the next.

3. Printing "101LinuxCommands" in large letters:

```
banner 101LinuxCommands
```

> It will print only 101LinuxCo, as banner has a default capacity of 10 characters per line.

---
# The `which` command

The `which` command identifies the executable binary that launches when you issue a command to the shell.
If you have different versions of the same program on your computer, you can use which to find out which one the shell will use.

It has three return statuses, as follows:

- 0 : If all specified commands are found and executable.
- 1 : If one or more specified commands are nonexistent or not executable.
- 2 : If an invalid option is specified.

### Examples

1. To find the full path of the ls command, type the following:

```
which ls
```

2. We can provide more than one argument to the which command:

```
which netcat uptime ping
```

The which command searches from left to right, and if more than one match is found in the directories listed in the PATH variable, which will print only the first one.

3. To display all the paths for the specified command:

```
which -a [filename]
```

4. To display the path of the node executable, execute the command:

```
which node
```

5. To display the path of the Java executable, execute:

```
which java
```

### Syntax

```
which [filename1] [filename2] ...
```

You can pass multiple programs and commands to which, and it will check them in order.

For example:

```
which ping cat uptime date head
```

### Options

- `-a` : List all instances of executables found (instead of just the first one of each).
- `-s` : No output, just return 0 if all of the executables are found, or 1 if some were not found.
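A quick sanity check you can run on almost any system (the exact path printed varies by distribution, so none is shown here):

```bash
# Print the absolute path of the 'sh' binary the shell would run
which sh
```

The printed path should be non-empty and point at an executable file.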
# The `nice/renice` commands

The `nice` and `renice` commands are used to modify the scheduling priority of a program.
The priority range is between -20 and 19, where 19 is the lowest priority.

### Examples:

1. Running the cc command in the background with a lower priority than the default (slower):

```
nice -n 15 cc -c *.c &
```

2. Increase the priority of all processes belonging to group "test" to the highest priority (-20):

```
renice -n -20 -g test
```

### Syntax:

```
nice [ -Increment | -n Increment ] Command [ Argument ... ]
```

### Flags:

|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-Increment`|<center>-</center>|Increment is the value of priority you want to assign.|
|`-n Increment`|<center>-</center>|Same as `-Increment`.|
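With GNU coreutils, running `nice` with no command prints the current niceness, which makes the increment easy to observe directly (this assumes the shell starts at the default niceness of 0):

```bash
# The inner 'nice' (no command) prints the niceness set by the outer one
nice -n 5 nice   # usually prints 5 when the starting niceness is 0
```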
# The `wc` command

The `wc` command stands for word count. It's used to count the number of lines, words, and bytes *(characters)* in a file or standard input, and then prints the result to the standard output.

### Examples:

1. To count the number of lines, words and characters in a file, in that order:

```
wc file.txt
```

2. To count the number of directories in a directory:

```
ls -F | grep / | wc -l
```

### Syntax:

```bash
wc [OPTION]... [FILE]...
```

### Additional Flags and their Functionalities:

|**Short Flag** |**Long Flag** |**Description** |
|:---|:---|:---|
|`-c` | `--bytes` | print the byte counts|
|`-m` | `--chars` | print the character counts|
|`-l` | `--lines` | print the newline counts|
|<center>-</center> | `--files0-from=F` | read input from the files specified by NUL-terminated names in file F. If F is `-` then read names from standard input|
|`-L` | `--max-line-length` | print the maximum display width|
|`-w` | `--words` | print the word counts|

### Additional Notes:

* Passing more than one file to the `wc` command prints the counts for each file and the total counts for all of them.
* You can combine more than one flag to print the result the way you want.
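Counting from standard input works the same way as counting from a file. Here a two-line, three-word sample is generated with `printf` so no file is needed:

```bash
printf 'hello world\nagain\n' | wc -w   # 3 (words)
printf 'hello world\nagain\n' | wc -l   # 2 (lines)
```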
# The `tr` command

The tr command in UNIX is a command-line utility for translating or deleting characters.
It supports a range of transformations, including uppercase to lowercase, squeezing repeating characters, deleting specific characters, and basic find and replace.
It can be used with UNIX pipes to support more complex translation. tr stands for translate.

### Examples:

1. Convert all lowercase letters in file1 to uppercase:

```
$ cat file1
foo
bar
baz
$ tr a-z A-Z < file1
FOO
BAR
BAZ
```

2. Squeeze consecutive line breaks into one:

```
$ cat file1
foo


bar


baz
$ tr -s "\n" < file1
foo
bar
baz
```

3. Remove the newline characters:

```
$ cat file1
foo
bar
baz
$ tr -d "\n" < file1
foobarbaz%
```

(The trailing `%` is the shell's marker for output without a final newline, not part of the output itself.)

### Syntax:

The general syntax for the tr command is as follows:

```
tr [options] string1 [string2]
```

### Additional Flags and their Functionalities:

| **Short Flag** | **Long Flag** | **Description** |
| :------------- | :------------ | :------------------------------------------------------------------------------------------------------------ |
| `-C` | | Complement the set of characters in string1; that is, `-C ab` includes every character except for `a` and `b`. |
| `-c` | | Same as -C. |
| `-d` | | Delete characters in string1 from the input. |
| `-s` | | If there is a sequence of identical characters in string1, squeeze them into one. |
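The uppercase translation also works directly in a pipeline, which is how tr is most often used in scripts:

```bash
# Translate the range a-z into A-Z on the piped input
echo 'linux commands' | tr 'a-z' 'A-Z'   # LINUX COMMANDS
```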
# The `wait` command

`wait` is a shell built-in that waits for a running process with the given process ID to complete. If no process ID is given, it waits for all current child processes to complete.

## Example

This example shows how the `wait` command works:

**Step 1**:

Create a file named "wait_example.sh" and add the following script to it:

```
#!/bin/bash
echo "Wait command" &
process_id=$!
wait $process_id
echo "Exited with status $?"
```

**Step 2**:

Run the file with the bash command:

```
$ bash wait_example.sh
```
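The same pattern also works directly at an interactive prompt, without a script file (`sleep` stands in here for any long-running background job):

```bash
sleep 0.1 &       # start a background child
pid=$!            # $! holds the PID of the most recent background job
wait "$pid"       # block until that child exits
echo "status=$?"  # status=0 (the child's exit status)
```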
# The `zcat` command

The `zcat` command allows you to look at the contents of a compressed file.

### Examples:

1. To view the content of a compressed file:

```
~$ zcat test.txt.gz
Hello World
```

2. It also works with multiple files:

```
~$ zcat test2.txt.gz test.txt.gz
hello
Hello world
```

### Syntax:

The general syntax for the `zcat` command is as follows:

```
zcat [ -n ] [ -V ] [ File ... ]
```
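You can reproduce the first example end to end in a temporary directory (this assumes `gzip` is installed, which is the case on virtually every Linux system):

```bash
cd "$(mktemp -d)"              # work in a throwaway directory
printf 'Hello World\n' > test.txt
gzip test.txt                  # produces test.txt.gz and removes test.txt
zcat test.txt.gz               # Hello World
```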