
Wednesday, 24 May 2017

How to change resource limit of a running docker container dynamically

Say we are running a Docker container with a constrained resource limit. After some time we discover that the container needs more resources, and we have to relax the resource limit without stopping the container.
I am describing two methods for changing the resource limit of a running container.
For example, suppose we are running a container with a memory limit of 128MB:
docker run -it -m 128m ubuntu /bin/bash
You can find all the memory-related settings under /sys/fs/cgroup/memory/docker/<full container id>
We can get the full container ID by running the docker ps command with the --no-trunc option.
docker ps --no-trunc
Memory limit for a container can be found from the file
/sys/fs/cgroup/memory/docker/<container full id>/memory.limit_in_bytes
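Building this path by hand for the long container ID is error-prone; it can be wrapped in a small shell helper (a sketch; the function name is my own):

```shell
# Hypothetical helper: build the cgroup memory-limit path for a
# given full container ID (Docker cgroup layout as used above).
cgroup_mem_path() {
    echo "/sys/fs/cgroup/memory/docker/$1/memory.limit_in_bytes"
}

cgroup_mem_path b2bebfd78782ff345c92a6e44535e61d001187a2f15ce171679729eebfd7c327
```

Combined with `docker inspect --format '{{.Id}}' <container>`, this avoids copying the long ID manually.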
#cat /sys/fs/cgroup/memory/docker/b2bebfd78782ff345c92a6e44535e61d001187a2f15ce171679729eebfd7c327/memory.limit_in_bytes
We can check the memory utilization by the container using the docker stats command:
# docker stats b2bebfd78782
Let’s run the stress tool in the container and check the utilization:
# stress --vm 1 --vm-bytes 512M
Checking the resource utilization again:
Although we specified 512MB in the stress command, the container has a limit of 128MB of RAM, so stress is unable to get 512MB and is occupying the full 128MB.
Let’s increase the RAM to 1GB:
Method 1:
We can directly write the desired limit, as a number of bytes, to /sys/fs/cgroup/memory/docker/<container full id>/memory.limit_in_bytes, and this will change the memory limit to the value we want.
echo 1073741824 > /sys/fs/cgroup/memory/docker/b2bebfd78782ff345c92a6e44535e61d001187a2f15ce171679729eebfd7c327/memory.limit_in_bytes
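The value 1073741824 is 1GB expressed in bytes (1024 × 1024 × 1024). A small helper makes such conversions less error-prone (a sketch; the function name is my own):

```shell
# Convert a size in megabytes to bytes, suitable for
# writing into memory.limit_in_bytes
mb_to_bytes() {
    echo $(( $1 * 1024 * 1024 ))
}

mb_to_bytes 1024    # 1GB  -> 1073741824
mb_to_bytes 128     # 128MB -> 134217728
```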
Again we will check the memory utilization:
Yes, we can see that the memory limit has been increased to 1GB.
This change is temporary: once the container is restarted, it reverts to whatever memory setting was specified when the container was created.
 
Method 2:
Another simple way to change the resource limit is to use the docker update command. For example, say we want to change the memory limit to 512MB:
# docker update b2bebfd78782 -m 512M
This will update the memory limit for the container permanently, i.e. the new limit survives container restarts.
Usage: docker update [OPTIONS] CONTAINER [CONTAINER...]

Update configuration of one or more containers

  --blkio-weight          Block IO (relative weight), between 10 and 1000
  -c, --cpu-shares        CPU shares (relative weight)
  --cpu-period            Limit CPU CFS (Completely Fair Scheduler) period
  --cpu-quota             Limit CPU CFS (Completely Fair Scheduler) quota
  --cpuset-cpus           CPUs in which to allow execution (0-3, 0,1)
  --cpuset-mems           MEMs in which to allow execution (0-3, 0,1)
  --help                  Print usage
  --kernel-memory         Kernel memory limit
  -m, --memory            Memory limit
  --memory-reservation    Memory soft limit
  --memory-swap           Swap limit equal to memory plus swap: '-1' to enable unlimited swap
  --restart               Restart policy to apply when a container exits
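The update step can be wrapped in a small helper that prints the docker update command before running it (a sketch; the function name is my own, and note that some Docker versions require --memory-swap to be raised together with -m):

```shell
# Sketch: raise a container's memory limit, adjusting the swap limit
# alongside it (some Docker versions require both together).
update_mem() {
    cmd="docker update -m $2 --memory-swap $3 $1"
    echo "$cmd"       # show what will be run
    # $cmd            # uncomment to actually apply the change
}

update_mem b2bebfd78782 512M 1G
```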


Friday, 19 May 2017

MongoDB Recipes: Disable replica set chaining

By default MongoDB allows replica set chaining, meaning a secondary member is allowed to sync from another secondary. Suppose we want our secondaries to sync only from the primary, not from any other secondary. In that case we can disable replica set chaining.
  • Save replica set configuration in a variable:
    repl1:PRIMARY> cfg = rs.conf()
  • If settings sub-document is not present in the config, then add it:
    repl1:PRIMARY> cfg.settings = {}
  • Set the chainingAllowed property to false (the default is true) in the cfg variable:
    repl1:PRIMARY> cfg.settings.chainingAllowed = false
  • Apply the new configuration from the cfg variable:
    repl1:PRIMARY> rs.reconfig(cfg)
  • Check the new settings:
    repl1:PRIMARY> rs.conf()
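The steps above can also be collected into a single script file and fed to the mongo shell non-interactively (a sketch, assuming the mongo shell is on the PATH and the primary is reachable on localhost:27017 — adjust host and port for your deployment):

```shell
# Write the reconfiguration steps to a script file for the mongo shell
cat > disable_chaining.js <<'EOF'
cfg = rs.conf()
if (cfg.settings === undefined) { cfg.settings = {} }
cfg.settings.chainingAllowed = false
rs.reconfig(cfg)
EOF

# Run it against the primary (hypothetical host/port):
# mongo --host localhost:27017 disable_chaining.js
```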

Tuesday, 16 May 2017

Encrypting the shell scripts

Sometimes we need to encrypt a shell script for security reasons, for example if the script contains sensitive information such as passwords.
For this task I am going to use the shc tool (http://www.datsi.fi.upm.es/~frosal/sources/shc.html) to convert a plain-text shell script into a binary file. Download the source code of the shc tool from http://www.datsi.fi.upm.es/~frosal/sources/ and extract the gzip-compressed tar archive. Here I am going to use version 3.8.9.
Note: I used Ubuntu 14.04 for this example.
If make is not installed, install it:
# apt-get install make
Go inside the shc-3.8.9 source folder.
# cd shc-3.8.9
# make


Now install shc:
# make install
If installation fails with a "directory not found" error, create the /usr/local/man/man1 directory and run the command again:

# mkdir /usr/local/man/man1
# make install

Remove the shc source folder after it is installed
# cd ..
# rm -rf shc-3.8.9/

Our shc tool is installed; we are now going to convert our shell script into a binary.
Go to the folder where the shell script is stored. My script’s name is mysql_backup.
Create binary file of the shell script using the following command:
# shc -f mysql_backup
The shc command creates two additional files:
# ls -l mysql_backup*
-rwxrw-r-- 1 pranab pranab 149 Mar 27 01:09 mysql_backup
-rwx-wx--x 1 pranab pranab 11752 Mar 27 01:12 mysql_backup.x
-rw-rw-r-- 1 pranab pranab 10174 Mar 27 01:12 mysql_backup.x.c
 
mysql_backup is the original unencrypted shell script.
mysql_backup.x is the encrypted shell script in binary format.
mysql_backup.x.c is the C source code generated from the mysql_backup script. This C source is compiled to create the encrypted mysql_backup.x binary above.
We will remove the original shell script (mysql_backup) and the C file (mysql_backup.x.c), and rename the binary (mysql_backup.x) to the original script name (mysql_backup).
# rm -f  mysql_backup.x.c
# rm -f mysql_backup
# mv mysql_backup.x mysql_backup
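The remove-and-rename steps above can be wrapped in a small function so they are done consistently for any script (a sketch; the function name is my own):

```shell
# Keep only the compiled binary under the original script name:
# remove the plain-text script and the generated C source, then
# rename <script>.x to <script>.
shc_cleanup() {
    rm -f "$1" "$1.x.c"
    mv "$1.x" "$1"
}

# shc_cleanup mysql_backup
```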
 
Now we have our binary shell script; its contents cannot be easily read, as it is a binary file.

Monday, 20 February 2017

Making Pen Drive Write Protected

Making a pen drive read-only is very device specific: it mainly depends on the pen drive’s chip and manufacturer.
I was successful in making a Transcend JetFlash 8GB pen drive write protected.
First we will find out the chip details of the Pen Drive. For that we will use the ChipGenius utility.
Run the ChipGenius_v4_00_0026_b2.exe tool to identify the Pen Drive chip.
In my case the chip manufacturer vendor is SMI and Part Number is SM3255AB.
Once I have the details of the Pen Drive chip I can look for specific tool for that Pen drive.
If the USB Pen drive has a Silicon Motion Inc. (SMI) controller inside it, then we can use the SMI utilities to alter some of the settings of the pen drive.
Luckily I found a page where the author had already compiled a list of SMI tools:
http://usb-fix.blogspot.in/p/smi.html
I searched this blog for tools for my pen drive chip SM3255AB and found SMI ReFixInfo 1.0.0.1, which allows me to make my pen drive write protected. I downloaded it from http://flashboot.ru/files/file/244/
The file I downloaded is SMI_ReFixInfo_1_0_0_1.7z. After downloading, I extracted it and ran the executable SMI_ReFixInfo.exe.
Once the pen drive is detected by the tool, we can change the various properties of the pen drive.
We are going to make the pen drive write protected, so select the Reset Write Protect check box, and from the W.P. list at the top select Write Protect. To remove write protection later, select the Un-Write Protect option.
Once the required option is selected, click Start button to save the settings.
If the change is saved successfully, we will see the PASS message.
Sometimes we may have to remove the Pen drive and connect again to see the new settings.
Now if we try to copy something to the pen drive, we will see the following message:
If I try to delete a file from the pen drive… oops, there is no delete option. Pressing the Delete key also does nothing.

Monday, 7 November 2016

Private Docker Registry on Ubuntu

Normally the docker tool uploads/downloads docker images from the Docker public registry, called Docker Hub. Docker Hub lets us upload our images free of cost, but anybody can access them, as the images are public. There are ways to configure our own registries from which we can pull docker images.
The benefits of having private registries are:
  • We can keep our private images private, so that nobody from outside has access to them.
  • We can also save time by pushing and pulling images locally over our own LAN/WAN, instead of pushing and pulling over the Internet.
  • We can save Internet bandwidth by keeping commonly used images locally in our registry.
Setting up a private registry is very simple in Ubuntu. We can download the registry container image from the Docker Hub and use that image to start our own Docker registry service.
In this document I am going to write about a very basic registry on Ubuntu 14.04 without any built-in authentication mechanism and without SSL.
I will take two docker nodes, server1 (IP 192.168.10.75) and server2 (IP 192.168.10.76). On the first node, server1, I will deploy the docker registry container, and from the second node, server2, I am going to pull images from our own registry.
Now let’s do the hands-on.
Download the registry image from Docker Hub:
docker pull registry:latest
Let’s run the registry docker container. The registry container exposes port 5000 on the node server1, so that docker clients outside the container can use it.
docker run --name myregistry -p 5000:5000 -d registry:latest
Next we will pull some images from Docker Hub onto server1, and then push them into our own docker registry (the container named myregistry). For this example we will download the alpine and hello-world images from Docker Hub.
docker pull alpine
docker pull hello-world
Our two images, alpine and hello-world, are now available on server1, so we will push them into our registry.
Before pushing the images into our registry, we have to tag them with the address of the local registry to which we are going to push. If we try to push an untagged image, we will get an "image does not exist" error.
Use docker tag command to give the image a name that we can use to push the image to our own Docker registry:
docker tag hello-world localhost:5000/hello-world:latest
docker tag alpine localhost:5000/alpine:latest
Now let’s push our alpine and hello-world images into our registry
docker push localhost:5000/hello-world
docker push localhost:5000/alpine
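The tag-and-push steps above generalize to a loop over any number of images (a sketch; the echo prefixes make it a dry run that only prints the commands — remove them to actually tag and push):

```shell
# Dry run: print the tag and push commands for each image.
# Remove the echo prefixes to execute them for real.
REG=localhost:5000
for img in hello-world alpine; do
    echo docker tag "$img" "$REG/$img:latest"
    echo docker push "$REG/$img:latest"
done
```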
We can check the images available in our registry by running the command:
curl <repository>:<port>/v2/_catalog
curl localhost:5000/v2/_catalog

Our private registry is ready, now we will pull the images from another docker node server2 (IP 192.168.10.76).

When we try to pull an image from our new registry (server1, IP 192.168.10.75), we get an error:
Error response from daemon: Get https://192.168.10.75:5000/v1/_ping: http: server gave HTTP response to HTTPS client
To resolve the error, edit/create the file /etc/docker/daemon.json and add the following line:
{ "insecure-registries":["<registry>:<port>"] }
pico /etc/docker/daemon.json
{ "insecure-registries":["192.168.10.75:5000"] }
After adding the insecure-registries line, restart the docker process:
service docker restart

When we use the registry on localhost, the communication is in plain text and no TLS encryption is needed; but when we connect from another node, docker expects TLS encryption. By adding our new registry to insecure-registries, we inform docker to communicate with it without TLS encryption.

Now let’s pull our images:
docker pull 192.168.10.75:5000/alpine:latest
docker pull 192.168.10.75:5000/hello-world:latest
Congrats! We have configured our own registry and pushed/pulled images from it successfully.

Tuesday, 13 September 2016

Creating a docker image from a running container

We can create a docker image in two ways:
1) From a running container
2) Using a Dockerfile
In this document I will write only about creating an image from a running container; in the future I will write about the second method, creating a docker image using a Dockerfile (if time permits).
As an example, I will deploy the world’s simplest Node.js application in a docker container, taking lightweight Alpine Linux as the base image for my Node.js application image.
First I will pull the Alpine Linux image from docker hub.
# docker pull alpine
Next I will install Node.js in a running Alpine Linux container. For that I am creating one interactive session to an alpine Linux container.
# docker run -i -t alpine /bin/sh
Install Node.JS using Alpine Linux’s package manager apk
Verify node installation:
To deploy the sample Node.JS application, first I will create a directory for this sample application
# mkdir myapp
# cd myapp
Sample Node.JS application
# vi myapp.js
var express = require('express');
var app = express();

app.get('/', function (req, res) {
  res.send('Hello World!');
});

app.listen(3000, function () {
  console.log('Example app listening on port 3000!');
});
/myapp # vi package.json
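The contents of package.json appeared only as a screenshot in the original post; a minimal version for the app above might look like this (a plausible reconstruction, not the original file):

```shell
# Minimal package.json declaring the express dependency used by myapp.js
cat > package.json <<'EOF'
{
  "name": "myapp",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.0.0"
  }
}
EOF
```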
# npm install
We will need the container ID to create our new image; running the hostname command inside the container gives us the container ID.
Now exit from the container shell and run the docker commit command to create the new image.
# docker commit --change='CMD ["node", "/myapp/myapp.js"]' -c "EXPOSE 3000" 24b1763f7d0d pranabsharma/nodetest:version0
Our new image is created and if we run the docker images command, we can see the newly created image.
We will run our new docker image:
# docker run -p 3000:3000 -d bf4d3f980e76
Checking our Node.js app from browser:


Friday, 26 August 2016

How to run MySQL docker container with populated data

Suppose we have to run a few MySQL containers, each containing data for a different application. Each MySQL docker container should be initialized with its databases and data when we run the container for the first time.
In this example I am going to run two MySQL docker containers: one for our techsupport application and the second for the blog application.
First I will download the MySQL 5.5 docker image.
# docker pull mysql:5.5
We now have the MySQL 5.5 docker image. Before running the image, I will copy the mysqldump files for both applications.
I have created 2 directories for copying the SQL scripts for the two applications:
# mkdir -p /docker/scripts/blog
# mkdir -p /docker/scripts/techsupport

Next I copied the SQL files into the respective directories:
# cp /root/MySQLDocker/sql/blog.sql /docker/scripts/blog/
# cp /root/MySQLDocker/sql/techsupport.sql /docker/scripts/techsupport/

Now we are ready to run our MySQL docker containers.
First I will run the docker container for techsupport application.
# docker run --name mysql-techsupport -v /docker/scripts/techsupport/techsupport.sql:/docker-entrypoint-initdb.d/techsupport.sql -p 3310:3306 -e MYSQL_ROOT_PASSWORD=root -d mysql:5.5
The key to populating data in the MySQL container is the /docker-entrypoint-initdb.d directory. When we start a MySQL container for the first time, it executes any files with the extensions .sh, .sql and .sql.gz found in /docker-entrypoint-initdb.d. I mount the SQL script file /docker/scripts/techsupport/techsupport.sql to /docker-entrypoint-initdb.d/techsupport.sql in the MySQL container using the -v flag, so that techsupport.sql gets executed when the container runs for the first time.
After the MySQL container is started for the first time, if we inspect the running processes we can see that the mysql client is also running, executing the statements of the SQL file. We cannot connect to this MySQL server from outside until the SQL script execution is complete (we can see that mysqld is running with the --skip-networking option).
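Because of this --skip-networking phase, a script that depends on the freshly started container should wait until MySQL accepts connections. A generic retry helper can do this (a sketch; the function name is my own, and the commented mysql command matches the port mapping used above):

```shell
# Retry a command every 2 seconds until it succeeds
wait_for() {
    until "$@" >/dev/null 2>&1; do
        sleep 2
    done
}

# Example (hypothetical host; port 3310 is the techsupport container):
# wait_for mysql -h 127.0.0.1 -P 3310 -u root -proot -e 'SELECT 1'
```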

Next I will run the MySQL container for blog application
# docker run --name mysql-blog -v /docker/scripts/blog/blog.sql:/docker-entrypoint-initdb.d/blog.sql -p 3320:3306 -e MYSQL_ROOT_PASSWORD=root -d mysql:5.5
Now both MySQL containers are running: one on port 3310 and the other on port 3320 of my server.
Let’s inspect whether the databases got created in our containers.
First connect to mysql-techsupport container and check:

# docker run -it --link  mysql-techsupport --rm mysql:5.5 /bin/bash
Yes, we can see that the techsupport database got created.
Data is also present, so our MySQL container is populated with the required data.
Let’s check the second container mysql-blog
The second container is also populated with data.

Note: above, I connected to the MySQL server using the IP address (mysql -h 172.17.0.3 -u root -proot). To get the IP address we can run env, or we can use the environment variable MYSQL_BLOG_PORT_3306_TCP_ADDR instead of the IP address.