TensorFlow Docker Image

TensorFlow Docker Installation & Setup


1.       Get the TensorFlow Docker image
:/> docker pull tensorflow/tensorflow


2.       Start the instance
:/> docker run -it  -p 8888:8888 --rm tensorflow/tensorflow



3.       Open the URL with the port and token shown in the output of the above command to access the notebook.
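The URL printed in the container output typically looks like the following; the token value here is only a placeholder for the one generated at startup:

http://127.0.0.1:8888/?token=<your-token>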





4.       Log in to the tensor container
NOTE: "tensor" below is the name of the container; otherwise use the container ID.




Removing Docker images and containers


1. List & Remove images
docker images -a
docker rmi image-name1 image-name2

2. List & Remove dangling images
docker images -f dangling=true
docker rmi $(docker images -f dangling=true -q)

3. Remove all images
docker rmi $(docker images -a -q)

4. List & Remove images by pattern
docker images -a | grep "pattern"
docker images -a | grep "pattern" | awk '{print $3}' | xargs docker rmi

5. List & Remove containers

docker ps -a
docker rm container1_id container2_id

docker container rm container-name


6. Remove container upon exit
docker run --rm image_name

7. List & Remove all exited containers
docker ps -a -f status=exited
docker rm $(docker ps -a -f status=exited -q)

8. List & Remove using multiple filters
docker ps -a -f status=exited -f status=created
docker rm $(docker ps -a -f status=exited -f status=created -q)

9. List and Remove all containers by pattern

docker ps -a | grep "pattern"
docker ps -a | grep "pattern" | awk '{print $1}' | xargs docker rm


10. Stop and remove all containers
docker ps -a

docker stop $(docker ps -a -q) 
docker rm $(docker ps -a -q)

To stop a single container
docker stop container-name

11. List & Remove volumes
docker volume ls
docker volume rm volume_name volume_name

12. List and remove all dangling volumes
docker volume ls -f dangling=true
docker volume rm $(docker volume ls -f dangling=true -q)

13. Remove a container together with its anonymous volumes
docker rm -v container_name


Docker Swarm and Creating a Cluster on Docker

Initializing and Joining a Swarm
sudo docker swarm init

NOTE: if the host (or guest VM) has multiple network interfaces, you need to specify "--advertise-addr" with the specific IP to advertise.

It worked fine after specifying the IP.
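As a sketch, with a placeholder IP standing in for the manager's address on the chosen interface:

sudo docker swarm init --advertise-addr 192.168.1.10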



From the second VM, run the following command to join the swarm as a worker node.
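The exact command, including the join token, is printed by docker swarm init on the manager; it has this general shape (the token and IP below are placeholders):

sudo docker swarm join --token <worker-token> <manager-ip>:2377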



To list all swarm nodes connected to the manager
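The standard command for this (it must be run on the manager) is:

sudo docker node ls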


NOTE: Only swarm managers can execute Docker commands; workers just provide capacity.



For deploying an application (service)
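Assuming a docker-compose.yml in the current directory, the stack is deployed (and later re-deployed) with the same command used further below:

:/> docker stack deploy -c docker-compose.yml getstartedlab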




To change the number of replicas (service tasks)
a.       Simply modify the docker-compose file (see the sketch below)
b.       Re-run the stack deploy command as shown
:/> docker stack deploy -c docker-compose.yml getstartedlab
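A minimal docker-compose.yml sketch showing where the replica count lives; the image name and port mapping are placeholders, not taken from these notes:

version: "3"
services:
  web:
    image: username/repo:tag   # placeholder image
    deploy:
      replicas: 5              # change this number, then re-run the stack deploy
    ports:
      - "80:80"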

To list the services which are running
:/> docker service ls

To list the tasks of the stack and the nodes they are running on
:/> docker stack ps getstartedlab

To remove (uninstall) the stack
:/> docker stack rm getstartedlab



Leaving the Swarm
sudo docker swarm leave



NOTE: you may need to use "--force" to make the last manager leave the swarm.

Docker Volumes


Docker offers three different ways to mount data into a container from the Docker host: volumes, bind mounts, or tmpfs mounts. When in doubt, volumes are almost always the right choice.



·         Volumes are stored in a part of the host filesystem which is managed by Docker (/var/lib/docker/volumes/ on Linux). Non-Docker processes should not modify this part of the filesystem. Volumes are the best way to persist data in Docker.
·         Bind mounts may be stored anywhere on the host system. They may even be important system files or directories. Non-Docker processes on the Docker host or a Docker container can modify them at any time.
·         tmpfs mounts are stored in the host system’s memory only, and are never written to the host system’s filesystem.

There are three main use cases for Docker data volumes:
1.      To keep data around when a container is removed
2.      To share data between the host filesystem and the Docker container
3.      To share data with other Docker containers (see the sketch below)
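A minimal sketch of use case 3, sharing a named volume between two containers (the volume name and the alpine image here are only illustrative):

$ docker volume create shared-data
$ docker run --rm -v shared-data:/data alpine sh -c 'echo hello > /data/hello.txt'
$ docker run --rm -v shared-data:/data alpine cat /data/hello.txt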

Volume Commands

Create a volume:
$ docker volume create my-vol

List volumes:
$ docker volume ls

Inspect a volume:
$ docker volume inspect my-vol

Remove a volume:
$ docker volume rm my-vol

Start a container using the volume:
$ docker run -d -it --name=nginxtest -v nginx-vol:/usr/share/nginx/html nginx:latest

Clean up the container and volume:
$ docker container stop nginxtest
$ docker container rm nginxtest
$ docker volume rm nginx-vol

To mount the volume read-only inside the container:
$ docker run -d -it --name=nginxtest -v nginx-vol:/usr/share/nginx/html:ro nginx:latest

Bind Mounts (Sharing Data Between the Host and the Docker Container)


$ docker run -dit --name devtest -v "$(pwd)"/target:/app nginx:latest

$ docker run -d -v ~/nginxlogs:/var/log/nginx -p 5000:80 -i nginx:latest

-v ~/nginxlogs:/var/log/nginx: this sets up a bind mount that links the /var/log/nginx directory from inside the Nginx container to the ~/nginxlogs directory on the host machine. Docker uses a : to split the host path from the container path, and the host path always comes first.
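To check that the bind mount works (port 5000 matches the mapping above, and the log file is created by nginx inside the mounted directory):

$ curl -i http://localhost:5000
$ cat ~/nginxlogs/access.log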


Installing Docker CE on Ubuntu

Before you install Docker CE (first time) on a new host machine, you need to set up the Docker repository. Afterward, you can install and update Docker from the repository.

1. Update the apt package index:

$ sudo apt-get update

2. Install packages to allow apt to use a repository over HTTPS:

$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common

3. Add Docker’s official GPG key:

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

4. Verify that you now have the key with the fingerprint 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88, by searching for the last 8 characters of the fingerprint.

$ sudo apt-key fingerprint 0EBFCD88

pub   4096R/0EBFCD88 2017-02-22
      Key fingerprint = 9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
uid                  Docker Release (CE deb) <docker@docker.com>
sub   4096R/F273FCD8 2017-02-22


5. Use the following command to set up the stable repository. You always need the stable repository, even if you want to install builds from the edge or testing repositories as well. To add the edge or testing repository, add the word edge or testing (or both) after the word stable in the commands below.

Note: The lsb_release -cs sub-command below returns the name of your Ubuntu distribution, such as xenial. Sometimes, in a distribution like Linux Mint, you might have to change $(lsb_release -cs) to your parent Ubuntu distribution. For example, if you are using Linux Mint Rafaela, you could use trusty.

amd64:

$ sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

NOTE: This adds the repository URL to "sources.list" under /etc/apt.

INSTALL DOCKER CE

1. Update the apt package index.

$ sudo apt-get update

2. Install the latest version of Docker CE, or go to the next step to install a specific version. Any existing installation of Docker is replaced.

$ sudo apt-get install docker-ce

or, to install a specific version, use the command below:
$ sudo apt-get install docker-ce=<VERSION>
The Docker daemon starts automatically.
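To see which version strings are available in your repository (the second column of this output is what goes in place of <VERSION> above), the standard helper is:

$ apt-cache madison docker-ce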

3. Verify that Docker CE is installed correctly by running the hello-world image.

$ sudo docker run hello-world
This command downloads a test image and runs it in a container. When the container runs, it prints an informational message and exits.





Virtual Machines vs Docker Containers


What is Docker?

A Docker container can be described as a wrapper around a piece of software that contains everything needed to run that software. This ensures the app will run the same no matter what environment it runs in.

Virtual Machines vs Docker Containers

VirtualBox and VMware are virtualization apps that create virtual machines isolated at the hardware level.

Docker is a containerization app that isolates apps at the software level.





Virtual Machines | Docker Containers
Hardware-level process isolation | OS-level process isolation
VMs offer complete isolation of applications from the host OS | Docker containers can share some resources with the host OS
Each VM has a separate OS | Each Docker container can share OS resources
Boots in minutes | Boots in seconds
More resource usage | Less resource usage
Pre-configured VMs are hard to find and manage | Pre-built Docker containers for home server apps are already available
Customizing pre-configured VMs requires work | Building a custom setup with containers is easy
VMs are typically bigger in size, as they contain a whole OS underneath | Docker containers are small in size, with only the Docker engine over the host OS
VMs can be easily moved to a new host OS | Containers are destroyed and recreated rather than moved (the data volume is backed up)
Creating VMs takes a relatively long time | Containers can be created in seconds
Virtualized apps are harder to find, and it takes more time to install and run them | Containerized apps such as SickBeard, Sonarr, CouchPotato, etc. can be found and installed within minutes

Docker vs Linux LXC



Linux cgroups, originally developed by Google, govern the isolation and usage of system resources, such as CPU and memory, for a group of processes.
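As an illustration, these cgroup-backed controls surface in Docker as resource flags on docker run; the limits below are arbitrary example values:

$ docker run -it --memory=512m --cpus=1 ubuntu bash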

Linux namespaces, originally developed by IBM, wrap a set of system resources and present them to a process to make it look like they are dedicated to that process.

The original Linux container technology is Linux Containers, commonly known as LXC. LXC is a Linux operating system level virtualization method for running multiple isolated Linux systems on a single host. Namespaces and cgroups make LXC possible.

Single vs. multiprocess. Docker restricts containers to run as a single process. If your application environment consists of X concurrent processes, Docker wants you to run X containers, each with a distinct process. By contrast, LXC containers have a conventional init process and can run multiple processes.

Stateless vs. stateful. Docker containers are designed to be stateless, more so than LXC. First, Docker does not support persistent storage. Docker gets around this by allowing you to mount host storage as a “Docker volume” from your containers. Because the volumes are mounted, they are not really part of the container environment.


Second, Docker containers consist of read-only layers. This means that, once the container image has been created, it does not change. During runtime, if the process in a container makes changes to its internal state, a “diff” is made between the internal state and the image from which the container was created. If you run the docker commit command, the diff becomes part of a new image—not the original image, but a new image, from which you can create new containers. Otherwise, if you delete the container, the diff disappears.
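As an illustration of this commit workflow (the container and image names here are made up):

$ docker run -it --name mycontainer ubuntu bash    # make some changes inside the container, then exit
$ docker commit mycontainer myimage:snapshot       # the diff becomes part of a new image
$ docker run -it myimage:snapshot bash             # new containers can be created from the new image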