Playing with Docker

Docker is a tool to run applications in an isolated environment. It provides the same advantages as running the application on a virtual machine, but it is far more lightweight and fast, and it offers several other advantages.

Advantages:

1. Same environment everywhere the application runs
2. Sandboxed projects
3. Eliminates the configuration and setup phase of a virtual machine (e.g. VirtualBox)

Demo

There is a tutorial here. Go through the Docker for Beginners workshop and you will have a very good understanding of using Docker.

I am going to list a few of the commands as a reference in this blog.

First complete these steps:

  • Install Docker on your machine
  • Start the Docker service
  • Run a standard container (e.g. hello-world or alpine)

Note: If you are a Linux user, add your user to the docker group to avoid typing sudo in front of every docker command.

Basic commands
  • Check Docker info
    docker info
  • Check the Docker images on your machine
    docker images

  • Check containers (running, or all with -a)
    docker ps
    docker ps -a

  • Stop a running container
    docker stop <container-id or container-name>
  • Remove a container
    docker rm <container-id or container-name>
  • Remove a Docker image
    docker rmi <image-id or image-name>
  • Run docker build to create an image (a minimal Dockerfile sketch follows this list)
    docker build -t USERNAME/IMG_TAG_NAME .
  • Run an image
    docker run -p HOST_PORT:GUEST_PORT -v HOST_PATH:GUEST_PATH IMAGE_NAME
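
The docker build command above expects a Dockerfile in the current directory. Here is a minimal sketch for a Node.js app; the base image, port, and entry point are illustrative assumptions, not anything from this post:

    # Base image (assumed; pick one that fits your app)
    FROM node:alpine
    # Working directory inside the image
    WORKDIR /app
    # Copy the project files in and install dependencies at build time
    COPY . .
    RUN npm install
    # Document the port the app listens on
    EXPOSE 3000
    # Hypothetical entry point; adjust to your app
    CMD ["node", "index.js"]
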
Run interactive command on a container
Let's say you have a container running (say ubuntu). Now you want to connect to the container and run some shell commands from your host machine. You can use this command to connect to the running container and test whether your scripts work:

docker exec -it <container name or id> sh  

Run MongoDB in Docker:

  1. docker run --name mongo -p 27017:27017 -v ~/mongodb/data/:/data/db -d mongo
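
You can verify the container is up with docker ps. The classic mongo image also ships the mongo client, so a quick connectivity check is to open it inside the container:

docker exec -it mongo mongo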

Run another container and link the previous container (mongo) to it:
sudo docker run -itd -e NODE_ENV=production -e PEDA_HOST=peda.app.rajanu.com.np -e PEDA_PORT=3000 --link mongo --name=peda -p 3000:3000 rajanpupa/peda
The above command links the mongo container to the peda container. Docker will create some environment variables in the peda container which can be used to connect to the mongo container from the peda container.

For example:

MONGO_PORT_27017_TCP_ADDR = 127.0.0.10

which can be accessed from a Node.js app running in peda via

process.env.MONGO_PORT_27017_TCP_ADDR

If the container started successfully, you can attach to it from the host's command line using the following command.

docker attach peda

To push the image to Docker Hub:
docker push rajanpupa/peda
Before that, you need a Docker Hub account and you need to be logged in:
docker login

To see the logs of a container (for debugging):
docker logs <id>

Simple, right?

Pivotal Cloud Foundry Concepts



Cloud Foundry is an open platform as a service, providing a choice of clouds, developer frameworks, and application services. It is an open source project and is available through a variety of private cloud distributions and public cloud instances. Pivotal's implementation of Cloud Foundry is called Pivotal Cloud Foundry (PCF).

Like any cloud environment, PCF provides an easy way to deploy many kinds of applications developed in different languages and frameworks, along with scalability and availability for your applications. You pay as you go, shut down what you don't need, and its pricing is very competitive compared to other cloud providers. The PCF website gives you a starting credit of eighty-something dollars, which you can use to play with the PCF environment. If your application is very basic, this credit can last for years: an application with only one instance and a memory limit of 128MB is charged almost $3 per month. Node.js is a good language to play with on PCF, as its memory footprint is pretty low (especially compared to Java). If you are using the Spring Boot Java framework, you need a minimum of 512MB of memory allocated per instance for it to function properly.

Getting started
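
A typical first deployment with the cf command-line tool looks something like this (the API endpoint, app name, and sizes are illustrative assumptions):

    # log in to the platform's API endpoint
    cf login -a api.run.pivotal.io
    # deploy the app in the current directory with a 128MB memory limit
    cf push my-app -m 128M
    # view recent logs, then scale out to two instances
    cf logs my-app --recent
    cf scale my-app -i 2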

AWS Experience

I have been thinking about creating an Amazon developer account and experimenting with some EC2 stuff for about two years now. Recently I created an account and began experimenting with the free tier.

Instances

Basically, Amazon offers a free micro instance for a year. You have the option to choose from different operating systems, from Windows to Ubuntu, Red Hat, and many more. The name of the instance type (nano, micro, ...) reflects the specifications that instance will have. For example, the micro instance has 8GB of local disk space and 1GB of memory, with one processor core. Similarly, you can choose higher instance types based on your need. The cost of the instances goes up with the specifications.

AMI

There are also many other images (AMIs) available which have different features already installed and configured. Some AMIs are already configured for a Java environment, while others have pre-installed databases and other languages and packages. You can even create your own AMI once you have installed and configured your instance. I think Amazon charges you for the storage of the AMI, but it's pretty low compared to keeping an instance alive. The benefit of an AMI is that you don't have to spend hours installing the required packages and configuring your machine every time you fire up a new instance.
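
For reference, creating an AMI from a configured instance is a single CLI call; the instance ID and image name here are illustrative assumptions:

    aws ec2 create-image --instance-id i-0123456789abcdef0 --name "my-configured-ami"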

Security Groups

You can attach various security groups to your instance. A security group is where you configure your security settings, such as:
  • whether incoming traffic should be allowed or not
  • which IP addresses and ports are open for incoming and outgoing traffic
By default, ports are not open and you can't connect to your remote instance via SSH or anything else. You have to manually allow connections via the security group settings, for example as sketched below.
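
With the AWS CLI, opening a port in a security group looks something like this (the group ID is an illustrative assumption, and 0.0.0.0/0 opens the port to the whole internet, so narrow the CIDR range in practice):

    # allow inbound SSH (TCP port 22) from anywhere
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 0.0.0.0/0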

Storage

In addition to the local storage of each instance, you have the option to choose other storage options for higher capacity and reliability. Local storage is not reliable for persistent information, as it may be lost when the instance restarts for some reason.

S3 is the most popular simple bulk object storage service provided by Amazon, at a reasonable cost that depends on storage capacity and data transfer. It's good for long-term storage of objects. With the API and key-based authentication, it's very easy to store and retrieve files and objects.

Load Balancing

Amazon has a load balancing service which is very handy for scaling and security. You can create a load-balancer instance and point it at an instance group, and the load-balancer automatically distributes the load among the servers, in round-robin or another configured fashion. From a security standpoint, you can configure the security group of your instances to be private so that they cannot be accessed publicly from the internet and can only be reached via internal components such as the load-balancer. This definitely makes the job of an attacker very difficult, if not impossible.

Auto Scaling

There is an auto scaling feature available in AWS which can automatically increase or decrease the number of instances depending on resource utilization. For example, you can configure your group to add one instance every 5 minutes for the next 10 minutes if CPU utilization is above 70%, and similarly remove instances if CPU utilization falls below 50%, down to a minimum of one instance, and so on. A sketch of what this looks like with the CLI follows.
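
With the AWS CLI, a simple scale-out setup might look like the sketch below (the group, launch configuration, and policy names are illustrative assumptions; the CPU alarms that actually trigger the policy are configured separately in CloudWatch):

    # keep between 1 and 4 instances running
    aws autoscaling create-auto-scaling-group --auto-scaling-group-name my-asg --launch-configuration-name my-launch-config --min-size 1 --max-size 4 --availability-zones us-east-1a
    # add one instance each time the scale-out policy fires
    aws autoscaling put-scaling-policy --auto-scaling-group-name my-asg --policy-name scale-out --scaling-adjustment 1 --adjustment-type ChangeInCapacity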

Services

AWS provides various types of built-in services which you can use directly. The services include the following:
  • Computing services
  • Storage
  • Databases
  • Developer tools (CodeCommit, CodeBuild, CodePipeline)
  • Management tools (CloudWatch, AWS Config, CloudFormation)

AWS SDK

Amazon also provides SDKs which can be used to automate interaction with AWS. You can basically add the AWS SDK dependency to your Maven or Gradle file and use it to interact with AWS: automating work with S3 storage, creating and configuring EC2 instances, and much more.
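
As a minimal sketch with the AWS SDK for Java (v1), assuming the aws-java-sdk-s3 dependency is on your Maven/Gradle classpath and credentials are configured in the default credential chain; the bucket, key, and file names are illustrative:

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.GetObjectRequest;

    import java.io.File;

    public class S3Example {
        public static void main(String[] args) {
            // The client picks up credentials and region from the default provider chain
            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

            // Upload a local file (bucket and key are illustrative assumptions)
            s3.putObject("my-example-bucket", "backups/notes.txt", new File("notes.txt"));

            // Download the same object back to a local file
            s3.getObject(new GetObjectRequest("my-example-bucket", "backups/notes.txt"),
                    new File("notes-copy.txt"));
        }
    }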

Here is a very good YouTube video on using the SDK to interact with S3 for storage.

I will keep adding to this blog as I learn more.