Tuesday, March 29, 2016

Deploy All-In-One Kubernetes and Docker from Scratch with Flannel Networking


This post explains how to install Kubernetes, launch a container, and finally switch the networking over to Flannel.

Package Installation and Environment Setup

First, install Docker and the other required packages.

$ apt-get update
$ apt-get install ssh
$ apt-get install docker.io
$ apt-get install curl

Set up passwordless SSH to localhost.

ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub 127.0.0.1
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
$ ssh root@127.0.0.1
exit

Download Kubernetes v1.0.1.

$ wget https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v1.0.1/kubernetes.tar.gz
tar -xvf kubernetes.tar.gz

Running the All-in-One Package

Now start the installation.

cd kubernetes/cluster/ubuntu
./build.sh

Set up the config file.

cd
vi kubernetes/cluster/ubuntu/config-default.sh
export nodes="root@127.0.0.1"
export roles="ai"
export NUM_MINIONS=${NUM_MINIONS:-1}

Bring up all the components.

$ cd kubernetes/cluster
$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh

Export the binaries directory to the system PATH so the Kubernetes executables can be run from anywhere, then test with kubectl.

export PATH=$PATH:~/kubernetes/cluster/ubuntu/binaries
kubectl --help

Check the current installation status.

kubectl get nodes

Install the kube-ui addon; this step is optional.

$ kubectl create -f addons/kube-ui/kube-ui-rc.yaml --namespace=kube-system
$ kubectl create -f addons/kube-ui/kube-ui-svc.yaml --namespace=kube-system

The single-node setup is now complete, so you can run the following command to bring up a wordpress container.
If you want to switch to the Flannel network architecture instead, you can skip the next steps and go straight to the Flannel installation.

kubectl run wordpress --image=tutum/wordpress --port=80 --hostport=81

Use kubectl to check whether the container has started. It takes about a minute, because the image has to be pulled from Docker Hub.

kubectl get pods
kubectl get rc

You can also check with the docker command.

docker ps

Take a look at the network layout; it is the same as Docker's default networking.

# ifconfig
docker0   Link encap:Ethernet  HWaddr 56:84:7a:fe:97:99
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::5484:7aff:fefe:9799/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1072 (1.0 KB)  TX bytes:648 (648.0 B)

eth0      Link encap:Ethernet  HWaddr 00:0c:29:ad:f5:61
          inet addr:172.16.235.128  Bcast:172.16.235.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fead:f561/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:300420 errors:0 dropped:0 overruns:0 frame:0
          TX packets:127648 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:437238373 (437.2 MB)  TX bytes:8566239 (8.5 MB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:522030 errors:0 dropped:0 overruns:0 frame:0
          TX packets:522030 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:207725326 (207.7 MB)  TX bytes:207725326 (207.7 MB)

veth0ef8f9d Link encap:Ethernet  HWaddr 26:f0:39:f9:65:3a
          inet6 addr: fe80::24f0:39ff:fef9:653a/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:24 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:1944 (1.9 KB)

vetha2a43fc Link encap:Ethernet  HWaddr d2:ad:ea:e1:ff:7f
          inet6 addr: fe80::d0ad:eaff:fee1:ff7f/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)
          

Connect to the container and check its IP.

root@wordpress-j0f8t:/# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:ac:11:00:02
          inet addr:172.17.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
          

etcd plays a very important role in Kubernetes. Take a look at its contents with etcdctl.

root@kubernetes:~# /opt/bin/etcdctl ls
/registry
root@kubernetes:~# ps aux|grep kube

root       4763  0.0  0.1   8392  1972 ?        Ssl  13:20   0:00 /kube-ui
root      10628  3.2  3.6 248140 36140 ?        Ssl  13:35   0:03 /opt/bin/kubelet --address=0.0.0.0 --port=10250 --hostname_override=127.0.0.1 --api_servers=http://127.0.0.1:8080 --logtostderr=true --cluster_dns=192.168.3.10 --cluster_domain=cluster.local
root      10629  0.4  1.2  16276 12500 ?        Ssl  13:35   0:00 /opt/bin/kube-scheduler --logtostderr=true --master=127.0.0.1:8080
root      10631  1.2  3.9  48896 39936 ?        Ssl  13:35   0:01 /opt/bin/kube-apiserver --address=0.0.0.0 --port=8080 --etcd_servers=http://127.0.0.1:4001 --logtostderr=true --service-cluster-ip-range=192.168.3.0/24 --admission_control=NamespaceLifecycle,NamespaceAutoProvision,LimitRanger,ServiceAccount,ResourceQuota --service_account_key_file=/tmp/kube-serviceaccount.key --service_account_lookup=false
root      10632  0.6  1.9  25556 19248 ?        Ssl  13:35   0:00 /opt/bin/kube-controller-manager --master=127.0.0.1:8080 --service_account_private_key_file=/tmp/kube-serviceaccount.key --logtostderr=true
root      10633  0.2  1.4 203044 14244 ?        Ssl  13:35   0:00 /opt/bin/kube-proxy --master=http://127.0.0.1:8080 --logtostderr=true

Installing Flannel

Now let's switch it to the Flannel network.

vim /etc/default/flanneld
FLANNEL_OPTS="--etcd-endpoints=http://127.0.0.1:2379"
service flanneld stop 
service docker stop
service kubelet stop
service kube-proxy stop
apt-get install -y bridge-utils
ip link set dev docker0 down
brctl delbr docker0
/opt/bin/etcdctl rm --recursive /coreos.com/network
/opt/bin/etcdctl set /coreos.com/network/config '{ "Network": "172.17.0.0/16" }'
service flanneld start

You will see /run/flannel/subnet.env

ls /run/flannel/subnet.env

and modify /etc/default/docker

vim /etc/default/docker
. /run/flannel/subnet.env 
DOCKER_OPTS="--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"

where the /run/flannel/subnet.env file looks like this:

vim /run/flannel/subnet.env
FLANNEL_SUBNET=172.17.16.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false

Then restart the services and bring up the wordpress container again.

service docker start
service kubelet start
service kube-proxy start
kubectl run wordpress --image=tutum/wordpress --port=80 --hostport=81
kubectl get pods
kubectl get rc
docker ps -a

After about a minute, you will see that the pod has been launched.

root@kubernetes:~/kubernetes/cluster# kubectl get pods
NAME              READY     REASON    RESTARTS   AGE
wordpress-gp4cl   1/1       Running   0          7m

You will see the Flannel network here.

root@kubernetes:~/kubernetes/cluster# ifconfig
docker0   Link encap:Ethernet  HWaddr 56:84:7a:fe:97:99
          inet addr:172.17.16.1  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::5484:7aff:fefe:9799/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1472  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:536 (536.0 B)  TX bytes:648 (648.0 B)

eth0      Link encap:Ethernet  HWaddr 00:0c:29:ad:f5:61
          inet addr:172.16.235.128  Bcast:172.16.235.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fead:f561/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:297104 errors:0 dropped:0 overruns:0 frame:0
          TX packets:118915 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:435040989 (435.0 MB)  TX bytes:7818950 (7.8 MB)

flannel0  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:172.17.16.0  P-t-P:172.17.16.0  Mask:255.255.0.0
          UP POINTOPOINT RUNNING  MTU:1472  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:87012 errors:0 dropped:0 overruns:0 frame:0
          TX packets:87012 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:184668449 (184.6 MB)  TX bytes:184668449 (184.6 MB)

vethbd09c72 Link encap:Ethernet  HWaddr 96:0a:11:72:07:77
          inet6 addr: fe80::940a:11ff:fe72:777/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1472  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:1296 (1.2 KB)
          
          

Comparing with the official site, you will see the result is correct.

However, we only have one node here.

Also log in to the container and check its IP.

root@wordpress-gp4cl:/# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:ac:11:10:02
          inet addr:172.17.16.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:acff:fe11:1002/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1472  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1296 (1.2 KB)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Network Issues

Inside the container, I found that pinging www.google.com failed. Checking the container's nameservers:

vim /etc/resolv.conf
nameserver 192.168.3.10
nameserver 172.16.235.2
search default.svc.cluster.local svc.cluster.local cluster.local localdomain

Delete the 192.168.3.10 nameserver:

vim /etc/resolv.conf
nameserver 172.16.235.2
search default.svc.cluster.local svc.cluster.local cluster.local localdomain

Ping again, and now it works.

root@wordpress-gp4cl:/# ping www.google.com
PING www.google.com (64.233.188.103) 56(84) bytes of data.
64 bytes from tk-in-f103.1e100.net (64.233.188.103): icmp_seq=1 ttl=127 time=10.2 ms
64 bytes from tk-in-f103.1e100.net (64.233.188.103): icmp_seq=2 ttl=127 time=9.81 ms

This revealed a problem in the deploy config, which can be fixed next time.

root@kubernetes:~/kubernetes/cluster# grep -R  192.168.3 *
ubuntu/config-default.sh:export SERVICE_CLUSTER_IP_RANGE=${SERVICE_CLUSTER_IP_RANGE:-192.168.3.0/24}  # formerly PORTAL_NET
ubuntu/config-default.sh:DNS_SERVER_IP=${DNS_SERVER_IP:-"192.168.3.10"}

Try changing it to 8.8.8.8. This was changed manually inside the container; for the next deployment, edit DNS_SERVER_IP in the ubuntu/config-default.sh file mentioned above.

nameserver 8.8.8.8
search default.svc.cluster.local svc.cluster.local cluster.local localdomain

Ping Google again; it works well.

root@wordpress-gp4cl:/# ping www.google.com
PING www.google.com (74.125.203.106) 56(84) bytes of data.
64 bytes from th-in-f106.1e100.net (74.125.203.106): icmp_seq=1 ttl=127 time=11.4 ms
64 bytes from th-in-f106.1e100.net (74.125.203.106): icmp_seq=2 ttl=127 time=12.5 ms
64 bytes from th-in-f106.1e100.net (74.125.203.106): icmp_seq=3 ttl=127 time=15.0 ms

To check the detailed information, use curl.

root@kubernetes:/# curl -s http://localhost:4001/v2/keys/coreos.com/network/subnets | python -mjson.tool
{
    "action": "get",
    "node": {
        "createdIndex": 37,
        "dir": true,
        "key": "/coreos.com/network/subnets",
        "modifiedIndex": 37,
        "nodes": [
            {
                "createdIndex": 37,
                "expiration": "2016-03-30T07:37:28.947871304Z",
                "key": "/coreos.com/network/subnets/172.17.16.0-24",
                "modifiedIndex": 37,
                "ttl": 83666,
                "value": "{\"PublicIP\":\"172.16.235.128\"}"
            }
        ]
    }
}
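
Beyond curl, the same response can be consumed programmatically. Below is a minimal Go sketch, using only the standard encoding/json package, that decodes the subnet list. The struct fields cover only the parts of the etcd v2 response shown above, and the embedded JSON is an abbreviated copy of the body returned by the curl call.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Only the fields we care about from the etcd v2 keys response.
type etcdNode struct {
	Key   string     `json:"key"`
	Value string     `json:"value"`
	Nodes []etcdNode `json:"nodes"`
}

type etcdResponse struct {
	Action string   `json:"action"`
	Node   etcdNode `json:"node"`
}

// subnets returns, for each flannel lease, the subnet key and its backing PublicIP.
func subnets(body []byte) (map[string]string, error) {
	var resp etcdResponse
	if err := json.Unmarshal(body, &resp); err != nil {
		return nil, err
	}
	out := make(map[string]string)
	for _, n := range resp.Node.Nodes {
		// Each node's value is itself a JSON document.
		var v struct {
			PublicIP string `json:"PublicIP"`
		}
		if err := json.Unmarshal([]byte(n.Value), &v); err != nil {
			return nil, err
		}
		out[n.Key] = v.PublicIP
	}
	return out, nil
}

func main() {
	// Abbreviated body from the curl call above.
	body := []byte(`{"action":"get","node":{"key":"/coreos.com/network/subnets","dir":true,
	 "nodes":[{"key":"/coreos.com/network/subnets/172.17.16.0-24",
	 "value":"{\"PublicIP\":\"172.16.235.128\"}"}]}}`)
	s, err := subnets(body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(s)
}
```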

In the next post, we will look at how to enable the VXLAN mode.

Saturday, March 19, 2016

Implement etcd Watch Using Golang


I have always liked etcd as a key-value store. It is accessed over HTTP, and CoreOS provides a powerful client for it.
It took me a long time to figure out how to write a Watch. Along the way I also relied on the auto-completion I set up earlier (see my previous blog post); C-x C-o is really powerful.
You can watch not just for changes in general, but for specific operations as well. The code is below.

package main

import (
    "log"
    "time"

    "github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/net/context"
    "github.com/coreos/etcd/client"
)

func main() {
    cfg := client.Config{
        Endpoints: []string{"http://127.0.0.1:2379"},
        Transport: client.DefaultTransport,
        // Set a per-request timeout to fail fast when the target endpoint is unavailable.
        HeaderTimeoutPerRequest: time.Second,
    }
    c, err := client.New(cfg)
    if err != nil {
        log.Fatal(err)
    }
    kapi := client.NewKeysAPI(c)

    // Watch every key under /cred/ recursively.
    watcher := kapi.Watcher("/cred/", &client.WatcherOptions{
        Recursive: true,
    })
    log.Println("ready to watch")
    for {
        detail, err := watcher.Next(context.Background())
        if err != nil {
            log.Fatal(err)
        }
        switch detail.Action {
        case "set":
            log.Println("set", detail.Node.Key, detail.Node.Value)
        case "update":
            log.Println("update", detail.Node.Key)
        case "expire":
            log.Println("expire", detail.Node.Key)
        case "delete":
            log.Println("delete", detail.Node.Key)
        }
    }
}

It is a pity that etcd removed the Lock feature; I believe it will be brought back in the future.

Monday, March 14, 2016

A First Look at Docker Containers


Things have been a bit busy lately, so this post mixes languages and I have not paid much attention to grammar.

This is just a first look, so I will simply use the Docker package shipped with Ubuntu 14.04 without upgrading it.

apt-get update 
apt-get -y install docker.io

Testing the Installation

Once installed, pull an Ubuntu container image to try it out, then connect into the container.

docker pull ubuntu
docker run -i -t ubuntu /bin/bash

A test image

Download a webapp image from Docker Hub.

This only downloads the image without starting a container.

docker pull training/webapp

Run the webapp for testing.

docker run -d -p 80:5000 training/webapp python app.py

This maps the container's port 5000 to the host's port 80, so one can use

curl http://hostip:80

to test it, where hostip is the host's IP.

commit a modified image

You can commit a running container, and doing so does not affect its running status (Up).

docker commit -m "Added new webapp" -a "Docker Newbee for webapp" 89f4c0576be3 training/webapp2:v2

where 89f4c0576be3 is the container ID, training/webapp2 is the repository name, and v2 is a tag.
The default tag is latest, so if you want to run the image with the v2 tag, you can use

docker run -d -p 81:5001 training/webapp2:v2 python app1.py

Why Exited is shown

If no job is running in the container, docker marks it as Exited (0) when you type docker ps -a. You can test this yourself; after 10 seconds the following container will show Exited (0).

docker run --name cont6 ubuntu sleep 10 &

If the command is run with -t (allocate a TTY), it will not show Exited; the status will always show Up.

docker run -t --name cont7 ubuntu /bin/bash

Time Namespace Problem

If you run a container with the following command, changing the date is not permitted; admin privileges are restricted.

docker run -t --name intermode2 ubuntu

But running with the following command releases the full admin privileges.

docker run -t --privileged --name intermode2 ubuntu

When you change the date with the date -s "2000/01/01 00:00:00" command inside the container,
the host's date changes as well, since the kernel does not support a time namespace.
Disabling --privileged makes the user effectively a guest rather than an admin, and some commands, such as mount, are also disabled. If you need admin rights inside the container, you cannot avoid this missing-namespace problem.

tty Connection

Connect a TTY to a container using the docker command:

docker exec -i -t id bash

Docker Link

Docker links only work between containers on the same server.

Run the db container first.

docker run -d --name db training/postgres

Then run the web container.

docker run -d -P --name web --link db:db training/webapp python app.py

Connecting to the web container and checking /etc/hosts shows the following; the link simply modified /etc/hosts to add the db's information.

172.17.0.4      8eb308a6f375
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3      db a768185b48d7

The coolest thing is this:
if the db's IP changes due to DHCP or similar, the /etc/hosts in the web container is updated automatically.

One can check the detail connection information through:

docker inspect -f "{{ .HostConfig.Links }}" web

The value lives in the HostConfig section, under the Links field, of the docker inspect output.
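
Incidentally, the -f format string is a Go text/template evaluated against the inspect data. The sketch below only illustrates that mechanism; ContainerInfo and HostConfig here are hypothetical stand-ins with the same field names, not Docker's actual types.

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"text/template"
)

// Hypothetical stand-ins for the inspect JSON; only the field names matter.
type HostConfig struct {
	Links []string
}

type ContainerInfo struct {
	HostConfig HostConfig
}

// render applies a docker-inspect-style format string to the data.
func render(format string, data ContainerInfo) (string, error) {
	tmpl, err := template.New("inspect").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	info := ContainerInfo{HostConfig: HostConfig{Links: []string{"/db:/web/db"}}}
	out, err := render("{{ .HostConfig.Links }}", info)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out) // prints [/db:/web/db]
}
```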

Dockerfile

Create a directory and put a file named Dockerfile in it:

# A basic apache server. To use either add or bind mount content under /var/www
FROM ubuntu

MAINTAINER Kimbro Staken version: 0.1

RUN apt-get update && apt-get install -y apache2 && apt-get clean && rm -rf /var/lib/apt/lists/*

ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2

EXPOSE 80

CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]

Here FROM specifies the base image to pull.
RUN is a command executed inside the image, used to install and set up the programs the image needs; it happens at build time.
CMD is like the trailing command in docker run -it ubuntu python webapp.py; CMD specifies that final command, and it runs at container start time.

No init

This is hard to imagine, yet most people seem to have accepted it: the official docs want each container to run only one specific service. I keep feeling this does not make things easier, and it is a strange notion; hardly any boss anywhere would accept such a situation.
It also shows that creativity is what matters, and the small problems are not. This remains a big problem today: can signals like SIGKILL be handled? Are sub-processes cleaned up properly?

s6 seems to be the best tool for solving this problem.

ambassador

This is a good idea, and I should dig into it further.
Let's test it first.

On server1, start the redis-server and an ambassador connected to it.

docker run -d --name redis redis
docker run -d --link redis:redis --name ambassador1 -p 6379:6379 svendowideit/ambassador

On server2, start an ambassador and a redis-cli connected to it.

docker run -d --name ambassador2 --expose 6379 -e REDIS_PORT_6379_TCP=tcp://172.16.235.128:6379 svendowideit/ambassador
docker run -d -t  --link ambassador2:redis relateiq/redis-cli

where 172.16.235.128 is server1's IP.

Let's look inside the redis-cli container.

root@7882118ba9aa:/# env
REDIS_PORT_6379_TCP_PROTO=tcp
HOSTNAME=7882118ba9aa
TERM=xterm
REDIS_NAME=/mad_stallman/redis
REDIS_PORT_6379_TCP_ADDR=172.17.0.2
REDIS_PORT_6379_TCP_PORT=6379
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
REDIS_ENV_REDIS_PORT_6379_TCP=tcp://172.16.235.128:6379
REDIS_PORT_6379_TCP=tcp://172.17.0.2:6379
SHLVL=1
HOME=/
REDIS_PORT=tcp://172.17.0.2:6379
_=/usr/bin/env

The redis-cli startup script is:

root@7882118ba9aa:/# cat start.sh
#!/bin/bash

if [ $# -eq 0 ]; then
    /redis/src/redis-cli -h $REDIS_PORT_6379_TCP_ADDR -p $REDIS_PORT_6379_TCP_PORT
else
    /redis/src/redis-cli "$@"
fi

where REDIS_PORT_6379_TCP_ADDR and REDIS_PORT_6379_TCP_PORT are already set as environment variables. Launch start.sh and run:

root@7882118ba9aa:/# ./start.sh
redis 172.17.0.2:6379> ping
PONG

We get a response.

Here is why the link is --link ambassador2:redis. First, look at which variables env and start.sh use to communicate: the answer is the REDIS_ prefix.
What happens if we use --link ambassador2:ambassador2 instead?

root@f0a8b07680bc:/# env
AMBASSADOR2_PORT_6379_TCP=tcp://172.17.0.2:6379
HOSTNAME=f0a8b07680bc
TERM=xterm
AMBASSADOR2_PORT_6379_TCP_PROTO=tcp
AMBASSADOR2_PORT_6379_TCP_PORT=6379
AMBASSADOR2_PORT=tcp://172.17.0.2:6379
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
AMBASSADOR2_NAME=/furious_einstein/ambassador2
AMBASSADOR2_ENV_REDIS_PORT_6379_TCP=tcp://172.16.235.128:6379
SHLVL=1
HOME=/
AMBASSADOR2_PORT_6379_TCP_ADDR=172.17.0.2
_=/usr/bin/env

The env vars have been renamed with the AMBASSADOR2_ prefix, so the variables the script expects no longer match.

It also affects /etc/hosts:

root@f0a8b07680bc:/# cat /etc/hosts
172.17.0.10 f0a8b07680bc
127.0.0.1   localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2  ambassador2 5fb7c99e8b6c

So a link can be read as --link <destination container name>:<local alias name>.
The local alias name determines the variable names that the source container (the one being started) sees for the destination container.
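
The pattern start.sh relies on, discovering connection info by a fixed environment-variable prefix, can be sketched in Go. varsWithPrefix below is a hypothetical helper, not part of any Docker tooling:

```go
package main

import (
	"fmt"
	"strings"
)

// varsWithPrefix returns the environment entries whose names start with prefix,
// mapped from name to value; the same lookup start.sh does for REDIS_*.
func varsWithPrefix(environ []string, prefix string) map[string]string {
	out := make(map[string]string)
	for _, kv := range environ {
		parts := strings.SplitN(kv, "=", 2)
		if len(parts) == 2 && strings.HasPrefix(parts[0], prefix) {
			out[parts[0]] = parts[1]
		}
	}
	return out
}

func main() {
	// With --link db:redis, docker injects variables like these.
	// In a real container you would pass os.Environ() instead.
	env := []string{
		"REDIS_PORT_6379_TCP_ADDR=172.17.0.2",
		"REDIS_PORT_6379_TCP_PORT=6379",
		"HOME=/",
	}
	for k, v := range varsWithPrefix(env, "REDIS_") {
		fmt.Println(k, v)
	}
}
```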

Commands

list all containers
docker ps -a

delete a container
docker rm id

show all images
docker images

run a container from an image: with -t it stays interactive and always shows Up.
Without -t, it shows Exited(0) when no job is running.
docker run -t  --name intermode ubuntu

run it in the background
docker run -d rethinkdb

remove an image
docker rmi id

show detailed container information
docker inspect id

Troubleshooting

If you see the error "already being pulled by another client. Waiting.", restart the docker service.

service docker restart

Then check the container list; the status should show Up. If a container is stopped, start it with:

docker start id

Saturday, March 12, 2016

Using the YouTube API to Download Videos with Python


Writing programs outside of work once in a while is fun.
This script quickly downloads the episodes of the new Three Kingdoms series from YouTube, many episodes, all at once; I really love this show.
It is written in Python. I just wanted to get this done quickly, so the code is messy and I am not going to clean it up.

Without further ado, the code is below.
You need to apply for your own DEVELOPER_KEY.

#!/usr/bin/python
# -*- coding: utf-8 -*-
from apiclient.discovery import build
from apiclient.errors import HttpError
from oauth2client.tools import argparser

import dl  # local helper module that downloads a video given its URL

DEVELOPER_KEY = "AIXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"
BASE_URL = 'https://www.youtube.com/watch?v='


def youtube_search(options, search_index):
    youtube = build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION,
                    developerKey=DEVELOPER_KEY)

    options.q = search_index
    search_response = youtube.search().list(
        q=search_index,
        part="id,snippet",
        maxResults=options.max_results,
    ).execute()
    for search_result in search_response.get("items", []):
        # Only download the video whose title matches the query exactly.
        if search_result['snippet']['title'] == options.q:
            linkurl = BASE_URL + search_result['id']['videoId']
            print linkurl
            print 'got it'
            dl.download(linkurl)
            return
    print 'cannot find the video: %s' % options.q


if __name__ == "__main__":
    argparser.add_argument("--q", help="Search term", default="三國")
    argparser.add_argument("--max-results", help="Max results", default=50)
    args = argparser.parse_args()
    print args
    for loop in range(1, 100):
        # Episode numbers below 10 are zero-padded in the video titles.
        if loop < 10:
            search_index = u'新三國演義 2010 DVD 0%s' % loop
        else:
            search_index = u'新三國演義 2010 DVD %s' % loop

        print 'search %s' % search_index
        try:
            youtube_search(args, search_index)
        except HttpError, e:
            print "An HTTP error %d occurred:\n%s" % (e.resp.status, e.content)
        

Golang Interface Explanation


Golang's interface has always been a somewhat fuzzy idea to me; it was hard to pin down why such a thing is needed and what benefit it brings to a program. I rethought interfaces and am recording the result here. An interface has at least two uses:
1. Not assuming a type for a variable, for example maps whose value type is interface{}, as used in Docker's source. This is easy to understand, so it needs no further explanation.
2. Abstraction and composition of methods.
What I mainly want to understand is the second use.
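
As a quick aside, usage 1 can be shown in a few lines: a map whose values are typed interface{} can hold anything, and a type assertion recovers the concrete type. This is just a generic illustration, not code from Docker:

```go
package main

import "fmt"

// intOpt pulls an int out of an untyped option map, using a type assertion.
func intOpt(opts map[string]interface{}, key string) (int, bool) {
	v, ok := opts[key].(int)
	return v, ok
}

func main() {
	// A map whose values can hold any type; each value keeps its concrete type.
	opts := map[string]interface{}{
		"name":  "web",
		"port":  80,
		"debug": true,
	}
	if port, ok := intOpt(opts, "port"); ok {
		fmt.Println("port is", port)
	}
}
```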

We often see examples like the following on Google explaining interfaces.

package main

import (
    "fmt"
)

// Geometry is satisfied by any shape that can report its area.
type Geometry interface {
    area() (float64, error)
}

type Rect struct {
    width  float64
    height float64
}

func (R Rect) area() (float64, error) {
    return R.width * R.height, nil
}

type Circle struct {
    radius float64
}

func (C Circle) area() (float64, error) {
    return C.radius * C.radius * 3.14, nil
}

// Measure is the unified entry point: it accepts any Geometry.
func Measure(g Geometry) (float64, error) {
    ss, err := g.area()
    if err != nil {
        return 0, err
    }
    fmt.Println(ss)
    return ss, nil
}

func main() {
    rr := Rect{width: 3, height: 2}
    Measure(rr)
    cc := Circle{radius: 2}
    Measure(cc)
}

The traditional approach might call Rect.area() and Circle.area() directly in the main function to compute the areas.
The interface replaces that pattern: it offers a single unified entry point for obtaining the area, such as the Measure function; through this unified interface, Measure(Rect) or Measure(Circle), a Service is provided.
I emphasize Service because that is exactly how I came to understand the purpose of interfaces; if you grasp the idea of a Service, you can stop reading here.
Step back and think: if one Service accepts any value type and returns the corresponding result, isn't that great? That is why I emphasize Service, and this idea is realized through interfaces.
Of course, for the programmer this means writing an extra interface layer on top of the traditional code structure to provide the service.
For the user, all that is visible is the data itself, the values to fill in, plus the Service offered by the unified interface.

To describe this more clearly, I spent half an hour drawing the figure below.

From the figure we can clearly see that once the programmer abstracts the methods behind an interface, the user gets a single-entry Service.
The programmer fundamentally has to do more work, but for the user the question of which function to call fades away, because it becomes a unified entry point (the Measure function). In short, the programmer provides the Data and the Service to the user, and the user fills in the Data and feeds it to the unified entry point.

The program above puts everything in main, which has a drawback: you cannot see how the interface provides the Service and makes the library easier for the user.
Suppose Rect, Circle, and the Geometry interface live in a library obtained via go get; call it the shape lib. We would import "shape", create shapes with shape.Rect{3, 2}, and call shape.Measure() directly. Any type implementing the interface can be passed to shape.Measure(), for example shape.Measure(rect); with this understanding, the purpose of interfaces becomes much clearer.
Also note that in this code area() is effectively private (unexported), so users cannot call it directly, only through Measure; this is another difference from the traditional approach. Of course, you can capitalize it to make it public.
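
The payoff shows up when a new shape is added: Measure does not change at all. The sketch below is self-contained (it repeats minimal Geometry and Measure definitions so it runs on its own), and Triangle is a hypothetical shape of my own, not from the example above:

```go
package main

import "fmt"

type Geometry interface {
	area() (float64, error)
}

// Triangle is a new shape; it satisfies Geometry just by implementing area().
type Triangle struct {
	base, height float64
}

func (t Triangle) area() (float64, error) {
	return t.base * t.height / 2, nil
}

// Measure stays exactly the same, whatever new shapes are added.
func Measure(g Geometry) (float64, error) {
	ss, err := g.area()
	if err != nil {
		return 0, err
	}
	fmt.Println(ss)
	return ss, nil
}

func main() {
	// The unified entry point accepts the new type with no changes.
	Measure(Triangle{base: 4, height: 3})
}
```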

Finally, we have not yet defined who the user and the programmer are. In this example, the programmer is whoever writes shape, and the user is whoever downloads it with go get and uses the lib.

Conclusions about Interfaces

For the user, only the Data and the Service are visible.
For the programmer, beyond the traditional definitions, the Service that the interface delivers has to be designed as well.