Saturday, July 14, 2018

XGBoost Installation on Ubuntu

XGBOOST

import pandas as pd
import xgboost as xgb
from sklearn.preprocessing import LabelEncoder
import numpy as np

## How To Install

### Install xgboost

git clone --recursive https://github.com/dmlc/xgboost.git
cd xgboost
./build.sh

cd python-package
python setup.py install
#or pip3 install -e python-package  

### Install Others

pip3 install pandas
pip3 install scipy
pip3 install numpy==1.13.3
pip3 install scikit-learn
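
To sanity-check the installation, a minimal sketch like the following should run; the random data and parameters are just placeholders:

import numpy as np
import xgboost as xgb

# Tiny random classification problem, only to confirm xgboost trains and predicts.
X = np.random.rand(100, 5)
y = np.random.randint(2, size=100)
dtrain = xgb.DMatrix(X, label=y)
bst = xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=5)
print(bst.predict(dtrain)[:5])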

CentOS Yum Local Installation

Download Packages on CentOS

sudo yum --downloadonly --downloaddir=packages update
sudo yum --downloadonly --downloaddir=packages/net-tools install net-tools

Yum creates the download directory for you; you don't need to create it yourself.

sudo yum localinstall -y packages/net-tools/*

This installs all the downloaded packages, and yum resolves the dependencies from the local files.

Kubernetes and Node Selection

Kubernetes Label and NodeSelector Deployment

Show all the Nodes we have

root@kuberm:~/kube1.6config/deploy/webscale# kubectl get nodes
NAME             STATUS     AGE       VERSION
172.16.155.158   NotReady   129d      v1.6.0
172.16.155.165   NotReady   73d       v1.6.0
192.168.51.131   Ready      1d        v1.6.0
kubermnode1      Ready      128d      v1.6.0
kubermnode2      Ready      127d      v1.6.0

Set a label on a given node, say ebotrole=worker

kubectl label nodes kubermnode1 ebotrole=worker

Delete the label

kubectl label nodes kubermnode1 ebotrole-

By appending - to the label key, we remove the label from the node.

Let's see the result

root@kuberm:~/kube1.6config/deploy/webscale# kubectl get nodes --show-labels
NAME             STATUS     AGE       VERSION   LABELS
172.16.155.158   NotReady   129d      v1.6.0    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=172.16.155.158
172.16.155.165   NotReady   73d       v1.6.0    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=172.16.155.165
192.168.51.131   Ready      1d        v1.6.0    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=192.168.51.131
kubermnode1      Ready      128d      v1.6.0    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ebotrole=worker,kubernetes.io/hostname=kubermnode1
kubermnode2      Ready      127d      v1.6.0    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=kubermnode2

Yes, the label ebotrole=worker is now set on kubermnode1.
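
You can also list just the nodes that carry the label:

kubectl get nodes -l ebotrole=worker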

How to deploy a Pod to the labeled node

kubectl create -f label.yaml

where label.yaml is:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginxstor
spec:
  #serviceName: "nginxstor"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginxstor
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - nginxstor
              topologyKey: kubernetes.io/hostname
      containers:
      - name: nginxstor
        image: 192.168.51.130:5000/uwebserverv6
        ports:
          - containerPort: 8000
      nodeSelector:
        ebotrole: worker
  minReadySeconds: 5
  strategy:
    # indicate which strategy we want for rolling update
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1

See the nodeSelector

      nodeSelector:
        ebotrole: worker

Result

Now the Pods will be deployed to the node we labeled with ebotrole: worker.
If only one node carries the label, all the Pods land on that node, even though the anti-affinity is only soft (preferred).

If no node has the label, the Pods stay Pending until a matching node appears:

root@kubecontext:~/k8sdeployex/labelselector# kubectl get po
NAME                        READY     STATUS    RESTARTS   AGE
nginxstor-293209922-2531z   0/1       Pending   0          23s
nginxstor-293209922-zdd83   0/1       Pending   0          23s

Kubernetes and ConfigMap

Kubernetes ConfigMap

kubectl create -f web-cm.yaml

root@kuberm:~/kube1.6config/deploy/configmap# cat web-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: hadoop-env
data:
  CORE_CONF_fs_defaultFS: "hdfs://namenode:8020"
  CORE_CONF_hadoop_http_staticuser_user: "root"

kubectl create -f web-cm2.yaml

root@kuberm:~/kube1.6config/deploy/configmap# cat web-cm2.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.how: very
  special.type: charm

### kubectl create -f web-controller.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webconfig
  namespace: default
  labels:
    app: webconfig
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webconfig
  template:
    metadata:
      labels:
        app:  webconfig
        group: lb
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name:  webconfig
        # Any image is permissible as long as: 1. It serves a 404 page at /
        # and 2. It serves 200 on a /healthz endpoint
        image: 172.16.155.136:5000/uwebserverv6
        env:
          - name: SPECIAL_LEVEL_KEY
            valueFrom:
              configMapKeyRef:
                name: special-config
                key: special.how
          - name: SPECIAL_TYPE_KEY
            valueFrom:
              configMapKeyRef:
                name: special-config
                key: special.type
        envFrom:
        - configMapRef:
            name: hadoop-env
        ports:
        - containerPort: 8000
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi

Result

You will see the result.

  • If you use envFrom, every key in the referenced ConfigMap is injected directly as an environment variable.
  • If you use env with valueFrom, you can map a ConfigMap key to an environment variable with a different name. This is quite convenient when different consumers expect the same value under different variable names: just add an env entry with valueFrom to carry the key-value pair across.

root@webconfig-3249689838-fglf7:/# env
HOSTNAME=webconfig-3249689838-fglf7
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT=tcp://172.18.0.1:443
CORE_CONF_fs_defaultFS=hdfs://namenode:8020
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_HOST=172.18.0.1
LS_COLORS=
SPECIAL_TYPE_KEY=charm
SPECIAL_LEVEL_KEY=very
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
SHLVL=1
HOME=/root
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_HTTPS=443
CORE_CONF_hadoop_http_staticuser_user=root
LESSOPEN=| /usr/bin/lesspipe %s
KUBERNETES_PORT_443_TCP_ADDR=172.18.0.1
KUBERNETES_PORT_443_TCP=tcp://172.18.0.1:443
LESSCLOSE=/usr/bin/lesspipe %s %s
_=/usr/bin/env

Rolling Upgrade of a Kubernetes 1.6 StatefulSet

Upgrade StatefulSet Image (k8s 1.6)

In Kubernetes 1.6, kubectl rollout does not yet support StatefulSets, but we can use kubectl patch and manually restart the Pods to upgrade the image.

First, run the following command, filling the target image into the image field.

kubectl patch statefulset mariadbcluster -p '{"spec":{"template":{"spec":{"containers":[{"name":"mariadbcluster","image":"192.168.51.130:5000/mariadbcluster:vtest2"}]}}}}'

You will see that the StatefulSet's image has been replaced with the new one.

kubectl describe statefulset mariadbcluster
.
.
.
  Containers:
   mariadbcluster:
    Image:  192.168.51.130:5000/mariadbcluster:vtest2

Next, restart the Pods one by one. Here is an example of deleting one of them.

kubectl delete po mariadbcluster-2
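
If you want to roll the whole StatefulSet, a minimal sketch like the following (assuming three replicas) deletes each Pod in turn and waits for it to come back before moving on:

for i in 2 1 0; do
  kubectl delete po mariadbcluster-$i
  sleep 10   # give the controller time to start recreating the Pod
  # crude wait: poll until the recreated Pod reports Running again
  until kubectl get po mariadbcluster-$i 2>/dev/null | grep -q Running; do
    sleep 5
  done
done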

After the Pod restarts successfully, describe it and you will see the new image. You can also log in to the Pod to check that the code is right; I did, and it was.

kubectl describe po mariadbcluster-2
 .
 .
 .
 
 Image:     192.168.51.130:5000/mariadbcluster:vtest2

You can compare the upgraded Pod with a not-yet-upgraded Pod.

root@kubecontext:~/k8sdeployex/statefulupgrade# kubectl describe po mariadbcluster-1
.
.

Image:      192.168.51.130:5000/mariadbcluster:vtest3

Great!

Kubernetes External IP

Kubernetes Networking

External IP

apiVersion: v1
kind: Service
metadata:
  name: nginxstor
  labels:
    app: nginxstor
spec:
  ports:
  - name: http
    protocol: TCP
    # port is the load-balancer (Service) port
    port: 8001
    # targetPort is the container port
    targetPort: 8000
  externalIPs:
  - 192.168.51.200
  selector:
    app: nginxstor

The Service is reachable only on the external IP and port we configured.

onahwude-MacBook-Pro:~ jonahwu$ curl http://192.168.51.200:8001
nginxstor-3684888594-9btw5

Linux Return Code

Linux Command Success or not

root@mariadbcluster-0:/# ls
bin   data  dev  home  lib64           media  opt   root  run.sh  srv  tmp  var
boot  db    etc  lib   mariadbcluster  mnt    proc  run   sbin    sys  usr

A failed test

root@mariadbcluster-0:/# ls |grep aa
root@mariadbcluster-0:/# res=$?
root@mariadbcluster-0:/# echo $res
1

A successful test

root@mariadbcluster-0:/# ls |grep mnt
mnt
root@mariadbcluster-0:/# res=$?
root@mariadbcluster-0:/# echo $res
0

res=0 means success and res=1 means failure, where $? holds the exit status of the previous command.
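
As a minimal sketch, the exit status can drive a conditional directly (the grep target here is arbitrary):

ls | grep -q mnt
if [ $? -eq 0 ]; then
    echo "mnt exists"
else
    echo "mnt not found"
fi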

Kubernetes and Pod Dependency

Kubernetes Pods Dependency

Use readinessProbe and initContainers to solve the problem.

ReadinessProbe

We can set up a criterion that decides whether a Pod has started successfully. If we don't want Kubernetes to mark the container as ready before some daemon is running or some port is open, we can add a condition using readinessProbe. Here are three ways:

  • command
  • httpGet
  • check port

Command

        image: 192.168.51.130:5000/uwebserverv6
        readinessProbe:
          exec:
            command:
            - /bin/sh
            - -c
            - ps x | grep '[p]ython'
          initialDelaySeconds: 5
          periodSeconds: 20

The exec probe runs ps x | grep '[p]ython' inside the container and checks its exit status: $?=0 means ready, non-zero means not ready. (The [p]ython pattern keeps grep from matching its own command line.)

Check port

        image: 192.168.51.130:5000/uwebserverv6
        readinessProbe:
          tcpSocket:
            port: 8000

Check URL (httpGet)

        image: 192.168.51.130:5000/uwebserverv6
        readinessProbe:
          httpGet:
            path: /
            port: 8000

InitContainer

    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: myapp-container
        image: busybox
        command: ['sh', '-c', 'echo The app is running! && sleep 3600']

      initContainers:
      - name: init-myservice
        image: busybox
        command: ['sh', '-c', 'until nslookup webservern; do echo waiting for myservice; sleep 2; done;']

The initContainer keeps the main containers from starting until its command returns success.

where webservern is the Service name of the container we depend on.

The until nslookup ... loop is the simplest way to detect whether the target Pod is up yet.

How it works

When the readinessProbe succeeds, the Pod is recorded (in etcd, via the API server) and marked as ready. Its Service name then becomes resolvable in the cluster DNS, so a client can use nslookup to find it.

And of course you need to create a Service so other containers can connect to it.
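
A minimal sketch of such a Service; the Pod label app: webservern and port 8000 are assumptions here:

apiVersion: v1
kind: Service
metadata:
  name: webservern
spec:
  selector:
    app: webservern   # assumed label on the dependency's Pods
  ports:
  - protocol: TCP
    port: 8000        # assumed container port
    targetPort: 8000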

So, on the server side, decide which condition means the container is ready; on the client side, make sure you know which target you depend on.

Golang and MongoDB

MongoDB Mgo

type Person struct {
    Name  string
    Phone string
}
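
The snippets below assume a session and a collection handle c obtained roughly like this; the dial URL and the database/collection names are assumptions (the queries additionally import gopkg.in/mgo.v2/bson for bson.M):

package main

import (
    "log"

    "gopkg.in/mgo.v2"
)

func main() {
    // Dial the MongoDB server; the URL is an assumption.
    session, err := mgo.Dial("localhost")
    if err != nil {
        log.Fatal(err)
    }
    defer session.Close()

    // Database "test" and collection "people" are assumptions.
    c := session.DB("test").C("people")
    _ = c // c is the handle used by the query snippets below
}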

Search One

err = c.Find(bson.M{"name": "馬力"}).One(&result)

Search All in List

results := []Person{}
err = c.Find(bson.M{"name": "馬力"}).All(&results)
if err != nil {
    log.Fatal(err)
}
for _, j := range results {
    fmt.Println("Phone:", j.Phone)
    //fmt.Println("Phone:", results)
}

Sort by Timestamp (If Present)

err = c.Find(bson.M{"name": "馬力"}).Sort("-timestamp").All(&results)

Search Pattern

search "台北" as as a search patten.

presult := Person{}
err = c.Find(bson.M{"phone": bson.RegEx{Pattern: "台北", Options: "i"}}).One(&presult)

Check the Number of Search Results

num, _ := c.Find(bson.M{"id": searchid}).Count()

Delete Data

c.Remove(bson.M{"id": searchid})

Update Data

updateTarget := bson.M{"id": searchid}
change := bson.M{"$set": bson.M{"phone": "新phone"}}
if err := c.Update(updateTarget, change); err != nil {
    fmt.Println("update error:", err)
}

This updates only the phone field and keeps the rest of the document unchanged.

Drop Database

err := session.DB("test").DropDatabase()

Golang Context Timeout Setting

Golang Context Timeout

It's quite easy to wrap a function to add a timeout.

package main

import (
    "context"
    "fmt"
    "time"
)

func forever() {
    for {
        fmt.Println("haha")
        time.Sleep(time.Second)
    }
}

func foreverWTO() {

    ctx, cancel := context.WithTimeout(context.Background(), time.Second*5)
    defer cancel()
    go forever()
    select {
    case <-ctx.Done():
        fmt.Println("got ctx Done message")
        return
    }
    return
}

func main() {
    fmt.Println("vim-go")
    //forever()
    foreverWTO()
}

Here forever is the original function. If we want to add a timeout to forever, we just add a wrapper function foreverWTO and call foreverWTO() instead; we then get the timeout behavior.
This is quite convenient, since we don't need to modify the core function forever at all; we only add a timeout wrapper around it.

Note that we cannot call forever() inside the select itself; a select only takes channel operations, so there is no place to put forever there.

  • default: is not right either, since the select would fall through to default immediately and keep spinning there.

Instead, start forever as a goroutine (so it does not block) and put the select below it; the select then blocks until the timeout fires.

Adding Watch

func foreverWTO() {

    ctx, cancel := context.WithTimeout(context.Background(), time.Second*5)
    defer cancel()
    go forever()
    for {
        select {
        default:
            fmt.Println("watchint timeout now")
            time.Sleep(time.Second)
        case <-ctx.Done():
            fmt.Println("got ctx Done message")
            return
        }
    }
    return
}

Add a default case to log that we are still waiting for the timeout. The for loop keeps hitting default until ctx is done.