Tuesday, August 30, 2016

Trace Source Code: nova scheduler (OpenStack Kilo)

Nova Scheduler in Kilo

We start from nova/nova/scheduler/manager.py and its function select_destinations, which is the entry point of nova-scheduler.

This also serves as a quick example of the RabbitMQ server side; compare it with the oslo.messaging reference section at the end of this post. The client side is nova-conductor.

Here is nova/nova/scheduler/manager.py:

class SchedulerManager(manager.Manager):
    """Chooses a host to run instances on."""

    target = messaging.Target(version='4.2')

    def __init__(self, scheduler_driver=None, *args, **kwargs):
        if not scheduler_driver:
            scheduler_driver = CONF.scheduler_driver
        self.driver = importutils.import_object(scheduler_driver)
        super(SchedulerManager, self).__init__(service_name='scheduler',
                                               *args, **kwargs)
        self.additional_endpoints.append(_SchedulerManagerV3Proxy(self))
        
    @messaging.expected_exceptions(exception.NoValidHost)
    def select_destinations(self, context, request_spec, filter_properties):
        """Return destinations best suited for this request_spec and
        filter_properties (body abridged from Kilo).
        """
        # The configured driver (FilterScheduler by default) does the
        # actual filtering and weighing of hosts.
        dests = self.driver.select_destinations(context, request_spec,
                                                filter_properties)
        return jsonutils.to_primitive(dests)

That is the core of the nova-scheduler trace: the RPC endpoint simply delegates to the scheduler driver.

To add a new filter, put it under nova/scheduler/filters/ and reconfigure nova.conf (scheduler_available_filters and scheduler_default_filters) to include the filter you added, as sketched below.
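
A minimal sketch of such a filter (MyMinRamFilter is a hypothetical name of mine; the BaseHostFilter/host_passes interface shown is the Kilo-era one, which later releases changed):

# nova/scheduler/filters/my_filter.py -- hypothetical custom filter
from nova.scheduler import filters


class MyMinRamFilter(filters.BaseHostFilter):
    """Pass only hosts with enough free RAM for the requested flavor."""

    def host_passes(self, host_state, filter_properties):
        instance_type = filter_properties.get('instance_type') or {}
        requested_mb = instance_type.get('memory_mb', 0)
        return host_state.free_ram_mb >= requested_mb

Then keep scheduler_available_filters = nova.scheduler.filters.all_filters (the default, which picks up everything in that package) and append MyMinRamFilter to scheduler_default_filters.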

Reference for oslo.messaging

http://www.cinlk.com/2015/12/04/rabbitmq/

import oslo.messaging
from oslo.config import cfg


class TestEndpoint(object):
    # Methods on this endpoint are callable over RPC within this
    # namespace/version.
    target = oslo.messaging.Target(namespace='test', version='2.0')

    def __init__(self, server):
        self.server = server

    def foo(self, ctx, id):
        return id


# Declare the exchange, then serve RPC requests on topic 'myroutingkey'.
oslo.messaging.set_transport_defaults('myexchange')
transport = oslo.messaging.get_transport(cfg.CONF)
target = oslo.messaging.Target(topic='myroutingkey', server='myserver')
endpoints = [TestEndpoint(None)]
server = oslo.messaging.get_rpc_server(transport, target, endpoints,
                                       executor='blocking')
server.start()
server.wait()
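
For completeness, here is a minimal sketch of the matching client side (the role nova-conductor plays for the scheduler), assuming the same old-style oslo.messaging API as the server above:

import oslo.messaging
from oslo.config import cfg

oslo.messaging.set_transport_defaults('myexchange')
transport = oslo.messaging.get_transport(cfg.CONF)
target = oslo.messaging.Target(topic='myroutingkey')
client = oslo.messaging.RPCClient(transport, target)

# Match the endpoint's namespace/version, then invoke foo() and wait for
# the reply (call() is synchronous; cast() would be fire-and-forget).
cctxt = client.prepare(namespace='test', version='2.0')
print(cctxt.call({}, 'foo', id='123'))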

Monday, August 29, 2016

How to install and play with Mermaid (Markdown Tool) on macOS

Mermaid on macOS

Mermaid is a great tool that uses Markdown-like syntax to draw block diagrams. Let's see how to install and use it on macOS.

Installation on macOS

Install Mermaid

Open a terminal on your Mac and execute the following commands. The first line installs Homebrew (skip it if you already have it); the second installs Node.js, PhantomJS and the mermaid CLI.

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
brew install node && npm install -g phantomjs && npm install -g mermaid

Getting Started with Mermaid

Edit the file ma

In the terminal, use vim (or any editor) to put the following into a file named ma:

sequenceDiagram
A->> B: Query
B->> C: Forward query
Note right of C: Thinking...
C->> B: Response
B->> A: Forward response

Execute mermaid on the file:

junmeinde-MacBook-Pro:~ junmein$ mermaid ma
Num files to execute : 1
ready to execute png: ma.png
CONSOLE: [08:12:23 (825)]  Starting rendering diagrams (from line # in "")
CONSOLE: [08:12:23 (829)]  Start On Load before: undefined (from line # in "")
CONSOLE: [08:12:23 (830)]  Initializing mermaidAPI (from line # in "")
CONSOLE: [08:12:23 (837)]  Setting conf  gantt - useWidth (from line # in "")
CONSOLE: [08:12:23 (837)]  Setting config: gantt useWidth to 1200 (from line # in "")
CONSOLE: [08:12:23 (867)]  Adding message from=A to=B message=Query type=0 (from line # in "")
CONSOLE: [08:12:23 (867)]  Adding message from=B to=C message=Forward query type=0 (from line # in "")
CONSOLE: [08:12:23 (868)]  Adding message from=C to=B message=Response type=0 (from line # in "")
CONSOLE: [08:12:23 (868)]  Adding message from=B to=A message=Forward response type=0 (from line # in "")
CONSOLE: [08:12:23 (913)]  For line height fix Querying: #mermaidChart0 .actor-line (from line # in "")
saved png: ma.png

You will now have ma.png:

open ma.png 

You will see the block diagram.

Web Demo

Here is the Web Demo.

http://knsv.github.io/mermaid/live_editor/

Some more examples.

http://knsv.github.io/mermaid/#flowcharts-basic-syntax
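
For instance, a minimal flowchart in that syntax (this snippet is my own, following the basic flowchart syntax in the link above; save it to a file and render it with mermaid exactly as before):

graph TD
A[Start] --> B{Works?}
B -->|Yes| C[Done]
B -->|No| D[Debug]
D --> B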

Friday, August 26, 2016

How to Speak and How to Comment

Kind words: say them in full.
Harsh words: hint and stop.

Beyond technical description,
this is my deepest realization so far about how to speak.

Thursday, August 25, 2016

Python: How to Trace Source Code



This is based on my previous blog:

http://gogosatellite.blogspot.tw/2016/05/the-best-and-simple-way-to-python-vim.html


After installation, press Ctrl+C, Ctrl+G (holding Ctrl) to jump deeper into the code, and Ctrl+O to jump back to the original location. (With plain ctags, the stock vim keys are Ctrl+] to jump to a definition and Ctrl+O to jump back.)

A great tool for tracing source code.


Thursday, August 18, 2016

Hacking Cloud VM to use User password to Login(2)


This builds on my previous blog, which explains how to hack a cloud image so that you can log in to the VM directly with a username and password.

http://gogosatellite.blogspot.tw/2016/08/html-charsetutf-8-cloud-vm-to-use-user.html

Thanks to this blog, which describes the detailed hacking process:

http://frederik.orellana.dk/booting-ubuntu-14-04-cloud-images-without-a-cloud/

The following commands are based on the above blog, and they work great.

  # Fetch the Trusty cloud image, convert it to qcow2 and grow it to 30 GB
  wget cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
  qemu-img convert -c -O qcow2 trusty-server-cloudimg-amd64-disk1.img trusty-server-cloudimg-amd64-disk1_30GB.qcow2
  qemu-img resize trusty-server-cloudimg-amd64-disk1_30GB.qcow2 30G

  # Attach the image as a network block device and mount its root partition
  sudo modprobe nbd
  sudo qemu-nbd -c /dev/nbd0 `pwd`/trusty-server-cloudimg-amd64-disk1_30GB.qcow2
  ls image || mkdir image
  sudo mount /dev/nbd0p1 image
  # Point cloud-init at the NoCloud datasource (ds=nocloud) and enable the serial console
  sudo sed -ri 's|(/boot/vmlinuz-3.13.0-24-generic\s*root=LABEL=cloudimg-rootfs.*)$|\1 ds=nocloud|' image/boot/grub/grub.cfg
  sudo sed -ri 's|^(GRUB_CMDLINE_LINUX_DEFAULT=).*$|\1" ds=nocloud"|' image/etc/default/grub
  sudo sed -ri 's|^#(GRUB_TERMINAL=console)$|\1|' image/etc/default/grub

  # Seed NoCloud with meta-data and user-data that set the password
  sudo mkdir -p image/var/lib/cloud/seed/nocloud
  sudo tee image/var/lib/cloud/seed/nocloud/meta-data <<EOF
instance-id: ubuntu
local-hostname: ubuntu
EOF

  sudo tee image/var/lib/cloud/seed/nocloud/user-data <<EOF
#cloud-config
password: ubuntu
chpasswd: { expire: False }
ssh_pwauth: True
EOF

  sudo sed -ri "s|^(127.0.0.1\s*localhost)$|\1\n127.0.0.1 `cat image/etc/hostname`|" image/etc/hosts
  sudo sync
  sudo umount image
  sudo qemu-nbd -d /dev/nbd0
  sudo modprobe -r nbd
  
  

Wednesday, August 17, 2016

Study OpenStack Octavia in Mitaka by using DevStack (2)


Octavia 2

Using the Command Line to Create an Octavia Load Balancer in OpenStack Mitaka

I strongly suggest reading this document:

http://egonzalez.org/load-balancer-as-a-service-lbaas/

First of all, check the existing load balancer list.

stack@devoct:~/devstack$ neutron lbaas-loadbalancer-list
+--------------------------------------+-----------------+-------------+---------------------+----------+
| id                                   | name            | vip_address | provisioning_status | provider |
+--------------------------------------+-----------------+-------------+---------------------+----------+
| 0cc3a0dd-3849-4a95-9f9c-126bb8dc1437 | Load Balancer 1 | 10.0.0.10   | ACTIVE              | octavia  |
| 93a920f0-a934-4ef9-bcb7-32c9ed022966 | Load Balancer 2 | 10.0.0.17   | ACTIVE              | octavia  |
+--------------------------------------+-----------------+-------------+---------------------+----------+

Create Your Own Octavia Service

stack@devoct:~/devstack$ neutron lbaas-loadbalancer-create --name octlb1 privateA

Created a new loadbalancer:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| description         |                                      |
| id                  | add9a3a0-cec8-495d-8e65-f30c25acd323 |
| listeners           |                                      |
| name                | octlb1                               |
| operating_status    | OFFLINE                              |
| pools               |                                      |
| provider            | octavia                              |
| provisioning_status | PENDING_CREATE                       |
| tenant_id           | 2e72284266cc4259908fbb4d346aa804     |
| vip_address         | 10.0.0.19                            |
| vip_port_id         | 13029199-1637-4428-bdad-edd582ecf5dd |
| vip_subnet_id       | ac07fa63-3052-4a79-9449-d33b3260c8af |
+---------------------+--------------------------------------+
stack@devoct:~/devstack$ neutron lbaas-loadbalancer-list
+--------------------------------------+-----------------+-------------+---------------------+----------+
| id                                   | name            | vip_address | provisioning_status | provider |
+--------------------------------------+-----------------+-------------+---------------------+----------+
| 0cc3a0dd-3849-4a95-9f9c-126bb8dc1437 | Load Balancer 1 | 10.0.0.10   | ACTIVE              | octavia  |
| 93a920f0-a934-4ef9-bcb7-32c9ed022966 | Load Balancer 2 | 10.0.0.17   | ACTIVE              | octavia  |
| add9a3a0-cec8-495d-8e65-f30c25acd323 | octlb1          | 10.0.0.19   | PENDING_CREATE      | octavia  |
+--------------------------------------+-----------------+-------------+---------------------+----------+

Wait until octlb1's provisioning_status becomes ACTIVE. You can also watch o-cw.log for debugging; the following log shows the controller still trying to connect to the Amphora VM, which can take 5 minutes or more.

2016-08-17 22:02:15.029 20588 INFO octavia.controller.queue.endpoint [-] Creating load balancer 'add9a3a0-cec8-495d-8e65-f30c25acd323'...
2016-08-17 22:02:15.330 20588 INFO octavia.controller.worker.tasks.database_tasks [-] Created Amphora in DB with id 4c033fee-64aa-405b-b75b-f0eedf849379
2016-08-17 22:02:15.539 20588 INFO octavia.certificates.generator.local [-] Signing a certificate request using OpenSSL locally.
2016-08-17 22:02:15.540 20588 INFO octavia.certificates.generator.local [-] Using CA Certificate from config.
2016-08-17 22:02:15.540 20588 INFO octavia.certificates.generator.local [-] Using CA Private Key from config.
2016-08-17 22:02:15.540 20588 INFO octavia.certificates.generator.local [-] Using CA Private Key Passphrase from config.
2016-08-17 22:02:23.599 20588 INFO octavia.controller.worker.tasks.database_tasks [-] Mark ALLOCATED in DB for amphora: 4c033fee-64aa-405b-b75b-f0eedf849379 with compute id fdbaf688-4cef-4e2c-9092-0d7e537e6e5f for load balancer: add9a3a0-cec8-495d-8e65-f30c25acd323
2016-08-17 22:02:23.655 20588 INFO octavia.network.drivers.neutron.allowed_address_pairs [-] Port 13029199-1637-4428-bdad-edd582ecf5dd already exists. Nothing to be done.
2016-08-17 22:02:30.930 20588 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.
2016-08-17 22:02:33.930 20588 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.
2016-08-17 22:02:36.930 20588 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.
2016-08-17 22:02:39.930 20588 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.

Finally, you will see the load balancer marked ACTIVE:

2016-08-17 22:08:41.320 20588 INFO octavia.controller.worker.tasks.database_tasks [-] Mark ACTIVE in DB for load balancer id: add9a3a0-cec8-495d-8e65-f30c25acd323

And check the status with the command line:

stack@devoct:~/devstack$ neutron lbaas-loadbalancer-list
+--------------------------------------+-----------------+-------------+---------------------+----------+
| id                                   | name            | vip_address | provisioning_status | provider |
+--------------------------------------+-----------------+-------------+---------------------+----------+
| 0cc3a0dd-3849-4a95-9f9c-126bb8dc1437 | Load Balancer 1 | 10.0.0.10   | ACTIVE              | octavia  |
| 93a920f0-a934-4ef9-bcb7-32c9ed022966 | Load Balancer 2 | 10.0.0.17   | ACTIVE              | octavia  |
| add9a3a0-cec8-495d-8e65-f30c25acd323 | octlb1          | 10.0.0.19   | ACTIVE              | octavia  |
+--------------------------------------+-----------------+-------------+---------------------+----------+

Now we set up a LISTENER for the backend servers.

stack@devoct:~/devstack$ neutron lbaas-listener-create --loadbalancer octlb1 --protocol HTTP --protocol-port 80 --name listener1
Created a new listener:
+---------------------------+------------------------------------------------+
| Field                     | Value                                          |
+---------------------------+------------------------------------------------+
| admin_state_up            | True                                           |
| connection_limit          | -1                                             |
| default_pool_id           |                                                |
| default_tls_container_ref |                                                |
| description               |                                                |
| id                        | 281965c8-c633-4615-a25a-f87722f86aa3           |
| loadbalancers             | {"id": "add9a3a0-cec8-495d-8e65-f30c25acd323"} |
| name                      | listener1                                      |
| protocol                  | HTTP                                           |
| protocol_port             | 80                                             |
| sni_container_refs        |                                                |
| tenant_id                 | 2e72284266cc4259908fbb4d346aa804               |
+---------------------------+------------------------------------------------+

To see the details of the listeners:

stack@devoct:~/devstack$ neutron lbaas-listener-list
+--------------------------------------+--------------------------------------+------------+----------+---------------+----------------+
| id                                   | default_pool_id                      | name       | protocol | protocol_port | admin_state_up |
+--------------------------------------+--------------------------------------+------------+----------+---------------+----------------+
| 906bd763-d1eb-4d7f-967b-669cbdde9bab | 51ae89ca-5f0a-4344-b217-a0527cb11992 | Listener 1 | HTTP     |            80 | True           |
| b13e6283-3612-416f-bbaf-abadb0eccf89 | 75660078-a989-4fbe-8b7f-5576ea05e937 | Listener 1 | HTTP     |            80 | True           |
| 281965c8-c633-4615-a25a-f87722f86aa3 |                                      | listener1  | HTTP     |            80 | True           |
+--------------------------------------+--------------------------------------+------------+----------+---------------+----------------+

Now we can attach a pool with a load-balancing algorithm to the listener we created.

stack@devoct:~/devstack$ neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP --name pool1
Created a new pool:
+---------------------+------------------------------------------------+
| Field               | Value                                          |
+---------------------+------------------------------------------------+
| admin_state_up      | True                                           |
| description         |                                                |
| healthmonitor_id    |                                                |
| id                  | 68f3eda9-396b-4e06-9e88-03e74c38cc35           |
| lb_algorithm        | ROUND_ROBIN                                    |
| listeners           | {"id": "281965c8-c633-4615-a25a-f87722f86aa3"} |
| loadbalancers       | {"id": "add9a3a0-cec8-495d-8e65-f30c25acd323"} |
| members             |                                                |
| name                | pool1                                          |
| protocol            | HTTP                                           |
| session_persistence |                                                |
| tenant_id           | 2e72284266cc4259908fbb4d346aa804               |
+---------------------+------------------------------------------------+

Adding a Member Server (Backend Server)

To see how to launch a member server from a hacked cloud image, see:

http://gogosatellite.blogspot.tw/2016/08/study-openstack-octavia-in-mitaka-by.html

Assume we have launched a member server with the private IP address 10.0.0.15.

stack@devoct:~/devstack$ neutron lbaas-member-create  --subnet privateA --address 10.0.0.15 --protocol-port 80 pool1
Created a new member:
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| address        | 10.0.0.15                            |
| admin_state_up | True                                 |
| id             | df142a6e-fa6a-4d44-a9b4-f59628dc4e96 |
| name           |                                      |
| protocol_port  | 80                                   |
| subnet_id      | ac07fa63-3052-4a79-9449-d33b3260c8af |
| tenant_id      | 2e72284266cc4259908fbb4d346aa804     |
| weight         | 1                                    |
+----------------+--------------------------------------+

Here --address is the member's (backend server's) private IP address.
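
If you prefer to script the same flow, here is a rough Python sketch using python-neutronclient. The LBaaS v2 method names and the keystone v2 auth shown are my assumptions for the Mitaka-era client, so verify them against your installed client; the subnet id is privateA's UUID from the output above.

from neutronclient.v2_0 import client

# Assumed keystone v2 credentials for the demo user on this DevStack host
neutron = client.Client(username='demo', password='password',
                        tenant_name='demo',
                        auth_url='http://192.168.2.131:5000/v2.0')

SUBNET_ID = 'ac07fa63-3052-4a79-9449-d33b3260c8af'  # privateA (from above)

# 1. Load balancer with its VIP on privateA
#    (in practice, wait for provisioning_status ACTIVE between steps)
lb = neutron.create_loadbalancer(
    {'loadbalancer': {'name': 'octlb1', 'vip_subnet_id': SUBNET_ID}})

# 2. HTTP listener on port 80
listener = neutron.create_listener(
    {'listener': {'name': 'listener1', 'protocol': 'HTTP',
                  'protocol_port': 80,
                  'loadbalancer_id': lb['loadbalancer']['id']}})

# 3. ROUND_ROBIN pool attached to the listener
pool = neutron.create_lbaas_pool(
    {'pool': {'name': 'pool1', 'protocol': 'HTTP',
              'lb_algorithm': 'ROUND_ROBIN',
              'listener_id': listener['listener']['id']}})

# 4. The backend member at 10.0.0.15:80
neutron.create_lbaas_member(
    pool['pool']['id'],
    {'member': {'address': '10.0.0.15', 'protocol_port': 80,
                'subnet_id': SUBNET_ID}})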

Security Group

Security groups are quite important; you can see my previous blog:

http://gogosatellite.blogspot.tw/2016/08/study-openstack-octavia-in-mitaka-by.html

Check the security group of the Amphora VM

Here is my setting of the Amphora VM's security group, viewed as the admin user. We did nothing to this security group.

stack@devoct:~/devstack$ nova secgroup-list-rules lb-add9a3a0-cec8-495d-8e65-f30c25acd323
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 80        | 80      | 0.0.0.0/0 |              |
| tcp         | 1025      | 1025    | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

Check the security group of the backend server

Here is my setting of the backend VM's default security group, as the demo user. We open most ports and protocols so the Amphora can connect.

stack@devoct:~/devstack$ nova secgroup-list-rules default
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 80        | 80      | 0.0.0.0/0 |              |
| tcp         | 1         | 65535   | 0.0.0.0/0 |              |
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
| udp         | 1         | 65535   | 0.0.0.0/0 |              |
| tcp         | 53        | 53      | 0.0.0.0/0 |              |
| tcp         | 443       | 443     | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

Adding a Floating IP

Create a floating IP on the floating network:

stack@devoct:~/devstack$ neutron floatingip-create 20daa65b-bcb1-44e0-9297-980050a988a0
Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| description         |                                      |
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.140.9                        |
| floating_network_id | 20daa65b-bcb1-44e0-9297-980050a988a0 |
| id                  | d6277507-e1eb-4bad-a111-22f787617932 |
| port_id             |                                      |
| router_id           |                                      |
| status              | DOWN                                 |
| tenant_id           | 2e72284266cc4259908fbb4d346aa804     |
+---------------------+--------------------------------------+

Associate the floating IP's id with the VIP's port id. The VIP port was created by the earlier command neutron lbaas-loadbalancer-create --name octlb1 privateA.

stack@devoct:~/devstack$ neutron floatingip-associate d6277507-e1eb-4bad-a111-22f787617932 13029199-1637-4428-bdad-edd582ecf5dd
Associated floating IP d6277507-e1eb-4bad-a111-22f787617932

Now we can access the web service through the floating IP bound to the Octavia load balancer.

Result

junmeinde-MacBook-Pro:~ junmein$ curl 192.168.140.9
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>
<title>Directory listing for /</title>
<body>
<h2>Directory listing for /</h2>
<hr>
<ul>
<li><a href=".bash_history">.bash_history</a>
<li><a href=".bash_logout">.bash_logout</a>
<li><a href=".bashrc">.bashrc</a>
<li><a href=".cache/">.cache/</a>
<li><a href=".profile">.profile</a>
<li><a href=".ssh/">.ssh/</a>
<li><a href=".sudo_as_admin_successful">.sudo_as_admin_successful</a>
<li><a href=".viminfo">.viminfo</a>
</ul>
<hr>
</body>
</html>

We can reach the backend server through the Octavia load balancer.
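
As a quick sanity check, you can also poll the floating IP from Python instead of curl. A minimal sketch (Python 2, to match the SimpleHTTPServer on the member; 192.168.140.9 is the floating IP from above):

import urllib2

# Floating IP bound to the Octavia VIP (from the steps above)
URL = 'http://192.168.140.9/'

for i in range(4):
    body = urllib2.urlopen(URL).read()
    # With several members serving different pages, ROUND_ROBIN alternates
    # the responses; with a single member every response is identical.
    print('request %d: %d bytes' % (i, len(body)))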

Study OpenStack Octavia in Mitaka by using DevStack (1)


Octavia

You must prepare at least 32 GB of RAM; that was the minimum that worked in my testing.
In an earlier test I had only 4 GB of RAM, and launching the Amphora always failed.
So don't make that mistake.

Installation

You can see my previous blog for how to install DevStack Mitaka:

http://gogosatellite.blogspot.tw/2016/04/using-devstack-to-install-openstack.html

Here is the local.conf.

[[local|localrc]]

# Load the external LBaaS plugin.
enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas stable/mitaka
enable_plugin octavia https://git.openstack.org/openstack/octavia stable/mitaka
enable_plugin neutron-lbaas-dashboard https://git.openstack.org/openstack/neutron-lbaas-dashboard stable/mitaka

DEST=/opt/stack

HOST_IP=192.168.2.131

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=tokentoken


ENABLED_SERVICES=key,rabbit,mysql,horizon
ENABLED_SERVICES+=,n-api,n-crt,n-cpu,n-net,n-cond,n-sch,n-novnc,n-cauth
ENABLED_SERVICES+=,g-api,g-reg

# Enabling Neutron (network) Service
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service q-metering
enable_service neutron

Q_PLUGIN=ml2
#Q_USE_DEBUG_COMMAND=True
if [ "$Q_PLUGIN" = "ml2" ]; then
  #Q_ML2_TENANT_NETWORK_TYPE=gre
  Q_ML2_TENANT_NETWORK_TYPE=vxlan
  :
fi
## Neutron options
FIXED_RANGE=10.0.0.0/24
FIXED_NETWORK_SIZE=256
NETWORK_GATEWAY=10.0.0.1
PRIVATE_SUBNET_NAME=privateA

PUBLIC_SUBNET_NAME=public-subnet
FLOATING_RANGE=192.168.2.0/24
PUBLIC_NETWORK_GATEWAY=192.168.2.2
##Q_FLOATING_ALLOCATION_POOL=start=192.168.27.102,end=192.168.27.110
PUBLIC_INTERFACE=eth0
Q_USE_PROVIDERNET_FOR_PUBLIC=True
OVS_PHYSICAL_BRIDGE=br-ex
PUBLIC_BRIDGE=br-ex
OVS_BRIDGE_MAPPINGS=public:br-ex

LIBVIRT_TYPE=qemu

# Enable LBaaS v2
ENABLED_SERVICES+=,q-lbaasv2
ENABLED_SERVICES+=,octavia,o-cw,o-hk,o-hm,o-api


IMAGE_URLS="http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-uec.tar.gz"

SCREEN_LOGDIR=/opt/stack/screen-logs
SYSLOG=True
LOGFILE=~/devstack/stack.sh.log


Q_USE_DEBUG_COMMAND=True

# RECLONE=No
RECLONE=yes
OFFLINE=False

[[post-config|$NOVA_CONF]]
[DEFAULT]
cpu_allocation_ratio=32.0
ram_allocation_ratio=32.0
disk_allocation_ratio=32.0

Now you can start installing DevStack:

./stack.sh

Create Backend Server

Be patient. We launch the backend server first.

You can see my previous blog for how to download and hack an Ubuntu VM image:

http://gogosatellite.blogspot.tw/2016/08/html-charsetutf-8-cloud-vm-to-use-user.html

After launching the Ubuntu VM, run a SimpleHTTPServer:

python -m SimpleHTTPServer 80

which serves the current directory on port 80 (binding to port 80 requires root, so run it with sudo).

Create Octavia Service

Horizon

You can easily use Horizon to launch the Octavia service. It's quite simple. Note that you must add the backend VM as a member and attach a floating IP to the Amphora.

After Creating the Octavia Service

After that, wait 5 minutes or more until you see the following in o-cw.log:

tail -f o-cw.log -n 100
.

2016-08-17 04:00:02.604 20588 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.
2016-08-17 04:00:03.607 20588 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.
2016-08-17 04:00:04.609 20588 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.
2016-08-17 04:00:05.612 20588 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.
2016-08-17 04:00:06.614 20588 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.
2016-08-17 04:00:07.616 20588 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.
.
.
2016-08-17 04:01:12.083 20588 INFO octavia.controller.worker.tasks.database_tasks [-] Mark ACTIVE in DB for load balancer id: 93a920f0-a934-4ef9-bcb7-32c9ed022966
2016-08-17 04:01:16.512 20588 INFO octavia.controller.queue.endpoint [-] Creating listener '906bd763-d1eb-4d7f-967b-669cbdde9bab'...
2016-08-17 04:01:25.631 20588 INFO octavia.controller.queue.endpoint [-] Creating pool '51ae89ca-5f0a-4344-b217-a0527cb11992'...

After you see the pool created, check provisioning_status: it will either be ACTIVE or still PENDING_CREATE; just wait until it becomes ACTIVE.

stack@devoct:~/devstack$ neutron lbaas-loadbalancer-list
+--------------------------------------+-----------------+-------------+---------------------+----------+
| id                                   | name            | vip_address | provisioning_status | provider |
+--------------------------------------+-----------------+-------------+---------------------+----------+
| 0cc3a0dd-3849-4a95-9f9c-126bb8dc1437 | Load Balancer 1 | 10.0.0.10   | ACTIVE              | octavia  |
| 93a920f0-a934-4ef9-bcb7-32c9ed022966 | Load Balancer 2 | 10.0.0.17   | ACTIVE              | octavia  |
+--------------------------------------+-----------------+-------------+---------------------+----------+

Where Load Balancer 2 is our target.

Modify the Security Groups of the Amphora and the VM

This is quite important: if you cannot connect to the backend server, it is probably due to security groups, so we allow all traffic inside the network to avoid problems. Note that when you create the Octavia service as user demo, the Amphora VM is launched by admin (not demo). You will not see the Amphora VM as user demo, but you can see the Octavia service there. In Horizon, switch to the admin user and check the instances; you will see the Amphora VM running under admin, not demo.

So to allow all traffic in the network, you must modify the security groups for both the demo and admin users.

Security Group in User Demo

To allow all traffic, you can use Horizon to enable ICMP, TCP, UDP, DNS, HTTP and SSH for both ingress and egress.

stack@devoct:~/devstack$ nova secgroup-list-rules default
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 80        | 80      | 0.0.0.0/0 |              |
| tcp         | 1         | 65535   | 0.0.0.0/0 |              |
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
| udp         | 1         | 65535   | 0.0.0.0/0 |              |
| tcp         | 53        | 53      | 0.0.0.0/0 |              |
| tcp         | 443       | 443     | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

Security Group in User Admin

stack@devoct:~/devstack$ nova secgroup-list
+--------------------------------------+-----------------------------------------+------------------------+
| Id                                   | Name                                    | Description            |
+--------------------------------------+-----------------------------------------+------------------------+
| 001944c9-721a-44fa-b400-0a07f598221c | default                                 | Default security group |
| 95697e4b-cd96-49e3-a865-093ee7835390 | lb-0cc3a0dd-3849-4a95-9f9c-126bb8dc1437 |                        |
| 67c13a32-a5f4-4cec-bc7a-10758b503671 | lb-93a920f0-a934-4ef9-bcb7-32c9ed022966 |                        |
| 2fb55456-98e4-411d-9442-a3b72a30bf4a | lb-mgmt-sec-grp                         |                        |
+--------------------------------------+-----------------------------------------+------------------------+

where lb-93a920f0-a934-4ef9-bcb7-32c9ed022966 is our LB service. Modify its security group as follows using Horizon. Here we enable all TCP, UDP, ICMP and HTTP for both ingress and egress.

stack@devoct:~/devstack$ nova secgroup-list-rules lb-93a920f0-a934-4ef9-bcb7-32c9ed022966
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 1         | 65535   | 0.0.0.0/0 |              |
| tcp         | 80        | 80      | 0.0.0.0/0 |              |
| udp         | 1         | 65535   | 0.0.0.0/0 |              |
| tcp         | 1025      | 1025    | 0.0.0.0/0 |              |
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

Result

From the client, access the floating IP of the Amphora (we attached 192.168.140.8 to the Amphora as a floating IP):

junmeinde-MacBook-Pro:~ junmein$ curl 192.168.140.8
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>
<title>Directory listing for /</title>
<body>
<h2>Directory listing for /</h2>
<hr>
<ul>
<li><a href=".bash_history">.bash_history</a>
<li><a href=".bash_logout">.bash_logout</a>
<li><a href=".bashrc">.bashrc</a>
<li><a href=".cache/">.cache/</a>
<li><a href=".profile">.profile</a>
<li><a href=".ssh/">.ssh/</a>
<li><a href=".sudo_as_admin_successful">.sudo_as_admin_successful</a>
<li><a href=".viminfo">.viminfo</a>
</ul>
<hr>
</body>
</html>

On the Backend VM

ubuntu@testu:~$ sudo python -m SimpleHTTPServer 80

10.0.0.18 - - [17/Aug/2016 08:09:30] "GET / HTTP/1.1" 200 -
10.0.0.18 - - [17/Aug/2016 08:09:53] "GET / HTTP/1.1" 200 -

You can see the traffic flowing from the client through the Amphora to the backend server.

What is 10.0.0.18 in the output above?

stack@devoct:~/devstack$ nova list
+--------------------------------------+----------------------------------------------+--------+------------+-------------+----------------------------------------------------------------------------------+
| ID                                   | Name                                         | Status | Task State | Power State | Networks                                                                         |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+----------------------------------------------------------------------------------+
| 66cac671-ad9e-4815-9f00-9d0889b8bdca | amphora-54fd4f13-6b56-49e5-af76-88aabc01df68 | ACTIVE | -          | Running     | lb-mgmt-net=192.168.0.6; private=fd4a:3193:7fb5:0:f816:3eff:fe22:9bde, 10.0.0.18 |
| 48a7ad73-57d9-4e75-80ca-5753a9f047ed | amphora-73a06a20-42d7-4e1c-8320-613984f35428 | ACTIVE | -          | Running     | lb-mgmt-net=192.168.0.5; private=fd4a:3193:7fb5:0:f816:3eff:fe11:b756, 10.0.0.11 |

As admin you can inspect the Amphora instance in detail; you will find that 10.0.0.18 is the Amphora's IP on the private network.

You can ping this IP, 10.0.0.18, from the backend server when troubleshooting, to make sure the security groups are set correctly.

ubuntu@testu:~$ ping 10.0.0.18
PING 10.0.0.18 (10.0.0.18) 56(84) bytes of data.
64 bytes from 10.0.0.18: icmp_req=1 ttl=64 time=2.25 ms
64 bytes from 10.0.0.18: icmp_req=2 ttl=64 time=1.13 ms
^C

Debug

  • There is no neutron-lbaasv2-agent daemon in the Mitaka version (i.e., no /usr/bin/python /usr/local/bin/neutron-lbaasv2-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/services/loadbalancer/haproxy/lbaas_agent.ini process).

  • Memory too Small

Make sure you have plenty of memory. I wasted a lot of time debugging this because the Amphora could not be launched. The error message was:

Cannot set up guest memory 'pc.ram': Cannot allocate memory

After enlarging memory to 32 GB and disk to 50 GB, everything worked much better.

Reference

How to use the command line to launch a VM:

http://gogosatellite.blogspot.tw/search?q=minimum

Hacking Ubuntu Cloud Image to Have Your Own User and Password


Most cloud images use a key pair to log in, which is really hard for a beginner.
There is a way to hack the cloud image so you can log in with a regular username and password, with no key pairs.
This is a great tool for me.

Hacking the Cloud VM

Download backdoor-image

$ bzr branch lp:~smoser/+junk/backdoor-image

You might need to install bzr through apt-get install bzr.

Enter the backdoor-image folder

$ cd backdoor-image

Download a Ubuntu cloud image

$ wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img

Copy the image to a working copy

$ cp precise-server-cloudimg-amd64-disk1.img ubuntu12.04-server-cloudimg-amg64-disk1.img

Set the user and password in the cloud image

$ ./backdoor-image --user ubuntu --password ubuntu --password-auth ubuntu12.04-server-cloudimg-amg64-disk1.img

Perfect. You can now boot the image and log in with user ubuntu and password ubuntu.

Thursday, August 4, 2016

Time to Upgrade

Recently a lot of my peripherals have been upgraded, including my Mac, which got an SSD.
I should upgrade along with them: my mindset and my abilities are both due for an upgrade.
More effort, more focus, more dedication. That's what it takes.