Wednesday, August 17, 2016

Study OpenStack Octavia in Mitaka by using DevStack (1)


You must prepare at least 32 GB of RAM; that was the minimum in my testing.
In an earlier attempt I had only 4 GB of RAM, and stacking always failed because the Amphora could not be launched.
So don't make the same mistake.

Installation

See my previous blog post for how to install DevStack Mitaka:

http://gogosatellite.blogspot.tw/2016/04/using-devstack-to-install-openstack.html

Here is the local.conf.

[[local|localrc]]

# Load the external LBaaS plugin.
enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas stable/mitaka
enable_plugin octavia https://git.openstack.org/openstack/octavia stable/mitaka
enable_plugin neutron-lbaas-dashboard https://git.openstack.org/openstack/neutron-lbaas-dashboard stable/mitaka

DEST=/opt/stack

HOST_IP=192.168.2.131

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=tokentoken


ENABLED_SERVICES=key,rabbit,mysql,horizon
ENABLED_SERVICES+=,n-api,n-crt,n-cpu,n-net,n-cond,n-sch,n-novnc,n-cauth
ENABLED_SERVICES+=,g-api,g-reg

# Enabling Neutron (network) Service
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service q-metering
enable_service neutron

Q_PLUGIN=ml2
#Q_USE_DEBUG_COMMAND=True
if [ "$Q_PLUGIN" = "ml2" ]; then
  #Q_ML2_TENANT_NETWORK_TYPE=gre
  Q_ML2_TENANT_NETWORK_TYPE=vxlan
  :
fi
## Neutron options
FIXED_RANGE=10.0.0.0/24
FIXED_NETWORK_SIZE=256
NETWORK_GATEWAY=10.0.0.1
PRIVATE_SUBNET_NAME=privateA

PUBLIC_SUBNET_NAME=public-subnet
FLOATING_RANGE=192.168.2.0/24
PUBLIC_NETWORK_GATEWAY=192.168.2.2
##Q_FLOATING_ALLOCATION_POOL=start=192.168.27.102,end=192.168.27.110
PUBLIC_INTERFACE=eth0
Q_USE_PROVIDERNET_FOR_PUBLIC=True
OVS_PHYSICAL_BRIDGE=br-ex
PUBLIC_BRIDGE=br-ex
OVS_BRIDGE_MAPPINGS=public:br-ex

LIBVIRT_TYPE=qemu

# Enable LBaaS v2
ENABLED_SERVICES+=,q-lbaasv2
ENABLED_SERVICES+=,octavia,o-cw,o-hk,o-hm,o-api


IMAGE_URLS="http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-uec.tar.gz"

SCREEN_LOGDIR=/opt/stack/screen-logs
SYSLOG=True
LOGFILE=~/devstack/stack.sh.log


Q_USE_DEBUG_COMMAND=True

# RECLONE=No
RECLONE=yes
OFFLINE=False

[[post-config|$NOVA_CONF]]
[DEFAULT]
cpu_allocation_ratio=32.0
ram_allocation_ratio=32.0
disk_allocation_ratio=32.0

Now you can start installing DevStack.

./stack.sh

Create Backend Server

Be patient. We launch the backend server first.

See my previous blog post for how to download and customize an Ubuntu cloud VM:

http://gogosatellite.blogspot.tw/2016/08/html-charsetutf-8-cloud-vm-to-use-user.html

After launching the Ubuntu VM, run a SimpleHTTPServer:

python -m SimpleHTTPServer 80

This serves the VM's current directory over HTTP on port 80.
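On images that ship only Python 3, the equivalent command is `python3 -m http.server 80`. As a minimal, self-contained sketch of the same check (server plus client request in one process, using an OS-assigned port instead of 80 so it runs unprivileged):

```python
import threading
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve the current directory, as `python -m SimpleHTTPServer` does.
# Port 0 lets the OS pick a free port for this local test.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % server.server_address[1]
with urllib.request.urlopen(url) as resp:
    status = resp.status          # 200 when the directory listing is served
    body = resp.read().decode()

server.shutdown()
print(status)
```

This is the same directory-listing response that curl will later fetch through the load balancer.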

Create Octavia Service

Horizon

You can easily use Horizon to create the Octavia service; it is quite simple. Note that you must add the backend VM as a pool member and attach a floating IP to the Amphora.
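If you prefer the command line over Horizon, the same service can be built with the neutron LBaaS v2 commands. This is a sketch; the names, the subnet, and the member address 10.0.0.13 are assumptions for illustration:

```shell
# Create the load balancer on the tenant subnet (names are examples)
neutron lbaas-loadbalancer-create --name lb2 privateA

# Add an HTTP listener on port 80
neutron lbaas-listener-create --name listener1 \
    --loadbalancer lb2 --protocol HTTP --protocol-port 80

# Create a round-robin pool behind the listener
neutron lbaas-pool-create --name pool1 \
    --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP

# Add the backend VM's fixed IP as a pool member
neutron lbaas-member-create --subnet privateA \
    --address 10.0.0.13 --protocol-port 80 pool1
```

Each step waits for the previous object to leave PENDING_CREATE, so run them one at a time.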

After Creating the Octavia Service

After that, wait five minutes or more until you see log messages like the following in o-cw.log.

tail -n 100 -f o-cw.log

2016-08-17 04:00:02.604 20588 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.
2016-08-17 04:00:03.607 20588 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.
2016-08-17 04:00:04.609 20588 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.
2016-08-17 04:00:05.612 20588 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.
2016-08-17 04:00:06.614 20588 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.
2016-08-17 04:00:07.616 20588 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.
.
.
2016-08-17 04:01:12.083 20588 INFO octavia.controller.worker.tasks.database_tasks [-] Mark ACTIVE in DB for load balancer id: 93a920f0-a934-4ef9-bcb7-32c9ed022966
2016-08-17 04:01:16.512 20588 INFO octavia.controller.queue.endpoint [-] Creating listener '906bd763-d1eb-4d7f-967b-669cbdde9bab'...
2016-08-17 04:01:25.631 20588 INFO octavia.controller.queue.endpoint [-] Creating pool '51ae89ca-5f0a-4344-b217-a0527cb11992'...

After you see the pool created, check the provisioning_status: it may still be PENDING_CREATE, so wait until it becomes ACTIVE.

stack@devoct:~/devstack$ neutron lbaas-loadbalancer-list
+--------------------------------------+-----------------+-------------+---------------------+----------+
| id                                   | name            | vip_address | provisioning_status | provider |
+--------------------------------------+-----------------+-------------+---------------------+----------+
| 0cc3a0dd-3849-4a95-9f9c-126bb8dc1437 | Load Balancer 1 | 10.0.0.10   | ACTIVE              | octavia  |
| 93a920f0-a934-4ef9-bcb7-32c9ed022966 | Load Balancer 2 | 10.0.0.17   | ACTIVE              | octavia  |
+--------------------------------------+-----------------+-------------+---------------------+----------+

Where Load Balancer 2 is our target.

Modify the Security Groups of the Amphora and the Backend VM

This is quite important: if you cannot connect to the backend server, it is probably due to security groups. To avoid problems, we allow all traffic within the network. Note that when you create the Octavia service as user demo, the Amphora VM is launched by admin, not demo. You will not see the Amphora VM as user demo, although you can see the Octavia service. If you are using Horizon, switch to the admin user and check the instances; you will see the Amphora VM running under admin, not demo.

So to allow all traffic in the network, you must modify the security groups for both the demo and admin users.

Security Group in User Demo

To allow all traffic, use Horizon to enable ICMP, TCP, UDP, DNS, HTTP, and SSH for both ingress and egress.
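The same rules can be added from the CLI with `nova secgroup-add-rule`. A sketch, run as the demo user; these wide-open ranges are for testing only and should not be used in production:

```shell
# Allow all ICMP (ping), then all TCP and UDP ports from anywhere
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 1 65535 0.0.0.0/0
nova secgroup-add-rule default udp 1 65535 0.0.0.0/0
```

Repeat the equivalent change as admin for the lb-* security groups shown below.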

stack@devoct:~/devstack$ nova secgroup-list-rules default
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 80        | 80      | 0.0.0.0/0 |              |
| tcp         | 1         | 65535   | 0.0.0.0/0 |              |
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
| udp         | 1         | 65535   | 0.0.0.0/0 |              |
| tcp         | 53        | 53      | 0.0.0.0/0 |              |
| tcp         | 443       | 443     | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

Security Group in User Admin

stack@devoct:~/devstack$ nova secgroup-list
+--------------------------------------+-----------------------------------------+------------------------+
| Id                                   | Name                                    | Description            |
+--------------------------------------+-----------------------------------------+------------------------+
| 001944c9-721a-44fa-b400-0a07f598221c | default                                 | Default security group |
| 95697e4b-cd96-49e3-a865-093ee7835390 | lb-0cc3a0dd-3849-4a95-9f9c-126bb8dc1437 |                        |
| 67c13a32-a5f4-4cec-bc7a-10758b503671 | lb-93a920f0-a934-4ef9-bcb7-32c9ed022966 |                        |
| 2fb55456-98e4-411d-9442-a3b72a30bf4a | lb-mgmt-sec-grp                         |                        |
+--------------------------------------+-----------------------------------------+------------------------+

where lb-93a920f0-a934-4ef9-bcb7-32c9ed022966 is the security group of our load balancer. Modify it in Horizon as follows; here we enable all TCP, ICMP, and HTTP traffic for both ingress and egress.

stack@devoct:~/devstack$ nova secgroup-list-rules lb-93a920f0-a934-4ef9-bcb7-32c9ed022966
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 1         | 65535   | 0.0.0.0/0 |              |
| tcp         | 80        | 80      | 0.0.0.0/0 |              |
| udp         | 1         | 65535   | 0.0.0.0/0 |              |
| tcp         | 1025      | 1025    | 0.0.0.0/0 |              |
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

Result

From a client, access the floating IP of the Amphora.
Here 192.168.140.8 has been attached to the Amphora as a floating IP.
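Attaching the floating IP can also be done from the CLI. A sketch, assuming the VIP port ID is taken from `neutron lbaas-loadbalancer-show` output; the IDs below are placeholders:

```shell
# Allocate a floating IP on the external network
neutron floatingip-create public

# Associate it with the load balancer's VIP port
neutron floatingip-associate FLOATINGIP_ID VIP_PORT_ID
```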

junmeinde-MacBook-Pro:~ junmein$ curl 192.168.140.8
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>
<title>Directory listing for /</title>
<body>
<h2>Directory listing for /</h2>
<hr>
<ul>
<li><a href=".bash_history">.bash_history</a>
<li><a href=".bash_logout">.bash_logout</a>
<li><a href=".bashrc">.bashrc</a>
<li><a href=".cache/">.cache/</a>
<li><a href=".profile">.profile</a>
<li><a href=".ssh/">.ssh/</a>
<li><a href=".sudo_as_admin_successful">.sudo_as_admin_successful</a>
<li><a href=".viminfo">.viminfo</a>
</ul>
<hr>
</body>
</html>

In the Backend VM

ubuntu@testu:~$ sudo python -m SimpleHTTPServer 80

10.0.0.18 - - [17/Aug/2016 08:09:30] "GET / HTTP/1.1" 200 -
10.0.0.18 - - [17/Aug/2016 08:09:53] "GET / HTTP/1.1" 200 -

You will see the traffic flow from the client through the Amphora to the backend server.

What is 10.0.0.18 in the above result?

stack@devoct:~/devstack$ nova list
+--------------------------------------+----------------------------------------------+--------+------------+-------------+----------------------------------------------------------------------------------+
| ID                                   | Name                                         | Status | Task State | Power State | Networks                                                                         |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+----------------------------------------------------------------------------------+
| 66cac671-ad9e-4815-9f00-9d0889b8bdca | amphora-54fd4f13-6b56-49e5-af76-88aabc01df68 | ACTIVE | -          | Running     | lb-mgmt-net=192.168.0.6; private=fd4a:3193:7fb5:0:f816:3eff:fe22:9bde, 10.0.0.18 |
| 48a7ad73-57d9-4e75-80ca-5753a9f047ed | amphora-73a06a20-42d7-4e1c-8320-613984f35428 | ACTIVE | -          | Running     | lb-mgmt-net=192.168.0.5; private=fd4a:3193:7fb5:0:f816:3eff:fe11:b756, 10.0.0.11 |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+----------------------------------------------------------------------------------+

As admin, you can see the details of the Amphora instances; 10.0.0.18 is the IP of an Amphora on the private network.

You can ping this IP, 10.0.0.18, from the backend server to troubleshoot and confirm that the security groups are set correctly.

ubuntu@testu:~$ ping 10.0.0.18
PING 10.0.0.18 (10.0.0.18) 56(84) bytes of data.
64 bytes from 10.0.0.18: icmp_req=1 ttl=64 time=2.25 ms
64 bytes from 10.0.0.18: icmp_req=2 ttl=64 time=1.13 ms
^C
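Beyond ping, a TCP-level check confirms that the security groups actually pass the service port, since ICMP and TCP are ruled separately. A small helper; the Amphora IP in the comment is the one from this setup and is only an example target:

```python
import socket

def tcp_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: from the backend VM, check the Amphora's HTTP side
# tcp_reachable("10.0.0.18", 80)
```

If ping works but this returns False for port 80, the TCP ingress rule is the missing piece.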

Debug

  • No neutron-lbaasv2-agent daemon in the Mitaka version. You will not find a process such as:

    /usr/bin/python /usr/local/bin/neutron-lbaasv2-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/services/loadbalancer/haproxy/lbaas_agent.ini

  • Memory too Small

Make sure you have enough memory; I wasted a lot of time debugging because the Amphora could not be launched. The following is the error message:

Cannot set up guest memory 'pc.ram': Cannot allocate memory

After enlarging memory to 32 GB and disk to 50 GB, everything worked.

Reference

How to launch a VM from the command line:

http://gogosatellite.blogspot.tw/search?q=minimum
