
Thursday, August 3, 2017

OpenStack and SQLAlchemy

sqlalchemy

Many thanks to this article:

http://www.dangtrinh.com/2013/06/sqlalchemy-python-module-with-mysql.html

SQLAlchemy in OpenStack

If we are going to use Python, we really should look at what OpenStack provides. To start with, the DB schema that OpenStack (Nova) defines all lives in this file:

nova/nova/db/sqlalchemy/models.py

Let's look at how it is defined: each table definition is placed in a class.

class InstanceTypes(BASE, NovaBase):
    """Represents possible flavors for instances.

    Note: instance_type and flavor are synonyms and the term instance_type is
    deprecated and in the process of being removed.
    """
    __tablename__ = "instance_types"

    __table_args__ = (
        schema.UniqueConstraint("flavorid", "deleted",
                                name="uniq_instance_types0flavorid0deleted"),
        schema.UniqueConstraint("name", "deleted",
                                name="uniq_instance_types0name0deleted")
    )

    # Internal only primary key/id
    id = Column(Integer, primary_key=True)
    name = Column(String(255))
    memory_mb = Column(Integer, nullable=False)
    vcpus = Column(Integer, nullable=False)
    root_gb = Column(Integer)
    ephemeral_gb = Column(Integer)
    # Public facing id will be renamed public_id
    flavorid = Column(String(255))
    swap = Column(Integer, nullable=False, default=0)
    rxtx_factor = Column(Float, default=1)
    vcpu_weight = Column(Integer)
    disabled = Column(Boolean, default=False)
    is_public = Column(Boolean, default=True)

Next, let's look at what OpenStack imports:

from sqlalchemy import (Column, Index, Integer, BigInteger, Enum, String,
                        schema, Unicode)
from sqlalchemy.dialects.mysql import MEDIUMTEXT
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import orm
from sqlalchemy import ForeignKey, DateTime, Boolean, Text, Float

from nova.db.sqlalchemy import types
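
To get a feel for how such declarative models are used, here is a minimal, self-contained sketch in the same style. The Flavor table, its columns and the SQLite URL are made up for illustration; Nova itself points SQLAlchemy at MySQL via the connection string in nova.conf.

# Minimal standalone sketch in the same declarative style as Nova's models.
# The Flavor table, its columns and the SQLite URL are illustrative only.
from sqlalchemy import Column, Integer, String, Boolean, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

BASE = declarative_base()


class Flavor(BASE):
    __tablename__ = 'flavors'

    id = Column(Integer, primary_key=True)
    name = Column(String(255))
    vcpus = Column(Integer, nullable=False)
    memory_mb = Column(Integer, nullable=False)
    is_public = Column(Boolean, default=True)


# In-memory SQLite keeps the example self-contained.
engine = create_engine('sqlite:///:memory:')
BASE.metadata.create_all(engine)

Session = sessionmaker(bind=engine)
session = Session()
session.add(Flavor(name='m1.small', vcpus=1, memory_mb=2048))
session.commit()

for flavor in session.query(Flavor).filter_by(is_public=True):
    print(flavor.name, flavor.vcpus, flavor.memory_mb)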

RabbitMQ and Linux Keepalive Settings

rabbitmq

RabbitMQ Keepalive System Settings

We need to tune the system's TCP keepalive parameters to reduce the keepalive idle time and the number of probes. The kernel defaults are:

tcp keepalive (time=7200, intvl=75, probes=9)

Reduce them, for example:


net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 5
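
These sysctl values apply system-wide. An application can also request keepalive per socket; here is a minimal Python sketch using the Linux-only socket options (the broker address is a placeholder, and the values simply mirror the sysctl settings above):

# Per-socket TCP keepalive sketch (Linux-only socket options).
# Broker address and values are for illustration only.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
# Idle time before the first probe, interval between probes, probe count.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)

# sock.connect(('rabbitmq.example.com', 5672))  # then connect to the broker as usual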

OpenStack Trove

trove

Trove Installation

wget http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2
mv mysql.qcow2 trove-mysql.qcow2

glance image-create --name "mysql-5.6" --file trove-mysql.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress

sudo trove-manage datastore_update mysql ''
Glance_Image_ID=$(glance image-list | awk '/ mysql-5.6 / { print $2 }')
sudo trove-manage datastore_version_update mysql mysql-5.6 mysql ${Glance_Image_ID} '' 1
FLAVOR_ID=$(openstack flavor list | awk '/ m1.small / { print $2 }')
trove create mysql-instance ${FLAVOR_ID} --size 5 --databases myDB --users user:r00tme --datastore_version mysql-5.6 --datastore mysql
trove list

Error

Error Message

2016-08-22 16:15:16.712 7651 DEBUG trove.taskmanager.models [-] Successfully created security group for instance: d87625a2-17ac-4bb0-9c50-19ca1fe92084 create_instance /opt/stack/trove/trove/taskmanager/models.py:393
2016-08-22 16:15:16.712 7651 DEBUG trove.taskmanager.models [-] Begin _create_server_volume_individually for id: d87625a2-17ac-4bb0-9c50-19ca1fe92084 _create_server_volume_individually /opt/stack/trove/trove/taskmanager/models.py:783
2016-08-22 16:15:16.713 7651 DEBUG trove.taskmanager.models [-] trove volume support = True _build_volume_info /opt/stack/trove/trove/taskmanager/models.py:811
2016-08-22 16:15:16.713 7651 DEBUG trove.taskmanager.models [-] Begin _create_volume for id: d87625a2-17ac-4bb0-9c50-19ca1fe92084 _create_volume /opt/stack/trove/trove/taskmanager/models.py:844
2016-08-22 16:15:16.713 7651 ERROR trove.taskmanager.models [-] Failed to create volume for instance d87625a2-17ac-4bb0-9c50-19ca1fe92084
Endpoint not found for service_type=volumev2, endpoint_type=publicURL, endpoint_region=RegionOne.
Traceback (most recent call last):
  File "/opt/stack/trove/trove/taskmanager/models.py", line 815, in _build_volume_info
    volume_size, volume_type, datastore_manager)
  File "/opt/stack/trove/trove/taskmanager/models.py", line 845, in _create_volume
    volume_client = create_cinder_client(self.context)
  File "/opt/stack/trove/trove/common/remote.py", line 128, in cinder_client
    endpoint_type=CONF.cinder_endpoint_type)
  File "/opt/stack/trove/trove/common/remote.py", line 71, in get_endpoint
    endpoint_type=endpoint_type)
NoServiceEndpoint: Endpoint not found for service_type=volumev2, endpoint_type=publicURL, endpoint_region=RegionOne.

Does Trove need a volumev2 endpoint from Cinder to create the data volume for its instances? Looking at the code, yes:

119 def cinder_client(context):
120     if CONF.cinder_url:
121         url = '%(cinder_url)s%(tenant)s' % {
122             'cinder_url': normalize_url(CONF.cinder_url),
123             'tenant': context.tenant}
124     else:
125         url = get_endpoint(context.service_catalog,
126                            service_type=CONF.cinder_service_type,
127                            endpoint_region=CONF.os_region_name,
128                            endpoint_type=CONF.cinder_endpoint_type)
stack@trove:/etc/trove$ openstack service list
+----------------------------------+-------------+----------------+
| ID                               | Name        | Type           |
+----------------------------------+-------------+----------------+
| 7ebcc121e88c427a81b509334dd839e4 | trove       | database       |
| 90125dfe6a434ef3b0174cb7248c69f2 | nova_legacy | compute_legacy |
| 9a07a66686fa4e0a89201d98f137a898 | neutron     | network        |
| 9a8a8b2da8104b8c8422d134b2dff319 | nova        | compute        |
| b506135021f64a98899c378cbd47bf5f | keystone    | identity       |
| e0cb6a6687b043db869e5c0e06683d33 | glance      | image          |
+----------------------------------+-------------+----------------+
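
Before changing anything, we can confirm programmatically what Trove's get_endpoint() would see. A minimal sketch using keystoneauth1 (auth URL, credentials and project are placeholders):

# Check whether a volumev2 endpoint exists in the service catalog.
# Auth URL, credentials and project are placeholders for illustration.
from keystoneauth1.identity import v3
from keystoneauth1 import session, exceptions

auth = v3.Password(auth_url='http://192.168.140.20:5000/v3',
                   username='admin', password='password',
                   project_name='admin',
                   user_domain_id='default', project_domain_id='default')
sess = session.Session(auth=auth)

try:
    url = sess.get_endpoint(service_type='volumev2',
                            interface='public',
                            region_name='RegionOne')
    print('volumev2 endpoint:', url)
except exceptions.EndpointNotFound:
    print('No volumev2 endpoint - Trove volume creation will fail')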

Hence, we add Cinder to local.conf:

CINDER_BRANCH=stable/mitaka
# Enable Cinder - Block Storage service for OpenStack
VOLUME_GROUP="cinder-volumes"
enable_service cinder c-api c-vol c-sch c-bak

After that, we can see the volumev2 endpoint:

stack@ubuntu:~/devstack$ openstack service list
+----------------------------------+-------------+----------------+
| ID                               | Name        | Type           |
+----------------------------------+-------------+----------------+
| 23058a3ea403442fb92f602fd4ebb777 | cinderv2    | volumev2       |
| 297f61ee0df84e4f8b49657af3b816cf | nova        | compute        |
| 674ab4b086c64dc8aa51afabc7a8f203 | neutron     | network        |
| 6e506e2ae0c14ca6a605cbf7828f0a1d | cinder      | volume         |
| b961bd89072e4abeabdf7088854f4e55 | glance      | image          |
| ddd741dae5904cd49d26badc8d17e7ef | keystone    | identity       |
| f6ade7c1e3564fa28e5c5c73a181c3a3 | nova_legacy | compute_legacy |
+----------------------------------+-------------+----------------+
For reference, here is the full local.conf we used:

[[local|localrc]]
DEST=/opt/stack

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=tokentoken
HOST_IP=192.168.140.20

ENABLED_SERVICES=key,rabbit,mysql,horizon
ENABLED_SERVICES+=,n-api,n-crt,n-cpu,n-net,n-cond,n-sch,n-novnc,n-cauth
ENABLED_SERVICES+=,g-api,g-reg

# Enable Cinder - Block Storage service for OpenStack
CINDER_BRANCH=stable/mitaka
VOLUME_GROUP="cinder-volumes"
enable_service cinder c-api c-vol c-sch c-bak

# Enabling trove
TROVE_BRANCH=stable/mitaka
enable_plugin trove git://git.openstack.org/openstack/trove stable/mitaka stable/mitaka
enable_plugin trove-dashboard git://git.openstack.org/openstack/trove-dashboard stable/mitaka


# Enabling Neutron (network) Service
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service q-metering
enable_service neutron

Q_PLUGIN=ml2
#Q_USE_DEBUG_COMMAND=True
if [ "$Q_PLUGIN" = "ml2" ]; then
  #Q_ML2_TENANT_NETWORK_TYPE=gre
  Q_ML2_TENANT_NETWORK_TYPE=vxlan
  :
fi
## Neutron options
FIXED_RANGE=10.0.0.0/24
FIXED_NETWORK_SIZE=256
NETWORK_GATEWAY=10.0.0.1
PRIVATE_SUBNET_NAME=privateA

PUBLIC_SUBNET_NAME=public-subnet
FLOATING_RANGE=192.168.140.0/24
PUBLIC_NETWORK_GATEWAY=192.168.140.254
##Q_FLOATING_ALLOCATION_POOL=start=192.168.27.102,end=192.168.27.110
PUBLIC_INTERFACE=eth0
Q_USE_PROVIDERNET_FOR_PUBLIC=True
OVS_PHYSICAL_BRIDGE=br-ex
PUBLIC_BRIDGE=br-ex
OVS_BRIDGE_MAPPINGS=public:br-ex

LIBVIRT_TYPE=qemu

## Enable Trove

ENABLED_SERVICES+=,trove,tr-api,tr-tmgr,tr-cond


IMAGE_URLS="http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-uec.tar.gz"

SCREEN_LOGDIR=/opt/stack/screen-logs
SYSLOG=True
LOGFILE=~/devstack/stack.sh.log


Q_USE_DEBUG_COMMAND=True

# RECLONE=No
RECLONE=yes
OFFLINE=False

After installing Cinder, we still got an error:

No valid host was found. There are not enough hosts available.
Code: 500
Details:
File "/opt/stack/nova/nova/conductor/manager.py", line 392, in build_instances
    context, request_spec, filter_properties)
File "/opt/stack/nova/nova/conductor/manager.py", line 436, in _schedule_instances
    hosts = self.scheduler_client.select_destinations(context, spec_obj)
File "/opt/stack/nova/nova/scheduler/utils.py", line 372, in wrapped
    return func(*args, **kwargs)
File "/opt/stack/nova/nova/scheduler/client/__init__.py", line 51, in select_destinations
    return self.queryclient.select_destinations(context, spec_obj)
File "/opt/stack/nova/nova/scheduler/client/__init__.py", line 37, in __run_method
    return getattr(self.instance, __name)(*args, **kwargs)
File "/opt/stack/nova/nova/scheduler/client/query.py", line 32, in select_destinations
    return self.scheduler_rpcapi.select_destinations(context, spec_obj)
File "/opt/stack/nova/nova/scheduler/rpcapi.py", line 121, in select_destinations
    return cctxt.call(ctxt, 'select_destinations', **msg_args)
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 158, in call
    retry=self.retry)
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, in _send
    timeout=timeout, retry=retry)
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 470, in send
    retry=retry)
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 461, in _send
    raise result
Created: Aug. 23, 2016, 5:48 a.m.

After switching to the m1.small flavor, it works. We can see the status in Horizon:

mysql-instance  mysql-5.6    10.0.0.4 fd1d:6b4e:634a:0:f816:3eff:fea4:f2c2 m1.small  Active nova    None    Running 1 minute    
stack@trove2:~/trove-test$ trove list
+--------------------------------------+----------------+-----------+-------------------+--------+-----------+------+
| ID                                   | Name           | Datastore | Datastore Version | Status | Flavor ID | Size |
+--------------------------------------+----------------+-----------+-------------------+--------+-----------+------+
| 0d1cf949-2db9-4d73-8843-fc7a7d279a11 | mysql-instance | mysql     | mysql-5.6         | ERROR  | 3         |    5 |
| f86da618-0d7f-464b-b051-769f1864095e | mysql-instance | mysql     | mysql-5.6         | BUILD  | 2         |    5 |
+--------------------------------------+----------------+-----------+-------------------+--------+-----------+------+

Monday, July 24, 2017

Architecture of OpenStack L3 Router HA

l3routerha

On each of the two network nodes we can see the same virtual router, qrouter-f8d376d2-6d8a-4ec1-9939-4c6ec25f8cec:

[root@openstackcontroller13 ~]# ip netns list|grep f8d376d2-6d8a-4ec1-9939-4c6ec25f8cec
qrouter-f8d376d2-6d8a-4ec1-9939-4c6ec25f8cec
[root@openstackcontroller13 ~]# ip netns exec qrouter-f8d376d2-6d8a-4ec1-9939-4c6ec25f8cec bash
[root@openstackcontroller13 ~]# ifconfig
ha-880fa0e2-8d: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 169.254.192.4  netmask 255.255.192.0  broadcast 169.254.255.255
        inet6 fe80::f816:3eff:fe90:adf0  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:90:ad:f0  txqueuelen 0  (Ethernet)
        RX packets 609493  bytes 32927812 (31.4 MiB)
        RX errors 0  dropped 43  overruns 0  frame 0
        TX packets 304608  bytes 16449072 (15.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qg-3674d949-4c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.89.151.168  netmask 255.255.0.0  broadcast 0.0.0.0
        inet6 fe80::f816:3eff:fe8e:b815  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:8e:b8:15  txqueuelen 0  (Ethernet)
        RX packets 68441559  bytes 19571436387 (18.2 GiB)
        RX errors 0  dropped 2251  overruns 0  frame 0
        TX packets 55319  bytes 5194356 (4.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-352424b9-3e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 192.168.20.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::f816:3eff:fe12:5526  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:12:55:26  txqueuelen 0  (Ethernet)
        RX packets 3675  bytes 366823 (358.2 KiB)
        RX errors 0  dropped 13  overruns 0  frame 0
        TX packets 1394  bytes 132232 (129.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@openstackcontroller12 ~]# ip netns exec qrouter-f8d376d2-6d8a-4ec1-9939-4c6ec25f8cec bash
[root@openstackcontroller12 ~]# ifconfig
ha-71d6264d-9d: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 169.254.192.3  netmask 255.255.192.0  broadcast 169.254.255.255
        inet6 fe80::f816:3eff:fee7:4c03  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:e7:4c:03  txqueuelen 0  (Ethernet)
        RX packets 800714  bytes 43265351 (41.2 MiB)
        RX errors 0  dropped 31  overruns 0  frame 0
        TX packets 12  bytes 1008 (1008.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qg-3674d949-4c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        ether fa:16:3e:8e:b8:15  txqueuelen 0  (Ethernet)
        RX packets 58884872  bytes 18031883270 (16.7 GiB)
        RX errors 0  dropped 2002  overruns 0  frame 0
        TX packets 1  bytes 110 (110.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-352424b9-3e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        ether fa:16:3e:12:55:26  txqueuelen 0  (Ethernet)
        RX packets 500  bytes 57320 (55.9 KiB)
        RX errors 0  dropped 17  overruns 0  frame 0
        TX packets 1  bytes 110 (110.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Let's check keepalived:

ps aux|grep keepalived |grep f8d376d2-6d8a-4ec1-9939-4c6ec25f8cec

root     39958  0.0  0.0 111636  1364 ?        Ss   Oct26   0:23 keepalived -P -f /var/lib/neutron/ha_confs/f8d376d2-6d8a-4ec1-9939-4c6ec25f8cec/keepalived.conf -p /var/lib/neutron/ha_confs/f8d376d2-6d8a-4ec1-9939-4c6ec25f8cec.pid -r /var/lib/neutron/ha_confs/f8d376d2-6d8a-4ec1-9939-4c6ec25f8cec.pid-vrrp

The process above is launched inside the network namespace. However, a network namespace does not isolate processes, so all processes are visible from anywhere on the host. Incidentally, see my earlier study of network namespaces:

http://gogosatellite.blogspot.tw/2016/06/playing-openvswitch-and-namespace-veth.html

/var/lib/neutron/ha_confs/f8d376d2-6d8a-4ec1-9939-4c6ec25f8cec/keepalived.conf

vrrp_instance VR_2 {
    state BACKUP
    interface ha-71d6264d-9d
    virtual_router_id 2
    priority 50
    garp_master_delay 60
    nopreempt
    advert_int 2
    track_interface {
        ha-71d6264d-9d
    }
    virtual_ipaddress {
        169.254.0.2/24 dev ha-71d6264d-9d
    }
    virtual_ipaddress_excluded {
        10.89.151.168/16 dev qg-3674d949-4c
        192.168.20.1/24 dev qr-352424b9-3e
        fe80::f816:3eff:fe12:5526/64 dev qr-352424b9-3e scope link
        fe80::f816:3eff:fe8e:b815/64 dev qg-3674d949-4c scope link
    }
    virtual_routes {
        0.0.0.0/0 via 10.89.1.254 dev qg-3674d949-4c
    }
}

The explanation from the official wiki:

https://wiki.openstack.org/wiki/Neutron/L3HighAvailability_VRRP

global_defs {
    router_id ${VR_ID}
}
vrrp_sync_group VG${VR_GROUP_ID} {
    group {
        VI_HA
    }
    % if NOTIFY_SCRIPT:
    notify_master ${NOTIFY_SCRIPT}
    % endif
}

vrrp_instance VI_HA {
    % if TYPE == 'MASTER':
    state MASTER
    % else:
    state SLAVE
    % endif
    interface ${L3_AGENT.get_ha_device_name(TRACK_PORT_ID)}
    virtual_router_id ${VR_ID}
    priority ${PRIORITY}
    track_interface {
        ${L3_AGENT.get_ha_device_name(TRACK_PORT_ID)}
    }
    virtual_ipaddress {
        % if EXTERNAL_PORT:
        ${EXTERNAL_PORT['ip_cidr']} dev ${L3_AGENT.get_external_device_name(EXTERNAL_PORT['id'])}
        % if FLOATING_IPS:
        ${FLOATING_IPS[0]['floating_ip_address']}/32 dev ${L3_AGENT.get_external_device_name(EXTERNAL_PORT['id'])}
        % endif
        % endif

        % if INTERNAL_PORTS:
        ${INTERNAL_PORTS[0]['ip_cidr']} dev ${L3_AGENT.get_internal_device_name(INTERNAL_PORTS[0]['id'])}
        % endif
    }
    virtual_ipaddress_excluded {
        % if EXTERNAL_PORT:
        % for FLOATING_IP in FLOATING_IPS[1:]:
        ${FLOATING_IP['floating_ip_address']}/32 dev ${L3_AGENT.get_external_device_name(EXTERNAL_PORT['id'])}
        % endfor
        % endif

        % for INTERNAL_PORT in INTERNAL_PORTS[1:]:
        ${INTERNAL_PORT['ip_cidr']} dev ${L3_AGENT.get_internal_device_name(INTERNAL_PORT['id'])}
        % endfor
    }

    % if EXTERNAL_PORT:
    virtual_routes {
        0.0.0.0/0 via ${EXTERNAL_PORT['ip_cidr'].split('/')[0]} dev ${L3_AGENT.get_external_device_name(EXTERNAL_PORT['id'])}
    }
    % endif
}

virtual_ipaddress configures the VIP, while virtual_ipaddress_excluded configures the IPs of the network devices inside the namespace. The standby router has none of these addresses configured until a failover occurs. The configuration also covers the MAC addresses, which are identical on both nodes.

Tuesday, October 4, 2016

OpenStack: How to bind a Tenant/Project to a Specific Region

keystonetenantbindRegion

How to bind a Tenant to a specific Region

A long time ago I tried to solve this problem: can a tenant be bound to a region?
Now we have the answer: we can use Mitaka Keystone with the v3 API to do it.

http://developer.openstack.org/api-ref/identity/v3-ext/?expanded=create-endpoint-group-detail

The v3 API renames Tenant to Project, so the two terms are used interchangeably here.

We can use the keystone v3 extension API: v3/OS-EP-FILTER/projects/{project_id}/endpoints/{endpoint_id}.

Binding a Tenant to a Region

First of all, we find the existing endpoint (and its region) that we want to bind to.

root@mitakakeystone:~/v3keystone# curl  -si -H"X-Auth-Token:admintoken" -H "Content-type: application/json" "http://localhost:35357/v3/endpoints"
HTTP/1.1 200 OK
Vary: X-Auth-Token
Content-Type: application/json
Content-Length: 3327
X-Openstack-Request-Id: req-99c406c8-2f1e-41b8-8bb4-bfad7beb2c99
Date: Tue, 04 Oct 2016 06:20:45 GMT

.
 {"region_id": "RR", "links": {"self": "http://localhost:35357/v3/endpoints/909a15358430456cab93020fab101b9f"},   
 "url": "http://localhost:35357/v3/services/c6f427a715a54d0190ec8364d46f307b", "region": "RR", "enabled": true,   
 "interface": "public", "service_id": "c6f427a715a54d0190ec8364d46f307b", "id": "909a15358430456cab93020fab101b9f"}   
.

Remember the endpoint ID 909a15358430456cab93020fab101b9f (it belongs to region RR); we will bind a project to this endpoint, and therefore to this region.

Let's see whether any endpoint is already bound to the project (tenant) ID via the /v3/OS-EP-FILTER API.

root@mitakakeystone:~/v3keystone# curl -si -H "X-Auth-Token:admintoken" -H "Content-type: application/json" "http://localhost:35357/v3/OS-EP-FILTER/projects/404838490fab42e7ad560b66725e4f64/endpoints"
HTTP/1.1 200 OK
Vary: X-Auth-Token
Content-Type: application/json
Content-Length: 162
X-Openstack-Request-Id: req-560e8715-e992-42e6-b761-9c3487040a93
Date: Tue, 04 Oct 2016 06:16:38 GMT

{"endpoints": [], "links": {"self": "http://localhost:35357/v3/OS-EP-FILTER/projects/404838490fab42e7ad560b66725e4f64/endpoints", "previous": null, "next": null}}

No binding at all.

Now we can bind project ID 404838490fab42e7ad560b66725e4f64 to endpoint ID 909a15358430456cab93020fab101b9f:

curl -X PUT -si -H "X-Auth-Token:admintoken" -H "Content-type: application/json" "http://localhost:35357/v3/OS-EP-FILTER/projects/404838490fab42e7ad560b66725e4f64/endpoints/909a15358430456cab93020fab101b9f"

Let's check it:

root@mitakakeystone:~/v3keystone# curl -si -H "X-Auth-Token:admintoken" -H "Content-type: application/json" "http://localhost:35357/v3/OS-EP-FILTER/projects/404838490fab42e7ad560b66725e4f64/endpoints"
HTTP/1.1 200 OK
Vary: X-Auth-Token
Content-Type: application/json
Content-Length: 498
X-Openstack-Request-Id: req-404dd3f3-df71-4665-bd8e-acae1f33b2a9
Date: Tue, 04 Oct 2016 06:18:02 GMT

{"endpoints": [{"region_id": "RR", "links": {"self": "http://localhost:35357/v3/endpoints/909a15358430456cab93020fab101b9f"}, "url": "http://localhost:35357/v3/services/c6f427a715a54d0190ec8364d46f307b", "region": "RR", "enabled": true, "interface": "public", "service_id": "c6f427a715a54d0190ec8364d46f307b", "id": "909a15358430456cab93020fab101b9f"}], "links": {"self": "http://localhost:35357/v3/OS-EP-FILTER/projects/404838490fab42e7ad560b66725e4f64/endpoints", "previous": null, "next": null}}

We can see that the project is now bound to a specific region.

Looking at the original project API, nothing has changed there:

root@mitakakeystone:~/v3keystone# curl  -si -H"X-Auth-Token:admintoken" -H "Content-type: application/json" "http://localhost:35357/v3/projects/404838490fab42e7ad560b66725e4f64"
HTTP/1.1 200 OK
Vary: X-Auth-Token
Content-Type: application/json
Content-Length: 283
X-Openstack-Request-Id: req-26d3a544-83ac-4b9b-a10b-ba7f5bbfbf19
Date: Tue, 04 Oct 2016 06:29:47 GMT

{"project": {"is_domain": false, "description": "Service Tenant", "links": {"self": "http://localhost:35357/v3/projects/404838490fab42e7ad560b66725e4f64"}, "enabled": true, "id": "404838490fab42e7ad560b66725e4f64", "parent_id": "default", "domain_id": "default", "name": "service1"}}

So, to solve the problem, we must use the v3/OS-EP-FILTER API.
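
The same calls can also be scripted. A minimal sketch with the requests library, reusing the token header, project ID and endpoint ID from the curl commands above:

# Script the OS-EP-FILTER calls shown above with plain HTTP requests.
# The token, project ID and endpoint ID are the ones used in this post.
import requests

BASE_URL = 'http://localhost:35357/v3'
HEADERS = {'X-Auth-Token': 'admintoken', 'Content-Type': 'application/json'}
project_id = '404838490fab42e7ad560b66725e4f64'
endpoint_id = '909a15358430456cab93020fab101b9f'

# List endpoints currently associated with the project.
r = requests.get('%s/OS-EP-FILTER/projects/%s/endpoints'
                 % (BASE_URL, project_id), headers=HEADERS)
print(r.json()['endpoints'])

# Associate the endpoint (and therefore its region) with the project.
r = requests.put('%s/OS-EP-FILTER/projects/%s/endpoints/%s'
                 % (BASE_URL, project_id, endpoint_id), headers=HEADERS)
print(r.status_code)  # 204 No Content on success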

Thursday, September 22, 2016

Explaining Keystone Domain, Project, Group, User and Role in a Single Image

keystoneUserandRole

Explaining Keystone Domain, Project, Group, User and Role in a Single Image

  1. Fig 1
    • Each Domain is unique in OpenStack
    • Multiple Projects can be mapped to one Domain, but a Project cannot be mapped to more than one Domain
  2. Fig 2
    • One User can be mapped to multiple Projects
    • One User can have a different Role in each Project
  3. Fig 3
    • Multiple Users can be mapped to one Group
    • One Group can be mapped to multiple Projects
    • One Group can have a different Role for each Project
  4. Fig 4
    • Domains have admins
    • Projects have admins
    • Groups have admins
    • A Group can have a different admin for each Project
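
As a rough illustration of how these relationships map onto the python-keystoneclient v3 API, here is a hedged sketch (the auth URL, credentials, names and role are made up, and exact signatures can vary slightly between releases):

# Sketch: create a domain, project, user and group, then grant roles.
# Auth URL, credentials and names are placeholders for illustration.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client

auth = v3.Password(auth_url='http://localhost:35357/v3',
                   username='admin', password='password',
                   project_name='admin',
                   user_domain_id='default', project_domain_id='default')
keystone = client.Client(session=session.Session(auth=auth))

domain = keystone.domains.create(name='domainA')
project = keystone.projects.create(name='projectA', domain=domain)
user = keystone.users.create(name='userA', domain=domain, password='secret')
group = keystone.groups.create(name='groupA', domain=domain)
role = keystone.roles.create(name='member')

# A user can hold different roles in different projects (Fig 2);
# a group can hold a role on a project on behalf of all its users (Fig 3).
keystone.roles.grant(role, user=user, project=project)
keystone.roles.grant(role, group=group, project=project)
keystone.users.add_to_group(user, group)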

Friday, September 9, 2016

Minimum Installation Steps for Ceph Jewel and Playing with It

Cephjewelinstall

Minimum Installation Steps for Ceph Jewel and Playing with It

Our goal is to reduce the installation procedure so that people can start playing with Ceph Jewel immediately.
Hence, we use a minimal installation with a single VM and one disk drive.
I do not recommend using an OS directory as the OSD drive, since that is not a typical setup.
So attach one extra disk to this VM first.

Let's go through the following topics:

  • Environment
  • Installation
  • Result
  • Playing with Ceph Jewel

Environment

OS

  • Ubuntu 14.04
  • Kernel 3.16.0-30-generic
  • VMware VM

/etc/hosts Setting

root@cephserver:~/mycephfiles# cat /etc/hosts
127.0.0.1   localhost

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.2.141 cephserver

Data Drive setting

root@cephserver:~/mycephfiles# ls /dev/sd*
sda   sda1  sda2  sda5  sdb

The sdb disk will be the only OSD in our Ceph test bed.

Ceph Jewel Installation

We install Ceph Jewel using the following commands:

wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
echo deb http://download.ceph.com/debian-jewel/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt update
apt-get install ceph-deploy
mkdir mycephfiles
cd mycephfiles
ceph-deploy new cephserver
echo 'osd crush chooseleaf type = 0' >> ceph.conf
echo osd pool default size = 1 >> ceph.conf
ceph-deploy install cephserver
ceph-deploy mon create-initial
mkfs.ext4 /dev/sdb
ceph-deploy osd create cephserver:sdb
ceph-deploy osd activate cephserver:/dev/sdb1
ceph-deploy admin cephserver
chmod +r /etc/ceph/ceph.client.admin.keyring
ceph -s
ceph osd tree

Result

After installation, we check whether Ceph is healthy. First, check the cluster status:

root@cephserver:~# ceph -s
    cluster af0ac66e-5020-4218-926a-66d57895fafd
     health HEALTH_WARN
            too many PGs per OSD (448 > max 300)
     monmap e1: 1 mons at {cephserver=192.168.2.141:6789/0}
            election epoch 5, quorum 0 cephserver
     osdmap e24: 1 osds: 1 up, 1 in
            flags sortbitwise
      pgmap v230: 448 pgs, 4 pools, 3291 MB data, 947 objects
            3333 MB used, 1990 GB / 1994 GB avail
                 448 active+clean

Check the status of all OSDs; they must be in the "up" state:

root@cephserver:~# ceph osd tree
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 1.94730 root default
-2 1.94730     host cephserver
 0 1.94730         osd.0            up  1.00000          1.00000

Earlier we activated /dev/sdb directly; now we can see where it is mounted:

root@cephserver:~# mount 
.
.
/dev/sdb1 on /var/lib/ceph/osd/ceph-0 type xfs (rw,noatime,inode64)

Playing with Ceph Jewel

A Big Difference from Other Ceph Versions

Ceph Jewel runs properly with kernels newer than 4.4. If your kernel is older than 4.4, you will get an error like the following when you map an image:

root@cephserver:~#rbd map pool101/realimage1

rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (6) No such device or address

The solution is to disable some image features using the commands below:

rbd feature disable pool101/realimage1 deep-flatten fast-diff object-map exclusive-lock
rbd map pool101/realimage1

Playing Pool, Create, Map and Mount

Now we can add some data to Ceph:

root@cephserver:~# ceph osd pool create pool101 128
root@cephserver:~# rbd create -p pool101 realimage1 --size 102400 --image-format 2
rbd feature disable pool101/realimage1 deep-flatten fast-diff object-map exclusive-lock
root@cephserver:~# rbd map pool101/realimage1
/dev/rbd0
root@cephserver:~# mkfs.ext4 /dev/rbd0
root@cephserver:~# mount /dev/rbd0 /mnt/cephtest/
root@cephserver:~# touch /mnt/cephtest/aa && echo "v1"> /mnt/cephtest/aa
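
For completeness, the pool and image creation above can also be done from Python through the rados and rbd bindings; mapping and mounting remain kernel/CLI operations. A minimal sketch, assuming python-rados and python-rbd are installed:

# Sketch: create a pool and an RBD image via the librados/librbd bindings.
# Pool and image names match the CLI example above; the size is illustrative.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

if 'pool101' not in cluster.list_pools():
    cluster.create_pool('pool101')

ioctx = cluster.open_ioctx('pool101')
try:
    # 100 GiB image, format 2 (the default in Jewel).
    rbd.RBD().create(ioctx, 'realimage1', 100 * 1024 ** 3)
    print(rbd.RBD().list(ioctx))
finally:
    ioctx.close()
    cluster.shutdown()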

Playing Snapshot

You can take a snapshot of the data; the snapshot is read-only.

root@cephserver:~# rbd snap create pool101/realimage1@snap1
rbd snap protect pool101/realimage1@snap1

Playing Clone and Mount

If you want to write to a snapshot, you have to clone it first. Then you can read and write data on the clone:

root@cephserver:~# rbd clone pool101/realimage1@snap1 realimage1snap1clone1
root@cephserver:~#rbd feature disable realimage1snap1clone1 deep-flatten fast-diff object-map exclusive-lock
root@cephserver:~# rbd map realimage1snap1clone1
/dev/rbd1
root@cephserver:~# mount /dev/rbd1 /mnt/cephclone
root@cephserver:~# cat /mnt/cephclone/aa
v1

Tuesday, August 30, 2016

Trace Source Code: nova scheduler (OpenStack Kilo)

scheduler

Code tracing in Nova Scheduler (Kilo)

Nova Scheduler in Kilo

We start from nova/nova/scheduler/manager.py and its select_destinations() function,
which is the entry point of the nova-scheduler.
It is also a good concrete example of the RabbitMQ server side; compare it with the oslo.messaging reference at the end of this post.

The client side of this RPC call lives in nova-conductor.

Here is nova/nova/scheduler/manager.py:

class SchedulerManager(manager.Manager):
    """Chooses a host to run instances on."""

    target = messaging.Target(version='4.2')

    def __init__(self, scheduler_driver=None, *args, **kwargs):
        if not scheduler_driver:
            scheduler_driver = CONF.scheduler_driver
        self.driver = importutils.import_object(scheduler_driver)
        super(SchedulerManager, self).__init__(service_name='scheduler',
                                               *args, **kwargs)
        self.additional_endpoints.append(_SchedulerManagerV3Proxy(self))
        
    @messaging.expected_exceptions(exception.NoValidHost)
    def select_destinations(self, context, request_spec, filter_properties):

This is the result of tracing nova scheduler code.

To add more filters, you can drop a new filter class into nova/scheduler/filters and reconfigure nova.conf to load it.
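
For illustration, here is a hypothetical filter sketched against the Kilo-era filter interface (the module name, class name and disk threshold are made up; Kilo filters implement host_passes(self, host_state, filter_properties)):

# Hypothetical custom filter, sketched against the Kilo-era interface.
# The class name and the 10 GB threshold are illustrative only.
from nova.scheduler import filters


class EnoughDiskFilter(filters.BaseHostFilter):
    """Pass only hosts reporting at least 10 GB of free disk."""

    def host_passes(self, host_state, filter_properties):
        # host_state carries the resources reported by each compute node.
        return host_state.free_disk_mb >= 10 * 1024

The class would then be listed in scheduler_default_filters in nova.conf so the FilterScheduler loads it.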

Reference for oslo.messaging

http://www.cinlk.com/2015/12/04/rabbitmq/

import oslo.messaging
from oslo.config import cfg
class TestEndpoint(object):
    target = oslo.messaging.Target(namespace='test', version='2.0')
    def __init__(self, server):
        self.server = server
    def foo(self, ctx, id):
        return id
oslo.messaging.set_transport_defaults('myexchange')
transport = oslo.messaging.get_transport(cfg.CONF)
target = oslo.messaging.Target(topic='myroutingkey', server='myserver')
endpoints = [TestEndpoint(None)]
server = oslo.messaging.get_rpc_server(transport, target, endpoints,
                                      executor='blocking')
server.start()
server.wait()
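
For completeness, here is a minimal sketch of the matching client side (the role nova-conductor plays for the scheduler), using the same oslo.messaging API as the server example above; the topic, server name and call arguments simply mirror that example (in newer releases the package is imported as oslo_messaging):

# Minimal RPC client sketch, paired with the server example above.
import oslo.messaging
from oslo.config import cfg

oslo.messaging.set_transport_defaults('myexchange')
transport = oslo.messaging.get_transport(cfg.CONF)

# Target the same topic/server/namespace the RPC server above listens on.
target = oslo.messaging.Target(topic='myroutingkey', server='myserver',
                               namespace='test', version='2.0')
client = oslo.messaging.RPCClient(transport, target)

# call() blocks until the server's TestEndpoint.foo() returns.
result = client.call({}, 'foo', id=123)
print(result)  # -> 123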